Flapjax: A Programming Language for Ajax Applications

I am immensely pleased to report that our paper on Flapjax was accepted to OOPSLA 2009.

This paper presents Flapjax, a language designed for contemporary Web applications. These applications communicate with servers and have rich, interactive interfaces. Flapjax provides two key features that simplify writing these applications. First, it provides event streams, a uniform abstraction for communication within a program as well as with external Web services. Second, the language itself is reactive: it automatically tracks data dependencies and propagates updates along those dataflows. This allows developers to write reactive interfaces in a declarative and compositional style.

Flapjax is built on top of JavaScript. It runs on unmodified browsers and readily interoperates with existing JavaScript code. It is usable as either a programming language (that is compiled to JavaScript) or as a JavaScript library, and is designed for both uses. This paper presents the language, its design decisions, and illustrative examples drawn from several working Flapjax applications.

The real heroes of this story are my co-authors. Leo, Arjun, and Greg were there for the initial, heroic-effort-based implementation. Jacob and Aleks wrote incredible applications with our dog food. Shriram, of course, saw the whole thing through. Very few of my contributions remain: the original compiler is gone (thank goodness); my thesis work is discussed briefly in “How many DOMs?” on page 15. Here’s to a great team and a great experience (and a great language)!

Debounce and other callback combinators

It is serendipitous that I noticed a blog post about a callback combinator while adding a few drops to the Flapjax bucket.

Flapjax is nothing more than a coherent set of callback combinators. The key insight behind this set of callback combinators is the “Event” abstraction — a Node in FJ’s implementation. Once callbacks are Nodes, you get two things:

  1. a handle that lets you operate on a single (time-varying) data source in multiple ways, and
  2. a whole host of useful abstractions for manipulating handles: mergeE, calmE, switchE, etc.
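To make the “Event as Node” idea concrete, here is a minimal sketch — emphatically not Flapjax’s actual implementation, just an illustration of the pattern. An event stream is an object holding its subscribers; combinators like mapE and mergeE return new streams wired to their inputs. A debouncing combinator like calmE would follow the same shape, with a timer guarding each send.

```javascript
// A minimal sketch of the "Event as Node" idea (illustrative names,
// not Flapjax's real API or internals).
class EventStream {
  constructor() { this.listeners = []; }
  // Attach a callback to this stream's occurrences.
  subscribe(f) { this.listeners.push(f); }
  // Fire an occurrence, notifying every subscriber.
  send(v) { this.listeners.forEach(f => f(v)); }
  // mapE: a derived stream carrying f applied to each occurrence.
  mapE(f) {
    const out = new EventStream();
    this.subscribe(v => out.send(f(v)));
    return out;
  }
  // mergeE: occurrences of either input stream, interleaved.
  mergeE(other) {
    const out = new EventStream();
    this.subscribe(v => out.send(v));
    other.subscribe(v => out.send(v));
    return out;
  }
}
```

Because each combinator returns another stream, compositions like `clicks.mergeE(keys).mapE(render)` fall out for free — exactly the “handle plus abstractions over handles” payoff described above.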

The last I saw the implementations of Resume and Continue, they were built using this idea. The more I think about it, the more the FJ-language seems like the wrong approach: the FJ-library is an awesome abstraction, in theory and practice.

PHPEnkoder 1.3

Ron Blaisdell pointed out that my use of noscript elements wasn’t XHTML-compliant. Now, instead of a noscript tag, each enkoded section is preceded by a span containing the “you don’t have JavaScript” message. When the dekoded text is written to the document, this span is deleted.

The latest version is up on PHPEnkoder’s home page and the WordPress plugin directory. (For some reason, PHPEnkoder doesn’t come up when you search for it in the directory, but Google can see it. I’m not sure what the problem is here…)

Practical OCaml

Suppose you were trying to run some experiments about L1 D-caches. (You may also suppose that this is a homework problem, but that’s life.) You’re given a trace of loads and stores at certain addresses. These addresses are 32 bits wide, and the trace is in a textual format:
1A2B3C4D L
DEADBEEF S
1B2B3C4D L
represents a load to 0x1a2b3c4d, followed by a store to 0xdeadbeef, followed by a load to 0x1b2b3c4d. (You might notice the two loads may be in conflict, depending on the block and cache sizes and the degree of associativity. In that case, you might be in my computer architecture class…)
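To see why those two loads can conflict, here is a sketch of the set-index computation. The cache parameters below are hypothetical (the assignment’s actual geometry isn’t given here): a cache of `cacheBytes` total capacity, `blockBytes` per block, and associativity `assoc`.

```javascript
// Map a 32-bit address to its cache set index: drop the block-offset
// bits, then take the remainder by the number of sets.
// All parameters are illustrative, not the assignment's actual values.
function setIndex(addr, cacheBytes, blockBytes, assoc) {
  const numSets = cacheBytes / (blockBytes * assoc);
  return (addr >>> Math.log2(blockBytes)) % numSets;
}
```

0x1a2b3c4d and 0x1b2b3c4d differ only in bit 24, so in, say, a direct-mapped 32KB cache with 64-byte blocks (9 index bits, well below bit 24), they land in the same set with different tags and evict one another; with associativity of two or more, both blocks could stay resident.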

This is problematic. Naturally, you’d like to process the trace in OCaml. But did I mention that the trace is rather large — some 600MB uncompressed? And that some of the addresses require all 32 bits? And some of the statistics you need to collect require 32 bits (or more)? OCaml could process the entire trace in under a minute, but the boxing and unboxing of int32s and int64s adds more than twenty minutes (even with -unsafe). I felt bad about this until a classmate using Haskell had a runtime of about two and a half hours. Yeesh. C can do this in a minute or less. And apparently the traces that real architecture researchers use are gigabytes in size. Writing the simulator in OCaml was a joy; testing and running it was not.

There were some optimizations I didn’t do. I read the trace memop-by-memop rather than in blocks of memops, and I ran all of my simulations in lockstep: read in a memop, simulate it in each type of cache, repeat. I could have improved cache locality by reading in a block of memops and then simulating each cache in sequence; I’m also not sure how the compiler laid out my statistics structures. I could’ve written the statistics functions in C on unboxed unsigned longs, but I didn’t have the patience — and I’d still have to pay for boxing and unboxing the C structure every time. Still: one lazy summer week, I may give the code generation for boxed integers a glance.