C# GC Leaks

Reading this experience report from the DARPA challenge (via Slashdot), I wondered: if event registration causes object retention, how can we deal with these memory issues in Flapjax?

Worrying about memory leaks in JavaScript is a losing battle — the browsers have different collectors. But given functional reactive programming in other settings (e.g., Java/C#), how would we solve this kind of problem? We looked at deletion briefly, but never had time to address the issue in detail. The complexity in our case is that the event registration encodes the entire application — how do you decide that a certain code path is dead? It may be helpful to view a functional-reactive program as a super-graph of data-producing, splitting, merging, and consuming nodes; the application state is the subgraph induced by the active nodes reaching the consumers. Then there’s a certain static overhead for the program, plus the dynamic overhead of its active components.
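To make that picture concrete, here’s a sketch of the liveness question as graph reachability. The node objects and their fields (id, sinks, isConsumer) are hypothetical, not Flapjax internals:

// A node is live if some path from it reaches a consumer; dead nodes are
// exactly the ones a deletion policy could safely unregister.
function live(node, seen) {
    seen = seen || {};
    if (seen[node.id]) { return false; }  // already explored, or a cycle
    seen[node.id] = true;
    if (node.isConsumer) { return true; }
    for (var i = 0; i < node.sinks.length; i++) {
        if (live(node.sinks[i], seen)) { return true; }
    }
    return false;
}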

Most of the Flapjax code I’ve written has been for the user interface, so binding and unbinding isn’t much of an issue: if the interface changes temporarily (e.g., tabs), the older behavior is usually recoverable and shouldn’t be collected. When working with more complex models with longer-lived state, a deletion policy becomes more important. Shriram has been working on larger Flapjax applications with application logic as well as presentation — I wonder if he’s run into serious GC issues.

Lifting in Flapjax

In the Flapjax programming language, ‘lifting’ is the automatic conversion of operations on ordinary values into operations on time-varying values. Lifting gets its name from an explicit operation used with Flapjax-as-a-library; we use the transform method call or the lift_{b,e} functions. To better understand lifting, we’ll walk through a simple implementation of the Flapjax library.

I consider the (excellent) Flapjax tutorial prerequisite reading; it will be hard to follow along if you’re not at least vaguely familiar with Flapjax.

The following code is all working JavaScript (in Firefox, at least), so feel free to follow along.
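As a taste of what’s to come, here is a minimal sketch of the idea. These are simplified stand-ins, not the real Flapjax internals: a behavior holds a current value plus a list of dependents to notify, and liftB wires an ordinary function into that graph.

// A bare-bones behavior: a current value plus the dependents to notify.
function Behavior(init) {
    this.value = init;
    this.dependents = [];
}
Behavior.prototype.send = function (v) {
    this.value = v;
    for (var i = 0; i < this.dependents.length; i++) {
        this.dependents[i].recompute();
    }
};

// liftB turns a function on ordinary values into a function on behaviors:
// the result recomputes whenever any of its arguments changes.
function liftB(f /* , b1, ..., bn */) {
    var args = Array.prototype.slice.call(arguments, 1);
    var out = new Behavior(undefined);
    out.recompute = function () {
        var vs = [];
        for (var i = 0; i < args.length; i++) { vs.push(args[i].value); }
        out.send(f.apply(null, vs));
    };
    for (var j = 0; j < args.length; j++) { args[j].dependents.push(out); }
    out.recompute(); // compute the initial value
    return out;
}

// For example:
var x = new Behavior(1), y = new Behavior(2);
var sum = liftB(function (a, b) { return a + b; }, x, y);
x.send(10); // sum.value is now 12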


The price of cloneNode

So the function fix_dom_clone described in my last post isn’t exactly cheap. In fact, it’s far and away where my lens library is spending most of its time.

Function        Calls    Percentage of time
fix_dom_clone    4920    47.22%
deep_clone       6022     5.57%
get              3513     5.51%
dom_obj         20842     5.34%

I’ve implemented a few optimizations to reduce the number of times it needs to be called, but its recursion is brutal. The DOM TreeWalker might be more efficient than what I have now, but it probably doesn’t matter: according to this website, IE and Safari don’t support it.
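For what it’s worth, here’s a rough sketch of that alternative (hypothetical, untested): copy_handlers repeats the per-node fix-up from fix_dom_clone below, and two TreeWalkers advance through the original and the clone in lockstep, assuming the clone has exactly the same element structure as the original.

// Per-node fix-up: the body of fix_dom_clone minus the recursion
// (dom_events is the handler list from the last post).
function copy_handlers(o, copy) {
    for (var i = 0; i < dom_events.length; i++) {
        if (dom_events[i] in o) { copy[dom_events[i]] = o[dom_events[i]]; }
    }
    if ('value' in o) { copy.value = o.value; }
}

// Walk original and clone in lockstep, fixing up each element pair.
function fix_dom_clone_tw(o, copy) {
    var wo = document.createTreeWalker(o, NodeFilter.SHOW_ELEMENT, null, false);
    var wc = document.createTreeWalker(copy, NodeFilter.SHOW_ELEMENT, null, false);
    var n = o, c = copy;
    while (n !== null && c !== null) {
        copy_handlers(n, c);
        n = wo.nextNode();
        c = wc.nextNode();
    }
}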

Attack of the cloneNodes

So the bug I had yesterday was fixed with a call to element::cloneNode to avoid aliasing. This introduced, to my great consternation, another bug — some DOM nodes were reverting to their default value. Had I written this down in my (as yet hypothetical) bug journal, the pattern might have become clearer. Instead, I slaved away in Firebug for a few hours without results.

Thinking about it clearly, the problem had to be in cloneNode. I ended up having to write the following recursive fix-up function:

/** List of all DOM event handler names. */
var dom_events = 
    ['onblur', 'onfocus', 'oncontextmenu', 'onload',
     'onresize', 'onscroll', 'onunload', 'onclick',
     'ondblclick', 'onmousedown', 'onmouseup', 'onmouseenter',
     'onmouseleave', 'onmousemove', 'onmouseover',
     'onmouseout', 'onchange', 'onreset', 'onselect',
     'onsubmit', 'onkeydown', 'onkeyup', 'onkeypress',
     'onabort', 'onerror']; // ondasher, onprancer, etc.

/**
 * Fixes copy errors introduced by {@link element#cloneNode}, e.g. failure to
 * copy classically-registered event handlers and the value property.
 *
 * @param {element} o The original DOM element
 * @param {element} copy The result of o.cloneNode(), fixed up in place
 * @return {element} The modified copy, with event handlers maintained
 */
function fix_dom_clone(o, copy) {
    if (!(dom_obj(o) && dom_obj(copy))) { return; }

    // copy over any classically-registered event handlers
    for (var i = 0; i < dom_events.length; i++) {
        var event = dom_events[i];
        if (event in o) { copy[event] = o[event]; }
    }

    // the value property doesn't survive cloning, either
    if ('value' in o) { copy.value = o.value; }

    // recur on the children, which cloneNode copies in order
    var o_kids = o.childNodes;
    var c_kids = copy.childNodes;
    for (i = 0; i < o_kids.length; i++) {
        fix_dom_clone(o_kids[i], c_kids[i]);
    }
    return copy;
}
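
For reference, the intended usage is a deep clone followed by the fix-up (original here stands for any DOM element):

var copy = original.cloneNode(true);  // copies structure, drops handlers/values
fix_dom_clone(original, copy);        // patch up what cloneNode missed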

Oof. Unsurprisingly, there are a few efficiency issues.

My bug was weird and unexpected, and the W3C DOM Level 2 spec doesn't allude to problems like this, but looking at a Mozilla bug report on the topic, it seems that the W3C DOM Level 3 spec says that "[u]ser data associated to the imported node is not carried over". I guess if that's true for event handlers, it's also true for the value property. Oh well. I'd feel better about this irritating API "feature" if they said "associated with".

Another nasty bug — and an idea

I spent about two hours tracking down a DOM node sharing bug — nodes were being put into a new structure outside of the document before the salient data had been read out. While there was no information in these nodes, the lens system insisted that they still be there. (More on that — eventually.)

After finally tracking it down and writing a version of cloneNode that also copies event handlers, everything worked. Between this and the last prototype aliasing bug I had, I got an idea. A programmer could keep a “bug journal”, a list of bugs found and described first by their behavior, then by their solution (and, if those two aren’t descriptive enough, the underlying problem should be described as well). For example, two days ago I ran into my first genuine typing bug in JavaScript — a type checker would have rejected my program, and from the errors generated it wasn’t obvious where the problem was.

This practice could be useful in a few ways. First, the process of writing down the description can help the programmer find the solution. Tedious, but perhaps worthwhile. Surely some bugs would end up being described post facto, since it’s not worth the time when the fix is fairly clear. Second, the solution may add to a ‘bag of tricks’ at the programmer’s/team’s disposal. Third, the bug and solution tease out invariants in the program, and so the bug journal could be gleaned for inter-module and system-level documentation.

Fourth, and dearest to my heart, I think it’s an interesting way to evaluate programming languages. The bug log of a programmer writing an e-mail interface in Java and that of one writing the same interface in JavaScript would provide some interesting contrasts. To provide more than anecdotal evidence, you’d need a much larger sample of programmers and kinds of programs.

I think the bug journal would differ profoundly from examination of bug tracking logs. Only the truly mysterious bugs and the large, architectural shortcomings make it into the tracker. On a daily basis, programmers struggle with making buggy code workable — before it ever hits version control. So bug tracking logs can highlight difficulties in design and with the team, but a bug journal shows exactly what a programmer has to deal with.


XSugar

I was referred to XSugar as another practical example of bidirectional programming; I read the paper, Dual Syntax for XML Languages, and played with the tool (a bit).

The basic idea is pretty clever. You specify a single grammar, showing at the nonterminal level how the XML syntax relates to the non-XML syntax. For example:

person : [Name name] __ "(" [Email email] ")" _ [Id id] =
              <student sid=[Id id]>
                  <name> [Name name] </>
                  <email> [Email email] </>
              </>

This relates the line Michael Greenberg (…) 00000000 to the XML <student sid="00000000"> <name>Michael Greenberg</name> <email>…</email> </student> (e-mail addresses elided). (Note that Id, Name, and Email are all regular expressions defined at the top, lex-style.) Since there’s a relation between the textual format and the XML format at the grammar level, the transformation in either direction is achieved by a transformation of the generated parse tree — they call (a derivation of) it the UST, or unifying syntax tree. The parse tree is first translated to the more abstract UST, which is then “unparsed” into the other format.

To guarantee a unique unparsing, they perform ambiguity checks on the grammar. The tool can also check the grammar against an XML Schema (or DTD, or whatever) to ensure that every well-formed non-XML document generates validating XML. They do this by extracting an “XML graph” — a flowchart for the tree structure — from the XSugar specification, which can then be checked against an XML Schema. This check boils down to “encoding content models and schema definitions as finite string automata over a common vocabulary and checking inclusion of their languages” (see XML Graphs in Program Analysis, which seems a little light on detail).

The work is good — it solves a problem, and simply so. (Much more simply than in the Kawanaka and Hosoya paper, biXid: a bidirectional transformation language for XML; they have slightly looser ambiguity requirements, but with a weird effect.) The error messages XSugar gives are okay, but not so marvelous that they merit inclusion in the paper. Writing-wise, the description of the XML graph could be clearer, and the discussion of the error messages isn’t entirely clear to me. But it’s hard to criticize their result — 2372 lines of ad hoc Python reduced to 119 lines of XSugar!

I was disappointed by only one thing. I wondered whether I might use XSugar for automatic conversion between SXML and XML; unfortunately, what I came up with was:

WS = \r|\n|\t|" "
AT = "@"
TOP = "*TOP*"
LP = "("
RP = ")"

node : LP [Element e] 
	LP AT [attriblist as] RP
	[nodelist ns] RP
	= <[Element e] [attriblist as]> [nodelist ns] </> ;

attrib : LP [Attribute a] [Text v] RP =

But then I was stuck — I realized that one can’t abstract over attributes. It doesn’t seem like this sort of abstraction is impossible, but the feature seems to be missing. It would require another style of syntax to make it clear which context nonterminals were in — attribute or node. Or I may just be missing something — the journal paper is the only documentation, short of JavaDoc.

Google E-Mail Masking

As an alternative to the Enkoder, Google Groups turns e-mail addresses in headers and in document text into links to CAPTCHAs; see the documentation. They’re not using the hardest distortions I’ve seen, but there seems to be some good contact between the letters.

I wonder what the space looks like between the fully automatic but very fragile Enkoder system and the many-click and -keypress CAPTCHA system. A system combining JavaScript with an easy Turing test might give the best of both worlds. With the CAPTCHA Turing test reduced to, say, a single click, the self-evaluating JavaScript could combine that weaker test with a computational payment.

So I see two possible directions for the Enkoder system. One is a user-initiated computational payment: a “decode data” link, when clicked, changes to show “loading…” until the computation produces whatever was enkoded — an e-mail link, data, whatever. The other is a user-initiated puzzle: click the e-mail, a popup div or window presents an intuitive puzzle of moderate difficulty that is generated via the Enkoder-self-evaluating Javascript method; solving it correctly should then deliver the data.

The first is just an engineering issue. I’d need to collect data on how long dekoding takes on different platforms, and to try to maximize my control over the running time while preventing an explosion in the size of the enkoded data. (Modular multiplication? I knew MA158 Cryptography would be good for something!) The second direction needs some serious thought about puzzle design. The puzzle should be acultural and not too intellectually challenging, but still hard for a machine to solve. It’s also not clear how to prevent replay attacks if the Turing test is all client-side, so it has to be difficult to brute-force.
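To make the first direction concrete, here’s an invented sketch; none of this is the Enkoder’s actual scheme, and encode, dekode, the 16-bit key, and the checksum are all made up for illustration. The idea: the server encodes the address under a random key and throws the key away, and the visitor’s browser pays by brute-forcing it.

// Simple rolling checksum, used to recognize a successful decode.
function checksum(s) {
    var h = 0;
    for (var i = 0; i < s.length; i++) {
        h = (h * 31 + s.charCodeAt(i)) >>> 0;
    }
    return h;
}

// XOR each character against alternating bytes of a 16-bit key.
// Run server-side; the key is then discarded.
function encode(s, key) {
    var out = '';
    for (var i = 0; i < s.length; i++) {
        var k = (i % 2 === 0) ? (key >> 8) : (key & 0xFF);
        out += String.fromCharCode(s.charCodeAt(i) ^ k);
    }
    return out;
}

// The computational payment: try all 2^16 keys until the checksum matches.
// XOR is its own inverse, so decoding is just encoding again.
function dekode(encoded, sum) {
    for (var key = 0; key <= 0xFFFF; key++) {
        var candidate = encode(encoded, key);
        if (checksum(candidate) === sum) { return candidate; }
    }
    return null;
}

// e.g., on click: show "loading...", then display dekode(data, sum).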

Flapjax Templates

The response to the template language in Flapjax has been mixed, to say the least. The most common complaint is that the templates mix content with style. True — you can do that with our templates. But nothing stops a developer from putting CSS in the HTML directly, either. Nothing except good sense, that is.

In response to a recent post on our discussion group for Flapjax, I wrote about this briefly:

…the only compiler-specific syntax is the templates: curly-bang — {!, !} and triple-stick — |||. But these are parsed out of HTML PCDATA and attribute nodes, so a JavaScript/Flapjax programmer needn’t consider them. We introduced the syntax as a way to reduce the clutter of “insertDomB” and “insertValueB” calls. For example, in the template language we write:

It's been {! currentSeconds ||| (loading...) !} seconds since the beginning of 1970, and it's all been downhill.

while we must write the following without it:

It's been <span id="seconds">(loading...)</span> seconds since the beginning of 1970, and it's all been downhill.

<script type="text/javascript">
    insertDomB(currentSeconds, 'seconds');
</script>

This can become a huge mess, so the templates simplify it. We wouldn't want to encourage doing all of your computation in the display layer! I'd compare it to Smarty or JSP -- while you can include complex computations in the tags, it's meant only to ease the interface between models in code and presentations on screen.

I think that soundly characterizes the template language. Shriram wants that in the tutorial, so that'll show up there eventually. (Some people take classes during the semester, but the Flapjax team has much more important things to do. Or so we're told.) But it would be a big mistake to see Flapjax as providing just that -- the functional reactivity is the interesting bit.

Tonight we're presenting the language to the undergraduates in the DUG; there's an exciting contest announcement coming up, as well. Who has time for midterms?