Formal Methods – weaselhat

Answer set programming (ASP) is the powerhouse technology you’ve never heard of

The first person to explain answer set programming (ASP, pronounced ‘ay ess pee’) to me (Joe Osborn) told me about the three-line implementation of graph coloring and the termination guarantees. Like any PL person, I recoiled in horror: you have an always-terminating language where I can solve NP-complete problems in three lines? So every time I write a program of three lines or more, I have to wonder… how exponential is this? No thank you!

I was wrong. Answer set programming is fast and easy to use; it’s a cousin to SAT that offers different tradeoffs than SMT, and I think PL folks have a lot to gain from ASP. As some evidence in that direction, let me tell you about an upcoming paper of mine appearing at POPL 2023, with Aaron Bembenek and Steve Chong.

From SMT to ASP: Solver-Based Approaches to Solving Datalog Synthesis-as-Rule-Selection Problems
Aaron Bembenek, Michael Greenberg, and Stephen Chong (POPL 2023)

https://greenberg.science/papers/2023popl_asp.pdf (artifact https://zenodo.org/record/7150677)

Datalog synthesis-as-rule-selection

Our paper is, on the face of it, about a program synthesis problem for Datalog: given a collection of candidate rules and an input/output example, select the candidate rules that transform the input to the output. The previous state of the art (ProSynth) used a CEGIS loop: use Z3 to guess some rules, try them in Soufflé, and use why- and why-not provenance to feed information back to Z3 to make a better guess. Our work gives three new solutions to the problem:

  • Datalog as a monotonic theory in SMT. Monotonic theories get a big performance boost, and modern solvers like Z3 and CVC4 support them. And Datalog is the monotonic theory ne plus ultra: we [read: Aaron] wrote Z3 and CVC4 plugins that turn any Datalog program into a monotonic theory. You can use this to do the CEGIS loop with a single call to SAT (but many calls to Datalog).
  • Using loop formulae to force SMT to find a least stable model. Every Datalog program has a logical denotation as the least model satisfying its Clark completion. Borrowing from ASP implementation techniques, we can use the Clark completion extended with loop formulae to rule out the hallucinatory models SMT is prone to finding. You can use this approach to do the CEGIS loop with not so many calls to Datalog, but possibly many calls to SAT.
  • Just encode it in ASP. The conventional ASP implementation strategy (grounder and solver) found in tools like clingo admits a direct encoding of the synthesis problem; our translator is just 187 SLOC. ASP lets you name that tune in just one note: you call the grounder, then you call the solver, and then you’re done.

The gist of it is that the ASP solution is not only vastly simpler than the others, it outstrips them in performance, showing a ~9x geomean speedup compared to the state of the art. (I wrote previously about how to summarize early versions of these numbers.) Practically speaking, the ASP synthesizer always returns in under a second, while every other solution shows slowdowns on some of the benchmarks, taking tens of seconds or even timing out at ten minutes. There’s lots more detail in the paper.

I should add that the existing benchmarks are pretty easy: the examples are small, with example facts and candidate rules numbering in the low hundreds. In this setting, anything more than a few seconds isn’t worthwhile. We have some more criticism of the problem setting in the paper, but I can sum up my feeling as: when confronted with the benchmark challenge of implementing strongly connected components (SCC) in Datalog, who has the technical capacity to (a) write up an input/output example for SCC for a non-trivial graph, (b) generate several hundred candidate rules, (c) install and run a program synthesis tool, and (d) verify that the resulting program generalizes but (e) lacks the technical capacity to simply implement SCC in Datalog? Transitive closure is the “Hello World” of Datalog; strongly connected components is the “Fahrenheit-to-Celsius” of Datalog. The benchmarks don’t characterize a problem that anyone needs to solve.

Answer Set Programming

While I’m proud of the contributions outlined above, I’m most excited about introducing ASP to a broader audience. It’s a historical accident that ASP is more or less confined to the AI subfield, where a lot of great SAT and SMT research continues apace—not to mention the program synthesis work published at venues like AAAI and IJCAI under the umbrella of ‘inductive logic programming’. (Any PL person working on synthesis should look closely at Popper and ILASP.)

Our paper goes into more detail, but I like to contrast ASP with SMT by talking about justification. ASP will find solutions that are in some sense well justified by your problem statement (formally, it will find stable models); SMT has no such guarantee.

SMT solvers do an incredible job of finding witnesses for existentials: just (declare-const x Foo) and SMT will find your Foo, no problem. Well, maybe a problem: SMT’s approach to finding an x of type Foo is to go into the desert on a vision quest—it’ll keep hallucinating new values for x until it finds a satisfying one. On the plus side, that means SMT can find an x that doesn’t appear anywhere in the constraints you gave it. On the minus side, that means SMT is straight up hallucinating values—and you’re the one controlling set and setting to make sure SMT stays this side of sanity.

ASP never hallucinates. If something is in an answer set, it’s there for a reason. ASP’s non-monotonic semantics lets you freely mix negation and recursion (unlike Datalog, which forces negation to be stratified); the stable model semantics guarantees you won’t get nonsensical, circular answers that justify their premises with their conclusions.

Put simply: ASP is a SAT-like discipline that lets you work efficiently and correctly with inference rules. SMT is a SAT-like discipline that lets you work efficiently and correctly with existentials over equality and theory predicates. PL work needs both styles of reasoning; as we’ve shown, ASP can bring simple solutions and startling efficiency compared to complex approaches using SMT. Maybe it’ll work for your problem, too?

When is program synthesis worthwhile?

Program synthesis is an appealing notion: I give you a pretty good (but incomplete) description of a program, and you give me a working program back out. Flash Fill—program synthesis by example that ships in Excel—is the standard example that PL people trot out, and it is an incredible success. (Check out Sumit Gulwani’s POPL 2015 keynote (video in supplemental material) if you’ve never heard of it.)

Fiat Cryptography is another great example of program synthesis success. Where Flash Fill is programming by example, Fiat Cryptography is deductive synthesis: here, an expert manually refines a specification of arithmetic operations used in elliptic curve cryptography to achieve fast and verified implementations of the necessary primitives. Fiat Cryptography is used in BoringSSL, a part of Chrome and Android (check out the paper).

These two approaches are quite different: Flash Fill offers nearly instantaneous feedback as part of an existing interface; Fiat Cryptography is effectively a set of Coq lemmas and tactics, only approachable by experts. (You might wonder why they’re both called program synthesis! For that, I’d follow Sam Tobin-Hochstadt’s second-hand quote: “a synthesizer is just a compiler that doesn’t work”, that is, these tools are both synthesis because they take specifications as input and produce valid implementations as output, when they produce output… but they don’t always succeed in producing output.)

Flash Fill and Fiat Cryptography are tech-transfer success stories working on starkly different timescales: Flash Fill is an end-user synthesis that takes examples and produces Excel formulae in a fraction of a second; Fiat Cryptography is a tool for experts to generate extremely specialized code in a more interactive, involved way, possibly taking days to generate something acceptable. Given these remarkable differences, I wonder… when is program synthesis worthwhile?

A theory of effort

Suppose you’re trying to write some program, and you could (a) use program synthesis or (b) write the program by hand. Let’s say it takes E time to come up with an appropriate specification, S time to synthesize a solution, and I time to inspect and approve a solution. Conversely, suppose it takes P time to write the program and T time to test and approve it.

In general, synthesis is only worthwhile when E + S + I < P + T. Considering that most programmers are more familiar with conventional programming tasks and users are often reluctant to install and use a complex new tool, it’s likely that synthesis is only reasonable when E + S + I <<< P + T (we read <<< as “is much less than”). Let’s take this theory for a spin.

Flash Fill

Flash Fill is example based, where simple string transformations are canonical examples. Writing up some example string transformations might take a few seconds, i.e., E ~ 10. Synthesis is very fast—say, S ~ 1. Inspection might take longer—from a few seconds to a minute or so. Let’s be careful and say I ~ 30 (though your typical Excel user may be a little more YOLO about things).

Conversely, programming string manipulations in Excel is fiddly; I’d certainly have to spend a minute to find the right functions, figure out the annoying indexing bits (another minute—or more), and then go through and verify that I got it right. A user with less programming expertise might take longer. Let’s say P ~ 120 and T ~ 20—though many users will give up before figuring it out at all.

Here synthesis is much faster than writing the program by hand: we have 41 <<< 140. It’s a huge savings in time for an experienced programmer to use Flash Fill, never mind a user less familiar with programming.

Fiat Cryptography

Fiat Cryptography bakes in a set of techniques for deriving very efficient implementations of cryptographic arithmetic primitives. In this case E amounts to choosing an appropriate prime and its representation and encoding it in Coq. I pored over their commit logs, but I couldn’t find an example of new primes being introduced. Let’s say it takes a day, though I suspect it’s much less. (For example, it takes a few days for them to implement new optimizations—a new prime amounts to simply picking a representation.) S is the time it takes for Coq to compile the code—I haven’t tried it, but it takes about 1.5hrs to compile in their CI. Inspection is… free, or rather, the low, low price of trusting their trusted computing base (TCB)! Okay, okay: their validation in CI takes another 1.5hrs. Trust but verify. Verify but test.

On the other hand, what does it take to implement and verify operations for a new prime by hand? Months! And at the end of those months, one has an unverified artifact—with less confidence in it than in synthesized, verified code; verifying the handwritten code will take months more.

Again, synthesis is substantially faster: a day or two, compared to many months. As a bonus, the output of synthesis is substantially more trustworthy than what you’d get rolling your own.

RLIBM

Another example of well situated, worthwhile synthesis is RLIBM, a collection of fast and correct elementary floating point functions. RLIBM doesn’t advertise itself as synthesis, but that’s exactly what it is: given a specification (e.g., log2f) and a correct oracle, they generate appropriate polynomials for approximation. A number of the talks are available online.

RLIBM is less of a ‘drop-in’ replacement than Fiat Cryptography, but they’re already seeing plenty of success in getting their code into LLVM. Synthesis is slow—it can take quite some time for them to generate and validate polynomials (i.e., E, S, and I are on the order of days, judging from their interactions with LLVM devs). But the value proposition is huge: their approach yields more efficient code, offers a single polynomial for multiple rounding modes and bits of precision, and is more correct than the state of the art. What’s more, the state of the art is also very expensive: Intel has a whole team of mathematical experts working on this problem (P is very, very high); for float32, I = T, since both cases use exhaustive checking.

New frontiers

Synthesis is worthwhile when its costs (coming up with a specification/examples E, running the synthesizer S, and inspecting the solution I) are substantially less than the cost of programming by hand (programming the solution P and testing it T). That is, synthesis is worthwhile when E + S + I <<< P + T.

My estimate is conservative: the framing assumes that all we get from synthesis is a replacement for programming, no better than what we’d produce ourselves. But for users without programming experience, Flash Fill doesn’t just speed things up, it makes new things possible. Similarly, Fiat Cryptography doesn’t just produce a fast C implementation of arithmetic primitives… its proofs add confidence that the result is correct! RLIBM is faster and more precise than existing solutions.

Performance changes how users use software. Fabian Giesen is quoted to similar effect in Dan Luu’s post on productivity and velocity; Shriram Krishnamurthi has made similar comments, too. (Thanks to Sam Tobin-Hochstadt and Dan Luu for the links!) Such a maxim certainly applies here: synthesis opens new doors. Non-programmers get help writing simple programs; programmers save substantial time and effort. Either way, synthesis boosts productivity.

When is synthesis not worthwhile?

We’ve seen three examples of program synthesis that have clearly provided value in the real world. Flash Fill, Fiat Cryptography, and RLIBM are clearly worthwhile. When is synthesis not worthwhile?

Let’s consider the case of general-purpose programming. A variety of program synthesizers can generate code in functional or imperative languages. A common example in these settings is linked list reversal. A quadratic solution in a functional language is fairly simple if you know about append or snoc, but harder without it; efficient, linear versions in functional and imperative languages are a bit tougher.
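For concreteness, here’s the shape of both functional versions in Haskell—a quick sketch of the standard textbook definitions, not any particular tool’s benchmark solution:

-- Quadratic: each (++) re-walks the already-reversed prefix.
revSlow :: [a] -> [a]
revSlow []     = []
revSlow (x:xs) = revSlow xs ++ [x]

-- Linear: the accumulator-passing ("two-argument") version.
revFast :: [a] -> [a]
revFast = go []
  where
    go acc []     = acc
    go acc (x:xs) = go (x:acc) xs

The linear one is the two-argument, accumulator-passing version that comes up again below.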

Many academic tools are example based, in which case it’d be reasonable to say that E ~ T. For synthesis to be worthwhile, then, we would need S + I <<< P. That is, the time to run the synthesizer and inspect the resulting program must be substantially less than the time to write the program itself.

Writing a linear-time linked list reversal in OCaml takes me about 30s; less in Haskell. Imperative linked list reversal is a bit slower—say, 2.5min. Testing either of these won’t take long—a few seconds suffices in an FP REPL, or as much as a minute to write the JUnit in Java.

Existing synthesis tools can handily beat my programming time, so… what gives? Why aren’t program synthesis tools everywhere? I think there are a few reasons:

  • In fact, E > T. Looking at synthesis benchmarks, the tools that beat my time take many more examples than I would bother with. Tail-recursive reversal takes 14 examples in an older tool I know of… and that’s to generate the two-argument version, not the outer function that one might care about. If I know to generate the accumulator-passing type of the inner function, why do I need help writing list reverse?
  • I is large. I looked for research on how I and P relate, i.e., how long it takes to write a function as compared to understanding and agreeing with someone else’s implementation, but I couldn’t find much concrete information. I suspect that if the standard is ‘believing a program to be correct’, then I is on the same order as P when T is small, i.e., programs that can be easily tested are as hard to check as they are to write. As T increases, then I is on the order of T. (But who cares what I think? It’s a testable hypothesis!)
  • S should include the cost of invoking the tool… and maybe the amortized cost of acquiring the tool. Flash Fill just comes with Excel and is quite discoverable in the UI, but existing tools need to be downloaded, compiled, and installed, and they don’t have much in the way of editor integration.

Alternatively, consider GitHub’s Copilot. You write a little prose explanation and a function header, and it does the rest. Copilot seems to set E quite low, but I is comparatively high. What’s worse, E != T when you give Copilot textual descriptions rather than input/output examples. Copilot addresses HCI issues nicely—it’s easy to install and easy to use—but it produces particularly low-confidence code. For Copilot code, I’d say I = T, i.e., I’d want to thoroughly test anything that came out of Copilot. (And more importantly, for licensing reasons, I wouldn’t want to use anything that came out of Copilot.)

To sum up: synthesizing code for general purpose languages has E + S + I less than P + T in some cases, but not so much less than P + T that it seems worthwhile.

My examples of worthwhile synthesis offer two primary ways for synthesis to be worthwhile: either focus on fast turnarounds and excellent usability (very low E, S, and I), or specialize in a domain where the state of the art is excruciatingly hard (P and T are huge and you have a story—verification, validation, etc.—for keeping I low). Flash Fill’s successful tech transfer depended heavily on HCI improvements. Fiat Cryptography’s and RLIBM’s success did not—but they target specialized domains.

Don’t be discouraged

Program synthesis offers a novel relationship between people and computers. I am not at all saying to give up on forms of synthesis that don’t meet my criteria of E + S + I <<< P + T. Some forms of synthesis may not be worthwhile yet, but we don’t know what changes and opportunities the future holds!

Program synthesis is an exciting new area, and there’s lots of PL work on it. It’s unreasonable to expect every academic paper to be immediately applicable, so I don’t expect every proposed synthesizer to meet my criteria. It is important, however, to be realistic about where we stand. If you’re working on synthesis, think about how E, S, and I compare to P and T for you. If your approach doesn’t yet hit my criteria, what needs to change to make that happen?

Postscript

There’s some great discussion on the Twitter thread announcing this post.

How to cook a KAT for your pet theory

Kleene algebra with tests is a beautiful, powerful framework for reasoning about programs. You can easily encode conventional While programs into KAT, and KAT enjoys decidable equality. Reasoning with KAT feels like you’re cheating Alan Turing himself: here we are, deciding nontrivial properties of programs!

The gist of KAT is that you write programs using a regular-expression-like notation: + for parallel composition, ; for sequential composition, and * for iteration. So you might encode:

while x > 0:
  y += 1
  x -= 1

As (xGt0; incY; decX)*; ¬xGt0, where xGt0 is a ‘test’ and incY and decX are ‘actions’. KAT’s equivalence decision procedure can prove that this program is equivalent to any finite unrolling of itself… neat!
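To make “equivalent to any finite unrolling” concrete: the Kleene algebra identity p* = 1 + p;p* (where 1 is skip, the identity for ;) lets us peel off one iteration of the loop, so

(xGt0; incY; decX)*; ¬xGt0  =  ¬xGt0 + xGt0; incY; decX; (xGt0; incY; decX)*; ¬xGt0

and so on for any finite number of unrollings—equations that KAT’s decision procedure checks automatically.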

NetKAT is the most impactful application of KAT: it’s an influential and successful academic project, and its ideas can already be found in numerous real, production systems. In light of NetKAT’s remarkable success… why don’t we apply KAT more often?

What’s hard about KAT?

On its own, KAT proves plenty of nice theorems, but none of them reason about particular program behaviors. In the code snippet above, xGt0, incY, and decX are uninterpreted—there’s no relationship between, say, xGt0 and decX. That is, you might expect that ¬xGt0;decX;¬xGt0 is equivalent to ¬xGt0;decX, because decrementing a number less than or equal to 0 will yield a number that is also less than or equal to 0. The names of our tests and actions are suggestive, but KAT treats them abstractly. If you want to reason about the semantics of your tests and actions, you need to build a custom, concrete KAT. NetKAT reasons about fields on packets, and doing so means building a particular, concrete KAT with particular actions. The original paper spends quite a bit of effort proving that this new, custom KAT has a sound, complete, and decidable equivalence checking procedure.

Worse still, KAT’s metatheory is very challenging. To create NetKAT, Nate Foster and company worked through closely related ideas for a few years before Nate joined Cornell and started working with Dexter Kozen, KAT’s progenitor. Only then did they realize that KAT would be a good fit, and they got to work on developing a concrete KAT—NetKAT. Unfortunately, “collaborate with Dexter” is an approach that doesn’t scale.

How to cook a KAT

In an upcoming PLDI 2022 paper, “Kleene Algebra Modulo Theories: A Framework for Concrete KATs”, Ryan Beckett, Eric Campbell, and I show how to generate a KAT over a given theory, i.e., a set of tests, actions, and their equational theory. We call the approach Kleene algebra modulo theories, or KMT. The paper covers quite a few examples:

  • booleans and bit vectors
  • monotonic natural numbers
  • unbounded sets and maps
  • NetKAT

What’s more, our approach allows for higher-order theories, like taking the product of two theories or using finite-time LTL to reason about another theory. (Our approach abstracts and generalizes Temporal NetKAT, which is just a concrete instance of our more general method.)

To build a KMT, you provide primitive tests and actions, along with weakest preconditions relating each pair of test and action. There’s an ordering requirement: a test must be no smaller than its preconditions. With these in hand, we’re able to automatically derive a KAT with good properties in a pay-as-you-go fashion:

  • If your theory is sound, the KAT is sound.
  • If your theory is complete, the KAT is complete.
  • If your theory’s satisfiability checking is decidable, we can derive a decision procedure for equivalence.

I’m particularly excited that our framework is prototype-ready: our code is implemented as an OCaml library, where you define theories as functors. Please try it out—mess around and write your own theories, following our examples. We hope that KMT will significantly lower the bar for entry, making it easier for more people to play around with KAT’s powerful equivalence checking.
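To give a flavor of what a client theory provides, here’s a rough sketch—in Haskell rather than OCaml, with hypothetical names, and much simpler than the library’s real interface: you supply primitive tests, primitive actions, a weakest-precondition map from each action/test pair to a formula over tests, and a satisfiability check for such formulas.

-- A sketch of the shape of a KMT client theory (hypothetical names,
-- not the actual OCaml functor interface).
data Formula t = Tru | Fls | Atom t
               | And (Formula t) (Formula t)
               | Or  (Formula t) (Formula t)
               | Not (Formula t)

data Theory test act = Theory
  { wp  :: act -> test -> Formula test  -- weakest precondition of a primitive test
  , sat :: Formula test -> Bool         -- decidable satisfiability for the theory
  }

Soundness, completeness, and decidability of the generated KAT then follow from the corresponding properties of what you plug in, per the list above.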

What’s the catch?

There’s more than one way to cook a KAT. KMT generates KATs with tracing semantics, i.e., the exact trace of actions matters. In KAT+B! or NetKAT, later updates override earlier ones, e.g., x:=false; x:=true ≡ x:=true… but KMT will treat these terms differently, because they have different traces. KAT+B! deliberately avoids tracing; NetKAT only traces at predefined points, by means of their dup primitive, which marks the current state as historically salient. There’s no deep reason for KMT to use tracing, and we believe KMT can be generalized to support dup-like controls for tracing.

The ordering constraint on weakest preconditions is a strong one. Our natural numbers, sets, and maps must be monotonic: they may grow or shrink, but not both. And variables cannot be compared with each other, e.g., two natural-valued variables x and y can each be compared to constants but not to one another.

KMT is also just a prototype. It’s fast for small programs, but it takes dedicated work to make a KAT’s decision procedure efficient enough on more serious examples.

Why are you talking about cooking KATs?

The greatest POPL paper of all time is Manna and Pnueli 1983, “How to cook a temporal proof system for your pet language”. Why? Just take a look at the first page:

[Figure: the first page of Manna and Pnueli 1983. The author names are offset to the right; a line drawing dominates the top: a dog wags its tail, tongue dripping eagerly, in front of a kabob marked "ADA", "shared variable", and "CSP".]
I rest my case.

Update

KMT won a distinguished paper award at PLDI!

Bridging the gradual typing gap at OOPSLA 2021

I want to believe in a future where the lion will lie down with the lamb; we’ll beat our swords into plowshares; and developers will migrate dynamic prototypes to robust static systems with confidence. But these Aquarian visions are elusive. Having a map of the road to paradise in theory doesn’t mean we know how to get there in practice. Let me tell you about two papers at OOPSLA that shuffle us a few steps forward on this long pilgrim’s trail.

[Figure: a vintage poster for "Hair", the American Tribal Love-Rock Musical, with a trippy inverted head, advertising a performance at the Aquarius Theatre in Los Angeles.]

Migrating programs

How do you actually get a program from Scheme into ML? Or from JavaScript into TypeScript? The theory of gradual typing goes far beyond these pedestrian questions. In principle, we know how to reconcile dynamism with much more complex systems, like information flow or refinement types or effect systems. But there’s very little tooling to support moving any particular Scheme program into ML. (If your program is a Racket program, then you’re in some luck.)

People have studied program migration before, under a variety of names. Papers go back at least to 2009, arguably even earlier. There are lots of different approaches, and most comprise some form of type inference and custom constraint solving—complex! Worse still, there’s been no consensus on how to evaluate these systems. Luna Phipps-Costin, Carolyn Jane Anderson, me, and Arjun Guha dug into program migration. Our paper, “Solver-based Gradual Type Migration”, tries to build a map of the known territory so far:

  1. There are competing desiderata: maximal type precision, compatibility with code at different types, and preserving the existing semantics of your program, i.e., safety.
  2. We evaluate a variety of past techniques on prior benchmarks, and we devise a novel set of “challenge” problems. Our evaluation framework is robust, and you could plug in other approaches to type migration and evaluate them easily.
  3. We introduce a new, very simple approach to type migration, which we call TypeWhich. TypeWhich uses an off-the-shelf SMT solver. You can choose how compatible/precise you want it to be, but it’ll always be safe.

I’m excited about each of these contributions, each for its own reason.

For (1), I’m excited to formally explain that what you’re actually trying to do with your code matters. “Gradual typing” sensu lato is pretty latus indeed. Are you migrating a closed system, module by module? Or are you coming up with type annotations for a library that might well be called by untyped clients? These are very different scenarios, and you probably want your type migration algorithm to do different things! Bringing in these competing concerns—precision, compatibility, and safety—gives researchers a way to contextualize their approaches to type migration. (All that said, to me, safety is paramount. I’m not at all interested in a type migration that takes a dynamic program that runs correctly on some input and produces a statically typed program that fails on the same input… or won’t even compile! That doesn’t sound very gradual to me.)

For (2), I’m excited to be building a platform for other researchers. To be clear, there’s a long way to go. Our challenge problems are tiny toys. There’s a lot more to do here.

For (3), I’m excited to have an opportunity to simplify things. The TypeWhich constraint generator is simple, classic PL; the constraints it generates for SMT are straightforward; the models that SMT generates are easy to understand. It’s a cool approach!

One tiny final note: Luna has done a tremendous amount of incredibly high-quality work on this project, both in code and concept. She’s just now starting her third year of undergraduate study. So: watch out! You ain’t ready.

Typed functional programming isn’t about functions

If there’s a single defining ‘killer’ feature of typed functional programming, it isn’t first-class functions at all: it’s algebraic datatypes. Algebraic datatypes help make illegal states unrepresentable and ASTs easy to work with. They’re a powerful tool, and their uptake in a variety of new-hotness languages (Kotlin, Rust, Swift) speaks to their broad appeal.
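As a tiny illustration of “make illegal states unrepresentable”—my stock example, nothing from the paper—compare an all-optional record to a sum type in Haskell:

-- A record of Maybes admits a nonsense state: no email *and* no phone.
data ContactLoose = ContactLoose { email :: Maybe String, phone :: Maybe String }

-- An algebraic datatype rules that state out by construction.
data Contact
  = EmailOnly String
  | PhoneOnly String
  | Both      String String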

Moving Scheme code to ML is an old goal, and it’s the bread and butter of the introductory sections of gradual typing papers. But are we any closer than we were fifteen years ago? (I’d say “yes”, and point at Typed Racket, or “nobody knows what’s happening anyway” and point at Idris’s Chez Scheme runtime.)

Stefan Malewski, me, and Éric Tanter tried to figure out how algebraic datatypes play with dynamic features. Our paper, “Gradually Structured Data“, uses AGT to ‘compute’ static and dynamic semantics for a language with possibly open algebraic datatypes and the unknown type in a few flavors (?, the unknown type; a new ground type for “datatype”, the same way int and bool and ?->? are ground; and a new type for “any open datatype”). The features gel in a nice way, letting us express some cool behaviors (see Section 2 for how one might evolve a simple JSON API) and sit in a novel space (see Section 5 for a thorough comparison to related features).

I’m particularly pleased that we’ve found a new place in the design spectrum (per our feature chart in Section 5) that seems to support incremental program migration (per our examples in Section 2)—and it’s formally grounded (by using AGT in the middle, formal sections).

This paper came out of conversations with Éric after my screed about gradual typing’s two lineages at SNAPL (see also my followup blogpost, “What to Define When You’re Defining Gradual Type Systems”). There’s plenty more to do: what about separate compilation? What are the right representation choices? How should runtime checks really go, and how can programmers control the costs?

I remember—with some panic, but fondly—a question I was asked after giving the talk for "Contracts Made Manifest" at POPL 2010. That paper compares the latent approach to contracts in Racket-then-Scheme (well-structured runtime checks at module boundaries) to the manifest approach (runtime checks are a form of type coercion, occurring anywhere) in the emerging refinement types literature (Sage, Liquid Types, etc.). I had shown that the two aren’t equivalent in the presence of dependency, and I concluded by talking about how the two implementation approaches differed. So: somebody asked, "Which approach should you use?" To be honest, I had hardly even thought about it.

So, suppose you wanted to use algebraic datatypes and dynamic features today: which language should you use? I’ve thought about it, and the answer, sadly, is, "It depends". OCaml’s polymorphic variants get you a long way; Haskell’s Dynamic could work great, but it’s badly in need of usable surface syntax. (I’ve tried to get Richard Eisenberg to help me with the fancy work to make that happen, but he’s justifiably worried that the Haskell community would run him out of town.) Scala, Haskell, and OCaml are your best bets if you want true algebraic datatypes. If you’re more relaxed about things, Typed Racket or TypeScript could work well for you. If what you’re looking for is a type system expressive enough to capture interesting dynamic idioms, then I think there’s a clear choice: CDuce. Ever since un bel recensore anonimo at SNAPL 2019 showed me that CDuce can type flatten, I’ve been impressed. Check this out:

let flatten ( Any -> [ (Any\[Any*])* ] )  (* returns a list of non-lists *)
  | [] -> []                              (* nil *)
  | (h,t) -> (flatten h)@(flatten t)      (* cons *)
  | x -> [x]                              (* anything else *)

Look at that type! In just a few lines of CDuce, we can show that flatten produces not just a list of elements, but a list of things that are not themselves lists. The price here is that CDuce’s types are set-theoretic, which means things are a touch different from what people are used to in OCaml or Haskell. But if you’re okay with that, CDuce is a serious contender!

Coda: see you at OOPSLA?

I’m planning on going to OOPSLA 2021 in Chicago, given the twoopsla and the opportunity to present a paper from OOPSLA 2020, “Formulog: Datalog for SMT-based static analysis”, with Aaron Bembenek and Steve Chong. I’ve already blogged about it, but I’m excited to get to give an in-person version of the talk, too. You can still watch Aaron’s excellent recorded talk on YouTube and enjoy the cabin vibes. There won’t be cabin vibes at my OOPSLA 2020 talk, but there will be terrible jokes. So: think about it. Will I see you at OOPSLA? I hope so!

Cast notation: a case study

I recently wrote on the SIGPLAN blog about how PL notation is a barrier to entry. While the arguments are mine, I acknowledge in the post the many folks who helped me write it. Ideas from an interesting conversation with Neel Krishnaswami came up again in a comment from Jeremy Gibbons. The universe has spoken: here’s what I think about cast notation.

First, here’s what Neel said:

I think that the argument in this note is a bit under theorized. As I see it, there are three fundamental forces at work:

1. Infix operators are hard to parse without extra side information (eg, precedence).

2. Center-embedding is hard to parse for psycholinguistic reasons.

3. Semantics is about relating things, and it is easier for people to see connections when the notational differences between the two things are small.

So when you compare cast(e, S, T) to e <S => T>, the functional notation wins in point 1 and loses on point 2. This is because cast(cast(e, S, T), T, U) has nested structure in a way that e <S => T> <T => U> does not—the second one can parse as exp cast*.

I don’t know the work on gradual typing well enough to say with any confidence, but I would have expected the second notation to be a bit better for 3. The semantics of the term e is a map Γ -> S, and if the meaning of a cast is an embedding function S -> T, then [[ e <S => T> ]] = [[e]]; [[<S => T>]] — i.e., the parts of the term can be interpreted using composition in a diagrammatic order without changing the relative position of the subterms.

My guess in this last part is also an example of the dynamic that leads to bad notation — we pick our notation at the outset, based on a guess about which semantic properties are important, and we can get stuck with bad notation if the properties that actually are important differ from the ones we guessed when designing the notation. (Well, people can also just have bad taste, I guess.)

—Neel Krishnaswami, personal communication.

Both Neel and Jeremy zeroed in on a feature I really like about the e <S => T> or e :: S => T notations: they’re neatly diagrammatic. Jeremy goes further, noting that these notations suggest that casts might compose, as in e <S=>T> <T=>U> ~= e <S=>U>.

If I were forced to choose a notation, I do think these two are the best… with a slight preference for e :: S => T. (I think Phil Wadler introduced this notation, but I’m too tired to run it down just now. Let me know in the comments? Edit: thanks to James McKinna for pointing out the source—“Blame for all” by Ahmed et al.—in a comment, below!)

So why do I prefer a function? In short: the notations suggest identities which don’t actually hold.

Casts don’t compose

Whether you’re casting between simple gradual types or dependent refinements… casts don’t actually compose! Consider a term f : int -> ?, i.e., a function f that takes an int and returns… something.

We can cast f to be purely dynamic, writing f' = f :: (int -> ?) => (? -> ?). These types are compatible, i.e., they differ only by having or not having a ?. Now eventually f' may flow out of the dynamic part of our code and arrive at some context that thinks f' ought to be a bool -> bool function, casting it. So we get:

f' :: (? -> ?) => (bool -> bool) =
(f :: (int -> ?) => (? -> ?)) :: (? -> ?) => (bool -> bool) =
f :: (int -> ?) => (? -> ?) => (bool -> bool) 

Now, composition would say that we ought to be able to convert the above to f :: (int -> ?) => (bool -> bool), but such a cast is statically forbidden—these types aren’t compatible because their domains are incompatible, i.e., you can never cast from int to bool! (If you’re surprised that compatibility isn’t transitive in this way, a whole literature awaits. I recommend Abstracting Gradual Typing by Garcia, Clark, and Tanter as a starting point: transitivity can fail.)

In the contracts world, e :: S => T => U ~= e :: S => U is just as bad. What if S = U = {x:Int|true}, but T = {x:Int|x>0}? By eliminating the cast in the middle, we’ve forgotten to check that e is positive! Such a forgetful semantics comes up as a possible space-efficient semantics, though you can do better.

Cast congruence

Where Jeremy talked about composition of casts, Neel talked about compositional semantics: that is, the postfix notation directly suggested a diagrammatic denotation, as in [[ e :: S => T ]] = [[ e ]] ; [[ S => T]]. My experience with casts suggests that this intuition isn’t a helpful one, for two reasons.

First: the key ingredient for space-efficient evaluation is not using conventional composition for casts, but rather treating casts in your continuation specially. That’s no ordinary semi-colon! A “cast congruence” lemma lets you recover conventional reasoning, but it takes quite some work to get there.

Second, treating casts as first class (i.e., utterable without being directly applied) forces you to think about very strange terms, like <S1->S2 => T1->T2> <S1 => S2>. (Just… don’t. For why?) Just as for primitive operations, it’s simplest to force casts to be fully applied.

Use coercions

I don’t like these notations for casts because they offer bad suggestions. A textual notation is modestly more cumbersome here, but it’s worth it for clarity to newcomers. It’s particularly worth skipping fancy notation in this setting, because casts are merely a technical device for a core calculus, not a part of the surface language.

But the real truth is: if you’re interested in higher-order runtime checks, you really should be using Henglein’s coercions anyway. (And check out Henglein and Rehof’s Scheme implemented on those principles while you’re at it.) Coercions are much clearer than casts, compose naturally, and are the source of space efficiency and fast implementations. What’s more, blame safety for coercions is straightforward… it’s syntactic!

Postscript: the beam in my eye

You might say, “Well, Michael, it seems like all your papers use the <S => T> e notation. Who are you to judge?”

I used that notation first for POPL 2010, when we (me, Benjamin Pierce, and Stephanie Weirich) tried to figure out whether or not contracts and hybrid types were the same. (They are at simple types. They aren’t if you have dependency—they treat “abusive”, self-contradictory contracts differently.) I just stole Cormac Flanagan’s notation from Hybrid Type Checking, changing an \rhd to a \Rightarrow. (Ugh, why change it at all? My guess: I could draw \Rightarrow better on the board.)

I’ve stuck with that notation… and that’s the heart of the problem. People get used to a board notation, and then they decide that their preferred shorthand is what should go in the paper. What’s good for you when you’re doing the work may not be helpful or even clear to new folks.

I like Neel’s final diagnosis. Don’t invent a notation in the first paper. Use whatever you want on the board, but publish with text first. Like a good stew, your notation will be better on the second day, once the flavors have had time to marry. Later on, if you decide you do need a notation, you’ll know exactly which identities you want your notation to suggest… and which it should not suggest!

What to Define When You’re Defining Gradual Type Systems

So you want to define a gradual type system, like all the cool kids? My SNAPL 2019 paper imagines three possible motivations:

  1. Expressiveness. You have nothing to lose but your static chains!
  2. Interoperation. Gradual typing seamlessly weaves the dynamic and static worlds into a single fabric.
  3. Typing itself. Static typing offers myriad benefits: enjoy them today!

You don’t have to pick just one. Or maybe you have a different motivation—I’d love to hear it. If you’re motivated by one of these goals but aren’t sure what to do, the paper offers a variety of challenge problems in Section 3.

Now, what do you have to do to define your gradual type system? You have to come up with a type system that has a question mark in it, of course. (You can also write any or dyn or Dynamic or *—whatever works for you.) But what else?

  • A surface language. Since Siek and Taha’s seminal 2006 paper, gradual types have commonly been expressed via elaboration: a source language (with nonexistent or optional or partial type annotations) is translated to a core language that makes all dynamism explicit. What is your source language? Even if you don’t define your source language formally, give an intuition about how programmers will experience it. Can programmers control what’s dynamic and what’s static? Do you ever reject source programs? Which? (GTLC rejects true 5—even in dead code—but different source languages could do different things.) Why is your line the right one to draw?
  • Concrete examples. Ideally drawing from real-world examples, what might be good about gradual types in your context? What new programs do you allow? What problems do you avoid? What guarantees do you gain? Make your example programs good! As Alan Perlis said, “A program without a loop and a structured variable isn’t worth writing”. Examples from the SNAPL paper include: the flatten function, JSON processing, or the “attach-to-the-request” idiom in middleware.
  • Operations. What can we do with base types? Having real operations around will force you to answer hard questions about your source and core languages. How does equality work, i.e., what can be compared and what are the answers? Does your dynamic language reject 5 + "hit"? What about 5 + ((λx.x) "hit")? If you truly have a dynamic type, what operations can you do on it? Which can fail? Is there a way to check at runtime whether casting to a static type will succeed before you commit to such reckless behavior?
  • Control. Include conditionals or some other nontrivial notion of control flow. The first published rules for gradual typing that used a notion of ‘meet’ came in 2012! The way you treat join points in control says a lot about the ergonomics of your system. Church encodings do not cut the mustard.
  • Type semantics. Are your types worth the pixels they’re written on? What do they mean? If I have a value of a given type, what guarantees do I have? You don’t need to give a formal type semantics, but it’s important to know what to expect. If I write a function λx:T. e, what can I actually assume about x in e? If T is int, do I know x is an int, or could it blow up? What about ref int… can reading fail? Writing? What about list int? Does pattern matching on it cause conversions, or possible failure? What about…

The SNAPL 2019 paper argues that there are two ‘lineages’ of gradual typing: one which starts from statically typed languages and relaxes or modifies the type system to include dynamic features, and one which starts from dynamic languages and tries to develop a static type system that can accommodate your ‘preexisting conditions’—legacy code. Whichever lineage you’re working in, each item above is worth carefully considering.

I want to conclude by calling out a paper that more people ought to know about; it does a good job on most of these points: Henglein and Rehof’s "Safe polymorphic type inference for a dynamically typed language: translating Scheme to ML" (FPCA 1995). It came out the same year as Alanis Morissette’s acclaimed international debut album, Jagged Little Pill.

(The official ACM version is less complete than the technical report—alas.) They are clear about their surface language (Scheme—with argument lists and call/cc, but not arbitrary set!). They have an entire section of concrete examples, with good demonstrations of how conditionals work with their coercion parameters. They even draw on examples from the literature, citing Mike Fagan’s thesis (which is a goldmine of examples). They don’t give a formal type semantics, but they do explain (with positive and negative examples) how type coercion parameters and polymorphism interact to achieve in their elaborated ML the ad hoc polymorphism necessary to implement their source Scheme.

I also want to highlight this paper because it’s one that I’ve never heard folks actually talk about, though it seems to be cited well enough. I urge anyone who is interested in gradual types to read it. Just like Alanis’s cri de coeur against the shallow world of pop, some things from 1995 are worth revisiting.

Ron Garcia gave helpful feedback on a draft of this post. Thanks, Ron!

Formulog: ML + Datalog + SMT

If you read a description of a static analysis in a paper, what might you find? There’ll be some cute model of a language. Maybe some inference rules describing the analysis itself, but those rules probably rely on a variety of helper functions. These days, the analysis likely involves some logical reasoning: about the terms in the language, the branches conditionals might take, and so on.

What makes a language good for implementing such an analysis? You’d want a variety of features:

  • Algebraic data types to model the language AST.
  • Logic programming for cleanly specifying inference rules.
  • Pure functional code for writing the helper functions.
  • An SMT solver for answering logical queries.

Aaron Bembenek, Steve Chong, and I have developed a design that hits the sweet spot of those four points: given Datalog as a core, you add constructors, pure ML, and a type-safe interface to SMT. If you set things up just right, the system is a powerful and ergonomic way to write static analyses.

Formulog is our prototype implementation of our design; our paper on Formulog and its design was just conditionally accepted to OOPSLA 2020. To give a sense of why I’m excited, let me excerpt from our simple liquid type checker. Weighing in under 400 very short lines, it’s a nice showcase of how expressive Formulog is. (Our paper discusses substantially more complex examples.)

type base =
  | base_bool

type typ = 
  | typ_tvar(tvar)
  | typ_fun(var, typ, typ)
  | typ_forall(tvar, typ)
  | typ_ref(var, base, exp)

and exp = 
  | exp_var(var)
  | exp_bool(bool)
  | exp_op(op)
  | exp_lam(var, typ, exp)
  | exp_tlam(tvar, exp)
  | exp_app(exp, exp)
  | exp_tapp(exp, typ)

ADTs let you define your AST in a straightforward way. Here, bool is our only base type, but we could add more. Let’s look at some of the inference rules:

(* subtyping *)
output sub(ctx, typ, typ)

(* bidirectional typing rules *)
output synth(ctx, exp, typ)
output check(ctx, exp, typ)

(* subtyping between refinement types is implication *)
sub(G, typ_ref(X, B, E1), typ_ref(Y, B, E2)) :-
  wf_ctx(G),
  exp_subst(Y, exp_var(X), E2) = E2prime,
  encode_ctx(G, PhiG),
  encode_exp(E1, Phi1),
  encode_exp(E2prime, Phi2),
  is_valid(`PhiG /\ Phi1 ==> Phi2`).

(* lambda and application synth rules *)
synth(G, exp_lam(X, T1, E), T) :-
  wf_typ(G, T1),
  synth(ctx_var(G, X, T1), E, T2),
  typ_fun(X, T1, T2) = T.

synth(G, exp_app(E1, E2), T) :-
  synth(G, E1, typ_fun(X, T1, T2)),
  check(G, E2, T1),
  typ_subst(X, E2, T2) = T.

(* the only checking rule *)
check(G, E, T) :-
  synth(G, E, Tprime),
  sub(G, Tprime, T).

First, we declare our relations—that is, the (typed) inference rules we’ll be using. We show the most interesting case of subtyping: refinement implication. Several helper relations (wf_ctx, encode_*) and helper functions (exp_subst) patch things together. The typing rules below follow a similar pattern, mixing the synth and check bidirectional typing relations with calls to helper functions like typ_subst.

fun exp_subst(X: var, E : exp, Etgt : exp) : exp =
  match Etgt with
  | exp_var(Y) => if X = Y then E else Etgt
  | exp_bool(_) => Etgt
  | exp_op(_) => Etgt
  | exp_lam(Y, Tlam, Elam) =>
    let Yfresh = 
      fresh_for(Y, X::append(typ_freevars(Tlam), exp_freevars(Elam)))
    in
    let Elamfresh = 
      if Y = Yfresh
      then Elam
      else exp_subst(Y, exp_var(Yfresh), Elam)
    in
    exp_lam(Yfresh,
            typ_subst(X, E, Tlam),
            Elamfresh)
  | exp_tlam(A, Etlam) =>
    exp_tlam(A, exp_subst(X, E, Etlam))
  | exp_app(E1, E2) => 
    exp_app(exp_subst(X, E, E1), exp_subst(X, E, E2))
  | exp_tapp(Etapp, T) => 
    exp_tapp(exp_subst(X, E, Etapp), typ_subst(X, E, T))
  end

Expression substitution might be boring, but it shows the ML fragment well enough. It’s more or less the usual ML, though functions need to have pure interfaces, and we have a few restrictions in place to keep typing simple in our prototype.

There’s lots of fun stuff that doesn’t make it into this example: not only can relations call functions, but functions can examine relations (so long as everything is stratified). Hiding inside fresh_for is a clever approach to name generation that guarantees freshness… but is also deterministic and won’t interfere with parallel execution. The draft paper has more substantial examples.

We’re not the first to combine logic programming and SMT. What makes our design a sweet spot is that it doesn’t let SMT get in the way of Datalog’s straightforward and powerful execution model. Datalog execution is readily parallelizable; the magic sets transformation can turn Datalog’s exhaustive, bottom-up search into a goal-directed one. It’s not news that Datalog can turn these tricks—Yiannis Smaragdakis has been saying it for years!—but integrating Datalog cleanly with ML functions and SMT is new. Check out the draft paper for a detailed related work comparison. While our design is, in the end, not so complicated, getting there was hard.

Relatedly, we also have an extended abstract at ICLP 2020, detailing some experiments in using incremental solving modes from Formulog. You might worry that Datalog’s BFS (or heuristic) strategy wouldn’t work with an SMT solver’s push/pop (i.e., DFS) assertion stack—but a few implementation tricks and check-sat-assuming indeed provide speedups.

The Big Brzozowski

When I mentioned on Twitter how important smart constructors are for Brzozowski derivatives, Prabhakar Ragde raised an interesting question:


The question is subtle: it’s not about how many possible derivatives there are, but rather about their size in the worst case. If I start parsing a string s with a regular expression e, how big might the nth derivative of e be?

I livecoded an empirical answer. TL;DR: the worst-case derivatives are exponentially large!

I hacked up a driver that would (a) enumerate all regular expressions of a certain size and (b) calculate their 1st through nth derivatives, while (c) logging the largest size seen at each step. (You can get the raw CSV from the repo.)

Data in hand, I did some simple analysis in R. First, I plotted the data on a logarithmic graph with a linear fit: et voilà, it’s exponential.

[Figure: a clear linear fit on a logarithmic graph.]

With that graph, I granted myself permission to fit an exponential curve:

[Figure: nice fits on a conventional graph—the smart-constructor curve is much flatter than the sharply spiking plain one; R² for the fit is in the high .90s for both.]

So: as you take Brzozowski derivatives, your regular expression will—in the worst case—grow exponentially. That sucks. Unsurprisingly smart constructors don’t give asymptotic savings, even though they save you a huge amount of overhead.

What do you mean, ‘big’?

In my empirical analysis, I used one measure of size: the number of nodes, counting empty regexes as being of size 0. But there are others! In particular, there’s a height measure on regular expressions, too.
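Concretely, here’s how I’d write the two measures in Haskell over a Regularity-style regex type—a sketch where empty counts as 0, as above, and the other leaves count as 1 (an arbitrary choice):

data Regex = Empty | Epsilon | Char Char
           | Seq Regex Regex | Alt Regex Regex | Star Regex

-- Size: number of nodes, with Empty counting as 0.
size :: Regex -> Int
size Empty       = 0
size Epsilon     = 1
size (Char _)    = 1
size (Seq r1 r2) = 1 + size r1 + size r2
size (Alt r1 r2) = 1 + size r1 + size r2
size (Star r)    = 1 + size r

-- Height: depth of the syntax tree, again with Empty counting as 0.
height :: Regex -> Int
height Empty       = 0
height Epsilon     = 1
height (Char _)    = 1
height (Seq r1 r2) = 1 + max (height r1) (height r2)
height (Alt r1 r2) = 1 + max (height r1) (height r2)
height (Star r)    = 1 + height r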

In the worst case, the size of a regular expression is exponential in its height. Can I characterize how the height of a regular expression grows with the Brzozowski derivative?

I built a small model in Coq and proved that:

  • The height of the derivative of a regular expression E is no greater than twice the height of E itself (deriv_height). I doubt that this is a tight upper bound.
  • There is no constant k that bounds the height increase of the derivative (deriv_height_constant_general). The general thrust of the proof is as follows. Suppose, for contradiction, that h (d c e) <= k + h e for every e and c, and that the bound is attained: let e1 be a regex with maximal derivative height, i.e., h (d c e1) = k + h e1 for some c. Consider e = seq (alt epsilon e1) empty, and write e1' for alt epsilon e1; note that e1' is nullable and also has maximal derivative height. We have that h (d c e) = 2 + h (d c e1') by definition; by assumption, h (d c e) <= k + h e = k + 1 + h e1'. By assumption again, h (d c e1') = k + h e1'. But then how can it be that 2 + k + h e1' <= k + 1 + h e1'?

With these two proofs combined, we can be confident that the exponential growth here is real, and not an artifact of small models. It would be interesting to determine what the multiplier actually is.

In these proofs, I set s empty = h empty = 0. It felt right at the time, but maybe it’s a little weird. So I redid the proofs setting them to 1. The dude abides.

Where next?

I have several remaining questions. Can we characterize which terms have large derivatives? Do they have equivalent forms with smaller derivatives? I suspect right-associating a seq will be better than left-associating it. Are there rewrites that ensure non-nullability (or minimize nullability) of the left-hand sides of seqs?

Smart constructors are smarter than you think

CS 181-N Advanced Functional Programming is a project-based elective I’m teaching this semester. The first half of the course teaches the students basic Haskell programming, from purity to FAM and IO, by way of parsing. The second half of the course has the students build a small project using a pure FP language (Haskell and Elm, this go round). The projects were kicking off just as we pivoted to video.

I’ve been streaming my own project as a way of showing students different software engineering practices: how to use CI, how to write good tests and generators, how to run benchmarks, how to file bug reports, and so on. It’s been fun.

My ‘project’ is the Regularity, a library for working with regular languages and automata. We get a lot of A-plot Haskell mixing with B-plot theory of computation.

After implementing a few kinds of NFA (and seeing that they’re much more efficient than naive “find all matches” regex matching), I turned to Brzozowski derivatives. Most undergraduates aren’t exposed to the idea of derivatives outside of a calculus class, so I thought it’d be fun to show. Here’s the implementation, from Regex.hs:

import Prelude hiding (seq)  -- the seq smart constructor below shadows Prelude's seq
import Data.Text (Text)
import qualified Data.Text as T

data Regex =
    Empty
  | Epsilon
  | Char Char
  | Seq !Regex !Regex
  | Alt !Regex !Regex
  | Star !Regex
  deriving (Eq, Ord)

dMatches :: Regex -> Text -> Bool
dMatches re t = nullable (T.foldl (flip deriv) re t)

-- re `matches` c:s iff deriv c re `matches` s
deriv :: Char -> Regex -> Regex
deriv _c Empty         = empty
deriv _c Epsilon       = empty
deriv  c (Char c')     = if c == c' then epsilon else empty
deriv  c (Seq re1 re2) = 
  alt (seq (deriv c re1) re2) (if nullable re1 then deriv c re2 else empty)
deriv  c (Alt re1 re2) = alt (deriv c re1) (deriv c re2)
deriv  c (Star re)     = seq (deriv c re) (star re)

-- `nullable re` returns true iff re accepts the empty string
nullable :: Regex -> Bool
nullable Empty         = False
nullable Epsilon       = True
nullable (Char _)      = False
nullable (Seq re1 re2) = nullable re1 && nullable re2
nullable (Alt re1 re2) = nullable re1 || nullable re2
nullable (Star _re)    = True

It only took a few minutes to implement the derivatives, and performance was… fine. Thinking it would be nice to show the concept, I did smart constructors next:

empty :: Regex
empty = Empty

epsilon :: Regex
epsilon = Epsilon

char :: Char -> Regex
char = Char

seq :: Regex -> Regex -> Regex
seq Epsilon re2     = re2
seq re1     Epsilon = re1
seq Empty   _re2    = Empty
seq _re1    Empty   = Empty
seq re1     re2     = Seq re1 re2

alt :: Regex -> Regex -> Regex
alt Empty   re2     = re2
alt re1     Empty   = re1
alt Epsilon re2     = if nullable re2 then re2 else Alt Epsilon re2
alt re1     Epsilon = if nullable re1 then re1 else Alt re1 Epsilon
alt re1     re2     = if re1 == re2 then re1 else Alt re1 re2

star :: Regex -> Regex
star Empty     = Epsilon
star Epsilon   = Epsilon
star (Star re) = Star re
star re        = Star re

But then something surprised me: with smart constructors, the Brzozowski-based matching algorithm was substantially faster than NFAs, with or without epsilon transitions!

How much faster? Here are the times as reported by criterion.

Matching (a|a)*                             …on a^50    …on a^100
Naive, all-matches                           71.4 µs     255  µs
NFA with epsilon transitions                 31.6 µs     55.0 µs
NFA                                           4.88 µs     9.29 µs
Brzozowski derivative, dumb constructors     61.8 µs     262  µs
Brzozowski derivative, smart constructors     1.88 µs     3.73 µs

I knew smart constructors were an optimization, but I was blown away! If you didn’t know about smart constructors, you might shrug off Brzozowski derivatives as a curiosity for math dweebs. But with just a tiny bit of optimization from smart constructors, Brzozowski derivatives offer the best performance and the simplest implementation, at least on these meager tests.
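For context, a minimal criterion harness for measurements like these might look as follows—a sketch that assumes the Regex module above exports these names; it’s not the Regularity repo’s actual benchmark code:

import Criterion.Main
import qualified Data.Text as T

import Regex (Regex, dMatches, star, alt, char)  -- assumed exports, for this sketch

main :: IO ()
main = defaultMain
  [ bgroup "(a|a)*"
      [ bench "on a^50"  $ whnf (dMatches aOrA) (T.replicate 50  (T.singleton 'a'))
      , bench "on a^100" $ whnf (dMatches aOrA) (T.replicate 100 (T.singleton 'a'))
      ]
  ]
  where
    aOrA :: Regex
    aOrA = star (alt (char 'a') (char 'a'))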

It would be interesting to dig into why the dumb constructors are so slow. It doesn’t seem to be laziness, so is it just allocation churn? (We’re not even using hash-consing, which can provide another performance boost.) Is it large, redundant regexes? Something else?

Regular-expression derivatives re-examined by Owens, Reppy, and Turon is closely related and worth a read! Relatedly, Neel Krishnaswami has some lovely theoretical discussion, too.

Flapjax on PL Perspectives

Shriram Krishnamurthi, Arjun Guha, Leo Meyerovich, and I wrote a post about Flapjax on PL Perspectives, the SIGPLAN blog. (Thanks to Mike Hicks for helping us edit the post!)

Flapjax won the OOPSLA MIP award for 2009 (though the SIGPLAN website isn’t yet up to date). Our blog post is about the slightly unconventional way we worked: most of the Flapjax work happened in 2006 and 2007, but we didn’t even try to write the paper until several years later (Leo and I were in grad school). Rather than recapitulate those ideas, go read the post!
