How to cook a KAT for your pet theory (13 May 2022)
http://www.weaselhat.com/2022/05/13/how-to-cook-a-kat-for-your-pet-theory/

Kleene algebra with tests is a beautiful, powerful framework for reasoning about programs. You can easily encode conventional While programs into KAT, and KAT enjoys decidable equality. Reasoning with KAT feels like you’re cheating Alan Turing himself: here we are, deciding nontrivial properties of programs!

The gist of KAT is that you write programs using a regular-expression-like notation: + for choice, ; for sequential composition, and * for iteration. So you might encode:

while x > 0:
  y += 1
  x -= 1

As (xGt0; incY; decX)*; ¬xGt0, where xGt0 is a ‘test’ and incY and decX are ‘actions’. KAT’s equivalence decision procedure can prove that this program is equivalent to any finite unrolling of itself… neat!
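One step of unrolling, for instance, follows from the standard Kleene algebra identity p* ≡ 1 + p;p* together with distributivity (this is textbook Kleene algebra, nothing specific to any one KAT):

(xGt0; incY; decX)*; ¬xGt0
  ≡ (1 + xGt0; incY; decX; (xGt0; incY; decX)*); ¬xGt0
  ≡ ¬xGt0 + xGt0; incY; decX; (xGt0; incY; decX)*; ¬xGt0

That is, either the guard fails and we exit immediately, or the body runs once and we are back at the top of the loop; iterating the argument yields any finite unrolling.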

NetKAT is the most impactful application of KAT: it’s an influential and successful academic project, and its ideas can already be found in numerous real, production systems. In light of NetKAT’s remarkable success… why don’t we apply KAT more often?

What’s hard about KAT?

On its own, KAT proves plenty of nice theorems, but none of them reason about particular program behaviors. In the code snippet above, xGt0, incY, and decX are uninterpreted—there’s no relationship between, say, xGt0 and decX. You might expect that ¬xGt0;decX;¬xGt0 is equivalent to ¬xGt0;decX, because decrementing a number less than or equal to 0 yields a number that is still less than or equal to 0. But KAT can’t prove that: the names of our tests and actions are suggestive, yet KAT treats them abstractly. If you want to reason about the semantics of your tests and actions, you need to build a custom, concrete KAT. NetKAT reasons about fields on packets, and doing so means building a particular, concrete KAT with particular actions. The original paper spends quite a bit of effort proving that this new, custom KAT has sound, complete, and decidable equivalence checking.

Worse still, KAT’s metatheory is very challenging. To create NetKAT, Nate Foster and company worked through closely related ideas for a few years before Nate joined Cornell and started working with Dexter Kozen, KAT’s progenitor. Only then did they realize that KAT would be a good fit, and they got to work on developing a concrete KAT—NetKAT. Unfortunately, “collaborate with Dexter” is an approach that doesn’t scale.

How to cook a KAT

In an upcoming PLDI 2022 paper, “Kleene Algebra Modulo Theories: A Framework for Concrete KATs”, Ryan Beckett, Eric Campbell, and I show how to generate a KAT over a given theory, i.e., a set of tests, actions, and their equational theory. We call the approach Kleene algebra modulo theories, or KMT. The paper covers quite a few examples:

  • booleans and bit vectors
  • monotonic natural numbers
  • unbounded sets and maps
  • NetKAT

What’s more, our approach allows for higher-order theories, like taking the product of two theories or using LTL over finite traces to reason about another theory. (Our approach abstracts and generalizes Temporal NetKAT, which is just a concrete instance of our more general method.)

To build a KMT, you provide primitive tests and actions, along with weakest preconditions relating each pair of test and action. (For example, in a theory of monotonically increasing naturals, the weakest precondition of the test x > 5 with respect to an action that increments x is x > 4.) There’s an ordering requirement: a test must be no smaller than its preconditions, which keeps the precondition computation well founded. With these in hand, we’re able to automatically derive a KAT with good properties in a pay-as-you-go fashion:

  • If your theory is sound, the KAT is sound.
  • If your theory is complete, the KAT is complete.
  • If your theory’s satisfiability checking is decidable, we can derive a decision procedure for equivalence.

I’m particularly excited that our framework is prototype-ready: our code is implemented as an OCaml library, where you define theories as functors. Please try it out—mess around and write your own theories, following our examples. We hope that KMT will significantly lower the bar for entry, making it easier for more people to play around with KAT’s powerful equivalence checking.
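To give a sense of the shape of a theory, here is a minimal OCaml sketch. The module and function names below are illustrative only, not KMT’s actual interface; a theory amounts to types for primitive tests and actions, a weakest-precondition function, and a satisfiability check.

(* A sketch of a KMT-style theory; all names here are hypothetical. *)
module type THEORY = sig
  type test     (* primitive tests, e.g., "x > 5" *)
  type action   (* primitive actions, e.g., "increment x" *)

  (* Push a test backwards through an action. The result must be
     no "larger" than the input test (the ordering requirement). *)
  val wp : action -> test -> test

  (* Satisfiability of a conjunction of tests; this is the hook that
     lets the framework derive an equivalence decision procedure. *)
  val satisfiable : test list -> bool
end

(* A toy instance: monotonically increasing naturals, with tests
   comparing a variable to a constant. *)
module IncNat : THEORY = struct
  type test = Gt of string * int    (* x > n *)
  type action = Incr of string      (* x := x + 1 *)

  let wp (Incr x) (Gt (y, n)) =
    if x = y then Gt (y, n - 1) else Gt (y, n)

  (* Over the naturals, x > n on its own is always satisfiable; a real
     theory would handle conjunctions and negations carefully. *)
  let satisfiable _ = true
end

Given a module along these lines, the framework supplies the KAT operators and equational reasoning, and the satisfiability hook is what powers the derived equivalence decision procedure.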

What’s the catch?

There’s more than one way to cook a KAT. KMT generates KATs with tracing semantics, i.e., the exact trace of actions matters. In KAT+B! or NetKAT, later updates override earlier ones, e.g., x:=false; x:=true ≡ x:=true… but KMT will treat these terms differently, because they have different traces: the left-hand side records two assignments, the right-hand side only one. KAT+B! deliberately avoids tracing; NetKAT only traces at predefined points, by means of its dup primitive, which marks the current state as historically salient. There’s no deep reason for KMT to use tracing, and we believe KMT can be generalized to support dup-like controls for tracing.

The ordering constraint on weakest preconditions is a strong one. Our natural numbers, sets, and maps must be monotonic: they may grow or shrink, but not both. Nor can variables be compared to each other: two natural-valued variables x and y can each be compared to constants, but not to one another.

KMT is also just a prototype. It’s fast for small programs, but it takes dedicated work to make a KAT’s decision procedure efficient enough on more serious examples.

Why are you talking about cooking KATs?

The greatest POPL paper of all time is Manna and Pnueli 1983, “How to cook a temporal proof system for your pet language”. Why? Just take a look at the first page:

The header of the paper offsets the author names to the right. A line drawing dominates the top: a dog wags its tail, tongue dripping eagerly in front of a kabob marked with "ADA" and "shared variable" and "CSP".
I rest my case.

Update

KMT won a distinguished paper award at PLDI!

Processing Semi-Structured Data in the Unix Shell (29 June 2021)
http://www.weaselhat.com/2021/06/29/processing-semi-structured-data-in-the-unix-shell/

The Unix shell is incredibly powerful. I use it routinely for simple tasks (moving files around), routine work (grading scripts), and in my development process (building, deploying, etc.). When I’m working with text, the shell and its ecosystem are excellent: patching together cat, find, grep, sed, tr, and cut with shell pipelines and redirections is a convenient, expressive, and fast way to inspect and edit files.

But my shell toolchain is much less helpful when working with semi-structured data, like JSON and YAML. Folks have made wonderful contributions to the shell ecosystem to help—tools like jq and gron, which provide new languages for manipulating JSON. It may be embarrassing for a programming languages researcher to admit, but… I’m kind of maxed out on new languages.

So I built a new tool that lets you use your usual shell tools to work with modern file formats: ffs, the file filesystem.

A GIF showing the following shell interaction, editing JSON in place.

~/ffs/demo $ echo '{}' >demo.json
~/ffs/demo $ ffs -i demo.json &
[1] 56827
~/ffs/demo $ cd demo
~/ffs/demo/demo $ echo 47 >favorite_number
~/ffs/demo/demo $ mkdir likes
~/ffs/demo/demo $ echo true >likes/dogs
~/ffs/demo/demo $ echo false >likes/cats
~/ffs/demo/demo $ touch mistakes
~/ffs/demo/demo $ echo Michael Greenberg >name
~/ffs/demo/demo $ echo https://mgree.github.io >website
~/ffs/demo/demo $ cd ..
~/ffs/demo $ umount demo
~/ffs/demo $ 
[1]+  Done                    ffs -i demo.json
~/ffs/demo $ cat demo.json 
{"favorite_number":47,"likes":{"cats":false,"dogs":true},"mistakes":null,"name":"Michael Greenberg","website":"https://mgree.github.io"}~/ffs/demo $ 
~/ffs/demo $
Editing JSON in place using ffs.

ffs lets you mount semi-structured data as a filesystem: objects and lists correspond to directories, while other types correspond to regular files. You can mount a file in one format, edit the filesystem, and write it back in another.

All you need to run ffs is FUSE, a kernel module that supports userspace filesystems. You’ll want libfuse on Linux, or macFUSE on macOS. Download a binary and play around!

Formulog: ML + Datalog + SMT (7 August 2020)
http://www.weaselhat.com/2020/08/07/formulog-ml-datalog-smt/

If you read a description of a static analysis in a paper, what might you find? There’ll be some cute model of a language. Maybe some inference rules describing the analysis itself, but those rules probably rely on a variety of helper functions. These days, the analysis likely involves some logical reasoning: about the terms in the language, the branches conditionals might take, and so on.

What makes a language good for implementing such an analysis? You’d want a variety of features:

  • Algebraic data types to model the language AST.
  • Logic programming for cleanly specifying inference rules.
  • Pure functional code for writing the helper functions.
  • An SMT solver for answering logical queries.

Aaron Bembenek, Steve Chong, and I have developed a design that hits the sweet spot of those four points: given Datalog as a core, you add constructors, pure ML, and a type-safe interface to SMT. If you set things up just right, the system is a powerful and ergonomic way to write static analyses.

Formulog is our prototype implementation of our design; our paper on Formulog and its design was just conditionally accepted to OOPSLA 2020. To give a sense of why I’m excited, let me excerpt from our simple liquid type checker. Weighing in at under 400 very short lines, it’s a nice showcase of how expressive Formulog is. (Our paper discusses substantially more complex examples.)

type base =
  | base_bool

type typ = 
  | typ_tvar(tvar)
  | typ_fun(var, typ, typ)
  | typ_forall(tvar, typ)
  | typ_ref(var, base, exp)

and exp = 
  | exp_var(var)
  | exp_bool(bool)
  | exp_op(op)
  | exp_lam(var, typ, exp)
  | exp_tlam(tvar, exp)
  | exp_app(exp, exp)
  | exp_tapp(exp, typ)

ADTs let you define your AST in a straightforward way. Here, bool is our only base type, but we could add more. Let’s look at some of the inference rules:

(* subtyping *)
output sub(ctx, typ, typ)

(* bidirectional typing rules *)
output synth(ctx, exp, typ)
output check(ctx, exp, typ)

(* subtyping between refinement types is implication *)
sub(G, typ_ref(X, B, E1), typ_ref(Y, B, E2)) :-
  wf_ctx(G),
  exp_subst(Y, exp_var(X), E2) = E2prime,
  encode_ctx(G, PhiG),
  encode_exp(E1, Phi1),
  encode_exp(E2prime, Phi2),
  is_valid(`PhiG /\ Phi1 ==> Phi2`).

(* lambda and application synth rules *)
synth(G, exp_lam(X, T1, E), T) :-
  wf_typ(G, T1),
  synth(ctx_var(G, X, T1), E, T2),
  typ_fun(X, T1, T2) = T.

synth(G, exp_app(E1, E2), T) :-
  synth(G, E1, typ_fun(X, T1, T2)),
  check(G, E2, T1),
  typ_subst(X, E2, T2) = T.

(* the only checking rule *)
check(G, E, T) :-
  synth(G, E, Tprime),
  sub(G, Tprime, T).

First, we declare our relations—that is, the (typed) inference rules we’ll be using. We show the most interesting case of subtyping: refinement implication. Several helper relations (wf_ctx, encode_*) and helper functions (exp_subst) patch things together. The typing rules below follow a similar pattern, mixing the synth and check bidirectional typing relations with calls to helper functions like typ_subst.

fun exp_subst(X: var, E : exp, Etgt : exp) : exp =
  match Etgt with
  | exp_var(Y) => if X = Y then E else Etgt
  | exp_bool(_) => Etgt
  | exp_op(_) => Etgt
  | exp_lam(Y, Tlam, Elam) =>
    let Yfresh = 
      fresh_for(Y, X::append(typ_freevars(Tlam), exp_freevars(Elam)))
    in
    let Elamfresh = 
      if Y = Yfresh
      then Elam
      else exp_subst(Y, exp_var(Yfresh), Elam)
    in
    exp_lam(Yfresh,
            typ_subst(X, E, Tlam),
            Elamfresh)
  | exp_tlam(A, Etlam) =>
    exp_tlam(A, exp_subst(X, E, Etlam))
  | exp_app(E1, E2) => 
    exp_app(exp_subst(X, E, E1), exp_subst(X, E, E2))
  | exp_tapp(Etapp, T) => 
    exp_tapp(exp_subst(X, E, Etapp), typ_subst(X, E, T))
  end

Expression substitution might be boring, but it shows the ML fragment well enough. It’s more or less the usual ML, though functions need to have pure interfaces, and we have a few restrictions in place to keep typing simple in our prototype.

There’s lots of fun stuff that doesn’t make it into this example: not only can relations call functions, but functions can examine relations (so long as everything is stratified). Hiding inside fresh_for is a clever approach to name generation that guarantees freshness… but is also deterministic and won’t interfere with parallel execution. The draft paper has more substantial examples.

We’re not the first to combine logic programming and SMT. What makes our design a sweet spot is that it doesn’t let SMT get in the way of Datalog’s straightforward and powerful execution model. Datalog execution is readily parallelizable; the magic sets transformation can turn Datalog’s exhaustive, bottom-up search into a goal-directed one. It’s not news that Datalog can turn these tricks—Yiannis Smaragdakis has been saying it for years!—but integrating Datalog cleanly with ML functions and SMT is new. Check out the draft paper for a detailed related work comparison. While our design is, in the end, not so complicated, getting there was hard.

Relatedly, we also have an extended abstract at ICLP 2020, detailing some experiments in using incremental solving modes from Formulog. You might worry that Datalog’s BFS (or heuristic) strategy wouldn’t work with an SMT solver’s push/pop (i.e., DFS) assertion stack—but a few implementation tricks and check-sat-assuming indeed provide speedups.

Flapjax on PL Perspectives (3 December 2019)
http://www.weaselhat.com/2019/12/03/flapjax-on-pl-perspectives/

Shriram Krishnamurthi, Arjun Guha, Leo Meyerovich, and I wrote a post about Flapjax on PL Perspectives, the SIGPLAN blog. (Thanks to Mike Hicks for helping us edit the post!)

Flapjax won the OOPSLA MIP award for 2009 (though the SIGPLAN website isn’t yet up to date). Our blog post is about the slightly unconventional way we worked: most of the Flapjax work happened in 2006 and 2007, but we didn’t even try to write the paper until several years later (Leo and I were in grad school). Rather than recapitulate those ideas, go read the post!

smoosh v0.1 (15 May 2019)
http://www.weaselhat.com/2019/05/15/smoosh-v0-1/

I’ve been building an executable formalization of the POSIX shell semantics, which I’ve been calling smoosh (the Symbolic, Mechanized, Observable, Operational SHell).

I’m pleased to announce an important early milestone: smoosh passes the POSIX test suite (modulo locales, which smoosh doesn’t currently support). I’ve accordingly tagged this ‘morally correct’ version as smoosh v0.1. You can play around with the latest version via the shtepper, a web-based frontend to a symbolic stepper. I think the shtepper will be particularly useful for those trying to learn the shell or to debug shell scripts.

There’s still plenty to do, of course: there are bugs to fix, quirks to normalize, features to implement, and tests to run. But please: play around with the smoosh shell and the shtepper, file bug reports, and submit patches!

New paper: Word expansion supports POSIX shell interactivity (16 March 2018)
http://www.weaselhat.com/2018/03/15/expansion-supports-interactivity/

I’ve been thinking about and working on the POSIX shell for a little bit over a year now. I wrote a paper for OBT 2017, titled Understanding the POSIX Shell as a Programming Language, outlining why I think the shell is worthy of study.

For some time I’ve had the conviction that word expansion—the process that includes globbing with * but also things like command substitution with backticks—is somehow central to the shell’s interactivity. I’m pleased to have finally expressed my conviction in more detail: Word expansion supports POSIX shell interactivity will appear at PX 2018. Here’s the abstract:

The POSIX shell is the standard tool to deploy, control, and maintain systems of all kinds; the shell is used on a sliding scale from one-off commands in an interactive mode all the way to complex scripts managing, e.g., system boot sequences. For all of its utility, the POSIX shell is feared and maligned as a programming language: the shell is feared because of its incredible power, where a single command can destroy not just local but also remote systems; the shell is maligned because its semantics are non-standard, using word expansion where other languages would use evaluation.

I conjecture that word expansion is in fact an essential piece of the POSIX shell’s interactivity; word expansion is well adapted to the shell’s use cases and contributes critically to the shell’s interactive feel.

See you in Nice?

PHPEnkoder 1.14 (19 August 2016)
http://www.weaselhat.com/2016/08/19/phpenkoder-1-14/

Per a request from sk19er on the WordPress forums, I’ve implemented a new feature for PHPEnkoder: you can use the [noenkode] shortcode to turn off PHPEnkoder on a specific page. As always, you can get PHPEnkoder from the WordPress Plugin Directory.

Installing ctypes and ctypes-foreign on OS X with brew and OPAM (10 February 2016)
http://www.weaselhat.com/2016/02/10/installing-ctypes-and-ctypes-foreign-on-os-x-with-brew-and-opam/

I recently had some trouble getting ctypes working, so I thought I’d share my solution.

I found the Real World OCaml chapter on FFIs, and I tried following their advice first. They suggest:

brew install libffi
opam install ctypes

But it doesn’t quite work. For one, you also need to install ctypes-foreign. For two, the brew installation of libffi doesn’t automatically install itself globally, so ctypes can’t find it.

Here’s the right incantation:

brew install libffi
cd /usr/lib/
sudo mv libffi.dylib libffi.dylib.orig
sudo ln -s /usr/local/opt/libffi/lib/libffi.dylib
LIBFFI_CFLAGS=-I/usr/include/ffi CFLAGS=-I/usr/include/ffi opam install ctypes ctypes-foreign

Now, this is merely a good start. Once I had this much working, I still couldn’t get OCaml to call C functions! It came down to a bear of a linking issue… no combination of LD_PATH and LD_LIBRARY_PATH and LD_PRELOAD would help, nor would -lflags -cclib,-L. I just couldn’t get ctypes to see my shared library! I finally found an option to ld that does the trick: -force_load. Here’s my build line:

ocamlbuild -no-hygiene -pkg ctypes.foreign -lflags "-cclib -force_load /path/to/libfoo.a" bar.native

I imagine you can get away without -no-hygiene; I couldn’t get corebuild to work at all.
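For completeness, here is roughly what the OCaml side looks like once linking works. This is a minimal sketch: foo_add and its signature are hypothetical stand-ins for whatever the placeholder libfoo.a actually exports. Foreign.foreign resolves the symbol in the running process, which is exactly why the -force_load trick matters.

(* bar.ml: a minimal ctypes-foreign binding sketch. Assumes the
   hypothetical libfoo exports: int foo_add(int, int). *)
open Ctypes
open Foreign

(* Describe the C function type and resolve the symbol at runtime. *)
let foo_add = foreign "foo_add" (int @-> int @-> returning int)

let () =
  Printf.printf "foo_add 2 3 = %d\n" (foo_add 2 3)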

In any case, special thanks to Arjun Narayan for helping me debug this, and to Spiros Eliopoulos for mockery… er, absolutely nothing.

Twitter bots and OAuth (10 February 2015)
http://www.weaselhat.com/2015/02/09/twitter-bots-and-oauth/

I’m working on a Twitter bot, and I ran up against something very annoying: apps need to be on an account with a mobile phone number. I have just one mobile phone, and it’s already tied to my real Twitter account. Rather than finding a way to get another mobile number, I had the bot authorize my app using Twitter’s OAuth API. Here’s how to do it.

Step one: collect your API keys. There’s a consumer key and a consumer secret, both available from your app’s page at apps.twitter.com. Never commit these to a repository or post them anywhere. I put them in a special file that I tell git to ignore, keys.py.

Step two: collect OAuth tokens. The key trick here is to collect tokens that are permanently good, using the out-of-band (OOB) PIN method. To actually send the requests, I used the requests-oauthlib library for Python. Here’s how to do it:

from requests_oauthlib import OAuth1Session
from keys import *

twitter = OAuth1Session(consumer_key, consumer_secret) # loaded from keys.py!
twitter.params['oauth_callback'] = 'oob' # make sure we're in PIN mode
r = twitter.post('https://api.twitter.com/oauth/request_token')

At this point, r.text will tell you, HTTP request-parameter style, your oauth_token and oauth_token_secret. Add these to your keys.py file and copy the oauth_token to the clipboard.

Step three: authorize the app. Fire up a web browser and log in to the bot’s account. Go to https://api.twitter.com/oauth/authorize?oauth_token=[whatever your oauth_token was]. You’ll get a PIN number. Add it to your keys.py file.

Step four: finalize the authorization.

from requests_oauthlib import OAuth1Session
from keys import *
twitter = OAuth1Session(consumer_key, consumer_secret)
twitter.params['oauth_verifier'] = pin
r = twitter.get('https://api.twitter.com/oauth/access_token?oauth_token=' + oauth_token)

Now r.text will give you a new oauth_token and oauth_token_secret. Save these—they’re OAuth access tokens, which are how you can actually make API calls.

Step five: check that it worked. Log in to the bot account, go to settings, and check the apps subheading—your app should appear. You can tweet programmatically, now:

from requests_oauthlib import OAuth1Session
from keys import *

twitter = OAuth1Session(consumer_key, consumer_secret, oauth_access_token, oauth_token_secret)
twitter.post('https://api.twitter.com/1.1/statuses/update.json?status=Testing.')

I would be lying if I said I was 100% confident that everything from here on out is easy, but it seems like these access tokens are all you need after PIN-based authorization.

PHPEnkoder 1.13 (9 February 2015)
http://www.weaselhat.com/2015/02/09/phpenkoder-1-13/

I’ve resolved some E_NOTICE-level messages that were showing up when people set WP_DEBUG to true. Thanks to Rootside for pointing out this problem on the WordPress forums. As always, please let me know on the forums or via email if you run into any problems.
