Subscribe to RCast on iTunes | Google Play | Stitcher
Beginning with a set of interactive rewrite rules, Greg Meredith discusses his work with Mike Stay to derive a proof theory that exhibits a Curry-Howard style correspondence. Specifically, it enjoys a cut-elimination property that mirrors the notion of compute in the rewrite rules. Isaac DeFrain and Christian Williams join the conversation.
The transcript is located below the slides.
Slides

Transcript
Greg: I want to talk about the proof theory side of LADL and give people a perspective on what that means, why it’s important, and the work that we’ve been doing most recently on that side of the puzzle. A lot of logicians, type theorists, and mathematicians make a distinction between model theory and proof theory when they’re talking about logics. The model theory lives on the side of semantics. It’s the place where you interpret your formulae. More recently, it’s also the place where you interpret your proofs. You build up mathematical models of the things that you’re framing.
For example, you can look at Boolean logic as a logical entity. Then you can ask for models. Those models are Boolean algebras. You can look at modal logics and ask for models. Those models are sets of worlds. You could even think of them as relationships across a range of Boolean algebras. Where the relationships represent some kind of reachability notion, you can get from here to there. Those are the worlds that are mentioned in modal logic. That interpretation, for example, was given by Kripke.
Those are some examples of what we might think of as logic and model. When we talk about model checking, which is an automated process—or at least that’s one of the aims, to create an automated process—you’re looking at the validity of the formula in terms of whether or not it has any models. If you come up empty, if the only models for a particular formula are the empty gadget in your model framework, like the empty set or the empty list or whatever your empty gadget is, then your formula is uninhabited and not valid. That’s one way that you can go and check things.
For example, when you look at the Hennessy-Milner logics, which were proposed by Matthew Hennessy and Robin Milner for the Pi calculus, that’s done in terms of a model theory. The models of those formulae are exactly the sets of processes that verify the formulae. Let me just check in with Christian and Isaac and just see if that piece of the puzzle makes sense.
Christian: Yeah, that makes sense.
Isaac: Specifically regarding the Hennessy-Milner stuff, you’re saying the way we think about a formula in terms of a model is in terms of the collection of processes that have the property that’s basically expressed by the particular formula that we’re dealing with.
Greg: Yep, that’s it. Exactly. By the way, that particular way of building a model is called realizability. When your models are the computations that manifest the property, that’s realizability. There’s another fancy property that relates logic to computation and to category theory and type theory. That’s the Curry-Howard property. In Curry-Howard—this was first observed, and then the more people looked at the property, the deeper it got, so now it’s a property to demand or ask for because it has such good benefits—your programs correspond to your proofs and your types correspond to your formulae.
For example, you can get this by fiat. You can say that we’re going to build a type theory for process calculus, and those types are just formulae that we provide for some logic for the process calculus. You get it by fiat. You get it for free because you’re saying we’re taking on Curry-Howard as a requirement of the system.
People were working independently on intuitionistic logic and the simply typed lambda calculus. Curry and Howard noticed this correspondence between the types of the simply typed lambda calculus and the formulae of intuitionistic logic and wrote down the translation between the two. Later people discovered there was a correspondence between the proofs in intuitionistic logic and the programs in the simply typed lambda calculus.
The thing that’s of interest is that the cut rule corresponds to the formation of application, which is the formation of a reduction expression, or redex. There’s a theorem called the cut-elimination theorem that says that every proof can be transformed into a proof that doesn’t use the cut rule. That’s very much like normalizing a lambda term by running beta reduction wherever your applications have a lambda expression in the operator position and something else in the argument position.
Those two things, beta reduction and cut-elimination, correspond to each other. You get this really tight correspondence in the simply typed lambda case between the notion of computation and the logic that you reason about. The two branches were done independently and then they noticed that there was this deep correspondence.
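To make that correspondence concrete, here is a minimal Haskell sketch, not from the episode and with invented names: an application whose operator position holds a lambda forms a redex, and one beta step eliminates it, which is the computational mirror of eliminating a cut.

```haskell
-- Minimal sketch: lambda terms where an application whose operator
-- position holds a lambda is a redex, and a single beta step eliminates
-- it -- the computational face of cut-elimination.
data Term
  = Var String
  | Lam String Term
  | App Term Term            -- forming an application corresponds to the cut rule
  deriving Show

-- Naive substitution; adequate here as long as bound names are kept distinct.
subst :: String -> Term -> Term -> Term
subst x s (Var y)   = if x == y then s else Var y
subst x s (Lam y b) = if x == y then Lam y b else Lam y (subst x s b)
subst x s (App f a) = App (subst x s f) (subst x s a)

-- One beta step at the root, if the term is a redex.
betaStep :: Term -> Maybe Term
betaStep (App (Lam x b) a) = Just (subst x a b)   -- "eliminating the cut"
betaStep _                 = Nothing
```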
Later, through the work of Abramsky and others, it came to be realized that this is a collection of desiderata to have when you’re designing a logic, that it should have this property. In fact, the whole LADL program grows out of recognizing that when you demand both realizability and Curry-Howard, that nails down your notion of logic considerably. There’s not a lot of wiggle room in terms of what a logic can be. This is just an extension of what Abramsky was noticing in his pioneering work on domain theory in logical form, where he would start with a domain-theoretic construction, turn a handle, and derive a logic. LADL is doing something similar.
We’ve talked about the model theory side. I have been sneaking in this idea of a proof; there’s the whole proof-theoretic side. Before we jump there, I want to make sure that everything I said up to that point makes sense.
Isaac: Thanks for the history lesson on the Curry-Howard correspondence stuff. I hadn’t heard all of that linked together, and then also bringing it all back to LADL. That was really cool.
Greg: My pleasure. I really hope that this information doesn’t get lost. It is really fascinating that intuitionistic logic and the Simply typed lambda calculus were developed independently. The correspondence was only noticed post facto. In some sense, it’s a bass-ackwards way of approaching things. That’s fairly typical of how mathematics has developed. People are following intuitions that are closely related, but not knowing that they’re converging toward each other until they talk to each other.
Initially, when I formulated the basic examples driving LADL research, it was all on the model theory side. I was well aware that ultimately we were going to have to come up with a proof theory. What we were looking for was to algorithmically generate a proof theory such that it had a cut-elimination theorem. You can see the cut-elimination theorem is essentially driving the shape of the proof theory.
What we’re going to see is that the formation of every term constructor is going to correspond to the introduction of a logical constructor. You also have the constructors for the collections. Whenever you have term constructors that form reduction expressions—again, we’re talking about an arbitrary rewrite system with the property that it’s interactive. Without loss of generality, we can say that interactivity means that you have a term constructor that’s binary in the lefthand side of your rewrite rule. Whatever happens on the righthand side is not constrained. The lefthand side must be a binary term constructor. It has to take two terms.
You can have more than two, but we don’t have to consider ternary, quaternary, et cetera. We don’t have to consider the n-ary term constructors because they can all be boiled down to binary.
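As an illustration of what “interactive” means here, the following is a hedged Haskell sketch with invented names: the only rewrite rule fires on a binary term constructor K, and a bare atom never rewrites on its own.

```haskell
-- Illustrative sketch (K and rewrite are invented names): an "interactive"
-- rewrite system is one whose rule's lefthand side is a binary term
-- constructor, so two terms have to meet for a rewrite to fire.
data Term
  = Atom String
  | K Term Term            -- the binary constructor on the lefthand side of the rule
  deriving Show

-- A single rewrite rule: K t u ~> rhs.  The righthand side is unconstrained;
-- swapping the components is just a placeholder for whatever the rule does.
rewrite :: Term -> Maybe Term
rewrite (K t u) = Just (K u t)
rewrite _       = Nothing      -- no "heating": an atom never evolves on its own
```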
Isaac: That was something I didn’t understand when I looked at this at first, what you meant by interactive, but you just mean that the terms interact with one another. And that’s multiple.
Greg: Right. The one thing you can’t have is a term that doesn’t have any internal structure just evolving on its own, like some constant magically turning into some other constant. We’re eliminating that. Those kinds of rules are typically called heating rules; a lot of term rewrite systems lend themselves to chemical analogies. It’s not surprising, because if you look at chemical equations, they are exactly term rewrites.
Christian: Why is that called heating?
Greg: When you look at those kinds of rules in the chemical context, such a transition would happen when you’re applying heat. It’s the same sort of intuitions.
Isaac: I’ve always thought of heating rules as having terms knocked out of a context. Then you’re evaluating the term and putting the result back into the context.
Greg: You could think of it that way, assuming that you’ve got some kind of context-rewrite situation. In fact, that’s a nice foreshadowing of what we’re going to do.
What we’re going to do is closely related to blockchain (and to RChain specifically). We’re going to modify these rewrite rules and add a resource. Following the chemical analogy, we’re going to add a catalyst. Without the catalyst, what was originally a redex becomes inert. It’s only in the presence of the catalyst that you get the rewrite.
That’s what we’re going to do. Before we get there, in terms of the proof theory side of things, whenever we’re forming a redex or using the binary operator that would create a term that’s on the lefthand side of the rule, we now get two kinds of proof rules.
One kind of proof rule forms without the catalyst. Essentially, it’s forming a data structure that has all the information you would need in order to perform a computation except the catalyst. In the presentation we’re talking about today the catalyst has no interior structure, no internal information. We’re adding this constant on the side that enables the rewrite.
The other proof rule we make is one where you form the redex and add the catalyst. That gives us proof rules that mark the computational potential and computational actual. That’s essentially how we’re forming the logic. There’s this one piece of the puzzle I’m leaving out intentionally for this discussion. We’ll circle back to it. But that’s the basic shape.
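A minimal sketch, assuming a single binary constructor, of the two formations just described: one packages the redex without the catalyst, so it is inert data marking computational potential, and one adds the catalyst, so it can actually take a step. The constructor names are invented for illustration.

```haskell
-- Sketch (constructor names are invented): the binary constructor can be
-- formed without the catalyst R (an inert package of everything needed to
-- compute -- "potential") or with it ("actual").  Only the catalyzed form
-- can step, and taking that step is the cut-elimination move.
data Term
  = Atom String
  | KSuspended Term Term   -- redex-shaped data, no catalyst: cannot fire
  | KCatalyzed Term Term   -- redex plus catalyst: ready to fire
  deriving Show

step :: Term -> Maybe Term
step (KCatalyzed t u) = Just (rhs t u)   -- consume the catalyst, apply the righthand side
step _                = Nothing          -- KSuspended and Atom are inert

-- Placeholder righthand side; in a real system this comes from the rewrite rule.
rhs :: Term -> Term -> Term
rhs t u = KSuspended u t
```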
Let me just check in and see, does this make sense to you guys? We’re going to build logical formulae that correspond to the collections. For example, in the case where your collections are sets, one formula would be “true” and another formula might be “or”: “true” corresponds to all the terms in your language, and “or” corresponds to union, a disjunction. Those logical operators come from the collection piece. Then there are introductions of logical formulae made by using the term constructors from your rewrite system to build up formulae.
As an example, if your rewrite system is the Rho calculus, the par constructor is a term constructor and you can use that to form logical formulae as well as just terms. You could do true, par, true. That would place a demand that your process is at least composite. It’s got at least two threads. That’s an example.
There are further elaborations so that when you make terms that are on the lefthand side of a redex, you get two rules, one for making that term but without the catalyst, and one for making the term with the catalyst. Does that make sense?
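A hypothetical rendering of that formula language, assuming sets as the collection and a Rho-like par as the lifted term constructor; the names are invented for illustration.

```haskell
-- Hypothetical formula syntax: FTrue and FOr come from the collection
-- (all terms, union), while FPar lifts the par term constructor into the
-- logic.  A redex-shaped constructor would get two forms, with and without
-- the catalyst, mirroring the two proof rules just described.
data Formula
  = FTrue                  -- satisfied by every process
  | FOr  Formula Formula   -- union of the two sets of witnesses
  | FPar Formula Formula   -- processes P | Q where P satisfies the left and Q the right
  deriving Show

-- Example from the discussion: "true | true" demands at least two threads.
atLeastTwoThreads :: Formula
atLeastTwoThreads = FPar FTrue FTrue
```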
Christian: What was your original motivation for the catalyst?
Greg: The original motivation for the catalyst is to prevent reduction in unwanted contexts. We noticed at the time that we were doing it that this was closely related to introducing resource constraints on a notion of computation providing models of what happens in blockchain. It had two benefits, so it seemed like a reasonable thing to take a look at. Here we’re continuing to reap the benefits of this idea.
Those are the proof rules. It’s easy to see that whenever you form proofs that have a redex with a resource, you can eliminate that from your proof by simply replacing it with the righthand side of the rewrite rule. If you do the types right, the types will remain the same. You get a cut-elimination theorem by design. You get it for free.
It follows all of the demands that we’re making. We’re insisting on Curry-Howard; we’re insisting that we have realizability. All of those things are shoving us into this box. There are not a lot of degrees of freedom for the design of these logics, which is exactly what we want. We want this to be algorithmic. We don’t want to have any creative steps. They all have to be dictated, essentially, by the shape of the input that was being provided, which is the notion of collection, the notion of term, and the rewrite rules.
The next piece of the puzzle has to do with how this relates to—well, actually, there are several pieces. How does this relate to linear logic? How does this relate to various notions of computation? Where is the negation in all of this?
With respect to linear logic, what we can see is that when you form the lefthand side of a rewrite rule, when you form a redex and you don’t supply the resource, then what you’ve got is a tensor. I said we’re forming a data structure, but we don’t actually have the right to call it a data structure until we can extract the data from that gadget.
Whatever the term constructor is, if we can’t pull the data out of that term constructor and do something with it, then it’s just stuck there. It doesn’t do anything until we apply some resource to it. Then it can evolve. If we want to give it some utility, then we have to be able to extract it.
Dual to this tensor, we algorithmically add a term that allows us to access the components. That’s going to correspond to the linear logic par. That will cause us to add another cut rule in which we cut a tensor with a par. There are essentially two kinds of ways to do that. You can cut them end on end.
Again, we’ve demanded that the system is interactive. There’s some term constructor, call it K, that is binary. We have two of them, effectively. We’ve got a K and the par, which can extract the things from the K. The par is like the shadow of the K (or it’s dual to it).
We can now plug these terms end on end. Let’s say we have terms T1, T2, U1, and U2, where T1 and T2 are formed by the term constructor K and U1 and U2 are formed by the par. You can form a K of (a K of T1 and T2) and (a K of U1 and U2). That’s end on end. Or you can form a K of (a K of T1 and U1) and (a K of T2 and U2). I don’t know if you can picture that in your mind, but one is essentially wiring things up in series and the other is wiring them up in parallel.
That breaks the thing down into essentially two flavors of extraction. You can get two logical operators like that. That gives you a sense of how this corresponds to linear logic, with the exception of what’s going on with negation. All of that stuff is very straightforward, easily verifiable. I’m not worried about any of that stuff.
I’m giving you a preview into work in progress: we haven’t written it up, we haven’t published it, or had it peer-reviewed. That part is fairly easy to see; it just works. Now the next bit is much more speculative and potentially controversial, but quite fun to think about.
As background, what we’re using to construct the proof theory: we’re following the guideposts set up by Abramsky. If you look at his paper, “Computational interpretations of linear logic,” he essentially reverse-engineers a notion of a term, which he calls a proof expression, that exactly lines up with linear logic formulae as types. He does this for classical linear logic.
We’re basically using that as a template to build our system with the extra added guidance that we use the R as the thing to suspend computation. I should also mention that we’re using R in a punny way. It’s not just R for resource, it’s also R for reflection, in the sense that without the R, when you form a redex, you’ve essentially suspended computation. This is just like call/cc kinds of discipline in Scheme or other functional languages. I’ll make the connection even to the delimited continuations in just a minute.
This is all related to how we think about negation. When you have the par and the tensor together, that gives you the ability to disassemble suspended computations and potentially modify them and then reassemble the suspended computations and then add a resource and reflect that back. Forming a redex without an R is what’s called reification. You’re now taking everything that you would need to have in order to make a computation in the original system but you’re denying it the resource to actually run, and so it’s suspended. It’s all packaged up as a data structure. I strongly recommend going to look at 3-LISP and Rosette and the other reflective systems. This is exactly the recipe for this sort of procedural or functional reflection part of the puzzle. Pulling things apart and modifying them, doing some compute on those data structures, putting them back into place, then adding an R, that’s called reflection.
You have both reification and reflection. We’re using these capacities to their fullest extent in formulating the proof theory. This is one of the ways that we depart from Abramsky’s recipe, but it’s already present in Abramsky’s recipe in the sense that he has a two-part sequent. When you form these assertions, that a particular term in your term language satisfies a particular type, that kind of expression is called a sequent. In his sequents, on the term side, the terms have two pieces. One is some advertised site of interaction. The other is a body where some computation is in a potentially suspended mode. That body is nested. It’s not just that you have computation suspended, but you have this whole tree of suspended computations. Each of those represents resumption points. That’s in particular how he’s using that tree to model the with and plus side of linear logic and the of course and why not side of linear logic.
Christian: What do you mean by reflection?
Greg: Basically, a continuation: where you resume the computation. The trick is to recognize that R provides a context for either reflecting or reifying computation. If you give it enough structure—in particular, you allow it to form trees or other kinds of data structures—then you get back all of the machinery that Abramsky was getting with his bipartite structure for the terms. That’s the intuition there.
That’s closely connected to the notion of negation in the following way. You think of a negation as A arrow false, or A arrow empty. When you doubly negate then you should get back to A, but you don’t necessarily get back to A, which is why we have a thing called intuitionistic logic, but you can insist on closing it up so you’re only looking for the terms where negating twice gets you back to where you started.
That gives you what’s called an involutive negation, in the sense that not not A is A, as opposed to not not not A is not A, which is the intuitionistic case. Now you can start to see, just by looking at the types, that A arrow something, whatever that something is, looks just like a continuation. In particular, you go from A arrow something, to something: A arrow R, to R. I should have picked a different letter than R. The reason I chose R is because if you look at the types of the maps that you use for the continuation monad, they look just like that. R is the result type. What we’re saying is that the notion of negation is closely connected to adding sufficient structure to R to model a notion of not just resuming but a whole stack of continuations or resumptions.
Christian: Can you go over that more slowly?
Greg: Sure. It’s easier if we had a whiteboard. What we’re saying is—and again, I strongly urge looking at the literature here—there’s a close connection between Scheme’s call with current continuation and the law of excluded middle. Essentially, you can view it as either I have an A, or, if you think about it in terms of the RChain tuple space or RSpace, either I have an A or I suspended my computation until I have an A. The suspended computation until I have an A, that’s A arrow whatever the type of the result of the computation is. I think you see where that’s going.
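The type-level analogy can be sketched with the ordinary continuation monad; this is a standard definition, included here only to make the “A arrow R” reading of negation concrete.

```haskell
-- With a fixed result type r, "negation" of a reads as a -> r, and the
-- continuation monad wraps the double negation (a -> r) -> r.
newtype Cont r a = Cont { runCont :: (a -> r) -> r }

-- Every value embeds into its double negation...
toCont :: a -> Cont r a
toCont x = Cont (\k -> k x)

-- ...but collapsing back out is what call/cc (and, logically, the law of
-- excluded middle) buys you; this is the standard callCC definition.
callCC :: ((a -> Cont r b) -> Cont r a) -> Cont r a
callCC f = Cont (\k -> runCont (f (\x -> Cont (\_ -> k x))) k)
```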
We’re explicitly talking about suspending computation. How do we suspend computation? We suspend computation by withholding the resource. We can do better than just simply withholding the resource. We can create a resource context that is marked as suspended, either because it doesn’t have something or because it is in some other state.
That’s what I’m talking about: giving the resource additional structure. We know that ultimately if we’re going to model things like the blockchain, the resource itself has to have a kind of counting structure. We have to be able to have multiplicities of R. When you use it, you have to spend it down. Equations like R par R, or R applied to R, or whatever your term constructor is, equals R can’t hold if you’re going to use R to model resources like phlo or Ethereum gas. If you’re going to use R to model gas or phlo, then R has to have at least the structure of multiplicity. You have to be able to add them and subtract them. That’s expanding the structure of R.
Now comes the coup de grâce, the really cool stroke, which is that if you don’t want to import through R a foreign notion of computation, but you know that R has to have additional structure, where would you get this additional structure?
Christian: Making it more than a constant; making it an interactive term in the language.
Greg: Where would we get that term structure, if we don’t want to import a foreign notion of computation?
Isaac: From the term calculus itself.
Greg: Exactly. You can do a reflective trick. Yeah. There are two that are available and applicable. I’ve already talked about one extensively. That’s where you make R be a term context. The context pairs a term with a location, and that gives rise to the whole space calculus. You can use the space calculus to model this extensively. That’s one solution.
Here’s another solution. I’ve mentioned the whole red-black sets. For people who haven’t been following RChain forever, it’s a cool trick. You imagine a set theory made with red curly braces and a red element-of predicate, and another set theory with black curly braces and a black element-of predicate. The red element-of cannot see into black curly braces; the black element-of cannot see into red curly braces. This setup gives you the ability to model things like Fraenkel-Mostowski set theory without leaving the safety and comfort of a set theory that arises from the empty set. You don’t have to take on the risk of this whole universe of atoms. You just have to have two distinct copies of set theory that are recursively inter-defined. The atoms for the red set theory are the black sets and the atoms for the black set theory are the red sets.
Christian: These elements represent either active or inactive computations?
Greg: You can do the same thing. You can, for example, take the Rho calculus and you can make a red Rho calculus and a black Rho calculus. Then the black terms are R resources for the red calculus. The red terms are R resources for the black calculus. Now you have as much structure as you want in your R but you’ve isolated them. They’re not interacting. You don’t have to worry about any interference on your R resource from the computation that it’s enabling and vice-versa.
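A toy Haskell rendering of the red-black idea, with invented names: two copies of the same term language, mutually recursive, so each copy’s atoms are terms of the other and neither can look inside the other’s structure.

```haskell
-- Two mutually recursive copies of a small term language.  Black terms can
-- serve as inert resources (the R) for red computation, and vice versa,
-- without either side interfering with the other.
data Red
  = RedNil
  | RedPar Red Red
  | RedAtom Black        -- a black term used as an atom / resource on the red side
  deriving Show

data Black
  = BlackNil
  | BlackPar Black Black
  | BlackAtom Red        -- and symmetrically
  deriving Show
```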
Christian: How do you know which element to pick that has the right structure?
Greg: You use whatever coding tricks you’ve used before; we know them quite well. You just use the entire structure and code up what you need for whatever structure of R you want to supply. If you want multiplicity in the case of the Rho calculus, you only need the par operator and some things that don’t interact. They just get eaten away.
An example would be if you just want multiplicities, then you use the Church-style encoding of the naturals into Rho and that’s your R. Now you have exactly the multiplicity structure and it’s isolated from interacting. Once you’ve done that, then you know you can compile it down into just an ordinary Rho without this red-black bifurcation by doing a namespace discipline.
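A hedged sketch of the multiplicity point, not the actual Rho encoding: a resource with counting structure, where combining adds and each rewrite spends a unit, so R combined with R is 2R rather than R.

```haskell
-- If R is to model gas or phlo it needs at least counting structure:
-- combining resources adds, and firing a rewrite spends one down.
newtype Resource = Resource Int deriving (Show, Eq)

combine :: Resource -> Resource -> Resource
combine (Resource m) (Resource n) = Resource (m + n)

-- Spending fails when the resource is exhausted, which is exactly how the
-- catalyst gates computation.
spend :: Resource -> Maybe Resource
spend (Resource n)
  | n > 0     = Just (Resource (n - 1))
  | otherwise = Nothing
```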
Isaac: You’re saying that would afford you a catalyst that you can treat like phlo or some kind of computational resource?
Greg: That’s exactly right. Now we’ve tied the whole thing together. Here’s how you derive your proof theory and we’re deriving the proof theory. Here’s how the notion of resource connects to the notion of negation and here’s how you can impose different—and remember, there’s some contexts I keep around in my head that maybe not other people do, which is the whole monadic account of delimited continuations.
If you peer into that work, what you discover is that they’re essentially keeping a data structure for resuming. In their case, the data structure is a stack. In Abramsky’s case, because he has concurrency, the data structure for resumption is a tree. You can use whatever data structure you want that supports the kind of resumption strategy. That resumption strategy is intimately related to notions of fairness and notions of liveness and yada yada yada, which is closely connected to the points that I was making about error-correcting codes.
We’ve got this framework in which all of those connections are elucidated: precisely how the negation is related to your resumption strategy and how that resumption strategy corresponds to structure that you demand from your R. That’s all laid out in a kind of algorithmic fashion. If you want particular properties of your negation that turns into properties that you demand of the R structure, then you go demand those of your term calculus. You get this nice recursive encoding. That lays out, by and large, the shape for the proof-theoretic side of LADL in such a way that we get this correspondence between the reduction in the rewrite system and the cut-elimination in the proofs. It’s a pretty construction.
Christian: At the point where you’re talking about how to implement the resources, you said you could have either gone the route of the space calculus or with these red-black sets. How do you connect those two strategies and why would you prefer one over the other?
Greg: Ah, yes. With red-black you don’t have a clear notion of location. With the space calculus, the reason it has that name is because we have this clear notion of location. If you want to have a notion of location, if that’s somehow important to your computation, suppose that we wanted to reason about located failure. We’re in the situation where we’re reasoning about distributed systems and we want to be able to talk about regions of failure. The space calculus is a better tool for the job because there’s a crisp notion of location.
Christian: It seems like a really comprehensive solution for a lot of the issues.
Greg: Right. In the other case, if you don’t yet have a notion of location, then you can just use the red-black construction for the calculus.
Isaac: It doesn’t seem like a coincidence that we can get this notion of an algorithmically-generated proof theory from this LADL algorithm, and this naturally supports a notion of resource, which you can then even program further and make it support a notion of computational effort or phlo or something like that for like blockchain purposes. I wanted to hear an explanation of how all of these things fit together so nicely.
Greg: It is very much like the history of Curry-Howard. I’ve been trying to hold all the pieces together in my mind, but I did not have a clear understanding of how they all fit together until the last few weeks. I’m genuinely surprised that there’s this notion of resource that’s related to resumption—that negation, resumption, and resource are all recursively intertwined.
Basically, what you’re saying, whether you’re talking about Ethereum or any other smart contracting platform that uses a resource to control computation, is that the resource has the ability to suspend computation. That’s your assertion. It didn’t dawn on me that that’s effectively continuations. It has to be related to negation. Connecting the dots has been this slow, laborious process. I wish I were faster at it. But I don’t think it’s a coincidence. I don’t know that anyone has put all the pieces together this way. We seem to be pushed into pioneering efforts.
Christian: This is very nice.
Isaac: I’m going to have to listen to this again to fully understand everything you were talking about. This has been a really cool discussion. I even left something out of my characterization there, which is that it also supports realizability and Curry-Howard correspondence.
Greg: If you take those as your starting points, those are your demands for your logic, then everything else falls out. We’re backed into this. That’s basically what the LADL journey has been all about. This is my line in the sand. I can’t cross it; I can’t violate those conditions. Everything else is just the implication of having those as your design criteria.
Christian, what we really need now is nominality. If we can have nominality, we’ve got a really lovely account of logic. There are some desiderata. For example, I would love to be able to extend the notion of collection beyond set-like gadgets. But there doesn’t seem to be a way to do that and have a distributive law. Something has to give there and I don’t quite know what that is yet.