
RCast 58: LADL Proof Theory in Detail


Subscribe to RCast on iTunes | Google Play | Stitcher


Greg Meredith goes deep into LADL with Isaac DeFrain and Christian Williams. Please click here to read and comment on the paper discussed in this podcast.


Transcript


Greg: This week is a technical deep dive, a conversation about proof theory. Last time I gave a sketch of the intuitions. I want to present what it looks like if you have a minimal theory. I’m going to screen share so we can talk; I’ll make this graph available for people who are listening to the podcast to look at later.

We’re going to take in a graph-enriched Lawvere theory, and we’re going to produce a new graph-enriched Lawvere theory in stages. This graph-enriched Lawvere theory represents an encoding of a notion of computation. Effectively, it’s a rewrite system. The constructors of the theory are going to be term constructors and we’re going to have some rewrite rules.

The way we’re doing graph enrichment is a little bit different than the way Christian and John Baez have done graph enrichment. We actually add the theory of graphs on top. What that does is it gives us a way of talking about source and target, which is a little bit closer to the standard SOS semantics. It’s just a notational convenience from my perspective. Maybe there are other technical details that we might talk about, but let’s take a look at what a minimal such theory would look like.

We have one non-trivial generator constant. Since our rewrite rules are interactive, we have to have at least one term constructor that takes two arguments. That’s what it means to be interactive. The lefthand side of our single rewrite rule is going to take a constructor in two arguments and then produce a new term.

There’s one constant, C, and then there’s a binary term constructor, K, and then there are left and right identities for K. Those might not always exist or they might coincide. In the case of Lambda, you have a left identity but you don’t have a right identity. In the case of Pi or Rho, you have both identities, but they coincide because the term constructor K, which is par in that instance, is commutative. The source of your rewrite rule Rho must be K of TU. The target is just going to be some function of T and U. It computes a new term based on the information from T and U.
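For listeners following along, here is a rough sketch of that minimal signature rendered as a Haskell data type and a conditional rewrite. The names are illustrative, and a Haskell program is only a stand-in for the actual graph-enriched Lawvere theory:

```haskell
-- A sketch of the minimal theory, assuming illustrative names;
-- the real object is a graph-enriched Lawvere theory, not a Haskell type.
data Term
  = C              -- the one non-trivial generator constant
  | K Term Term    -- the binary, interactive term constructor
  deriving (Show, Eq)

-- The single rewrite rule rho: source K(T, U), target F(T, U),
-- where f computes a new term from the information in T and U.
rho :: (Term -> Term -> Term) -> Term -> Maybe Term
rho f (K t u) = Just (f t u)  -- the source matches, so the rewrite fires
rho _ _       = Nothing       -- anything else is stuck
```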

Christian: I’m a little confused about why Rho is a rewrite rule. Right now, just writing it out, it looks like another operation.

Greg: It’s the source and target part of it that gives it the rewrite rule semantics.

Isaac: Because you’re treating everything as an edge so you can talk about source and target.

Greg: Exactly. Now we’re going to take one step. We’re going to get to our goal in two steps.

Isaac: Can I ask you a question before we move on?

Greg: Sure, go ahead.

Christian: With that target equation, you’re saying the target of that rewrite is F of T and U, what’s the F all about?

Greg: It’s some way of calculating a new term from the information given at T and U.

Christian: Okay. That’s cool.

Greg: For example, in the case of Lambda, you know T is going to be the abstraction. Say Lambda X dot M. N is the argument to the abstraction; that’s U. When you calculate F of TU, what you’re going to get is M, where you substitute N for X everywhere.
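Concretely, on this reading the Lambda instance of the rule is just beta reduction (writing M{N/x} for capture-avoiding substitution):

$$\rho:\; K(\lambda x.M,\; N) \;\to\; F(\lambda x.M,\; N) \;=\; M\{N/x\}$$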

Isaac: Okay, I’m on board.

Greg: Now we’re going to modify this by introducing an evaluation context. You could do this in one of two ways. You can make R—which is our evaluation context, R standing for resource—take a term and produce a term, or you can make R be a constant, and then you supply it to one side of K. Either one of those is workable, at least in the theories that we’ve looked at. Mike Stay prefers this presentation because of what happens with Lambda. You can do it either way. 

We’re going to add two more operations. It’s easier if I jump to the rewrite rule first; then I can explain the other term constructors. Now we’re changing the source of the rewrite rule. The only time you get to rewrite is if you have R of K of TU. If you don’t have that R, K just sits there; it doesn’t evaluate. Since we’ve modified it in that way, K becomes a pairing operator of some kind. It’s effectively a kind of tensor because you don’t necessarily have a way of getting anything out.

You can think about this in terms of computational reflection. In the old theory, KTU provided all the information you needed to do a computational step. Now what we’re saying is you have to have this additional piece, which is R, but there’s no structure to R. R is just a gating, like adding phlo to a computational step in Rholang. That’s the basic intuition. 

You could think of denying R as making K reflective. K now reifies your computation. What you’d like to do is to extract the information stored in the reified computation. We provide two extractors: one which we call par, that takes two terms and gives you a term, and the other, which we call cut. Par basically says that when we reassemble things, we’re going to wire them up in parallel; cut will wire them up in series.
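Continuing the sketch from above, the resourced stage might look like this in Haskell. Again, the constructor names are illustrative; the point is that the rewrite only fires under R, while a bare K is inert data waiting to be unpacked by par or cut:

```haskell
-- Extending the earlier sketch: R gates the rewrite, and par/cut are
-- the two extractors. Names are again illustrative.
data Term2
  = C2
  | K2  Term2 Term2   -- now a pairing/tensor: inert without a resource
  | R   Term2         -- the resource: pure gating, no internal structure
  | Par Term2 Term2   -- reassembles the reified pieces in parallel
  | Cut Term2 Term2   -- reassembles the reified pieces in series
  deriving (Show, Eq)

-- The rewrite only fires under R: R(K(T, U)) -> F(T, U).
step :: (Term2 -> Term2 -> Term2) -> Term2 -> Maybe Term2
step f (R (K2 t u)) = Just (f t u)  -- resourced, so it evaluates
step _ _            = Nothing       -- a bare K just sits there as data
```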

Isaac: Just to be clear, you’re saying that by denying the resource to the original computation or rewrite, you’re effectively turning it into data, which you can then put into other computations.

Greg: That’s exactly it. Our third and final step just adds one more sort. This is the final expansion of the original theory. This extra sort represents the notion of a context. When our term data type is a differentiable functor, this sort corresponds to the derivative, Delta T.

Isaac: That’s like Conor McBride’s zipper.

Greg: That’s exactly it. That means we’re going to add another constant, which is hole, which goes from one to the context type. Then we have left and right introductions. You can put a context in the left side of K and a term in the right side and now you get a new context, or you can put a context in the right side of K and a term in the left and now you get a new context. We’re going to distinguish when a context is applicable, meaning when you can actually do a substitution for the hole. We’re going to write that with left and right bangs, depending on whether the hole is in the left side or the right.

Isaac: It’s like abstraction, right?

Greg: This is like abstraction except that we only ever have one hole. It’s even more restricted than linear abstraction.

Isaac: Okay. You have the left and the right abstraction.

Greg: Exactly. Then we have a notion of substitution: I fill the hole and get back a term, and everything else remains the same. But we have some identities. When I apply an at to a banged hole on a T, I get back T; when I apply an at to a banged context, I distribute the at inside. Those are the extra equations that we add to this theory.
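Writing □ for the hole, ! for the (left) applicable marker, and @ for applying a context to a term, a plausible rendering of those identities is the following; the right-handed versions are symmetric:

$$(!\,\square)\,@\,T \;=\; T \qquad\qquad (!\,K(\chi, U))\,@\,T \;=\; K\big((!\,\chi)\,@\,T,\; U\big)$$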

Christian: Do you need any equations for par and cut to describe how they work? 

Greg: No, we don’t. The cut will determine all of that. Now I can define a notion of arrow. This is prefacing what’s to come, because these are expressed in terms of taus, which are types. Effectively, an arrow is going to be the collection of contexts chi such that I can find some T that allows me to decompose the context, so the T fills one side and there’s a hole in the other. For any U in tau, when I apply U to that chi, if there is a rewrite to some B, that B is in tau prime.
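One plausible realizability-style reading of the arrow, with chi ranging over contexts that decompose as a K with some T on one side and a hole on the other, and chi[U] for filling the hole with U:

$$\tau \to \tau' \;\approx\; \{\, \chi \;\mid\; \forall U \in \tau.\;\; \chi[U] \to B \;\Rightarrow\; B \in \tau' \,\}$$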

Isaac: Is this arrow, then, like rely-guarantee?

Greg: Yes. There’s an iso from this definition to Caires’s definition. But that’s exactly what it is. It’s a rely-guarantee notion.

Isaac: Okay.

Greg: Now we can form these types. We start with a top, which is going to denote all terms. If you have a tau and a tau prime, you can form the arrows in either direction. All Cs are now lifted up to be types. Since these taus denote sets, we also have the empty tau; that’s a constant. That’s going to correspond to the absurdum in certain cases. You can also take taus and union them together. You can apply an R to a tau; that’s a tau which you’ve now resourced. You can make Ks of taus.

Then we have all the linear types. Now we have the Rho tensor, which takes two taus, and that’s basically going to be a K without a resource, but it’s cuttable. It’s indicating which way you’re going to wire things up when you cut. With a K without the indication, you don’t know how to wire it up. The same with cups and caps: they’re like the tensor, but they indicate a K that you’re going to unpack with a cut in series.

Then we have some abbreviations. We actually have lots of negations. It depends on what you put at the pointy end of the arrow. If you put the right identity at the pointy end, that’s like the perp operator, linear negation. If you put the empty set at the pointy end, that’s like classical negation. Then you have left and right versions of those.
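Gathering that up, and leaving out the cups and caps, the grammar of types sketched here looks roughly like the following, with the negations as abbreviations (writing I for the right identity); this is a reconstruction, and the paper’s own notation may differ:

$$\tau \;::=\; \top \;\mid\; \emptyset \;\mid\; C \;\mid\; \tau \cup \tau' \;\mid\; R\,\tau \;\mid\; K(\tau, \tau') \;\mid\; \tau \otimes \tau' \;\mid\; \tau \parr \tau' \;\mid\; \tau \to \tau' \;\mid\; \tau \leftarrow \tau'$$

$$\tau^{\perp} \;\approx\; \tau \to I \qquad\qquad \neg\tau \;\approx\; \tau \to \emptyset$$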

Isaac: Can I ask a question about the types? 

Greg: Yes.

Isaac: You have something like the empty set, right? You said the taus were basically sets. This is like a realizability thing, right?

Greg: Exactly. The interpretation is realizability.

Isaac: That empty type, is it different from bottom or is it the same thing?

Greg: It functions in much the same way that bottom does.

Isaac: Okay, cool.

Greg: Now we can go on to our proof rules. I’ve color-coded it. If you follow the red side you’re building terms. If you follow the blue side, you’re making proofs. When you have both sides together, you’re typing your terms. When you type your terms, you can view it as constructing a type for your term, but you can also view it as extracting a term from a proof. It works both ways. 

That goes back to this whole point about correct by construction. At the end of the day, correct by construction is all about extracting programs from the proofs that they’re correct. That’s the idea. I’m going to read a turnstile as “thinks that.” We’re going to build proofs in terms of sequents. We have both intuitionistic versions of these sequent calculi, and we have classical versions. In the intuitionistic version, the lefthand side of the turnstile is some typing context, which can have commas in it, which is just interpreted as a list of assumptions. The righthand side has exactly one typing assignment.

If we use gamma and delta to range over the typing contexts, then gamma turnstile T colon tau is read “gamma thinks that T is of type tau.” That’s how we’re building these rules. Our ground for this is that from the empty typing context we can derive that the constant C is of type C. Since every constant gets its own singleton type, that’s essentially recording that decision. That’s how the whole system gets off the ground. Well, there’s one other way it gets off the ground, which is the axiom rule, which I’ll talk about in a little bit.

Then tensoring is just as you’d expect: if gamma thinks that T is of type tau, and delta thinks that U is of type tau prime, and you take gamma together with delta, so you have both typing contexts, then you get that K of TU is of type tau Rho-tensor tau prime. If you have a single typing context gamma that thinks that T is of type tau and U is of type tau prime, then gamma thinks that par of TU is of type tau Rho-par tau prime. You can see the pattern.
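A plausible rendering of the ground rule and the two formation rules just described, writing ⊗ for the Rho tensor and ⅋ for the Rho par; again, the paper’s own notation may differ:

$$\overline{\;\cdot \;\vdash\; C : C\;} \qquad \frac{\Gamma \vdash T : \tau \qquad \Delta \vdash U : \tau'}{\Gamma, \Delta \;\vdash\; K(T, U) : \tau \otimes \tau'} \qquad \frac{\Gamma \vdash T : \tau \qquad \Gamma \vdash U : \tau'}{\Gamma \;\vdash\; \mathsf{par}(T, U) : \tau \parr \tau'}$$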

Christian: So that next cut tells you why you needed them to be in the same context? 

Greg: That’s exactly right. When we do a cut—we have a cut rule now—we only ever cut pars against tensors and cups against caps and things like that. If I have a par and a tensor, which is just a K—it’s a K that’s been denied a resource—when I cut them, I’m going to supply a resource, which will allow it to evolve. Inside the R context, I have the capability to evolve, but I’m not going to show all that evolution in the type. I’m only going to show the result, which is RK, which is the righthand side.

Christian: Why RK? 

Greg: Because if you actually run this through the arrow definitions, you’ll see that that’s exactly what you get.

Isaac: I wanted to speculate as to why it should be the RK. Is it because we’re looking at the Rho par of tau perp and tau prime?

Greg: That’s exactly what’s going on. Tau perps eat up taus. 

Isaac: Cool. I definitely did not see that the first time I looked at it, but hearing you explain it again, it makes a lot of sense.

Greg: That’s exactly what’s going on. Basically, this is like a generalized categorical composition. When you have an A to B and a B to C, the Bs occur in opposite polarities. When you plug them together, the B gets traced out. This is the generalized version of that tracing out.
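The familiar special case: the B occurs once at each polarity, and composing traces it out:

$$\frac{f : A \multimap B \qquad g : B \multimap C}{g \circ f : A \multimap C}$$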

Christian: But in the formation of the par you didn’t require—oh, I see. The duality is happening across the par.

Greg: Exactly. Notice what we mean by parallel. Now we’ll see the in-series version—the serialized one—and here I’m insisting on a single gamma. We could do a version with a gamma and a delta; I’m still debating which one is better. It’s probably better to separate it out into a delta that thinks U is of type tau prime. Anyway, the point is that you’re declaring an intention when you form a K. This gets in the way of principal typing, because this is where a programmer gives us information: if a term is typed with Rho cap, then they’re saying, “I intend for this to be used in a serial way as opposed to a parallel way.”

Christian: So you get to choose?

Greg: Yes, you choose. The programmer is indicating, “this is how I want to use it.” It’s a type annotation.

Isaac: I wanted to ask if that corresponds to that newer syntax for Rholang where we have—well, obviously we still have the parallel composition, but then we also have sequential composition. Does this have anything to do with that?

Greg: Obliquely.

Isaac: Okay.

Greg: It’s in the same vicinity, but it’s for a different purpose.

Isaac: Understood.

Greg: In the same way that we can pack up a T and a U into a K, we can set up for unpacking: we package up a T and a U in a cup. Then we know that when you’ve got a T and a U packaged up in a cup and you cut it together with a cap, they’ll be extracted.

This is exactly the same kind of thing, but notice that our types are more sophisticated now. We now have to use the arrow to get the sequencing exactly right. In the deserialized cut, we’re insisting that T is of type tau double prime arrow tau perp, and U is of type tau double prime—did I get that right? I think I need an extra perp in there somewhere. We also have tau prime arrow tau on the lefthand side, so that T prime is tau prime arrow tau—oh no, I’ve got it right.

We’re going to run KTU so that tau double prime is going to be supplied to the arrow, which is going to give us the tau perp. Then the tau prime is going to be supplied to the tau prime arrow tau. You’re going to wind up with a tau perp and a tau, and then those are going to run together and we get what we need. The arrow basically allows us to do sequentialization.

Christian: The perp was what again?

Greg: The perp is just an abbreviation. It’s a form of arrow. When you stack these, what we’re really getting is a double negation translation. 
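That is, on this reading perp is an arrow into the identity, and stacking two of them gives the double-negation shape:

$$\tau^{\perp} \;\approx\; \tau \to I \qquad\qquad \tau^{\perp\perp} \;\approx\; (\tau \to I) \to I$$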

Christian: Why is it going to be sufficient to focus on these types that are going to annihilate each other? 

Greg: Basically, what this gives us is the session type capability. This whole thing is now showing that we have both the original sort of model theory capabilities—the Caires capability—together with the session type capability. We’re marrying the two together. These proof rules are all about the session type stuff.

Isaac: Can you say a little bit more about the session type and how you’re thinking here? I don’t know a whole lot about session types.

Greg: Session types have largely been viewed from the perspective of linear logic, like Wadler’s paper, “Propositions as Sessions.”

Isaac: It’s about resources that are available on specific channels or something.

Greg: Basically. Thinking of a session as occurring over some group of channels is really what it comes down to. It’s reminiscent of a generalized version of what you see when, for example, a program opens a session to a database. You know that that has a very particular shape. The shape is “open session, do a bunch of reads and writes, close session.”

Session types generalize what you could do with that. It goes further; it goes past that. I’m not dissing Phil here, but in many ways Wadler’s semantics are very dissatisfying because tensor means sequential composition. 
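As a toy illustration of that database-session shape—hypothetical names, nothing from the paper—here is the “open, read and write, close” discipline rendered as a phantom-typed handle in Haskell, so that out-of-order operations simply don’t typecheck:

```haskell
{-# LANGUAGE DataKinds, KindSignatures #-}

-- A toy rendering of "open session, do reads and writes, close session"
-- as a phantom state index. Purely illustrative.
data State = Closed | Open

newtype DB (s :: State) = DB Int

openSession :: DB 'Closed -> DB 'Open
openSession (DB h) = DB h            -- only a closed handle can be opened

readRow :: DB 'Open -> (String, DB 'Open)
readRow db = ("row", db)             -- reads require an open session

closeSession :: DB 'Open -> DB 'Closed
closeSession (DB h) = DB h           -- closing returns you to the start
```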

We can see that cut is just a special case of either the par or the serialization. The ordinary cut of a T prime with a tau perp and a T with a tau, that’s really just a special case of the other. You can build this notion of a linear cut from that. 

Isaac: You’re saying that’s where the cut-elimination comes from.

Greg: Exactly. It’s really obvious, but any proof that uses a cut can be turned into a proof that doesn’t use a cut. The way you do that is wherever you’ve got a cut, you replace it with the righthand side of the rewrite rule.

Isaac: That makes sense.

Greg: Proof normalization is exactly the same as running the program. We’ve got the tightest possible Curry-Howard correspondence. We have a notion of axiom, and this builds upon the notion of context. If you type the hole as tau in your typing context, then you get that that hole is of type tau on the righthand side of the turnstile. That’s axiom.

Then we can do left and right cut. We can form Lambdas by discharging our hole obligation, saying, “Hey, you really need to apply this to something.” If I have gamma together with hole typed tau thinks that U is of type tau prime, then gamma thinks that U bang is of type tau prime on a left-pointing arrow with tau at the pointy end. I butchered that. It’s easier to say from the other side. If gamma together with hole typed tau thinks that T is of type tau prime, then gamma thinks that bang T is of type tau arrow tau prime. So if you give me a tau, I’ll give you a tau prime. That’s basically what both of these rules are saying.
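One plausible rendering of the second of those rules, with □ for the hole; the left-handed version mirrors it:

$$\frac{\Gamma,\; \square : \tau \;\vdash\; T : \tau'}{\Gamma \;\vdash\; \mathop{!}T \;:\; \tau \to \tau'}$$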

Christian: This is saying the hole is of type tau. Does the hole have a determined unique type or could it be of multiple types?

Greg: If the hole is typed tau, you’re saying you can prove that the hole is of type tau. Either way.

Christian: Okay. But we can make a context have holes of any type.

Greg: That’s what the axiom rule says.

Christian: Okay.

Isaac: Is typing the hole just saying that context is expecting a certain type?

Greg: Yes. It’s just like a simply typed Lambda calculus. Think of the hole as a variable. The variable is of a particular type. Now we can have context cuts, and we can cut on the left or the right. Let me do the right one. The right one is: if gamma thinks that bang T is of type tau arrow tau prime, and delta thinks that U is of type tau prime—did I get that wrong? Ah, I did. There’s a typo. Delta thinks that U is of type tau—then gamma together with delta thinks that RKTU is of type R tau prime. It does exactly what you expect. It’s recording the type that’s at the end of having done the rewrite. You can’t see the rewrite stuff happening inside. Does that make sense?

Isaac: Yeah. You just see the resulting type.
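As a sketch, the right context cut as just corrected might be rendered:

$$\frac{\Gamma \;\vdash\; \mathop{!}T : \tau \to \tau' \qquad \Delta \;\vdash\; U : \tau}{\Gamma, \Delta \;\vdash\; R(K(T, U)) : R\,\tau'}$$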

Greg: That’s exactly right. The term is ready for rewrite. It’s a K of a TU supplied with an R. There are not a lot of surprises. It’s just that now we’re going to have only one-sided sequents. Everything is pushed to the right of the turnstile. We do everything in terms of the perp operator. Now in truth, we should have not only an axiom with a perp on the right, but we should also have one with the perp on the left. But I haven’t put those rules in. Everything else just kind of follows.

The real difference is whether your system admits only double-perp-closed things, in which case you have a classical gadget, or your system doesn’t enjoy double-perp closedness, in which case you have the intuitionistic setting. For example, Lambda doesn’t enjoy double-perp closedness. You can find a subset inside Lambda that does, but Lambda itself doesn’t obey the law of excluded middle. You would use the intuitionistic version.

Christian: Were those two deductions above the only way to form something of type tau triangle tau prime?

Greg: Those are the only ones you got. 

Christian: And you can get everything that way? 

Greg: I think so. It’ll shake out in the proof.

Isaac: I have an intuition for what you mean by double-perp-closed things. I guess my intuition is coming from thinking about vector spaces and inner products. It’s probably not the right way to…

Greg: Actually, that’s a really good way. By double perp closed, I mean that if you take the perp of the perp you get back to where you started.

Isaac: Okay. That’s what I thought. You also had some notion of the polars. It looked like the other notation that you defined—I forget what it was at this point, but it’s like a Curry symbol or whatever you want to call it.

Greg: If you think about it topologically and you interpret perping or negation or whatever as closure—if I have some topological space and I take the closure of it and then I take the closure again, the second time it doesn’t move because it’s already closed. But that first time I may have included more points. For example, if I take a unit interval without the end points and I take the closure, then I’m adding the endpoints. When I take the closure again, I won’t add any new points; it’s the same space. But that first time I did add the endpoints, so double perping doesn’t get me back to the original space. That’s one piece of intuition. If you’ve got double-perp closedness, or some kind of involutive operation, then you can use the classical piece of this. Otherwise, you’re going to use the intuitionistic piece.
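In symbols, with cl for topological closure: the first closure adds points, and the second is idempotent, so the operation is not involutive on the original space:

$$\mathrm{cl}\big((0,1)\big) = [0,1] \qquad\qquad \mathrm{cl}\big([0,1]\big) = [0,1] \;\neq\; (0,1)$$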

Christian, notice that U is potentially some complicated context. U is going to have a hole in it.

Christian: Why?

Greg: U is not a hole. It’s a context.

Christian: All of this is context?

Greg: Exactly. I start from a hole and then I can build Ks with holes and all kinds of stuff. In particular, I can start with two holes and I can make a K with two holes out of that. Then I can fill one of the holes and I can abstract over the other one. That’s how you get all of the richness. Basically, U has some hole very deep down inside it. The typing context is recording the fact that you’re going to have to discharge that at some point.

Christian: I feel like at the beginning you were describing how to inductively build up context as terms, but I don’t remember seeing how you build up all contexts with their corresponding typing judgments.

Greg: That’s what this is. That’s what this is doing.

Christian: Right now you’re not including any of the constructors.

Greg: That’s up here. All of these rules apply down here. 

Christian: Okay.

Isaac: I’m really glad that you explicitly said that U had the hole in it because I was not thinking about it as a context. This makes a lot more sense now.

Greg: I should probably use chis instead of Us.

Isaac: That might be helpful.

Greg: Good idea. Thank you. That’s great feedback. That’s pretty much it. Obviously, there are a lot more details. I’m now confident enough in the hole construction that I’m going to do an implementation. I’ll probably do it over the Christmas holiday. It should be a ton of fun.

As we mentioned before, we introduced this R out of thin air. If you want, you can split your original theory into a red version and a black version. For one of the theories, the R can be the other color, and vice-versa. You don’t have to cons up an R out of thin air. 

You can also generalize your context to give you richer nominal phenomena. Then you can make all of those names arise. What remains is either to take the higher-order abstract syntax approach to do the nominal stuff or to use the Clouston approach with nominal Lawvere theories. Then we’re done. We’ve got all of the juicy bits of Lambda and Pi accounted for. One thing I am interested in is running this on the ambient calculus, which is not a substitution-based calculus, and seeing what kind of logic I get. But there’s the algorithm.

Isaac: You said that the ambient calculus is not a substitution-based calculus. I knew that, but it’s been a while since I’ve looked at the ambient calculus, so what is the notion of reduction in the ambient calculus?

Greg: It’s whole-tree rewrite.

Isaac: Oh, that’s right.

Greg: When an in hits an ambient, it’ll move into the ambient. When an out hits an ambient, it’ll move out of the ambient. It’s reconfiguring the whole tree. That’s why it’s not so good for internet stuff, because you’re effectively reconfiguring the whole internet. It’s not local.

Isaac: That sounds expensive.

Greg: Yeah. Even though the rewrite rules are local, the effect of the rewrite rules is global. Pi and Rho and things like that are much better because it’s really a local exchange, whereas ambient stuff is not a local exchange.

Isaac: That’s a great point.

Greg: I’ve given lots and lots of talks where I go through the different calculi and I say, “This one isn’t so good for this reason and this one isn’t so good for that reason,” but I’ve never written it up and made it really crystal clear for people to understand why the join calculus doesn’t work for programming the internet.

Isaac: It would be super interesting if you’re willing to do that.

Greg: Yes. 

Isaac: Maybe it’s a low priority item.

Greg: It’s low priority for me, but I’d love to get someone to collaborate with on it. 

Christian: In this paper you’re describing a class of enriched Lawvere theories that you consider some sort of theories of interaction? 

Greg: Yeah, that’s right. 

Christian: Then you’re describing this construction you can do on all of them to augment them and equip them with resources and contexts and these useful types. 

Greg: Yes. 

Christian: And you’re going to be proving nice things about that.

Greg: That’s exactly right, in terms of the format of the paper that we hope to get out in the not-too-distant future. After presenting the minimal theory, we’ll apply it to Lambda and Rho. It depends on whether or not we go all the way to nominality. We might just apply it to SKI and the Rho combinators. The nice thing about the Rho combinators is I don’t have to deal with nominality. I don’t have to have some exogenous supply of names. That shows the theory in action, applied to a particular pair of rewrite systems that have nice properties.

Then we do the proofs of the cut-elimination, all the good Curry-Howard stuff, and then show a few properties. In the 2005 namespace logic paper, one of the cool things about the paper was all the properties that I listed that I just came up with off the top of my head. “Here’s a cool one. We can do a firewall. I can encode XML schema.”

Just illustrating that there are some really fun properties here, like deadlock freedom, for example, or non-leakage of information from a higher-security context to a lower one. Those are properties that we can pretty easily write down in the corresponding type system. That should be more than enough. You’ve got a fairly broadly applicable algorithm. The output of the algorithm includes a type system that can express really interesting properties. The corresponding system has cut-elimination and proof normalization properties. That’s enough goodies to constitute a decent paper.

Isaac: Do you know what influence a property like confluence of the rewrite system would have on the system that’s being built?

Greg: It’ll make it much more likely to have principal types.

Christian: One other thing about the types. In the first half you were talking about an enriched Lawvere theory, and then in the second half, you were talking about a certain type system, but where exactly is that type system happening? Are you talking about it purely syntactically or are you going to try to fit it into the Lawvere theory framework?

Greg: What you’re asking is, what’s the ontological status of the type system? 

Christian: Yes.

Greg: To be perfectly honest, I think we get another category, and the types should be the objects of that category. The Lawvere theory is happening in one category and the type system is happening in another category. What we should do is exhibit that category, and then the morphisms are the proofs in that category. If we’ve done it right, there’ll be a relationship between that category and the category where the Lawvere theory arose. There would be some functor between the two.