
RCast 40: Rewriting the Derivative




Greg Meredith talks derivatives and reflection with Isaac DeFrain and Christian Williams. The following slides correspond to the discussion. The transcript follows the slides.


Transcript


Greg: I would like to talk about some work that began in 2017 when I observed that there’s a derivative at play when you do the reflective closure. Recently, Mike [Stay] and I have been looking at that work in a new light. Once you have this observation that the reflective closure behaves in a derivative-like fashion, then you can focus on rewrite rules that use reflection and reification. That turns out to be sufficient to capture all of the rewrite rules for all of the complete notions of computation that we know about, but it’s not necessary.

That’s okay. We don’t need to be necessary and sufficient. We’re just interested in a rewrite format that is sufficiently expressive, so that we can restrict attention to it and then insist that, if you want to give us some other rewrite format, you give us an encoding into this presentation. That’s a lot to take in already. Let me stop and check in and see if that all makes sense.

Isaac: Could you say a few words about just what you mean by reflective closure?

Greg: If you are given a parametric type, let’s say T is parametric in X, then you could form the type R, which is equal to T of R. That closes up the variable. 
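A minimal Haskell sketch of the closure Greg is describing, using an illustrative functor; the names here are mine, not taken from the slides:

```haskell
-- Given a type T parametric in X, form R with R = T R.
newtype R t = MkR (t (R t))

-- For example, taking lists as T, R [] is the type of arbitrarily nested lists.
example :: R []
example = MkR [MkR [], MkR [MkR []]]
```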

Isaac: Like in a fixed-point manner?

Greg: Yes, exactly. Let’s begin formally and then work our way out to the meaning of it and why this is relevant and important. On my screen, I’m assuming that I’ve got this polynomial functor that’s parametric in some collection of constants, bold K, and some variable X; the K_i are indexed over that collection. So for the ith power of X, we have the ith tag or constant. That should be pretty straightforward. Nothing fancy there. It’s not quite as rich as a Lawvere theory, not quite as rich as a thesis, but it’s a polynomial functor.

Now we can take a formal derivative with respect to X, which basically works just like the ordinary derivative. You’re punching a hole in every place where X could occur. So in the summand with i occurrences of X, the hole could be in the first position, the second position, yada yada, and each of those leaves the (i minus 1)th power of X behind, so it looks just like a derivative. To fill the hole that we just punched in the data type, we take the product with something—whatever it is we want to fill the hole with, we’re going to take the product with that. In this case, we’re going to take the product with some type that’s parametric in K and X, and we’re calling it N. N stands for name. You’ll see how names arise in a minute.

Essentially, by taking a product with names, we’re filling the hole that we just punched in the data type with the named type. I spelled out that calculation. If we say T prime is the derivative of T with respect to X cross N, then you can distribute that through all the components of the sum that T is. That means that where you just lost an X component, you’re replacing that X component with the name. Does that make sense? 
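A minimal Haskell sketch of the scaffold described above, assuming a tiny hypothetical signature with one nullary, one unary, and one binary tag; the names T, DT, and the constructors are illustrative rather than taken from the slides:

```haskell
{-# LANGUAGE DeriveFunctor #-}

-- T(K, X) = K0 + K1*X + K2*X^2: one nullary tag, one unary tag, one binary tag.
data T x
  = Tag0        -- K0: a constant, no X positions
  | Tag1 x      -- K1: one X position
  | Tag2 x x    -- K2: two X positions
  deriving (Show, Functor)

-- The formal derivative of T with each punched hole already filled by a name
-- of type n, i.e. dT/dX * N distributed over the sum: the nullary summand
-- vanishes, and the binary summand shows up once per hole.
data DT n x
  = DTag1 n      -- the single X of Tag1 replaced by a name
  | DTag2a n x   -- the first X of Tag2 replaced by a name
  | DTag2b x n   -- the second X of Tag2 replaced by a name
  deriving (Show, Functor)
```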

Christian: K is a sequence of natural numbers?

Greg: K is a bunch of tags. It’s indexed by natural numbers. Think of it as term constructors.

Christian: And each K_i is the set of tags of i-ary operations?

Greg: That’s right.

Christian: Then you’re poking a hole but it’s still an i-ary operation and we want to fill that hole with a name. 

Greg: That’s exactly it. Now we’ve got names filling the holes. Since we’ve got names we want name management technology. Lambda is the best name management technology, but we’re going to do an impoverished form of Lambda. You can abstract over names, but you can only apply an abstraction to a name. That’s because we allow names to be dropped. Dereferenced. 

At some point, someone’s going to tell us how a name is related to a term. If we’re given that magic, then we could dereference the name to a term. That’s what the first term in this Lambda data type is. The second one is an abstraction. The third one is an application of an abstraction to a name. We ground these out. These are like built-in types for Lambda calculus. They can either be a context or they can be a term. 
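Continuing the sketch above, one plausible rendering of this impoverished Lambda layer, with T and DT as before and some name type n; the constructor names are guesses, not the slide’s:

```haskell
data Lambda n x
  = Drop n                 -- dereference a name back to a term
  | Abs  n (Lambda n x)    -- abstraction over a name
  | App  (Lambda n x) n    -- application of an abstraction, but only to a name
  | Ctxt (DT n x)          -- ground case: a context whose hole holds a name
  | Term (T x)             -- ground case: a built-in term
  deriving Show
```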

Isaac: These contexts are the derivatives that you were talking about before?

Greg: That’s correct. Well, it’s a derivative with the hole in it filled by a name.

Christian: Why are we only applying a term with a distinguished name to another name? 

Greg: Because we can drop names. We get the equivalent of higher-order capability because every name can be dereferenced to a term. 

Christian: Where are we seeing that dereferencing here?

Greg: That’s the first term in the Lambda. Now, as promised, we say that a name is really just a decorated term, which is saying that names are frozen computations. Finally, we take the fixed point with respect to X. That’s the data type that we derive. The important thing is that we don’t do any fixed-point calculations until the very end. Basically, if you’re familiar with Haskell-type shenanigans, we remain at the two-level type decomposition.
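Continuing the same sketch: names as decorated (frozen) Lambda terms, and the fixed point taken only at the very end, so everything before that stays at the two-level decomposition. Again, the names here are illustrative:

```haskell
-- A name freezes a Lambda term (the "@" of the discussion below).
newtype Name x = Quote (Lambda (Name x) x)
  deriving Show

-- Only at the end do we tie the knot and close up the remaining X.
newtype Term = Tie (Lambda (Name Term) Term)
  deriving Show
```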

We don’t do any recursion that we don’t have to. Lambda is recursive because it mentions itself, but the T is not recursive. A T could become recursive and become a term type at some point, but at the top of the calculation, it’s basically like a scaffold. You could build a term calculus out of it, but it’s really just a skeleton of a term calculus.

Step-by-step, we fill in that skeleton with pieces that will be necessary in order to do calculations—computations in particular. At the very end, we close everything up. This gives us a kind of decomposition of each of the features that we need to add to our language. 

We start off with some built-in calculations, which are the operations indicated by the K_i constructors. Those are term constructors for some built-in notion of computation. Then we make contexts out of those things. Then we fill those contexts with names. Then we provide name management technology, which is the Lambda. We provide the weakest form of name management technology. It’s weaker than Lambda calculus.

Christian: What do you mean by name management? 

Greg: The way I think about Lambda—it took me some years to come to this—if you have some kind of namespace construction, or namespace that you need to be able to do computations on, Lambda is your best bet. It’s the simplest way to have abstractions and fill those abstractions. There’s this substitution of values for variables. That’s Lambda’s only job. That’s what I mean by name management technology. I want to be able to just make abstractions and then boil them away through application.

Christian: This is distinct from the abstraction analog in the Rho calculus, which is input. 

Greg: No. What we’ll find is that we recover all of that. It’s easy to recover the Rho calculus.

Notice that we now do this for any term calculus. At a minimum, this would give you Rho with a set of built-in operations where the built-in operations are given by T. 

The other piece is that all of this is completely generic. There’s nothing special about this. The only thing that we insist on is—and for this talk I’ve made T very tame—ultimately, it just has to be differentiable. I have to be able to poke a hole. It can be a pretty sophisticated functor as long as I can poke a hole in it, and then fill the hole with names, and then do abstraction over names, and then provide the reflection of terms as names, and then finally do the fixed-point calculation.

I keep hammering this point out because it’s really important to understand how generic this is. You might be interested in this, Christian, since you’re interested in adding reflection to Fiore’s program. This should give you a foothold. 

Christian: Yeah, definitely. 

Greg: Let’s look at an example. First, notice that this gives us reify. The @ map will take you from a Lambda(K, X) to an N(K, X). Sorry, there’s a typo in the slide; it should read Lambda(K, X) to N(K, X). If you have an application, then you can get a T(X) out of it. But if you don’t have an application, then all bets are off. That’s a partial function.

In general, you want to at least respect that @ of star of @T is equal to @T. In the case of Rholang (as opposed to the Rho calculus), you have that @ and star are inverses of each other. The T you’re taking the address of is literally being frozen. When you say, give me the address of T, that address can be used to retrieve a T that is structurally equivalent to T and not just T-ish. If it’s T-ish, you can’t even demand that they’re bisimilar, because that ends up being halting hard. As long as you get it exactly inverse, on the nose, then you can, as Rholang demonstrates, get a solution to that equation.

There are two different equations here. One is the special case of the other. Therefore, when you form an abstraction, the abstraction of n over star of n, applied to @T, should be T. It should reduce to T.
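In the notation of the running sketch, reify (the @ of the discussion) is total, and the two equations Greg just stated can be written down directly; the rendering below is one plausible reading, not the slide’s definitions:

```haskell
-- reify ("@"): freeze any Lambda term as a name.
reify :: Lambda (Name x) x -> Name x
reify = Quote

-- The two equations, informally, in this notation:
--
--   reify (Drop (reify t))  ==  reify t            -- "@ of * of @t equals @t"
--   (Abs n (Drop n)) `appliedTo` reify t  ~~>  t    -- "(lambda n . *n) @t reduces to t"
--
-- where `appliedTo` stands for the App constructor and ~~> for one rewrite step.
```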

Isaac: That reminds me of the semantic substitution from Rho calculus. 

Greg: Yeah, that’s exactly right. That’s the semantic substitution vocabulary. These two equations are giving us enough to recover Rho calculus. We’re giving a very abstract account of Rho calculus. If I’ve got K tags for out and par, then you can see for yourself that I can form a term that looks like this. That’s how I can form a redex. I can now make a rewrite rule that’s exactly the for rewrite rule. Again, I’m just using lambda n dot T as short for the abs(n, T) term in the data type. Does that make sense, Christian?
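As a separate, self-contained illustration of the kind of redex and rewrite rule Greg is pointing at, here is a tiny Rho-calculus-flavored sketch in Haskell, with out and par as built-in tags and a COMM-style rule that fires on a top-level redex; the encoding and names are mine, not the slide’s:

```haskell
data Proc
  = Nil
  | Par Proc Proc       -- P | Q
  | Out Name Proc       -- n!(Q): place the quoted process @Q at the name n
  | In  Name Name Proc  -- for (x <- n) { P }: an abstraction located at n
  | Drop Name           -- *x: dereference a name
  deriving (Eq, Show)

newtype Name = Quote Proc  -- @P: a name freezes a process
  deriving (Eq, Show)

-- Naive substitution of one name for another; no alpha-conversion, just an
-- illustration.
subst :: Name -> Name -> Proc -> Proc
subst new old = go
  where
    swap n = if n == old then new else n
    go Nil             = Nil
    go (Par p q)       = Par (go p) (go q)
    go (Out n q)       = Out (swap n) (go q)
    go (In n x body)
      | x == old       = In (swap n) x body          -- the binder shadows
      | otherwise      = In (swap n) x (go body)
    go (Drop n)        = Drop (swap n)

-- The COMM-style rewrite on a top-level redex:
--   for (x <- n) { P }  |  n!(Q)   ~~>   P { @Q / x }
comm :: Proc -> Maybe Proc
comm (Par (In n x body) (Out m q)) | n == m = Just (subst (Quote q) x body)
comm (Par (Out m q) (In n x body)) | n == m = Just (subst (Quote q) x body)
comm _ = Nothing
```

For instance, with chan = Quote Nil and x = Quote (Par Nil Nil) playing the role of a variable, comm (Par (In chan x (Drop x)) (Out chan Nil)) yields Just (Drop (Quote Nil)), i.e. *(@Nil), which dereferences to Nil; that is the analogue of for (x <- chan) { *x } | chan!(Nil) reducing to Nil.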

Christian: In general, how are we deriving the reify and reflect maps from the functors?

Greg: That equation tells you how to do reify. If you hand me a T, I pair it with @.

Isaac: Reify is just quote, basically. 

Greg: That’s right. That’s exactly what it is. 

Isaac: That’s great. 

Greg: Reflect is partial. It only works if you’ve got one of the application gadgets. If you don’t have an application gadget, we say bottom and fail. 

Christian: What we’re doing doesn’t apply to any polynomial functor. It needs to be of a certain form.

Greg: No, the whole thing works for any polynomial functor. It’s just that…

Christian: You might get bottom.

Greg: You might get bottom for Lambda. If you give me a Lambda term that is an abstraction, I can’t auto-generate from an abstraction a term. Unless you say that the abstraction is a term. If you go from Lambda to Lambda as opposed to Lambda to T, you can get what you want. 

Now we can build Rho calculus out of this formulation. We can build a reflective variant of the Lambda calculus. We can do lots and lots of things with this gadget. There’s literally an infinite number of rewrite systems that we can capture with this formulation.

Isaac: This is really cool. You’re basically providing a framework, and then you can put whatever term constructors you want in there as those K_i’s and generate any reflective calculus built from those term constructors.

Greg: That’s exactly right. 

Isaac: That’s super cool.

Greg: One calculus that’s out of reach directly is the Ambient calculus. The Ambient calculus doesn’t work by substitution. It works by whole-tree rewrite. You have to go through an extra step. You basically have to turn every Ambient calculus thing into some data, and then do pattern matching on the data with your substitution machinery, and then reflect it back into your term.

One Christmas I gave a talk on how all that works. You can’t make the Ambient calculus fit this pattern directly. You have to go one level of indirection in order to get there. But in general, we can do all the computational calculi in this style.

The reason I’m interested in this goes back to LADL. Essentially, in the last few months, Mike and I have been investigating what the relationship is between the Caires-style types and the session types—and I’ll focus on the Abramsky formulation of the session types as opposed to the way Wadler does it. Abramsky’s formulation is very crisp and clearly laid out in terms of realizability. That realizability construction allows one to see that the effective distinction between session types and the Caires-style types is the relationship between negation and execution.

When we’re talking about LADL, we formulate it in terms of some components. You need a monad for your collection, you need a monad for your term calculus, and then you need your distributive law. In the case of the Caires types, the negation is coming from the collection. In the case of sets, you’ve effectively got a Boolean gadget. It has a crisp notion of negation in terms of the Boolean machinery.

In the case of the Abramsky types, the negation is coming from the evolution. It’s coming from the term language and its rewrites. The slogan, or the way to think about Abramsky’s characterization, is that he takes the Girard quantale. For listeners who are not familiar with linear logic, there’s a particular semantics; all models of linear logic are some form of quantale. A presentation of every quantale can be factored through sets of sequences. You have some initial set of generators. You consider all the sequences of those generators; that’s the free monoid construction. Then you consider sets of those sequences. Every model of linear logic factors through one of those gadgets. The free one will look like that; something that’s more constrained will have some equations on it. That’s a piece of folklore about linear logic.

The Girard quantale is effectively spelling out how you take that free gadget and then add in the linear negation. To add the linear negation in, you have something which is called a dualizer. If the generators that you started with are actions, then the dualizer says what undoes or screws up those actions—what’s in conflict with those actions. 

To negate a set of sequences, you take, for each thing in the set, the things that screw it up, and then you look only at the double-closed sets. Take whatever screws up my set of sequences and then take whatever screws that up; I should get back the original set. That double-closure construction is effectively the Girard quantale. Abramsky splits that so that the actions are not just given as generators; instead, the actions are understood as the evolutions given by the rewrite rules.
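A toy Haskell rendering of the double-closure idea, assuming a finite universe of sequences and treating the dualizer as a predicate on composites; this is a simplification for illustration, not Abramsky’s or Girard’s actual construction:

```haskell
import Data.List (nub, sort)

-- Generators are characters, sequences are strings (the free monoid), and a
-- "fact" is a set of sequences.
type Fact = [String]

-- perp of a fact: everything in the universe whose composite with every
-- member of the fact lands in the dualizer (is "in conflict" with it).
perp :: [String] -> (String -> Bool) -> Fact -> Fact
perp universe dual xs = [ y | y <- universe, all (\x -> dual (x ++ y)) xs ]

-- Linear negation is perp; the facts of the quantale are the double-closed
-- sets, the ones you get back after taking perp twice.
doubleClosed :: [String] -> (String -> Bool) -> Fact -> Bool
doubleClosed universe dual xs =
  normalize (perp universe dual (perp universe dual xs)) == normalize xs
  where normalize = sort . nub
```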

Isaac: What are these conflicts between actions? 

Greg: They’re just given. It’s part of the data that you give. That’s what the dualizer is.

Abramsky splits that into the rewrites. He relates the notion of action to the rewrites of terms. That’s why he sometimes calls it “splitting the atom of computation.” When you look at his negation, it’s clearly formulated in terms of the rewrite rules. Popping back up, that is the difference between the negation in session types, which is a linear negation related to the action of computation, and the negation in the Caires types, which comes from the collection part.

What I wanted to do was to give a construction which allows me to have LADL give you both notions of types. In order to do that, I need to talk about rewrite rules. I don’t want to talk about every kind of rewrite rule. If you think now of T_K as a carrier of dynamics, then if you have zero arguments, there’s no interaction possible. One argument is a potential for interaction, but it’s not until you get to binary term constructors, whether from the initial terms or from the Lambda piece, that you start to get interaction.

You can also prove that anything higher than binary can be faithfully encoded down to binary. You’re only interested in binary because that gives you a notion of computation as interaction. Even if you focus on just interactive rewrite rules, that’s still too rich to try to give an account of what we’re trying to do. 

What I was looking for was a further constraint on the rewrite rules, so that I knew more about the rewrite rules, which would allow me to give the session-type construction. The reflection-reification thing turns out to give you everything you need.

I’ll just give a sketch of that, because this has been a very technical session. I’ll give a high-level sketch of what’s going on; in another talk we can look at the details. 

If you understand how the tuple space works, then you can understand essentially what’s going on. Not the Gelernter-style tuple space, where you don’t necessarily have reified access to continuations. But in the RSpace approach that we take with Rholang and RChain, essentially the idea is that at a name you’ve only ever got either data or a continuation, or else the name is not being managed by the tuple space. That is equivalent to the law of excluded middle.

This data integrity constraint on the storage is related to an observation that goes back 30 or 40 years, where people realized that you could encode excluded middle in terms of call/cc. The intuition is exactly what we see in the tuple space: the idea that there’s some data present at a name is the proposition about the data; that’s the name proposition of that data. If there’s no data at that name, you freeze the computation that is the observer that’s asking; you make it into a continuation. When data does get put at that name, that’s the only time that the observer ever gets to pop out. The observer will see either A or not A.
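A toy Haskell model of the RSpace invariant Greg describes: at any name there is either data waiting or continuations waiting, never both. This is my own simplification, not the real RSpace implementation:

```haskell
import qualified Data.Map.Strict as Map

data Cell d k = Data [d] | Waiting [k]
  deriving Show

type Space d k = Map.Map String (Cell d k)

-- produce: if an observer (continuation) is already waiting at the name,
-- hand it the datum so it can "pop out"; otherwise park the datum.
produce :: String -> d -> Space d k -> (Maybe (k, d), Space d k)
produce name d space =
    case Map.findWithDefault (Data []) name space of
      Waiting (k : ks) -> (Just (k, d), put (Waiting ks))
      Waiting []       -> (Nothing,     put (Data [d]))
      Data ds          -> (Nothing,     put (Data (ds ++ [d])))
  where
    put cell = Map.insert name cell space

-- consume: if data is already waiting, take it; otherwise freeze the
-- observer as a continuation at the name.
consume :: String -> k -> Space d k -> (Maybe (d, k), Space d k)
consume name k space =
    case Map.findWithDefault (Waiting []) name space of
      Data (d : ds)    -> (Just (d, k), put (Data ds))
      Data []          -> (Nothing,     put (Waiting [k]))
      Waiting ks       -> (Nothing,     put (Waiting (ks ++ [k])))
  where
    put cell = Map.insert name cell space
```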

Christian: I remember you mentioning this at the RChain conference. It’s a really cool idea. 

Greg: It’s not mine. It goes back 30 or 40 years. I’m trying to remember the researcher who observed it. I know that Ralph Loader explained it to me back in ’92. It was already old then, but I think I was the first person to actually use it as a foundation of a data integrity constraint for a data source.

Isaac: You’re basically saying that if you’re doing some computation, you have an environment that you’re doing the computation in, and if you need data at some name, and it’s there, that’s great. You take it and proceed with the computation. But if it’s not, you just freeze it at that point, take a snapshot of it…

Greg: Freeze the computation that wants it at the name. If you look at the data type, we have everything we need to build that at the type level. The Lambda term that we built here gives us everything we need.

Isaac: In terms of the abstraction and application?

Greg: That’s right. Everything is there to either have data, which will be a T(K, X). Or, in this case, the simplest way to do data is as names, which is just a decoration of a T(K, X). Either we’re going to have data available or we’re going to have continuations available. The trick is that our type system is going to only allow for terms that balance abstractions at names against data placed at those names.

It looks like a context-free language. Effectively, the only trick that context-free languages have above regular languages is that they can balance parentheses. That’s the trick. We’re effectively adding enough type machinery to make sure that this data-abstraction interplay is balanced. If you think of the abstraction as an open paren, then the data is the close paren.

Isaac: That makes sense. 

Greg: Now we can do this at the type level, so now I can give you session types. That’s the essence of the construction. It’s taken some time to get it clean.

The thing that I like about it, to widen the scope, is that I’ve made it really clear what the role of the derivative is here. When I made the first observations, I was just like, okay, I know that I can use the derivative to get to this reflective closure gadget. Then the question is how to articulate the fact that I’ve got a derivative related to evolution. Now I’ve laid out all the steps for how that’s done.

This connects to what we talked about last week, when we talked about knots and I was saying that I really feel that computation ought to start to be applied to the dynamics of physical systems, not in the sense of writing computer programs to implement Newtonian or Lagrangian models of physical systems, but in the sense of using the fundamental principles of computation to elucidate principles of physical systems.

That’s a big gap in the way we have been thinking about computation. There is an explicit program: John Wheeler suggested that we ought to get it (i.e., physics) from bit (i.e., computation). When he proposed his whole program, it from bit, computer science wasn’t nearly as developed. Girard was just contemplating linear logic. But this far down the road, we know a lot more about what’s making computation tick and how it relates to logic and type theory. We’re starting to see some of those ideas being employed in a reformulation of quantum mechanics. We know a lot more about it. What I’m trying to suggest is that we clearly have a notion of derivative that’s clearly related to the dynamics of the system.

We’re starting to get even the math of the notions of computation lining up with the way the math of physics has traditionally been carried out. That’s part of the motivation to give this kind of factorization and presentation, so that we understand that this notion of derivative is playing a foundational role in the evolution of the system. We can explain exactly how the two are related. We can calculate and we can make predictions about how the derivative plays a role with respect to the evolution of the system. That should take us a few steps further down the road in terms of being able to employ computation in the modeling of physical systems.

Christian: Do you think the derivative is an analog of something that is already known in physics?

Greg: Oh, yes. There are some category theorists who’ve been exploring very similar ideas, and maybe we could do an RCast about that. It’s related to the management of resources and how the resources are being used in the formulation of dynamics. It’s right there on the page. We can see exactly what it’s doing. It’s taking a contributor to the dynamics and then knocking one out. That’s what the formal derivative does, just like the Newtonian derivative: it takes a contributor and knocks it out.

Isaac: This is really interesting to me because in physics the derivative seems like such a natural thing. You have position and you take the derivative, and now you have velocity; you take the derivative, and now you have acceleration; all that seems very intuitive and straightforward. But it’s very interesting that this same notion of derivative, at least the way you formulated it here in terms of a polynomial, is giving you a similar kind of notion, except, for me at least, it’s very difficult to think of the analog of position and velocity in this case.

Greg: That’s because in the standard way that physics is used to model dynamics, time is folded in. Whereas here we’ve unfolded time and time only happens when we do a reduction. We’ve done a further decomposition. This is the very nugget that I’m trying to get to. Physics hasn’t unfolded that bit. A lot of the formulations of relativity are attempting to unfold that bit and they get stuck. That’s why you have this space-time gadget. That’s precisely the heart of the matter. 

If you refactor physics so that time isn’t essentially a kind of coordinate, then time arises from the dynamical evolution of things. The dynamical evolution happens because you’re computing. Then we can get a notion of time that emerges from that, but not the other way around. It’s essentially a repackaging of dynamics.

There are some physicists who’ve argued for this. Julian Barbour talks about something like this in his book The End of Time. He tries to give a more intuitive account. Here I’m trying to make some very concrete proposals for where the refactoring is.

This is what I’ve said over and over again. If you look at the way classical mathematics packages dynamics, a vector space just sits there. No dynamics happens until you have a function on it. Whereas if you look at the process calculus or the Lambda calculus or any of these notions of computation, it’s the other way around. The algebraic gadget carries the evolution with it.

Evolution and structure, structure and function, go together. This is why it’s really hard to give categorical accounts of computation. All of the category theory has been so heavily oriented toward the classical view, where you decompose structure and function in this other way: we just look at structure and then we use morphisms to encode dynamics. With the Lambda calculus or the Pi calculus or the Rho calculus, the gadget on its own is not well-specified. You don’t have a computation gadget until you also have the rewrite rules.

If you’re going to build a category, your notion of category would have to have morphisms that preserve the rewrites. When you go from one rewrite system to another rewrite system, you have to have a sense of what morphisms would preserve those rewrites. That’s a very difficult idea to get a hold of. Christian, does that make any sense?

Christian: Yeah, that makes sense. That’s what this whole enrichment idea is about: integrating the dynamics into the structure.

Greg: Yeah, exactly.