Subscribe to RCast on iTunes | Google Play | Stitcher

Christian Williams discusses his recent trip to study with Marcelo Fiore with Greg Meredith and Isaac DeFrain.

## Transcript

**Christian**: I just returned from a long trip in Europe where I was attending summer schools and conferences. The best part was visiting a professor at Cambridge named Marcelo Fiore. We had a really good time starting potential projects together and, mostly, me learning from him. He’s been working on a big project for over two decades now called abstract syntax; one could also call it algebraic type theory. In some ways it’s a big “theory of theories,” but more precisely, it’s trying to give the algebraic foundations of formal languages.

The basic idea starts from a traditional algebraic theory—we’ve talked about these in past RChain podcasts—for example, the theory of a monoid. There’s this nice idea in category theory where we can abstract algebraic notions that were originally tied to sets. Where before we would say that a monoid is a set equipped with a binary operation and an identity such that the operation is associative and unital, category theory lets us recognize when we have hybrid structures: some object in a category that has a binary operation and an identity such that it’s associative. You could have something that’s a space that also has some kind of algebraic structure.

This is very useful for a lot of purposes. It lets you use this philosophy called internalization. You can look at some complicated definition and then realize it’s just a *blank* internal to *blank*—a monoid internal to **Cat**, or something. But there’s a basic limitation to these theories. Ideally, we want to give algebraic structure to programming languages—in particular, the process calculi that we’re going to be using at RChain.

As it stands, you can’t write down a theory of the Rho calculus or a theory of the Pi calculus in this sense of “theory.” These kinds of theories lack a notion of variable binding. Take, for example, the input operation in the Rho calculus—the one which binds a variable. When you write for( x <- a ).P, you’re listening on channel a and receiving something that you’re going to call x. Then, in the continuation P, you’re going to do something with that x. The point is that x in your notation is just a placeholder. It only has meaning relative to this expression.

The basic rewrite (reduction) of the Rho calculus (or the Pi calculus) is a substitution into an input. If you’re inputting on channel a, and in parallel someone is sending on channel a—in the Rho calculus, channels would be quotes of processes—then the data, the process being sent on channel a, is communicated across and substituted for that distinguished bound variable x. The continuation starts with whatever was substituted.
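To make that communication step concrete, here is a toy sketch in Python of the COMM rule, for( x <- a ).P | a!(Q) → P[ @Q / x ]. The term representation is purely illustrative (not RChain/Rholang syntax), and all function names are made up for this example.

```python
# Toy rho-calculus terms (illustrative only):
# Processes: ("zero",) | ("in", chan, var, cont) | ("out", chan, proc)
#            | ("par", p, q) | ("drop", name)
# Names are quoted processes: ("quote", proc)

def substitute(proc, var, name):
    """Replace free occurrences of name-variable `var` by `name` in `proc`."""
    tag = proc[0]
    if tag == "zero":
        return proc
    if tag == "drop":
        return ("drop", name) if proc[1] == var else proc
    if tag == "in":
        _, chan, v, cont = proc
        chan2 = name if chan == var else chan
        # Stop at shadowing: an inner binder of the same variable re-binds it.
        cont2 = cont if v == var else substitute(cont, var, name)
        return ("in", chan2, v, cont2)
    if tag == "out":
        _, chan, p = proc
        return ("out", name if chan == var else chan, substitute(p, var, name))
    if tag == "par":
        return ("par", substitute(proc[1], var, name),
                       substitute(proc[2], var, name))
    raise ValueError(tag)

def comm(par_proc):
    """One COMM step on a top-level par, if the channels match."""
    _, left, right = par_proc
    if left[0] == "in" and right[0] == "out" and left[1] == right[1]:
        _, _chan, var, cont = left
        sent_name = ("quote", right[2])   # quote the sent process into a name
        return substitute(cont, var, sent_name)
    return par_proc
```

For instance, an input on the quoted null process whose continuation drops x, in parallel with an output of the null process on that same channel, reduces to dropping the quote of the sent process.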

That’s the subtle thing that you cannot express with normal algebra. There’s a similar problem in the Lambda calculus, the foundation of normal functional programming. When you write λx. x + x and you want to apply it to 2, you’re going to substitute the 2 for the x and get 2 + 2. The idea is that an operation which binds a variable in this way—which makes a distinguished placeholder that you intend to substitute into—is a different kind of operation in any of these languages.
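The lambda example above can be sketched in a few lines of Python. Again the term encoding and function names are just for illustration; there is only enough arithmetic to run the (λx. x + x) 2 example.

```python
# Toy lambda terms: ("var", name) | ("lam", name, body) | ("app", f, a)
#                   | ("num", n) | ("add", t1, t2)

def subst(term, name, value):
    """Capture-naive substitution, respecting shadowing binders."""
    tag = term[0]
    if tag == "var":
        return value if term[1] == name else term
    if tag == "num":
        return term
    if tag == "lam":
        # An inner binder of the same name shadows the outer one.
        return term if term[1] == name else ("lam", term[1], subst(term[2], name, value))
    # "app" and "add" both have exactly two subterms.
    return (tag, subst(term[1], name, value), subst(term[2], name, value))

def step(term):
    """Beta-reduce an outermost redex, or fold an addition of numerals."""
    if term[0] == "app" and term[1][0] == "lam":
        _, (_, name, body), arg = term
        return subst(body, name, arg)
    if term[0] == "add" and term[1][0] == "num" and term[2][0] == "num":
        return ("num", term[1][1] + term[2][1])
    return term

double = ("lam", "x", ("add", ("var", "x"), ("var", "x")))
result = step(step(("app", double, ("num", 2))))   # (λx. x + x) 2  →  2 + 2  →  4
```

The first `step` performs the substitution—2 for x—and the second folds 2 + 2 to 4, exactly the reduction described above.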

About 20 years ago, Marcelo Fiore, Gordon Plotkin, and Daniele Turi wrote a paper called “Abstract Syntax and Variable Binding.” It essentially solves this problem. It lifts normal algebra from ordinary categories—Lawvere theories—up into functor categories: categories of functors (and natural transformations between them) between two fixed categories. These contain a lot of really rich structure in which you can do all of the normal algebra that you did before, but you’re capable of expressing a lot of stuff that you couldn’t before.

That’s what I came to Cambridge to learn from him. I’ve learned a lot and I feel like I now have the right tools to sit down and write an actual enriched typed theory of the Rho calculus and start to give the full algebraic foundations.

**Greg**: Thanks for that. That’s a nice characterization. I think we should mention that when you say that there isn’t a theory, you mean one of these algebraic theories or higher-order abstract algebraic theories that represents it, that satisfies internalization criteria.

There are a few other subtleties. One is that Fiore’s syntax doesn’t actually account for the rewrites. A monoid has no unidirectional rewrites. It has only bidirectional rewrites.

**Christian**: This is one of these things where he’s worked out how it works but he hasn’t published it.

**Greg**: You’ve been looking at graph enrichment as a particular approach. People have looked at this extensively. Plotkin’s SOS rules as a format for expressing rewrites are very successful. To the best of my knowledge, there isn’t a consensus on a rule format that is widely accepted by the community as capturing all the things that we want to be able to capture. This is actually a point I was talking to Mike [Stay] about because the graph enrichment is a nice tool but it doesn’t articulate all the structure that these rule formats articulate.

Getting a handle on all of that fine-grain structure is important, especially if you’re someone who is actually designing a language. Whether you’re talking about the transitions in the Java virtual machine or just the rewrite rules of the ambient calculus, there are a lot of interesting cases to cover. You need some fine-grain structure to be able to articulate those cases in a way that economically specifies the language.

That leads me to a question: Has Marcelo tried this for the ambient calculus? Because the ambient calculus is not substitution-based. It’s a very interesting puzzle.

**Christian**: I’m not sure if he’s doing it for the ambient calculus, but I know he’s done it for the Pi calculus.

**Greg**: If you recall, you and Mike and I were looking at two issues. One is making the hom-set be a collection—the type of collection in the distributive case. In other words, the hom-set is enriched in that way. The other is the hom-set enriched in the rewrite rules. We were curious about how those two enrichments might interact.

Mike had an alternative proposal where you encode the theory of the graph, the rewrite part of the structure, as a theory, which would then separate the two. The enrichment would be in terms of the collection and then the graph part would be added in as an additional theory. The thing that speaks for that approach is that the source and target equations become a language, a more fine-grain language for your rule format.

The final point that’s of interest is that the types that you end up with are not directly related either to the types that we get from the Caires-style formulae—as in spatial-behavioral logics—or to the session types. These are categorical types, and their expressiveness with respect to concurrency is yet to be explored.

The point of the LADL program is to provide an underpinning for explaining both the Caires and session types, showing how there’s something underneath them that does the work. One of the interesting success stories there is that you can unify the arrow type of the simply typed Lambda calculus with the Hennessy–Milner modalities. What have you done with respect to the investigation of the types?

**Christian**: I was hoping to explain the idea to other researchers. Isaac, did you have any questions?

**Isaac**: I wanted to hear more as to how this variable binding idea differs from these arity operators.

**Christian**: The idea is that we have a category of contexts (in the type theory sense). If you’ve ever seen these judgments, where in the hypotheses you have terms of certain types, and then you can deduce a term of some type—the first half is called the context. It’s your current place in your reasoning.

**Isaac**: Like type environment?

**Christian**: Exactly. The idea is that we have a category of contexts. The objects in there are types. If you take a product of a bunch of them, that’s another object that represents having one term of each of those types. What we want to do is think about functors from the category of contexts; when we start enriching, it’s going to be our enriching category.

The idea is to take a functor from the category of contexts to Set. Let’s take the category of contexts to be finite sets. Such a functor takes in a finite set n—a generic n-variable context—and gives you the set of all terms in some language that you could form with those n variables. A functor like capital Lambda—from finite sets to sets—tells you, for each natural number n, the set of terms in the Lambda calculus that you can form with n free variables.

It’s describing the stratification of the language. If you imagine the process of generating a language—the Lambda calculus is generated by variables, application, and abstraction—when you specify these as judgment rules, they depend on the context.

The one for variables is really easy: if you’re in a context of n free variables, you can project out any one of them—you have x sub i. For application: if in context n you can derive term T1, and in context n you can derive term T2, then in context n you can derive their application to each other.

Those two are easy. The one that’s doing something interesting is Lambda abstraction, because it’s binding a variable. The way you write this is: if in context n + 1 you can derive term T, then in context n—stripping away that distinguished plus-one—you can derive λx sub n + 1 dot T. This brings you from the higher level of context n + 1 back down to context n: you’ve plucked off that distinguished plus-one and turned it into a distinguished bound variable. It was free on top and then it became bound on the bottom.
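The three rules just described can be sketched as a well-scopedness check in Python, using de Bruijn indices (a standard device, here purely illustrative): a term lives in context n when every free variable index is below n, and abstraction moves a term from context n + 1 down to context n.

```python
# Toy de Bruijn terms: ("var", i) | ("app", t, u) | ("lam", t)

def well_scoped(term, n):
    """Is `term` derivable in a context of n free variables?"""
    tag = term[0]
    if tag == "var":                 # variable rule: project x_i, so i < n
        return term[1] < n
    if tag == "app":                 # application: both parts in context n
        return well_scoped(term[1], n) and well_scoped(term[2], n)
    if tag == "lam":                 # abstraction: the body lives in context n + 1
        return well_scoped(term[1], n + 1)
    raise ValueError(tag)

identity = ("lam", ("var", 0))                 # λx. x is closed: context 0
open_term = ("app", ("var", 0), ("var", 1))    # needs at least 2 free variables
```

The abstraction case is exactly the "strip away the distinguished plus-one" move: the binder absorbs one free variable, so the whole term needs one fewer.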

**Isaac**: That makes sense.

**Christian**: The functor’s keeping track of these levels. At any level, it can tell you exactly which variables are bound versus free.

**Greg:** Functoriality is guaranteeing what?

**Christian**: It’s guaranteeing that these terms are stable under renaming. Morphisms in that category of contexts are just renamings of variables, shuffling them around. Since we’re in a Cartesian category, the extra structure we have is that you can duplicate variables or delete them. If you want to be in a type theory where you can’t do that—as in linear type theories—then you change the category of contexts to the category of finite sets and bijections, rather than all functions, so you can only shuffle variables around.
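Stability under renaming can be made concrete with the same kind of toy de Bruijn terms (("var", i) for the i-th variable, with "app" and "lam"; all names here are illustrative). The functorial action pushes a renaming through a term, shifting it under binders so that bound variables are left alone:

```python
# Toy de Bruijn terms: ("var", i) | ("app", t, u) | ("lam", t)

def rename(term, f):
    """Functorial action: apply a renaming f on free-variable indices."""
    tag = term[0]
    if tag == "var":
        return ("var", f(term[1]))
    if tag == "app":
        return ("app", rename(term[1], f), rename(term[2], f))
    if tag == "lam":
        # Under a binder, index 0 is the bound variable: keep it fixed,
        # and apply f to the remaining (shifted) free variables.
        lifted = lambda i: 0 if i == 0 else f(i - 1) + 1
        return ("lam", rename(term[1], lifted))
    raise ValueError(tag)
```

Swapping the two free variables of x0 x1 swaps them in the term, while renaming under a λ never touches the bound index 0—which is the alpha-congruence point made below.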

**Isaac**: So the functoriality is playing the role of alpha congruence?

**Greg**: Yes. That’s one of the things that makes this theory appealing, at least from a computer scientist’s point of view. The people who came up with the notion of functoriality were not thinking about alpha congruence. The fact that functoriality suddenly explains alpha congruence is a point in favor of this approach.

**Christian**: What’s going on here is that in addition to normal n-ary operations, we now have this new kind of arity, which is secretly the arity of a certain functor from finite sets to sets: the inclusion of finite sets into sets, called the presheaf of variables. When you take this as an arity of an operation, rather than just taking in a certain number of things of certain types, you’re actually taking in a distinguished free variable. Lambda abstraction is an operation from delta—a lowercase delta—of capital Lambda to Lambda. It takes in a Lambda term plus one distinguished free variable and gives you another Lambda term where that variable is now bound.

This is the key difference that lets you start talking about the formal languages that logicians and computer scientists really care about, because substitution is the heart of computation, and it depends on the notion of variable binding. Basically, you lift normal algebra to these categories of functors on categories of contexts. You can do everything that you could before, except you now have variable binding.

**Greg**: By the way, we should give a little bit of provenance. While Fiore et al. worked this out in this setting, the original intuition came from John Reynolds, who was attempting to explain reference semantics in ML. He wanted to be able to model mutable state, where the set of reference locations in the environment can grow or shrink.

If you don’t have mutable state in your environment, you can handle this in other ways. When you do have mutable state—which is roughly on par with the new construction in the Pi calculus—then you need a different story. He proposed functor categories to model those semantics.

This idea has been floating around for a good 40 years and was adapted and made prettier and cleaner, but the idea has been around for a while.

**Christian**: I don’t know much about that program of Reynolds. Isaac, did that make sense?

**Isaac**: Yeah, definitely. Thank you for the additional explanation.

**Christian**: That was the basic idea that Fiore et al. outlined in a 1999 paper. Since then, the project has been built up more and more. The basic idea, with functors from finite sets to sets, handles untyped—or rather one-sorted—languages with Cartesian contexts. The category of finite sets—technically, its opposite category—is the free category with finite products on a single object. In this setting, you can only express, for example, the untyped Lambda calculus.

The way you could start getting types involved is that instead of taking that free construction on just one object, you take it on some set of types. If you do the same thing, except your category of contexts is pairs of natural numbers, the slots of the tuple represent that you now have two types. Taking functors on this, your context says you have, say, three terms of the first type and four terms of the second type.

The subtlety about the Rho calculus is that you don’t just have two types existing in parallel. You have these reference and dereference operators that turn the types into each other—or sorts, if Greg prefers. What would you call the distinction between names and processes?

**Greg**: Normally those would be called sorts. I understand in the categorical semantics you call them types. In the computer science world, those are sorts.

**Christian**: In this case, there’s going to be more structure, rather than just taking the free category. If you’ve never heard of the term “free,” basically it means that you generate some kind of structure without imposing any extra rules. If you want to take some alphabet and form the free monoid on it, you take the set of all words that you can form in that alphabet, plus an operation that just concatenates words. This is called free because it’s the simplest possible monoid structure you can put on that set. Similarly here, we start out with some initial data…
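The free monoid on an alphabet can be sketched directly (the function name and the finite cut-off are just for illustration; the actual free monoid is infinite):

```python
from itertools import product

def words(alphabet, max_len):
    """A finite slice of the free monoid: all words up to length max_len."""
    out = [""]                        # the identity element: the empty word
    for n in range(1, max_len + 1):
        out += ["".join(p) for p in product(alphabet, repeat=n)]
    return out

# The monoid operation is concatenation. The only laws are the monoid laws:
assert "ab" + "" == "ab"                          # identity
assert ("a" + "b") + "c" == "a" + ("b" + "c")     # associativity
```

No further equations hold—"ab" and "ba" stay distinct—which is exactly what “free” means, as Greg elaborates next.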

**Greg**: Do you mind if I interrupt for just one second? The way I think about free versus other structures is that free has no equations, whereas other monoids might exhibit some equations, such as commutativity or idempotence.

**Christian**: So it still technically has the equation of associativity, but it doesn’t have extra equations.

**Greg**: No extra equations. In general, if we’re not talking about monoids but about arbitrary algebraic structures, free means no equations. In the case of monoids, you have to have at least the associativity and identity equations. It’s useful, for people who are thinking about this, to note that when you add equations, you’re erasing structure. Free has all the fine-grain structure. As you get more and more equationally constrained, structure gets erased or identified.

**Christian**: For example—this isn’t really how it’s going to work, but it’s an approximation—instead of just having a set of types, we would start with a small category. It has one object for names and one object for processes, and two arrows—one going from process to name and one going from name to process—such that certain equations hold.

This isn’t the full story because you won’t quite be able to express communication on that level. But the point is that instead of just taking a free category on some set of types, the sorts in the Rho calculus have this much richer structure. They’re already intrinsically related.

Even just properly stating the theory of the Rho calculus in this framework is going to be a very interesting paper. Incorporating the enrichment might be a whole paper on top of that. It’s hard to judge exactly how difficult it’s going to be. I’m really looking forward to it because the Rho calculus brings in some new ideas—most of all reflection—that have never been considered in this abstract syntax framework. I think it will add really interesting new dimensions to those ideas.

**Greg**: That’s cool. Can you give us some thoughts? Have you gone down the rabbit hole a little bit to tell us whether or not there are any dragons or red queens down that rabbit hole?

**Christian**: Unfortunately, no. I’ve just gotten back from these trips. Fiore and I spent two weeks together at the end of June. Since then, I’ve been at these conferences and summer schools. He thinks the notion of reflection is really interesting and is certainly willing to help us work on it.

**Greg**: Oh, good. Did you have a chance to show him the space-time calculus, where you don’t just reify processes as names, but you reify contexts as well? Here we’re talking about term context as opposed to type context.

**Christian**: No, I didn’t show him that.

**Greg**: That’s okay. Reflection is already a big step for the community, as I continue to find.

**Christian**: It’s brand new to almost everybody. He’s really integrated into this high-level research community. Informing him about this stuff is going to be really useful.

**Greg**: The computer science community doesn’t understand the utility of namespaces yet. Back in 2005, when I was at the Trustworthy Global Computing satellite workshop of ETAPS, I had just begun to touch on the idea of namespaces. I gave them a few examples, but then I withdrew from that community in order to go after biology.

We do on a regular basis partition namespaces because that’s what URLs do. But the more you can do this in an abstract way, where you can logically characterize namespaces, the more power you get in terms of being able to reason about end-to-end behavior of a system, such as being able to build a compile-time firewall.

I think the community has not yet internalized the power of namespaces. I’m still just coming to grips with them myself—such as the results on encoding reaction rates in namespaces. That was a shock and a surprise to me. I did not see that coming. It is a big step. The computer science community proceeds at a deliberate pace.

**Isaac**: I’m really excited to see where all of this stuff goes because I’ve been looking into namespace logic a lot recently—all the spatial-behavioral types. I’m really interested to see where this research that you’re doing ends up.

**Christian**: Regarding the namespace logic, when I was showing it to Marcelo towards the end of my stay, he was looking at this interpretation. If you go to the namespace logic paper, there’s the page where it lays out the interpretation of all the Rho calculus operators, but then also you’re augmenting the signature with logical operators. That’s how we’re getting more expressiveness on our types.

The interpretation of the logical operators is pretty straightforward: *and* corresponds to *intersection*, *or* corresponds to *union*, et cetera. The one that’s doing the hard work is the interpretation of par. In order to split an atom like that, you need to be able to consider all possible pairs of processes such that, when they’re put in parallel, they act bisimilarly to the process that you’re interpreting.

**Greg**: No, not bisimilar.

**Christian**: Sorry. Structurally equivalent.

**Greg**: There’s a big difference.

**Christian**: It was really interesting. When he saw that—this is one of the advantages of being hyper-educated—he said this looks like this thing in category theory called the Day tensor product. We started to work it out. If you have monoidal structure on a category—for example, thinking about par as an operation on types, where the terms in the namespace logic are being considered as the types—this gives you a way to lift that monoidal structure to a monoidal structure on the category of functors on that category.

If we’re thinking about all of these terms in the namespace logic as types—objects in the category of contexts—and the operations are giving us operations on that category, then they lift to the category of functors on that category. It’s not definite yet, but it’s looking like this may give an automatic way to naturally split apart operations—not just par, but any of these binary or unary operations on types. There’s a chance that this could give a streamlined way to generate these interpretation functions: you have a signature for a certain process calculus, and then you augment it with the signature of a certain kind of logic that you want to add to it. It’s not definite yet, but it’s very tantalizing right now.

**Isaac**: What did you say that categorical construct is called?

**Christian**: It’s called Day convolution. The nLab article about it might be difficult, but basically just picture—are you comfortable thinking about a monoidal structure on a category?

**Isaac**: Yeah, I think so.

**Christian**: It’s basically the simplest way—I keep saying functors, but we’re really talking about presheaves on the category of contexts—of lifting monoidal structure on the category of contexts to the category of presheaves. If down in the category of contexts those are our types, the presheaves are like the languages that are typed in that way. We’re lifting operations on the types to operations on the languages themselves—on terms of the language.

**Greg**: Of the things that you and Marcelo have talked about, the Day convolution is the one that’s really interesting to me. I’ve long thought that there ought to be a role for Day convolution in this setting. If it turns out that we can make that work and it gives us a better sense of what makes all of this tick, that would be great news.

**Christian**: If you read that formula for the interpretation of P par Q…

**Greg**: You mean φ par ψ?

**Christian**: Sorry, yeah. It’s saying…was it R?

**Greg**: I usually use M to range over the terms. So you take the total language, the set of all terms, and you find all the M in it such that M decomposes into an M1 par M2—it’s structurally equivalent to M1 par M2—and M1 satisfies φ and M2 satisfies ψ.

**Christian**: Yeah, exactly. There exist M1, M2 such that M1 par M2 is equivalent to M. That’s precisely the formula for Day convolution.
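The par formula can be made executable with a toy model (everything here is illustrative): represent a process as a multiset of atoms, with par as multiset union—which bakes structural equivalence into the representation—and a “type” as a predicate on processes. Then M satisfies φ par ψ when some splitting of M satisfies φ and ψ respectively.

```python
from itertools import combinations

def splittings(proc):
    """All ways to write proc ≡ M1 | M2 as two sub-multisets."""
    items = list(proc)
    for r in range(len(items) + 1):
        for idx in combinations(range(len(items)), r):
            m1 = [items[i] for i in idx]
            m2 = [items[i] for i in range(len(items)) if i not in idx]
            yield sorted(m1), sorted(m2)

def par_formula(phi, psi):
    """⟦φ par ψ⟧ = { M : ∃ M1, M2.  M ≡ M1 | M2,  M1 ⊨ φ,  M2 ⊨ ψ }"""
    return lambda m: any(phi(m1) and psi(m2) for m1, m2 in splittings(m))

has_a = lambda m: "a" in m          # a toy formula: "contains an a-atom"
has_b = lambda m: "b" in m
check = par_formula(has_a, has_b)   # satisfied when M splits into an a-part and a b-part
```

The brute-force search over all splittings is exactly the "there exists M1, M2" in the formula; that existential over decompositions is what the coend makes precise.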

**Greg**: Oh, very interesting.

**Christian**: There’s a certain kind of universal construction in category theory called ends and coends. They’re like limits and colimits that understand variance. The “co” one is like a really fancy *there exists*—it’s actually a big generalization of the existential quantifier. So you’re saying: coend over all types—you’re quantifying over other types, because these could be other objects in your category of contexts, which are terms in the namespace logic. It would be easier with paper, but it’s literally a translation of that formula. That’s why he lit up when he saw it and said that’s the Day tensor product.
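For reference, the standard formula for the Day tensor product of two presheaves $F$ and $G$ on a monoidal category $\mathbf{C}$ (this is the textbook definition the translation lands on):

$$(F \otimes_{\mathrm{Day}} G)(n) \;=\; \int^{m_1,\, m_2} F(m_1) \times G(m_2) \times \mathbf{C}(m_1 \otimes m_2,\; n)$$

Reading the coend $\int^{m_1, m_2}$ as a fancy “there exist $m_1, m_2$,” this says: there is an element of $F$ at $m_1$ and an element of $G$ at $m_2$ whose combination $m_1 \otimes m_2$ relates to $n$—the same shape as the par formula above, with par playing the role of $\otimes$.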

**Greg**: Wow, that’s really cool.

**Christian**: I think it’s a big hint at something. I think that even to do the theory of the Rho calculus right, with the right kind of types in this framework, you have to have already figured out LADL. If we really do want LADL to give us the types, then from the very beginning we’re going to have to be taking functors on whatever this category of types is. I think we might have to figure this out at the same time as the theory.

**Greg**: I think you’re probably correct. Samson Abramsky described these interaction categories. If you look at the way he described what the types are and how he generates the category, there’s this big wide-open invitation.

Can you take the Caires types or the session types, or something that’s sitting underneath them, and build the category where the types are the objects and the morphisms are the computations? That’s the really interesting question. And so far no one has done it.

**Christian**: Which paper is that?

**Greg**: If you just Google “interaction categories,” there are a bunch of papers—he and Simon Gay and Rajagopal Nagarajan and Guy McCusker all looked at various instantiations. In at least one incarnation of the ontology Samson was sketching out, interaction categories were the foundation. Game semantics was an instance of interaction categories, and CCS—they could give a type system for CCS; they couldn’t do Pi—was another instance, but interaction categories were the overarching construction. Mike and I are getting awfully close to that territory again because we’ve been looking at session types, spurred on by the no-go theorems.

The types can only have certain shapes. Mike said: what about session types, where they don’t have that shape? Then we looked carefully, and it turns out that they do—it’s just that negation is now folded into execution. That’s not the case with the Caires types. In the Caires types, negation is all about the collection, but in the session types work—or at least in the Abramsky flavor of session types—negation is mutually defined in terms of cut, which is the execution operation. Again, you recover this foundational structure. We’re getting awfully close to the territory you’re talking about, so it might be a good idea for you and Mike and me to talk.