
RCast 25: The Rho Combinator




Greg Meredith and Mike Stay discuss updates to their latest paper, “Name-free combinators for concurrency.” Christian Williams joins the conversation.

Transcript


Greg: We’re talking this week about an approach to computation that has a long history. In the design-level thinking talks that we gave, we were focused at the calculus level—going up and considering calculi themselves as abstractions. Here we’re going in the other direction: atomizing, reducing the amount of structure in the calculus in order to find finer-grained or smaller building blocks within the calculus itself. These are called combinators.

We have Mike Stay, who co-wrote a recent paper that we’ve been working on for a little while, and Christian Williams, who’s been a regular on our podcast. Why don’t you guys weigh in? 

Mike: Combinators were something that Schönfinkel came up with to deal with bound variables in logic. Whenever you say “for all X, P of X is true,” it’s some property P that holds of all the things you’re talking about. That “for all,” with the X there, introduces a variable that is bound in what comes later.

In English, you could think about modeling statements using predicates. Like if I say, “Mike likes toffee,” you’d have this likes predicate, and then it would take two parameters. One is the subject, which is Mike, and the other is the object, which is toffee. If you try and parse English that way, it works until you get a word like everything. If you say “Mike likes everything,” there’s no single concept that fits there. What everything does is take the predicate and put this quantifier up front: for all X, likes(Mike, X). Schönfinkel, in trying to get a grasp on how these bound variables would work, discovered that he could actually get rid of them.

There’s a Wiki page on combinatory logic. The universal basis for combinators that makes them just as powerful as Lambda calculus is called the SKI basis. You can think of S and K and I as addressing schemes: you have some term, and you want to insert it into some other term. That’s what X is doing. It’s a placeholder for some other thing you’re putting in there. When you say “Mike likes everything,” it’s a placeholder for any particular thing, and we’re quantifying over all of them.

The variables are marking a spot in the term. Because Lambda calculus is an applicative calculus—meaning you’re taking functions and applying them to data to get the result—a term is one big tree of applications and Lambda abstractions. When you’re eliminating the Lambda abstractions, all you’re left with are applications. So S and K and I are an addressing scheme for how to get to the spot you want to be at in the application.

S says X might appear to the left of the application or to the right; K says it doesn’t appear here; and I says it goes right here. So S duplicates the data and sends it on down to the right-hand and left-hand sides, K deletes the data, and I sticks the data in right there.

For linear Lambda calculus, where you can only use data exactly once in an expression, there’s a different basis: the BCI basis. B says it appears on the left, C says it appears on the right, and I says it appears right here. Combinators can be thought of as addressing schemes for getting data to where it needs to be.
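
As a minimal illustration of that addressing-scheme reading—this sketch is not from the paper, just standard SKI reduction written out in Haskell—the three rules route an argument into a term by duplicating it, discarding it, or dropping it in place:

```haskell
-- A minimal sketch of SKI terms and one-step reduction.
data Term = S | K | I | App Term Term
  deriving Show

-- Apply a combinator rule at the head if one matches,
-- otherwise try to reduce inside the term.
step :: Term -> Maybe Term
step (App I x)                 = Just x                          -- I x     -> x        (put it right here)
step (App (App K x) _)         = Just x                          -- K x y   -> x        (discard the argument)
step (App (App (App S f) g) x) = Just (App (App f x) (App g x))  -- S f g x -> f x (g x) (duplicate the argument)
step (App f x)                 = case step f of
                                   Just f' -> Just (App f' x)
                                   Nothing -> App f <$> step x
step _                         = Nothing

-- Reduce until no rule applies.
normalize :: Term -> Term
normalize t = maybe t normalize (step t)

-- Example: S K K behaves like I, so (S K K) S reduces to S.
main :: IO ()
main = print (normalize (App (App (App S K) K) S))
```

The linear BCI basis works the same way, except that none of its rules duplicates or discards the argument.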

When you only have a single tree of applications, it’s pretty easy to do, but when you have a soup of processes then it gets more interesting. When you are sending messages between processes on names, the names themselves are addresses. The idea of getting rid of names in Pi calculus is a very odd thing to do. 

When Yoshida was trying to do abstraction elimination, she was able to get rid of the part that looked like Lambda calculus and came up with a really nice set of combinators there, but she still depended on names and on the fresh generation of names, so she left in the new-name operation.

When we tried to get rid of it, it was a little tricky. Essentially, Greg’s insight with Matthias Radestock—this is a retelling of the story; it’s not how they thought about it originally—was that combinators have structure that tells you how to get to a particular spot in the term. If you’re going to eliminate names, you have to have something else that has enough structure to tell you how to get to where it’s going. They have this argument that you can’t implement an infinite set of structureless atoms on a computer, which would be needed for a true model of the Pi calculus.

Instead, we have to represent them with internal structure on a computer, so we might as well embrace that. Then the question is: What sort of internal structure should we use? As Greg really likes to do, he separated the calculus into the Pi-like part that has names and the set of names with their internal structure. Then he said, “I’ve got a set of things with internal structure—the Pi-like calculus itself. I’ll just close the loop and take the fixed point, and I’ll have this nice theory that doesn’t depend on anything else: the Rho calculus.” Once you do that, it also allows you to exploit the structure of the names, to use them as locations of processes, so we can get rid of the fresh-name component of the calculus and just use reflection instead.
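
A rough sketch of that “close the loop” move, assuming a simplified Rho-calculus-like grammar rather than the paper’s formal one: instead of drawing names from a separate infinite set of atoms, the type of names is defined as quoted processes, so the two definitions are mutually recursive and the theory doesn’t depend on anything outside itself.

```haskell
-- A simplified, illustrative grammar: processes and names defined in terms of each other.
data Proc
  = Nil                   -- the stopped process
  | Par Proc Proc         -- parallel composition
  | Input Name Name Proc  -- receive on the first name, binding the second, then continue
  | Output Name Proc      -- send a (quoted) process on the name
  | Drop Name             -- turn a name back into the process it quotes
  deriving (Eq, Show)

newtype Name = Quote Proc -- a name is a quoted process: the "fixed point" Greg describes
  deriving (Eq, Show)
```

Reflection then gives both directions: Quote turns a process into a name, and Drop turns a name back into a running process.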

Greg: That’s a nice description. That’s a good summary of what the paper does. There’s another aspect to it, which I learned the hard way back when I was doing a variant of XLang inside of Microsoft. We were looking at a language called Highwire, which was sort of a son of XLang, and maybe a halfway point between there and Rholang.

We were quite interested in a combinator-based compilation scheme. In the same way that the OCaml people started off with a combinator-based implementation, and David Turner implemented a combinator-based scheme for Lambda in Miranda, we implemented a combinator-based scheme for this Pi-like language and discovered, to our dismay, this incredibly huge exponential growth. It would swamp our machines. Even toy programs would just completely swamp our machines because of the exponential growth.

While it does make it easier to reason about certain aspects of programming, it really doesn’t make it easier to express programs and write them down at scale. There’s an enormous amount of compression that happens when you have an abstraction-like capability. That turned out to be a really interesting, hard-won insight that we got about the combinator stuff. 

I was curious: Christian, as someone coming to this fresh, what’s your thinking about a combinator-based approach to computation? 

Christian: This is a really interesting approach. That’s something I always wondered about with combinators: they give a nice variable-free description of languages, providing this compilation scheme. I always wondered if there was going to be this blow-up in the size of the program when you actually decide to implement it. How do you get around that kind of issue?

Greg: In general, I argue that Lambda abstraction—or binding in general—is a good thing. It gives you this really powerful abstraction mechanism. That’s why I’ve been a proponent of a higher-level view of abstraction. I really like the nominal stuff, or the stuff that Fiore et al have developed, where you make abstraction, alpha equivalence, and all of those phenomena worthy of a first-class representation in the theory.

Christian: We’ve been talking about that stuff a lot and I’m hoping some of my research is going to involve it. So far I’m not really understanding how we might take a higher-level approach to describing binding, and how that relates to this totally opposite perspective of trying to break Rholang down into these combinators.

Greg: One of the reasons that Mike and I have been interested in the combinator approach is because it does alleviate the LADL work from having to worry about binders. We can give a completely binder-free account of the LADL algorithm. We can have a notion of LADL that is not equipped to handle binding phenomena, and then apply it to something like the Rho combinator discipline, and then get a LADL-derived type system for a calculus that we know is rich enough to encompass all of the mobile capability. It has that advantage, although as we’ve talked about, there are some practical disadvantages because you really don’t want to use that for programming. If you have any way to do abstraction at all, that’s a far superior way to program in reality.

Christian: Have you implemented this compilation scheme yet for Yoshida’s combinators?

Greg: I did it a long time ago. Like back in the early two-thousands. I did that implementation actually with the guy who is currently the CEO of a company called MindTouch. Together we wrote that compiler. We were just shocked at how big the thing got. Mike, I’m curious as to what your thoughts are with respect to the LADL program and how we take it with respect to the combinator stuff versus some nominal account.

Mike: Nominal issues are tricky. Gabbay and Pitts invented this idea of nominal sets and then Clouston turned it into nominal Lawvere theories, which is sort of building the mechanics of variable substitution into the language that you’re using for describing your programming language. It’s a way of souping up your Lawvere theory. 

For anyone who doesn’t know what a Lawvere theory is, it’s very similar to the idea of an interface in your favorite object-oriented programming language—you describe some data type using the language of interfaces, and then the implementations of that interface are all the mathematical gadgets of that kind. You could have an interface for employees, or an interface for monoids, or an interface for whatever structure you have in mind. In this case, the language of nominal Lawvere theories lets you talk about binders for variables; it reifies them.
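
As a loose analogy only—Haskell type classes don’t enforce the equations that a Lawvere theory carries—here is the interface-versus-implementations picture for monoids, with hypothetical names:

```haskell
-- The "interface" (theory): the operations a monoid must provide.
class MyMonoid m where
  unit :: m            -- the identity element
  mult :: m -> m -> m  -- the binary operation

-- One "model": integers under addition.
newtype Sum = Sum Int deriving Show

instance MyMonoid Sum where
  unit                 = Sum 0
  mult (Sum a) (Sum b) = Sum (a + b)

-- Another "model": lists under concatenation.
instance MyMonoid [a] where
  unit = []
  mult = (++)
```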

The unfortunate part is that the idea of the nominal Lawvere theory is very new. There’s not a lot of information about how well it works with the existing theory. How do these new features change what it means to be a monad? For example, any Lawvere theory corresponds to a finitary monad. What do the nominal Lawvere theories correspond to? Nobody knows. Or at least, I don’t know. 

Greg: That’s mentioned as future work in Clouston’s papers.

Mike: That’s a downside. There is a model by Stark for doing binding, but that’s in terms of functor categories, which you don’t interpret in Set. Stark’s approach is a different, rather more complicated kind of denotational semantics. In Lambda calculus, a Lambda term corresponds to a partial function. It’s very clear how to interpret it.

If we can talk about this stuff using something close to a denotational semantics that we recognize, then that’ll be helpful. Christian’s work goes a long way toward that, laying down the basic theory involved in a graph-enriched Lawvere theory. This is one that lets you talk about rewrite rules between terms.

Christian: I’ve really started to fall in love with enriched theories. It gives you such a general framework for describing any kind of formal language with accompanying operational semantics. Our paper is almost done. We’ve just been having fun thinking of a broad range of examples. It seems like the main thing that’s missing from it is a notion of binding. Once the right notion of binding is put into this framework, I feel like you’ll be able to do huge amounts of computer science within that and translate most programming languages into this nice mathematical framework.

Like Greg was saying, recently I have been investigating an alternative approach to the nominal stuff: Fiore’s abstract syntax with variable binding. It does involve functor categories, and it takes a little more time to get used to, but it does seem really nice. Greg and I arrived at the conclusion that reflection—the reflective capacity of the Rho calculus—allows us to bypass a lot of the machinery involved in nominal set theory, because a lot of that machinery was just about shuffling around the atoms of the names in the Pi calculus, for example. Because with reflection you can dynamically create new names in a systematic way, hopefully a lot of that won’t be necessary. We won’t have to worry as much about names and we only have to worry about variables.

Greg: One of the things I think is so important—I’m not sure that people in the wider programming community, or the wider audience for RChain, understand how subtle a lot of these things are. The notion of binding: we’ve got three different possible explanations on the table. We’ve got the combinatory explanation, the nominal work of Gabbay and Pitts, and the abstract syntax of Fiore et al.

Each one of them is compelling in its own right. Each has its own agenda. It’s not always the case that those agendas are lined up exactly with how programming is used in industry…

Mike: That’s not even mentioning the one where you build the machinery for name handling into the theory itself. We could write a term constructor for building environments and then handle all of the substitution and lookup and so on in the theory.

Greg: That’s explicit substitution—yet another one. But the important point is that each of these different features of programming languages—whether you’re talking about binding or fresh name generation (and those are separate, different ideas, as Yoshida shows with her work), or you’re talking about concurrency, or you’re talking about a monadic abstraction—any of these language features is relatively subtle.

On the one hand, to give a good accounting of them, there’s a certain amount of work involved. But on the other hand, if you want type systems that allow you to constrain and control and see into the operations and structure and mechanics of your programs, then you also have to have this stuff nailed down fairly well. Sometimes it takes a while to really elucidate each of these features and how they interact.

That’s been part of both the joy and the frustration of the journey. One of the things that Mike has reiterated several times is that giving an account of names is tricky. We submitted an earlier version of this paper to a conference called EXPRESS/SOS. The reviewers were quite kind, in the sense that they said it’s written with consistently high quality. They encouraged us. They said this is an important line of research, don’t give up on it, but we have to reject it because there’s a type error in one of your equations.

I was like, “What?” I went back and looked and realized they’re right. There’s an easy fix, but when you do that easy fix, it ripples through more than a half-dozen equations throughout the paper. It was a lot of fine, detailed work to make sure that I threaded the correct type information through all the equations. It’s very tricky; it’s not a slam dunk. It’s a pretty subtle phenomenon. I think people should appreciate that about a lot of programming language semantics. We don’t really know what our programs do.

Christian: I just love thinking about the problem of binding and substitution because it’s the simplest possible aspect of computation that everybody takes for granted. But it’s really the most fundamental. 

Greg: I agree 100%. It’s been a problem that’s been on people’s minds for a long time. I know that John Reynolds proposed the functor-categories solution for fresh allocation, my goodness, probably 30 years ago. Schönfinkel proposed the combinator stuff sixty or seventy years ago—Mike?

Mike: Easily the twenties. 

Greg: Wow, okay. I was thinking the thirties. That’s a really long time. The takeaway that people should have from this is that these are really basic aspects of programming. It’s taken the better part of a century to get a handle on how these things actually work.

There’s a lot of depth to what programmers do. We sling these concepts around, but in many ways, it’s like the miracle of being able to use your hands. Nobody picks up a coffee mug by thinking, “Okay, well in order to do that, I have to use the tendons leading to my thumb and the top two fingers, and I have to use the tendons down the forearm.” Nobody goes through that kind of low-level instruction with respect to the mechanics of the operation of their arm.

The same is true with driving a car. People drive cars without having a clue as to how the engine is actually operating. In the case of a Toyota Prius, the engine is even sealed over; you can’t even do your own maintenance on the car. 

There are these kinds of phenomena where we use them, but we don’t know how they actually work. That’s what’s going on with a lot of programming languages. Unfortunately, that leads to a lot of costs in the industry. Javascript is a clear example: we’ve got this language that a lot of developers use to put together the infrastructure of the web, yet we don’t really know the semantics of Javascript. Not only do we not know the semantics, not knowing the semantics makes it hard to give a really compelling type…

Mike: Semantics differ between implementations. There is a formal semantics that has been implemented, but it is horribly complicated. It’s a several-hundred-page document.

Greg: My argument is that the average web-slinger doesn’t know that document, doesn’t understand the implications of that document. They’re slinging code without really understanding the meaning. 

Mike: They’re restricting themselves to a semantic subset of Javascript. There’s a project called Lambda JS that has identified that semantic subset and given a much easier formal semantics, as long as you restrict yourself to that case. But you have to be sure that you’re setting up your environment yourself. You can’t trust that somebody else hasn’t broken those semantics when you go to program, if their code runs before yours.

Greg: Yeah, exactly. Then there are languages like Ruby, where you can monkey patch, you can slide in underneath the definitions of operators and change the meanings of other people’s programs.

Mike: You can do that in Javascript, too. Not with combinators, but most of the basic functionality.

Greg: That’s right. I’d forgotten monkey patching is available in Javascript as well. These are pretty subtle issues. They end up resulting in a very high cost of development because one can sling code, not really know what it does, and then be very surprised at a lot of different outcomes. What we would love to do is to be able to constrain those surprises, at least a little bit. 

That’s where the value of good type systems comes into play. In order for the type systems to have the most visibility into these languages, we need to have a good account of all the features and how the features interact. That’s really at the heart of the work that the Rho combinator stuff is doing, and what Christian is doing with the enriched Lawvere theories: exploring and considering each different way we might think about these features so that we can then give good type-theoretic accounts of them. The aim is to climb up to realistic languages and give them good solid type systems, but to do that in a way that’s algorithmic.

Most type systems today have been handcrafted. When you talked about Scala’s type system, that was handcrafted, and the first version of that was handcrafted in a way that wasn’t even sound, which also results in a kind of cost to the industry. If we have an algorithmic approach to the generation of these type systems, then we get soundness and completeness of the type system by the construction of the algorithm. 

All of this fits together, marching towards a research program that delivers type systems across a wide range of computational phenomena. The Rho combinator paper is just carving off one tiny little slice of the language features to do that.

Any other thoughts about this work or how it applies to blockchain or anything else? 

Christian: I’m curious to hear you talk more about this paper. I’m still working through it myself. 

Greg: Sure, I can say a few words, and I’d love to hear Mike’s perspective as well. When we reworked the paper, I took a little bit more time to explain the role of compositionality and talked about it at two levels. Once you start thinking compositionally, it’s actually a surprise that the community didn’t find the Rho calculus much sooner. If you write it down diagrammatically, the idea of putting a loop in is just standard feedback loops in circuit design. It should have been found right away. 

People really haven’t gotten the compositional thinking into their bones yet, despite the fact that the computer science community, the category theory community, the types community, each of those communities is really committed and understands the power of compositionality. But it still takes a long time to internalize that style of thinking. 

The other level of the role of compositionality that I point out is that the proof of the correctness of the approach comes by composing the correctness that we get from Yoshida. Essentially, all we’re doing is taking her technique and threading the reflective machinery through her technique. That’s it. That’s the essence of the paper. 

In some sense, it’s not a particularly surprising result. We’re adding a couple of combinators to her combinator set for reflection, plus a DRef combinator, which takes the reflective thing and turns it back into a running program, and then just making sure that wherever she uses new, we replace that with an appropriate appeal to the reflective machinery. The biggest insight that one needs in order to do that is just that with the reflective capability—as it is described for this family of calculi—you can’t ever have a process the quote of which is used in any of the names of that process.

So if P uses X, X cannot refer to or be built out of the quote of P. Automatically, any name that’s built from the quote of P is fresh or distinct from any of the names used by P. You can locally find something that’s fresh just given a process. You don’t have to go and do anything magical to go to a name service and say, “Hey, give me a fresh name,” or go to some infinite supply of atoms and say, “Hey, give me a fresh name.” You get this kind of local property of freshness that is a nice feature of the reflective machinery. Those are the key insights as I see them in the paper. Maybe Mike has a different take on this.
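
Continuing the earlier grammar sketch (again an illustration, not the paper’s construction), the point is that a name built by quoting a process cannot already occur among that process’s names, so freshness can be computed locally rather than requested from a name service:

```haskell
-- A simplified free-name calculation over the sketch grammar above.
freeNames :: Proc -> [Name]
freeNames Nil           = []
freeNames (Par p q)     = freeNames p ++ freeNames q
freeNames (Input x y p) = x : filter (/= y) (freeNames p)  -- y is bound by the input
freeNames (Output x p)  = x : freeNames p
freeNames (Drop x)      = [x]

-- Quoting p yields a name that is "locally fresh" for p: no process can contain
-- its own quote among its names (a fact argued in the paper, not enforced by this code),
-- so no global name supply or "new" operator is needed.
freshFor :: Proc -> Name
freshFor p = Quote p
```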

Mike: That was it. It was basically taking what you had done to replace new and replication in the Pi calculus and then porting it over wholesale to Yoshida’s combinator set-up. The paper itself is very technical. It has lots of things going on, translating from Pi calculus to Rholang (more or less, Rho calculus), and then translating from Rho calculus to Yoshida’s calculus with her machinery copied in there, but then taking the complicated machinery from the translation of Pi calculus to Rholang and embedding all of that complication into the translation into Yoshida’s. It’s a bunch of code, but the ideas are very simple.

Greg: Exactly right. That’s the nature of the paper. Does that help you, Christian?

Christian: Yeah. I was wondering, what was Yoshida’s original motivation for the combinators?

Greg: I would encourage anyone who has the interest and motivation to go and read the paper. With the combinatorial approach, she’s able to get a handle on expressiveness, which is a very subtle phenomenon in computation; it’s the stratification of expressiveness that gives rise to the hardest problems in computer science. P equals NP is an example of an expressiveness stratification problem. That’s still an open problem after decades and decades. After Yoshida proves that this combinator set forms a basis for expressing all Pi calculus programs, she looks at subsets, and she says, “If I don’t have this combinator, what can I express? If I don’t have that combinator, what can I express?” She gives a lovely little diagram which shows the expressiveness categories built from the subsets of the combinators.

Mike: The same sort of thing happened in Lambda calculus, and we got Linear Lambda calculus and Affine Lambda calculus. She was breaking it down in much the same way. Given different subsets of these calculi, what sorts of programs can we write? What logics do they correspond to? I don’t think she actually got into which logics they correspond to.

Greg: She certainly was able to classify the classes of programs. That’s what her diagram shows—if you factor that through some LADL-like construction, then you’ll get a similar kind of thing for various logics.

Christian: Would RChain have use for that kind of stratification? 

Greg: Yes. 

Mike: You mean, would Rholang? 

Christian: Yes.

Greg: My sense is that financial contracts are restricted in a way that goes beyond general application programming. If you want to write a game, you probably want the full range of Turing completeness. My guess is that nearly all financial contracts need to have a notion of reversibility. You can get a notion of reversibility in the Rho calculus by using a continuation-saturated form. Since every step in your computation then gives you a reification of the continuation, you can squirrel away your state before you take any steps. But that’s a really onerous and heavy-handed way of doing things.

The other approach would be to restrict the expressiveness of your contracts so that you get reversibility without the same kind of cost, without the same kind of overhead. That would be a really great use for reasoning about financial contracts. I would submit that the financial contracts are going to be covered by some linear programming discipline because you basically want everything to balance out. 

If it’s a linear programming discipline without the use of bang or whimper—some kind of replication operator and storage operator—you get reversibility for free. That was an interesting point that Sampson proved almost fifteen years ago now. That would have significant applications for restricting and type-checking contracts within the expressive domain of financial contracts.

Christian: It looks like Yoshida’s combinators are nonlinear because they have the duplication and the kill combinators. Is there a corresponding linear replacement for those?

Greg: That’s the idea, exactly: you come up with good linear replacements. You can take inspiration from BCI. You can also take inspiration from Yoshida—Yoshida, Honda, and Berger did a lot of work on linear type disciplines for constraining Pi calculus programs. They were able to use this to give a type-guided interpretation of the Lambda calculus into Pi.

You can also use it just to pick out linear subsets. You can go the other way around: use a linear type discipline to pick out a sub-language, and then compile that down into the combinators. You’ll identify the restricted set that way. You can go either way round. You come up with your own linear combinators, or you come up with restricted usage of the Yoshida combinators, and then you can compare the two results. In some real sense that ought to commute in some way, but I don’t know how to make that crisp yet.

Christian: It seems like a big space to explore. It’s definitely not just one compilation scheme. These combinators are basically generators for the whole space of programs. There are tons of ways of presenting the Rho calculus. In terms of generating these basic programs, each one is going to have its own ups and downs.

Greg: Absolutely. The resulting algebra for SKI starts to look a lot more like classical algebra because you don’t have fancy, complicated generators like Lambda abstraction. You just have three letters and all the words you can form over those letters. It has a more generators-and-relations presentation.

Now, of course, there’s the operational semantics, but it’s a lot more like a classical presentation of an algebra. In fact, some of the presentations call that thing an applicative algebra. Yoshida’s combinators don’t have that property because she’s still got new everywhere. It’s sprinkled all throughout. 

When you go through the Rho combinators, the resulting structure looks just like a classical algebra in the sense that you now have these things that you combine with a par. You’ve got some commutative operation that you can use to join up these basis elements. It now has a presentation that looks a hell of a lot more like a classical algebraic structure. That’s what I meant at the top of the talk when I was saying it atomizes, or eliminates internal structure of, the calculus. You’re stripping away all these fancy term constructors. You just have par.

Any other final comments we want to make?

Christian: For any listeners interested in combinators, I recommend the book To Mock a Mockingbird. It’s a very cool book that teaches you combinatory logic through an elaborate metaphor of a forest of songbirds.

Greg: That’s a really fun book. Next time we’ll be talking about the Brain Candy talk I gave in a closed-door session, where I talked a little bit about rewriting the validator harness in terms of metabolically controlled continuations. There’s actually a whole bunch of results along those lines that connect the Casper work to biological metaphors. I’m looking forward to having that chat as well.