
Subscribe to RCast on iTunes | Google Play | Stitcher

Following up on his Brain Candy talks on the weekly community calls, RChain president Greg Meredith is joined by software engineer and RChain contributor Christian Williams for the first in a series of discussions of rho calculi.

The slides below are for reference; scroll down for the full transcription.

Transcript

Greg: This podcast will focus on design level thinking around computational calculi. The point of it is to illustrate that the Rho calculus is not one calculus; it's actually a whole family of calculi. In the spring of last year, I gave a short course introducing the design of computational calculi. There was a lot of background material that the group needed to come up to speed on, which meant we couldn't really get to design level thinking. I was thinking again about how to convey that the Rho calculus is not just one calculus; it's a family of calculi. I've thought of a visual approach that can lift people up to design level thinking without having to know all the nitty-gritty: the algebraic details in the presentation and their relationship to type theory and the theory of computation. As part of the conversation, I invited Christian Williams. Do you want to introduce yourself?

Christian: Thanks for having me on this. I have been recently hired as a researcher for RChain along with John Baez here at UC Riverside. We’re working on the category theory aspects of RChain. We’re really looking forward to it.

Greg: Part of the reason for having Christian on the call is that this design level thinking is really fostered by a categorical gadget called a string diagram. That will become clear as we go through this. I know this is a podcast, but we’ve actually got some pictures that we’ll make available.

The first picture is of a circuit that has effectively two inputs and one output. That circuit—just think of it as a logic block that requires two signals to come in before you can produce a signal out. Now what goes on inside that logic block is just the Pi calculus. It’s generating the stream of all processes, which are legal processes in the Pi calculus. In order to generate that stream of all processes, there are two things we need that the Pi calculus does not provide. One is a notion of the zero process. You have to say what the stopped process is. The other is the notion of a name or a channel, also called the port. I use all of those terms synonymously. Part of the Rho calculus’s critique of the Pi calculus is that any real implementation of the Pi calculus cannot provide an infinite supply of names that are atomic with effective equality. That would mean that your equality is too smart to fit inside of a computer—a finitary computer that we can build today.

We'd have to provide a notion of name that has some internal gadgetry, the way strings have internal gadgetry, numbers have internal gadgetry, or URIs have internally recognizable components. As soon as you do that, you're sneaking in a notion of computation.

Christian: In practice, are they generating the names as natural numbers and then they just import those?

Greg: Different implementations do different things. Some implementations use some kind of De Bruijn indexing, perhaps together with natural numbers; some use strings; various implementations use different approaches. The most important thing is that all of them share this characteristic that names have an internal structure; there are internal components to names.

Even if the Pi calculus says all of those things are just messy details, if you want to have an effective theory, a theory that really helps out implementers, then you're going to need to articulate a theory of names. One way to do that without importing some theory of names from the outside world is to modify our circuit. The picture of the circuit now has only one input. Again, it's a box. There's one input, which is the zero. Then we tap the output for processes and we do a little feedback loop that goes back into names, but we make sure that before it's fed back in, it's not fed back in as a process; it's sent through a little converter that converts processes into names.

You could think of that as the quote operation in the Rho calculus (or Rholang). We can see that this is well founded because at least you have a zero coming in so you can feed the zero in, it goes out as a process that gets tapped, then fed back in and quoted. Now we have a name which is the quote of zero. That can be used to generate more processes.
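To make the feedback loop concrete, here is a minimal Python sketch of the circuit. It is purely illustrative, not Rholang: the AST classes and the `generate` function are hypothetical, and only the output form of process is modeled. Starting from the zero process alone, each round taps the process stream, quotes it into names, and uses those names to build new processes:

```python
from dataclasses import dataclass

# Hypothetical AST classes for a tiny fragment of the Rho calculus.
@dataclass(frozen=True)
class Zero:
    """0, the stopped process: the only input the circuit is given."""

@dataclass(frozen=True)
class Quote:
    """@P: the converter on the feedback loop, turning a process into a name."""
    process: object

@dataclass(frozen=True)
class Output:
    """x!(P): one process form built from a name and a process."""
    chan: object
    payload: object

def generate(rounds):
    """Run the circuit: tap the process stream, quote it into names,
    and feed the names back in to build new processes."""
    processes = {Zero()}
    for _ in range(rounds):
        names = {Quote(p) for p in processes}      # the quote converter
        processes |= {Output(n, p) for n in names for p in processes}
    return processes
```

After one round the set contains 0 and processes built from the name @0, such as @0!(0), exactly the well-foundedness Greg describes: the zero seeds the loop.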

You can see this in the string diagram. The interesting point about this diagrammatic technique is that now we can start to see that with this feedback loop, there are lots of other changes that we could bring. For example, since we know that the set of names, which I might call @P or at-process, is isomorphic to the set of processes, anything that I could represent or encode in names would be a reasonable name to import from the wild.

We've kind of tamed the thing that people were doing in the first place, which was either using natural numbers or strings or URIs or any of these other technologies. We've tamed all of that by saying: in reality, our names come from quoted processes, but since quoted processes are rich enough to encode strings or natural numbers (or anything like that), we can go ahead and import those in the raw, as it were, because we know that if we needed to, we could get rid of them.

Christian: Even though these are imported, they need to be expressible in the Rho calculus so that you can do an internal equality check on names.

Greg: In the theory, that's the case. But notice that we can just use the imported notion of equality. It's faster to compare two numbers on today's computers than it is to compare two Rho processes syntactically. We know that we could, in principle, turn the one calculation into the other. It's safe to do; we haven't stepped out of the bounds of the computational model. The same with strings, the same with URIs, and a bunch of other things.

What that looks like in the diagrammatic picture is now we have a box, and there's an input for the zero. There's also an input for these names that we've imported, which we'll just call "in," and there's a little junction, because we tap off of the output, which is labeled P. We feed that back into the top, and at the same time we also feed in the imported names.

Now we have two different kinds of names: either an import name or a quoted process. This is starting to look a lot more like Rholang. It’s how we get from the pure Rho calculus to Rholang, and how we know that Rholang is safe. Because when we import these external names, we know that we haven’t done anything messy or fussy semantically. We’re just doing something that we could do in the pure Rho calculus, but because it’s friendlier or nicer to human beings, we can go ahead and import these other kinds of names.
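The two kinds of names can be sketched the same way. This is an illustrative Python fragment with hypothetical class names, not an implementation: a name is either a quoted process or an imported value, and imported values get the host language's fast equality while quoted processes fall back to structural comparison:

```python
from dataclasses import dataclass
from typing import Union

@dataclass(frozen=True)
class Quote:
    """@P: a name built by quoting a process."""
    process: object

@dataclass(frozen=True)
class Imported:
    """A name imported 'in the raw': a number, string, URI, etc."""
    value: Union[int, str]

# A name is either a quoted process or an imported value.
Name = Union[Quote, Imported]

def names_equal(a: Name, b: Name) -> bool:
    """Imported names compare with the host's fast machine equality;
    quoted processes compare structurally on the AST."""
    return a == b
```

So `names_equal(Imported(42), Imported(42))` holds cheaply, which is the safety argument: the fast comparison agrees with what the structural comparison on the encoded processes would have said.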

The interesting thing is that we can employ this same technique on the other set of inputs. The only built-in input for the Rho calculus is the zero process, the inert process, or the thing that stopped. Because processes encode all Turing computable behavior, it’s also safe to import other kinds of Turing computable behaviors, such as arithmetic. We can import the natural numbers together with arithmetic as built-in processes.

The diagram here is the original Rho diagram. Now we do the same kind of joining trick that we did before, but over on the zero side of the inputs to the process box. So we have a little box that takes in two inputs. Those are the built-in processes and the zero. Then we feed those two streams as one stream down into the zero input for the process circuit.
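The joining trick on the process side can be sketched analogously (again a hypothetical Python illustration, not Rholang): the built-in behaviors and the zero are merged into one stream feeding the ground-process input of the circuit:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Zero:
    """0, the stopped process."""

@dataclass(frozen=True)
class Builtin:
    """An imported Turing-computable behavior; here, a natural number
    standing in for numbers-with-arithmetic."""
    value: int

def ground_stream(builtins):
    """The joining box: merge the built-in processes with zero into a
    single stream fed into the zero input of the process circuit."""
    return {Zero()} | {Builtin(n) for n in builtins}
```

Feeding `ground_stream(range(3))` into the circuit would seed it with 0 plus the built-ins 0, 1, 2, rather than forcing those numbers to be Church-style encodings.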

Christian: This is very practical, because in theory you can do arithmetic in the Lambda calculus or the Pi calculus or the Rho calculus, but it's definitely a lot more involved than the simpler ways we can do it now. Like you said, this is what Rholang has had in mind the whole time.

Greg: That's exactly right. It's a funny thing because, again, we had these intuitions from functional languages. The functional languages, like Lisp or ML or OCaml or Haskell, all intuit the idea that you could just swap in some built-in behavior for the Lambda terms that would normally encode that. To the best of my knowledge, no one has codified it in quite this way. Whereas here it's very clear what it is we're doing, what the semantics are, and why it's safe.

Christian: Can you go into a little more detail about what you mean by safe?

Greg: If someone presents us with some computational behavior and we don't have a crisp, clear way to encode it into the Rho calculus, then we're left with a big question mark as to what the semantics are of including that feature in the language.

Christian: So when you draw these boxes, these junctions that are importing, that’s an actual translation.

Greg: That’s exactly right. Probably a more accurate way to say it is there’s a refinement of this diagram down to the original Rholang diagram where B gets boiled away.

Christian: I see.

Greg: That was a great question. The interesting point is that we can combine both of these things. We can import built-in names and we can import built-in processes. Now we are effectively at a big chunk of how Rholang sits semantically with respect to the Rho calculus.

The biggest difference between this new gadget and Rholang is that there are some additional control flows, syntactic sugar, that is added into Rholang, such as choice and join. We know how to do both of those as well. That’s kind of outside of the scope of this discussion.

Christian: Is that control flow coming from the combination of B and in?

Greg: No, the control flow is a separate thing. Choice is a separate encoding; you have to treat it slightly differently. That's really at the operational semantics level. You modify the for comprehension to take in multiple inputs, and then you modify the comm rule to allow for multiple outputs to hit those multiple inputs.

Christian: It’s really impressive how well this idea lends itself to this visualization. I think that’s going to help people a lot.

Greg: I hope so. I'm trying to enable this design level thinking, for people to understand how much semantic terrain we can cover with just one framework. As you and I were talking about earlier, when I began studying computer science, it was really a much wider field. People have become acclimatized to this idea that to go from just one idea to the next, you have to completely switch formalisms.

People are used to going from regular expressions to context-free languages to finite automata with a couple of pushdown stores, but in fact, those are all really different gadgets. Here we've expressed a range of computational characteristics with variations on a single gadget.

Christian: That’s completely analogous to what’s happened in math in the past 50 years or so. Because all the different worlds of math were fairly separate, and then there were subjects that were kind of at intersections of certain subjects, but it was only recently that we finally internalized this broader perspective of viewing everything in terms of objects and morphisms and those higher relations.

Greg: This is very much inspired by that kind of trend. To that end, part of the reason I mentioned context-free languages and regular expressions is that there are interesting computational phenomena that are less expressive than Turing complete, such as regular expressions. It's really hard to sit at a Unix terminal and make your way around without regular expressions.

Likewise, it’s really hard to navigate input streams in a program without regular expressions there. They’re very useful. Similarly, for any kind of language-based computational phenomenon, it’s really hard to navigate the world without context-free languages. They’re a significant tool. Parsers and parser generators are a significant part of programming today. My feeling is that you’d want to be able to smoothly capture all of those capabilities as well.

It turns out that you can do a similar kind of thing in the Rho calculus. Imagine the original Rho calculus diagram: we have our box with the zero input, and we tap off of the output where we generate processes. Before we allow a process to be fed back in quoted form to be a name, we check it for a property. The property that we're checking is the syntactic form of the process.

For example, we might check to see that we only allow names that are output forms, or we only allow names that are for comprehension forms, or we have a finite set of names that are allowed, and either the process is in that finite set or it isn't. These are the kinds of filters that we would employ.

Christian: How could you filter for a syntactic notion that's more complicated? Such as: give me only a context-free sub-calculus of the Rho calculus. How would that work?

Greg: The big trick separating context-free from regular is paren balancing. You want a subset of the Rho forms that allows for regular expressiveness plus paren balancing, and you can kind of see how you might do paren balancing by matching inputs and outputs.

Christian: What do you mean by that balancing?

Greg: Parenthesis balancing. Being able to check that an S-expression is well-formed: every open has a corresponding close. You can see how every open having a corresponding close corresponds to a discipline on the for comprehensions and the outputs. It's a neat trick; you can apply this filter and that gives you a sub-calculus of the Rho calculus that has this restricted expressive power. Again, it's all within this single framework.
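As a rough illustration of the paren-balancing discipline, here is a minimal Python sketch (the function name and the strings are purely illustrative, not Rholang) of the check that a regular-expression recognizer cannot do but a context-free one can:

```python
def balanced(s: str) -> bool:
    """Paren balancing, the discipline separating context-free from
    regular: every open paren must have a matching close."""
    depth = 0
    for ch in s:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
            if depth < 0:  # a close with no matching open
                return False
    return depth == 0
```

A recognizer needs this unbounded counter (a pushdown store, in effect), which is exactly why no finite-state device suffices for well-formed S-expressions.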

Christian: What’s an example of a time when you want to make one of these restrictions, just a simple example?

Greg: Let’s say that you want to reason about a sub-calculus that is context-free. That to me is really good. You’re interested in that because you know how to do parser generators.

Another example that’s really important, especially for the future of Rholang, has to do with providing a total functional language. A lot of time, what you’d like to do is compute a value rather than evolve to new behavior. A lot of programmers are more comfortable and familiar with this idea of computing a value, which is exactly why Rholang 1.0 is the shape that it’s in.

Christian: I just realized this serves as a bridge for programmers who are not used to this kind of thinking. This filtering would be precisely the bridge: you can start with a sub-calculus that you're more familiar with and then expand.

Greg: That's exactly right. One sub-calculus is a total functional language, which means it's not just that it's confluent, so the non-determinism doesn't really matter in terms of the value, but it's also terminating. For example, the well-typed system of math expressions is a total functional language.

We came in at Turing complete, and then we looked at things that are less expressive than Turing complete. We can also consider things that are above Turing complete. Maybe some people would be shocked or surprised by the idea that computation could be beyond Turing complete. It turns out that real number computation, dealing with the actual real numbers and not some computable fragment or subset, is clearly greater than Turing complete. In fact, there's a whole tower of expressiveness that goes out beyond Turing complete. It's an infinite tower, where the real numbers can be considered as oracles to Turing machines that answer certain questions that are beyond Turing complete. In the same way that you have these hierarchies of cardinalities, you have these hierarchies of oracles that provide answers that some machine at that stratum cannot answer.

We can do a similar kind of thing. Let me just start with a simple example. You can imagine that there are names that are not reachable by the Rho grammar. In particular, these are names that are fixed points. If you write down the recursive equation, X is equal to the quote (the at sign) of an output of X on zero, that's going to unfold to zero bang, open paren, zero bang, open paren, ad infinitum. We can represent that kind of gadget in terms of lazy calculations. You could write it in Haskell or any number of languages that provide lazy computation, which allows you to capture fixed points like that. Then you could extend Rholang to have the usual quoted processes as names, plus that name.

It turns out that if you have that grammar, there are names that are unreachable by that grammar, which you could close up over by adding additional fixed points. In fact, the limit of that process is having a fixed-point operator in the definition of names.

Christian: Are you having to explicitly list out which of these recursive processes you're going to be adding? Or are you saying that you can add all fixed points of the grammar?

Greg: Typically, there's a notation for fixed points that uses the Greek letter mu. Mu allows legal expressions mu x.P, where x is a name that is a bound variable in P, and then you take the at of that. Now we say that names in this calculus are either quoted processes or these mu x.P expressions. That gets all the finitely generatable infinitary ones. There are still more beyond those, but we can write those down.
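To make the "ad infinitum" unfolding concrete, here is a small Python sketch. It is purely illustrative: the textual rendering of the fixed point is an assumption, not Rholang syntax. It computes finite approximations of the infinite term that the mu name denotes, where `"..."` marks the part no finite derivation of the grammar ever reaches:

```python
def unfold(n: int) -> str:
    """Return the n-th finite approximation of the infinite unfolding
    0!(0!(0!(...))); every finite n leaves the '...' remainder, which
    is why the fixed-point name lies outside the finitary grammar."""
    term = "..."  # the unexpanded remainder of the fixed point
    for _ in range(n):
        term = f"0!({term})"
    return term

print(unfold(3))  # 0!(0!(0!(...)))
```

A lazy language like Haskell can tie this knot directly (`x = wrap x`), which is the sense in which the fixed point is representable even though no finite unfolding reaches it.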

The diagram for this includes a little box for the fixed points that feeds into your name gadgets. You have the tap off of the output P that feeds into your little combiner, along with another diagram that's generating these fixed points that feed into the in. It's a lot easier just to look at than for me to explain in English. We'll make sure that the diagrams are available for people who want to take a look.

There's a point that I made before, which is that these infinitary gadgets are very much like roots of numbers. It's very much like Galois theory. This kind of extension is almost a perfect analogy with the idea of taking a field and extending it with a root; when we extend the field with a root, that root is like a name that is one of these fixed-point gadgets. That turns out to be a powerful metaphor for how to think about not just the Rho calculus but also, going back to the Pi calculus, how to give a semantic interpretation of the new operator in terms of one of these roots.

Christian: This has to do with thinking of recursive data structures as polynomials. Is the community familiar and comfortable with that idea? I assume that was a part of your class.

Greg: I did not cover that in the class, but it's well known in the programming language semantics area. I don't think that the programming language semantics folks have connected this idea of roots to it, though. But people in the RChain community are certainly familiar with the idea of an unforgeable name. These roots are very much like unforgeable names.

Once we've opened the door to these fixed points, or infinitary structures, there's no reason why we can't just add in all kinds of crazy oracles. Any constructible number, any number you can get to with a compass and straightedge, is reachable by one of these sequences of field extensions via roots. That's a theorem from way back. But there are numbers, like the transcendental numbers such as Pi and E, which are simply not reachable in this fashion.

We could imagine throwing in names that are like Pi and E. Just so that we have a sense of how numerous the transcendentals are: if you threw away the rational numbers from the reals, you'd never miss them. If you wound the real numbers around a dartboard, and you had a dart that was infinitely precise, so that whenever you threw the dart you were guaranteed to hit a number on the dartboard, and you threw the dart infinitely many times, it would never hit a rational number.

Christian: With something like E, do you think that corresponds to a certain canonical process that’s important for some reason?

Greg: For sure, because it's related to the derivative, so it's definitely related to a canonical process. We can certainly define a notion of derivative on processes; we can ask for the fixed points of the derivative, and then E would be, in some sense, the quotation of that process. From the point of view of expressiveness and computability, one of the things that's really cool is that we can now just feed into our calculus these other crazy wild names that bring in information that's simply not reachable by the normal notions of computation. They become, again, oracles that computation can consult. So we get this stratification.

We have one framework that gives us less than Turing complete, Turing complete, and greater than Turing complete. We don’t just have to do this on names. We can also do this on built-in processes. We can now include oracles that do all kinds of crazy stuff for us.

Even before we go out into the outer stratosphere of computation, there are still all kinds of fun things we can do inside Turing complete. An example is the red-black calculus. This is another technique that I've never seen anyone else contemplate, and it can be used in lots of different domains. I'll give an example in the Rho calculus and then point out that you could apply it to set theory as well.

You can take the diagram that we built for the Rho calculus that had additional built-in processes, and you can make two copies of that diagram. Imagine you've got a red copy of that circuit and a black copy of that circuit. You tap off of the red process output and you feed it into the built-in port for the black processes. Likewise, you tap off of the black process port, where we're generating the stream of processes coming from the black diagram, and feed that into the red built-in port. Now they're mutually recursive. You have two calculi that are mutually recursive on each other.

One way to think about this is that it represents a kind of resource-sharing capability. So a red process could run to zero and terminate, or it gets to the point where it’s a black process. At that point, you swap over into black processing. You can run that down to a zero or to a red process, and then you’re back into the red processing. The way to think about that is you’ve got one resource and you’ve got a yield operation between two different entities that are utilizing that singular resource. The yield operation corresponds to switching back and forth between red and black.
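The yield discipline described here can be mimicked with coroutines. Below is a minimal Python sketch (the names `red`, `black`, and `share` are hypothetical; Python generators stand in for the red and black processes) of two mutually yielding computations sharing one resource:

```python
def red(n):
    """Hypothetical red process: n units of work, yielding after each."""
    for i in range(n):
        yield ("red", i)

def black(n):
    """Hypothetical black process, the mutually recursive partner."""
    for i in range(n):
        yield ("black", i)

def share(red_proc, black_proc):
    """One resource, two calculi: each step hands control (the single
    resource) to the other colour, like the yield keyword."""
    turns = [iter(red_proc), iter(black_proc)]
    trace = []
    while True:
        for t in turns:
            step = next(t, None)
            if step is None:  # that side ran down to zero
                return trace
            trace.append(step)

print(share(red(2), black(2)))
# [('red', 0), ('black', 0), ('red', 1), ('black', 1)]
```

The strict alternation in the trace is the resource-sharing discipline: neither colour can take a second step until it has yielded to the other.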

Christian: This is such a good idea. This is building in resource awareness at the most basic level.

Greg: It's cool. One of the things that's interesting is that you could encode this back down into the Rho calculus. The way to do that: because the collection of names is infinite, you can have two infinite subsets. You can bifurcate the namespace, and then you can impose a discipline on processes that only use one namespace or the other. You could completely encode this red-black calculus into a discipline on using the names, but that turns out to be messy. It's hard for a programmer to get the name discipline right, but it's very easy for them to just choose one keyword, yield. You could take the higher-level clean language that has this yield keyword and compile it down into this more sophisticated namespace management stuff. That's one of the values of the red-black calculus.

Christian: These diagrams really start to expand thinking about what's possible. Somebody's initial reaction to seeing the diagrams might be: how is that actually implemented? Is it completely summarized in the grammar generators for the Rho processes right next to them?

Greg: That's right. On the diagrams that I created, what goes inside the boxes is summarized in the algebra. What we're trying to do is show that there's a box-and-wire way of thinking and there's an algebraic way of thinking, and that they fit together in a certain way. In some ways, it's the difference between algebra and geometry. As in algebraic geometry, you can go back and forth between those two different ways of thinking.

Christian: That's what string diagrams are all about.

Greg: That's precisely right. I'm not sure exactly in which category it would be best to interpret these string diagrams. It's pretty clearly symmetric monoidal, but apart from that…

Christian: Very nontrivial feedbacks.

Greg: That's right. Being able to do feedback means you have to think about which categories can interpret the feedback. In the final picture, I summarize all the diagrams that were explored so that people can get a sense of the variety of calculi. We start from the Pi calculus and go all the way out to the red-black calculus. This gives us a sense of the kind of features we would want in the category that would interpret these string diagrams, but it also helps people see how this can organize our thinking when we're considering which features to include. It gives us a kind of architectural playground.

Christian: This is some really amazing stuff.

Greg: Thank you. That’s very kind of you to say. I just want people to be able to see that there’s more there than just the Rho calculus. It’s a lot more than that and we’ve barely scratched the surface. This is what sits at the heart of RChain. The sort of little engine of creativity is right at the heart of RChain.

Christian: When I see them all together like that, one of the things I wonder is: say two people are using different calculi that are similar. Obviously, this framework implicitly gives a way in which they're related. But how do you get two different calculi to communicate?

Greg: Oh man, I'm so glad you asked that question. I wish we had a whole other hour session. Here's what I can say. Milner and Leifer and some other students went after a thing called bigraphs, in which they were trying to create a semantic framework in which they could put calculi side by side and compare them. I spoke with Robin at the time that he was doing bigraphs, and he said it was some of the hardest mathematics he'd ever done in his life.

A lot of people recognize that the bigraphical framework is challenging. It’s not as easy conceptually as the Pi calculus, and for good reason. There’s more going on because he wants to be able to account for Ambient calculi and Lambda calculus and a bunch of stuff within a single framework. At least with the Rho calculus framework, I have a very crisp way of talking about the thing that you’re trying to capture when you talk about a morphism from one thing to another.

Before I go there, let me just quickly motivate. Why would you want to do this? Why would you want to situate two calculi next to each other? It turns out in 2003, Jan Gray, Alan Brown, Satnam Singh, and I all worked together to build a Pi calculus chip. We realized the chip on an FPGA. And then Satnam realized that the chip was so tiny that he could tile the surface of an FPGA with this chip. Now you suddenly have this problem of how to assign work to each of the chips.

It occurred to me that you could do a tree-like structure in which the leaves are these Pi calculus chips and the interior nodes are Ambient calculi, which are all about shunting the work off to the right regions, ultimately getting down to the leaves, which are doing Pi calculus-like computation. Now you literally have this hardware-realized combination of Ambient and Pi.

Now in order to reason about the correctness of the behavior of this thing, which is doing Pi calculus at scale with a giant workload, you want to be able to reason about both Ambient and Pi in the same thing. That is a strong motivation, in my opinion, for the kind of work that Robin was doing, whether or not bigraphs is the right way to do it.

Christian: Is Ambient calculus a sub-calculus of the Rho calculus?

Greg: You can do an encoding, but it's not a sub-calculus. They're very different beasts. At least if you restrict yourself to families of Rho calculi, you can talk about the comm rule in a new way. You can imagine that when a for comprehension meets an output, the output communicates the process Q into all the locations where the bound variable is; it communicates the quotation of Q into all of those. Now instead of communicating Q, we could wrap Q up in a context K. What this represents is a mediation between an environment and… it allows you to scale.

A lot of biological phenomena show this. We've all heard about butterfly wings and hurricanes, but that metaphor is actually real in a cell. The kind of temporal scale that happens in the cytoplasm is like femtoseconds; it's really, really fast. Whereas what's happening in the genetic machinery, when the DNA is read and proteins are produced, is happening at the second level or even the minute level. So you have this vast scale difference, and you could imagine that accounting for that is done in terms of this context thing. It's a great place to stop and leave this as a tantalizing unfinished idea that we'll handle during another Brain Candy podcast.