Subscribe to RCast on iTunes | Google Play | Stitcher

Greg Meredith is joined by Isaac DeFrain and Christian Williams for this second edition of Brain Candy.

Transcript


Greg: One way to characterize the delta between what we’ll talk about today and what we talked about last time is that the building blocks we were using to ring the changes on the different calculi were all related to the structure of the terms. We were looking almost entirely structurally at the kinds of things that could be rewritten, but we weren’t looking at the rewrites themselves. The rewrites also provide a source of building blocks for constructing new calculi.

In many ways, it’s one of the things that distinguishes this approach to the notion of dynamics from what has traditionally been developed within classical mathematics. It’s important to understand this if you really want to understand how there’s been this very dominant paradigm that’s held sway for several hundred years in terms of how we think about systems that have behavior, whether we’re talking natural systems like the gravitational effects of the moon on the tides or fluid dynamics or the kinds of actions we see in a cell-signaling regime. 

All of those have been treated in a particular way when people have thought about them mathematically. But when people think about computation since the 1930s, there’s been a different sense of what dynamics look like and feel like. To set the stage, let me talk about some standard algebraic structures that have been used to model dynamics. 

There are algebraic structures called monoids. There are different ways to conceive them; an ordinary listener might think about this in terms of clocks. If you imagine an analog clock with a round face and hands moving around, we know what it means to do certain kinds of movement around that face and understand that there’s a cycle. Let’s say it’s 11 in the morning and we go three hours forward, then we crossed the barrier of 12 and reset things and get back to two. There’s a cyclical nature—or a modular nature—to those kinds of systems that have behavior. 
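
The clock picture can be made concrete with a minimal sketch (the function name here is invented for illustration): the hours form a monoid under addition that wraps past 12, with an associative operation and an identity element.

```python
# A hypothetical sketch: hours on a 12-hour clock as a monoid under
# addition mod 12. "11 o'clock plus three hours" wraps around to 2.

def clock_add(a: int, b: int) -> int:
    """Combine two clock positions, wrapping past 12 (the monoid operation)."""
    result = (a + b) % 12
    return 12 if result == 0 else result

# Identity element: adding 12 (a full cycle) leaves any time unchanged.
assert clock_add(7, 12) == 7
# Associativity: (a + b) + c == a + (b + c) for clock values.
assert clock_add(clock_add(11, 3), 5) == clock_add(11, clock_add(3, 5))
# The example from the conversation: 11 in the morning plus three hours.
assert clock_add(11, 3) == 2
```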

We also see that kind of cyclical or modular behavior when we look at waveforms. There’s a cyclical nature to a sine wave that shows up. That’s one way of thinking about it, and that gives rise to the notion of a monoid, and a monoid gives rise to the notion of a group. Monoids don’t necessarily have undo kinds of behaviors. You could certainly wind the clock back, but we normally think of time as running forward. There’s not necessarily an undo behavior.

But in groups, for every kind of behavior, there’s an undo behavior. In groups, there are natural notions of transactions (or at least transactions that can be undone). If you have nonreversible behaviors, like pressing the button that fires the missile, there’s no undo on that. Maybe you could fire another missile and hit that one, but that’s it. That’s a different approach to transactional behavior: preventing the transaction from completing, as opposed to completing the transaction and then being able to undo it.
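
The undo idea can be sketched in a few lines (the `do`/`undo` names are invented here): in a group every element has an inverse, so any move can be reversed exactly, which is what a monoid does not guarantee.

```python
# Sketch: the integers under addition form a group, so every "transaction"
# has an exact undo, namely adding the inverse element.

def do(state: int, move: int) -> int:
    """Apply a move (the group operation)."""
    return state + move

def undo(state: int, move: int) -> int:
    """Apply the inverse of the move, reversing the transaction."""
    return state + (-move)

s = do(0, 5)        # perform a transaction
assert undo(s, 5) == 0  # the inverse restores the original state exactly
```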

Groups provide this kind of idea. There are a wide range of different notions, all the way up to vector space and things that can be built on top of vector spaces. Vector spaces are typically used in physics. What I want to say about all of these data structures is that the way we think about the dynamics is not really in the data structures themselves. We think about the dynamics most frequently as functions between these data structures. In particular, in physics, the dynamics are not captured by the vectors. The dynamics are captured by linear functions from one vector space to another or other kinds of functions from one vector space to another. That gives rise to a certain way of thinking about the world that actually colors the way people think about the world.

It’s directly related to these computational calculi. So when we, as Christian could confirm, when we talk about categories, we talk about gadgets in some kind of structure or information-preserving morphism between these gadgets. So we can think about a category of vector spaces. I’ve got a vector space and then I can look at some kind of function from one vector space to another, and all the dynamics that we used to model dynamics and physical systems live in those morphisms and not in the vector space gadget itself. 

That has a very strong implication in terms of what’s being preserved versus where the dynamics are. When you look at calculi, like the Lambda calculus or the Pi calculus or any of these other things, the dynamics are included in the structure.

Last time we spoke, we talked about different kinds of calculi that are built off of different term structures. We were looking at different ways that we could form processes from core building blocks, like names. The dynamics of a calculus, and what makes it a computational calculus in some sense, is that you have these rewrites that take one process shape to another. In the case of the Rho calculus, if you’ve got output meeting input or input meeting output, then that reduces to the continuation of the input along with the application of a substitution. If you were going to consider a range of different calculi, which is what we were doing last time, then you might want to consider a category of these different calculi to organize your thinking.

Now when you talk about what has to be preserved from one calculus to another, you have to preserve the rewrite dynamics, because the rewrites are packaged with the calculus—with the algebraic gadget comes this rewrite. Figuring out what is preserved in the rewrite is a very interesting question. Let me stop there and check in with Isaac and Christian and see if my rambling makes some sense.

Christian: It does. It’s an interesting question. It’s a little bit distinct from the categorical perspective. Like you said, I’m wondering if what you’re talking about is related to something like bisimulation?

Greg: Bisimulation is the notion of equivalence that naturally arises on top of rewrite systems. You could actually find a notion of bisimulation inside a category. You could look at trees of morphisms and talk about relationships between trees of morphisms. You can think of each morphism as a kind of rewrite, though categories impose extra equations or relationships on top of the morphisms; that erases certain intensional structure. If I start from a particular object or a particular collection of objects, and then I look at all the morphisms coming out of them, I can think of that as a structured tree that is a rewrite system. Then I can ask if there’s another tree living in that same category and if they have this bisimulation relationship.

That orientation is wrong with respect to Curry-Howard. Curry-Howard is going to ask that our computations are the morphisms. I just don’t think people understand this because there’s so much siloed thinking. People do not understand how differently computation thinks about dynamics. We’ve had this reign of the Newtonian, Laplacian, Lagrangian view of dynamical systems that has been very dominant in physics, and in other ways in chemistry and biology.

Computation does not take that perspective at all. It is really different in terms of the way it thinks about dynamics. When we look at the kinds of challenges that we find in physics, it might be that the computational perspective provides better tools.

The really glaring problem in physics is the disconnect between the small and the large. General relativity is somehow conceptually disconnected from quantum mechanics. Intriguingly, the computational dynamics are scale-invariant. They have a kind of fractal shape to them that looks like it might be a direction in which that problem is resolved. Even this broad-strokes lineup of the two notions of dynamics is really important. I don’t think it’s well understood in either community how fundamentally differently these two communities think about dynamics.

Isaac: That really makes sense. In your example of vector spaces or groups, you’ve totally separated the static structure from the actual dynamics of the system, whereas in these computational calculi, all of that is just mixed into one bag and you get everything at the same time. It’s obvious in a sense that the question of how you treat those dynamics would be radically different. Exactly what the important thing to focus on is, of course, another interesting question.

Greg: Exactly. I want to provide one other motivation, which is very physical and practical, and then talk about how, at a minimum, Rho provides a really interesting approach to the dynamical part of this, and contrast it to some of the other attempts that have been pursued by people like Milner, Leifer, Ole Høgh Jensen, and others. Back in the day, in 2002, I had hired Satnam Singh from Xilinx, where he was the CTO, into Microsoft to work on a little project I was working on where we were building a Pi calculus execution environment and a programming language based on the Pi calculus called HighWire. At that point I was privately, with other people, sharing this idea of the Rho calculus, but I wasn’t using that at all inside of Microsoft.

We were still very much committed to the Pi calculus as the core model of execution. Satnam, Alan Brown, who’s a genius logician, and Jan Gray, who’s a hardware guy as well, all got together with me and built a Pi calculus chip that we could render on an FPGA surface. One of the things that Satnam observed was that the surface area of the chip, relative to the surface area of the FPGAs of that time, was relatively small. Genius that he is, he said, well, why don’t we just tile the FPGA with these chips? He came up with a cool tiling algorithm.

As soon as you do that, the next thing you think about is, I’ve got this steady stream of work coming in. Back then XML was still a big thing, and I showed that there was an interesting encoding of XML into the Pi calculus, and that you could use certain Pi tricks to do some XML processing. You can imagine a work stream of XML-structured data coming in.

How do you assign that workload to particular chips on the FPGA? This is a nontrivial problem; scheduling problems in general are nontrivial, all the way up to the traveling salesman problem. An interesting thought occurred to me: there was another calculus, the Ambient calculus, which is kind of good at getting work into the right region. It’s more of a tree calculus, a tree rewrite calculus. It doesn’t even work by substitution. It works by rewriting whole trees, trees built from the terms.

The thought would be that you use an Ambient calculus to assign the work to leaf nodes in your trees. You organize your tiling as if it were a tree. Interior nodes—their only job is to get work out to leaf nodes. Then you have an assignment of “what’s a leaf” to “what’s an interior” based on a notion of tiling. This is the rough shape of the idea. Now you want to ask about the correctness of the end-to-end system.
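
That routing idea can be sketched roughly as follows. The tree shape and the load-balancing rule here are invented for illustration: interior nodes do no work themselves and only forward jobs toward leaves, which execute them.

```python
# Sketch: interior nodes route work down to leaf "chips"; leaves execute.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    children: List["Node"] = field(default_factory=list)
    load: int = 0  # jobs assigned so far (meaningful at leaves only)

    def is_leaf(self) -> bool:
        return not self.children

    def total_load(self) -> int:
        return self.load if self.is_leaf() else sum(c.total_load() for c in self.children)

    def route(self, job: str) -> "Node":
        """Push a job down to the least-loaded leaf under this node."""
        if self.is_leaf():
            self.load += 1
            return self
        # Interior nodes only forward, toward the lightest subtree.
        return min(self.children, key=lambda c: c.total_load()).route(job)

# A two-level tree over four leaf chips: eight jobs spread evenly.
leaves = [Node() for _ in range(4)]
root = Node(children=[Node(children=leaves[:2]), Node(children=leaves[2:])])
for i in range(8):
    root.route(f"job-{i}")
assert [leaf.load for leaf in leaves] == [2, 2, 2, 2]
```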

That means that you now want to reason about this uber-calculus, which combines the Ambient calculus with the Pi calculus. That is a really nontrivial problem. We don’t have a nice theory that works out in detail how to reason about correctness in these kinds of composite calculi. That was the sort of thing Robin Milner was considering at the time with a formalism called bigraphs, in which he was looking at a way to combine calculi.

If you think about the talk that we had last time, we were looking at how calculi arose from building blocks. But we didn’t look at the dynamics. Now the dynamics are right in our face if we want to reason about this kind of composite Ambient calculus, Pi calculus thing. Let me just stop there and check in. Does the motivation make sense to folks?

Isaac: Yeah. Let me try to clarify the picture that we’re thinking about. You’re saying that we have some sort of tiling or arrangement of these chips, and some of them are designated as leaf chips, and some are designated more as higher-level nodes. Those nodes will receive whatever the work is and, using the Ambient calculus, somehow decide which leaf node to pass that work to, and then the leaves will resolve whatever they need to resolve.

Greg: Right. That’s exactly right. 

Isaac: Okay, great. That’s really interesting.

Christian: About this idea of going from static mathematical structures with the dynamics represented as morphisms between them: it seems like in recent years, in mathematical computer science, there’s this idea of thinking of computational languages as enriched algebraic theories, where you weaken equations such as associativity to an actual associator within the theory. That seems like what is putting the dynamics internal to the language itself. That’s the translation into the mathematical world. Now, if we want a proper notion of a morphism between different languages, one that not only preserves the structure of the terms but also the structure of the rewrites, then it’s going to be about going from a 1-category to a 2-category. It’s preserving the rewrites also.

Greg: Yes, yes, exactly. If you look at the direction that the bigraphs work took, it was just a few years in when they realized that some kind of higher categorical structure is more likely to bear fruit in this regard. I’ll be honest, I think a lot of people find that the bigraphs work is nontrivial to work with. The calculations are not quite straightforward.

I remember when I spoke with Robin in that time frame, he said he was doing the hardest mathematics he’d ever done. What Robin found hard, others might find especially challenging. In many ways, the Rho calculus came out of my rebellion: functor categories are probably not the right solution if we want to provide a name-free account to software engineers who are not yet familiar with category theory.

In some sense, the Rho calculus represents a naive way of getting rid of names in the Pi calculus without a lot of extra categorical machinery. In a similar vein, I tackled this idea of composing systems. I started with a simplifying assumption: I’m going to keep the term structure the same. I’m not going to look at how to turn Pi terms into Lambda terms, and then somehow preserve Pi rewrites as Lambda rewrites. I’m not going to look at some mapping of Pi terms into Ambient terms, or Ambient terms into Pi terms, and then figure out how to preserve rewrites across that gap. I’m going to leave the term structure the same. So my morphisms don’t take us outside of the domain of our term gadget, whatever it is.

Then the question is, can I just isolate the dynamics and think about that compositional structure in the dynamics? There’s a very natural metaphor that I came up with that goes back to a folklore kind of thing—the whole butterfly flapping its wings and that relationship to a hurricane. It also shows up when we talk about biological systems. For example, if you look at a cell, the signaling regime from the cytoplasm down to the nucleus is happening on extremely compressed timescales, like femtoseconds. Whereas the production of proteins inside the nucleus reading some section of the DNA and assembling the proteins on the basis of that read is happening in minutes to hours.

Those are really widely separated timescales. Yet in the cell, they interact. It is really hard to get differential equations to work well in that setting, where you’ve got these grossly different timescales, but you have the dynamics interacting. 

Another place where we have this kind of idea of different layers of dynamics is in networking protocols. We commonly understand that HTTP runs on top of TCP, and TCP runs on top of a lower-layer protocol, and we understand what it would be like to connect an application that speaks TCP with an application that speaks HTTP. We understand that kind of protocol impedance matching. Does that make sense to you guys?

Isaac: Yeah. I’m just thinking back to the material you were talking about in your computational calculi course, about how to actually do this translation in terms of different calculi. 

Greg: That’s exactly what I wanted to bring to light, in the context of design-level thinking about calculi. Last time we talked about design-level thinking, which gave us some building blocks to explore different calculi designs. Now I want to give another set of tinker toys, which are basically these contexts and what they do for us in terms of giving us building blocks for the comm rules.

Christian: So you’re talking about the challenge of how to translate between languages that think on very different scales?

Greg: I’m really just talking about the combination of comm rules, or dynamics. I’m just opening up OmniGraffle right now. I’m going to make a slide and it will help me talk it through as we’re going. If we think about the ordinary comm rule in the Rho calculus, it’s the one rule that doesn’t have a hypothesis. The struct rule has a hypothesis that P reduces to P prime; the par rule has a hypothesis that P reduces to P prime. But the comm rule has no hypothesis. If you can recognize the left-hand side of the comm rule, you can rewrite to the right-hand side.

That comm rule, in the Rho calculus: on one side I’ve got a for comprehension, y from x. I’m reading some value, which I’m going to store in y, from the channel x. Whenever that value is available, I will run P with y appropriately filled in with the value I read. I’m running that in parallel with some process that is outputting, let’s say, the code of the process Q. It’s outputting Q, but it will be received as code, so it’s writing Q on x, and the other process is reading from x and stuffing that into y. When I do the rewrite, in the ordinary Rho calculus, that par is rewritten to P, in which I substitute the code of Q for y inside the body of P. That’s our basic comm rule.
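
That basic comm rule can be sketched as one step of term rewriting. The constructors and names below are invented for illustration, substitution ignores capture issues, and only the input-on-the-left orientation of the rule is handled.

```python
from dataclasses import dataclass
from typing import Union

# Hypothetical constructors for a tiny Rho-like term language.
@dataclass(frozen=True)
class Zero:
    """The inert process."""

@dataclass(frozen=True)
class For:
    """for(y <- x){ body }: read on channel x, bind the value to y."""
    y: str
    x: str
    body: "Proc"

@dataclass(frozen=True)
class Out:
    """x!(q): output the code of process q on channel x."""
    x: str
    q: "Proc"

@dataclass(frozen=True)
class Par:
    """left | right: parallel composition."""
    left: "Proc"
    right: "Proc"

@dataclass(frozen=True)
class Drop:
    """*y: run the process whose code is bound to y."""
    name: str

Proc = Union[Zero, For, Out, Par, Drop]

def subst(p: Proc, y: str, q: Proc) -> Proc:
    """Substitute the code of q for y in p (capture issues ignored)."""
    if isinstance(p, Drop) and p.name == y:
        return q
    if isinstance(p, Par):
        return Par(subst(p.left, y, q), subst(p.right, y, q))
    if isinstance(p, For) and p.y != y:  # stop if y is shadowed
        return For(p.y, p.x, subst(p.body, y, q))
    if isinstance(p, Out):
        return Out(p.x, subst(p.q, y, q))
    return p

def comm(p: Proc) -> Proc:
    """One comm step: for(y <- x){P} | x!(Q)  ~>  P{@Q/y}."""
    if (isinstance(p, Par) and isinstance(p.left, For)
            and isinstance(p.right, Out) and p.left.x == p.right.x):
        return subst(p.left.body, p.left.y, p.right.q)
    return p  # no redex found: leave the term alone

# for(y <- x){ *y } | x!(0)  reduces to  0
term = Par(For("y", "x", Drop("y")), Out("x", Zero()))
assert comm(term) == Zero()
```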

But we could make a different comm rule. Let’s imagine that we have a notion of context. Here a context is going to be any process term that I have poked a hole in. For example, in the case of a for comprehension, where we have the continuation P, I could put a hole there instead; or in the case of an output, where we have the process Q, I could put a hole there; or in the case of a par, I could knock out one side of the par or the other.

Isaac: To be filled with the process later.

Greg: Right. Then I can define a notion of applying a context to a process, which is just filling the hole with the process. If I use K to range over contexts and P to range over processes, then K applied to P is just going to be the process in which the hole is replaced by P in K. It’s another kind of substitution.

Since Rho is higher order, we can pass processes around. We have a pretty clear way we could interpret contexts directly in the calculus if we so desired. Every context can be turned into a for comprehension, because we can send processes along channels.

Instead of substituting the code of Q, what I could do is, given a context K, put Q into K, and then take the code of that and substitute it for Y. The interesting point is that we can now interpret this context K as providing some kind of impedance match. We’re putting Q into the body of P, but maybe Q is at the wrong level. It’s at the wrong scale, or it’s not at the right point in the protocol stack. So we need to do some kind of translation. The context represents that kind of impedance matching. Does that make sense to you guys?
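
The context-parameterized rule can be illustrated with a deliberately crude toy model, where processes are strings and a context is a string with a single hole `#` (all of the names here are invented):

```python
# Toy model: processes are strings; a context is a string with one hole "#".

def apply_context(K: str, P: str) -> str:
    """K applied to P: fill the hole in K with P."""
    return K.replace("#", P, 1)

def comm_with(K: str, body: str, y: str, Q: str) -> str:
    """Context-parameterized comm rule: substitute @K[Q] for y in the body,
    instead of substituting @Q directly as the ordinary rule does."""
    return body.replace(y, "@" + apply_context(K, Q))

# The ordinary comm rule is the special case where K is just the hole.
assert comm_with("#", "run y", "y", "Q") == "run @Q"

# A hypothetical "impedance matching" context wraps Q in a translation layer.
K = "translate(#)"
assert comm_with(K, "run y", "y", "Q") == "run @translate(Q)"
```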

Christian: Yeah, I think so. For the given comm rule, are you fixing a given context?

Greg: Each context will produce a new comm rule. Now notice that if I have two of them, say a comm rule for context K1 and a comm rule for context K2, then because we have this substitution structure, I can compose the K1 comm rule with the K2 comm rule by composing the contexts.
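
Composing the context-parameterized rules then comes down to composing the contexts themselves. In the same string-with-a-hole toy model (the `ip`/`tcp` context names are invented, echoing the protocol example):

```python
# Toy model: a context is a string with a single hole "#".

def apply_context(K: str, P: str) -> str:
    """Fill the hole in K with P."""
    return K.replace("#", P, 1)

def compose(K2: str, K1: str) -> str:
    """K2 after K1: plug K1 into K2's hole, keeping K1's hole open."""
    return K2.replace("#", K1, 1)

ip = "ip(#)"      # hypothetical "translate into IP" context
tcp = "tcp(#)"    # hypothetical "translate into TCP" context

# Applying the composite context equals applying one context after the other.
assert apply_context(compose(ip, tcp), "Q") == apply_context(ip, apply_context(tcp, "Q"))
assert apply_context(compose(ip, tcp), "Q") == "ip(tcp(Q))"
```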

Isaac: So each context would be a translation. The example I’m thinking of is TCP and IP: one context could be the translation into TCP and one context could be the translation into IP. You’re saying we can compose these things: we could have a translation into one, then go back to the original, then translate into the other if we wanted to.

Greg: That’s exactly right. This can give us a notion of morphisms. For each of the different kinds of calculi that we considered last time, we can build a notion of context, and now we can build a notion of comm rule associated with that. Now we can get a category where any one object is one of these things equipped with its comm rule. Now you can talk about what is structure-preserving in this case and how to do a composition of dynamics in this case. Does that make sense?

Isaac: Yeah. So you’re saying that because the dynamics are kind of baked into the model already, now, having this notion of composing comm rules given different contexts, we can extend the notion of morphism to include the comm rules.

Greg: Exactly. You can talk about structure preservation: exactly what it is you’re preserving. You have a notion of morphism that arises directly from this notion of the composition of the dynamics. You can backfill the notion of morphism as structure-preserving because you have this notion of composition.

Christian: Are the comm rules the morphisms? 

Greg: The comm rules dictate your morphisms.

Isaac: The comm rules are now something the morphisms will need to preserve, right?

Greg: That’s correct. That is precisely the idea. This is much more restricted than the scope that Robin Milner was considering with bicategories. But now at least I have this natural setting in which I’ve clearly got a notion of how to decompose dynamics, and therefore I have a notion of what structure I want to preserve in terms of dynamics. It’s effectively going to be some kind of structure preservation on the contexts.

Christian: Are we talking about a category of calculi?

Greg: That’s right. It’s a category of calculi.

Christian: Which structure exactly are you preserving?

Greg: You can now define the structure on the terms that you want to preserve, which extends to structure on the contexts that you want to preserve.

Christian: What would that look like? What is the structure? Are you breaking down the structure? 

Greg: The structure of the contexts is derived exactly from the structure of the terms.

Christian: As a tree structure?

Greg: Yes. Now we have a notion of morphism which allows us to build a notion of category. Then we can potentially loosen this to allow different term structures. But it’s more complicated; it’s going to dramatically increase the complexity. The technique of using context-based comm rules is widely applicable. You could do the same thing in the Lambda calculus: instead of doing a direct substitution, you apply a context to the term before you do the substitution. The technique is actually broader than just the Rho calculus. Does this make sense?
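
The same trick in a Lambda calculus setting can be sketched with the string toy model again (terms are strings, the context hole is `#`, and all names are invented): ordinary beta reduction substitutes the argument directly, while the context-parameterized version wraps the argument in K first.

```python
# Toy model: terms are strings; a context is a string with one hole "#".

def apply_context(K: str, t: str) -> str:
    """Fill the hole in K with the term t."""
    return K.replace("#", t, 1)

def beta(body: str, var: str, arg: str, K: str = "#") -> str:
    """(\\var. body) arg  ~>  body[ K[arg] / var ].
    With K = "#" (just the hole), this is ordinary beta reduction."""
    return body.replace(var, apply_context(K, arg))

assert beta("x + x", "x", "3") == "3 + 3"                  # ordinary beta
assert beta("x + x", "x", "3", K="f(#)") == "f(3) + f(3)"  # context-wrapped
```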

Christian: Yeah. This is really nice. 

Greg: I’m trying to make a couple of different points. For the RChain community, for the person who is interested in that, the Rho calculus is sitting in a larger context that allows us to explore, in a mathematically rigorous way, a wide range of different computational dynamics for our smart contracting language.

We can imagine that there’s a whole giant application protocol stack that grows up around RChain. Let’s say that we build a rich music application, and that music application has a whole protocol embedded in it that is entirely at the business level. Now we want to talk about the interaction of protocol dynamics with the protocol dynamics associated with, let’s say a social network dApp, that’s also built on top of RChain.

Now we can talk about the end-to-end correctness of the dynamics in the music app as it fits with the dynamics in the social media app, using this technique. That’s really important because we have to be able to build correctness in a step-by-step fashion. We can’t assume that we have this God’s-eye view of all the applications that have ever been built or all the variations of the protocol that are ever going to be rolled out. We have to be able to do these piecemeal semantics, compositional semantics, that allow us to do end-to-end correctness. That’s why this kind of technology is so important.

Isaac: Does this also allow for modularity with the rules that you’re using? Could we take some protocol, develop it, like say for a music app specifically, and then take that into another one just by composing the protocol with whatever it is that we’re currently looking at?

Greg: That’s the idea. It’s not just at the app level. We know that we’re going to be rolling out different versions of the protocol. Over time the protocol will evolve because we’ll fix bugs or we’ll find optimizations or the market conditions will change and we have to respond to new market requirements. All of those things are going to put pressure on the protocol to evolve. But not every validator will accept every change. 

Validators will say, “smart contracts, in this version, are much more lucrative than the ones that you’re proposing.” Others may take a more future-oriented look and say, “Oh yeah, we’re really excited about those kinds of changes, so we’re rolling those out immediately.” Now you have heterogeneous networks, and you want to be able to reason about the correctness of contracts that run end-to-end.

Isaac: Yeah. A pretty important point to make here is the ability to change the protocol: having this modularity and being able to make small changes, without going back and totally changing everything from the beginning, makes slightly modifying things much easier.

Greg: Yeah, exactly. That’s another piece of the puzzle. I also think that this gives us a really interesting way to begin to look at some of these other physical dynamics. That’s why I started with some of those metaphors, like the butterfly wings/hurricane metaphor. There’s something here in the computational view of dynamics that can say a lot to physics and chemistry and biology. I got interested in modeling very, very simple physical systems like knots. They have a crisp, clear interpretation in terms of the process calculi.

But you can go quite a bit further. You can definitely code notions of location. These are very abstract notions of location that have a much wider range of application than we typically find with metric spaces. My sense is that there needs to be a fairly energetic conversation about how these kinds of tools and techniques can help the application of mathematics to physics and to the physical sciences in general.

Christian: So you’re saying that this parameterization of the communication rules by contexts is going to serve as the primary link between languages? Because it’s going to suffice that if you have a certain rewrite in one language, there’s always going to be a way to translate it: not exactly to a primitive rewrite in the other language, but if you encapsulate it in this bigger context, it’s always going to be valid and meaningful in the other language?

Greg: I’m not saying that it’s always going to be valid. What I am saying is that this is a reasonable first guess as to the kind of discipline that you might want to impose. It’s only been in the last thirty years that we’ve had a gold-standard notion for programming language semantics or computational dynamics, a comparison of behavior. Specifically, the notion of full abstraction was only put forward in the late eighties/early nineties. Full abstraction basically says: if I have two computational systems, calculi or some other kind of system, and I have an interpretation from one to the other, then, letting P and Q be two different computations in system one, we can talk about their translations in system two. If P and Q are equivalent in the first system, then their translations need to be equivalent in the second system, and vice versa.
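
The full-abstraction condition can be sketched as a finite check. The systems, equivalences, and translation below are toy placeholders invented for illustration; the point is only the shape of the condition, P equivalent to Q in system one if and only if their translations are equivalent in system two.

```python
# Sketch of the full-abstraction condition over a finite set of terms:
# T is fully abstract when  P ~1 Q  <=>  T(P) ~2 T(Q).

def fully_abstract(terms, equiv1, equiv2, translate) -> bool:
    """Check the biconditional for every pair of terms."""
    return all(
        equiv1(p, q) == equiv2(translate(p), translate(q))
        for p in terms for q in terms
    )

terms = range(8)

# Toy example: system one equates terms mod 4; translating into residues
# mod 4, where the equivalence is plain equality, is fully abstract.
assert fully_abstract(terms,
                      lambda p, q: p % 4 == q % 4,
                      lambda p, q: p == q,
                      lambda p: p % 4)

# A lossy translation (mod 2) identifies terms that system one separates,
# so full abstraction fails.
assert not fully_abstract(terms,
                          lambda p, q: p % 4 == q % 4,
                          lambda p, q: p == q,
                          lambda p: p % 2)
```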

This is our notion of full abstraction. It’s obviously inspired by similar notions of equivalence in category theory. But this is the first time that we’ve had this kind of gold standard. There are lots and lots of interesting systems that are not related by full abstraction. For example, to date, there has been no fully abstract encoding of the Ambient calculus into the Pi calculus, which strongly suggests that there’s a higher degree of expressiveness in the Ambient calculus than in the Pi calculus.

Isaac: Is there an encoding of the Pi calculus into the Ambient calculus? 

Greg: That’s easier to do. Likewise, there is no behavior-preserving encoding of mixed summation into the asynchronous Pi calculus with input-guarded summation. It’s very clear that there’s some kind of expressiveness in mixed summation settings that just doesn’t occur in the asynchronous setting. To bring it full circle and answer your question, this notion of context can give us a wider spectrum of comparison, so we can talk about things that are not fully abstract, but that do have a context-based encoding that is fully abstract up to the context. That widens the discussion of the comparison of dynamics.

Christian: Has anything been done like this before? 

Greg: To the best of my knowledge, this is the first time I’ve ever talked about these things. I have notebooks and notebooks and notebooks filled with results that I’ve never talked about. For the RChain community, this is what they’re getting when they bet on the Rho calculus. It’s not just this one calculus. It’s this much, much wider array of tools and techniques that we can employ over the lifetime of the chain. My hope is that it’s not just RChain that this has applicability to, but a much wider setting or set of challenges that face us both in terms of physics and in terms of computer science.

Christian: You said the Ambient calculus was probably more expressive than the Pi calculus. Does it have a fully abstract embedding into the Rho calculus?

Greg: I’ve given an encoding; I haven’t ever proved that it’s fully abstract. Intuitively it looks fully abstract. We have a lot more machinery in the Rho calculus than we do in the Pi calculus.

Isaac: I don’t know much about the Ambient calculus, but it seems somewhat similar to this idea because it’s based on having the notion of context built-in.

Greg: It’s full tree rewrite. You don’t do substitution; you actually rewrite the tree. This is a really interesting idea in and of itself. If you’re rolling it out over a network, you’re potentially doing whole-network rewrites. You can see why that’s a lot harder. Even if you virtualize the network, it requires a lot more cooperation amongst the nodes in the network that are effecting these computations.

Christian: I hope you start sharing all these results.

Greg: Step by step. What I’ve found is that each of us has our own passions and pursuits and projects, and to lend attention to someone else’s stuff, there’s got to be a motivation. Step by step, what I’ve been doing is creating a network of motivations that make it worthwhile to go and look at these kinds of results.