Developer Learn Podcast RCast

RCast 30: Reaction Rates in Rho-calculus

Subscribe to RCast on iTunes | Google Play | Stitcher

Greg Meredith and Isaac DeFrain discuss reaction rates as they pertain to RChain, Rholang, and Rho-calculus. The slides below the transcript correspond to the talk.


Greg: Today, I wanted to talk about the idea of reaction rates. It’s a notion that comes to us from an application of the mobile process calculi to the modeling of physical phenomena, such as chemical reactions and biochemical reactions and biological processes in general. 

The semantics of modeling reaction rates in the process calculi end up getting complicated. The rates are associated with channels; then you do a probability distribution over a selection of comm events based upon the reaction rates. It’s easy to code up, but getting the semantics compositional, on good sound footing, is not trivial. Radu Mardare and Luca Cardelli came up with a version that’s pretty good. I recommend that people take a look at it if they’re interested in stochastic Pi calculus.

Isaac: Is the semantics that they wrote up compositional?

Greg: Yes, but it’s pretty complicated. A few years ago, I had this thought fly by that with the annihilation-style reduction semantics that’s available with the Rho calculus, I could design namespaces to give me whatever reaction rates I wanted. The thought is that you have this kind of recursive reduction relation; I’ll make sense of that. With the recursive reduction relation, at every recursion step you calculate a notion of rate, until you get all the way down to the ground, and then that’s a unit. From there you can design namespaces, because you have an infinite number of names associated with the infinite processes. You could design namespaces that give you whatever reaction rates you wanted.

Isaac: One thing I wanted to ask about this annihilation-based comm rule: it’s totally different than the usual comm rule that we’re talking about, right? The usual comm rule is not even a specific instance of this annihilation comm rule. Is that right?

Greg: The relationship is a little more nuanced and subtle. For those people who are listening, I have a set of slides we’ll make available on the website. I’m staring at the first slide. It addresses the notion of annihilation itself. This is part of a more general idea that Milner put forward a long time ago: if you had some kind of algebra on whatever your names are, then you could define some kind of abstract relation, called the channel co-channel relation. You could use that to determine when you get a reduction. If you have a for comprehension and an output that meet together on a channel co-channel pair, then you can get a reduction. Then the annihilation idea is to supply a channel co-channel relation.

Isaac: That makes sense. Just to clarify, when you say channel co-channel, you’re just talking about a source and target channel, right? 

Greg: That’s exactly right. They’re either ends of a communication channel. One notion of annihilation—there are many—but one notion is: let’s say P annihilates Q (or they annihilate each other, because it’s a commutative relation) if, for any R that P par Q eventually reduces to, R eventually gets reduced to zero. What that means is that all paths out of P par Q eventually get to zero. That’s the definition.
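
To make the quantifier structure concrete, here is a minimal Python sketch (an assumption for illustration, not anything from the slides) that checks the annihilation condition over a toy reduction graph, reading “eventually gets reduced to zero” as “can still reach zero from every reachable state”:

```python
ZERO = "0"  # the inert process

def annihilates(initial, reduces):
    """reduces maps a state to the list of states it can step to."""
    # Collect every state reachable from the initial term.
    reachable, stack = {initial}, [initial]
    while stack:
        state = stack.pop()
        for nxt in reduces.get(state, []):
            if nxt not in reachable:
                reachable.add(nxt)
                stack.append(nxt)

    def can_reach_zero(start):
        # Plain search: can this state still get down to ZERO?
        seen, frontier = {start}, [start]
        while frontier:
            s = frontier.pop()
            if s == ZERO:
                return True
            for n in reduces.get(s, []):
                if n not in seen:
                    seen.add(n)
                    frontier.append(n)
        return False

    # Every state reachable from P | Q must itself be able to reach zero.
    return all(can_reach_zero(s) for s in reachable)

# Toy graph: "P|Q" can step two ways, but both ways bottom out at 0.
graph = {"P|Q": ["R1", "R2"], "R1": [ZERO], "R2": ["R1"]}
```

Note that zero trivially annihilates zero under this check, which is exactly the ground case discussed below.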

With this kind of quantification, you have to be careful if you’re trying to think about it in terms of Lawvere theories or enriched Lawvere theories. That’s why I throw this out there. I’m interested in testing Christian Williams and John Baez’s notion of graph-enriched semantics. Stochasticity is one area where we need to check and see whether their method extends. The other is this annihilation idea.

The annihilation idea, in the context of stochasticity, might give us a reconciliation between a normal notion of reduction and a stochastic notion of reduction. The thought is that you have a new comm rule that says that if P annihilates Q, then if we hang a for comprehension off of the name of P (or at P) and we hang a send off of at Q, it reduces in the usual way.

Now you might say, “Well, hang on a minute. The annihilation relation depends upon the definition of reduction, and now you’ve made reduction depend upon the annihilation. Does this circularity ground out? Do we ever get off the ground?”

The first thing to note is that zero annihilates zero. If you put zero in PAR with zero, that’s equal to zero. All paths out of there get you to zero. At a minimum, we have that zero annihilates zero. We can then use at zero as a channel for communication. We have at least one channel. 

I’m going to design a namespace. I’m going to define a function from the natural numbers into processes. In fact, I’m going to define two such functions. I’ll call them P and negative-P. So P of zero is just zero and negative-P of zero is just zero. Then P of N is going to be the for comprehension that listens on the at of P of N minus one for Y and then does zero. We recurse down, forming a for comprehension one level down, and take the name of that. Then negative-P of N is the dual of that: we take at of negative-P of N minus one and send zero over that channel. It’s an easy induction to prove that P of N annihilates negative-P of N.
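
As a hedged sketch, the two name-generating functions can be written out with Python tuples standing in for Rho-calculus terms; the encodings ("for", channel, body) for a for comprehension, ("send", channel, message) for an output, ("@", process) for the quote of a process, and "0" for the stopped process are all illustrative assumptions:

```python
NIL = "0"  # the stopped process

def P(n):
    # P(0) = 0; P(n) = for (y <- @P(n-1)) { 0 }
    if n == 0:
        return NIL
    return ("for", ("@", P(n - 1)), NIL)

def negP(n):
    # negP(0) = 0; negP(n) = @negP(n-1)!(0)
    if n == 0:
        return NIL
    return ("send", ("@", negP(n - 1)), NIL)

def channel(term):
    # The channel a for comprehension listens on, or a send outputs on.
    return None if term == NIL else term[1]
```

The induction mirrors the argument above: P(n) and negP(n) meet on the quotes of P(n-1) and negP(n-1), which annihilate each other by the inductive hypothesis, grounding out at the quote of zero.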

Isaac: It makes it really helpful that the continuation on P of N is just the stopped process. They’re always going to reduce to the stopped process, so that makes the annihilation really easy.

Greg: That’s correct. You can make it more sophisticated, but for purposes of our discussion, that’s right. By the way, I wanted to mention something practical here before we take the next step, which is that we don’t actually need the quantification in the sense that in a practical setting you could do what nature does. You could let all these processes run in superposition and whichever channel co-channel pair got you down to zero first, that would be a way to break the race. If there are some that run forever, they don’t give you good service. They’re not good channel co-channel pairs.

I’m using the word superposition deliberately because there is a way to give a kind of quantum mechanics-style interpretation. You think of all these processes as potentially running and then you use the quantum notion of probability amplitude to get the distribution over which you would make a selection for the race.

Isaac: Interesting. Would treating it that way produce a similar distribution as to what we’ll see later in the slides? 

Greg: No, that’s a very different world. A long time ago, around 2006, I realized that there’s an interesting question about quantum mechanics in general, which is: what proves that that notion of probability distribution is actually a notion of probability? We have an axiomatic formulation of probability. Then there’s this interpretation of the physics that says, “Well, hey, when I whack a dual against an operator applied to a vector, that is, in fact, a probability amplitude.” With that interpretation, you need to go and check and see if it does, in fact, axiomatically satisfy the definition of probability. In fact, it doesn’t. It’s actually an alternative notion of probability. It’s a notion of probability that’s bigger, that includes negative probabilities.

I would argue that that’s actually a better axiomatization of probability. It’s one that corresponds with our notion of what probability means in natural systems to a high degree of accuracy. That’s why I’m saying it’s actually pretty different than stochastic notions that have been put forward by even the Mardare and Cardelli formulation. That notion of probability is not the same as the one you get from quantum mechanics. That’s why they’re different.

Isaac: What does negative probability mean?

Greg: It has to do with whether or not your probability amplitude is a complex number. Scott Aaronson has an interesting blog on exactly these points. It’s a cool area to go and investigate. For the mathematicians in the crowd, you can lift a lot of this up to quaternions and octonions.

The same built-in notion of probability that you get from the complex numbers and the quantum mechanics interpretation, you can rejigger, iterating the same construction with quaternions and octonions. And those are the only ones available. Those are the only three that fit that particular style or notion of probability.

Isaac: Interesting.

Greg: It’s a crazy world. Art is long; life is short. I really wish I had time to go in and investigate this further because it’s really fascinating. 

Back to reaction rates. We just designed a namespace. The namespace bifurcates into some input-based names and some output-based names. As long as we always pair up inputs and outputs at the same level, they annihilate each other, so we now have an infinite set of channel co-channel pairs. 

Now I want to turn our attention to what I was originally aiming to talk about, which is this idea of rates. I’m looking at a slide in front of me now, I think it’s slide four or five, where I just look at a for comprehension in parallel with an output. I’m going to assign a cost to effecting the reduction at all. We’ll call that R. Then I’m going to assign a cost to the transmission of data, and I’m going to call that T. When I recurse—because in the annihilation-style comm rule I’m listening for Y from at-P and I’m sending on at-Q—I now have to go and check to see if they annihilate. We can assume that they annihilate, but I still have to account for the cost of that.

Isaac: They’ll reduce, so we’ll get a reaction rate from that. 

Greg: That’s exactly right. So I take R plus T, the single-step reduction cost together with the transmission cost, times whatever the rate is associated with P par Q.

Isaac: When you’re talking about the cost of transmission, what exactly are we talking about? Are we talking about sending this message on the channel Q?

Greg: That’s right. In particular, the only observable effect of sending the data across the channel is to plunk the data in with a substitution. Essentially what we’re measuring is the complexity of the substitution. Rather than a number, it would really be a function. But for purposes of our discussion, we can just approximate it as a number. 

The first thing to note is that if my source and target are P of N minus one and negative-P of N minus one, then what we would calculate is (R plus T) to the N. It would recurse and recurse and recurse to bottom out at P of zero and negative-P of zero; we would just be multiplying (R plus T) N times if we start out listening and sending at level N minus one.
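
A tiny sketch of that calculation, under the assumption that each recursion level contributes the reduction cost R plus the transmission cost T as a multiplicative factor, grounding out at a unit:

```python
def rate(n, R, T):
    # rate of P(n) | negP(n): one (R + T) factor per recursion level
    if n == 0:
        return 1  # the ground case is the unit
    return (R + T) * rate(n - 1, R, T)
```

So rate(n, R, T) equals (R + T)**n, an N-fold product of the same factor.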

What we can see is that we’re already hitting probability. I’ve now designed a namespace that provides rates that are governed by the binomial theorem and we didn’t even break a sweat. It took us three lines to get there. 

Even though I’m being a little bit fast and loose—and I’ll say exactly where I’m being fast and loose in just a minute—the mathematics that we just went through is literally less than an order of magnitude of the complexity of the stochastic presentations. I’m getting a notion of rates that clearly correspond to a notion of probability. 

I want to point out that we did a simple namespace design, but we could really be a lot more creative in our namespace designs. That raises an interesting question: How do we actually calculate the rate of P par Q? There might be races in there. The other thing is that there might be terms of the form dereference X. That’s a legal process, and I haven’t talked about how to calculate the rate of dereference X. Let me address the races idea first.

Isaac: Before we go to that, I remember from the Rho calculus paper that you had a notion of annihilation in there that was more general than this annihilation that you presented, because it wasn’t that the processes eventually reduced to zero; it was that they reduced to some fixed process, or something along those lines.

Greg: You have to be super careful. They have to be special fixed processes. Otherwise, you don’t get commutativity. There are certain properties you have to go and check. 

Isaac: Okay. That makes sense. Have you done reaction rates for that type of annihilation as well?

Greg: No, I haven’t. It could be done in further work, but it’s a really good question. I have a feeling that the requirements on the states that you get to in order to preserve certain properties are going to be closely related to the notion of probability that you end up with. Interesting math there as well. 

But on to this idea of races. Let’s define down arrow P (or just down P) to be the predicate that there’s a Q that P reduces to; you can find some Q that P reduces to. Then we can define races of P to be all the triples (L, R, T) such that P is structurally equivalent to the par of L, R, T, and some S, and either L reduces with T or R reduces with T, but not both. They’re in conflict. It’s a real honest-to-god race. You can only go one direction or the other.

Isaac: That makes sense.

Greg: Then you just collect all those triples. You can recursively do this, but we’re just taking the first level. What we’ll do to calculate the rate of P par Q is to take a normalized sum over the races. We sum up the rate for L par T and the rate for R par T. Then we divide through by some weighted measure of the set of races. There’s some wiggle room in terms of how you weight that. You could just take its cardinality, which is what I’ve done on the slide, but you could do some other things as well. For the mathematicians, it might be interesting to go and play with that and see what different weights give you.
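
Here is a hedged sketch of that normalization in Python; races and pair_rate are stand-ins for the real definitions, and cardinality is used as the weight, as on the slide:

```python
def race_weighted_rate(races, pair_rate):
    # races: list of first-level race triples (L, R, T)
    # pair_rate(a, b): the rate assigned to a in par with b
    if not races:
        return 0
    total = sum(pair_rate(L, T) + pair_rate(R, T) for (L, R, T) in races)
    return total / len(races)  # cardinality as the weighted measure
```

Swapping the denominator for some other weight on the set of races is exactly the wiggle room mentioned above.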

Then what we’ll do is we’ll just neglect any contribution from the dereferences. We’ll simply say that they don’t contribute to the rate, because unless there’s a substitution, they’re inert, and we covered the substitution in the transmission contributions. They can be safely neglected.

Isaac: You’re saying if one of these L, R, T terms is some dereferenced name, then we can forget about it.

Greg: What I’m saying is that in a par, we’ll sift the par into races and these other things, and the other things contribute nothing.

Now our definition covers all the possibilities. We’ve given a well-defined function from processes to numbers. That gives us more than enough room to go and design namespaces that provide whatever kind of distribution you’d like to see.

It’s a different way of thinking. It’s a different way of programming. That’s important. This is what I’ve been trying to argue for a long time: people should think about the opportunities for designing their namespaces to get the kind of computation they want. This works from the point of view of security; it works from the point of view of sharding. Now I’m arguing it works if you want to include a notion of rates.

Just to make this make contact with RChain, the whole idea of having Phlogiston-controlled reduction is literally giving a rate to reduction. There’s this direct correlation between cost and compute. If it costs more, it’s going to, in general, be slower to compute. RChain is a stochastic implementation of the Rho calculus. That’s neat because there are all these tools out there for doing simulations of stochastic processes. 

In particular, Andrew Phillips, over 10 years ago, built the stochastic Pi machine. That has been used over the last decade to model chemical and biological phenomena. Andrew Phillips is now head of computational biology at Microsoft Research. In the slides, I reference a paper that came out in 2009 or 2010, where they look at a particular set of reactions that are related to MHC loading, which has to do with certain kinds of peptide production. Before I go into a more complicated set of chemical equations, what I want to do is give the schema here for people to understand. 

Imagine that you’ve got a chemical equation of the form: reagent one plus reagent two reacts at a rate of K to produce a product. If you want to interpret this in a Pi-like calculus, we can actually go through it compositionally. We just use the normal double brackets. We put them around the entire equation. What we’re going to say is that the left-hand side of the equation is going to reduce, according to a rate in the stochastic version of the calculus, to the product. Then we can recurse.

Now we take the meaning of reagent one plus reagent two. The interpretation of that is going to be the interpretation of reagent one in par with the interpretation of reagent two. Now we actually have a reduction; we have a redex expression, or at least potentially a redex expression. Now it’s making sense. We’re able to give an expression in a form that lines up with the way the calculus reduces. In some sense, chemistry was always meant to be written in process calculi.
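
The schema can be sketched compositionally; the tuple encodings and the species interpretation below are assumptions for illustration, not a real calculus implementation:

```python
def interpret_equation(reagents, k, product, interpret):
    # [[ r1 + r2 -(k)-> p ]]  =  ([[r1]] | [[r2]], k, [[p]])
    lhs = ("par",) + tuple(interpret(r) for r in reagents)
    return (lhs, k, interpret(product))

def species(name):
    # trivial interpretation of a chemical species as a named process
    return ("proc", name)
```

The interpretation of the plus on the left-hand side is just par, and the arrow becomes a rated reduction from the par to the interpreted product.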

Isaac: It’s interesting to think about the presence of two reagents in the same solution (or what have you) just being two processes running in parallel with one another.

Greg: Actually, you also get the notion of concentration. It’s not just two processes; it’s however many. The idea of moles corresponds to multiplying that many copies of those processes, which again makes these kinds of simulation tools like SPiM really interesting, because now people can use these tools to model out validator behavior. Under these kinds of loads, with these kinds of requests for these kinds of smart contracts, what are we going to see? You can think of those as big chemical soups, and you can write down those solutions and get simulations.
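
Under the same illustrative tuple encoding, a mole-like count is just that many copies of the species running in par; a minimal sketch:

```python
def concentration(proc, n):
    # n copies of the same process in parallel composition
    return ("par",) + (proc,) * n
```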

We have off-the-shelf tools available for RChain, tools from other research projects that were being used for something entirely different, and we can apply them to modeling out validator economics.

Isaac: That’s pretty incredible. You can set up a ton of different scenarios and test them all out and make sure we get the expected behavior in each one of them. 

Greg: That’s exactly right. Or you can determine that you’re going to get some anomalous behavior. “Oh look, here we get a spike in the concentration of this contract. Well, that’s interesting.” 

Around Christmastime we start to see these kinds of perturbative behaviors, which then take the system into a different equilibrium. We can import the tools, but we can go further than that. We can say, “Hey, this biological process, this immunological response, this neurological response, kinetic proofreading: these kinds of biological things, which we know have specific functions, can be imported into the designs of our smart contracts.”

Isaac: That’s so cool. 

Greg: This is why I keep saying, “We should be copying nature, we should be copying nature, we should be copying nature.” But now I’m saying, “And we can copy nature.” It’s not us just sitting with that sketchbook and sketching what we see. We can make this a lot more precise, a lot more specific. 

To put some flesh on the bones: probably everyone knows that if you take some sodium and chlorine in equal parts, they react to form salt, NaCl. Now we can interpret this. The interpretation is that the process that represents sodium, in par with the process that represents chlorine, becomes the process that represents sodium chloride.

The question is: What would be a process that represents sodium? Sodium has one valence electron; it wants to give away an electron, whereas chlorine would very much like to have an electron. You can actually take it a step further. You can model this behavior of electron sharing. You make a new scope, which represents the electron. Then you have sodium sending on a channel and chlorine receiving that electron on the same channel. The bond between them could just be the new scope, or it could be that they keep sharing the electron: in the next step, the chlorine sends it back to sodium and sodium receives it, and then sodium sends it back to chlorine. That would represent your sharing behaviors. You can actually get a fairly detailed picture of the information flow that’s associated with molecular chemistry in terms of the process language.
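
A toy sketch of that ping-pong, with a deque standing in for the fresh channel (the new scope) and every hop being one receive/send pair; all names here are assumptions for illustration:

```python
from collections import deque

def share_electron(hops):
    channel = deque()           # the new scope binding sodium and chlorine
    channel.append("electron")  # sodium sends its valence electron first
    holder, trace = "Cl", []    # so chlorine is the first receiver
    for _ in range(hops):
        e = channel.popleft()   # current holder receives the electron
        trace.append(holder)
        channel.append(e)       # and sends it back over the same channel
        holder = "Na" if holder == "Cl" else "Cl"
    return trace
```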

Isaac: So the electron is the message that they’re passing back and forth between each other. 

Greg: That’s exactly right. And it’s covered by a new scope. It’s a piece of data, and the new scope represents a kind of bonding. In fact, one of the things that I did was to model Michaelis-Menten that way. Michaelis-Menten is an interesting reaction schema where you have a reagent and a catalyst, and the catalyst promotes the reagent to a new state. Before it does this promotion, they end up in a particular kind of bonded pair, where the bond can dissolve (they can go back to just being reagent and catalyst) or you can go forward to the promoted state.
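
The scheme described here can be sketched as a small transition table; the state names and the rate labels (k1, k-1, k2, the conventional ones) are assumptions for illustration:

```python
# E = catalyst (enzyme), S = reagent (substrate),
# ES = the bonded pair, E+P = catalyst plus the promoted state
TRANSITIONS = {
    "E+S": [("bind", "k1", "ES")],
    "ES":  [("dissolve", "k-1", "E+S"), ("promote", "k2", "E+P")],
    "E+P": [],  # product released; the catalyst is free again
}

def next_states(state):
    return [target for (_label, _rate, target) in TRANSITIONS.get(state, [])]
```

The bonded pair ES is exactly the branch point: it can dissolve back or go forward, and chaining such stages is what kinetic proofreading builds on.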

If you chain these Michaelis-Menten reactions together, it becomes the basis for something called kinetic proofreading, which allows you to convert energy into signal fidelity. I did a Pi calculus analysis of Michaelis-Menten. Here’s the twist on the analysis I did: the people who pioneered this work (Bill Silverman, Ehud Y. Shapiro, Aviv Regev, Cardelli, and Andrew Phillips were some of the prominent members of the crowd) weren’t looking at the logical perspective. My argument was that while it’s interesting to model process behavior, I think it’s more interesting to look at it in terms of spatial behavioral types. You could write down a logical formula and here use the spatial formula combiner for the par, so you have a par of two formulae. But then “evolves to” becomes “implies eventually.”

You can do the same kind of compositional interpretation of the chemical equations, but to get logical formulae that the processes satisfy. Then you can turn something like Michaelis-Menten into a scheme, and you can go and check to see if certain systems of equations are Michaelis-Menten mechanisms.

Isaac: Interesting. 

Greg: It becomes a classification mechanism. If I go back to the situation from this paper I was mentioning, there’s this fairly complicated biological process that has to do with the formation of a peptide-loading complex. It’s easier just to look at the picture in the slides. There’s a collection of chemical equations that describe this system. They’re nontrivial, but they still fit on a page.

Those equations can be directly encoded in the SPiM language, which is just one syntactic generation away from Rholang. Where they have “let,” then a process name, an equals sign, and then a process body, we write “contract” instead of “let.” Where they have “do … or,” that’s our select statement. The rest of it: they have outputs and they have inputs just like we do. We write our for comprehensions slightly differently than the way they write their inputs. By and large, if you look at the SPiM language, you can see the Rholang syntax in there. It’s not a big stretch to grab that.

We can do two things. One, we could import their biological descriptions into Rholang, which would enable us to import the growing body of knowledge of biological systems. But we could also look at it at the type level. We do a behavioral type description of this and then go and check: “Hey, this gaggle of smart contracts I wrote—is that a Michaelis-Menten-style system?” In other words, does it earn signal fidelity? That would be a really interesting question to ask of smart contracts that are evolving in the wild.

The final point I wanted to make is that the methodology is different. The two approaches overlap in the sense that you would go to experimental data to derive rates empirically. You pour some stuff in beakers, you do multiple experiments, and you get a sense of the rates at which various things react. You derive those empirically.

There are two approaches. Either you can do this sort of stochastic formulation, where you get a probability distribution over the comm events that respects the rates, or you can program a namespace that would give you those rates. Those are two different ways of thinking about it.

Programming a namespace is a different way of thinking about the world. From what I have seen, the RChain community loves this kind of programming mind twist, this kind of brain candy. It might be of interest to that community. 

It’s an interesting range of ideas. We’re going from the theoretical, out at the very edges of what a good notion of probability is, down to, “Can we use these tools, which were originally conceived for simulating biological systems, to simulate RChain deployments?” Then to this other idea: maybe we can begin to learn from life. If we can import these biological processes into the system, maybe there’s something that these biological processes do, such as an immunological defense response, that might turn into a good security mechanism. Now we’ve got the tools together to be able to do that.

Isaac: It’s really cool how all of these different pieces are coming together. I definitely didn’t realize the full scope of your “we need to model nature and do as nature does and be compositional.” I definitely didn’t see how far that analogy actually went. Now I’m seeing we can treat all of the validator behavior on RChain as some kind of biological system that’s governed by certain laws, or some kind of chemical system that’s governed by certain reaction rates, and that’s really cool.

Greg: I think it’s cool, but I also think it’s necessary. The one thing that we can see over and over and over again is that in our attempts to scale, whether large-scale agriculture or large-scale manufacturing, something is amiss. Whereas nature seems to just scale gracefully. Up in the Pacific Northwest, it’s so lush and beautiful at this time of year. I wander amongst the trees and there’s no king tree directing all of this.

Isaac: Well, they’re all in concert with one another; they all communicate with one another.

Greg: The more we learn about the network of communication amongst the trees, the more astonished we are. They use the mycelial layer to connect with each other. They actually share resources. When a tree is distressed, other trees will send it resources via this mycelial network. It’s a crazy distribution system. There’s something really awe-inspiring about the way nature scales. I would like to make it very practical to copy nature.

Isaac: Why reinvent the wheel when we already have the best example of a decentralized distributed system available to us? Just mimic that. 

Greg: Exactly. A part of good mimicry is figuring out what it actually is that we’re mimicking. This goes back to flight. People look up at the sky and they see birds flying and they go, “We should be able to do that.” Putting these tools together is a Wilbur and Orville Wright kind of effort.