Subscribe to RCast on iTunes | Google Play | Stitcher
In Part 1 of a two-part series, Greg Meredith is joined by Christian Williams to investigate biological metaphors for thinking about distributed consensus platforms, as they relate to Proof of Stake and Casper.
The slides referenced during the podcast are not reproduced here; the full transcript follows.
Transcript
Greg: In the last RCast—not the very last one, but the one before—we spoke about error-correcting codes and gave a surprise twist ending as to how they related to larger issues. I hope this will be a two-part series. I want to talk a little bit about using biological metaphors to think about the distributed consensus platforms that we’ve been talking about with Proof of Stake and Casper. Since the previous work was all about getting liveness constraints, we’ll fit those two together, but then open up to the larger perspective and see that these two lines of research actually fit together in another way: that other way is an attempt to begin to develop a notion of a living world.
A lot of times, in the canon of western science, life doesn’t exist at the ground; life emerges as a phenomenon above that. I want to propose an alternative view in which life really is available from the ground level up. We don’t have to jettison or discard what we’ve come to expect out of scientific rigor and scientific precision and mathematical formalism in order to address a more living world.
That’s the larger perspective. Let me dive into some of these biological metaphors to help get things rocking and rolling. Part two of the overall talk is lifelike agency and consensus.
Recently my attention was attracted to an article in which a group—I think it was at Indiana—had built lifelike materials. The material was self-assembling, self-replicating, and metabolically controlled. Pretty lifelike in many respects.
The question came up for me: Can we formalize this notion of lifelike behavior? That is the starting point for this piece of work that I’m trying to lay out here. The first question is: Is mere resource constraint lifelike? On the slide that I have in front of me (and others can see on the blog), we write down Rho calculus processes. We have a process P of S that is hunting for food from some food source: for food from S, where S is the source, do P. We make P be that minimal thing—hunt for food, then continue—and put it in parallel with an environment that supplies some finite amount of food. That would be P of S in parallel with E of S, where S is the channel along which the food gets transmitted. If E is really just the parallel composition of a bunch of food-generating events—sending food on S, in parallel, say, n times—then after n transition steps that parallel composition just goes back to being P of S. Does that make sense, Christian?
Christian: Yeah. You’re just eating all the food.
Greg: Exactly. Eating all the food. But the interesting point is that this P is not very lifelike, because if you put it back into an environment that supplies more food, P just continues. It’s hardier than a tardigrade; it can’t ever die. That, in my opinion, helps us understand what is lifelike: lifelike processes need to be able to die.
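Roughly, in Rholang—a minimal sketch, not the slide code; the channel name, the food token, and n = 3 are illustrative:

```
new s in {
  // P(s): hunt for a unit of food on channel s, then go right back to hunting.
  // The persistent receive (<=) is the loop; this process can starve while
  // waiting, but it never dies: feed it again later and it resumes.
  for (_ <= s) { Nil }
  |
  // E(s): an environment supplying a finite amount of food, here n = 3.
  s!("food") | s!("food") | s!("food")
}
```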
A thought would be to do something that’s kind of two level. On the outer surface of this lifelike process, it’s searching for food, but on the inner behavior—in some sense, the continuation of having received the food—we convert that food into energy.
We might want to look at the conditions of how that energy is deployed. You can check and see: if we’ve got enough food, then we’ll convert it to energy and make it available through some metabolic channel. If there isn’t enough food—and again, this is just a cartoon; we’re just characterizing the rough shape of life—that will trigger some kind of shutdown process. In the case of cell dynamics, that would be called apoptosis, which kills the process.
We can make that decision independent of the continuation of the process. There’s a decision-making apparatus, and that’s running in parallel with a selection or a choice, where the choice is either we get energy from the metabolism or we get the signal to trigger apoptosis and we stop. Separating the decision from the continuation behavior is important because it makes it modular.
Then we might compare that to another behavior, where we’re searching for food and we check to see whether this is enough food, in which case we would continue, and if it’s not enough food, then we stop. That’s a one-level process rather than a two-level process. It doesn’t really have any internal logic other than checking to see if that’s enough food. There’s no autonomy inside the continuation. It doesn’t have two parts that are effectively independent but interacting.
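A sketch of the two-level shape in Rholang. The threshold and the food-to-energy conversion rate are made up, and since the calculus has no primitive choice operator, the select is encoded here as two receives, only one of which can ever get a message:

```
new s, metabolism, apoptosis in {
  // Level one: the decision apparatus. Receive food, judge whether it is
  // enough, and either convert it to energy on the metabolic channel or
  // signal shutdown on the apoptosis channel.
  for (@food <= s) {
    if (food >= 10) {
      metabolism!(food * 2)   // enough: convert food to energy
    } else {
      apoptosis!(true)        // not enough: trigger apoptosis
    }
  }
  |
  // Level two: the continuation, a choice between the two signals. The
  // decision sends on exactly one channel, so exactly one branch fires.
  for (_ <- metabolism) {
    Nil                       // continue the lifelike behavior, fueled by energy
  }
  |
  for (_ <- apoptosis) {
    Nil                       // die: the continuation simply stops
  }
  |
  s!(42)                      // the environment offers one lump of food
}
```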
Christian: It can’t decide between different actions—how to use the energy.
Greg: That’s correct. That’s exactly right. If we think about it, one of the interesting things is that metabolically controlled continuation—the word continuation here means the continuation of the lifelike behavior—is not the only discernible feature of life. Life also autonomously reproduces. You could imagine leaving the decision part alone—the decision is checking to see whether we have enough food to continue or whether we need to shut down—and changing the continuation behavior so that it’s more refined: you leave in the options to continue per the status quo or to shut down, but add in another option.
Now let’s imagine that we have two channels. We have a metabolic channel and we have a reproduction channel, and then we do a join on both of those. We’re going to get some energy off the metabolism and we’re going to get some other energy off of the reproduction channel. Then we’ll sum the two energies together. If it turns out that we have enough energy, whatever enough means, from this sum, then instead of just continuing, we will duplicate this lifelike process. It’ll do more than just continue. It will be a cell undergoing mitosis; it’ll split into two.
Christian: Do you mean that you are receiving energy from the act of reproduction? You’re not thinking of necessarily one individual, but a small group reproducing is increasing the total energy, or you’re receiving energy that’s devoted specifically to reproduction?
Greg: Receiving energy that’s devoted specifically to reproduction. We imagine that the metabolic channel and the reproduction channel are effectively internal channels. In the check that we do when we sum the two energies—one from the metabolic channel, one from the reproduction channel—either it’s enough, in which case we go ahead and reproduce and the cell divides, or we take the energy that we’ve got, which is the sum of the energy that came from the food and the energy that came from the reproduction channel, and we put it back out on the reproduction channel. We’ve squirreled away some of the food energy for reproduction.
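A sketch of that join, again with arbitrary numbers, and assuming the `&` join syntax of recent Rholang:

```
new self, metabolism, reproduction in {
  // Stand-in for the lifelike behavior: each message on self spawns one copy.
  for (_ <= self) { Nil }
  |
  // Join on both internal channels: take energy from the metabolism and from
  // the reproduction store, and sum the two.
  for (@e1 <- metabolism & @e2 <- reproduction) {
    if (e1 + e2 >= 100) {
      self!(0) | self!(0)       // enough energy: mitosis, two copies
    } else {
      reproduction!(e1 + e2)    // not enough: squirrel the sum away for later
    }
  }
  |
  metabolism!(60) | reproduction!(30)   // 60 + 30 < 100, so this run saves
}
```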
This is very similar to what we see in reproduction in life. An organism that’s capable of reproduction has to deal with this race between feeding the host—the parent—and feeding the progeny. That’s a real trade-off: in the case of sexual reproduction, for example, the female has to deal with the fact that some of her energy is now going to the child. Her metabolism often adjusts pretty significantly as a result.
Again, this is just a cartoon picture. The Rholang code that I have on my slide is very simple, but it does capture something essential about these characteristics of life. You can divide this into an independent analysis and decision-making mechanism, versus an action that’s taken with respect to that decision, which allows one to independently refine those two modules.
That turns out to be something that happens quite often in lifelike settings, because these kinds of control structures in the biological world are complex and difficult to get right. Wanting to preserve structure in a certain way while adapting structure gives rise to this kind of split. If you get an analysis and decision-making procedure nailed down, you can reuse it across a wider range of cases just by changing the action part; that kind of reuse is favored by natural selection.
Then it also turns out that metabolically controlled continuation and autonomous reproduction are just part of the story. Life has functional behavior. One of the ways that we think about this is: the physicist doesn’t really talk about the purpose of the orbit of a planet or the purpose of the gravitational effect of a storm. But when you go up through the ranks to biology, biologists are all but forced to speak of the purpose of the immunological response to a viral infection. It’s hard to avoid that kind of language when you’re talking about lifelike phenomena.
A question arises: Can we modify our lifelike behavior to account for functional behavior? With the Rho calculus, it’s actually pretty easy, because we can make it parametric in functional behavior. Instead of P being parametric just in the channel for a food source, it’s also parametric in some functional behavior.
Let’s say we supply it as a name. So it’s parametric in S, the food source, and in at-B (@B), where B is some functional behavior—like the web-weaving behavior of a spider. In the case where it does have enough energy to continue, and where that energy is not being used for the reproductive process, then when it continues it also engages that functional behavior. The continuation is P in parallel with whatever that functional behavior might be.
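In Rholang, the parametric version might look something like this sketch; the name `PLife`, the threshold, and the stand-in behavior are all illustrative:

```
new PLife in {
  // PLife is parametric in a food source s and a functional behavior b,
  // which arrives as a name (quoted process).
  contract PLife(s, b) = {
    for (@food <- s) {
      if (food >= 10) {
        // Enough energy: keep hunting, and also engage the functional
        // behavior by dereferencing its code.
        PLife!(*s, *b) | *b
      } else {
        Nil   // starvation: apoptosis
      }
    }
  }
  |
  new s in {
    s!(42) |
    // Nil stands in for, say, web-weaving; the contract receives it as @Nil.
    PLife!(*s, Nil)
  }
}
```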
The nice thing about this is, again, like the code that I’m looking at on my screen, it really hasn’t changed very much. As I add these lifelike characteristics, the code itself is remaining relatively stable, which gives us some degree of confidence that we might be moving in the right direction.
The other thing to point out is that if I go back to the original Rho calculus paper and look at the way we encoded replication in the Rho calculus, the interesting point is that the code we’ve written down here is not much more than an elaboration of that code. In fact, in the paper, I point out that the encoding of Pi calculus replication that I’ve given runs away. It’s a runaway process. That reflects the semantics that Robin gave for replication. Nobody ever liked Robin’s semantics for replication, precisely because it was an infinitary object when what you really wanted was some kind of guarded replication. I point out that this could be done, but I don’t supply the code. You can see this as me finally saying, “Okay, since I never supplied the code, here’s a version of it.” This lifelike harness was always present in the Rho calculus.
What have we got in terms of our lifelike characteristics? Well, we’ve got metabolically controlled continuation, autonomous reproduction, functional behavior, but life also adapts, at least on planet Earth. A good chunk of that generational adaptation is genetically-controlled evolution.
The interesting thing about the Rho calculus is the reflective part of it. When you have a process, say P of S and at-B, like we’ve been talking about, if you take the code of that process, that gives you something akin to the genome of that process. If we think of the process itself as the phenome, or the creature, then the genome is represented by the code of that process. If we modified the code of that process, we would get a different phenome, a different creature.
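A tiny sketch of the genome/phenome distinction in Rholang—sending a process ships it as inert code, and dereferencing the received name expresses it as a running creature:

```
new geneBank, s in {
  // The phenome is a running process; its genome is the code of that process.
  // Sending a process on a channel ships it as inert data: it is not run.
  geneBank!( for (_ <- s) { Nil } )
  |
  for (genome <- geneBank) {
    // genome is a name holding the code. Dereferencing it, *genome, expresses
    // the code as a running phenome. Rewriting the code first would give a
    // different creature.
    *genome
  }
  |
  s!("food")   // feed the expressed creature once
}
```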
Christian: That’s a cool idea.
Greg: It’s a lot of fun and we can go pretty deep into this, but before I go down that path, I just want to point out that this does, in fact, relate to Casper-like validation. In particular, we can now implement the notion of slashing as a form of starvation. If we have a validator behavior and we starve that validator of food, that corresponds to slashing it.
One of the things that’s been very tricky with respect to proof of stake, in general, has been how to represent the fact that validators are absolutely going to want to de-risk their investment. It’s hard to force them into a situation where one staking address equals one instance of validator behavior.
If that’s not the case, then you run the risk of not having a secure network because you don’t know how much compute is really associated with the staking address. But here we can be quite explicit about the size of the population associated with the staking address. That staking address is explicitly represented by the S, the food source address that is the parameter of this lifelike process.
The detail gets even more fine-grained in terms of how this analogy maps onto the way we think about Casper and RChain. In particular, we can think of the food as the token, and the energy that the food is converted to as gas. The internal metabolic transformation—the internal metabolic regulation of the continuation, the validator behavior—is done in terms of gas, which is our notion of energy. The food is the REV, or whatever staking token is being supplied in order to run the validator.
But the analogy goes even further because, as I’ve tried to suggest, slashing corresponds to starvation: if you starve the process of food, it will go through apoptosis. It will eventually shut down. It won’t be able to reproduce, and it won’t be able to engage in any functional behavior.
The nice thing is you can do what we sometimes have called slashing to the bone, where you cut the stake all the way down and the validator can’t do anything more. Or, in this case, you can starve it so that only some of its population dies off. You don’t have to slash all the way to the bone. You can have graded consequences of behavior.
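As a toy illustration of graded slashing as graded starvation (population size and food counts are arbitrary):

```
new s in {
  // Three validator instances feeding from one staking address s; each needs
  // one unit of food to take its next step.
  for (_ <- s) { Nil }   // instance 1
  |
  for (_ <- s) { Nil }   // instance 2
  |
  for (_ <- s) { Nil }   // instance 3
  |
  // Partial slashing as partial starvation: only two units are supplied, so
  // one instance never runs again. Supplying zero would slash to the bone.
  s!(1) | s!(1)
}
```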
For example, if a validator equivocates, you don’t want that behavior to persist at all, so you want to wipe it out. The validator is lying—saying one thing to one side of the network and another thing to the other side—so that a double spend could happen. You want to wipe out that population of behaviors.
But if the validator stumbled and there was some corruption in a block—so it either created or promoted an invalid block—that may not actually be the fault of the validator. There may be some extenuating circumstances. You could imagine punishing it for not having caught that problem, but not punishing it to the point where it’s just removed from the pool.
Again, life is a wonderful analogy. The other side of this that is really interesting is that the functional behavior is now serving client requests. When we run this extra B thing, that’s a contract that the validator has decided is worth running. The mapping of the lifelike behavior, which we did entirely from first principles in terms of how to model life, has now given rise to a harness that captures all of the salient features of the validators down to some pretty fine level of detail.
Let me stop there and check in. Christian, does what I’m saying make sense?
Christian: Yeah, it’s making a lot of sense. One thing I was wondering is, on this current slide, when you’re splitting off a behavior from metabolizing food, it seems you would also need to attach resource dependence onto that behavior, or it would somehow need to remain tied to the organism. Otherwise, you end up splitting off tons of behaviors and would be doing multiple things at once.
Greg: I am so glad you mentioned that, because I address that in the subsequent slides, and it gives rise to some really interesting analysis—perfect segue. In order to increase the narrative tension, let me defer my answer for a bit and return to this idea of using reflection to develop a notion of genetically controlled evolution of the validator behavior; then I’ll come back to what you’re talking about. The two ideas fit together in an interesting way.
One of the things we mentioned earlier was that reflection gives us a handle on a way to represent the distinction between the genome and the phenome: the genome is the code of P and the phenome is P. For those who are not familiar or maybe haven’t heard of it, there’s a line of computer science research called genetic algorithms. It was founded by John Holland, who has a seminal book called Adaptation in Natural and Artificial Systems.
He characterizes evolution as a kind of search—a search of a feature space—and he analyzes different approaches to that search. In particular, you can look at mutation as a way to search the feature space under selective pressure. He compares that to what happens with sexual reproduction, and he identifies a genetic operator called crossover, where you have two organisms, each of which has genetic information in two parts.
Let’s say we have a mother and a father. The mother has two parts to its genetic information and the father has two parts to its genetic information. When they reproduce, the progeny can take part of its genome from the mother and part of it from the father. This makes sure there’s an exploration of the features of the mother and the features of the father.
Holland analyzes this and shows rigorously that feature search predominantly based on crossover, with a tiny bit of entropy from mutation, is far superior to mutation by itself. Mutation by itself is largely a random walk and generally ends in bad results in a feature search that’s subject to selective pressure.
Christian: Somehow crossover has a way of strengthening itself or guiding itself?
Greg: That’s exactly right. There’s a little bit of subtlety. The genetic information ultimately needs to be able to expand. We were talking about crossover plus mutation and the mutation, every once in a while, has to expand the features available. There’s some subtlety there (that I’m not going to go too much into the details about), but I heartily recommend that people go look at the area of genetic algorithms and how they apply to what’s called genetic programming—using genetic algorithms to have computer programs generate computer programs. I heartily recommend Holland’s book.
I just want to point out that using reflection, we can define crossover for Rho calculus processes in general, which means we can define it for the functional behavior. Just as an example, suppose I have a functional behavior that is a par: B = B_M | B_F, where M is for male or mother and F for female or father, depending on which way you want to take it—I’m being completely agnostic about the roles. If you also have a B' = B'_M | B'_F, we now have the bipartite structure that’s necessary for the crossover operator.
So I can define a collection of progeny, where you take components from the mother and the father. This is just combinatorics. You can get B_M | B'_F, or B'_M | B_F, or B_M | B'_M, or B_F | B'_F. Those are the four possible progeny.
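A sketch of that crossover operator over quoted components; the toy parents here are just sends that announce themselves:

```
new crossover, out in {
  // Crossover for parallel compositions. Parents B = a1 | a2 and B' = b1 | b2
  // arrive as quoted components; each child recombines two of them.
  contract crossover(a1, a2, b1, b2, progeny) = {
    progeny!(*a1 | *b2) |   // B_M  | B'_F
    progeny!(*b1 | *a2) |   // B'_M | B_F
    progeny!(*a1 | *b1) |   // B_M  | B'_M
    progeny!(*a2 | *b2)     // B_F  | B'_F
  }
  |
  crossover!(@"log"!("a1"), @"log"!("a2"), @"log"!("b1"), @"log"!("b2"), *out)
  |
  for (@child <= out) {
    Nil   // selection pressure would go here: score and keep or discard child
  }
}
```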
You can do a similar thing for for-comprehensions. If you pair up for-comprehensions, you can define a crossover operator. In fact, for all of the two-part term constructors, you can define these genetic operators.
Christian: The for-comprehension case is really interesting because you’re mixing not only the continuations but also where you’re looking—and then, vice versa, you can evaluate.
Greg: I agree with you. You’re spot on, because you get more progeny. You have this ability to turn code into running processes and running processes into code, so you have more total options for generating progeny. That’s an important observation. You made a really good observation when we talked about this a while back: you said that if you apply these operators recursively, that might be a way to generate new alleles.
Christian: You could completely interlace two codes and that would produce a ridiculous amount of possible progeny, and then somehow you want a good way of selecting from that randomly.
Greg: That’s correct. You can imagine applying this to the generation of tests. I have often used in testing a technique called fuzzing. In fact, I’ve got a whole approach to this where I define a mechanism whereby you generate a stream of terms. You give not just one, but an infinite stream of terms. That stream is seeded by some initial set of conditions.
Here we’re talking about generating that stream of terms from this genetic-like reproduction. But now, in the case of testing, we have a clear sense of what it means for a test to succeed. It means that when you put it in parallel with some program that you’re testing, it causes that program to fail in some way or to behave in a manner that doesn’t comply with its specification. You can now run this whole mechanism. Whenever a test is successful, then it’s selected as more likely to reproduce. If you put all the pieces together, you get this infinite stream of tests that are getting better and better at breaking your programs, which is cool.
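A sketch of the selection loop; `runSUT` is a hypothetical test harness, stubbed here so the snippet stands alone:

```
new tests, survivors, runSUT in {
  // Hypothetical harness: it would run the candidate test in parallel with
  // the system under test and report whether the composition violated the
  // spec. Stubbed here to always answer false.
  for (_, verdict <= runSUT) { verdict!(false) }
  |
  // Selection loop: tests that break the program become breeding stock;
  // crossover over survivors would generate the next generation of tests.
  for (test <= tests) {
    new verdict in {
      runSUT!(*test, *verdict) |
      for (@broken <- verdict) {
        if (broken) { survivors!(*test) } else { Nil }
      }
    }
  }
  |
  tests!(Nil)   // seed the stream with a trivial initial candidate
}
```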
Now we can tie it back to the question that you raised earlier, which is how do we control the functional behavior part. If the functional behavior part is a smart contract that has been deployed by a client, how do we resource constrain that? It’s related because if you analyze how reflection is showing up in the Rho calculus, it shows up as a loop in the syntactic constructors. You get a kind of fixed point. We talked a little bit about that. Mike pointed out this construction when we talked about the Rho Combinator last week.
You can do that loop here. At a minimum, you could pick out a B that is also metabolically controlled. You could have a B that is an iteration of the validator harness, this lifelike validator harness, on top of some other more constrained functional behavior.
That’s how we answer your question. I can just articulate B so that it’s only allowed to run in this metabolically controlled fashion. The next piece of the puzzle that’s really interesting is that we can make this a fixed point. We started with the idea that B is equal to the lifelike process applied to S and some B prime, where B prime is the actual client request. Now we can modify that and say that this recursive operator, say R(P), is defined to be P(S, @R(P)).
Now we’ve got the loop, and this gives you an infinite tower of validator behaviors. I actually talked about this with Jack Eck, who worked with me on the DAO bug paper and did some initial work on Rholang—the idea that you could have this infinite tower of validator behaviors. Here now we’ve written that down. We’ve bitten the bullet and said, “Well, what does a validator-like behavior look like? Okay, here’s what it looks like.” Then it turns out that it’s written in such a way that you can actually characterize this tower as a fixed point, by recursively closing up the equation.
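One way to tie the knot in Rholang—a sketch; `PLife` in the closing comment refers to the hypothetical harness from the earlier sketches:

```
new R in {
  // Fixed point: R(P) = P(S, @R(P)). The harness p receives, as its
  // functional behavior, the code that re-invokes R itself, so each level
  // of the tower runs (simulates) the level below it.
  contract R(p, s) = {
    // The second argument, R!(*p, *s), is shipped unevaluated: p gets it as
    // a name and decides when, or whether, to express the next level.
    p!(*s, R!(*p, *s))
  }
  // Usage would look like R!(*PLife, *stakingAddress), with PLife the
  // metabolically controlled harness sketched earlier (both hypothetical).
}
```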
Christian: What does this term mean exactly?
Greg: You’ve got a wider notion of validation. It’s like a tower of simulators: each behavior is being simulated at the level outside of it. It’s very reminiscent of the reflective tower of 3-Lisp. For people who are interested, Brian Cantwell Smith came up with the idea of reflection initially; he was a student at MIT and coded it up as 3-Lisp, but he expressed it in terms of this infinite tower. It was quite mysterious to a lot of computer scientists until Friedman and Wand published a seminal paper called “The Mystery of the Tower Revealed,” in which they give a non-reflective account of the tower, which is similar to the fixed point construction I’ve given here.
Christian: They don’t tell you about this kind of computer science…
Greg: It’s good, fun stuff. I heartily recommend that people read Brian Smith’s Ph.D. thesis, and then “The Mystery of the Tower Revealed.” Brian built the language called 3-Lisp, and then Friedman and Wand came up with the language called Brown. It gives you the same kinds of behaviors but with this non-mysterious presentation.
Interestingly, we can classify different lifelike behaviors. These two ideas—the application of genetic algorithms and classification—actually go hand in hand. A lot of biology is essentially categorizing things: the classification scheme of kingdom, phylum, class, order, family, genus, species. That’s a rubric for the biologist to give some structure to the zoo of life forms.
But the LADL approach, the generation of types, allows us to come up with types that cover, at a minimum, the functional behaviors. You could imagine using this at compile time, to insist that the only functional behaviors that get attention fit a compile-time characterization of a certain behavior, but you could also do it at runtime: you search for behaviors that match a particular type across a stream of submitted client requests.
This allows the validator to be selective in terms of which client-requested functional behaviors it accepts. I think that’s of considerable value because it allows for the evolution of a market, where validators favor certain kinds of client requests on the basis of what it is that they do.
Christian: That’s important.
Greg: That’s an advantage that is not currently contemplated in other smart contracting platforms.
Christian: In other platforms, they have no way of distinguishing, and they’re just validating requests?
Greg: They’re just validating. In Ethereum, the validators don’t have any view or insight into the client requests themselves. This is a fundamentally different approach. It’s also different because we can lift our genetic programming techniques at the term level to the type level. All those crossover operators that we defined for terms we can also define for types. This is because, as we discussed earlier, every term constructor that was bipartite—taking either two processes, two names, or a name and a process—gives rise to an obvious crossover operator.
With LADL, every term constructor is also a type constructor. That’s one of the main points. We can lift all of our crossover machinery up to the type level. This is important because we can now do this co-evolutionary thing, where we’re looking to classify things at the same time that we’re looking for things.
An example for me, just from my own experience, is the mathematical process. My inner dialogue says something like: this must be true. Then another part of my brain says: wait a minute, what about this counterexample? The player is proposing theorems—these are types—and the opponent is proposing counterexamples—those are terms. There’s a kind of type-term dialogue happening, which is co-selecting for classification schemes and counterexamples to those classification schemes.
This is the kind of thing that we can now do and automate with the Rholang machinery, with the Rho calculus machinery. That’s incredibly powerful on the scientific side—to go and do a search for lifelike computational behavior and classify it—but it’s also incredibly powerful on the blockchain side itself.
Christian: You could probably utilize the results in a game semantics.
Greg: I agree. 100%. One of the things that I have talked a lot about in private but not so much in public is my opinion of deep learning and its approach to AI. When I was growing up in computer science, AI was not just neural networks. It was a much bigger field, including genetic algorithms. I’ve become rather sad that deep learning has dominated the scene recently, because it suffers from a basic weakness: it’s not compositional. If you have a neural net that recognizes the age of faces and a neural net that recognizes the expression of faces, it’s not so easy to combine them into a neural net that recognizes the age and expression of faces.
We can contemplate different architectures there, but the point of things like the Rho calculus or the Lambda calculus or the various computer science machinery that we’ve applied in the RChain setting is that it’s compositional from the ground up. You have compositionality built-in. You don’t have to go and discover compositionality later. That turns out to be fundamental to composition itself. Compositionally defined structures typically coincide with monads or, at the very least, some kind of endofunctor.
Trying to find it post facto is often misguided, to put it bluntly. Here we’ve got this compositional setting. The other side of this is—Mike has told me that there are some techniques for trying to find the why of a neural network—but here we have the building blocks. We could actually say what the notion of an explanation is.
A run of this classification-counterexample game gives rise to a crisp notion of explanation. We’ve organized it so that we can get a why, and we can compose. We’ve done all this in an extremely compact form, in a setting where we can resource-constrain these genetic algorithms—we can resource-constrain these search techniques. That makes for some of the most powerful AI building blocks that you could have.
That’s why I think this ends up being kind of fun in that regard. But there’s a wider picture that I want to get to next time. Just as a teaser, we can take these ideas about lifelike, infinite towers of simulations and reapply them to our notion of an emergent space-time, and start to get a really different angle on how to think about physical phenomena.
Christian: Amazing stuff.
Greg: Well, we’re having fun, right? The whole point of research is to at least have some fun. Probably most of the ideas are wrong, but maybe once in a while there are some gems in there. We’ve got to have some fun along the way.
Any final thoughts or comments on your side, Christian?
Christian: I just really appreciate that this is your guiding perspective. I think it is so important. It’s not a coincidence that it’s going to be a natural and better framework. I think it’s going to make programmers think differently about how they’re designing all this stuff, when there’s this clear analogy and the programs are not just floating by themselves, but are explicitly like these organisms that are subjective and constrained. You showed a ton of really great ideas today. They’re all awesome.
Greg: That’s very kind of you. Hopefully, we’ll get more and more people pounding on them and we’ll see if they have any merit at all.
Here is a transcript of this talk I did on otter.ai:
https://otter.ai/s/716b934a