Greg Meredith talks to Isaac DeFrain and Christian Williams about the last year of RChain research.

Transcript

Greg: I was looking over the drive where I keep the slides and I wanted to go back over the results that we’ve been talking about since June. I’m not sure that everyone follows all the RCasts sequentially, so I wanted to give a chance for people to catch up and take stock of what we’ve done. Does that sound good to you guys? Do you mind doing a retrospective? 

Isaac: That sounds great. 

Greg: In the last few months we’ve had the discussion of the space calculus and its application to error-correcting codes and their application to liveness. Essentially, we’ve got a notion of programmable liveness that we’ve encoded in a variant of the Rho calculus. The variant of the Rho calculus itself is quite interesting: the notion of name coincides with the notion of a location in a term. Those are pretty profound results.

The fact that the calculus allows you to give a notion of programmable liveness, which we’re putting into RChain and testing with it these days, is a really important milestone. Do you guys recall those discussions? Now that you’ve sat with them for a few months, is there anything that stood out for you?

Isaac: When we talk about a name representing a location in the term, does this do more for us than just a context? How does it relate to a context?

Greg: That’s a great question. It goes back to the notion of zipper. Thirty or forty years ago, people were interested in the question of applicative data structures. By applicative data structures, what they were referring to was immutable data structures that you could encode directly in the Lambda calculus without the issue of having to do so much copying. The nice thing about mutability is that you can do direct, in-place modification, which means that you don’t have to reconstruct a lot of the data structure.

In functional programming, if you have immutability, to make a change you have to copy a bunch of the data structure over, insert the change, and then continue with the rest of the copy. What a French computer scientist and mathematician did was to realize that if he kept the context as a part of the data structure—let’s talk about a tree—you split the tree into a tree with a hole in it and the sub-tree that you plug into the hole. That’s a notion of location. A context by itself is not a location, because you could plug multiple different sub-trees into that hole. A sub-tree by itself is obviously not a location, because we’re trying to find locations in trees. When you have the pair, you have a notion of a location. Conor McBride showed that you could generalize this not just to trees but to any differentiable data structure. He gave a definition of a differentiable data structure in terms of a differentiable functor. He identifies a class of differentiable functors. The differentiation is what gives you the context.

If you have some functor T, you can take its derivative ∂T, and pair that up with a T—the T is the sub-gadget, the sub-T that’s going to fit into the hole inside ∂T. That’s the notion of location. It turns out that you can differentiate the Rho calculus term constructor. This is closely related to notions of equivalence in these kinds of calculi. People use the notion of context to give a notion of equivalence: two terms are equivalent when substituting them into all contexts gives you the same result. This notion of context was developed independently in the programming language semantics community. We’re taking advantage of this reflective idea of program as data and data as program. We’re making a version of the Rho calculus in which the names of the calculus are locations.
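
To make the location idea concrete, here is a minimal sketch of a Huet-style zipper for a binary tree in Scala. This is my own illustration, not code from the slides: the pairing of a one-hole context with the subtree that fills the hole is what plays the role of a location.

```scala
// A minimal zipper sketch (illustrative, not RChain code).
sealed trait Tree[+A]
case object Leaf extends Tree[Nothing]
case class Node[A](left: Tree[A], value: A, right: Tree[A]) extends Tree[A]

// One layer of context: which way we descended and everything we passed on the way.
sealed trait Ctx[+A]
case object Top extends Ctx[Nothing]
case class WentLeft[A](parent: Ctx[A], value: A, right: Tree[A]) extends Ctx[A]
case class WentRight[A](left: Tree[A], value: A, parent: Ctx[A]) extends Ctx[A]

// A location = (tree with a hole in it, subtree that plugs the hole).
case class Loc[A](ctx: Ctx[A], focus: Tree[A]) {
  def downLeft: Option[Loc[A]] = focus match {
    case Node(l, v, r) => Some(Loc(WentLeft(ctx, v, r), l))
    case Leaf          => None
  }
  def up: Option[Loc[A]] = ctx match {
    case WentLeft(p, v, r)  => Some(Loc(p, Node(focus, v, r)))
    case WentRight(l, v, p) => Some(Loc(p, Node(l, v, focus)))
    case Top                => None
  }
  // Edit at the focus without rebuilding the untouched parts of the tree.
  def replace(t: Tree[A]): Loc[A] = copy(focus = t)
}
```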

The ordinary comm rule doesn’t work, because the comm rule needs a context; it’s missing two things when it does the substitution. It only has the sub-term, but it needs a context. When you run a for comprehension, for( y <- x )P, against an output x!( Q ), you’ve only got the Q. But you need a Q-and-context pair. That’s where we utilize this notion of catalyst, which is guarding a context.

Christian: When we first started talking about this space calculus, I was amazed at what a huge generalization it was from the Rho calculus. As soon as you incorporate the context into the communication rule, you can be continually teleporting to completely different situations. It widens the range of possibilities so much.

Greg: Absolutely. If you look at this idea of explicit context, basically that’s what’s underlying every notion of call-with-current-continuation. All the delimited continuation work, call/cc, all that stuff is about being able to teleport, as you say, into other contexts. It turns out that that is closely related to transactional semantics. The context is the surrounding computation that you return into. That tells you what your transactional boundary is.

From that perspective, there should be no surprise that in fact we get this notion of programmable liveness coming out of this context teleportation idea. We could take a whole generation of grad students and have them elaborate and fill out those results.

That’s obviously important because the co-op has been testing Casper under fairly severe conditions. We see that Casper suffers from liveness considerations that are not like what was pointed out by Kyle and company when they looked at Casanova. There are other issues that face Casper and other proof-of-stake protocols. They’re all going to be facing these kinds of liveness considerations. It’s really important to have a framework in which to be able to reason about this stuff and quickly explore options. That’s exactly what we’ve done. It’s not just really interesting mathematics and really interesting computer science. It has practical application immediately.

Christian: Did you say that the dev team has started using the space calculus?

Greg: No, they’re not using the space calculus. They’re using the stuff from the programmable liveness. You can boil it down to the justification structure. It becomes constraints on justification structure. 

Just this morning, we recognized that you can use the same idea on the other side. A few weeks ago, we talked about how liveness and fairness are related but they’re not the same. We’ve put in the basic fairness structure, but then we realized that we can elaborate that so that we utilize the justification structure to give us different priorities on blocks. To make this clear, the synchrony constraints are on the production of blocks, on the production side of the validator. The fairness constraints are on the consumption side. The validators are looking at different blocks from other validators. The fairness is all about making sure that they give equal opportunity to the different validators.

Here you can utilize the justification structure to also influence priority. If on the consumption side, a validator is receiving blocks from another validator and those blocks are primarily self-justified—you have a block justified by another block that was from the same validator and no other justification in there—then that can be de-prioritized. You can use the same tricks on both the consumption side and the production side, which is a lovely extension of the results. It wouldn’t happen if we didn’t have this framework.
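
As a rough sketch of the consumption-side idea (hypothetical types, not the actual Casper code): a validator receiving blocks could score them by how diverse their justifications are, and push mostly-self-justified blocks to the back of the queue.

```scala
// Hypothetical sketch: de-prioritize blocks whose justifications only point
// back at their own sender. Types are illustrative, not RChain's Casper code.
case class BlockMeta(hash: String, sender: String, justificationSenders: List[String])

def isSelfJustified(b: BlockMeta): Boolean =
  b.justificationSenders.nonEmpty && b.justificationSenders.forall(_ == b.sender)

// Blocks justified by a wider set of validators come first.
def consumptionOrder(incoming: List[BlockMeta]): List[BlockMeta] =
  incoming.sortBy { b =>
    val distinctOthers = b.justificationSenders.toSet.count(_ != b.sender)
    (if (isSelfJustified(b)) 1 else 0, -distinctOthers)
  }
```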

Christian: If this podcast is a review, would it be good to go back to how context structure gives you a notion of programmable liveness? 

Greg: If you look at error-correcting codes, those can be viewed in two different ways. One is how you modify the message that’s sent, versus the programmatic behavior that corresponds to modifying that message. If you look at the example of an encoder-decoder pair, if they’re sending a single bit, they can send three copies of the bit. The decoder looks at the three copies and then does a translation from the different outcomes into an interpretation of the event that actually happened. Whether you copy the bit three times and send that over, or send the bit three times, there’s a little bit of a discussion about the duality between those two.

If you do that, then on the other side, three zeros is clearly a zero; three ones is clearly a one; two zeros and a one, you interpret that as majority wins—that is a zero. Two ones and a zero, that’s a one. We’ve covered all the cases. That’s a notion of error-correcting. You can turn that encoder-decoder pair into a context. The context is essentially implementing the encoder-decoder pair. 
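
As a small worked example (my sketch in Scala, not from the slides), here is that three-copy repetition code with majority-vote decoding:

```scala
// Encoder: send three copies of the bit. Decoder: majority vote over what arrives.
def encode(bit: Int): List[Int] = List(bit, bit, bit)

def decode(received: List[Int]): Int =
  if (received.count(_ == 1) >= 2) 1 else 0

// A single bit-flip in transit is corrected:
val sent      = encode(0)         // List(0, 0, 0)
val corrupted = List(0, 1, 0)     // one copy flipped by the channel
val recovered = decode(corrupted) // 0, the original bit
```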

I can go into further detail but people can go back to the slides and take a look at that.

Christian: Which slides are they again?

Greg: Programmable Liveness

If you pick just a particular encoder-decoder pair, that corresponds to picking a particular context. Then you’re stuck with one error-correcting strategy for the entire life of the protocol. You can do better than that if you can change the context as you go along. That’s where the programmable part of the liveness construction takes place.

The context is giving you the ability to code up your encoder-decoder pair. The fact that you can change the context as the protocol evolves is the programmability of that encoder-decoder pair. That’s the programmable liveness bit. 
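
Continuing the illustrative sketch above, the “programmable” part can be pictured as treating the encoder-decoder pair as a first-class value that the protocol can swap out while it runs. This is only a cartoon of the idea in Scala; the real construction lives at the level of contexts in the space calculus.

```scala
// The encoder-decoder pair as a swappable value (a cartoon of a context).
case class Code(encode: Int => List[Int], decode: List[Int] => Int)

val tripleRepetition = Code(
  bit => List.fill(3)(bit),
  received => if (received.count(_ == 1) >= 2) 1 else 0
)

val fiveRepetition = Code(
  bit => List.fill(5)(bit),
  received => if (received.count(_ == 1) >= 3) 1 else 0
)

// The running protocol holds its current Code and can upgrade it mid-flight.
var current: Code = tripleRepetition
current = fiveRepetition // switch error-correcting strategies without restarting
```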

The other thing that we did, which I thought was really fun, was look at the biologically-inspired consensus. Then we went on, in Towards A Living World, to ask: what would it be like to have more lifelike computational agency—resource constraints and, beyond that, the ability to replicate? You connect the resource constraint to the continued existence of the validators, and that looks remarkably like Casper. It looks remarkably like the RChain architecture.

This is just following biological agency. It gets better because reflection gives you this genomic phylogeny-ontogeny distinction. There is a lot of structure that we can exploit from the biological world directly in the Rho calculus setting or in this reflective setting. It turns out to allow us to mimic those kinds of things. 

My higher-order argument has been that our best bet (in terms of scale) is to imitate nature as much as possible. Nature scales way better than humans do. Our artifice is wonderful, but it just doesn’t match the kind of amazing things that you see, where you get an entire air-cleaning system built out of these distributed components, in terms of the forests of the planet built from these autonomous agents, the trees. I feel like that was also really useful and important work because it helps shore up this claim that I’m making. It provides mathematical- and computer science-based evidence toward this claim. Did you guys have any particular reactions or thoughts?

Christian: I just thought that stuff was really cool. 

Isaac: It’s natural to think of the analogy to chemistry and reactions being some sort of interactions with Rho calculus terms or Rholang. That analogy is very solidified at this point with the chemical abstract machine. To see the analogy to biology was really interesting.

Greg: Absolutely. In fact, we showed that there was a different way to encode reaction rates. You can get a different model of stochasticity in the Rho calculus. That shows some of the power of namespaces. I’m still a little bit stung by a recent reviewer who did not understand the power of reflection when it comes to being able to define namespaces. That’s largely because they’re not thinking about the internet and how the internet is carved up into namespaces. That’s what URIs are all about: carving up the locations at which we hang these resources into namespaces. Being able to do that in a way that is interlocked with the execution is really important. This reaction-rate work, where you encode stochasticity in terms of namespaces, is another example of the power of these reflective namespaces.

I keep wanting to come back to the fact that each one of these pieces is done within an equational setting. We give a formal system that can be directly encoded in a wide range of existing theorem provers and model checkers, and all of the results we’ve pushed forward are in terms of these equational arguments.

Christian: The analogy between reflection and the distinction between phylogeny and ontogeny is really deep and useful. What was your running example in that podcast for an application?

Greg: It turns out that the architecture of the validator is exactly that. We forget about validation, we forget about RChain, and we just say, I want a resource-constrained program that has to eat to live and eat to reproduce. It turns out that when you write that down, it’s almost identical to the code structure of validators in Casper. That was the running example. Here’s a validator; we derived it from first principles. Just a program that has to eat to live and reproduce.

Christian: Did we talk about how the philosophy of that kind of design contrasts with modern approaches to AI?

Greg: We did. This gets right to the heart of LADL, which I’ve been holding in reserve. Reflection allows for genetic programming and genetic programming has a certain amount of exploration. There’s some data that suggests that there’s probably a there there. Reflection affords the ability to make that more streamlined. You can get programs that reason about their own structure and evolve their own structure. That’s clearly moving toward AI. 

When you add LADL in, you can start to constrain how they evolve according to logical constraints. You would like to be able to program ensembles. Imagine that you pour a bunch of chemicals or organisms into a soup. How do you program those chemicals and organisms? Part of the way to do that is to program at the logical level, which is programming at the level of collections.

That’s part of the thing that’s really important about LADL. We want to provide this new style of programming, which is declarative in some sense, but it’s not declarative programming in the sense of Prolog. It’s declarative in the sense that you’ve got this population of autonomous agents that you want to evolve toward a particular behavior. The logical interpretation of LADL says that a formula is a collection or a population of programs that evinces a particular property. If you can program at the level of the logic, that allows you to program in terms of these evolving populations.

It also gives you this ability to compose. In this day and age, AI is synonymous with deep learning, which is just a fancy word for neural nets. They suffer from two problems. One is composition. We don’t have a clear compositional approach to neural nets. What you’d want is an equational presentation: this net does this; that net does that; when I combine them, I get a net that does this and that, or this or that, or this tensor that, whatever your logic is. We don’t have that for neural nets.

The other thing is we don’t have a why. It’s very difficult to extract a why from neural nets. We get a net that can recognize something or predict something, but we don’t know why—the only why is, these neurons connected with these weights recognize the situation. But there’s not a lot of why. Whereas with LADL, you can go back and forth between the term and the type; the type is telling you the why and the term is telling you the how. That’s very important if you want to be able to have AI systems that can explain to you how they got what they got.

Christian: It amazes me that more people aren’t standing up and reaching that conclusion, saying, Hey, this isn’t right. This clearly is not long-term thinking.

Greg: I agree 100%. The issue is just that you get a few results, some of the language results or some of the image recognition results, and that’s exciting, but it’s not the solution.

Isaac: Simon Sinek has this notion of an infinite game. This is a mathematical notion, but he brings it more into the business world. His theory is that people aren’t really thinking of this infinite game. They’re not thinking of doing things because it will benefit them in the long run and seeing the whole picture. You’re in a finite game to produce a result, not really thinking about the long-term game. That’s probably partially the reason why people aren’t thinking in that way; they’re just doing what gets them results in the short term.

Greg: I think you’re right. There’s a game-theoretic aspect to this. On a different angle, one of the things that we talked about was basic extensions of Rholang. In the presentation that we went over, we added three constructs to the language. We showed that in each case you get a high degree of compression as a result of these language constructs. Then we gave the semantics of those language constructs in terms of the primitives, showing that we haven’t changed the semantics of the language. We’ve only essentially added some macros that give a high degree of compression. Once you convince yourself that the semantics doesn’t actually change, then you can use the information that these higher-level constructs provide to optimize the execution. Again, all of this is done equationally. It’s correct by construction.

Christian: Are you talking about the radical operator?

Greg: No, I’m talking about the Rholang 1.1 presentation. We added extra decorations in the for comprehension. You have an x?! and an x!?. You change the meaning of the semicolon and introduce an ampersand to be where the semicolon used to be. That’s what happens in the for comprehension. In the sequential output, you now have two decorations: you can write something like x!?( data ); P, and now you have sequential output. That matches with one of the constructs in the modified for comprehension. That gives you some fairly significant compression. What was the last one?

Isaac: The let binding. 

Greg: Yes, the let binding. Thank you.

Isaac: These are intended to be implemented in Venus, right? 

Greg: That’s correct. The point is that RChain research looks out in front of RChain present. We’ve now listed a bunch of results, and they run the gamut from the practical—how do we modify the programming language?—to the aspirational: if we wanted a more biological, programmatic agency, what would it look like? That kind of thing. As you said, Christian, one of the things that was also really useful was the radical operator, because that provides an inspirational example of how to do an equational presentation of a leaderless, distributed consensus protocol.

Notice that we didn’t say anything about economically secured, but when we look at the reflective proof theory, that adds in the economically secured bit. Because it’s the resource R that we’re using as the catalyst to enable these things that is the economically secured part of this. We’re steadily inching toward a meeting of the two approaches. 

We have this notion of this equationally presented fault tolerance, the square root. We also have a similar kind of presentation of a proof theory that has the resource constraint in it. The nice thing is that that also connects with what we were saying about programming, this new kind of logic programming where it’s declarative, but you’re doing this for entire populations. The proof theory is a step in that direction.

Christian: I hadn’t thought about how those two would interact. Would there be some way of splitting up a resource among the roots? 

Greg: Yes. That was part of the motivation for talking about this red-black calculus, where you have the resource for the red calculus is a program in the black calculus and the resource for the black calculus is the program in the red calculus.

Isaac: So you can have N different colors and have it…

Greg: …or a particular resource. It’s a Rho program, at least in the case of the Rho calculus. It could be a par; it can be split up. It’s also something that we can run a root on. All of those things are now available for us.

There are some interesting questions that come along the way. For example, Tomislav was asking, does the proof theory construction extend to a stochastic calculus? That’s a really interesting question. I don’t know the answer right now.

Christian: What does that mean?

Greg: We talked just a few minutes ago about being able to model chemical reactions with the calculus. What that entails is that you have a notion of reaction rates on various reactions. In the case of a name-based calculus, like the Rho calculus, what that effectively says is that you’re assigning rates to names. When you have a race that involves names of different rates, the rates will determine stochastically the winners of the race. It turns into a sampling rate on the pool of reactants. 

That’s how you would encode stochasticity. By the way, what that means is that RChain itself is an implementation of a stochastic version of the Rho calculus. The validators using market determinations to select from amongst the comm events being proposed by clients is actually a stochastic calculus.
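
As a rough illustration of what rates on names buy you (my sketch, not RChain’s implementation): if every comm event in a race carries the rate of the name it fires on, the winner can be sampled with probability proportional to that rate, which is exactly a sampling rate over the pool of reactants.

```scala
import scala.util.Random

// Each competing comm event carries the rate of the name it races on.
case class CommEvent(label: String, rate: Double)

// Sample a winner with probability proportional to its rate (Gillespie-style).
def sampleWinner(events: List[CommEvent], rng: Random = new Random): CommEvent = {
  require(events.nonEmpty, "need at least one competing event")
  val total = events.map(_.rate).sum
  val draw  = rng.nextDouble() * total
  // Pair each event with its cumulative rate and pick the interval the draw lands in.
  val cumulative = events.zip(events.scanLeft(0.0)(_ + _.rate).tail)
  cumulative.collectFirst { case (e, cum) if draw < cum => e }.getOrElse(events.last)
}

// Example: a race between two names, one ten times "hotter" than the other.
val race = List(CommEvent("x", 10.0), CommEvent("y", 1.0))
// sampleWinner(race) returns "x" roughly ten times out of eleven.
```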

To the best of my knowledge, the state of the art is that Marta Kwiatkowska and her group were able to derive a stochastic logic for a stochastic version of CCS, but no one has extended this to a logic with mobility. There’s stochastic Pi, and by extension, there’s stochastic Rho and other things like that, but no one has been able to derive a stochastic Hennessy-Milner logic or a stochastic spatial behavioral logic. It would be quite interesting to see if we can extend the proof theory approach that we’ve got here to stochasticity, using the resource as the means to influence the sampling of the different potential reactions.

Christian: Using the resource as an incentive, you’re saying?

Greg: The distribution of the resource is going to turn into a kind of sample rate. If I sprinkle this much resource, then I get these reactions happening. If I sprinkle this other distribution of resources, I get these other reactions. The question is, can we lift this to the logical level so we get a stochastic logic? 

The amount of fruit that we have already borne is astonishing, but it only gets better. As we’ve tried to suggest, there are so many things that we can do on the other side of this. We spent so much time in the past talking about the applications for LADL itself beyond RChain. All of this points to the power of this approach. In the short term, it gives us faster transaction rates, better throughput, better security. But in the long term, it gives us a much wider range of features. We get a lot more features, a lot more bang for the buck. The research can sketch out the direction for the development in a fairly detailed way. We’ve got these equational presentations that turn into executable specifications.

Christian: It’s quite a spectrum of ideas over the past six months or so. Which of these ideas are more recent and which of them have you been building up over the past decade?

Greg: That’s a really good question. Things like the Rholang features, I’ve had those in the back pocket for a very long time. The space calculus was a long, slow growth. I saw the equation in 2008, but I couldn’t write down the calculus that was a solution to the equation until 2017. Once the calculus was there, starting to see the applications was very recent. This whole idea of encoding error-correcting codes this way came this year as we were wrestling with the liveness stuff. The reflective proof theory is another example: I’ve largely been focused on the model theory and have taken a few stabs at the proof theory in the past. It wasn’t until the last few weeks that the pieces of the puzzle came together.

I have thought long and hard about different notions of agency, how to formalize them, and also taking a retrospective view on how notions of agency have evolved in the discourse about agency over the last few thousand years. That’s always been a curiosity to me because it doesn’t follow natural patterns. People from two and a half thousand years ago foresaw notions of agency that weren’t going to be available for a very long time, until computers were invented.

Likewise, for notions of agency that were immediately available to them, right around them in their present moment, their significance and how to encode them formally wasn’t worked out until the last few decades.

Christian: Could you briefly explain both of those points? What did they foresee?

Greg: If you look at the notion of agency that’s embodied in Aristotelian logic, it’s the agency of logical decisions. The resources are all copyable. There’s no cost to copy the resources. Without ever seeing the digital world, they codified what ultimately becomes classical logic, which has this crazy notion of continuation embedded in it. The elimination of double negation is directly related to call/cc, and the resources are copyable. They’re perfectly copyable without cost.

That’s not what their world was like. When you have a bow and arrow, once you’ve gone through all the arrows in your quiver, you’ve got to make more arrows or you’ve got to buy more arrows or somehow obtain more arrows. There’s a notion of machine agency that is clearly resource-constrained. The same with a catapult. When you run out of rocks, you’re not throwing any more rocks at the castle.

Christian: Maybe life was pretty sweet back then. Maybe they could copy and delete stuff. 

Greg: Maybe that was it. It wasn’t until we had Jean-Yves Girard and linear logic, his insights, and the way they were then echoed in the Pi calculus, that we began to understand how to represent these notions of resource-constrained agency. This impacts logic. It impacts the notion of proof and everything. It’s not until the early nineties that Abramsky pushes forward this idea of proofs as processes. That’s a lot of intervening time.

The notion of agency is all wonky and backward if you look at it historically. There’s still quite a ways to go in terms of getting a formal notion of agency that includes biology and biological imperatives. That was part of the reason I was trying to talk about biologically-inspired consensus and showing that the validator structure actually matches structures that are homologous to biological organizations meeting certain imperatives.

That’s still very nascent. My point is that if we take the notion of agency seriously, if we look at it and we say, this is worth codifying, then the different qualities or aspects of agency turn out to be very practical. It tells us how to organize our computations with an eye toward particular things, like biological scale, as an example.

Another example of that moving upward is economic agency. As I’ve tried to argue multiple times, there is no real analog of currency in the biological world. We just don’t see it. I don’t think anyone wants to argue that it’s not an effective coordination mechanism. It’s been one of the most effective coordination mechanisms for homo sapiens ever. That’s why it’s so entrenched. It’s like a Schelling point or something. It’s an attractor. It’s very hard for us to get out of that way of thinking.

It would be much better to formalize it and say this is the kind of agency that we’re talking about. I don’t believe that the economists have done that, certainly not with the kind of desiderata that we’re asking for. An equational presentation, a notion of equivalence like bisimulation, a notion of type—all of those things should now be the gold standard for any of these disciplines. Linking these different ideas together is really important. Stitching together what we ought to demand of our formalisms as we do more implementation is the future—how we should go forward. We want machine-checkable theories.

Isaac: We were talking about Aristotelian logic not being resource-constrained. Two thousand years later, Girard and Abramsky are positing these resource-constrained logics with linear logic. Thirty years later, you’re trying to bump it up two more levels of realism: from no resource constraints to resource constraints, then to the agency of individual biological entities, then to economic agency. I commend your efforts in trying to do that because those are not simple tasks.

Greg: No, they’re not. It’s ridiculously ambitious. It’s more about holding our feet to the fire. These desiderata, which we’ve known about for a very long time, get lost in the mix. Suddenly, we have this great success in machine learning and everyone forgets all of the desiderata.

Isaac: Even if these are not things that are solved within our lifetime, these are at least things you’re putting out there so that people who are interested in this sort of thinking will hopefully start their journey with them in mind and try to make some progress.

Greg: I think that we have made a lot of progress. Even if there was no progress whatsoever, these are the desiderata for obvious reasons. The fact is that we’ve made just an enormous amount of progress; this is nontrivial stuff, and the contributions that RChain has made speak for themselves.

Isaac: It’s exciting stuff. We’ve talked about a lot of really great things in the past few months.

Greg: To me, the most pressing thing, at least for LADL and a lot of these other ideas, is to get the nominality appropriately modularized. There’s really good evidence that it can be done. We have James Gabbay’s work and we have Marcelo Fiore’s work. Can we take it a step further and plug it into the frameworks that we’ve developed? No pressure, Christian.

Christian: It can and will be done. It’s just a matter of time. There are four different kinds of binding. There’s term-in-term binding, like a Lambda abstraction. Then there’s term-in-type binding, which is dependent types. Then there’s type-in-term and type-in-type. Marcelo has done all of them separately. Now it’s a matter of how to get them all to fit together in the same framework. That’s active work right now. We’re going to need more than term-in-term binding. We also need binding at the type level to properly handle the LADL types. That’s what we’re thinking about right now.

Greg: Interesting. I hadn’t even thought about that. Art is long, life is short. This is great. I’d love to see an analog of this system happen for the Rho calculus. That would be lovely.