
RCast 35: Atypical Currency


Greg Meredith and Isaac DeFrain discuss the future of currency.


Greg: I want to talk about a very odd idea. It will strike many people as strange. Hopefully, I can offer motivations for the idea and then move on to greater levels of technical depth. The idea is to think about currency (capital, money) in terms of programming concepts. In terms of computation, money is untyped; it has a very specific property, which is its biggest strength in terms of its adoption. If you go back to the origins of the use of capital, this property of money is why it was adopted in the first place: you can convert money into anything and anything into money. That’s the idea. 

It’s a lot more efficient to trade cowrie shells at the market and then later deliver bales of hay or barrels of fish or kegs of ale than it is to lug all that stuff to the trading depot. If things don’t go exactly as planned, you have to lug a whole bunch of it back, with some of it spoiling. There’s an efficiency that’s gained by doing the exchanges symbolically and then later mapping that exchange onto a delivery of goods and services. 

That efficiency has extended into electronic and internet-based markets. It ends up having consequences that we see every day, but because everyone hearing this podcast was born in a post-money era, it’s hard to imagine how it might be different. 

If we come at it from the angle of computing or computation, things get a little clearer. This ability to convert anything into money and money back into anything corresponds to being able to set a price for something. There’s this old adage that everything and everyone has their price. That refers to this property of money that we can set a price for something, and because we can set a price for something, then we can miraculously change water into wine. We have a price for water and we have a price for wine. That’s the essence of this property. 

If we are C or C++ programmers, then the entity that corresponds to this is void* at the type level. If I have a pointer to any kind of data type, I can always cast it to void*, which is now a pointer to something where I have no type information.

In languages like C and C++, that’s called upcasting. I’m throwing away type information, which means I’m going up toward the top of the type hierarchy, meaning I have less and less information. When I go down further in the type hierarchy I gain information. 

If I cast to void* and then do some manipulations (for example, I might serialize the data, ship it across the wire, and then deserialize it), now I would want to figure out what kind of thing this is. Downcasting is asserting that this thing that was void* is something like a pointer to an integer, or a pointer to some structure that contains information, let’s say, about a shipment of apples or a shipment of oranges. 

That kind of downcasting is not safe: because we lost all the type information, we don’t know if it really is the thing we are asserting it is without some out-of-band check. In a language like Javascript, this is equivalent to saying that what I’ve got is an object. That’s taking it all the way to the root of the type hierarchy. In Java, it’s saying: this thing that I’ve got, I forget all my type information about it. Once I’ve lost that information, once I’ve forgotten that information, recovering it has to happen out of band. Therein lies the rub. Things may go south. Providing information out of band is basically just saying, I’m going to carry the type information along some secondary channel. 

That’s a lot to digest. Does that make some sense?

Isaac: Yeah, definitely. I really like this analogy of money or currency and how it’s similar to this notion of typecasting and information about a type. Essentially, that just corresponds to what can you use the currency for, which is just about anything. This is a really cool idea. 

Greg: I’m glad it’s resonating. It’s one of these greatest strengths/greatest weaknesses tradeoffs. Money’s greatest strength is that it allows us to convert anything for which we can set up price. The tradeoff is that there is no objective valuation. There’s no yardstick that we can all agree on that will allow us to have objective adjudication on valuation. 

We see this happen all the time. Because of this property of money, the board can value the CEO’s behavior one way while the shareholders value the CEO’s behavior a different way. There’s no objective adjudication, so you have to take it to the courts. It had become some kind of social adjudication, but nothing that’s objective. 

If you follow the analogy over into the programming world, we know the problem: a lack of type safety. Wherever you have a lack of type safety, you then run into these “this thing is not what you think it is” errors. You thought it was water, but it was actually wine.

Isaac: I’d be okay with that. 

Greg: Well, not if you’re the designated driver. That ends up being a source of a lot of runtime errors in programs in C, C++, or Javascript, these kinds of languages where you don’t have type safety. Even in Haskell, which is a strongly typed language, there are places where you can do type-unsafe magic, and life gets miserable. The tradeoff is a serious one. 

The question is: Could we add type safety to currency? Can we make a notion of type currency? In the blockchain world, it’s kind of the Wild Wild West. We’re exploring all kinds of different financial instruments. Because there is so much plasticity, now that we can program currency, people are beginning to remember that currency wasn’t this God-given thing. It was an invention by humans. We can explore what roles it was designed to serve.

Is there a different way to configure or reorganize things so it could serve those roles better, or maybe identify new roles? That’s one of the most wonderful things about blockchain. In 2015, I was sitting on the Tube with Vlad and another blockchain researcher. They were discussing the design of financial instruments to achieve certain effects. I was struck by how, with blockchain, the kids could make the design of financial instruments and the gamification of human behavior second nature, in the same way that for most of us it’s second nature to be able to manage a bank account. (Well, at least for some of us.) 

This next-level behavior of being able to design currencies and financial instruments that gamify certain behaviors would just be second nature. They’ve been doing this all their lives. Kids today, who have never known a time before the Internet, it’s second nature to whip out their phone and use Google or Google Maps or any of the hundreds of other services that are part of the fabric of daily life. This is a really beautiful opportunity to suggest some things we might have missed about currency. In particular, what would a typed currency look like? The biggest difference between typeless currency and a typed currency would be the ability to constrain or restrict what the currency can be exchanged for. 

Isaac: That totally makes sense. 

Greg: There are markets where that’s the name of the game. Foreign exchange instruments are double-ended at a minimum. It’s all about, “I can do dollars for yen or I can do euros for lira.” These things come in pairs; it’s natural to talk about certain kinds of restrictions. It’s not a far-fetched idea. 

Even in today’s markets, we understand certain kinds of instruments as being about that sort of enforcement that you can only convert this into that. The general purpose thing that we’re trying to do does show up in everyday examples. It’s not as far-fetched. 

We can have our cake and eat it too, in the sense that we can ask: Is there some kind of minimal model that we could build up of typed currency from first principles and yet still stick with this idea that there’s only one kind of thing that could be exchanged? 

If we’re going into a transactional mode, then whenever we’re in that mode, there’s only one thing that can be used for transmission. We’d go down to the market and if we’re not armed with this thing, whether it’s cowrie shells or bitcoin, if we don’t have that thing, we can’t conduct transactions.

Intriguingly, this is exactly the proposition of the Rho calculus. If we’re going to do a transaction, which is a comm event, there’s only one thing that gets exchanged, and that’s code. The same is true of the Pi calculus: there’s only one thing that can be exchanged and that’s a name. But what the Pi calculus doesn’t do is connect names to processes. 

It turns out that you can make that connection. In particular, Davide Sangiorgi showed that there’s a calculus called the Higher-Order Pi calculus, in which you can also ship processes around. Then he showed that you can fully and faithfully translate the Higher-Order Pi calculus down to the ordinary Pi calculus. It’s always possible to connect names to behaviors, but in the Rho calculus, this connection is direct.

The direct connection allows us to constrain what we can trade for what. I saw this aspect of currency in 1999, looking at it from the perspective of the Pi calculus. In 2002, I saw the Rho calculus. Then I connected the dots. In 1999, I looked at this and I thought, “Oh my God, if I were to talk about this in public, the economists will beat me up: you don’t know anything about economics or trading behavior or anything. And the computer scientists will beat me up: what are you trying to do?”

Then I thought, “You know what? I’m  going to be quiet about this.” Even though I think it’s a foundational idea. It brings into question a lot of what we think about currency. In some sense, the Rho calculus is made for this idea. That’s why I gave this historical information. 

The next point really connects this to LADL and the namespace logic. It was around 2002, 2003 that Caires and Cardelli were starting to look at the spatial logics. From their point of view, the spatial logics gave one visibility into the structure of programs. You could have formulae that could detect whether or not a process was composed of two parallel processes. 

There’s a famous theorem, one of the most important theorems, about the relationship between bisimulation and the Hennessy-Milner logics. The spatial behavioral logics are a refinement of the Hennessy-Milner logics. For people who don’t have that background, the Hennessy-Milner logics are a form of modal logic. In addition to true and false, negation and conjunction, and fixed-point operators, you also have the ability to talk about the evolution of the program. 

In the case of the Pi calculus or the Rho calculus, evolution only happens via comm events. You can ask whether a particular input-output event is possible or whether it’s necessary. Possibility says that one of the things the process could do is this comm event, either by doing an output or by doing an input. Necessity says that no matter which path you take out of this state, no matter what this process does, it must do this IO event to make progress.

That’s the Hennessy-Milner logics. They have this property relating to bisimilarity, which is the notion of equivalence in process calculi. Let’s say we’re talking about processes P and Q. P is bisimilar to Q if and only if, for every formula, let’s call it phi, P models phi exactly when Q models phi. 

This is a beautiful theorem because it says the Hennessy-Milner logics crisply characterize bisimulation.

Isaac: The formulae separate non-bisimilar processes.

Greg: That’s exactly right. You can do all kinds of crazy, weird, fun things. Let’s say we had a well-ordering of the Hennessy-Milner formulae. Then you could come up with a distance between processes by going down this ordered list of all the formulae until you get to that first one where they differ. That distance, the length of that prefix of this infinite list, is how far apart they are. There’s all kinds of fun, crazy stuff you can do with this information. That might be the subject of another RCast. 

The spatial logics add structural information. For the interleaving semantics of the Pi calculus or the Rho calculus, bisimulation cannot distinguish between concurrency and interleaving. That means that every parallel composition can be turned into a giant sum, which is all the different possible interleavings of all the IO actions. With the Hennessy-Milner logics, you can’t detect whether a process was written by summing up all the interleavings or by just putting a par in there.

The important thing about par is exactly this expressiveness. You replace this giant sum of all the interleavings, which is exponentially explosive, with one symbol. So P vertical bar Q covers that giant sum of all the interleavings. Part of the expressiveness of the process calculi is that they overcome this exponential explosion, which is why they are superior to state machines. 

I know this from hard-won experience. Trying to model concurrent systems where you’ve only got state machines, this interleaving stuff causes exponential growth, which brings your simulator to its knees. Whereas with a par you can do algebraic manipulations. As long as you stay up at the level of the algebra, you don’t have to unfold all those interleavings. You can do that in a lazy fashion. As a result, you can cover a lot more ground in terms of computation. In many ways, the essence of the power of Rholang is precisely that. Lots of stuff is hiding in what we do that we don’t ever get a chance to talk about.

I’ll bring this back to typed currencies in just a second, but I had to provide a little bit of background. The spatial logics have X-ray vision; they can see that you’ve got a par, which breaks this bisimulation characterization theorem. All the IO events are effectively observations. If you add an extra class of observations that correspond to the equational characterization of par, like the fact that it’s a commutative monoid and that sort of thing, you can recover this theorem. 

Being able to see that you’ve done an association, being able to see that you’ve done a commutation, all of those things recover the observations you need in order to regain the correspondence between formulae and bisimulation. This correspondence is preserved. It turns out to be really important because, to date, all process equivalences factor through bisimulation. 

Isaac: Do you have a reference for the result that you’re talking about? 

Greg: Yes. We can put it in the links. What that means is if you study bisimulation, that’s all you ever need to study. For someone like me with a tiny little brain, knowing I can jettison this giant vast sea of concepts around equivalences of concurrent computation and just focus on this one thing and everything else is just the specialization of that one thing, that’s a huge win. I can declutter my brain. 

Because we have this X-ray vision into the structure of processes, in the background of all of this, I’m implicitly invoking Curry-Howard. Whenever I say formulae, I’m also saying types. If I come up with a formula system for the behaviors, in Caires’s world that doesn’t say much about the structure of the names that are being communicated, because they’re unrelated. There’s no relationship between names in the Pi calculus and the behaviors of Pi calculus processes. But in the Rho calculus, we have a very different story. 

Isaac: Very intimately connected. 

Greg: Exactly. Now let’s talk about some fun things we can do. As soon as I have a type system for the behaviors, I already have a type system for the names. Now let’s imagine that I don’t allow dereference. I can use negation as a means of excluding dereference; I can do it either constructively or, classically, via negation. So we use the types to restrict the use of dereference, where I’m turning a name back into a process. Dereference is the only way new names can be constructed. 

That’s provable in the Rho calculus. Essentially, dereference functions in the same way that backquote does in Lisp, or that the prefixed comma does in a Lisp- or Scheme-based macro system: splicing into your quoted expression some evaluated expression. That is the only mechanism that allows you to construct new names.

That means that if I wanted to have a fixed supply of names (in other words, a fixed supply of currency), then by putting in this restriction, no dereference, what I’m saying is that you can’t ever pass me a behavior that is a mint. I’m not allowing mints to be instantiated. 

We did this long detour through some of the theory, but in terms of the system that’s in front of us, the Rho calculus, with just one idea, we’ve already been able to talk about controlling the supply of the currency. 

We can also insist that the available currency diminish. We can insist that there’s balancing in the currency. If we start thinking about the types for currency as namespaces, we can say that we have names coming from the red namespace and names coming from the blue namespace, and we can insist that if you start a transaction with the red namespace, you’re going to close it with the blue namespace.

You can get all kinds of balanced-parenthesis-style behaviors around the use of currency, which allows you to do foreign-exchange kinds of instruments at the type level. You could prove what distribution of currency a community ends up with: you start with a particular distribution of currency and, if you’re not minting anything, you end up at the end with a different distribution of the same currency.

It’s just like balancing chemical equations. You don’t ever get to create any elements. You’re always just shuffling the packaging of the elements. In the example of the formation of salt, you’re not creating sodium. You’re not creating chlorine. They get to be packaged together into NaCl to get you to salt. 

All of that kind of behavior, which is just about packaging and distribution of currency, can be crisply characterized with the namespaces. This is absolutely worthy of investigation. Part of my motivation for bringing it up at this point, now that RChain is nearing main net, is to reach out to the community of youngsters and say, “Hey, there’s a whole bunch of toys here that nobody’s ever thought about.” If we’re thinking about currency as one of our coordination technologies, this is going to give you a toolbox for gamifying behavior like we’ve never seen before. 

I’m reading a series of books by Yuval Noah Harari. Over and over again he makes the point that what differentiates Homo sapiens from other species is that our superpower is coordination. I just continue to feel that we have to level up that power to address climate change. It’s astonishingly fortuitous that just at the time when we need to level up, there’s this crazy toolbox of coordination that just opened up. That’s pretty weird.

Isaac: Smells a little bit like destiny. 

Greg: Typically, you find the things you need right at the moment you need them. Necessity is the mother of invention. Your mind is focused: “Oh my God, I really need this thing now.” I just feel like this is an incredibly rich toolbox. 

The other thing is art is long, life is short. I’ve been able to carry the idea this far forward. I’m sure that if I keep playing with it, I can keep coming up with more and more interesting instruments and examples. 

I’m just really inspired by what Vlad and that blockchain researcher were doing, where they had gone so much further into this understanding of the gamification of human behavior using financial instruments. I would love to see someone who has an interest in mechanism design, which is typically the term of art there, if they could have this toolbox, what would they do? What could they do? Any thoughts from your side?

Isaac: Yeah, definitely. You’re talking about an ability to essentially constrain what your money can be used for. That’s such a generic concept that it seems like you could do just about anything with it. I’m incredibly interested to see what someone would do with this kind of idea.

Greg: I agree. To circle back to a point that we mentioned at the top: it means that you can walk the structure of your financial behavior, your transactional behavior, and adjudicate valuations. It’s algorithmic how you get to a notion of valuation. It’s no longer a question of social consensus. 

There’s no magic. There’s no antigravity machine. We’re not pulling a rabbit out of a hat. The complexity shifts away from the social consensus around valuation and moves into the social consensus around the design of the transactional behavior itself. If everyone buys into the rules of the game, then the valuation becomes objective.

Isaac: That reminds me of the analogy of blockchain in general because that’s essentially what you’re doing. It’s no longer just trusting that people are going to do as they say. Now we have everything in code that we can execute and verify and make sure that it will run the way we expect it to.

Greg: That’s exactly right. I probably shouldn’t say this in public, but I would go to these Ethereum conferences and people would be saying, “Now we can just look at the code and see that it does this, that, or the other.” I would be cringing because no, that isn’t true. It’s halting-hard to determine whether or not a particular subroutine will be called. Unless you have something else, that just isn’t going to happen. 

We saw this with code audits. The DAO was audited, and yet it had all kinds of exploits inside it. Just looking at code is not sufficient.

On the flip side, people have long thought, “Okay, well that’s the end of any sort of verification.” No, that’s not true. There are two dials: the expressivity of your language and the expressivity of the properties. There’s a sweet spot where you have interesting levels of expressivity in the language and interesting levels of expressivity in the properties that you can verify of those programs. That covers a huge range of behavior that is transactional. 

Where the rubber meets the road, if people really want to see this in math: in Caires’s “Spatial Behavioral Properties in a Logic for the Pi calculus,” he identifies a class of formulae for the Pi calculus where model checking terminates. It’s a nontrivial class of formulae. But you can shuffle it around. You can increase the power of the formulae as long as you correspondingly decrease the power of your processes, the kinds of processes you can write down, which is basically just constraining your scripting language down to something that’s more tractable if you want more interesting formulae. 

That makes practical the intuition that these people at Ethereum conferences were espousing. The way they were thinking about getting there, just by human eyeball inspection, is never going to happen. But the spirit of what they were searching for is absolutely possible, and it’s possible with the use of typing information. If we’re using these things to design financial instruments and other aspects of digital currency, this is the way to go; or at least, to date, this appears to be the most effective and most mathematically tractable way to go about that program.