In this RCast, Greg Meredith is joined by Isaac DeFrain and Christian Williams to discuss why concurrency is necessary for blockchain—and beyond.
Greg: The topic of today’s discussion is all about concurrency and how concurrency is necessary to scale. We know that concurrency is necessary to scale compute in general. One of the things that’s been absolutely clear is that the programming model for concurrency has been under pressure from above and from below. As early as the two-thousands—the very beginning of this millennium—people were recognizing that Moore’s law was coming to an end, that you couldn’t just write sequential C code and sit on your hands and next year’s processor was going to make it faster, because next year’s processor was getting faster not by improving processor speeds, but by adding more cores per die. We’ve only seen that increase. We’ve seen more chips per machine in addition to more cores per die and more machines in a rack.
And that’s the kind of way in which things have been scaling for quite some time now for many years. At the hardware level and up, it’s all been about concurrency. As a result, in order for code to get any faster, code has to make use of that concurrent capability both on the chip and across the chips and across the machines in the rack and so on. So that’s been the pressure from below.
And then there’s also pressure from above, right? You guys are probably too young to remember, but today the only commercial applications are the ones that are available on the web 24/7 to millions of concurrent users. It used to be that you could write a single desktop application that might be a big hit. But today this is not possible, whether you’re talking mobile platform or browser platform. The characteristic of applications that go to market is that they’re available all the time, concurrently, to millions and millions of users, whether we’re talking about iTunes or Angry Birds. And so that pressure from above and pressure from below has really put the squeeze on the programming model. Before we consider anything like blockchain, the programming model really needs to support concurrency to meet the market standards. Does that make sense to you guys? Do you kind of get what I’m saying?
Isaac: Absolutely. I just wanted to add that it seems like that evolution has happened really quickly; as you said, it’s more or less been in Christian’s and my lifetime. It’s one of those things that is constantly evolving. Your app’s not really useful unless it’s always available, as people want to access it as soon as possible.
Greg: Yeah, exactly. And interestingly, the kinds of concurrency models that were put forward in the nineties are still the ones that people are wrestling with today to address this, and they’re not really good. The threading model in the Java virtual machine and the CLR, the .NET virtual machine, or even in more advanced systems like Erlang, isn’t very good. The reason is that if you stare at a piece of Java code or C sharp code or whatever, you can’t see how many threads are running through that code. You have no idea. If you see a synchronization block around the code or protecting a variable inside the code, you can kind of get a sense that there might be more than one thread, but you have no sense of the actual concurrency. It’s kind of hidden in the background.
Isaac: When you say threads, you’re just talking about independent activity that’s going on, right?
Greg: Yeah. Actually it’s important to distinguish certain notions. For example, we can talk about parallelism versus concurrency; we can talk about concurrency versus distribution. Typically when you have parallelism, you can have activities going on but they’re not sharing information. I often use the traffic metaphor. If you have an eight-lane freeway and no lane changing at all, that’s parallelism. Concurrency is when you start to allow lane changing, which allows for interaction of the different activities and threads. If you think of each car as a transaction, lane changing is getting over to a processor that’s going to handle your transaction. Distribution has to do with failure modes. If tasks can fail independently, that gives you a notion of locality.
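Greg’s traffic metaphor can be made concrete. The sketch below is not from the conversation; it is a minimal Python illustration (the names `parallel_sum` and `concurrent_sum` are invented for the example) of the distinction: parallel workers that never share state versus concurrent workers that interact through a shared queue and therefore need synchronization.

```python
import queue
import threading

# Parallelism: each worker stays in its own "lane" of work; no shared state,
# so no interaction and no synchronization are needed.
def parallel_sum(lanes):
    results = [0] * len(lanes)

    def work(i):
        results[i] = sum(lanes[i])  # each thread touches only its own slot

    threads = [threading.Thread(target=work, args=(i,)) for i in range(len(lanes))]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sum(results)

# Concurrency: workers pull from one shared queue ("lane changing"), so any
# worker may handle any item, and the shared total must be protected.
def concurrent_sum(items, workers=4):
    q = queue.Queue()
    for x in items:
        q.put(x)
    total = 0
    lock = threading.Lock()

    def work():
        nonlocal total
        while True:
            try:
                x = q.get_nowait()
            except queue.Empty:
                return
            with lock:  # interaction between threads forces synchronization
                total += x

    threads = [threading.Thread(target=work) for _ in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return total
```

The parallel version needs no locking at all because the lanes never interact; the moment lane changing (shared state) is allowed, the lock becomes necessary.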
Isaac: Not too much centralization.
Greg: Yeah, exactly. For example, if you bring down the entire virtual machine, there’s not a lot of distribution there in terms of the tasks that might all be concurrently running against that virtual machine. But if you have a couple of different virtual machines, even if you bring one down, the tasks on the other could continue to operate. So that gives you a more logical notion of distribution. Failure regions are one way to think about that. Those are important ideas and they’re definitely related to our topic today. There are different notions of threads: there are operating system threads, there are physical threads on the cores. In the case I was talking about, I was literally thinking about virtual machines that are equipped to provide some concurrency in the programming model.
The threading models that we found with Java and these other kinds of things are not very good, because essentially the concurrency is not supported in the semantics of the language. I want to contrast this with other trends that have happened in the market. I don’t know if you guys do a lot of actual programming, but there’s been a massive trend called language-integrated query. Surprisingly, it began at Microsoft. It was pushed forward by Erik Meijer, who’s a big functional programming guy. Everybody recognizes that a lot of programming involves accessing a data store. There are kind of two ways you can go about it. You can provide a library which gets you to the data store, or you can provide language-specific features for query. In the former case, you don’t really get your type system involved in helping you check the semantics of your query.
In the latter case, you get the type system and the compiler involved in checking that your query is in good form and isn’t breaking, for example, the integrity constraints of the database or other kinds of things. And that has absolutely taken off. In the last fifteen years, language-integrated query has shown itself to be one of the hottest topics, one of the places where we get the most bang for the buck. So much of that is because of the advantages of having the type system and the compiler help out with a lot of the automated tasks of checking those queries. That’s been a big win. The reason I’m using this analogy is that I want to point out that there is a library approach and a language-integrated approach. Over time the language-integrated approach has steadily won.
The reason is that you get all this support from the type system. So the same thing happens with concurrency. There’s a kind of library approach, which is the threaded approach taken in Java and C sharp and other kinds of things. Then there’s a language-integrated approach, which is what we find when we look at computational models like the mobile process calculi that then give rise to concurrent language designs like Rholang or Pict or Vim or some of these others. The reason that’s important is that then the type system can see the concurrency phenomena. When you look at a piece of Java code, you can’t tell how many threads are running through there. When you look at a piece of Rholang code, human eyes can spot that there are multiple threads. You can see the multiple threads. The same is true with the pi calculus or the ambient calculus or things like that.
The concurrency becomes a syntactic phenomenon. Because it’s syntactic, you can now do analysis on the code, on the syntax, to check for various concurrency properties. A really good example of the kind of concurrency property you might want to check is a race condition. Is it possible for two events to happen in either order? We’ve already seen that with the DAO bug: when you rewrite it in an explicit concurrency setting, the DAO bug is a concurrency issue. It’s a race condition. As a result, if you have this explicit language representation, you can catch these kinds of race conditions through syntactic analysis (also called static analysis). That’s really, really important. So let me just stop there. It’s a giant chunk of material. I want to check in with you guys and see how this lands.
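As a hypothetical illustration of the race condition Greg describes (this is not Rholang and not the actual DAO code), here is a deterministic Python simulation of a lost-update race: two withdrawals each read a shared balance and then write back, and the final state depends entirely on the order in which the events interleave.

```python
# A deterministic simulation of a lost-update race. Two "threads" A and B
# each withdraw 10 from a shared balance via read-then-write. The schedule
# (the interleaving of events) determines the outcome.

def run(schedule):
    balance = {"v": 100}
    reads = {}  # the value each thread saw when it read

    for thread, op in schedule:
        if op == "read":
            reads[thread] = balance["v"]
        else:  # "write": store the previously read value minus 10
            balance["v"] = reads[thread] - 10
    return balance["v"]

# Serial schedule: A completes before B starts; both withdrawals land.
serial = run([("A", "read"), ("A", "write"), ("B", "read"), ("B", "write")])

# Racy schedule: B reads the stale balance before A writes; A's update is lost.
racy = run([("A", "read"), ("B", "read"), ("A", "write"), ("B", "write")])

print(serial, racy)  # 80 90
```

In the serial schedule both withdrawals land; in the interleaved schedule B reads before A writes, so A’s update is silently lost. This order-dependence is exactly what a static analysis of explicit concurrency can flag.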
Isaac: It really makes sense that with Rholang, for example, or the pi-calculus, you would be able, just by visually inspecting the syntax, to tell all of the concurrent activity that’s happening. I can even think of a really short program that one could write to just tell you all of the concurrent activity that’s going on, or how many threads there currently are in your program, as opposed to Java, where it’s something that’s layered on top of the actual language and not intimately connected with the syntax itself.
Greg: That’s exactly right. I’m trying to make the argument step by step. We’ve already agreed that whether we’re talking blockchain or any other Internet-based program, concurrency is the name of the game, and has been the name of the game now for well over a decade. At the same time, we’ve noticed that language integration, that sort of syntactic support for important features, is really, really important because we need computers to help us solve our problems. But the thing that I didn’t touch on is how much harder concurrency problems are. We switched to the concurrency domain, which we were forced to do because of the way the market’s evolved in order to hit scale. And when we switch to the concurrency domain, the problems become many times harder. There’s tons and tons of literature about this.
You can give seasoned programmers, who are very familiar with concurrency, problems that they have solved before, and they still get them wrong. You can give standard problems, like dining philosophers or other kinds of problems, to people who have solved them before and say, “Write up an algorithm that addresses this problem,” and they still get it wrong. That’s how hard it is. I remember when I began to understand—when I was working on Rosette back in the late eighties and early nineties—I began to see how hard concurrent programming was even with very, very powerful language devices. I realized that I was going to have to somehow increase the capacity of programmers in order to make concurrency mainstream. So I thought, okay, here’s a fork in the road for my career. I can either go into medicine and make drugs to make people smarter, or I can go deeper into this type-based approach to provide automated tools.
At one point in the mid-two-thousands, I was working on applications to biotech. It turns out there are really cool things you can do. The pi-calculus and the mobile calculi have lots and lots of applications to biology. One of the things that I’ve been interested in is that there are all these boutique databases out there that capture a lot of information about cell signaling regimes, and there’s no way to search them because you can’t search on the basis of dynamics. But when you get to these type systems, which give you information about the structure and behavior of concurrent computation, you can turn it on its head and the type system can become a query language. It turns out there’s actually a deep, deep connection between the language-integrated query line of research and the topics that we’re talking about today. I did do some work on that. In many respects, the LADL algorithm grew out of that work. So it might ultimately help in the design of drugs that help make people smarter.
Without a lot of support for stepping into this, people are going to make really, really bad mistakes. We’ve seen it, right? Even in Solidity, it is quite possible to commit the sorts of errors that are going to result in draining smart contracts of hundreds of millions of dollars. If we really want to roll out a blockchain platform that addresses the reboot of the financial infrastructure without decades of risk management, we’re going to have to bite the bullet and take on static analysis for some of these concurrency properties.
It turns out that there’s good news. The good news is that if you have an explicit representation of concurrency in your language, then there’s a way to generate type information that catches a lot of problems: deadlocks, race conditions, security leakage; all of that stuff is there.
Christian: I just wanted to say that this topic today is what drew me to RChain. Learning about the pi-calculus has been a really wild experience and really expanded my view of what computer science is and what the possibilities are. My math department here had to hear me talk about it a lot and had to watch me go through simple programs and try to get some math people excited about it. They’re definitely more excited about it than some random Java programming.
Greg: It shifts a lot of things, right? It also challenges classical mathematics, as we talked about last time, because the packaging is so weird. The packaging is: hey, we’re going to actually put the dynamics right here together with the algebra, instead of keeping a distance between the algebra and the dynamics. It shakes up a lot of stuff. It’s not just that it shakes up computing and programming. It also shakes up the normal packaging for classical mathematics.
Christian: For me it’s very clear that this is a philosophical necessity to have this paradigm change to the concurrent languages. Yet since I don’t have a deep computer science background, I don’t really know how to really back that up and give specific examples where concurrent languages are just going to be vastly better. I want a good repertoire of those kinds of examples, because to me it’s just so obvious that if you try to use languages that were originally intended to just be a sequence of instructions for a single machine and you tried to scale that up to a distributed system, it just doesn’t make any sense.
Greg: The biggest examples come from whether or not you can get support from a type system and a compiler to catch your errors. At the end of the day, as programmers, the reason we employ computers is that they compute faster than we do. They automate some of the efforts that human beings could let a machine do. We’ve gotten really clever about that. We’ve sort of turned that onto the job of compute itself. We said, let’s apply compute to compute in order to write programs faster and make better programs. Type analysis and static analysis is the “make better programs” part. For the longest time, type theory has just been about making sure that you don’t violate memory boundaries. So if I give this kind of structured data to this program and the program doesn’t understand the shape of the data, it can violate some memory constraint or memory boundary.
That problem is long, long, long ago in the rearview mirror. We kind of sorted that one out. There’s a lot more we can do with type systems. This is the connection to category theory. This is the thing that’s so exciting. And category theory has given a lot of guidance to computer science on exactly the issues of types. One of the things that’s quite interesting is the orientation with respect to Curry-Howard: learning to put the programs as the morphisms rather than the objects. That was a big shift in the thinking of a lot of folks and it helps us learn a lot more about the natural structure of type systems. But getting back to blockchain stuff, one of the things that’s really important is to recognize that if we are going to build a virtual machine that we store on the blockchain, all the transactions have to go through that single virtual machine. And if that virtual machine is sequential, then it’s really just common sense: you’re going to have to pick a linear order on those transactions and run them in that order, end on end. That’s exactly what the Ethereum virtual machine does. It runs everything to completion. Whether you have a DAG or a chain, it doesn’t matter. All of your transactions are run end on end.
Christian: It’s not decentralized.
Greg: Right. In some real way, it’s not decentralized. It has a higher degree of availability because now you have more nodes, but it’s also anti-scaling because as soon as you add more nodes, they’re all contending for that same virtual machine.
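The cost of sequentializing can be sketched with a small, hypothetical Python example (not from the transcript, and not actual blockchain code): transactions that touch disjoint accounts commute, so every linear ordering a single sequential VM might pick produces the same final state, and serializing them buys correctness nothing while forfeiting parallel execution.

```python
from itertools import permutations

# Three transactions that touch disjoint accounts. Because they're
# independent, every ordering a sequential VM could pick yields the same
# final state; forcing one global sequence gains nothing.

def apply_tx(state, tx):
    account, delta = tx
    new = dict(state)
    new[account] = new.get(account, 0) + delta
    return new

txs = [("alice", +5), ("bob", -3), ("carol", +7)]

finals = set()
for order in permutations(txs):
    state = {"alice": 10, "bob": 10, "carol": 10}
    for tx in order:
        state = apply_tx(state, tx)
    finals.add(tuple(sorted(state.items())))

assert len(finals) == 1  # all 6 orderings agree on the final state
```

Independent transactions like these could safely run concurrently; only the genuinely entangled ones need ordering, which is the point Greg returns to later with namespaces.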
Christian: I feel like a lot of the limitations of sequential programming have been obfuscated by the ways that the hardware can amplify the performance. It’s a big problem that’s just prolonging the inevitable. By the same token, I’m wondering to what extent the concurrency of these process calculi is going to end up being limited by the limitations of concurrency in the hardware.
Greg: That’s such a great question. With the advent of the FPGA—I think FPGA itself is not necessarily the end of things, but rather the beginning. We’re going to find more and more and more that we can kind of have an internet on a chip. And that’s just with the traditional hardware techniques. I think we’re also going to see lots of other hardware techniques from biology-based computing to quantum computing that are going to demand that we look at other computational models in order to program to the hardware. The hardware is actually going to shift out from underneath us and it will be fundamentally concurrent. This was the point I was trying to make before, right? Moore’s Law ended a long time ago. Code that doesn’t take advantage of concurrency now doesn’t actually take advantage of hardware.
Christian: I feel like people used Moore’s Law as proof that we were getting smarter as a species. Every time people said, “Oh, my iPhone has more computing power than the space shuttle, I must be smarter than the people that went to the moon,” it seems like justice that now we’re going to have to actually get smarter.
Greg: I think you’re absolutely right. I didn’t want to lose the thread on this point about types, because when you employ the sequentialization model on the blockchain, it is no longer possible to have an explicit representation of any of the concurrency that’s happening in the language. So if you take that architecture, you’re never going to be able to employ static analysis to get a whole-system view of behavior that includes the concurrency behavior, because it’s all hidden behind the scenes. There’s no explicit representation. It’s like the difference between a clean, pure functional language and Java. With a functional language that’s close to the lambda calculus, what you see is what you get. All the execution is actually in the code. Whereas with Java, there’s a bunch of hidden state that the programmer carries around in their head: the heap, the stack, the number of threads; none of that is in the code. You just have to know it because you know something about the execution model of Java.
Christian: Do you have a simple example of a time when a programmer wrote something in Java that’s not even very complicated, but they could just be looking at it and not notice that there could be some kind of deadlock or something?
Greg: Oh, sure. Deadly embraces—really, really easy, right? So you have a program that won’t go anywhere unless it hears from another program. A is not going to send a message out to the world until it hears from B, and B is not going to send a message out to the world until it hears from A. They’re toast, they’re done, they’re stuck—a deadly embrace. And in Java, you can’t catch that syntactically. Whereas in the precursor to Rholang—Highwire—which I did at Microsoft, that was one of the first things we demoed for Bill: we committed that error in code, then we type-checked it, and it said, “Oops, you got a deadlock.” Which is really cool, right? This is exactly the kind of thing you want your type system to catch.
Christian: How exactly does that catch happen? What do the types represent there?
Greg: In this case, it ends up as a kind of dependency. You can calculate a dependency graph and you can see the cycle and the dependency graph. A depends on B, B depends on A, so they’re stuck. So it becomes a kind of cycle detection.
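A minimal sketch of that idea in Python (illustrative only; the actual Highwire and Rholang analyses are type-based and more sophisticated): build a wait-for graph where an edge X → Y means “X will not proceed until it hears from Y,” and report a potential deadlock whenever a depth-first search finds a cycle.

```python
# Deadlock detection as cycle detection in a wait-for graph. An edge
# X -> Y means "X will not proceed until it hears from Y"; a cycle means
# every process on it waits forever: a deadly embrace.

def has_deadlock(waits_for):
    visiting, done = set(), set()

    def dfs(node):
        if node in done:
            return False
        if node in visiting:
            return True  # back to a node still on the stack: a cycle
        visiting.add(node)
        for dep in waits_for.get(node, ()):
            if dfs(dep):
                return True
        visiting.discard(node)
        done.add(node)
        return False

    return any(dfs(n) for n in waits_for)

# A waits on B and B waits on A: the deadly embrace from the discussion.
print(has_deadlock({"A": ["B"], "B": ["A"]}))  # True
# A waits on B, and B waits on no one: everything can proceed.
print(has_deadlock({"A": ["B"], "B": []}))     # False
```

The type system’s role in the story is to supply the dependency edges automatically; once you have the graph, the check itself is this simple.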
Isaac: So it’s like a modal formula basically, right?
Greg: You can also look at it in terms of modal formulae; that also works. And there’s a direct relationship between this kind of cycle detection and modal formulae.
Christian: Well you were saying it was computed in terms of the types, but then you said it was about these graphs.
Greg: The type represents the dependency. You can extract this from the type. But the important thing is that we know all of this. I levied this critique of blockchain back in 2015. I spoke to many, many people. I spoke with Nash Foster and he agreed; obviously Mike Stay and Nash talk a lot. I spoke with Vitalik, I spoke with Vlad, I spoke to anybody in the blockchain space that I could talk to. I said, look, these are some just basic things that we can do in order to ensure success. The really important thing is that we should not go backwards. Whatever compute model we put in there, it doesn’t have to be Rholang, but it shouldn’t be sequential.
In particular, WASM would be right out, because WASM doesn’t have a decent concurrency model and is fundamentally sequential. They’ve tried to paper over that with a concurrency model. That doesn’t work. Most importantly, it doesn’t have the good property that we’ve talked about, which is making the concurrency explicit in the code. Likewise, any mapping to a well-established VM, like LLVM (which is not actually a VM; it’s a set of libraries), is of no interest if it doesn’t address concurrency. Back in 2014 and 2015, I was actively analyzing whether or not OCaml, precisely because of its mapping to LLVM, would provide a decent VM to store on the blockchain. The problem is that OCaml does not have an explicit representation of concurrency, and LLVM doesn’t have a good representation either. It’s a very coarse-grained model and it’s also not well documented. You can see what kind of nightmare programmers get into when they try to do any sort of concurrency on LLVM.
At a minimum, what we don’t want to do is go backwards. It would be totally fine to find a better concurrency model than the Rho model. That would be awesome. I would be so into that, if someone came along and said, here’s a much cleaner, much simpler concurrency model.
Christian: What other groups are using concurrency right now?
Greg: There have been a lot of groups over the years that do concurrency. There’s the concurrent constraint language stuff.
Christian: I meant on blockchain.
Greg: That’s the thing that’s really fascinating: no one has tackled this. It’s crazy, but what it really shows is that there’s a blind spot in human cognition. There was no decent model of concurrency in computation for decades: there was no decent compositional concurrency model until the late seventies, and no compositional notion of concurrency that also addressed resources until the early nineties. The lambda calculus came out in the thirties; Turing was doing his model in the thirties. Turing was quite convinced that his brain acted sequentially, which is completely false.
I remember when I first got interested in concurrency and thinking about concurrency, I started looking around in my life for examples of how my mind worked. There were two that were really striking. When I lived in Austin, Texas and worked at MCC, I rode a motorcycle. I did that for about eight or nine years as a way of lessening my impact on the environment. One day I was sitting at a light at a major intersection and the light changed. The next thing I observed was one hand gripping the brake as hard as it could while the other hand was rolling the throttle to go. Then a car ran the red light. So part of my brain was already engaged in the “okay, the light’s changed, let’s go” behavior, and another part of my brain was saying, “no, don’t do that because you’ll die.” It was crystal clear to me that my brain was a multipartite system.
Another one: my two-year-old daughter had a pencil. This was also in Austin, Texas. We had a floor fan. She was fascinated by the fan and wanted to stick something into it. I was saying, “no, don’t do that.” And she was shaking her head; her eyes were locked on me and she was shaking her head. She clearly understood the “no,” while her hand was pushing the pencil towards the fan. And it wasn’t some nefarious disobedience. It was really that two parts of her system hadn’t integrated yet. They were operating independently and the communication bridge hadn’t been established yet.
There’s tons and tons of literature now that shows that the brain is a concurrent operation. We know this; it’s basic science. It’s a part of us, but it hasn’t yet become a culturally conscious part of us, where we learn how to actively think concurrently. When I retire I want to go all the way back to the beginning and create a language that allows us to do concurrent communication. Human beings are so wedded to sequentialism because we’re so used to language being this kind of sequential thing: one word after another, and you collect the words into units before you have some sense of the total meaning. It’s very sequential processing. The truth is that you can put a piano score in front of a six-year-old. She can read the bass line and play it in her left hand; then she can read the treble line and play it in her right hand; and then she can put both hands on the piano, play both lines together, and internalize how the two voices are interacting with each other.
Christian: I feel like, culturally, the way that computer code has become a part of our thinking is that we’ve associated it with total, exact control of one thing. The cognitive limitation is that programming in the Rho calculus feels very different. It feels more like designing ecosystems of organisms that are going to work together in the right way. But there’s this big difference in thinking, and I just wonder a lot about how to bridge that difference. I assume you’ve been brainstorming a whole lot about how to get people to come over to the bright side.
Greg: For a long time it just consumed me, and I looked at every model of concurrency that I could think of. I looked at musical scores; another one that I looked at was comics. The interesting thing about comics is that they don’t come with an instruction manual. You hand a Marvel comic to an eight-year-old and within seconds they grasp the language of the frames. They can understand when the frames are happening all at once, and they can understand when the frames are to be read in sequence. Artists play all kinds of games with the frames. Sometimes they have just a single image cut up across all the frames. There’s a notion of concurrency there that is immediately graspable by human beings.
Part of the puzzle is finding representations that tap into people’s natural ability so they have that a-ha moment. Because of the concurrency, I can get more done all at once.
Christian: That’s why visualization is so important.
Greg: In the blockchain world, it’s really important, right? Most transactions are isolated. We don’t have this wildly entangled financial state, not one great quantum entangled state with a multiverse facade over top of it. In reality, most people’s finances are disentangled from one another. There are little pockets that correspond to neighborhoods and cities and things like that. But by and large, when you go get your coffee in the morning, assuming that you drink coffee, it’s almost certainly isolated from someone at the antipodal point on the globe buying their veggies.
If we have to sequentialize those transactions, we’re toast. It isn’t going to work. Moreover, any sort of fancy sharding mechanism is going to have to be based on some kind of namespace technology. How do you route this message to that thing in that shard? Well, you’re going to follow some kind of path that names the thing in the shard. We’ve had this technology forever: it’s called URIs. We understand this and everybody understands it. We know what it means to manage namespaces. I thought about this long and hard and said, hey, there’s a direct relationship between the way we carve up the namespaces and the concurrency structure. That has always been the biggest lacuna in the distributed pi-calculus approaches, because the distribution units didn’t line up with the concurrency. You couldn’t get a good alignment between the structure of the concurrency and the structure of the failure regions.
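As a toy illustration of namespace-based routing (the `route` function and the shard names are invented for this example; this is not RChain’s actual sharding design): shards own URI-like path prefixes, and a message is delivered to the shard owning the longest prefix that matches its name.

```python
# Shards own URI-like path prefixes; a message is routed to the shard that
# owns the longest prefix matching the message's name, the same way URIs
# already carve a global namespace into regions.

def route(shards, name):
    """shards: {path prefix -> shard id}; name: a path like 'us/wa/seattle'."""
    best = None
    target = name.split("/")
    for prefix, shard in shards.items():
        parts = prefix.split("/")
        if target[:len(parts)] == parts:
            if best is None or len(parts) > len(best[0]):
                best = (parts, shard)  # keep the most specific match
    return best[1] if best else None

shards = {"us": "shard-1", "us/wa": "shard-2", "eu": "shard-3"}

print(route(shards, "us/wa/seattle"))  # shard-2 (longest matching prefix)
print(route(shards, "us/ca/la"))       # shard-1 (falls back to broader prefix)
```

The point of the sketch is that carving up the namespace automatically carves up the work: names that share no prefix land in different regions and can proceed independently, which is the alignment between namespaces and concurrency structure Greg describes.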
Christian: You said that was a failing of the pi-calculus, but it has been addressed in the rho-calculus?
Greg: There have been things called D-pi. Pi itself is not necessarily distributed. You have to give some kind of interpretation to the names, because names have no internal structure, and it becomes really hard to do that. So people came up with variations called distributed pi, in which they imposed failure regions: regions where various processes are executing. But if you make those independent of the concurrency structure, they begin acting at cross purposes. The Rho calculus resolves that, precisely because of the connection between the names and the processes. Namespaces give you natural notions of regions, and those natural notions of regions now correspond to the concurrency structure of various programs.
Like the stuff that we talked about last time, the whole thing with the comm rule and the composing context and all of that stuff. That’s also directly connected to that. The other piece of it is these things are hard. Without math to help us out to do calculations and make some guesses and then test our guesses by doing the calculations, we’re toast, right? If we’re just pulling this out of the place where the sun doesn’t shine, we’re going to greatly lengthen the time to market. And the fact is we don’t have a lot of time. If we really are building this to try to address some of the consequences that are going to fall out of the way we’ve lived on planet earth, we just don’t have the time.
People keep misunderstanding me when I talk about the connection between what RChain is doing and climate change. I’m not talking about sustainability. In my opinion, and the opinion of many other people, we’ve already passed the tipping point. There’s no way we’re going to turn the ship so that we get less than a two-degree rise in the next twelve years. It isn’t going to happen. As a result, what we’re going to see is massive population displacements; we’re already seeing them. If you believe Al Gore, that’s exactly what’s happening with the caravans. These people can’t grow vegetables in the regions where they were growing them before, so now they have to move to regions where they can grow things.
This is only going to increase, and it’s going to increase much faster than we think it will. As a result, many more people will join the ranks of the unbanked, the unaddressed, and the uncredentialed. At a minimum, the most basic thing the blockchain can do is to provide this global compute infrastructure where people can register their identity and hang their credentials there, and it could be self-sovereign identity. It doesn’t have to be connected to any sort of watchful state. It can be self-sovereign identity, but immutable and with certain cryptographic assurances, they can hang their credentials there. That gives them an address where funds can be posted for relief if it turns out that they are so deserving of that relief according to the mores of a particular community.
Christian: Is lifeID already up and running?
Greg: lifeID is more of a standard, but we’re certainly supportive of what they’re doing, and we can provide a lot of those features and functions very quickly. That’s the important takeaway. I don’t know that we have to have even one particular format for identity. Most likely there will be several, in the same way that there are several television news networks. Different market sectors will each recognize the need for a compositional notion of identity, and they’ll grow various formats. So probably in arts and entertainment you’ll get one, and in real estate you’ll get one, and so on and so forth. Just like with everything else: here’s how you find me on Facebook and here’s how you find me on Twitter.
Christian: So this is going to be crucial for helping with potential social instability. But what about the instability of the internet itself? I’m thinking of it as kind of a house of cards, given the number of resources it depends on and how perfectly everything needs to go in order to support the Internet. Ideally, with a system like RChain, how would this help the transition to a truly decentralized internet, where we can just connect peer to peer and we’re not depending on big server hubs that might fail and internet service providers that could fail?
Greg: That’s a really good point. The node architecture is designed so that it can run on a wide range of hardware for exactly that reason. We’ve already proven that we can take it down to a Raspberry Pi and that we can scale it up to these large Kubernetes-based installations. In fact, the very same programmer did both ends of the spectrum. Kayvan Kazeminijad got us up and running on a Raspberry Pi, and he got us up and running on a Kubernetes cluster for the RSong stuff. We’ve taken a deliberate approach to make sure that we can address exactly those kinds of issues. The other piece of it, of course, is that people are insensitive to the cost of what they’re storing. They have no idea of the environmental consequences of spreading cat memes all throughout the world, what that means in terms of heat dissipation.
But if you suddenly attach a rental cost to storage, people do become sensitive to it. Going from “free” to pay, you might think that people won’t jump, but if you make it so that they can make some money on the rental market, then they will jump. If people can become part of the storage solution and rent out their storage, they can help with a movement to decentralize a lot of this stuff. But that’s a long and complicated question, right? It is not straightforward.