Developer Learn Podcast RCast

RCast 89 – Bisimulation – June 10 2020

Greg Meredith and Christian Williams discuss bisimulation.

Transcript:

Greg 00:04
Thanks, Christian, for joining today. I always get a lot out of these conversations. I wanted to talk a little bit about bisimulation today, because it’s been a topic that you and Mike and I have been talking about a lot, and I can’t overstate its importance. It’s such a far-ranging notion, it has such an interesting history, and it impacts a lot of what we want out of computing systems and, in general, out of formal systems. The idea is really very simple: how would you know when two entities are the same? That’s really the most basic question. And there are lots of situations where we can look at this question that I think people who aren’t steeped in the art can understand. One example: imagine that Alice goes to work very early in the morning, before the crack of dawn, and every day that she goes to work she sees a character dressed in a trench coat and a fedora, also going to work, or at least so she assumes. And Bob, coming home every night, sees a character on his way home from work, or so he assumes, dressed in overalls and a denim shirt and farmer boots. If Alice and Bob were to get together and compare notes, what methodology might be used to determine whether the two characters they see are related, or in fact the same? This is actually an age-old question from philosophy. But it’s a practical question, too; it’s the kind of question that investigative units like the city police have to look into. What method would you use to determine whether or not these two characters are in fact the same character?

Christian Williams 02:49
I’ve never heard that question before.

Greg 02:52
Oh, cool. Yeah. Another one would be: I’ve got a beaker, and I’ve poured some chemicals into it. And I’ve got another beaker, and I’ve poured some other, potentially different, chemicals into it. By what method would I determine or ascertain whether the chemicals in the first beaker are the same as the chemicals in the second beaker? This gets really interesting mathematically when we realize that we can write down a system of differential equations. So let’s say one grad student writes down a system of differential equations that models the chemicals in the first beaker, and a different grad student writes down a different set of differential equations modeling the chemicals in the second beaker. How would we know whether the two systems of differential equations are the same? If you were presented with two different sets of differential equations, how would you know whether they describe the same system behavior? They could be very, very different in presentation but ultimately describe the same dynamics.

Christian Williams 04:21
Go ahead. Just, if they have the same solutions? If they have the same set of solutions?

Greg 04:28
That’s one approach. But what you just said gives rise to one of the most interesting aspects of bisimulation. So the general question is: for any two systems, which we can view as having some kind of agency or reactivity, or which change state, we can ask whether or not the two systems are somehow the same. And bisimulation says: well, let’s call the first system P and the second system Q. If there’s some action by which P goes from P to P prime, then there has to be a Q prime such that Q goes by the same action to Q prime, and P prime and Q prime are bisimilar. So it’s a recursive relationship: P and Q are bisimilar if that holds, and vice versa. It’s got to also be the case not just that Q simulates P, but that P simulates Q. So it’s a bi-simulation.
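
In standard notation, the recursive definition Greg sketches reads as follows (a textbook formulation, added here for reference):

    \[
    \begin{aligned}
    \mathcal{R} \text{ is a bisimulation} \iff{} & \forall (P, Q) \in \mathcal{R},\ \forall a: \\
    & P \xrightarrow{a} P' \implies \exists Q'.\; Q \xrightarrow{a} Q' \wedge (P', Q') \in \mathcal{R},\ \text{and} \\
    & Q \xrightarrow{a} Q' \implies \exists P'.\; P \xrightarrow{a} P' \wedge (P', Q') \in \mathcal{R}; \\
    P \sim Q \iff{} & (P, Q) \in \mathcal{R} \text{ for some bisimulation } \mathcal{R}.
    \end{aligned}
    \]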

Christian Williams 05:46
Yeah. That recursive definition took me some time to get used to. But essentially it’s just ensuring that P prime and Q prime are going to be able to match each other’s moves as well, right?

Greg 06:03
Yep, that’s exactly it. It’s like Fred Astaire and Ginger Rogers: they’re always matching each other’s moves. So that’s the bisimulation view of equivalence. It’s a kind of system equivalence. But what you just talked about was a sort of solutions equivalence, and it’s really interesting to compare different notions of equivalence. At the outset of computing, people were looking at sets of results. So when you look at a language generated by a regular grammar, you could compare the sets of words generated by the grammars. If I have grammar A and grammar B, and I just look at the sets of words generated by those grammars, the question is whether or not the sets are the same. That turns out to have an enormous complexity: it’s PSPACE-complete. Super, super high complexity. Now, intriguingly, if instead of comparing the sets you compare the automata that correspond to the grammars, the automata that would generate those sets, that’s a bisimulation problem. You can think of an automaton in a given state as being like P or Q, and you can think of taking a particular transition in that automaton as an action moving to a new state, which is the new state of the automaton. That variation of the equivalence problem, where instead of looking at the sets of words you look at the automata, has polynomial complexity. So that’s an enormous reduction in complexity. And we can go further. If, instead of regular languages, we climb up the language hierarchy, we go from regular to context-free. We’re still not at Turing-complete, right? These are still just context-free languages. The language equivalence problem for context-free languages is not decidable; you can’t do it. That’s a well-known result. But intriguingly, if you consider the bisimulation problem, it’s polynomial. And in fact, if memory serves, I might be wrong about this, but if memory serves it’s bounded above by a quintic power. So it’s not going to be the fastest thing ever, but it’s not even exponential; it’s definitely doable. So you go from can’t do it to can do it in polynomial time. That’s amazing. So that says there’s something magic about the behavioral view, the intensional view, that is not present in the extensional or, dare I say, denotational view. Which is why bisimulation seems to be so important. We can liken the extensional view to the solutions view that you were describing. So the question is: is there a notion of bisimulation that works for differential equations? People have actually looked at bisimulation for probabilistic and continuous systems. That’s an open research area, but it’s an interesting one. And the other thing is that it’s an effective notion of equality: we can describe it as a procedure. And it lines up very nicely with the scientific method, in the sense that we can talk about two systems as being the same if we can’t find a distinguishing experiment.
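
A concrete illustration of the gap Greg describes, added as an editorial sketch (the state names are invented for the example): the two systems below generate exactly the same set of words, ab and ac, yet they are not bisimilar, because the second system commits to the b-branch or the c-branch on its very first move.

    # Two labelled transition systems, each a map: state -> list of (action, next state).
    # lts_p performs `a`, then offers a choice between `b` and `c`.
    lts_p = {
        "p0": [("a", "p1")],
        "p1": [("b", "p2"), ("c", "p3")],
        "p2": [], "p3": [],
    }
    # lts_q makes the choice already at the `a` step.
    lts_q = {
        "q0": [("a", "q1"), ("a", "q2")],
        "q1": [("b", "q3")],
        "q2": [("c", "q4")],
        "q3": [], "q4": [],
    }
    # As sets of words, both generate exactly {ab, ac}, so they are language-equivalent.
    # But after q0 -a-> q1 there is no c-move to match p1, and after q0 -a-> q2
    # there is no b-move, so p0 and q0 are not bisimilar: bisimulation sees branching.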

Christian Williams 10:46
So when you say, describe it as a procedure, what do you mean exactly? A procedure to test that these two things are bisimilar?

Greg 10:58
Exactly.

Christian Williams 10:59
But what’s the principle of that procedure?

Greg 11:02
So the procedure, and this is the brute-force way, and there are a lot of better algorithms than this; if you look up algorithms for bisimulation, there’s a rich, rich literature on that. But in general, you ask for an enumeration of all the actions that are possible from a particular state, and then you just go and check recursively. You just walk the graph recursively.
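
A minimal, unoptimized sketch of that brute-force walk for finite systems, in the adjacency form used in the earlier sketch (editorial code, not from the conversation; the partition-refinement algorithms in the literature are far more efficient):

    def bisimilar(p, q, lts_a, lts_b, assumed=frozenset()):
        # Coinductive check: pairs in `assumed` are provisionally taken to be
        # bisimilar, so a cycle back to an assumed pair counts as success.
        # On finite state graphs this computes the greatest fixed point.
        if (p, q) in assumed:
            return True
        assumed = assumed | {(p, q)}
        # Every move of p must be matched by a q-move with the same action
        # leading to a bisimilar state...
        for act, p2 in lts_a.get(p, []):
            if not any(b_act == act and bisimilar(p2, q2, lts_a, lts_b, assumed)
                       for b_act, q2 in lts_b.get(q, [])):
                return False
        # ...and symmetrically, every move of q must be matched by p.
        for act, q2 in lts_b.get(q, []):
            if not any(a_act == act and bisimilar(p2, q2, lts_a, lts_b, assumed)
                       for a_act, p2 in lts_a.get(p, [])):
                return False
        return True

    # On the systems from the earlier sketch: bisimilar("p0", "q0", lts_p, lts_q)
    # returns False, while bisimilar("p0", "p0", lts_p, lts_p) returns True.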

Christian Williams 11:34
How is that not... okay, so this would be one that’s probably exponential, but there are ones that...

Greg 11:40
Oh, the complexity of that is ultimately undecidable if you have a Turing-complete system. It’s effective, but it may not return a result.

Christian Williams 11:53
I see.

Greg 11:54
And when you restrict it to particular classes of graphs or automata, then the complexity becomes tamer.

Christian Williams 12:03
I see.

Greg 12:05
Yeah, exactly. One of the interesting things is that this idea has been discovered multiple times. Sangiorgi has a paper on the origins of the bisimulation proof method, and in it he talks about the fact that it’s been discovered and forgotten and discovered and forgotten, and it wasn’t until David Park discovered it again that Milner recognized the significance of it, and it took hold, at least within the concurrency community. Another interesting fact about bisimulation: there’s a nice result from around 2005 showing that all of the proposed forms of equivalence for computation factor through bisimulation, in the sense that they’re all a form of bisimulation with filtering. So that’s particularly nice.

Christian Williams 13:12
And all forms of equivalence? What kind of...

Greg 13:19
What kind of? Well, you can look at trace equivalence, you can look at failures equivalence, you can look at ready equivalence, or so-called two-thirds bisimulation. All of those proposed notions of equivalence can be presented as factoring through bisimulation. So it’s bisimulation plus some filtering.

Christian Williams 13:41
So for each kind of equivalence, you can construct a graph whose bisimulation corresponds?

Greg 13:48
Yes: bisimulation plus some way of collapsing things. That’s the idea. I should dig up those results again; that’s a very lovely paper. What that says is that it’s not just that the complexity is nice, or that the conceptualization is nice, or that it lines up with science; it’s also that it somehow seems to be very, very fundamental. And one of the things that’s really interesting is related to this research program that Christian and Mike and I have been looking at, which we’ve officially re-dubbed operational semantics in logical form. We were calling it LADL, because it had focused on the distributive law as the primary means of realizing models, but Christian has found a different approach to solving the problem that doesn’t depend upon a distributive law, so we’re dubbing the research program operational semantics in logical form. That name is a reference to the program’s progenitor: it’s a descendant of Abramsky’s domain theory in logical form, where, essentially, given a domain-theoretic account, he can generate a logic, and that logic has nice properties relative to the domain-theoretic presentation of the notion of computation. So here what we’re saying is that the denotational account, the domain-theoretic account, is nice, but the operational account is more pragmatic. And so the question is: can we tease out those intuitions, can we expand those intuitions, in much the same way that Curry-Howard can be expanded to be not just about the simply typed lambda calculus and intuitionistic logic, but about a much wider field? We’re doing the same thing. In particular, regarding the expansion of Curry-Howard, Abramsky observed that we should probably be able to do the same for concurrency. And that ropes in logics like Hennessy-Milner logic. Milner and Matthew Hennessy observed that a sort of possible-worlds semantics could apply to the transitions of a process algebra, and then in the polyadic pi calculus tutorial, Robin presents a variant of Hennessy-Milner logic that provides a logic for the pi calculus. One of the nice results in that paper is that logical equivalence corresponds to bisimulation: we get that two programs are bisimilar if and only if they satisfy exactly the same set of formulas. And then along comes Caires. Luís Caires, with his spatial logics: spatial logics have a kind of X-ray vision, in the sense that they can see when a process is a parallel composition, and they can see when a process has the structure of declaring a name to be bound by a new scope. Those are two things that Hennessy-Milner logic can’t see. And it’s really important to understand that if you look at the pi calculus and you look at the semantics, it’s effectively an interleaving semantics, in the sense that the comm rule only applies to one communication at a time. So if you have a big parallel composition that has lots and lots of potential communication events that could all happen in parallel, the comm rule as originally stated would only allow one of them, and then another, but the order would not be specified.
So there’s a theorem for the pi calculus that every process can be represented as a big sum of interleavings.
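
The flavor of that theorem shows up already in the smallest case, written here in standard process-calculus notation as an editorial illustration (a and b are distinct actions that cannot communicate with each other):

    \[
    a.0 \mid b.0 \;=\; a.(0 \mid b.0) + b.(a.0 \mid 0) \;=\; a.b.0 + b.a.0
    \]

The parallel composition on the left is equated with the sum of its two interleavings on the right; nothing in the sum ever fires at the same time as anything else.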

Christian Williams 18:57
That’s weird to me. I hadn’t really appreciated that until recently. It definitely makes sense, but it’s just a surprising, very different way of...

Greg 19:10
thinking about the process, or writing it down. Yeah, I agree with you. But basically the content of the theorem is that calculi that have this kind of comm rule are not truly concurrent; they’re only nondeterministic. Right? You don’t know the evaluation order, but you know that there’s some evaluation order. So what would be something that’s truly concurrent? People have looked at so-called true concurrency and provided systems that allow multiple comm events all at once. And so this whole true-concurrency semantics

Christian Williams 19:58
approach

Greg 20:01
And one of the interesting things about the work that Mike and I have done, and that we’ve roped Christian into, is this idea of an evaluation context. With an evaluation context, you can straddle the spectrum between interleaving and true concurrency, which is exactly what happens on machines, right? On machines, you only have some finite... well, I’m talking about classical computing machines, not quantum machines. But on classical computing machines, like what you might find inside of a Mac Pro or an iPhone or an Android phone or some Intel device, you have some physical number of threads, and then you map the logical number of threads down to those physical threads. So you can have some limited form of true concurrency. Your par might have 100 different comm events that could all go at once, but you’ve only got eight actual physical threads, so you’ll reduce those in chunks of eight. And now that we’ve been digging into theories with rewriting, and putting the graph straight into the theory, we can’t do that if we want to allow the parallel rewrites. Exactly. So we can allow the parallel rewrites, but with the evaluation context we can also model real-world situations where you actually only have a certain amount of concurrency: you spend that down, and then you have to have another supply. With the evaluation context we can now model that kind of thing, and that’s nice. And then there’s a virtualization of that, which is something like the blockchain idea of gas, or phlo. So, go ahead.

Christian Williams 22:24
Does this true concurrency make bisimulation more complex than in the interleaving pi calculus? It’s the same idea, right? You don’t have to adjust the definitions; it still works, but...

Greg 22:45
But now your logic is going to have to be able to somehow see the true concurrency, and this is the connection to the spatial logics: they can see when processes have parallel composition.

Christian Williams 23:06
So now they’re out of the... well, it seems like one way to handle it is that instead of a single action, a process can now take a multiset of actions at once. Right? That’s one approach, and that has been explored. But we’re not going that way. We’re going...

Greg 23:31
Well, we want to be agnostic; we want to work at a level above that if at all possible. But the point I’m trying to make is that it would appear that something breaks as soon as you can make these kinds of observations. Spatial logic is more discriminating, in terms of behavior, than the bisimulation that gives rise to the interleaving theorem, right? And what you can do to recover that, and this has been shown in the spatial logics literature, is you add actions that correspond to the equations on the par term constructor. There are a couple of equations associated with par; actually, there are three. One is that P par 0 is P. Another is that P par Q is the same as Q par P. And the third is that (P par Q) par R is the same as P par (Q par R); so it’s associative. The latter two equations can be treated as if there are actions associated with commuting terms and actions associated with re-associating terms. This is very akin to the category-theoretic view, where you’ve got morphisms that correspond to the symmetry and morphisms that correspond to the associativity. And by the way, just as an aside: since categories are kind of like graphs, you can use bisimulation to think about graph equivalence, and you could use bisimulation to think about notions of equivalence in category theory. So it has lots and lots of applicability. In fact, one of the things I think is going to be important is to transport bisimulation over into mainstream, classical mathematics, because it’s an incredibly powerful proof technique as well as a powerful notion of equivalence. We haven’t even talked about bisimulation up-to. But one of the most interesting things in the history of bisimulation as it relates to the pi calculus and concurrency theory is... oh, sorry, before I go down that path, let me tie off this other thread. The general notion, and this is what we hope to be able to prove with operational semantics in logical form, is this. When we introduce the structural connectives, and these might not be par; they might be application, in the case of the lambda calculus, or the formation of an ambient in the ambient calculus, or any number of other possible term constructors, we want to show that even though we can see this structural stuff, and the logic is strictly more powerful than a bisimulation that is unaware of the structural capabilities, we can always add morphisms that allow us to probe the structure, and thereby recover this relationship between logical equivalence and bisimulation. Another way of putting that is that data is program: all data structures can be turned into programmatic actions, which is the essence of the insight of the lambda calculus and Church numerals and those kinds of things, and is the essence of the pi calculus. Rather than thinking about what things are, the data-centric view, we think about what things do, the programmatic or control-flow-centric view.
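
In the usual notation, the three equations on par read as follows, with the latter two also shown as the pairs of directed rewrites they become when treated as actions (standard presentation, added for reference):

    \[
    P \mid 0 \equiv P, \qquad
    P \mid Q \equiv Q \mid P, \qquad
    (P \mid Q) \mid R \equiv P \mid (Q \mid R)
    \]

    \[
    P \mid Q \longrightarrow Q \mid P, \quad Q \mid P \longrightarrow P \mid Q; \qquad
    (P \mid Q) \mid R \longrightarrow P \mid (Q \mid R), \quad P \mid (Q \mid R) \longrightarrow (P \mid Q) \mid R
    \]
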
And so this ability to turn the structural predicates back into behavioral things is really just making sure that you have enough actions recorded, enough behavior recorded, corresponding to the structure. That’s how the two are related. In the end, to do this, you can just replace all of the equations with rewrites: you replace each equation with the corresponding bidirectional rewrites. There is so much to talk about in these conversations. One of the things that I think is also really important to explore, and I just haven’t had the time to explore it, but I talked about this on a Casper standup, is essentially the difference between classical mathematics and computing: the structures that classical mathematics studies and the structures that computing studies differ in their refactoring of behavior. Whether you’re talking about a monoid, or a group, or a ring, or a field, or a vector space, or a module, whatever structure you’re talking about, the structures effectively just sit there; they don’t have behavior on their own. The behavior is in the morphisms between the structures. And category theory is largely informed by that view. Category theory is all about finding the notions of structure preserved by morphisms; that was the set of intuitions informing the development of category theory. But computing takes a completely different view. It says: no, no, the behavior is the fundamental thing. When we compute, we’re doing something. So it takes the behavior out of the morphisms and ties it up with the structure. In general, a notion of computing from an operational semantics point of view is given when you have some kind of grammar that generates the terms; some kind of equivalence that erases syntactic differences that don’t make a difference; and then a set of rewrite rules. So P par Q is the same as Q par P is one of the equations that erase syntactic structure that doesn’t make a difference, and those equations are bidirectional: they run as a rewrite from left to right and another rewrite from right to left. But the rewrite semantics proper are unidirectional, typically speaking. The comm rule goes one way. And that’s because you could have lots of different comm interactions that result in the same continuation, and the same with application in the lambda calculus, and lots of other examples. So you forget where you came from. And that forgetfulness is directly related... I’m going to put forward an idea, a hypothesis, a proposal: for classical computing systems, if your system is classical, not reversible, like lambda or pi, this corresponds to the thermodynamic consequences of computing. Essentially all I’m doing is following Bennett’s work, where he analyzes the Maxwell’s demon problem. You’ve got a chamber with a divider down the middle, and you’ve got some distribution of high-energy and low-energy particles bouncing around in both halves. There’s a little hole, and a demon is controlling a flap, letting high-energy particles go to one side and low-energy particles go to the other side. So over time the system decreases in entropy.
So the question is: how does this avoid violating the Second Law of Thermodynamics? In particular, you could say, well, you’ve invoked a demon, so all bets are off in terms of its relation to physics. So then you say: how about if we replace the demon with a computer? Now the computer is measuring the energy of each particle as it approaches, and does the job of the demon in terms of sorting the particles. What Bennett showed was that the computer eventually has to reset: it has to go back to a particular state where it doesn’t remember all the information about each particle. And that resetting is the thermodynamic consequence. In particular, the forgetful act is the one that has thermodynamic consequences. That’s where all the entropy is going: into forgetting, into erasing the state of the computer. And so I argue that it’s the same in each of these classical accounts of computing: the one-directional rewrite rules are the things that generate the thermodynamic consequences of the system. So, apologies for that aside, but I think it’s a really worthwhile and interesting aside. Okay, so what we’ve done is we’ve covered this idea of how, for the structural connectives, we can beef up the system with additional actions that correspond to the equations on the structure, and that gives us the ability to recover the result that bisimulation equivalence is the same as logical equivalence.

Christian Williams 35:00
And this doesn’t really affect... it doesn’t make computing with the language any more intensive, because computers already treat equations as bidirectional rewrites?

Greg 35:17
Oh, no, that’s a really important point, actually. The complexity of checking the par structural connective in spatial logic is exponential; it’s very high, because you have to look at all pairs. No way around it. So it gets really bad. But the nice thing is that with the structural connectives, there were things you couldn’t say or see before. You couldn’t see the number of threads; but now, with the par connective in spatial logic, you can see the number of threads. And we should expect this, right? There’s this deep connection between the number of threads and primality, and we know that checking primality has lots of interesting complexity. So we should be expecting this. But the point I want to make now concerns another aspect of the history of bisimulation. When we talk about these actions taking one state to another, essentially we have some kind of label for the action. But when you talk about reduction semantics, like beta reduction for the lambda calculus or comm reduction for the pi calculus, figuring out the notion of the label has been more complicated. Milner originally gave a labelled transition system for the semantics, and Milner, Parrow, and Walker gave an LTS for the first presentation of the pi calculus. And then Milner said, well, I’m really trying to make something that’s a lot more like lambda, so I want a notion of reduction. So he comes up with a reduction semantics for the pi calculus, and in doing so he loses the direct connection to the LTS semantics. And then he doesn’t have a good notion of bisimulation, because the comms are all supposed to be invisible. So he and Sangiorgi work out this notion of barbs, and in parallel, Sewell, and then Leifer and Milner, work out ways of deriving labels from contexts. Those have been two ways of trying to recover a notion of bisimulation that corresponds to reduction, in the sense that bisimulation becomes a congruence. The idea of congruence, for those people who are not familiar with it, is essentially like cancellation: you want to be able to say that if P par Q is bisimilar to R par Q, then P is bisimilar to R; just cancel out the Q. And you want to be able to cancel contexts more generally, not just par. Then you can do equational reasoning; you can calculate like you would do arithmetic, with all the cancellation laws and all of that. So that’s why you want bisimulation to be a congruence with respect to reduction. That was an important line of research. But what got left behind was barbed bisimulation. Barbed bisimulation was another way of recovering bisimulation: essentially, you say that all the reductions have to match, but you also have to make sure that you have the same set of observables. These observables were the barbs, and essentially the barbs are actions that a process could take. What Christian and Mike and I have looked at is using contexts as barbs. Do you want to say a little bit about that? I’ve been talking way too much.
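
In standard notation, the two notions Greg contrasts read roughly as follows (textbook formulations, added for reference). Congruence says bisimilarity is preserved by every context:

    \[
    P \sim Q \implies C[P] \sim C[Q] \quad \text{for all contexts } C,
    \]

while a barbed bisimulation is a relation \(\mathcal{R}\) such that, whenever \((P, Q) \in \mathcal{R}\),

    \[
    P \downarrow a \iff Q \downarrow a
    \qquad\text{and}\qquad
    P \to P' \implies \exists Q'.\; Q \to Q' \wedge (P', Q') \in \mathcal{R}
    \]

(and symmetrically for Q’s reductions), where \(P \downarrow a\) means that P exhibits the observable, the barb, a.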

Christian Williams 40:09
Sure. So, the labels: if you’re thinking of them as potential actions that a process can take, the idea that you could also think about them in terms of contexts says: if the process were put into this context, then it could evolve to the following process. The two perspectives are kind of interchangeable. And in terms of this idea of being agnostic and trying to write these general theories with rewriting, we can’t depend too much on how Milner and others did it for the pi calculus, and this idea of doing it contextually seems to be more general. So you can just take your presentation of the theory and look at all the rewrites, and in the source of each rewrite you’re looking for ways to poke holes in it. Those are going to be your labels. Then you take your original reduction graph, and you’re poking holes in the sources of the reductions, and you’re probing those processes to line up with those context labels, so that you can give a labelled transition system with more information.

Greg 41:55
Yeah. So what we offer is a modification to the notion of barbed bisimulation. Again, we’re going to insist that whatever reductions, in other words whatever rewrites, you could make, P and Q have to enjoy the same rewrites. But then also, if you could place P in a context, and we would call it a prime context, so that it’s not a big, fancy, sprawling context of lots and lots of terms, but really just matches the left-hand side of a rewrite; if you could place P inside that and there is a rewrite, then that context constitutes an observable, or an experiment. And so then you have to be able to place Q in the same context and get a corresponding rewrite. That idea is basically just lifting the notion of barbed bisimulation. The aim now is to prove that this notion of barbed bisimulation, together with the substitution machinery that comes from the second-order algebraic theories, will allow us to prove that barbed bisimulation is a congruence; and that, for the logics that we generate, if you add in the actions that correspond to the structural equivalences, that enriched notion of bisimulation with these extra actions will be exactly equivalent to logical equivalence. So those are our two proof obligations. And what that does is very practical: instead of just talking about the Liskov substitutability principle, we give a mechanism for substitutability, in the sense that we know that if P is bisimilar to Q and P replaces Q in some context, then that’s going to be safe, as long as P is bisimilar, or as long as P meets the logical spec that is demanded. So it gives us a rationale and a basis for talking about substitution of components, not just values for arguments.
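
Schematically, the contexts-as-barbs move can be glossed like this (an editorial summary, not a formal statement from the research program): a prime context C[.], a minimal unfilled instance of some rewrite rule’s left-hand side, serves as a label, and

    \[
    P \xrightarrow{\,C[\cdot]\,} P' \quad\text{iff}\quad C[P] \to P'.
    \]

For instance, in a pi-like calculus whose comm rule rewrites x!(v) | for(y <- x){R} to R{v/y}, poking a hole in the output position of that left-hand side yields the prime context [.] | for(y <- x){R}; a process that can fill the hole and react earns that context as one of its observables.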

Christian Williams 44:58
What’s an example? I get that in theory, if two processes are bisimilar, then you can treat them as interchangeable. But when do you actually want to do this substitution?

Greg 45:07
Yeah. I mean, basically what we’re talking about is making software behave like hardware. Right? We know what it’s like for a component on a motherboard to fail: we want to pull out that component and plug in another one. That’s substitutability. That’s what we’re talking about: being able to do that at the software level. And that has been sort of the goal of systems like object-oriented programming and all kinds of other things, because the idea of software reusability ultimately comes down to this pluggability. But without all of this reasoning, we’ve seen that software reusability has been fraught; when it’s governed by the object-oriented principles, that wasn’t a supportive enough framework to make software reusability have the same kind of utility that we thought it would. Cool. So I think we covered what I wanted to cover in terms of the history, the relevance, the connections, and how it lines up with the research that we’re doing. The only other thing is that this then also becomes the basis for a notion of query. Once we have all these assurances that the logic coincides with bisimulation, then we have the right to use it as a query language. We can take terms and poke holes in them and say: in this hole I’m putting a formula which governs the kinds of things that could go into that hole. Now you get a query language that can be applied to GitHub, or the smart contracts that are up on the blockchain, or a wide range of other things. It’s good stuff. It’s one of these things where we covered most of the salient points in an hour, but nailing down all the details of the research has taken decades. Yeah, it’s getting there, one step at a time. Well, thank you so much, Christian. I really appreciate you taking the time. I know you’ve got a busy schedule and a thesis to write, but I really appreciate these conversations. Thanks for having me. All right, talk to everyone soon. Sounds good.