Subscribe to the RCast podcast: iTunes | Stitcher | Google Play | Spotify
Greg Meredith 0:00
So we’re rolling, Christian. I wanted to talk a little bit about some larger-scope issues, and I wanted to begin with a comparison of two different inflection points in mathematical history. So, one inflection point begins, let me see if I’ve got the time right, I think it’s the early 1900s, when people begin to drop Euclid’s fifth postulate and realize that they can still get sensible geometries without it. So it’s about 100 years ago. And what happens is that suddenly there’s a zoo of geometries.
Greg Meredith 1:14
There are a lot of different geometries, and mathematicians like Klein want to provide some kind of organization for them. So Klein sketches out his Erlangen program, which captures a lot of the geometries, but not all of them; if memory serves, it doesn’t capture Riemannian geometry. But mathematicians like Cartan and Ehresmann revise the Erlangen program via the notion of a connection, which, in some sense, characterizes the relationship between a topological space called a manifold and its tangent bundle. And this gives a rather complete characterization of the different notions of geometry. And there’s a similar inflection point, although I have to say that the timeframe in which it happens is different. Let me just quickly consult Google on this. So, Brouwer, law of excluded middle…
Greg Meredith 3:08
So what years was Brouwer working? Do you know?
Christian Williams 3:16
Around that same time, or maybe later.
Greg Meredith 3:20
Was it? I thought it was earlier. Let me just double-check.
Christian Williams 3:23
Sorry. Let’s see. Kind of around that same time.
Greg Meredith 3:32
Christian Williams 3:34
…like the early 1900s.
Greg Meredith 3:42
Yeah, that’s about right. Okay, good. Yes, you’re exactly right. So, about the same time, logicians and mathematicians are beginning to look at dropping some of the rules in classical logic, and it’s very analogous to what happened with geometry. Brouwer begins by dropping excluded middle, but subsequent to that we see a whole host of what are called substructural logics, culminating in linear logic, which drops the rules of weakening and contraction. Well, it doesn’t actually drop them; it puts constraints on them, it marks weakening and contraction. And through this arises the notion of resource-conscious logics. But all during this time we also have another development in logic, which arises from Kripke’s possible-worlds interpretation of modal logic. Milner and Hennessy pick up on this idea and connect it to process semantics, the process-algebraic semantics, where the possible worlds are possible executions of a process. But nothing gives us a uniform picture of logic in the way that the Cartan / Ehresmann connection gives us a view of geometry. And the way I like to think about the work in OSLF (Operational Semantics in Logical Form) is that it’s a research program to help us wrap our minds around what logic is, in the same way that the Erlangen program and its refinements were a way to help us wrap our minds around what geometry is. Does that make sense?
Christian Williams 6:46
Greg Meredith 6:47
Cool. Cool. So then, one of the things that was a big influence on me as a young researcher was a book from Cambridge University Press called “Topology Via Logic,” in which Steve Vickers gives a view of topology, which was, by the way, my first real love in mathematics. I feel like I genuinely fell in love with mathematics when I encountered topology; it was a really different way of looking at the world. So when I saw this idea of topology via logic, where you characterize a topology as an algebraic structure, and that algebraic structure is closely connected to notions of logic, I really felt like there was an interesting avenue there. And then when I saw the unfolding of the consequences of linear logic, that opened up for me the possibility that we might see an organization of logic, but also that logic might have things to say, not just about topology, but about geometry. Yeah.
Christian Williams 8:26
Have you? Have you seen anything similar to the Erlangen program for logic?
Greg Meredith 8:36
Um, well, I think categorical logic tries to do something similar, but it doesn’t give a very convincing account of how all the parts fit together with respect to the substructural logics, and how modal logic works. So that’s one way in which I feel it doesn’t have the same compelling organizing sensibility that the Erlangen program does. But one of the things that I’ve been keeping around to work on when I have a few down moments is this idea that, in the same way that Vickers transports ideas from logic into topology and vice versa, we might transport the idea of a connection. So we might come up with a more abstract characterization of the connection from a logical perspective and get a logical view on geometry. You know, topology doesn’t give you a geometric characterization of space, right? It doesn’t preserve angles, and it doesn’t preserve distances. People like to call it rubber-sheet geometry or something like that; it doesn’t concern itself with those kinds of worries and concerns. But anyone who’s sitting in a car at a traffic light, wondering if there’s a different route to take, might have geometric concerns. So I have wondered whether or not there might be a logical view of geometry, where angles and distances and their like might be characterized logically.
Christian Williams 11:15
What do you think about the distinction between topology and geometry? Is there a logical analog of that distinction?
Greg Meredith 11:31
Well, one of the things that I’ve been thinking a lot about is this notion of splitting as a way of locating things. If you look at Conway’s games, in which he generates not only the real numbers but the surreal numbers and lots of other numbers besides, that can be viewed as a kind of generalization of Dedekind cuts. And both of them can be seen as a way of splitting a structure: what’s to the left and what’s to the right. And so, when I saw the zipper construction, which locates a subtree inside a context, and which defines a notion of location by splitting an object into a context and a subtree, I began to wonder if that might be a notion of location that we could transport via logic, so that we might be able to get something like charts over a manifold structure, to begin this idea of developing a notion of connection. And one of the interesting things, and this is what connects with the work that you and I have been doing, is that once you have generated a logic over some algebraic structure, you automatically get a topology. So, let’s associate formulae with their extensions, and take as our universe the set of terms generated by some model of computation, like SKI, lambda, rho calculus, whatever it is. So our universe is going to be the set of terms generated by that model of computation. And in general, to specify a topology, I don’t have to specify all the opens; it suffices to specify a basis, or even what’s sometimes called a subbasis. And as long as the logic that we construct admits infinitary disjunctions, we automatically get a basis for a topology, because all we need is finite conjunctions and infinitary disjunctions and a top and bottom, and we’ve got our topology.
Christian Williams 15:20
What are the elements of the basis?
Greg Meredith 15:23
So the elements are the formulae.
Christian Williams 15:25
Greg Meredith 15:30
Or, you can think about them either as the formulae or as their extensions. If you need to think about it in terms of point-set topology, then it’s the extensions of the formulae; but if you think about it in the point-free way, in the topology-via-logic approach, then it’s the formula itself, as opposed to its full extension. And here, by extension, I mean the collection of terms that satisfies the formula.
Christian Williams 16:05
Are you including formulae that are just satisfied by a single term?
Greg Meredith 16:11
Christian Williams 16:13
So then wouldn’t the topology just be all possible subsets, the discrete topology?
Greg Meredith 16:21
Yeah, you can do that. But the nice thing is that you don’t have to take all the formulae, right? You can take a generating set of formulae, and then you get different topologies on the terms.
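To make this concrete, here is a minimal Python sketch of the idea, on an invented toy universe of terms: a generating set (subbasis) of formula extensions is closed under finite intersections to get a basis, then under unions to get the opens, and different generating sets yield different topologies on the same terms. Everything here is finite so the closure is by brute force.

```python
from itertools import combinations

def topology_from_subbasis(universe, subbasis):
    """Generate the (finite) topology on `universe` with the given subbasis:
    close under finite intersections, then under unions."""
    universe = frozenset(universe)
    sub = [frozenset(s) & universe for s in subbasis]
    # Basis: all finite intersections (empty intersection = whole universe).
    basis = {universe}
    for r in range(1, len(sub) + 1):
        for combo in combinations(sub, r):
            inter = universe
            for s in combo:
                inter &= s
            basis.add(inter)
    # Opens: all unions of basis elements (finite, so iterate to a fixpoint).
    opens = set(basis) | {frozenset()}
    changed = True
    while changed:
        changed = False
        for a in list(opens):
            for b in list(opens):
                u = a | b
                if u not in opens:
                    opens.add(u)
                    changed = True
    return opens

# Invented toy universe of "terms"; two different generating sets of formula
# extensions give two different topologies on the same terms.
terms = {"x", "in(a)", "out(a)", "in(b)"}
topo1 = topology_from_subbasis(terms, [{"in(a)", "in(b)"}])     # "is an input"
topo2 = topology_from_subbasis(terms, [{"in(a)"}, {"out(a)"}])  # per-name formulae
```

The second generating set distinguishes more terms, so it produces strictly more opens than the first.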
Christian Williams 16:36
So what would be an example of a basis of formulae that we care about?
Greg Meredith 16:45
Um, so for example, in the case of the rho calculus, you might take formulae generating input terms on a particular namespace, and output terms on a particular namespace. That would be one approach. Again, the point is that lots of them are available, and you use the ones that are going to give you a notion of neighborhood that is useful for what you want to do.
Greg Meredith 17:35
And in particular, the things that seem like reasonable neighborhoods are terms that are structurally similar and reachable within a certain number of steps. Right? And that’s a reasonable generating set for a lot of notions of computation.
Christian Williams 18:01
Yeah, that’s cool.
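The “reachable within a certain number of steps” neighborhoods just mentioned can be sketched in Python. The one-step rewrite relation below is a made-up stand-in (peeling prefixes off a string), not any particular calculus; the point is only that each radius-k ball is a candidate basis open.

```python
def reachable_within(step, term, k):
    """Extension of the formula 'reachable from `term` in at most k steps',
    where `step` maps a term to the set of terms it rewrites to in one step."""
    seen = {term}
    frontier = {term}
    for _ in range(k):
        frontier = {t2 for t1 in frontier for t2 in step(t1)} - seen
        seen |= frontier
    return seen

# Invented one-step reduction on strings standing in for terms:
# "a.b.P" -> "b.P" -> "P", peeling one prefix per step.
def step(t):
    head, dot, rest = t.partition(".")
    return {rest} if dot else set()

# Neighborhoods of "a.b.P" at increasing radius.
n1 = reachable_within(step, "a.b.P", 1)   # {"a.b.P", "b.P"}
n2 = reachable_within(step, "a.b.P", 2)   # {"a.b.P", "b.P", "P"}
```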
Greg Meredith 18:18
And now, interestingly for us, our arrow type, this rely-guarantee construction: the arrow-type formulae pick out contexts. So now you can select contexts, and if you have context-and-term pairs, then you have locations, at least in the sense of the zipper construction. So you can define a notion of chart that is a collection of these arrow formulae. Right? Every inhabitant of an arrow formula is going to be a context, and you can lay that over top of your topology. And then you have to have an analogue of the differentiability condition. When we’re building up a chart structure, an atlas structure, on top of a manifold, and two charts overlap, you need there to be a differentiability condition on the overlap. That’s what makes the thing work in terms of a notion of differentiability. And essentially, and I pointed this out in a recent talk, the whole thing really pins you down to something very, very close to what you can get from real space. Some product of the real number line is going to give you your grid over your space and give you a notion of location, but that pins down your topology. Specifically, you have very clear separability conditions: Hausdorff gives you super clear separability conditions, which are not generally true of models of computation, right? For example, the Scott topology on Scott domains is not even T1, so you don’t have these kinds of separability conditions.
Christian Williams 21:37
Sorry, sorry, did you say we’re thinking of collections of contexts as charts?
Greg Meredith 21:46
Yes, we’re going to think of them as charts. That’s exactly right. And what I was doing was to go a little bit deeper into the standard notion of an atlas on a manifold and what that does in terms of the kinds of spaces you can lay locations over. Right? It really pins you down in terms of the kinds of topological spaces you can give a geometry to. It looks very, very similar to real space. It’s not necessarily Euclidean, but it’s only a certain range of kinds of things that you can do on top of real space, if that makes any sense.
Christian Williams 22:59
Yeah. Could you go into a little more detail on how we’re thinking of contexts as forming some kind of chart?
Greg Meredith 23:13
Yeah, sure. So, um, are you familiar with the zipper construction?
Christian Williams 23:19
Greg Meredith 23:21
Okay. All right. So the idea, for those people who might be listening, is that if you want to locate an element in a tree, you can take the subtree below that element, starting with that element as the root, and then take the context, the tree with a hole in it, which would get you to that element. You split the tree into those two pieces: the tree with a hole in it, together with the subtree starting from that element, and that pair locates the element. If you think about it, if you just had the context, you could plug all kinds of subtrees in there, so the context is not enough. And if you just had the subtree, you could put it into lots of different contexts, so that’s not enough to locate it either. But the pair is enough to locate it. So now, if I have a term, which is just a syntax tree, I can consider locating the term in a context. And so now, if I have some formula which is an arrow formula, it’s going to denote a set of contexts. And for a given open, I can think about locating all of those terms in each of those contexts.
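The splitting just described can be sketched for binary trees in Python. The representation here (tuples for nodes, a list of steps for the context) is just one convenient choice; the point is that a location is a (context, subtree) pair, and plugging the subtree back into the context reconstructs the original tree.

```python
def go_left(context, tree):
    """Descend into the left child; the step pushed onto the context
    remembers we went left and keeps the right sibling."""
    label, left, right = tree
    return context + [("L", label, right)], left

def go_right(context, tree):
    """Descend into the right child, keeping the left sibling."""
    label, left, right = tree
    return context + [("R", label, left)], right

def plug(context, subtree):
    """Fill the hole: rebuild the whole tree from the location."""
    for direction, label, sibling in reversed(context):
        if direction == "L":
            subtree = (label, subtree, sibling)
        else:
            subtree = (label, sibling, subtree)
    return subtree

# Leaves are strings; nodes are (label, left, right) tuples.
t = ("+", ("*", "x", "y"), "z")
loc = go_left([], t)      # focus on the ("*", "x", "y") subtree
loc = go_right(*loc)      # focus on the leaf "y"
assert plug(*loc) == t    # context + subtree together locate "y" in t
```

Neither half alone pins the location down: the same context accepts any subtree, and the same subtree fits many contexts, exactly as described above.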
Christian Williams 25:24
And the open being another formula?
Greg Meredith 25:27
Yeah, the open is a formula, exactly. So now we can start to think of sets of arrow formulae over a topology imposed upon our universe of terms. Right. But that’s not quite enough, right? And this is what I was trying to get at: we need to find an analogue of the differentiability condition on overlapping charts. Now, this is an idea I’ve been playing around with; I’m not saying that I have a solution that is immediately, obviously compelling. I think there are lots of different solutions that we can consider, but one seems natural. So let’s think about what it is going to mean for two arrow formulae to overlap. The hole is the thing that’s on the left-hand side of the arrow; it’s what you supply. And so, when two arrow formulae overlap, it’s because the things that you put in on the left-hand side of the arrow are the same, or they have similar extensions; there’s a conjunction that will fit in that region. Yeah. Right. So then what you want to do is ensure, in the topology that you’ve got, that if you supply them, you don’t run off the edge, as it were: that the terms that you would get as a consequence are there in the topology. Does that make sense? You understand what I’m saying?
Christian Williams 28:10
I think so. So, if you have an arrow type sigma arrow tau, and another, sigma-prime arrow tau-prime, they overlap in sigma intersect sigma-prime.
Greg Meredith 28:25
Christian Williams 28:25
And we want to check that the result…
Greg Meredith 28:32
Christian Williams 28:32
…stays in tau intersect tau-prime.
Greg Meredith 28:35
Yes. No, not tau intersect; tau union…
Christian Williams 28:38
Tau union tau-prime.
Greg Meredith 28:40
You want to make sure that, on the smallest overlap, the smallest input, you get the widest output, so you can’t possibly fail. Right? And you need to make sure that that formula is available. Now, it will be available, since it’s a finite disjunction. But you also need to make sure that the charts locate those points, that those points are not outside of your chart; your chart has to cover those points as well.
Christian Williams 29:27
Those points being tau union tau-prime?
Greg Meredith 29:30
Yeah, exactly. That’s right. Yeah.
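A toy rendering of this compatibility check, in Python. Here a formula is modeled by its extension (a set of terms), a context by a function on terms, and an arrow formula sigma arrow tau by the pair of extensions; the specific terms and contexts are invented purely for illustration. On the overlap of the inputs, every context from either chart must land in the union of the guarantees, and that union must itself be an open.

```python
def inhabits(context, sigma, tau):
    """A context inhabits sigma -> tau when it maps every term of sigma into tau."""
    return all(context(t) in tau for t in sigma)

def compatible(chart1, chart2, opens):
    """On the overlap (sigma1 & sigma2), every context from either chart must
    land in the widest guarantee (tau1 | tau2), and that guarantee must be
    an open of the ambient topology, so we never 'run off the edge'."""
    (ctxs1, sigma1, tau1), (ctxs2, sigma2, tau2) = chart1, chart2
    overlap = sigma1 & sigma2
    target = tau1 | tau2
    if frozenset(target) not in opens:
        return False
    return all(c(t) in target for c in ctxs1 | ctxs2 for t in overlap)

# Invented toy contexts: one wraps a term in brackets, one appends a tag.
wrap = lambda t: "[" + t + "]"
tag = lambda t: t + "!"

sigma1, tau1 = {"p", "q"}, {"[p]", "[q]"}
sigma2, tau2 = {"q", "r"}, {"q!", "r!"}
assert inhabits(wrap, sigma1, tau1) and inhabits(tag, sigma2, tau2)

# The charts agree on the overlap {"q"} provided the union of guarantees
# is available as an open.
ok = compatible(({wrap}, sigma1, tau1), ({tag}, sigma2, tau2),
                opens={frozenset(tau1 | tau2)})
```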
Greg Meredith 29:35
So this is the kind of thing that I think about a lot: trying to make sure that when we’re solving for OSLF, we get utility on a lot of different fronts. One area of utility that I continue to stress is providing type systems that will guarantee that our programs are more robust, without having to do a lot of hand reviews. The other thing is, I want to make sure that we have this query capability, so that we can use these type systems to find programs, or other computational gadgets, out of a sea of computational gadgets like GitHub. And then there’s this natural movement from the idea of using types to find programs, to a notion of types, or formulae, that locate in a geometric sense. And that’s really what I’ve been trying to talk about today: if we were to push this analogy, how far could we get? Could we get a geometry via logic, where we’re relatively literal? We look at the refinements of the Erlangen program and say, now, that’s pretty successful; virtually all the notions of geometry that have been put forward are captured by this idea. So can we give an abstract characterization that would give us a geometry of computation? Right. So that’s essentially the program, and I feel like keeping our eyes on the prize means that when we go down the checklist of “does our notion of logic do this?”, this kind of thing should not be too far down the list in terms of priorities, because, I don’t know… it’s useful to dream.
Greg Meredith 32:46
Any thoughts or comments about about this approach?
Christian Williams 32:52
I think it’s really cool. This idea of the topology being structured by structural or behavioral similarity sounds very useful in terms of being able to actually explore these huge spaces of code. And this connection to manifolds sounds really cool. I don’t have a good enough intuition for it yet, but yeah, it sounds really promising.
Greg Meredith 33:41
Well, we’ll have to see. I mean, it may all crash and burn, right? It’s only been in the last little while that I’ve really begun to absorb the Erlangen program in a way that I can start seeing whether or not we could make any sort of headway whatsoever. And this particular approach to using the zipper construction may not pan out. But personally, I just have the strong intuition that there’s something deeper, something more to this idea of splitting an object into a context and a sub-object than we have yet understood. Right? And I think it’s a different insight on the same kind of thing that we do in mathematics all the time, right? First you define some domain, whether it’s a topology or a manifold or a group, any of these kinds of things, and the first thing you do is to say what a sub-one-of-those-things is. What is a component, right? And this idea of thinking about components in terms of a context and a sub-thing, I don’t think we’ve gotten to the end of that yet. And there’s a close connection between that and power objects. Right? So there’s something there: the object that is all the sub-objects of a thing. And there’s something about the way category theory looks at it that has utility, but I don’t think it quite captures this context-splitting idea yet. But maybe that’s just because I haven’t understood it properly.
Christian Williams 35:50
Yeah, I mean, from what I’ve heard, there’s a nice categorical generalization, I mean characterization, of zippers in terms of derivatives.
Greg Meredith 36:01
Oh, yeah, for sure. Absolutely. The derivative is a very nice notion. But how that notion relates to the subobject classifier, for example, is not clear, at least to me.
Christian Williams 36:20
Yeah. Yeah, I don’t think people have meshed the topics of zippers and these polynomial functors with topos theory very much.
Greg Meredith 36:37
Yeah. Yeah, exactly. So there could be a good fit there. What?
Christian Williams 36:45
Christian Williams 36:47
Something else I was thinking about was the categorical notion of topology. In the distinction between presheaves and sheaves, you can actually put a topology on a category by picking certain covers of each object. And then, rather than looking at all presheaves on a category, which just assign sets of data, you look at the ones that preserve this local-to-global piecing together of information. And I’ve been wondering about that: when we take a language and make it into a category, what are these Grothendieck topologies that we could put on it? I think that might be really similar to your idea of putting a topology on the lattice of formulae; those might even kind of be the same idea. But you could maybe do it at the source, in the language as the category.
Greg Meredith 38:17
It would be nice if they were the same.
Christian Williams 38:20
Greg Meredith 38:21
That would be lovely. Yeah.
Christian Williams 38:23
Greg Meredith 38:25
Yeah. Yeah. Um, I mean, it would be incredibly pleasant to be able to use a lot of geometric intuition to reason about programs, not least because we have tons and tons of computational apparatus for geometry. Right. And, literally, millions of years of evolution have equipped our brains to do geometric processing very, very well. So if we can begin to wield that against programs, I think that would be very powerful, very useful.
Christian Williams 39:25
Greg Meredith 39:25
Definitely. So that’s another reason to do it. Well, I mean, really, that’s all I had for today. I thought it might be fun to just occasionally talk about these wild-eyed ideas. Yeah. I guess, any closing comments?
Christian Williams 39:51
No, I don’t think so. Okay,
Greg Meredith 39:53
Cool. All right. Well, I’m looking forward to our next conversation, about varying the base on the presheaf construction.
Christian Williams 40:04
Yeah. Yeah, that sounds good.
Greg Meredith 40:06
All right, well, you have a good day.
Christian Williams 40:08
All right, you too.
Greg Meredith 40:09