RCast 66: Projective Geometry and the Rho Calculus




Greg Meredith discusses projective geometry and its role in the Rho calculus with Isaac DeFrain and Christian Williams.

Discussion

One of the guiding intuitions is that points are where lines interact. Formally, this means that lines are interpreted as processes, while points are (sets of) redexes. The possible redexes have been color-coded to match the coloring of the points on the diagram. For example, the point labeled p1, colored red, lies on L1, L3, and L4. Therefore we expect that L1 might reduce with L3, and it does, in the two red-colored terms of the respective parallel compositions. Similarly, L1 interacts with L4 at red-colored par-ands, and likewise for L3 and L4.
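To make the shape of the encoding concrete, here is a minimal sketch of the fragment at p1, assuming channel names x13, x14, and x34 for the pairwise interactions of (L1, L3), (L1, L4), and (L3, L4); the summands for each line's other two points are elided with "...", and the details depend on the diagram's full labeling:

L1 = x13?(y13)P13 + x14?(y14)P14 + ...
L3 = x13!(Q13) + x34!(Q34) + ...
L4 = x14!(Q14) + x34?(y34)P34 + ...

Fano = L1 | L2 | L3 | L4 | L5 | L6 | L7

The three redexes on x13, x14, and x34 are the red-colored interactions at p1: each pairs an input summand of one line with an output summand of another.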

To move from this skeleton to a process that simply reproduces the Fano plane after (some number of) interactions (thus providing a stable geometry), it is merely required to solve for Pij and Qij. This is a simple exercise, but you might find this little tidbit to be of some help.

x?(y)( *y | x!(y) ) | x!( x?(y)( *y | x!(y) ) | P ) ->* P | P | P | ….
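To see why this works, write D for the input term and unfold the reduction: each step consumes the output, reconstitutes both D and the output via *y and x!(y), and releases one copy of P:

D = x?(y)( *y | x!(y) )
D | x!( D | P )  ->  D | P | x!( D | P )  ->  D | P | P | x!( D | P )  ->  ...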

It is much more interesting to construct solutions for Pij and Qij such that, as lines interact, the whole space evolves into a new geometry.

Observations
Towards a Compositional Geometry

This approach has a key property: geometries can be compositionally constructed. In the simplest cases, geometries can be combined into larger geometries merely by parallel composition. It is worth comparing this to the machinery of, say, cobordisms.

The relationship of Geometry and Logic

The skeleton is actually much closer to a spatial-behavioral type than it is to a process, evincing the implicit alignment with Curry-Howard.

Factoring the encoding through an encoding of graphs

As a final observation, this particular method of encoding is not specific to the Fano plane; rather, it is derived directly from the graph associated to the geometry. The question arises as to how different π-calculus encodings of graphs compare in representing geometries and their evolutions. For example, Gardner et al. have explored π-calculus-based encodings of graphs in the context of modeling XML documents. Does their strategy for encoding graphs provide different insights into the evolution of geometries?


Transcript


Greg: I want to recap some work that I mentioned on the Casper stand-up a week or so back; I thought it would be a good topic for RCast. It’s technical, but it’s really fun technical stuff. Let me quickly bring up the slides. What inspired me to talk about this now is a class that Nima Arkani-Hamed gave at Harvard physics this past fall. The lecture series is called Spacetime and Quantum Mechanics, Total Positivity and Motives. He takes the approach of calculating scattering amplitudes for particle interactions in a setting where he eschews virtual particles; everything is on shell. He shows that the positive Grassmannian is directly connected to the amplituhedron and gives you these really nice, crisp calculations for these scattering amplitudes. It’s a really interesting perspective.

He arrives at a kind of graph rewrite formalism for calculating these things that is very similar to the kinds of things I contemplate when I think about models of computation, especially things like the Rho calculus. That was on my mind, and I remembered this work I’d done about a decade ago, in which I was showing that you can get more direct geometric encodings into the process-algebraic setting.

At one point we talked about Knots as Processes. That kind of topological information is really not the same as geometric information. Geometric information is a little bit more detailed. Topologies don’t preserve things like distances. I’m not going to go as far as encoding Euclidean-like geometries, but I am going to go after projective geometries.

In particular, I’ll limit the scope of the discussion to encoding finite projective geometries, like the Fano plane, which is the simplest example. The point here is that while LADL as an algorithm allows us to look at generating type systems for a wide variety of algebras and calculi, and so could be applied to something like Clifford algebras, which already encode geometric information, this is different. Can I get geometric information directly encoded into the process-algebraic setting? Let me stop there and make sure that what I said makes sense to you guys.

Isaac: Yes. This is reminiscent of the Erlangen program. What can you say about the connections, similarities, dissimilarities…

Greg: Let’s remind our listeners what the Erlangen program is.

Christian: My recollection of the Erlangen program is to have some sort of algebraic classification for geometries in terms of group theory and projective geometry.

Greg: Right. I’m glad you said that because it loops back around to the mention of LADL. If I can come up with a decent process algebraic encoding for a variety of geometries or geometric spaces, then I can employ the type system for the particular process algebraic framework that I’m using. That becomes a classification mechanism. It fits perfectly with the intent of the Erlangen program.

Christian: So it would be like a type classification for geometries?

Greg: That’s right. 

Christian: What is the motivation for incorporating geometry? 

Greg: I can tell you what my immediate motivation was at the time. As I was thinking about this, I had been talking to a lot of biologists. It was clear to me that the kind of work that Shapiro, Habib, and Cardelli were doing was important in the sense of capturing certain aspects of the dynamical information in these cell-signaling networks and other kinds of biological networks. At the end of the day, a lot of the information really needed to rely upon geometric information. A lot of the dynamics of these networks rely on geometric information; protein folding is the best example. It wasn’t like I was seriously attempting to go after the protein-folding problem or any really serious geometric problem. I just wanted to see for myself: can I, in a reasonable fashion, encode geometric information in this setting? The answer is a resounding yes.

There’s another aspect to this conversation: a point Bob Coecke makes when he talks about the graphical language that he develops for quantum mechanics. He points out that using the standard presentations of quantum mechanics, it took people 60 years to spot quantum teleportation, whereas when you use the graphical presentation that he and Aleks Kissinger developed, you can spot quantum teleportation immediately.

There’s something about different presentations that makes certain things really jump out at you. Not just that they jump out, but they open up different kinds of questions. By the same token, if you look at the sort of considerations that Einstein and Hilbert were putting together when they were thinking about general relativity, they weren’t thinking about compositionality; they weren’t thinking about bisimulation for dynamical systems, asking when different representations might yield the same dynamical system; they weren’t thinking about type-based classification.

But it turns out that all of those things are immediately present. The thing that’s a real stumbling block for Einstein and Hilbert is not a static geometry but a dynamic geometry. It takes Einstein a long time to be able to get this recursive relationship between the stress-energy tensor, which is giving you the distribution of matter and energy, and the metric tensor, which is telling you how space is warped.

Isaac: Is that what you mean by a dynamic geometry, this interplay between the stress-energy tensor and the metric tensor?

Greg: That’s very physical. What I mean is geometries evolve over time. Euclid does not discuss the evolution of Euclidean geometry, for example, into hyperbolic geometry. Remonde does not discuss the evolution of hyperbolic geometry into projective geometry. However, once we go through this presentation, what we’ll see is that this idea of an evolving geometry is right in your face. It jumps out immediately. That’s why this is fun to talk about—the presentation makes different questions immediately apparent. 

The Fano plane is defined by some points and lines. There are seven points and seven lines; every point is on three lines and every line contains three points. If you try to draw that in Euclidean geometry, you will fail, because you have to break the parallel-line rule to do it. In particular, if you look at L7 in the diagram, it doesn’t look anything like what we think of as a Euclidean line. My idea, which was directly related to a lot of things that I’ve been thinking about in terms of Knots as Processes, is that lines are going to be processes. The Fano plane is going to be a parallel composition of lines, and the points are going to be sets of redexes. They’re going to be ways in which the lines can interact.
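For reference, here is one labeling of the Fano plane’s incidence relation that is consistent with the discussion (p1, p2, p3 the triangle’s vertices, p4, p5, p6 the midpoints of the sides, p7 the center; the episode’s diagram may label things differently):

L1 = {p1, p2, p4}   L2 = {p2, p3, p5}   L3 = {p1, p3, p6}     the sides
L4 = {p1, p5, p7}   L5 = {p2, p6, p7}   L6 = {p3, p4, p7}     the medians
L7 = {p4, p5, p6}                                             the circle

Every point lies on exactly three lines, every line contains exactly three points, and any two lines meet in exactly one point.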

If you look at the diagram, I’ve color-coded it: L1, L3, and L4 should be able to interact at p1. That’s exactly what we see. The summand in each of those lines represents a potential interaction. It should be straightforward to follow the encoding. What I’ve done in the encoding is to make it abstract in the continuation of the interaction. L1 has a summand that inputs on x13: wait for something on x13, call the thing you get y13, and then become P13. That’s in summation with a similar kind of thing on x14.

The interesting point is that if you make that continuation go back to itself, then this encoding will remain stable.

Christian: If you make what go back to itself? 

Greg: If you make P13 basically be that sum. You do a recursion. Everywhere you have a continuation, you have to have a recursion. Similarly for the outputs. Those would have to be banged in order for the thing to be stable. 
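Schematically, the stable solution Greg is describing makes every continuation re-become its line, so that any interaction restores both participants. Here is a sketch, writing the prefixes synchronously; in an asynchronous presentation the outputs are instead replicated, i.e. "banged," using the idiom from the Discussion above:

L1 = x13?(y13)L1 + x14?(y14)L1 + ...        every P1j is L1 itself
L3 = x13!(Q13)L3 + x34!(Q34)L3 + ...        every output resumes as L3

With these choices, L1 | L3 reduces right back to L1 | L3, and the whole composition reproduces the Fano plane.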

Christian: What do you mean by stable? 

Greg: Otherwise it will evolve to something that isn’t the Fano plane. 

Christian: Oh, yes.

Isaac: You’re saying that if you make a reduction or something, it’ll go back to the Fano plane only if you make these continuations recursive. In the expression that you’re looking at, particularly P13 and P14, do they both have to be that choice expression?

Greg: They both have to be that choice expression.

Isaac: That makes sense.

Greg: Immediately, what this raises is the possibility that they don’t go back to that. They go to something else. This then raises all these wonderful distinctions. You could imagine that you have a notion of continuity where, whatever the evolution is, you always get to something that’s a geometry. You have to say what a geometry is; for example, it’s a set of points and lines with incidence relations between the points and the lines. You always get to something that is a geometry. Now we can easily imagine that it evolves to something that isn’t a geometry, where you don’t respect the incidence relations, for example.

Christian: You’re talking about whether or not you can encode arbitrary parallel processes into one of these…

Greg: Geometry.

Christian: When are you saying that’s possible? 

Greg: It’s possible whenever you have a collection of par-ands that look like lines and a collection of redexes that match up with the points and respect the incidence relations, as they do here.

Christian: If they don’t interact, they don’t have potential for interaction. They’re just like parallel lines. 

Greg: That’s right. Isolation is akin to parallelism in Euclidean space.
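In process terms the analogy is immediate: two lines whose summands share no channel have no redex, so they can never interact. For instance, with hypothetical names a and b:

a?(y)P | b!(Q)      no redex: a and b are distinct names, so these "lines" are parallel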

Christian: What determines input versus output here? Is it just to make the potential for interaction match up?

Greg: There are many different encodings that will respect the same incidence relations. In fact, there’s something interesting here that I want to point out: I have used mixed summation. We can actually look at different kinds of lines. There are lines where you’ve got uniform summation. You can distinguish the lines that have only input or only output from lines that have mixed sums, like L7.
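Isaac spells out the resulting classification later in the episode; schematically, the three kinds of lines have these shapes (the channel names u, v, w and the continuations are placeholders):

uniform, all inputs (or, dually, all outputs):   u?(y)P + v?(y')P' + w?(y'')P''
one output mixed into an input sum:              u!(Q) + v?(y)P + w?(y')P'
fully mixed, like L7:                            u!(Q) + v?(y)P + w!(Q')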

I can’t remember exactly, but I’m fairly certain that you can prove that you can’t encode the Fano plane without mixed summation. I think it’s a fairly straightforward proof. I don’t remember exactly the steps that I took to get there, but I’m fairly certain that I concluded that you have to have mixed summation. 

That says something very interesting, because the theorem is that you can’t have a par-preserving encoding of mixed summation down to asynchrony, or down to something that is only input-guarded, or non-mixed. There’s an expressiveness jump that is necessary to encode the Fano plane. That’s quite interesting.

These are the kinds of observations you can make with the process encoding: that one, plus the one you observed, Christian, that there’s some kind of analogy between parallelism and isolation. You can begin to classify the lines by the types of interaction they support. You can see that different geometries are going to demand certain kinds of expressiveness characteristics within the calculus. There’s lots of opportunity here for exploration, along with the fact that you can now start reasoning about geometry using bisimulation, which is an extremely powerful proof technique.

Christian: I was trying to picture how an interaction here would cause this picture to evolve. For example, if L1 communicated with L3 on the red, what would happen to the graph?

Greg: Let’s say that you choose the x13 Q13 output and therefore you choose the x13 y13 P13 input. If P13 doesn’t put back what you got, you no longer have an L1 or an L3.
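In the notation used earlier, that reduction collapses both sums to the chosen branch, discarding every other summand (here y13 gets bound to the name of Q13):

( x13?(y13)P13 + x14?(y14)P14 + ... ) | ( x13!(Q13) + x34!(Q34) + ... )
    ->  P13{ @Q13 / y13 }

Unless P13 restores the sums, nothing of L1 or L3 remains but this continuation.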

Christian: But you still have pieces of them…

Greg: You have pieces of them but you don’t have the line.

Christian: So that whole part of the triangle disappears. 

Greg: Exactly. We’ve talked about stability: stability is when, as you evolve, you get back to the Fano plane. Then we talked about continuity: when you evolve, you get back to a geometry. Then you can have this other notion. This is something that people have actually contemplated when they’re thinking about string theory. You can have evolutions that take you away from anything that is geometric, and then eventually it comes back to something that’s geometric. These are kind of miraculous states. You go through this period where water turns into wine and then you come back to the normal world again. That’s also really interesting to me. This is the equivalent of tears: in string theory, they’ve contemplated the possibility of tears in spacetime.

Isaac: That sounds a lot like a singularity. 

Greg: That’s exactly right. You can contemplate geometries where (support is probably too strong a word) you might be able to reason about a notion of singularity. This is another interesting facet of this particular presentation. From my perspective, what it shows is that we’ve barely scratched the surface of what we can do with these kinds of computational techniques. They show, in a way that you could teach to high school students, some of the power of the process calculi that’s just waiting to be explored. We haven’t really thought about at all what we can do with these gadgets.

They’re super powerful and we really know very little about them. There’s a lot more that we can encode in this way. In particular, there are the properties that are important, that I keep coming back to: the thing is computationally complete; it has a certain minimal level of expressiveness. The thing is compositional. This is really important because it means that you can reason about things in chunks, as opposed to having to consider the whole system all at once, which is one of the biggest challenges for physics and for a lot of classical mathematics. It’s not compositional. You have to have these global or God’s-eye views, which make problem-solving very hard and also make physical theories less testable.

Christian: Do you have any writings about the general form of this encoding?

Greg: I make mention of it in lots of different papers, but I don’t think I ever wrote a paper specifically focusing on encoding graphs. That’s largely because I developed the TOGL theory, which is specific to graphs. One thing that could be done is to show that every TOGL expression has an encoding in the Rho calculus, for example. That would be a way to expose these graph-encoding techniques.

It’s also interesting to contemplate: How does this change when we start talking about infinite geometries? The thing that makes this tractable is the finiteness of the Fano plane, seven points and seven lines. It’s easy to write down. Once you start getting these infinitary structures, how does that change the expressiveness power of the process calculus?

Isaac: I didn’t notice this the first time I saw this stuff during the Casper stand-up. Intuitively, lines 1, 2, and 3 are really similar somehow: they all lie on the outside of the triangle. Their encodings are all very similar, with no mixed summations. The lines that all intersect at that midpoint, lines 4, 5, and 6, also all have very similar encodings, with one mixed summation. Then line 7, the circle that connects all of those midpoints, also has a different encoding: it’s all mixed summation. I thought it was interesting that you’ve captured properties of the lines themselves, and the fact that they are similar to other lines, in the encoding.

Greg: That was very much on my mind when I was constructing the encoding: to try to make sure that the shape of the encoding respected what seemed to be going on in the geometry. I confess that I didn’t explore a number of alternative encodings to see if this kind of distinction always shows up, because it was so obvious that the encoding should respect this distinction. I picked an encoding that made that as evident as possible.

This is where I would hope that the type system would start to get really interesting, so that we could pick those kinds of phenomena out at the type level. Then we would have a lens. Maybe we still have the outer triangle, but the inner stuff starts to get a little more interesting or intricate. You could have types that describe the boundary but are less distinct about what’s going on in the middle. Then you get lots and lots of geometries. Or the other way around: types that discard the interior but not the boundary.

To me, it’s pretty, and it’s cool, and it says that we really don’t know how much more we can say with the process-algebraic stuff. Certainly, we can say stuff about protocols. We can say stuff about topology. We can say stuff about geometry. How far we can go with that, I don’t know; I’m really hoping to inspire another generation of folks to go and investigate these problems. Art is long; life is short.

Christian: Is there any connection between this and the theory of graphs language you were talking about a while ago? 

Greg: That’s what I was saying when I call it TOGL: theory of graphs. 

Christian: Oh, yes. 

Greg: I could take a TOGL expression, which will give you a graph, and encode it in this technique. Since that’s really a theory, you could say that these are models of the theory. You could actually develop a whole interesting theory-model structure that way. In fact, it might be interesting to apply LADL to that. You can apply LADL to the theory, you could apply LADL to the model. Then you have to have a morphism from the types you get for the theory to the types you get for the model. That would be an interesting thing to take a look at as well. 

Christian: Definitely. 

Greg: How are things going with the work you reported on last time, Christian?

Christian: All is going well. I’m just trying to pump out this paper; it might be a series of papers. The most interesting question is going to be exactly how much we can express using this method. I have figured out how to do recursion in this setting. Basically, you can take a theory, like the rho calculus, and send each base sort to the lattice of subfunctors of its representable functor. So for each sort in the theory, you’re getting a Heyting algebra. You’re mapping the original theory into the category of Heyting algebras. In particular, they’re categories: they’re Cartesian-closed posets, and they have some other nice structure. The result that you get over there is more than just your original theory; it has adjoints to certain operations.

In particular, you can consider endofunctors on one of the Heyting algebras. For example, if you have a process with a free variable of sort process, that is a functor from the lattice of processes back to itself, because it takes in a process and plugs it into that free variable. For any such endofunctor you can always take the initial algebra. So for any process with a free variable, when you want to say μX to do the recursion, you’re forming the initial algebra of that endofunctor.
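On a poset, an endofunctor is just a monotone map, and its initial algebra is its least fixed point; on a complete lattice of predicates, Knaster-Tarski gives it explicitly as the meet of all prefixed points:

μX.φ(X)  =  ⋀ { ψ | φ(ψ) ≤ ψ }

So the recursive predicate μX.φ is the smallest predicate closed under φ.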

Greg: I understand. Typically you want to do recursion not just on the processes but on the formulae. You want to describe recursive formulae.

Christian: That’s what I meant to say. You can have any predicate with a free variable, where the free variable is of predicate type, not just a process.

Greg: Got it.

Christian: We’re getting everything from namespace logic. It looks like we might get more. The question is exactly how much more. What can be expressed in this approach and what cannot is probably the main thing that’s left to be determined.

Greg: That’s really awesome. Are you keeping John Baez apprised of the work?

Christian: Yes. He likes it. His main point of emphasis was that this is going to be some kind of intermediate logic; it’s not clear exactly how expressive it is yet. He was telling me that I needed to nail that down.

Greg: Oh, cool. Very good. 

Christian: I’m just excited to start meeting with you about how to implement this stuff. 

Greg: Yes, totally. I’m starting to feel more and more like Agda is missing some key pieces. If you want to get the type system involved in the verification of certain properties that are related to the shapes of terms, then reflection is a must-have. You need to go back and forth between “I’ve got the term as a data structure” and “I’ve got the term as a thing whose structure the compiler can check.” It’s looking more like Scala is a better choice than Agda. I’m going to keep contemplating it, but this is my current thinking on the implementation piece.

Christian: In those languages, is it pretty straightforward to write down the theory of a calculus?

Greg: In Scala, I’ve given multiple basic implementations where I do exactly that trick of getting the type system to verify that the theory is sound, because the compiler says, “Yep, this is a type and this term conforms to that type.” At a minimum, we know it’s not trivial: the type is inhabited. I’ve given lots of implementations, and I’ve given people exercises to come up with variants of the implementations. In fact, that was part of the interview process in the RChain ecosystem: handing people these kinds of little puzzles.