Greg Meredith discusses the evolution of Rholang with Artur Gajowy, Isaac DeFrain, and Kent Shikama.
The slides below the transcript correspond to this talk.
Greg: Today, I was hoping we could go over the Rholang 1.1 proposal so that people can see what the features are and the rationale for the features. I was hoping that Artur would be on this call, but Kelly is very protective of his time right now.
Artur: I’m actually on the call.
Greg: What a pleasant surprise! Part of the reason I wanted Artur on the call is because you have another proposal for the applicative extensions. I wanted to go over those after we go over the features that I’ve proposed. I’d like to give people a sense of how far we could go. I also want to encourage exploration as it’s open source. People can take the applicative ideas and do a knock-off, a prototype, and people can start to see what the advantages are. I’d like to go over those as well. How’s that sound as an agenda for today?
Isaac: That sounds perfect. Let me say thanks for having me on again. Those questions that you posed right at the beginning are exactly what I wanted to hear about today.
Greg: Awesome. Essentially, the Rholang 1.1 proposals have to do with providing support primarily for sequential computation. Rholang 1.0 was all about supporting concurrency and not necessarily making it easy to do sequential computation. What I’ve discovered from years and years and years is that if you give people a tool that’s very close to what they’re familiar with, they won’t go and explore the corners that they’re not familiar with. They’ll stay in their comfort zone. You have to get them out of their comfort zone so that they can understand exactly what you’re trying to say.
After they’ve had some chance to get a grasp of what the full scope and scale is, then they can move into things that are a little bit more familiar, but in this wider context. That’s the rationale behind the way we’ve rolled out the Rholang design. These features are largely about making it easier to express certain kinds of sequential design patterns. As with the last few talks, there are slides that I’m speaking to that are available on the podcast page.
Isaac: I just wanted to add that—you’ve summarized what I’m about to say, but I just wanted to say it in a slightly different way—giving people a new tool is essentially giving them a new way to think about problems. Forget about the old way for a second so you can think in this new way; then we can introduce some features so that it can also do the old things that you’re familiar with. So let’s throw those features in as well.
Greg: Exactly. I actually learned this the hard way when I did Biz Talk inside of Microsoft. I gave the full suite and discovered that because I also offered this way for people to stay in their comfort zone, they did. The same thing happened with Scala, where Odersky did a very clever thing: he made it possible to program in a very Java-like way inside Scala, which was really smart marketing because a whole bunch of developers could come over to Scala, jump on the Scala bandwagon, but they were really still programming in a Java-like way. They weren’t actually using functional design patterns.
But enough philosophizing. I was hoping to jump over to the first feature, which is a new and improved for comprehension. In this proposal, we rejigger the syntax a little bit. We’re going to replace the old meaning of semicolons in the join patterns with an ampersand, and the semicolon will take on the meaning of sequencing. The general form would be something like: for, then a bunch of pattern-from-channel binding pairs. You have a pattern on the lefthand side of a binding arrow from a particular channel, ampersand, then another one of those, and another one of those—yada, yada, yada—and then a semicolon.
The meaning is that all of those need to go at once. You have to have bindings that match the patterns from each of the channels in parallel. Then sequentially you move to the next set of binding patterns, and then sequentially move to the next set of binding patterns.
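The general form being described might be sketched like this (the surface syntax is still a proposal, and the channel and pattern names here are purely illustrative):

```rholang
// One "row" of joins fires all at once; semicolons sequence the rows.
for (p1 <- x1 & p2 <- x2;           // row 1: a join, matched concurrently
     q1 <- y1;                       // row 2: runs only after row 1 fires
     r1 <- z1 & r2 <- z2 & r3 <- z3  // row 3: rows can have different lengths
    ) {
  P
}
```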
And also this source—in Rholang 1.0, sources can only be channels. Now we expand the notion of source to be either an asynchronous request, meaning that the other side of this is probably an output that is waiting for an acknowledgment, or it can be a method call, in which case you’re sending arguments in to the source. Then there’s a well-defined return channel along which you’ll get a result.
We’ve expanded source to be a channel, or an asynchronous use of the channel, or a channel as an entree to an object via method call. Those are the three different expansions, in addition to this join-and-sequence pattern. Does that make sense? It’s hard to explain. If you look at the slides, it’s probably easier, but Isaac, Kent, Artur, is that making sense?
Isaac: Yes, definitely. I wanted to phrase it a little bit differently than you phrased it and ask a question in phrasing it this way. It looks like, aside from the source changes, the change in the actual for comprehension is that now we accept, basically, a matrix of listens, whereas before we were accepting essentially a vector of listens. By listen, I mean the source-pattern binding pair.
Greg: That’s exactly right. Each row in the matrix happens concurrently, and then you go from one row to the next in sequence.
Isaac: So it would have to match the first row, however many of those listens would have to have some pattern that matches. Then we would go onto the second one and repeat that process.
Greg: You’re chunking in rows.
Isaac: One more thing. The rows: do they all have to have the same length or can they all be different?
Greg: They can all be different lengths.
Isaac: Ok, great.
Greg: We’ve talked a little bit about the general form. If I specialize down to the case where we’ve got a single synchronous listen binding pair, then the for comprehension would look like: for, pattern, from, X question-bang, do P. That’s the shape of the for comprehension. The question is, what does that mean? Can we give a translation of that?
It turns out that we can desugar that quite straightforwardly. We take the pattern and expand it into a tuple pattern. The lefthand part of the tuple is going to be the original pattern and the righthand part of the tuple is just going to be a single variable, which has to be fresh with respect to P. That’s going to be the return channel.
Imagine that what’s really happening is that on the other side of this for comprehension we have a send, and the send is sending a tuple: the data, which is going to match against the pattern, together with a return channel. And it’s hanging its continuation off that send—it’s now an output-continuation pair, which Rholang 1.0 doesn’t have—waiting for a ping on the return channel. That’s essentially what’s going on here.
The question-bang notation is a mnemonic that helps people understand what’s going on. We’re waiting for an output on the channel, and then we’re going to poke back an acknowledgment on the return channel, off of which the sender’s continuation is hanging. That’s the basic shape.
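A sketch of the desugaring being described, with hedged syntax (the freshness side condition is the important part):

```rholang
// Sugared: a receive that acknowledges the sender once it has the data.
for (pat <- x?!) { P }

// Desugared (r must be fresh with respect to P): the pattern becomes a
// tuple pattern, and the body acks on r in parallel with P.
for ((pat, r) <- x) { P | r!(Nil) }
```

Note that the ack on r runs in parallel with P rather than after it, which matters for the ordering discussion that follows.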
I probably mangled the English description. The math is nice and crisp. It’s a one-line equation for desugaring. Any comments from you guys?
Isaac: I have one question/comment. The freshness of this name R that we’re listening for: Is the reason that we want it to be fresh so that it doesn’t also interact with that original continuation P?
Greg: That’s exactly right. If you look at the math, the pattern expands from pattern to tuple pattern comma R. Now you’ve got a binder for R, so if R were free in P, then you would capture it accidentally. You don’t want to do that.
Isaac: That makes sense. Off of that question, how do we ensure the freshness of this name R? We just take the set of free variables in P and just make sure that it doesn’t coincide with any of those or something?
Greg: That’s exactly right. Rho-calculus has all kinds of fancy tools for this, but you don’t need those fancy tools. People have been doing this forever. Any observations or questions, Kent or Artur?
Artur: It just struck me—basically, correct me if I’m wrong, but it seems that X is basically a function channel. We receive on the channel the arguments and the return channel, and the question-mark-bang receive is sugar for implicitly, immediately responding on that return channel, which came in within a tuple or whatever.
Greg: Yes, it’s making the for comprehension into a makeshift function. It’s a function that has a void return, but it does have a return. That’s another way of thinking about it.
Artur: The thing that struck me is that there is actually no ordering between the execution of P and the callback that is being sent through R.
Greg: That’s correct.
Artur: Why would we want that? Initially, I thought that we have that sequencing.
Greg: What you have is a guarantee that Q will only run after the data on X has been delivered and not before, but the ordering of P with respect to Q, they’re running in parallel.
Artur: Makes sense.
Greg: All right, I’m going to move on to the next one. The other source expansion that we have is X bang-question and then some arguments. The simplest expanded for comprehension that uses this would be: for, pattern, from, X bang-question, some arguments A one through A K, close paren, do P—that’s curly braces around P. What this desugars into is: we really do make a fresh R, in the sense that we bind it with a new binder. We make a new R, and then in the scope of that binder we send on X all of the arguments A one through A K along with the channel R, because that’s going to be the return channel where we’re going to get information back.
We’re making use of the idea that on the other side of X is a function-like thing, which is expecting some arguments, and then we’ll return a value. The R is the place where the value is going to be returned. The convention is the return channel comes at the end of the list of arguments.
If you’ve got some other return discipline, then you would have to wrapper that to get it to line up with this convention, which is of course not a big deal. But if you were, for example, using this sugar to layer over a library—maybe a C library that you poke through into Java, then made available in Scala, and then made available through Rholang—if that thing used a different convention, then as part of your wrappering you’d have to get the convention to line up: arguments first and then the return channel.
Then you hang P off of the value that you’re going to get back from R, and that value is matched against the original pattern. You’re expecting the return value to be some data that matches against the pattern. If you get a match, then you release the continuation. Again, a long-winded way of saying what is just a one-line equation in the spec. Any questions or comments? Kent?
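As a sketch (fresh name r, illustrative argument names; the return-channel-last convention is the one just described):

```rholang
// Sugared: call a function-like thing on x and bind its result.
for (pat <- x!?(a1, a2)) { P }

// Desugared: bind a genuinely new return channel r, send the
// arguments plus r, and wait on r for a value matching pat.
new r in {
  x!(a1, a2, *r) |
  for (pat <- r) { P }
}
```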
Kent: This makes a lot of sense. The sugar that you’ve introduced previously works well with this sugar. This could be the send side, and then the other sugar would receive on X and then send back on the return channel and P would get run, for example.
Greg: Yeah, that’s right. You’re bringing up an important point. The way we’re doing all of these different sugars, you have to make them all work together all at once. As we go through all of these different expansions, we’ll see how they all fit together. In particular, the output side needs to be looked at. For this one, we’re expecting it to line up with either something in Rholang that looks very function-like or with a Rholang wrapper around something that’s actually a function in Scala.
Let’s move on. The other piece of the puzzle is Isaac’s idea of thinking of this as a matrix. You’ve got these rows that are separated by semicolons and the column entries inside the row are separated by ampersands. That’s a good way of thinking about that.
The way you can think about the overall desugaring of the new and improved for comprehension is: get rid of all of the fancy sources first; then you’ll be down to things that are just ordinary channels as sources. Then what do you do?
Desugaring is actually a little bit more permissive than getting rid of them all at once. When you have, for example, a source in the method-call style in the first position of a row, you create a new return channel, and then you do the send and put that in parallel with waiting on the R.
You can do this, likewise, for the other kind of source. Then you can get down to ordinary joined patterns. All that’s left is the sequencing part. The next piece of the puzzle is: how does the sequencing work?
Essentially all that does is nesting. If I have a row that’s all just proper channel sources, I can turn that into an ordinary join. Inside of that one, I do the translation of the matrix minus the first row. A long-winded way of saying what’s again a one-line equation in the slide. Anyone want to take a stab at explaining it in a different way for our audience?
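Once the fancy sources are gone, sequencing is just nesting, roughly like this (illustrative names, tentative syntax):

```rholang
// A two-row for comprehension...
for (p1 <- x1 & p2 <- x2; q1 <- y1) { P }

// ...becomes an ordinary join whose continuation carries the
// translation of the rest of the matrix:
for (p1 <- x1 & p2 <- x2) {
  for (q1 <- y1) { P }
}
```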
Isaac: I can try. You’re saying we’re going to desugar the sources first. If there’s a method call on one of those sources in the first row, then we can desugar that, and if there is one of the other special sources that we have, we can desugar that as well, until we’re down to only names as sources in the first vector of listens. Then we just turn that into a usual for comprehension with all of those listens joined together, just like we usually have in Rholang, and the rest of the listens in the matrix are put into the continuation. We just have another layer of this desugaring to do at that point.
Greg: That’s exactly right. At the end of the day, what happens is: the semicolons turn into nested for comprehensions. People who have experience writing Rholang, they feel really uncomfortable with the amount of nesting that they have to do in order to do just what would ordinarily just be straight-line code in a language like Java or Scala. It makes Rholang look really unwieldy.
This is providing that code compression. We now have this mechanism to do similar kinds of straight-line code approaches, and then you can see for yourself the kind of compression you get. The nesting structure that people experience in Rholang today is very similar to when you program in Node.js and you end up in callback hell, where you send a message and then you have this callback, and inside that you’re expecting to send a message and have another callback. That same kind of ugly, unwieldy nesting rears its head in most purely asynchronous programming patterns. This takes that away, so that you can get the kind of compression that you expect in ordinary straight-line sequential languages. Nicely described, Isaac.
Let’s look at the other side. We’ve been focused on the for comprehension side, but as Kent pointed out, there needs to be some massaging on the output side so that it lines up with the for comprehension side. In the case where you have a source that’s a bang-question—sorry, question-bang—on the output side, we now have two new forms. Again, we use the bang and question characters to provide a mnemonic for what’s going on.
Dual to the question-bang binding pattern in the for comprehension, we have a bang-question with a semicolon continuation on the output side. We’re going to send the value and then wait for an acknowledgment before we fire P. Intuitively, you turn the sending of the value into the sending of a pair: the value together with a return channel. So that people don’t have to write semicolon-zero all the time, we also add in a period, so you can say X bang-question value, period. What that means is you’re expecting to get an acknowledgment that the value has been delivered, but there’s no code to run after that. That’s the extension.
The syntactic sugar is, again, very straightforward. The desugaring of X bang-question V semicolon P is just: make a new R; send on X the tuple of V and R; and in parallel, wait on R for an underscore—we don’t really care what we get back, because we just want to know that we got something back; that’s the acknowledgment part—and then run the translation of P. That removes all the semicolons from output expressions. Likewise, if you have X bang-question V period, that turns into the desugaring of X bang-question V semicolon zero. Anybody want to take a stab at restating that?
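A sketch of the output-side desugaring just described (hedged, tentative syntax):

```rholang
// Sugared: send v, wait for an acknowledgment, then run P.
x!?(v); P
// Shorthand when there is nothing to run afterward:
x!?(v).          // means x!?(v); Nil

// Desugared: pair the value with a fresh return channel and wait
// on it; any message at all counts as the ack.
new r in {
  x!((v, *r)) |
  for (_ <- r) { P }   // P here stands for its own translation
}
```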
Isaac: The desugaring of this X bang-question value, and then running P sequentially afterward, is really similar to the for comprehension where the source is the X bang-question. There are some slight differences. The one big difference is you don’t have a specific pattern that you’re listening for on that channel R; it’s just whatever comes through.
The other thing that I wanted to ask was: in this case we’re sending a tuple, is there an advantage to sending the tuple as opposed to the value V and the channel R as two messages?
Greg: You have to send them together. As for how it’s packed, there are a lot of isomorphic packagings; I just took the most straightforward one.
Isaac: The last question that I have is: that value V, can it be a list of values? Typically in Rholang, I can send one, two, three—I can send several values all at once, polyadically or whatever they call it.
Greg: Polyadically, yes. There’s an obvious polyadic extension of this. That’s correct.
Isaac: That’s what I was getting at.
Kent: I just want to point out that there seems to be a small desugaring complexity. Between the slide that says removing receive-send and the next slide, removing semicolons, essentially you want to put the R bang unit—I don’t know if that’s the right way to say it—in par with the desugaring of the next semicolon. You don’t want to put it where the P is on the slide that says removing semicolons.
Greg: That’s right. That’s why I say the safest way to do this is you get rid of all fancy sources in a row and then you chunk the rows, then you do the nesting.
Kent: Yes, that makes sense.
Greg: It’s a good observation. Thank you for that. In the proposal, I work out an example calculation. I don’t know that I’m going to be successful in giving an English description of the calculation. We’ve got a one-liner with a for comprehension that uses these question-bang sources: M is bound to something received on X, which we acknowledge, and then N is bound to something coming out of Y, which we also acknowledge. Then the continuation is to send a message on standard out saying that M plus N is equal to the actual arithmetic sum of M and N.
That for comprehension is run in parallel with sending one on X followed by sending two on Y. You get the sequential output part tested against one of these nested for comprehensions. It ends up being a one-liner to write the whole thing.
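Putting the two sides together, the example might look roughly like this (the channel names x, y, and stdout, and the message format, are illustrative):

```rholang
for (@m <- x?!; @n <- y?!) {
  stdout!([m, "+", n, "=", m + n])
} | { x!?(1); y!?(2). }
```

Because the acknowledged send on x must complete before the send on y fires, the comm event on x precedes the comm event on y.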
The first thing that people should look at is what it expands to (in terms of Rholang code), because it’s a compression by a factor of four. The other thing is that when you run the desugaring, you get an equational proof that the desugaring does exactly what we expected it to do. The desugared program provides exactly the semantics that we want from the initial program that we started with. That’s really what’s important, and it goes to the heart of the whole RChain philosophy.
It’s correct by construction. The equational presentation of the desugaring is a proof of its correctness. I don’t do the formal proof here; rather, I do a proof by example. If you run through this example on your own, you’ll see that we literally have a formal proof of the correctness of the execution of the sugared program when it’s desugared. At the same time, you can see the extent of the compression that this sugar provides.
We’re starting to see some of the payoff of the whole RChain philosophy. We recognize that the community of programmers and developers writing Rholang smart contracts is suffering. They’d like to be able to write more compact smart contracts. So we introduce some language features and we provide an equational specification of the meaning of those language features. That gives us a formal proof that if the compiler follows the equational presentation, then the compiled programs will be correct.
That’s the way I’ve wanted to do product level development for ages. I was completely ruined after I was in Ottawa presenting research I had done at one of the Upsala’s. I saw Samson Abramsky talking about linear logic. When I got home, I read his paper on computational interpretations of linear logic, in which he presents this kind of methodology. This is back in the late eighties. I was completely ruined. I was like, “I’m never ever doing software again that doesn’t have this kind of correct by construction approach to it.” That’s what’s informing what’s going on in RChain architecture and design. I’ve been mumbling on for too long. Any thoughts or comments?
Isaac: I didn’t think it was mumbling. I thought that was a great story.
Greg: If you guys get a chance, go over the calculation and see if it makes sense. I’m not sure that in a podcast format it would be easy to describe in English. I would love for folks to take a look at the calculation and see exactly how it works out.
Isaac: I can take a real quick stab at trying to describe it or at least trying to describe the point of showing the example. Basically, we’re just listening on two channels in a join fashion and then using this acknowledgment synchronization mechanism that we’re building here into the syntax, doing one of those synchronized sends to each one of those channels sequentially.
Greg: That’s exactly right. On the third page of the calculation, we’re able to guarantee that the comm event on X with the correct substitution must occur before the comm event on Y. You could formally prove that; the calculation shows that it’s exactly what happens. You can take this as a template to formally prove a similar kind of property for the sugar at large—exercise left to the reader.
Isaac: Now that you said that, I have a question, because that was a good point. What’s guaranteeing that the comm event on the X channel happens first: is it the position of the listen on that channel X, or the fact that it’s the first send in a sequence of sends, or a combination of the two?
Greg: It’s a combination. It’s the way the new and improved for interacts with the new and improved output. That’s the trick: you have to improve them both.
Isaac: That makes sense.
Greg: I know that I’ve belabored the point about the compression, but there’s more to it than just compression. The other thing is that we have gathered up, under some syntactic sugar, all of the uses of unforgeable names that are just about this coordination and have no transactional import. All of those could be marked and treated differently with respect to the blockchain semantics. In particular, they don’t have to go through the same kind of consensus.
In other words, you can lift this up a level. The consensus is about the sugar and not about the desugared form, which means that you don’t have to hit the tuple space in the same way for the unforgeable names. It not only gives us the ability to improve the expressiveness of programs; it also gives us a very large win for performance. In general, you’ll get better than 2x compression. For certain classes of smart contract design patterns, you’ll get better than a 40% improvement against the current semantics.
Isaac: That sounds like a pretty large load off of the shoulders of the validators. I don’t know a whole lot about how to quantify that statement, but it does seem like a lot.
Greg: I agree. Any thoughts from you, Kent or Artur, on the points that I’m making here?
Kent: On the language level it makes a difference. Essentially, for the platform as a whole, the bottleneck is at the consensus layer. It will probably be a while until the language-level efficiencies really shine through.
Greg: I think you’re right about that, but every little bit helps.
Kent: I agree, that’s for sure.
Artur: I will add to that that the construction of the syntax makes for a really nice way of implementing it. One of my first instincts was to not actually store the desugared form in the tuple space—just store the compressed high-level AST and maybe desugar on the fly according to the rules presented in the slides. But we can go even further than that. We can have a direct interpreter of this high-level AST, which is going to bring the computational efficiency improvements that Kent was referring to. I like it as a language implementer.
Greg: Exactly. We’re on the same page. We can execute the higher-level language directly, but the nice thing is we don’t lose the connection to the desugared form. We always know that if we need to, we can talk about it in terms of low-level simple operations. We don’t have to take on any kind of technical debt or worry (or concern) that we didn’t get the semantics right or anything like that because we have a formal proof that it works. The proof is the implementation.
I want to blaze through the other features because I want to give Artur enough time to talk about the applicative stuff. We can also add let expressions. Anyone who’s programmed in Java or Scala especially, they’re used to being able to say something like Val X, where X is some pattern, is equal to some variable. Then you do all the destructuring, semicolon and then do P. You have a binding for X in the scope of P, or a binding for all the variables in the pattern X generated by destructuring V according to X in the scope of P.
That’s a very natural thing. It gives a high degree of code compression, which is not currently available in Rholang without doing a send. You can have two different kinds of let. One is a nested let: you have let pattern-one from V-one—that’s going to create a binding by destructuring V-one with respect to pattern-one—semicolon, yada, yada, yada, up to pattern-N from V-N, and then P.
So that’s nesting. You can also do a simultaneous let: instead of semicolons, use ampersands. Then the desugaring: you could do it in terms of match, but since from my point of view match is already syntactic sugar, I go down to the bottom. The semicolons become nested sends and receives. You send the value on some fresh channel X, and then you wait on X and bind what you receive against the pattern. That’s the first binding pair in the semicolon let. Then you recurse, and it nests.
The other way around is to do it as a join. You send all of the values all at once, and then you wait in a join on all of the channels that you’ve sent those values on, and you bind them to their respective patterns.
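The two let forms and their desugarings, sketched here under the proposed (still tentative) syntax:

```rholang
// Nested (sequential) let: later bindings may use earlier ones.
let p1 <- v1; p2 <- v2 in { P }
// desugars one binding at a time into a send/receive on a fresh channel:
new c in { c!(v1) | for (p1 <- c) { let p2 <- v2 in { P } } }

// Simultaneous let: the bindings are independent of each other.
let p1 <- v1 & p2 <- v2 in { P }
// desugars into one join over fresh channels:
new c1, c2 in {
  c1!(v1) | c2!(v2) |
  for (p1 <- c1 & p2 <- c2) { P }
}
```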
That gives you two different approaches and that more or less corresponds to what you typically see in functional languages like Scheme where you have either a nested let or a simultaneous let. It doesn’t give you the recursive let, but that’s really what contract is for.
Artur and I had a good discussion in Wroclaw. My original guidance was that the interpretation of contract as a persistent continuation was only a convenience, and that the better interpretation is to do it as a recursive let, because recursion is better suited to modern computational architectures. When we move to the Rholang 1.1 implementation (and possibly even sooner), we will take that perspective. We’ll move to that interpretation. They’re inter-definable, but that one is better in terms of existing platforms like the JVM. Artur, do you have any comments about that?
Artur: For me as a programmer, this is very promising. I can leave guidance at the syntax level about which of the values can depend on each other and which cannot. If I use the ampersand syntax, I can assert that the values don’t depend on each other.
Greg: Yes, because now a compiler will go and check your intention.
Artur: About recursion: having a recursive let would be great, or some kind of recursive syntax that would allow us to have a term that can somehow refer to itself, somehow form a copy of itself. A couple of days ago, I mentioned in our internal channels an idea for syntactic sugar where we would have a receive from the underscore channel, and the underscore would resolve to the name of the receive that we are in.
Greg: I think there are some issues, but I’m still thinking about it.
Artur: I wouldn’t be surprised if there were issues. Anyway, having a more direct way of expressing recursion would also allow us to be more efficient in our contracts.
Greg: Yes, agreed. In the same way that we can execute the higher-level language with respect to the new and improved for comprehensions and output, we can also execute the higher-level language with respect to these let constructs. That gets us performance improvements. There’s fresh name generation going on here that doesn’t have to hit the tuple space. We can get some improvement there, and we don’t have to get a consensus about that. That’s an improvement. Now let me turn it over to Artur to tease people to what we might do with an applicative notation.
Artur: Haskell has this thing called—well, first of all, it has do notation, which is also reflected in Scala’s for comprehensions. I don’t really have a formal thought about this, but my impression is that Rholang’s for comprehensions, in the 1.1 syntax, are also kind of related to do notation: basically, a way of sequencing computations.
Another similarity is that the new and improved for comprehension takes a bunch of nested receives and makes them into a non-nested structure (if that makes any sense). I ran into this when working on the REV wallet contract, especially in the areas handling all this nesting. In our syntax, whenever we do a method call, we get what I used to call the method call triad: a new name being declared, a receive listening on that name, and a send to that name—the name declared in the new. That’s exactly what’s covered by the bang-question-mark syntax.
Because the triad gets nested over and over again, I went to workarounds: I embraced a convention where, instead of taking values, my combinators would take channels from which they would receive the values. That would strip off at least the receive from the method call triad. Then I would take all those names being declared and put them all on top. Suddenly I no longer had nested code; I had linear code. But that’s kind of a hack.
What that led me to realize is that this is actually quite similar to how do notation works. At the same time, it was different: the code that I had wasn’t actually executing completely sequentially. It was executing as much in parallel as possible, because of how Rholang is constructed.
That led me to the realization that this is similar to Haskell’s applicative do. If you’re familiar with do notation, desugaring do notation is straightforward. It doesn’t depend on anything; there are basically two rules: when we’re at the end we do a map, otherwise we do a flatMap. But that also means that regardless of whether the next step of the do computation actually depends value-wise on the previous one or not, it all gets executed in sequence.
In some cases that’s wasteful. Simon Marlow and Simon Peyton Jones and also Edward Kmett—basically, all the saints of Haskell—came to the realization that this can be done better. In Haskell, they propose this applicative desugaring, which inspects the dependencies between the sub-expressions of the do block. Whenever possible, they use the applicative semantics: a desugaring that basically tuples up the computations, similar to our ampersand operator in the Rholang 1.1 syntax.
While listening to you guys today, I realized that my proposal is actually in there—it’s in the ampersand, which is executed in parallel. The only issue is that I only focused on the method call syntax and having it run in parallel, while Greg’s proposal is much more formal and much more complete. It’s been extended to new discoveries like the one I mentioned about the let variants. What I would consider doing is refraining from providing the sequential semantics, at least for a little bit longer. If sequentiality is needed, one can enforce it with the value flow; when it’s not, there is always the temptation, or the risk, of using the semicolon spuriously.
Greg: What I like about your proposal is that you end up with a data-flow sub-language in the for comprehension. My proposal is forcing people to make some hard commitments in syntax about where there’s parallelism and where there’s sequentiality, whereas on the applicative side they don’t have to say it explicitly. There’s enough information in the syntax to work out the data-flow dependencies, and then it will just do the right thing.
Artur: I think that’s actually what’s going to happen, given that the ampersand-separated bindings can’t depend on each other. That’s kind of the difference. If we allowed for not having the semicolon at the end of a row in the new for comprehension syntax, that would mean that the order of execution can be arbitrary and is only determined by the data being used. That behaves like Haskell’s applicative do.
Greg: Yes, exactly right. That’s why I was really delighted by your proposal—precisely for that. This is where I think we should let a thousand flowers bloom. This is why I mentioned at the top of the hour that I think people should go and start doing some experiments. I would really like for a group to go and code up a demonstration of the applicative approach, precisely because of this data-flow potential.
The other aspect of it, which you didn’t touch on but which I’m quite keen on, is that there’s a co-do notation as well. There’s a co-monadic side of this that we haven’t discussed, which would be great for streaming applications. If we can provide an applicative, data-flowy kind of language and, dual to that, the co-monadic, co-do notation, that is a rich area of language investigation.
It all comes out of this understanding that monads are the new objects. Monads and co-monads are the new object orientation; people should be thinking that way. The right syntactic framework for monads and co-monads is this sort of for-comprehension, do-notation, applicative gadgetry—we don’t know exactly how it all balances out yet. If there are courageous and intrepid community members who want to go and explore this stuff, we need lots of people exploring it. This is really exciting, cool stuff.
Artur: One of the benefits of having that syntax applicative by default will obviously be parallelizing as much as possible. At the same time, from the perspective of a community member who is to implement that: there’s not a single best way to desugar applicative do. That’s a tricky thing to do. That’s even more points for your proposal, because your proposal has a very direct and straightforward desugaring that is easy to implement. With applicative do, that is no longer the case.
Greg: At least for now, but I have a feeling that if someone goes and does it, we can find a data flow semantics that makes the most sense, at least in this setting, which is why I really wanted to have this conversation.