Hello. It was not out of context that I mentioned Go in my previous emails. I would like to know the opinion of this forum about what would be the most likely CL - portable - substitute to implement channels. usocket and/or bordeaux-threads are what I'd target. In Go you have chan items, and you can send and receive from them using the <- operator. AFAIU these channels are blocking.

Any suggestions?

Thanks

Happy Holidays

Marco

--
Marco Antoniotti, Professor, Director
DISCo, University of Milan-Bicocca U14 2043
Viale Sarca 336, I-20126 Milan (MI) ITALY
tel. +39 - 02 64 48 79 01
http://dcb.disco.unimib.it
REGAINS: https://regains.disco.unimib.it/
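A blocking channel of this sort can be sketched portably on bordeaux-threads alone, with a lock, two condition variables, and a list-based FIFO buffer. A minimal sketch - the names MAKE-CHANNEL, CHAN-SEND, and CHAN-RECV are hypothetical, and note that a capacity of 1 only approximates, and is not identical to, Go's unbuffered rendezvous semantics:

  (ql:quickload :bordeaux-threads)

  (defstruct (channel (:constructor %make-channel (capacity)))
    (items '()) (tail '()) (count 0) capacity
    (lock      (bt:make-lock))
    (not-empty (bt:make-condition-variable))
    (not-full  (bt:make-condition-variable)))

  (defun make-channel (&optional (capacity 1)) (%make-channel capacity))

  (defun chan-send (ch item)
    (bt:with-lock-held ((channel-lock ch))
      ;; Block while the buffer is full, as a Go send would.
      (loop while (>= (channel-count ch) (channel-capacity ch))
            do (bt:condition-wait (channel-not-empty ch) (channel-lock ch)))
      (let ((cell (list item)))
        (if (channel-items ch)
            (setf (cdr (channel-tail ch)) cell)   ; append at the tail
            (setf (channel-items ch) cell))       ; first item
        (setf (channel-tail ch) cell))
      (incf (channel-count ch))
      (bt:condition-notify (channel-not-empty ch))))

  (defun chan-recv (ch)
    (bt:with-lock-held ((channel-lock ch))
      ;; Block while the buffer is empty, as a Go receive would.
      (loop while (null (channel-items ch))
            do (bt:condition-wait (channel-not-empty ch) (channel-lock ch)))
      (decf (channel-count ch))
      (bt:condition-notify (channel-not-full ch))
      (pop (channel-items ch))))

  ;; Usage: one thread produces, another consumes; both block as needed.
  (defvar *ch* (make-channel 10))
  (bt:make-thread (lambda () (chan-send *ch* 42)))
  (chan-recv *ch*)   ; => 42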
> Any suggestions?
Transactional Conventional Hewitt Actors (staged SEND, BECOME, CREATE until successful exit) https://github.com/dbmcclain/Lisp-Actors
Automatic parallelism through parallel concurrent execution by a dispatch thread pool. No user-level threads, no locks, no shared mutable memory. A convention of functionally pure programming protects concurrent access during parallel execution. Only one overt mutation (BECOME), under carefully controlled CAS execution, with automatic message re-delivery on a failed CAS. Asynchronous execution.
On Dec 22, 2025, at 09:52, David McClain <dbm@refined-audiometrics.com> wrote:
> Transactional Conventional Hewitt Actors (staged SEND, BECOME, CREATE until successful exit)
This is very interesting! But I think it needs FSet (https://github.com/slburson/fset) to store the state of an actor, and make it easy to efficiently compute a new state which can be supplied to BECOME.

-- Scott
Interesting suggestion… BECOME takes a functional closure, which contains its state within the closure vars. But I have become frustrated with too many BOA args, and I also implemented a kind of dictionary to carry state, with items labeled by a keyword.

But there are no rules, just conventions, used within my current Actors system. One convention is to provide a customer (Actor) argument in most messages. But again, no hard rules, just conventions.

The most compelling advance from Conventional Hewitt Actors is the notion of transactional behavior, which dictates that no SEND nor BECOME can be seen by anyone, including the issuer, until the behavior function exits cleanly. When parallel concurrent execution occurs, only one thread will succeed in committing the SENDs and BECOME. The others will be silently retried by redelivering their message to the (possibly now changed) Actor behavior.

Writing code in such a system is a lot like assembling little LEGO blocks of behavior that can be stitched together to provide larger customizable behaviors to the outside world.

Currently our hardware is aimed directly at Call/Return function behaviors, and so Actors-all-the-way-down would be completely impractical on our present computers. I find Call/Return to be effective for the innards of math functions, and Actors to be most practical for high-level orchestration of components - as in the Async Socket interface.
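In that style, an Actor's entire state lives in the closure captured by its behavior, and BECOME swaps in a new closure built from updated values. A tiny sketch, assuming SEND, BECOME, and CREATE operators like those in Lisp-Actors (the exact API may differ), and showing the customer convention:

  (defun counter-beh (count)
    ;; The state is just COUNT, captured in the closure.
    (lambda (cust msg)
      (case msg
        (:incr (become (counter-beh (1+ count))))   ; new closure, new state
        (:read (send cust count)))))                ; reply to the customer

  (defvar *counter* (create (counter-beh 0)))
  ;; (send *counter* cust :incr), where CUST is the customer Actor.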
Keyword args are a possibility too, but that implies a piling-up of old data behind the newly specified keyword arg. And that's why I implemented the dictionary methods - to avoid keeping stale data away from the GC.
On Mon, Dec 22, 2025, 12:20 PM David McClain <dbm@refined-audiometrics.com> wrote:
> BECOME takes a functional closure, which contains its state within the closure vars.
Right. I saw your file 'actor-state.lisp' and thought "Ah! This is a functional map. This man needs FSet."

Yes, the state is in the closure vars, but that doesn't preclude it being large and complex. With functional data structures, you can efficiently prepare an updated version of a large structure without invalidating the previous version. If something goes wrong before the BECOME takes effect, no harm has been done; the tentative new version simply becomes garbage. The trick is that each update costs only O(log n) space, where n is the size of the previous version.

-- Scott
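With FSet, for example, an updated map shares almost all of its structure with its predecessor, so preparing a tentative new state is cheap and the old version stays valid (this usage is from memory - treat it as a sketch):

  (ql:quickload :fset)

  (let* ((v1 (fset:with (fset:empty-map) :rate 48000))
         (v2 (fset:with v1 :gain 0.5)))   ; O(log n) update; shares structure with V1
    (list (fset:lookup v2 :rate)          ; => 48000 - inherited from V1
          (fset:lookup v1 :gain)))        ; => NIL - V1 itself is unchanged

If the BECOME never commits, V2 simply becomes garbage, exactly as described above.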
Yes, good point on the O(log n) cost. But that implies that elements have an ordering relation, no? Partial or total order. I have such structures that I use often - red-black trees that are purely functional implementations. So I understand your points here. But many times my data does not have any order relation, just an equality.
I guess the specific example of my Actor-State does have an ordering relation, since the keys are all keyword symbols. So good point on that O(log n). My simple implementation is just a copy / replace, which is O(n).
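The copy/replace approach on a plist might look like the following hypothetical PLIST-WITH (not the actual actor-state.lisp code) - every update rebuilds all n entries:

  (defun plist-with (plist key val)
    ;; Fresh plist with KEY set to VAL; the original is untouched, but the
    ;; whole list is copied, hence O(n) time and space per update.
    (list* key val
           (loop for (k v) on plist by #'cddr
                 unless (eq k key) append (list k v))))

  ;; (plist-with '(:a 1 :b 2) :b 3)  =>  (:B 3 :A 1)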
Actor-State is just an experimental implementation at this time. There are only a few examples where I tried using a formal Actor-State. One of them is my implementation of a key-value store (kvdb). The other place is in my Ionosphere monitoring system, where I track 15 MHz carriers to <100 μHz precision.

I did this implementation because a friend of mine involved in Actors machines suggested the JavaScript JSON Dictionary - which I find to be enormously redundant and noisy. Just a simple PList with keyword keys could fully replace them, with great simplification. And so my Actor-State was an attempt to prove that out.

I fear that purely functional red-black trees would be even more costly, but maybe not. I should look into it, or your FSet.
On Mon, Dec 22, 2025, 4:21 PM David McClain <dbm@refined-audiometrics.com> wrote:
> I guess the specific example of my Actor-State does have an ordering relation, since the keys are all keyword symbols.

Oh, an ordering relation can be defined for almost anything. FSet has a generic function 'compare' that can be easily extended for new types.

Functional red-black trees are a fine choice. FSet has long used an older kind of balanced tree, called weight-balanced. More recently, I've added a relatively new hash-based data structure called CHAMP.

On Mon, Dec 22, 2025, 4:25 PM David McClain <dbm@refined-audiometrics.com> wrote:
> I track 15 MHz carriers to <100 μHz precision.

Impressive!

> I fear that Purely Functional Red-Black Trees would be even more costly, but maybe not. I should look into it, or your FSet.

Haha, if you want small and simple, FSet might not be to your taste. My goal was to make it easy to use and featureful. It's pretty fast, though.

-- Scott
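Extending FSet's ordering to a new type amounts to one method on the generic function 'compare', which returns one of :LESS, :GREATER, :EQUAL, or :UNEQUAL. A sketch for a hypothetical struct (if memory serves, FSet also offers a COMPARE-SLOTS helper for exactly this pattern):

  (defstruct vec3 x y z)   ; hypothetical user type

  (defmethod fset:compare ((a vec3) (b vec3))
    ;; Lexicographic order over the slots, deferring to FSet's own
    ;; comparisons of the slot values.
    (let ((cx (fset:compare (vec3-x a) (vec3-x b))))
      (if (not (eq cx :equal))
          cx
          (let ((cy (fset:compare (vec3-y a) (vec3-y b))))
            (if (not (eq cy :equal))
                cy
                (fset:compare (vec3-z a) (vec3-z b)))))))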
I've reached the conclusion that if you have first-class functions and the ability to create FIFO queue classes, you have everything you need. You don't need Go channels, or operating system threads, etc. Those are just inefficient, Greenspunian implementations of a simpler idea. In fact, you can draw diagrams of Software LEGO parts, as mentioned by dbm, just with draw.io and OhmJS and a fairly flexible PL. [I'd be happy to elaborate further, but wonder if this would be appropriate on this mailing list] pt
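For what it's worth, the bare-bones version of that claim fits in a dozen lines: one FIFO of (closure . message) pairs and a loop that drains it. A hypothetical sketch, with no threads at all:

  (defvar *queue* '())   ; FIFO of (closure . message) entries

  (defun post (fn &rest msg)
    (setf *queue* (nconc *queue* (list (cons fn msg)))))

  (defun run ()
    ;; The entire "runtime": pop one entry, apply the closure to the message.
    (loop while *queue*
          do (destructuring-bind (fn . msg) (pop *queue*)
               (apply fn msg))))

  ;; Two parts wired together: a doubler that posts its result to a printer.
  (post (lambda (x) (post #'print (* 2 x))) 21)
  (run)   ; prints 42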
Hi,

I did waste some time on the topic, and I agree. The only thing you need on top is threading/IPC, which requires mailboxes, which are implementable with locks and FIFO queues.

MA
> I've reached the conclusion that if you have first-class functions and the ability to create FIFO queue classes, you have everything you need.
This is essentially what the Transactional Hewitt Actors really are. We use "Dispatch" threads to extract messages (function args and function address) from a community mailbox queue. The Dispatchers use a CAS protocol among themselves to effect staged BECOME and message SENDs, with automatic retry on losing the CAS.

Messages and BECOME are staged for commit at successful exit of the functions, or simply tossed if the function errors out - making an unsuccessful call into an effective non-delivery of a message.

Message originators are generally unknown to the Actors, unless you use a convention of providing a continuation Actor back to the sender, embedded in the messages.

An Actor is nothing more than an indirection pointer to a functional closure - the closure contains code and local state data. The indirection allows BECOME to mutate the behavior of an Actor without altering its identity to the outside world.

But it all comes down to FIFO queues and functional closures. The Dispatchers and transactional behavior are simply an organizing principle.

- DM
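A stripped-down sketch of that commit protocol - not the actual Lisp-Actors code, and assuming SBCL for SB-EXT:COMPARE-AND-SWAP on a struct slot and for SB-CONCURRENCY mailboxes:

  (defstruct actor behavior)   ; one slot: the indirection pointer to a closure

  (defvar *events* (sb-concurrency:make-mailbox))   ; shared communal FIFO
  (defvar *sends* '())    ; SENDs staged during one behavior invocation
  (defvar *become* nil)   ; staged replacement behavior, or NIL

  (defun send (actor &rest msg) (push (cons actor msg) *sends*))
  (defun become (new-beh) (setf *become* new-beh))

  (defun dispatch-loop ()
    (loop
      (destructuring-bind (actor . msg)
          (sb-concurrency:receive-message *events*)
        (let ((beh (actor-behavior actor))
              (*sends* '())
              (*become* nil))
          (ignore-errors   ; an error means the staged effects are simply tossed
            (apply beh msg)
            (if (and *become*
                     (not (eq beh (sb-ext:compare-and-swap
                                   (actor-behavior actor) beh *become*))))
                ;; Lost the CAS race: silently retry against the new behavior.
                (sb-concurrency:send-message *events* (cons actor msg))
                ;; Clean exit: commit the staged SENDs.
                (dolist (s (nreverse *sends*))
                  (sb-concurrency:send-message *events* s))))))))

Several dispatch-loop threads can run this against the same mailbox; the CAS on the one behavior slot is the only point of contention.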
Yeah, that's exactly what Sento Actors (https://github.com/mdbergmann/cl-gserver/) are also about. Additionally, one may notice that Sento has a nice async API called 'Tasks' that's designed after the Elixir example (https://mdbergmann.github.io/cl-gserver/index.html#SENTO.TASKS:@TASKS%20MGL-...).

On another note, Sento uses locking with Bordeaux threads (for the message box) rather than CAS, because the CAS implementations I tried (https://github.com/cosmos72/stmx and a CAS-based mailbox implementation in SBCL) were not satisfactory. The SBCL CAS mailbox was extremely fast but had high idle CPU usage, so I dropped it.

Cheers
Interesting about SBCL CAS. I do not use CAS directly in my mailboxes, but rely on Posix for them - both LW and SBCL. CAS is used only for mutation of the indirection pointer inside the 1-slot Actor structs.

Some implementations allow only one thread inside an Actor behavior at a time. I have no such restrictions in my implementations, so that I gain true parallel concurrency on multi-core architectures. Parallelism is automatic and lock-free, but requires careful purely functional coding.

Mailboxes in my system are of indefinite length. Placing restrictions on the allowable length of a mailbox queue means that you cannot offer Transactional behavior. But in practice, I rarely see more than 4 threads running at once. I use a Dispatch Pool of 8 threads against my 8 CPU cores. Of course you could make a fork-bomb that exhausts system resources.
Hi,

Yes, mailboxes get you a long way. However, some nuances got a bit lost in this thread (and I apologise that I contributed to this).

Something that is very relevant to understand in the Go context: Go channels are not based on pthreads, but are based around Go's own tasking model (which is of course in turn based on pthreads, but that's not that relevant). Go's tasking model is an alternative to previous async programming models, where async code and sync code had to be written in different programming styles - that made such code very difficult to write, read and refactor. (I believe https://journal.stuffwithstuff.com/2015/02/01/what-color-is-your-function/ is the text that made that insight popular.)

In Go, async code looks exactly the same as sync code, and you don't even have to think about that distinction anymore. This is achieved by ensuring that all potentially blocking operations are effectively not blocking, but instead play nicely with the work-stealing scheduler that handles Go's tasking model. So, for example, if a task tries to take a lock on a mutex, and that is currently not possible, the task gets swapped out and replaced by a different task that can continue its execution. This integration exists for all kinds of potentially blocking operations, including channels.

With pthreads, a lock / mailbox / etc. that blocks can have the corresponding pthread replaced by another one, but that is much more expensive. Go's tasks are handled completely in user space, not in kernel space. (And work stealing gives a number of very beneficial guarantees as well.)

This nuance may or may not matter in your application, but it's worth pointing out nonetheless.

It would be really nice if Common Lisp had this as well, in place of a pthreads-based model, because it would solve a lot of issues in a very elegant way...

Pascal
Ahem… I believe that this "Color of your Code" issue is what drove the invention of Async/Await. Frankly, I find that formalism horrid, as it divorces the actual runtime behavior from the code, which itself is written in a linear style.

My own solution is to fall back to the simplest possible Async model, which is Conventional Hewitt Actors, and a single shared communal event FIFO queue to hold messages. But that does indeed offer a different color from our more typical Call/Return architecture.

My solution for the conundrum has been that you want to use Call/Return where it shines - the innards of math libraries, for example - and then use Async coding to thread together LEGO-block subsystems that need coordination, e.g., CAPI GUI code with computation snippets.

Maybe I incorrectly find Async/Await a disgusting pretense?

- DM
… as for blocking/non-blocking code…

How do you distinguish, as a caller, blocking from long-running computation? And what to do about it anyway?

Even if our compilers were smart enough to detect possible blocking behavior in a called function, that still leaves my networking code prone to errors resulting from non-blocking, but long-running, subroutines.
And the working solution that I have found for the blocking/non-blocking issue is to have multiple cores and multiple Dispatch threads available.

This is not a provably working solution in all cases - it has the same kinds of defects as unlimited-length FIFO message queues in the face of Transactional Actor behavior. You can easily see that both solutions have potentially unbounded worst-case behavior.

But in both cases the practical day-to-day depth is small, not the unbounded worst case.
… I think even our notion of Stack-based Call/Return architecture suffers the same kinds of indeterminacy issues as just discussed. There is a finite limit to stack depth during execution, but we normally don't run into it. Just as I don't normally run into blocking idle states due to running out of available threads/cores, or fill up all of memory with unlimited FIFO depth.
On Dec 28, 2025, at 05:57, David McClain <dbm@refined-audiometrics.com> wrote:
And the working solution that I have found for the blocking/non-blocking issue is to have multiple cores and multiple Dispatch threads available.
This is not a provable working solution in all cases - it has the same kinds of defects as unlimited-length FIFO message queues in the face of Transactional Actor behavior. You can easily see that these solutions both have potentially unbounded worst case behavior.
But in both cases the practical day-to-day depth is small, and not unbounded as the worst case.
On Dec 28, 2025, at 05:51, David McClain <dbm@refined-audiometrics.com> wrote:
… as for blocking/non-blocking code…
How do you distinguish, as a caller, blocking from long-running computation? And what to do about it anyway?
Even if our compilers were smart enough to detect possible blocking behavior in a called function, that still leaves my networking code prone to errors resulting from non-blocking, but long-running subroutines.
On Dec 28, 2025, at 05:46, David McClain <dbm@refined-audiometrics.com> wrote:
Ahem…
I believe that this “Color of your Code” issue is what drove the invention of Async/Await.
Frankly, I find that formalism horrid, as it divorces the actual runtime behavior from the code, which itself is written in a linear style.
My own solution is to fall back to the simplest possible Async model which is Conventional Hewitt Actors, and a single shared communal event FIFO queue to hold messages.
But that does indeed offer a different color from our more typical Call/Return architecture.
My solution for the conundrum has been that you want to use Call/Return where it shines - the innards of math libraries for example, and then use Async coding to thread together Leggo Block subsystems that need coordination, e.g., CAPI GUI code with computation snippets.
Maybe I incorrectly find Async/Await a disgusting pretense?
- DM
On Dec 28, 2025, at 05:17, pc@p-cos.net wrote:
Hi,
Yes, mailboxes get you a long way. However, some nuances got a bit lost in this thread (and I apologise that I contributed to this).
Something that is very relevant to understand in the Go context: Go channels are not based on pthreads, but they are based around Go’s own tasking model (which of course are in turn based on pthreads, but’s not that relevant). Go’s tasking model is an alternative to previous async programming models, where async code and sync code had to be written in different programming styles - that made such code very difficult to write, read and refactor. (I believe https://journal.stuffwithstuff.com/2015/02/01/what-color-is-your-function/ is the text that made that insight popular.)
In Go, async code looks exactly the same as sync code, and you don’t even have to think about that distinction anymore. This is achieved by ensuring that all potentially blocking operations are effectively not blocking, but instead play nicely with the work-stealing scheduler that handles Go’s tasking model. So, for example, if a task tries to take a lock on a mutex, and that is currently not possible, the task gets swapped out and replaced by a different task that can continue its execution. This integration exists for all kinds of potentially blocking operations, including channels.
With pthreads, a lock / mailbox / etc. that blocks can have the corresponding pthread replaced by another one, but that is much more expensive. Go’s tasks are handled completely in user space, not in kernel space. (And work stealing gives a number of very beneficial guarantees as well.)
This nuance may or may not matter in your application, but it’s worth pointing out nonetheless.
It would be really nice if Common Lisp had this as well, in place of a pthreads-based model, because it would solve a lot of issues in a very elegant way...
Pascal
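To connect this back to the original question: below is a minimal, portable sketch of a blocking channel on bordeaux-threads (the names and API are mine, not any library's). Pascal's caveat applies in full force: a blocked CHAN-RECV parks an entire OS thread, which is exactly the cost Go's user-space tasks avoid. Also note CHAN-SEND here never blocks (the queue is unbounded); a true rendezvous channel would need a second condition variable.

(defstruct (chan (:constructor make-chan ()))
  (items '())
  (lock (bt:make-lock))
  (waitq (bt:make-condition-variable)))

(defun chan-send (chan item)
  ;; Append under the lock and wake one waiting receiver.
  (bt:with-lock-held ((chan-lock chan))
    (setf (chan-items chan) (nconc (chan-items chan) (list item)))
    (bt:condition-notify (chan-waitq chan))))

(defun chan-recv (chan)
  ;; Block the calling OS thread until an item is available.
  (bt:with-lock-held ((chan-lock chan))
    (loop while (null (chan-items chan))
          do (bt:condition-wait (chan-waitq chan) (chan-lock chan)))
    (pop (chan-items chan))))

;; (defvar *ch* (make-chan))
;; (bt:make-thread (lambda () (chan-send *ch* :hello)))
;; (chan-recv *ch*)   ; => :HELLO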
On 27 Dec 2025, at 18:45, David McClain <dbm@refined-audiometrics.com> wrote:
Interesting about SBCL CAS.
I do not use CAS directly in my mailboxes, but rely on POSIX primitives for them - on both LW and SBCL.
CAS is used only for mutation of the indirection pointer inside the 1-slot Actor structs.
Some implementations allow only one thread inside an Actor behavior at a time. I have no restrictions in my implementations, so that I gain true parallel concurrency on multi-core architectures. Parallelism is automatic, and lock-free, but requires careful purely functional coding.
Mailboxes in my system are of indefinite length. Placing restrictions on the allowable length of a mailbox queue means that you cannot offer Transactional behavior. But in practice, I rarely see more than 4 threads running at once. I use a Dispatch Pool of 8 threads against my 8 CPU Cores. Of course you could make a Fork-Bomb that exhausts system resources.
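A hedged sketch of that CAS commit step, using SBCL's SB-EXT:COMPARE-AND-SWAP on a structure slot. The names and the behavior protocol - a behavior returns its replacement closure, or NIL when no BECOME was requested - are my assumptions, not DM's actual code, and staged SENDs are omitted.

(defstruct actor
  beh)   ; the 1-slot indirection pointer to the behavior closure

(defun dispatch (actor message)
  ;; Run the behavior against the current closure.  If it requested a
  ;; BECOME, commit it via CAS; on a losing CAS, re-deliver MESSAGE to
  ;; the (possibly now changed) behavior and try again.
  ;; (Assumed protocol: behavior returns its replacement, or NIL.)
  (loop
    (let* ((old-beh (actor-beh actor))
           (new-beh (funcall old-beh message)))
      (when (or (null new-beh)   ; no BECOME requested
                (eq old-beh
                    (sb-ext:compare-and-swap (actor-beh actor)
                                             old-beh new-beh)))
        (return)))))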
On Dec 27, 2025, at 10:18, Manfred Bergmann <manfred.bergmann@me.com> wrote:
On 27.12.2025, at 18:00, David McClain <dbm@refined-audiometrics.com> wrote:
I've reached the conclusion that if you have first-class functions and the ability to create FIFO queue classes, you have everything you need. You don't need Go channels, or operating system threads, etc. Those are just inefficient, Greenspunian implementations of a simpler idea. In fact, you can draw diagrams of Software LEGO parts, as mentioned by dbm, just with draw.io and OhmJS and a fairly flexible PL. [I'd be happy to elaborate further, but wonder if this would be appropriate on this mailing list]
This is essentially what the Transactional Hewitt Actors really are. We use “Dispatch” threads to extract messages (function args and function address) from a community mailbox queue. The Dispatchers use a CAS protocol among themselves to effect staged BECOME and message SENDS, with automatic retry on losing CAS.
Messages and BECOME are staged for commit at successful exit of the functions, or simply tossed if the function errors out - making an unsuccessful call into an effective non-delivery of a message.
Message originators are generally unknown to the Actors, unless you use a convention of providing a continuation Actor back to the sender, embedded in the messages.
An Actor is nothing more than an indirection pointer to a functional closure - the closure contains code and local state data. The indirection allows BECOME to mutate the behavior of an Actor without altering its identity to the outside world.
But it all comes down to FIFO Queues and Functional Closures. The Dispatchers and Transactional behavior is simply an organizing principle.
Yeah, that’s exactly what Sento Actors (https://github.com/mdbergmann/cl-gserver/) are also about. Additionally, one may notice that Sento has a nice async API called ’Tasks’, designed after the Elixir example (https://mdbergmann.github.io/cl-gserver/index.html#SENTO.TASKS:@TASKS%20MGL-...). On another note, Sento uses locking with Bordeaux threads (for the message box) rather than CAS, because the CAS implementations I tried (https://github.com/cosmos72/stmx and a CAS-based mailbox implementation in SBCL) were not satisfactory: the SBCL CAS mailbox was extremely fast but had high idle CPU usage, so I dropped it.
Cheers
How do you distinguish, as a caller, blocking from long-running computation? And what to do about it anyway?
All file descriptors and syscalls in Go code are guaranteed to be used in a non-blocking fashion, and call sites are instrumented to yield to the scheduler in case the kernel returns EWOULDBLOCK. That doesn't hold if the code calls into foreign libraries which call into the kernel, but that's why Go developers have been trying as much as possible to rewrite everything in Go. With enough manpower, that's feasible. -- Stelian Ionescu
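In portable CL with usocket - one of the libraries mentioned at the top of the thread - the closest analogue is explicit readiness polling rather than the automatic yield Go performs at every call site. A hedged sketch (the echo logic and the function name are mine; I believe WAIT-FOR-INPUT exists with roughly this signature, but check the usocket documentation):

;; Service only connections that are actually readable, so no READ-LINE
;; ever blocks the loop - a hand-rolled stand-in for what Go's scheduler
;; does automatically.  CONNECTIONS is a list of connected usockets.
(defun serve-ready (connections)
  (let ((ready (usocket:wait-for-input connections
                                       :timeout 1 :ready-only t)))
    (dolist (sock ready)
      (let* ((stream (usocket:socket-stream sock))
             (line   (read-line stream nil nil)))
        (when line
          (format stream "echo: ~a~%" line)
          (force-output stream))))))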
That is interesting, Pascal. Do you maybe have a pointer to a simple description of the GO threading model? (Asking is much faster and less time-consuming than searching or, given the Zeitgeist, trusting an AI.)
All the best and Happy New Year
MA
https://medium.com/@rahulreza920/the-go-runtimes-secret-weapon-a-deep-dive-i...
Another good resource is the design document for the current Go scheduler. See https://docs.google.com/document/d/1TTj4T2JO42uD5ID9e89oa0sLKhJYD0Y_kqxDv3I3...
Pascal
But perhaps you meant mainly to suggest FSET as a method for avoiding shared mutable data? There should never be open mutation, even of the closure data. The safety of fully parallel concurrency can only be guaranteed by performing all mutations through BECOME, which relies on a CAS operation in the dispatch threads.
I suppose you could get around that restriction with FSET. But then so could you with LOCKS and SEMAPHORES and all the other SMP primitives.
Right now, as long as you abide by functional coding and BECOME as the only mutator, we can completely dispense with LOCKS, THREADS, SEMAPHORES, MAILBOXES. None of that is necessary at the user-programmer level. You code as though you are the sole occupant of the machine. And so long as you abide by clean conventions, fully parallel concurrency is yours.
The same code runs as well in a single thread as on multiple cores and threads. Speed of execution is the only variable.
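As a concrete illustration of state living only in closure variables, with BECOME as the sole mutator - a single-threaded sketch of my own, not DM's code:

(defstruct actor beh)

(defun counter-beh (count)
  ;; A pure behavior: COUNT lives only in the closure; handling :INC
  ;; "mutates" by becoming a fresh closure over (1+ COUNT).
  (lambda (self msg)
    (case msg
      (:inc  (setf (actor-beh self) (counter-beh (1+ count)))) ; BECOME
      (:show (format t "count = ~a~%" count)))))

(defun send (actor msg)
  ;; A direct call stands in for queueing + dispatch here.
  (funcall (actor-beh actor) actor msg))

;; (defvar *c* (make-actor :beh (counter-beh 0)))
;; (send *c* :inc) (send *c* :inc) (send *c* :show)   ; count = 2

In the transactional version the behavior would return the replacement closure instead of SETF-ing it, and the dispatcher would commit it by CAS, as in the earlier dispatch sketch.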
On Dec 22, 2025, at 12:11, Scott L. Burson <Scott@sympoiesis.com> wrote:
On Mon, Dec 22, 2025, 8:53 AM David McClain <dbm@refined-audiometrics.com> wrote:
This is very interesting! But I think it needs FSet (https://github.com/slburson/fset) to store the state of an actor, and make it easy to efficiently compute a new state which can be supplied to BECOME.
-- Scott
This is a fairly narrow mode of operation, where a task can functionally produce an output that gets committed with a single atomic operation. Most often, (green or SMP) threads are used when an operation must naturally interleave computation with mutation of shared structures and I/O. -- Stelian Ionescu
On 22.12.2025, at 17:52, David McClain <dbm@refined-audiometrics.com> wrote:
Any suggestions?
Transactional Conventional Hewitt Actors (staged SEND, BECOME, CREATE until successful exit)
Or maybe Sento (https://github.com/mdbergmann/cl-gserver).
Although Chanl or lparallel are probably the most Go idiomatic solutions.
There is a quite old library called chanl that has similar semantics. It worked well for me.
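From memory, chanl usage looks roughly like the following - a hedged sketch, so check the chanl README for the exact class names and operators before relying on it:

(ql:quickload :chanl)

;; An unbuffered channel: SEND blocks until a RECV takes the value,
;; giving the rendezvous semantics of an unbuffered Go chan.
(defvar *ch* (make-instance 'chanl:channel))

;; PEXEC spawns the sender as another task/thread.
(chanl:pexec ()
  (chanl:send *ch* "hello over the channel"))

(print (chanl:recv *ch*))   ; => "hello over the channel"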
participants (10)
- Charlotte Swank
- David McClain
- Manfred Bergmann
- Marco Antoniotti
- Pascal Costanza
- Paul Tarvydas
- pc@p-cos.net
- Scott L. Burson
- Stelian Ionescu
- Svante v. Erichsen