[resending to bknr-devel]
Hi Klaus,
2008/2/22, Hans Hübner hans@huebner.org:
2008/2/22, Klaus Unger UngerKlaus@gmx.de:
- It is hard to map complex generic functions to transactions
I have given this more thought, and it appears that it would be possible to replace the existing macro-based transaction execution environment with generic functions using a custom method combination. This certainly would make the BKNR code much prettier and simpler.
The question is: what would it buy the user? Certainly, it would make it possible to dispatch on argument types to run different code for different types. I am not sure that this would be so very useful, though. Transactions are not your average function: you are not going to invoke a large number of transactions one after the other to fulfill a user request or command. Rather, one user command typically results in one transaction being executed, which in turn triggers a number of function calls from within the transaction body to accomplish whatever is needed. Thus, transactions are generally more coarse-grained than functions.
Now, one could say that every function is potentially a transaction. The transaction monitor would then only log the top-level transaction invocation and process invocations of other transactions as standard function calls. This is what we currently do, albeit in a rather awkward and ugly form.
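Just as a generic illustration of that idea (this is not the actual BKNR code, and the names are made up): only the outermost transaction invocation gets written to the log, while nested invocations run as plain function calls.

(defvar *in-transaction-p* nil)

(defun log-transaction (fn)
  ;; Stand-in for writing to the real transaction log.
  (format t "~&logging transaction ~A~%" fn))

(defun call-as-transaction (fn)
  (if *in-transaction-p*
      (funcall fn)                   ; nested: just an ordinary function call
      (let ((*in-transaction-p* t))  ; top level: log it, then run it
        (log-transaction fn)
        (funcall fn))))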
Still, I do not think that all functions should be transactions. Rather, transactions define the mutation points of an application, which makes them special and requires that they are well defined.
Now, this all does not mean that the transaction interface is perfect. There are two additions that I would consider worthwhile. One would be the possibility to specify action code to run within the global lock, but before actually mutating the persistent state of the system. This would be useful to check preconditions in a safe manner on preemptive multitasking systems. The other would be the possibility to declare an undo action which would help the application to roll back the persistent state to an earlier point in time.
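To make those two additions a bit more concrete, here is a rough standalone sketch of what I mean (this is not BKNR code; the macro and variable names are made up, and I am using bordeaux-threads for the lock):

;; Sketch only: check a precondition inside the global lock before mutating,
;; and remember an undo thunk that could later roll the change back.
(defvar *global-lock* (bordeaux-threads:make-lock "store-lock"))
(defvar *undo-actions* '())

(defmacro with-guarded-mutation ((&key (precondition t) undo) &body body)
  `(bordeaux-threads:with-lock-held (*global-lock*)
     (unless ,precondition
       (error "transaction precondition failed"))
     (push (lambda () ,undo) *undo-actions*)
     ,@body))

;; Hypothetical use (BALANCE and AMOUNT are just placeholders):
;; (with-guarded-mutation (:precondition (>= balance amount)
;;                         :undo (incf balance amount))
;;   (decf balance amount))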
Implementing transactions as generic functions would give us argument dispatch for free, which would certainly be nice. I managed to get away without being able to dispatch on transaction argument types or values so far, but that does not mean so much.
Before going ahead with any of this, I would like to know what made you want generic functions as transactions. It could be that I'm missing something.
Your comments are greatly appreciated. Thanks.
-Hans
Hi Hans,
I am currently doing all non-I/O work of a user command in a single generic function. This function is protected by a single global lock via an :around method. So I am using objects for application modules that handle user commands. OOP allows me to abstract common aspects of the modules.
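Roughly, the setup looks like this (the names here are made up for illustration, and I'm using bordeaux-threads for the lock):

(defvar *command-lock* (bordeaux-threads:make-lock "command-lock"))

(defgeneric handle-command (module command)
  (:documentation "Do all non-I/O work of one user command."))

(defmethod handle-command :around (module command)
  ;; Serialize every command through the single global lock.
  (bordeaux-threads:with-lock-held (*command-lock*)
    (call-next-method)))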
I don't see the advantage of using a big transaction rather than a big lock. The big-lock approach is very easy to grasp, but it has two performance drawbacks:
- It doesn't scale to multiple cores.
- When some commands take a long time, short commands suffer from high latency.
TBH I don't think these affect me much in practice!
I think fine-grained locks are used to solve these performance problems; they allow parallel execution, but they introduce nasty problems like deadlocks.
Transactions are then introduced to lift the burden of thinking about the problems of fine-grained locks and instead let you just define an "atomic" sequence of code. (In my opinion most database transactions still require too much thinking, for example about whether to use "FOR UPDATE".)
Originally I wanted to define the transactions at a more fine-grained level to avoid the performance problems. Due to the transaction implementation, it would only have bought me better latency, not scalability. At this level I would have needed generic transactions at the OOP "business level".
I have been thinking about transactions in the sense of software transactional memory (STM). They would give a performance benefit, even for coarse-grained transactions. There are a lot of recent papers about efficient algorithms for STM, but I have yet to see an implementation in CL. While it would be feasible for objects with the AMOP, it seems very hard to track access to conses.
Thanks for the detailed responses! - Klaus
P.S. I also think that databases are nowadays clumsy for most applications. I have had bad experiences with OO -> DB mappings, especially JPA.
Hi Klaus,
Well, so what you seem to be saying is that your application is architected around a generic function that serializes mutations, and that the fact that BKNR transactions do not support generic functions prevented you from just using them rather than implementing your own lock. Fair enough. I wonder why you can't use multiple generic functions instead of one, but that is beyond what I can (and want to) understand right now.
I am not a big fan of fine-grained locking, but that is apparent from the store architecture with its giant lock, too. I am not bothered by the scalability limitations, as I have not yet found them to be practically relevant - my user base has been growing more slowly than processor speeds have increased, and even with Pentium III CPUs we had more than enough headroom for our applications.
STM is interesting, but real transactional memory interests me more, I must admit. There is an implementation of CLOS STM (http://common-lisp.net/project/cl-stm/), but the lack of support for non-CLOS data types is kind of a show-stopper for me. Maybe someone will hack STM into one of the CL compilers, but I'm not really prepared to do that.
Anyway - If you need further support with BKNR indices, let us know.
-Hans
Hi Hans,
Well, so what you seem to be saying is that your application is architected around a generic function that serializes mutations, and that the fact that BKNR transactions do not support generic functions prevented you from just using them rather than implementing your own lock. Fair enough. I wonder why you can't use multiple generic functions instead of one, but that is beyond what I can (and want to) understand right now.
The generic function issue is just an additional "cost" of using transactions. The main point is that I see no benefit from transactions. I'm sorry I didn't make that clear earlier. (I don't want to claim there is none; maybe I just don't see it, or it is specific to my scenario.)
STM is interesting, but real transactional memory interests me more, I must admit. There is an implementation of CLOS STM (http://common-lisp.net/project/cl-stm/), but the lack of support for non-CLOS data types is kind of a show-stopper for me. Maybe someone will hack STM into one of the CL compilers, but I'm not really prepared to do that.
Thanks for the hint, though I have to agree that non-CLOS support is essential.
Anyway - If you need further support with BKNR indices, let us know.
Is it somehow possible with the datastore to run different stores in one lisp process? For example to run multiple instances of the same web-application on the same port on the same machine.
Besides the problems mentioned, I have to add that I am really happy with the concept of indices and the relief from traditional DBMSs and ugly DB -> OO mappings misleadingly claiming to be transparent OO -> DB mappings. I gladly pay the scalability price for that, and I'm looking forward to gaining more experience with that paradigm. I'll also have a closer look at the web framework as well; it looks very promising! Assuming no immediate projects/deadlines, do you think it is a good idea to wait until the summer release?
Thanks for your patience (:
- Klaus
2008/2/23, Klaus Unger UngerKlaus@gmx.de:
Is it somehow possible with the datastore to run different stores in one lisp process? For example to run multiple instances of the same web-application on the same port on the same machine.
Currently not. We have removed multi-store support at one point because we felt that it was not useful enough to warrant cluttering the API with STORE arguments. I think that it is usually better to put separate applications into distinct Lisp processes, as that approach provides better isolation.
If you need multiple stores in one application, doing so would involve rebinding the store special variables in a WITH-STORE macro. I can add it if you need it - please let me know. Note that operations involving two stores would not be supported by that, and I'd really question whether that would make sense.
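Very roughly, something like this is what I have in mind (just a sketch - no such macro exists in the datastore today, and the real version would have to rebind all of the store's special variables, not just *STORE*):

;; Sketch only: rebinds *STORE* around BODY; anything the system keeps
;; outside of *STORE* is not covered by this.
(defmacro with-store ((store) &body body)
  `(let ((*store* ,store))
     ,@body))

;; Hypothetical use, with *OTHER-STORE* holding a second store instance:
;; (with-store (*other-store*)
;;   (all-store-objects))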
I'll also have a closer look at the web framework as well; it looks very promising! Assuming no immediate projects/deadlines, do you think it is a good idea to wait until the summer release?
Presently, there is little documentation, and we hope to be able to change that situation for the release. Several things are not working properly at the moment, so I can't really recommend trying to use it. We expect to have the functionality complete in April.
-Hans
(resend to the list)
Hi Hans,
On Saturday, 23 February 2008 at 08:31:17, you wrote:
2008/2/23, Klaus Unger UngerKlaus@gmx.de:
Is it somehow possible with the datastore to run different stores in one lisp process? For example to run multiple instances of the same web-application on the same port on the same machine.
Currently not. We have removed multi-store support at one point because we felt that it was not useful enough to warrant cluttering the API with STORE arguments. I think that it is usually better to put separate applications into distinct Lisp processes, as that approach provides better isolation.
I agree that store arguments would be a bad thing; I think that's what special variables are good for.
If you need multiple stores in one application, doing so would involve rebinding the store special variables in a WITH-STORE macro. I can add it if you need it - please let me know. Note that operations involving two stores would not be supported by that, and I'd really question whether that would make sense.
It would indeed be handy for me. Could you give me a sketch of how that would work? From my understanding, *store* only contains some meta information (counter, path, lock, ...), while the actual objects are stored in the indices, which live in class slots that are not affected by the special variable.
- Klaus
Here is my try with the special variable:
BKNR.DATASTORE> (make-instance 'mp-store :directory "/tmp/store1/" :subsystems (list (make-instance 'store-object-subsystem)))
reading store random state
restoring #<MP-STORE DIR: "/tmp/store1/">
loading transaction log /tmp/store1/current/transaction-log
#<MP-STORE DIR: "/tmp/store1/">
BKNR.DATASTORE> (all-store-objects)
(#<STORE-OBJECT ID: 0>)
BKNR.DATASTORE> (let ((*store* Nil)) (make-instance 'mp-store :directory "/tmp/store2/" :subsystems (list (make-instance 'store-object-subsystem))) (all-store-objects) (close-store))
reading store random state
restoring #<MP-STORE DIR: "/tmp/store2/">
NIL
BKNR.DATASTORE> *store*
#<MP-STORE DIR: "/tmp/store1/"> ;; Back in the old store
BKNR.DATASTORE> (all-store-objects)
NIL ;; But the object is gone :(