Hi Hoan,
I don't really plan to make the BKNR Datastore compete with AllegroStore, AllegroCache, cl-versant, or any of the O/R mapping database products. There may be some overlap between such a project and the BKNR Datastore, but I don't expect it to be large. The requirements for a disk-based database are very different from those of an in-memory, transaction-based system. For example, with the BKNR Datastore you can make arbitrary in-memory data structures persistent by making all destructive operations be transactions. In a disk-based database system, the changes to the persistent data themselves are written to disk, which requires different mechanisms. In practice, you would probably restrict such a system to making CLOS objects persistent and hook into the slot access functions.
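To make the in-memory idea concrete, here is a toy sketch of the general pattern in plain Common Lisp. It is not the BKNR Datastore API; the names (execute-transaction-style deftransaction, *log-pathname*, *people*, replay-log) are made up for illustration, and the macro assumes required arguments only. Every destructive operation is appended to a log before it runs, and at startup the log is replayed to rebuild the in-memory state:

  ;; Toy prevalence sketch (illustrative names, not BKNR API).
  (defvar *log-pathname* #p"/tmp/example-transaction.log")
  (defvar *people* (make-hash-table :test #'equal)) ; arbitrary in-memory data
  (defvar *replaying-p* nil)                        ; suppress logging during replay

  (defun log-transaction (name args)
    (with-open-file (out *log-pathname* :direction :output
                                        :if-exists :append
                                        :if-does-not-exist :create)
      (print (cons name args) out)))

  (defmacro deftransaction (name lambda-list &body body)
    "Define a destructive operation that is logged before it runs."
    `(defun ,name ,lambda-list
       (unless *replaying-p*
         (log-transaction ',name (list ,@lambda-list)))
       ,@body))

  (deftransaction add-person (name age)
    (setf (gethash name *people*) age))

  (deftransaction delete-person (name)
    (remhash name *people*))

  (defun replay-log ()
    "Rebuild the in-memory state by re-executing the logged operations."
    (let ((*replaying-p* t))
      (with-open-file (in *log-pathname* :if-does-not-exist nil)
        (when in
          (loop for form = (read in nil nil)
                while form
                do (apply (first form) (rest form)))))))

The in-memory structures themselves can be anything; only the destructive entry points need to be funneled through such transactions. A real system would add periodic snapshots so the log can be truncated, which is also where the restore-time limit mentioned below comes from.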
I am not opposed to object database systems, but I want to see the BKNR Datastore developed towards better maintainability and reliability. Also, I fail to see what advantages a traditional, disk-based approach would offer that would outweigh its inherently lower performance.
What makes you think that the BKNR Datastore should move towards a disk-based system? Maybe I am overlooking something or failing to see where the current approach does not work.
I see the most severe limitation of the current approach in its size limit. A store, in practice, is limited to a few hundred megabytes of data. Beyond that, the restore times become too high, and global garbage collections can also become a problem, at least if the GC is very naive. In practice, a few hundred megabytes is a lot of data and will be enough even for very large applications. There are limits, but to me they are mostly theoretical.
Check out some design scribbles I wrote a while ago: http://common-lisp.net/project/bknr/templates/development-style.xml and http://common-lisp.net/project/bknr/templates/why-no-db.xml
Cheers, Hans