On Tue, Jul 21, 2009 at 12:45 PM, Ville Voutilainen <ville.voutilainen@gmail.com> wrote:
On Tue, Jul 21, 2009 at 1:07 PM, Erik Huelsmann <ehuels@gmail.com> wrote:
When running this code, it takes around 1440 seconds on my PC. Then I modified the stack management routine to re-use available stack frames (and conses) instead of creating new ones on every call. This took execution time down to around 1240 seconds. A good improvement, I'd say. However, with that change, stack frames will never become garbage. My question is: does the loss of GC-ability outweigh the performance gain?
How many frames will then hang around? If I make a deep call, and never do another deep call, will I then have frames lurking around from the deep call?
In the current implementation: yes. However, with this improvement there's probably some room for keeping a small statistic of some kind that would allow building a shorter list of retained stack frames (i.e., disposing of all of them, or just a few). Of course, that would undo some of the performance gain.
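To illustrate the kind of re-use I mean: it is essentially a per-thread free list, popping a frame on call entry when one is available and pushing it back on return. Here is a rough sketch in Java (Frame and FramePool are just placeholder names to show the idea, not the actual ABCL classes):

import java.util.ArrayDeque;

// Placeholder frame: just enough state to illustrate the idea.
final class Frame {
    Object operator;   // the function being called
    Object[] args;     // its arguments
    Frame next;        // link to the caller's frame
}

// One pool per thread, so no locking is needed.
final class FramePool {
    private final ArrayDeque<Frame> free = new ArrayDeque<Frame>();

    Frame acquire(Object operator, Object[] args, Frame caller) {
        // Reuse a pooled frame when possible; allocate only when the pool is empty.
        Frame f = free.pollFirst();
        if (f == null) f = new Frame();
        f.operator = operator;
        f.args = args;
        f.next = caller;
        return f;
    }

    void release(Frame f) {
        // Clear references so a pooled frame doesn't keep arguments alive...
        f.operator = null;
        f.args = null;
        f.next = null;
        // ...but the Frame objects themselves never become garbage.
        free.addFirst(f);
    }
}

The pool only ever grows, which is exactly why a deep call leaves its frames lurking around afterwards.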
Do you have ideas? I thought of using a weak reference, meaning the GC will collect the memory if it needs it, but other than that, I certainly think setting an upper bound won't fix the "keep hanging around" issue.
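On the JVM, a SoftReference is probably closest to "collect the memory if it needs it": a strictly weak reference would typically be cleared at every collection, while a soft one is cleared only under memory pressure. Wrapping the whole pool in one could look roughly like this (again placeholder names, building on the Frame sketch above):

import java.lang.ref.SoftReference;
import java.util.ArrayDeque;

final class ReclaimableFramePool {
    // The pool itself is only softly reachable, so the GC may discard it
    // when memory runs low; we then simply start over with an empty pool.
    private SoftReference<ArrayDeque<Frame>> poolRef =
            new SoftReference<ArrayDeque<Frame>>(new ArrayDeque<Frame>());

    private ArrayDeque<Frame> pool() {
        ArrayDeque<Frame> pool = poolRef.get();
        if (pool == null) {
            pool = new ArrayDeque<Frame>();
            poolRef = new SoftReference<ArrayDeque<Frame>>(pool);
        }
        return pool;
    }

    Frame acquire(Object operator, Object[] args, Frame caller) {
        Frame f = pool().pollFirst();
        if (f == null) f = new Frame();
        f.operator = operator;
        f.args = args;
        f.next = caller;
        return f;
    }

    void release(Frame f) {
        f.operator = null;
        f.args = null;
        f.next = null;
        pool().addFirst(f);
    }
}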
Oh, there is actually one case where they do get cleaned up, even in my current implementation: when the thread gets destroyed. For separate task threads that means relatively soon; for the main thread it effectively means "never".
Bye,
Erik.