I replied to your other email before I saw this one.
On Wed, 4 Oct 2006 15:50:59 -0400, "Graham Fawcett" graham.fawcett@gmail.com wrote:
Maybe it's a bit early to be asking about hunchentoot, sbcl and performance... but heck, let me ask anyway. :-)
Trying the hunchentoot beta on SBCL 0.9.17, I ran tbnl-test, and uploaded a ~0.75MB binary file. (Which worked!) Testing with 'ab', I downloaded the file -- but I wasn't able to get a download faster than 359KB/sec, or 2 seconds per download, on a fast network. I also observed >95% CPU activity during the download (about 50/50 between user and system).
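(For reference, the benchmark invocation was something along these lines -- host, port, and path here are placeholders, not the actual tbnl-test URL:)

  # 100 sequential downloads of the uploaded file via ApacheBench
  ab -n 100 -c 1 http://localhost:4242/test/upload.bin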
Profiling the :TBNL package, and downloading the file again, showed this:
seconds  |      consed  |  calls  |  sec/call  |  name
  1.740  |  35,748,752  |      1  |  1.739901  |  HANDLE-STATIC-FILE
  0.004  |       8,184  |      1  |  0.003998  |  TBNL::READ-HTTP-HEADERS
  0.004  |      24,912  |      6  |  0.000665  |  TBNL::WRITE-HEADER-LINE/HTTP
and profiling :FLEX instead, then repeating the download, showed:
seconds  |       consed  |      calls  |  sec/call  |  name
  0.009  |       81,920  |        352  |  0.000027  |  FLEXI-STREAMS::READ-BYTE*
  0.000  |  106,051,824  |  1,488,124  |  0.000000  |  FLEXI-STREAMS::WRITE-BYTE*
The consing numbers seem pretty high. :-) I'm not sure how to interpret the 0.0 seconds for the WRITE-BYTE* calls -- a hundred million conses might be fast, but not instantaneous! Notably, the reported "profiling overhead" was ~15 seconds, so perhaps this is just profiling error. (I'm not a SLIME-profiling expert -- advice is welcome.)
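(In case it matters for interpreting the numbers: the profiling was driven roughly like this from the REPL, using SBCL's own deterministic profiler -- which, as far as I know, is what SLIME wraps on SBCL anyway:)

  ;; Instrument every function in the package, exercise the code,
  ;; then print the report.  SB-PROFILE adds per-call bookkeeping,
  ;; so its overhead can dwarf the real cost of very cheap functions
  ;; that are called a million times, like WRITE-BYTE*.
  (sb-profile:profile "FLEXI-STREAMS")
  ;; ... perform the download here ...
  (sb-profile:report)
  (sb-profile:unprofile "FLEXI-STREAMS")

(SBCL's statistical profiler, SB-SPROF, avoids that per-call overhead and might give more trustworthy timings here.)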
Is this degree of performance similar to what you see under LispWorks? I'm not throwing stones at beta code, just trying to interpret what I'm seeing here.
I haven't really checked the performance yet - I usually try to make it work correctly first. If you want to make the whole thing faster, I think starting with FLEXI-STREAMS is a good idea. Throw a couple of optimization declarations at SBCL and see what Python (SBCL's compiler) has to say about them. CHUNGA might be a worthwhile target as well.
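To illustrate the kind of declarations I mean (the function and its types below are made up for the example, not FLEXI-STREAMS internals):

  ;; With SPEED 3 and concrete type declarations, Python emits
  ;; efficiency notes wherever it must fall back to generic
  ;; arithmetic or full calls - those notes tell you what to fix.
  (defun copy-octets (in out buffer)
    (declare (optimize (speed 3) (safety 0) (debug 0))
             (type (simple-array (unsigned-byte 8) (*)) buffer))
    (loop for n of-type fixnum = (read-sequence buffer in)
          until (zerop n)
          do (write-sequence buffer out :end n)))

Compile the file and read the notes before reaching for anything fancier.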