John,
what you are seeing is probably related to the fact that the default multi-threaded taskmaster of Hunchentoot is not clever about creating threads: it creates a new thread for every incoming connection, so it is easy to force it into situations where too many threads are created.
Personally, I am not using multi-threaded Hunchentoot. Instead, I run it single-threaded behind an HTTP caching proxy (Squid) that is configured to use If-Modified-Since headers to negotiate reloading content from the Hunchentoot backend. That strategy works well and copes with all load patterns that I have tried (and I have tried a few, using both ab and Tsung).
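A minimal sketch of what the backend side of that strategy can look like (the handler name, URI, and timestamp source here are invented for illustration; handle-if-modified-since and rfc-1123-date are the Hunchentoot helpers involved):

    ;; Sketch: let a caching proxy revalidate with If-Modified-Since.
    (hunchentoot:define-easy-handler (cached-page :uri "/cached") ()
      (let ((last-modified (file-write-date #p"/some/file")))  ; hypothetical timestamp source
        ;; Aborts with 304 Not Modified if the client's
        ;; If-Modified-Since header matches this timestamp.
        (hunchentoot:handle-if-modified-since last-modified)
        ;; Otherwise send the body plus a Last-Modified header
        ;; so the proxy can revalidate next time.
        (setf (hunchentoot:header-out :last-modified)
              (hunchentoot:rfc-1123-date last-modified))
        "page body"))

With that in place, the proxy absorbs repeat traffic and the single-threaded backend only sees revalidation requests and cache misses.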
If you absolutely need threads, try the threading patch that Scott McKay submitted a few weeks ago. It prevents Hunchentoot from creating threads without bound and might solve the problem for you. We will incorporate that patch once it has been separated from the xcvb changes that won't go into the Hunchentoot mainline. Any reports on how it holds up under artificial load tests would be appreciated.
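For orientation, the shape of a bounded setup would be something like the following; note that the :max-thread-count initarg is how later Hunchentoot releases expose this idea, and is an assumption against the snapshot discussed here:

    ;; Sketch, assuming a Hunchentoot version whose taskmaster
    ;; accepts an upper bound on worker threads:
    (hunchentoot:start
     (make-instance 'hunchentoot:acceptor
                    :port 4242
                    :taskmaster (make-instance
                                 'hunchentoot:one-thread-per-connection-taskmaster
                                 :max-thread-count 100)))

Connections beyond the bound then wait (or are rejected) instead of each spawning a fresh thread.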
As for the SBCL crash: this should be reported to the SBCL folks. I have never had a lot of faith in SBCL's thread implementation, and it is also possible that the locking patterns inside of Hunchentoot trigger the misbehavior, but we would need detailed advice on how to fix that, if the fix is ours to make.
-Hans
On Thu, Aug 19, 2010 at 01:53, JTK jetmonk@gmail.com wrote:
Hello,
I'm finding that the speed of hunchentoot falls drastically with more than 1 simultaneous connection, at least for CCL on OS X, and more than a few connections fails outright.
I downloaded the latest svn version from http://bknr.net/html/
To reduce any complicating factors, I loaded just h'toot and ran CCL from the shell command line, not in SLIME.
I tried a simple page:
CL-USER> (hunchentoot:start (make-instance 'hunchentoot:acceptor :port 4242))
#<ACCEPTOR (host *, port 4242)>
CL-USER> (hunchentoot:define-easy-handler (say-yo :uri "/yo") ()
           (setf (hunchentoot:content-type*) "text/plain")
           "Yo! This. Is. Content")
SAY-YO
Then I tried the ab (apache bench) benchmark with c=1 (1 connection), and 500 iterations:
$ ab -n 500 -c 1 http://127.0.0.1:4242/yo
HTML transferred:       10500 bytes
Requests per second:    138.26 [#/sec] (mean)    <- ##### note speed ######
Time per request:       7.233 [ms] (mean)
Time per request:       7.233 [ms] (mean, across all concurrent requests)
Transfer rate:          22.82 [Kbytes/sec] received
Then I tried it with TWO concurrent connections:
$ ab -n 500 -c 2 http://127.0.0.1:4242/yo
HTML transferred:       10500 bytes
Requests per second:    29.10 [#/sec] (mean)     <- ##### note speed ######
Time per request:       68.722 [ms] (mean)
Time per request:       34.361 [ms] (mean, across all concurrent requests)
Transfer rate:          4.80 [Kbytes/sec] received
It's nearly 5x slower with just 2 simultaneous connections, going from 138 to 29 requests/sec.
10 connections gives the same speed as 2, and 30 fails with "connection reset by peer" (maybe my fault for not having enough open files available, but it should never need more than 30 file handles at once, no?).
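(In case it is the open-files limit: on Mac OS X the default per-process soft limit is often quite low, and ab with many concurrent sockets can exhaust it. A quick way to check and raise it for the current shell; 1024 is an arbitrary choice:)

```shell
# Show the current soft limit on open file descriptors.
ulimit -n

# Raise it for this shell session only (must not exceed the hard limit,
# shown by `ulimit -Hn`).
ulimit -n 1024
```

Running ab again from the same shell after raising the limit would tell you whether the resets were descriptor exhaustion or something inside Hunchentoot.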
I'm running Mac OS X 10.6 with CCL "Version 1.4-r13119 (DarwinX8664)".
Can anyone else reproduce this? Is it some threading problem?
John
tbnl-devel site list tbnl-devel@common-lisp.net http://common-lisp.net/mailman/listinfo/tbnl-devel