Hi.
I have written a small and very simple web service to which I want to upload files. With the current version 1.2.1 of hunchentoot I have problems uploading big files, say 1 GB. As far as I remember, big uploads worked fine with older versions (1.0 and 1.1).
Basically the following code is in use:
--8<---------------cut here---------------start------------->8---
(hunchentoot:define-easy-handler (handle-upload :uri "/path/to/upload-service") ()
  (let ((uploaded (when (and (boundp 'hunchentoot:*request*)
                             (hunchentoot:post-parameter "filename"))
                    (handle-file (hunchentoot:post-parameter "filename")))))
    (generate-html-code)))
--8<---------------cut here---------------end--------------->8---
And handle-file looks like this:
--8<---------------cut here---------------start------------->8---
(defun handle-file (post-parameter)
  (ht-log :info "Handling file upload with params: '~A'." post-parameter)
  (when (and post-parameter (listp post-parameter))
    (destructuring-bind (path filename content-type) post-parameter
      (declare (ignore content-type))
      ;; strip directory info sent by Windows browsers (IE sends the full path)
      (when (search "Windows" (hunchentoot:user-agent) :test #'char-equal)
        (setf filename (ppcre:regex-replace ".*\\\\" filename "")))
      (fad:copy-file path
                     (ensure-directories-exist
                      (merge-pathnames filename *unsecure-upload-dir*))
                     :overwrite t)
      filename)))
--8<---------------cut here---------------end--------------->8---
It seems that hunchentoot tries to read the whole stream into memory and that the heap is too small (the server has only 1GB RAM and the heap of the sbcl process is limited to about 600MB).
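For reference, here is a small SBCL-only sketch (not part of the service code) to double-check those numbers on the server: sb-ext:dynamic-space-size (available in reasonably recent SBCLs) reports the configured heap ceiling, and sb-ext:get-bytes-consed lets you snapshot allocation around an upload. The helper names are made up for the example.

--8<---------------cut here---------------start------------->8---
;; SBCL-specific checks, to be run in the server's REPL (sketch only):
(format t "configured heap: ~:D bytes~%" (sb-ext:dynamic-space-size))

;; Snapshot total allocation before an upload, report afterwards.  Note
;; that this counts all consing, including garbage that is collected
;; again, so it is only an indication of how much the upload allocates.
(defvar *bytes-consed-before* 0)

(defun snapshot-consing ()
  (setf *bytes-consed-before* (sb-ext:get-bytes-consed)))

(defun report-consing ()
  (format t "consed since snapshot: ~:D bytes~%"
          (- (sb-ext:get-bytes-consed) *bytes-consed-before*)))
--8<---------------cut here---------------end--------------->8---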
Is there any (easy) way to make hunchentoot read the data in small chunks, so that the maximum amount of memory used stays limited regardless of the file size?
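One possible approach (a rough sketch only, not tested against this setup): bypass the POST-parameter machinery and read the raw request body as a stream. hunchentoot:raw-post-data accepts :force-binary and :want-stream, and the returned binary stream can be copied to disk in fixed-size chunks, so memory use stays bounded. Note that this receives the raw multipart body, boundaries and part headers included, and does no RFC2388 parsing; the URI, output file name and buffer size below are placeholders, and *unsecure-upload-dir* is the same special variable as in handle-file above.

--8<---------------cut here---------------start------------->8---
;; Hypothetical handler: copies the raw request body to disk in 64 KB
;; chunks.  This dumps the multipart body as-is and is only meant to
;; illustrate chunked reading of the request stream.
(hunchentoot:define-easy-handler (handle-raw-upload :uri "/path/to/raw-upload") ()
  (let ((in (hunchentoot:raw-post-data :force-binary t :want-stream t))
        (buffer (make-array 65536 :element-type '(unsigned-byte 8))))
    (with-open-file (out (merge-pathnames "raw-upload.bin" *unsecure-upload-dir*)
                         :direction :output
                         :element-type '(unsigned-byte 8)
                         :if-exists :supersede)
      (loop for bytes = (read-sequence buffer in)
            while (plusp bytes)
            do (write-sequence buffer out :end bytes)))
    "upload stored"))
--8<---------------cut here---------------end--------------->8---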
Stefan,
On Sun, Jan 8, 2012 at 5:33 PM, Stefan Nobis stefan-ml@snobis.de wrote:
It seems that hunchentoot tries to read the whole stream into memory and that the heap is too small (the server has only 1GB RAM and the heap of the sbcl process is limited to about 600MB).
What makes you think so? I am not saying it is impossible, but Hunchentoot uses the RFC2388 library for parsing uploads, and as far as I know that library has not changed recently, so I wonder how you determined that your uploads are actually being read into memory. What did you do to conclude that this is the problem?
-Hans
Stefan,
I have verified that Hunchentoot does not read uploads into main memory by running the :hunchentoot-test server and using the "file uploads" test. I sent a 2 GB file and noticed no increase in the working set size of the Lisp process. The upload took quite a long time, though, so making the body parsing more efficient might be a worthwhile optimization target. Still, it does not seem that Hunchentoot itself is responsible for your working-set problems, at least judging from this test.
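Roughly, to repeat that experiment (the exact test-page URI and system setup can differ between Hunchentoot versions, so check the hunchentoot-test sources; the port below is arbitrary):

--8<---------------cut here---------------start------------->8---
;; Rough reproduction sketch; the test pages historically lived under
;; /hunchentoot/test/, but verify the exact path in the sources.
(asdf:load-system :hunchentoot-test)
(hunchentoot:start (make-instance 'hunchentoot:easy-acceptor :port 4242))
;; Then open the test menu in a browser, use the "file uploads" form
;; with a large file, and watch the Lisp process's resident set size
;; (e.g. in top) while the upload runs.
--8<---------------cut here---------------end--------------->8---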
Can you isolate the problem and post a bit of code that'd allow us to reproduce it?
-Hans
Hans Hübner hans.huebner@gmail.com writes:
I have verified that Hunchentoot does not read uploads into main memory by running the :hunchentoot-test server and using the "file uploads" test.
Yes, correct. After quite a bit of testing, it seems that either IE or the proxy server at the office is causing the trouble. Using my home internet connection and Firefox, uploading to the public service works fine. Maybe the proxy server at the office has some limits configured.
Sorry for the inconvenience.