I've been testing some UDP code on LispWorks (personal 32-bit Windows version) and have encountered some undocumented behaviour. I think it's a matter of opinion whether it's a bug or not, but it should probably have a sentence or two documenting it somewhere.
Basically I'm sending a UDP packet to a host and listening for a reply, using socket-send and socket-receive. If the host in question is not listening for UDP traffic on that particular port, the Windows socket API reports an ECONNRESET error immediately on the socket-receive call; this shows up as a -1 return value from the underlying recvfrom call.
When this happens the LispWorks backend code returns a length of -1 (and a nil buffer). I think this is perfectly acceptable behaviour (although it would be somewhat nicer to signal an error, but that's a matter of taste). It should be documented, though, that checking the returned length is the correct way to detect that an error occurred.
Example code would be something like:
(let ((sock (usocket:socket-connect "localhost" 1234 :protocol :datagram
                                    :element-type '(unsigned-byte 8))))
  (unwind-protect
      (progn
        ;; send a 16-byte packet to a port nothing is listening on
        (usocket:socket-send
         sock (make-array 16 :element-type '(unsigned-byte 8) :initial-element 0) 16)
        (let ((buffer (make-array 16 :element-type '(unsigned-byte 8)
                                  :initial-element 0)))
          ;; on Windows this comes back with length -1 and buffer nil
          (usocket:socket-receive sock buffer 16)))
    (usocket:socket-close sock)))
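So in practice the socket-receive call in the example above wants to be wrapped in a check on the returned length, something along these lines (process-reply is just a placeholder for whatever the application would do with the data):

(multiple-value-bind (data length)
    (usocket:socket-receive sock buffer 16)
  (if (or (null data) (and (numberp length) (minusp length)))
      ;; the ECONNRESET case described above: LispWorks hands back
      ;; length -1 and a nil buffer, so treat it as "no reply"
      nil
      (process-reply data length)))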
What is somewhat more annoying is that the socket object remains in a :READ state afterwards. This means that a polling loop using wait-for-input spins continuously, with each socket-receive returning -1 as explained above. The socket state should probably be cleared when a socket-receive fails.
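For what it's worth, the kind of polling loop I mean is roughly the following (process-packet is again a placeholder); once the failed receive has happened, wait-for-input returns immediately on every iteration, so the loop spins unless you check the length yourself:

(let ((buffer (make-array 16 :element-type '(unsigned-byte 8))))
  (loop
    ;; returns as soon as the socket is flagged :READ (or after 5 seconds)
    (usocket:wait-for-input sock :timeout 5)
    (multiple-value-bind (data length)
        (usocket:socket-receive sock buffer 16)
      ;; after the ECONNRESET the :READ state is never cleared, so without
      ;; this check the loop spins with length -1 forever
      (if (or (null data) (and (numberp length) (minusp length)))
          (return)
          (process-packet data length)))))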
Apologies if this is all well known to those reading this list, but it caused me ten minutes of head scratching earlier today and I thought it was worth mentioning.
Frank.