On Mon, 07 Nov 2011 03:11:50 -0500, Helmut Eller heller@common-lisp.net wrote:
- Hugo Duncan [2011-11-07 04:04] writes:
Is there a reason to start using a binary encoding of the message length?
No deep reason. We actually used binary encoding before we used hex-strings. That worked fine with latin-1 but not with utf-8. I guess it's just instinct; now that we explicitly work on a byte stream it's even more natural. Should probably have used network byte order.
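The binary header being discussed can be sketched as follows; this is a hedged illustration, not the actual swank implementation (the three-byte width and the helper names are assumptions drawn from the discussion). It frames a UTF-8 payload with a 3-byte big-endian (network byte order) length:

```python
import struct

def encode_message(payload: str) -> bytes:
    """Frame a message with a 3-byte big-endian length header."""
    body = payload.encode("utf-8")
    assert len(body) < 1 << 24, "length must fit in 3 bytes"
    # struct has no 3-byte format, so pack 4 bytes and drop the leading zero byte.
    header = struct.pack(">I", len(body))[1:]
    return header + body

def decode_header(header: bytes) -> int:
    """Recover the payload length by shifting the 3 header bytes together."""
    return (header[0] << 16) | (header[1] << 8) | header[2]
```

The decoder is the "shifting 3 bytes together" mentioned below: trivial for a program, but opaque to a human reading the wire traffic.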
This makes the messages less easy to inspect, and less easy to write integration tests for.
Only marginally. Shifting 3 bytes together is not exactly rocket science.
It isn't rocket science, but it does remove the possibility of visual verification, and of being able to send messages from a script or a simple console. HTTP, SIP, SMTP, and STOMP are all good examples of protocols with text headers, and I think appropriately so, if one views swank as a sort of control protocol.
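For contrast, a text length header keeps the frame human-readable. The sketch below uses a six-digit hex length prefix counting the UTF-8 bytes of the payload, in the spirit of swank's existing text framing (the function name and the exact header width here are illustrative assumptions):

```python
def frame_hex(payload: str) -> bytes:
    """Frame a message with a six-digit hex length header.

    The length counts the bytes of the UTF-8 payload, not characters.
    """
    body = payload.encode("utf-8")
    # "%06x" yields readable ASCII, e.g. b"00002b" for a 43-byte body.
    return ("%06x" % len(body)).encode("ascii") + body
```

Because the header is plain ASCII, a frame such as `000003abc` can be inspected by eye or typed into a console session, which is the verification property argued for above.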
What is the gain from changing to a binary header? As far as I can see it is just saving a few bytes.
The payload is an s-exp encoded as UTF-8 text.
Normalising on utf-8 and counting bytes sounds like it would solve the original issue without changing to a binary encoding of the message length.
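The distinction driving the original issue is that, outside latin-1, character count and byte count diverge; normalising on UTF-8 and putting the octet count in the header resolves it regardless of header notation. A minimal illustration:

```python
s = "λ (lambda)"
chars = len(s)                    # 10 characters
octets = len(s.encode("utf-8"))   # 11 bytes: λ occupies two bytes in UTF-8
```

Any framing scheme that transmits `chars` where the reader expects `octets` (or vice versa) will desynchronise on the first non-ASCII message.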
Right. It would not be backward compatible, though.
It seems to be worth solving encoding issues, so something has to give.
Given this is a breaking change, I also see the desire to introduce an extension mechanism at the same time. I would argue a text based header/value extension would be more appropriate.
At the end of the day, I realise that weighing the respective merits of binary and text headers is somewhat subjective.
Hugo