Christophe Rhodes <csr21@cam.ac.uk> writes:
> And something of this form will happen whenever the lisp wants to
> emit a character which is unrepresentable in the non-multibyte
> buffer used by slime -- what does non-multibyte mean, anyway?
From the Emacs manual:
In unibyte representation, each character occupies one byte and therefore the possible character codes range from 0 to 255. Codes 0 through 127 are ASCII characters; the codes from 128 through 255 are used for one non-ASCII character set (you can choose which character set by setting the variable `nonascii-insert-offset').
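To make that concrete, here is a minimal Emacs Lisp sketch of what
unibyte means in practice; this isn't SLIME code, it just pokes at a
temporary buffer:

  ;; In a unibyte buffer each character is a single byte, so only
  ;; character codes 0..255 can be stored.
  (with-temp-buffer
    (set-buffer-multibyte nil)             ; make the buffer unibyte
    (insert 233)                           ; code 233 fits into one byte
    (multibyte-string-p (buffer-string))) ; => nil, a unibyte string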
> Meanwhile, it seems to me that there must still be some other
> encoding-dependent interaction around " *cl-connection*", as
> otherwise my "fix" to use base 128 in the communication would not
> have improved matters even slightly.
>
> I feel we're still grappling with what the problem actually is, I'm
> afraid... sorry for not being clearer.
Assuming that Zach didn't change the coding system on the Emacs side, we can say that SBCL changed the coding system in recent versions. Since older SBCLs had only 8-bit characters, we can also say that Zach's Emacs uses a unibyte coding system for reading and writing.
I think we have two options:
1) use a fixed coding system between Emacs and Lisp. This coding system should be unibyte, since that is supported by all Lisps and covers most non-exotic uses. In this case we have to tell SBCL that it should use iso-latin-1 or similar instead of utf-8 for the socket stream to Emacs. Your lambda could then not be sent to Emacs, and the write operation should signal an error (see the first sketch after this list).
2) make the coding system configurable. One advantage is that you can send lambda to Emacs. The disadvantages are that we have to make the Emacs side multibyte-clean, have to write about it in the manual, and have the constant feeling that the coding systems don't match (see the second sketch after this list).
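For option 1 the Emacs side could simply pin the connection's coding
system. A minimal sketch, assuming the connection is opened with
`open-network-stream' (my-connect is a made-up name, not SLIME's
actual interface):

  (defun my-connect (host port)
    "Open a connection to the Lisp with a fixed unibyte coding system."
    (let ((proc (open-network-stream " *cl-connection*" nil host port)))
      ;; Decode and encode with the same fixed unibyte coding system.
      ;; The Lisp side would have to open its socket stream with a
      ;; matching latin-1 external format.
      (set-process-coding-system proc 'iso-latin-1 'iso-latin-1)
      proc))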
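For option 2 the coding system would become a user option. A sketch
with a hypothetical variable (my-net-coding-system is not an existing
SLIME name); whatever the user picks here has to match the external
format of the Lisp's socket stream:

  (defvar my-net-coding-system 'iso-latin-1
    "Coding system used on the wire between Emacs and the Lisp.")

  (defun my-setup-coding (proc)
    ;; Apply the user's choice to both directions of the connection.
    (set-process-coding-system proc
                               my-net-coding-system
                               my-net-coding-system))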
I strongly prefer option 1. What do you think?
Helmut.