On Mon, 9 Oct 2006, Nikodemus Siivola wrote:
Taneli Vahakangas <vahakang@cs.helsinki.fi> writes:
I'm a newb to lisp (using sbcl), slime and emacs, so maybe this is a FAQ or some other trivia I just can't find ... anyway, doing this:
(string #\UGARITIC_LETTER_ALPA)
causes slime to die and complain on the emacs bottom line with: error in process filter: net-read error: (end-of-file)
If I start sbcl from the command-line it works. Most other interesting characters work, like, say #\ARABIC_LETTER_HAMZA or #\HEBREW_LETTER_ALEF, but for whatever reason ugaritic and some others don't. I don't have a comprehensive list, but besides ugaritic, at least #\LANGUAGE_TAG kills slime.
What can be done about it? Is there a setting somewhere that needs to be (un)done?
If some Unicode characters work and some don't, then it sounds like an Emacs decoding bug or an SBCL encoding bug to me.
What happens if you write the problematic characters into a UTF-8-encoded file using SBCL, and then try to open it as a UTF-8 file
Writing seems to work ok; at least I can see the nice cuneiform with: % cat ftest ... on second thought, I will not paste the result here ... Anyway, just take my word that this part works.
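In case it matters, this is roughly what I did (a minimal sketch; the file name ftest and the exact stream options are just my test setup):

  ;; write the problematic character to a UTF-8 file from SBCL
  (with-open-file (s "ftest" :direction :output
                             :if-exists :supersede
                             :external-format :utf-8)
    (write-string (string #\UGARITIC_LETTER_ALPA) s))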
using Emacs? Can you verify that the contents of the file are as expected using something other than Emacs or SBCL?
Emacs does not seem to handle the file very nicely. It just shows: "\360\220\216\200" which is correct in a way, but not really usable (also garbles the console totally if started as "emacs -nw"). OTOH, other utf-8 characters (I tried japanese) seem to display correctly when read from a file.
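For what it's worth, those escapes do seem to be the right bytes. A quick sanity check in SBCL (so this is still SBCL doing the encoding, not an independent tool, but anyway):

  ;; sanity check: what bytes should the UTF-8 file contain?
  (sb-ext:string-to-octets (string #\UGARITIC_LETTER_ALPA)
                           :external-format :utf-8)
  ;; => #(240 144 142 128), i.e. #xF0 #x90 #x8E #x80,
  ;;    which is \360 \220 \216 \200 in octal -- exactly what Emacs shows.

So the file presumably holds the correct UTF-8 for UGARITIC LETTER ALPA (U+10380); Emacs just refuses to display it.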
So, your guess about emacs seems to be right -- I can generate the offending character and save it in a file with sbcl and cat shows me that the character actually is there. But emacs goes haywire. I probably need to "report-emacs-bug" or whatever.
(To the other guy suggesting (setq slime-net-coding-system 'utf-8-unix) in .emacs: I already have that, but thanks anyway.)
Taneli