In my opinion, ANSI CL would'a could'a should'a standardized on Unicode (at least the 8- and/or 16-bit ranges), except that Unicode was not yet available. Since virtually the entire computing universe has since standardized on Unicode, it would be insane to do anything else if an updated CL standard could somehow be established. Since Unicode standardizes all character names, CHAR-NAME and NAME-CHAR should be defined to use those standard names, as described in
Wikipedia. The only obvious ugliness from the CL perspective is that a char name really wants to be something the reader will swallow as a string designator, but standard Unicode names can and do contain spaces and hyphens (though not underscores). An obvious solution is for CL to translate space into underscore inside char names. (This is what Allegro does.) One could also escape the spaces with a backslash, but that is unbearably ugly:
#\Latin\ Capital\ Letter\ A . Unfortunately, the current ANS gives implementations the freedom not to support names for graphic (printing) characters. That should also be reconsidered in a revised, Unicode-cognizant standard.
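For concreteness, here is a minimal portable sketch of the space-to-underscore convention. UNICODE-NAME-CHAR is a hypothetical helper, not anything defined by the standard or by Allegro: it just rewrites a name as the Unicode standard spells it into the underscore form before handing it to NAME-CHAR, so what it returns depends entirely on the implementation's name table.

    ;; A hedged sketch, not part of any standard: translate the spaces
    ;; in a standard Unicode character name into the underscores that
    ;; Allegro-style char names use, then defer to NAME-CHAR.
    (defun unicode-name-char (name)
      "Return the character whose Unicode name is NAME, or NIL if the
    implementation's NAME-CHAR does not recognize the translated name."
      (name-char (substitute #\_ #\Space name)))

    ;; (unicode-name-char "LATIN CAPITAL LETTER A")  =>  #\A, or NIL on
    ;; an implementation that doesn't name graphic characters at all.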