Well, if your lisp has threads, spawn mainloop in a thread. Or use serve-event. But a word of caution about threads: Ltk hasn't been stress-tested with respect to multi-threading yet (proper threading support is still in the planning phase), so use it at your own risk :)
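Something like this, say (just a sketch, assuming bordeaux-threads for portability and a trivial button; untested, and subject to the caveat above):

  ;; Run the Ltk event loop in its own thread so the REPL stays free.
  (bt:make-thread
   (lambda ()
     (ltk:with-ltk ()
       (ltk:pack (make-instance 'ltk:button
                                :text "Hello"
                                :command (lambda () (format t "clicked~%"))))))
   :name "ltk-mainloop")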
Oh, how.. simple. :)
Actually, not only has Ltk not been stress-tested with two threads interacting with the GUI, but I can think of a few race conditions there. I fixed them once, but the fix was too ugly to be justifiable ... I've been planning on taking another stab at it "sometime". But if you're using SBCL, serve-event is perfect for interactivity -- the serve-event facility was in fact developed to allow a single-threaded lisp to run an application while preserving the REPL.
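If I remember right, newer Ltk versions hook into this directly: with-ltk accepts a :serve-event keyword and, when it's true, returns right after initialization and processes events through serve-event, so everything stays in one thread with the REPL live. Roughly (treat the keyword as an assumption and check that your Ltk version actually has it):

  ;; Single-threaded, REPL stays responsive on SBCL.
  ;; :serve-event may be missing from older Ltk releases.
  (ltk:with-ltk (:serve-event t)
    (ltk:pack (make-instance 'ltk:label :text "running via serve-event")))
  ;; with-ltk returns here; keep typing at the REPL while the GUI runs.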
I'm starting to see the light here. I guess I'm looking at GUI programming from a network programmer's perspective. I would imagine keeping the REPL intact and interactive would be /essential/ for dynamic GUI programming, let alone rapid prototyping, debugging and whatnot. I suppose most people have simple, plainly designed GUIs with pretty much static behaviour, in which case you can just wrap it all up in a with-ltk call and be done with the GUI part.
Most of the time what you're actually doing is redefining functions or classes and invoking them from the GUI, not calling Ltk functions directly from the REPL -- at least, once you get the basics in place for your application. So you don't really need the REPL as much as something like Slime to let you redefine at will.
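Concretely, point the widget's :command at a named function instead of burying the logic in the lambda; then you can recompile that function from Slime (with the GUI running via either approach above) and the next click picks up the new definition. A rough sketch, names made up:

  ;; The button's command calls a named function, so redefining
  ;; DO-THE-WORK at the REPL/Slime changes the live GUI's behaviour.
  (defun do-the-work ()
    (format t "version 1~%"))

  (ltk:with-ltk ()
    (ltk:pack (make-instance 'ltk:button
                             :text "Go"
                             :command (lambda () (do-the-work)))))

  ;; Later, from Slime, just recompile:
  ;; (defun do-the-work () (format t "version 2~%"))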
Another question: Ltk is said to be just fine as long as you do not need high performance. How do you define high performance in this context? Is a CAD application a high-performance application? Are realtime stock-trading curves high-performance? Or are we talking about the obvious: an FPS game with intense graphics?
More like the obvious ... it's been used without problems for CAD, server load monitors, tetris, xeyes, pong, etc. Image editing is pushing the limits a little, but worked surprisingly well. Ken Tilton found he needed native bindings to get reasonable performance for OpenGL, which isn't too surprising.