On Tue, Apr 29, 2014 at 12:40:09AM +0200, Pascal J. Bourguignon wrote:
* Programmed in Common Lisp, either the fixnum in the Ariane 5 would have
been converted into a bignum, or a condition would have been
signaled, which could have been handled. This would have taken
time, which could perhaps have "exploded" the real-time constraints,
but it is better to control your rocket sluggishly than not to
control it at all.
That was not the real problem. The root cause was the design assumption that
the overflowing value was _physically_ limited, i.e. during normal operation
it would have been impossible to overflow and an overflow would in fact have
signaled some serious problems bad enough to abort. While this held true in
Ariane 4, it no longer was true in the more powerful Ariane 5.
Your "solution" would have papered over the flawed design assumptions, which
is _not_ the same as fixing them.
You’re forgetting we’re talking about embedded programs with real-time processes.
You don’t have the time to stop everything and “debug” the design.
You have to control a rocket and avoid it crashing!
That’s the reason I’ve not mentioned RAX yet: the situation was quite different,
since they had the time to perform remote debugging, over several days.
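To make the point about overflow concrete, here is a sketch of the kind of handling I mean (TO-INT16 and the fallback are invented names, obviously not the actual SRI code):

    (defun to-int16 (x)
      ;; Round the raw value and check that it fits in 16 bits;
      ;; CHECK-TYPE signals a TYPE-ERROR instead of silently wrapping.
      (let ((n (round x)))
        (check-type n (signed-byte 16))
        n))

    (defun horizontal-bias (raw last-good-value)
      ;; Handle the condition and keep flying on a fallback value,
      ;; instead of shutting the inertial reference system down.
      (handler-case (to-int16 raw)
        (type-error () last-good-value)))

Whether the fallback is the last good value or a degraded mode is a system design
decision; the point is that the overflow is catchable at all.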
* Programmed in Common Lisp, instead of using raw numbers of physical
magnitudes, you'd use objects such as:
(+ #<kilometer/hour 5.42> #<foot/fortnight 12857953.0>)
--> #<meter/second 4.7455556>
and Mars Climate Orbiter wouldn't have crashed.
This is ridiculous. If you end up mixing measurement systems (such as metric
and imperial) in the same project, you are _already_ doing it horribly wrong.
It wasn’t in the same project. The data was actually sent from a remote Earth station.
So this is even worse than not using magnitudes with units inside the process: it was a
serialization/deserialization error. But notice how Lisp prints out the speeds above!
It writes the units along with the values!
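A toy sketch of such a unit-carrying value, nothing standard, just CLOS and PRINT-OBJECT:

    (defclass quantity ()
      ((value :initarg :value :reader quantity-value)
       (unit  :initarg :unit  :reader quantity-unit)))   ; e.g. METER/SECOND

    (defmethod print-object ((q quantity) stream)
      ;; Prints in the #<meter/second 4.7455556> style shown above.
      (print-unreadable-object (q stream)
        (format stream "~(~A~) ~A" (quantity-unit q) (quantity-value q))))

    (print (make-instance 'quantity :value 4.7455556 :unit 'meter/second))
    ;; prints something like #<meter/second 4.7455556>

The arithmetic on such objects (the + above) is left as an exercise, but the printing
alone already makes the units impossible to lose.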
Now, of course it’s not a programming language question. We already determined that
earlier, when we noted that neither the ANSI Common Lisp standard nor the ANSI C standard
imposes bounds checking, but that C programmers don’t write bounds checks, and C implementers,
being C programmers, implement compilers that don’t do bounds checking, while the
converse is true of Common Lisp programmers and implementers.
This is always the same thing: “statically typed” proponents want to separate the checks
from the code, performing (or not) the checks during design/proof/compilation, while
“dynamically typed” proponents keep the checks inside the code, making the compiler
and system generate and perform all the typing, bounds, etc. checks at run time.
So when a C guy (any statically typed guy) sends data, he expects that the type and
bounds of the data are known beforehand by both parties. But when a Lisp guy (any
dynamically typed guy) sends data, he sends it in a syntactic form that explicitly
types it, and the data is parsed, validated, bounds-checked and typed according to
the transmitted syntax on the receiving end.
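For example (a sketch; the field name and the bounds are made up, they're obviously not MCO telemetry):

    ;; Sender side: the value travels with its own type and unit.
    (prin1-to-string '(:thruster-impulse 1.3 :unit pound-force-second))
    ;; => "(:THRUSTER-IMPULSE 1.3 :UNIT POUND-FORCE-SECOND)"

    ;; Receiver side: parse, then validate type, unit and bounds before using it.
    (let* ((*read-eval* nil)   ; never evaluate incoming data
           (msg (read-from-string "(:THRUSTER-IMPULSE 1.3 :UNIT POUND-FORCE-SECOND)")))
      (check-type (getf msg :thruster-impulse) (real 0.0 100.0))   ; illustrative bounds
      (assert (member (getf msg :unit) '(newton-second pound-force-second)))
      msg)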
Of course, generating C code doesn’t mean that you can’t design your system in a
“dynamically typed” spirit. But this is not the natural noosphere of the C ecosystem.
The design fault was mixing measurement systems, which one should _never_ do
on pain of embarrassing failure. Papering over this design screwup with a
language environment that _supports_ this (instead of screaming bloody
murder at such nonsense) doesn't really help here.
Again, we are talking about an embedded program, in a real-time system, where you
have only seconds of burn stage on re-entry, and where you DON’T HAVE THE TIME
to detect, debug, go back to the drawing board, compile and upload a new version!
The software that uploaded the untagged, unit-less, bit-field *data*, instead of
some meaningful *information*, hadn’t even been completed before the orbiter was
in space! It wasn’t developed by the same team, and wasn’t compiled into the same
executable.
Nonetheless, here a lisper would have sent *information* in a sexp, and dynamic
checks and conversions would have been done.
If you will, the design would have been different in the first place!
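Something along these lines (the function name is mine; the factor is just the standard 1 pound-force second = 4.4482216 newton-seconds):

    (defun to-newton-seconds (value unit)
      ;; Normalize an impulse to SI, whatever unit the sender declared.
      (ecase unit
        (newton-second      value)
        (pound-force-second (* value 4.4482216d0))))

    (to-newton-seconds 1.3 'pound-force-second)   ; => ~5.7827d0

An undeclared or unknown unit falls through the ECASE and signals a condition, instead of
being silently taken for newton-seconds.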
* Programmed in Common Lisp, the Therac-25 bug wouldn't have occurred:
"The defect was as follows: a one-byte counter in a testing
routine frequently overflowed; if an operator provided manual
input to the machine at the precise moment that this counter
overflowed, the interlock would fail."
But why did the counter overflow in the first place? Was it simply programmer
oversight that too small a datatype was used or was this actually an error
that just didn't have noticeable consequences most of the time? If the
latter, then again, papering over it with a never-overflowing counter is
not a fix.
But if it was a problem, it *would* eventually reach a bounds check, and signal
a condition, thus stopping the process of irradiating and killing people.
Remember: a Lisp program (any “dynamically typed” program) is FULL of checks!
And again, incrementing a counter doesn't fucking overflow in
Lisp!
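Concretely:

    (let ((counter most-positive-fixnum))
      (incf counter)               ; one past the largest fixnum...
      (typep counter 'fixnum))     ; => NIL: it silently became a bignum, it did not wrap around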
* Programmed in Common Lisp, heartbleed wouldn't have occurred, because
lisp implementors provide array bounds checks, and lisp programmers
are conscious enough to run always with (safety 3), as previously
discussed in this thread.
Hehe, "conscious enough to run always with (safety 3)". Riiiiight. And nobody
was ever tempted to trade a little runtime safety for speed, correct?
Those are C programmers. You won’t find any safety other than 3 in my code.
You should not find any safety other than 3 in mission-critical code, much less
in life-threatening code.
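With (safety 3), an out-of-bounds AREF signals a condition instead of reading whatever sits next to the buffer, which is exactly what heartbleed did. A sketch (the function name is mine; the standard doesn't mandate the check, but that is what implementations give you at full safety):

    (declaim (optimize (safety 3) (speed 0)))

    (defun payload-byte (buffer i)
      ;; At (safety 3) this AREF is bounds-checked by the implementation.
      (aref buffer i))

    (handler-case
        (payload-byte (make-array 8 :element-type '(unsigned-byte 8)
                                    :initial-element 0)
                      4096)
      (error (c) (format t "refused: ~A~%" c)))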
As for heartbleed: arguably, the RFC that the broken code implemented
shouldn't have existed in the first place.
What I'm saying is that there's a mindset out there of blindly using modular
arithmetic as an approximation of true arithmetic. Until you are willing to
pay $1.29 for 3 kg of apples @ $2.99/kg, people should not program with
modular arithmetic!
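(That price is not random, by the way: 3 × 299 cents is 897 cents, and 897 modulo 256, the one-byte cash register presumably at work here, is 129 cents, i.e. $1.29.)

    (* 3 299)             ; => 897  -- what you actually owe, in cents
    (mod (* 3 299) 256)   ; => 129  -- what the one-byte cash register charges you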
Well, modular arithmetic doesn't go away because one wishes it so. As a
developer doing non time critical high level work one might be able to
cheerfully ignore it, but the moment one writes sufficiently time critical
or low level code one will have to deal with it. Because modular arithmetic
is what your CPU is doing - unless you happen to have a CPU at hand that
does bignums natively at the register level? No? Funny that.
This might have been true in 1968, when adding a bit of memory added 50 g of payload!
Nowadays, there’s no excuse.
And if the flight safety of an aircraft depended upon the current
Lisp version of Ironclad's impenetrability, we would be in trouble.
This is another question, that of the resources invested in a software
ecosystem, and that of programming language mind share. Why don't the
cryptographers write their libraries in Common Lisp, and why do they choose to
produce piles of C instead?
Usefulness. If I write a library in C, pretty much everything that runs on
Unix can link to it (if need be, via FFI and friends) and use it. If I write
a library in Common Lisp, then only code written in Common Lisp can use it, unless
people are willing to do some interesting contortions (such as wrapping it in
an RPC server).
Anything running on Unix can link to libecl.so (which is ironically a CL
implementation using gcc, but we can assume it’s a temporary solution).
Exercise for the interested: write a library in Common Lisp that does, say,
some random data frobnication and try to use it from: C, Python, Perl, C++
_without_ writing new interface infrastructure.
But the point is to eliminate code written in C, Perl, C++! So your exercise is academic.