On Sat, Apr 12, 2014 at 5:52 PM, David McClain <dbm@refined-audiometrics.com> wrote:
Just curious for other opinions... but wouldn't this (Heartbleed) sort of buffer over-read failure have been prevented by using a "safe" language like Lisp or SML?
I used to be an "unsafe" language bigot -- having mastered C/C++ for many years, and actually producing C compilers for a living at one time. I felt there should be no barriers to me as master of my machine, and not the other way around.
But today's software systems are so complex that it boggles the mind to keep track of everything needed. I found during my transition years that I could maintain code bases no larger than an absolute max of 500 KLOC, and that I actually started losing track of details around 100 KLOC. Making the transition to a higher level language like SML or Lisp enabled greater productivity within those limits for me.
Part of the issue vis-a-vis security is that for many applications, much of the complexity is abstracted away into some library that the programmer may only be dimly aware of. While it used to be that many largish programs were more or less self-contained, often depending only on the system libraries, nowadays they tend to have a very broad set of dependencies on a large set of libraries that themselves have similarly large sets of dependencies. Indeed, the applications themselves are often little more than glue tying together a (to me, anyway) surprisingly large number of disparate libraries: transitively, the dependency graph is many times larger than it was a decade or two ago and hides an astonishing amount of complexity, even for applications that themselves appear trivial. Thus, you may "introduce" a security hole into your application simply by using a library that provides some useful bit of functionality but is implemented terribly, in a way that is not easily visible to you: that seems to be the case with the services affected by Heartbleed.
Could using a safe language (or even one that implemented array bounds checking) have prevented this particular bug? Absolutely. But in the general case, modern applications have a huge surface area for attack because of the layering of dependencies on large, complicated bits of software that the application programmer has little to no control over. Further, building all the requisite dependencies oneself in a safer language is such a daunting task as to be generally infeasible, even if it makes sense for specific applications. And even if I did that, eventually I am going to pop down into some layer in the operating system or a system library or the language runtime that is out of my control, and those things seem to have also increased in size and complexity by a few orders of magnitude over the last 20 or so years. And even if I write my own compiler, operating system, libraries, etc., I still have to wonder whether the hardware itself is truly secure ("DEITYBOUNCE", anyone? Let alone actual, you know, errors in the hardware). And this is completely ignoring the value-loss proposition of targeting safer but lesser-used languages. For better or for worse, things like Heartbleed just aren't going to sway many library writers to give up on a huge, existing target audience (even if they should).
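Just to make the mechanics concrete, here is a deliberately simplified sketch of the kind of over-read involved. This is not the actual OpenSSL code; the function name and parameters are invented for illustration. The peer supplies a length field inside the heartbeat message that can be larger than the data it actually sent, and the copy trusts that field:

    /* Simplified sketch of a Heartbleed-style over-read (not the
     * actual OpenSSL code; names are invented for illustration).
     * record_len is how many bytes actually arrived on the wire;
     * claimed_len is the length field read out of the message. */
    #include <stdlib.h>
    #include <string.h>

    unsigned char *echo_heartbeat(const unsigned char *payload,
                                  size_t record_len, size_t claimed_len)
    {
        unsigned char *resp = malloc(claimed_len);
        if (resp == NULL)
            return NULL;
        /* BUG: trusts claimed_len.  When claimed_len > record_len this
         * reads past the received data and echoes adjacent heap memory
         * back to the peer. */
        memcpy(resp, payload, claimed_len);
        /* The one-line check that was missing, and that a
         * bounds-checked language would impose automatically:
         * if (claimed_len > record_len) { free(resp); return NULL; } */
        return resp;
    }

In a language with array bounds checking, the copy would have failed at the point of the over-read instead of quietly handing back whatever happened to sit next to the buffer.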
So even if I as a programmer am extremely careful and diligent, I may still be burned by something rather distant from the work I myself am doing, and I have finite capacity to influence that.
Of course, that doesn't mean that one should not oneself be careful and diligent, or even reimplement things safely where one can! Only that the problem is rather more difficult than can be solved by simply switching implementation languages.
- Dan C.