Rambles around computer science

Diverting trains of thought, wasting precious time

Fri, 20 Feb 2015

Putting observability first

[Update: this article has now been translated into Russian! Thanks, Vlad.]

Last week I had a fascinating conversation with Mark Shinwell and Stephen Dolan, two colleagues who know the OCaml runtime inside-out. We were talking about ways to make compiled OCaml code observable: making it support interactive debugging and various other dynamic analyses (notably memory profiling).

It turns out that although this is not impossible, it is very difficult. It's fair to say that the OCaml runtime has not been designed with observability particularly high on the wishlist. It's far from unique in this: ML-family languages are usually not implemented with observability in mind. In fact you could argue that ML is explicitly designed for non-observability. The orthodoxy of the day held that tagged storage was the naive implementation technique, and erasing tags at run time was a good thing—mainly since it enabled a modest performance gain. Moreover, this meant that the provable erasability of tags came to be considered a valuable mathematical result, about language designs in general and ML in particular. The lingering side-effect is that this erasure also removes the ability to decode program data straightforwardly from its in-memory encoding. In the absence of some other mechanism—which mainline OCaml doesn't currently have—this greatly compromises observability.
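To make that concrete, here's a minimal sketch of the problem, using the stdlib's Obj module (which deliberately exposes the run-time representation); the point type is just a throwaway example. A tuple, a record and a variant constructor all land in the heap as blocks with tag 0, so nothing in memory says which source-level type you're looking at:

    (* Inspecting OCaml's uniform value representation via Obj. *)
    type point = { x : int; y : int }

    let () =
      let a = Obj.repr (1, 2) in               (* a tuple *)
      let b = Obj.repr { x = 1; y = 2 } in     (* a record *)
      let c = Obj.repr (Some 1) in             (* a variant constructor *)
      (* All three are heap blocks with tag 0; only their sizes differ,
         and nothing in memory names the source-level type. *)
      Printf.printf "tuple:  tag=%d size=%d\n" (Obj.tag a) (Obj.size a);
      Printf.printf "record: tag=%d size=%d\n" (Obj.tag b) (Obj.size b);
      Printf.printf "option: tag=%d size=%d\n" (Obj.tag c) (Obj.size c);
      (* Immediates are even less informative: 0, false, '\000' and None
         all become the same tagged machine word. *)
      Printf.printf "None is an immediate, like an int: %B\n"
        (Obj.is_int (Obj.repr None))

A debugger or heap profiler looking at the process from outside sees only those tags and sizes; recovering the source-level view needs extra metadata that the toolchain would have to emit, much as DWARF metadata does for C.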

ML is not alone in this. Although I'm less familiar with it, I'll wager that Haskell, as implemented by GHC, suffers similar observability limitations, or perhaps worse ones, since GHC does more extensive compile-time transformations and these cannot easily be “seen through” at run time. That makes it even harder to explain the state of an executing program in terms of the source code. Nor am I singling out functional languages: Java has serious flaws in the observability mechanisms exposed by its virtual machine, a topic I once wrote a short paper about.

C, as usual, is an interesting case. On the surface, it appears to be designed for little or no observability: it certainly doesn't oblige implementations to generate much run-time metadata, and this omission could be seen as an ML-like erasure of implicit tags. Yet that impression is misleading. Pragmatic working programmers, Ritchie and Thompson among them, know well the value of interactive debugging. The first edition of Unix included db, an assembly-level interactive debugger used mostly on coredumps. By the eighth edition, two debuggers, adb and sdb, were documented in section 1 of the manual, the latter supporting source-level debugging, while a third debugger (pi, built on Killian's procfs) was bundled in the extras that would become Plan 9. More modern debuggers have kept pace (more or less!) with more advanced compiler optimisations, using complex metadata formats like DWARF to describe how to see through them. (This isn't bulletproof, and has problems of its own, but it works remarkably well for C.) The result is that C code has been consistently highly observable, albeit not without forty years' continuous toil to co-evolve the necessary language- and system-level infrastructure.

This co-evolution is interesting too. The mechanisms for achieving observability in C code lie outside the language. Coredumps and symbol tables are system mechanisms, not language mechanisms. Observability in Unix has been part of the design, down in the fabric of the system; it is not an afterthought. An opinionated system designer (okay, I'll step forward) might claim that observability is too important to leave to language designers. There are, however, some gaps to plug and some mindsets to bridge in order to apply Unix-style debugging infrastructure to ML-like languages. In another post in the near future, I'll dig a bit deeper into OCaml by considering polymorphism (and why it's not quite the problem it seems to be).


