Rambles around computer science

Diverting trains of thought, wasting precious time

Mon, 17 Jan 2022

De-escalating the submission backlog

About two and a half years ago, I submitted a two-page abstract entitled “De-escalating software: counterproductivity versus conviviality” to the conference on History and Philosophy of Computation (HaPoC), proposing a talk on the philosophy of Ivan Illich as interpreted in the domain of software. I had already given a couple of experimental talks about it that appeared to be well-received. The idea was to push myself into developing these ideas a bit more, as a waypoint to eventually publishing them. The conference, in late October 2019 in Bergamo, would have been a nice stopover on my return (by surface travel) from SPLASH in Athens.

A couple of months later, the day the notification was due, I woke up very early in the morning in a panic, having slept very poorly. I was feeling overwhelmed by my workload and I knew that I could not deal with the extra burden of my talk being accepted. I ran downstairs and sent an apologetic e-mail asking to withdraw the submission. I've no idea whether my talk was to be accepted or not, but either way I am very glad for the chair's understanding response.

You can read the abstract here. I would still like to write more about software and Illich. As well as the two earlier talks I gave in 2019 (neither was recorded, sadly, but you can view the slides on my talks page), I was also a guest on the Hope in Source podcast by Henry Zhu and Nadia Eghbal where we shared some thoughts on Illich. It is proving very difficult to get time to devote to this. Perhaps it is something best done intensively at a retreat.

(I have at least three other pieces of writing that are in a fairly similar state. And I have dozens of blog posts that are “90% finished”—where the remaining 90% is left to do, of course.)

[/research] permanent link contact

Fri, 23 Aug 2019

Research travel, climate change, and why we must educate our institutions

Like many academics, I travel quite a lot. I'd rather do so in a way that is environmentally sustainable. Most conferences I travel to are sponsored by ACM SIGPLAN, which in recent years has developed an ad-hoc committee on climate issues.

It's great that SIGPLAN has taken this issue seriously. I do agree that carbon offsetting, SIGPLAN's current recommended action, can be worth doing. The following is my greatly simplified summary of the committee's argument for recommending this action. (For the actual argument, which is far more careful, see the preliminary report.)

  1. Research benefits from high-quality communication, which entails travel, e.g. to conferences. [This is clearly true, for now at least.]
  2. Travel “very often involves flying”. [My quibble in this post: let's not assume this is a variable over which we have no control. In practice, many of us have some choice at least some of the time.]
  3. Flying is dependent on burning fossil fuels, therefore on emitting carbon.
  4. Therefore, as a short-term action it is feasible only to offset carbon emissions, not to reduce them.

To me this comes across as a North American perspective, and overlooks the transport situation in Europe. Within Europe it is largely possible to avoid flying. Here are some conference trips I've made by train (and sometimes ferry) in the past few years. In each case I was starting in south-east England, either from Cambridge or Canterbury. For clarity: many of these were not SIGPLAN-sponsored events, but I don't think that detracts from my point.

Hopefully the above shows that a lot of travel within Europe can be done without flying. So what stops people? My best guesses are the following.

The first step to changing this is to recognise that we live in a “flying first” culture. The power of unfamiliarity and inertia, both individual and institutional, is considerable.

The second step is to challenge it. This means asking for our institutions' support, and questioning policies that appear unsupportive. My previous institution's travel guidance sanctions Eurostar trains only for journeys “to France”, apparently unaware that they also run direct to Brussels. No doubt minds would be blown if I were to describe how, by changing trains, you can reach even more countries. I wrote to the page's maintainers in March 2018, but despite a friendly-seeming response, there has been no change as of August 2019. Perhaps I'm optimistic, but I believe that institutions will learn if we keep telling them. Flying in Europe mostly isn't necessary. The last time I chose to fly within Europe was in September 2012. Since then, I've done it exactly five further times (one way). Aside from the case of industrial action noted above, these have been cases where the choice was effectively made by some institution or other: if I'd taken the train, I likely wouldn't have been reimbursed.

The third step is to accept taking a short-term hit in time and money. Even if our time overheads are somewhat increased in real terms, we should see sustainable choices as an intrinsic good, and choose them to the extent we personally can (which will vary; it's easier if you have your own travel budget). Not only do academics have a responsibility to society; we are also unusual in the extent to which we can “afford” additional travel time, and perhaps even benefit from it—given extra time spent on trains or ferries, we might even get more work done! I admit that the above record of relatively little flying has only been achievable because I've sometimes ponied up some extra cash myself—but this shouldn't be necessary.

A fourth step is more political, but is necessary to make that hit “short term”: demand progress! “Voting with your feet” is the most basic and important form of this, but other things, like supporting campaign groups or writing to your representatives, can make a difference. It's the sort of thing most of us could do more of—myself definitely included. Solidarity around sustainable transport is sorely needed. Even in Europe, loss of political favour has been causing the offering to go backwards. Cross-border train and ferry services are becoming sparser overall—especially night trains, as I noted above. Sometimes, faster daytime trains are a suitable replacement, but often they are not. The reasons for these cutbacks are complex, but are in large part regulatory and therefore political. (Some useful articles about the DB cutbacks are here and here. Despite this, there are reasons to be cheerful: see here and here, where there is a very nice map.)

Coming back to SIGPLAN: I think there is room to develop practices that encourage SIGPLAN members to use more sustainable travel options where they exist. A positive framing is necessary, which will take some careful thought. I believe a financial incentive is also necessary—but straight-up differentiated registration fees might be hard to administer, and perhaps unpopular.

As one possible positive framing, perhaps SIGPLAN could develop a pool of funds, analogous to PAC, but used to pay top-up contributions for those who choose to travel sustainably and cannot otherwise fund the difference. One way this might work is that after a conference, those who submit evidence that their travel was sustainable would earn a small rebate on their registration fee. This would be offered partly as a token redeemable against future registrations (benefitting the institution paying for the travel) and partly as a deposit into the pooled contribution pot I mentioned. For the latter, the individual or institution receives no money but gets some kind of “points certificate” (to help gamify things). I note the ACM-directed carbon pricing recently mooted in a SIGPLAN blog post, which would also generate some pooled funds. I'm not yet sure whether it should be the same pool; perhaps not.

In October I will be travelling to SPLASH in Athens. From the UK this is one of the more challenging European destinations for surface travel. But it is certainly feasible and I'll certainly be doing it! The core of the journey is an overnight train from Paris to Milan, a train to Bari or perhaps Brindisi, and a ferry (again overnight) from there to Patras. Let me know if you're interested in joining me or learning more.

[/research] permanent link contact

Fri, 11 Jan 2019

Mildly profane meta-advice for beginning PhD students

A while back I chanced upon yet another “advice for PhD students” article, which cajoled me into finally writing my own. I should mention that I don't really like this sort of article; as a side-effect of this cognitive dissonance, the text below will be somewhat profane.

(Just as many such articles contain unacknowledged US-centrism, so mine contains some UK-centrism. I hope you can deal with it.)

Figure out how to make yourself work effectively. If you're relatively young when you start, a surprisingly large fraction of your PhD could go by before you get there. For me, eating properly and sleeping properly are the hardest things to get right, and (of course) a lot flows from them. It might sound silly, but I was in my third year before I mostly cracked this. (Ten years later, I still struggle from time to time.)

Read. Knowing the prior work means really knowing it; not just the last ten years' worth or the high-profile stuff. To merit the PhD degree, I'd argue it's actually more important to be an expert in your field than it is to advance human knowledge. (Some institutions' regulations even reflect this. Not that that's relevant... in practice, your examiners will be guided far more by the folklore of their particular research culture than by any regulations. There's more about examination below.)

Learn how to give good talks. The easiest way is to give talks, and get frank feedback. Also, attend talks and pay attention to which ones work. (Flip side: even after learning this skill, not every talk you give will be good. Don't beat yourself up.)

Attend talks. Cultivate your breadth as well as your depth. This can be easier or more difficult, depending on the particular academic environment you're in. I was lucky on this one.

Don't let anyone tell you “how research must be done” or “what good research looks like”. There are a huge number of different styles of research, and most people know only a few. The more prescriptive someone's take, the more narrow-minded that person and the more loudly your bullshit detector should be sounding. Example: once I was advised that good work should generate “graphs that go up and to the right”. If we only had the kind of research that does generate such graphs, the world would be a poorer place, no matter how good that work. To this day, there are few graphs in my work (and those that do exist don't look like that).

Find your research community (or communities). This relates to the previous point: even people claiming to be doing similar research and/or have similar goals might actually have very different attitudes and methods. Communities have their own vibes and personalities even beyond those issues. It took me a few years, and a couple of wrong attempts, to find the community for me. I was glad when I did. No community is perfect, of course, and I still feel like an alien from time to time.

Don't let the bastards grind you down. Chances are, if your work is at all interesting, some people will want to shit on it. More generally, the research world attracts a lot of extreme personality types (including narcissists, bullies, hypercompetitive arseholes, manipulative gaslighters, etc.). Learn how to see them for what they are—even the ones who are well-regarded famous people.

Get access to good feedback. It doesn't have to be your supervisor (who, realistically, you might well have chosen based on limited information and/or without really knowing what you were doing). It doesn't have to be one person; it's probably better if it isn't. Despite good intentions all round, I found that my supervisor's feedback wasn't very useful (I have no hard feelings). But I found that through my surrounding research group (groups, in fact) I could crowdsource a lot of extremely valuable feedback.

Know how to disregard unhelpful feedback. Some people won't have anything constructive or insightful to say, but will still be happy to pick imaginary holes, make you feel small and/or make themselves feel clever. Don't let them sap your energy.

Learn how to defend against probing and/or hostile questions. In a talk situation, or perhaps a viva/defence situation, you will get these from time to time. It's rarely a fair fight; often, the hardest questions to answer are the most sloppily formulated or the ones founded on wrong assumptions that remain implicit. There are some tactics for handling these. The simplest is batting them back for clarification or re-statement. Another is offering back the question that you think was intended (and, hopefully, one that actually would have been a sane question). The more expert technique is on-the-hoof picking-apart of exactly what those wrong assumptions were. Some of my most awkward moments as a young researcher were when my lack of skill at this was exposed. So, the other point is not to sweat it when this happens.

Don't be afraid to talk to people whose work you admire—even by sending cold e-mail. Even “famous” people are mostly approachable and are happy to hear if you like their work and have something interesting to say. (And if they're not, then it's their problem not yours.) I was far too timid about this. You never know; it might lead to something. And even if not, there is value in just saying hi. I held back too much.

Socialise your work, and yourself. Get yourself invited to give talks at other universities or research labs. Talk at workshops, give demos or posters; take the opportunities. Although it's possible to overdo this—getting some time for actual work is important too!—it's easy to underdo it, especially if you're shy or not a natural self-publicist. (Sad but true: the more people know you personally, the more citations your work will attract, especially earlier on.)

When communicating with others, learn how to focus your thoughts, and tailor them to the audience. This is partly about having not just one but many “elevator pitches” (or, as we Brits say, “lift pitches”). But it's more than that. One of the things I used to get wrong when talking to people, especially by e-mail, was to let my message degenerate into a brain dump. There's always more you can say... but often, less is more. Select the thoughts or points that you're most interested in or excited about, filtered by their relevance to, or likely resonance with, the recipient. Have faith that the other stuff will come out over time if it's fruitful. There will be a lot of implicit cues, in how you're writing or talking, hinting to the recipient that there is more where that came from.

Learn when administrative rules are not really rules. Even the best institutions will have over-narrow admin-defined ideas about how a research student's journey should go. They might assign you to one research group even though your interests span several. They might refuse to fund your travel to a particular event that's considered out-of-scope. Towards the end of your PhD, they might neglect to provide extension funding, or not connect you with the help you need in applying for your next funding. In all these cases, the rules may appear inflexible as written, but they are often fluid in practice, so be bold enough to ask the question anyway. (One example: I was told I had only a three-year studentship that could not be extended. But I asked for extension money anyway. The rules didn't allow extending me, but they did allow retroactively increasing my stipend, effectively paying me additional back-pay; this got me an extra 4.5 months of funding.) Usually it helps to be on good terms with the relevant administrators—go and talk to them in person, be unassuming, try to understand where they're coming from and what the real constraints are. It can also help to have a supporting professorial type (again, need not be your supervisor) who will be your advocate and help you poke the administrative gears.

Know how your PhD will be examined, and do what you can to influence and optimise for that. Under the UK system, in practice it is your supervisor who will choose your examiners, but you may have some influence over that choice. Sadly, “examination-friendly research” and “good research” are overlapping but non-equal sets. This is particularly dangerous in the UK system where the examiners are all-powerful, and doubly so if your supervisor's social capital is low enough that the examiners do not see fit to moderate themselves on that account. It's pragmatic to see your thesis as a thing to be examined, rather than a thing you're writing for yourself (even though the latter may make you feel fuzzier and more motivated). This also comes back to the “many styles of research” point: if an examiner isn't used to work of your particular style, you will get a rougher ride than you deserve. Make sure the choice of your examiners is made with due consideration of this; it shouldn't simply be whoever your supervisor randomly bumped into and asked on a whim.

Be sceptical about advice. There's a lot of it about, given freely and often smugly by people who took a single path. This is generally well-meaning, but the people concerned don't necessarily know much about the other paths. Successful people often don't truly know why they were successful. I received huge amounts of bad advice during my PhD, which has made me a cynic about advice-giving and mentorship to this day. (Can you tell?)

Clearly I'm a hypocrite too, otherwise I wouldn't have written this. My so-called career path, though not without its good points, has definitely not been one to emulate. Sadly, I can't blame all of its failings on following other people's bad advice. Still, I hope this article has been different enough that it makes a useful counterpoint to advice in the usual vein.

[/research] permanent link contact

Wed, 18 Oct 2017

Some were meant for post-hoc reflections

It was great to get so much positive reaction to my essay “Some were meant for C”, so thanks to everyone concerned. In fact, thanks to everyone who's read it, even if you didn't like it! There were lots of things the essay didn't cover, and a few which it covered only in a way that perhaps wasn't very obvious. And there were a few common patterns running through various reactions. All these seemed worth addressing directly in a few paragraphs, which I'll hopefully keep brief.

What about C++? Firstly, it's true that C++ offers the same “communicativity” benefits as C. We all know C++ was designed, and continues to be designed, to support systems programming (among other flavours). So, the reason I focused on C is not that C is unique in this benefit; but simply that C is the more paradigmatic example. (Of what? Well, of systems programming languages in the “unsafe school”... see also the next paragraph.) C++, on the other hand, is more things to more people, so would require a much broader and more discursive treatment than my essay had space for. For the record, I use C++ quite a lot myself. When I use it, it is (I hope) because it is the better tool for a task. These tend to be algorithmically complex tasks, and tend not to be systems-programming ones. I wouldn't say I enjoy using C++ as much as I do using plain C, but that is largely a consequence of the tasks concerned (I hate algorithmically hard coding!) and not the languages. More generally, I think it's uncontroversial that the huge complexity of the C++ language is a source of both strengths and weaknesses.

What about Rust? If C is a paradigmatic example of oldskool languages in the “unsafe” school of systems programming, then Rust is surely a paradigmatic example of a new school. Or is it? Actually, the benefits of Rust are pretty much orthogonal to everything I talked about in the essay, although I agree that this fact is non-obvious. Rust's selling point is its static checking, which I certainly agree is neat. This particularly suits critical systems programming applications that want a high degree of static assurance while avoiding run-time baggage. However, this static checking comes with a similar “must oversee the whole world” requirement as do managed languages—albeit overseeing it at compile time, rather than at run time. When using Rust for systems programming of the communicative kind that my essay talks about, one is dealing with foreign objects not defined within the Rust language; therefore one is necessarily using what I call “Unsafe Rust”, i.e. the subtly different sublanguage one gets inside unsafe blocks—at least within some “interface layer” in the code. (Of course one can build abstractions that are checkably safe assuming correctness of the unsafe interface layer.) In this respect, Unsafe Rust has the same properties as C: the dynamic safety that I talk about in the later sections of the essay would be equally useful, and equally possible, within Unsafe Rust. (That would make it Dynamically Safe Yet Not Statically Safe Rust—a catchy name indeed.)

Are you proposing a “safe subset” of C? No. The whole point is that no subsetting is required. That is a consequence of how C is specified; to assume that C “specifies” its own unsafety is a confusion, albeit a common and understandable one, between specification and implementation. If you want to get super-technical about the C language specification, you could push me to “almost no subsetting”, since there are some tiny corner details that might need to be tweaked to get the best implementability and efficiency trade-offs. One key area I'm thinking about is the rules to do with effective types, which I'd rather had a tiny bit more ceremony attached. The same goes for unions. Still, these changes would be working at the same scale as the present language standard's evolutionary process; the essence of the C language, and the deliberately underspecified style in which it is defined, are amenable to a fairly efficient, dynamically safe implementation.

So you think C is wonderful, right? Not at all! C has plenty of flaws and I'm more than happy to point them out (er, some other time). I don't think anybody out there believes C could not be substantially improved if we were to redesign it with several decades' hindsight. But my point is rather the converse: that we should recognise the non-obvious ways in which we might get these redesigns and replacements wrong, and recognise why people continue to use C despite today's candidate replacements. Among these reasons, the one I chose to expound on was communicativity, because it is among the most non-obvious (to me, of those I know). There are other reasons why C is an effective tool in ways that certain other languages aren't—one respondent helpfully pointed out that the preprocessor allows certain cases of portable code, for example, which is a very true and important observation. Overall, any attempted replacement must preserve (or ideally improve on!) all these strengths. And of course, it is worth re-emphasising a million times: this is not simply a matter of performance.

What about... (some other language)? Although I'm always interested in and (usually) sympathetic to new language designs, one of the main messages of the essay is (or was supposed to be) that we give too much attention to languages. So, asking me about languages has been a somewhat frustrating pattern in the responses I've got. It's understandable to some degree. As humans, we like to use labels to partition the world in front of us (into species, tribes, taxonomies, etc.). Languages are a readily available label, and the differences among them are very visible. But sometimes this desire to label and partition leads us astray; the point about how we confuse C with implementations of C, on which the essay labours at some length, is a key instance of this. So forgive me if asking me about this or that language makes me a bit grumpy, even if that language is really nice.

Aren't language wars a waste of precious time and energy? Yes! I've already hinted at my distaste for language wars in the previous paragraph. Still, I suspect this reaction came from people who read the essay's title but not its content.

[/research] permanent link contact

Sun, 17 Sep 2017

Project suggestions (for Part II and MPhil students)

I have several project ideas to suggest this year. I could supervise any of them, for the right student (and not too many!).

A caveat: most of these are researchy project ideas. Part II students should read the pink book carefully, and also read my advice about Part II projects from a student's perspective, including the potential dangers of getting too researchy. Nowadays, as a researcher, I feel a strong incentive to suggest projects that verge on the over-challenging or overly research-led. We should discuss any project idea carefully before writing the proposal, to ensure it's within your interests and (projected) ability.

See also my rather cryptic and occasionally out-of-date list of other standing suggestions.

An observable OCaml

This is about an OCaml-to-C compiler that allows the generated code to profit from DWARF-based debugging support of C compilers, while still providing a coherent OCaml view of the debugged code. A very nice job of this project was done last year by Cheng Sun, but there is plenty of room for variation; please ask me.

A fuzz-tester over API traces

This is renamed from “DWARF-based”. I've done some more thinking about this project since the linked write-up a few years ago, so let me expand. Most fuzzers work by perturbing the data given to a particular piece of code. This project is about fuzzing library interfaces by perturbing the trace of calls. This may include fuzzing data, but is not limited to it.

A random test case generator for linkers

(This is probably more of an MPhil project than a Part II project, though it could be attempted as either.)

Tools like Csmith have found considerable use in testing C compilers. A similar tool could be useful for linkers, which continue to be buggily extended and reimplemented, e.g. in the LLVM project's LLD linker. This project would be focused on ELF-based linking on Unix platforms.

Linking is rather simpler than compilation in some respects, but exhibits a large number of possible feature interactions. For example, how do symbol visibilities interact with weak definitions?
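
For instance, a hand-written, hypothetical test case probing exactly that interaction (rather than actual generator output) might look like this:

/* a.c -- a weak definition */
__attribute__((weak)) int the_symbol = 1;

/* b.c -- a strong definition with hidden visibility */
__attribute__((visibility("hidden"))) int the_symbol = 2;

/* main.c -- which definition does the reference bind to? Does the answer
 * change if a.o or b.o is placed in an archive or a shared library, or if
 * their order on the command line is swapped? */
extern int the_symbol;
int main(void) { return the_symbol; }

A generator would need to enumerate many such combinations of binding, visibility and object placement, while keeping each job linkable.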

One challenge is to generate only link jobs that “should” work. For example, the job should not usually (if linking an executable) contain non-weak undefined symbols that lack a matching defined symbol.

Another challenge is to generate code for which some test of behavioural correctness of the output program is not only possible, but also usefully exercises the linker's work in some way: we should obviously generate code that involves cross-module referencing, but it's not clear what else it should do.

A third challenge is to achieve a bias towards “interesting” intersections of features, to improve the probability of finding bugs.

Useful outcomes from a randomized test case would include crashing one or both linkers, or producing divergent behaviours between two existing linkers. Evaluation could be done in terms of coverage of the feature space, and also in bugs found (including known bugs, e.g. in older versions). A starting point would be to extract a corpus of bugs from the change history of LLD.

A testable specification of C and C++ ABI details

Compilers use documented ABI specifications, such as the System V ABI, its x86-64-specific adjunct, and the Itanium C++ ABI, to achieve compatibility at link time and run time. These fix standardised details for representing data, calling code, interfacing with the operating system and achieving link-time effects (such as uniquing of inlineable function instantiations, dynamically interposable bindings, etc.). Given a particular C or C++ source file, what linker symbols should we find in the output binary (.o file), with what attributes and satisfying what properties? What data layouts should be used for various data types? ABIs answer these questions, at least in theory.

Since ABIs are specified only informally and partially, and are not rigorously tested, divergence in these details currently causes compatibility problems between compilers, even if they are intended to implement the same ABIs. This project would involve writing a formal specification of (some of) these details, partly from ABI documentation and partly by observing the output of real compilers, then using differential testing to identify divergences and (perhaps) report bugs in real compilers. This includes such issues as how particular source language features map to particular linker-level features, across different code models and when built for static or dynamic linking. Some experimental work will be necessary, particularly in the area of extended/non-standard C features (packed structures, bitfields, ...) and C++ (vtables, typeinfo, ...), with differential testing against gcc and LLVM toolchains.
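
To give a flavour of the kind of property such a specification might state and test, consider an inline function (a sketch of mine; the mangled name follows the Itanium scheme, but treat the details as illustrative):

// inline.hpp -- included from two different translation units
inline int twice(int x) { return x + x; }

// If an out-of-line copy of twice() is needed, each referencing translation
// unit may emit its own instantiation as a weak symbol (_Z5twicei) in a
// COMDAT group, and the linker must keep exactly one copy. A testable
// specification would let us check, for both gcc and clang, that the
// emitted .o files really contain a symbol with that name, weak binding
// and the expected section/group properties.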

A lightweight user-space virtualisation system using the dynamic linker

The dynamic linker (ld.so) is a critical component of commodity Unix platforms. It is used to interpret almost all binaries on the system, and contains the earliest-run user-space instructions. It is therefore a potential interface for virtualisation: a modified dynamic linker can provide unmodified application code with the illusion of an extended kernel interface and/or differing instruction semantics. As with hypervisors, this can be done with either trap-and-emulate or binary-rewriting techniques, or both. This project concerns building a usable dynamic linker which virtualises the user-space environment on a mainstream Linux or FreeBSD platform and adds one or more “added value” services or tooling features. The choice of added value is somewhat free, but a simple suggestion is a memory profiler, similar to Valgrind's massif but offering much lower run-time overhead. Another might be a filesystem extension something like flcow or fakeroot.

Either way, the project will involve implementing an extended version of some existing dynamic linker. To be useful, this should include (1) self-propagation support, so that child processes are executed under the new dynamic linker; (2) instrumentation of the necessary instructions or system calls to deliver the added value; and (3) an implementation of the added value itself. The starting point will include, should the student find it useful, a basic proof-of-concept tarball for extending the GNU C library's dynamic linker on Linux. This project will demonstrate understanding of operating systems, assembly-level programming, compilers, virtualisation, and instrumentation-based programming tools. There are lots of possible extensions, including instruction emulation, system call interposition (particularly challenging in multithreaded contexts), supporting statically linked binaries, supporting setuid binaries, adding simple instrumentation primitives (e.g. call relinking, GOT/PLT redirection), and integrating heavyweight instrumentation systems (by plumbing in a system like Valgrind, DynamoRIO or FastBT).
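
To give a taste of the “added value” idea, below is a deliberately crude sketch of a byte-counting allocation profiler done the familiar LD_PRELOAD way. The project's point is that a modified ld.so could provide this kind of interposition more robustly and more cheaply, without relying on LD_PRELOAD at all.

// preload.cpp -- build with: g++ -shared -fPIC -o preload.so preload.cpp -ldl
// then run a victim program under: LD_PRELOAD=./preload.so ./victim
#ifndef _GNU_SOURCE
#define _GNU_SOURCE /* for RTLD_NEXT */
#endif
#include <dlfcn.h>
#include <stdio.h>
#include <stdlib.h>

static size_t total_requested;

extern "C" void *malloc(size_t size)
{
    static void *(*real_malloc)(size_t);
    if (!real_malloc) // note: a robust version must cope with dlsym() itself allocating
        real_malloc = (void *(*)(size_t)) dlsym(RTLD_NEXT, "malloc");
    total_requested += size;
    return real_malloc(size);
}

__attribute__((destructor)) static void report_totals()
{
    fprintf(stderr, "total bytes requested via malloc: %zu\n", total_requested);
}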

A testable formal specification for dynamic linking

(This one is probably more of an MPhil project.)

Dynamic linking exhibits subtle variations both across different Unix platforms (such as GNU/Linux, FreeBSD, OpenBSD etc.), and across different dynamic linkers available even within just the GNU/Linux platform (such as musl, uClibc and glibc).

This project will use a combination of documentation and experimental methods to produce a precise specification (yet “loose”, i.e. accommodating allowable variation) of what happens when a dynamic linker loads an executable and its constituent libraries.

Given an input binary, the output is a memory image in the address space containing the dynamic linker. A host of details matter to this transformation: symbol visibility, relocation semantics, program header semantics, and so on. We currently have a specification (in Lem) of some of these, which has been used as a testable specification for static linking. This project will most likely benefit from extending these, albeit in the context of a new executable specification simulating the dynamic linker rather than the (static) “batch” compile-time linker. Another useful component would be a ptrace()-based tool for testing against the memory image actually produced by a real dynamic linker.

Initially one would probably factor out the filesystem aspects, i.e. assume that the output of ldd is known and ignore features such as preloading and auditing that can affect this. These could then be added in later as extensions. One would probably also want to work with preexisting binaries; specifying and checking the (static) production of dynamically-linkable binaries could again be a useful optional extra, for which we have some initial work.
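
As a complement to the ptrace()-based tool, a small in-process probe (my own sketch, using the standard dl_iterate_phdr() call) can dump part of the memory image that the dynamic linker actually produced, which is handy as an easy source of ground truth during development:

// phdrs.cpp -- print each loaded object's name, load address and header count
#include <link.h>
#include <stdio.h>

static int print_one(struct dl_phdr_info *info, size_t size, void *data)
{
    printf("%s loaded at %p with %d program headers\n",
        info->dlpi_name[0] ? info->dlpi_name : "(main executable)",
        (void *) info->dlpi_addr, (int) info->dlpi_phnum);
    return 0; /* keep iterating */
}

int main(void)
{
    dl_iterate_phdr(print_one, NULL);
    return 0;
}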

A machine-readable, testable system call specification usable by instrumentation-based tools

Many programming tools, including debuggers and tracers and also, especially, tools built with instrumentation frameworks such as Valgrind or DynamoRIO, require knowledge of the system call interface. Currently these use hand-maintained knowledge of syscalls which is often incomplete and/or incorrect. Pooling and automating these specifications would improve quality and completeness, reducing the costs of keeping tools up-to-date. Candidate tools/frameworks include qemu, DynamoRIO, Valgrind, gdb, strace, and our own ppcmem/rmem.

A system call's register footprint (arguments in/out, clobbers) is defined by the kernel ABI. Most system calls also have a memory footprint, which is much harder to describe. Some tooling for describing and testing the Linux system call interface, and a very partial specification of calls' read/written memory footprints, can be supplied as a starting point.
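
To give a flavour of what a machine-readable memory footprint might look like, here is a format I'm inventing purely for illustration (not the supplied starting point), describing read(2) and write(2):

/* Illustrative only: a hypothetical footprint description.
 * Argument numbering follows the kernel calling convention (arg0..arg5). */
enum footprint_kind { READS_MEM, WRITES_MEM };

struct footprint
{
    enum footprint_kind kind;
    int buf_arg;        /* which argument holds the base address */
    int len_arg;        /* which argument bounds the length... */
    int len_is_retval;  /* ...or is the length given by the (successful) return value? */
};

/* read(fd, buf, count): on success, writes [buf, buf + return value) */
static const struct footprint read_footprint  = { WRITES_MEM, 1, 2, 1 };
/* write(fd, buf, count): reads [buf, buf + count) */
static const struct footprint write_footprint = { READS_MEM,  1, 2, 0 };

Real calls quickly get harder: think of calls whose footprint depends on flags or on pointed-to structures.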

Other useful information includes argument types (mostly done already), possible error codes and their necessary and/or sufficient preconditions, the latter perhaps initially limited to argument well-formedness but extending later to details of the existing memory map, file descriptor state, resource limits, signal dispositions, and so on. (Of course error conditions also depend on filesystem state, which is rather complex by itself, and subject to interesting race conditions; a thorough job of this is not feasible within one project, but some cases could be addressed.)

One extension of memory footprints concerns the well-definedness of the contents of memory written and/or read—for example Valgrind's Memcheck would ideally consume a specification of output (written) definedness as a function of input (read) definedness.

System calls are “downcalls”; the specification could also consider system “upcalls”, mostly meaning kernel-generated signals, the conditions on which they might occur and the properties expected to be true of the process state at those times.

This project would ideally pick a couple of tools and a couple of architectures (including AArch64) and produce a testable specification usable by both tools on both architectures as a replacement for their current ad-hoc approaches, including specification of those aspects of the interface that matter to those tools. We assume the host tool/framework already has mechanisms for recognising when a system call is occurring, and what state the process memory/registers are in. We also assume the tool has a means to request system calls (typically emulated by issuing host system calls, perhaps doing a verbatim or modified version of the guest code's system call, e.g. to do guest/host address translation).

Maintaining testability of the overall spec is useful, e.g. using SystemTap or DTrace to check actual system state against what the specification says should happen. Again a proof-of-concept for memory footprints will be provided for extension.

[/research] permanent link contact

Wed, 21 Sep 2016

Project suggestion: run-time type information in the FreeBSD kernel

It's past time I wrote up some Part II project suggestions. This post is about a new one. See also one from last year about a debuggable implementation of OCaml, as well as my suggestions from 2014, another from 2014, and other standing suggestions.

My liballocs project is a library and toolchain extension that allows tracking of run-time allocation types in large C programs, precisely and efficiently. On top of this, the libcrunch system uses this information to do run-time type checking, such as checking that a pointer cast matches the type of the actual pointed-to object. The idea is to make C coding a lot more easily debuggable, by providing clean failure messages at run time, rather like ClassCastException or ArrayIndexOutOfBoundsException in Java. There are various other applications in debugging and program comprehension, since liballocs effectively adds an oracle capable of answering the question “what is on the end of this pointer?”.

Both of these systems have been developed for user-space code but could usefully be applied in the kernel. The toolchain extensions, which gather type information from binaries and analyse allocation sites in C source code, should work with kernel code mostly unchanged. However, the liballocs runtime is written against user-level interfaces and cannot be used as-is.

There are two obvious approaches to creating a liballocs runtime suitable for the kernel.

One is simply to modify the existing liballocs runtime to support kernel-side interfaces. This should yield a liballocs.a library that can be linked into the kernel, adding code to observe the kernel's allocations and maintain type metadata on them as it executes. The main differences will be how the runtime allocates memory, and how it locates and loads type information. Likely, the latter will be trivial (the type information can simply be linked in to the kernel), except perhaps for loadable modules (they must be created with care, so that they contain any additional type information). The former is more complex, since the current runtime makes extensive use of Linux's unreserved virtual memory (via MAP_NORESERVE) which is not available as such in the FreeBSD kernel, although equivalent alternatives are likely possible with some hacking.
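
For reference, the user-space runtime's big, sparsely used metadata regions are obtained with roughly the following idiom (a simplified sketch); the kernel port needs an equivalently cheap, lazily backed reservation:

#include <sys/mman.h>
#include <stddef.h>

/* Reserve a large region without committing physical memory or swap;
 * pages are backed only when actually touched. */
static void *reserve_metadata_region(size_t size)
{
    void *p = mmap(NULL, size, PROT_READ|PROT_WRITE,
        MAP_PRIVATE|MAP_ANONYMOUS|MAP_NORESERVE, -1, 0);
    return (p == MAP_FAILED) ? NULL : p;
}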

The other option is to use DTrace to write a non-invasive “monitor” process, started early during boot, that captures equivalent information to the liballocs runtime. One could then implement a client library which mimics the liballocs API and supports the same kinds of query. This approach is likely simpler to get started, and allows for a completely external, non-invasive client that can be used without recompiling the kernel. However, it is less reliable because it is likely to miss certain kernel allocation events, such as those happening early during kernel initialisation, or those that DTrace drops because of buffer contention. These will lead to incomplete type information. It is also unlikely to be able to answer queries coming from the kernel itself, so is useful for debugging and comprehension tasks (queried by a user-space program) but not for run-time type checking.
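
Either way, the deliverable is a query interface over the captured metadata. Purely for illustration (these names are made up, not liballocs' real API), the queries answered might look roughly like this:

/* Illustrative only -- not the real liballocs API. */
struct alloc_info
{
    void *base;             /* start of the containing allocation */
    unsigned long size;     /* its size in bytes */
    const char *type_name;  /* e.g. "struct vnode" */
};

/* "What is on the end of this pointer?" Returns 0 on success. */
int kalloc_query(const void *ptr, struct alloc_info *out);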

Both versions will require careful understanding of the FreeBSD kernel's allocation routines, for which some familiarisation time must be allowed. The DTrace option would be a good vehicle for exploring this, after which an executive decision could be made about whether to switch to the full in-kernel option.

[/research] permanent link contact

Wed, 30 Sep 2015

Project suggestion: an observable OCaml, using liballocs

It's the season to suggest student projects, and this post is about one which follows up a series of blog posts (1, 2, 3, 4). Feel free to see also my suggestions from last year, another from last year, and other standing suggestions.

This post is about a suggested project which will produce a working implementation of the OCaml programming language, using an existing front-end but a very different back-end.

Unlike the mainstream implementation, this implementation will be optimized for debuggability, or more generally “observability”. The compilation strategy will be to translate to C, hence obtaining the debuggability (and, to some as-yet-unknown extent, the performance) of the C compilation toolchain.

This means at least three key differences. Firstly, all locally-bound names will map to C local variables. Secondly, OCaml's lexical closures will be implemented in a highly compatible way, perhaps using the technique of Breuel (which is already implemented in the GNU C compiler, but with stack allocation, whose lifetime semantics are not appropriate for OCaml). Thirdly, all allocations will have associated type metadata at run time, using liballocs. This will allow parametric polymorphism to be “seen through” by a debugger—so it can tell you, for example, that some stack frame of a function whose type is 'a -> 'a is actually activated with 'a as int, say. (This may or may not be the right formulation of “seeing through”—stronger and weaker versions are possible; ask me!)
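
To give a flavour (and only a flavour: this is hand-written, plain closure conversion, not the Breuel-style nested-function technique just mentioned, nor necessarily the project's eventual scheme), the generated C for a tiny OCaml function might look something like the following, with each OCaml binding visible as a named C local or field.

/* Sketch of possible output for:  let make_adder n = fun x -> x + n  */
#include <stdlib.h>

struct closure_env { long n; };                 /* captured variables, heap-allocated */

static long adder_code(struct closure_env *env, long x)
{
    long n = env->n;                            /* 'n' and 'x' are ordinary C locals */
    return x + n;
}

struct closure
{
    long (*code)(struct closure_env *, long);
    struct closure_env *env;
};

struct closure *make_adder(long n)
{
    struct closure_env *env = (struct closure_env *) malloc(sizeof *env);
    env->n = n;
    struct closure *c = (struct closure *) malloc(sizeof *c);
    c->code = adder_code;
    c->env = env;
    return c;   /* liballocs can record the type of each heap allocation at its call site */
}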

Key ingredients are the following:

Optional “extension”-style challenges might include the following:

Evaluation of the system is mostly via performance—the goal would be to get within a reasonable factor of the OCaml native-code compiler. We can also do some experiments to measure observability of our system. One simple experiment might interrupt both versions of a program at randomly chosen call sites and count how much of the local state we could recover (number of frames in the backtrace, number of locals in those frames; perhaps even doing a diff on their values). Call sites are a good choice because all but one frame of the stack is always stopped at one, and instrumenting the native code so that we can do lock-step breakpointing, by counting call invocations, should not be too difficult (though it might mean disabling inlining, which does affect what is being measured). Usually there'll be no contest, since the original OCaml native compiler doesn't let you observe locally bound values at all. There is, however, a development branch which does, and comparing with the bytecode debugger (ocamldebug) is also a good bet.

[/research] permanent link contact

Wed, 16 Sep 2015

Partially evaluating a bytecode interpreter using C++ templates

I should start this by saying that I'm definitely not a C++ metaprogramming genius. People have done outrageously clever and diverting things with C++ templates, and I am not about to outdo anyone. Nevertheless, this might be interesting.

I was facing the task of taking code in a simple but obscure stack machine language—the stack machine from DWARF, if you couldn't guess—and turning it into vaguely efficient machine code. My thinking was spurred on by systems like Truffle, or PyPy's tracing JIT, which produce compiled code from only source code and an interpreter for the source language. Unlike those systems, I didn't need dynamic compilation—offline is fine. But also, I wanted a system with easy support for C and C++ intrinsics, zero runtime baggage, and something I could prototype in a day or so. It turned out that writing the interpreter in C++ templates, and using the C++ compiler as a partial evaluator, was reasonably successful at all these. Here's how it works.

To start with, I simplified right down from DWARF to an analogous but much simpler stack machine where all operands are passed on the stack. We model instructions using simply an enumeration. This is not essential, or even desirable, except for exposition.

/* instruction -- a simple enumeration */
enum instr
{
    PLUS, /* add top two stack slots and push result */
    BZ,   /* pop val and dest off stack; if val is zero, branch to dest */
    PEEK, /* pop an address off the stack and push its contents */
    DUP2  /* clone the top two elements of the stack */
};

[I've edited this slightly from my initial version of this post, to add the DUP2 instruction. This allows me to include a slightly more meaningful example program—thanks to Ross Bencina for prompting that.]

The four instructions are representative of the classes of operation I care about: arithmetic, control, C intrinsic and stack manipulation. They're not a very complete machine, but they suffice to illustrate things. To witness the power of this somewhat-operational stack machine, a simple program which combines all four opcodes is as follows.

typedef program< PLUS, program< DUP2, program< BZ, program< PEEK, void > > > > prog;

This program adds two numbers taken as input, and if they sum to something non-zero, it interprets that as an address and peeks it, returning what's stored there. Otherwise it returns zero. Note that since we only have a computed branch instruction, not a direct branch, the branch target exists as data in the initial stack. From top to bottom, our initial stack must be [arg1, arg2, 4], where arg1 and arg2 are the two input values and 4 is the program counter to branch to (the end of the program). We don't need to store the literal zero in this case (I'll let you figure out why).

So, clearly, a program is a sequence of instructions. We model it in the C++ compiler as a template-level sequence, with the opcode as one template argument and the program “tail”, i.e. the following instructions, as another template argument.

/* program -- a sequence of opcodes */
template <unsigned Op, class Next>
struct program
{
    static const long val = Op;
    typedef Next tail;
};

In effect, this gives us a “linked list” structure but at the template level. We can write some simple helper functions over this kind of structure, using a functional style in which template specialization is how we do case-splitting and template instantiation is how we do recursion.

template <class Seq> 
struct sequence_length
{
    static const long value = 1 + sequence_length<typename Seq::tail>::value;
};

template <>
struct sequence_length<void>
{
    static const long value = 0;
};

/* Utility for getting the tail of a type-level sequence. */
template <typename Seq>
struct tail_of
{
    typedef typename Seq::tail type;
};
template <>
struct tail_of<void>
{
    typedef void type;
};

/* Utility for getting the nth of a sequence, then the rest;
 * -- it's like "let s = drop n Seq in (hd s, tl s)" */
template <unsigned n, class Seq> // n > 0 case
struct nth
{
    static const long val = nth<n-1, typename tail_of<Seq>::type>::val;
    typedef typename tail_of< nth<n-1, typename tail_of<Seq>::type> >::type tail;
};
/* termination case: n == 0 */
template <class Seq>
struct nth<0, Seq>
{
    static const long val = Seq::val;
    typedef typename tail_of<Seq>::type tail;
};

The main use of sequences is to model the program itself, a sequence of instructions. We also use them to model the program counter, encoding it simply as the “tail” of the program, i.e. the subsequence of instructions from the current position to the end of the program. (This won't stop us from jumping backwards, because we'll always pass around the whole program too. I'll come on to jumps a bit later.)

What's our program state? It's a program counter and an operand stack. Since, in general, our stack will not be statically known, we have to model it at the value level, passing it around as “real” arguments at run time rather than template arguments. (I did create a version where a parallel family of template specializations could be used to encode statically known stacks. That's neat because you can get the C++ compiler to partially evaluate the stack-machine program for known input. But I'm skipping that extra complexity here.)

To actually represent the stack at run time we'll use a plain old array of long. I'll show you that in a second. Before I do, let's start to piece together what our evaluator looks like. It's a template that takes arguments standing for the whole program and the program tail, i.e. program counter. The whole program is always statically known, so is a template argument. There are finitely many possible values of the program counter, so we can make the compiler enumerate them; this means the program tail can be a template argument too. (This works nicely with how we do branches; I'll come on to that.)

/* States in a computation */
template <
    /* Program */ typename Program,
    /* ProgramTail */ typename ProgramTail    /* i.e. a suffix of the Program, 
                                                 beginning with the next instruction to run. */
    >
struct state
{
    static long eval(/* some args... */)
    {
        /* figure out our next state and delegate to it... */
    }
};

We also want each state to support an eval() method which executes the program's continuation from that state. Most of the time this will work by calculating the next state and delegating to its eval() method. For states that are “end states”, like when we hit the end of the program, we'll instead return something—the value on the top of the stack. We can handle this case using another specialization (which we'll define later).

A simple opcode's eval() method looks something like the following, which implements PLUS. We pass the stack as an argument to the eval() method, in the form of two pointers, bottom and top.

template <
    /* Program */ typename WholeProgram,
    /* ProgramTail */ typename ProgramTail
    >
struct state< WholeProgram, program<PLUS, ProgramTail> >
{
    static long eval(long *bottom, long *top)
    {
        top[-1] = top[0] + top[-1];
        return state<WholeProgram, ProgramTail >::eval(bottom, top - 1);
    }
};

Here we destructively update the stack, decrementing the top. Note that the whole interpreter works in continuation-passing style, and the current stack is always passed around explicitly and linearly—all our eval() calls are tail calls. Currently we malloc() the stack eagerly to a large-ish size, and abort on overflow. But linearity means we could even start small and then realloc() it if it overflowed. A neater trick would be to make it an actual stack that is lazily allocated by the OS, using mmap()—this would give us gradual allocation without cluttering our instruction stream with overflow checks or realloc() calls (but I haven't implemented this yet).
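
For concreteness, here is roughly how one might drive the evaluator on the example program, assuming the remaining opcode specializations (DUP2, BZ, PEEK) and the end-of-program corner cases are handled as in the full snapshot linked at the end of this post:

#include <cstdlib>

int main()
{
    static long cell = 99;
    long *stack = (long *) malloc(4096 * sizeof (long));
    if (!stack) abort();
    long *bottom = stack;
    bottom[0] = 4;            /* branch destination: one past the last instruction */
    bottom[1] = 0;            /* arg2 */
    bottom[2] = (long) &cell; /* arg1: the sum is then a valid address for PEEK */
    long *top = &bottom[2];   /* top points at the topmost occupied slot */
    /* start in the initial state: the "program tail" is the whole program */
    return (int) state<prog, prog>::eval(bottom, top); /* returns cell's value, 99 */
}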

That's the basic idea of our evaluator: each opcode can be implemented by a template specialization which gets elaborated differently according to its position in the program. What about branches? We can handle branches by a single specialization for the case where the PC is not known statically. As before, we deal with the not-statically-known value by accepting it as a method (value) argument rather than a template (type) argument. The simplest and fastest way to do this is to code up a jump table. For this to work, we also have to bound the size of programs.

#define MAX_PROGRAM_SIZE 64

struct run_time_pc {};

template <
    /* Program */ typename WholeProgram
    >
struct state< WholeProgram, run_time_pc >
{
    static long eval(long *bottom, long *top, long pc)
    {
        switch (pc)
        {
#define CASE(n) \
case n: return state< \
        WholeProgram, \
        program< \
            /* We build the tail starting from the nth, using the definitions in `nth<>'. */ \
            nth<(n), WholeProgram>::val, \
            typename nth<(n), WholeProgram>::tail \
        > \
    >::eval(bottom, top);
            CASE(0)
            CASE(1)
            CASE(2)
            CASE(3)
            /* ... */
            /* MAX_PROGRAM_SIZE */
            default: 
                assert(pc >= MAX_PROGRAM_SIZE);
                warnx("branched out of bounds");
                abort();
        }
    }
};

Now, to implement a branch opcode, all we need to do is have it call this run_time_pc specialization's eval(), which takes an extra argument: the destination program counter, passed as a value. Passing it as a value is what allows the jumping instruction to actually decide the destination at run time. At worst, the C++ compiler compiles this switch statement down to a jump table which can jump to the code implementing any of the program's instructions. If this happens, the compiler must enumerate all these destinations up-front in order to build the jump table, hence why we had to bound the program size. At best, though, if we're doing a direct branch, the branch target will be statically known in the caller of this eval() template, allowing the C++ compiler to optimise away the dead switch cases, replacing the jump table with a direct branch. (In our current stack machine, all branch destinations are encoded on the stack, so are potentially data-dependent, meaning we don't hit this direct-branch case. But in the actual DWARF stack machine, we can encode direct branch targets into the instruction, so we will hit this case.)
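
For completeness, here is a sketch (mine, not taken from the snapshot) of how a BZ specialization might hand off to that run_time_pc eval(), popping the value and the destination and passing the latter as a run-time argument:

template <
    /* Program */ typename WholeProgram,
    /* ProgramTail */ typename ProgramTail
    >
struct state< WholeProgram, program<BZ, ProgramTail> >
{
    static long eval(long *bottom, long *top)
    {
        long val = top[0];    /* value to test */
        long dest = top[-1];  /* destination: only known at run time */
        if (val == 0)
            /* taken: delegate to the run-time-PC state, passing dest as a value */
            return state<WholeProgram, run_time_pc>::eval(bottom, top - 2, dest);
        else
            /* not taken: continue with the statically known continuation */
            return state<WholeProgram, ProgramTail>::eval(bottom, top - 2);
    }
};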

Since each entry in this jump table jumps to a specific part of the program, we immediately transition back to having a statically known PC value. Our implementation of branches is completely self-contained within this one mechanism.

Actually, make that two mechanisms, because I want to add an alternative. We can avoid hackily bounding the size of the program, if we don't mind rewriting our jump table as a chain of if–else branches. We can use templates to generate this chain of branches, for any length. To do this, we parameterise our run_time_pc type on the upper bound of what program counter value we might be jumping to. Initially this is the length of the program; recursive elaborations' if–else tests whittle this down to zero, the lowest-numbered place we could jump to. In this way, the compiler effectively builds our switch statement (as an if–else chain) for us. As before, if it's a direct branch, all the tests will get elaborated away. If it's not, whether we get a nice jump table depends on how clever the compiler is. For me, sadly g++ seems not to generate a jump table, even at -O3, but perhaps it can be coaxed into it somehow (let me know if you know!).

template <unsigned UpperBound> 
struct run_time_pc {};

template <
    /* Program */ typename WholeProgram,
                  unsigned MaxPc
    >
struct state< WholeProgram, run_time_pc<MaxPc> >
{
    /* This is fast only if the C++ compiler can turn if--else chains back into jump table. */
    static long eval_if_else(long *bottom, long *top, long pc)
    {
        if (pc == MaxPc) return state<
            WholeProgram,
            program<
                /* We build the tail starting from the nth, using the definitions in nth<>. */
                nth<MaxPc, WholeProgram>::val,
                typename nth<MaxPc, WholeProgram>::tail
            > 
        >::eval(bottom, top);
        else return state<WholeProgram, run_time_pc<MaxPc - 1> >::eval_if_else(bottom, top, pc);
    }
};
template <
    /* Program */ typename WholeProgram
    >
struct state< WholeProgram, run_time_pc<0> >
{
    static long eval_if_else(long *bottom, long *top, long pc)
    {
        if (pc == 0) return state<
            WholeProgram,
            program<
                /* We build the tail starting from the nth, using the definitions in nth<>. */
                nth<0, WholeProgram>::val,
                typename nth<0, WholeProgram>::tail
            > 
        >::eval(bottom, top);
        else abort();
    }
};

I promised to show how we deal with the end of the program. We just specialize for the case where the program tail is void, i.e. the empty sequence.

/* This specialization handles "end of program". */
template <
    /* Program */ typename Program
    >
struct state<Program, void>
{
    static long eval(long *bottom, long *top)
    {
        if (top <= bottom) abort();
        return top[0];
    }
};

There are a few more fiddly things to figure out to make it all compile, but I've shown you all the interesting stuff. A working snapshot of the implementation is here. It compiled only with g++, not clang++, the last time I tried; fixes are welcome. It implements everything I've mentioned. In the near future I'll turn it into an actual DWARF compiler, and it will appear in liballocs. You might ask: why? In short, compiled DWARF expressions are hopefully a way to achieve faster introspection on complex memory layouts. The liballocs metadata toolchain already “compiles” much DWARF content down to an efficient in-memory representation. By compiling not only manifest DWARF structure but also computational DWARF expressions, we can efficiently introspect on structures whose layout can only be described precisely via a computation—think variable-length arrays, hash tables, and so on. Eventually I also want to support “compiled frame information”, which I hope will allow faster stack walking.

[/research] permanent link contact

Tue, 30 Jun 2015

A position on licensing research software

I have a habit of putting code on my github page without any statement of license. Sometimes this is oversight or laziness, but sometimes it's entirely deliberate. It's not because I don't want to open-source or otherwise license my code; I do. But, as this post covers in some depth, the Devil is in the details.

In particular, I believe that software whose purpose is scientific or researchy—to exist, to prove a concept, to be extended by other researchers perhaps, but not to acquire users—can be better off without an explicit license, at least while it remains relatively immature and has only a small number of contributors. In short, this is because there's little practical loss from not licensing at this stage, whereas by forcing a choice of license, it's possible to choose wrongly.

For some reason, this belief, at least when compressed into 140-character snippets, attracted quite a lot of opprobrium. So allow me to explain my justification and respond to the criticisms. Before I go further, I should add that I'm definitely not pretending to be an expert on licenses, law, tech transfer, aggregate properties of big companies' IP policies, or anything else I'm writing about. Everything you're about to read is informed by limited personal experience and anecdote. In fact that's how I got here: I'm very aware that licensing is something I don't fully understand. For me, that's a strong motivation for avoiding making premature commitments that I might later regret. You have the opportunity of enlightening me, and might even change my position—but first please understand what my position is and why I hold it.

So, here goes: I claim that early-stage scientific proof-of-concept software is often better off without an explicit license. Naysayers (some hypothetical, but mostly real) claim the following.

“But then people can't legally use your software?” They can, but they should ask first.

“But people rarely bother to ask.” That's fine; this software is not aiming at getting lots of users.

“But you might miss out on users.” I don't want users.

“But you might miss out on collaborators.” I doubt it. Collaborators have to communicate, not just consume. They get in touch. Or, more likely, I meet them through other channels than their browsing my repositories online.

“But people aren't allowed to look at it without a license.” No, that's not how copyright works—quite the opposite. The code is published, so you're allowed to look.

“But company policies mean employees won't be allowed look at your code, even though it's allowed by law.” I've worked at two large IP-paranoid companies, and in both cases employees would happily exercise discretion about whether to look at code lacking a permissive license. (This was true even if official guidance left no room for their discretion.) Dealing with third-party code was also a well-established part of corporate culture. Among other things, this makes these developers much more likely to ask for permission than others. During my three months at one such company, in a very junior position, I was encouraged to make enquiries not only about licensing third-party software that appeared useful, but also about perhaps contracting the developer to support our effort. This surprised me at the time. But being a corporate entity dealing in non-permissively-licensed IP makes you more amenable to dealing with other people's IP in whatever way is appropriate. As I'll cover shortly, the well-known copyright paranoia (including what I'll call “GPL terror”) arises because of cases where the license on offer doesn't suit the prospecting company, and where there's good cause to fear litigation.

“But if they look at your code, they risk creating a derived work, so could get sued by your employer!” Companies know the pragmatics of IP better than this. If I were talking about Big Company Labs's github page, I'd agree that explicit licensing would be highly desirable. Big companies tend to be litigious. In all cases, the question is: who is liable to get sued by whom? But we're not talking about that. The risk to a company from simply looking at my code (emphasis important) is miniscule. The risk to me and to the public good, from premature commitment to a given license, is non-trivial, because it risks closing off opportunities later.

“Why not GPL your code?” I'm not against doing so in principle. In fact, even though I'm told it's falling out of popular favour, the GPL is one of my favourite licenses. But if I GPL'd my code, many more companies' employees would be scared off. Many big companies' legal departments have spawned corporate policies which foster a terror of GPL'd code. Partly this terror is justified: much GPL'd code is copyrighted by large competing companies, who don't want to relicense it and do want to sue infringers. However, the terror is not justified in all cases. I'm not a competing company, I am amenable to relicensing, and I'm not likely to sue infringers anyway. So companies have no reason to be terrified of looking at my code. Unfortunately, many company procedures are written in a blanket style, with rules like “don't look at GPL'd code!” that are not sensitive to these nuances.

“How is not putting any license possibly better? It confers strictly fewer rights!” I know it's perverse. But since these “don't look at GPL” policies exist, the absence of any GPL notice would actually make it more defensible, according to these policies, for employees to look at my code in any depth. The real problem here is that, as far as my anecdotes can see, many corporate legal people don't even understand the basic properties of the domain. Software doesn't “have” a license; it is licensed and can be licensed differently to different people. I sometimes say that licensing is “bilateral”, in an attempt to convey this multi-party “edge, not vertex” nature. Being GPL'd doesn't exclude other licensing possibilities. Sadly, corporate legal types tend to act as though it does. They're not the only ones. Some of my peers appear keen to (as I'll call it) “licensify” all scientific software. They seem to be acting on similar misunderstandings. I've even seen it written that one should not use the CRAPL because it allegedly makes it “impossible” to re-use the software. If you actually read and understand the CRAPL, it does no such thing (notwithstanding that as a license it is basically pointless). Even if the CRAPL were restrictive, the kind of software it's intended for—scientific software, as in my case—would almost always be available for relicensing anyway.

“Why not say “GPL, or other licenses by negotiation”?” It's not a terrible idea, but again, lack of nuance among corporate legal departments makes me believe that this will scare people off unnecessarily. I could be wrong about that.

“Why not MIT- or BSD-license your code?” I'm not against doing so in principle. But as a blanket policy, I worry that it'd weaken my ability to successfully do tech transfer later. In particular, the option of dual-licensing code is not as useful if you have already gone with a permissive license. This might weaken the position of any company set up to do the tech transfer. As I see it, this is the key difference between BSD and GPL from a researcher's point of view. In particular, note that pursuing tech transfer can be in the public interest (and bigger competitors are likely to be less public-spirited than a small start-up founded by scientists). I've nothing against permissive licenses in general; they just feel like too great a commitment to make in the early stages of a project. I happily release non-research software under these licenses (or even public domain) from time to time, as you can see. And I'll happily contribute under these licenses to other projects that have an established tech transfer model. The key points are that my early-stage projects don't yet have this, that there's more than one way to do it, and that licenses make a difference to what options are available.

“So you might later close off your research outputs for your own commercial gain?” Not at all. I'm dedicated to using my research outputs to create the greatest possible public social good. The question is how to do that. Whether BSD-style (permissive) or GPL-style (copyleft) licensing creates the most social good is still a matter of debate. It almost certainly varies case by case. Meanwhile, two other social goods need to be considered: being able to do more research, and actually getting research results transferred into practice. The commercial model can be useful in both of those regards, even though, ideologically speaking, it's not the model I prefer. It would be foolish to jeopardise these opportunities at the moment I publish my code, especially since nowadays the norm is to publish code early (no later than publishing the paper). Very often, at that stage it won't yet be clear how best to harness the code for social good.

“Aren't you inviting people to take the unreasonable risk of using your code without a license, which is breaking the law?” No. Anyone concerned with staying within the law is welcome to contact me about licensing terms. Anyone worried that I might sue them is welcome to do the same. This is no hardship: anyone interested in running or building on my experimental team-of-one research code almost certainly needs to contact me anyway—if not to build it, then to do anything useful with it. There's also an important separate point, which I'll return to shortly: scientific endeavour is built on personal contact and trust relationships, far more than either commercial or open-source software are built on those things. Even in the case of people who don't bother contacting me, the “reasonableness” of the alleged risk needs to be assessed in the context of the scientific culture.

“But in practice, if everyone followed your approach, won't people end up breaking the law more often?” Perhaps so. Most people break the law just going about their everyday business. Few of my countryfolk over the age of 30 have not transgressed our laws in countless ways: copying a tape, keeping a recording of a TV programme for longer than 28 days, wheeling a bike along a public footpath, borrowing their neighbour's ladder without asking, backing up a shop-bought DVD to their hard disk, and so on. Fortunately, the process of law enforcement involves discretion at the point of prosecution. This discretion is kept in check by social dynamics which, while mostly outside the legal system, are very real. If my neighbour prosecuted me for borrowing his ladder, it'd render future reciprocal altruism considerably less likely. Suing good causes creates bad press, regardless of whether it can legally succeed. The law is overapproximate out of necessity, so that the genuinely problematic infringers can be pursued, not so that absolutely all contravening activity can be shut down. It presents the option to prosecute, not the imperative to do so. That's not an argument for egregious overapproximation in our laws. Obviously, legal overapproximateness can be vulnerable to abuse, and should be minimised by a process of continuous refinement. But aspiring towards eventually reaching a perfectly precise law is a hopeless ideal. The law is a practical tool, not a formal system—it is both ambiguous and inconsistent. The letter of the law needs to be viewed in the context of the legal process—one carried out with the frailty and imprecision that come from human weakness, and the malleability that comes from human discretion. In short, we are doomed to living with a background level of lawbreaking, and it's in our interest to live this way. With licensing, the right question to ask is: how can our use of licenses affect scientific research, and how should it? This means thinking about process.

“But don't big companies have good reason to be conservative? Won't this harm your relationship with them?” Yes, they do, but no, I don't believe so. Conservative policies are justified in a corporate context. For big companies in competitive markets, the stakes are high, and the capital cost of pursuing legal cases is affordable. If an equally large competitor had such a case, it could turn into a very costly court battle (regardless of who wins). That's why big companies are paranoid about IP: they're afraid of competitors. But they're not afraid of small guys, like researchers or universities. If I had a credible case that a big company were infringing my copyright, the company would very likely settle with me out of court. In any case, as I covered earlier, if they want to use my software, they will want to negotiate terms with me anyway.

“Isn't your attitude socially irresponsible?” I believe I have a stronger social conscience than most people I meet. Everything I've just said is a position I hold in the interest of maximising public good, whether or not my beliefs are correct or effective. So no, I don't believe I'm socially irresponsible at all. Here are some things that I believe are socially irresponsible: to allow corporate capture of publicly funded research outputs; to make licensing commitments of publicly funded work without very careful consideration; to needlessly jeopardise the future potential to exploit publicly-funded work to public-good ends; to allow the practices of big-company legal departments to get in the way of scientific endeavour.

“Licences are good, m'kay?” This seems to be some people's position, but it's not an argument.

“All sensible software projects have licenses!” Software that seeks users is one thing. I've already covered why I'm talking about something else.

“Why are you resisting doing the normal, “responsible” thing?” My situation isn't normal. In this abnormal situation, the normal thing is not necessarily the most responsible thing.

“GPL is not free, troll troll...” I'm trying not to get into that. But I do suspect that this whole debate is really GPL versus BSD in disguise. If you don't believe in copyleft, then the only open-source licenses you're likely to use are BSD or MIT, and there's not much useful flexibility to be gained by deferring the decision. So you can just license all your code straight away. BSD in effect chooses a fixed attitude to commercialisation (“anything is fine; what goes around comes around”) whereas GPL tends to create a fork in the road. I want to hang on to my option on that fork, until I know that doing away with it is the right thing to do. The fact that the permissive-versus-copyleft debate never dies is a powerful argument against coming down firmly on either side. It's better to take decisions in context, case by case. That's exactly the position I'm defending. Note that Bruce Montague's article advocating the BSD license model for tech transfer is right if you're talking about big projects—it's harder to commercialise GPL'd code than BSD'd code, and deliberately so—but completely wrong for most small research projects written by a small group of collaborators. In those cases, it's much easier to relicense the lot, in which case your options for transfer include a dual-license arrangement in the manner of Qt or MySQL. This still permits GPLing the ongoing development (or not), while retaining the option to license it commercially. It says a lot about how subtle the issues are that a simple tweak of context can more-or-less invert the up- and down-sides of the two licenses. (I actually find Montague's article quite fuzzy, even FUDdy, about the details—it only really says that BSD is “simpler”, which any old Einstein should be quick to challenge.)

“I once had licensing trouble, bad enough to require lawyers. Therefore, clearly explicit licensing is essential!” This is a complete non-sequitur. I don't doubt how problematic these things can get. But it doesn't mean that any blanket policy will improve things. If IP lawyers have come into a scene where science is supposed to be happening, it means something has already gone wrong. The culture clash is between science—where trust, good faith and cooperation are routinely assumed—and corporate licensing, where you can't trust anyone beyond what their contract or license binds them to. In short, this happens when big companies get involved and impose inappropriate licensing terms. These companies are where you should be directing your opprobrium—not at me.

To properly explain the latter point, I need to ditch the question-and-answer format (which was already becoming tedious, I'm sure). The argument came up in reference to the Richards benchmark. Let me summarise my understanding of it (corrections welcome). Martin Richards wrote and distributed some benchmarking code. It was distributed as a tarball, in the scientific spirit, with no license stated. It turned out (later) that he was happy to release it into the public domain. Trust, goodwill, reputation, responsibility, ethicality are currency among scientists. Martin's release occurred within that culture. My github repositories do likewise.

Some time later, Mario Wolczko produced a Java version of the Richards benchmark, while working at Sun Labs. Working for a big company makes a person more IP-fastidious than average. Mario very conscientiously sought permission from Martin (perhaps unnecessarily—I'm not convinced he was actually making a derived work, but perhaps he was; no matter). He was of course granted Martin's blessing. He circulated his new code, including to Martin. Martin even began distributing it in his tarball.

The catch was that Mario's employer was a large company, and would only allow the code to be released under some restrictive licensing terms. You can still read them. They come in two parts; the first page is statutory US export regulations while the second page contains Sun's own licensing terms. These include the following restriction.

“Licensee may not rent, lease, loan, sell, or distribute the Software, or any part of the Software.”

This is a severe restriction. It's especially inappropriate for a work of this kind—a benchmark. Benchmarking is a concern that cuts across research areas: it's not just people working on a particular technique or problem that want benchmarks. Anyone working on any aspect of a language implementation might want to apply some “standard” benchmarks. A benchmark is more useful the more things it has been applied to. That's why we often like to group them into suites and circulate them widely. This license is effectively incompatible with that use.

I'm sure Mario tried his best against the might of Sun legal. It's telling that his current web page advises the “caveat emptor” approach: make your own mind up about how likely you are to be sued, and act accordingly. I'd agree it's not good at all that we are being forced to take this approach around a big, litigious company (as Sun's new owner Oracle certainly is). But it seems bizarre to blame this on Martin's original sans-license release. The blame is entirely on Sun. The problem is not a lack of license statement from the little guy; it's an inappropriate license imposed by the big guy. The right response is to decline to use Sun's code. (That said, it's worth noting that Martin's redistribution of the Sun code, although technically a breach of the license terms, has not resulted in Sun or Oracle suing him. So perhaps they, too, understand the value of scientific culture and have no interest in disrupting it.)

To put all this more succinctly, the onus to license doesn't weigh equally on all parties. For artifacts coming out of big players, there's good reason to demand licenses—not just any license, but fit-for-purpose licenses—precisely because big players can afford to sue. By contrast, even the University of Cambridge is a comparatively little player. It is vanishingly unlikely to sue anyone pursuing public-good scientific or research agendas, not only because it's expensive to do so, but because the reputational damage would be huge. And even if you think this particular university is a big player, that's immaterial because it doesn't own my research work's copyright anyway; I do.

I, like Martin, work within the scientific culture as a non-litigious “little guy”. It should go without saying that redistribution of my software towards scientific ends will always attract my blessing, not lawsuits—even if people don't ask first (mildly rude as that may be). Meanwhile, for commercial use, I'd like to defer fixing the exact terms until there's serious interest—so that I have some idea of how it's going to be used, and can choose terms appropriately. The pragmatics of “no license specified” and “restrictive license specified” are wildly different, even though a hard-headed legal interpretation makes the former a degenerate case of the latter.

The push towards “licensification” of scientific software actually works against this culture, since it erodes the norm that says that permission will be granted for doing reasonable things for scientific purposes. That's yet another reason why I resist this way of working. Licensification is also viral. If you believe strongly that all released artifacts should “have” “a license”, then you have to follow this to its transitive closure: to release anything that builds on anything, you must obtain a precise statement of license from all derived-from works. This is justified for commercially valuable software, but I don't believe it's the way science needs to work. And if you share my belief that premature commitment can do more harm than good, then we agree that science ought not to work that way.

You might complain that this “scientific culture” I've been talking about is nebulous. I'd agree. But capacity and inclination to sue are reasonable proxies. Call me naive, but if you really worry about the legality of doing science with artifacts released publicly by a reputable scientist working for a university, either you worry too much or the world has gone very wrong. On the other hand, it is reasonable to worry about artifacts released by big companies, because the culture they operate in is very different. That's why we need to pressure them into offering reasonable terms for the things they release, and decline to use their artifacts if they don't. That's what conspicuously failed to happen in the Sun/Richards case.

As more and more scientific research involves code, licensing of code among scientists is going to become much more of an issue. Misunderstandings are already rife: a quick bit of web searching will easily turn up things like this and this. We need to have a frank discussion about what the real issues are, and an open acknowledgement of their subtlety. I believe that blanket licensification is not the answer. It forces a decision to be taken too early, when too little information and expertise are available. I also fear that it will encourage a culture of litigiousness and bureaucracy that is neither desirable nor affordable within science.

As one tentative suggestion for how to make the law work better for us, I note that US copyright law's “fair use” allowance already covers educational purposes. Ideally it should also cover public-good non-commercial scientific purposes. In fact, this already is a factor in assessing fair use. But it's ill-defined, and I'm not aware of any useful case law on that front (are you?). If we want something to work towards, this kind of fair use law is what it should be—not the up-front licensification of every published scientific artifact.

Just as a coda: the above is what I currently believe, based on limited study and my very limited awareness of evidence, case law and the like. Although that belief has withstood a certain exposure to others telling me how wrong I am, it's not a conviction I'm strongly attached to. I went to the effort of writing the above not because I'm convinced I'm right, but because I don't like being branded “socially irresponsible”. Challenge my argument if you like, but don't doubt that I treat this issue with due care and attention. I'm still waiting for the counter-argument that will make me change my position. You might well have it. If you do, please let me know.

[/research] permanent link contact

Tue, 26 May 2015

Polymorphism and observability

[In an earlier post, I talked about debugging, and more generally “observability”, in ML-family languages. Later, I also clarified what I think “polymorphism” most usefully means. This post explores the less obvious relationships between polymorphism and debugging.]

When I talk to language-minded people about debugging code in ML-family languages, they tend to think that the difficulties are something to do with “the type system” and “polymorphism”. This is only half right. Polymorphism does complicate debugging, but it does so in every language, even ones with no “type system” of any kind. (As a sanity check, ask: does BCPL support polymorphism? The answer is clearly yes, at least according to my earlier definition.)

The axe I'm grinding in this series of posts is that “polymorphism” or “type-mumble” are no excuse for a lack of decent observability in ML-like languages. There are no major obstacles to implementing a nicely debuggable ML, and certainly none regarding polymorphism. Intuitively, this makes sense if we remind ourselves that polymorphism is to do with abstraction, whereas observation is done on the concrete: we're observing a concrete program state, laid out in front of us. (Of course, there are some unfortunate decisions taken by existing implementations of ML-like languages, that make retrofitting debuggability more difficult than it might be. That's a very different problem!)

A similar viewpoint explains why other kinds of fancy types present no obstacle. Even using fancy features of OCaml or Haskell, like GADTs or type classes, our program state still boils down to a big pile of atoms, sums, products and functions. The innovations in “type systems” have been in reasoning, specifically about safety. This is a question of dynamics: checking at compile time that invalid structures will never be constructed at run time. Observability isn't concerned with dynamics; it's about looking at the present, not the future. All we want to do is decode a static snapshot of the program. (Here I'm excluding the debugger feature of “altered execution”, i.e. enacting debug-time side-effects on the debugged process. How to do this safely is an interesting question, but I'm not going to dwell on it here.)

Can we expose more clearly why polymorphism isn't a problem? As I covered last time, “polymorphism” is a fancy word for deferring specialisation. Specialisation can be done in a compiler or at run time. At run time, specialisation means execution: execution is a specialisation process that culminates in the program's result. We can also think of this process as “decision-taking” in response to input.

Polymorphism during execution, witnessed in program snapshots, is very different from polymorphism in source programs. In source programs, the whole program is oriented around an unknown future: programs describe dependency on an “input”, supplied later. By contrast, observing a program state at run time is about decoding a present, not a future. Moreover, to help us do that decoding, we can exploit all the decisions that have been taken so far, i.e. all the specialisation that has occurred, during both compilation and execution, to reach the present state. Some of this specialisation can be called “monomorphisation”, because it has taken generic code and applied it in a specific context.

As before, I'll focus on OCaml. The OCaml compiler turns polymorphic source-level functions into generic run-time function objects (instruction sequences). Similarly, for polymorphic data types in source code, the compiler selects a size and layout, independent of any type variables that might be parameterising the definition. As we would expect, this is achieved using indirection: the fixed size and layout ensure that locally, storage can always be allocated and accessed generically. The specialised part of the data—for example, the payload of a list node—is indirected away, using pointers. If we were talking about C, these pointers would be pointers to void. OCaml's source language of data types lets us be more precise about these, by introducing type variables. But that's a meta-level bonus that helps us reason about dynamics. A snapshot of an OCaml program still reveals it as consisting of allocated objects and pointers between them, just like in C.
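
To make that indirection concrete, here's a small illustrative sketch in C-style structs (valid C++, and nothing to do with OCaml's actual object layout): the generic node has a fixed size and layout whatever the element type, because the payload hides behind a pointer, while the specialised node commits to a concrete payload.

/* Illustrative only: not OCaml's real representation. */
struct generic_node            /* roughly, an 'a list node laid out generically */
{
    void *payload;             /* the element, indirected away behind a pointer */
    struct generic_node *next;
};

struct int_node                /* the same node specialised to int elements */
{
    int payload;               /* the element stored inline: no indirection */
    struct int_node *next;
};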

Viewed as source code, a program consists largely of functions and data types. But during execution, we have other things too: activations of functions and instances of data types. It's usually these that we want to inspect when debugging. For example, a backtrace is a list of function activations. The heap is a collection of values—instances of some type. (It also contains closures, which are a bit of both; I'll ignore them, for simplicity, but you can ask me at the end.)

Here is the key observation about polymorphism at run time. Whenever a polymorphic function is activated, or when a polymorphic data type is instantiated, some instantiation of its type parameters is morally decided. “Morally” means that we could define an oracle, using the semantics of the language, that tells us how they are being instantiated. For example, it could tell us that at some given allocation site, we're creating a list of int rather than a list of 'a (whereas the latter is all the source can tell us). Exactly what this means, however, is slightly subtle.

One subtlety is that the code doing the allocation doesn't necessarily know what this instantiation is. That code might itself be generic! So maybe we're building an int list list out of some int lists. The code doing this might only know it's building an 'a list list, but our oracle would still tell us that the allocation “morally” has the more precise type int list. Another subtlety is that, of course, there's no guarantee at the implementation level that our runtime is helpfully defining any such oracle for us. Nor need the compiler have provided us any output that would help us implement the oracle. In the case of OCaml, these are definitely not true, and that's precisely why it's difficult to add debugging support to the current OCaml toolchain.

Another subtlety is that the “instantiation” does not necessarily yield something free from type variables. Although our int list list example got rid of all the variables, in some other cases we might find the best we can do is to instantiate 'a with 'b -> 'c, say. But this turns out not to stop us from observing anything we might logically be able to observe. I'll return to this shortly.

One way to make OCaml debuggable might be to directly implement this oracle, by maintaining extra state at run time. Whenever we call a polymorphic function or instantiate a polymorphic data type, we could stash information somewhere that explicitly records how the type parameters are being instantiated. Something quite similar was done a while back in the HashCaml project. Unfortunately, it's a fairly invasive change to the compiler. It's likely to meet resistance via a performance argument, which you can think of as the “frame pointer” debate, but for type information. Pushing around extra information creates a bit more pressure on registers and memory, so typically shaves a few percent off performance. In return, we make observability massively more straightforward. Apparently opinions differ on whether this is a good trade. All I'll say is that if frame pointers are good enough for the Linux kernel, they're good enough for me.

Instead of tracking allocation types up-front, one could do a deeper analysis to recover the same information on demand. If we assume we have a backtrace, that gives us a powerful chunk of context: it describes a nest of function activations. The top-level activation (which would be main(), if OCaml had main()) is always monomorphic, so we should be able to figure out all the subsequent instantiations all the way down the stack. Or, we can flip that around: starting from a given activation, we should be able to figure out any type variable instantiations by looking some distance up the stack, and in the worst case, all the way to the top. Currently this is what my OCaml-implementing colleagues prefer; they expect it can work by looking no more than a few frames up the stack in the common case. The logic involved is basically the same as that of the compile-time type checker—which now needs to be replicated in the debugger and/or the language runtime. That's an annoying chunk of replicated stuff, which I find distasteful. Also, this inference might be expensive—fine for an interactive debugger, but poor for other applications of run-time type information (like serialization routines or a tracing tool, say). The advantage is that it requires fewer changes to the compiler.

A third option would be to relax our aim of recovering source-level types. In practice, we don't necessarily care, at debug time, that we're looking at an int list. It might be enough to look at each list node individually, seeing that it's a Cons, and then, separately, discover that each Cons points to an int. In this way we've avoided “typing” at the same granularity that the OCaml language does typing, but we've still recovered a somehow “typed” view of the program (i.e. one interpreted in terms of source-level data types). Put differently, source-level types like list encode information about a whole structure, spanning multiple allocations. Perhaps all we really need is a piecewise, per-allocation view. Currently, OCaml's tagged-pointer implementation ensures that at word granularity, we can distinguish integers from pointers. That's not enough, because we can't, say, distinguish the first variant of type T from the first variant of type U, nor from the integer 0: all are encoded as a zero word. But if we add local tracking of ADT variants and a few other things, that might be enough for observability purposes, and would be less invasive than a full HashCaml-style solution. I find this promising, although I'm still working through the consequences.

Suppose we stick with our oracle-based approach, tracking a source-level type for each allocated value. There seems to be a complication. I mentioned that type parameters are decided at instantiation points, but also that we might only be deciding that 'a becomes 'b -> 'c, say—we're not fully monomorphising them. This makes sense, and just reflects the nature of functions. Suppose we have a list of functions. A perfectly valid such list might contain the list head function hd. That's a generic function of type 'a list -> 'a. When we instantiate our 'a list to one that can hold this function, we've specialised type parameter 'a to 'b list -> 'b. Our list is still polymorphic: we haven't got down to a monomorphic type. Does that mean we're lacking the ability to observe something in our program state? The answer is a resounding “no”! I mentioned that when debugging, we're looking at the present and not the future. The polymorphism in hd encodes the unknown future: we don't yet know what types of arguments the functions in the list will be applied to (it hasn't happened yet!). So, these polymorphic-at-run-time values in our heap represent the residual genericity in delayed computations, i.e. in functions. Functions encode things our program hasn't done yet, but might. They don't present an obstacle to decoding the current program state. In practice, any function has a name, even if it's a fake one generated by the compiler from the source code coordinates of a lambda. If we're in a debugger, getting the name of that function (or those coordinates) is comfortably good enough.

There's a final apparent complication. What about the empty list, or the null pointer? These seem to be polymorphic values. But unlike functions or data types, they're not going to get specialised further by activation or instantiation. The simplistic answer is that these values are okay because they're degenerate cases. It's not a practical loss of observability at run time if we can't answer the woodchuck-esque question of “what type of non-empty list would this empty list be if it wasn't empty?”. A more subtle answer is that these values aren't really polymorphic at all. If we think of how we would define the data type 'a list, we see that the Nil constructor, viewed in isolation, isn't polymorphic—it doesn't use 'a. In the context of this constructor, 'a is a “don't care” or ignored argument. An unparameterised constructor is only vacuously polymorphic: its meaning doesn't actually depend on the parameter. (This view, which sees constructors as somewhat independent of the type definition that encloses them, is one which OCaml's polymorphic variants directly build on.)

Finally, I alluded to some parallels with C. Just as the pointers which allow generic layouts for ML data types are equivalent to void pointers, so we have a similar problem when debugging C code: what's on the end of a void*? If I'm looking at a generic linked list node in my debugger, say, the debugger won't let me follow the pointer to the payload data. For that, we would need some run-time service that can look up some metadata about arbitrary memory locations and tell us what's stored there. Java-style VMs solve this problem using object headers. Clearly we don't have this in C; we need some extra infrastructure to answer these questions. I've been working on it: it's called liballocs. By dynamically tracking allocations in our running program, and using some carefully crafted associative data structures, we can build up a fast mapping from arbitrary pointers to metadata about the pointed-to allocation.
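
Just to illustrate the shape of the idea (this is emphatically not liballocs' actual data structures or API, which are rather more refined), the query amounts to looking up an arbitrary address in an index of allocations and their extents.

#include <cstddef>
#include <cstdint>
#include <map>
#include <string>

/* A toy index from allocation base addresses to per-allocation metadata.
 * liballocs' real structures are different (and much faster); this just
 * shows the shape of the query: pointer in, metadata out. */
struct alloc_info { std::uintptr_t base; std::size_t size; std::string type_name; };
static std::map<std::uintptr_t, alloc_info> allocs;   /* keyed by base address */

const alloc_info *metadata_for(const void *p)
{
    std::uintptr_t addr = reinterpret_cast<std::uintptr_t>(p);
    auto it = allocs.upper_bound(addr);    /* first entry with base > addr */
    if (it == allocs.begin()) return nullptr;
    --it;                                  /* candidate entry: base <= addr */
    if (addr < it->second.base + it->second.size) return &it->second;
    return nullptr;
}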

In fact the reason I got interested in this topic was that I wanted to make liballocs understand allocations made by OCaml programs. One of the complications liballocs has to deal with is polymorphic allocation sites. These sometimes occur in C code. For example, we might malloc() an array of generic void*, say, but actually use it to hold some specific kind of pointer. Genericity like this is occasional in C, and commonplace in ML. But there's no fundamental difference: code can be generic regardless of whether our source language includes a type language for describing that genericity. Genericity itself is what makes debugging tricky, because it indirects away concrete details in a way that some implementations (both C and OCaml) make hard to recover at run time. The presence of a fancy type system isn't the problem.

[/research] permanent link contact

Fri, 27 Feb 2015

Talking about polymorphism

[In my last post, I talked about debugging, and more generally “observability”, with reference to ML-family languages. This post is continuing the theme.]

Before I go into what's different about polymorphism between compile time and run time, I need to get some terminology straight. What is polymorphism anyway?

People traditionally talk about “ad-hoc polymorphism” and “parametric polymorphism”. These terms go back a long way—to Strachey in the mid-1960s. They're unhelpful today, for two reasons. Firstly, and most trivially, some flavours of apparently non-parametric polymorphism are nevertheless rather systematic, so ideally shouldn't be called “ad-hoc”. Type classes in functional languages and method overriding in object-oriented languages both illustrate this.

Secondly, calling polymorphism “parametric” confuses people into thinking that if code doesn't have type parameters, it's not parametrically polymorphic. But, to me at least, this kind of polymorphism is not about how code is presented; it's a deeper property, about how code is expressed. It's easy to find code expressed in apparently polymorphic ways in all languages—even ones where there is no additional metalanguage of types to describe that polymorphism. Some C code is chock-full of such polymorphism—take Linux, for example. In such code there are no type parameters, only void. Similarly, one of the main strengths of dynamic languages is that code often comes out with some degree of polymorphism by default. If you write a sort function, say, you tend to get something that can operate on any kind of ordered sequence.
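
To pick a concrete example from C's own standard library: qsort sorts an array of anything, with no type parameters in sight, because both the element type and the ordering are indirected away behind void pointers and a function pointer. (A minimal sketch; I've written it as C++ here, but it's really the C story.)

#include <cstdlib>     /* std::qsort */

/* The library code behind std::qsort never learns the element type: it sees
 * opaque bytes plus an ordering, both supplied through pointers. */
static int compare_ints(const void *a, const void *b)
{
    int x = *static_cast<const int *>(a), y = *static_cast<const int *>(b);
    return (x > y) - (x < y);
}

int main()
{
    int xs[] = { 3, 1, 2 };
    std::qsort(xs, sizeof xs / sizeof xs[0], sizeof xs[0], compare_ints);
    return 0;
}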

So although “polymorphism” sounds fancy, it's really just talking about delaying specialisation. If we write code in a way that avoids commitment to details that would specialise it to one kind of data or another, we're writing polymorphic code. Fancy type systems are a means of reasoning about polymorphism, hence enabling static safety guarantees about polymorphic code. But they're not what enables polymorphism itself.

(I am probably departing from Strachey at this point, although it's not clear. His presentation doesn't directly address the question of whether polymorphism is really polymorphism without a metalanguage to describe it. For the purposes of my subsequent writings, I declare that it is, and that the metalanguage is a separate concern. Even if you disagree, it won't affect the substance of anything that I'm about to say; only which choice of words is an agreeable way to say it.)

Expressing code in a way that defers specialisation invariably means introducing some kind of indirection. Here I mean “indirection” in a very general sense. Run-time indirection via pointers is certainly one very familiar way of introducing polymorphism: a list node having a void* payload has avoided specialisation, whereas one with int hasn't. But there can also be compile-time indirection, such as between an identifier and its binding. And then there is also run-time indirection across not only storage, but also computation. (The latter is the trick which, when applied to its fullest extent, gives us objects in the sense of Cook.)

Somewhere along the line I have stopped using the qualifier “parametric”. In fact, the view I've just expounded—that polymorphism is the deferral of specialisation—covers ad-hoc polymorphism too, if we look at it the right way. When we say a source language supports “ad-hoc polymorphism”, it means that it lets us group together collections of specialised definitions. Such a collection of definitions, viewed as a unit, starts to look like a single, unspecialised definition—in other words, a polymorphic definition. Again, indirection is what enables this polymorphism. If I write the token “+” in a language with overloading or type classes—Haskell or C++, it doesn't matter—it denotes the whole collection of defined addition operators. The code thereby avoids directly selecting any particular one. Typically, the selection is done indirectly. Usually it happens in the compiler, since the language defines some rules for selecting which one will actually be used. But it might happen very late in compilation since these rules might allow dependencies on the code's context of use (think how “+” is resolved in a C++ template definition, for example). And all this could equally well happen at run time; multimethods allow us to do this for binary operators like plus, while the usual single dispatch is implementing precisely the restricted unary case: deferring specialisation, on (only) the leftmost argument, until the execution of each individual call.
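
Here's a minimal C++ sketch of that point about “+”: in the template body the token names a whole family of definitions, and the actual selection is deferred until the template is used at a particular type. (The vec2 type and twice function are just made up for illustration.)

#include <string>

/* Two specialised definitions grouped under the one name "+"... */
struct vec2 { double x, y; };
vec2 operator+(vec2 a, vec2 b) { return { a.x + b.x, a.y + b.y }; }
/* (std::string brings its own operator+ to the family.) */

/* ...and a use of "+" that commits to none of them: which definition is
 * meant is only decided when the template is instantiated at a concrete
 * type, i.e. in the code's context of use. */
template <typename T>
T twice(T v) { return v + v; }

int main()
{
    vec2 v = twice(vec2{ 1.0, 2.0 });          /* selects the vec2 overload */
    std::string s = twice(std::string("ab"));  /* selects std::string's     */
    (void)v; (void)s;
    return 0;
}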

If polymorphism is this incredibly general thing, we need some way of getting down to specifics—like the question I started with, about what “polymorphism” is present during execution of compiled OCaml programs on a particular OCaml runtime. One thing that helps is to qualify any mention of polymorphism with a particular time: perhaps compile time, perhaps run time. At any given time, we can describe a polymorphic entity (perhaps a definition in source code, perhaps an object at run time) as somewhere on a spectrum between strong “specialisation” and strong absence of specialisation, which we can happily call “genericity”. (I say “strong” not “full” because the logical extremities of this spectrum tend to be degenerate, like “the output of the program for given input”—the most specialised state!—and “not written any code yet”—the most generic possible state.)

This “one time” restriction forces us to distinguish a source-level view from a run-time view. It's clear that this distinction matters. Polymorphism in source code can sometimes be implemented by specialisation before execution. C++ templates are the classic example: I can define a generic template, but the compiler will implement it by generating lots of specialised instances. Meanwhile, certain whole-program optimisations in ML compilers, like MLton, do effectively the same. This kind of compile-time specialisation is sometimes called “whole-program monomorphisation”. Just as “polymorphism” is a fancy word for the deferral of specialisation, “monomorphisation” is a fancy word for applying some kind of specialisation.
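
For instance, and sticking with C++ because it makes the specialisation visible: the single generic definition below is implemented as separate monomorphic instances in the compiled output, one per element type it gets used at. (A made-up example, not taken from any particular compiler's documentation.)

#include <cstddef>

/* One generic (polymorphic) definition in the source... */
template <typename T>
T sum(const T *xs, std::size_t n)
{
    T total{};
    for (std::size_t i = 0; i < n; ++i) total = total + xs[i];
    return total;
}

int main()
{
    int    is[] = { 1, 2, 3 };
    double ds[] = { 1.5, 2.5 };
    /* ...monomorphised by the compiler into (at least) two instances,
     * sum<int> and sum<double>, each specialised to its element type. */
    int    a = sum(is, 3);
    double b = sum(ds, 2);
    return (a == 6 && b == 4.0) ? 0 : 1;
}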

Polymorphism only defers specialisation; it doesn't avoid it. As I'll talk about more in the next post, all polymorphism eventually ends up being specialised away somehow. What varies is where and when this specialisation occurs. It might be done early, in a compiler, or late, during execution, by “elaborating” a particular trace of machine instructions in a particular machine context—i.e. by actually running the generic code! In the latter case, each execution is “specialised” in the sense that it runs in a different machine context and so produces a different outcome. Either way, execution is a gradual process of specialisation, entailing the gradual removal of polymorphism.

Phew. Perhaps that was a lot of belabouring the point. But it certainly helped me clarify (to myself) in what ways “polymorphism” might be present in ML code at run time. I'll revisit this precise question in the next post.

[/research] permanent link contact

Fri, 20 Feb 2015

Putting observability first

[Update: this article has now been translated into Russian! Thanks, Vlad.]

Last week I had a fascinating conversation with Mark Shinwell and Stephen Dolan, two colleagues who know the OCaml runtime inside-out. We were talking about ways to make compiled OCaml code observable: making it support interactive debugging and various other dynamic analyses (notably memory profiling).

It turns out that although this is not impossible, it is very difficult. It's fair to say that the OCaml runtime has not been designed with observability particularly high on the wishlist. It's far from unique in this: ML-family languages are usually not implemented with observability in mind. In fact you could argue that ML is explicitly designed for non-observability. The orthodoxy of the day held that tagged storage was the naive implementation technique, and erasing tags at run time was a good thing—mainly since it enabled a modest performance gain. Moreover, this meant that the provable erasability of tags came to be considered a valuable mathematical result, about language designs in general and ML in particular. The lingering side-effect is that this erasure also removes the ability to decode program data straightforwardly from its in-memory encoding. In the absence of some other mechanism—which mainline OCaml doesn't currently have—this greatly compromises observability.

ML is not alone in this. Although I'm not so familiar with it, I'll wager that Haskell, in GHC, suffers similar observability limitations—or perhaps worse, since it does more extensive compile-time transformations, and these cannot easily be “seen through” at run time. This makes it even harder to explain the state of an executing program, seen at run time, in terms of the source code. Also, I'm not singling out functional languages. Java has serious flaws in the observability mechanisms exposed by its virtual machine, as I once wrote a short paper about.

C, as usual, is an interesting case. On the surface, it appears to be designed for little or no observability. It certainly doesn't generate much run-time metadata. This omission could be seen as an ML-like erasure of implicit tags. Yet this is not an accurate representation. Pragmatic working programmers, like Ritchie and Thompson, know well the value of interactive debugging. The first edition of Unix included db, an assembly-level interactive debugger mostly used on coredumps. By the eighth edition two debuggers, adb and sdb, were documented in section 1 of the manual, the latter supporting source-level debugging, while a third debugger (pi; built on Killian's procfs) was bundled in the extras which would become Plan 9. More modern debuggers have kept pace (more-or-less!) with more advanced compiler optimisations, using complex metadata formats like DWARF to describe how to see through them. (This isn't bulletproof, and has its own problems, but works remarkably well for C.) The result is that C code has been consistently highly observable—albeit not without forty years' continuous toil to co-evolve the necessary language- and system-level infrastructure.

This co-evolution is interesting too. The mechanisms for achieving observability in C code lie outside the language. Coredumps and symbol tables are system mechanisms, not language mechanisms. Observability in Unix has been part of the design, down in the fabric of the system; it is not an afterthought. An opinionated system designer (okay, I'll step forward) might claim that observability is too important to leave to language designers. There are, however, some gaps to plug and some mindsets to bridge in order to apply Unix-style debugging infrastructure to ML-like languages. In another post in the near future, I'll dig a bit deeper into OCaml, by considering polymorphism (and why it's not quite the problem it seems to be).

[/research] permanent link contact

Mon, 02 Feb 2015

Thoughts on peer review

My PLDI submission was rejected. I'm not too sad about this, since the reviews were basically encouraging, and on balance I agree that the paper can still be improved and was arguably not (quite) ready yet.

However, despite certain innovations, namely two-phase review and rebuttal, the conference review process is as creaky as ever. This is rather dispiriting, and the longer I spend doing computer science research, the more bizarre our system seems. I filled in the anonymous post-conference survey, but since my comments apply more widely than to a single instance of a single conference, I feel like forgoing my anonymity and re-posting them here (lightly edited). I hope they don't have too much of a sour-grapesy flavour—at least, that's not really how I'm feeling.

Something I didn't elaborate in my survey response: what would this “progressive model” be? It might involve reserving a smaller part of the programme for short presentations of recent journal publications, and a larger part of the programme for short presentations of high-quality in-progress work. This would be work at a substantially more advanced stage than a typical workshop paper—possibly reasonably mature already, but not yet accepted for publication in a journal. Initially, it seems important to ring-fence the slots for the latter kind of presentation, to avoid having already-published work always trump the not-yet-published stuff. Eventually, in a well-functioning community there would not be much call for the former kind of slot, since most journal publications would describe things that had been presented at previous conferences.

[/research] permanent link contact

Tue, 16 Dec 2014

For and against languages

A colleague recently summarised my research's position, not unkindly and partly in jest, as “against languages”. On one level there's some truth in that. I decry language-specific toolchains, runtimes, debuggers and package managers; I scoff at claimed benefits of being “100%” “pure” <your language here>; I am endlessly sceptical of anyone trumpeting one language too loudly. In my own work, I am preoccupied with language-agnostic runtimes, linkers and debuggers, cross-language composition and language-agnostic program analyses.

But that doesn't mean I dislike languages—far from it! Language innovations deliver their highest net value when we're free to pick the right one for the job, and when it's possible to adopt new languages without suffering massive pain. This also means it must be easy to migrate away from a language. It's only by humbly accepting any one language's inevitable imperfection that we can use each language to its best advantage, and can accommodate progress in languages generally.

Not all code is alike, and the world doesn't change overnight. We should therefore expect code in multiple languages to coexist. The alternative is getting stuck with the “current” and never making it into the “new”. We need to think of coexistence of languages not as a one-time transitional measure on our way to the perfect language, but as inevitable, ongoing reality. Even if we consider a particular language to be “legacy”, “outdated” or “broken”, we shouldn't make dealing with that language a second-class use case. If we are to have progress, we must give these scenarios first-class consideration. (This is why I lambast the status quo of “foreign function interfaces”, and in fact the very idea of them.)

So it's not that I'm against languages. I'm interested in making language innovations as useful as possible. I'd say that makes me more strongly in favour of languages than many people.

[/research] permanent link contact

Mon, 24 Nov 2014

I hate systems research... sort of

[I wrote this article on 6th August 2008, when I was a second-year PhD student, but then didn't have the guts to post it. Looking back, it's not so controversial, although it was obviously shaped by my (relatively narrow) experiences as a PhD student in a particular corner of the Networks and Operating Systems group here in Cambridge (a group I am very happy to be associated with, I should add).]

David Byrne, a well-known proponent of musics from around the world, once wrote an article entitled “I hate world music”. He hadn't experienced a sudden change in his taste, but was referring to the label of “world music”. By labelling all music that doesn't conform to a certain narrow set of Western-heritage styles as “world music”, it can be conveniently marginalised and dismissed as a unit.

So too with computer science research: labels are used to divide and marginalise. This time, it's the inside marginalising the outside. A recent EuroSys call for papers invited papers on “systems aspects of...” a long list of topics. Despite my having worked in a systems research group for nearly three years, nobody has ever told me what “systems aspects” are. Yet most people who'd describe themselves as “systems researchers” seem to wear that badge with some degree of pride. Until someone can tell me what it means and why it's such a good thing, I won't be doing the same.

What defines systems researchers? Is it that they “build stuff”? This is perhaps necessary but hardly sufficient. Is it that they meddle with operating systems, networking stacks or other infrastructure software? Perhaps, but it seems to cover a lot more ground than that to which the label is commonly applied. My preferred take is that “systems” is simply a label, of historical origins, which now identifies a research community—a community which, like all social networks, is defined only by the links among its members and the continuity of that relation's evolution over time. One thing that has continued to surprise and frustrate me about the world of research is that despite huge overlaps of interest, it is divided much more than it is united: divided into largely separate, non-cooperating communities. Many systems researchers would be horrified (thanks to a snobbery I will come back to) if you claimed that their work overlapped with that of the software engineering community. Yet having attended in the last year both EuroSys and ESEC/FSE, I estimate that at least 40% of the papers I saw at each of those conferences could easily have been presented at the other without any sense of incongruity whatsoever.

Last week at the Semantics Lunch, Mike Hicks gave an excellent talk about CMod, the module system for C developed at the University of Maryland. Unlike other Semantics Lunches, the advertisement was cc'd to my research group's list. I get the e-mails anyway because I'm subscribed to other lists, but the obvious inference is that systems researchers will be interested in any languages research that concerns C, but not in any other. This impression is certainly not fair, but, exaggerated as it is, it does stem from a grain of truth.

One way of getting your tools- or languages-oriented paper published in the systems community is to show how it applies to “systems code”. This almost always means one of three things: performance-critical code, resource-constrained code, or code written in C. These are all pretty general requirements, and fully relevant to the widest imaginable “software engineering” community. Regarding the last of the three requirements, of course there are many great reasons for creating tools that work with C, given the astronomically huge volume of existing software that is written in it. Working with C is certainly a priority for my own work. However, everyone knows that C is a language with many gigantic problems, and ones that have been known for decades. Cynically, I would claim that part of the reason that C-language developments, both code and tools, continue to flourish within the “systems community” is an endemic disdain for programming language advances, or in general of any research community that isn't “systems”.

This separatism comes in more than one form. There is a certain air of what I call “macho bullshit”, where, among other things, pride is associated with writing unnecessarily nasty code in unnecessarily low-level languages. Now, I have seen some exceptionally clean and well-engineered C programs, particularly ones coming out of my own research group, so this isn't any sort of blanket statement. I've also seen, again locally, many systems researchers who are heavily into language advances. But the other side of the coin is there too: the disdain and the separatism are definitely things I've observed around here, and I don't like them. They're joined by an attitude to criticism: there appears to be an unwelcome degree of sniping and dismissiveness, found in reviews and programme committees, together with a bizarre culture of unnecessary controversy, as typically found in workshops with “Hot” in their title. It seems there are too many systems researchers prepared to put forward any criticism of others' work, however invalid, and to pronounce any self-aggrandising position, however obtuse and misleadingly argued. This is, in a word, unprofessional, and it undermines the very goals of research. I don't have enough experience to know either how widespread these attitudes are, or how much more systems research suffers from them relative to other fields.

[/research] permanent link contact

Tue, 07 Oct 2014

Seven deadly sins of talking about “types”

[Update: this article has been translated into Japanese!]

My essay “In Search of Types” attempts to be a dispassionate review of some of the different concepts, purposes and attitudes surrounding the word “type” in programming. I imagine that my passions are still rather thinly veiled in places. But in this post I'm going to come out with a more unabashedly ranty take. Certain statements and attitudes really get up my nose. I was recently exposed to a few of these at Strange Loop (a great conference, I should add). So I've written down a list of “deadly sins” committed by many people (mis-)speaking about “types”.

I should add that the theme here is rhetoric. What gets up my nose is when people don't make honest, transparent arguments. Their conclusions needn't be wrong. I program in OCaml a reasonable amount, and it's plain that I get a lot of value from the type checker. But advocates of type systems often try to sell them as a miracle cure, without acknowledging their limitations and undesirable side effects. Can we please move the debate past propaganda and blanket statements?

A contributing factor is that we often struggle to disentangle the many distinct concepts that lurk under the word “type”. My essay tackles this entanglement more directly, although hopefully the following rants will make some useful distinctions.

Not distinguishing abstraction from checking

This is my number-one beef. Almost all languages offer data abstraction. Some languages proceed to build a static checking system on top of this. But please, please, don't confuse the two.

We see this equivocation used time and time again to make an entirely specious justification of one language or other. We see it used to talk up the benefits of type systems by hijacking the benefits of data abstraction, as if they were the same thing.

The discussion of “stringly-typed programming” (sic) at Strange Loop, both in the Snively/Laucher talk and in Chris Morgan's Rust-flavoured talk, was doing exactly this. Yes, it's true that HTTP headers (to borrow Morgan's example) can and should (usually) be abstracted beyond plain strings. But failing to do so is not an omission of compile-time type checking. It's an omission of data abstraction. Time and time again, we see people advancing the bogus argument that if you make the latter omission, you must have made the former. This is simply wrong. Type checkers are one way to gain assurance about software in advance of running it. Wonderful as they can be, they're not the only one. Please don't confuse the means with the ends.
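
For concreteness, here is a minimal sketch (mine, not from either talk) of what the data-abstraction half looks like, in C and with hypothetical names: the string is validated and hidden behind an abstract header type at construction time, which is a run-time discipline. Whether a checker additionally proves, before the program runs, that no raw string can flow in where a header is expected is a separate question, and that part is what belongs to type checking.

#define _GNU_SOURCE     /* for strndup */
#include <stdlib.h>
#include <string.h>

struct http_header { char *name; char *value; };  /* in real code, the definition stays private to one module */

/* The only way to obtain a header is through a parse that validates it... */
struct http_header *header_parse(const char *line)
{
    const char *colon = strchr(line, ':');
    if (!colon) return NULL;                       /* malformed input is rejected at construction time */
    struct http_header *h = malloc(sizeof *h);
    if (!h) return NULL;
    h->name = strndup(line, (size_t)(colon - line));
    h->value = strdup(colon + 1);
    return h;
}

/* ...and the only way to inspect one is through accessors. */
const char *header_name(const struct http_header *h)  { return h->name; }
const char *header_value(const struct http_header *h) { return h->value; }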

Pretending that syntactic overhead is the issue

The Snively/Laucher talk included a comment that people like dynamic languages because their syntax is terse. This might be true. But it made me uncomfortable, because it seems to be insinuating a different statement: that people dislike type systems only because of the syntactic overhead of annotations. This is false! Type systems don't just make you write annotations; they force you to structure your code around type-checkability. This is inevitable, since type checking is by definition a specific kind of syntactic reasoning.

To suggest that any distaste for types comes from annotation overhead is a way of dismissing it as a superficial issue. But it's not superficial. It's about the deep consequences type checking has on how code must be structured. It's also (perhaps ironically) about polymorphism. In a typed language, only polymorphism which can be proved correct is admissible. In an untyped language, arbitrarily complex polymorphism is expressible. Rich Hickey's transducers talk gave a nice example of how patterns of polymorphism which humans find comprehensible can easily become extremely hard to capture precisely in a logic. Usually they are captured only overapproximately, which, in turn, yields an inexpressive proof system. The end result is that some manifestly correct code does not type-check.

Patronising those outside the faith

If somebody stays away from typed languages, it doesn't mean they're stupid, or lack education, or are afraid of maths. There are entirely valid practical reasons for staying away. Please drop the condescending tone.

Presenting type-level programming as a good thing

No sane person can argue that we want to bifurcate a programming language into distinct base-level and type-level fragments. Gilad Bracha calls this the “shadow worlds” problem: to abstract over constructs at level 0, we need a whole new set of level-1 constructs, and so on. This is an antipattern. It's a failure of compositionality. That doesn't mean it can't be justified for pragmatic reasons. ML's module system is the way it is because nobody has figured out the maths to do it any better under the various design constraints of the system (which of course include a proof of soundness). But please, stop pretending that type-level programming is a desirable thing. It's a kludge, and hopefully a temporary one. Personally, I want to write all my specifications in more-or-less the same language I use to write ordinary code. It's the tool's job to prove correctness or, if it must, tell me that it failed. (It should also do so using a straightforward explanation, ideally featuring a counterexample or stripped-down unprovable proposition. I don't know any type checker that does this, currently.)

Fetishising Curry-Howard

Curry-Howard is an interesting mathematical equivalence, but it does not advance any argument about how to write software effectively.

Equivocating around “type-safety”

Here is a phrase which is used to mean many different things. Old-fashioned “type safety” is what I call “Saraswat-style”, after the famous “Java is not type-safe” memo. (Of course, it wasn't invented by Saraswat—it was the “standard” meaning of the phrase at the time.) It's a valuable property. But it's actually about memory, not data types: it's definable in terms of a language featuring only a “word” data type (using only minor tweaks of wording from Saraswat's). It's also nothing to do with static checking—“type safety” holds in all “dynamically typed” languages. The reason it's such a useful property is that it can be implemented without sacrificing a lot, unless you want to program close to the machine. Although many implementations happen to use syntactic compile-time reasoning, a.k.a. type checking, to lessen the run-time checking overheads, this is an implementation detail.

Saraswat-style type safety is usually a very good thing. But it's a far cry from proving near-arbitrary correctness properties of code. It's popular to confuse the issue, by saying that if you don't use type systems you don't have “type safety”. This wilful mixing of separate ideas is cashing in on the relatively uncontroversial good value of Saraswat-style safety, and using it to paint “type systems” as a similar no-brainer. Proof has a cost, and the appropriate expenditure depends on the task. It's far from a no-brainer.

Omitting the inconvenient truths

If everyone would just use a modern language with a fancy type system, all our correctness problems would be over, right? If you believe this, you've drunk far too much Kool-Aid. Yet, apparently our profession is full of willing cult members. Let me say it simply. Type systems cannot be a one-stop solution for specification and verification. They are limited by definition. They reason only syntactically, and they specify only at the granularity of expressions. They're still useful, but let's be realistic.

Try checking realistic reachability or liveness properties with a type checker, and you will not succeed. In a formal sense, this can be done, and has been, but the resulting system has little in common with type checking as practitioners would recognise it. The authors note that the ostensibly type-style specifications it yields are “bulky” and often “not suitable for human reasoning”. This is hardly surprising, since they are working against the grain of the program's existing syntactic decomposition. To specify and check liveness or reachability, we need to represent programs as some kind of flow graph, where flows are very likely to span multiple functions and multiple modules. It's not an expressionwise view of code, and it's also not the syntax in which most people want to code.

We need to grow up and accept this. Once we've accepted it, what do we do differently? We need a model that lets us assimilate gradual improvements in automatic reasoning, like better SMT solvers, without requiring us to re-type (pun intentional) our code. We need to be able to step up the strength of our specifications without switching language all the time. We need to integrate typed and untyped code, and between distinct languages too.

These ideas are slowly gaining some traction, such as in gradual typing, but have a long way to go. In particular, accommodating multiple languages has some far-reaching consequences for our infrastructure. So far, we implement languages in a particular way: relying on a single in-compiler type checker to establish whole-program invariants. If we embrace multiple languages, we can't build things this way any more. Instead, it's necessary to tackle invariant enforcement not within a per-language compile-time service but within a more generic composition-time service. (Hint: it's called a linker. In short, the linker-like service must do a mix of proof and instrumentation, in a language-neutral way, guided by descriptive information output by the compiler. If you've seen me talk recently, this will probably sound familiar. Ask me for more!)

Coding without proving everything we'd like to prove is an activity which will continue. Meanwhile, late binding, and hence limited applicability of truly ahead-of-time reasoning, is often a requirement, not an implementation decision. Eliminating these can bring benefits, but also can eliminate value. Weighing up costs and benefits, in a way that optimises overall value for a given product, is the right way to think about all these issues. It's a long way from the dogma, rhetoric and blind faith that currently dominate our discourse.

Coda

Now that I've ranted my heart out, here's a coda that attempts to be constructive. As I mentioned, part of the problem is that we equivocate when talking about “types”. Instead of saying “type” or “type system”, I encourage anyone to check whether anything from the following alternative vocabulary will communicate the intended meaning. Deciding which term is appropriate is a good mental exercise for figuring out what meaning is actually intended.

If you can think of others, do let me know!

[/research] permanent link contact

Mon, 06 Oct 2014

Project extra

I just thought of another nice project idea, so here it is.

A generic fuzz-tester, using DWARF

Fuzz-testing is a technique for randomised software testing. The software under test is run with randomly modified inputs, starting from existing test inputs. This covers more code paths, hence potentially finding more bugs, than using only human-crafted test inputs.

Typically, fuzzers are built around particular input domains. For example, we might write one fuzzer for C compilers which generates randomised C source files. We might write another fuzzer for X11 applications which randomly modifies the packets sent over the wire. (In fact the original fuzzer, xjig, did exactly this.) We might write yet another fuzzer for a particular library API, like John Regehr describes here. This works... but can we build a more powerful, more general fuzzing system than these per-domain solutions?

This project is about building a general tool for the latter scenario: fuzzing of library APIs. We want to be able to fuzz any library whose API we have a description of. For us, this will mean described by compiler-generated DWARF debugging information. There are several technical steps to this. Firstly, we need the ability to observe and (optionally) capture API traces, typically from running an existing test suite. This can be done using something reasonably off-the-shelf like ltrace, although we might want to make some modifications to suit our purposes. Secondly, we need to perturb these traces somehow to generate randomised versions. These can be randomised both in terms of the calls made and the arguments passed; the project would need to investigate various randomisation strategies. Thirdly, we then execute these randomised traces and attempt to detect any errors that occur—perhaps assertion failures reported by the program itself, memory errors reported by Memcheck (of Valgrind fame), or type errors reported by libcrunch.
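
As a rough illustration of the middle step only, here is a hedged sketch of one perturbation strategy; the call_record type is hypothetical, and a real implementation would decode captured traces (e.g. from ltrace) into something like it and would use the DWARF-described parameter types to choose mutations.

#include <stdlib.h>

/* Hypothetical decoded form of one captured call. */
struct call_record {
    const char *callee;   /* name of the API function called */
    long args[6];         /* integer-valued arguments, already decoded */
    int nargs;
};

/* One naive strategy: occasionally replace an argument with a boundary value.
   Type-aware strategies (guided by DWARF) would do much better than this. */
void perturb(struct call_record *trace, size_t len)
{
    static const long boundary[] = { 0, -1, 1, 0x7fffffffL };
    for (size_t i = 0; i < len; ++i)
        for (int j = 0; j < trace[i].nargs; ++j)
            if (rand() % 16 == 0)
                trace[i].args[j] = boundary[rand() % 4];
}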

For evaluation, we would like to measure how much we improve test coverage, relative to the existing test suites. We could perhaps also compare this improvement against the improvement obtained by API-specific fuzzers, like Regehr's, hoping to do almost as well (or perhaps even better!).

One problem with fuzzing is false positives. It might be that our randomised changes yield traces which exercise the API in ways that aren't supposed to work. Normally we'd want only to generate traces in which the client stays in-spec, while perhaps leading the library itself to go out-of-spec (that's what a bug is!). (In security-flavoured fuzzing, the exposed attack surface is what's important, not the documented interface per se, but the same principle applies.) Such specifications are rarely written down precisely! So, extension-wise, an obvious direction is to refine our model of APIs to allow for more semantic constraints. To pick a trivial example, if we were testing the C library's malloc implementation, one constraint is that we shouldn't double-free a chunk. That's a little subtle—it's okay to free the same chunk of memory a second time, iff it was issued by malloc a second time! There is a lot of scope for investigating this kind of constraint, and, in general, to produce more sophisticated semantic models of APIs. There is a wealth of research about “API usage patterns”, but often using only a naive finite-state formalism that struggles to capture whole-trace properties (like the malloc one I just described). We could also investigate using Daikon, or some similar invariant generator, for inferring such invariants from collections of harvested traces.
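
To make the “whole-trace” point concrete, here is a hedged sketch of checking that constraint over a captured trace; the event encoding is hypothetical. The check is over the set of currently-live chunks, which is exactly what a per-call-site finite-state model struggles to express.

#include <stdio.h>

enum event_kind { EV_MALLOC, EV_FREE };
struct event { enum event_kind kind; void *chunk; };

/* Returns 1 if the trace respects the no-double-free constraint. A chunk may
   legitimately be freed twice only if malloc issued it again in between. */
int check_no_double_free(const struct event *trace, int n)
{
    void *live[1024];   /* chunks currently issued by malloc (fixed-size for the sketch) */
    int nlive = 0;
    for (int i = 0; i < n; ++i) {
        if (trace[i].kind == EV_MALLOC) {
            if (nlive < 1024) live[nlive++] = trace[i].chunk;
        } else {
            int found = 0;
            for (int j = 0; j < nlive; ++j)
                if (live[j] == trace[i].chunk) { live[j] = live[--nlive]; found = 1; break; }
            if (!found) { fprintf(stderr, "event %d: free of a chunk not currently live\n", i); return 0; }
        }
    }
    return 1;
}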

[/research] permanent link contact

Mon, 29 Sep 2014

Progress by distillation

A theme of Joe Armstrong's Strange Loop keynote was that creating stuff doesn't necessarily make progress overall. Proliferation reduces value. Entropy is our enemy. He mentioned the bewildering array of build systems, packaging systems, unidentifiable system software (I never did catch what “grunt” is) and, more generally, copies of the same or similar code and data. Building a “compressor” was his likeably vague solution.

My talk arguably continued this theme. I was talking about proliferation of dynamic languages—in contrast to the original Smalltalk vision, which didn't envisage the host of broadly similar dynamic languages which now coexist alongside it.

An underrated way to make progress is by distillation. It's to recognise repetition, to integrate and unify existing systems where possible. We don't have to do so perfectly or completely. It doesn't have to be a Grand Unifying Design. We can work opportunistically. But we need both the right mindset and the right infrastructure. Tools for creating are not enough. Tools for integrating and distilling are necessary.

I seem to have written about this before. It's nice to see the same ideas cropping up in a keynote!

[/research] permanent link contact

Project ideas 2014--2015

I maintain a brief list of ideas for student projects which I'm always willing to supervise. Here I'll elaborate on some that I'm particularly keen on this year. As always: these are just a selection. I'm very interested in having students discuss their own ideas with me, or suggesting variations on what I have proposed. Please contact me if any of these ideas (or those on the linked page) is of interest. Most of these ideas could make Bachelor's or Master's projects, with appropriate tailoring which we would discuss.

Program slicing at debug time

Program slicing is a powerful program transformation. It seeks to create a minimal program that explains some strict subset of a program's state at a particular point in an execution, throwing away code that did not influence that subset. For example, suppose I'm debugging the following (very stupid) program.

y = 12;
z = y * foo();
x = y + 1;

I might see, when debugging, that after the third line, variable x has the value 13. I want to know how it got that value. A program slicer could show me a minimised version of my program that eliminated any statements that did not contribute to the value of x. Here, that's the middle statement (probably! that's if foo() did not change y somehow).

Although Mark Weiser's original paper emphasises the fact that programmers do program slicing in their heads when debugging, automated slicing still hasn't entered the usual repertoire of features that programmers find in their debuggers. This project will implement and evaluate a slicer that is designed to be usable from a debugger. The evaluation is likely to be a size comparison with an existing slicer—smaller slices are usually better. We can also measure the speed of the slicing algorithm—since to be usable interactively in a debugger, it shouldn't be too slow.

A simple slicer is not too much work. There are also many directions for extensions. Precise slicing is undecidable in general, so all approaches are approximate. Better approximation through cleverer program analysis is the obvious avenue. Another is amorphous slicing, which explores semantics-preserving changes to the program's syntactic structure, in order to produce a still smaller program. A third is to use knowledge of the program state itself as an extra constraint on slicing—not just the position we want to slice at, but also the values of program variables. Slicing can be seen as a kind of backwards program execution, with a user-customisable degree of concretisation introduced. The “most amorphous”, “most concrete” slice is simply the relevant subset of the (branch-free) trace of program execution (if it's unique; else the union of all possible traces leading to that state). This might not be as useful as a slice preserving slightly more of the program's execution.
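
To give a flavour of the simplest possible starting point, here is a hedged sketch of backward slicing over a straight-line, branch-free trace, given per-statement def/use sets; the representation is hypothetical, and it deliberately ignores side effects such as the foo() call in the example above.

#define MAXVARS 64
/* Each executed statement defines one variable (numbered 0..MAXVARS-1) and uses a few. */
struct stmt { int def; int uses[4]; int nuses; };

/* Mark the statements that contribute to 'target' as of the end of the trace. */
void slice(const struct stmt *trace, int n, int target, int *in_slice /* length n */)
{
    int relevant[MAXVARS] = {0};
    relevant[target] = 1;
    for (int i = n - 1; i >= 0; --i) {
        in_slice[i] = relevant[trace[i].def];
        if (in_slice[i]) {
            relevant[trace[i].def] = 0;             /* this def explains the value... */
            for (int j = 0; j < trace[i].nuses; ++j)
                relevant[trace[i].uses[j]] = 1;     /* ...so its own uses become relevant */
        }
    }
}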

There is also an incidental problem that will need treating somehow. It is that most debugging infrastructure doesn't directly let you reconstruct the original source code that you're debugging (at least in the cases of native Unix-style debugging and Java debugging). Instead it enumerates the source files that went into each compilation unit, together with the line number ranges of each, and provides a shallow mapping of program counter values onto these files and lines. In Java this is very nearly enough; for C, it's less good because the preprocessing step is not explicitly described. So, there might be some insight to be had about how source files ought to be represented within debugging information. One extension might implement such an improvement, by judicious hacking on a compiler (most likely LLVM) to modify and/or supplement the debugging information that it currently outputs.

A garbage collector using liballocs

My work on liballocs is developing a run-time library that dynamically tracks allocations and their data types across (potentially) a whole process. This means you can ask it what's on the end of a pointer, and it can usually tell you—even for pointers to stack or heap memory, interior pointers, and so on. It can be thought of as implementing a reflection API, as a platform for building dynamic program analyses, and as a platform for implementing dynamic languages (see my talk!).

This project is about implementing a mark-and-sweep garbage collector on top of liballocs. This can be thought of as a replacement for traditional conservative collectors like the Boehm collector. If liballocs's view of the process is completely accurate, and no pointers are computed in ways that we didn't anticipate, then we obtain a precise collector. To deal with unusual address computations or tricky encodings of pointers, we might nevertheless need to build in some conservatism.

In any case, the liballocs API allows a collector to be more “federated” in its design. Traditional collectors are owned by a single virtual machine (VM) which makes strong assumptions about how objects are laid out, what addresses are issued, what parts of the process can point into “their” heap (the “roots” of tracing), and so on. With liballocs, a collector can make a reasonable attempt at tracing native data structures directly, without knowing in advance how they are laid out—the library returns metadata describing the layouts. This allows tracking reachability over paths that stray outside a single region and then reach back in—for example, from the collected heap into the malloc() heap and back.
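
To show the shape of this, here is a hedged sketch of a mark step; the metadata call (meta_for_address) and the mark-bitmap helpers are hypothetical stand-ins, not the actual liballocs API, and a real collector would keep an explicit mark stack rather than recursing.

#include <stddef.h>
#include <stdint.h>

/* Hypothetical per-allocation metadata, as the runtime might describe it. */
struct alloc_meta { void *base; size_t size; int npointers; size_t ptr_offsets[16]; };

extern int  meta_for_address(const void *p, struct alloc_meta *out);  /* hypothetical query */
extern int  is_marked(void *base);                                    /* hypothetical mark bitmap */
extern void set_mark(void *base);

void mark(const void *p)
{
    struct alloc_meta m;
    if (!meta_for_address(p, &m)) return;       /* not an allocation we know about */
    if (is_marked(m.base)) return;
    set_mark(m.base);
    /* Trace outgoing pointers at the offsets the metadata reports, whether the
       target lives in the collected heap, the malloc heap, the stack, ... */
    for (int i = 0; i < m.npointers; ++i) {
        void *target = *(void **)((uintptr_t)m.base + m.ptr_offsets[i]);
        if (target) mark(target);
    }
}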

This “federated” design is potentially (i.e. in future work!) an enabler of reasonably seamless programming in a mix of languages, or when mixing manual with automatic storage management. For example, interfaces like JNI force the programmer to explicitly signal to the collector when native code takes a reference to a Java object, by creating a “global reference” (which is managed manually). With liballocs, it can simply trace the native data structures too.

Although liballocs can usually tell you what data type is instantiated on the end of a pointer, there are a few exceptions to this (such as on-stack temporaries), and this isn't quite all the information we need. For collectors, it matters whether storage is initialised (hence meaningful) or not, and whether a value is live or dead. It also matters what funky address computations a program might do (all the way up to extreme cases like XORed pointers!). A large part of this project will mean finding some kind of solution to these, as well as gathering an understanding of when they do and don't matter in practice. Most of these issues have “big hammer” solutions which work fairly well but are slow (like replacing malloc() with calloc() to avoid uninitialised heap data) and also more clever solutions (like skipping this if we can prove that an allocation site always initialises the whole heap block before the next safepoint). So there is plenty of scope to pick extensions to the core project.

Obvious comparisons for evaluation are the Boehm GC, and also perhaps the garbage collectors of the Go or D languages. We can compare in both speed and precision (or, inversely, conservatism). The Boehm GC is well known for having observable conservatism, keeping objects alive when they are not reachable, because of integers that alias pointers. We would expect to do better than Boehm for precision, while retaining comparable performance (within a factor of 2 for total memory management time overhead, say). An MPhil project would set more ambitious goals and use more advanced garbage collection algorithms (e.g. a generational and/or compacting collector).

A bounds checker in libcrunch

My work on libcrunch uses liballocs to implement run-time type checking for C (and other languages, like C++ and Fortran, in due course). The basic idea is that whenever a pointer cast occurs, we check that the cast-to type “matches” what's on the end of the pointer (for a sufficiently refined notion of match). One weakness of libcrunch is that it assumes the program is memory-safe, both spatially and temporally. Put differently, it will catch type errors caused by bad cast logic (analogous to a ClassCastException in Java) but not those occurring as a consequence of buffer overflows (“spatial” errors), nor use-after-free or use-before-initialize behaviours (“temporal” errors). This project will implement spatial correctness checking that integrates with libcrunch.

Unlike conventional spatial bounds checkers, like SoftBound or ASan, we have type information (from liballocs) so can do better in various ways. Most significantly, there is no need for per-pointer metadata; per-object type metadata is enough (this holds for C; ask me for the detailed reason why!). This means we don't need to move metadata around as we move pointers around, so we should usually see some performance wins. It also doesn't matter if we haven't instrumented all code; as long as we observe allocations, we will have the necessary metadata. However, there are some drawbacks: querying pointer metadata will involve a short search, via the containing object's metadata, rather than a direct-mapped lookup as in SoftBound. The main target of the project should therefore be to go roughly as fast as SoftBound. Lower memory overhead and lower virtual address space usage are also likely benefits of this approach; these can be measured.
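
To illustrate the difference, here is a hedged sketch of the check itself; query_alloc_bounds is a hypothetical stand-in for the liballocs-backed lookup (the short search mentioned above), and the point is that there is no per-pointer shadow state to maintain.

#include <stddef.h>
#include <stdint.h>

/* Hypothetical: look up the allocation containing p, via per-object metadata. */
extern int query_alloc_bounds(const void *p, void **base, size_t *size);

/* Check that derived = p + offset_bytes still points within p's allocation
   (one-past-the-end is permitted, as in C). Returns nonzero if the derivation is in bounds. */
int check_derive(const void *p, ptrdiff_t offset_bytes)
{
    void *base; size_t size;
    if (!query_alloc_bounds(p, &base, &size)) return 1;   /* unknown object: don't complain */
    uintptr_t derived = (uintptr_t)p + offset_bytes;
    return derived >= (uintptr_t)base && derived <= (uintptr_t)base + size;
}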

System call interposition, specification and emulation

To understand the precise semantics of a user-level binary program running atop a commodity operating system, we must understand not only the instructions it executes but also the system calls it makes. Currently, we are making progress on formally specifying instruction set architectures, but the system call interface remains surprisingly murky. What system calls may a program make? What, precisely, are the valid arguments to each system call, and what do they mean? Given some arguments, what memory might a given system call touch? What other effects might it have on the program state?

This project will build a toolkit for observing, specifying, intercepting and emulating system calls as made by real, large programs. As a starting point, we have a basic (rather limited) preexisting infrastructure for trapping system calls in an x86-64 Linux program, and a sketch of a domain-specific language (not set in stone) for describing a system call interface. The project will produce a usable end-to-end system for intercepting, specifying and emulating system calls. In a basic usage, we would divert the calls back to the underlying operating system and observe it (e.g. using SystemTap or DTrace) to check the accuracy of our model. In a more advanced usage we would instead be running the program in an emulator, and would update the emulator's state to reflect the effect of the system call. The intention is to have this work for various pairings of OS kernel (Linux, FreeBSD) and instruction set architecture (Intel, Power, MIPS, ARM), although at least initially, some pairings are more important than others.
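
For flavour only, a minimal stand-alone syscall observer for x86-64 Linux can be written with ptrace; this is not the preexisting trapping infrastructure mentioned above, just a sketch of the basic trap-and-inspect loop that any such toolkit builds on.

#include <stdio.h>
#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/user.h>
#include <sys/wait.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc < 2) return 1;
    pid_t child = fork();
    if (child == 0) {
        ptrace(PTRACE_TRACEME, 0, NULL, NULL);
        execvp(argv[1], &argv[1]);
        return 127;
    }
    int status;
    waitpid(child, &status, 0);                     /* child stops at the exec */
    while (!WIFEXITED(status)) {
        ptrace(PTRACE_SYSCALL, child, NULL, NULL);  /* run to next syscall entry */
        waitpid(child, &status, 0);
        if (WIFEXITED(status)) break;
        struct user_regs_struct regs;
        ptrace(PTRACE_GETREGS, child, NULL, &regs);
        fprintf(stderr, "syscall %lld\n", (long long) regs.orig_rax);
        ptrace(PTRACE_SYSCALL, child, NULL, NULL);  /* run to syscall exit */
        waitpid(child, &status, 0);
    }
    return 0;
}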

This project is very researchy, and has both practical and theoretical aspects. It's motivated by the research we're doing in the REMS project.

A linker, mostly functionally

Much like compilers, linkers have a crucial influence on the meaning of the eventual binary program. Current linkers are big piles of imperative code, with little explicit specification. This project concerns building either a static or dynamic linker for ELF binaries in a mostly functional style, in a way that will yield a clear specification of the various parts (relocation, address assignment, section concatenation or merging, etc.). Ideally the linker would be written as an “executable specification”, likely in the Lem language, making explicit the points where nondeterministic choice is taken within the written specification (as far as it exists).

Either a static or dynamic linker can be attempted; in the dynamic case, we have a very basic skeleton already. It won't be feasible to implement a fully-featured linker, but we will carve out some subset depending on interests.
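
As a flavour of the intended style (sketched here in C rather than Lem), the core of relocation is naturally a pure function from the symbol value S, the addend A and the place P to the word to be stored; the two cases shown follow the standard x86-64 ELF computations, and a real specification would enumerate many more.

#include <stdint.h>
#include <elf.h>

/* Given S (symbol value), A (addend) and P (address of the place being patched),
   return the value to store. R_X86_64_64 is S + A; R_X86_64_PC32 is S + A - P. */
uint64_t relocate(uint32_t type, uint64_t S, int64_t A, uint64_t P)
{
    switch (type) {
    case R_X86_64_64:   return S + A;
    case R_X86_64_PC32: return (uint32_t)(S + A - P);   /* truncated to a 32-bit field */
    default:            return 0;                        /* many more cases in a real linker */
    }
}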

An optimising linker

Traditionally, linkers do not understand the instructions in a program. Instead, they blindly patch binary code in predefined ways, such as “add n bytes to the four-byte signed integer at this address” (which happens to be the relative-address field inside a call instruction, say).

However, a linker is potentially a very good place to do optimisation, because interprocedural flows are necessarily visible to it. A dynamic linker is potentially a very good place to do dynamic optimisation, because it can observe code being loaded and unloaded, hence can perform optimisations speculatively yet safely. So, there is a case that linkers should understand instruction streams.

Current toolchain support for link-time optimisation (LTO) is limited, in both gcc and in llvm, by working on the toolchains' intermediate representation. This must somehow be propagated into the input binaries—the binaries must be “compiled for” link-time optimising. An alternative approach is to bite the bullet and teach the linker about instruction streams, so that it can disassemble and re-optimise the instructions directly, likely using debugging information as a source of type information where this is helpful.

Some other interesting applications of link-time instruction stream rewriting include whole-program instrumentation (e.g. to intercept certain procedure calls, system calls, or certain memory accesses, such as a generational garbage collector's write barrier), reference widening (to overcome the complexity of code models) and speculative dynamic optimisation (e.g. to do “devirtualisation” of indirect call instructions). One of these could perhaps be addressed as an extension.

The chief evaluation will be on performance improvements. There are also benefits in terms of binary size, relative to traditional LTO toolchains, which can be measured too.

[Update: I added another project suggestion in a separate post.]

[/research] permanent link contact

Wed, 02 Jul 2014

Why isn't verification standard practice?

Yesterday, Steve Crocker gave a very stimulating talk sharing thoughts—and also soliciting thoughts—about why verification isn't standard practice. He began with an anecdote about last year's BIND vulnerability (CVE-2013-2266) that could allow a single malicious packet to crash a DNS server. The problem is a simple bug in a regular expression library. Malicious clients can exploit the bug to make the process allocate huge amounts of memory, and hence kill it. How can we justify the fact that such simple bugs get into deployed code? Why can't we verify the absence of at least these simple bugs in anything that we deploy?

There were many useful contributions from the audience. It took me a while to collect my thoughts, but here are some personal responses that weren't discussed at the time.

Coding versus deployment

The BIND problem is primarily one of deploying shoddy software, and only secondarily one of programming shoddy software. Even with the best possible tools, making something robust will always take time and effort. Our problem is that we don't have good ways by which to make an informed decision about whether something is good enough (and if not, what needs fixing). This is re-stating a theme of the discussion: that our problem is at best only partly a technical one. I do believe that technical solutions and cultural solutions must go hand-in-hand: changing the technology can change the culture.

Proof as an activity much like programming

Considerable progress has been made in proof assistants like Coq and Isabelle, where proof starts to look like a programming task. In particular, the machine helps us do it and helps us check it. This is a really useful advance. Programmers are already used to satisfying a proof procedure, if they use any language with a type checker. But that doesn't mean we need to restrict ourselves to designing all our proof systems to be type checkers or things like them! I'll elaborate on this shortly.

One size doesn't fit all

I believe it's too limiting to expect a single compile-time analysis to tackle all proving and proof-checking workloads. If we write some immature code that's doing something subtle, we might expect machine proof (or disproof) to take some time. As it gets more mature, we can add more annotations and generally structure the code better, to help make the proof fast. We can't just forbid ourselves from writing or executing immature code. Of course if we consider the BIND scenario, meaning the case of mature, production software, then the deployed code should be sufficiently mature that a fast compile-time analysis is capable of producing and/or checking the proof. But we need tools that let us progress code across the maturity spectrum, not just demand a fixed level of maturity.

Language-independent analyses

One of the reasons that we get hung up on language so easily is that static analysis systems like to have a source-level input language. There's no reason why they need to, though. I'd argue that, as programming researchers, we have never been very good at recognising the need to accommodate diversity of programming languages in most of the infrastructure we design. This can and should change. (It's something I'm working on, intermittently.) One approach I think makes sense is “binary-level analysis, source-level properties”. We can annotate source code and push those annotations through into binaries, and check that they hold at the binary level. Binaries are what is deployed, after all, and as I've argued already, deployment is the point where the assurances are most needed. It also defuses complaints about the ambiguity of source languages. Binaries are a lot less ambiguous (and we're working on improving that too). While source-level correctness for all deployment environments, i.e. “correctness portability”, is a nice property, it's a harder problem and not always important. We should walk before we run.

Simple and not-so-simple bugs

Is it enough to consider only the simple bugs? During the talk, Jon Crowcroft rightly put it like this (I'm severely paraphrasing): rewriting everything in a better language would bring us to the next level of problems, and these would be more interesting than most of the ones we currently face, but (by implication) such problems will exist. It's not clear to me that a similarly nightmarish scenario could not occur in BIND owing to some much more subtle bug than a buffer overflow (or similar). If what I just said happens not to be true for BIND (again, following Jon's comments that DNS and other core internet services are simple), it's probably true for some other critical system (like the web).

The static-to-dynamic continuum

Some discussion of assertions during the talk was interesting. One audience member commented that some software crashes more than it would if it didn't contain assertions, either because the assertions are wrong or are not sufficiently critical properties to justify termination of the program. I find it hard to blame the assertions for this. It's true that if we must tolerate a risk that assertions might fail in production code, “log and [attempt to] continue” is a marginally better strategy than “abort”. But Steve Crocker countered that for critical, deployed software, assertions should be proved to hold—a position I agree with. That's a moral “should”; in practice we have to be very clever about how we enforce this. For example, we wouldn't want to unwittingly encourage programmers to delete assertions that they couldn't prove to hold. More importantly, to repeat my last paragraph, we need to allow developers to progress a codebase from immature to mature states. We might only deploy it when it's mature, but for development, we need to have something which admits run-time failures, yet is still executable. This is another reason why I believe a fixed compile-time analysis isn't the right approach, and that we should instead choose the level of assurance we want at deployment time. The tool that establishes (or denies) this assurance might also be a compiler, but needn't be.

Assertion languages, not annotation languages

Even though we don't currently prove assertions to hold, I'd argue that assertions are a great success because they elicit specification from programmers. Most programmers use them, and they elicit a lot of subtle properties which might otherwise not get explicitly written down. Improving the expressiveness of assertions is an approach that I'm particularly keen on. The TESLA work of Jon Anderson, Robert Watson and colleagues, on temporal assertions, is a great example. I have some work brewing about expressing type correctness using assertions.

The neat thing about assertions is that they are expressed in the programming language, and easily understood by any programmer. I believe that building specification and verification infrastructure out of programmer-friendly constructs, like assertions, is a necessary step to making them standard practice. We should avoid forcing programmers to use formalisms that don't map very clearly to the program. Again, TESLA is an example of this. Reportedly, programmers find it easy to write TESLA automata. They might not find the same thing easy to write in LTL or CTL*, even though those formalisms might be more friendly to somebody who writes verification algorithms. So, when I hear mention of “annotation languages” as a separate thing that needs to be bolted on to a source language, my reaction is to ask whether these annotations can be expressed in the source language itself, augmented by only a minimal and programmer-understandable set of new primitives. In my book, such a property then becomes an assertion rather than an annotation.
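
As a toy illustration of that preference (my own example, and nothing like TESLA's actual syntax), even a temporal property can often be stated with ordinary assertions over an explicit state variable, staying entirely inside the language the programmer already knows.

#include <assert.h>

/* A toy sketch: the temporal property "never unlock what isn't locked",
   expressed with plain assertions over an explicit state variable rather
   than in a separate annotation formalism. */
enum lock_state { UNLOCKED, LOCKED };
static enum lock_state state = UNLOCKED;

void my_lock(void)   { assert(state == UNLOCKED); state = LOCKED; }
void my_unlock(void) { assert(state == LOCKED);   state = UNLOCKED; }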

The performance culture

This is a bit of a departure but I can't help throwing it in. It's generally considered that infrastructure that slows programs down more than a few percent is not acceptable for deployment use. This is in spite of the relative plenty of computing power and the relatively huge expense of simple bugs like buffer overflows. Suppose we reach a point where all the “basic” bugs are routinely proved absent via verification. There might be some less basic properties which we can only check dynamically at nontrivial cost. Do we leave the checks in, or not? I'd argue that the performance nuts are usually getting the calculations wrong. The cost of extra servers pales next to the cost of downtime, security compromises and the like, and it also pales next to the cost of debugging subtle failures that are not caught cleanly. Unfortunately, saying that the slowdown is too great is still a popular way to reject research papers. The commonly accepted criteria about “suitability for deployment” are finger-in-the-air stuff.

The I/O bottleneck

This was the least obvious of my thoughts, but is for me one of the most important. Let's consider the Heartbleed bug, which is triggered when a malicious client supplies a bogus length field and causes the server to overrun its buffer. This particular overrun would have been prevented by a memory-safe language, because the buffer copy would be bounds-checked. But why do people write code at the level of bytes and buffers anyway? Despite what some people write, “type safety” is not quite the issue here. Bytes and buffers are fundamentally semantically impoverished things, and yet we find ourselves writing this kind of code in every language, no matter how “type safe”, and no matter how expressive its data abstraction features might be. Coding at the level of bytes and buffers is always error-prone, even if one or other kind of error is ruled out by one or other kind of check mandated by a given language.

The reason we do it is I/O. I/O happens on bytes and buffers, because that's what the libraries expose. In turn, they blame it on the operating system: that's what Unix exposes. Once upon a time there was a lot of work on persistent programming languages, but they have not taken off. I suspect it's because they require too much buy-in. We don't want our data to be an opaque box that is managed only by a language runtime. What we need instead is to add specification to the byte-streams we already have. If we produce or consume byte-streams, we should specify their format, both syntactically and including some basic semantic properties. If we're OpenSSL, we'd specify that the first two bytes of the heartbeat message are a length field, and that it should be equal to the remaining length of the buffer. This can already be expressed in languages like Datascript. We might be able to come up with something even better (ask me!) but in any case, we also now need to push this kind of specification facility deeper into our libraries, language runtimes and operating systems. Once we've done so, it's trivial to insert bounds checks. We can even check subobject bounds inside the payload, i.e. referential integrity within the payload itself. In general, once we know the meaning of the bytes, we can actually treat I/O payload data abstractly. By contrast, even a “type safe” language currently only achieves abstraction by copying the data into data structures that it manages. Doing so unavoidably requires interpreting those bytes, and doing so unsafely—any checks that are done are entirely down to the programmer.
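
To make the heartbeat example concrete, here is a hedged sketch of the kind of checkable description meant; the layout follows RFC 6520 (a one-byte type, a two-byte payload length, then payload and at least sixteen bytes of padding), and the semantic constraint relates the claimed length to the record actually received.

#include <stddef.h>
#include <stdint.h>

struct heartbeat_view {
    uint8_t  type;
    uint16_t payload_length;     /* as claimed by the peer */
    const uint8_t *payload;
};

/* Refuses to produce a view (returns 0) if the claimed length would overrun
   the bytes we actually hold -- the check that Heartbleed was missing. */
int parse_heartbeat(const uint8_t *rec, size_t rec_len, struct heartbeat_view *out)
{
    if (rec_len < 3 + 16) return 0;                    /* header plus minimum padding */
    uint16_t claimed = (uint16_t)((rec[1] << 8) | rec[2]);
    if ((size_t)claimed > rec_len - 3 - 16) return 0;  /* length field vs. reality */
    out->type = rec[0];
    out->payload_length = claimed;
    out->payload = rec + 3;
    return 1;
}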

It's no coincidence that I've advocated this before. I'm working on it—after I work on all the other things that come before it in my list. If somebody wants to join me, that would be great....

[/research] permanent link contact

Fri, 13 Jun 2014

Linking and loading: what's incidental?

Not long ago I gave a couple of talks about linking, loading and debugging here in Cambridge. It's a notoriously grungy area of our infrastructure, and a recurring question people asked me after the talks was as follows: what's the incidental complexity and what's the intrinsic complexity? Put differently: how might we redesign the whole lot nowadays, to get a less complex end result?

It's hard to answer, and partly it depends on what you value. On the surface, it seems like linking should be conceptually simple. Digging deeper, though, it becomes apparent that there are a lot of concerns involved, and they are often in tension, making it very tricky for any simple solution to satisfy all requirements. That doesn't mean that simpler solutions are not possible, but we should not underestimate the intrinsic complexity either.

What intrinsic complexity am I talking about? I can see four concerns central to the design of ELF dynamic linking. One is link speed: linking should be as fast as possible. Another is memory sharing: shared libraries were introduced (in part) to save memory. Another is flexibility, achieved through interposability: it should be possible to customise the link to add or replace features without modifying binaries. The last is federation: ELF is shared by many vendors' toolchains and many OSes, so the ELF runtime necessarily has a descriptive role, which is the basis for cross-toolchain runtime protocols for stack walking, exception handling and the like.

Part of the problem is that many people don't value all four of these concerns simultaneously. Link speed is valued by software distributors (who want to cut down startup times) and people who use embedded platforms (who can't afford overheads), but not workaday developers or, directly, users. Position-independence is valued by people who run the kind of systems where it saves memory, like desktop systems, and not by those who don't, like most servers. (Plan 9's decision not to bother with shared libraries is both reasonable, if you run mostly servers, and unreasonable, if you run mostly large desktop applications.) Interposability is valued by people who write tools and those with unusual requirements—which often entail reimplementing selected parts of core libraries (think malloc())—but not by those who don't. Federation is valued by those who value “openness”—competition among vendors, interoperation of tools, multi-language programming, and so on—but not by those who are comfortable with closed platforms, VM-style designs and/or “one true language”.

Few people care about all things simultaneously, so linking seems overcomplicated to everyone. But all these concerns add to complexity individually, and the tension between them also increases complexity. Link speed concerns create incentives for doing more work in the compiler or compile-time linker. This creates “premature optimisation” phenomena which complicate linker implementations (by adding to the variety of relocations) and also complicate the user-facing parts of the linker (such as getting the link options right). Sharing memory entails position independence; this creates complexity for the compiler author, who must support multiple code models, and the user who must grok the related compiler options. Interposition creates complexity for the linker author and user, and costs in performance since it prevents some binding happening earlier than it otherwise could. Federation adds complexity for the compiler author, who must take care to correctly follow conventions and emit metadata such as unwind information and debugging information. It also brings some cost at run time, such as how a cross-language cross-vendor exception protocol is slower than a custom one.

What design decisions can we rethink to get a system that satisfies all these requirements but with less incidental complexity? I believe that a smarter linker could help. The Unix-like linker design, as evolved to elaborate formats such as ELF, can be described as “smart format, dumb linker”. A key “dumbness” of the linker is that it does not know about instruction streams. Rather, the linking format describes all the ways that relocations can be applied as simple arithmetic on byte sequences. The linker's output is a mostly-deterministic function of its inputs (including linker scripts).

If we had a smart linker, we could have slightly dumber compilers—a win for simplicity since there are many more compiler implementations than linker implementations. We could also have less user-facing complexity: the compiler could just output code according to the slowest but most flexible code model, and the linker could take care of optimisations at the instruction level, including position independence. The key difference between such a linker and the current one is that a linker that can do these things must be able to parse and rewrite instruction streams. Current designs have been crafted to avoid this, but I believe this is a good fit for linking and not the burden that it first appears to be.

A smarter linker needn't yield slower programs than a dumb one. In fact it could speed them up. Link-time optimisation is rarely used at present, and not at all across dynamic linking. Meanwhile, interposition has nontrivial run-time costs in some cases, since symbol bindings must be computed at load time and run time. A smarter linker could use its awareness of instruction streams to deploy a variety of strategies to speculatively bind, cache and unbind references, both in disk images (which can be thought of as a less ad-hoc prelink) and in memory (e.g. inlining calls that currently go through the PLT). The former option can be thought of as link-time deoptimisation: we store on disk a binary that has been adaptively optimised for the common case on the particular system, and then if we find that an unusual link requirement comes along, the linker can produce a more flexible deoptimised version. The idea of “linkers as servers” has been explored before (e.g. in the OMOS linker, from Utah) but not in this particular way.

Compatibility concerns have also greatly complicated linking. Before shared libraries, things were fairly simple. To link against the C library, you link with -lc. This is just a bunch of object code in an archive. Contrast that with now. Shared libraries have been added in a “compatible” way that sits alongside, but doesn't replace, archives. As a result, we use both, and get double the complexity. For example, on GNU platforms we mostly use shared libraries, but libc_nonshared and -lgcc et al complicate all that. Since “compatible” shared libraries could not quite offer a complete solution for replacing the old-style linking, we have bifurcated the previous clean static linking model into a mostly-shared-but-partly-static model. A smarter linker could internalise the goal of code sharing. By rewriting instruction streams at load time, any compiled code is potentially shareable, and it's the linker's job to extract an adequate degree of sharing. There's no need for the user-facing complexity of distinguishing shared from non-shared linking.

Shared libraries themselves must bring some added complexity, since being (logically) shareable is a stronger requirement on a chunk of code than having it work only in a single context. In particular, robust shared libraries require symbol versioning or some other notion of “semantics”, to account for the inevitable evolution of software. Current designs for symbol versioning are themselves a great example of unnecessary complexity. Again, they were designed to be a transparent extension to what already existed—versionless shared libraries. This has yielded an overly circuitous solution. By pushing symbol versions into the symbol name, a lot of complexity is pushed to other contexts. Most irritatingly, clients of dlsym() are essentially broken by it, because they must deal with symbol names munged to include versions (or must switch to using dlvsym()). The “exact match” semantics offered by dlsym() no longer offer the right abstraction when looking up symbols provided by versioned libraries. But this solution was accepted because it didn't syntactically break anything (and semantics are the user's problem, right?). The ideal solution would look up symbols not (just) by name but also (partly) by their semantics.
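
For concreteness, the dlvsym() escape hatch mentioned above looks like this; it is a GNU extension, the version string below is just an example of the kind of toolchain detail clients are forced to know, and older systems need -ldl at link time.

#define _GNU_SOURCE
#include <dlfcn.h>
#include <stdio.h>

int main(void)
{
    void *h = dlopen("libc.so.6", RTLD_NOW);
    if (!h) return 1;
    /* Plain dlsym gives us whichever default version the resolution rules pick... */
    void *unversioned = dlsym(h, "memcpy");
    /* ...whereas dlvsym must name a specific version explicitly. */
    void *versioned = dlvsym(h, "memcpy", "GLIBC_2.2.5");
    printf("%p %p\n", unversioned, versioned);
    return 0;
}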

Interposition is also arguably not a fully realised idea. One problem is that not every symbol is interposable, because of the static/dynamic split I mentioned earlier: some programs, and some parts of most programs, are still linked statically. Moreover, “interposition misses” easily occur where the interposer doesn't interpose on all the def–use edges it's supposed to. For example, in glibc, many library-internal calls to low-level functions like mmap() don't go via the publicly-exposed symbol. Calls that appear to be to open() might actually be linked to a mysterious alias of that symbol, such as __open_2(). Although interposition is still useful, the possibility of “missed” calls and the need to grok undocumented binary interfaces like __open_2() make interposition-based software fragile and expensive to maintain. This could be fixed by a more uniform dynamic linking model and a more semantically-founded notion of interposition. One that might work would be interposition based on code paths, not entry points. Again, implementing such a notion inside a linker would require that linker to understand instruction streams and the control flows they allow. When we ask to interpose on a (function) symbol, what we're really asking for is something more like inserting a new call-graph node that dominates all those in the interposed-on call graph. (It's a bit more complicated than that, since this alone doesn't solve the glibc problem I mentioned, but this basic idea can likely be extended to yield the interposition semantics we really want.)

Hardware complexity also leaks into linkers. The root cause of complexity in relocation and code models is the diversity of addressing modes offered by hardware. Linker authors and others working at the toolchain level tend to take on trust the good sense of the hardware designers' decisions. Whether this complexity is intrinsic or incidental depends on whether we allow ourselves to rethink the hardware too. For the moment I'm going to take this diversity as a hard constraint, but I'd welcome views from more hardware-minded people.

All this is not to say that old-fashioned complexity creep isn't behind its share of problems. Debugging information is a great example of “limb growth” and is crying out for a cleaner design, ideally one that helps reduce the burden on compiler authors and/or improve the accuracy of the debugging information they generate. For linking proper, though, the old-fashioned Unix model of “dumb linking” has stayed relatively clean. It's been keeping that model, in the face of new requirements such as shared libraries, versioning and so on, that has complicated things. I believe it's no longer the right model. Luckily, we can produce a smarter linker that is nevertheless a drop-in replacement for the standard one, so we have an evolutionary path towards something better.

[/research] permanent link contact

Wed, 14 May 2014

Instrumenting casts in C++

A reviewer of my libcrunch paper wanted to see me back up my claim that its architecture is “multi-language” by demonstrating support for another language. In particular, for some strange reason, he wanted C++ support.

I had shied away from C++ support because C++ front-ends are infamously complex. I don't know of a particularly friendly system for instrumenting C++ code. But on reflection, perhaps this isn't necessary. One of the joys of C++ (yes, you heard me) is that it's designed so that user-defined constructs are more-or-less on par with built-in constructs. My instrumentation needs to instrument pointer casts which would otherwise be unchecked. If you follow good C++ style, these are done using static_cast and reinterpret_cast. (Note that dynamic_cast is already checked by the runtime.)

Can we define our own checked_cast template that is a drop-in replacement for these two built-in operators? Then all we need is to #define the latter to the former and we've done the instrumentation.

I've managed to get something that appears to work well enough, but this is easier said than done. We might be tempted to write a function template like the following.

template <class From, class To>
To checked_cast(const From& f)
{
    // ... do our check, then...
    return static_cast<To>(f);  // or reinterpret
}

But that's no good, because we need to distinguish the case where To is a pointer from where it's not. We can't partially specialise function templates, so we can't do this. It's also no good because some instances of static_cast need to be constexpr. Indeed, static_cast to and from integers and chars is used extensively in header files, in constexpr contexts such as expressions which size arrays, or the bodies of inline library functions that are themselves constexpr. We need a more fine-grained approach.

Let's try a class template. This is awkward because casts have to look like function invocations. More specifically, we're instrumenting expressions of the form cast_operator<T>(expr) and we need something that drops in where the cast_operator bit goes—we can't modify the T or expr parts, or add anything afterwards. What seems to work is defining a class template such that the cast expression becomes a constructor invocation of a class we provide, say checked_cast_t<T>. We then define a type conversion operator to turn it back into something of the result type. The compiler inserts the conversion implicitly, so we have a drop-in replacement.

#include <utility> // for std::move

template <typename To>
struct checked_static_cast_t
{
    To temp;
    // implicit conversion back to To: the compiler inserts this, so the
    // temporary drops in wherever the original cast expression's value went
    operator To() const { return std::move(temp); }

    // construct a temporary, which decays
    template <class From>
    checked_static_cast_t(const From& arg) : temp(static_cast<To>(arg)) {}

    // overloaded constructors go here
};

// ... specialisations go here

#define static_cast checked_static_cast_t

We need to specialise the above to make various cases constexpr. In particular, we want all casts to integral types to be constexpr, and all casts from integral types to be constexpr if they're not going to a pointer type. We handle the former by specialising the whole template, instantiating To for the various built-in integral types. We handle the latter by overloading constructors to be constexpr (which, perhaps surprisingly, does work). We also need constructors that take rvalue references. All that expands out to a lot of tedious code, but the result happily compiles various standard library headers that make use of static_cast and reinterpret_cast. We can then supply the header to our compiler using a command-line -include option, and hey presto: we've instrumented casts in C++ using just the C++ language itself.
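
To give a flavour of those specialisations, here is a rough sketch of just one, for casts to int, which would slot in at the “specialisations go here” point, before the #define. The real thing needs many more of these, plus the rvalue-reference and extra constexpr-constructor overloads mentioned above, so treat this as illustrative rather than exact.

// full specialisation: casts *to* int must stay usable in constexpr contexts
template <>
struct checked_static_cast_t<int>
{
    int temp;
    constexpr operator int() const { return temp; }

    // casting to int never needs a pointer check, so make it constexpr
    // and skip the check entirely
    template <class From>
    constexpr checked_static_cast_t(const From& arg) : temp(static_cast<int>(arg)) {}
};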

It's not perfect, though. We can't instrument C-style casts. It's also certainly possible to construct code that compiles without the instrumentation but not with it, by requiring pointer casts whose results are constexpr (and which, by definition, we refuse to provide). In particular, it'd be nice if constexpr were a modifier that could be a basis for overloading—then we could ensure that a cast with constexpr input always yields a constexpr result, by overloading this case explicitly. As it is, I'm hoping it's good enough; I'll post back later when I have more experience of using it.

[/research] permanent link contact

Tue, 08 Apr 2014

Dynamic linking and security

The thoughts in this post were provoked after reading Tim Brown's very interesting Breaking the Links article.

Dynamic linkers are notable for privilege escalation bugs. The reason is their interaction with the setuid mechanism, and indeed any mechanism that associates privileges with an executable. Unix's original model where executables are trusted in their entirety is fundamentally flawed on modern platforms that have shared libraries, where executables usually link in other code, some of which can be supplied by the user. Rather than getting rid of the now-flawed setuid mechanism, currently dynamic linkers instead impose a raft of ad-hoc restrictions, lashed together in the hope of closing off any paths by which user-supplied code can get into a setuid process. They must also balance this against another goal: to avoid the unwanted side-effect of ruling out some perfectly trustworthy compositions. Unfortunately, these ad-hoc measures invariably fail on both counts.

What does setuid mean? It means that the invoking user gains access to any behaviour the setuid program allows when executing with the program owner's effective uid. Attackers escalate their privileges by introducing unanticipated code which widens that set of behaviours. Can we take a better approach? One naive idea would be to construct the process as normal, and then check that it includes only trusted code; at that point, we decide whether it runs with elevated privileges or not. (A wart is that we also have to account for code loading after the start of execution; let's ignore that for now.)

Does this break anything? Certainly it will do. I might run a job that spawns a process tree in which some subprocess is setuid. Normally I can run the tree with some other library LD_PRELOADed, expecting that although my library won't get preloaded into the setuid process, that process will still run with elevated privileges. Under our proposed new model, if we do the preloading then discover that the preloaded library is not trustworthy, we will run it with lower privileges, and likely break the process tree (assuming the process really needed to be setuid).

This is a feature interaction, and what we need is a policy for resolving the interaction. Current Unices have the policy that “setuid clobbers LD_PRELOAD”. The alternative we just considered is that “LD_PRELOAD clobbers setuid”. Neither of these seems adequate. Perhaps instead we can evolve things towards a more subtle mechanism that can avoid the interaction in the first place, say by selecting among untrusted and trusted libraries. For example, if there are multiple available versions of some library, we might use the more trustworthy one instead of the one that a fixed set of name lookup rules guides us towards. In general, we can see this as resolving ambiguity among a set of depended-on library specifications in a way that maximises value (both trust and functionality).
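
To fix the shape of that idea, here is a toy sketch, entirely hypothetical (the Candidate record and trust_score() heuristic are made up), of resolving one link requirement by scoring candidates rather than obeying a fixed lookup order.

#include <string>
#include <vector>

struct Candidate
{
    std::string path;      // found via RUNPATH, LD_LIBRARY_PATH, a possible ORIGIN, ...
    bool system_installed; // e.g. root-owned, in a default search directory
    bool user_supplied;    // e.g. preloaded, or found under a user-writable path
};

// a made-up scoring heuristic standing in for "value (both trust and functionality)"
static double trust_score(const Candidate& c)
{
    double score = 0.0;
    if (c.system_installed) score += 2.0;
    if (c.user_supplied)    score -= 1.0; // not banned outright, just penalised
    return score;
}

// return the index of the best candidate for one link requirement,
// or -1 if none reaches the acceptable minimum
int resolve(const std::vector<Candidate>& candidates, double minimum)
{
    int best = -1;
    double best_score = minimum;
    for (unsigned i = 0; i < candidates.size(); ++i)
    {
        double s = trust_score(candidates[i]);
        if (s >= best_score) { best = (int) i; best_score = s; }
    }
    return best;
}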

Doing so requires a way to designate what code is trusted, not just what executables are trusted. We also need a sense of what alternative ways there are of satisfying “the same” link requirement. I have been using the example of LD_PRELOAD so far, but on ELF platforms, link requirements (beyond the executable) are specified as either a PRELOAD or (more often) a NEEDED, a.k.a. a DT_NEEDED entry in ELF's .dynamic section.

To find alternative implementations of “the same” requirement, we can mine the redundancy inherent in RUNPATH, LD_LIBRARY_PATH and perhaps the multiple notions of ORIGIN that can be created by hard-linking. Each of these might provide multiple candidate libraries. Setting up a fake ORIGIN is a trick familiar to crackers, but we can turn it around by enumerating all possible ORIGINs of a given shared object and considering all the libraries we find there. (Sadly this requires a scan over all directories in the filesystem, and in the common case will yield only one library. But this approach will defeat link-based attacks, since even after hard-linking, we will still find the original library, and any sensible trust heuristic will select it in preference.) ABI tag matching (modified by LD_ASSUME_KERNEL) is another instance of how the linker looks for libraries in particular places satisfying particular properties. Currently this is very rigid, but it could be generalised into a search/optimisation problem in which paths supplied by developers, administrators and users serve as hints and bootstrapping input, rather than definitive instructions.

This approach brings two further problems. Firstly, what's to prevent us from choosing a probable-looking binary that is semantically broken (with respect to our use of it)? We can argue that all binaries with the same soname should be interchangeable, but in practice there will be difficulties. And matching by soname might be too restrictive anyway. Secondly, like any search- or rule-based system, our approach has a “delocalising” effect, lessening the administrator's explicit control and making the linker's behaviour more complex to configure and debug.

Another subtlety is that trust in the part is not the same as trust in the whole. Even if we refine Unix's notion of trustedness down to libraries rather than just executables, some exploits can work by combining trusted code in untrusted ways. The case of another linker exploit, CVE-2010-3856, is one instance of this: the library is sane enough that it could easily be deemed trusted, but we can construct a very specific context in which it is anything but. (This context is: use it as a linker-auditing library to a setuid binary, causing its constructor to be run with elevated EUID, hence allowing a temporary file exploit that would not emerge in “normal” contexts where the constructor did not have elevated privileges.) This is a classic “confused deputy” situation.

Confused deputies are always a good argument for yet finer-grained models of privilege, such as capabilities. So it's not clear whether we would get much security value from search-based link-time composition, relative to plumbing a better model more deeply into our operating system.

[/research] permanent link contact

Mon, 13 Jan 2014

C libraries and linking

At my talk today, Simon PJ asked an interesting question which I managed to give a slightly wrong answer to. I had observed that asking my C compiler to link an object file invoked the linker with a lot of extra input files, many of which are specific to the particular C library implementation being linked to. Note the various crt*.o files in the following link command concocted by gcc. These files come from the GNU C library.

$ gcc -### -o hello hello.o 
(snip)
/usr/local/libexec/gcc/x86_64-unknown-linux-gnu/4.8.0/collect2 \
  --eh-frame-hdr \
  -m elf_x86_64 \
  -dynamic-linker /lib64/ld-linux-x86-64.so.2 \
  -o hello \
  /usr/lib/x86_64-linux-gnu/crt1.o /usr/lib/x86_64-linux-gnu/crti.o \
  /usr/local/lib/gcc/x86_64-unknown-linux-gnu/4.8.0/crtbegin.o \
  -L/usr/local/lib/gcc/x86_64-unknown-linux-gnu/4.8.0 \
  -L/usr/local/lib/gcc/x86_64-unknown-linux-gnu/4.8.0/../../../x86_64-linux-gnu \
  -L/usr/local/lib/gcc/x86_64-unknown-linux-gnu/4.8.0/../../../../lib64 -L/lib/x86_64-linux-gnu \
  -L/lib/../lib64 -L/usr/lib/x86_64-linux-gnu \
  -L/usr/local/lib/gcc/x86_64-unknown-linux-gnu/4.8.0/../../.. \
  hello.o \
  -lgcc \
  --as-needed -lgcc_s --no-as-needed \
  -lc \
  -lgcc \
  --as-needed -lgcc_s --no-as-needed \
  /usr/local/lib/gcc/x86_64-unknown-linux-gnu/4.8.0/crtend.o \
  /usr/lib/x86_64-linux-gnu/crtn.o

What does this mean if I've compiled some of my program with compiler A (from some vendor whose C library is in /usr/A/libc.a, say) and some with compiler B (from another vendor whose C library is in /usr/B/libc.a)?

It's tempting to say that C compilers are strongly coupled to their library, so we must link via some unique C compiler and use only its library. Does this preclude using another C compiler for some of our program? I answered more-or-less in the affirmative... but it's not true! There are two clear (in hindsight) bits of evidence to the contrary.

The first is that empirically, it's easy to see the same C library being used by multiple compilers. The important thing is that there's only one set of library headers. When I install clang on my Linux box, it happily uses the incumbent glibc library headers when compiling. When linking, it happily issues the right linker command to link with the glibc binaries. Indeed, it issues a very similar linker command to the one we saw earlier. We can again see the glibc-provided crt*.o objects being linked in.

$ clang -### -o hello hello.o
Ubuntu clang version 3.2-1~exp9ubuntu1 (tags/RELEASE_32/final) (based on LLVM 3.2)
Target: x86_64-pc-linux-gnu
Thread model: posix
 "/usr/bin/ld" "-z" "relro" "--hash-style=gnu" "--build-id" "--eh-frame-hdr" \
 "-m" "elf_x86_64" "-dynamic-linker" "/lib64/ld-linux-x86-64.so.2" \
 "-o" "hello" \
 "/usr/bin/../lib/gcc/x86_64-linux-gnu/4.7/../../../x86_64-linux-gnu/crt1.o" \
 "/usr/bin/../lib/gcc/x86_64-linux-gnu/4.7/../../../x86_64-linux-gnu/crti.o" \
 "/usr/bin/../lib/gcc/x86_64-linux-gnu/4.7/crtbegin.o" \
 "-L/usr/bin/../lib/gcc/x86_64-linux-gnu/4.7" \
 "-L/usr/bin/../lib/gcc/x86_64-linux-gnu/4.7/../../../x86_64-linux-gnu" \
 "-L/lib/x86_64-linux-gnu" \
 "-L/lib/../lib64" "-L/usr/lib/x86_64-linux-gnu" \
 "-L/usr/bin/../lib/gcc/x86_64-linux-gnu/4.7/../../.." "-L/lib" "-L/usr/lib" \
 "hello.o" "-lgcc" "--as-needed" "-lgcc_s" "--no-as-needed" "-lc" "-lgcc" \
 "--as-needed" "-lgcc_s" "--no-as-needed" \
 "/usr/bin/../lib/gcc/x86_64-linux-gnu/4.7/crtend.o" \
 "/usr/bin/../lib/gcc/x86_64-linux-gnu/4.7/../../../x86_64-linux-gnu/crtn.o"

But how does it know about these files? The answer is worse than I had imagined. The file lib/Driver/ToolChains.cpp in clang's codebase embodies a ton of knowledge about linking on different platforms—even down to individual GNU/Linux distributions and versions thereof. Unsurprisingly, this is fragile and has been known to spawn bugs, like this one.

The second bit of evidence is in how most C compilers let you tell them to use a foreign set of headers, which could be from any C library implementation we like. To avoid the “standard” headers in /usr/include you need to use an option like -nostdinc, and then use -I to point it at the right headers. Assuming the headers don't use any non-standard features, there's no reason why any compiler couldn't generate code using any other vendor's library headers.

Of course, “there's no reason why not” often translates to “it is unfortunately impossible”. A data point in this case is provided by uClibc, a small replacement C library, whose FAQ notes that “it is possible in some limited cases to re-use an existing glibc toolchain and subvert it into building uClibc binaries by using gcc commands such as -nostdlib and -nostdinc... [but] it proved impossible to completely subvert an existing toolchain in many cases.” I'd love to dig into the details of what didn't work, but that will have to wait for another day. (It might just amount to the same issue we've noted, i.e., making the C compiler generate the right link commands... but it wouldn't surprise me if there's more to it.)

Another case I haven't handled: what about multiple C libraries (or parts thereof) in the same process? An obvious problem is conflicts in the symbol namespace—since the ABI likely requires a unique definition of some symbols, forcing a choice between the two libraries. (Chapter 6 of the System V AMD64 psABI is in a vague state, but appears to confirm the existence of this kind of constraint.) However, with enough hammering on symbol renaming and scope-localisation, there's no reason why certain portions of two different C libraries couldn't coexist. But it seems unlikely that two implementations of any low-level parts (such as startup and shutdown, threading and signal handling) could be combined in a working fashion without being explicitly designed to do so.

In summary: any compiler “should” in principle be able to generate code targeting any C library on a given platform, but there are inevitably some warts that inhibit this mix-and-match in certain cases. Moreover, knowing how to link to a given C library requires knowledge of nasty implementation details of that C library. These details could perhaps be “promoted” to a vaguely standardised ABI detail of (in our case) the combined GNU/Linux/glibc platform, but this hasn't been done so far.

Another question that Simon asked was whether we could apply some hindsight to come up with a simpler model which avoids the hair-raising complexities that I'd talked about. One suggestion I had was a single virtual address space, which would eliminate the need for position-independent code in shared libraries (since a library's load address could be assigned at deployment time). Later, Raphaël also reminded me that Plan 9 does away with shared libraries altogether, apparently without it costing too much in memory. I'm sceptical that this wouldn't be costly on a desktop system though (think how many copies of the KDE or GNOME libraries you'd end up mapping, for example). I'm also a big fan of Multics-style orthogonal persistence, which ties in quite nicely with the SVAS approach, but is a far-reaching change. Meanwhile, I think the trade-offs surrounding large-versus-small processes and the complexity of near-versus-far addressing modes are quite difficult to avoid (without an obvious performance hit), since they come to us from the hardware. Perhaps we could use reconfigurable hardware somehow to push all that complexity out of the view of compiler writers... but I doubt any hardware designers consider this a problem worth tackling.

I'm giving a “Part 2” follow-up talk on (most likely) Monday 3rd February.

[/research] permanent link contact

Tue, 26 Nov 2013

(Tell me why) I don't like Java

I can just about cast my mind back to when I learnt Java. It was 2001, when I was a 17-year-old C++ programmer. (Actually the particular version of Java I first learnt was called C#, but let's not complicate things.) I remember being fairly impressed. It struck me as a C-family language that had kept enough of C++'s features to do abstraction neatly enough, modulo the occasional grumble, while making some impressive complexity savings by carefully-chosen omissions. (I knew nothing about Smalltalk at the time.)

At the time, many of these omissions were bold, and the way they worked together was, to my 17-year-old self, ingenious. Lack of subobject structure keeps garbage collection simple. In turn, garbage collection (with compaction) allows faster heap allocation, clawing back some of the cost of all these extra objects. Even doing away with templates, by relying on checked downcasts and a common supertype, had its elegance, despite now being supplanted by generics. Grumble-wise, I won't say I wasn't alarmed by the lack of multiple inheritance and operator overloading. But overall I wasn't unsympathetic to what Java was doing.

Nowadays, the mention of Java makes me groan (in my mind). It seems to be a combination of the intrinsic and extrinsic properties that makes it so. Thinking extrinsically, Java has a huge, huge mindshare. Like market share, mindshare is dangerous once one player gets hold of too much of it. Many intrinsic details of Java would be unremarkable warts in another context, but have a giant significance, because they're what a large proportion of programmers think are “the way programming is”. In reality, Java contains a selection of somewhat-arbitrary design choices made by clever but flawed human beings some time within the last couple of decades. By letting them become ingrained, we are closing people's minds and inhibiting adoption of better alternatives. What follows is a quick run-down of how I see the problems.

As a language, it's boring. This is actually one of the least bothersome things about Java, but it needs mentioning here. As a researcher, doing any work that is specific to the Java language feels like work not done properly, because it avoids a lot of potentially more interesting cases. (I'm being a little facetious here, but I hope the underlying point is visible.) Against today's competition, Java seems remarkably inexpressive as a language. C++ is cleaner than ever, Scala is achieving a C++-like complexity curve with nicer syntax and marginally less unclear semantics, and dynamic languages are faster than ever. What keeps Java going is its self-perpetuating ubiquity. It also helps that Java is in a sweet spot regarding IDE assistance: Java code has enough statically-derivable properties to do automations like refactoring and autocompletion reasonably precisely, while being simple enough—unlike C++ and, I venture, Scala—to implement them reasonably correctly without giant effort. Is this a good thing or not? It has no doubt delivered certain innovations to practitioners faster than would otherwise be possible. But right now, to me, its net effect seems to be to stifle innovation in better languages.

As a learning device, it's a dog's breakfast. This is probably my number-one beef. Java is too complex to cleanly expose concepts like procedures, structured data, or pure object-oriented programming—never mind functional styles of programming. Yet it is too semantically simplified (constrained) to reveal essential contrasts and orthogonalities, such as between by-value versus by-reference, heap versus stack, dynamic versus static type checking, and inheritance versus subtyping. Various conflations of these are the deliberate features of Java's design that make it simple (relative to C++, say). But now that Java is ubiquitous, they start to be harmful too. I concede that just because a language isn't the ideal vehicle for exploring conceptual distinctions doesn't make it worthless as a programming tool—far from it. But the effect of mindshare is huge. The concepts of programming are harder to teach in a world where we must teach Java, because it is not a vehicle for cleanly conveying any of these concepts. My impression is that today's programmers learn fewer languages than ever, so it is harder to establish contrasts. Even diverse languages are being shoehorned onto the JVM, further enshrining Java's [bytecode's] limited vocabulary as a set of fundamentals. It's like some perverse “worse is better” situation, except that the usual point in favour of the “worse” solution, namely simplicity, is not much in evidence. As I'll rant in a moment, Java is a very complex beast below the language level.

Portability is a myth. Java's portability was its key selling point, but it's not clear that it has succeeded. Although Java makes it hard to write code with nonportable semantics, that is only a small part of the portability puzzle. Libraries are a big issue. JVMs are complex enough to suffer compatibility problems among each other, too. So now we just have portability problems one level up. I've been sceptical about the drive for portability because in general, no two portability requirements are quite the same. Having “the same” behaviour among every program built using technology X, which is my paraphrase of Java's portability sell, is primarily a benefit to the authors of technology X, not to the software's users, nor even to the application developers. For example, as Ian Lance Taylor pithily blogged (talking about Tk), saying that applications look the same on any platform means they look odd on every platform. Attempting to insulate the JVM and its libraries from all details of the host system has become a dogma pursued far beyond its usefulness. This theme continues....

As a deployment infrastructure, it's a mess. Again in the name of “portability”, the Java platform tries to hide all trace of the underlying system, defining its own conventions for configuration (classpath, system properties), resource limits (Java heap size), its own archive formats, its own security model, and so on. The problem is that the underlying system always has some way of doing these things, and by duplicating this functionality, the end result is excess complexity and hard-to-understand interactions. Even despite massive uptake, the JVM isn't the only runtime. The approach of eliminating complexity by defining “one platform to rule them all” is a hopeless modernist ideal. In practice, it just adds one more platform to the mix, causing a strict increase in complexity. This is the key paradox of attempting to achieve simplicity through portability. It can only succeed if the portable thing completely hides the underlying complexity. This is less and less likely the higher up the stack you try it. Instead we should aim to build systems out of simple (but not necessarily portability-providing) pieces, then achieve systemic assurances by reasoning about whole compositions. Typically, the kind of “whole composition” I'm talking about would be a runtime plus an operating system plus an instruction set architecture. (Interestingly, I contend that stopping at the ISA is sensible; we don't have to go all the way down to microarchitecture. Unlike JVMs, ISAs are low-down enough that they can hide the underlying complexity fairly well.)

As a development infrastructure, it is diabolical. Debugging and dynamic analysis on the Java platform are hugely, horribly flawed. I wrote a paper about one aspect of this a while back. Primary evidence is how the leading implementation (namely Hotspot) implements various profiling and debugging tools (like hprof) using private interfaces (namely the “serviceability agent”), because the pseudo-standard interfaces aren't good enough. And that's in spite of their unfathomable complexity. These interfaces—JVMTI, JDI and friends—are not officially part of the JVM specification, and are pseudo-standards in that no two implementations are quite alike. They also offer inherently poor coverage, because two large chunks of the process's code—namely natives and the VM itself—are respectively not covered, or covered in limited fashion using a disjoint mechanism (which may or may not be implemented). As a result, if you want to do something as simple as observing all allocations of Java objects in your program, you have to do three different (and fiddly) things: instrument a selection of bytecodes (those that create objects and arrays), handle JVMTI's VMObjectAlloc callback, and write JNI function interceptors to catch creation from native code. Even then, you're a long way from understanding the memory behaviour of your program, since, surprise surprise, native allocations—including (on most VMs) those made by the VM itself—are actually very significant. There was even a paper at OOPSLA 2010 about this. There are analogous problems in other kinds of dynamic analysis in Java. Try implementing an information flow analysis at the JVM level, and you will be stymied, because information is continually flowing through native code and through the VM itself, and these are wide, complex, undocumented interfaces. (By contrast, doing this at the whole-process level requires only modelling the system call interface, which is manageably-sized and stable.) Java-specific interfaces simply cannot cover, by definition, the whole behaviour of your program.
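
For a taste of just one of those three fiddly things, here is a minimal sketch of the JVMTI piece: an agent that claims the relevant capability and handles VMObjectAlloc. It is written from memory and stripped of error checking, so take the details as approximate.

// build as a shared library and load with -agentpath:
#include <jvmti.h>
#include <cstring>
#include <cstdio>

static void JNICALL on_vm_object_alloc(jvmtiEnv *jvmti, JNIEnv *jni,
    jthread thread, jobject object, jclass object_klass, jlong size)
{
    // this event only covers allocations not visible to bytecode
    // instrumentation, hence still needing the other two mechanisms
    fprintf(stderr, "VM-allocated object of %ld bytes\n", (long) size);
}

extern "C" JNIEXPORT jint JNICALL Agent_OnLoad(JavaVM *vm, char *options, void *reserved)
{
    jvmtiEnv *jvmti = NULL;
    vm->GetEnv(reinterpret_cast<void**>(&jvmti), JVMTI_VERSION_1_0);

    jvmtiCapabilities caps;
    memset(&caps, 0, sizeof caps);
    caps.can_generate_vm_object_alloc_events = 1;
    jvmti->AddCapabilities(&caps);

    jvmtiEventCallbacks callbacks;
    memset(&callbacks, 0, sizeof callbacks);
    callbacks.VMObjectAlloc = &on_vm_object_alloc;
    jvmti->SetEventCallbacks(&callbacks, sizeof callbacks);
    jvmti->SetEventNotificationMode(JVMTI_ENABLE, JVMTI_EVENT_VM_OBJECT_ALLOC, NULL);

    return JNI_OK;
}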

Debugging technology has gone backwards. This is a bit of a specialist rant, so forgive me. It builds on what I just wrote about dynamic analysis. The conventional approach to debugging takes a simple and high-coverage abstraction, namely the memory image of the debugged process, as the baseline. On top of this, we selectively describe how source-level language features are realised. We do this by documenting the compiler's (and runtime's) implementation decisions, using standard (albeit quirky and complex) formats like DWARF. Apart from GNU Java, no Java implementation I know of does this. Instead, they rely on Java-specific interfaces. In so doing, they take a big technological step backwards, for no better reason than expedience. I can excuse VM prototype developers for taking the short cuts of knocking up an in-process debug server with a fixed wire protocol, and building shim “tool interfaces” as thin abstractions of their internal APIs. But in a platform that is the outcome of billions-of-dollars product development, there is no such excuse. As with portability, Java adopts an approach which can only work if it owns the entire system. Since it does not, it fails to eliminate any complexity, and instead just adds more.

As a culture, it's conspicuously herd-like. I suppose this is a universal property of communities. Languages as practical tools tend to take on the properties of the individuals using them. That's why Haskell is over-clever, Python is hack-filled, C is rarely well commented and Java is bureaucratic and verbose. In each case you could say the same for these languages' advocates. My distaste for Java owes partly to the fact that it is favoured by not-so-good programmers and middle managers. The effect permeates the community. Java people love reinventing things which existed before—only this time, it'd be “pure Java!”. They also seem to love XML, a technology I despise. They outdid themselves by combining these two properties to spectacularly ill effect, as known to anyone who's ever edited a file called build.xml. Reading about Eclipse plugins makes me jaded. I don't know whether it's the chicken (programmers) or egg (Java language) that's to blame for all these problems. Either way, practice has forgotten far too many principles: of keeping it simple, of not duplicating mechanisms, of using information hiding to hide what is change-prone rather than just to make your code more complicated. The principles of object-orientation itself are long forgotten. To illustrate this latter point, I defer to the very nice (developer-focused) talk by Kevlin Henney titled after William Cook's careful statement that “it is possible to do object-oriented programming in Java”.

Here's an anecdote to finish. I just read a research paper containing the following two sentences.

The first kind of behaviors is interface behaviors. (Please note that the name has nothing to do with Java Interfaces.)

Sadly, I can understand why the qualification is necessary. But it really shouldn't be. If we're doing research, what we're doing should transcend the immediate technology, and we should be accustomed to a presentation which reflects that. But even in research communities, too many people equate doing X “for software” with doing X “for Java”, and this is a sad state of affairs.

[/research] permanent link contact

Thu, 10 Jan 2013

Systems versus languages

Somewhere buried within one recent magnum opus in these pages I highlighted a contrast between systems and languages. I also noted that OOPSLA conspicuously contained the word “systems” in its title. This juxtaposition probably seems incongruous to some, but it is one close to my heart. I see myself as a researcher tackling problems usually labelled as “programming languages”, but with a “systems” mindset.

Richard Gabriel's very interesting piece about incommensurability in scientific (and engineering) research, using mixins as an example, makes some remarkably similar observations. (There's also a video of a presentation from last year's ClojureWest, but I haven't yet watched it.) He distinguishes the “engineering” papers that first discussed mixins from the “scientific” papers that subsequently studied and formalised them. And, without disparaging the latter, he very much emphasises the underrated value of the former. (He also makes some very incisive observations about apparent misunderstandings and omissions in the influential Bracha & Cook paper. A major point, of course, is that they need not be misunderstandings per se—incommensurability is the posited explanation.)

It's nice to get a historical perspective on these matters, from someone like Richard Gabriel who has seen a lot of ideas come and go. In part, his appraisal of the changing role of engineering papers offers me some reassurance that I might not be crazy. In conversation with other researchers, it can be frustrating that declaring my interest in programming languages is taken so often to imply that I do theoretical work. But I am more interested in languages as systems—my interest has to do with their “useful-for” properties, not “abstractly-is” properties. (Of course, I'm glad people are working on the latter, and I try to relate their findings to what I do... while accepting that it's not what I do.) Another interesting historical tidbit is that Gabriel estimates that my kind of work—engineering approaches to programming problems, you could say—was “outlawed” from (yikes!) roughly 1990 to 2005. I suppose it's handy that I started my PhD in 2006. The feeling of “separate camps” is still very much there, of course.

[/research] permanent link contact

Thu, 06 Dec 2012

Bridge that gap

I don't do IDEs.

(This post isn't really about IDEs, but humour me.)

I used to do IDEs, back in my Turbo Pascal days. And I want to now: I value the things they promise. But the reality, for me, always seems infuriating and limited. They get in my way more than they help me.

One example: Eclipse. I used Eclipse for Java programming back when I was an undergraduate, and mostly bent it to my will. Since then I've done fairly little Java programming, and a combination of mind-rot (mine) and evolution (Eclipse's) has left me sufficiently unfamiliar with recent versions that I am unable to use them effectively.

I just tried again. I wanted to open a third-party Java codebase's Eclipse project that I had downloaded from that codebase's web page. I am in the Java perspective, and I click the “Project” menu, hoping it will let me “Open project”. But no! “Open project” is greyed out. Woe is me.

Greying out UI elements is a terrible, terrible practice that should never have been invented, because it doesn't tell you why something was greyed out, and so leaves you clueless about how to ungrey it. But no matter. Being a researcher, two ideas occur to me. Both are ways in which we could, for “little cost”, add back the ability for a program to explain why a widget is greyed.

Idea one: we should really be using some modern programming technology under which the “greyedness” state is described declaratively. In that case, the program's code will contain a concise description of exactly what causes the menu item to be greyed. I could check out the code to understand why it was greyed. Or perhaps this condition could be reflectively exported to the user. Either way, if only we had written our UI in some declarative style! Then our problem would be easily solved. But, alas, we haven't written it like that. Instead, greyedness emerges from some maze of twisty little imperative procedures.

Idea two, luckily, is more immediately applicable. Let's “shadow” each UI element with some extra information to do with its greyedness or otherwise. When we set its state to greyed, we snapshot some property of the program state, like the address of the caller who is turning on greying, or the whole callchain, or whatever. Then I can somehow query this shadow—maybe by attaching a debugger to Eclipse, or whatever—and have it tell me why it was greyed.
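
Here is a minimal sketch of idea two, assuming a made-up toolkit API (a Widget type and a set_greyed() hook we can interpose on): snapshot the call chain whenever greying is switched on, and keep it in a side table that a debugger can later query.

#include <execinfo.h> // glibc's backtrace(); other platforms differ
#include <map>
#include <vector>

struct Widget; // stand-in for whatever the toolkit actually provides

// the "shadow": for each greyed widget, the call chain that greyed it
static std::map<Widget*, std::vector<void*> > whyGreyed;

void set_greyed(Widget *w, bool greyed) // hypothetical interposed hook
{
    if (greyed)
    {
        std::vector<void*> callers(64);
        int n = backtrace(&callers[0], (int) callers.size());
        callers.resize(n);
        whyGreyed[w] = callers;
    }
    else whyGreyed.erase(w);
    // ... then defer to the toolkit's real enable/disable logic ...
}
// later, from an attached debugger (or via backtrace_symbols()), dumping
// whyGreyed[w] "explains" why widget w is currently greyed out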

A thought occurs. Is this a general-purpose approach to “programs that explain themselves”? (Hat-tip: “Functional programs that explain their work”, by Perera, Acar, Cheney and Levy, seeded that phrase in my mind, although the general idea has been with me for much longer.) Interactively querying for explanations of program state or program output seems like a general, powerful tool, both for programmers programming and users UI-using.

Aha! you might say: this could work for a Dr Expert Developer, but there's a problem for poor Joe User. The “explanations” will be in terms of program objects, not “something the user understands”. Therefore, you would argue, my approach is not very useful except to niche-case users like me. But to this, my rejoinder is: why shouldn't they understand? If an object-oriented program is properly abstracted, it will have a “surface” level of objects which model the domain as the user sees it. Dialogs and buttons and widgets and text entry areas are all just objects, and users understand these more-or-less fine (accepting caveats from the HCI audience).

It seems to me that paving this continuum, between user-facing and program-level abstractions, is one of the great promises of object-oriented programming. I wouldn't say it's fulfilled, but then, I wouldn't say we program in a terribly object-oriented way most of the time. When I was quite a bit younger, I found it odd that Alan Kay would simultaneously have been working on user interfaces and on object-oriented programming. More recently I think I have begun to grok this connection. My latest Eclipse problem is yet more evidence of why it's a useful one.

This connection is also one which functional programmers of the Lisp-y, Scheme-y schools understand. The abstractive power of these languages is supposed to be used—I've heard it said, at least—to construct domain-specific abstractions which naturally model the elements of the program's work (its “objects”, you could say, except that here we won't). In this way, the program itself is lifted up to a level of abstraction which the user, by definition, would understand. (Emacs users might be able to tell me how well this can work out... anyone? Oh, all right.) I lean more towards the object abstraction than the lambda, but perhaps it's six versus half-a-dozen.

Perhaps disappointingly, that's mostly the end of this post. But I can't resist appending a rant about my own work. What I'm convinced of is that the object abstraction is everywhere... it's just latent a lot of the time. It's in C programs, functional programs, windowing systems, filesystems, OS kernels, spreadsheets, web browsers, IDEs. It's latent in any chunk of state at all. Yet the promise of seamlessly bridging that gap—of constructing a continuum between the programmatic and the user-facing—is not yet with us. That's because there are hundreds of different ways in which this state has been constructed and encoded, and no infrastructure unifies them. Classically, we can only shrug and say that people weren't enlightened enough to write their code the “right way”, using the right languages, targeting the right runtime infrastructure, and so on. But more likely, those “right” things were never fully realised, and never will be. Either way, what we need is a postmodern object-oriented runtime: one that can find the object abstraction, and present it to us where it exists, even if that presentation is something of an illusion—an adaptation, you could say—of reality. (This concept is also similar to views in a relational database.)

What would such a runtime look like? What's a precise definition of the problem it's solving, even? Both of these are hard questions to which I have no complete answer. But it's no coincidence my PhD was about adaptation (although I'm not saying you should read it). And my more recent side project on DwarfPython (that I would love to pursue, if only I could find a way of getting paid for doing so) is also tackling a subset of the same problem space. Instead of a language implementation owning its object representation, can we build one which gives up that ownership, but rather takes on the job of “seeing” arbitrary chunks of state as objects it can manipulate? The idea of DwarfPython is to use object producers' debugging information to do just that. More generally, in-memory state is not the only kind of object; I also have an interest in unifying files with objects. Again, rather than the classical approach of persistent programming languages, we can take a more postmodern approach in which each file has the freedom to represent its contents, or “object state”, in a different way, or a variety of ways, subject to appropriate interpretation. This is thematically similar to the work of my Bachelor's dissertation; although that pursued a far too classical approach, it was still trying to unify files with objects. So, finding the object abstraction in unusual places seems to have been a theme of my work from day −1, even if I didn't realise it at the time....

[/research] permanent link contact

Mon, 03 Dec 2012

Tools or not tools

Jim Coplien's keynote at SPLASH this year was a peculiar one. It featured two particularly provocative sentiments: firstly that too much abstraction is a bad thing, and secondly that building tools is not what we want to be doing. (The latter was actually due to Dave Ungar, during questions, but met with vociferous agreement from the speaker.)

These remarks met with some puzzlement from much of the audience, judging by a series of apparently disparaging remarks during subsequent conference talks and question sessions. Who could disapprove of abstraction or tools? I think there are some reasonable answers to that question; what follows is my attempt. (I have no idea whether it coincides with Coplien's or Ungar's.)

Abstraction all the way up

The abstraction issue is perhaps not terribly controversial. Most programmers are aware that abstractions present a trade-off. The temptation to abstract endlessly can be a rat-hole that distracts from actual progress on the task at hand. Ian Lance Taylor once blogged a fairly similar opinion. If you abstract too far, you abstract away essential features of the domain, rendering it unrecognisable. This yields “abstractions” that are actually complex, not simple, to use. Abstracting over multiple use cases, i.e. generality, is a common offender here. For example, rolling your own implementation of a graph algorithm can be easier than figuring out how to tame the monstrous generality of something like the Boost graph library. (Pardon my C++; no doubt you can think of your own examples.)

Sometimes, abstractions exploit specificity, by packaging up common case usage patterns. This can be very useful. In fact, an interesting counterpoint to the Taylor piece above was Rustan Leino's note about loop structures in a subsequent SPLASH keynote: inferring loop invariants is one of the hard problems faced by any verifier. By constraining the form of a loop, it becomes easier to find its invariant. Abstract loops are an extreme case of this, since the loop itself is in library code and not in user code, so the invariant need be found only once. But of course, just as Taylor hinted at, any user forcing themselves only to use such loops will end up spending rather a lot of time structuring their code to accommodate this constraint. (In this way, it shares a lot with other syntax-directed reasoning tools, including type checkers. These tools are superficially easy to market—hey, look, it shows you bugs in your code. But there is a hidden cost to using them, deriving from implicit constraints on how you can structure your code such that it interacts nicely with the tool. If you don't stick to these, your tool fails in some way, like false-positive type errors or solver timeouts.)

To end my rants about abstraction on a complaint, I could also roll out one of my previously-blogged complaints about common styles of functional programming—with liberal use of polymorphism, or unification of higher-order with “ordinary” operations (think currying, or juxtaposition-is-application), code can become needlessly hard to read. Lazy languages add the unification of storage with computation, which I concede is sometimes an incredibly useful abstraction, but easily makes the memory behaviour of your program incredibly difficult to understand.

What about tools?

For me, the most interesting issue concerns tools. Dave Ungar phrased it something like this: “if every bolt under my car had a little handle on it, I wouldn't need to get out to go and get a wrench”. So, let me frame the contrast I believe he was making as one of tools versus run-time systems. Dynamic O-O environments are very much systems, geared around the ability to push new capabilities down into the system's fabric, rather than having them sit on top. This “fabric” is what emerges from combining the messaging metaphor (messages are fundamentally proxyable) with dynamic extensibility (adding new messaging behaviour is a local change during runtime, not a far-reaching change at compile time). As I have rambled about previously, the lower some functionality is integrated into a system, the more pervasively available it is, so the more power and leverage it confers. Smalltalkers and other dynamic language advocates know this. It's a very tricky thing to convey to the unfamiliar. It's even harder to measure. Most of us don't use runtimes that have this amount of dynamism and immediacy, although Javascript may yet change that. Operating systems, not least Unix, are also dynamic runtimes in this way, although their inability to see inside applications means (unfortunately, and avoidably) that a large amount of useful code and data (hence potential “extension”) is opaque to them.

Tools are fragmentary; runtimes are integrating. One reason people develop tools and not runtime extensions is that integration is hard. If you write a command-line tool, you get to define its input domain, output domain and behaviour from a clean slate, according to your convenience. This is often (though not always) easier than plumbing something into a runtime, which is a bunch of code somebody else wrote. But let's imagine making the leap. To get slightly more concrete, suppose the “tool” we're interested in is a dynamic analysis tool—pick your favourite bug-finding, race detection, memory leak detection or other tool that fits the general mould. What's better about having it as a “runtime” rather than just a “tool”? Well, its functionality would be embedded right there in your running program. As a consequence, it supports exploratory, interactive, programmatic use. If you dropped to a REPL in your program, the innards of the tool would be laid out across your program state, pushed into fields on program objects. If your tool is a race detector using a lock-set algorithm, for example, then each object's lock-set would be accessible as a field on that object. If you're using timestamps or vector clocks, they would be there too. You're also not stuck with a fixed amount of insight the tool's authors saw fit to provide (e.g. when trying to track down the source of a data race); the tool's code is a service you're free to extend. Getting interactive, exploratory, programmatic usage seems like a useful payoff for the effort of integrating your tool into a runtime. Arguably, then, the challenge is building runtime infrastructures that are not unduly difficult to extend like this.
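
To make the lock-set example a little more concrete, here is a sketch, not taken from any real detector, of the Eraser-style state that would hang off each object in the runtime-integrated version: a candidate lock-set refined by intersection on every access, sitting in a field where a REPL or debugger could poke at it.

#include <set>

typedef std::set<const void*> LockSet; // locks identified by their addresses

struct SharedState // imagine this embedded in (or shadowing) each object
{
    LockSet candidateLocks; // the per-object analysis state, right on the object
    bool seen;
    SharedState() : seen(false) {}

    // called on each access, with the set of locks the accessing thread holds
    void onAccess(const LockSet& held)
    {
        if (!seen) { candidateLocks = held; seen = true; return; }
        LockSet refined;
        for (LockSet::const_iterator i = candidateLocks.begin();
                i != candidateLocks.end(); ++i)
        {
            if (held.count(*i)) refined.insert(*i);
        }
        candidateLocks.swap(refined);
        // an empty candidateLocks here is what the tool would report as a
        // potential race; and it sits right here to inspect interactively
    }
};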

Progress? Is the “tools not runtimes” tendency getting stronger? “Systems, languages, applications” is the conference title's invariant. “Tools” is nowhere to be found. My vague impression is that today's programming research is more tool-focused, and less system-focused, than 20--30 years ago. (A near-dual property is also true: self-proclaimed “systems” research has less programming focus than it used to. I used to bemoan this frequently while doing my PhD in the systems research group in Cambridge.) But why? Simplistically, we might just say that integration is hard. I think there is something more subtle at work. Integration of research techniques into runtimes arguably scales poorly—since we all have to integrate into the same runtime, we have to achieve consensus on that runtime's interfaces. Tools, being freestanding and piecemeal, arguably scale better. You could say that lots of small, freestanding tools are the postmodern way, whereas “one true runtime system” is a classical ideal. (It's fitting that Noble and Biddle's “Notes on Postmodern Programming” was recognised at SPLASH this year for its influence, in the Onward! strand.)

Avoiding classical fallacy. In the battle of the classical versus the postmodern, normally I side with the postmodern. How can we square this with the desire for the benefits of the runtime approach as I described it? I should probably save my thoughts for another post. But two ideas come to mind. The first is one I've already mentioned: design a runtime infrastructure that is specifically easy to extend. But that seems to be begging the question: if we knew how to build this magical runtime, and what kinds of extension it would need to support, we'd already have done it and solved the problem ages ago. For this reason, we also need the second idea: we need to get into the mindset of tolerating a lot more heterogeneity. Very briefly, it means pushing radically downwards our notion of “runtime” so that most of the typical implementation decisions of an object-oriented runtime, such as dispatch mechanisms, introspection mechanisms and object layout, are actually user-level decisions in such a system, but still recognisable as the abstractions they represent. In other words, our system can descriptively extract latent object abstractions from the contexts in which they emerge in existing systems, given descriptions of these latent abstractions. This contrasts with traditional runtimes, in which the object abstraction is constructed by the runtime implementor in a way that is prescriptive. And hey presto, we are back to my VMIL 2011 workshop paper: we already have a very powerful descriptive technology, in the form of debugging infrastructure for native code; our task is to bend it to this new purpose. So, end of rant for today.

[/research] permanent link contact

Fri, 01 Jun 2012

Metacircularity (or: “what's Java got to do with it?”)

Before I begin, here's an executive summary. Metacircularity is not about self-interpretation at all. Rather, it's an engineering approach to re-using as much code as possible between different parts of a toolchain (including compiler, runtime and debugger). This is noble, but limiting ourselves to working in a single language is needlessly restrictive. If we get over our presumptions about “language barriers” (cf. Oracle's disappointing attempt at explaining metacircularity), we can apply the same re-use philosophy to supporting a diversity of languages, not just one.

I've recently found myself wanting to understand the concept and purpose of metacircularity in language implementations. This is because I've become interested in understanding the Maxine JVM, which is often described as having a metacircular design.

All this meant to me at the time was that it's written in Java---making it a self-interpreter, certainly. But is it meta-circular? What does that mean? Why might it be a good thing? As I will show, if we're even just a tiny bit pedantic (which I hope we are), then metacircularity is not really a special case of self-interpretation, but a completely separate concept.

I've found two very helpful papers describing metacircularity. The first is that of Chiba, Kiczales and Lamping talking about the “meta-helix”. The other is Ungar, Spitz and Ausch describing the Klein VM, a metacircular implementation of Self. The first paper really emphasises the idea of implementation by extension which is at the heart of metacircularity. They note that use of metacircularity [is] “to allow extensions to be implemented in terms of the original non-extended functionality”. As the paper goes on to discuss, there is a tricky bootstrapping problem inherent in this. If we don't keep careful track of the dependencies between all these extensions, subtle and not-so-subtle bugs, most obviously infinite recursions, can arise. The paper is about avoiding confusion of metalevels, and as they propose, the shape of a helix, not a circle, makes much more sense in describing what supposedly meta-circular systems are actually doing.

The second paper, by Ungar et al, is more of a practitioners' view: it shows what VM builders consider to be a metacircular design, and what they hope to achieve by it. After reading these two papers, reading some other things, and scratching my head a lot, it became apparent that the primary goal of metacircularity is to solve two very practical engineering problems concerning re-use: re-use of code and re-use of debugging tools. They mention the code re-use issue directly, by saying that in traditional designs “an operation such as array access must be implemented once for the runtime, once for the simple compiler, and once for the optimizing compiler”. The question of tool support is also an explicit motivation in the same work: they lament that in the non-metacircular Self VM, “to inspect a Self object... [in] an application that has crashed the VM, one must invoke a print routine in the VM being debugged, [an approach of] dubious integrity”, and that the VM “must be able to parse both Self and C++ stack frames”.

So, perhaps prematurely, I'd like to propose a new characterisation of metacircular VMs that I think captures their true nature. Metacircular VMs are interpreters (in the general sense) that are carefully constructed to have a dense re-use structure. They do this by expressing as much as possible---front-end, compiler, runtime---as extensions over a small common core. This core has some interesting properties: it is designed explicitly for extension, and includes the necessary foundations of debugger support. It is the former which allows code re-use, and the latter which enables the same tools to see all the way up the stack, from various levels of VM code up to application-level code.

Note that this is fundamentally different from a self-hosted compiler. Under self-hosting, the compiling compiler is not re-used at all in the compiled compiler. It is just used to implement a translation function, from end to end; how it does it is completely opaque. By contrast, in a metacircular VM, you can invoke your hosting runtime to perform part of your work---asking it to “do what you would do in case X” by calling the corresponding function (or sending the corresponding message, for Smalltalkers). The trick is to ensure that these helper requests are correct and well-defined, meaning they do not cause infinite regress (the obvious bug) and do not confuse meta-levels (the more subtle bugs mentioned by Chiba et al).

As a consequence of this fundamental difference, the key challenge of metacircularity is not just that of “implementing language X in language X”; it's dealing with the bootstrapping problem. What is a suitable common core? How can we make it small? What extensions must it permit, or may it permit? How can we structure the extensions on top of one another, so that they can express what we want, re-using what we want to re-use, and efficiently?

So, we've established that “self-interpretation” is an irrelevance. But it seems that most metacircular designs are, in fact, self-interpreters, right? I actually consider this to be false. Even when staying within “one language”, the fundamentals of the bootstrapping process mean that at a given level in the interpreter, certain language features may only be used in restricted ways. Sometimes these restricted subsets are given a name, like “RPython” in the PyPy project. In other cases, they are not named. But in all cases, there are restrictions on which functionality, at a given level in the system, it is safe and meaningful to invoke at the metalevel. Indeed, this is exactly the “helix” shape that Chiba et al were describing. In other words, different parts of the interpreter are written in different sub-languages, precisely in order to avoid infinite regress. Just because there is continuity between the core language and the eventual top-level language doesn't make them “the same”, and for this reason, metacircular VM designs are not self-interpreters.

If I were to write a ranty summary of the above paragraphs, it would be that the apparently “beautiful”, head-twisting, recursive, quasi-mathematical aspects of the metacircular design---the things which language nerds get excited about---are both irrelevant and illusory. Metacircularity is motivated by engineering pragmatics, not “deep” linguistic or mathematical concepts. (Homoiconicity, itself a concept of overrated interest, is an orthogonal concept to metacircularity, despite what at least one blogger has written.) I believe this fixation with superficial observations about language stems from the documented inability of many programmers to divorce concepts from language. (For “documented”, I can say at least that Lamport has commented on the problem, and in this case I agree with him. I have big disagreements with other parts of the same article though. I will post about those in the near future.)

So, having stated that re-use is the good thing about metacircularity, why is re-use so much easier in a metacircular design? The reason is that we have a common core providing a coherent and adequate set of services---the services embodied in the “bootstrap image”. And I say “services” and not “language” for a reason. The core really is a set of runtime services. As I have explained, it is only a distant relation of whatever high-level language the author is intending to realise. In our current technology, re-using code and re-using tools across languages is hard, and so “build everything in the same language!” seems like a useful answer to a VM author's problems of API-level interoperation and of tool support. Metacircular designs are the result (because it's the closest you can get to doing everything in one language). But as I've just described, the “same language” property is an illusion, and there are inevitably many languages involved. It just happens that in current projects, those languages are designed to be as similar as possible to one another---featurewise increments, in effect. But instead of this unimaginative perspective, anyone building a metacircular VM should ask themselves: how can I design my core services---the core of the VM---to support as many different languages as possible?

This will sound familiar to anyone (so, hmm, maybe ten people on the planet) who has read my “Virtual Machines Should Be Invisible” paper. Although it doesn't approach the problem from a metacircularity perspective, this paper is all about building an infrastructure that can support a diverse variety of languages, sharing code and tools between all of them.

Currently, our shared base infrastructure is a POSIX-like operating system. Every VM author (even those interested in Windows, which I'm cool with) implicitly targets this infrastructure. Unfortunately, these systems don't provide enough abstractions. As such, different language implementors build their own infrastructure which reinvents similar abstractions in incompatible ways---including functions, objects, garbage collected storage, run-time self description, exceptions, closures, continuations, and so on. We can clearly avoid this pointless diversity without sacrificing innovation. Just as with operating system interfaces, there is never complete quiescence or consensus, but we still manage to share a lot more software between OSes than we did in the pre-Unix or pre-POSIX days.

One of the mitigating techniques which my VMIL paper describes but which metacircular designs don't use is: describe your implementation decisions. Don't encapsulate them! If you implement a certain language feature a certain way, describe it. There is nothing fragile about this, because your descriptions will be written in a standard way and consumed by an automated interpreter---called a debugger. This is what native debugging infrastructure does. VM-hosted debuggers, of the Java or Smalltalk flavours, don't do this. To make the value of this approach clear, let me finish with another example from the Ungar paper, where they proudly state that Klein VMs can be debugged remotely, and in a post-mortem fashion, using another Klein or Self VM. “A separate, possibly remote, Self VM hosts an environment that manifests the innards of the Klein VM at the source level. Thanks to Klein's metacircularity and Self's mirror-based reflection model, Klein can reuse a vast amount of already-written Self programming environment code.”

What the authors are forgetting here is that this is not a new facility. Native debuggers have long had the capacity to inspect remote processes. Smalltalk-, Self-, and Java-like designs took a retrograde step by forcing debugging to exploit the help of a server within the VM. Although this has the benefit of allowing the debugger implementation to share the introspection services already present inside the VM, it requires a core of the VM to remain working correctly, even after a failure, which precludes many cases of post-mortem debugging. By contrast, trusty (or crusty? your choice) old native debugging is necessarily designed for this as a common use-case.

The approach I advance instead, as described in the VMIL paper, is to implement introspection on the same infrastructure that supports remote inspection---which happens to be the DWARF infrastructure, in the case of DwarfPython and modern Unix-based compiler toolchains. This is very similar to the Klein approach, in which mirror objects may reflect both local and remote state. But it completely avoids the myth that we should implement everything in a single language. Indeed, debugging information formats like DWARF are actively concerned with supporting a wide variety of languages. One Klein process can inspect another because they share a set of implementation decisions. By contrast, a native debugger need share nothing with its debuggee, because native debugging infrastructure includes a facility which is fundamentally omitted from VM-hosted debuggers: the language implementation explicitly describes its own implementation decisions. It does this down to the machine level, and moreover, up from the machine level.
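
To make this self-description idea concrete, here is a minimal sketch using nothing more exotic than the stock C compiler and gdb; it is a generic stand-in for the idea, not DwarfPython itself, and the behaviour described in the comments is just what -g already gives you.

  /* layout.c: compile with "cc -g -O0 -o layout layout.c".
   * The -g flag makes the compiler emit DWARF describing its implementation
   * decisions: the size and field offsets of struct node, the type of each
   * field, and where the local variable "head" lives at each point in main.
   * Any DWARF-consuming debugger can then reconstruct a source-level view,
   * without sharing a single line of code with this program. */
  #include <stdlib.h>

  struct node {
      int value;
      struct node *next;   /* offset and type recorded in the DWARF */
  };

  int main(void)
  {
      struct node *head = malloc(sizeof *head);
      head->value = 42;
      head->next = NULL;
      /* Deliberate crash: a null-pointer dereference, so that we get a
       * post-mortem memory image (a core dump) to inspect. */
      return head->next->value;
  }

From the resulting core file, “gdb ./layout core” can still print head->value, because the description of where head lives and what it points to sits alongside the program rather than inside it; that is exactly the property the VM-hosted designs give up.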

The result is that given a memory image of a crashed program, we can recover a source-level view of its state at the time of a crash. VM-hosted debuggers are fine for user code because encapsulation and memory-safety protect enough of the VM implementation that the debug server can still work. (Notice I don't say “type-safety”! Type-safety is just an enforcement mechanism for encapsulation, not the key property that ensures encapsulated state is not corrupted.) These VM-level guarantees do not have such a nice property if the failure was due to a bug in the VM itself. This is because the invariants of the VM's own data structures are by definition broken in this case. Some might argue that this is a minority use case, so VM-hosted debugging is fine for general use. Personally I don't mind, as long as I have a debugger that can see all the way down. Currently this doesn't include any VM-hosted debugger, but perhaps it could do. (One of my perhaps-future small projects is to create an implementation of JDWP that knows how to answer queries about native code.)

In summary, I think of the solution to re-use proposed by metacircular designs as a degenerate case of the approach I am pursuing. It sounds strange to most people, but it is not too much to ask for a language-agnostic runtime infrastructure that supports a plurality of language implementations (going right down to native code), direct sharing of code and data, and orthogonality of tool support from language. As I ranted about in the VMIL paper, this infrastructure is a modest and very feasible generalisation of what already exists, with basically only performance questions outstanding. (I'm working on it.) Given this infrastructure, the same careful bootstrapping approach can be used to share code and retain tool support throughout higher-level language implementations. But we can do this without the requirement that everything be in a single language, which doesn't make sense anyway.

[/research] permanent link contact

Tue, 20 Dec 2011

Cathedrals, bazaars and research groups

[Post-hoc clarification: at the time I wrote this rather grumbly post, I was working in the Department of Computer Science at the University of Oxford. It doesn't necessarily reflect any on other institution whose domain you might currently be seeing in your address bar!]

A few months ago I finally got around to watching the video of Guy Steele's “Growing a Language” talk from OOPSLA '98. It's a spectacularly entertaining and insightful talk.

(It's also a nice demo of how a good keynote doesn't have to be Earth-shattering, as long as it's compelling in concept and delivery. Really, the meat of the talk is quite specific: it's about how language evolution should be managed, with particular reference to the then-ongoing attempts to add two features to Java: generic data types, which we all know and love, and operator overloading, which still hasn't made it.)

It was a nice reminder of the two “organisational modes” of collaborative effort that Eric Raymond called The Cathedral and the Bazaar. Building software is one activity where these metaphors apply. Designing languages is another. Research groups are a third.

Like language design and the construction of any large software project (think Linux), research groups aren't a “fully collaborative” activity. Rather, they are “partially collaborative”---it's not that everyone is working with everyone else, but rather, different participants are interested in different pieces of the overall puzzle. There will always be multiple frontiers of progress open concurrently---but all building on a shared central core.

When I was in Cambridge, the group I was in was very much a bazaar in style. There was no unique leader (but rather a gaggle of four or five faculty). Group communications revolved around a mailing list and weekly meetings where discussion was open, informal talks were given, and anyone was free to raise questions big and small.

It wasn't a problem-free group, either in general or for me personally. For my first year in the group, the bazaar was dead. That was a tough time---mainly because communication structures reverted to small cathedrals (and I wasn't really a part of any of them). Even later on, I must admit I didn't always feel completely at home. I was a programmer-oriented researcher in a performance- and applications-oriented group. But in hindsight I appreciate that the group's bazaar-like communication structure and ethos were a very good fit for me, even if the topic selection wasn't great. By the end of my PhD, I found I was getting some reward from my participation in the group, in two ways. For one, my work had gained some degree of recognition in the wider group---I felt I had, in my own small way, “shaped the agenda” at least in a tiny corner. (Sadly this was not enough to get others on board with similar work, but also not miles away from that either.) For another, I had participated in the more topic-independent community aspects of a research group---organising the talks for a while, participating in discussions at talks and on the mailing list, being around, organising events, and so on.

I was recently lamenting to myself---a favourite pastime of mine---how right now, my work isn't a “part of” anything. Nobody cares about what I'm doing, or so it seems, and conversely, I find it hard to get enthused about what those around me seem to be doing. But then again, I have very little idea of what their work is, nor they of mine. There is a lack of transparency and a consequent lack of spontaneity. Without cross-linking communication structures, there just aren't the opportunities to spot synergies and find common interests. I have found this a bewilderingly opaque and unsatisfying environment almost since I arrived, but I only recently realised the reason: that it is a severely cathedral-organised group. There is no institutionalised process for cross-talk (such as frequent group meetings or mailing list), and while there are multiple frontiers open, each is coordinated from the top. This clearly works for a lot of people, but not for me. Does that say anything about the kind of researcher I am, or others are?

As an addendum: it's worth briefly mentioning the “agile research groups” idea, one example of which is Scram of Mike Hicks and Jeff Foster. Eric Eide also mentioned to me he uses some of these ideas, to varying degrees of success, in the Flux group at Utah. Coincidentally, I recently dropped in on both these groups! I think these techniques are mostly orthogonal to the cathedral-versus-bazaar issue: they concern the manner (frequency, duration) of communications, not the topology. I expect Scram works best when participants have a common goal, i.e. there may also be tighter topic-coherence requirements on its suitability. These may perhaps even be more likely to hold in a cathedral-style group, although there is certainly no hard-and-fast causal relationship there.

[/research] permanent link contact

Wed, 14 Dec 2011

Heterogeneity or homogeneity: what's the problem?

My attention was recently drawn to a problem that some web developers call the “language heterogeneity problem”. I'm not sure where the term comes from; in fact it is not as widely used as I was led to believe. But still, most people who have done web programming know that there are a lot of languages that people use for the web, not usually out of choice per se, and that this is somehow a problem.

The phrase “language heterogeneity problem” immediately jarred with me, since some of my work has been looking at heterogeneity of language as a goal, not a problem. Surely, we want to choose the best language for each part of our program, and not pay any unnecessary cost when doing so? Of course, the problem is about choice versus imposition. It's not that the ability to use multiple languages is bad. It's that in any given context, we don't have that ability! Consequently, you're forced to use particular languages for a given piece of code. This is the true “heterogeneity problem”. I'd couch the problem as lots of small homogeneity problems, not one big heterogeneity problem.

One of my recent student project ideas, sadly not yet attempted (or indeed advertised), is to develop a compiler back-end and run-time library that would let us compile vanilla C programs into web applications. So, for example, if I do printf() it will write some text to the page, and if I do fgets(..., stdin), it will generate a form field whose submission action is to activate the continuation of the program. There are some interesting extensions to this project. How do we partition a program into its client- and server-side halves, or its client-, server- and database-side thirds? Can we tune the partitioning given a set of requirements for security, interaction latency, client code size, and so on?
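
To give a flavour, here's the sort of vanilla C program I'd want such a back-end to accept unchanged; the comments sketch how I imagine its I/O mapping onto a page, which is of course speculation about a tool that doesn't exist yet.

  /* guess.c: an ordinary terminal-style program. Under the imagined
   * web back-end, this same source would become a web application:
   * printf() appends text to the page, and fgets() on stdin becomes a
   * form field whose submission resumes the program's continuation. */
  #include <stdio.h>
  #include <stdlib.h>

  int main(void)
  {
      char buf[32];
      const int secret = 7;

      printf("Guess a number between 1 and 10:\n");  /* -> text on the page */
      if (fgets(buf, sizeof buf, stdin) == NULL)     /* -> form field + submit */
          return 1;

      if (atoi(buf) == secret)
          printf("Right!\n");
      else
          printf("Wrong; it was %d.\n", secret);
      return 0;
  }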

(There is also an interesting converse to this problem. Most programming languages' standard libraries are designed around a Unix-like model of I/O. And the first programs we teach---like the Hello World program---use this facility explicitly, by printing to streams and reading from them. But we now live in a world where most familiar forms of computing don't have an obvious terminal- or stream-style of I/O evident in their interface. So perhaps clinging to these examples is creating a barrier in front of potential students---who won't relate to the idea of a program doing I/O through a terminal?)

At SPLASH, I discovered that one chunk of my proposed student project effort has been scooped by Emscripten, an LLVM-to-Javascript compiler. However, since writing such a compiler would be too much work for a single project anyway, this might actually be helpful in enabling a single student project to achieve more working stuff. In other words, they could focus on matters other than the compiler, or on doing interesting domain-specific analyses on user code. Alternatively, perhaps a keen student could try to make their own compiler that does a better job than Emscripten, in some way. Hopefully I will manage to advertise the project in time for the next academic year.

[/research] permanent link contact

Mon, 05 Dec 2011

Refactoring refactoring

A little while ago I embarrassed myself in conversation by blurting out a sceptical opinion of refactoring. In this post I'll explain some opinions on refactoring and related research, hopefully in a more considered and coherent manner than I managed on that occasion.

I admit that my inclination to be a bit negative comes from prejudice, with a couple of origins. One is that a while ago, I had to fend off (rhetorical) claims that refactoring solved the problem my PhD was tackling. It clearly didn't, but then again, it was well worth writing an explanation of why not. (These claims were from my supervisor, not my examiners, thankfully, and I think were being advanced rhetorically.) In that context, interface evolution scenarios were the issue. I still contend that refactoring is not the solution to interface evolution. (Endlessly editing code to stay in sync with “one true” “current” version of an interface, whether with or without the help of refactoring tools, is an unnecessarily burdensome approach; an automated assist for editing code doesn't make that editing work- or risk-free.)

More refactoring

Happily, most uses of refactoring are quite different from interface evolution: they're about the internal structure of code, not “edge” interface details. I'm a big fan of refactoring in these cases. As a practitioner I'd love to have more and better refactoring. In that space, one clear improvement would be refactoring tools for more languages. This doesn't mean starting again; most languages are more alike than they are different. At the moment, the popularity of refactoring serves to cement the Java hegemony. This is my other unreasonable prejudice: I dislike this hegemony, and so refactoring culture is tainted by association (unfairly) in my mind. It'd be really nice to have some decent refactorings available for C++, but I'm not holding my breath. That said, I might not know about them if they do exist.

(Aside: the real problem with C++ is not pointer arithmetic or lack of garbage collection or lack of type safety or anything else that people usually trot out; it's complexity. I'll rant about that in a future post. By contrast, Java does well because it's a simple language. Actually it does unfairly well because researchers abuse its syntax tree as an intermediate representation for program analyses. I might rant about that in yet another post.)

That's my practitioner's view over with. As a researcher, I have one qualm remaining about refactoring. Rather than doing research on making refactoring work in more and more scenarios, I want to see some exploration of a few bigger ideas in the same general space. There are ideas that are more powerful, more “revolutionary” than refactoring but have (so far) less currency.

Language-independent refactoring

Language-independent refactoring is an obvious goal. The means by which to achieve it is less obvious. A shared metamodel seems sensible. The need for a shared metamodel is arguably a limitation. But I don't buy that argument! My reasoning is based on the observation that most languages have large sets of features that are cognate. By this I mean they are not just theoretically equivalent in what they can express (or perhaps not at all equivalent in that way), but rather, a human user understands them in the same way. (I wish we had the empirical foundations to substantiate that, but that's another matter.) So if we can capture these, we can probably come up with a metamodel that works well in practice, even if it fails for adversarially-constructed cases.

Conversely, just to throw another controversial claim into the mix: languages that have such a pared-down set of primitives that they don't offer a cognate of some recurring feature---like purely functional languages without mutable objects, or C without virtual calls---in practice do have cognates, but appearing as patterns of use rather than delineated language features. So I seem to be positing some sort of Chomskyan “universal language feature set” that is in programmers' minds if not explicitly in all languages. That feels a bit strong; I'll have to take some time to see whether I agree with myself there.

(As an aside: of course, many languages have too many features, in that they are mutually cognate: do I use static variables, or a singleton object, e.g. in Java? Do I use typeclasses or traits, e.g. in Scala? Do I specialise using template specialisation, overloading or overriding in C++? These self-cognate features usually have arbitrary limitations, and their diversity exists for implementation reasons. Exposing a choice among them leaks implementation details to the programmer, along with consequent performance characteristics. So, forcing the programmer to select among these early on, as these languages all do, is an evil akin to the evil of premature optimisation. Fortunately, it's an evil that refactoring exists to fight!)

(Continuing the aside: we perceive “clean” languages to have few mutually cognate features. Conversely, most mutually-cognate features differ along some potentially-separable dimensions: each feature mixes a particular setting on each dimension. For the “static versus singleton” case, having a chunk of data that is “one per program instance” is the main concern, and dynamic-versus-static allocation is the orthogonal issue that is unhelpfully mixed with it. In a Java-like implementation, object identity is another mixed concern: it's something you get in the singleton case, not the static field case, and effectively for reasons of implementation leakage. By contrast, in C and C++, statically-allocated stuff can still have its address taken, so there is better separation of concerns in that case.)
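
To make that last point concrete, here's a tiny C sketch (just an illustration I've made up) of why address-taking gives the separation: the statically-allocated object still has an identity, so client code written against a pointer neither knows nor cares how the thing was allocated.

  #include <stdio.h>

  struct config { int verbosity; };

  /* One per program instance, statically allocated... */
  static struct config the_config = { 0 };

  /* ...but clients are written against a pointer, so they are oblivious
   * to whether the object was statically or dynamically allocated. */
  static void set_verbosity(struct config *c, int v) { c->verbosity = v; }

  int main(void)
  {
      set_verbosity(&the_config, 2);   /* taking the address: identity for free */
      printf("verbosity is %d\n", the_config.verbosity);
      return 0;
  }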

Non-behaviour-preserving transformations

Digging deeper than language-independent refactoring, it seems that refactoring's main value is in its ability to improve code by reversing bad decisions that were somehow expedient earlier. But underneath that, there are two cases. Firstly, there are cases where you refactor because you got the abstract design wrong earlier (e.g. you assumed there was only one Frob per Widget, and in fact there might be many). Secondly, there are cases where you got the abstract design right, but the code-level design wrong, i.e. you didn't map the abstract design optimally onto language features (with respect to maintainability, efficiency, ...). To me, it feels like there is too much emphasis on the second case, while the first one is harder and more interesting.

I think this is because automated refactorings aim to be behaviour-preserving. But since the two problems are very close---they both arise from forced premature commitment and the programmer's failure to anticipate the future---we should perhaps use the same tools to combat both of them. In other words, the requirement that refactorings should be behaviour-preserving actively limits what we can do. So how about some bolder approaches that might sidestep the problem? While these approaches might violate the letter of the definition of refactoring, for me, they retain the most useful characteristic of refactoring: by a single localised change, we can effect global changes on our codebase.

The only work I know that does automated non-local code edits that can change program behaviour is Coccinelle, based on the “semantic patch” idea. Aspect-oriented programming is a similar technique, but works by effectively (and controversially) delocalising run-time semantics rather than performing non-local code edits. I'd like to know if there are others more like Coccinelle already in existence.

So, suppose we discard the restriction to behaviour-preserving edits. One direction under these auspices is to creep closer towards model-driven development. I want the ability to change my “model” (either in my head, or in some modelling notation) and see changes reflected in source code. And also, vice versa: if we do have a modelling notation, code changes should update the model. This is a hard but interesting problem in bidirectional transformation, which has some currency at the moment (witness the BX workshop).

Logic metaprogramming

A final thought is about logic metaprogramming. This is a very cool idea that I have not yet got up to speed on. In fact, almost all I know about it is from the abstract of a paper I saw at SPLASH last year, which said: “In logic metaprogramming, programs are... derived from a deductive database.” But this one sentence is so intriguing that I want to run for a while with what I think it might entail, before I find out what is actually done in existing systems (of which there are few!).

I've often wanted to program not by writing code directly---since I'm often aware that the code I'm writing will probably turn out “wrong” or “bad” once I've done a bunch more coding---but by making a sequence of simpler statements that I have more confidence in. Each statement should be small, freestanding and less of a commitment than writing a line of code would be. These statements might be such that none of them, by itself, confers enough information to write a “known good” piece of source code. E.g. I might write that each instance of class A “has a[n]” associated instance of class B, but I don't yet know whether this association should be expressed as pointers, or by some associative data structure, say. This decision could be determined later, by solving constraints arising from other small statements. Ties could be broken (i.e. multiple candidate solutions selected among) by extrafunctional requirements such as performance (which might favour pointers over associative structures).
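
For the record, here are the two C-level encodings that such a hypothetical tool might be choosing between for that single “each A has an associated B” statement; the names are made up, and the point is precisely that the statement itself doesn't commit to either.

  #include <stddef.h>

  struct B { int payload; };

  /* Candidate 1: embed the association as a pointer field. Cheap to
   * follow, but bakes the decision into A's layout. */
  struct A_v1 {
      int id;
      struct B *associated;
  };

  /* Candidate 2: leave A alone and record the association externally in
   * an associative structure keyed by A. More flexible, but each lookup
   * is now a search. */
  struct A_v2 { int id; };

  struct assoc_entry {
      const struct A_v2 *key;
      struct B *value;
  };

  static struct assoc_entry assoc_table[1024];   /* illustrative fixed size */
  static size_t assoc_count;

  static struct B *associated_B(const struct A_v2 *a)
  {
      for (size_t i = 0; i < assoc_count; i++)
          if (assoc_table[i].key == a)
              return assoc_table[i].value;
      return NULL;
  }

As I understand the deductive-database idea, the choice between these (or other) encodings would fall out of constraint solving, rather than being written down by hand.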

This is related to program synthesis and refinement methodologies, I guess. But I am particularly interested in making it exploratory. By having a tool explore the implications of the programmer's statements, we can potentially refine our understanding of the problem (a.k.a. “debug the design”) without going through the circuitous path of first writing some “bad” code and then either finding it's not the right design (and cleaning it up by heavyweight code changes) or finding it's incidentally messy (and cleaning it up, just by automatic refactoring if we're lucky). We can also have a tool tell us what the “right way” to code something is, but early. If the only solution to a particular set of requirements is to use a particular language feature, then the tool can tell us this, rather than letting us find it out by making the wrong choice and then backtracking. Of course, we need to get the requirements right up front, so this technique will only ever be a complement to more backtracking-oriented techniques.

Multi-dimensional representations of software

It is a very classical notion that programs have one true form, being their source code in the form of a string of symbols. Refactoring sticks with this idea but tries to make it easier to alter that form, by abstracting and automating certain common complex non-local edit patterns. But we can go further by rejecting the notion of “one true form” altogether, at least in the sense that that form is manipulated by programmers.

Of course, this is the MDSoC holy grail. I think the idea is just slightly too big for its own good, at present. Ironically, or fittingly, it has not been decomposed properly: aspects, refactoring and typeclasses are the main programming weapons that share its spirit, but none has its power or elegance. It's a shame that work on the idea seems to have fizzled out. (It's also a shame that the paper isn't on more reading lists!)

Somewhat relatedly, there's been some interesting work on subjective/transient/dual encodings of language features, as with the registration-based stuff at last year's Onward!, or Rob Ennals' Jekyll. But I'm still not aware of any mature tools that can really rip apart the modular structure of code and transform it on demand. Perhaps one problem is that we need to be able to define what primitive entities such transformation queries “select”, and how to reformulate them into the appropriate bigger chunks---ideally in a language-agnostic way. So it's back to the shared metamodel. Again, better understanding of “cognate” language features, and indeed of less intuitive correspondences between language features (like the nontrivial correspondences between algebraic data types and class hierarchies), will help here.

[/research] permanent link contact

Guided by folklore

In a recent chat with my internal examiner, Andy Rice, I had a few thoughts which I decided to write down. It turns out he reads my blog---along with (his words) “everyone in the department”---so, hi Andy and everyone. One day I might stop writing as if my audience consists only of myself, but not right now.

In summary, I want to rant about two weird things that go on in the research world. One is that there are some memes that seem to have a real influence on how PhDs are examined, but seem to have no origin other than folklore, and are different from the standards used to judge other research. The second rant, and perhaps the more oft-repeated, is that we actively encourage boring research.

(I should add that although this post is rather ranty, the chat was not an argumentative one. So, this is mostly post-hoc ranting about related topics, and not a direct reflection of our conversation.)

A thesis is judged on criteria from folklore, beyond what applies to “normal” research. At various points in my PhD, I heard it said that “a thesis should... [do X]”. Usually, X was something to do with telling a complete story, strongly substantiating a succinct hypothesis, and so on. And now I have heard the same from my examiners. Unfortunately, these statements continue to be just that---hearsay. They're different from the ways in which other research is judged. There are no regulations or official guidance to support them. There are no clear scientific or moral justifications for them either. The research community happily publishes many papers that do not tick these boxes, and at good venues. My own OOPSLA '10 paper is one example, but there are lots of others. But despite this, PhD examination seems to give a lot of currency to these criteria, for apparently no reason other than their having been handed down through the generations.

During my PhD I didn't worry myself much about this, since, like most researchers, I don't put much weight on unsourced claims. Besides, there seemed to be enough data downplaying their significance anyhow---several other theses seemed to break the rules, and plenty of published, respected research papers did too. Surely if a PhD is training for research, the qualifying criterion should be focused on doing good research? From my very limited experience, and from what I gather from listening to others, this is not how things currently are. Fortunately, I am of the bloody-minded type. I was aware that I might be “creating trouble” for myself, but I personally preferred to risk creating that trouble, thereby at least gathering some evidence about it, rather than swerving to avoid an obstacle that was at best nonexistent (I didn't know it would cause trouble) and at worst, worth challenging. So, consider it challenged! If you think a thesis needs to be anything more or different than good research, I challenge you to justify that position.

Now, on to my second rant. The evaluability problem has an irrational hold on many practical computer scientists, to the extent that research into many important problems is deliberately avoided. I spoke to many experienced researchers about my PhD work as it went along. Several of them suggested that I might have some trouble at examination. This seemed odd to me, for the reasons I just ranted about. Nevertheless, I didn't disbelieve them. But I had no intention of applying the fix they suggested. That fix was never to develop an alternative evaluation strategy, nor (the best advice in hindsight) to maximise the persuasiveness of the presentation of whatever evaluation data I did have; the only “advice” I ever received on this point was a not-so-veiled encouragement to abandon my current problem and work on something else. “Up and to the right” was what one researcher told me---about the kind of graph that should be in my evaluation chapter. (My evaluation chapter has no graphs, and is staying that way.)

This attitude is the tail wagging the dog. If a problem is important, and we do some research that is not conclusive, we should damn well work harder at it, not give up. The problems and curiosities of humankind are not regulated by how easy it is to collect data and draw graphs about them. If we avoid working on important but difficult-to-evaluate problems, or discourage such work, it shows the worst kind of ivory tower mentality. It is far from a pragmatic position, despite how (I'm sure) many of its adopters would try to spin it. What is pragmatic about ignoring the real problems?

I'm not downplaying the importance of evaluation. It goes without saying that measuring the value of innovations is important. Moreover, our ability to measure is something we need to work on actively. After all, many of those physicists and other “hard” scientists seem to spend nearly all their time working out ways of measuring stuff. So I'm completely in favour of rigorous evaluation. On the other hand, I'm not sure that a lot of evaluation that currently passes muster is really rigorous anyway. We need to recognise evaluation as a problem in its own right, whose hardness varies with the problem---and make allowances for that. For many hard problems, evaluation of a solution is comparably hard. That shouldn't mean that we give up any attempt to tackle those problems. The preference for conclusive results in published research has a deceptive influence, being essentially the same phenomenon as the “decline effect”, described in this very interesting article from the New Yorker.

There are some other problems with evaluation in particular kinds of CS research. One is what I call “evaluation by irrelevant measurement”: if you develop something that is supposed to help programmers, but you can't measure that, how about measuring its performance or proving its type-soundness? It says nothing about whether you've achieved your goals, but it still ticks those evaluation boxes. And of course we have a big problem with reproducibility of experimental results---at the VMIL workshop at SPLASH, Yossi Gil gave a great talk about the non-reproducibility of VM-based microbenchmarks, and Jeremy Singer's Literate experimentation manifesto was a nice counterblast to the wider problem.

I have found programming language researchers to be more sympathetic than “systems” researchers to work “towards” a goal, as distinct from work telling a complete story about some problem. This is partly because the nature of programming language research makes reliable evaluation a very high-latency endeavour. In other words, until real programmers have used your idea in a large number of projects, there will be no clear experience about how well it works. So, being computer scientists, we mitigate that latency using pipelining. Rather than a slow stop-and-wait algorithm which waits 20 years between research projects, we have to be more amenable to two approaches: argument, in the sense of paying attention to the reasoning that justifies the approach of a particular piece of work, and speculation, meaning allowing the research discourse to explore many alternative approaches concurrently, and letting time tell which ones will “stick” out of the many that have been given a chance. The job of the researcher is less to show conclusively that a problem is solved, and more to show that a technique is feasible and has some potential for wide and successful application.

Going back to the first point, perhaps I should add that I'm not saying that my thesis would have stood up any more strongly by “good research” criteria. But having said that, a very large chunk of it appeared at a top-tier venue, so it can't be all that bad. Both of my examiners seemed to miss this fact, so the lesson is: always put a prominent summary of your publications in your thesis! Personally I can be very critical of my thesis work. But it seems bizarre to me that folklore should have so much sway in the way that theses are examined.

[/research] permanent link contact

Sat, 12 Nov 2011

Static versus dynamic analysis---an illusory distinction?

When writing a recent talk, I found myself arguing that static and dynamic analysis are not really that different. At least, people don't really agree on the distinction. Model checking people frequently argue that what they're doing is dynamic analysis, because it directly explores paths through a system's state space. Meanwhile, abstract interpretation people would argue the opposite, since clearly model checking is an instance of abstract interpretation, and so is all other static analysis.

I'd much rather avoid the debate entirely. Since model checking is a far cry from run-time checking or testing, my sympathies initially lay with the abstract interpretation camp on this particular issue. But the distinction becomes even more ill-defined in other cases. In particular, I've been thinking a lot about symbolic execution, of the kind done by KLEE and other tools. Is it doing a static or a dynamic analysis? I'd challenge you to defend either position.

(Meanwhile, execution environments which are somewhat speculative, like transactional memories, lazy evaluators, or even plain old branch prediction, can be considered as partial runs of a static analysis. But I should save that line of thinking for another post.)

So rather than talking about static or dynamic analyses, here are some dimensions of analyses that I think are more interesting.

That's all for now. Let me know if you can think of any more noteworthy dimensions!

[/research] permanent link contact

Mon, 22 Aug 2011

Pipelines (are lazy functional composition with recombination)

Unix pipelines are often held up as a paragon of compositional virtue. It's amazing how much they can do, how efficiently and how (relatively) simple to use they can be.

Programming languages people especially like this sort of thing, and functional programming clearly has some synergy with pipelines---which are, after all, a kind of functional composition. Microsoft's F# language even borrows the pipe symbol for its |> (“pipe-forward”) operator.

But pipes are not just compositions of functions mapped over lists. They have two other properties that are critical. The first is well-known, and the second less so.

First is laziness. Since each process in the pipeline explicitly forces its input, pipelines can be used on infinite streams (or portions thereof) just the same as with finite data.

The second is what I call recombination. Each stage in the pipeline can have radically different ideas about the structure of data. For example, pipelines typically combine characterwise (tr), linewise (sed) and whole-file (sort) operations in one go.

This is much harder to achieve in a functional setting because you don't have access to an underlying representation (cf. character streams on Unix): if your part of the pipeline understands data differently than the previous one, you have to map one abstraction to the other by writing a function.

Meanwhile, the shell programmer has a lower-level set of tools: familiar recipes for dealing with lower-level idioms such as line-breaking and field separation that recur across many different kinds of abstract data but may be recombined using the same tricks in each.
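
Part of why those tricks work at all is that a pipeline stage needs to agree with its neighbours on almost nothing. Here's a sketch of a tr-like stage in C (roughly what tr ' ' '\n' does): it sees only a byte stream, and it pulls input strictly on demand, which is also where the laziness comes from.

  #include <stdio.h>

  /* A stand-alone pipeline stage: bytes in, bytes out, pulled on demand.
   * It imposes its own (character-wise) view on the data, regardless of
   * whatever structure the neighbouring stages see in the same bytes. */
  int main(void)
  {
      int c;
      while ((c = getchar()) != EOF)
          putchar(c == ' ' ? '\n' : c);   /* turn spaces into newlines */
      return 0;
  }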

The upside of the abstract, functional approach is that you preserve the meaning of your data, even in the face of some heavyweight transformations. Meanwhile, shell programmers are often driven to frustration by their separator-munging not working properly once things get complex.

The downside is that it's more work, and more repeated work, because you might (in the worst case) have the quadratic problem of having to map n abstractions to n-1 other abstractions. By contrast, simple pipelines can often be quickly hacked together using a few standard idioms. They can even be written to be robust to changes in the underlying abstraction (such as numbers of fields), although this takes some extra care.

A middle ground might be to characterise those idioms and formalise them as a functional abstraction, into which many higher-level abstractions could be serialized and deserialized, but without going all the way down to bytes. Perhaps such a system already exists... it sounds a bit like a reflective metamodel of functional records or somesuch. Hmm... perhaps Lispers have been doing this for decades?

[/research] permanent link contact

Fri, 08 Jul 2011

In praise of (good) workshops

Publishing has at least two roles in science. On the one hand, it is a means of disseminating results, concretising the “progress” made by completed work. On the other hand, it is a source of feedback: it's a pillar of the peer review system that rejected papers receive feedback. Meanwhile, the idea behind conferences and workshops is, at least in theory, that each presentation will stimulate discussion and further feedback.

During my PhD I learnt the value of a pattern which seemed to suit my style of work, as follows. When work is under way, write an in-progress style of paper. This gathers early feedback. If it gets accepted (which it usually will, if the idea is good), you also get the benefit of the presentation and the subsequent feedback and discussion. Later, once you have results, write a full research paper to present them. Inevitably, you will have a lot more to say this time round. Some things will have changed, too. You will be able to write a better paper than before, because the earlier paper gave you some idea how to present the work and how to address its perceived weaknesses. (I admit I've only completed this bit of the cycle once so far! There is more in the pipeline....)

When I first went to a conference, I was surprised at how little conferring was going on. Many talks received only a couple of questions. Workshops, on the other hand---at least the good ones---set aside more time for discussion. Smaller audiences at workshops make it more likely that people will initiate discussion as a talk goes along. The lunch and other break times tend to have a more discussion-heavy vibe than those between conference sessions. This is perhaps, again, because a smaller group encourages more discussion. Also, the workshop group tends to “stick together”, whereas at a conference people diffuse between sessions. I guess single-track conferences are better in this respect, but I've only been to one of those, and I don't recall a lot of high-quality discussion that time.

(Poster sessions are not bad either, for discussion, if your poster can grab people's attention. But they are painful to present at... never again, I have vowed.)

Recently I had it put to me by an experienced researcher that workshops are not worth bothering with: they're just for people who are starting out, or for less good work, and they stop you from publishing at a better venue. I sympathise because I've been to some bad workshops, and seen some decidedly poor “research” presented at them. But that's an argument for greater workshop participation, not less. Submitting interesting ideas to workshops, for discussion, is exactly what's supposed to happen. The reason that they degenerate into small conferences for mediocre-or-worse work is precisely because they don't get enough good submissions by good people. Some workshops are established, and get good participation, and work very well in that form.

Prior publication at workshops is a subtle thing, but in short, is not something I worry about. I have certainly seen workshops with (online) digital proceedings from which it's common to see follow-up papers appear later at conferences. I'm not sure whether this is because workshop papers, being more preliminary presentations of work, simply “don't count” (an opinion I've heard voiced) or because those follow-up papers present quite a large delta. For the kind of work I do, a big delta is not hard to achieve anyhow---the contributions of the workshop paper would mostly be in argument, “position” or “idea”, together with perhaps some motivating experiments and preliminary results. Implementation and ensuing experimental work is saved for a full paper. Archival is cheap nowadays, so the convenience of having a printed proceedings accessible from the same place where we can find all the other papers shouldn't be seen as giving equal contribution-weight to these papers. (Suggesting otherwise seems to me to be endorsing a “numbers game” approach to the evaluation of research. Heaven forbid that we actually decide for ourselves what the contribution of some paper is, by reading it.)

I can see this split being less applicable for more theoretical work. The more abstract the formulation of the problem, the less there is to argue about. For practical work, discussing the problem set-up and high-level approach is very valuable. Even when work seeks to build big systems, the idea behind some work is often much bigger than the part that you are actually able to implement in practice. It's nice to have an opportunity for the bigger ideas to be discussed, reviewed and recognised.

A final reason for me to enthuse about workshops is that I'm one of the “little guys”. So far I've worked only on my own. I don't have big collaborative projects whose conference papers I can parachute onto. And I don't have very many coworkers who I can discuss my ideas in detail with. Workshops are a support infrastructure that I particularly need---for feedback, and also, perhaps slightly cynically, to maximise the exposure my work gets. Ultimately I want to convince people that my research vision is worth investing in. It's important that I take up opportunities for conveying my potential---which I believe to be great!---as well as what I've achieved, which will never match up to those who are habitual collaborators. Of course I'm not opposed to collaborating---far from it, but I just can't seem to find the right person....

[/research] permanent link contact

Wed, 06 Jul 2011

Functionality, not (just) reliability

I'm still a newcomer to verification. The more reading on verification I do, the more evident it is that most work in the area is interested in checking fairly boring properties of programs: usually that they don't crash, don't corrupt memory, or that some simple assertions hold. In this post I'll argue that treating a proof of these properties as “job done”, or even as a primary goal, is counterproductive: it overlooks the hidden costs of verification, and overstates the value of such a proof. There are far more important top-level goals than verifying these properties, yet there seems to be a real danger that research is overlooking them almost entirely.

To explain myself, I'm going to highlight three distinctions I've come up with about different kinds of work that can loosely be called “verification” or “formal reasoning about programs”. None of them are commonly mentioned, and certainly not by the names I've given them. But they seem fairly important to me, and in each case, one side of the divide is sorely neglected.

Positive specification versus negative specification

I call these kinds of “doesn't fail” properties “negative specifications”---they say what the software shouldn't do. I'm not trying to denigrate any work that verifies against these specs. It's often more than difficult enough to check these “boring” properties statically, without making things any more “interesting”. Nevertheless, focusing only on negative properties seems to neglect the classical goal of verification, which is to check that an implementation satisfies a specification which captures its intended functionality.

Tackling this means addressing “positive specifications”: what the software should do. This is a similar distinction to that between liveness properties and safety properties. But it is not the same: specifying liveness in the sense of “not deadlocking” or “not hanging” is still a negative specification. Really, what defines a positive property is that it has something to do with functionality, and not whether it's stated in a way that uses logical negation. We should aim to specify positive properties that capture an isolated facet of our system's intended functionality, expressed in application-level terms: perhaps small behaviours that a program simulates (but doesn't bisimulate), or small functional dependencies that a program reproduces.

Conservative versus “liberal” software construction

I had a hard time thinking of the right antonym to “conservative”, and “liberal” will have to do (as “reckless” was my other candidate).

Some classical approaches to programming have a highly conservative property that I call compulsory proof: the programmer cannot make progress without ensuring that some specification remains satisfied. Static typing is the most familiar example: type-checking is, from Curry-Howard, proving a property about your program. You can't do much with a program that doesn't type-check. Refinement-based synthesis methods are another example: a refinement step is only valid if it preserves the specification, and provably so.

As we scale up to larger and more complex programs, and larger and more complex specifications, these approaches start to inhibit progress. It's interesting how dependent types are still a “nearly there” feature---nobody has made them usable yet. I'm told that making programs type-check becomes harder and harder under increasingly dependent types. This is unsurprising: we're making the specifications (a.k.a. types) more complex, so it is harder to prove that programs satisfy them. It seems that the most pragmatic solution so far is to relinquish the insistence on proof at all times, a compromise adopted by Edwin Brady's Idris.

The idea of compulsory proof is flawed because programming is usually exploratory. As a programmer, I often start coding before I fully understand the problem. The process of programming provides necessary illumination. So, presupposing any specification that I must doggedly adhere to at each stage, whether from the tyranny of a type-checker or the tyranny of a refinement procedure, is anathema---it actively stops me from acquiring the understanding I need to complete the task. In the worst case, compulsory proof is a net loss: it slows us down far more than it helps us. What's unfortunate is that our research evaluation methods don't account for this. No programming research considers the net worth to a human programmer. Instead it prefers the mathematical orthodoxy of derivable properties such as preservation of type safety. I suspect these are poorly correlated.

The polar opposite to these conservative approaches, which I call “liberal”, is exemplified by dynamic languages. Here no proof of adherence to any specification (type-based or otherwise) is statically enforced. The programmer is free to break his abstractions at any time---even at run time. The only specifications that are enforced are those of pre-existing machine-level abstractions---integers, pointers and floating-point numbers---and even their integrity is maintained by dynamic checks only.

As with most polar extremes, neither the fully-conservative nor the fully-liberal approach is usually optimal in practice. I think the consensus from mainstream languages (C, Java, C++) is that static checking using a simple type system is a good idea (and I'm sure even most die-hard Python programmers often yearn for some basic static checking). The jury is still out on the cost/benefit ratio of more complex type systems, even ones such as Java's generics. Ken Arnold vociferously argues against Java generics, and when I first read his opinion, I was sceptical of his view---surely more static checking can't hurt? Nowadays I can see his points about the cost in complexity and, particularly, in the proof burden on programmers.

Meanwhile, from the liberal end of the spectrum, bug-finding tools like KLEE are interesting: we liberally (or recklessly) allow bugs into our program, then try to whittle them down. It's bug-finding and not verification because we can happily specify properties that the tool can't conclusively show are not violated. A consequence is that in practice KLEE doesn't terminate for most nontrivial programs. On the other hand, the space of assertions that we can use KLEE to check is large: at least in theory it subsumes type-safety (noting that a specification of type-safety can be encoded by instrumenting an untyped program with assertions using a suitably-defined instanceof-like operator) and arbitrary application-specific assertions. There are a few problems with KLEE: it's limited by what we can express as assertions (or built-in properties); it doesn't terminate in enough cases; and it tends to find uninteresting bugs which, while not technically false positives, might well be if we could write a more refined input specification. Nevertheless, I like it because it treats perfection (i.e. proof) as the limit case, not the immediate goal. (It's no coincidence that I'm basically working on addressing these weaknesses right now, including making it terminate for more programs---or at least, I would be if I wasn't wrestling with LLVM bugs the whole time.)
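
For anyone who hasn't seen one, here's roughly what a KLEE test harness looks like; the function under test and the property are invented for illustration, but klee_make_symbolic and klee_assume are the real interface, and the assertion is an ordinary application-level property rather than a type- or memory-safety one.

  /* A sketch of a KLEE harness: KLEE explores paths over the symbolic
   * inputs and reports any choice of inputs that violates the assert. */
  #include <assert.h>
  #include <klee/klee.h>

  static int clamp(int x, int lo, int hi)
  {
      if (x < lo) return lo;
      if (x > hi) return hi;
      return x;
  }

  int main(void)
  {
      int x, lo, hi;
      klee_make_symbolic(&x,  sizeof x,  "x");
      klee_make_symbolic(&lo, sizeof lo, "lo");
      klee_make_symbolic(&hi, sizeof hi, "hi");
      klee_assume(lo <= hi);              /* refine the input specification */

      int r = clamp(x, lo, hi);
      assert(r >= lo && r <= hi);         /* an application-specific property */
      return 0;
  }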

Constructed reliability versus emergent reliability

Using any of these techniques takes time. Writing specifications takes time. Even if you say “my verifier requires no annotations!” (like a lot of Dawson Engler's work, including KLEE), you probably rely on assertions. Even if you restrict yourself to “belief”-style latent specifications (like Engler's 2001 SOSP paper), they got there by somebody writing code. If you rely on “mined” specifications, as recovered by a bunch of work (like this and this and this) you have to gather a corpus and run a mining algorithm and check for false positives and then use some other technique to hunt the false negatives.

In other words, despite half-claims to the contrary, we make our software reliable by pouring some amount of labour into it. New, exciting techniques are exciting because they deliver more with less labour.

But there is an appalling lack of holism here. (Aside: Thomas Ball's call for holism is an interesting read and I applaud it; here I like to think I'm going much further!) What other approaches do we have for making software reliable? How about making it easier to write the code, so that we have more labour to pour into the reliability side?

In other words, nonfunctional concerns trade off against each other, and also against functional concerns! Time spent coding new features is time taken away from static analysis or testing or any other reliability-focused work, and vice-versa. It's also time taken away from profiling and optimisation, from refactoring and documentation, and so on. So in other words, it all helps. Software's value isn't a sum of independent parts. The parts are interdependent; what benefits one benefits all. Reliability can be deliberately constructed by applying specific reliability-focused tools, and hacking at code until it passes muster by these tools. But also, it can emerge from superior development processes that made it easier for programmers to build in reliability in the first place.

Now let me lament my own position. In the research world, only a relatively narrow selection of approaches get any funding. Reliability is a perennially hot topic. It's unquestioningly considered sound research motivation to trot out lines about the importance of reliable cars or reliable trains or reliable nuclear power plants. Similarly, it's routine to trot out analogies with civil engineering, bridge-building and the like. Reliability is important, for sure. But thinking holistically, that doesn't mean we have to attack these problems by building tools with the narrow remit of making sure nothing bad happens in whatever code they're given. Improving all parts of the development process can contribute to these goals, and have untold other economic benefits in the process. Too many researchers' mission statements list reliable software as top priority. But reliability is just a means to economic (or “value”) gain. Hardly any researchers will say their goals are “quality software”, or “functioning software”, or “economical software development”. Why not?

A senior researcher here in Oxford recently pitched his work (to industrial visitors) by saying that verification is ensuring that “nothing happens”. I hope that was a gross simplification for pitching purposes, because we can do a lot better than that.

To finish, let me shamelessly bring on some of my current “background” research interests and activities. I'm interested in good ways of re-using existing code; good ways of adopting programming language innovations without rewriting the world; good editing tools (including refactoring and a whole lot more---I'll blog about these shortly); good dynamic analysis tools, including a good debugger (again, I'll blog more shortly). Of course, I didn't manage to find a job on any of these ideas. Since my PhD days, I've felt as though I was perhaps the only programming researcher in the world whose top priority is not specifically code that is reliable, per se, but the bigger yet seemingly more obvious goal of code that does what you want it to.

So, am I crazy? I took a job in verification because I thought I'd learn some program analysis skills that would be generally useful, including (later) application to the problems nearest my heart, not just to approaches sanctioned by the reliability orthodoxy. But it's a draining experience to be railing against orthodoxy all the time, especially when you feel like the only lunatic in the asylum. I'm not sure how much longer I can take it.

[/research] permanent link contact

Tue, 14 Jun 2011

Post post viva

I blogged previously about my PhD viva. I've finally got the examiners' reports, and was quite surprised by the difference between the two. Suffice it to say that one was much more positive than the other, and reassuringly for me, the more positive one was also from the examiner who is both more experienced in the role, and more familiar with my research area. (It's probably not hard for most readers to work out which one is which.)

I'm kicking myself only because, given this information, I could perhaps have steered quite a different path through the viva, one that would have resulted in far less extra work being demanded of me. Nevertheless, the points in favour of doing the “corrections” I am now doing (a.k.a. self-financed development work) still stand from my last post, so I shouldn't kick myself too hard.

[/research] permanent link contact

Wed, 13 Apr 2011

PhD examination

So I passed my PhD viva a couple of weeks ago. I do, however, have a lot of corrections to do. In fact I have about the most corrections I could have, in terms of hours of work, without having to resubmit my thesis. Thank God I don't have to do that. As it happens, the actual corrections to my thesis are not very many. I have to add the odd paragraph here and there, and collect a small amount of extra data. The killer is the non-thesis bit. I'll talk about that in a moment.

There's a lot I could say to summarise my feelings about the viva. Here are the two words I've been using most when people have asked me how it went: “reasonable” and “annoying”.

For the “reasonable” part, I have to thank my examiners, Andy Rice and Alex Wolf, who deserve credit for the depth at which they got to grips with my thesis. I was quite impressed with their attention to detail. Although I can (and will, shortly) disagree with their take on what is necessary or sufficient to substantiate my thesis, I also appreciate that in doing so I am very much challenging a norm... and the examination process isn't the right place to do that. Examination is a pragmatic business, and when considered less on intellectual high ground and more in terms of personal risk and reputation, I could not reasonably have expected them (at least not with high probability) to take a different position.

For the “annoying” part, in short, I was far too idealistic in my conception of the PhD examination process. Of course it has some room for intellectual rigour; but virtually no research in any practical field has such unthreatened validity that examination doesn't fall back on “due diligence” to some extent. Another word for “due diligence” is “hoop-jumping”, and that really sums up why I think my thesis attracted the complaints that it did: it didn't jump enough established hoops to make the examiners feel comfortable rubber-stamping it. I'm not saying my thesis is great; it's fairly weak really---but it's no weaker than a lot of other theses which seem to pass without problem. I suppose the examiners did rubber-stamp it in the end, given that I passed---but subject to corrections which, unsurprisingly, make it jump an additional hoop. I don't feel that jumping this hoop substantiates the thesis any more strongly, and this is the centre of my annoyance.

A new rant about an old problem

My problem is not a new one. Introducing a new language is a relatively common thing for a CS researcher to do. Assuming the claimed benefit of the language is a practical one rather than a theoretical one, evaluating the language is a huge problem. PhD students don't have the time or the budget to carry out large field studies. Instead, the usual approaches are to prove something about the language, to show that it has reasonable performance, and/or to apply it to case studies. I'm going to be bold and claim that the first two are hoop-jumping in most cases. It's a rare case indeed where a language's goal is actually to realise the theoretical property in question or to “do X really fast”. (Of course, other kinds of work, in theory and systems respectively, do have these as express goals, but I'm talking about languages here, where “language” is distinct from “calculus”.)

It's reasonable to set performance or theoretical goals for your language in addition to your main goal, as this can be a source of interesting problems and brings the work closer to practical applicability or theoretical interest. However, it really annoys me when people confuse these goals. I hate seeing papers which introduce some new language feature that is claimed to help programmers---the usual end goal of any language---and then evaluate it with an irrelevant proof or an irrelevant performance measurement. This encourages confusion between the main goal of a language and these side-goals, and moreover encourages a culture where evaluating the main goal is neglected in favour of the side-goals, or where the side-goals are taken to imply the main goal.

Trouble with case studies

Case study evaluation is unsurprisingly the approach I chose. This might have passed muster, except that the other hoop I didn't jump through was producing a complete working implementation. This doesn't mean I didn't implement anything: I did a lot of implementation work during my PhD. But for various reasons, my reach had exceeded my grasp. I had plenty of working examples of the techniques I wrote about, but the code generation side of my compiler had got hairy enough that I decided it should suffice to show implementability rather than implementation. I think I did this, and I don't think my examiners doubted it either, although they did mince their words on the subject. In the end, they were reluctant to accept this implementability evidence as sufficient defence of the thesis. I couldn't put my finger on why, and I wouldn't say they could, either. Instead, I only got some quite vague questions, in essentially three forms.

The first was: “How do you know your language features are sufficient?” Of course, I don't. Since I spent a whole chapter talking about cases that aren't handled, clearly I make no such claim (although I do identify what needs fixing and how this doesn't break the key abstractions of the language). I do claim that they're sufficient for the case studies, and that, since these are representative of other code, they will be sufficient for a wider range of code. This is demonstrated by argument and careful analysis of code rather than by saying “we ran it”. But saying “we ran it” is still subject to error---since realistically, how much testing did you do, and how can you be sure it was enough? The case the examiners seemed to worry most about was the one where, by failing to account for some unspecified detail, some new language feature or altered semantics would be necessary just to handle the case studies themselves, never mind other examples to which I claimed the approach generalised. I think I provided quite a weight of evidence that this wasn't going to happen. However, even if it did, it would still be a matter of error bars, not validity.

The second was: “How do you know you haven't forgotten something in your implementation?” Again, I don't, but I have implemented enough that the implementability isn't in doubt. Even if a fully working version turned up one or two extra details that needed addressing, this wouldn't undermine the thesis.

A final question: “How do you know your language features are necessary?” I still find this question bizarre. The language features exist to handle common cases in a way that saves programmer effort. Every feature is illustrated with a plausibly common example, and any experienced programmer would recognise its usefulness. This doesn't mean they couldn't be eliminated, but doing so would bring a strict degradation in what the language offers the programmer.

What didn't help was that the examiners didn't ask me these questions one at a time, but rather rotated among them with dizzying speed. It was as though they themselves hadn't yet separated them in their own heads. Had they done so, I might have been able to fend them off better, along the above lines. As it was, I can't help feeling I did well not to get too put out by it all. I nearly did lose my cool at one point, when one examiner suddenly claimed that I needed to do a performance evaluation. I had very explicitly and very clearly excluded performance from any but informal consideration very early in the dissertation, precisely to prevent my task from blowing up even further than it already had. Fortunately I managed to argue this one down, although annoyingly, I still have to gather some (meaningless, but fairly trivial to collect) performance data for my corrections.

The “solution”

So, how did the examiners propose that I answer their objections? In the time-honoured hoop-jumping way: by finishing the implementation, of course, so that I can say “I ran it”! Actually I only have to get it up to a certain level, rather than finishing everything, which I suppose is something to be glad about. But I had failed to complete my implementation for very good reasons: it's a ton of work, and it was already past the point where its feasibility was established. In hindsight I could have written up this fact better. But I think it was still clear that what remains is a matter of development---which I wasn't prepared to spend any more of my own money to fund, given that I'd already spent six months living off savings and consultancy work. Fortunately, circumstances now mean that I have a job which pays enough that by going part-time I can get it done while remaining solvent. (It also had to happen this way round, since if I hadn't been able to submit my thesis without a full implementation, I wouldn't have been able to get the job that is now, indirectly, paying for the implementation's completion.) Of course, my financial situation is an irrelevance as far as examination goes, and it has to be that way. The moral is that there is no safety net, and nobody truly responsible for your thesis other than yourself. The system is accountable to nobody, and it has no incentive for self-improvement... except maybe to the extent that (and over the timescales by which) PhD examinees who suffer negative experiences become examiners who can still remember them. “It's not fair!” as Jennifer Connelly once declaimed, “... but that's the way it is”.

The role of empirical rigour

At the moment, and probably since time immemorial, there is a cohort of CS researchers in the fields of programming languages and software engineering who are vociferously advocating greater empirical rigour in research. Early on in my PhD, I thought that this movement could only be bad news for little old me. I barely had the resources to produce an implementation within one PhD, never mind do an empirically rigorous user study. However, now I think that this movement is actually on my side (as well as the side of “truth” and good science, which I didn't doubt). The hoop-jumping that would have satisfied my examiners---producing a working implementation and running it---doesn't actually strengthen my thesis, and in an empirically rigorous discipline, this would be clear. In turn, it would probably be a more “done thing” to submit theses that don't tell a complete story---because telling a complete story on something as complex as a practical programming language, and doing so with empirical rigour, is too much work for one PhD. Perhaps it would be more acceptable to package research “towards” a goal (evidence, but not yet conclusive evidence), with its outstanding threats to validity clearly explained, yet unresolved. Instead, in our empirically immature discipline, we try to brush these unresolved threats aside by arbitrary hoop-jumping.

The downside of a more empirically rigorous discipline would of course be that each researcher can't race ahead quite so fast. Within the scope of one PhD, there is far less prospect of telling a neat, complete story. In my case, this would have been both good and bad. For empirical rigour's sake, I would have had to spend much longer on case studies, probably including starting my thesis with an empirical study. Perhaps all implementation would have had to be left for the future, and my thesis's contribution would mostly be on understanding the problem empirically, with a paper sketch of the solution validated by detailed analysis of examples. Of course, this paper sketch would have a weight of evidence behind it. The bad part is that I actually like the idea of implementing stuff, and even though I haven't (yet) finished the job (though I am now working on it again), I would have found it frustrating to embark on a PhD with no intention of completing an implementation.

Conclusion

This post probably sounds like a lot of sour grapes, although I hope it doesn't. It's actually a positive thing for me that circumstances have conspired to give me a chance to finish the Cake implementation, since it will be a useful springboard for future work and perhaps even (gasp) impact. Previously, when I was resigned to not finishing it, it was looking like this would become an albatross. More generally though, I can't pretend not to be a little bit sour about the course my PhD took. Despite making what were defensible and reasonable moves at each stage, the whole thing turned into a bit of a mess and has caused me a lot of pain. However, the mess of the work (which could have been better, but I think was comfortably “good enough”) is a different mess from that of the examination. I am now very strongly convinced that there really is a problem with the attitudes to evidence, rigour and the mythical “completeness” in computer science. If I last long enough in this line of work, perhaps I can help do something about it.

[/research] permanent link contact

Thu, 17 Mar 2011

Everything is a somehow-interpreted file

My last post was ranting about the difficulty of maintaining and debugging configuration of complex software. I somewhat naively advocated the position of using the filesystem to the greatest degree possible: as the structuring medium as well as a container of data. This is good because it maximises the range of existing familiar tools that can be used to manage configuration. But at least in some cases, the filesystem---as an implementation, although sometimes as an interface too---is manifestly not good enough (e.g. in terms of space-efficiency for small files). Instead, we want to make our old tools capable of accessing, through the filesystem interface they already know and love, these diverse and wonderful implementations of structured data.

I have heard many researchers claim that binary encodings of data are bad, that XML is good, or even that revision control systems are bad ways of storing code, for the same recurring semi-reason that good things are things that conventional tools work with---sometimes the phrase “human-readable” crops up instead---and bad things are things they don't work with. You can search XML or plain source code using grep, or so the reasoning goes; you can't generally do the same for binary or diffed-text or otherwise munged encodings. This argument is classic tail-wagging-dog material. Plain text is only “human-readable” because there is some widely deployed software that knows how to decode binary data representing characters (in ASCII or UTF-8 or some other popular encoding) into glyphs that can be displayed graphically on a screen or on a terminal. If other encodings of data do not have this property, it's foremost a problem with our software and not with the encoding.

Unix is arguably to blame, as it is obsessed with byte-streams and the bizarre claim that byte streams are “universal”. The myriad encodings that a byte-stream might model are less often mentioned. I'd argue that a major mistake of Unix is its failure to build any descriptive channel into its model of communication and storage. Without this, we're left with what I call the “zgrep problem”: each file encoding requires its own tool, and the system offers no help in matching files with tools. If I have a mixture of gzipped and uncompressed text files in a directory---like /usr/share/doc on any Debian system---recursively searching is a pain, because some files need a different grep from others. Some approach combining analogues of HTTP's Content-Type and Accept-Encoding with make's inference of multi-hop file transformations could allow the operating system to transparently construct the correct input filters for searching this mixture of files from a single top-level instance of our plain old grep tool.
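
To make the idea concrete, here is a minimal sketch, in Python, of what a format-aware recursive grep might look like. Everything in it is invented for illustration: the decoder table, the magic-number checks and the function names are my own assumptions, and it only does single-hop decoding rather than the make-style multi-hop inference I have in mind. Still, it shows the shape of the thing: encodings are described as data, files are matched against those descriptions, and a single grep-like loop does the searching.

    #!/usr/bin/env python3
    # Toy "format-aware grep". Illustrative sketch only; the decoder table
    # and magic-number checks are assumptions made up for this example.
    import bz2, gzip, io, os, re, sys

    # Each entry: (predicate over the leading bytes, opener yielding a text stream).
    DECODERS = [
        (lambda head: head.startswith(b"\x1f\x8b"),   # gzip magic number
         lambda path: io.TextIOWrapper(gzip.open(path, "rb"), errors="replace")),
        (lambda head: head.startswith(b"BZh"),        # bzip2 magic number
         lambda path: io.TextIOWrapper(bz2.open(path, "rb"), errors="replace")),
        (lambda head: True,                           # fallback: treat as plain text
         lambda path: open(path, "r", errors="replace")),
    ]

    def open_as_text(path):
        """Choose a decoder by inspecting the file's leading bytes."""
        with open(path, "rb") as f:
            head = f.read(4)
        for matches, opener in DECODERS:
            if matches(head):
                return opener(path)

    def grep_tree(root, pattern):
        """Search every file under root, decoding each one appropriately."""
        rx = re.compile(pattern)
        for dirpath, _, files in os.walk(root):
            for name in files:
                path = os.path.join(dirpath, name)
                try:
                    with open_as_text(path) as stream:
                        for lineno, line in enumerate(stream, 1):
                            if rx.search(line):
                                print(f"{path}:{lineno}:{line.rstrip()}")
                except OSError:
                    pass   # unreadable file; skip it

    if __name__ == "__main__":
        grep_tree(sys.argv[1], sys.argv[2])   # e.g. /usr/share/doc copyright

The point is that the decoder table is configuration rather than logic baked into a zoo of special-purpose greps: supporting a new encoding means adding a description, not writing another tool.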

For this to work we need an actually sane way of describing data encodings. (Call them “file formats” if you will, although I prefer the more generic term.) Later attempts at describing the structure of byte-streams, notably file extensions or MIME types, have been ill-conceived and half-baked, mainly because they are based on a simplistic model where a file has “a type” and that's that. So we get the bizarre outcome that running file on /dev/sda1 says “block special” whereas taking an image of the same disk partition and running file on that will say something like “ext3 filesystem data”. Doing file -s might be the immediate workaround, but in reality, any given filesystem object has many different interpretations, any of which may be useful in different contexts. A zip file, say, is a bunch of bytes; it's also a directory structure; it might also be a software package, or a photo album, or an album of music.

Another interesting observation is that these encodings layer on top of one another: describing a way of encoding an album of music as a directory structure needn't target a particular encoding of that structure, but presumes some encoding is used---any encoding that conceptually models a directory structure is sufficient. So we want to capture them in a mixin-like form, and have some tool support for deriving different compositions of them. What would be really neat is if a tool like file, instead of doing relatively dumb inode-type and magic-number analyses, actually did a search for encodings that the file (or files, or directory) could satisfy. Each encoding is a compositional construction, so the search is through a potentially infinite domain---but one that can usually be pruned to something small and finite by the constraints provided by the file data. And by giving the tool more mixins to play with, as configuration data, it could find more valid interpretations of our data. This sort of discovery process would solve the zgrep problem in far more complex cases than the one I described. Implementing (and, er, somehow evaluating) these ideas might make a nice student project at some point.
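
Here is an equally toy sketch of that discovery process, again in Python and again entirely hypothetical. Each “interpretation” is a small mixin-like function that tries to lift a lower-level view of the data into a higher-level one; the search composes them, pruning any branch that the data rules out. A real tool would load the mixins as configuration and handle directories and multi-file structures, not just a single file's bytes.

    #!/usr/bin/env python3
    # Toy "interpretation discovery". Illustrative sketch only; the
    # interpretations below are invented for this example.
    import io, sys, zipfile

    def as_bytes(path):
        with open(path, "rb") as f:
            return ("bytes", f.read())

    def as_text(view):
        kind, data = view
        if kind != "bytes":
            return None
        try:
            return ("text", data.decode("utf-8"))
        except UnicodeDecodeError:
            return None

    def as_zip_directory(view):
        kind, data = view
        if kind != "bytes" or not data.startswith(b"PK"):
            return None
        try:
            with zipfile.ZipFile(io.BytesIO(data)) as z:
                return ("directory", z.namelist())
        except zipfile.BadZipFile:
            return None

    def as_photo_album(view):
        kind, names = view
        if kind != "directory" or not names:
            return None
        if all(n.lower().endswith((".jpg", ".png")) for n in names):
            return ("photo-album", names)
        return None

    INTERPRETATIONS = [as_text, as_zip_directory, as_photo_album]

    def discover(view, chain=()):
        """Depth-first search for stacks of interpretations the data satisfies."""
        found = [chain + (view[0],)]
        for interpret in INTERPRETATIONS:
            higher = interpret(view)
            if higher is not None:
                found += discover(higher, chain + (view[0],))
        return found

    if __name__ == "__main__":
        for stack in discover(as_bytes(sys.argv[1])):
            print(" -> ".join(stack))

Run on a zip archive full of JPEGs, this would report something like “bytes -> directory -> photo-album”; on a plain text file, just “bytes” and “bytes -> text”.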

These ideas have been with me right from my earliest forays into research-like work. I've yet to really nail them though. My bachelor's dissertation was about a filesystem exposing arbitrary in-memory data. And my PhD work addressed some of the problems in making software accessible through interfaces other than the ones it exposes as written. I've yet to do anything with persistent data or other IPC channels so far, but it has been on the list for a long time. (It's become clear with hindsight that this agenda is a characteristically postmodern one: it's one of building tools and systems that provide room for multiple interpretations of artifacts, that don't insist on a grand and neatly-fitting system being constructed in one go, and that accommodate incomplete, open or partially described artifacts.)

The power-management problem that provoked my previous post actually gets worse, because even when not on AC power, closing the lid only usually makes the laptop suspend. Sometimes it just stays on, with the lid closed, until the battery runs down. For my next trick, I may be forced to come up with some bug-finding approaches that apply to scripts, filesystems and the like. If we're allowed to assert that a file has a particular abstract structure, and check that separately, then we can probably factor out large chunks of state space from real software. In turn, that might shift our focus away from “inputs of death”, fuzz-testing and other crash-focussed bug-finding techniques, and towards the harder but more interesting ground of finding functionality bugs. I'll rant about that in a forthcoming post.

[Update, 2011-3-18: I just ran into an interesting example of this same debate, where Ted Ts'o is actively advocating using a database in preference to maintaining lots of small files, for performance and robustness reasons, in relation to ext4's semantic differences from ext3. Understandably, he provokes a lot of discussion, notably here, where people complain that binary files are difficult to maintain. There is a mix of views and a wide variance of clue level, but the real issue is that no one interface---where “interface” includes robustness and performance characteristics---is the globally “right” one.]

[/research] permanent link contact

