I'm a Lecturer (“Assistant Professor” in US English) at King's College London's Department of Informatics, where I teach topics in computer systems and software development, and do research within the Software Systems group.
Although my forename is very common in English-speaking countries, increasingly even English-speakers seem to have trouble pronouncing it. Try the following mouseover-assisted IPA, due to Wikipedia, if you're not sure: /ˈstiːvən/.
For my publications, click here (or keep reading).
I am a practical “computer scientist” who builds things; I'd argue this makes me more engineer than scientist per se. I have wide interests: my goal is to make it easier and cheaper to develop useful, high-quality software systems. So far, my work has mostly focused on programming languages and the systems that support them—including language runtimes and operating systems.
My primary research goal is to make possible increasingly humane, programmable computer systems. Currently, I am pursuing this through a humane evolution of the Unix-like operating system.
If I had to limit myself to only a dozen research areas, they would be these.
Most of my research concerns programming, but I identify as a “systems” researcher. For me, “systems” is a mindset rather than a research topic. It means I am primarily interested in the practical consequences of an idea; its abstract properties are of interest only subserviently. (If you can come up with a better definition, let me know.) I generally hesitate to identify as a “programming languages” researcher, not because I dislike languages but because they're only one piece of the puzzle. I'd identify as a “programming systems” researcher if that were a thing.
You can read more below about what I'm working on. A recurring theme is that I prefer pushing innovations into the software stack at relatively low levels, to the extent that this is sensible (which I believe can be surprising). Another is the pursuit of infrastructure that is well-integrated, observable, simple, flexible, and designed for human beings—particularly if we believe (as I do) that computers are for augmenting the human intellect, not just for high-performance data crunching.
I believe that technology cannot be pursued in isolation. “Good” engineering research is done with an understanding of history, human culture, and that research's place in them. This might mean asking small questions, like the impact of a technical change on a wider software ecosystem. It also means asking bigger questions of the human and social assumptions behind the work: who might use it; who it will benefit; why this is an overall good; and so on. By its answers to such questions, perhaps implicitly, all technical work invariably subscribes to what one could call a “philosophical basis”. I believe that understanding these bases can lead to more valuable technical work. Some of the relevant questions are so big that they cannot be answered by experimental methods. I do not subscribe to the view that to be “scientific” is an unalloyed good, or that the methods of the humanities are faulty. To do good engineering research is to be more than purely a scientist.
I did my Bachelor's degree in computer science in Cambridge, graduating in 2005. I then stayed on for a year as a Research Assistant, before starting my PhD in 2006.
My PhD work was based in the Networks and Operating Systems group under the supervision of Dr David Greaves, and centred on the problem of building software by composition of ill-matched components, for which I developed the Cake language.
During summer 2007 I took an internship at Fraser Research, doing networking research.
From January 2011 until March 2012 I was a research assistant in the Department of Computer Science at the University of Oxford. There I worked within the research programme of the Oxford Martin School's Institute for the Future of Computing. My work mostly focused on constructing a continuum between state-space methods of program analysis (notably symbolic execution) and syntactic methods (such as type checking). I ended up moving on before I could see this to fruition, but in 2019 I was awarded some funding from VeTSS and it was briefly taken up in the (now aborted) PhD work of Justus Adam (see the paper at TAPAS '20).
From May 2012 to May 2013, I was a postdoctoral researcher at USI's Faculty of Informatics in Lugano, Switzerland, within the FAN project.
From May until October 2013 I was temporarily a Research Assistant at Oracle Labs, on the Alphabet Soup project.
I returned to Cambridge in October 2013, as a Research Associate (later Senior Research Associate) at the Computer Laboratory (now the Department of Computer Science and Technology). I worked within the REMS project, and participated in both the Programming, Logic and Semantics group and the Systems Research Group (particularly its Networks and Operating Systems subgroup). As of July 2022 I continue to be a visiting researcher at the Department.
In 2018 I took up the job of Lecturer (again, “Assistant Professor” in US English) at the University of Kent. In 2021 I moved to “the same” job (but one fairly different in practice) at King's College London.
In my industrial life, I enjoyed some spells working for Opal Telecom and ARM around my Bachelor's studies. More recently, I have consulted for Ellexus and Petagene, and conducted some industrial research work (in reality rather development-heavy) for Oracle Labs.
I maintain (sometimes neglectfully) two CVs: a snappy two-page one here, and a longer academic one here. Please be aware that both may be a little rough or incomplete at times. If you're recruiting for jobs in the financial sector, even technical ones, please don't bother reading my CV or contacting me—I promise I am not interested.
Here's what I'm working on as of April 2020. In most cases these are my own individual projects; a few are with collaborators, as I describe.
Unix-like processes are here to stay, but are impoverished in ways that hold us back. It is too difficult to build robust tools and systems which automate or abstract meta-level programming tasks, such as debugging, profiling, visualisation, instrumentation, composition, error reproduction and dynamic update. In turn, this massively hampers software development generally! In large part, the root problem is that the state of a Unix process is too opaque. Unix lacks adequate meta-level facilities for describing user-supplied abstractions such as data representation, memory layouts, procedural interfaces, file formats, inter-process protocols, and so on. To fix this, rather than designing a whole new system with strong meta-level abstractions, as has been done in Smalltalk and Lisp programming systems, my work chooses instead to evolve the services offered by existing Unix-like systems. Since meta-level tasks are far from uncommon, vestigial features to help carry them out can often be found “lurking” in modern Unix. By extending and evolving these features, my work can (only half-jokingly) be said to be “dragging Unix into the 1980s”. It does so while maximising compatibility, allowing new benefits to be applied to old code, in whatever language—the work spans C and C++ right up to JavaScript, Python and OCaml. One output of this work is liballocs, a run-time library for fast whole-process tracking of per-allocation metadata, including detailed type information. Another is libdlbind, which helps make JIT-compiled code visible to debuggers and dynamic loaders. Both of these are building blocks used by my other research efforts (below). Motto: if you must unify, unify at the meta-level of the operating system, not at the base language level or in language implementations.
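To make the idea concrete, here is a toy sketch in C of the kind of query such a meta-level service answers: given any pointer into the process, report which allocation it falls inside and what type was recorded for it. This is not the real liballocs API; all the names are illustrative, and a linear scan of a small table stands in for liballocs's real process-wide index.

```c
/* Toy illustration, not the real liballocs API: given any pointer into the
 * process, a runtime service answers "which allocation does this fall
 * inside, and of what type?". liballocs does this process-wide and fast;
 * here a linear scan of a small table stands in for its real index. */
#include <stdio.h>
#include <stdlib.h>

struct alloc_record {
    void *base;
    size_t size;
    const char *type_name;   /* stand-in for real per-allocation type metadata */
};

static struct alloc_record records[64];
static int nrecords;

static void *typed_malloc(size_t size, const char *type_name)
{
    void *p = malloc(size);
    if (p && nrecords < 64)
        records[nrecords++] = (struct alloc_record){ p, size, type_name };
    return p;
}

static const struct alloc_record *query_alloc(const void *p)
{
    for (int i = 0; i < nrecords; ++i) {
        const char *base = records[i].base;
        if ((const char *)p >= base && (const char *)p < base + records[i].size)
            return &records[i];
    }
    return NULL;
}

int main(void)
{
    int *xs = typed_malloc(10 * sizeof(int), "int[10]");
    const struct alloc_record *r = query_alloc(&xs[3]);   /* an interior pointer */
    if (r)
        printf("interior pointer lies in a %zu-byte \"%s\" allocation at %p\n",
               r->size, r->type_name, r->base);
    free(xs);
    return 0;
}
```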
Huge amounts of code have been written, and continue to be written, in unsafe programming languages including C, C++, Objective-C, Fortran and Ada. Performance is one motivator for choosing these languages; continuity with existing code, particularly libraries, is another. The experience of compiling and debugging using these languages is a patchy one, and often creates large time-sinks for the programmer in the form of debugging mysterious crashes or working around tools' limitations. I have developed the libcrunch system for run-time type checking and robust bounds checking. This consists of a runtime library based on liballocs, together with compiler wrappers and other toolchain extensions for instrumenting code and collecting metadata. Slowdowns are typically in the range 0–35% for type checking and 15–100% for bounds checking. The system is also unusually good at supporting use of uninstrumented libraries, recovering from errors, and supporting multiple languages. Its design is a nice illustration of an approach I call toolchain extension: realising new facilities using a combination of source- and binary-level techniques, both pre- and post-compilation, together with the relevant run-time extensions. Critically, it re-uses the stock toolchain as much as possible, and takes care to preserve binary compatibility with code compiled without use of these extensions. The mainline implementation uses the CIL library; during 2014–15, Chris Diamand developed a Clang front-end, with help from David Chisnall. In 2016–17 Richard Tynan developed a version of the runtime for the FreeBSD kernel. Motto: unsafe languages allow mostly-safe implementations; combine compile- and run-time techniques to make this the pragmatic option.
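As a rough illustration (the function name check_is_a and the message below are made up, not libcrunch's real interface), the instrumentation conceptually turns each pointer cast into a guarded cast like the following, where the real check would consult liballocs's record of the allocation's type rather than a stub.

```c
/* Rough sketch of the check inserted at a pointer cast. The name
 * check_is_a() and the message are illustrative only; the real libcrunch
 * check consults liballocs's record of what type the allocation holds. */
#include <stdio.h>
#include <stdlib.h>

struct widget { int id; double mass; };

static int check_is_a(const void *obj, const char *type_name)
{
    (void)obj; (void)type_name;
    return 1;   /* stub: the real implementation answers from per-allocation metadata */
}

int main(void)
{
    void *p = malloc(sizeof(struct widget));

    /* The source said:  struct widget *w = (struct widget *)p;
     * Instrumented code effectively becomes: */
    if (p && !check_is_a(p, "struct widget"))
        fprintf(stderr, "type error: %p is not a struct widget\n", p);
    struct widget *w = (struct widget *)p;

    if (w) w->id = 1;
    free(w);
    return 0;
}
```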
The semantics of programming languages are well studied, but in some cases, notably C and C++, they remain hard to nail down. With Cambridge and ISO WG14 colleagues, I'm involved in an effort to put C as actually practised on a firmer semantic footing. Relatedly, with JKU and Kent colleagues I have worked on understanding the most common ways in which C is effectively extended with inline assembly. Meanwhile, two equally crucial but much less-studied areas are the semantics of linking and (separately) of system calls. The challenges of verified toolchains and verified run-time environments depend critically on accurate specifications for these services. With my colleagues on the REMS project, we're working on detailed specifications of both ELF linking and the Linux system call interface. This also means developing tools for experimentally testing and refining these specifications. In summer 2015 Jon French developed tools for specifying system calls in terms of their memory footprints. Motto: a specification for every instruction.
Debugging native code often involves fighting the tools. Programmers must trust to luck that the debugger can walk the stack, print values or call functions; when this fails, they must work with incomplete or apparently impossible stack traces, “value optimized out” messages, and the like. Obtaining debugging insight is even harder in production environments, where unreliable debugging infrastructure risks both downtime and exploits (see Linus Torvalds in this thread, among others). With REMS colleagues, Francesco Zappa Nardelli, and talented students including Théophile Bastian and Simon Ser, we're working on tools and techniques for providing a robust, high-quality debugging experience. This comes in several forms: at its simplest, it means sanity-checking compiler-generated debugging information. At its most complex, it means propagating more information across compiler passes, to enable debugging information that is correct by construction. Debugging is in tension with compiler optimisation, and is also inherently in need of sandboxing (to limit who can consume what application state). Motto: if it compiles, it should also debug.
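A quick way to see the problem first-hand (an illustrative experiment, not an artefact of the project): build the snippet below at -O0 and at -O2, both with -g, and compare what the debugger reports. The commands in the comments are the usual GNU ones; exact behaviour varies by compiler and version.

```c
/* unwind_demo.c (name is arbitrary). Build twice and compare:
 *
 *   cc -O0 -g unwind_demo.c -o demo-O0
 *   cc -O2 -g unwind_demo.c -o demo-O2
 *   gdb ./demo-O2      # break leaf; run; bt; print scale
 *
 * At -O2, 'scale' is often shown as "<optimized out>" and inlined frames
 * can vanish from the backtrace; the unwind tables themselves can be
 * inspected with: readelf --debug-dump=frames demo-O2 */
#include <stdio.h>

static double leaf(double x)
{
    return x * x + 1.0;
}

static double middle(double x)
{
    double scale = 3.0;          /* a likely victim of "<optimized out>" */
    return scale * leaf(x);
}

int main(void)
{
    printf("%f\n", middle(2.0));
    return 0;
}
```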
Language barriers are holding back programming. They splinter the ecosystem into duplicated fragments, each repeating work. Foreign function interfaces (FFIs) are the language implementers' attempt at a solution, but do not serve programmers' needs. FFIs must be eliminated! Instead, using a descriptive plane (again, provided by liballocs), we can make higher-level language objects appear directly within the same metamodel that spans the entire Unix process, alongside lower-level ones, which are mapped into each language's “object space”. For example, in my hacked nodeJS (quite a bit of V8 hacking was required too), all native functions visible to the dynamic linker are accessible by name under the existing process object. You're no longer dependent on hand-written C++ wrapper code, nor packages like node-ffi to access these objects. The type information gathered by liballocs enables direct access, out-of-the-box. It's far from a robust system at the moment, but the proof-of-proof-of-concept-concept is encouraging. Motto: FFIs are evil and must be destroyed, but one VM need not (cannot) rule them all.
An infamous dichotomy has long divided programming and programmers: between statically type-checked programming, and “not that”. Type-correctness is a great property, because it is highly generic (all programs have some latent notion of “type”) and is very nearly always what you want. Meanwhile, syntax-directed compile-time checking is a specific and limited method of establishing that property. Gradual typing helps with refactoring towards a syntactically checkable form, but still banks on syntax-directed checkability as the end point. Dependently typed languages are pushing that envelope from one particular direction: by using increasingly sophisticated, less syntactic compile-time analyses, expanding the range of checkable properties while seeking to preserve soundness. I'm interested in a third, complementary approach: what if we throw away the hard requirement of soundness, and also lose the paradigm of devising new languages, and instead try to generalise static checking beyond the syntactic, while retaining the sweet spot that checking remains sound on those programs that are syntactically amenable? This would still allow some (perhaps underapproximate) static checking even on gnarlier programs, such as reflective or downcast-heavy code. This was briefly taken up by the thesis work of Justus Adam at Kent, but is now once again paused.
Programmers often want to develop custom tooling to help them profile and debug their applications. On virtual machines like the JVM, this traditionally means messing with low-level APIs such as JVMTI, or at best, frameworks like ASM which let you rewrite the application's bytecode. In Lugano, work done together with my DAG colleagues proposed a new programming model for these tasks based on aspect-oriented primitives (join points and advice), and also a yet higher-level model based on an event processing abstraction reminiscent of publish-subscribe middleware. On the run-time side, our infrastructure addresses the critical problem of isolation: traditionally, instrumentation and application share a virtual machine, and may interfere with one another in ways that are difficult to predict and impossible, in general, to avoid except by arbitrarily excluding shared classes (such as java.lang.* and the like). Our work gets around this by avoiding doing any bytecode instrumentation within the VM—instead, at any instrumentation point, we insert only a straight-to-native code path that hands off run-time events to a separate process (the “shadow VM”) which performs the analysis. This allows much stronger coverage properties than are possible with in-VM bytecode instrumentation, and nicely handles distributed applications—but also brings challenges with event synchronisation and ordering. Motto: robust, high-coverage analysis without bytecode munging or hacky exclusion lists.
Programming by composition is the norm, but the norm also imposes needlessly high costs on developers. Keeping up with APIs as they evolve is a familiar drain on programmers' time. Inability to migrate to a different underlying library or platform is a familiar inhibitor of progress. It doesn't need to be this way. Why isn't software soft, malleable and composable? Why is so much tedious work left to the human programmer? My PhD work explored an alternative approach in which software is composed out of units which aren't expected to match precisely. A separate bunch of languages and tools, collectively called an “integration domain”, are provided to tackle the details of resolving this mismatch. I designed the language Cake to provide a rule-based mostly-declarative notation for API adaptation. The Cake compiler accepts a bunch of rules relating calls and data structure elements between APIs, including in various context-sensitive fashions. The Cake compiler then generates (nasty C++) wrapper code, saving the programmer from writing wrappers by hand. Motto: down with manual glue coding, down with silos!
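For flavour, here is the sort of hand-written glue that Cake rules are meant to render unnecessary. Both APIs are invented for illustration, and this is the plain C that one would otherwise write (or have Cake generate), not Cake syntax itself.

```c
/* Hand-written adapter glue, of the kind Cake aims to generate from a few
 * declarative rules. Both APIs here are invented for illustration: a client
 * component written against an "old" logging interface, adapted over the
 * "new" interface that the available library actually exports. */
#include <stdio.h>

/* What the client component expects to call. */
struct old_msg { int level; const char *text; };
void old_log(const struct old_msg *m);

/* What the library really provides. */
enum new_severity { NEW_INFO, NEW_WARN, NEW_ERROR };
static void new_log(enum new_severity sev, const char *component, const char *text)
{
    fprintf(stderr, "[%d] %s: %s\n", (int)sev, component, text);
}

/* The glue: rename the call, convert and reorder arguments, and supply a
 * default ("legacy-client") that the old interface never carried. */
void old_log(const struct old_msg *m)
{
    enum new_severity sev = m->level >= 2 ? NEW_ERROR
                          : m->level == 1 ? NEW_WARN
                          : NEW_INFO;
    new_log(sev, "legacy-client", m->text);
}

int main(void)
{
    struct old_msg m = { 2, "disk full" };
    old_log(&m);   /* the client's call site is untouched */
    return 0;
}
```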
A recurring theme of my work is that I like to ask big questions, including questions of the form “what is X?” (for various X). The X so far have been the following.
A note about paywalls: those unfamiliar with research publishing may not be aware that authors see none of the proceeds from paywalled content. I have no interest, financial or otherwise, in keeping my work behind paywalls. If something you'd like to read isn't available, this is either from my mistake or neglect; please contact me.
Abstract: Type checkers are the most commonly used form of static analysis, but their design is coupled to the rest of the language, making it hard or impossible to bring new kinds of reasoning to existing, unmodified code. We propose a novel approach to checking advanced type invariants and properties in unmodified source code, while approaching the speed and ease of simple, syntax directed type checkers. The insight is that by combining a deep program analysis (symbolic execution) with a cheaper program abstraction (based on program slicing), it appears possible to reconstitute type-checking in the context of an underapproximate analysis. When the program's “type level” can be opportunistically disentangled from the “value level”, this is done by a program abstraction step, in some cases removing the underapproximation. We implement a simple prototype that demonstrates this idea by checking the safety of generic pointers in C, pointing to benefits such as safe homogeneous and heterogeneous generic data structures.
This paper is about an idea I've had for some years about how to separate type-correctness (the highly generic and valuable property) from type-checking, the analysis (the annoyingly shallow syntax-directed rules that get baked into different languages differently). It was gamely picked up by Justus Adam; this paper describes a solid assault on the initial part of the problem.
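An example of the kind of code such a checker targets (my own illustration, not taken from the paper): a homogeneous generic container in C, where the void* conversions at use sites are exactly the points a sound-where-amenable analysis would like to bless.

```c
/* A homogeneous generic container in C: the implicit void*-to-int* step in
 * main() is the kind of use a checker should be able to justify, by seeing
 * that this particular stack only ever holds pointers to int. */
#include <stdio.h>
#include <stdlib.h>

struct stack { void **items; size_t len, cap; };

static void push(struct stack *s, void *item)
{
    if (s->len == s->cap) {
        s->cap = s->cap ? 2 * s->cap : 8;
        s->items = realloc(s->items, s->cap * sizeof *s->items);
    }
    s->items[s->len++] = item;
}

static void *pop(struct stack *s)
{
    return s->len ? s->items[--s->len] : NULL;
}

int main(void)
{
    struct stack ints = { 0 };
    int x = 42;
    push(&ints, &x);
    int *back = pop(&ints);   /* generic pointer comes back as int*: is that safe? */
    if (back) printf("%d\n", *back);
    free(ints.items);
    return 0;
}
```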
Abstract: Illich's notion of conviviality centres on the balance between individual freedom to act and collective freedom from domination. This balance, or tension, is present in the design of most user-facing computer systems, and especially in the design of programming systems. Software lore has arisen with at best a skewed perspective on such issues, having developed from an industrial viewpoint. In this paper I survey some tentative design principles, extracted from examples of research work or (more often) systems used in practice, which (sometimes by accident) do show some regard for conviviality. Although preliminary, my hope is that these principles may yet develop into a collection of design hints at least equal, and largely countervailing, to the less conviviality-prone ideas circulating in today's software folklore. Relevant topics include language design, information hiding, language virtual machines, portability, classical logic, and layered system design. I also briefly consider the intertwined social and political constructs, such as copyleft, ownership and community responsibility, asking how to evolve or generalise these towards convivial ends.
This is a paper I threw together based on collected observations about power structures in software, riffing on some extracts from Illich. Such structures can be both “in” software, as technical design points that exploit, assume and/or entrench power differentials. They can also be “behind” software, in the processes by which it is developed. The paper considers both kinds, and is framed around articulating (often obliquely) alternative patterns that might avoid disempowering the individual—often no doubt bringing costs of their own, but perhaps ones worth bearing.
Abstract: Existing approaches to interfacing high- and low-level code push considerable burdens onto the programmer, such as wrapper maintenance, explicit code generation, interface re-declaration, and/or signalling to garbage collectors. We note that run-time information on data layout and allocations in native code is available, and may be extended with knowledge of object lifetimes to assist in automating garbage collection. We describe work in progress towards an extension of the CPython virtual machine along these lines. We report initial experience building a first working prototype, and some early performance experiments.
This is the next step in the quest to eliminate foreign function interfacing (FFIs). Guillaume developed a remarkable prototype CPython extension module which makes good progress towards this. Besides the “no FFI” approach and use of my liballocs for native type info, it takes at least two further unusual steps: (1) avoiding VM modifications, and (2) write-barriering native code in order to liberate the user from counting references. A not-so-hidden theme is that reference counting is a pain in the backside... although the problems are certainly solvable, I would expect better results per unit effort by pursuing the same ideas within a tracing GC context.
Abstract:
Back in 2015 I had noticed some bugs in DWARF unwind info, which seemed needlessly flaky. In the 21st century, we really should be able to have accurate and fast backtraces... so I thought up a few ideas, and got together with Francesco to develop them further. Can we verify the unwind info against the instruction stream, by looking at how they adjust the stack pointer? Can we synthesise it from just the instructions, by the same idea? And can we “compile” it to get faster backtraces? Thanks to Francesco and Théophile, who really ran with this project (my practical involvement was mostly limited to occasional advice and a bit of emergency debugging), the answer to all three is yes.
Abstract: Research on programming systems design needs to consider a wide range of aspects in their full complexity. This includes user interaction, implementation, interoperability but also the sustainability of its ecosystem and wider societal impact. Established methods of evaluation, such as formal proofs or user studies, impose a reductionist view that makes it difficult to see programming systems in their full complexity and, consequently, force researchers to adopt simplistic perspectives. This paper asks whether we can create more amenable methods of evaluation derived from existing informal practices such as multimedia essays, demos, and interactive tutorials. These popular forms incorporate recorded or scaffolded interaction, often embedded in a text that guides the reader. Can we augment such forms with structure and guidelines to obtain methods of evaluation suitable for peer review? We do not answer this question, but merely seek to identify some of the problems and instigate a community discussion. In that spirit we propose to hold a panel session at the conference.
Abstract: The semantics of pointers and memory objects in C has been a vexed question for many years. C values cannot be treated as either purely abstract or purely concrete entities: the language exposes their representations, but compiler optimisations rely on analyses that reason about provenance and initialisation status, not just runtime representations. The ISO WG14 standard leaves much of this unclear, and in some respects differs with de facto standard usage --- which itself is difficult to investigate. In this paper we explore the possible source-language semantics for memory objects and pointers, in ISO C and in C as it is used and implemented in practice, focussing especially on pointer provenance. We aim to, as far as possible, reconcile the ISO C standard, mainstream compiler behaviour, and the semantics relied on by the corpus of existing C code. We present two coherent proposals, tracking provenance via integers and not; both address many design questions. We highlight some pros and cons and open questions, and illustrate the discussion with a library of test cases. We make our semantics executable as a test oracle, integrating it with the Cerberus semantics for much of the rest of C, which we have made substantially more complete and robust, and equipped with a web-interface GUI. This allows us to experimentally assess our proposals on those test cases. To assess their viability with respect to larger bodies of C code, we analyse the changes required and the resulting behaviour for a port of FreeBSD to CHERI, a research architecture supporting hardware capabilities, which (roughly speaking) traps on the memory safety violations which our proposals deem undefined behaviour. We also develop a new runtime instrumentation tool to detect possible provenance violations in normal C code, and apply it to some of the SPEC benchmarks. We compare our proposal with a source-language variant of the twin-allocation LLVM semantics proposal of Lee et al. Finally, we describe ongoing interactions with WG14, exploring how our proposals could be incorporated into the ISO standard.
I helped out on this paper about work towards a realistic formal semantics of C, specifically focusing on the meaning of pointers and “objects”. Pointers have a latent notion of provenance in C owing to the semantic distinctness of one-past-the-end pointers from the potential next object's same-valued address. But some provenance-unaware operations are also available (like casts from integers that were not previously created from pointers, or reading pointers from I/O). This paper proposes a formal model, backed by some empirical data, which constitutes a first cut at a realistic and mathematically precise semantics.
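The flavour of the problem is easy to show in a few lines (a standard example of the kind discussed in the paper, not taken verbatim from it):

```c
/* The classic shape of the provenance question. p is a valid one-past-the-
 * end pointer for x; it may compare equal to &y, yet the two pointers carry
 * different provenance, and compilers do optimise on that basis. Whether a
 * store through p would be defined behaviour here is exactly the sort of
 * question the proposed semantics answers. */
#include <stdio.h>

int main(void)
{
    int x = 1, y = 2;
    int *p = &x + 1;   /* one past the end of x: a legal pointer value */
    int *q = &y;

    if (p == q) {      /* may or may not hold, and that's fine... */
        *q = 11;       /* ...a store via q (provenance: y) is uncontroversial; */
        /* *p = 11; */ /* ...the same store via p (provenance: x) is the contentious case */
        printf("p == q; y is now %d\n", y);
    } else {
        printf("p != q\n");
    }
    return 0;
}
```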
Abstract: C codebases frequently embed nonportable and unstandardized elements such as inline assembly code. Such elements are not well understood, which poses a problem to tool developers who aspire to support C code. This paper investigates the use of x86-64 inline assembly in 1264 C projects from GitHub and combines qualitative and quantitative analyses to answer questions that tool authors may have. We found that 28.1% of the most popular projects contain inline assembly code, although the majority contain only a few fragments with just one or two instructions. The most popular instructions constitute a small subset concerned largely with multicore semantics, performance optimization, and hardware control. Our findings are intended to help developers of C-focused tools, those testing compilers, and language designers seeking to reduce the reliance on inline assembly. They may also aid the design of tools focused on inline assembly itself.
Abstract: High-level programming languages and their virtual machines have long aspired to erase operating systems from view. Starting from Dan Ingalls' Smalltalk-inspired position that “an operating system is a collection of things that don't fit inside a language; there shouldn't be one”, I contrast the ambitions and trajectories of Smalltalk with those of Unix and its descendants, exploring why Ingalls's vision appears not (yet) to have materialised. Firstly, I trace the trajectory of Unix's “file” abstraction into Plan 9 and beyond, noting how its logical extrapolation suggests a surprisingly Smalltalk-like end-point. Secondly, I note how various reflection and debugging features of Smalltalk have many analogues in the fragmented world of Unix programming. Finally, I consider how these two directions of change may yet converge within future Unix-derived systems, and what this might mean for programming languages.
This is a history and philosophy piece, based on my paper at the PLOS workshop 2013 but expanding considerably on the historical discussion, and going easier (but not too easy) on the technical content. This version grew out of a talk I gave at the 2016 History and Philosophy of Programming symposium in Paris, and has been improved by two rounds of review and helpful comments from many correspondents, for which I'm very grateful.
Abstract: The C language leads a double life: as an application programming language of yesteryear, perpetuated by circumstance, and as a systems programming language which remains a weapon of choice decades after its creation. This essay is a C programmer's reaction to the call to abandon ship. It questions several properties commonly held to define the experience of using C; these include unsafety, undefined behaviour, and the motivation of performance. It argues all these are in fact inessential; rather, it traces C's ultimate strength to a communicative design which does not fit easily within the usual conception of “a programming language”, but can be seen as a counterpoint to so-called “managed languages”. This communicativity is what facilitates the essential aspect of system-building: creating parts which interact with other, remote parts—being “alongside” not “within”.
This is an essay I wrote after realising that the reasons many PL researchers think motivate people to (still) program in C aren't actually the reasons that I, as a C programmer, find holding true in my own case.
Abstract: Existing approaches for detecting type errors in unsafe languages are limited. Static analysis methods are imprecise, and often require source-level changes, while most dynamic methods check only memory properties (bounds, liveness, etc.), owing to a lack of run-time type information. This paper describes libcrunch, a system for binary-compatible run-time type checking of unmodified unsafe code, currently focusing on C. Practical experience shows that our prototype implementation is easily applicable to many real codebases without source-level modification, correctly flags programmer errors with a very low rate of false positives, offers a very low run-time overhead, and covers classes of error caught by no previously existing tool.
This is a technical paper describing a system (called libcrunch) for run-time type checking in native code (C, for now). Perhaps the most important message of the paper, albeit not strictly its novelty, is that we can check properties of pointers without attaching metadata to the pointers themselves. Indeed, propagating metadata with pointers is quite expensive, whereas looking up properties of the pointed-to objects can be made quite cheap.
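To illustrate that contrast (purely schematic, not libcrunch code): a “fat pointer” carries metadata in the pointer's own representation, which changes data layout and so breaks binary compatibility, whereas a per-object lookup leaves pointers as plain machine words and finds metadata from the pointed-to address.

```c
/* Schematic contrast, not libcrunch code. A fat pointer carries bounds in
 * the pointer's own representation, which changes data layout and breaks
 * binary compatibility; a per-object lookup keeps pointers as plain machine
 * words and finds metadata from the pointed-to address instead (faked here
 * with a single static record). */
#include <stddef.h>
#include <stdio.h>

/* Option 1: metadata travels with the pointer (layout change). */
struct fat_ptr { char *p; char *base; size_t size; };

/* Option 2: plain pointer; metadata found by asking about the object. */
static struct { char *base; size_t size; } heap_record;

static int in_bounds(const char *p)
{
    return p >= heap_record.base && p < heap_record.base + heap_record.size;
}

int main(void)
{
    static char buf[16];
    heap_record.base = buf;
    heap_record.size = sizeof buf;

    char *plain = buf + 8;                              /* still an ordinary pointer */
    struct fat_ptr fat = { buf + 8, buf, sizeof buf };  /* a different ABI entirely */

    printf("plain pointer in bounds? %d\n", in_bounds(plain));
    printf("fat pointer in bounds?   %d\n",
           fat.p >= fat.base && fat.p < fat.base + fat.size);
    return 0;
}
```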
Abstract: Beneath the surface, software usually depends on complex linker behaviour to work as intended. Even linking hello_world.c is surprisingly involved, and systems software such as libc and operating system kernels rely on a host of linker features. But linking is poorly understood by working programmers and has largely been neglected by language researchers. In this paper we survey the many use-cases that linkers support and the poorly specified linker speak by which they are controlled: metadata in object files, command-line options, and linker-script language. We provide the first validated formalisation of a realistic executable and linkable format (ELF), and capture aspects of the Application Binary Interfaces for four mainstream platforms (AArch64, AMD64, Power64, and IA32). Using these, we develop an executable specification of static linking, covering (among other things) enough to link small C programs (we use the example of bzip2) into a correctly running executable. We provide our specification in Lem and Isabelle/HOL forms. This is the first formal specification of mainstream linking. We have used the Isabelle/HOL version to prove a sample correctness property for one case of AMD64 ABI relocation, demonstrating that the specification supports formal proof, and as a first step towards the much more ambitious goal of verified linking. Our work should enable several novel strands of research, including linker-aware verified compilation and program analysis, and better languages for controlling linking.
This is a technical paper about a formal but realistic (non-idealised) specification of Unix linking. Its focus is mostly on static linking of ELF files, although not exclusively. It describes our ongoing effort to produce an executable formal specification of linking that can be used as a link checker, a basis for mechanised reasoning about compiler and linker implementations, and as a document for developers to consult. Most critically perhaps, it explains why linking matters: why linking is not just about separate compilation but actually affects program semantics. It details a large zoo of “linking effects” programmers can achieve by (and often only by) instructing the linker.
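One self-contained example of such a linking effect (standard ELF/glibc interposition, written here as my own illustration): by preloading a shared object that defines puts, the dynamic linker's symbol lookup routes every call in the program through the wrapper, with no recompilation of the program at all.

```c
/* tee_puts.c -- a minimal interposition example (standard ELF/glibc
 * machinery, shown as an illustration). Build and run, e.g.:
 *
 *   cc -shared -fPIC -o tee_puts.so tee_puts.c   # add -ldl on older glibc
 *   LD_PRELOAD=./tee_puts.so ./some_program
 *
 * Because symbol lookup searches the preloaded object first, this puts()
 * shadows libc's for the whole program, without recompiling anything. */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <stdio.h>

int puts(const char *s)
{
    static int (*real_puts)(const char *);
    if (!real_puts)
        real_puts = (int (*)(const char *))dlsym(RTLD_NEXT, "puts");
    fprintf(stderr, "[intercepted] %s\n", s);   /* observe the call... */
    return real_puts(s);                        /* ...then defer to the next definition */
}
```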
Abstract: Dynamic program analyses, such as profiling, tracing and bug-finding tools, are essential for software engineering. Unfortunately, implementing dynamic analyses for managed languages such as Java is unduly difficult and error-prone, because the runtime environments provide only complex low-level mechanisms. Currently, programmers writing custom tooling must expend great effort in tool development and maintenance, while still suffering substantial limitations such as incomplete code coverage or lack of portability. Ideally, a framework would be available in which dynamic analysis tools could be expressed at a high level, robustly, with high coverage and supporting alternative runtimes such as Android. We describe our research on an “all-in-one” dynamic program analysis framework which uses a combination of techniques to satisfy these requirements.
This is an accessible introduction to the DiSL and ShadowVM systems for writing dynamic analyses for Java programs. It also covers recent developments in supporting a less cooperative runtime, namely Android's Dalvik VM, which brings particular challenges.
Abstract: Programmers face much complexity from the co-existence of “native” (Unix-like) and virtual machine (VM) “managed” run-time environments. Rather than having VMs replace Unix, we investigate whether it makes sense for Unix to “become a VM”, in the sense of evolving its user-level services to subsume services offered by VMs. We survey the (little-understood) VM-like features in modern Unix, noting common shortcomings: a lack of semantic metadata (“type information”) and the inability to bind from objects “back” to their metadata. We describe the design and implementation of a system, liballocs, which adds these capabilities in a highly compatible way, and explore its consequences.
This is a fairly technical paper introducing liballocs as a “missing link” in the design of operating systems. It describes some things we can already do (witness libcrunch) and things we expect to be able to do with it. It also puts liballocs in context, relating it both to the Unix tradition and to the Lisp- and Smalltalk-style virtual machine tradition.
Abstract: The concept of “type” has been used without a consistent, precise definition in discussions about programming languages for 60 years. In this essay I explore various concepts lurking behind distinct uses of this word, highlighting two traditions in which the word came into use largely independently: engineering traditions on the one hand, and those of symbolic logic on the other. These traditions are founded on differing attitudes to the nature and purpose of abstraction, but their distinct uses of “type” have never been explicitly unified. One result is that discourse across these traditions often finds itself at cross purposes, such as overapplying one sense of “type” where another is appropriate, and occasionally proceeding to draw wrong conclusions. I illustrate this with examples from well-known and justly well-regarded literature, and argue that ongoing developments in both the theory and practice of programming make now a good time to resolve these problems.
This is an essay I wrote out of discomfort in the vagueness of the word “type”, both in popular usage and among “expert” researchers. I finished it with the feeling that I'd just written “one to throw away”, so it may have been more illuminating for me to write than for you to read. Nevertheless I am very happy to receive comments. Subject to demand, I'll also be happy to accommodate any corrections (including corrections of omissions, where practical) in the author's version.
Abstract: Operating systems and programming languages are often informally evaluated on their conduciveness towards composition. We revisit Dan Ingalls' Smalltalk-inspired position that “an operating system is a collection of things that don't fit inside a language; there shouldn't be one”, discussing what it means, why it appears not to have materialised, and how we might work towards the same effect in the postmodern reality of today's systems. We argue that the trajectory of the “file” abstraction through Unix and Plan 9 culminates in a Smalltalk-style object, with other filesystem calls as a primitive metasystem. Meanwhile, the key features of Smalltalk have many analogues in the fragmented world of Unix programming (including techniques at the library, file and socket level). Based on the themes of unifying OS- and language-level mechanisms, and increasing the expressiveness of the meta-system, we identify some evolutionary approaches to a postmodern realisation of Ingalls' vision, arguing that an operating system is still necessary after all.
This is a bit of a curiosity. It's a position paper which I wrote after Michael Haupt pointed me at the cited Ingalls article. Every so often, the idea emerges that we don't need operating systems if we have language runtimes. This idea doesn't seem to stick, but it's hard to pin down what's wrong with it. Here I argue that a neglected value of operating systems is in their meta-abstractions, of which the Unix file is one classic example. These abstractions are not part of any language (or part of all of them, depending on how you see it). In fact, I argue the OS could and should provide much more of a meta-system. The paper makes a number of contrasts between Unix files and Smalltalk objects, then proposes a few concrete suggestions for evolutionarily improving the Unix-like metasystem towards a more Smalltalk-like one.
Abstract: Accuracy, completeness, and performance are all major concerns in the context of dynamic program analysis. Emphasizing one of these factors may compromise the other factors. For example, improving completeness of an analysis may seriously impair performance. In this paper, we present an analysis model and a framework that enables reducing analysis overhead at runtime through adaptive instrumentation of the base program. Our approach targets analyses implemented with code instrumentation techniques on the Java platform. Overhead reduction is achieved by removing instrumentation from code locations that are considered unimportant for the analysis results, thereby avoiding execution of analysis code for those locations. For some analyses, our approach preserves result accuracy and completeness. For other analyses, accuracy and completeness may be traded for a major performance improvement. In this paper, we explore accuracy, completeness, and performance of our approach with two concrete analyses as case studies.
This paper describes a mechanism for dynamically undeploying and redeploying bytecode instrumentation, and its application to some scenarios where this can usefully optimise an analysis without sacrificing too much correctness or completeness. My personal favourite thing about it is the observation that analyses maintain collections of repeatedly mutated values (profile data, monitors, etc.), and the transition rules obeyed by these mutations give us insight into when instrumentation can be removed. We can only exploit this in a very limited set of cases at present, but the potential for future work is interesting.
Abstract: Dynamic analysis tools are often implemented using instrumentation, particularly on managed runtimes including the Java Virtual Machine (JVM). Performing instrumentation robustly is especially complex on such runtimes: existing frameworks offer limited coverage and poor isolation, while previous work has shown that apparently innocuous instrumentation can cause deadlocks or crashes in the observed application. This paper describes ShadowVM, a system for instrumentation-based dynamic analyses on the JVM which combines a number of techniques to greatly improve both isolation and coverage. These centre on the offload of analysis to a separate process; we believe our design is the first system to enable genuinely full bytecode coverage on the JVM. We describe a working implementation, and use a case study to demonstrate its improved coverage and to evaluate its runtime overhead.
This paper describes a system for dynamic analysis of Java bytecode programs in a highly isolated fashion, by running the analysis in a separate process, adding only straight-to-native instrumentation paths within the Java bytecode itself. This sidesteps several coverage and robustness issues that plague in-VM analysis, as detailed by our VMIL 2012 workshop paper. The approach presented in this paper is probably the best you can do in a way that's (notionally) portable across JVMs. Its performance is sadly not great, but the next step (how to modify the VM's interfaces to support more efficient approaches) is set to be very interesting. This paper also gives an initial treatment of synchronisation requirements (i.e. relative ordering of events in the observed program w.r.t. their observation by the analysis), which vary hugely from one analysis to another and have a critical influence on achievable performance.
Abstract: The Java Virtual Machine (JVM) today hosts implementations of numerous languages. To achieve high performance, JVM implementations rely on heuristics in choosing compiler optimizations and adapting garbage collection behavior. Historically, these heuristics have been tuned to suit the dynamics of Java programs only. This leads to unnecessarily poor performance in case of non-Java languages, which often exhibit systematic difference in workload behavior. Dynamic metrics characterizing the workload help identify and quantify useful optimizations, but so far, no cohesive suite of metrics has adequately covered properties that vary systematically between Java and non-Java workloads. We present a suite of such metrics, justifying our choice with reference to a range of guest languages. These metrics are implemented on a common portable infrastructure which ensures ease of deployment and customization.
This paper describes an effort to extend the space of dynamic metrics defined at the JVM level so that they cover dimensions which exhibit variation between Java and non-Java languages running on the JVM. It describes both a new selection of metrics, and an infrastructure on which they are implemented. One of the parts I find neat is the query-based definition of a large subset of the metrics. Currently this is mostly a convenience, but I hope that in future work, this will prove useful for identifying new kinds of profiling information which is cheap enough to collect and use during dynamic compilation. (In other words, we want cheap observations which do a reasonable job of predicting the workload properties characterised by more expensive metrics, like the ones in this paper.)
Abstract: Dynamic program analysis tools based on code instrumentation serve many important software engineering tasks such as profiling, debugging, testing, program comprehension, and reverse engineering. Unfortunately, constructing new analysis tools is unduly difficult, because existing frameworks offer little or no support to the programmer beyond the incidental task of instrumentation. We observe that existing dynamic analysis tools re-address recurring requirements in their essential task: of maintaining state which captures some property of the analysed program. This paper presents a general architecture for dynamic program analysis tools which treats the maintenance of analysis state in a modular fashion, consisting of mappers decomposing input events spatially, and updaters aggregating them over time. We show that this architecture captures the requirements of a wide variety of existing analysis tools.
This paper is a move towards enabling a more event-driven, reactive style of programming in the creation of dynamic analysis tools. We built an analysis framework, FRANC, based around a library of instrumentation primitives (event sources) together with higher-level “state-oriented” components for decomposing and aggregating events. The neatest idea in the paper is to separate the spatial decomposition of incoming events from their temporal aggregation. This allows a “thin waist” hourglass design which opens up greater opportunity for composition and recombination of independent components.
Abstract: Bytecode instrumentation is a preferred technique for building profiling, debugging and monitoring tools targeting the Java Virtual Machine (JVM), yet is fundamentally dangerous. We illustrate its dangers with several examples gathered while building the DiSL instrumentation framework. We argue that no Java platform mechanism provides simultaneously adequate performance, reliability and expressiveness, but that this weakness is fixable. To elaborate, we contrast internal with external observation, and sketch some approaches and requirements for a hybrid mechanism.
Shortly after joining USI, it became clear that a lot of our group's implementation work was concerned with working around shortcomings with the JVM. Moreover, we had seen some worrying problems with no apparent solution. The reason is that the JVM platform actively encourages tool construction by bytecode instrumentation. But writing tools which instrument bytecode safely is invariably hard and often (depending on the instrumentation) impossible. I don't just mean impossible for non-specialist programmers to get right; I mean outright impossible. Specifically, there is no way to avoid the risk of introducing real and disastrous bugs to the instrumented program. The reason is that instrumentation code is not protected from the base program, and runs at near-arbitrary points, so easily creates deadlock and reentrancy problems, which there is no mechanism to guard against. (Unlike in process-level approaches, “avoid shared data” is not an option. Even dropping to native code doesn't guarantee anything.) There are several more modest difficulties also. This paper is my attempt at turning my colleagues' war stories into a manifesto for VM improvements. It argues that instrumentation is a bad mechanism on which to base VM observation interfaces, and sketches out some alternatives. I take the opportunity to grind one or two personal axes, including the one about how in-VM debugger servers are a step backwards from compiler-generated debugging information.
Abstract: Current VM designs prioritise implementor freedom and performance, at the expense of other concerns of the end programmer. We motivate an alternative approach to VM design aiming to be unobtrusive in general, and prioritising two key concerns specifically: foreign function interfacing and support for runtime analysis tools (such as debuggers, profilers etc.). We describe our experiences building a Python VM in this manner, and identify some simple constraints that help enable low-overhead foreign function interfacing and direct use of native tools. We then discuss how to extend this towards a higher-performance VM suitable for Java or similar languages.
This is a short paper about some in-progress work that was begun with Conrad Irwin when he was a final-year undergraduate in Cambridge and I was his project supervisor. Conrad implemented a usable subset of Python which uses DWARF debugging information to interface with native libraries, rather than the usual wrapper generation step. This paper extends that idea in a few ways, culminating in a design which can (we hope!) allow native tools (gdb, Valgrind, etc.) to operate seamlessly across programs written in a diverse mixture of languages, while avoiding conventional foreign function interfacing overheads. I am continuing the implementation effort.
Abstract: Tools for composing software impose homogeneity requirements on what is composed—that modules must share a language, target the same libraries, or share other conventions. This inhibits cross-language and cross-infrastructure composition. We observe that a unifying representation of software turns heterogeneity of components into a matter of styles: recurring interface patterns that cross-cut large numbers of codebases. We sketch a rule-based language for capturing styles independently of composition context, and describe how it applies in two example scenarios.
This is a short paper describing an idea which I developed in the last technical chapter of my PhD thesis. The idea is that interfaces differ stylistically—in things like how they pass arguments, how they report errors, and in some less routine ways—and that by capturing these styles independently of composition context, we could achieve a lot more compositions with a lot less programmer effort. It's pitched as an extension to Cake; so far it's unimplemented, with the exception of a few foundations in the compiler; I intend to implement it one day, but have no idea when I'll get time. The most compelling content of the paper is therefore the preliminary survey (read: brainstorm) of styles in well-known interfaces most readers will recognise.
Abstract: Software's expense owes partly to frequent reimplementation of similar functionality and partly to maintenance of patches, ports or components targeting evolving interfaces. More modular non-invasive approaches are unpopular because they entail laborious wrapper code. We propose Cake, a rule-based language describing compositions using interface relations. To evaluate it, we compare several existing wrappers with reimplemented Cake versions, finding the latter to be simpler and better modularised.
This is a long research paper describing the design, implementation and evaluation of the basic Cake language and runtime.
Abstract: Conventional tools yield expensive and inflexible software. By requiring that software be structured as plug-compatible modules, tools preclude out-of-order development; by treating interoperation of languages as rare, adoption of innovations is inhibited. I propose that a solution must radically separate the concern of integration in software: firstly by using novel tools specialised towards integration (the “integration domain”), and secondly by prohibiting use of pre-existing interfaces (“interface hiding”) outside that domain.
This is a relatively “long view” on the position underlying my PhD work, expounding my gut feeling that the way we build software is bizarrely fragile and unrealistic, owing to the expectation that big pieces of software should be made from perfectly-fitting smaller pieces without any sort of “glue” or integration material. This is what makes software maintenance expensive, software evolution difficult and software functionality inflexible. I'm always especially glad to receive comments on the argument and general story presented by this paper.
Abstract: Existing black-box adaptation techniques are insufficiently powerful for a large class of real-world tasks. Meanwhile, white-box techniques are language-specific and overly invasive. We argue for the inclusion of special-purpose adaptation features in a configuration language, and outline the benefits of targetting binary representations of software. We introduce Cake, a configuration language with adaptation features, and show how its design is being shaped by two case studies.
This is a short work-in-progress paper describing one part of my PhD work. The idea is that composition of software in binary form is desirable, both for convenience and because binaries offer a somewhat language-neutral model of software. The problem of linking together mismatched components in binary form, specifically at the object code level (meaning these components could be written in any number of languages including C and C++), has had little attention so far. However, a fairly descriptive model of binaries is already provided by debugging info standards like DWARF. The thesis of this work is that a domain-specific configuration language, with its baseline at the DWARF level of abstraction, can be a practical tool for describing compositions of mismatched binaries, where the language explicitly provides for adaptation to address mismatches in a black-box fashion. The main contribution is describing the proposed features of our language, which is called Cake, and the case-study experiences which are shaping it.
Writing a large survey is a task only a fool would undertake... so I had a go. “Practical software adaptation techniques” means techniques for bridging or modifying the interfaces (mostly...) of software components. It doesn't include adaptive techniques, as found in reflective and/or mobile middleware. The paper is unfortunately a bit disjointed, coming in two halves: first, it covers twentyish approaches found in the literature, and classifies them on what they do and how. Secondly, it discusses (borrowing some bits from my earlier workshop paper) various other aspects of such techniques: adapting component instances versus implementations, static versus dynamic components, elegance of model versus supporting heterogeneity, the utility of various distinctions between computational and communicational code, and what “loosely coupled” really means. In hindsight, a better survey would take all these latter issues and fold them into the first half's taxonomy, visiting the work in some order which allows elaboration on the more discursive matters as they are raised. It also needs a neater way, most likely a better graphical language, to describe the taxonomised nature of each technique, rather than relying on somewhat impenetrable verbal descriptions.
Abstract: Existing work on software connectors shows significant disagreement on both their definition and their relationships with components, coordinators and adaptors. We propose a precise characterisation of connectors, discuss how they relate to the other three classes, and contradict the suggestion that connectors and components are disjoint. We discuss the relationship between connectors and coupling, and argue the inseparability of connection models from component programming models. Finally we identify the class of configuration languages, show how it relates to primitive connectors and outline relevant areas for future work.
ACM copyright notice: © ACM, 2007. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in SYANCO ’07. http://doi.acm.org/10.1145/1294917.1294918
This is a somewhat strident and sweeping workshop paper where I try to decode the meaning and intentions of the many and various references to “software connectors” found in the literature. I then talk a bit about how this relates to coordination, adaptation and the concept of coupling, and give a sketchy outline of how I think compositional and communication-oriented concerns could or should be abstracted and supported by tools. Apart from the latter, there's not really any research contribution, except an attempt to hold to account the vague and flimsy motivational arguments that appear in the introductions of lots of (often otherwise good) papers.
See also my author page on DBLP (with much credit to Michael Ley and others for this great bibliographic resource).
This is quite a nice introduction, albeit brief and rather dense, to my PhD work. Or at least the problem, approach, ideas etc., since not much actual work is mentioned....
This is a thesis proposal I submitted after about 12 months of PhD work. It reads nicely enough but is far too ambitious!
I'm currently an investigator on the following research grants.
I was previously an investigator on the following grants.
When Justus decided to withdraw from his PhD after 13 months, the funder refused my offer to find external funds to top up the award back to a full studentship. Instead they terminated the grant, citing concerns that to me seemed bureaucratic at best and incoherent at worst. If you don't want to be messed around like this, please think twice before accepting money from the National Cyber Security Centre or its Research Institutes. In fairness, I received an apology from the director of VeTSS, whom I cannot fault, while the NCSC have told me they're trying to do better in future. I would welcome reports on how that's going.
I believe that funding policy in the UK (and elsewhere) is on the wrong track, and increasingly so. We should fund people, not projects. In universities, that means mostly funding institutions via a block grant, with a corresponding reduction in the funds allocated to individual investigators' short-term projects.
My research interests are outlined at the top of this page. In summary: although broad, they fall mostly within the intersections of systems, programming languages and software engineering.
For my PhD I worked on supporting software adaptation at the level of the operating system's linking and loading mechanisms. Here “adaptation” means techniques for connecting and combining software components which were developed independently, and therefore do not have matched interfaces. My emphasis was on techniques which are practical, adoptable and work with a wide variety of target code, at the (hopefully temporary) expense of safety and automation.
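To give a concrete flavour of what such adaptation can look like at the linking level, here is a minimal, hypothetical C sketch (the function names are invented for illustration; this is not code from the dissertation). Suppose a client object file was compiled against log_message(int level, const char *msg), but the library we now want to link against instead provides write_log(const char *msg, int severity). Rather than modifying either component's source, one can compile a small shim and include it at link time, so that the client's undefined reference resolves to the adapter:

    /* adapter.c: a link-time shim bridging two mismatched logging
     * interfaces. All names are invented for illustration. */

    /* the interface the replacement library actually provides */
    extern void write_log(const char *msg, int severity);

    /* the interface the client was compiled to expect */
    void log_message(int level, const char *msg)
    {
        /* bridge the mismatch: here just reordering arguments, though a
         * real adapter might also convert representations or units */
        write_log(msg, level);
    }

Linking might then look something like cc client.o adapter.o -lnewlog (again, hypothetical names). The appeal of working at the linking and loading level is that bridging of this kind can be interposed on existing compiled code, without source changes on either side.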
My dissertation is now available as a technical report.
I am currently supervising the following PhD students.
I previously supervised the following students.
I am actively seeking research students who wish to do a PhD under my supervision. If this interests you, please contact me.
Funding is available, by competition, through King's College London, and possibly from other sources. Please contact me (regardless of the time of year!).
I frequently supervise research-flavoured projects conducted by Bachelor's and Master's students. These are in the Teaching section.
I also maintain a list of project ideas for students at all levels, and am always happy to talk about other ideas.
During 2005–06 I was a Research Assistant in Cambridge on the XenSE and Open Trusted Computing projects, under Steven Hand. Both projects sought to implement a practical secure computing platform, using virtualisation (and similar technologies) as the isolation mechanism.
XenSE never had a web page of its own, but you might want to look at the abstract on the project's EPSRC Grant Portfolio page, or check out the mailing list.
OpenTC was a large EU-funded project involving many major industrial and academic partners, focused on the use of Trusted Computing Group technology to realise many common secure computing use cases.
As part of my work as an RA, I became interested in secure graphical user interfaces, including L4's Nitpicker, a minimal secure windowing system. I began work on ports of this system to Linux, XenoLinux and the Xen mini-OS: the Linux version became mostly functional (but not yet secure!), while the others were stymied by various limitations of shared memory on Xen. Those limitations have since mostly been fixed, but I haven't had time to revisit the project. If you'd like to take it up, or would simply like more information, feel free to contact me.
See also my attic page.
Currently I'm the Programme Chair for Onward! Papers 2020, on the External Review Committee for PLDI 2020, on the PC for ECOOP 2020, on the PC for <Programming> 2020, and on several workshop PCs.
In the past few years I have been on the PC for <Programming> 2019, on the External Review Committee for OOPSLA 2019, on the PC for ISMM 2019, on the PC for the PLOS workshop in 2019, and on the PC for the Salon des Refusés workshop at <Programming> 2019. I have also been a co-organiser of the 2018 VMIL workshop, on the PC for SPLASH workshops (2016), a co-organiser of the Salon des Refusés workshop, and a reviewer for ACM TACO and Software: Practice and Experience.
Before that, I was on the PC for Onward! Essays (2015) and a contributor to a CCC/CRA visioning workshop reporting on funding priorities for topics in accessible computing. Earlier still, I was an external reviewer for the ASE journal, on the programme committee for PPPJ 2013, publicity chair for SC 2013, on the programme committee for RESoLVE 2012 at ASPLOS, and on the shadow programme committee for EuroSys 2012.
Even longer ago, I was privileged to contribute external reviews to SVT at SAC 2012, ESOP 2010 and EuroSys 2009.
I'm a member of the ACM, ACM SIGPLAN and SIGOPS.
I continue to be an Associate Member of the Kent Interdisciplinary Research Centre in Cyber Security (iCCS), in connection with the security applications of my work on run-time safety, program analysis and formal specification.
Since August 2018, when I moved from Cambridge to Kent, I've remained a (sometimes-) visiting researcher at the University of Cambridge's Computer Laboratory (now, more properly, the Department of Computer Science and Technology).
From December 2015 until September 2017 I served on the University of Cambridge's Board of Scrutiny, which has an oversight role in University governance. During the 2016–17 year I also acted as the Board's Secretary.
At the Computer Laboratory I served on, and (from June 2017 until July 2018) chaired the “post-doc forum”, later renamed Research Staff Forum, representing research staff.
From May 2015 until July 2018, I was a postdoctoral affiliate at Christ's College (of which I'm also an alumnus, from my BA and PhD days).
From summer 2015 until summer 2018 I was an elected officer of the Postdocs of Cambridge Society (PdOC) which, in addition to providing networking and social events within the cross-disciplinary postdoctoral community, represents postdoctoral staff to various University bodies.
Since May 2008 I've been a Fellow of the Cambridge Philosophical Society.
I was recently a lecturer on the following modules at the University of Kent.
Previously at Kent, I lectured on the following discontinued modules.
If you're a student at my current institution, chances are you can find lectures I've given via the usual online platforms. Selected lecture slides/videos are made publicly available here.
I'm interested in supervising bright and enthusiastic Bachelor's and Master's students for their individual projects. For ideas and to find out what I'm interested in, see my list of suggested projects. I'm also always extremely happy to talk to students who have their own ideas.
I've supervised many projects in the past. I was fortunate to work with the following final-year research project (CO620) student at Kent:
Although I'm no longer in Cambridge, Bachelor's students there can still read my thoughts about Part II projects. I supervised the following Part II projects in Cambridge:
Older teaching material (including various Cambridge-specific things) can be found on my attic page.
Note: as of late 2011, I have started using GitHub. Over time, code for all my larger projects will start to appear on my GitHub page. This page is likely to remain the more complete list, for the time being.
My research work involves building software. Inevitably, this software is never “finished”, rarely reaches release-worthy state, and usually rots quickly even then. So not much here is downloadable yet; this will improve over time, but in the meantime I list everything and invite you to get in touch if anything sounds interesting. My main projects, past and present, are:
I've also submitted small patches to various open-source projects including LLVM (bugfix to bitcode linker), binutils (objcopy extension), gcc (documentation fix for gcj), Knit (compile fixes), Datascript (bug fixes), DICE (bug fixes), pdfjam (support “page template” option) and Claws Mail (support cookies file in RSS plugin). Some of them have even been applied. :-)
Apart from my main development projects, I sometimes produce scripts and other odds and ends which might be useful to other people. Where time permits, I try to package them half-decently and make them available here.
I have written some scripts which attempt to retrieve decent BibTeX for a given paper (as a PDF or PostScript file). details
I've written a nifty script for printing papers, which helps people save paper, share printed-out papers and discover perhaps unexpected collaborators within their institution. details
I have a Makefile which downloads and compiles Tripos past-paper questions. It's pretty much self-documenting. Here it is.
I compiled a list of the University Officers of the Mathematical Laboratory or Computer Laboratory over the period 1937–2000, using the data from ACAD.
I have built a sizable collection of vaguely useful shell scripts and other small fragments. One day “soon” I will get round to publishing them. The biggest chunks are my login scripts, which use Subversion to share a versioned repository of config files across all the Unix boxes that I have an account on, and the makefile and m4 templates that build this web page. I need to clean these up a bit. In the meantime, if you're interested in getting hold of them, let me know.
Occasionally I write down some thoughts which somehow relate to my work. They now appear in the form of a blog. Posts are categorised into research, and teaching, development, publishing, higher education and meta strands. There's also an index of all entries.
I have the beginnings of a personal web page. It's very sparse right now. Have a look, if you like.
Office | Bush House (N)7.06
Email | stephen@humprog.org
Post | Dr Stephen Kell, Department of Informatics, King's College London, Bush House, 30 Aldwych, London WC2B 4BG, United Kingdom
Content updated at Tue 2 May 16:00:00 BST 2023.