Diverting trains of thought, wasting precious time
I was fortunate enough to go to OOPSLA this year---the last year of its present headline form, before it slips under the federated SPLASH banner, alongside Onward!, another event I was fortunate enough to participate in this year.
On the Monday I was at the Curricula for Concurrency workshop. It was a really interesting event, and I learnt something by attending. Firstly, I gained a clearer distinction in my mind between so-called “deterministic concurrency” (essentially data-parallel computation, as far as I could work out) and the more usual shared-resource concurrency problems. It's not that I hadn't differentiated them before, but my experience is mainly with the latter, and I'd written my position targeting it almost exclusively.
My talk went down fairly well, as far as I could tell. Mid-talk I found I was focussing on a balding grey-haired man in the middle of the audience, who nodded along to some of my points. I hadn't noticed him before. Only after my talk did my face recognition circuits reactivate, and I realised that it had been Bjarne Stroustrup. I'm sure he didn't come for my talk per se, but it was nice to have him in the room. The event itself was chock-full of distinguished people (Guy Steele, Doug Lea, Vijay Saraswat) which was a privilege. I intend to keep a presence (albeit fairly passive) in that community to see what comes about... it's a really interesting topic with a diversity of viewpoints. In particular, Peter Welch's talk (and a subsequent chat) has forced me to rethink the times when concurrency has a simplifying power. I still think that sequential reasoning is easier for most humans, but I'll grant that concurrency is often a very nice abstraction and does model the universe fairly directly. I don't yet buy Peter's claim that it's a truer model than a state-based model, necessarily, but I'm open to argument.
Barbara Liskov's keynote was an interesting retrospective, and gave me a reading list to catch up on, but a bit unremarkable as far as future plans go. She even gave the impression, wrongly I believe, that she hadn't been interested in programming language advances since she began concentrating on systems. I don't think she meant that, so never mind.
There was some fairly interesting stuff at the first two Onward! sessions, but nothing that struck a major chord with me, although Pi (recently featured on LtU) is something I'd like to understand better. It sounds too much like a mathematician's language to me, hence probably no good for most programmers, but there are some interesting principles at work (meta-completeness made an impression on me in the talk, but there are others). It's a pity they called it Pi---surely they know something about the preexisting process calculus of a very similar name?
The Onward! keynote by Thomas Malone was pretty interesting. In the spirit of Onward!, it left more questions than answers, mainly about the preconditions for the various “genes” of collective and collaborative structures that he described. His talk also reminded me that Chris Dellarocas had proposed the core idea of my own work, a separation of functionality from integration (although in a rather different context, and he didn't describe it in those terms) in his mid-90s work. Malone was a great speaker and handled some slightly hostile questioning about his political beliefs rather well. In response to an aggressive lefty questioner, he criticised business schools for treating “making money” as the sole aim of business, as opposed to more human-oriented values. The only foot he put wrong in my eyes was saying that “making money is a valid goal, but not the only one”. Call me radical, but making money is only ever a means to an end, and so never a goal, except among those who've really lost their perspective---tokens of wealth as an end in themselves can only make life a sad and meaningless game of Monopoly. Most people are interested in what money can buy them, of course....
The OOPSLA session on concurrency was interesting enough: JChorus (from Penn State) is a neat adaptive heap-partitioning scheme which allows sequential logic to run amid exclusively-owned heap regions which grow and split as the aliasing structure of the heap changes. They used only one fairly obscure algorithm as an example throughout (some triangulation thing) which wasn't terribly convincing, but it's at least a neat idea. The second talk, however, from Emery Berger of UMass, delivered far less than it promised. It promised safe concurrent programming for C and C++, but as far as the talk made out, it delivered only an unimpressive serializing commit manager (a bit like TSO) which, unless I'm missing something, can't possibly perform as well as an STM (and STM performance still isn't great).
Next it was my talk at the Onward! short presentations session. As usual I was underwhelmed by the response---for what I consider to be a big idea, the reaction I got was pathetically tiny. I must be doing something wrong. The audience was very small though, and talking at 5pm is never going to draw a huge crowd.
Jeannette Wing's invited talk was a fairly US-centric look at funding issues, although it did remind me to do something to follow up on Pamela Zave's ICSE invited talk about next-generation internet funding. More interesting was the Onward! panel about green software. I particularly enjoyed the contributions of Vivian Loftness, an architect from CMU who was soundly scathing about contemporary building practices and very robustly advocated a return to passively heated, passively lit, passively ventilated buildings. Steve Easterbrook, among other very sensible contributions, demoed a “collective intelligence” tool for modelling carbon costs of various everyday actions (e.g. “should I print this paper or read it on my screen?”---you may be surprised).
Gerard Holzmann's invited talk made the interesting observation that space missions of 40 years ago implemented software that performed the same task (at least in that the mission was the same) but with an order of magnitude less code and using an order of magnitude less in resources. I'm not sure whether that's because today's software does more in non-essential matters (cf. the amount done by hardware and humans, which might be correspondingly less), but still, it's a frightening change. Unfortunately, most of his talk focussed not on the clever verification techniques of his SPIN model checker, but on rather more mundane code quality measures (effective though they may be, as far as they go) which his team check using shell scripts and the like. Like many high-assurance developers he eschews dynamic memory allocation, which I find hard to understand, since even without a heap allocator, in a purely static-allocation scenario, a program has to account for the used-or-unused state of its resources.
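That last point is easier to see concretely. Here's a minimal sketch (my own code, nothing to do with Holzmann's tools) of a fixed-capacity pool in C++: even with all storage sized at compile time, the program still has to track which slots are in use---exactly the bookkeeping a heap allocator would otherwise centralise.

```cpp
#include <cassert>
#include <cstddef>

// Hypothetical fixed-capacity pool: storage is fully static, yet we
// still carry per-slot used/unused state -- the accounting a heap
// allocator would otherwise do for us.
template <typename T, std::size_t N>
class StaticPool {
    T slots[N];
    bool used[N] = {};              // the unavoidable bookkeeping
public:
    T* acquire() {                  // find a free slot, or fail
        for (std::size_t i = 0; i < N; ++i)
            if (!used[i]) { used[i] = true; return &slots[i]; }
        return nullptr;             // pool exhausted
    }
    void release(T* p) { used[p - slots] = false; }
    std::size_t in_use() const {
        std::size_t n = 0;
        for (std::size_t i = 0; i < N; ++i) n += used[i];
        return n;
    }
};
```

Static allocation moves the exhaustion failure from “malloc returned null” to “acquire returned null”, but the state machine is the same.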
I saw an interesting demo of the “anti-Goldilocks debugger” from Virginia Tech, a tool which recovers an original source-level debug-time view of Java programs that have been bytecode-munged by various weaving tools. The tool itself, while fine in its concept, revolves around a decidedly clunky language describing what the mungers do, and I wasn't convinced that the right solution wasn't for mungers to munge the debug information also. Of course, my usual take applies, namely that debuggers need “interpretation” (ability to recover a customised abstract view of running code) and “subjectivity” (ability to view many such abstractions simultaneously). I should write a position or something.
Continuing the debugging theme, Blink (from various people at various places, the precise list of which I forget, but UTA was definitely one) is a solution to multi-language debugging in which, say, gdb and jdb are run concurrently with a master control process mediating between the two. Again, I'd rather just have “language” as a custom interpretation of object code, but that does demand some homogeneity in language implementation. I should read the paper.
Finally it was back to Onward! and some interesting talks, in particular Tom Mullen's about elements of psychology in programming. He advocated language design with an awareness of “chunking”, “analogies” and other cognitive phenomena. It was very interesting, and another welcome bringing-together of programming and psychology. I need to read this paper.
I loved the paper about describing points-to analyses in Datalog... the “meta-engineering” of recovering order, orthogonality and principle within such a messy domain really deserves credit, and the performance figures were superb.
A real highlight was William Cook's talk about data abstraction versus objects. His take is that objects are a purely behavioural concept, in that they are observed through an interface. Meanwhile, ADTs are a dual, constructive sort of thing, in that they are described by a concrete representation with functions defined in its terms. Hence objects are nice and dynamic, but terribly difficult to verify, whereas ADTs are verifiable but rigid. He also claimed that one advance objects bring to inheritance is “this”, with its ability to maintain correct self-reference across inheritance-based extension. He called self-reference “Y”, presumably after the combinator, although I'm not yet sure whether it's exactly analogous to the one we know and love from lambda calculus. Anyway, it was a very stimulating and engaging talk, so I really must read the essay.
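To fix the distinction in my head, I sketched it in C++ (these toy types are mine, not Cook's): an object is anything that answers through an interface, an ADT is a concrete representation with operations defined over it, and the late binding of “this” is what keeps self-reference correct under extension.

```cpp
#include <cassert>

// Object flavour: pure behaviour behind an interface; any new
// representation (even an "infinite" set like the evens) can be
// added later without touching existing code.
struct IntSet {
    virtual bool contains(int) const = 0;
    virtual ~IntSet() = default;
};
struct Evens : IntSet {
    bool contains(int n) const override { return n % 2 == 0; }
};

// ADT flavour: one fixed concrete representation, inspected directly
// by its operations -- easier to verify, harder to extend.
struct RangeSet {
    int lo, hi;
    bool contains(int n) const { return lo <= n && n <= hi; }
};

// Self-reference via 'this': twice() calls step() late-bound, so an
// inheriting extension re-ties the knot of self-reference for free.
struct Counter {
    virtual ~Counter() = default;
    virtual int step() const { return 1; }
    int twice() const { return step() + step(); }
};
struct BigStep : Counter {
    int step() const override { return 10; }
};
```

The last pair is the “Y” point as I understood it: `twice()` was written against the base class, yet a `BigStep` gets `10 + 10`, because `this` keeps the self-reference open for extension.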
Finally, I enjoyed the talk about Google's work on “optimisation using intended semantics”. The idea is that programmers know much more about how their program should behave, and hence how it could be optimised, than the language semantics allow the compiler to infer safely. So, the work provides annotations that let programmers assert constraints on their program which the optimiser can use. I wasn't terribly convinced by the examples, particularly after a chat with Eric Eide later, but I'm sure good examples do exist. Again, I must read the paper.
Bjarne Stroustrup also gave a talk about a language implementation trick for code-sharing in parametrically-polymorphic languages which do ahead-of-time elaboration (so C++ and sometimes C#, but not Java). The idea was that iterators pointing inside instances of multiply-parameterised classes needn't depend on all the type parameters, so implementations could be shared if the definitions allowed. Unfortunately, the typedefs for these iterators are implementation-defined in C++, with no guarantee that they'll support this sharing, but a future standard should fix that. Bjarne is more compelling as a writer than as a speaker, which might be yet again a reason to say “I'll read the paper”, but without meaning to sound harsh I may not get around to that one.
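As I understood the trick, it looks something like the following (my own reconstruction, not code from the talk): if the iterator type depends only on the element type, then container instantiations differing only in their other parameters share one iterator type, and hence one instantiation of any function written over it.

```cpp
#include <cassert>
#include <cstddef>

// The iterator type depends on T alone (here it's just a pointer),
// so containers differing only in their allocator policy share it.
template <typename T>
using SharedIter = T*;

// Toy container parameterised on element type and an allocator
// policy; for brevity the policy is never used, which is exactly
// why the iterator needn't depend on it.
template <typename T, typename Alloc>
class MiniVec {
    T data[8];
    std::size_t n = 0;
public:
    void push(const T& v) { data[n++] = v; }
    SharedIter<T> begin() { return data; }
    SharedIter<T> end()   { return data + n; }
};

struct AllocA {};
struct AllocB {};

// One instantiation of sum() serves both MiniVec<int, AllocA> and
// MiniVec<int, AllocB>, since their iterators are the same type.
int sum(SharedIter<int> b, SharedIter<int> e) {
    int s = 0;
    for (; b != e; ++b) s += *b;
    return s;
}
```

In real C++ the container's iterator typedef is implementation-defined, which is exactly the obstacle Bjarne mentioned: nothing today forces a library to make `iterator` independent of the allocator parameter.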
OOPSLA is big... but not ridiculous. Industrial participation is a good thing... even ICSE didn't have the same extent of practitioner participation, and apparently this year was a very down year as far as OOPSLA's industry attendance went (it's normally fifty-fifty, but maybe only half the normal industry participation happened this year).
The food and event arrangements are unusual: lunch isn't paid for (except, as a surprise, on the last day), but the poster reception was done as a very nice paid-for buffet dinner. Student registrants don't get to go to the main social event (except if, as some did, they blag a returned/unwanted ticket). It was, as I was told (by Yvonne), a comically imbalanced “beach party” event with optimistic proposals of dancing (with the conference's “impressive”, with regard to my expectations, 14% female attendance).
There's a slickness about OOPSLA... the keynote auditorium featured rockin' music before and after the talks. There's also a sense of fun: TOOTS, the OOPSLA trivia show, was rather a fun way to end the Tuesday evening, and there was even something called “OOPSLA Idol” which I didn't have the privilege of witnessing. People seem to enjoy themselves at OOPSLA, and that's a valuable thing.
Work-wise, the emphasis on language (not “automatic tool” or “mega-efficient runtime”, although there is a little of that) fits my viewpoint. Tools are domain-specific languages... so let's make them good ones, by understanding languages, and not insisting on automation (which inherently rules out many of the really hard problems). I suppose what I'm saying is that the theme of languages, even though it's only one theme, brings in a nice mix of both practical and theoretical academics (Phil Wadler was a prominent and sometimes long-winded question-asker) as well as highly concrete practitioners. Of course there are tool papers, and good tools... but the desire to understand programming through the deep and pervasive concept of languages is definitely there, and definitely positive.
And finally, to counter my misgivings before attending, OOPSLA is much more than just an OOP conference. The SPLASH reformulation is interesting and welcome. I think I've found my community... so I really hope I'll be able to attend in Reno next year. Thinking further ahead, I wonder how long conferences will be physical get-togethers anyway... there were at least two instances of “remote participation” this year (one panel participation by Skype and one talk by pre-recorded presentation), and surely virtual worlds are the long-term sustainable way to do conferences. By chance, on the weekend following the conference I had my first taste of World of Warcraft, which was interesting. Anyway, it'll be a while before they're as fun or as engaging as the physical meet-up experience.