Rambles around computer science

Diverting trains of thought, wasting precious time

Wed, 11 Mar 2020

Fund institutions, not projects

[This post follows a previous post discussing changes to UK government research funding, which was itself a follow-up to my earlier “Postdoc myths” piece.]

In my last post I finished by mentioning Alan Kay's favoured dictum that we should “fund people, not projects”, and noting that this has reached the attention of Dominic Cummings in his plans to create an ARPA-like research agency for the UK. In fact the dictum is itself borrowed from J.C.R. Licklider, an early and influential divisional head at ARPA, widely credited as a progenitor of the Internet. I also noted that the point of this dictum is easily misunderstood. Here I'll discuss how this is so, and what I think is a better way to capture its intention in the case of government-funded university-based research. In short: “fund institutions, not projects”.

Consider this well-meaning article which claims to be advocating such an approach (using the dictum in the title of the piece!). It totally misunderstands the idea. It seems to think it's about the question of which ‘faculty members’ should receive the ‘grant money’. Not coincidentally, the post's ideas are feeble—stuck in the paradigm of seeking relatively centralised ways to assess the merits of individuals. Tweaking these criteria is not what Kay and Licklider were talking about. Rather, they were critiquing the very notion of project grants, and with it the very idea of nominated “leaders” directing a “team” through a pre-approved programme of work. “Funding people” does not mean “funding the empire of professor X”, via grants naming X as PI! Even the article's mooted “fund everybody” assumes a fixed prior notion of who is eligible—“faculty” in American lingo. This inherently fails to address the postdoc issues I discussed in my previous posts. The very notion of “postdoc on a project” is antithetical to Kay's (or Lick's) suggestion. For them it is simply the wrong basis on which to pay people to do research (and remember that a postdoc is by definition an employed, qualified researcher—not a trainee).

My re-statement of the idea, focused on universities rather than (as Kay tends to) industrial labs, is that we should fund institutions, not projects. In other words, devolve the decision-making: universities can hire people “on merit” as they usually aspire to doing, but without a preordained project in mind. This answers a common rejoinder, of “who decides who ‘gets funded’?”. The short answer is: institutions do. They are used to making such decisions: how do you decide which postdoc to hire, or which lecturer [a.k.a. Assistant Professor]? Even in the postdoc case, we like to think that research merit is a major factor. So our answer remains mostly the same, but project-specific criteria are explicitly removed from hiring, and project-specific direction is explicitly removed from the job that even a relatively junior (but post-PhD) person is hired to do. “Fit to the institution” is still a valid criterion of course. Let the institutions attract people who want to make a career there. If the ongoing projects are any good, they'll contribute to them; otherwise, or additionally, they'll come up with their own, and more generally contribute to the “problem-finding”, whose importance Kay also often speaks of. Problem-finding is ruled out if you employ people on preordained problems.

This brings me to my next point: it is far better to spread funds out among institutions, and devolve selection, than to run relatively centralised selection exercises like fellowship schemes. The “fund people” line often encounters an attempted rejoinder, amounting to “fellowships exist”. Some people ask “isn't that funding people? And we already ‘do it’, so maybe we just need to publicise fellowships more?”. That is privilege talking. Of course, research council-funded fellowships do exist, and yes, they are “funding people”. But they are the exception, not the norm, and are set up to be so. They are the “prestige case”, and are highly competitive. (And they are, anyway, awarded on the basis of a project proposal!) The vast majority of money paying for the employment of early-career researchers is not funding them on a fellowship basis; it's on someone else's grant, meaning a project someone else proposed. The extreme competition for fellowships—a phenomenon caused by policy, not nature, as I covered in previous posts—means only fellowship applications that are “fully baked” (to borrow the words of Martin Sadler from the aforementioned NCSC RIs' conference) have a chance of being funded. Only those applicants who have received substantial patronage and/or prior funding are likely to have the resources to produce a fellowship proposal that both is and appears fully baked, and get it through the narrow review funnel. The effect is inherently conservative, and again antithetical to the idea that “funding people” is how research at large is carried out.

(More generally, people are often oblivious to their privilege. The people who speak most loudly in favour of fellowships tend to be the people who've received them. That's good for them, and very often these people are great at what they do. But as is often the case with privilege, many are slow to recognise how structural factors have acted in their favour. Sadler's point was that inevitably, polish and patronage become decisive elements in many cases. The way the money is split conspires to ensure that however good the pool of eligible researchers, only a slim fraction will be funded in this manner.)

A slightly more subtle phenomenon is that under a system of funding institutions, many more people will “get funded” in their own right, since it inherently involves spreading the money out more widely, building a much wider and flatter structure rather than a “fat pyramid”. (That is rather assuming institutions don't find new, internal ways to subjugate people to projects; but I don't believe our universities have yet become so unenlightened that they would do so.) The goal is not to fund “a team under lead researcher X”; it's to fund more potential lead researchers and fewer subordinate ones. I say “potential” because the choice of whether to lead or become a non-leading collaborative partner rests with the researcher.

Fellowships' extreme selection practices, like long proposals, postal review and panels, are far less useful in such a context. Similarly, once institutions are free to hire people as they usually do, by job application—but with more such jobs!—we eliminate a certain fraction of the (hugely effortful) grant applications made by academics, since more work will be achievable with the institution's (increased) block funding. There is nothing infeasible about this; it is exactly the way UK university research funding worked until the 1970s. The total number of research-active roles may well work out about the same; that's an orthogonal issue, in that supposing we hold the budget fixed, the headcount could stay exactly the same or could change, as could the salary distribution. Even if the staffing level goes down (i.e. average pay goes up!), I'm confident that the effective research capacity would be much greater, since any shrinkage would be offset by eliminated costs: grant application effort, but also the wastage induced by postdoc-style person/project “compromises”, projectwise fragmentation, personnel churn and personal upheaval (“move to another city”) that I've written about previously.

Note also that funding people and institutions in this way does not mean “make everybody permanent”. That misunderstanding arises from the same myth I wrote about earlier: the opposite of “postdoc” really is not “permanent position”. It's potentially fine for early-career research appointments to be fixed-term—if the term is long enough and if the process for renewal or progression is sufficiently lightweight (i.e. definitely not “9 months' funding left; start applying for fellowships!”). Five years seems a sensible minimum for undertaking serious work while living an episode of one's life... and not coincidentally, is what established early-career researchers used to be offered in Cambridge. Going further, in fact, there is an argument that late-career appointments in research roles should also remain conditional on actually being research-productive. An oft-noted flexibility in the current system is that institutions can move academics “sideways”, into teaching and/or admin, when they're no longer research-productive. Increasing institution-centric funding would not diminish that option; it can only increase it, since greater funds would be pooled at institution level.

One more objection that might arise is: are institutions wise enough to spend this money well? My answer is “yes, for now” and again it's because the decision-making is inevitably devolved from the centre. Although many of our universities are run appallingly badly by central administration, at the departmental level academic merit often is still recognised and does still count for something. Of course our means of assessing this are not perfect, and I get frustrated when colleagues resort to “counting papers” rather than weighing contributions. Patronage is sometimes a factor too. But at least in my limited experience, most colleagues still look for mostly the right things.

Finally, it's interesting that Cummings takes inspiration from high-profile “breakthroughs” such as the moon landings, the Internet, and no doubt other things like human genomics. I'd like to sound a note of scepticism that much of the research we really want is going to take this form. In an age of technological plenty, it is wrong to assume that what we “should” work on, in the sense of research that will improve people's lives, takes the form of identifiable “breakthroughs”—and certainly not ones in “new areas” pre-selected by government, whether they be quantum computing, “AI”, or the next fixation of government technocrats. The disconnect between apparent technical progress and improving ordinary people's lives has long been present. (On the subject of moon landings, Gil Scott-Heron's “Whitey on the Moon” comes to mind.) But this seems destined to become even more pronounced. While in the biomedical technologies, true life-improving “breakthroughs” do seem more plausible, I still have an overarching feeling of scepticism—perhaps traceable to Ivan Illich's critique of late-C.20th medicine as primarily enabling survival in an unhealthy society. In general we can learn much from the writings of Illich, E.F. Schumacher and others who have questioned the axioms of “development” and its economics. I'm not a trained philosopher or economist, so if you know other work either in the spirit of these, or critiquing them, I'd love to hear your recommendations. In my actual area of training, I've already been developing my case that advances in software are not clearly helping humanity... but I'll save that topic for another time.
