Rambles around computer science

Diverting trains of thought, wasting precious time

Wed, 09 Mar 2011

Greek talk

One of the reasons why I'm not a theoretical computer scientist is that I am very, very averse to mathematical notation. “It's like Greek to me!”---no applause, please. Certainly, it's common to see very highly abbreviated notation that takes some serious cognitive gear-turning to decode. If I'm faced with a Greek-heavy paper, I usually skim over the symbolic stuff and look for an explanation in words. Sometimes it's there, and sometimes it isn't. In the cases where it's not, I rarely have the stamina to wade through the Greek.

Natural language, for all its imprecision, is---unsurprisingly---more “natural”! In fact, I'll wager that most of the infamous imprecision found in natural language specifications could be fixed by more precise natural language. Perhaps a semantic checker for English is in order. Diagrams are even better than natural language, of course, although they rarely stand alone.

It strikes me that formalism is primarily useful for avoiding mistakes. Because it turns complex reasoning into simple pattern-recognition and symbol-pushing, correctness can be checked fairly dumbly. The cost is that although it's hard to make mistakes, it's hard to make progress: there are reams of applicable rules, and expressing anything complex requires a whole lot of symbols. So I'm going to go out on a limb and claim that formalism is notably not very good for acquiring understanding. In a lecture, diagrams and examples and words have always been far more useful to me than slides full of Greek. I'm also going to assert (without proof!) that formalism is not useful for artifact construction, except where mistake-avoidance is paramount. We should allow programmers to make imprecise statements, and refine them later, because humans can be a lot more productive this way. In particular, we can make progress before we fully understand the problem! Only when the cost of the smallest mistake is so great that we really want to rein things in should we resort to fully rigorous constructive methods (such as formal refinement processes, the B method, etc.). This argument also encompasses many of the usual arguments in favour of dynamic languages over statically typed ones.
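
To make that last point a little more concrete, here's a minimal sketch in Python (my choice of language; the argument isn't tied to any particular one). We start with an imprecise first cut that a dynamic language will happily run, and only later pin the types down, at which point a checker such as mypy can do the dumb symbol-pushing for us.

    from typing import Sequence

    # First cut: imprecise, but it runs, and we can make progress
    # before we fully understand the problem.
    def average(xs):
        return sum(xs) / len(xs)

    # Later refinement: the same idea with the types pinned down,
    # so a checker such as mypy can catch mistakes mechanically.
    def average_checked(xs: Sequence[float]) -> float:
        if not xs:
            raise ValueError("need at least one value")
        return sum(xs) / len(xs)

    print(average([1, 2, 3]))            # 2.0
    print(average_checked([1.0, 2.5]))   # 1.75

The imprecise version lets us make progress straight away; the precise one buys mistake-avoidance once we've decided it's worth the extra symbols.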

Of course, that doesn't mean that all formal notation is to be avoided. For whatever quirk of evolution, humans have some aptitude for written language---and that includes mathematical-style symbolic notation just as much as plain old words made of letters. So mathematical notation is fine if it stays within a particular comfort zone. I can read basic logic and basic algebra without much cognitive burden. Only when the formal notation passes a certain density threshold do I suddenly hit problems. I suspect that most theoretical computer scientists (and mathematicians) have a much higher threshold than I do.


