Closing the Sorites Door After The Cow Has Ambled
The Last Instance has an interesting, pro-slime response to my recent musings on the sorites paradox. TLI offers a more nuanced herd example in Kotlin, explicitly modelling the particular cases of the empty herd, the herd of one cow, and the herd of two or more cows, along with some good thoughts on which code-wrangling metaphors we should keep to hand.
It’s a better code example in a number of ways, as it suggests a more deliberate alignment between the domain’s jargon and the model captured in code. It includes a compound type with distinct `Empty` and `Singleton` subtypes.
But notice that we have re-introduced the sorites paradox by the back door: the distinction between a proper herd and the degenerate cases represented by the empty and singleton herds is based on a seemingly arbitrary numeric threshold.
Probably in my rhetorical enthusiasm for the reductive case (`herd=[]`), the nuance of domain alignment was lost. I don’t agree that this new example brings the sorites paradox in by the back door, though. There is a new `ProperHerd` type that always has two or more members. By fixing a precise threshold, the ambiguity is removed, and the sorites paradox still disappears. Within this code, you can always work out whether something is a `Herd`, and which subtype (`Empty`, `Singleton`, or `ProperHerd`) it belongs to. It even hangs a lampshade on the philosophical bullet-biting existence of the empty herd.
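TLI’s actual Kotlin isn’t reproduced here, but a minimal sketch of the shape under discussion might look like the following; the `Cow` type, the `herdOf` helper, and the constructor details are my own illustrative assumptions:

```kotlin
// A sketch of the sealed hierarchy described above, not TLI's actual code.
data class Cow(val name: String)

sealed interface Herd {
    // The philosophical bullet, bitten: the empty herd exists.
    object Empty : Herd

    // Exactly one cow.
    data class Singleton(val only: Cow) : Herd

    // Two or more cows, guaranteed by the shape of the type:
    // there is no way to construct a ProperHerd without a first and a second.
    data class ProperHerd(
        val first: Cow,
        val second: Cow,
        val rest: List<Cow> = emptyList()
    ) : Herd
}

// The numeric threshold lives in exactly one place: a total function
// from an arbitrary list of cows to the appropriate subtype.
fun herdOf(cows: List<Cow>): Herd = when {
    cows.isEmpty() -> Herd.Empty
    cows.size == 1 -> Herd.Singleton(cows.single())
    else           -> Herd.ProperHerd(cows[0], cows[1], cows.drop(2))
}

// Exhaustive dispatch: the compiler can always answer "which subtype?",
// which is why no sorites ambiguity survives inside the code.
fun describe(herd: Herd): String = when (herd) {
    is Herd.Empty      -> "no herd at all"
    is Herd.Singleton  -> "a lone cow called ${herd.only.name}"
    is Herd.ProperHerd -> "a proper herd of ${2 + herd.rest.size} cows"
}
```

The point of the sketch: the two-or-more invariant is carried by the constructor shape of `ProperHerd`, so the precise threshold is stated once, and every consumer of `Herd` gets exhaustive, compiler-checked dispatch for free.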
Though you can imagine attempts to capture more of this ambiguity in code (overlapping categories of classification, and so on), there would ultimately have to be some series of perhaps very complicated disambiguating rules for formal symbolic processing to work with. Insofar as something like deep learning doesn’t fit that, because it holds a long vector of fractional weights against unlabelled categories, it’s not symbolic processing, even though it may be implemented on top of a programming language.
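To make that contrast concrete, here is a toy, graded alternative to the symbolic model: no named subtypes at all, just a fractional weight. The sigmoid, its midpoint, and its steepness are entirely invented:

```kotlin
import kotlin.math.exp

// A toy, graded alternative to the symbolic model: no named subtypes,
// just a fractional "herdness" weight. Midpoint and steepness are
// arbitrary, tunable inventions, which is exactly the contrast.
fun herdness(cowCount: Int, midpoint: Double = 1.5, steepness: Double = 4.0): Double =
    1.0 / (1.0 + exp(-steepness * (cowCount - midpoint)))

// herdness(0) ≈ 0.002, herdness(1) ≈ 0.12, herdness(2) ≈ 0.88:
// a smooth slope where the symbolic model draws a hard line.
```

Where the sealed hierarchy answers “which subtype?” with a name, this answers with a number on a slope, and the sorites vagueness comes straight back in as tunable parameters.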
Team Slime
I don’t think a programmer should take too negative a view of ontological slime. Part of this is practical: it’s basically where we live. Learning to appreciate the morning dew atop a causal thicket, or the waves of rippling ambiguity across a pond of semantic sludge, is surely a useful mental health practice, if nothing else.
Part of the power of Wimsatt’s slime term, to me, is the sense of ubiquity it gives. Especially in software, and its everyday entanglement with human societies and institutions, general rules are the exception. Once you find them, they are one of the easy bits. Software is made of both planes of regularity and vast quantities of ontological slime. I would even say ontological slime is one of Harrison Ainsworth’s computational materials, though laying that out requires a separate post.
Wimsatt’s slime just refers to a region of dense, highly local, causally entangled rules. Code can be like that, even while remaining a symbolic processor. Spaghetti code is slimy, and a causal thicket. Software can also be ontological slime because parts of the world are like slime. Beyond a certain point, a particular software system might just need to suck that up and model a myriad of local rules.
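Purely as an invented illustration of that flavour, a patch of such local rules might look like this; every rule and number below is made up:

```kotlin
// An invented illustration of ontological slime in code: dense, highly
// local, causally entangled rules, each true only of its own small patch
// of the world. Every rule and number here is made up.
fun livestockTransportFee(region: String, cows: Int, winter: Boolean): Double = when {
    region == "islands" && winter      -> 40.0 + 6.0 * cows  // ferry on half-schedule
    region == "islands"                -> 22.0 + 4.0 * cows
    region == "highlands" && cows > 12 -> 55.0               // needs the long trailer
    region == "highlands"              -> 18.0 + 2.5 * cows  // cattle-grid toll included
    cows == 0                          -> 0.0                // the empty herd travels free
    else                               -> 10.0 + 3.0 * cows
}
```

As TLI says: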
> The way forward may be to see slime itself as already code-bearing, rather as one imagines fragments of RNA floating and combining in a primordial soup. Suppose we think of programming as refining slime, making code out of its codes, sifting and synthesizing. Like making bread from sticky dough, or throwing a pot out of wet clay.
And indeed, traditionally female-gendered perspectives might be a better way to understand that. Code can often use mending, stitching, baking, rinsing, plucking, or tidying up. (And perhaps you have to underline your masculinity when explaining the usefulness of this: Uncle Bob Martin and the Boy Scout Rule. Like the performative super-blokiness of TV chefs.) We could assemble a team: as well as Liskov, we could add the cyberfeminist merchants of slime from VNS Matrix, and the great oceanic war machinist herself …
> “It’s just like planning a dinner,” explains Dr. Grace Hopper, now a staff scientist in system programming for Univac. (She helped develop the first electronic digital computer, the Eniac, in 1946.) “You have to plan ahead and schedule everything so it’s ready when you need it. Programming requires patience and the ability to handle detail. Women are ‘naturals’ at computer programming.”
Hopper invented the first compiler: an ontology-kneading machine. By providing machine-checkable names that correspond to words in natural language, it constructs attachment points for theory construals, stabilizing them and making it easier for theories to be rebuilt and shared by others working on the same system. Machine code, dense and full of hidden structure, is a rather slimy artifact itself. Engineering an ontological layer above it (the programming language) is, like the anti-sorites, a slime-refinement manoeuvre.
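In miniature, and as an invented example rather than anything of Hopper’s, the manoeuvre looks something like this:

```kotlin
// An invented miniature of the naming manoeuvre: a machine-checkable name
// that corresponds to a word in the domain's natural language, giving
// human theories of the system a stable point of attachment.
const val PROPER_HERD_MINIMUM = 2

fun isProperHerd(cowCount: Int): Boolean =
    cowCount >= PROPER_HERD_MINIMUM  // the compiler checks the name; readers share it
```

The value is slimy and arbitrary; the name is a handle that the machine can verify and a team can talk about.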
To end on that note seems too neat, though, too much of an Abstraction Whig History. To really find the full programmer toolbox, we need to learn not just reification, decoupling, and anti-sorites, but when and how to blend, complicate and slimify as well.
Hey, I’ve really been enjoying this thread. I wrote up something that started as a reply, but then got pretty happily lost in the weeds, over at Feral Machines: http://feralmachin.es/posts/urschleim.md