There Are No Good Philosophers of Software

There are no good philosophers of software.

There is some good philosophy inspired by software, or on the society created by software, or on how software changes the way we think.

There is no good philosopher of software construction.

That is the provocation I threw out on Twitter in response to Hillel Wayne’s excellent prompt, two and a bit months ago.

The provocation got some attention and, what I was hoping for even more, some excellent responses and suggestions for new things to read. It definitely highlighted some gaps on my side. But I honestly do feel there is a hole where the philosophy of software construction should be, and that the world feels this lack keenly, especially when we consider how ubiquitous software is in sensing, controlling, and mediating that world. So I’ve recapped the original thread below, along with some good responses to the original prompt, especially ones that point in interesting new directions.


Deleuze and Guattari describe philosophy, as distinct from art and science, as the creation of concepts.

Cover of Wimsatt - Reengineering Philosophy for Limited Beings

Hillel (the OP) thinks deeply and reads broadly on software construction. I would argue that there is no writer creating deep concepts for software construction as part of a larger project.

This isn’t really an anomaly. It’s because there is very little philosophy of engineering.

The two really compelling books I would put in that category are Simondon’s On the Mode of Existence of Technical Objects and Wimsatt’s Re-Engineering Philosophy For Limited Beings. Simondon only became readily available in English translation relatively recently, and should be read more. His book analyses how machines improve through being recomposed in their broader context (milieu). A key concept is concretisation: replacing theory with realized parts. Wimsatt is great, and you don’t have the translation excuse. He also touches on software and programming, but is not really a philosopher of software. A key concept is “ontological slime”: spaces of symbolic irregularity which have no simple organizing theory.

Otherwise, there is philosophy which theorises software and uses its concepts to describe the world – such as Sadie Plant’s Zeroes and Ones, or Benjamin Bratton’s The Stack.

There is writing which gives a sense of what computation is – such as Hofstadter’s Gödel, Escher, Bach. “Strange loop” counts as the creation of a concept, so it is philosophy, or maybe theory-fiction. Laurel’s Computers as Theatre also fits here.

Cover of Simondon - On The Mode of Existence of Technical Objects, Mellamphy trans.

There is philosophy on the ethics of software, which is mainly about telling programmers what to do. This is fine so far as it goes, but not really a philosophy of software, and often not very software-specific.

There is philosophy written by mathematicians and computer scientists on the impact of computing: Turing’s paper Computing Machinery and Intelligence, and many such contributions since.

There is philosophy which sees computation as fundamental to the universe and intelligence, like Reza Negarestani’s Intelligence and Spirit.

There is philosophy which uses computation, like the analytical philosophers who write Haskell and develop theorem provers like Coq, or Peli Grietzer’s Theory of Vibe.

There are software design guides which attempt to go up a level of abstraction, like Ousterhout’s A Philosophy of Software Design.

Then there are drive-bys and fragments. Yuk Hui’s On The Existence of Digital Objects builds on Simondon and is probably in the neighbourhood.

James Noble’s classic talk on Post-Postmodern Programming.

Some work on computational materials by Paul Dourish, like The Stuff of Bits, and blog posts on the topic by Harrison Ainsworth and Carlo Pescio.

Abramsky’s paper Two Questions on Computation. Some essays in the Matthew Fuller edited Software Studies: A Lexicon. Other uncollected ramblings on blogs and Twitter.

But there is no coherent project created by a philosopher of software, such that I could point to a book, or ongoing intellectual project, and say “start there, and see your everyday world of software construction differently”.


The Project

A number of people pointed out that this long list of negations didn’t include any positive statement of what I was looking for. Let me fix that. A good philosopher of software would have:

  1. A coherent project
  2. That is insightful on the machinic internals of software
  3. In communication with philosophy, and
  4. That recognises software design as a social and cognitive act.

Hopefully that explains why I left your favourite thinker out. At least for some of you.


Misses

There are a couple of big misses I made in this informal survey of software-adjacent philosophy. There are some other things I deliberately excluded. And there were things I never heard of, and am now keen to read.

I got mostly software people in my replies, and a number of them brought up William Kent’s Data and Reality. I have read it, I think after Hillel’s enthusiastic advocacy for it in his newsletter. (Personally I thought it was ok but not mindblowing; however, I listened to it on audiobook, which is definitely the wrong format for it, and in one of the late, mutilated editions.) I would place it as a book in the Philosophy of Information, albeit one not really in communication with the rest of philosophy. The Philosophy of Information is definitely a thing in philosophy more broadly, most associated with the contemporary Oxford philosopher Luciano Floridi. You can get an overview from his paper What Is The Philosophy of Information? Floridi has written a textbook on the topic, and there’s also a free textbook by Allo et al, which is especially great if you’re poor or dabbling.

I think there’s a lot that software people can learn from pretty much anything in my list, and the Philosophy of Information does shed light on software, but it only rarely addresses point (2) in my criteria: it does not give insights into the machine internals of software. Some of it is tantalizingly close, though, like Andrew Iliadis’ paper on Informational Ontology: The Meaning of Gilbert Simondon’s Concept of Individuation.

I also should have mentioned the Commission on the History and Philosophy of Computing. Though it isn’t really a single coherent intellectual project in the sense I mean above, it is an excellent organisation, and the thinking that comes out of this stream is most reliably both in communication with philosophy and insightful about the machine internals of software. This paper by Petrovic and Jakubovic, on using Commodore64 BASIC as a source of alternative computing paradigms, gives a flavour of it.

It wasn’t mentioned in the thread or the responses, but I’ve recently been enjoying Brian Marick’s Oddly Influenced podcast, which takes a critical and constructive look at ideas from the philosophy of science, and how they can be related to software. Often, given his history (he is that Brian Marick), he’s reflecting on things like why Test Driven Development “failed” (as in, didn’t take over the world in its radical form), or how the meaning of Agile has morphed into a loose synonym of “software project management”. That makes it sound like just a vehicle for airing old grievances; it’s very far from that: always constructive, thoughtful, and ranging across many interesting philosophical ideas.

I mentioned Media Theory thinkers via Software Studies (which even includes Kittler), but it’s worth noting that media theorists have been thinking about software quite seriously, and there is some useful dialogue between software construction and design theory, usually restricted to the context of user interfaces. Wendy Chun’s Programmed Visions and Lev Manovich’s Software Takes Command are good examples. I first read Simondon because Manovich recommended it in an interview I no longer have to hand. And other people gave Kittler’s There Is No Software paper the name check it deserves.

Others pointed out that the Stanford Encyclopedia of Philosophy is rigorous, free, and has an article on the Philosophy of Computer Science.


Glorious Discoveries

The discussion unearthed some books and papers that look pretty great, but I hadn’t even heard of, let alone read.

  • Kumiko Tanaka-Ishii – Semantics of Programming
  • Federica Frabetti – Software Theory: A Cultural and Philosophical Study
  • M. Beatrice Fazi – Contingent Computation: Abstraction, Experience, and Indeterminacy in Computational Aesthetics
  • Reilly – The Philosophy of Residuality Theory

And a suggested side-quest from the philosophy of engineering:

  • Walter G. Vincenti – What Engineers Know and How They Know It: Analytical Studies from Aeronautical History

A Conciliatory Postscript

We can imagine both a broad and a narrow definition of the Philosophy of Software. In the broad definition, most things in this post would be included. And yes, academics, if there were a hypothetical undergraduate subject on the philosophy of software, it would probably be best to use the broad definition, and expose young developers to a broad collection of ideas about the way software works in the world. So I am not completely sincere in my provocation, but I’m not utterly insincere either.

There are delicate and intimate connections between software construction and philosophy. Consider the tortured, necessary relationship that both philosophers and programmers have with names, with intuition, or with causality. The possibility of exploring these connections, and inventing new concepts, gets swamped. It gets swamped by illiteracy and thinkpiecery on the software side, and by mechanical ignorance on the philosophy side. The narrow definition of the philosophy of software, and the recognition that there is no coherent philosophy of software construction, brings it back into view.

Just Like Reifying A Dinner

Closing the Sorites Door After The Cow Has Ambled

The Last Instance has an interesting, pro-slime response to my recent musings on the sorites paradox. TLI offers a more nuanced herd example in Kotlin, explicitly modelling the particularity of empty herds, herds of one cow, as well as herds of two or more cows, and some good thoughts on what code-wrangling metaphors we should keep to hand.

It’s a better code example, in a number of ways, as it suggests a more deliberate language alignment between a domain jargon and the model captured in code. It includes a compound type with distinct Empty and Singleton subtypes.

But, TLI suggests, we have re-introduced the sorites paradox by the back door: the distinction between a proper herd and the degenerate cases represented by the empty and singleton herds is based on a seemingly-arbitrary numeric threshold.

Probably in my rhetorical enthusiasm for the reductive case (herd=[]), the nuance of domain alignment was lost. I don’t agree that this new example brings the sorites paradox in by the back door, though. There is a new ProperHerd type that always has two or more members. By fixing a precise threshold, the ambiguity is removed, and the sorites paradox still disappears. Within this code, you can always work out whether something is a Herd, and which subtype (Empty, Singleton, or ProperHerd) it belongs to. It even hangs a lampshade on the philosophical bullet-biting existence of the empty herd.
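TLI’s actual example is in Kotlin; here is a rough Python sketch of the same sealed-type idea, where the subtype names (Empty, Singleton, ProperHerd) follow the discussion above but everything else is invented for illustration:

```python
# A sketch of the sealed-herd idea: every count of cows lands in
# exactly one subtype, so there is no borderline case inside the code.
from dataclasses import dataclass
from typing import Tuple


@dataclass(frozen=True)
class Cow:
    name: str


class Herd:
    """Closed family: a Herd is an Empty, a Singleton, or a ProperHerd."""


@dataclass(frozen=True)
class Empty(Herd):
    pass


@dataclass(frozen=True)
class Singleton(Herd):
    cow: Cow


@dataclass(frozen=True)
class ProperHerd(Herd):
    cows: Tuple[Cow, ...]

    def __post_init__(self) -> None:
        # The two-or-more threshold is explicit and machine-checked.
        if len(self.cows) < 2:
            raise ValueError("a ProperHerd has two or more cows")


def herd_of(*cows: Cow) -> Herd:
    # Total classification: zero, one, or two-plus, with no vague middle.
    if len(cows) == 0:
        return Empty()
    if len(cows) == 1:
        return Singleton(cows[0])
    return ProperHerd(cows)
```

Within code like this, classification is total and decidable, which is the point: fixing the threshold dissolves the borderline cases, even if the choice of threshold itself came from domain judgement.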

Though you can imagine attempts to capture more of this ambiguity in code – overlapping categories of classification, and so on – there would ultimately have to be some series of perhaps very complicated disambiguating rules for formal symbolic processing to work. Insofar as something like deep learning doesn’t fit that, because it holds a long vector of fractional weights against unlabelled categories, it’s not symbolic processing, even though it may be implemented on top of a programming language.

Team Slime

I don’t think a programmer should take too negative a view of ontological slime. Part of this is practical: it’s basically where we live. Learning to appreciate the morning dew atop a causal thicket, or the waves of rippling ambiguity across a pond of semantic sludge, is surely a useful mental health practice, if nothing else.

Part of the power of Wimsatt’s slime term, to me, is the sense of ubiquity it gives. Especially in software, and its everyday entanglement with human societies and institutions, general rules are the exception. Once you find them, they are one of the easy bits. Software is made of both planes of regularity and vast quantities of ontological slime. I would even say ontological slime is one of Harrison Ainsworth’s computational materials, though laying that out requires a separate post.

Wimsatt’s slime just refers to a region of dense, highly local, causally entangled rules. Code can be like that, even while remaining a symbolic processor. Spaghetti code is slimy, and a causal thicket. Software also can be ontological slime because parts of the world are like slime. Beyond a certain point, a particular software system might just need to suck that up and model a myriad of local rules. As TLI says:

The way forward may be to see slime itself as already code-bearing, rather as one imagines fragments of RNA floating and combining in a primordial soup. Suppose we think of programming as refining slime, making code out of its codes, sifting and synthesizing. Like making bread from sticky dough, or throwing a pot out of wet clay.

And indeed, traditionally female-gendered perspectives might be a better way to understand that. Code can often use mending, stitching, baking, rinsing, plucking, or tidying up. (And perhaps you have to underline your masculinity when explaining the usefulness of this: Uncle Bob Martin and the Boy Scout Rule. Like the performative super-blokiness of TV chefs.) We could assemble a team: as well as Liskov, we could add the cyberfeminist merchants of slime from VNS Matrix, and the great oceanic war machinist herself:

“It’s just like planning a dinner,” explains Dr. Grace Hopper, now a staff scientist in system programming for Univac. (She helped develop the first electronic digital computer, the Eniac, in 1946.) “You have to plan ahead and schedule everything so it’s ready when you need it. Programming requires patience and the ability to handle detail. Women are ‘naturals’ at computer programming.”

Hopper invented the first compiler: an ontology-kneading machine. By providing machine checkable names that correspond to words in natural language, it constructs attachment points for theory construals, stabilizing them, and making it easier for theories to be rebuilt and shared by others working on the same system. Machine code – dense, and full of hidden structure – is a rather slimy artifact itself. Engineering an ontological layer above it – the programming language – is, like the anti-sorites, a slime refinement manoeuvre.

To end on that note seems too neat, though, too much of an Abstraction Whig History. To really find the full programmer toolbox, we need to learn not just reification, decoupling, and anti-sorites, but when and how to blend, complicate and slimify as well.

Heaps of Slime

The sorites paradox is a fancy name for a stupid-sounding problem. It’s a problem of meaning, of the kind software developers have to deal with all the time, and also of the kind software generates all the time. It’s a pervasive, emergent property of formal and informal languages.

You have a heap of sand. One grain of sand is not a heap. You take away one grain of sand. One grain of sand makes little difference – so you still have a heap of sand.

You have a grain of sand. You add another grain. Two grains of sand are surely not a heap. You add another. Three grains of sand are not a heap.

If you add only a grain or take away only a grain of sand, since one grain of sand can hardly make a difference, how do you tell when you have a heap?

That’s the paradox. The Stanford Encyclopedia of Philosophy has a more comprehensive historical overview.
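The argument can be made embarrassingly literal in code. A toy sketch, where the base case of 100 grains is an arbitrary pick for illustration:

```python
# The sorites argument, encoded directly. Both premises are present;
# following them forces an absurd conclusion.
def soritical_is_heap(grains: int) -> bool:
    # Premise 1: a hundred grains is surely a heap.
    if grains >= 100:
        return True
    # Premise 2, the inductive step: one grain can hardly make a
    # difference, so if n + 1 grains is a heap, then n grains is too.
    return soritical_is_heap(grains + 1)
```

Accept both premises and `soritical_is_heap(1)` comes back true: every positive count of grains is a heap, which is exactly the trouble.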

 

Slime Baking

To make software, you build a machine out of executable formal logic. Let’s call that code a model, including its libraries and compiler, but excluding the software machinic layers below that.

The model has different elements which we represent in programming language structures, usually with names corresponding to our understanding of the domain of the problem. These correspond to phenomena in two ways: parsing, and delegation to an analogue instrument. Parsing is the process of structuring information using formal rules. An analogue instrument from this perspective is a thermostat, a camera, a human user, a rabbit user, or possibly some statistical or computational processes with emergent effects, like Monte Carlo simulations or machine learning autoencoders.

You can imagine any particular software system as a free-floating machine, just taking in inputs and providing outputs over time. Think of a program where all names of classes, functions, variables, button labels, and so on are replaced with arbitrary identifiers like a1, a2, etc. (which does have some correspondence to the processing happening inside a compiler, or during zip compression). We tether this symbolic system to the world by replacing these arbitrary names with ones that have representational meaning in human language, so that users and programmers can navigate the use and internals of the system, and make new versions of it.
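As a toy illustration (both functions are invented for this post), here is the same executable structure with and without its tethering labels:

```python
# The same free-floating machine twice over: identical behaviour,
# but only the second version is tethered to the world by its names.
def a1(a2, a3):
    return [a4 for a4 in a2 if a4 >= a3]


def cows_over_weight(weights_kg, threshold_kg):
    """Cows at or above a weight threshold - same logic, now navigable."""
    return [w for w in weights_kg if w >= threshold_kg]
```

A compiler is indifferent between the two; it is the humans, and the theories of the system they carry, that need the second.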

To make it easier to understand, navigate and change this system, we label its interface and internals with names that have meaning in whatever domain we are using it for. Dairy farm systems will have things named after cows and online bookstores will have data structures representing books.

We have then delegated the problem of representation to the user of the system – a human choosing from a dropdown box, on a web form, for example, does the work of identification for the user+software system. But we run slap bang into the problem of vagueness.

Most of the users of our dairy software will not be on quaint farms in the English countryside owning one cow named Britney, so it will be necessary to represent a herd. How many cows do you need to qualify as a herd? Well, in practice, a programmer will pick a useful bucket data structure, like a set or a list, and name that variable “herd”. Nowadays it would probably be a collection in a standard library, like java.util.HashSet. The concept of an empty collection is a familiar one to programmers; furthermore, there is a specific object to point to called “herd” (the new variable), so a herd is defined to be a data structure with zero or more (whole) cows. Sorites paradox solved <dusts hands>. And unwittingly too.

herd = []
# I refute it thus!

The loose, informal, family resemblance definition of a concept (herd) gets forced into a symbolic structure, like an everyday Python variable, to treat it as an object in a software system. This identification of a concept with a specific software structure is called reification. In the case of a herd (or a heap of sand) the formalism is a fairly uncontroversial net win; after getting over the slightly weird idea of the empty herd, the language may converge around this new, more formal definition, at least in the context of the system. (Or it may not. It is interesting to note the continuing popularity of the shopping cart usability metaphor, a concrete physical container that can be empty, rather than, say, a pile of books that is allowed to have zero books in it.)

The sorites might be thought of as a limiting case of vagueness, due to the deliberate simplicity of the concept involved (one type of thing, one collection of it). There are much messier cases. Keith Braithwaite points out that software is built on a foundation of universal distinguished types, a habit of thought constantly emphasized in scientific and engineering training. People without that training tend instead to organize their thinking around representative examples, and to categorize by what Wittgenstein called family resemblance, i.e., sharing a number of roughly similar properties. Accordingly, Braithwaite suggests foregrounding examples as a shared artifact for discussion between programmers and users, and using legible, executable examples, as in Behaviour Driven Development (BDD).
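A minimal sketch of that idea in plain Python, with a hypothetical is_proper_herd rule and test names a domain expert could read aloud:

```python
# Executable examples as the shared artifact: each test states one
# concrete case a farmer could read and argue with, rather than a
# universal rule. (is_proper_herd is an invented example rule.)
def is_proper_herd(cows):
    return len(cows) >= 2


def test_two_cows_in_a_field_count_as_a_herd():
    assert is_proper_herd(["Britney", "Daisy"])


def test_a_lone_cow_is_not_yet_a_herd():
    assert not is_proper_herd(["Britney"])
```

Disagreement over an example surfaces as a failing, discussable test, rather than as a buried definitional choice.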

Example-driven reasoning is also a survival technique in an environment lacking clearly distinguishable universal rules. Training in the physical sciences emphasizes the wonderful discovery of universal physical laws such as those for gravity or electrical charge. Biologists are more familiar with domains where simple universal laws do not have sufficient explanatory power, and additional, much more local rules are the only navigational aids possible. Which is to say, non-scientific exemplary reasoning was likely rational in the context it evolved in; additionally, there are many times in science and engineering when we cannot solve problems using universal rules. William Wimsatt names these conditions of highly localized rules “ontological slime”, and the complex feedback mechanisms that accompany them “causal thickets”. He points out that even if you think an elegant theory of everything is somehow possible, we have to deal with the world today, where there definitely isn’t one to hand, but ontological slime everywhere.

Readers who have built software for organizations may see where this is going. It’s not that (fairly) universal rules are unknown to organizations, but that rules run the gamut from wide generality right down to ontological slime, with people in organizations usually navigating vagueness by rule-of-thumb and exemplar-based categories which don’t form distinguished types. Additionally, well-organized domains of knowledge often intersect in organizations in idiosyncratic ways. For example, a hospital has chemical, electrical and water systems, many different medical domains, radioactive and laser equipment, legal and regulatory codes, and financial constraints to work within. And so the work of software development proceeds, one day accidentally solving custom sorites paradoxes, the next breaking everything by squeezing a twenty-nine sided Escher tumbleweed peg into a square hole.

 

Lunch

For software applications written for a domain especially, software acts as a model of the world. This relation even holds for a great deal of “utility” software – software that other software is built on. An operating system needs to both use and provide functions dealing with time, for example, which has a lot more domain quirks than you might at first think.

Model is a specific jargon term in philosophy of science, and the use here is deliberate. For most software, the software : world relation is a close relative of the model : world relation in science. The image of code running without labels, untethered to the world, above, is an adaptation of an image from philosopher Chuang Liu: a map, showing only a selected structure, without labels or legend. We use natural language in all its power and ambiguity to attach labels to structures. This relation is organized according to a theory. Michael Weisberg calls the description, in the light of the theory, of how the world maps and doesn’t map to the model, a construal. Unlike scientific theories, the organizing theory for a software application is rarely carefully stated or specifically taught. So individual users and programmers build their own specific theory of the system as they work, and their own construals to go with them.

Software is not just a model: it’s also an instrument through which users act. The world it models is changed by its use, much more directly than for scientific models. Most observably, the world changes to be more like the model in software. Software also changes frequently. New versions chase changes in the world, including those conditioned by earlier versions of the software, in a feedback spiral. (Donald Mackenzie calls this “Barnesian performativity” when discussing economic models, the CCRU called it “hyperstition” when discussing fiction, and Brian Goetz and friends call it “an adventure in iterative specification discovery” when discussing programming.)

It is this feedback spiral which can eliminate ambiguity in terms by identifying them with exactly their use in software, thereby solving the sorites paradox in a stronger sense. It becomes meaningless to talk about an artifact outside its software context. We don’t argue about whether we have a pile of email, as it is obvious it is a container with one limit at Inbox Zero. This is one sense in which software can be said to be “eating the world”: by realigning the way a community sees and describes it.

There are other forms of software / language / world feedback, including ones that destroy meaning, dissolve formal definitions and create ambiguity. It’s often desirable, but perhaps not always, to collapse definitions into precise model-instrumented formality. Reifying an ambiguous concept by collapsing a sorites paradox into a concrete machine component is simply one process to be aware of when building software; an island of sediment in a river of slime.

References

Braithwaite – Things: how we think of them, what that means for building systems https://www.darkpeakconsulting.co.uk/blog/things-how-we-think-of-them-what-that-means-for-building-systems
Goetz et al – Java Concurrency In Practice
Hyde and Raffman – Sorites Paradox https://plato.stanford.edu/entries/sorites-paradox/
Liu – Fictionalism, Realism, and Empiricism on Scientific Models http://philsci-archive.pitt.edu/11162/
Mackenzie – An Engine, Not A Camera: How Financial Models Shape Markets
Visee – Falsehoods Programmers Believe About Time https://gist.github.com/timvisee/fcda9bbdff88d45cc9061606b4b923ca
Weisberg – Simulation and Similarity
Wimsatt – Re-Engineering Philosophy For Limited Beings