An Anatomical Sketch of Software As A Complex System

As intellectually awkward artifacts that open up new capabilities, yet are surprising, frustrating and costly in other ways, and that regularly confound our physical intuitions about their behaviour, software systems meet an everyday language definition of complexity. A more systematic comparison, presented here, shows a significant family resemblance. Complexity science studies features common to systems across a number of fields, and using that framework to analyse software engineering could allow a more precise technical understanding of these software problems.

This isn’t a unique thought. Various approaches, such as David Snowden’s Cynefin framework, have used complexity science as a source of insight on software development. Herbert Simon, in works like “Sciences of the Artificial”, helped build complexity science itself, with software programs and Good Old Fashioned AI as reference points. Famous papers such as Parnas et al’s “The Modular Structure of Complex Systems” also point the same way. When I was introduced to this material, though, I couldn’t find a recent reference that lined up the features of complex systems with modern software in a brief and systematic way. These notes attempt that, in the form of an anatomical sketch.

This note considers software systems with many internal models and at least thousands of lines of code, rather than the shorter programs analysed in formal detail. That places it under software engineering more than formal computer science, without intending any strict break from the latter. Likewise, by default, it addresses consciously engineered software rather than machine learning. This complexity also differs from the algorithmic time complexity captured by big-O notation, though there may be interesting formal connections to explore there too.


Anatomical Sketch

Ladyman et al give seven features of complex systems, and I’ve added one more from Crutchfield.

1. Non-linearity

Software exhibits non-linearity in the small and in the large. Every ‘if’ condition, implicit or explicit, represents a discontinuity: a small change in input or state can produce an entirely different output. This is most obvious in responses to unexpected input or state: error and exit, segmentation fault, stack trace, NullPointerException.
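A minimal sketch of that discontinuity (the function and inputs here are invented for illustration): removing a single element from the input flips the routine from a normal result to an abrupt failure, a step change rather than a proportional one.

import statistics

def average_rating(ratings):
    # Implicit 'if': the library raises when the list is empty.
    return statistics.mean(ratings)

print(average_rating([4, 5, 3]))  # 4
print(average_rating([]))         # StatisticsError: tiny input change, discontinuous output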

2. Feedback

From a use perspective, many software systems are part of a feedback loop with users and the world, and this feedback often involves internal software state.

From an engineering perspective, all software systems beyond a trivial size are built in cycles where the current state of a codebase is a rich input into the next cycle of engineering. This is true whether iterative software development methodologies are used or not. For instance, consider bug fixes resulting from a test phase in waterfall.

3. Spontaneous Order

Spontaneous order is not a feature of large software systems. If anything, the usual condition of engineering large software systems is constant, deliberate work to maintain order against a tendency for these systems to slide into entropy, or complicated disorder. The ideas of ‘software crisis’ and ‘technical debt’ are both reactions to a perceived lack of order in engineered software.

4. Robustness and lack of central control

In the small, or even at the level of the individual system, software tends to brittleness, as noted above. Robustness, being “stable under perturbations of the system” (Ladyman), must be specifically engineered in, by considering a wide variety of inputs and states and testing the system under those conditions. However, certain software ecosystems, such as the TCP/IP substrate of the Internet, display great robustness. Individual websites go down, but the whole Internet or World Wide Web tends not to. This is related to the choice of a highly distributed architecture based on relatively simple, standard protocols and design guidelines like Postel’s principle (be tolerant in what you accept and strict in what you send). Like a flock of birds, the lack of central control makes the system tolerant of local failure. High-availability systems make use of similar principles of redundancy.
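A sketch of Postel’s principle applied to a single field (the formats and function name are invented for illustration): accept several date spellings seen in the wild, but always emit one strict form.

import re

def to_iso_date(text):
    # Tolerant in what we accept: several spellings seen in the wild...
    text = text.strip()
    m = re.match(r"^(\d{4})-(\d{2})-(\d{2})$", text)      # 2024-01-31
    if m:
        return "-".join(m.groups())
    m = re.match(r"^(\d{2})/(\d{2})/(\d{4})$", text)      # 31/01/2024
    if m:
        day, month, year = m.groups()
        return f"{year}-{month}-{day}"
    m = re.match(r"^(\d{4})(\d{2})(\d{2})$", text)        # 20240131
    if m:
        return "-".join(m.groups())
    # ...but strict in what we send on.
    raise ValueError(f"unrecognised date: {text!r}")

print(to_iso_date(" 31/01/2024 "))  # 2024-01-31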

5. Emergence

Software systems tend not to exhibit emergent behaviours as highly visible features, in the way that, say, a flock of birds assumes a particular overall shape when each bird follows simple rules about its position relative to its neighbours. Certain important, less visible features are emergent, though. Leveson, in Engineering A Safer World, argues that system safety (including software safety) is an emergent feature: “Determining whether a plant is acceptably safe is not possible, for example, by examining a single valve in the plant. In fact, statements about the ’safety of the valve’ without information about the context in which that valve is used are meaningless.” Difficult bugs in established software systems are often multi-causal, emerging from systemic interactions between components rather than isolated failures.

Conway’s law, the observation that a software system’s internal component structure mirrors the team structure of the organisation that created it, describes system shape emerging from social structure without explicit causal rules.

6. Hierarchical organisation

Formal models of computation did not originally differentiate between parts of a program; the Turing machine and the Church lambda calculus do not even distinguish between program and data. Many of the advances in software development have, by contrast, been tools for structuring programs into hierarchies and differing levels of abstraction. A reasonable history of programming could be told simply through differentiated structure, e.g.:

  • Turing machines / Church lambda calculus
  • Von Neumann machine: separation of program, data, input, output
  • MIT Summer Session Computer: named instructions
  • Hopper: A-0, the first compiler
  • Backus: FORTRAN control structures IF and DO
  • ALGOL: block structure and functions
  • Parnas: module decomposition through information hiding
  • Smalltalk: object orientation
  • Codd: relational databases
  • GoF: design patterns
  • Beck: xUnit automated unit testing
  • Fowler: refactoring for improved structure
  • Maven: systematic library dependency management

Navigating program hierarchy, from user interface through domain libraries to system libraries and services, is a significant, even dominant, proportion of modern programming work (from personal observation, though a quantified study should be possible).

7. Numerosity (Many more is different)

The techniques for navigating, designing and changing a codebase of hundreds of classes differ from those for a short script, at least partly due to the limitations of human memory and attention. An early recognition of this is Benington’s “Production of Large Computer Programs”; a more recent one is Feathers’ “Working Effectively With Legacy Code”, which states: “As the amount of code in a project grows, it gradually surpasses understanding”.

8. Historical information storage

“Structural complexity is the amount of historical information that a system stores” according to Crutchfield. This is relevant for both use- and engineering-time views of software systems.

In use, the amount of state stored by a software system is historical information in this sense. An example might be a hospital patient record database. A subtlety here is that suggested measures of complexity based on amounts of information (such as Kolmogorov complexity) tend to specify maximum compression: simply allocating several blank terabytes of disk isn’t enough. This also covers implicit forms of complexity, such as dependencies in code on particular structures in data. Contrast a hospital database alone (just records and basic SQL) with the same database together with software that provides a better user interface and imposes rules on how records may be updated to suit the procedures of the hospital.
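A toy sketch of the compression point (the data is invented for illustration): compressed size estimates stored information, so blank allocation counts for almost nothing, while varied records count for a lot more.

import zlib

blank = b"\x00" * 100_000                       # allocated, but empty of history
records = b"".join(
    f"patient:{i:06d};ward:{i % 9}\n".encode()  # varied, structured records
    for i in range(3_000)
)

print(len(zlib.compress(blank)))    # ~100 bytes: near-zero information
print(len(zlib.compress(records)))  # thousands of bytes: real stored history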

Source control history poses the engineering-time version of the same problem. In practice, when extending or maintaining a system, classes are rarely replaced wholesale or deleted; new classes are added, or existing classes modified, to add functionality. The existing code is always an input to the new state of the code for the programmer making the change, even where the existing code is left untouched. Welsh even declared, in a paper of that name, that “Software is history!”

The result, regardless, is increasing historical information in a codebase over time, and therefore complexity.


References
Conway – How Do Committees Invent? Datamation 1968
Crutchfield – Five Questions on Complexity, Responses
Feathers – Working Effectively With Legacy Code, 2004
Ladyman, Lambert, Wiesner – What Is A Complex System?
Leveson – Engineering a Safer World, Chapter 3 p64, 2011
Parnas – On The Criteria To Be Used in Decomposing Systems into Modules, Communications of the ACM, 1972
Parnas, Clements, Weiss – The Modular Structure of Complex Systems, IEEE Transactions on Software Engineering, 1985
Postel – RFC 761: Transmission Control Protocol, https://tools.ietf.org/html/rfc761; https://en.wikipedia.org/wiki/Robustness_principle
Simon – Sciences of the Artificial
Snowden and Boone – A Leader’s Framework For Decision Making (Cynefin)
Welsh – Software Is History!

Just Like Reifying A Dinner

Closing the Sorites Door After The Cow Has Ambled

The Last Instance has an interesting, pro-slime response to my recent musings on the sorites paradox. TLI offers a more nuanced herd example in Kotlin, explicitly modelling the particularity of empty herds and herds of one cow, as well as herds of two or more cows, along with some good thoughts on which code-wrangling metaphors we should keep to hand.

It’s a better code example in a number of ways, as it suggests a more deliberate alignment between domain jargon and the model captured in code. It includes a compound type with distinct Empty and Singleton subtypes.

But notice that we have re-introduced the sorites paradox by the back-door: the distinction between a proper herd and the degenerate cases represented by the empty and singleton herds is based on a seemingly-arbitrary numeric threshold.

Probably in my rhetorical enthusiasm for the reductive case (herd=[]), the nuance of domain alignment was lost. I don’t agree that this new example brings the sorites paradox in by the back door, though. There is a new ProperHerd type that always has two or more members. Fixing a precise threshold removes the ambiguity, and the sorites paradox disappears as before. Within this code, you can always work out whether something is a Herd, and which subtype (Empty, Singleton, or ProperHerd) it belongs to. It even hangs a lampshade on the bullet-biting philosophical move of admitting that the empty herd exists.
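A minimal sketch of that structure (TLI’s original is in Kotlin; this is a rough Python rendering, with invented cow names):

from dataclasses import dataclass

class Herd:
    """Base type: every value below is a Herd of some definite subtype."""

@dataclass
class Empty(Herd):
    pass  # the philosophically awkward, but precise, empty herd

@dataclass
class Singleton(Herd):
    cow: str

@dataclass
class ProperHerd(Herd):
    cows: list  # invariant: two or more members
    def __post_init__(self):
        if len(self.cows) < 2:
            raise ValueError("a ProperHerd has at least two cows")

def make_herd(cows):
    # The threshold is precise, so classification is always decidable.
    if len(cows) == 0:
        return Empty()
    if len(cows) == 1:
        return Singleton(cows[0])
    return ProperHerd(cows)

print(make_herd([]))                   # Empty()
print(make_herd(["britney"]))          # Singleton(cow='britney')
print(make_herd(["daisy", "clover"]))  # ProperHerd(cows=['daisy', 'clover'])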

Though you can imagine attempts to capture more of this ambiguity in code – overlapping categories of classification, and so on – there would ultimately have to be some series of perhaps very complicated disambiguating rules for formal symbolic processing to work. Insofar as something like deep learning doesn’t fit that description, because it holds a long vector of fractional weights against unlabelled categories, it isn’t symbolic processing, even though it may be implemented on top of a programming language.

Team Slime

I don’t think a programmer should take too negative a view of ontological slime. Part of this is practical: it’s basically where we live. Learning to appreciate the morning dew atop a causal thicket, or the waves of rippling ambiguity across a pond of semantic sludge, is surely a useful mental health practice, if nothing else.

Part of the power of Wimsatt’s slime term, to me, is the sense of ubiquity it gives. Especially in software, with its everyday entanglement in human societies and institutions, general rules are the exception. Once you find them, they are one of the easy bits. Software is made of both planes of regularity and vast quantities of ontological slime. I would even say ontological slime is one of Harrison Ainsworth’s computational materials, though laying that out requires a separate post.

Wimsatt’s slime just refers to a region of dense, highly local, causally entangled rules. Code can be like that, even while remaining a symbolic processor: spaghetti code is slimy, and a causal thicket. Software can also be ontological slime because parts of the world are like slime. Beyond a certain point, a particular software system might just need to suck that up and model a myriad of local rules. As TLI says:

The way forward may be to see slime itself as already code-bearing, rather as one imagines fragments of RNA floating and combining in a primordial soup. Suppose we think of programming as refining slime, making code out of its codes, sifting and synthesizing. Like making bread from sticky dough, or throwing a pot out of wet clay.

And indeed, traditionally female-gendered perspectives might be a better way to understand that. Code can often use mending, stitching, baking, rinsing, plucking, or tidying up. (And perhaps you have to underline your masculinity when explaining the usefulness of this: Uncle Bob Martin and the Boy Scout Rule, like the performative super-blokiness of TV chefs.) We could assemble a team: as well as Liskov, we could add the cyberfeminist merchants of slime from VNS Matrix, and the great oceanic war machinist herself:

“It’s just like planning a dinner,” explains Dr. Grace Hopper, now a staff scientist in system programming for Univac. (She helped develop the first electronic digital computer, the Eniac, in 1946.) “You have to plan ahead and schedule everything so it’s ready when you need it. Programming requires patience and the ability to handle detail. Women are ‘naturals’ at computer programming.”

Hopper invented the first compiler: an ontology-kneading machine. By providing machine-checkable names that correspond to words in natural language, it constructs attachment points for theory construals, stabilising them and making it easier for theories to be rebuilt and shared by others working on the same system. Machine code – dense, and full of hidden structure – is a rather slimy artifact itself. Engineering an ontological layer above it – the programming language – is, like the anti-sorites, a slime refinement manoeuvre.

To end on that note seems too neat, though, too much of an Abstraction Whig History. To really find the full programmer toolbox, we need to learn not just reification, decoupling, and anti-sorites, but when and how to blend, complicate and slimify as well.

Heaps of Slime

The sorites paradox is a fancy name for a stupid-sounding problem. It’s a problem of meaning, of the kind software developers have to deal with all the time, and also of the kind software generates all the time. It’s a pervasive, emergent property of formal and informal languages.

You have a heap of sand. One grain of sand is not a heap. You take away one grain of sand. One grain of sand makes little difference – so you still have a heap of sand.

You have a grain of sand. You add another grain. Two grains of sand are surely not a heap. You add another. Three grains of sand are not a heap.

If you add only a grain or take away only a grain of sand, since one grain of sand can hardly make a difference, how do you tell when you have a heap?

That’s the paradox. The Stanford Encyclopedia of Philosophy has a more comprehensive historical overview.


Slime Baking

To make software, you build a machine out of executable formal logic. Let’s call that code a model, including its libraries and compiler, but excluding the software machinic layers below that.

The model has different elements, which we represent in programming language structures, usually with names corresponding to our understanding of the problem domain. These elements correspond to phenomena in two ways: parsing, and delegation to an analogue instrument. Parsing is the process of structuring information using formal rules. An analogue instrument, from this perspective, is a thermostat, a camera, a human user, a rabbit user, or possibly some statistical or computational process with emergent effects, like a Monte Carlo simulation or a machine learning autoencoder.
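A minimal sketch of the parsing half (the ear-tag format here is invented, not a real standard): formal rules either impose structure on raw text or reject it outright.

import re

TAG = re.compile(r"^(?P<country>[A-Z]{2})(?P<number>\d{6})$")

def parse_ear_tag(text):
    # Formal rules: structure the input, or refuse it.
    m = TAG.match(text.strip().upper())
    if m is None:
        raise ValueError(f"not an ear tag: {text!r}")
    return m.groupdict()

print(parse_ear_tag("uk123456"))  # {'country': 'UK', 'number': '123456'}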

You can imagine any particular software system as a free-floating machine, just taking in inputs and providing outputs over time. Think of a program where all names of classes, functions, variables, button labels, and so on are replaced with arbitrary identifiers like a1, a2, etc. (which has some correspondence to the processing that happens inside a compiler, or during zip compression). We tether this symbolic system to the world by replacing these arbitrary names with ones that have representational meaning in human language, so that users and programmers can navigate the use and internals of the system, and make new versions of it.
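That thought experiment can even be run mechanically. A rough sketch (the sample fragment and class name are invented; Python 3.9+ for ast.unparse): walk a program’s syntax tree and replace every meaningful name with an arbitrary one, leaving only structure behind.

import ast

class Anonymize(ast.NodeTransformer):
    def __init__(self):
        self.aliases = {}

    def _alias(self, name):
        # Hand out a1, a2, ... in order of first appearance.
        if name not in self.aliases:
            self.aliases[name] = f"a{len(self.aliases) + 1}"
        return self.aliases[name]

    def visit_FunctionDef(self, node):
        node.name = self._alias(node.name)
        self.generic_visit(node)
        return node

    def visit_arg(self, node):
        node.arg = self._alias(node.arg)
        return node

    def visit_Name(self, node):
        node.id = self._alias(node.id)
        return node

source = "def add_cow(herd, cow):\n    herd.append(cow)\n    return herd\n"
print(ast.unparse(Anonymize().visit(ast.parse(source))))
# def a1(a2, a3):
#     a2.append(a3)
#     return a2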

To make it easier to understand, navigate and change this system, we label its interface and internals with names that have meaning in whatever domain we are using it for. Dairy farm systems will have things named after cows and online bookstores will have data structures representing books.

We have then delegated the problem of representation to the user of the system – a human choosing from a dropdown box, on a web form, for example, does the work of identification for the user+software system. But we run slap bang into the problem of vagueness.

Most of the users of our dairy software will not be on quaint farms in the English countryside owning one cow named Britney, so it will be necessary to represent a herd. How many cows do you need to qualify as a herd? Well, in practice, a programmer will pick a useful bucket data structure, like a set or a list, and name that variable “herd”. Nowadays it would probably be a collection from a standard library, like java.util.HashSet. The concept of an empty collection is familiar to programmers, and there is now a specific object to point to called “herd” (the new variable), so a herd is defined to be a data structure with zero or more (whole) cows. Sorites paradox solved <dusts hands>. And unwittingly too.

herd = []
# I refute it thus!

The loose, informal, family resemblance definition of a concept (herd) gets forced into a symbolic structure, like an everyday Python variable, to treat it as an object in a software system. This identification of a concept with a specific software structure is called reification. In the case of a herd (or a heap of sand) the formalism is a fairly uncontroversial net win; after getting over the slightly weird idea of the empty herd, the language may converge around this new, more formal definition, at least in the context of the system. (Or it may not. It is interesting to note the continuing popularity of the shopping cart usability metaphor, a concrete physical container that can be empty, rather than, say, a pile of books that is allowed to have zero books in it.)

The sorites might be thought of as a limiting case of vagueness, due to the deliberate simplicity of the concept involved (one type of thing, one collection of it). There are much messier cases. Keith Braithwaite points out that software is built on a foundation of universal distinguished types, a style of thinking that is also a constant emphasis of training in science and engineering. People without that training tend instead to organise their thinking around representative examples, and to categorise by what Wittgenstein called family resemblance, i.e., sharing a number of roughly similar properties. Accordingly, Braithwaite suggests foregrounding examples as a shared artifact for discussion between programmers and users, and using legible, executable examples, as in Behaviour Driven Development (BDD).
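A scaled-down sketch of that suggestion (names invented; a parametrized test standing in for full Gherkin-style BDD): concrete examples become the legible, executable artifact shared between programmers and users.

import pytest

def is_herd(cows):
    return len(cows) >= 2  # the reified rule under discussion

# Concrete, legible examples as the shared artifact.
@pytest.mark.parametrize("cows, expected", [
    ([], False),                  # the empty herd
    (["britney"], False),         # one quaint cow
    (["daisy", "clover"], True),  # the smallest proper herd
])
def test_is_herd(cows, expected):
    assert is_herd(cows) == expected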

Example-driven reasoning is also a survival technique in an environment lacking clearly distinguishable universal rules. Training in the physical sciences emphasises the wonderful discovery of universal physical laws, such as those for gravity or electrical charge. Biologists are more familiar with domains where simple universal laws do not have sufficient explanatory power, and where additional, much more local rules are the only navigational aids possible. Which is to say, non-scientific exemplary reasoning was likely rational in the context it evolved in; and besides, there are many times in science and engineering when we cannot solve problems using universal rules. William Wimsatt names these conditions of highly localised rules “ontological slime”, and the complex feedback mechanisms that accompany them “causal thickets”. He points out that even if you think an elegant theory of everything is somehow possible, we have to deal with the world today, where there definitely isn’t one to hand, but ontological slime everywhere.

Readers who have built software for organizations may see where this is going. It’s not that (fairly) universal rules are unknown to organizations, but that rules run the gamut from wide generality right down to ontological slime, with people in organizations usually navigating vagueness by rule-of-thumb and exemplar-based categories which don’t form distinguished types. Additionally, well-organized domains of knowledge often intersect in organizations in idiosyncratic ways. For example, a hospital has chemical, electrical and water systems, many different medical domains, radioactive and laser equipment, legal and regulatory codes, and financial constraints to work within. And so the work of software development proceeds, one day accidentally solving custom sorites paradoxes, the next breaking everything by squeezing a twenty-nine sided Escher tumbleweed peg into a square hole.


Lunch

Software acts as a model of the world, especially in applications written for a particular domain. This relation even holds for a great deal of “utility” software – software that other software is built on. An operating system needs to both use and provide functions dealing with time, for example, and time has a lot more domain quirks than you might at first think.
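One quick illustration of those quirks (the dates are chosen around the 2024 UK clock change; Python 3.9+ for zoneinfo): even “a day is 24 hours” fails once time zones are involved.

from datetime import datetime
from zoneinfo import ZoneInfo

tz = ZoneInfo("Europe/London")
# Clocks went forward on 31 March 2024, so that local day has 23 hours.
start = datetime(2024, 3, 31, tzinfo=tz)
end = datetime(2024, 4, 1, tzinfo=tz)
print(end - start)  # 23:00:00, not a 24-hour day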

Model is a specific jargon term in philosophy of science, and the use here is deliberate. For most software, the software : world relation is a close relative of the model : world relation in science. The image of code running without labels, untethered to the world, above, is an adaptation of an image from philosopher Chuang Liu: a map, showing only a selected structure, without labels or legend. We use natural language in all its power and ambiguity to attach labels to structures. This relation is organized according to a theory. Michael Weisberg calls the description, in the light of the theory, of how the world maps and doesn’t map to the model, a construal. Unlike scientific theories, the organizing theory for a software application is rarely carefully stated or specifically taught. So individual users and programmers build their own specific theory of the system as they work, and their own construals to go with them.

Software is not just a model: it’s also an instrument through which users act. The world it models is changed by its use, much more directly than for scientific models. Most observably, the world changes to be more like the model in software. Software also changes frequently. New versions chase changes in the world, including those conditioned by earlier versions of the software, in a feedback spiral. (Donald Mackenzie calls this “Barnesian performativity” when discussing economic models, the CCRU called it “hyperstition” when discussing fiction, and Brian Goetz and friends call it “an adventure in iterative specification discovery” when discussing programming.)

It is this feedback spiral that can eliminate ambiguity in terms, by identifying them with exactly their use in software, thereby solving the sorites paradox in a stronger sense: it becomes meaningless to talk about the artifact outside its software context. We don’t argue about whether we have a pile of email, as it is obviously a container with one limit at Inbox Zero. This is one sense in which software can be said to be “eating the world”: by realigning the way a community sees and describes it.

There are other forms of software / language / world feedback, including ones that destroy meaning, dissolve formal definitions and create ambiguity. It’s often desirable, but perhaps not always, to collapse definitions into precise model-instrumented formality. Reifying an ambiguous concept by collapsing a sorites paradox into a concrete machine component is simply one process to be aware of when building software; an island of sediment in a river of slime.

References

Braithwaite – Things: how we think of them, what that means for building systems https://www.darkpeakconsulting.co.uk/blog/things-how-we-think-of-them-what-that-means-for-building-systems
Goetz et al – Java Concurrency In Practice
Hyde and Raffman – Sorites Paradox https://plato.stanford.edu/entries/sorites-paradox/
Liu – Fictionalism, Realism, and Empiricism on Scientific Models http://philsci-archive.pitt.edu/11162/
Mackenzie – An Engine, Not A Camera: How Financial Models Shape Markets
Visee – Falsehoods Programmers Believe About Time https://gist.github.com/timvisee/fcda9bbdff88d45cc9061606b4b923ca
Weisberg – Simulation and Similarity
Wimsatt – Re-Engineering Philosophy For Limited Beings