Industrializing The Noosphere

Control Environment

We are not practicing Continuous Delivery. I know this because Jez Humble has given a very clear test. If you are not checking your code into trunk at least once every day, says Jez, you are not doing Continuous Integration, and that is a prerequisite for Continuous Delivery. I don’t do this; no one on my team does it; no one is likely to do it any time soon, except by an accident of fortuitous timing. Nevertheless, his book is one of the most useful books on software development I have read. We have used it as a playbook for improvement, and the result has been highly effective software delivery. Our experience is one small example of an interesting historical process, which I’d like to sketch in somewhat theoretical terms. Software is a psychologically intimate technology. Much as described in Gilbert Simondon’s work on technical objects, building software has evolved from a distractingly abstract high modernist endeavour to something more dynamic, concrete and useful.

The term software was co-opted, in a computing context, around 1953, and had time to grow only into an awkward adolescence before being declared, fifteen years later, to be in crisis. Barely had we become aware of software’s existence before we found it to be a frightening, unmanageable thing, upsetting our expected future of faster rockets and neatly ordered suburbs. Many have noted that informational artifacts have storage and manufacturing costs approaching zero. I remember David Carrington, in one of my first university classes on computing, noting that as a consequence of this, software maintenance is fundamentally a misnomer. What we speak of as maintenance in physical artifacts, the replacement of entropy-afflicted parts with equivalents within the same design, is a nonsense in software. The closest analogue might be system administration activities like bouncing processes and archiving logfiles. What we call (or once called) maintenance is revising our understanding of the problem space.

Software has an elastic fragility to it, pliable, yet prone to the discontinuities and unsympathetic precision of logic. Lacking an intuition of digital inertia, we want to change programs about as often as we change our minds, and get frustrated when we cannot. In their book Continuous Delivery, Humble and Farley say we can change programs like that, or at least change live software many times a day, so that software development is not the bottleneck in product development.

With this approach, we see a rotation and miniaturisation of mid-twentieth century models of software development. The waterfall is turned on its side.


Test Driven Development As Computational Positivism

Test-driven development is a craft answer to a seemingly quite formal theoretical problem: verification and validation. This can be seen as a vernacular versus an institutional architectural style. We can also compare it to different styles of science. When considering historical astronomy, Roddam Narasimha interprets radically empirical approaches as a different style of science, which he terms “computational positivism”.

[W]hile the Indian astronomer found the epicycle a useful tool of representation, he would cheerfully abandon the classical epicycle if he found something which was more efficient or led to a shorter algorithm and to better agreement with observation. For somebody subscribing to Greek ideals, however, this attitude would presumably seem sacrilegious – the rejection of the epicycle would question the basic assumption that the circle is a perfect figure preferred by nature, and hence precipitate a major philosophical crisis.
Narasimha, Axiomatism and Computational Positivism

Or in more familiar terms, the approach of the Keralan School was to discount theories and models and go with whichever calculation was most predictive.

In computing, the problem of verification and validation had a clear academic answer. It came from the same school of thought – Dijkstra, Knuth, et al – that delivered major algorithmic and language design innovations, many of which (e.g. structured programming) were unintuitive conceptual shifts for the working programmer. The answer was formal methods, with the classic citation being Carroll Morgan’s Programming from Specifications. I was trained this way and remain sympathetic, but it hasn’t penetrated everyday practice. (Granted, I should read more of the latest work, but I suspect formal methods suffered from the same misreading of the software crisis as other spec-centric high-modernist schemes. I also suspect they will live again in another form, both good topics for another day.)

Test-driven development, by contrast, saw rapid uptake among working programmers, even if it falls short of universal. Formal proof and TDD aren’t equivalent techniques, as one is deductive and one inductive. Proof is also a stronger standard. Both involve some degree of upfront investment, but unit tests are much cheaper to interweave into an existing codebase or to introduce incrementally. One way to think of TDD is as a vernacular style suited to an artisan scale, in contrast with the high-cost institutional approach of formal methods. It’s the caravan or service station to the institutional architecture of refinement from specifications.

This is the hoary old craft-practical vs academic-theoretical dialectic. It’s useful, but obscures as well as reveals. Formal methods practitioners never saw proof as replacing testing, let alone automated testing. On the other side, looking at JUnit as an example, the originators like Kent Beck and Erich Gamma were hardly without university links, and their Smalltalk background wasn’t mainstream. 

This is where Narasimha’s typology seems to apply: not doing science without theory, but doing science in an alternative mode. One of the striking aspects of TDD as described in the original JUnit essays, like Test Infected, is the relaxed attitude they take to the ultimate implementation. This is computational positivism – the only measure is whether the formula hits the requisite number of data points. The symbolic coherence of the solution itself is not granted any weight; the code can be spaghetti. Aesthetically displeasing, perhaps, but there’s an argument that this empirical rigour is more scientifically sound.
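To make that concrete, here is a minimal JUnit 4 sketch in the Test Infected spirit. The RomanNumerals class and its hard-coded lookup table are my own invention for illustration, not anything from the original essays; the point is that the test bar only cares whether the code hits the observed data points, not how gracelessly it does so.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // Deliberately inelegant implementation: a hard-coded lookup table rather
    // than any general algorithm. Computational positivism doesn't mind, so
    // long as the bar stays green. (Class and table invented for this sketch.)
    class RomanNumerals {
        private static final String[] TABLE = {"", "I", "II", "III", "IV", "V", "VI"};

        static String fromInt(int n) {
            return TABLE[n];
        }
    }

    public class RomanNumeralsTest {
        // The tests pin the code to observed data points; they say nothing
        // about the symbolic coherence of whatever lies behind them.
        @Test
        public void convertsSmallNumbers() {
            assertEquals("I", RomanNumerals.fromInt(1));
            assertEquals("IV", RomanNumerals.fromInt(4));
            assertEquals("VI", RomanNumerals.fromInt(6));
        }
    }

The lookup table is a perfectly respectable way to keep this bar green; only when new observations arrive does anyone need a better theory.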

The experience of using unit testing widely across a codebase usually reveals a different emergent property. Decomposing a system into testable components actually tends to improve the overall coherence of the design.

Though Keralan astronomy was superior to European astronomy for some hundreds of years, post-Newtonian astronomy finally trumped it for precision, together with a very powerful corresponding theory. The right test for such an ambitious scheme in software would be a computationally positivist one, just as for all the bodgy solutions that preceded it: sure it’s pretty, but does it keep the test bar green?

The Consensus Reality Based Community


1. There’s a concept from science fiction criticism which has become a favourite of mine. Indeed it seems fundamental to this 21st century glocal postmodernity of ours, the concept of consensus reality.
1.1 It is worth remembering that this consensus often refers to the beliefs of the society in the work under criticism, in which marmalade may be money, spaceships may fly faster than light, and handheld communicators with vid screens may be ubiquitous.

2. The idea of consensus reality neatly captures several insights.
2.1 Reality proper, what Kant called the unsynthesized manifold, is unavoidably mediated by our senses and brain.
2.2 Our model of the world is socially constructed by a group we live in.
2.3 Powerful institutions of mainstream thought – like large newspapers – work within certain parameters of perception.
2.3.1 The first page of search engine results is representative. Search engines are consensus reality engines. Common sense engines, in Bruce Sterling’s words.
2.4 Something in the consensus is inevitably and always wrong.
2.4.1 The consensus contains arguments with known for and against positions.
2.4.1.1 The argument itself can be wrong, irrelevant, a meaningless side effect, not resolvable as either pro or con, etc.
2.5 Broad consensus realities often have enduring correlations with events.
2.6 Consensus is reinforced by breadth.

3. Kuhn’s concept of a scientific paradigm resembles a consensus reality, but is far more systematic.
3.1 Consensus reality includes cultural convention and everyday discussion including obvious internal logical contradictions.
3.2 Consensus reality is intuitive.
3.3 Consensus reality admits surprises – chance events – but not unanticipated ones.
3.3.1 “Black swans” are demonstrations of consensus reality.
3.3.2 Commuting to work is also demonstrative.

4. A reality based community responds to empirical sense-data.
4.1 Measures.
4.2 Adjusts in response to changes in data.
4.3 Follows technique.
4.3.1 Technique may be systematic. It may have a model.
4.3.1.1 The model may be tested empirically and systematically.
4.3.1.2 One might use a randomised controlled trial, or survey, or historical data source, or blind peer review.
4.4 Reality based communities survive by adaptation.
4.5 Strongly reality based communities would necessarily be scientific communities.
4.5.1 No serious political community today is also a scientific community.
4.5.1.1 Establishing professional pools of expertise for these processes is necessary but not sufficient.
4.5.1.1.1 Any such group analysing a public problem is inherently political.
4.5.1.1.2 This is technocracy.

5. The consensus reality based community is always broad, often well-established and always vulnerable to disruption of its reality.
5.1 This is the nature of Karl Rove’s insult.
5.1.1 By always anchoring themselves in well established consensus reality, Rove’s opponents fail to react to events initiated by his faction which change the broad understanding of reality.
5.1.2 Rove’s faction has since, with amusing consistency, repeatedly shown itself not to be reality based.
5.1.2.1 This faction acts as an alternative consensus reality based community.
5.1.3 In rejecting the dominant consensus reality, and its rhetoric of objective evaluation, they went straight on and also rejected a reality base for their community.
5.1.3.1 This is not a survival technique.
5.1.3.2 On the day of the 2012 US Presidential election, both major parties expected to win.
5.2 The consensus reality based community may even tacitly acknowledge it is not reality based.
5.2.1 This is a society in which the consensus ritual detaches from its social meaning.
5.2.2 Incongruence between political consensus reality and reality manifests in scandal.
5.2.2.1 Fin de siècle Vienna.
5.2.2.2 Late Ming China.
5.2.3 Incongruence between social consensus reality and geophysics and biology manifests in natural disaster.
5.2.3.1 The Aral Sea.
5.2.4 Incongruence between financial consensus reality and economic and psychological reality manifests in financial crisis.
5.2.4.1 CDOs and CDSs.
5.2.4.2 South Sea Bubble.
5.2.4.3 Louisiana.
5.2.4.4 Tulips.

6. The siblings of consensus reality are the consensus future and the consensus past.
6.1 Revision is the change of the consensus past.
6.2 Changes to the consensus future feel like betrayal or relief.

Link From Twitter

Twitter have damaged their phone app by adding a feature. This is a problem software is particularly prone to, so let’s sift through it.

I was surprised to find Twitter useful. It had originally seemed a concentration of the least interesting ingredients of online culture: celebrities wittering moments from their shadow lives in a medium where smalltalk was enforced by a strict character limit. That’s not wrong, but it is incomplete. Twitter can be rendered functional, for me, by following interesting people, who link in depth, and by dropping anyone who emits more than two dozen undirected tweets in a week. 

Despite my faddish embrace of the medium du jour, two of the best discussion groups I am a part of are still closed mailing lists of mutual friends. It is also easy, with mail, to copy other random people that might care. This electronic mail thing really seems to have a future. Someone should look into that.

With this use pattern, and the primacy of the smartphone in a busy life, a fair proportion of the times I find something cool on Twitter involves mailing a link.

Until recently, the email composed by Twitter consisted of a link, my default signature, and an empty subject. This wasn’t great. Typing a subject, like typing anything on the phone, is a bit of a pain. Blankness is lousy microcontent, a terrible breach of information etiquette for a platform focused on short semantic bursts. Feedly – heck even Safari, dog that it is – at least has the sense to use the title of the web page in question.

Twitter fixed this bug. The latest version of the app sets email subjects to “Link from Twitter” and, as well as the link, adds a note to “Download the official Twitter app here.” The fix, of course, is worse than the bug. Not having a subject just looks careless, like leaving your fly undone. “Link from Twitter” looks like somebody paid you €5 to tattoo an advertisement on your arse and then moon out car windows.

The time spent to delete that guff and replace it with something more meaningful is time wasted. Pretty much anything would be more meaningful to most recipients, who care about what was sent, not how it was sent. The empty subject is better. The subject “lol” would even be better, as at least it tells the audience about the content instead of whether it was sent by carrier pigeon or whatever. This is true even if you drink from the Twitter firehose; then you waste even more time.

If Twitter really thought it was important to squeeze some self-promotion into my email, they would find a way that added to my user experience. Why are people using the tool in the first place? It’s for snippets of content in a social network context. I don’t care that something came from Twitter, but I might care that it came from a particular user on Twitter. Maybe quote the tweet the link originated from, or mention the @user. Maybe link back to that tweet. Maybe I followed a few onward links, and am mailing that, so provide a breadcrumb trail of that history with a chain of vias. Do neat things that bring people into your conversation. Don’t make my email look like spam. And don’t waste my time.
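As a sketch of the sort of thing I mean (every class and field name here is my own invention, not Twitter’s actual code or API), the shared email could be assembled from the tweet it came from:

    // Hypothetical sketch: compose a subject and body that describe the
    // content and the person, not the transport. No names here are Twitter's.
    public final class SharedLink {
        final String url;          // the link being shared
        final String authorHandle; // e.g. "@someuser"
        final String tweetText;    // text of the tweet the link came from
        final String tweetUrl;     // permalink back to that tweet

        SharedLink(String url, String authorHandle, String tweetText, String tweetUrl) {
            this.url = url;
            this.authorHandle = authorHandle;
            this.tweetText = tweetText;
            this.tweetUrl = tweetUrl;
        }

        // Subject: what was shared and by whom, useful even in a crowded inbox.
        String emailSubject() {
            return authorHandle + ": " + tweetText;
        }

        // Body: the link first, then the conversational context to follow back.
        String emailBody() {
            return url + "\n\nvia " + authorHandle + ", " + tweetUrl + "\n";
        }

        public static void main(String[] args) {
            SharedLink shared = new SharedLink(
                    "https://example.com/interesting-article",
                    "@someuser",
                    "Best thing I have read all week",
                    "https://example.com/permalink-to-the-tweet");
            System.out.println(shared.emailSubject());
            System.out.println(shared.emailBody());
        }
    }

Even that much pulls the recipient toward the conversation instead of advertising the plumbing.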

Seeing Like A Facebook

The insistence on a single, unique, legal identity by Facebook and Google continues a historical pattern of expansion of power through control of the information environment. Consider the historical introduction of surnames:

Customary naming practices are enormously rich. Among some peoples, it is not uncommon to have different names during different stages of life (infancy, childhood, adulthood) and in some cases after death; added to those are names used for joking, rituals, and mourning and names used for interactions with same-sex friends or with in-laws. […]  To the question “What is your name?” which has a more unambiguous answer in the contemporary West, the only plausible answer is “It depends”.
For the insider who grows up using these naming practices, they are both legible and clarifying.
 — James C. Scott, Seeing Like A State

It’s all rather reminiscent of the namespace of open internets since they emerged in the 80s, including BBS, blogs, IRC, message boards, slashcode, newsgroups and even extending the lineage to the pseudonym-friendly Twitter. You can tell Twitter has this heredity by the joke and impersonating accounts, sometimes created in ill-spirit, but mostly in a slyly mocking one. CheeseburgerBrown’s autobiography of his pseudonyms captures the spirit of it.

Practically any structured scheme you might use to capture this richness of possible real-world names will fail, as Patrick McKenzie amusingly demonstrates in his list of falsehoods programmers believe about names.
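A toy illustration of the failure mode (the scheme below is my own strawman, not one of McKenzie’s examples): a “reasonable” Western-style check that cheerfully rejects perfectly real names.

    import java.util.List;
    import java.util.regex.Pattern;

    // A naive structured scheme: one capitalised given name, one capitalised
    // family name, ASCII letters only. Invented here purely as a strawman.
    public class NaiveNameCheck {
        private static final Pattern WESTERN_FULL_NAME =
                Pattern.compile("[A-Z][a-z]+ [A-Z][a-z]+");

        static boolean looksValid(String name) {
            return WESTERN_FULL_NAME.matcher(name).matches();
        }

        public static void main(String[] args) {
            // All of these are real people's names; the scheme rejects every one.
            List<String> realNames = List.of(
                    "Sukarno",                 // mononym
                    "Nguyễn Thị Minh Khai",    // more than two parts, non-ASCII letters
                    "Björk Guðmundsdóttir",    // diacritics outside [a-z]
                    "毛泽东");                  // no Latin script at all
            for (String name : realNames) {
                System.out.println(name + " -> " + looksValid(name));
            }
        }
    }

Whatever regex or schema you pick, somebody’s actual name is already outside it.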

Scott goes on to show how consistent surnames made information on people much easier for the state to access and organize – more legible. This in turn made efficient taxation, conscription and corvée labour possible for the feudal state, as well as fine-grained legal title to land. It established an information environment on which later institutions such as the stock market, income tax and the welfare state (medical care, unemployment cover, universal education) rely. Indeed, the idea of a uniquely identifiable citizen, who votes once, is relied on by mass democracy. Exceptions, where they exist, are limited in their design impact by their rarity. The introduction of national ID cards and car registration plates is part of that same legibility project, enforcing unique identifiers. For more commercial reasons but with much the same effect, public transport smartcards, mobile phones and number plates, when combined with modern computing, bring mass surveillance within technical reach.

The transition to simplified names was not self-emerging or gentle but was aggressively pursued by premodern and colonial states. In the course of a wide survey Scott gives a striking example from the Philippines:

Filipinos were instructed by the decree of November 21, 1849, to take on permanent Hispanic surnames. The author of the decree was Governor (and Lieutenant General) Narciso Claveria y Zaldua, a meticulous administrator as determined to rationalise names as he had been determined to rationalise existing law, provincial boundaries, and the calendar. He had observed, as his decree states, that Filipinos generally lacked individual surnames, which might “distinguish them by families,” and that their practice of adopting baptismal names from a small group of saints’ names resulted in great “confusion”. The remedy was the catalogo, a compendium not only of personal names but also of nouns and adjectives drawn from flora, fauna, minerals, geography and the arts and intended to be used by the authorities in assigning permanent, inherited surnames. […] In practice, each town was given a number of pages from an alphabetized catalogo, producing whole towns with surnames of the same letter. In situations where there has been little in-migration in the past 150 years, the traces of this administrative exercise are still perfectly visible across the landscape.
[…]
For a utilitarian state builder of Claveria’s temper, however, the ultimate goal was a complete and legible list of subjects and taxpayers. […] Schoolteachers were ordered to forbid their students to address or even know one another by any other name except the officially inscribed family name. More efficacious, perhaps, given the minuscule school enrolment, was the proviso that forbade priests and military and civil officials from accepting any document, application, petition or deed that did not use the official surnames.

The ultimate consequences of these simplification projects can be good or bad, but they are all expansions of centralized power, often unnecessary, and dangerous without counterbalancing elements. Mass democracy could eventually use the mechanism of citizen registration to empower individuals and restrain the government, but this was in some sense historically reactive: it came after the expansion of the state at the expense of more local interests.

The existence of Farmville aside, Google and Facebook probably don’t intend to press people into involuntary labour. People are still choosing to click that cow no matter how much gamification gets them there. The interest in unique identities is for selling a maximally valued demographic bundle to advertisers. Even with multitudes of names and identities, we usually funnel back to one shared income and set of assets backed by a legal name.

Any power grab of this nature will encounter resistance. This might be placing oneself outside the system of control (deleting accounts), or it might be finding ways to use the system without ceding everything it asks for, like Jamais Cascio lying to Facebook.

The great target of Scott’s book is not historical states so much as the high modernist mega-projects so characteristic of the twentieth century, and their ongoing intellectual temptations today. He is particularly devastating when describing the comprehensive miseries possible when high modernist central planning combines with unconstrained political power in a totalitarian state.

Again, it would be incorrect and unfair to describe any of the big software players today as being high modernist, let alone totalitarian. IBM in its mainframe and KLOC heyday was part of that high modernist moment, but today even the restrictive and aesthetically austere Apple has succeeded mainly by fostering creative uses of its platform by its users. The pressures of consumer capitalism being what they are, though, the motivation to forcibly simplify identity to a single point is hard for a state or a corporation to resist. Centralization has a self-perpetuating momentum to it, which good technocratic intentions tend to reinforce, even when these firms have a philosophical background in open systems. With the combined marvels of smartphones, clouds, electronic billing and social networks, I am reminded of Le Corbusier’s words. These software platforms are becoming machines for living.