Symbiotic Design

Do we build code, or grow it? I was fortunate enough to attend a Michael Feathers workshop called Symbiotic Design earlier in the year, organized by the good people at Agile Singapore, where he is ploughing biological and psychological sources for ideas on how to manage codebases. There are also some links to Naur’s and Simondon’s ideas on technical objects and programming that weren’t in the workshop but are meanderings of mine.

Ernst Haeckel – Trachomedusae, 1904 (wiki commons)

Feathers literally wrote the book on legacy code, and he’s worked extensively on techniques for improving the design of code at the line-of-code level. Other days in the week focused on those “how” techniques; this session was about why codebases change the way they do (ie often decaying), and about techniques for managing the structures of a large codebase. He was pretty clear these ideas were still a work in progress for him, but they are already pretty rich sources of inspiration.

I found the workshop flowed from two organizing concepts: that a codebase is an organic-like system that needs conscious gardening, and Melvin Conway’s observation that the communication structure of an organization determines the shape of the systems its people design and maintain (Conway’s Law). The codebase and the organization are the symbionts of the workshop title. Some slides from an earlier session give the general flavour.

Feathers has used biological metaphors before, like in the intro to Working Effectively With Legacy Code:

You can start to grow areas of very good high-quality code in legacy code bases, but don’t be surprised if some of the steps you take to make changes involve making some code slightly uglier. This work is like surgery. We have to make incisions, and we have to move through the guts and suspend some aesthetic judgment. Could this patient’s major organs and viscera be better than they are? Yes. So do we just forget about his immediate problem, sew him up again, and tell him to eat right and train for a marathon? We could, but what we really need to do is take the patient as he is, fix what’s wrong, and move him to a healthier state.

The symbiotic design perspective sheds light on arguments like feature teams versus component teams. Feature teams are a new best practice, and for good reasons – they eliminate queuing between teams, work against narrow specialization, and promote a user or whole-product view over a component view. They do this by establishing the codebase as a commons shared by many feature teams. So one great question raised is “how large can the commons of a healthy codebase be?” Eg, there is the well-known economic effect of the tragedy of the commons, and the complex history of the enclosure movement behind it. I found it easy to think of examples from my work where I had seen changes made in an essentially extractive or short-term way that degraded a common codebase. How might this relate to human social dynamics, effects like Dunbar’s number? Presumably it takes a village to raise a codebase.

Feathers didn’t pretend to have precise answers when, as a profession, we are just starting to ask these questions, but he did say he thought it could vary wildly based on the context of a particular project. In fact, that particularity and skepticism of top-down solutions kept coming up as part of his approach in general, and it definitely appeals to my own anti-high-modernist tendency. I think of it in terms of the developers’ theory of the codebase because, as Naur said, programming is theory building. How large a codebase can you have a deep understanding of? Beyond that point is where the risk of hacks is high, and people need help to navigate and design in a healthy way.

((You could, perhaps, view Conway’s Law through the lens of Michel Foucault, also writing around 1968: the communication lines of the organization become a historical a priori for the system it produces, so developers promulgate that structure without discussing it explicitly. That discussion deserves space of its own.))

Coming back to feature teams – not because the whole workshop was about them, but because they’re a great example – if you accept an organizational limit on codebase size, feature versus component teams becomes a spectrum, not good versus evil. You might even, Feathers suggests, strategically create a component team to help establish an architectural boundary. After all, you are inevitably going to impact systems with your organizational design. You may as well do it consciously.

A discussion point was a recent reaction to all of these dynamics: the microservices approach of radically shrinking the size of system codebases, multiplying their number and decentralizing their governance. If one component needs changes, the cost of understanding it is not large, and you can, according to proponents, just rewrite it. The organizational complement of this is Fred George’s programmer anarchy (video). At first listen, it sounds like a senior manager read the old Politics Oriented Software Development article and then went nuts with an organizational machete. I suspect that where that can work, it probably works pretty well, and where it can’t, you get mediocre programmers rewriting stuff for kicks while the business paying them drowns in a pool of its own blood.

Another architectural approach discussed was an explicitly evolutionary one: progressively splitting a codebase as it grew. This is a technique Feathers has used in anger, with the obvious biological metaphors being cell meiosis and mitosis, or jellyfish reproduction.
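
To make the division step concrete, here is a minimal sketch of my own – hypothetical names, not an example from the workshop – of preparing a split: first name the boundary explicitly, then gather the implementation behind it, so the daughter codebase can later be extracted whole.

```python
from abc import ABC, abstractmethod


# Step 1: name the boundary. The rest of the parent codebase depends
# only on this interface, never on the pricing internals.
class PricingEngine(ABC):
    @abstractmethod
    def quote(self, sku: str, quantity: int) -> float:
        ...


# Step 2: gather the implementation behind the boundary. Once every
# caller goes through the interface, this class and its tests can move
# to their own repository or service, completing one cell division.
class StandardPricingEngine(PricingEngine):
    def __init__(self, unit_prices):
        self._unit_prices = unit_prices

    def quote(self, sku: str, quantity: int) -> float:
        return self._unit_prices[sku] * quantity
```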

The focus on codebases and the teams who work on them brings me back to Gilbert Simondon’s idea of the “theatre of reciprocal causality”. Simondon notes that technical objects’ design improvements have to be imagined as if from the future. They don’t evolve in a purely Darwinian sense of random mutations being winnowed down by environmental survival. Instead, when they improve, and especially when they improve through simplification and better interaction of their components, they do so by steps towards a potential future design which, once the steps are executed, they become. This is described in the somewhat mindbending terms of the potential shape of the future design exerting a reverse causal influence on the present: hence the components interact in a “theatre of reciprocal causality”.

This is exactly what programmers do when they refactor legacy code. Maybe you see some copy-pasted boilerplate in four or five places in a class. So you extract it as a method, add some unit tests, and clean up the original callers. You delete some commented out code. Then you notice that the new method is dealing with a concept you’ve seen somewhere else in the codebase. There’s a shared concept there, maybe an existing class, or maybe a new class that needs to exist, that will tie the different parts of the system together better.
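
As a minimal sketch of that sequence – invented code, not an example from the workshop – with the duplicated validation already pulled into one method and a test added alongside:

```python
import unittest


class Order:
    # A hypothetical domain object, defined only to make the sketch run.
    def __init__(self, customer_id=None, items=()):
        self.customer_id = customer_id
        self.items = list(items)


class OrderService:
    # place_order and amend_order each used to carry the same
    # copy-pasted validation block; after the refactoring it lives
    # in one extracted method.
    def place_order(self, order):
        self._validate(order)
        # ... persist the order ...

    def amend_order(self, order):
        self._validate(order)
        # ... apply the amendment ...

    def _validate(self, order):
        # The extracted method names a concept that may already exist
        # elsewhere in the codebase, or may deserve a class of its own.
        if order.customer_id is None or not order.items:
            raise ValueError("invalid order")


class ValidateTest(unittest.TestCase):
    # The unit test added during extraction, pinning the behaviour down.
    def test_rejects_order_with_no_items(self):
        with self.assertRaises(ValueError):
            OrderService().place_order(Order(customer_id=42))
```

The interesting moment is the last step: _validate now names a concept, and the question of where that concept really lives is what pulls the future design into view.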

That’s the theatre of reciprocal causality. The future class calling itself into existence.

So, given the symbiosis between organization and codebase, the question is: who and what is in the theatre? Which components and which people? Those are the components that have the chance to evolve together into improved forms. If it gets too large, it’s a stadium where no one knows what’s going on: one team is filming a reality TV show about teddy bears and another is trying to stage a production of The Monkey King Wreaks Havoc In Heaven. One of the things I’ve seen with making the theatre very small, like some sort of Edinburgh Fringe Festival production with an audience of two in the back of an old Ford Cortina, is that you keep the component understandable, but cut off its chance at technical evolution, improvement and consolidation. I’m not sure how that works with microservices. Perhaps the evolution happens through other channels: feature teams working on both sides of a service API, or on opportunistically shared libraries. Or perhaps teams in developer anarchy rewrite so fast they can discard technical evolution. Breeding is such a drag when drive-thru immaculate conception is available at bargain basement prices.

Seeing Like A Facebook

The insistence on a single, unique, legal identity by Facebook and Google continues a historical pattern of expansion of power through control of the information environment. Consider the historical introduction of surnames:

Customary naming practices are enormously rich. Among some peoples, it is not uncommon to have different names during different stages of life (infancy, childhood, adulthood) and in some cases after death; added to those are names used for joking, rituals, and mourning and names used for interactions with same-sex friends or with in-laws. […]  To the question “What is your name?” which has a more unambiguous answer in the contemporary West, the only plausible answer is “It depends”.
For the insider who grows up using these naming practices, they are both legible and clarifying.
 — James C. Scott, Seeing Like A State

It’s all rather reminiscent of the namespace of the open internets since they emerged in the 80s, including BBSes, blogs, IRC, message boards, slashcode sites and newsgroups, even extending the lineage to the pseudonym-friendly Twitter. You can tell Twitter has this heredity by the joke and impersonation accounts, sometimes created in an ill spirit, but mostly in a slyly mocking one. CheeseburgerBrown’s autobiography of his pseudonyms captures the spirit of it.

Practically any structured scheme you might use to capture this richness of possible real-world names will fail, as Patrick McKenzie amusingly demonstrates in his list of falsehoods programmers believe about names.
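
To make the failure mode concrete, a hedged sketch with invented examples, in the spirit of McKenzie’s list rather than taken from it:

```python
from dataclasses import dataclass


# A naive schema encoding two of McKenzie's falsehoods: that everyone
# has exactly one first name and exactly one last name.
@dataclass
class NaiveName:
    first: str
    last: str


# Illustrative counterexamples the schema cannot hold faithfully:
# a mononym, a family-name-first name, and a multi-part surname.
examples = ["Teller", "毛泽东", "María José García de la Cruz"]


# The usual retreat: store the name exactly as the person states it,
# as one opaque string, and resist parsing structure into it.
@dataclass
class Name:
    full: str
```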

Scott goes on to show how consistent surnames made information on people much easier to access and organize for the state – more legible. This in turn made efficient taxation, conscription and corvée labour possible for the feudal state, as well as fine-grained legal title to land. It establishes an information environment on which later institutions such as the stock market, income tax and the welfare state (medical and unemployment cover, universal education) rely. Indeed, the idea of a uniquely identifiable citizen, who votes once, is relied on by mass democracy. Exceptions, where they exist, are limited in their design impact due to their rarity. The introduction of national ID cards and car registration plates continues the same legibility project by enforcing unique identifiers. For more commercial reasons but with much the same effect, public transport smartcards and mobile phones, when combined with modern computing, bring mass surveillance within technical reach.

The transition to simplified names was neither spontaneous nor gentle; it was aggressively pursued by premodern and colonial states. In the course of a wide survey, Scott gives a striking example from the Philippines:

Filipinos were instructed by the decree of November 21, 1849, to take on permanent Hispanic surnames. The author of the decree was Governor (and Lieutenant General) Narciso Claveria y Zaldua, a meticulous administrator as determined to rationalise names as he had been determined to rationalise existing law, provincial boundaries, and the calendar. He had observed, as his decree states, that Filipinos generally lacked individual surnames, which might “distinguish them by families,” and that their practice of adopting baptismal names from a small group of saints’ names resulted in great “confusion”. The remedy was the catalogo, a compendium not only of personal names but also of nouns and adjectives drawn from flora, fauna, minerals, geography and the arts and intended to be used by the authorities in assigning permanent, inherited surnames. […] In practice, each town was given a number of pages from an alphabetized catalogo, producing whole towns with surnames of the same letter. In situations where there has been little in-migration in the past 150 years, the traces of this administrative exercise are still perfectly visible across the landscape.
[…]
For a utilitarian state builder of Claveria’s temper, however, the ultimate goal was a complete and legible list of subjects and taxpayers. […] Schoolteachers were ordered to forbid their students to address or even know one another by any other name except the officially inscribed family name. More efficacious, perhaps, given the minuscule school enrolment, was the proviso that forbade priests and military and civil officials from accepting any document, application, petition or deed that did not use the official surnames.

The ultimate consequences of these simplification projects can be good or bad, but they are all expansions of centralized power, often unnecessary, and dangerous without counterbalancing elements. Mass democracy could eventually use the mechanism of citizen registration to empower individuals and restrain the government, but this was in some sense historically reactive: it came after the expansion of the state at the expense of more local interests.

The existence of Farmville aside, Google and Facebook probably don’t intend to press people into involuntary labour. People are still choosing to click that cow no matter how much gamification gets them there. The interest in unique identities is for selling a maximally valued demographic bundle to advertisers. Even with multitudes of names and identities, we usually funnel back to one shared income and set of assets backed by a legal name.

Any power grab of this nature will encounter resistance. This might be placing oneself outside the system of control (deleting accounts), or it might be finding ways to use the system without ceding everything it asks for, like Jamais Cascio lying to Facebook.

The great target of Scott’s book is not historical states so much as the high modernist mega-projects so characteristic of the twentieth century, and their ongoing intellectual temptations today. He is particularly devastating when describing the comprehensive miseries possible when high modernist central planning combines with the unconstrained political power in a totalitarian state.

Again, it would be incorrect and unfair to describe any of the big software players today as being high modernist, let alone totalitarian. IBM in its mainframe and KLOC heyday was part of that high modernist moment, but today even the restrictive and aesthetically austere Apple has succeeded mainly by fostering creative uses of its platform by its users. The pressures of consumer capitalism being what they are, though, the motivation to forcibly simplify identity to a single point is hard for a state or a corporation to resist. Centralization has a self-perpetuating momentum to it, which good technocratic intentions tend to reinforce, even when these firms have a philosophical background in open systems. With the combined marvels of smartphones, clouds, electronic billing and social networks, I am reminded of Le Corbusier’s words. These software platforms are becoming machines for living.

Deliberate Anarchy As Climate Governance

It is informative to think about the science of changing climate as two fields. The first is long-term meteorology: making predictions about how the atmosphere and climatic conditions change over long periods of time. This is about a century and a half old and built on physics, chemistry, and observations from a variety of real-time and historical sources such as satellites and ice cores. The current dominant paradigm of long-term meteorology includes anthropogenic climate change driven by atmospheric carbon and other gases. It’s a very successful theory whose dominance has been cemented by a track record of new data emerging and anomalies resolving in ways which confirm it. The discovery that satellite temperature measurements were not properly accounting for the decay and drift of the satellites’ orbits, and that this was causing almost exactly the anomalous difference between ground and satellite temperatures, was one of the more dramatic of these. This was nearly ten years ago. The existence of a handful of outlying dissenting experts outside the paradigm is just confirmation that it’s a real scientific community; the same phenomenon accompanied Newtonian mechanics and the molecular theory in chemistry too. This is reality, as best we can tell.

The second field is political climatology: the ways a mass of people and their social institutions deal with the climate of the planet they live on. This is a new field at which we are still pretty awful (including attempts by climate scientists). I use the term political climatology deliberately, by analogy with political economy, ie economics and the constraints that politics, as a human behaviour, places on it. We are pretty bad at political economy, though we’ve had a few wins over the last century. At political climatology we are just pants.

I don’t just mean we are awful in that we have lousy outcomes; I mean the whole structure of the discussion and the seriousness of institutional design is lacking. The entire debate is in the wrong place. There are interesting arguments within climate science, and there are major and controversial policy decisions to be made. We have a science built on all the sophistication of the Enlightenment and the Industrial Revolution, and a monster set of interlinked problems caused by the wondrous success of the same. Meanwhile our toolset for discussing and organizing around it as a society is like five drunk old men with head injuries debating the existence of an iPhone.

There is one intellectually tenable policy position which can be shared between someone serious about seeing the world as it is and those living in the fairyland tales of climate fabulists or deniers. That is the policy of deliberate neglect: accepting the fact of human-driven climate change, we choose not to make governments act to remediate it.

Though the changing climate is indeed something to dread and gird ourselves against, the argument goes, any political solution would cause damage too great to our institutions. 

Usually this is framed as economic cost, and people like Jim Manzi argue, contra Stern et al, that the GDP costs of mitigation are simply larger than the benefits.

There are technical problems with Manzi’s argument: scenario choice is highly selective, and GDP is a lousy basis for century-scale prediction. That latter post also suggests that in an ecological catastrophe, money may not be everything. (When The Economist suggests you are suffering compulsive quantification disorder and need to sit back and smell the drowning flowers, something is up.) Nevertheless Manzi’s willingness to grapple publicly with scientific reality in arguing policy, something that, say, George Monbiot does routinely from a different political tradition, gets towards the type of debate required.

Climate change is a global problem, and worse than that, a global collective action problem. It’s also larger than a few percent of GDP. In the history of the world, there has been environmental catastrophe, but there has never been democratic world government. Dan Hannan, among others, argues that this is a straightforward function of the distance of a government from individual concerns. It helps to know that Hannan is a ferociously euroskeptic MEP, and has more recently found it convenient to disparage the science without fully disavowing it. Even souverainiste libertarian conviction politicians have bases to mollify, I guess.

The sorry record of corruption and bad policy in global institutions does rather support Hannan’s position, though. Indeed, even the experience of the smaller, transnational EU supports it – technocratic, with little democratic check, and corrupt to the degree that its accounts have not been signed off by an auditor in a dozen years. For those who support a different factional football team, consider the IMF, or the WTO. And as beautiful as the vision of the United Nations is, the power there lies with the Security Council, a standing committee of Great Powers and their proxies.

This is not a screed about UN black helicopters and mind control rays. We simply need to be clear-eyed about the state of our global political institutions before we hand them the Earth’s thermostat, especially since decades of dithering make geoengineering ever more likely, or even necessary.

Some (say, certain large, industrial non-democracies) may take the utilitarian line that political niceties are a luxury in the face of catastrophe – a case of give me liberty and give me megadeath. And certainly geophysics doesn’t care about politics. However, the argument for ecofascism is not only rather odious in itself; highly centralised government also has an appalling environmental record. Capitalism and democracy have their environmental failures, but communism is the most toxic pollutant man has yet devised. Contrast the Cuyahoga River with the Aral Sea.

The environment, in this argument, is too important to be passed off to a global bureaucracy to create a Common Fisheries Policy for carbon. Human nature and its politics will not change any time soon. Better for liberty and ecosystems alike that nations remain in productive mutual anarchy.

That is not my position – this note is a way of thinking through the problem. There are other approaches. The world almost tried one with Kyoto-Copenhagen. Tech can change faster than human nature, and different social contexts allow it different expression. Deliberate anarchy is credible enough to be the benchmark. We can easily do worse. Can we do better?

花雨从天来 / 已有空乐好 – 李白《寻山僧不遇作》

A light rain fell as if it were flowers falling from the sky, making a music of its own – Li Bai, Looking For A Monk And Not Finding Him, Allen trans.