Gatsby of the Three Kingdoms

“When I was living in the country, they told me that Cao Cao 曹操 was building a pavilion on the River Zhang; it was to be named the Bronze Bird Tower. It is an exceedingly handsome building, and he has sought throughout all the world for the most beautiful women to live in it. For Cao Cao really is a sensualist.
“Now there are two very famous beauties in Wu, born of the Qiao family. So beautiful are they that birds alight and fishes drown, the moon hides her face and the flowers blush for shame at sight of them. Cao Cao has declared with an oath that he only wants two things in this world: the imperial throne in peace and the sight of those two women on the Bronze Bird Terraces. Given these two, he would go down to his grave without regret. This expedition of his, his huge army that threatens this country, has for its real aim these two women.
— Romance of the Three Kingdoms 三国演义 Chapter 44, Brewitt-Taylor trans.

Baz Luhrmann is great at the upswing. He’s great at movement and at sounds, and so his new Great Gatsby has great parties. Magnificent parties, indeed, as it should be; great tumbling exuberances and small illicit wildnesses pushed into rooms that don’t quite fit them.

He’s great at the busyness and spectacle, as we have come to expect, mixing recent pop songs with jazz standards and going from music video jump cuts to overhead Busby Berkeley crane shots. He and his team, including, for instance, costume designer Catherine Martin, writer Craig Pearce, and art director Michael Turner, are very good at that now: it’s less a trick than a trademark. But he’s good at the gaps too, moments of quiet and disconnection near the crowd, like Nick Carraway peering back at himself from the street in a literary metaphor turned into a moment of surreal visual literalness.

Luhrmann’s even good at the hangovers. Where this film falls short is not so much the downswing as the slow downfall, the slide into ruin. This wasn’t the case in Romeo+Juliet, which keeps a more even pace of car crashes and sex and suicide and Leonardo DiCaprio through to the end, and which was first a play, not a novel. Framing devices slow things down; they intermediate, they pad, and here we have at least two: a writer talking to a doctor about a poseur.

The screenplay doesn’t quite trust the words in the last third, and Luhrmann reaches for visuals, even pushing letters all over the screen at one point, as if Nick Carraway were losing a melancholy game of Scrabble. It’s a fine cast of actors, and they do fine, though the last act leans hard on the jock vacuity of the Tom Buchanan character, and I’m not sure that Joel Edgerton quite lands it. Isla Fisher’s performance as Myrtle reprises her party girl role from Wedding Crashers, among other places. (Wedding Crashers itself is structurally a Gatsby story, with Owen Wilson faking privilege to chase romance shielded by wealth, while Vince Vaughn plays wingman.)

Gatsby is an American classic with tropes that are really universal to civilisation. It’s young hicks on the make and old money, aristocracy replenishing itself with energetic new blood. They are usually male stories, more’s the pity, but they’re good ones: Gatsby and Obama, Cicero and Cao Cao. Some have history on the inside and fiction on the outside, and some vice versa, but they are stories all. (First you get the manners. Then you get the tower. Then you get the woman.) There’s a rush in seeing a brilliant young player dominate an old game. It seems supernatural: 说曹操,曹操到 … the Chinese saying isn’t “speak of the devil”, it’s “speak of Cao Cao”, like Gatsby, “nephew to von Hindenburg and second cousin to the devil”, and yet it also feels true.

I have developed an affection for adaptations that cheerily exploit the strengths of their new media, and cop the weaknesses on the chin. On this criterion, the lovingly crafted 8-bit Great Gatsby Game is a truer adaptation than the pretty and somnambulant 1974 film. This echoes Dave Thier’s reaction. I used to think we’d know computer games could be art when we managed a game that felt like The Double Life of Veronique, the opaque and masterly Kieslowski film. There’s a moment in Arkham Asylum, halfway through, where after hours of being Batman, you are brought back through the same long corridor, this time as the Joker. It’s a moment of poetic disorientation like Veronique, but one that can only happen in a game, in that particular balance of protagonist control and setting intransigence.

Cao Cao pointed his finger first at his guest and then at himself, saying, “The only heroes in the world are you and I.”

Big Powerful New Data

Power and language are both crucial currents for innovation. Two alternative tools for macro analysis of an economy’s innovative capacity and output then suggest themselves. Firstly, a power-centric analysis of changes in the economy, in both physical and political senses. Secondly, linguistic analysis of mass printed and digital material produced in an economy, from standalone and comparative perspectives. These techniques can complement one another, given that shifts in power and language also interact. Power-centric analysis of technology is a technique introduced by Russell et al. in “The Nature of Power: Synthesizing the History of Technology and Environmental History”. An example of linguistic analysis of the economy is the R-word index run by The Economist, where the frequency of the word “recession” is used as an indicator of recession.
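The R-word idea is simple enough to sketch in a few lines. Here is a minimal, hypothetical version (the corpus and quarter labels are invented for illustration; The Economist’s actual methodology counts articles in specific newspapers):

```python
import re

def r_word_index(articles_by_quarter):
    """Count articles mentioning 'recession' per quarter.

    articles_by_quarter: dict mapping a quarter label to a list of
    article texts -- a toy corpus standing in for a real newspaper archive.
    """
    index = {}
    for quarter, articles in articles_by_quarter.items():
        index[quarter] = sum(
            1 for text in articles
            if re.search(r"\brecession\b", text, re.IGNORECASE)
        )
    return index

# Invented example data: a rising count suggests a downturn.
corpus = {
    "2008Q3": ["Markets fear a recession.", "Banks tighten credit."],
    "2008Q4": ["Recession confirmed.", "A deep recession looms.", "Layoffs spread."],
}
print(r_word_index(corpus))  # {'2008Q3': 1, '2008Q4': 2}
```

The indicator’s appeal is exactly its crudeness: no economic model, just word frequency tracking the thing itself.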

In a power analysis of the economy, energy flows and transitions are modelled qualitatively or quantitatively. Using this lens, we may note that the rise of the Internet has accompanied surging electrical power needs in large, relatively centralized datacentres, with cloud computing being the current extreme of this. At the same time it has disintermediated middlemen such as travel agents. The move of labour – and spending of biochemical energy – from a travel agent in an office to the consumer at home or on a smartphone in turn raises electricity requirements for mobile phone towers and households. This analysis gives insight into Google’s investment and research in alternative energy and distributed generation technologies such as solar photovoltaics. We might also note that, globally, the Internet mostly runs on coal. Combining the physical energy analysis with political analysis, we can see where innovation actors are constrained by energy and whether shifts in power are dominated by local or foreign actors, be they wind power entrepreneurs or multinational oil companies.

A focus on physical power can yield quantitative metrics of joules and watts that are not available to more structural approaches such as the system of innovation model. It focuses on facts about the economy that are fairly readily available for most countries, and also in comparative form. Though power analysis does include the labour market and its use of biochemical energy, this focus on economic output may make analysis of innovation capacity relatively indirect. How much did the energy use of a US mathematician change over the twentieth century, except as a consumer of productivity tools, such as computers, available to all professionals? This is a technique pioneered by historians, and it may speak most clearly in retrospect, requiring extrapolations to deduce capacity that are more prone to subjective policy hobby horses.

The linguistic approach’s strengths and weaknesses seem to complement those of power analysis. By focusing on words, it will tend to weight research and development activity more strongly, such as the use of terms in journal articles or social media. One weakness of linguistic analysis is that mass corpora of content must be available to do “big data” style analysis. A developing economy, particularly in the poorest parts of the world, may not produce enough readily available searchable content to discover meaningful shifts and opportunities. Relying on the linguistic approach too heavily in a poor developing country may skew policy too much towards theoretical research and ignore useful innovations happening on the ground but not on Twitter.

The innovation systems approach has a potential weakness in that its initial categories of organization (university, R&D lab, etc.) constrain future analysis, missing trends which cut across traditional organizations. In this way both power and linguistic analysis may show up perspectives that do not emerge as readily in the otherwise more comprehensive innovation systems approach, and thereby supplement it.

Invention As A Hub

In a linear model of innovation, innovation is imagined to proceed through an orderly sequence of steps, from pure scientific research, to applied science, formulation as a technology, then developing and scaling up distribution of that technology as a product. One alternative model might be that of a “techno-social hub”. In a techno-social hub model, science, applied science, capital provision, product development and the exchange of products and services in the market are connected to each other through a medium of technology and social processes. This can be represented graphically as a techno-social hub node connected by a single edge to nodes representing research, applied research, and so on. These nodes are similar but not identical to the stages in the linear model of innovation.
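The hub shape is easy to see as an adjacency structure. A minimal sketch (the node labels are mine, chosen to echo the stages above, not fixed terminology):

```python
# A techno-social hub: one central node with a single edge to each stage.
HUB = "techno-social processes"
STAGES = [
    "pure research",
    "applied research",
    "capital provision",
    "product development",
    "market exchange",
]

# Adjacency list: every stage connects only through the hub,
# unlike the linear model's chain of stage-to-stage edges.
graph = {HUB: list(STAGES)}
for stage in STAGES:
    graph[stage] = [HUB]

def path_length(a, b):
    """Hops between two nodes: 0 (same), 1 (adjacent), else 2 via the hub."""
    return 0 if a == b else (1 if b in graph[a] else 2)

print(path_length("market exchange", "pure research"))  # 2
```

The point the structure makes is that any stage can feed back to any other in two hops, rather than influence flowing one way down a chain.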

The techno-social model is an improvement on the linear model, as it distinguishes different factors in innovation without unrealistically segregating those factors. It captures the way that, once a technological or process innovation is made, influence doesn’t flow in a straight line, but feeds back to different parts of society via the artifact or social change. For example, the development and use of the Newcomen steam engine in factories in 18th century Britain opened up the possibility of applied research and prototypes of steam trains by the early 19th century, and of the capital provision required to build railway networks. The steam engine also spurred pure research in thermodynamics and was an influence on the psychological theories of Freud.

Operationally this model recognises the importance of institutions and organisations that support each aspect of innovation, such as universities for basic research and markets for exchange and use. By emphasizing the links between different stages it might direct policy makers and people in the field to the importance of good communications amongst organisations, via physical co-location, libraries, journal publication, less formal collaboration over the Internet, and so on. It recognises that, in William Gibson’s phrase, “the street finds its own use for things”, and that research and capital should be able to dynamically react to new uses of a technology.

A disadvantage of the model may be underemphasizing the links between closely related areas, such as basic and applied research. By placing technology at the centre of the model, it tends towards technological determinism. The social aspect of the techno-social may also be too broad a category to effectively operationalise for setting innovation policy. Overall, however, the techno-social hub model avoids the constraints of the linear model at the cost of being slightly harder to say, and draw.

Industrializing The Noosphere

Control Environment

We are not practicing Continuous Delivery. I know this because Jez Humble has given a very clear test. If you are not checking your code into trunk at least once every day, says Jez, you are not doing Continuous Integration, and that is a prerequisite for Continuous Delivery. I don’t do this; no one on my team does it; no one is likely to do it any time soon, except by an accident of fortuitous timing. Nevertheless, his book is one of the most useful books on software development I have read. We have used it as a playbook for improvement, with the result being highly effective software delivery. Our experience is one small example of an interesting historical process, which I’d like to sketch in somewhat theoretical terms. Software is a psychologically intimate technology. Much as described by Gilbert Simondon’s work on technical objects, building software has evolved from a distractingly abstract high modernist endeavour to something more dynamic, concrete and useful.

The term software was co-opted, in a computing context, around 1953, and had time to grow only into an awkward adolescence before being declared, fifteen years later, to be in crisis. Barely had we become aware of software’s existence before we found it to be a frightening, unmanageable thing, upsetting our expected future of faster rockets and neatly ordered suburbs. Many have noted that informational artifacts have storage and manufacturing costs approaching zero. I remember David Carrington, in one of my first university classes on computing, noting that as a consequence of this, software maintenance is fundamentally a misnomer. What we speak of as maintenance in physical artifacts, the replacement of entropy-afflicted parts with equivalents within the same design, is a nonsense in software. The closest analogue might be system administration activities like bouncing processes and archiving logfiles. What we call (or once called) maintenance is revising our understanding of the problem space.

Software has an elastic fragility to it, pliable, yet prone to the discontinuities and unsympathetic precision of logic. Lacking an intuition of digital inertia, we want to change programs about as often as we change our minds, and get frustrated when we cannot. In their book Continuous Delivery, Humble and Farley say we can change programs like that, or at least be changing live software very many times a day, such that software development is not the bottleneck in product development.

With this approach, we see a rotation and miniaturisation of mid-twentieth century models of software development. The waterfall is turned on its side.

Test Driven Development As Computational Positivism

Test driven development is a craft answer to a seemingly quite formal theoretical problem: verification and validation. It can be seen as a vernacular architectural style set against an institutional one. We can also compare it to different styles of science. When considering historical astronomy, Roddam Narasimha interprets radically empirical approaches as a different style of science, which he terms “computational positivism”.

[W]hile the Indian astronomer found the epicycle a useful tool of representation, he would cheerfully abandon the classical epicycle if he found something which was more efficient or led to a shorter algorithm and to better agreement with observation. For somebody subscribing to Greek ideals, however, this attitude would presumably seem sacrilegious – the rejection of the epicycle would question the basic assumption that the circle is a perfect figure preferred by nature, and hence precipitate a major philosophical crisis.
Narasimha, Axiomatism and Computational Positivism

Or in more familiar terms, the approach of the Keralan School was to discount theories and models and go with whichever calculation was most predictive.

In computing, the problem of verification and validation had a clear academic answer. It came from the same school of thought – Dijkstra, Knuth, et al – that delivered major algorithmic and language design innovations, many of which (e.g. structured programming) were unintuitive conceptual shifts for the working programmer. The answer was formal methods, with the classic citation being Carroll Morgan’s Programming from Specifications. I was trained this way and remain sympathetic, but it hasn’t penetrated everyday practice. (Granted I should read more of the latest work, but I suspect formal methods suffered from the same misreading of the software crisis as other spec-centric high-modernist schemes. I also suspect they will live again in another form; both good topics for another day.)

Test-driven development, by contrast, saw rapid uptake among working programmers, even if it is short of universal. Formal proof and TDD aren’t equivalent techniques, as one is deductive and one inductive. Proof is also a stronger standard. Both involve some degree of upfront investment, but unit tests are much lower cost to interweave into an existing codebase or introduce in an incremental way. One way to think of TDD is a vernacular style suited to an artisan scale, in contrast with the high cost institutional approach of formal methods. It’s the caravan or service station of refinement from specifications.

This is the hoary old craft-practical vs academic-theoretical dialectic. It’s useful, but obscures as well as reveals. Formal methods practitioners never saw proof as replacing testing, let alone automated testing. On the other side, looking at JUnit as an example, the originators like Kent Beck and Erich Gamma were hardly without university links, and their Smalltalk background wasn’t mainstream. 

This is where Narasimha’s typology seems to apply: the idea of not doing science without theory, but of doing science in an alternative mode. One of the striking aspects of TDD as described in the original JUnit essays, like Test Infected, is their relaxed attitude to the ultimate implementation. This is computational positivism – the only measure is whether the formula hits a requisite number of data points. The symbolic coherence of the solution itself is not granted any weight; the code can be spaghetti. Though the result may be aesthetically displeasing, there’s an argument that this empirical rigour is more scientifically sound.

The experience of using unit testing widely across a codebase usually reveals a different emergent property: by decomposing a system into testable components, the overall coherence of the design actually tends to improve.
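In miniature, the rhythm looks like this – a hypothetical roman-numeral kata of my own, not drawn from the JUnit essays. The tests come first, and the implementation is judged only by whether the bar goes green:

```python
import unittest

class TestToRoman(unittest.TestCase):
    # In TDD these tests are written first, and fail (red) until the
    # implementation below turns the bar green.
    def test_units(self):
        self.assertEqual(to_roman(1), "I")
        self.assertEqual(to_roman(4), "IV")
        self.assertEqual(to_roman(9), "IX")

    def test_tens(self):
        self.assertEqual(to_roman(14), "XIV")
        self.assertEqual(to_roman(39), "XXXIX")

def to_roman(n):
    # The tests above are the only arbiter: any implementation that
    # passes them is acceptable, spaghetti or not.
    numerals = [(10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")]
    out = ""
    for value, symbol in numerals:
        while n >= value:
            out += symbol
            n -= value
    return out
```

Run it with `python -m unittest` and the cycle is red, green, refactor – and the refactoring step is where the emergent coherence of the design shows up.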

Though Keralan astronomy was superior to European astronomy for some hundreds of years, post-Newtonian astronomy finally trumped it for precision, together with a very powerful corresponding theory. The right test for such an ambitious scheme in software would be a computationally positivist one, just as for all the bodgy solutions that preceded it: sure it’s pretty, but does it keep the test bar green?