Twitter Abuse Solution Sketches

Rilke used to say that no poet would mind going to gaol, since he would at least have time to explore the treasure house of his memory. In many respects Rilke was a prick. – Clive James

So a particularly nasty bout of threatening, possibly illegal, abuse against Caroline Criado-Perez triggered a petition asking for a report abuse button. Brooke Magnanti counters with examples of how this, and a Twitter boycott, may be unproductive; it's insightful in itself, and as the former Belle de Jour she does have an interesting angle on pseudonymity and publishing.
So this is society's pathology, mediated by technology, and because Twitter is pretty neat, mediated in real time and connecting strangers at massive scale. It's Larry Niven's Flash Crowd of course, taken to its fastest immaterial instantiation. There are slow hard things to change about human society to make it less awful. The petition is right, though, in that technology got us into this specific version of the problem and there are surely smarter technical things to limit it, but it's worth noting that right now no-one actually knows what they are. So here are a few design assumptions and speculations, in the hope they spark ideas in others.

Parameters / Assumptions
Manual abuse reporting is a deliberate usability choice. It makes you think about the accusation of abuse, and will place a premium on a coherent case. Abuse reporting is judicial and needs due process. It’s probably also rational laziness by Twitter: at small scales this is the cheapest solution to implement.
Adding structure is adding due process, but it’s also institutionalising abuse. At uni, I broke my right arm in a soccer game. I had a lecturer in rationality at the time who noted that soccer had incorporated a whole system of breaking the rules into the game itself, with yellow / red cards. That then motivates the entire diving substructure (pretending to be injured or fouled to get advantage). As in soccer, so in Twitter: all systems will be gamed, especially judicial ones. This effect manifests right down to the amount of structure you put on the report abuse form. Each element narrows the likely scope of human judgement; an abuse form also describes the sort of thing that might be considered abuse.
Human review is needed – with tools that scale. I don’t know any Twitter employees, so this is speculation, but it sounds like it is just reading emails and kicking individuals at this point.
The criminal justice system is needed, and shouldn’t be outsourced to a corporation. This part will be slow. Write to your government, but also keep in mind a certain slowness is a side effect of due process.

Use data visualization to analyse abuse events rapidly and at scale. Using new data views to augment human judgement is a digital humanities problem. Require one example tweet in the form submission. The abuse support person needs to be able to rapidly see the extent and intensity of the abuse. To facilitate this, when they open an abuse ticket, they should be able to see the offending tweet, the conversation it happened in, and both the user's and the reporter's Twitter networks. These consist of followers, people followed, people replied to, people mentioned, people mentioning, and people using the same hashtag. They can view much of this as a literal graph. All this can be pre-calculated and shown as soon as the ticket is opened, without any automated intervention in the tweets themselves. Show n-grams of word frequencies in reported tweets. In the recent example, they aren't subtle. Allow filtering by time window.
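The n-gram view with a time window is simple enough to sketch. This is a toy illustration, not Twitter's tooling; the tweets and the `ngram_frequencies` helper are invented for the example.

```python
from collections import Counter
from datetime import datetime, timedelta

# Hypothetical reported tweets: (timestamp, text) pairs.
reported = [
    (datetime(2013, 7, 27, 14, 0), "you deserve everything coming to you"),
    (datetime(2013, 7, 27, 14, 5), "everything coming to you is deserved"),
    (datetime(2013, 7, 28, 9, 30), "unrelated older complaint"),
]

def ngrams(text, n=2):
    """Yield word n-grams from a tweet's text."""
    words = text.lower().split()
    for i in range(len(words) - n + 1):
        yield " ".join(words[i:i + n])

def ngram_frequencies(tweets, since, window=timedelta(hours=24), n=2):
    """Count n-grams across reported tweets inside a time window."""
    counts = Counter()
    for ts, text in tweets:
        if since <= ts < since + window:
            counts.update(ngrams(text, n))
    return counts

freqs = ngram_frequencies(reported, since=datetime(2013, 7, 27))
print(freqs.most_common(3))  # repeated abusive phrases jump out immediately
```

The point is that the repeated phrasing of a pile-on surfaces with almost no machinery; the operator reads the top of the list rather than every tweet.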
Rank tickets in an automated way and relate to other abuse tickets. The time of the abuse team is limited, but the worst events are flash mobs. Make it easy to see when network-related abuse events are occurring by showing and linking abuse reports in the graph visualization above. Identify cliques implicated in abuse events, in the social and graph-theoretic senses. Probably once an abuse mechanism is established, there will be events where both sides are reporting abuse: make it easy to see that. And yes, show when identified users are involved – but don’t ditch pseudonymity as an account option.
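One crude way to rank tickets by mob involvement: link reported accounts to the accounts interacting with them, take connected components of that graph as a proxy for implicated cliques, and sort tickets so the largest cluster surfaces first. The ticket structure and helper names here are assumptions for illustration only.

```python
from collections import defaultdict

# Hypothetical abuse tickets: each names the reported account and the
# accounts interacting with it (mentions, replies) around the event.
tickets = [
    {"id": 1, "reported": "troll_a", "linked": {"troll_b", "troll_c"}},
    {"id": 2, "reported": "troll_b", "linked": {"troll_a"}},
    {"id": 3, "reported": "lone_wolf", "linked": set()},
]

def build_graph(tickets):
    """Undirected graph joining reported accounts to linked accounts."""
    graph = defaultdict(set)
    for t in tickets:
        for other in t["linked"]:
            graph[t["reported"]].add(other)
            graph[other].add(t["reported"])
        graph.setdefault(t["reported"], set())
    return graph

def components(graph):
    """Connected components - a crude proxy for implicated cliques."""
    seen, comps = set(), []
    for node in graph:
        if node in seen:
            continue
        stack, comp = [node], set()
        while stack:
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            stack.extend(graph[n] - comp)
        seen |= comp
        comps.append(comp)
    return comps

def rank_tickets(tickets):
    """Rank tickets so those in the largest connected mob come first."""
    graph = build_graph(tickets)
    size = {}
    for comp in components(graph):
        for node in comp:
            size[node] = len(comp)
    return sorted(tickets, key=lambda t: -size[t["reported"]])

for t in rank_tickets(tickets):
    print(t["id"], t["reported"])
```

Real clique detection in the graph-theoretic sense needs more than components, but even this cheap pass separates a coordinated flash mob from a lone account.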
Allow action on a subgraph: slowdowns and freezes. Up until now we have only described read-only tools. Through the same graphical view, identify subgraphs to be acted on. Allow operators to enforce slowdowns in tweeting – the tweet is still sent, but after a delay of minutes or hours. The advantage of being able to set a delay as short as, say, one minute is that it makes an investigation less obvious. A freeze is a halt on posting until further notice. The operator can choose to freeze or slow down any dimension of the graph – e.g. a hashtag, or all people who posted on that tag, or all people in a clique replying to certain users with a certain word. This is similar to a stock exchange trading halt. It has to be a manual action because it's based on human judgement and linguistic interpretation. Finally, allow account deletion, but not as a mass action.
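The slowdown/freeze mechanics amount to a gate in front of the outgoing tweet queue. A minimal sketch, assuming invented per-account state (`SLOWDOWN`, `FROZEN`) set by the operator; none of this reflects Twitter's actual internals.

```python
import heapq
from datetime import datetime, timedelta

# Hypothetical per-account moderation state set by the operator.
SLOWDOWN = {}   # account -> delivery delay (timedelta)
FROZEN = set()  # accounts halted until further notice

outbox = []  # min-heap of (deliver_at, account, text)

def submit_tweet(account, text, now):
    """Apply any slowdown or freeze before queueing a tweet for delivery."""
    if account in FROZEN:
        return False  # held until the freeze is lifted
    delay = SLOWDOWN.get(account, timedelta(0))
    heapq.heappush(outbox, (now + delay, account, text))
    return True

def deliver_due(now):
    """Release every queued tweet whose delay has elapsed."""
    due = []
    while outbox and outbox[0][0] <= now:
        due.append(heapq.heappop(outbox))
    return due

# Operator slows one account by a barely noticeable minute, freezes another.
SLOWDOWN["suspect"] = timedelta(minutes=1)
FROZEN.add("ringleader")

t0 = datetime(2013, 7, 27, 14, 0)
submit_tweet("bystander", "hello", t0)          # delivered immediately
submit_tweet("suspect", "pile-on", t0)          # held for one minute
print(submit_tweet("ringleader", "more", t0))   # False - frozen
print([t[1] for t in deliver_due(t0)])          # ['bystander']
print([t[1] for t in deliver_due(t0 + timedelta(minutes=1))])  # ['suspect']
```

Like a trading halt, the operator sets the state; the queue only enforces it.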
Capture and export all this data for use by a law enforcement agency you are willing to collaborate with.
Open the API and share at least some of the toolset source so people can get perspective on the shape of an attack when it happens. And of course, don’t do this at once – start with simple read only monitoring and iterate rapidly. Remember that the system will be gamed. Keep the poets out of gaol.

Big Powerful New Data

Power and language are both crucial currents for innovation. Two alternative tools for macro analysis of an economy's innovative capacity and output then suggest themselves. Firstly, a power-centric analysis of changes in the economy, in both physical and political senses. Secondly, linguistic analysis of mass printed and digital material produced in an economy, from standalone and comparative perspectives. These techniques can complement one another, given that shifts in power and language also interact. Power-centric analysis of technology is a technique introduced by Russell et al in "The Nature of Power: Synthesizing the History of Technology and Environmental History". An example of linguistic analysis of the economy is the R-word index run by The Economist, where the frequency of the word recession is used as an indicator of recession.
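An R-word-style indicator is trivially mechanical, which is part of its appeal. A toy sketch, with an invented quarterly corpus standing in for The Economist's actual article database:

```python
import re

# Hypothetical quarterly bundles of article text from one economy's press.
quarters = {
    "2008Q2": "growth slows but analysts see no recession yet",
    "2008Q4": "recession fears deepen as the recession takes hold",
}

def r_word_index(corpus_by_quarter, word="recession"):
    """Frequency of a chosen word per quarter - an R-word style indicator."""
    pattern = re.compile(r"\b%s\b" % re.escape(word), re.IGNORECASE)
    return {q: len(pattern.findall(text))
            for q, text in corpus_by_quarter.items()}

print(r_word_index(quarters))  # {'2008Q2': 1, '2008Q4': 2}
```

Swapping in other terms – a technology, a policy, a fuel – turns the same machinery into a rough innovation indicator, with all the corpus-availability caveats discussed below.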

In a power analysis of the economy, energy flows and transitions are modelled qualitatively or quantitatively. Using this lens, we may note that the rise of the Internet has accompanied surging electrical power needs in large, relatively centralized datacentres, with cloud computing being the current extreme of this. At the same time it has disintermediated middlemen such as travel agents. The move of labour – and the spending of biochemical energy – from a travel agent in an office to the consumer at home or on a smartphone in turn increases electricity demand from mobile phone towers and households. Given this analysis we can get insight into Google's investment and research into alternative energy and distributed generation technologies such as solar photovoltaics. We might also note that, globally, the Internet mostly runs on coal. Combining the physical energy analysis with political analysis, we can see where innovation actors are constrained by energy and whether shifts in power are dominated by local or foreign actors, be they wind power entrepreneurs or multinational oil companies.

A focus on physical power can yield quantitative metrics of joules and watts that are not available to more structural approaches such as the innovation systems model. It focuses on facts about the economy that are fairly readily available for most countries, including in comparative form. Though power analysis does include the labour market and its use of biochemical energy, this focus on economic output may make analysis of innovation capacity relatively indirect. How much did the energy use of a US mathematician change over the twentieth century, except as a consumer of productivity tools, such as computers, available to all professions? This is a technique pioneered by historians, and it may speak most clearly in retrospect, requiring extrapolations to deduce capacity that are more prone to subjective policy hobby horses.

The linguistic approach's strengths and weaknesses seem to complement power analysis. By focusing on words, it will tend to weight research and development activity more strongly, such as the use of terms in journal articles or social media. One weakness of linguistic analysis is that mass corpora of content must be available to do "big data" style analysis. A developing economy, particularly in the poorest parts of the world, may not produce enough readily available searchable content to discover meaningful shifts and opportunities. Relying on the linguistic approach too heavily in a poor developing country may skew policy too much to theoretical research and ignore useful innovations happening on the ground but not on Twitter.

The innovation systems approach may have a weakness that the initial categories of organization (university, R&D lab, etc) constrain future analysis, missing trends which cut across traditional organizations. In this way both power and linguistic analysis may surface perspectives that do not emerge as readily in the otherwise more comprehensive innovation systems approach, and thereby supplement it.