Captain Howdy: https://b3ta.com/board/11351222
Sir Clive Sinclair died last week – still best-known, perhaps despite his own intentions, for the burst of innovation that saw the production of a series of cheap, mass-produced computers in the late 1970s and early 1980s. It wasn’t clear at the time – and, if the BBC’s very good one-off drama Micro Men is any guide, it wasn’t clear to Sinclair himself – just how much the transformation of computers from big, unseen presences to small machines that could sit in your house would go on to transform the world.
I learned to program on a Sinclair QL, one of the two commercial flops that Sinclair Research tried to push in the mid-1980s – failures that led to the near-collapse of the company and the sale of its computer business to Alan Sugar’s Amstrad. (The other was the electric trike, the C5. Possibly this now looks ahead of its time; possibly not.) The QL was rushed to market well before the machines were ready to be sold, and its hardware was somewhat underpowered relative to the demanding software it came installed with – notably its multitasking operating system, QDOS, and its implementation of the BASIC language, SuperBASIC. Its Motorola 68008 CPU was a version of the same chip later installed in the 16-bit Amiga and Atari ST, but with the external data bus (the CPU’s link to the rest of the computer) restricted to 8 bits – a cost-saving measure, but one which meant, in practice, that the QL was always just a bit slow and sluggish.
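To see roughly what that narrower bus costs, here is a back-of-the-envelope sketch – my own illustration, with assumed figures rather than measured QL timings: the full 68000 fetches a 16-bit word in a single external bus transfer, while the 68008 needs two, so raw memory bandwidth roughly halves before any other overhead is counted.

```python
# Rough illustration (assumed figures, not real QL measurements):
# a full 68000 moves a 16-bit word in one external bus transfer,
# while the QL's 68008, with its 8-bit data bus, needs two.

def bus_transfers(total_bits: int, bus_width_bits: int) -> int:
    """Bus transfers needed to move total_bits over a bus of the given width."""
    return -(-total_bits // bus_width_bits)  # ceiling division

block_bits = 64 * 1024 * 8  # a 64KB block of code or data, in bits

print(bus_transfers(block_bits, 16))  # 68000: 32768 transfers
print(bus_transfers(block_bits, 8))   # 68008: 65536 transfers - twice as many
```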
To really complete the package, Sinclair Research spurned standard floppy disk drives in favour of their proprietary Microdrives, which were slow, unreliable, and had a derisory (even by 1985 standards) 100KB of storage space – but which were cheap to manufacture. Likewise the peculiar membrane keyboard – a step up from the dead-flesh feel of the ZX Spectrum’s rubber keys, but whose money-saving lack of responsiveness is probably responsible today for my attack-the-keys, smash-the-laptop approach to typing. Almost no-one bought the QL, which meant no-one wrote games for it – and in any case it wasn’t much good at them – so I had no choice but to learn to code, quite badly, before happily acquiring a secondhand Spectrum. (My dad, displaying an uncanny technological sixth sense, later bought a Betamax video recorder.)
But as bat020 pointed out, it was at least possible, with the first personal computers, to understand, just about, what the machine was doing. You could, in principle, stop the thing and work out what every part of it was up to at that point in time. The Spectrums and the QLs and the rest were hugely more limited than whatever we have today, and would only very rarely be connected to any other computer – the internet existed, but was monopolised by academics, and the first web browser wasn’t written until 1990. But once the machines became more complex, and more connected, this capacity for the user to understand the technology was lost: and of course today the overwhelming majority of computers sold, especially in the form of smartphones, are sold as black boxes, with understanding and modifying whatever is going on inside the box actively discouraged. As computer technology has spread, it has become harder to understand what the technology is doing – or even what impact it is really having.
Growth, or not
Take the claim made by Oliver Dowden, the recently-departed Secretary of State for Digital, Culture, Media and Sport, when talking up the little-noticed bonfire of data regulations that he and former Trade Secretary Liz Truss have been pushing through under the cover of Brexit. (The Open Rights Group have a detailed guide to the data-rights implications of the Japan-U.K. trade deal.) Ripping up existing protections on the use of data will apparently usher in a “new era of data-driven growth”.
It would have to be a new era. The hard truth is that there is no obvious relationship between the production of data and the production of economic growth. Take the last twenty years: global internet traffic has grown from 156GB transferred every second in 2002 to 150,000GB a second today – nearly a thousand times larger.
But the world economy, as measured by GDP (and in constant 2010 dollars), has grown from $52tr in 2002 to $82tr in 2020 – just over one and a half times larger. Both the economy as a whole and the internet have grown, but if there’s any meaningful relationship between data growth and economic growth then at first glance it is spectacularly inefficient: almost anything else you can think of – digging holes and filling them in again, say – would be a better route to producing economic growth than adding more data.
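For what it’s worth, here is the raw arithmetic, a quick sketch using only the figures quoted above:

```python
# Growth multiples from the figures quoted above (2002 baseline).
traffic_2002 = 156       # GB transferred per second, 2002
traffic_now = 150_000    # GB transferred per second today

gdp_2002 = 52e12         # world GDP, constant 2010 US dollars
gdp_2020 = 82e12

print(f"internet traffic: {traffic_now / traffic_2002:.0f}x")  # ~962x - roughly a thousandfold
print(f"world GDP:        {gdp_2020 / gdp_2002:.2f}x")         # ~1.58x - just over one and a half
```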
And take the economic history of the last two decades: data use expands continually, and at an increasing pace. But the world has seen two major crises and recessions during that time. It’s a stretch, obviously, to blame data itself for the crises of 2008 and 2020, but not only did recession not halt the growth of the data economy, recession actively accelerated it – dramatically so in the case of covid, with global internet traffic surging 40% between February and April last year. If anything, the link between data use and economic growth starts to look possibly negative: that more data might mean less growth.
It’s more than thirty years since Robert Solow quipped that “you can see the computer age everywhere but in the productivity statistics”. We invest huge amounts in digital technology. Data is ubiquitous. And yet productivity growth remains low. The measurable economic benefits of the computer revolution are always just ahead of us, somewhere. I say “measurable”: as Will Page of Spotify put it, quoted in the recent review of government statistics headed by Charles Bean, GDP “was originally designed to measure tangible manufactured goods which are losing relevance in the modern economy”. GDP is good at counting things physically bought and sold, but very bad at counting anything where a market transaction is not directly involved – which means, for example, that it severely understates the true value of care work. A lot of what takes place in the digital world, from streaming to writing tweets, will not be counted properly by conventional economic statistics.
But this shift into immaterial production doesn’t mean the economy has also dematerialised. Exactly the opposite: the internet uses around 10% of the electricity generated worldwide, forecast to rise to one-fifth by 2025. Meanwhile, a series of environmental upsets, from the outbreak of covid to fires and droughts, have severely disrupted the production of silicon chips, in turn breaking the supply chains for the immense number of manufactured products that now depend on them. The immaterial digital world has an immense, and growing, material footprint – and as the physical environment becomes more unstable, that footprint will be increasingly subject to disruption.
So at enormous expense and effort we are building systems we do not understand and whose real economic impact, as we usually think of it, is hard to locate. The point at which the “computer age” becomes apparent in the productivity statistics has now been deferred for three decades; it may well be time to admit that it never will appear – that if we are creating anything useful here, its use may lie not in additional dollars of GDP but in the wider and basically unmeasurable creative and social capacities that computers, especially when networked, can provide. And if we think of computers less as a black box to make us more productive in the narrow sense, and more as an open-ended machine to make us more creative in the wider one, we might also want to think about how their use can be changed to expand that creative potential whilst limiting the environmental impacts. Ville-Matias Heikkilä has suggested “permacomputing” as a low-impact redesign of computers and computing systems; more prosaically, the new “right to repair” should be expanded into a “right to reprogram”: breaking the cycle of continually producing newer and faster machines in favour of maximising the value and use of those we already have.
By happy coincidence, This Machine Kills took a break from their usual cutting-edge technological cynicism to respond to a brilliant essay by David Graeber and the archaeologist David Wengrow, “How to Change the Course of Human History”, which picks up the argument I touched on last week about the propensity of people to organise collectively – and extends the point to draw on what we know about the multiple ways the first cities and urban settlements were organised. Meanwhile, Jathan from TMK also popped up in the nigh-on essential Ten Thousand Posts, here talking about the Luddites.
I buried a link to it in the text but this b3ta classic probably warrants an actual plug. Nostalgia for nostalgia: Baudrillard had a point. As do Kraftwerk; I’d forgotten how good (and anti-nostalgia) this is.