The Constant Risk of a Consolidated Internet
Like many “verified” Twitter users who compose its obsessive elite, I was briefly unable to tweet as the hack played out, Twitter having taken extreme measures to try to quell the chaos. I updated my password, a seemingly reasonable thing to do amid a security breach. Panicked, Twitter would end up locking accounts that had attempted to change their password in the past 30 days. A handful of my Atlantic colleagues had done the same and were similarly frozen out. We didn’t know that at the time, however, and the ambiguity brought delusions of grandeur (Am I worthy of hacking?) and persecution (My Twitterrrrrr!). After less than a day, most of us got our accounts back, albeit not without the help of one of our editors, who contacted Twitter on our behalf.
The whole situation underscores how centralized the internet has become: According to the Times report, one hacker secured entry into a Slack channel. There, they found credentials to access Twitter’s internal tools, which they used to hijack and resell accounts with desirable usernames, before posting messages on high-follower accounts in an attempt to defraud bystanders. At The Atlantic, those of us caught in the crossfire were able to quickly regain access to the service only because we work for a big media company with a direct line to Twitter personnel. The internet was once an agora for the many, but those days are long gone, even if everyone can tweet whatever they want all the time.
It’s ironic that centralization would overtake online services, because the internet was invented to decentralize communications networks—specifically to allow such infrastructure to survive nuclear attack.
In the early 1960s, the commercial telephone network and the military command-and-control network were at risk. Both used central switching facilities that routed communications to their destinations, kind of like airport hubs. If one or two of those facilities were to be lost to enemy attack, the whole system would collapse. In 1962, Paul Baran, a researcher at RAND, had imagined a possible solution: a network of many automated nodes that would replace the central switches, distributing their responsibility throughout the network.
The following year, J. C. R. Licklider, a computer scientist at the Pentagon’s Advanced Research Projects Agency (ARPA), conceived of an Intergalactic Computer Network that might allow all computers, and thereby all the people using them, to connect as one. By 1969, Licklider’s successors had built an operational network after Baran’s conceptual design. Originally called the ARPANet, it would evolve into the internet, the now-humdrum infrastructure you are using to read this article.
Over the years, the internet’s decentralized design became a metaphor for its social and political ethos: Anyone could publish information of any kind, to anyone in the world, without the assent of central gatekeepers such as publishers and media networks. Tim Berners-Lee’s World Wide Web became the most successful interpretation of this ethos. Whether a goth-rock zine, a sex-toy business, a Rainbow Brite fan community, or anything else, you could publish it to the world on the web.