Over the past two decades, changes have been underway with profound consequences for both social organization and system design. Virtual machines, cloud computing, and containers are reducing the need for general-purpose multi-user systems and the stewardship that maintaining such systems requires. At the same time, while we live in an age in which we are more connected than ever, we are increasingly cut off from one another because the systems we use are isolated clients. The loss of centralized loci of computing has changed the way we work and communicate online, making it harder to collaborate and leaving us more isolated by removing the shared spaces that once brought us together. Unix systems, which used to be the durable social centers of computing, have been replaced by a disposable legion of lobotomized unices.

Back in the olden days, when uptimes were measured in years and Unix systems were lovingly hand-maintained by cadres of grey-bearded sysadmins, it was common for individual systems to evolve a unique personality. No two boxen were exactly alike—hardware quirks, hacky workarounds, custom patches, niche startup scripts, and years of general wizardry led to these machines taking on a life of their own. These machines were given real names and referred to by them. Joining uni or a new company, you’d be introduced to them as if they were one of your fellow students or close co-workers. They became the social center of the campus or office complex: all the applications one needed for work and communication were in one place.

In the era of cloud computing we have sadly lost some of this magic, replacing it with highly automated, fully reproducible devops recipes in which provisioning scripts ensure that every server image is exactly identical and ready for instant deployment. If anything goes wrong, the server is simply terminated and the Puppet/Chef/Ansible scripts adjusted to prevent any subtle imperfections. Individual machines matter much less because in a high-availability, fault-tolerant architecture, you expect systems to fail while your service continues to operate on a failover node.

Rebooting used to be a mark of failure as a sysadmin: you couldn’t figure out what was going wrong and had to resort to the nuclear option. Today, we don’t even bother to reboot the system. Instead, we destroy the whole thing and start over. What went wrong? Who cares! Just reinstall the OS from a fresh image, install the necessary packages, adjust the launch scripts, and the problem goes away. These systems never have time to develop their own personalities; they’ve been surgically altered into simple brain-dead repositories of executable functions. There’s no place for users to interact, to explore, to share. The server is now a disposable vessel to be discarded when the single task it was assigned has completed. We don’t even bother to give them personalized names anymore, just something boring like node-407 or ec2-nnn-nn-nn-nnn. By default, cloud-init wipes out the hostname and replaces it with the public DNS address of the network interface. It’s no longer common to even have your own username and home directory. Instead we all log in boringly as “ubuntu” or “ec2-user”.

There is a good reason why the w command is just a single letter. It was one of the first things you would run after logging in from your terminal, to see who else was online and what they were doing: reading mail or newsgroups, writing a paper or code with vi or emacs, using ftp or gopher, compiling programs, chatting via the talk daemon, or using telnet to remote in to some other system.
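For readers who never saw it, the experience looked roughly like this (an illustrative session with made-up usernames and hosts, not a transcript from any real machine):

    $ w
     9:47pm  up 423 days,  3:12,  4 users,  load average: 0.15, 0.12, 0.09
    USER     TTY      FROM             LOGIN@   IDLE   JCPU   PCPU  WHAT
    alice    ttyp1    lab-terminal     8:15pm   3:00   0.40s  0.10s vi thesis.tex
    bob      ttyp2    dialup-07        9:01pm  12:00   1.20s  0.30s trn
    carol    ttyp3    remote.example   9:30pm   0:00   0.15s  0.05s talk bob
    dave     ttyp4    console          9:45pm   0:00   0.05s  0.02s w

One command, and you knew the building was occupied.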

I grew up at a time when it was common to see hundreds of other users all working on the same Unix system together. Even though “the web” didn’t exist, “the net” was abuzz with social activity. The finger command would print out the user information associated with your account, and the .plan and .project files saved in home directories were the original status updates, used to jot down ideas and keep others informed and inspired about what we were working on. You could even use finger remotely to find your friends on other Unix systems anywhere else in the world. This was back when the motd was likely to contain a pithy quotation from the fortune file, not just a dry legal notice about the Computer Misuse Act: a tiny yet significant reminder that the system was meant to be used by humans, all working together. Now there’s no excitement when logging in, because there’s nobody else there. It’s just solitude, loneliness. None of the shared magic that made computing social in the first place.
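A sketch of how that felt, with a hypothetical user, host, and .plan shown purely for flavor:

    $ finger alice@cs.example.edu
    Login: alice                        Name: Alice Example
    Directory: /home/alice              Shell: /bin/csh
    Plan:
    Rewriting the scheduler notes this week.
    Ask me about the gopher gateway.

Your .plan was whatever you wanted the world to know, and anyone, anywhere, could ask for it.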

Historically, that era was only a brief sliver of time, and there’s no sense arguing against the engine of progress. And perhaps I am to some extent nostalgic for it simply because it coincided with my own coming of age. But we’ve lost some of this magic. We’ve taken powerful multi-process, multi-user systems and relegated them to single-task, single-user specialization.

Now, with the trend toward containers, we’ve taken the disposable model one step further: entire operating system images running as processes inside a host OS. We’ve decided that system and application configuration is so complicated that it makes more sense to isolate everything in a black box. Forget about trying to get everything to work in the same process space; that’s too difficult. Just plug and play, no sysadmin needed.

One of the ways we can recapture this lost era is to reject the disposable model in favor of durable systems. We should treat stewardship of a server as a responsibility, and bring back appreciation for the beauty of the hand-crafted system. Build systems that will last, where multiple services can coexist effectively, and where users will want to return to work, communicate, and set up their .project files.