Early last year in a state of growing frustration with the evolution of mainstream linux distributions, I wrote about Linux From Scratch and the benefits of having the freedom to carefully craft your operating system based on personal preferences. I am now more convinced than ever that we need the Linux From Scratch project to inspire momentum to counter the prevailing forces of uniformity which threaten to remove the creative element from computing altogether. I’m no longer participating in the boring cookie-cutter committee-designed mainstream distro upgrade circus. From now on, it’s my distro, my rules. My goal is to run LFS systems, in production, everywhere.

This post will explain my motivations in a bit more detail. You can read my EC2 hint here.

Because sudo apt-get install is cheating!

Last year I got back into Linux From Scratch for three reasons. The first, as explained in my post Linux From Scratch in 2016, was a growing dissatisfaction after setting up a new system. Just when I feel I’ve got a handle on how things are working, the distribution yanks the rug from underneath my proverbial feet and I have to follow along and do it differently, just because someone else thought it was a better idea. And yes, systemd was a big part of this dissatisfaction, as you can see in my little rant about systemd, but it was not the only reason. I wanted more choice, not merely in which packages to install but in the very basic operation of the system from the ground up.

The second reason was wanting to play with AVX-512 instructions in the new Skylake processors. This meant I needed a very recent stack and for this I had to go upstream to the source for the kernel and gcc. If we’re going to go that far, why not take it a step further back and build the whole system afresh with all the supporting packages specially tailored for my requirements?

But really there was a more fundamental underlying reason for going back to LFS. Naturally I was attracted by the educational aspect as it would be a wonderful refresher on all the pieces of the puzzle that make up a GNU/Linux system. This on its own is both very rewarding and enlightening. But I soon realized there was some deeper need I was feeling, a signal of a personal backlash against the recent seemingly unstoppable trend of commodification of the computing stack. I call it ‘disposable computing’ and my nostalgia-tinged sentiments against this trend led me to pen Legion of lobotomized unices.

After spending more time working with LFS at home, it came as a disappointing shock whenever I had to do something on my production EC2 systems. Here I had this perfect system at home, so I wondered: why can’t I run LFS in production? I got a little obsessed with the idea, even strongly considering spending large amounts of cash on physical hardware and a colocation facility where I could run LFS on a real box on the real internet. But for a man of modest means and no business plan, colocating a dev server just for fun would be an extravagance that even I, with my penchant for spending unnecessarily large amounts of money on computing, could not bring myself to indulge. There’s got to be a better way! Fortunately the answer was staring me in the face.

One of the many tangential excursions I took with my home LFS build involved setting up a virtualization lab with the Xen project. For those times when my little Xeon server didn’t need all 64 GB of RAM, I decided to carve up the territory into smaller virtual fiefdoms to experiment with various frameworks, like building my own Apache Spark cluster. After the dust settled, I wrote and submitted my Xen hint to the LFS Hints project [NB: the hint has not yet been merged into the official hints directory tree so I’ve linked to a copy stored here.]

Since AWS uses Xen to manage EC2 instances, I wondered if it would be possible to create my own AMI using LFS. After researching a bit, I discovered that it was actually very easy to do. Amazon now has a simple, streamlined process for creating a custom AMI. Of course they have a financial incentive for making EC2 as widely useful as possible! And when at last I had my own LFS system running on an EC2 instance, I was thrilled! No longer am I restricted to the major distros; I now have a very low cost way of running LFS in the cloud. I documented the process in a new hint for LFS on EC2 and will soon submit this to the LFS hints project as well.

In this post I’d like to add some color commentary and extra detail to the hint description. But first, I would like to dispel some of the common arguments I hear against using LFS for production systems. Since LFS is first and foremost an educational project, there is a long-standing bias—held to a degree even among some of its core developers—that it should not be used beyond its intended capacity as a tool for learning how a GNU/Linux system works. A great deal of attention in BLFS is devoted to the X Window System for building desktop environments, and the assumption is that many LFS users will be primarily interested in using the project for desktop use on a home system. For me personally, linux on the desktop is not nearly as high a priority as linux on the server.

In several online and offline discussions of LFS I’ve encountered the same tired knee-jerk refrain, “you can’t run that in production!” And my response is always the same: “Maybe you can’t, but I certainly can!” I’d like to review some of the arguments I commonly hear against LFS so that anyone else considering using it for real projects can assess the situation for themselves.

Argument 1: it doesn’t even have a package manager

It’s true that LFS doesn’t “come with” a package management system, but this doesn’t invalidate its utility as a server operating system. Package managers should be a convenience, not a crutch. This topic comes up frequently on the various LFS support mailing lists, and while there is no official package management solution built into LFS, there are several approaches you can take if you want the functionality of package management on LFS. To explore in more detail, start with the chapter in the LFS book on Package Management.
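One of the simpler approaches the book describes is installing each package into its own staging directory with DESTDIR and recording its file list before merging it into the live system. A minimal sketch, with a hypothetical package name and tracking directory:

    # Stage the install instead of writing directly to /
    make DESTDIR=/usr/pkg/foo-1.0 install

    # Record every file and symlink the package provides
    # (assumes /var/lib/pkg exists as a tracking directory)
    cd /usr/pkg/foo-1.0
    find . \( -type f -o -type l \) | sed 's|^\.||' > /var/lib/pkg/foo-1.0.list

    # Merge the staged tree into the running system
    cp -a /usr/pkg/foo-1.0/. /

With the file lists in place, auditing or removing a package later is a matter of reading its list back.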

Honestly, this is one of the things I do miss when running LFS, but again, the inconvenience of not having a built-in package management system does not invalidate the use of LFS for production servers. The LFS and BLFS projects are very actively maintained and make regular major releases approximately every six months—the same frequency as Fedora. Since these releases typically involve rebuilding the entire toolchain, it may make sense to rebuild the complete system rather than attempt piecemeal in-place upgrades of select critical packages. On the other hand, this is entirely up to the user. The development versions of both projects are updated continuously, so when new versions are released by upstream maintainers, instructions for them are usually introduced to LFS/BLFS very quickly. There is, of course, nothing stopping you from watching important packages on your own and deciding what and when to upgrade. Is this more inconvenient than using dnf, yum, or apt-get? Certainly, but the point with LFS is that you should know what is on your system and make the decision to upgrade when necessary.
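For what it’s worth, keeping an eye on a few critical packages takes very little tooling. A throwaway sketch of the sort of check I mean (binary paths may differ depending on how your build laid things out):

    #!/bin/sh
    # Print installed versions of some security-critical packages
    # for manual comparison against upstream release announcements.
    gcc --version | head -n1
    openssl version
    ssh -V 2>&1                            # OpenSSH prints its version on stderr
    /lib/libc.so.6 2>/dev/null | head -n1  # glibc reports its version when executed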

Argument 2: what about security?

What about security? This is usually a follow-on to the first argument about package management. Some people simply can’t seem to figure out how they would possibly secure their systems without a package manager. I’ve even heard claims that it would take “hours a day” to monitor all the packages on your system for security alerts and patches. Preposterous! I suppose these same critics’ idea of security is to stick sudo apt-get update && apt-get -y upgrade in a nightly cron job and walk away. If this is your idea of security, I can tell you I’ve seen it before and it doesn’t end well. There is no excuse or substitute for knowing what packages are on your systems, where the potential attack vectors lie, and what you need to monitor to be aware of potential threats.
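Knowing where those attack vectors lie does not require exotic tooling, either. Two commands available on any base LFS/BLFS system cover a lot of ground:

    # List every listening socket with its owning process: this is
    # the network-facing attack surface you need to monitor.
    ss -tulnp

    # Find setuid/setgid binaries, a classic local attack vector.
    find / -xdev \( -perm -4000 -o -perm -2000 \) -type f 2>/dev/null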

The BLFS book has some good pointers on how to keep up with security in the Vulnerabilities section of its Security chapter.

Argument 3: we need an easily scriptable deployment process

There is no doubt that an initial build of Linux From Scratch is a much more intensive time commitment than using a major distribution’s installer. However, there are several tools and methods one can use to make rolling out LFS onto new systems nearly as simple as if you were using an installer image. One sub-project to consider here is ALFS. At the center of this project is the jhalfs script, which can very nearly automate a complete LFS build. But this isn’t the only approach.
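I won’t walk through jhalfs here, but the general shape of a run is straightforward. A sketch from memory, so the repository location and details may differ between versions:

    # Fetch jhalfs (location illustrative; see the ALFS project page)
    svn co svn://svn.linuxfromscratch.org/jhalfs/trunk jhalfs
    cd jhalfs

    # 'make' brings up a menuconfig-style interface for selecting the
    # book version, target partition, and build options, then generates
    # a Makefile that drives the entire automated build.
    make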

The very point of this post is to introduce a new way that you can automate LFS rollouts, at least as far as EC2 is concerned. The main idea behind this is that once you have a successful LFS build, you can quite easily transfer it to another machine with the same architecture, with only a modest number of changes to system configuration files. I suppose I should now finally get to the point!
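As a taste of how modest those changes are: on a stock LFS system using the standard bootscripts, the per-host edits amount to a handful of files (the exact set depends on your build):

    /etc/fstab                    # root filesystem device will differ
    /etc/hostname                 # new host name
    /etc/hosts                    # local name resolution
    /etc/sysconfig/ifconfig.eth0  # network interface configuration
    /etc/resolv.conf              # DNS servers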

Here is a quick summary of the approach from the introduction in the hint:

First, we will prepare an LFS filesystem, either using one you have already built or following the book all the way up to the kernel build in Chapter 8. We will add a few essential BLFS packages, namely openssh and dhcpcd. Once our LFS system is staged, we’ll make a tar archive of it. Next, we’ll launch a new temporary EC2 instance to package our build, create a new EBS volume, attach it to the instance, then build an MBR partition table on the EBS volume. We’ll untar our LFS snapshot, create the virtual filesystems, rebuild the kernel, install grub, and set up ssh pubkey authentication. We’ll review a few essential system configuration files to customize our LFS system to run as its own host on the internet. When our customizations are complete, we’ll unmount the EBS volume, create a snapshot, then create an AMI from the snapshot. The final step is to launch a new EC2 instance using our custom AMI. At last you will have your own Linux From Scratch running in the cloud!
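To make that concrete, here is a heavily condensed sketch of the middle steps as they look on the staging instance. Device names and paths are illustrative; the hint has the full, correct sequence:

    # New EBS volume attached to the staging instance as /dev/xvdf
    parted /dev/xvdf mklabel msdos
    parted /dev/xvdf mkpart primary ext4 1MiB 100%
    mkfs -t ext4 /dev/xvdf1

    # Unpack the LFS snapshot onto the new volume
    mkdir -pv /mnt/lfs
    mount /dev/xvdf1 /mnt/lfs
    tar -xpf lfs-snapshot.tar.xz -C /mnt/lfs

    # Create the virtual kernel filesystems and chroot in, as in the
    # LFS book, to rebuild the kernel, install grub, and review the
    # system configuration files
    mount --bind /dev /mnt/lfs/dev
    mount -t proc  proc  /mnt/lfs/proc
    mount -t sysfs sysfs /mnt/lfs/sys
    chroot /mnt/lfs /bin/bash --login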

My initial plan was to conduct the build in EC2 itself. However, since I already had an x86_64 build of LFS 8.0 on my home system, I decided to try out making the AMI by taking a snapshot of my home system and moving it to an EBS volume to make the final changes.
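The snapshot itself is nothing fancy: just a tar archive of the finished LFS root taken from the host, skipping the virtual and volatile filesystems. Something like this, with the mount point illustrative:

    # With the finished LFS build mounted at /mnt/lfs on the host
    cd /mnt/lfs
    tar -cJpf /tmp/lfs-snapshot.tar.xz --numeric-owner \
        --exclude='./proc/*' --exclude='./sys/*' --exclude='./dev/*' \
        --exclude='./run/*' --exclude='./tmp/*' .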

If you are interested in using an EC2 instance to perform a complete LFS build, I would advise selecting at least a c4.large instance type. You can use any instance type, even the t2.micro covered under the introductory free tier, but the general purpose t2 instances have a limited CPU quota and will soon be throttled under the heavy compilation load that building LFS entails. The c4 compute optimized instances have more CPU resources allocated to them, so your SBUs will be shorter and more consistent.

For example, on a fresh t2.small instance with LFS 8.0, I initially got a binutils SBU of 2m23s. Then I compiled the linux kernel in a loop to run down the CPU credit balance. A t2.small is always guaranteed 20% of one CPU core, and it can burst up to 100% of a core while you have a positive balance of CPU credits; once the balance is exhausted, the instance is throttled back to its baseline. With my CPU credits spent, the SBU increased to 3m44s. Keep this in mind should you choose to do the build on EC2, since it might take quite a bit of time.

In any case, as noted above, it is perfectly possible to do most of the build at home. For the staging instance I used Fedora 26, which ships a 4.12-series kernel and so matches perfectly with LFS 8.1. Since the bulk of the compilation is behind us at this point, you don’t need a hefty instance type: I used a t2.small and the kernel build took only 12 minutes.
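If you want to reproduce the SBU comparison above, an SBU is simply the wall-clock time of the book’s first binutils pass. Roughly, with versions and configure switches abbreviated (see LFS chapter 5 for the real sequence):

    cd $LFS/sources
    tar -xf binutils-*.tar.* && cd binutils-*/
    mkdir -v build && cd build

    # One SBU is the elapsed time reported for this configure-plus-make
    time { ../configure --prefix=/tools --disable-nls --disable-werror && make; }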

All of the steps used in the hint can be conducted on the command line as well as the AWS console, but since this activity is not one that you are likely to be doing very frequently—and you may feel somewhat hesitant to script something that has the ability to charge your credit card—I have not included the commands.

I note in the hint that you should keep the AMI generated through this process private to your account. There are several additional steps one would take if you wanted to make the AMI publicly available. First, you would want to use EC2 Instance Metadata to install the SSH public key on first boot. In my hint, I merely copy the public key from the staging EC2 instance, so it is hard-coded in the AMI itself. Also hard-coded is everything you would have in your /etc/passwd file from whatever LFS build you were using to create the tar archive. When distributions build their cloud images, they will typically create a special user (centos, ubuntu, ec2-user) to be the default user as whom you would connect for the first time. They would also probably do something special with the root password, such as setting it to a very large random string and prohibiting root shells (relying on sudo instead).
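The metadata approach is exactly what the distro images implement via cloud-init. A minimal stand-in for a first-boot script might look like the following; the metadata URL is standard, while the choice of user and the use of wget (from BLFS) are assumptions for illustration:

    #!/bin/sh
    # Fetch the SSH public key selected at launch from the EC2 instance
    # metadata service and install it for the login user (root here,
    # purely for illustration).
    KEY_URL=http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key
    install -d -m 700 /root/.ssh
    wget -q -O - "$KEY_URL" >> /root/.ssh/authorized_keys
    chmod 600 /root/.ssh/authorized_keys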

There is another issue as well, concerning licenses. If you make an AMI public, you must then comply with the license terms of all the source packages. I can offer no advice on this issue. In addition, do not try to release such an AMI under the Linux From Scratch name, as that name belongs to the LFS project and you would be infringing on their intellectual property. Perhaps someday the LFS project will build and release their own cloud images, but this does stray a bit from their stated purpose of showing you, the user, how to build a linux system rather than simply handing it to you as if it were any other binary distro.