Before saying goodbye, a happy look backward.
Six months ago at the end of my last article, “DNS Cache Poisoning, Part II: DNSSEC Validation” (www.linuxjournal.com/article/11029), I mentioned I was going on hiatus. The good news is, this month I'm back! The sad news is, I don't plan to return to writing a monthly column. The demands of my day job and family life finally have overtaken my energy for late-night technical writing, so it's time for me to close this chapter of my writing career.
But what a long chapter it's been! Two things have surprised me about this experience. The first is how quickly the past 11 years have gone! The other is how long-lived some of these Paranoid Penguin articles have been. As a technical writer, you really don't expect people to remember tutorials you wrote six years ago, but I still get e-mail about some of my earliest pieces.
So, I thought it might be fun to touch on what, for me, have been some of my favorite Paranoid Penguin articles, topics and interview subjects through the years.
My relationship with Linux Journal began in the summer of 2000. Earlier that year, I'd given two presentations (on DNS security and Postfix) at Root Fest 2, a hacker convention in Minnesota, and I thought to myself, now that I've done all the work of researching these presentations, I wonder whether my favorite magazine would like an article on either topic?
With that, I wrote to the editors. I was delighted when they accepted my proposal for “Securing DNS and BIND” (www.linuxjournal.com/article/4198). I was relieved when they liked the finished article, which for the first and last time I completed a full week before deadline. But, I was quite surprised when they asked if I could submit another article for the same issue!
Being both puzzled and worried as to how I could pull that off, I called my good friend and Root Fest 2 co-presenter Brenno de Winter, and the result was “Using Postfix for Secure SMTP Gateways” (www.linuxjournal.com/article/4241). Thus, thanks to Brenno, I made my debut in the October 2000 issue with not one but two articles!
Not long afterward, I half-jokingly suggested “Hey, if you guys like my stuff that much, why not make me a regular columnist?” Apparently lacking a snappy rejoinder, the editors simply replied, “Sure!” The Paranoid Penguin had been hatched.
Although I didn't plan it this way, I think it's a happy coincidence that I both ended and began the column with pieces on DNS security. Just as much now as 11 years ago, the Domain Name System remains a critical Internet service with serious security limitations, whose infrastructure still relies heavily on Linux and other UNIX-like operating systems.
After a two-month period of editorial-staff turnover during which the magazine briefly forgot but then, thankfully, remembered they wanted me to be a columnist, I began churning out tutorials for implementing all of my favorite security applications on Linux, including Secure Shell in January and February 2001 (www.linuxjournal.com/article/4412 and www.linuxjournal.com/article/4413), Nmap in May 2001 (www.linuxjournal.com/article/4561), Nessus in June 2001 (www.linuxjournal.com/article/4685), GnuPG in September and October 2001 (www.linuxjournal.com/article/4828 and www.linuxjournal.com/article/4892), and Syslog in December 2001 (www.linuxjournal.com/article/5476).
Although I had been using Linux since 1995, I never called myself an expert. Rather, the sensibility I tried to convey was “if even I can get this to work properly, you can too!” This isn't false modesty. Although I consider myself to be a knowledgeable and experienced network security architect, the fact is I've never been a full-time Linux system administrator. For me, Linux always has been a means to an end.
So, if you ever got the sense that any of my articles resembled lab notes in prose form, that probably wasn't a complete coincidence! I'm not in the least bit embarrassed by that. A procedure that works is a procedure that works, no matter who writes it and why, and I've worked very hard over the years to produce tutorials that work reliably and verifiably.
(The fact that my readership always has included such an abundance of bona fide experts and all-around alpha geeks provided palpable incentives for getting things right! But I must say, as much as I used to worry about being exposed as a charlatan by some angry sysadmin or another, that day never came. I've been subject to plenty of criticism and error-correction during the years, but 99% of it has been constructive and kind, for which I've been abidingly grateful.)
That first year, I also wrote a couple of more general pieces, “Designing and Using DMZ Networks to Protect Internet Servers” (March 2001, www.linuxjournal.com/article/4415) and “Practical Threat Analysis and Risk Management” (January 2002, www.linuxjournal.com/article/5567). These were both pieces that involved skills I had exercised regularly in my day-to-day work as a security consultant, and they gave (I hope) some context to the tools I was tutorializing.
This was the pattern I tried to maintain through the subsequent decade: carefully researched and tested technical tutorials, interspersed now and then with higher-level security background.
The higher-level articles consisted of more than just me ranting about what I think constitutes good security. Sometimes, I let other hackers do the ranting, in candid interviews: Weld Pond (Chris Wysopal) in the September 2002 issue (www.linuxjournal.com/article/6126); Richard Thieme in December 2004 (www.linuxjournal.com/article/7934); Marcus Meissner in October 2008 (www.linuxjournal.com/article/10183); Anthony Lineberry in August 2009 (www.linuxjournal.com/article/10505); and most recently, “Ninja G” in March and April 2011 (www.linuxjournal.com/article/10970 and www.linuxjournal.com/article/10996).
The Richard Thieme and Ninja G interviews were especially fun for me to write, because in both cases the entire exercise amounted to replicating in print exactly the type of private conversations I've enjoyed with Richard and G through the years at DEF CON and elsewhere. And sure enough, they each rose to the occasion, displaying in their own ways not only technological brilliance, but also fascinating opinions and stories about many other things besides, including Homeland Security, hacking as quest for truth, ninjutsu and nautical martial arts.
Besides interviewing hacker celebrities, I also wrote a couple of product reviews: BestCrypt in June 2002 (www.linuxjournal.com/article/5938) and Richard Thieme's book Islands in the Clickstream in March 2005 (www.linuxjournal.com/article/7935). You may wonder, why so few reviews, given what an excellent way this is to obtain free products?
First, I can recall attempting at least two other evaluations: one was of some WLAN (802.11b) host adaptors that were supposedly Linux-compatible, and the other was of a miniature embedded computer platform that supposedly was optimized for Linux. In both cases, I failed to get the evaluation hardware working properly. Because “it doesn't work” falls rather short of the 2,500-word submission quota I usually had to meet, I chose different topics for those two months' columns.
Four attempted reviews (two successful) in 11 years isn't a very high rate, I admit. The other reason I didn't attempt more of them was philosophical. It seemed more immediately useful for me to stick mainly to writing tutorials on popular free software tools than to evaluate commercial products that, in many cases, were redundant with such tools.
Which isn't to say I was or am against commercial software. For example, by covering the free (GPL) version of the Zorp firewall in March and April 2004 (www.linuxjournal.com/article/7296 and www.linuxjournal.com/article/7347), I indirectly gave a minor boost to the commercial Zorp Pro, which (at that time, at least) was configured in a very similar way. Rather, I chose to focus mainly on free software, because I could and because it felt good to support developers to whom I felt I owed something.
Some ubiquitous tools, like BIND and iptables, I covered more than once through the years. With others, I may have written about them only once in Linux Journal, but revisited them in further depth when I wrote the book Building Secure Servers With Linux in 2002, and its second edition, retitled Linux Server Security in 2005. (Like many of my articles, I've been pleasantly surprised at how much of Linux Server Security is still relevant. But the main reason I mention the book here is that it grew directly out of my Paranoid Penguin columns!)
Other tools, however, I was happy to abandon shortly after figuring out how to operate them properly. In one case, the tool itself wasn't bad; the underlying protocol, of which the tool was simply an implementation, was and is hopelessly convoluted.
I'm going to indulge myself in a little coyness and not name these tools I found excuses to abandon. Just because I don't enjoy using something doesn't mean I'm not grateful to those who donated their time and talent to develop it.
What I'm really trying to say is that complexity-fatigue is still one of Linux's biggest ongoing challenges. Even hackers sometimes are overwhelmed by how complicated it can be to get a single piece of software running properly under Linux. By “complexity”, I don't just mean “requiring the use of a command prompt”. On the contrary, I have an abiding fondness for applications that pull all their configuration information from a single text file, rather than scattering settings across multiple files (or worse, in a binary data file that can be modified only by some GUI tool).
Have you ever noticed that one of the highest forms of praise we give Linux distributions and applications is “it just works”? This, in my opinion, is a big reason Ubuntu has been so successful. An almost unprecedented percentage of things in Ubuntu “just work”. This speaks not only to Ubuntu's stability and sensible default settings, but also to how easy it is to configure it properly.
I don't value simplicity just because I'm mentally lazy (which I totally admit to being). Complexity is the enemy of security. It makes it harder to spot configuration errors, it leads to unforeseen dependencies and interactions, and it incites otherwise-upstanding and industrious system administrators to take shortcuts they wouldn't ordinarily contemplate.
Which, if you've been reading the column a while, probably is something you've read here before. Across all these different applications and technologies I've researched, tested and written about, I've seen a number of recurring themes and commonalities.
First, the key to securing anything, be it a single application or an entire operating system, is to disable unnecessary functionality and to leverage available (and relevant) security capabilities fully.
Second, the worst way to use any Linux tool is to succumb to the notion that only root can do anything useful. The more things running as root on your system, the more things an attacker might be able to abuse in a way that leads to total system compromise. Therefore, it's important to run processes under less-privileged accounts or to use SELinux or AppArmor to curb root's omnipotence.
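That second principle lends itself to a quick sketch. Here's a minimal, purely illustrative Python example of the classic daemon privilege drop: start as root only long enough to grab privileged resources, then permanently switch to an unprivileged account. The function name and the choice of the "nobody" account are my own illustration, not taken from any particular tool:

```python
import os
import pwd

def drop_privileges(username="nobody"):
    """If running as root, permanently switch to an unprivileged
    account. The 'nobody' default is purely illustrative; a real
    daemon would use a dedicated service account."""
    if os.getuid() != 0:
        return False              # already unprivileged; nothing to drop
    pw = pwd.getpwnam(username)
    os.setgroups([])              # shed root's supplementary groups first
    os.setgid(pw.pw_gid)          # drop group before user, while we still can
    os.setuid(pw.pw_uid)          # irreversible: there's no way back to root
    return True

if __name__ == "__main__":
    if drop_privileges():
        print("dropped to unprivileged account")
    else:
        print("already running unprivileged")
```

The ordering matters: setgroups() and setgid() must happen before setuid(), because once the process gives up UID 0, it no longer has the privilege to change its group memberships.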
Third, firewalls are neither dead, irrelevant nor obsolete. With so much of our network use focused on the browser, and with mainstream firewalls (including Linux's Netfilter/iptables) having made little progress overall in gaining application visibility and intelligence, firewalls certainly are less helpful than they were 11 years ago. But this doesn't mean we can live without firewalls. It just means we need to find additional controls (application gateways/proxies, encryption tools and so forth) to pick up the slack.
Fourth, we're still suffering from a general lack of security controls in protocols on which the Internet relies. SMTP, DNS and even IP itself were created long ago, at a time when computer networks were exotic and rare. Even TLS/SSL, whose sole purpose is to add security to Web transactions, was designed with a very primitive and limited trust model that has not stood up very well against man-in-the-middle attacks or against Certificate Authority breaches.
Securing these old protocols, like securing Linux itself, usually amounts to implementing new security features (SMTP STARTTLS, DNSSEC and so forth) that have entered the mainstream only recently.
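To give one concrete flavor of what that hardening looks like in practice: in Postfix (the very MTA Brenno and I wrote about back in 2000), enabling opportunistic STARTTLS for inbound mail comes down to a few main.cf settings. This is a sketch, not a complete TLS configuration, and the certificate paths are illustrative:

```
# /etc/postfix/main.cf -- opportunistic STARTTLS for inbound SMTP
smtpd_tls_security_level = may
smtpd_tls_cert_file = /etc/ssl/certs/mail.example.com.pem
smtpd_tls_key_file  = /etc/ssl/private/mail.example.com.key
```

The "may" level offers TLS to clients that support it while still accepting plaintext from those that don't, which is exactly the incremental, backward-compatible style of fix these old protocols tend to require.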
On a related note, in January 2010, I wrote a column titled “Linux Security Challenges 2010” (www.linuxjournal.com/article/10647). I'm both pleased and depressed by how much of it still seems relevant, nearly two years later. Suffice it to say that virtualization, cloud computing, man-in-the-middle attacks against TLS/SSL and targeted malware have, collectively, made it that much more imperative to do the hard work of securing our systems and applications, and to find new ways both to implement “least-privilege” security models and to make it easier to run applications securely.
So here I am, 11 years after I started, paranoid about nearly all the same things I was paranoid about when I began, just more so. Am I worried? Not really. On the contrary, I'm comforted knowing that so many things both bad and good about how we understand security to work appear to be more or less constant. This doesn't get us off the hook for keeping current with new attacks and new technologies. It does, however, mean that what we knew yesterday will make it easier for us to learn what we need to know tomorrow, in order to operate securely.
Thank you, Jill Franklin, Carlie Fairchild and my other dear friends at Linux Journal, and especially to you, my engaged, inquisitive and altogether remarkable readers, for accompanying me on this 11-year journey and for making it possible for me to learn so much in such a public way. I don't know when I'll be writing about Linux security again, but I know that it will be here.
I hope that in the meantime, you remain safe!