cellular phones and airplane navigation systems

Here’s an interesting tidbit: apparently the use of cellular telephones on board airplanes does not interfere with navigation systems. It does, however, wreak havoc upon base stations when you’re flying past them at 500 mph.

I didn’t know this until NewsScan Daily posted it. Here’s the excerpt:

MOBILE PHONES CLEARED FOR TAKEOFF?
Contrary to popular airline lore, mobile phones don’t really interfere with airline navigation systems. The real reason phones are banned during flight is that they disrupt mobile networks on the ground as they zoom from one base-station to the next at 500 miles an hour. But that problem is about to be solved, thanks to new technology that will prevent cell phone signals from leaving the airplane cabin. Instead, a laptop-sized base station, called a “picocell,” will emit a network signal that will enable onboard cell phones to “roam” — eliminating any interference with avionics and terrestrial networks. The new technology is the creation of WirelessCabin, a consortium led by the German Aerospace Center and including members such as Airbus, Siemens and Ericsson. It is designed for cell phones using the European-dominant GSM standard and also supports the popular Wi-Fi protocol. A similar system targeting business jets will be flight-tested this year, and European and U.S. regulatory bodies are developing rules to address the use of wireless devices in flight. Airlines likely will team with wireless carriers or satellite operators to administer the in-flight mobile calling systems, and may try tying the service to their frequent flyer programs, offering members lower rates or flyer miles when they make calls. (The Economist 1 Apr 2004) http://www.economist.com

I have to wonder why we’ve been lied to all these years. Perhaps it’s because it would be too difficult to explain to Random Clueless Guy what the actual problem is.

Conferences are for the elite, part 2

On May 23 last year I wrote about how the pricing of conferences puts them beyond the reach of many qualified individuals who could benefit from them. I took some heat for this on the SAGE-members mailing list.

Without trying to gloat, I’d like to point out that my concerns have been touched on by a number of the candidates in the upcoming USENIX elections. I quote:

Although the economy has been slowly recovering after the hi-tech implosion, with some signs of life in the computer industry, USENIX continues to face a host of challenges. Companies are still extremely conservative with travel budgets, a trend which continues to threaten attendance levels at USENIX’s upcoming conferences. — Theodore Ts’o, Candidate for Treasurer

Over the past several years, attendance at some of our flagship conferences has dropped, and the financial picture is not as strong as it could be. Some of this can be attributed to economic woes, but the Association must still look carefully at its meetings to see how to make them as useful to our constituencies as possible. — Brian Noble, Candidate for Director

USENIX is primarily a conference organisation. This is both our strength and our weakness. I believe USENIX must grow beyond our present focus on conferences and beyond our present membership base (whilst always continuing to support this most important part of our purpose). — Geoff Halprin, Candidate for Director

Today a lot of people in the Free and Open Source movement are not paid by their employers to develop software. At a recent Linux conference, fully half of the attendees paid their registration and travel fees themselves. Employers who do fund their people to attend want to know how much better they will be able to do their job after attending our conference/training sessions. USENIX has to reduce the cost of the conferences, so that we can continue to help spread the knowledge about software design. — Jon “maddog” Hall, Candidate for Director

I’m glad that USENIX is finally waking up to the fact that high conference fees limit attendance to people from large, research-oriented corporations and universities, and that this is to the detriment of the organization. I’m particularly thrilled by maddog’s enlightened remarks about how conferences must cut costs in order to reach the maximum number of people. (You can read his statement here.)

All of the candidates this year appear to be exceptionally well-qualified for the positions they are seeking. I wish them all the best of luck.

messy Linux dmesgs

Season’s greetings, everyone! It’s time for yet-another-edition of Things In IT That Bug Me. Today’s victim is overly chatty Linux dmesgs. This may seem like a frivolous complaint, but the dmesg is one of the first things one sees when booting an operating system, and a ridiculously verbose bootup sequence makes Linux look like it’s patched together with no overarching control. Basically, I don’t think 90% of end-users care about seeing:

  • Memory address space allocation dumps
  • The compiler used to create the kernel
  • RCS ID strings, version numbers, names and companies of the authors of various pieces
  • Debugging information only useful to the developers of a particular piece

I’m a big fan of the way the BSD kernel messages are structured. With a few exceptions, all one really needs to know when the OS is booting up is what devices were detected. And that’s all.

Just have a look at the following bootup sequence from my work machine. Do you really think an end-user cares, for example, that "Linux NET4.0 for Linux 2.4" is "[b]ased upon Swansea University Computer Society NET3.039" or that the USB UHCI driver was committed on October 11 at 3:36 p.m. with revision 1.275, or that Richard Gooch (rgooch@atnf.csiro.au) wrote the mtrr driver? I highly doubt someone is going to e-mail Richard Gooch directly based on the contents of the dmesg, but this shows up on every Linux dmesg.

The following dmesg is nearly 140 lines long. Booting FreeBSD on the same machine yields a dmesg that’s around 80 lines. It’s time that Linux got its act together and cleaned up the messy dmesg, or the problem will continue to balloon out of control.

My dmesg:


Linux version 2.4.20-20.9.XFS1.3.1 (root@naboo.americas.sgi.com) (gcc version 3.2.2 20030222 (Red Hat Linux 3.2.2-5)) #1 Sat Oct 11 15:23:43 CDT 2003
BIOS-provided physical RAM map:
BIOS-e820: 0000000000000000 - 00000000000a0000 (usable)
BIOS-e820: 00000000000f0000 - 0000000000100000 (reserved)
BIOS-e820: 0000000000100000 - 000000003ff77000 (usable)
BIOS-e820: 000000003ff77000 - 000000003ff79000 (ACPI NVS)
BIOS-e820: 000000003ff79000 - 0000000040000000 (reserved)
BIOS-e820: 00000000fec00000 - 00000000fec10000 (reserved)
BIOS-e820: 00000000fee00000 - 00000000fee10000 (reserved)
BIOS-e820: 00000000ffb00000 - 0000000100000000 (reserved)
127MB HIGHMEM available.
896MB LOWMEM available.
On node 0 totalpages: 262007
zone(0): 4096 pages.
zone(1): 225280 pages.
zone(2): 32631 pages.
Kernel command line: auto BOOT_IMAGE=2.4.20-20.9.XFS ro BOOT_FILE=/boot/vmlinuz-2.4.20-20.9.XFS1.3.1 hdd=ide-scsi root=LABEL=/
ide_setup: hdd=ide-scsi
Initializing CPU#0
Detected 1993.983 MHz processor.
Console: colour VGA+ 80x25
Calibrating delay loop... 3971.48 BogoMIPS
Memory: 1026556k/1048028k available (1407k kernel code, 17896k reserved, 1072k data, 136k init, 130524k highmem)
kdb version 4.3 by Keith Owens, Scott Lurndal. Copyright SGI, All Rights Reserved
Dentry cache hash table entries: 131072 (order: 8, 1048576 bytes)
Inode cache hash table entries: 65536 (order: 7, 524288 bytes)
Mount cache hash table entries: 512 (order: 0, 4096 bytes)
Buffer-cache hash table entries: 65536 (order: 6, 262144 bytes)
Page-cache hash table entries: 262144 (order: 8, 1048576 bytes)
CPU: Trace cache: 12K uops, L1 D cache: 8K
CPU: L2 cache: 512K
Intel machine check architecture supported.
Intel machine check reporting enabled on CPU#0.
CPU: After generic, caps: bfebfbff 00000000 00000000 00000000
CPU: Common caps: bfebfbff 00000000 00000000 00000000
CPU: Intel(R) Pentium(R) 4 CPU 2.00GHz stepping 07
Enabling fast FPU save and restore... done.
Enabling unmasked SIMD FPU exception support... done.
Checking 'hlt' instruction... OK.
POSIX conformance testing by UNIFIX
mtrr: v1.40 (20010327) Richard Gooch (rgooch@atnf.csiro.au)
mtrr: detected mtrr type: Intel
PCI: PCI BIOS revision 2.10 entry at 0xfbe5e, last bus=2
PCI: Using configuration type 1
PCI: Probing PCI hardware
Transparent bridge - Intel Corp. 82801BA/CA/DB PCI Bridge
PCI: Using IRQ router PIIX [8086/2440] at 00:1f.0
isapnp: Scanning for PnP cards...
isapnp: No Plug & Play device found
Linux NET4.0 for Linux 2.4
Based upon Swansea University Computer Society NET3.039
Initializing RT netlink socket
apm: BIOS version 1.2 Flags 0x03 (Driver version 1.16)
Starting kswapd
allocated 32 pages and 32 bhs reserved for the highmem bounces
VFS: Disk quotas vdquot_6.5.1
pty: 2048 Unix98 ptys configured
Serial driver version 5.05c (2001-07-08) with MANY_PORTS MULTIPORT SHARE_IRQ SERIAL_PCI ISAPNP enabled
ttyS0 at 0x03f8 (irq = 4) is a 16550A
ttyS1 at 0x02f8 (irq = 3) is a 16550A
Real Time Clock Driver v1.10e
Floppy drive(s): fd0 is 1.44M
FDC 0 is a post-1991 82077
NET4: Frame Diverter 0.46
RAMDISK driver initialized: 16 RAM disks of 4096K size 1024 blocksize
Uniform Multi-Platform E-IDE driver Revision: 7.00beta3-.2.4
ide: Assuming 33MHz system bus speed for PIO modes; override with idebus=xx
ICH2: IDE controller at PCI slot 00:1f.1
ICH2: chipset revision 4
ICH2: not 100% native mode: will probe irqs later
ide0: BM-DMA at 0xffa0-0xffa7, BIOS settings: hda:DMA, hdb:pio
ide1: BM-DMA at 0xffa8-0xffaf, BIOS settings: hdc:DMA, hdd:DMA
hda: ST340016A, ATA DISK drive
blk: queue c03ed4e0, I/O limit 4095Mb (mask 0xffffffff)
hdc: Lite-On LTN486S 48x Max, ATAPI CD/DVD-ROM drive
hdd: HL-DT-ST GCE-8481B, ATAPI CD/DVD-ROM drive
ide0 at 0x1f0-0x1f7,0x3f6 on irq 14
ide1 at 0x170-0x177,0x376 on irq 15
hda: attached ide-disk driver.
hda: host protected area => 1
hda: 78165360 sectors (40021 MB) w/2048KiB Cache, CHS=4865/255/63, UDMA(100)
ide-floppy driver 0.99.newide
Partition check:
hda: hda1 hda2 hda3
ide-floppy driver 0.99.newide
md: md driver 0.90.0 MAX_MD_DEVS=256, MD_SB_DISKS=27
md: Autodetecting RAID arrays.
md: autorun ...
md: ... autorun DONE.
NET4: Linux TCP/IP 1.0 for NET4.0
IP Protocols: ICMP, UDP, TCP, IGMP
IP: routing cache hash table of 8192 buckets, 64Kbytes
TCP: Hash tables configured (established 262144 bind 65536)
Linux IP multicast router 0.06 plus PIM-SM
NET4: Unix domain sockets 1.0/SMP for Linux NET4.0.
RAMDISK: Compressed image found at block 0
Freeing initrd memory: 394k freed
VFS: Mounted root (ext2 filesystem).
SGI XFS 1.3.1 with ACLs, no debug enabled
SGI XFS Quota Management subsystem
XFS mounting filesystem ide0(3,2)
Ending clean XFS mount for filesystem: ide0(3,2)
Freeing unused kernel memory: 136k freed
usb.c: registered new driver usbdevfs
usb.c: registered new driver hub
usb-uhci.c: $Revision: 1.275 $ time 15:36:30 Oct 11 2003
usb-uhci.c: High bandwidth mode enabled
PCI: Found IRQ 11 for device 00:1f.2
PCI: Setting latency timer of device 00:1f.2 to 64
usb-uhci.c: USB UHCI at I/O 0xff80, IRQ 11
usb-uhci.c: Detected 2 ports
usb.c: new USB bus registered, assigned bus number 1
hub.c: USB hub found
hub.c: 2 ports detected
PCI: Found IRQ 9 for device 00:1f.4
PCI: Setting latency timer of device 00:1f.4 to 64
usb-uhci.c: USB UHCI at I/O 0xff60, IRQ 9
usb-uhci.c: Detected 2 ports
usb.c: new USB bus registered, assigned bus number 2
hub.c: USB hub found
hub.c: 2 ports detected
usb-uhci.c: v1.275:USB Universal Host Controller Interface driver
usb.c: registered new driver hiddev
usb.c: registered new driver hid
hid-core.c: v1.8.1 Andreas Gal, Vojtech Pavlik
hid-core.c: USB HID support drivers
mice: PS/2 mouse device common for all mice
hub.c: new USB device 00:1f.2-1, assigned address 2
Adding Swap: 1044216k swap-space (priority -1)
input0: USB HID v1.10 Mouse [Logitech USB Optical Mouse] on usb1:2.0
XFS mounting filesystem ide0(3,1)
Ending clean XFS mount for filesystem: ide0(3,1)
hdc: attached ide-cdrom driver.
hdc: ATAPI 48X CD-ROM drive, 120kB Cache, UDMA(33)
Uniform CD-ROM driver Revision: 3.12
SCSI subsystem driver Revision: 1.00
hdd: attached ide-scsi driver.
scsi0 : SCSI host adapter emulation for IDE ATAPI devices
Vendor: HL-DT-ST Model: CD-RW GCE-8481B Rev: C102
Type: CD-ROM

When I get a chance, I’ll capture a FreeBSD dmesg on this same box and you can see how much cleaner it is.
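
In the meantime, a crude filter gets you something closer to the BSD signal-to-noise ratio. This is only a sketch: the pattern below is tuned by eye against the dmesg above and will need adjusting for other hardware.

# Keep only the lines that describe detected hardware.
# The pattern is a rough guess based on the output above; adjust to taste.
dmesg | grep -E '^(CPU:|Floppy|hd[a-z]:|ide[01]|ttyS|input[0-9]|mice:|scsi[0-9]|Vendor:)'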

something lost in translation

Company name of the offender removed for their protection (they’re already in Chapter 11 – no need to help them along here)

Dear valued customer,


On the 25.November 2003, 11:35 we had a route leakage. Due to a mistake on the -------- backbone, we anounced to many routes.


For this reason many peering-sessions were closed automaticly. Issue is resolved. Situation is going normalized.

Regards

[deleted]

Heh. "Situation is going normalized" — I have to use that one in resolving RT tickets.

Blocking Opera Ads using Squid

If you use Opera — like I do, on occasion — you’ll notice that it’s ad-sponsored software. Fortunately, since I use Squid as a proxy server (so that I can use Cameron Simpson’s very excellent adzapper to replace ads based on regular expressions), I just added ACLs for the Opera ad servers in order to stop the browser from downloading the ad database.

Add these ACLs to your squid.conf:


# Opera browser spamvertisement access server
acl operaads dst 209.225.0.6
acl operaads-master dst 209.225.11.237

http_access deny operaads
http_access deny operaads-master

Then you can just squid -k reconfigure and away you go.
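
One caveat: dst ACLs match literal IP addresses, so the above will break silently if Opera ever moves its ad servers. If you’d rather match by hostname, squid’s dstdomain ACL type works as well. The domain below is strictly a placeholder; check your access.log to see which hosts the browser actually contacts.

# Hypothetical alternative: block the ad server by hostname instead of IP.
# The domain is a placeholder; replace it with what access.log shows.
acl operaads-dom dstdomain adserver.example.com
http_access deny operaads-dom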

Incidentally, I used the same approach to block requests to VeriSign’s much-maligned SiteFinder [dis]service:


# Trap and deny crock A records
acl sitefinder dst_as 30060

http_access deny sitefinder

SiteFinder has been suspended, but you might like to block that AS anyway in case they start it up again 🙂
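
A caveat on dst_as, too: squid expands an AS-based ACL into a list of prefixes by querying a routing registry over whois, so if outbound whois is blocked at your site, the ACL may quietly match nothing. The registry is configurable in squid.conf; whois.ra.net is, if memory serves, the default.

# Routing registry that squid queries to expand dst_as ACLs into prefixes
as_whois_server whois.ra.net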

*LET* the Lawsuits Fly.

Wow. It’s been a while since I last wrote. Happy Hallowe’en, by the way.

I now work for CBC.ca in the New Media Production & Operations department as a software developer. Primarily this involves Java development, but I also do a bit of Perl (as much as I hate to). I also weigh in on system administration matters quite a bit, since I think of myself as a half-time sysadmin, half-time programmer.

I just had to respond to this slightly brain-damaged article which appeared in eWeek recently. Now, I know eWeek is one of those magazines for PHBs, but I still like to flip through it (very briefly) to see what the PHBs are being told these days, and how I can counteract it. This article — if you go and read it — basically says that Microsoft has to make "Longhorn" really secure, and improve its security in general, or else legislatures will impose security warranties upon software developers, and that this will impact all developers, not just Microsoft.

My reaction — as both a sysadmin and a developer — is: so what? Isn’t that a good thing? I’ve often railed about the fact that software is one of the few industries where you can sell an expensive product and not be held to any legal liability whatsoever: no warranty to speak of beyond the value of the actual compact disc the software arrived on. In my mind, this is a bad thing. So when Brian Livingston says something like

Such an earthquake could emerge not just from legislatures but also from courts. All it would take would be a precedent-setting ruling that the “we’re-not-liable” language that’s commonplace in shrink-wrap licenses is “unconscionable and unenforceable.” The lawsuits would fly.

I think he’s bang on — the lawsuits should fly, in fact, if the software is defective. This would stop not only Microsoft but all software vendors from shipping poorly-tested products.

On a lighter note — check this out. If you work for Allstate, you can submit your resignation online. No word on whether a security guard will be e-mailed to you to escort you out of the building, too. (Speaking of which — someone at work was joking about dressing up for Hallowe’en as a manager who was unceremoniously sacked some time ago. “I’ll just dress up two mannequins in security guard uniforms, put one under each arm, and I’ll be [name removed] being escorted out of the building!”)

PGP: Why isn’t it more widely used?

Preamble: Today was my last day at FSC Internet but I started writing the piece below some time ago. It still needs some work, so it’ll probably get a few more edits as time goes along, but I wanted to post it up here to mark the day I left the field of Internet security. 🙂

Ever since I started working for an Internet security company, I’ve been using PGP (GnuPG) a lot more, both in my daily work and at home. Even though PGP has been around for ages, it hasn’t been widely adopted, and other secure e-mail technologies like S/MIME haven’t enjoyed wide acceptance either. I started to ask myself why, and came up with a few explanations for why secure e-mail hasn’t taken off:

  • Insufficient critical mass of users. This is the classic technology-adoption problem that has faced inventions from the cell phone (who are you going to call if nobody else has one?) to the VCR (what are you going to play in your VHS VCR if all the movies are still on Betamax?). With PGP, the problem is compounded by the fact that the trust value of your key depends on the trust values of the keys that have signed it; if nobody signs your key, its trust value is very low.
  • No interoperability between competing secure e-mail technologies. In part, we can blame proprietary, closed technologies like the "Secure E-mail Certificate" widget in Microsoft Outlook. PGP had been around for years; why not just use that? On the other hand, PGP itself has been through many mutually incompatible revisions: PGP 2.x; Network Associates’ PGP 5.0 and 6.0; and finally GnuPG as an open-source alternative to PGP proper. Such needless forking does nothing to build the image of secure e-mail technology as reliable and robust.
  • Poor GUI frontends to PGP. Before writing this piece, I investigated which frontends are out there and still actively maintained. There certainly aren’t many. On this Debian GNU/Linux box I picked out two that appeared worthwhile: gpgp and kgpg. gpgp, as I soon discovered, was out of date, and kgpg core dumped when I tried to retrieve keys from a remote keyserver. Neither implements the features I would want in a frontend: easy modification of all parameters of a given key on the keyring, including trust levels, adding and removing signatures, and so on. (The command-line equivalents are sketched just after this list.)
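
For reference, here is roughly what those key-management operations look like in raw GnuPG. This is a sketch; the keyserver address and the key ID are placeholders.

# Fetch a key from a remote keyserver (the operation that crashed kgpg).
# hkp://keyserver.example.org and 0xDEADBEEF are placeholders.
gpg --keyserver hkp://keyserver.example.org --recv-keys 0xDEADBEEF

# Interactively edit a key on your keyring:
gpg --edit-key 0xDEADBEEF
#   gpg> trust    (set the owner-trust level)
#   gpg> sign     (add your signature to the key)
#   gpg> delsig   (remove signatures interactively)
#   gpg> save     (write changes and exit)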

Fundamentally, though, these aren’t insurmountable problems. Technical and adoption issues, while irritating, are comparatively easy to fix. (Okay, convincing Microsoft to use PGP in Outlook might be more difficult, but even the PGP GUI is just a problem waiting to be solved.) It’s my belief that the lack of interest in secure e-mail technologies as a whole is motivated by people’s desire not only to be anonymous on the Internet, but never to be held accountable for anything they say.

Perhaps I’ve been hanging around too many marketing weasels, but there are plenty of folks who don’t want to be held accountable at a later date for some bald statement they made today. I’m sure that the Enron and WorldCom executives wished they hadn’t sent certain e-mails that are now sitting in evidence vaults. Those e-mails would probably carry even more weight (against said executives) if they were digitally signed with the originator’s PGP key.

The lesson to be learned here is one that relates to human nature. Once you have attached a digital signature to something, you can’t take it back. Ever. Particularly if the message is in the public domain, it can come back to haunt you. This is not generally what people want to hear; it makes them feel less secure, not more. This is the critical flaw in secure e-mail technology.

a few parting words on security

After four months I am leaving FSC Internet to get back into the field of software development. Security, like many things, is interesting to me only if I don’t have to do it full-time.

This doesn’t mean that I’ll stop weighing in on security matters. Heck no. I have a few parting thoughts as I wrap up at FSC.

Today’s Daily Dave is, as usual, pretty entertaining. It’s not quite as cohesive as past entries, since he covers a plethora of topics before winding up at a discussion of how many security companies are being co-opted by developing “partnerships” with the very industries they are supposed to be protecting. In principle, I agree with him: from a 30,000-foot view, it would seem that any security company hired to assess vulnerabilities in a client’s products would avoid doing anything to embarrass that client.

However, any ethical security company would still disclose security vulnerabilities to the client and work with them to deliver a measured advisory and response to the community. A security company that fails to do this isn’t worth its salt.

In the specific case Dave mentions in his article, there is a glaring remote root hole in the code for RealNetworks’ streaming media server products. He claims that the various security companies RealNetworks has hired over the years to do vulnerability assessments are accomplices in a massive coverup of that hole.

I don’t particularly buy this point of view. By the principle of Occam’s Razor, I believe there is a simpler explanation: the security companies RealNetworks hired are simply incompetent. In the article, Dave says the hole can be found within 10 seconds of starting up SPIKE (which I’ve mentioned before in this journal), but there’s nothing to prove that the security firms actually tried this tool, or any other tool. For all we know, their "code audits" could have been a complete joke — and not necessarily just because they were working for RealNetworks. Perhaps it was just a quick smash-and-grab to assuage the vulture capitalists.

I have one more viewpoint to post on security issues — it’s about PGP/GnuPG and why I think digital signatures/encryption of correspondence isn’t more widely used. I’ll tidy it up and post it on Wednesday, my last day at FSC. After that, I’m on vacation to NYC for a few days before starting my new position as a software developer for the CBC.

California Gubernatorial Race

Now that the California gubernatorial race has turned into a complete circus sideshow, with both Arnold Schwarzenegger and Larry Flynt of Hustler running, I’m suggesting that Darl McBride mount a campaign as well. Since the state of California isn’t doing so well financially, he can file frivolous lawsuits against other states in an attempt to prop up the economy.

In fact, he could have the State of CalifOrnia (SCO) claim to own the copyright to the concept of rolling blackouts, which they purchased from PG&E. Then, he can sue, say, Idaho, for initiating blackouts without paying proper licensing fees.

Or perhaps, after IBM’s lawyers are finished breaking his spine on the Catherine wheel, he’ll just have to find another ailing public company in need of a business model that involves suing people.

Linux is for Bitches

Pardon the slight profanity; I don’t generally like to swear when I’m trying to make a point, but I didn’t invent the name of this site.

The views espoused by the author are obviously not much different from those in this excellent article in USENIX’s own journal, ;login:. (You’ll need to be a member to access that link, by the way.) I’ve complained before about the proliferation of poorly-configured, poorly-managed Linux boxes taking over from the Windows boxes, and it’s obviously still happening. Of course, the vendors are partly to blame, too: when the author of linuxforbitches.org writes about /var being an inappropriate place for web content (I wholeheartedly agree), you have many vendors to thank for that.

I lay the blame for the kernelized web server, though, at the feet of Linus himself. Given how militant Linus is about which patches he accepts, I’m surprised — no, shocked — that he accepted this one. Considering that many kernel hackers are the same folks who bitched and whined about insecurity and instability when Windows NT 4.0 moved the graphics subsystem from user mode into kernel mode (Ring 3 to Ring 0), the kernelized web server is a completely brain-damaged idea. It should be removed from the kernel at once, if it hasn’t already been excised.
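
For the curious: the feature in question is khttpd, and you can check whether your own 2.4 kernel carries it. This is a sketch that assumes your distribution installs the kernel config under /boot, and that the config symbol is CONFIG_KHTTPD, as I recall it is.

# See whether the kernelized web server is compiled into the running kernel.
# Assumes a distro-style /boot/config file; paths vary by distribution.
grep CONFIG_KHTTPD /boot/config-$(uname -r)
# "CONFIG_KHTTPD=y" or "=m" means it's in; "is not set" means it isn't.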

You know, despite all the claims about Linux’s stability, it still has a long way to go before it achieves the stability of the BSDs. Under heavy workload, Linux still doesn’t cut the mustard. Andrew Hume from AT&T Research presented a paper at HotOS-IX entitled Operating Systems: Shouldn’t They Be Better? True, he takes Solaris 2.6 to task in the paper as well, but the Linux flaws he describes are pretty shocking (these are from David Oppenheimer’s summary notes in August’s ;login:):

Hume described eight problems the Gecko [his billing system] implementers experienced with Linux (versions 2.4.18 through 2.4.20), including Linux’s forcing all I/O through a file-system buffer cache with highly unpredictable performance scaling (30MB/sec. to write to one file system at a time, 2MB/sec. to write to two at a time), general I/O flakiness (1-5% of the time corrupting data read into gzip), TCP/IP networking that was slow and that behaved poorly under overload, lack of a good file system, nodes that didn’t survive two reboots, and slow operation of some I/O utilities such as df. In general, Hume said that he has concluded that "Linux is good if you want to run Apache or compile the kernel. Every other application is suspect."

The problem with the way many people measure the "stability" of Linux is that they treat it as a relative measurement: as long as it’s more stable than Windows, it’s good. This is obviously a stupid way to look at it. Just because my Kia[1] doesn’t have exploding tires doesn’t mean it’s a particularly safe car.

People working on performance and stability in the Linux kernel are far outnumbered by people trying to get their little pet projects into the tree; witness the kernelized web server. Admittedly, performance and stability aren’t the most exciting research areas, but making Linux as stable as the BSDs is critical to its long-term success. I mean, who cares if Linux can run on a zSeries or S/390 if the thing goes down like a ton of bricks when you throw a heavy workload at it?

Ultimately, as a system administrator, I care much more about stability (and, failing that, predictable, recoverable failure) than about "feature-niftiness". When I have 1000 user accounts to manage and I’m getting DDoSed, I want an OS that is feature-conservative but rock solid.

And that, in a roundabout way, is why I don’t run Linux on my servers.

[1] I don’t, for the record, own a Kia. 🙂