My Journey with ZFS on Linux

Toward the latter half of 2016, I decided to convert my home file server’s file system to ZFS as an experiment on real hardware in my own test environment. I knew before embarking on this journey that ZFS on Linux (hereafter referenced as ZoL) is stable but not production ready. The experiment is nearing its end, and I’ll be switching to an mdadm-based array with ext4 for reasons I’ll be exploring in this post. If your storage needs require the redundancy and self-healing properties of ZFS, consider other operating systems where it’s better supported and stable, like FreeBSD.

This post illustrates my thinking as of early 2017 and why I’m not going to continue with ZoL (for now).

Conclusions

I don’t wish to bore you with excess detail, so I figured I’d present my conclusions up front. You can then decide if the rest of this post is worth reading. Let’s begin exploring.

First, using ZoL in a home file server environment presents us with the following pros and cons:

Pros

  • Transparent compression can save a surprising amount of space with LZ4 (useful!)
  • Lower administrative overhead (a wash; not always true in practice)
  • Data integrity guarantee (always make backups)
  • Self-healing (always have backups)
  • It’s not Btrfs (you’ll probably use your backups)

Cons

  • May require more extensive deployment planning
  • Some applications may require dataset tweaking
  • Administrative overhead not always as advertised (see above), but this is mostly the fault of the current state of ZoL
  • Poor performance for some workloads (especially databases; versus ext4)
  • Lack of recovery tool support limits possible restoration options when things go wrong (backups!)

Noteworthy bullet points when using ZoL:

  • Stable but not production ready (an important distinction!)
  • Upgrades will be painful!
  • /boot on ZFS is possible but…
  • DKMS ZFS modules are a horrible idea; don’t do this–don’t ever do this
  • Always create bootable media with ZFS tools installed in case something goes wrong (it will–don’t forget backups!)

The Meat and Potatoes of ZoL

I would not recommend ZoL for anything outside a test environment at this time (I’ll explain in a moment). ZoL may be useful for long term storage or a backup NAS box if you’re particularly brave. However, if you plan on deploying ZFS permanently, or at least in a long term installation, I’d recommend using an OS with which ZFS is tightly integrated; recommendations include FreeBSD and FreeBSD-based distributions (including FreeNAS) or OpenIndiana-based platforms. I’d also shy away from using ZFS on systems intended as general purpose machines; in my experience, ZFS really shines in a heavily storage-centric configuration, such as a NAS, where throughput isn’t as important as integrity. Outside that environment, ZFS can be a performance liability, and current benchmarks demonstrate underwhelming behavior when it’s used on Linux as a backing store for databases. To work around this, ZFS requires planning and tuning for use with RDBMSes, and it may also impact write-heavy applications. Read-only loads are something of a wash–file system choice is better made with regard to workload requirements: Is raw performance or data integrity more important? Note that the latter–data integrity–can be solved by other means, depending on use case, but ZFS’s automated self-healing capabilities are hugely attractive in this arena.

As of early 2017, ZFS on Linux is still exceedingly painful to install and maintain on rolling release distributions. In particular, on Arch Linux, ZFS will cripple most upgrade paths: for the kernel, for ZFS itself, and for applications that require ZFS dataset tuning. It’s almost impossible to upgrade to more recent kernels if you prize your sanity (and system stability) unless you’re willing to wait until the latest kernel has been tested, new ZFS PKGBUILDs are ready, and you’re open to an afternoon of potential headaches. Generally speaking, the upgrade process itself isn’t frustrating, but it should always be preceded by a fresh backup–and keep your rescue media handy!

Never use ZFS with DKMS in an effort to shortcut kernel versioning requirements, even if upstream ZoL appears to support the new version–the package you’re using may not have been updated yet, and the AUR DKMS PKGBUILDs for ZFS are not as stable or well maintained as the kernel-pinned packages. With DKMS, even a slight version mismatch risks kernel panics and a long, painful recovery process. I was bitten by this early on and discovered that some kernel panics aren’t immediate; they may occur after an indeterminate period of time depending on file system activity, and recompiling ZFS and SPL takes long enough that your system will likely panic before the build completes. Stick with the version-fixed builds; don’t use the DKMS modules. Of course, this introduces a new problem: Now you have to pin ZFS to a specific kernel version, leading to extra work on rolling release distributions like Arch…
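If you go the pinning route, the least painful approach I’ve found on Arch is to hold back the kernel and the ZFS packages together, so pacman can never upgrade one without the other. A minimal sketch of what that looks like (the package names are illustrative and depend on which repository or PKGBUILDs you’re using):

    # /etc/pacman.conf -- under the [options] section.
    # Hold back the kernel and the kernel-pinned ZFS/SPL packages so a
    # routine -Syu can't upgrade the kernel out from under the ZFS modules.
    IgnorePkg = linux linux-headers zfs-linux spl-linux

Then upgrade the whole set deliberately, in one sitting, once the new PKGBUILDs are known to work (and after a fresh backup, naturally).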

It’s necessary to plan for a lot of extra work when deploying ZoL. Estimate how much time you’re likely (or willing) to spend and multiply it by two. I’m not even kidding. ZoL on distributions without completely pre-built packages (and possibly others) requires all of the following: building SPL and its utilities; building ZFS and its utilities; installing unstable versions of grub if you plan on keeping /boot on ZFS (with the implication that you’re one upgrade away from an unbootable system at all times); dataset tweaking per application; and coping with potential bugs. Lots of them. When I first deployed ZFS on Linux, I was completely unaware of a deadlock condition affecting the arc_reclaim kernel process: every time I ran rsync, arc_reclaim would hang, CPU usage would spike, and manual intervention was necessary. To say nothing of the RAM usage…
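As for the build work mentioned above: on Arch, it mostly came down to running makepkg in the right order, SPL before ZFS and utilities before modules. A rough sketch, assuming an AUR-style checkout of all four packages side by side (the directory names are hypothetical; match them to whatever your PKGBUILDs are actually called):

    # Build and install SPL first (the ZFS packages depend on it).
    cd spl-utils && makepkg -si
    cd ../spl && makepkg -si
    # Then ZFS and its utilities.
    cd ../zfs-utils && makepkg -si
    cd ../zfs && makepkg -si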

ZFS performance under Linux is also relatively poor for my use case. While read/write speeds are acceptable, they’re nowhere near ext4’s, and ZoL is likely slower than ZFS on the same hardware running FreeBSD. Furthermore, the memory pressure from the ZFS driver’s ARC failing to release RAM back to the system (it’s supposed to, but I’ve yet to see it in practice) under memory-intensive operations can cause an out-of-memory condition, swapping, and a potentially fatal invocation of the Linux OOM killer. For this reason alone, I could never recommend ZoL on a general purpose server. If your deployment is intended exclusively as a NAS and you run literally no other services besides NFS/Samba, ZFS’ memory usage is perfectly fine, provided the system has 8+ GiB RAM (though you should have more). Then there’s the possibility of adding L2ARC cache devices and the like. If you’re planning on running other services in addition to a ZFS-backed NFS server, such as GitLab or Minecraft, you’ll quickly find that striking a balance between RAM allocated to the ARC versus other applications becomes a tedious chore. In fact, I might even venture as far as to suggest that you shouldn’t consider running ZFS plus other applications on anything less than 16 GiB RAM–preferably 32 to hand off a nice big chunk to the ARC, particularly if you plan on expanding drive capacity–and you still shouldn’t run anything you don’t absolutely need (seriously: make it a pure NAS).
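If you must colocate ZFS with other services, the one knob that helps is capping the ARC. The module accepts a zfs_arc_max parameter (in bytes); the 4 GiB figure below is purely illustrative, so size it to your own workload:

    # /etc/modprobe.d/zfs.conf
    # Cap the ARC at 4 GiB so other applications keep some breathing room.
    options zfs zfs_arc_max=4294967296

The setting takes effect when the module loads; it can also be poked at runtime through /sys/module/zfs/parameters/zfs_arc_max. Treat it as a ceiling, though, not a cure for the reclaim behavior described above.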

Tweaking ZFS for database loads doesn’t seem particularly troublesome–certainly not on the surface–until you encounter an upgrade cycle that requires more than just a dump/load. If you follow the default Arch Linux upgrade process for PostgreSQL, you’ll quickly find ZFS less flattering than the alternatives. Not only is it necessary to tweak the recordsize and a few other file system attributes for performance reasons (though you only do this at dataset creation), but following the Arch upgrade guide–moving the old data directory and creating a new one–suddenly becomes a matter of shuffling around extra files, remounting previously tweaked datasets to the appropriate locations, and more. In my case, I had to copy all of the original data to a new directory, wipe the old mount points for both the data directory and the write-ahead log, create a new data directory in the old dataset mount point, copy the WAL into the WAL-specific mount point, mount it at pg_xlog, and only then could I complete the upgrade process. MySQL on ZFS is generally easier to upgrade, in my experience, but I also use it much less frequently. Be aware that MySQL still requires dataset tweaks, and the tweaks applied depend on whether you’re primarily using MyISAM or InnoDB. I’ve not experimented enough to know whether it’s possible to tweak datasets for individual storage engines.
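For the curious, the PostgreSQL-flavored dataset tweaking I’m describing looks roughly like the following. This is a sketch: the pool name “tank” and the dataset layout are hypothetical, the 8K recordsize matches PostgreSQL’s 8 KiB pages, and the rest follows commonly circulated advice rather than anything authoritative:

    # Data directory dataset; recordsize matched to PostgreSQL's page size.
    zfs create -o recordsize=8K -o compression=lz4 tank/pgdata
    # Separate dataset for the write-ahead log (mounted at pg_xlog in my case).
    zfs create -o compression=lz4 tank/pgwal
    # Verify what was actually applied.
    zfs get recordsize,compression tank/pgdata tank/pgwal

Remember that recordsize only affects blocks written after it’s set, which is why this belongs at dataset creation.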

Of course, there are a few other relatively minor niggles that depend on your specific setup. For example, stock grub2 can’t reliably read current ZFS pools, so it’s necessary to install grub-git if your /boot isn’t a separate ext3/4 partition. Under Arch, it’s also necessary to make certain your initrd is correctly configured via /etc/mkinitcpio.conf, and it’s almost always a good idea to re-run mkinitcpio after upgrading or installing the kernel just in case it didn’t pick up your ZFS modules (you may need the binaries, too). Otherwise, you’ll be resorting to your emergency boot media to fix the dilemma (you did create it, didn’t you?).
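For reference, the initrd side of that under Arch amounts to something like this (hook order matters: zfs must precede filesystems; verify the exact hook names against whatever your ZFS package ships):

    # /etc/mkinitcpio.conf
    # The zfs hook must come before filesystems so the pool can be imported.
    HOOKS="base udev autodetect modconf block keyboard zfs filesystems"

    # Regenerate the initramfs for the stock kernel after any change.
    mkinitcpio -p linux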

A Less Optimal Solution

I consider the experiment with ZFS on Linux a fantastic success even though I’m already planning to migrate away from it. For my needs, I’m reluctant to run FreeBSD, even though I’ve used it for similar purposes in the past. Thus, I’ll be reinstalling the machine with a combination of ext4 + mdadm (actually copying the data back over, but there’s no functional difference insofar as downtime is concerned). In retrospect, I’ll probably miss ZFS’ transparent compression the most. Given my relatively modest data size and the fact that it defaults to lz4 compression (which is optimized for speed rather than ratio), it’s surprising that it’s saved close to 200 GiB of storage! No other mainstream Linux file system, save for Btrfs, provides transparent compression, and in spite of the integrity guarantees ZFS provides, I think compression is a far more pragmatic upside, since its impact is real and immediate rather than theoretical.
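Incidentally, if you’re curious what compression is buying you on your own pool, ZFS will report it directly (dataset name hypothetical):

    # compressratio is the achieved ratio; logicalused is the uncompressed size.
    zfs get compressratio,used,logicalused tank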

Although I’d like to wax philosophical about ZFS’ touted benefits, I still can’t help but think it’s solving a problem that is gratuitously overblown. Perhaps bitrot is a real, material risk, but I’ve rarely been affected by it (ancient backup CD-ROMs notwithstanding). Has it affected my archives? Almost certainly, but it’s never had a noticeable impact on photos or media, much less other, more critical data; the few times I’ve performed checksum validation of archives against physical disk contents, I haven’t encountered a mismatch. Indeed, although it’s a problem ZFS is almost tailor-made to fix, it still doesn’t beat regular, extensive backups. Of course, that assumes you have a mechanism in place to prevent your backups from being adulterated or overwritten by later, corrupted snapshots (and that your backups aren’t subject to bitrot as well), but I think Google’s solution here is far more apropos: Keep no fewer than three copies of your most important data. Surely one of them, statistically, will survive.

You’ll notice that I haven’t mentioned ZFS snapshots (or send/receive), because I’ve yet to encounter a circumstance (besides upgrades, perhaps) where they’re useful to me. While I’d like to use them with containers, there’s still the very real problem of running software inside a container that requires dataset tweaking, and there’s also the specter of lingering issues in ZoL’s implementation, which had snapshot-related bugs as recently as last year (mostly with send/receive, if memory serves). In my case, I tend to avoid advanced features if there’s a risk of causing damage, because they’re either not well tested, buggy, or have a recent history of inducing failures. But alas, I’m conservative at heart; I’d happily poke about with ZFS snapshots in a virtual machine to see how they work, but I’m much less happy about using them on a real server doing real work, where downtime would likely interfere with important activities, when those same kernel drivers are of dubious stability. I also have no other ZFS systems where send/receive would benefit me.
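For completeness, the workflow I’d be experimenting with in that hypothetical virtual machine is short enough to sketch here (pool and dataset names made up):

    # Take a read-only, point-in-time snapshot of a dataset.
    zfs snapshot tank/data@2017-01-15
    # Replicate it to another pool; with ssh in the pipe, another host.
    zfs send tank/data@2017-01-15 | zfs recv backup/data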

There is an alternative file system some of the more astute readers among you may have noticed in my list of omissions: Btrfs. I considered Btrfs for my server and tested it briefly, but at the time (mid-2016), I encountered evidence suggesting Btrfs may not be particularly stable, in spite of its being among the default file systems for some distributions. Btrfs’ tools also feel lacking, which dampened my confidence further.

The Btrfs authors admitted, as recently as August 2016, to substantial, possibly unfixable problems with the then-current Btrfs RAID5/6 implementation. Although I’m running a simple mirror, the fact that such a bug would be present in a file system some distributions have optimistically labeled as “stable” is worrisome (but just don’t use its RAID5/6 features–or whichever other features happen to be broken). I’ve seen comments from as early as 2014/2015 lauding Btrfs as a stable, tested platform, but I have serious reservations about substituting optimism for caution, particularly when 1-2 years later, in 2016, such optimism would appear to be horribly misplaced. Consequently, I don’t see Btrfs as a viable alternative (yet!), and that’s without addressing Btrfs’ performance history with PostgreSQL and other databases. This may change eventually, and Btrfs does look promising. There are some valid criticisms that Btrfs is simply reinventing ZFS (poorly), but since ZFS will likely never be included in the kernel due to licensing conflicts, Btrfs is better poised to reach parity with the likes of ext4 and company. I’m not optimistic about the Btrfs timeline, and while I’d be surprised if it attains feature completeness before 2020, I do believe many of its shortcomings will be resolved well before then.

Back to ZFS: Will I revisit it in the future? Absolutely. If I do, I may put together a FreeBSD NAS for additional storage or the likes. For now, however, ext4 is suitable enough for my needs. It’s certainly good enough for Backblaze, and while I don’t share their requirements, I take a more pragmatic approach. If ext4 is good enough for them, it’s good enough for me.

***

Cannibalism or Convergence?

I’ve been following some of the commentary and fallout (and some of the overblown suggestions) regarding Apple’s latest iPhone. Now that most of the hype has died down and things have more or less returned to normal, I’d like to share some of my own thoughts on the matter and what changes (if any) we’ll be seeing in the near future. First though, I’ll admit: I’m no fan of Apple, but I do commend them for having the foresight to migrate iOS to a 64-bit platform well ahead of when they may actually need it. Many of the comments in the HN article are insightful: The performance gains to be had from 64-bit are minimal at best, particularly on a phone, but in another 2-3 years, phones will probably be in the 4-8 GiB RAM range, and 32-bit will suddenly become a liability. Any migration forgone now will be mandatory once the 4 GiB limit is reached, so it certainly makes sense to do it much earlier.

Before anyone points out something I see repeated in replies to the very insightful HN comments, I’d like to preemptively address it to get it out of the way. Yes, I’m aware you can address more than 4 GiB of RAM from a 32-bit processor using PAE, but if you’re going to see increases in on-board RAM every 6 months to a year, why not just sideline the issue entirely?

Convergence: Resistance is Futile?

The real nagging question that’s been permeating tech circles for weeks is one of the convergence of platforms. It seems that with the latest iPhone, dozens of pundits and droves more Apple fans are touting the death of the desktop, proclaiming that the reign of mobile will soon be upon us. I’m not so sure I’m convinced, but I do think that news and musings like this don’t occur in a vacuum, isolated from everything else. I think I now know why Microsoft has made some bizarre decisions that may soon prove fatal, but I’ll get to that later.

When I was about 18, I remember watching a short clip on Good Morning America about the future of technology. My mum insisted that I watch the whole thing, too, because she seemed puzzled by the teaser offered earlier in the show. I don’t precisely remember the contents of the program, but I do remember one of the guests talking about the future of desktop computers, the Internet, and technology. He suggested that within 5-10 years (bear in mind this was circa 1999), the desktop would be supplanted by a thin client; he insisted that these systems would consist of little more than a monitor, some RAM, and a network connection. They would be tied into a central server run by a large corporation (essentially a network appliance before the term “network appliance” was in vogue), and all of your applications, games, and just about everything else would run from that central server.

At the time, I had the distinct advantage of understanding a little bit about networking. While I was no networking genius (I’m still not, but I know quite a bit about the protocols we rely on), I knew enough about bandwidth, and the rate at which bandwidth was growing, to know that such dreams were prohibitive–at least for a while–but there was the nagging question about games and similar applications that relied on relatively quick rendering or significant network throughput. Would that be sent down the pipe, too? It seemed absurd, and while there have been some attempts at remotely rendered games, the latency and throughput preclude any such utility beyond laboratory curiosities. Likewise, the processing power simply isn’t available to power hundreds of thousands of players simultaneously playing something like the latest CoD or whatever other graphics-intensive games happen to be on the market. The gaming industry will likely be the saving grace of the desktop, and this may be a surprise to everyone but the lowly gamer. It’s no surprise, then, that the PS4 and Xbone are migrating more toward commodity PC hardware when just a decade ago, everyone assumed that PowerPC-based platforms would become the norm for the next ten years. If only we had the gift of foresight…

Still, the irony is not lost on me that the talk show guest all those years ago described something that would later evolve into what is now known as cloud computing, with the minor exception that the dream of thin, inexpensive client devices has not yet been realized. To a limited extent that may be true, but “thin client” applications (now cloud apps) have instead demonstrated incredible utility in niche use cases rather than general consumption. One could argue that smart phones and tablets have long supplanted the dream of the thin client (and they’re cheaper, too) with greater capability and storage. The future seems to be one where computing is something you carry with you, not something that rests centralized in a data center thousands of miles away. It’s such a romantic thought to consider highly portable devices when it was only 20-30 years ago that the home computer transitioned from dream to widespread reality, isn’t it? It’s also important not to get too caught up in the romance, because it’s easy to make assumptions that might never come to fruition.

So, I would like to make a prediction: I don’t think there will be a convergence of desktop and mobile in the future. Maybe I’ll be eating my words in 5 years, in 10 years, or maybe I’ll be right. Instead, I think the two represent use cases different enough to force them into the position they’re in now: They’ll continue working as complements. I’ll explain why.

The True Face of Tablets

Tablets have been an amazing boon to an already growing industry. Nearly everyone has at least seen a tablet, and many people own at least one. A year or two ago, it would’ve been a surprise to see an old lady stepping out of her car in the church parking lot, shuffling into the building, taking a pew, and carefully plucking a tablet from her purse. Today it’s almost commonplace, or at least it’s becoming common enough to be unsurprising. Using a healthy dose of anecdotal evidence to support such claims, I’d like to point out that my mum has a tablet she takes to church. Several of her friends from church have tablets. I’ve even heard from others who also have tablets and speak highly of the devices, often describing them as liberating. (Actually, the term they use is “handy,” but the idea they’re describing is one of liberation.) The only unusual thing about this particular demographic is that none (or few) of their husbands also own tablets. Many of the older men won’t even touch them. I’m not quite sure what this says about the 60-75 age group and up, but I do know what this says about the technology and, more importantly, about the predictions.

With the rise of mobile, pundits have been penning the PC’s obituary, portraying its death as drawn out but inevitable. The more progressive minds among them proclaim that the day will soon come when everyone is equipped with a tablet: office workers, programmers, bus drivers, and teachers. If you need a desktop, you’ll simply plug your tablet (or phone) into a docking station and begin working from the OS embedded in your mobile device. If this sounds familiar, it should. The concept of a “desktop replacement” isn’t new; according to Wikipedia, it dates back to the 1980s. What is new, however, is that for the first time in the history of computing, desktop sales have been stagnating while mobile devices enjoy record growth in sales.

Does any of this constitute evidence that the desktop will soon pass on into the hereafter? Should we ready our speeches and mournfully reminisce about days gone by? No. I’ll explain why I feel this is just another notch in the tree of technological evolution.

First, and most obviously, mobile device sales numbers are somewhat inflated. Pundits who point to the sales figures as definitive evidence that the PC is a dead man walking typically neglect to consider planned obsolescence, particularly in mobile data and voice contracts. Even tablets have fairly limited useful lifespans of approximately 2-3 years. The technological pressures exerted on the mobile platform are far greater than those on the PC, which often has a useful service life of 5-8 years in light use or in an office environment. Software requirements, hardware capabilities, and battery age all factor in to determine the lifetime of a mobile device. Of course, this doesn’t neatly explain everything. With the world economy struggling, shouldn’t mobile device sales be impacted, at least slightly? Well, maybe not. They are rather cheap, after all.

Second, and perhaps more importantly, mobile devices are relatively inexpensive for what they do. For those of you who don’t regularly play video games, write code, or spend far more time staring into the abyss than is otherwise healthy, mobile devices in general (tablets in particular) are fantastic for casual use. They’re great for reading, they’re great for browsing, they’re great for casual games (“party” games as some of you might call them), and they can be taken almost anywhere provided the battery is in good health. The only thing they’re undeniably terrible at is content creation. Maybe that will change the day someone figures out how to make a sort of magnetic/repulsive haptic-style system that provides tactile feedback for a software keyboard. As a touch typist, I find it difficult to spend a great deal of time tapping away at a screen with no sense of where my fingers are at any given time. I guess I’m one of those who can’t adjust.

Going back to what I mentioned earlier: Do you remember the somewhat anecdotal evidence I offered up of the old ladies and their tablets? It seems like an atypical use case, particularly in a world where technology is dominated by twenty-somethings carting around the latest iDevice as a sort of electronic status symbol among their peers. The thing is, the twenty-something demographic is reaching saturation, and as the rest of society catches up, what seemed atypical just a year or two ago may become far more common than many of us realize. The 20-somethings aren’t everyone.

Essentially, I suspect that the pro-mobile apologists (the PC is dead!) can’t see the forest for the trees and the pro-PC mobile denialists (long live the PC!) don’t want to concede to the reality of the marketplace. Are you ready for it? I’ll even bold it to make it more apparent.

Not everyone needs to own a desktop PC.

I know that’s a shock, but the simplest truth of the matter is that tablets are a better match for the majority use case the PC previously enjoyed. They’re excellent media consumption devices, and for casual users of technology–like my mum–who rarely e-mail but are voracious readers and researchers, the tablet is sometimes the far more useful device. It’s easier to pick up a tablet and thumb over to a book you’ve been reading than to fiddle with the overhead lamp and stumble around the house looking for a small paperback you’ve misplaced. It’s easier to pick up a tablet than it is to go into another room, wait for your computer to boot, and go about looking for knitting or crochet patterns. Let’s face it: It’s easier to keep your brains neatly tucked away in a little electronic device not much bigger than the books you used to read as a kid. For many use cases, content creation excepted, using a tablet simply makes sense, and the demographic that I believe is fueling the growth–at least in the tablet world–is the 60+ age range. They don’t need to own a computer. Moreover, while many of them may have been exceptional typists at one point (my mum, for instance, is a touch typist and largely responsible for my early education as one, too), they’re of the generation where tactile interfaces, like touch, simply make sense. When you grew up in a world where you manipulated knobs, buttons, and widgets, it’s so much easier to use your fingers to manipulate their virtual equivalents than it is to point-and-click. (Point-and-what?)

So, I’m arguing against the convergence of desktop and mobile, but I just made the case for mobile supplanting everything else. Right?

Not quite: The point here is that many of the people who own desktop computers probably never needed to. They don’t usually create a great deal of content. They don’t write e-mails often. They don’t write letters to print out (they do that by hand with a pen and paper–you under-30s know what those are, don’t you?). If they do write something electronically, it’s little more than a quick note. Sure, this use case could easily be filled by a fairly low-powered desktop, tucked away in a back room and only used once a month for printing out letters or the likes, but in general, the older population is beginning to understand that mobile devices have greater utility than their bulkier forebears. As this discovery spreads and seasoned citizens become savvy to the benefits of a small, highly portable computer, sales will continue to skyrocket, and desktop sales will continue to decline.

That means the desktop and mobile device will converge, with the desktop riding off into the night. Doesn’t it?

No, it doesn’t mean anything of the sort.

I alluded to the notion that many pundits fail to recognize the realities facing technology, and largely, I think that’s the fault of a combination of misplaced optimism, misinterpretation of market forces, and a healthy dose of wishful thinking. I think some of them also base their predictions on a secret desire to see one platform or the other “win” in the end (e.g. Apple versus Android), and in their minds the desktop is mere collateral damage. Yet, in spite of all the advances in mobile computational power, virtually everywhere you look, pro-mobile pundits acknowledge the break-neck speed of mobile advancements while simultaneously ignoring the fact that the same technology that brought the mobile environment to life also powers desktops, and it certainly won’t remain at a standstill. Many of them even claim that Intel’s days are numbered, but Intel is still one of the largest chip manufacturers in the world, and they’re dumping billions of dollars annually into research and development. For example, their new 14 nanometer process is just around the corner, and the x86 architecture is unlikely to go extinct anytime soon. If anyone should be concerned about mobile, it should be Intel. Yet Intel certainly seems to be doubling down on x86 in spite of the encroachment of ARM. Why? Are they that stupid?

I think Intel knows a bit more about the market than we give them credit for. Sure, AMD has introduced ARM-based server chips, but Intel isn’t going to throw away a multibillion dollar industry. In fact, I think they’re banking on growth, because more mobile devices almost directly equate to more media consumption, more users, and more services requiring new hardware to grow and expand. Although speculation has been mounting that ARM will likely oust Intel in the server space, I hardly see that happening. While ARM capabilities are growing, Intel’s chips are sipping less and less power. The next generation of Intel server CPUs will likely be fast and energy efficient. They’ll have many of the benefits that ARM currently boasts, mitigating the expensive decision of migration.

Yet, paradoxically, even if ARM were to win this battle and oust x86, it likely wouldn’t spell the end of the desktop. There are ARM ports of Windows, such as Windows RT (although legacy x86 applications won’t run on them), and most open source applications can be recompiled for a new architecture without much fuss. Apple, whose empire is also built on open source technologies, could just as easily migrate their OSX offerings to other architectures, but chances are pretty high that they won’t. Modern x86 chipsets are still substantially more powerful than the CPUs in mobile devices, and if history provides us with any insight into this trend, it’s a reality that will continue indefinitely, pending unforeseen circumstances.

Sorry, mobile buffs. x86 is here to stay. As power requirements drop, it’ll be big iron with a slimmer waistline, from your desktop to the datacenter.

What’s this mean for the desktop?

As far as predictions go, I think the more outlandish and progressive a theory, the more likely it is to be incorrect. Careful, cautious, conservative predictions tend to be accurate, and I think the next 5-10 years will be more of the same we’ve had the last 2-3 years. Mobile use will continue to increase, particularly among people who don’t really need a desktop, and desktops will still be purchased each year for tens of thousands of students, families with children, and grandparents who need a device more suitable for creating content than consuming it. That isn’t to say that mobile devices won’t be powerful enough to fill that niche in the next 5 years. No, mobile devices will be plenty powerful. It’s simply that the use cases for which they are designed (largely media consumption) don’t lend themselves well to writing essays at length or generally creating content. Casual photo manipulation may be one realm conquered by mobile, but don’t count on anything more complicated than cropping, resizing, and other basic edits to family photos; finger-Photoshop is unlikely to supplant the real thing, because real graphic designers don’t even use a mouse. Indeed, among graphic designers, a “tablet” is something with a pen and a touch surface; it isn’t a mobile computer.

Another particularly problematic aspect of mobile devices is one of freedom. With a desktop, most users enjoy relative freedom to choose what they want to install and how the platform behaves. More savvy users can even repair or upgrade their computers, and the savviest of them all can build them from scratch. The PC gained much of its momentum because the platform is mostly open and relatively easy to maintain. From a developer’s perspective, nearly anyone could write software for most desktop environments without fear of walled gardens. Anyone could buy new hardware. Unfortunately, mobile threatens that freedom. Mobile threatens to concentrate the capabilities of the software in the hands of a few corporations and to consolidate software development to the anointed few. “App stores” are the antithesis of freedom, and while they operate under the guise of security, it’s difficult to reject the notion that users are trading their freedom for convenience. Of course, those of us who are aware of the dangers of computing-as-an-appliance are few and far between. While we may not be numerous, we have a secret weapon stashed away in a dark closet that we can unleash at a moment’s notice: The gamer.

I’ll warn you: I’m about to wax philosophical in this section, and this is where many disagreements will undoubtedly lie.

Gamers are notorious individuals in the tech community. They’re the folks you go to when you want to tweak your hardware or install fancy lighting, creating something of an outrageous and ridiculous exhibit of post-modern art meets Thomas Edison. Yet as much as major studios and console developers have tried, the PC has stubbornly lived on, thumbing its nose at enforced conventions and plowing its own way into the fields. The Xbox 360 was slated to serve as a PC-gamer replacement, shifting players from the ubiquitous keyboard-and-mouse to thumbsticks and bumper buttons. It didn’t. It did replace the PC for some casual gamers, but for MMO and hardcore FPS gamers, the console is surprisingly absent and unwanted. It isn’t for lack of capabilities, either, and while the real reason for this escapes me, I suspect it might have something to do with the very thing Microsoft has been more than willing to destroy as of late.

For many gamers, the PC is the be-all and end-all of their hobby. They’ll have browsers open, they’ll have instant messengers running, and they might even be checking e-mail, all while speaking on a VoIP client with a handful of other gamers. The PC, while not particularly useful for casual use, is very good at many things, and that’s where I think it will continue to shine. It took years for Apple to introduce some semblance of task switching in iOS, and even Android still suffers, in my opinion at least, from non-intuitive task switching. Simply put: Mobile devices don’t have an alt-tab. They instead attempt to weld multitasking onto a platform for which it wasn’t natively designed. For the PC, running dozens of applications and switching among them is an uninteresting problem: It was solved long ago, and the UI design for switching among them has long since been established. For mobile? Not so much, and there’s plenty of room for disagreement on how it should be managed in the future.

This limitation in mobile devices is due, at least in part, to the technical requirements of saving battery power. By giving the OS greater control of an application’s life cycle, suspending or terminating it when it’s not in use, the OS is able to control power consumption more directly. Halting a processor-hungry application while the user isn’t using it works great on a platform where available energy is at a premium until the next recharging cycle, but it isn’t much of a consideration for a computer that’s plugged into a wall outlet. This, I believe, is mobile’s Achilles’ heel. Whereas on the desktop a CPU-hungry task can run indefinitely, sharing available processor time with other running tasks, on a mobile device such desires are fantasy at best and a dead battery at worst. The ability to use a computer for more than one thing at a time is what I believe will continue to breathe life into the desktop, and it may stay or entirely halt the encroachment of the mobile device into the realm of office work. That’s to say nothing of specialized use cases or niche fields (think software developers, monitoring stations, enterprise, and scientific use).

To this end, I think the dream of carrying a tablet computer to work every morning (to say nothing of the security issues surrounding such a practice) will remain just that: a dream. It’s akin to the prediction some 12 years ago that everyone would be using thin clients attached to a centralized server. Mobile devices are useful to a lot of people for a lot of things, but I highly doubt they’ll replace the desktop entirely. As complements, however, mobile devices shine like little stars in the blackest of nights.

Curiously, there’s one last thing that might hold back the mobile device, beyond the logistics of convincing your workforce to take home a device and plug it into their workstation every morning. It’s also much more primal than any of the other reasons: tactile feedback. Have you ever stopped to type a lengthy letter on a touch screen? It’s not fun. As I mentioned earlier–and it’s worth repeating here for those of you who fell asleep during my bloviating–touch typing on a mobile device sucks. I daresay it’s even a waste of time. Until someone can think up a way to use electric or magnetic forces to provide some sort of tactile barrier above each key, with feedback similar to a keyboard, lengthy writings will be limited to the desktop or a tablet of a different era–the paper tablet.

So what’s this got to do with Microsoft?

I’m glad you asked, and I’m sorry for making you suffer through thousands of words. I’ve never been concise, especially when I’m particularly vocal about a subject. Or excited. Or have a captive audience. (Don’t worry, I’ll let you go very shortly.)

It’s no surprise that Microsoft has been sitting on the sidelines watching the mobile universe speed by almost perpetually out of their grasp. It’s also no surprise that Microsoft has been desperate to snag some share of a rapidly growing market, and they’ve even gone so far as to alienate their entire installed base by grafting a mobile UI onto a desktop OS.

I’m talking, of course, about the blasphemy against everything Microsoft has ever done in recent memory called Windows Hate. Wait, no, that’s not right. Windows 8! There we go! I knew it was something that sounded like a bodily function. I’m just pleased to know that I remembered which function it happened to be.

Ignoring for a moment the love-hate relationship many users have with Windows 8 and the shameless fanboys who are undoubtedly paid to praise it for all its shortcomings, it’s been painfully obvious since its inception that Windows 8 represents a new, self-destructive era for Microsoft. The Windows Store, left unchecked, may threaten the vast ecosystem of 3rd party applications if Microsoft should ever choose to lock down the OS to install only certified applications. But, curiously, Windows 8 also represents what I think may be the evolution of Microsoft’s market strategy.

Earlier this year, Microsoft announced the closure of the Games for Windows Live marketplace, leaving dozens of games and DLC in a state of perpetual limbo. It remains to be seen what Microsoft intends to do with software that is now rendered unavailable to newcomers, but the fate of GfWL is already sealed: It’s due to close entirely by next year (2014).

At this point, only two possibilities remain. The first is that Microsoft plans to reopen the marketplace under a unified banner serving Windows, Windows-based devices, and the Xbone. The second is that Microsoft plans to segregate Windows and Windows devices under the Windows Store, away from their gaming platform marketplace. While I’m hoping the latter won’t occur, part of me is awfully suspicious.

Microsoft has been fairly vocal about their plans for the Xbone, even reneging on announced policies in response to consumer feedback. But I almost worry that their plan is to entirely cannibalize the gamer market with their Xbone offerings. If Microsoft refuses to reopen anything like GfWL, abandoning dozens of titles in the process, then cannibalization might be the only strategy they have left. Where else can you get customers?

The only problem is that cannibalization is never a good strategy. The first hint that Microsoft is likely planning to kill off Windows as a gaming platform comes not from Microsoft but from Valve, in the form of their Steam Machines. Valve is working closely with dozens of vendors, gaming studios, and developers; the Steam Machines are Linux-based and likely to use a distribution model not all that dissimilar from Android’s. Valve won’t necessarily be making the hardware themselves, but, like Google, they’ll be releasing the operating system entirely for free to companies that do plan on making hardware. Undoubtedly, these companies will be afforded the opportunity to customize the OS (within reason; probably adhering to a specific set of standards), much as it appears they’ll be taking liberties in terms of hardware selection and capabilities. Applying the Android model to consoles is almost a brilliant maneuver, and it makes me wonder who will be left as the “iPhone” of consoles: the Xbox or the PlayStation? I think you can guess which company I’m betting on.

This leaves Microsoft in a precarious situation. By all but forcing the gaming crowd onto the Xbone by limiting Windows to a certain degree (either by no longer porting titles or by allowing dozens of established ones to stagnate) and attempting to create a homogeneous platform between desktop and mobile long before anyone else, Microsoft may wind up shooting themselves in the foot. Valve is already discreetly collecting various bits of productivity software in their Steam platform, so it’s not all that difficult to imagine a world where one simply needs to download SteamOS, log in, and have available all the software they’d ever need. I can also guarantee that SteamOS’ package manager will be indistinguishable from Steam itself, though some concessions may be made for other popular package managers. We’ll know more in the coming weeks and months.

I’m just not sure how I feel about a DRM platform like Steam becoming the new walled garden, courtesy of a highly customized variant of Linux. It’s almost ironic to consider that a free operating system, to which users may be driven by a lack of freedom elsewhere, would ultimately become their internment.

I think it’s also somewhat ironic that in a battle between Apple and Android (more specifically, Google), one of the most notable casualties would be Microsoft. In an effort to reach for greater market share in the mobile world, like Icarus reaching for the sun, Microsoft may find themselves plummeting to earth. Having sacrificed the desktop for their mobile and Xbone divisions, the only thing that might cushion their fall is the enterprise (or business, or government), but if a handful of European states have taught us anything, it’s that switching isn’t difficult after all.

That Microsoft’s empire could crumble at the hands of a war in which they were hardly targeted is indeed ironic, but it’s also a testament to where their leadership has driven them. In that sense, mobile may indeed destroy the desktop–if that desktop happens to carry a Windows logo.

***

KDE 4.10 Upgrade and Disappearing Windows

So, KDE 4.10 came out last week and hit the Arch repos sometime thereafter, and I updated today. I really love new versions of KDE–honestly–but this time something really bizarre happened. I updated, rebooted to the new kernel (I was running 3.7.4; the update was to 3.7.6), fired up KDE, and started loading my usual applications.

Then nothing worked quite right. I’d open one window, and the others I had open would disappear. Opening the launcher would also exhibit this problem: Open terminal windows would disappear the instant the launcher was clicked or any other application (including Dolphin) was opened. Infuriating. It seemed that I couldn’t have all my usual windows open at once–everything would disappear!

Fortunately, after some trial and error, I found the culprit. This latest version of KDE has made significant changes to ~/.kde4/share/config/kconf_updaterc. So, there are two solutions:

Easy Mode

Log out, switch to a virtual terminal (ctrl+alt+F1), log in from there, and delete the file ~/.kde4/share/config/kconf_updaterc. Then switch back to X (usually ctrl+alt+F7 or F8) and log back in to KDE so it regenerates the file. Enjoy!
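For the copy-paste inclined, the middle step is a one-liner (same path as above):

    # Remove the stale updater state; KDE regenerates it at next login.
    rm ~/.kde4/share/config/kconf_updaterc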

Hard Mode

Follow the previous steps but retain a backup of your .kde4 directory. Then run a diff on the two versions of kconf_updaterc and figure out what you want to keep, what line caused the problem, and retain as many of your previous settings as possible.
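Something along these lines, with the backup taken before you log back in to KDE (the .kde4.bak path is just my choice; use whatever you like):

    # Before regenerating: keep a copy of the old configuration.
    cp -a ~/.kde4 ~/.kde4.bak
    # After logging back in: compare the fresh file against the old one.
    diff -u ~/.kde4.bak/share/config/kconf_updaterc ~/.kde4/share/config/kconf_updaterc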

In any event, I hope this saves you some difficulty if you run into the same thing I did!

February 13th edit:

It seems that this post by “George” on the Arch Linux forums indicates a potentially easier solution, which is to change the window transparency settings. I changed the title transparency settings, but I’m not entirely sure if this will resolve the disappearing windows.

***