Extraterrestrials, Rectal Probes, and Infographics

A couple of months ago, an infographic was making its rounds on the Internet titled “Let’s Say (for whatever reason) You’re the First Human Ever to Make Alien Contact”. I wrote this response almost immediately but neglected to publish it, mostly because I wanted to revisit my thoughts at a later date. To be fair, it’s certainly a thought-provoking piece, and its author raises a number of clever points (peppered with observations). I just can’t shake the feeling that its primary purpose is to provide entertainment and, possibly, elicit conversation. There’s nothing wrong with either, so I won’t fault the creator, but I certainly advise against taking it seriously. I’m sure it wasn’t exactly intended to be taken seriously, but not everyone is likely to catch on. Indeed, judging from many of the comments I’ve seen crop up whenever this image makes the rounds, no one has caught on. In other words, it’s full of hot air.

I admit that I really enjoy this sort of thing, and thought games are a fantastic waste of an evening, but I’ll warn you that I have no expertise that qualifies me as an authoritative critic. I just like code, Linux, technology, and building things. I’m neither an astronomer nor a biologist. I’m not a physicist, nor am I a mathematician. I enjoy articles written by people much smarter than I am, and being a skeptic at heart whenever the masses get riled up about the latest fad, I can’t help but examine the subject more closely.

I’ll warn that this article is primarily an opinion piece. There’s no solid information contained within, and it’s even less likely to contain intelligent musings that are my own. Many of the ideas expressed here are just parroted from other sources that have a better handle on the subject (scientific journals, other sites and web personalities, my dad). I can’t credit myself for much more than assembling a handful of disconnected thoughts tangentially related to “first contact.”

So, let’s begin.

First, I want to discuss a little bit about the basic premise of first contact illustrated in the infographic and often repeated in pop culture. It’s wrong. Aliens are not going to make first contact with some random Joe Schmoe on the streets (see below). Instead, they’re much more likely to observe our civilization indirectly, gathering information pertaining to their curiosities, and then–without us ever knowing–they might just leave. That’s right: First contact may have already happened and we’ll never know. Of course, assuming we don’t nuke ourselves back into the stone age or somehow make landfall on a distant star system before our Sun decides to cannibalize its progeny, we might bump into them again. (Hey, remember that planet you came from? We visited you about 200,000 years ago and you didn’t blow yourselves up. I’m proud. Maybe you can make it another 200,000 years?) In this scenario, there’s no point to further discussion. They came, they saw, they left. For most sufficiently advanced civilizations interested in us solely as a celestial curiosity, that’s probably as far as they’ll go (assuming they don’t have proxies). There’s nothing we can provide them apart from satisfying their own scientific curiosity in terms of observing other marginally intelligent life. Well, mostly.

There is one other reason we might make first contact and know it: They need us. I don’t mean they’ll land outside the White House and start playing sappy love songs, either. If they need us–and I mean really need us–it’s because they’re looking for something. It won’t be oil; hydrocarbons of various grades are moderately common in our own solar system. It won’t be water, because that’s all over the place. It won’t be rock, steel, diamonds, or Russian brides. But it might just be proteins, chlorophyll, and other organically-created compounds that aren’t readily available elsewhere. That’s right, they might just be hungry and we’re their next meal. But I wouldn’t worry myself about this too much. If the aliens are able to reach us here and they want to eat us, there’s nothing we’ll be able to do to stop them outside of giving them a righteous case of heartburn. I know you’ve seen Independence Day. I know you’ve seen other flicks where humans emerge victorious in the face of insurmountable odds. I also know you’ve never seen a coup de slaughterhouse, led by a cattle mutineer bravely fighting humanity’s tyranny over his people.

Oh, but we’re smarter than cattle, you say. Sure, and I’d bet you’d turn down a trip across the galaxy if the aliens offered it to you with no strings attached (except for the fine print). Think of it this way: The operator of some celestial alien amusement park offers free rides to the first 100 people, except that they never return. Maybe that’s the schtick: “Come to Planet Paradise! It’s everything you’ve ever wanted! Stay a while, stay forever!” I’d imagine there’d be droves of people lining up. Limiting travel to the able-bodied would filter out the unfit, disease-ridden subjects, rendering (heh…) post-processing just a little bit easier. Bon appétit!

Most likely, though, they won’t be here in person (unless they like really fresh produce). Instead, it’d be more likely and more efficient to send the interstellar equivalent of a factory ship that harvests, collects, processes, and cans organic materials in a single sweep. Heck, I doubt it’d be a matter of packaging “dolphin-free tuna” or soylent green (remember, it’s people! Well, except in the book…). Instead, we might just encounter giant factory ships that suck up all the organic compounds on a given world, process them into a mostly homogeneous slurry, and dispose of anything they don’t want as waste. I’ll bet you’ve never seen a pooping spaceship before, have you?

If you’ve never seen a meat emulsion, I’d highly recommend looking one up so you have a better picture of what I’m talking about. The exception in this case being that it’d be an emulsion of everything evolution has (had?) to offer, sloshing about in billions of storage tanks at a comfortable -100°C.

Even if they don’t want to eat us, chances are they won’t send a biological emissary (leastwise, one that hasn’t been engineered), and I would find it highly unlikely they’d send a member of their own species–with the notable exception of the “Star Trek Paradox”.

They Sent a Robot Army

One only needs to look just beyond the periphery of our own planet to catch a glimpse of the most likely sort of first contact: Robotic probes. We’ve sent probes into orbit, to the Moon, to each of the planets (and beyond), and we’re even doing some really amazing science on Mars. If we can do it, there’s no reason more advanced civilizations can’t do it on an interstellar (if not intergalactic) scale. The crux of this argument is that first contact will be made by cold, heartless probes (again, ignoring bioengineering for a moment) that snap a few pictures and shuttle them back to some central data store for further processing. Then again, such probes might possess a great deal of artificial intelligence and decide we’re not worth it after a few nanoseconds of deliberation.

I can’t say I blame them.

The ironic thing in this case is that, barring a fully robotic canning ship that’s come here for some distant civilization’s next meal, an alien intelligence-gathering probe might be somewhat less hazardous to encounter and a million times more friendly. Well, assuming we could even recognize it as an alien device. For all we know, such probes might be microscopic, or they might even be disguised as a small space rock that happens to be on a really weird trajectory. Bat ’er up!

They Don’t Want You for Tea

If we can assume for a moment that our poor lost soul who happened to be abducted is aboard an alien vessel, it won’t be for tea. And chances are, our subject won’t make it out alive. If the aliens have taken sufficient interest in poor Mr. Schmoe, it’s probably to examine his biological features beyond what their highly advanced scanners are able to ascertain. Such questions as “At what atmospheric pressure does Mr. Schmoe cease to live?” or “Can Mr. Schmoe breathe water, sulfur dioxide, cyanide, or any number of other substances, and in what quantity?” In all likelihood, Mr. Schmoe will just be one poor Schmoe in a sea of Schmoes, all collected randomly to limit statistical outliers, each cataloged from point of discovery to time of death.

This may seem appalling to some, but I want you to take a moment to think about biologists studying a new species for the first time. Biologists first observe, then collect, then dissect. Collected specimens may or may not be returned to the ecosystem from which they were recovered, and even if they were released, they probably wouldn’t have the means of returning home. Likewise, specimens are not always left in one piece. It’s not that biologists want to brutally murder previously unseen organisms as much as it is simply an artifact of science. Sometimes creatures just die because they’re stressed or because they’ve been removed from an environment they’re adapted to and placed inside one they’re not. Besides, sometimes (okay, most times) creatures can’t tolerate being dissected for very long before expiring, and for the most part, complex organisms don’t function particularly well as a disassembled puzzle.

What I’m hinting at here is that no amount of Crazy Glue is going to put Aunt Suzie back together after the Rectoids are done with her. It’s kind of like a human Humpty Dumpty.

While an advanced civilization undoubtedly has techniques to non-invasively examine living organisms, such scanners may not be able to tell the whole story. Nor is observation unlimited in what it can reveal. Collectively, a variety of tools could be used to determine our chemical composition, what we eat, how we interact, and (very generally) the environmental conditions we tolerate. An astute civilization could deduce a great deal from direct and indirect observations alone, including metrics like population distribution (the relative lack of settlements in Antarctica would indicate we don’t do well below certain temperatures), farming techniques and subsequent food consumption, and technological achievements. But there are very few substitutes for outright killing an organism to determine its absolute boundaries. And, well, few alternatives beat violating its external boundaries to see how it ticks.

Think of it this way: If we encountered a potentially dangerous but otherwise technologically inferior species that was not like us, what would we do? We may try to avoid provocation through direct assault, but if a few of our scientists got killed, it’s unlikely anyone would feel all that upset if we snagged a handful for experimental purposes. And, of course, by “experimental” I mostly mean “figure out the quickest way to kill these things in case they get out of hand.”

Now, imagine going up against a primitive organism that was dangerous and bred like rabbits. You scoff, but with almost 7 billion people on the planet, “dangerous bipedal apes with rabbit-like reproductive skills and access to weapons of mass destruction” sort of fits the bill. Who knows? Maybe the aliens coming here are a sort of cosmic Orkin man hired to get rid of a human infestation.

The Star Trek Paradox

I have no idea if this is a “thing.” In fact, it probably isn’t, because I doubt anyone outside sociology would be dumb creative enough to think this one up. On the other hand, because rule #34 of the Internet applies surprisingly well outside the realm of smut insofar as “if you can think of it, it exists,” this probably is a thing. If it’s not, it is now.

The paradox essentially goes like this: Civilization A has had X decades/centuries/millennia to think up crazy ways to greet less advanced civilizations. They stumble upon evidence of Civilization B. They observe Civilization B. Then, for whatever reason–maybe it’s because B makes some killer pork ribs or because B just discovered the Warp Drive–A decides to make contact. Maybe they remember that one time some four thousand years ago when they collectively thought “It’d be rad if aliens totally landed on our capitol building and started doing a jig.” Maybe it’s a spur of the moment thing. Either way, in spite of their ridiculously advanced technology and capability to annihilate B a million times over without so much as breaking a sweat, they decide to plop down in a vacant parking lot outside a truck stop just to say “Hi.” It doesn’t make much sense, because the backwards Bs are of absolutely no use to the Awesome As.

Now, this isn’t so much a paradox as it is a plot device to explain where Vulcans came from and to provide Star Trek: First Contact with a semi-believable story arc–if you can believe all aliens look exactly like us and have all of the same features. That said, having the technology to silently observe any given civilization from the safety of your own roost hundreds–if not thousands–of light years away, only to pop out of the shadows like gleaming targets for a bunch of bucktoothed hillbillies (who undoubtedly hold a bit of a grudge against aliens anyway, since they swear their Uncle Bob was abducted a few years prior and probed in the anus with a phallic metal object), sort of doesn’t make any sense. Paradox or not, it seems a bit like suicide, or perhaps their culture views such vulnerability as a sign of peace.

I can only imagine the end of human civilization beginning with the phrase: “Hey, y’all, they isn’t one of us!”

The Star Trek Paradox neatly outlines the implausibility that a civilization would just happen to show up the moment we make a groundbreaking discovery–or for any reason, really–just because they happen to think “it’s time.”

First, while we haven’t yet invented a warp drive, the destructive specter of atomic weaponry has existed for more than half a century. One might think it would be more pertinent to visit a promising civilization before it annihilates itself, thus ushering in an era of peace and prosperity (or speeding up the process by provoking our benefactors into doing the dirty work for us). Second, just because a civilization might suddenly be capable of transiting among star systems doesn’t mean it’s any more “ready” than it was previously. It just means that civilization is capable of bringing its bad habits with it even further than before. Going from Earth-bound to the-galaxy-is-our-playground overnight isn’t going to suddenly change our behavior, and if you disagree with that sentiment, I’d like you to ask Native Americans how that whole European thing worked out for them.

I could be completely wrong since I’m not an alien, but the Star Trek Paradox is something that makes little rational sense and probably has even less bearing on reality.

What if it Happens Anyway?

So, let’s just assume first contact happens anyway either because of the Star Trek Paradox or because these aliens are feeling exceptionally cheeky and get their jollies out of scaring the organic refuse out of lesser species. The question is: Do you need to know math?

The answer: No.

Before I explain why, I’d like to point out that the infographic (linked earlier in this post) has a single, very significant contradiction. First, it suggests the aliens contacting us would possess technology so far beyond anything we can comprehend that we’d be better off doing nothing. Then, it suggests that we would need to resort to demonstrating some capability of math and scientific understanding as if they’re completely oblivious to everything we’ve done. I’m sorry, but I don’t buy it; if an alien civilization were to make contact (with good intentions), you can bet they’d observe us to gain a better understanding of our capabilities.

Instead, they’d watch the construction of buildings and roadways, which demonstrates a knowledge of architecture (and by extension trigonometry) and engineering. They’d observe aircraft and satellites, both of which demonstrate aeronautical progress beyond a simple “sticks and stones” society. They’d detect radio emitters peppered across the planet and probably try to decipher the transmissions. But perhaps most importantly: Their advanced technology wouldn’t preclude a basic understanding of sociology and behavioral patterns. Indeed, they might try to determine who the leaders were and make contact with them directly. After all, if their civilization were anything like ours, it would behoove them to avoid abducting a random stranger wandering the woods late at night. While such a person would be representative of the general population (and what they know), such a find is mostly useless outside of playing biologist (see above).

Leaders have a certain amount of influence over their tribe. Think about human history and instances where first contact was made between various civilizations. Invariably, the leadership of those making landfall sought out the leadership of whatever natives they encountered. Of course, they mercilessly slaughtered them in most cases–or, in the case of the Chinese, abducted the king of Sri Lanka to personally apologize to the emperor for insulting his troops–but generally speaking, enterprising explorers often sought out leadership. There are exceptions, mind you, but fortunately most of those involved sport–like hunting.

To this extent, the old first encounter joke “take me to your leader” and its derivatives might not be so far off. Assuming some backwater hillbilly happens to be the target of our future encounter with aliens, it’s doubtful that he or she will need to be well versed in binary arithmetic or anything else. Considering the absolute shock of encountering a highly advanced race for the first time, it probably wouldn’t matter if the subject of our discussion was a mathematician or not. For something as historic and important as a first encounter that could potentially change the fate of our species, most people’s first reaction would probably be to run for the hills.

“Z’katek. You did it again. You scared them off.”

“I know, B’thuk. So much for asking directions to the nearest fueling station.”

Joking aside, if a civilization isn’t bent on destroying us and is genuinely curious about the human species as a whole, I can almost guarantee that we will never knowingly encounter them. Indeed, I suspect that they would be more likely to observe us at a distance, gather whatever it is that suits their curiosity, and then leave. They might take a memento or two, and not the biological sort, so such a circumstance would be punctuated by the mystery of a missing satellite. You can tell quite a bit about a culture’s technology, capabilities, influence, and more by simply snagging a piece of their work. For an advanced society that came here to observe, a satellite might be the perfect sample, as it illustrates (roughly) our computing capabilities, our communications capabilities, and how we’ve learned to best orient and control objects in microgravity–all of which are important in ascertaining where on the technology curve human civilization lies.

So no, I don’t think it’s necessary that everyone be well versed in what to do when encountering aliens. Such a discussion is only useful as a thought experiment, nothing more. On the other hand, if the signal to your television suddenly goes blank and no one knows what happened to the satellite (but no doubt there would be some finger pointing at the international level), an alien ship might have just come–and gone–inadvertently ruining your football Sunday dinner party. Unless, of course, they’re here to harvest us, in which case we’re the dinner.

***

A Lesson from Twitter

Today, I got a curious e-mail from Twitter:

Hi, zancarius

Twitter believes that your account may have been compromised by a website or service not associated with Twitter. We’ve reset your password to prevent others from accessing your account.

You’ll need to create a new password for your Twitter account. You can select a new password at this link: [redacted]

As always, you can also request a new password from our password-resend page: https://twitter.com/account/resend_password

Please don’t reuse your old password and be sure to choose a strong password (such as one with a combination of letters, numbers, and symbols).

In general, be sure to:

Always check that your browser’s address bar is on a https://twitter.com website before entering your password. Phishing sites often look just like Twitter, so check the URL before entering your login information!
Avoid using websites or services that promise to get you lots of followers. These sites have been known to send spam updates and damage user accounts.
Review your approved connections on your Applications page at https://twitter.com/settings/applications. If you see any applications that you don’t recognize, click the Revoke Access button.

For more information, visit our help page for hacked or compromised accounts.

(Before you ask, yes this did come from Twitter.)

It turns out that my Twitter account had been compromised. I hadn’t posted anything since 2011, and I seriously doubt I logged into Twitter any time recently on my browser (though I probably have it active on a mobile device–I just never check it). This was puzzling to me, as I thought I had used a random password on the account as per my usual habit.

Except that I hadn’t. Instead, I had used a simple throwaway that could’ve been relatively easy to brute force given sufficient time. This was entirely my fault, and while there’s no excuse for it, I admit that I hadn’t ever thought enough of using Twitter to bother protecting the account. Furthermore, the account was created circa 2009, back when I used fairly simple passwords for throwaways and strong passwords for accounts I wanted to protect (my personal e-mail accounts use pass-phrases upwards of 40-70 characters, for example). So, this was entirely my mistake, and while it’s plausible that I may have given a third party access to tweet on my behalf, I suspect this isn’t the case; there were no apps listed in the authorized application list, and the Twitter e-mail strongly hints that such apps remain listed until manually removed.
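To put some rough numbers behind “relatively easy to brute force,” here’s a back-of-the-envelope sketch in Python. The guess rate (ten billion attempts per second for an offline attack) is an assumption for illustration, not a measurement, and the arithmetic only applies to randomly chosen passwords; a “cutesy” human-chosen throwaway has even less effective entropy than the first example suggests.

```python
import math

def entropy_bits(length, alphabet_size):
    """Entropy of a *randomly chosen* password: length * log2(alphabet size)."""
    return length * math.log2(alphabet_size)

def years_to_exhaust(bits, guesses_per_second=1e10):
    """Expected time to search half the keyspace at the assumed guess rate."""
    return (2 ** bits / 2) / guesses_per_second / (3600 * 24 * 365)

# 8 random lowercase letters (a generous model of a simple throwaway):
print(years_to_exhaust(entropy_bits(8, 26)))    # a tiny fraction of a year -- roughly ten seconds of cracking
# 40 random printable ASCII characters (a long pass-phrase or manager-generated password):
print(years_to_exhaust(entropy_bits(40, 95)))   # astronomically many years
```

The gap between those two numbers is the entire argument for long, random credentials.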

So, lesson learned I suppose.

However, this did present a unique opportunity to learn from one of the top social networking sites in the world. Rather than closing accounts or granting spammers free rein, Twitter resets the account password and sends a polite notice to the e-mail address registered for the account indicating what the problem is and how to rectify it. It’s a brilliant idea, I think, and I’d love it if more sites followed suit. After all, spammers are using similar tactics elsewhere (including YouTube) to exploit accounts that might otherwise hold good standing with the community to continue their nefarious activities. Plus, is it really fair to terminate someone’s account just because it was compromised and used to spam? I don’t think so–not anymore.

The other lesson in all of this is to use strong passwords even for accounts you don’t think you’ll use again. It can affect your reputation, it can cause embarrassment, and it feels unnaturally violating to see spammy comments from an account with your picture on it. While my account was only used for two spam tweets before Twitter shut it down, that sense of violation still ran deep.

For a couple of years, I’ve been using the excellent KeePass password storage application (more specifically, the KeePassX v2 port) to generate and store random passwords. The tactic of generating random passwords is increasingly necessary as forum software (like vBulletin) exhibits such glaring weaknesses that MD5-hashed passwords are no longer strong enough to protect against attackers with even modest resources. By using randomly generated passwords, even if one is compromised, you don’t have to worry about an attacker gaining access to other accounts–or to the mental algorithm you use to generate passwords you can remember.
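If you want a feel for what that tactic amounts to, here’s a minimal sketch using Python’s standard secrets module. This is not KeePass’s implementation, just the general idea: every character is drawn from a cryptographically secure source, so there’s no pattern to guess and nothing shared between accounts.

```python
import secrets
import string

def random_password(length=24):
    """Draw each character from a CSPRNG so the result has no guessable pattern."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# One unique password per site; the password manager remembers it so you don't have to.
print(random_password())
```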

That said, for my most important accounts, I do use fairly lengthy pass-phrases. By mixing KeePass with pass-phrases, I can save my mental energies for remembering the passwords that matter most and offload the remainder of the work to the computer. So far, it’s worked fairly well. Twitter is the only account I’ve had compromised, and only because I forgot to change its password to something random and left an older, somewhat “cutesy” (or so I thought) throwaway in place–which serves as a good testament to the approach. It doesn’t mean I won’t have another account compromised, but it does dramatically reduce the probability. The fact that an account I seldom used was compromised pushed me into action to reset some of my more important passwords and to verify that the ones I have collected meet my criteria of strong and random.

So, even if you have an account you never think you’ll use again, be absolutely certain you use a strong (preferably random) password or pass-phrase. After all of this nonsense, I think I might have to go back to using my Twitter account. At least I didn’t lose it; all I lost was some face (but I have hardly any followers whom I don’t personally know in real life… so does it really matter?).

The other moral in all of this is that such compromises can hit anyone. Even you.

***

Cannibalism or Convergence?

I’ve been following some of the commentary and fallout (and some of the overblown suggestions) regarding Apple’s latest iPhone. Now that most of the hype has died down and things have more or less returned to normal, I’d like to share some of my own thoughts on the matter and what changes (if any) we’ll be seeing in the near future. First, though, I’ll admit: I’m no fan of Apple, but I do commend them for having the foresight to migrate iOS to a 64-bit platform well ahead of when they may actually need it. Many of the comments in the HN article are insightful: The performance gains to be had from 64-bit are minimal at best, particularly on a phone, but in another 2-3 years, phones will probably be in the 4-8 GiB RAM range, and 32-bit will suddenly become a liability. Any migration forgone now will be mandatory once the 4 GiB limit is reached, so it certainly makes sense to do it much earlier.

Before anyone points out something I see repeated in replies to the very insightful HN comments, I’d like to preemptively address it to get it out of the way. Yes, I’m aware you can address more than 4 GiB of RAM from a 32-bit processor using PAE, but if you’re going to see increases in on-board RAM every 6 months to a year, why not just sideline the issue entirely?
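For anyone wondering where the 4 GiB figure comes from, the arithmetic fits in a Python prompt. The 36-bit figure below refers to classic x86 PAE; the exact physical width varies by platform.

```python
# A flat 32-bit pointer can name at most 2**32 distinct byte addresses.
flat_32_bit = 2 ** 32
print(flat_32_bit // 2 ** 30, "GiB")   # 4 GiB: the ceiling without tricks

# PAE widens the *physical* address bus (36 bits on classic x86), so the machine
# can hold more RAM, but each individual process still lives in a 32-bit
# virtual address space (hence the appeal of simply going 64-bit).
pae_physical = 2 ** 36
print(pae_physical // 2 ** 30, "GiB")  # 64 GiB of physical memory, at most
```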

Convergence: Resistance is Futile?

The real nagging question that’s been permeating tech circles for weeks is one of the convergence of platforms. It seems with the latest iPhone, dozens of pundits and droves of Apple fans are touting the death of the desktop, claiming that the reign of mobile will soon be upon us. I’m not convinced, but I do think that news and musings like this don’t occur in a vacuum isolated from everything else. I think I now know why Microsoft has made some bizarre decisions that may soon prove to be fatal, but I’ll get to that later.

When I was about 18, I remember watching a short clip on Good Morning America about the future of technology. My mum insisted that I watch the whole thing, too, because she seemed puzzled by the teaser offered earlier in the show. I don’t precisely remember the contents of the program, but I do remember one of the guests talking about the future of desktop computers, the Internet, and technology. He suggested that within 5-10 years (bear in mind this was circa 1999), the desktop would be supplanted by a thin client; he insisted that these systems would consist of little more than a monitor, some RAM, and a network connection. They would then be tied into a central server run by a large corporation (essentially a network appliance before the term “network appliance” was in vogue), and all of your applications, games, and just about everything else would run from that central server.

At the time, I had the distinct advantage of understanding a little bit about networking. While I was no networking genius (I’m still not, but I know quite a bit about the protocols we rely on), I knew enough about bandwidth and the rate at which it was growing to know that such dreams were out of reach–at least for a while–and there was the nagging question about games and similar applications that relied on relatively quick rendering or significant network throughput. Would that be sent down the pipe, too? It seemed absurd, and while there have been some attempts today at remotely rendered games, the latency and throughput preclude any such utility outside laboratory curiosity. Likewise, the processing power simply isn’t available to power hundreds of thousands of players simultaneously playing something like the latest CoD or whatever other graphics-intensive games happen to be on the market. The gaming industry will likely be the saving grace of the desktop, and this may be a surprise to everyone but the lowly gamer. It’s no surprise, then, that the PS4 and Xbone are migrating more toward commodity PC hardware when just a decade ago, everyone assumed that PowerPC-based platforms would become the norm for the next ten years. If only we had the gift of foresight…

Still, the irony is not lost on me that the talk show guest all those years ago described something that would later evolve into what is now known as cloud computing, with the minor exception that the dream of thin, inexpensive client devices has not yet been realized. To a limited extent that may be true, but “thin client” applications (now cloud apps) have instead demonstrated incredible utility in niche use cases rather than general consumption. One could argue that smart phones and tablets have long supplanted the dream of the thin client (and they’re cheaper, too) with greater capability and storage. The future seems to be one where computing is something you carry with you, not something that rests centralized in a data center thousands of miles away. It’s such a romantic thought to consider highly portable devices when you consider that it was just a little more than 20-30 years ago that the home computer transitioned from dream to widespread reality, isn’t it? It’s also important not to get too caught up in the romance, because it’s easy to make assumptions that might never come to fruition.

So, I would like to make a prediction: I don’t think there will be a convergence of desktop and mobile in the future. Maybe I’ll be eating my words in 5 years, in 10 years, or maybe I’ll be right. Instead, I think the two represent use cases different enough to force them into the position they’re in now: They’ll continue working as complements. I’ll explain why.

The True Face of Tablets

Tablets have been an amazing boon to an already growing industry. Nearly everyone has at least seen a tablet, and many people own at least one. A year or two ago, it would’ve been a surprise to see an old lady stepping out of her car in the church parking lot, shuffling into the building, taking a pew, and carefully plucking a tablet from her purse. Today it’s almost commonplace, or at least it’s becoming common enough to be unsurprising. Using a healthy dose of anecdotal evidence to support such claims, I’d like to point out that my mum has a tablet she takes to church. Several of her friends from church have tablets. I’ve even heard from others who also have tablets and speak highly of the devices, often describing them as liberating. (Actually, the term they use is “handy,” but the idea they’re describing is one of liberation.) The only unusual thing about this particular demographic is that few (if any) of their husbands also own tablets. Many of the older men won’t even touch them. I’m not quite sure what this says about the 60-75 age group and up, but I do know what it says about the technology and, more importantly, about the predictions.

With the rise of mobile, pundits have been writing the PC’s obituary as a drawn-out but inevitable affair. The more progressive minds among them proclaim that the day will soon come when everyone will be equipped with tablets: office workers, programmers, bus drivers, and teachers. If you need a desktop, you’ll simply plug your tablet (or phone) into a docking station and begin working from the OS embedded in your mobile device. If this sounds familiar, it should. The concept of a “desktop replacement” isn’t new, and according to Wikipedia, it dates back to the 1980s. What is new, however, is that for the first time in the history of computing, desktop sales have been stagnating while mobile devices have been enjoying record growth in sales.

Does this evidence suggest the desktop will soon pass on into the hereafter? Should we ready our speeches and mournfully reminisce about days gone by? No. I’ll explain why I feel this is just another notch in the tree of technological evolution.

First, and most obviously, mobile device sales numbers are somewhat inflated. Pundits who point to the sales figures as definitive evidence that the PC is a dead man walking typically neglect to consider planned obsolescence, particularly in mobile data and voice contracts. Even tablets have fairly limited useful lifespans of approximately 2-3 years. The technological pressures exerted on the mobile platform are far greater than those on the PC, which often has a useful service life of 5-8 years in light use or in an office environment. Software requirements, hardware capabilities, and battery age all factor in to determine the lifetime of a mobile device. Of course, this doesn’t neatly explain everything. With the world economy struggling, shouldn’t mobile device sales be impacted, at least slightly? Well, maybe not. They are rather cheap, after all.

Second, and perhaps more importantly, mobile devices are relatively inexpensive for what they do. For those of you who don’t regularly play video games, write code, or spend far more time staring into the abyss than is otherwise healthy, mobile devices in general (tablets in particular) are fantastic for casual use. They’re great for reading, they’re great for browsing, they’re great for casual games (“party” games as some of you might call them), and they can be taken almost anywhere provided the battery is in good health. The only thing they’re undeniably terrible at is content creation. Maybe that will change the day someone figures out how to make a sort of magnetic/repulsive haptic-style system that provides tactile feedback for a software keyboard. As a touch typist, I find it difficult to spend a great deal of time tapping away at a screen with no distinction as to where my fingers are at any given time. I guess I’m one of those who can’t adjust.

Going back to what I mentioned earlier: Do you remember the somewhat anecdotal evidence I offered up of the old ladies and their tablets? It seems like an atypical use case, particularly in a world where technology is dominated by twenty-somethings carting around the latest iDevice as a sort of electronic status symbol among their peers. The thing is, the twenty-something demographic is reaching saturation, and what seemed atypical just a year or two ago may, as the rest of society catches up, become far more common than many of us realize. The 20-somethings aren’t everyone.

Essentially, I suspect that the pro-mobile apologists (the PC is dead!) can’t see the forest for the trees and the pro-PC mobile denialists (long live the PC!) don’t want to concede to the reality of the marketplace. Are you ready for it? I’ll even bold it to make it more apparent.

Not everyone needs to own a desktop PC.

I know that’s a shock, but the simplest truth of the matter is that tablets are a better match for the majority use case that the PC previously enjoyed. They’re excellent media consumption devices, and for casual users of technology–like my mum–who rarely e-mail but are voracious readers and researchers, sometimes the tablet is a far more useful device. It’s easier to pick up a tablet and thumb it over to a book you’ve been reading than to fiddle with the overhead lamp and stumble around the house looking for a small paperback you’ve misplaced. It’s easier to pick up a tablet than it is to go into another room, wait for your computer to boot, and go about looking for knitting or crochet patterns. Let’s face it: It’s easier to keep your brains neatly tucked away in a little electronic device not much bigger than the books you used to read as a kid. For many use cases, with the exception of content creation, using a tablet simply makes sense, and the demographic that I believe is fueling the growth–at least in the tablet world–is the 60+ age range. They don’t need to own a computer. Moreover, while many of them may have been exceptional typists at one point (my mum, for instance, is a touch typist and is largely responsible for my early education as one, too), they’re of the generation where tactile interfaces, like touch, simply make sense. When you grew up in a world where you manipulated knobs, buttons, and widgets, it’s so much easier to use your fingers to manipulate their virtual equivalents than it is to point-and-click. (Point-and-what?)

So, I’m arguing against the convergence of desktop and mobile, but I just made the case for mobile supplanting everything else. Right?

Not quite: The point here is that many of the people who own desktop computers probably never needed to. They don’t usually create a great deal of content. They don’t write e-mails often. They don’t write letters to print out (they do that by hand with a pen and paper–you under-30s know what those are, don’t you?). If they do write something electronically, it’s little more than a quick note. Sure, this use case could easily be filled by a fairly low-powered desktop, tucked away in a back room and only used once a month for printing out letters or the like, but in general, the older population is beginning to understand that mobile devices have greater utility than their bulkier forebears. As this discovery spreads and seasoned citizens become savvy to the benefits of a small, highly portable computer, mobile sales will continue to skyrocket, and desktop sales will continue to decline.

That means the desktop and mobile device will converge, with the desktop riding off into the night. Doesn’t it?

No, it doesn’t mean anything of the sort.

I alluded to the notion that many pundits fail to recognize many of the realities facing technology, and largely, I think it’s the fault of a combination of misplaced optimism, misinterpretation of market forces, and a healthy dose of wishful thinking. I think some of them also base their predictions on a secret desire to see one platform or the other “win” in the end (e.g. Apple versus Android), and in their minds the desktop is mere collateral damage. Yet, in spite of all the advances in mobile computational power, virtually everywhere you look, pro-mobile pundits celebrate the breakneck speed of mobile advancements while simultaneously ignoring the fact that the same technology that brought the mobile environment to life also powers desktops, and it certainly won’t remain at a standstill. Many of them even claim that Intel’s days are numbered, but Intel is still one of the largest manufacturers of chips in the world, and they’re dumping billions of dollars annually into research and development. For example, their new 14 nanometer process is just around the corner, and the x86 architecture is unlikely to go extinct anytime soon. If anyone should be concerned about mobile, it should be Intel. Yet Intel certainly seems to be doubling down on x86 in spite of the encroachment of ARM. Why? Are they that stupid?

I think Intel knows a bit more about the market than we give them credit for. Sure, AMD has introduced ARM-based server chips, but Intel isn’t going to throw away a multibillion dollar industry. In fact, I think they’re banking on growth, because more mobile devices almost directly equate to more media consumption, more users, and more services requiring new hardware to grow and expand. Although speculation has been mounting that ARM will likely oust Intel in the server space, I hardly see that happening. While ARM capabilities are growing, Intel’s chips are sipping less and less power. The next generation of Intel server CPUs will likely be fast and energy efficient. They’ll have many of the benefits that ARM currently boasts, removing much of the incentive for an expensive migration.

Yet, paradoxically, even if ARM were to win this battle and oust x86, it likely wouldn’t spell the end of the desktop. There are ARM ports of Windows, such as Windows RT (although legacy x86 applications won’t run on them), and most open source applications can be recompiled for a new architecture without much fuss. Apple, whose empire is also built on open source technologies, could just as easily migrate their OSX offerings to other architectures, but chances are pretty high that they won’t. Modern x86 chipsets are still substantially more powerful than the CPUs in mobile devices, and if history provides us with any insight into this trend, it’s a reality that will continue indefinitely barring unforeseen circumstances.

Sorry mobile buffs. x86 is here to stay. As power requirements drop, it’ll be big iron with a slimmer waistline, from your desktop to the datacenter.

What’s this mean for the desktop?

As far as predictions go, I think the more outlandish and progressive a theory, the more likely it is to be incorrect. Careful, cautious, and more conservative predictions tend to be accurate, and I think that the next 5-10 years will be more of the same that we’ve had the last 2-3 years. Mobile use will continue to increase, particularly among people who don’t really need a desktop, and desktops will still be purchased each year for tens of thousands of students, families with children, and grandparents who need a device that is more suitable for creating content than consuming it. That isn’t to say that mobile devices won’t be powerful enough to fill that niche in the next 5 years. No, mobile devices will be plenty powerful. It’s simply that the use cases for which they are designed (largely media consumption) don’t lend themselves well to writing essays at length or generally creating content. Casual photo manipulation may be one such realm conquered by mobile, but don’t count on anything more complicated than cropping, resizing, and other basic edits to family photos; finger-Photoshop is unlikely to supplant the real thing, because real graphics designers don’t even use a mouse. Indeed, among graphics designers, a “tablet” is something with a pen and a touch surface; it isn’t a mobile computer.

Another particularly problematic aspect of mobile devices is one of freedom. With a desktop, most users enjoy relative freedom to choose what they want to install and how the platform behaves. More savvy users can even repair or upgrade their computer, and the savviest of them all can build one from scratch. The PC gained much of its momentum because the platform is mostly open and relatively easy to maintain. From a developer’s perspective, nearly anyone could write software for most desktop environments without fear of walled gardens. Anyone could buy new hardware. Unfortunately, mobile threatens that freedom. Mobile threatens to concentrate the capabilities of the software in the hands of a few corporations and to consolidate software development to the anointed few. “App stores” are the antithesis of freedom, and while they operate under the guise of security, it’s difficult to reject the notion that users are trading their freedom for convenience. Of course, those of us who are aware of the dangers of computing-as-an-appliance are few and far between. While we may not be numerous, we have a secret weapon stashed away in a dark closet that we can unleash at a moment’s notice: The gamer.

I’ll warn you: I’m about to wax philosophical in this section, and this is where many disagreements will undoubtedly lie.

Gamers are notorious individuals in the tech community. They’re the folks you go to when you want to tweak your hardware or install fancy lighting, creating something of an outrageous and ridiculous exhibit of post-modern art meets Thomas Edison. Yet as much as major studios and console developers have tried, the PC has stubbornly lived on, thumbing its nose at enforced conventions and plowing its own way into the fields. The XBox 360 was slated to serve as a PC-gamer replacement, shifting players from the ubiquitous keyboard-and-mouse to thumbsticks and bumper buttons. It didn’t. It did replace the PC for some casual gamers, but for the MMO and hardcore FPS gamers, the console is surprisingly absent and unwanted. It isn’t for lack of capabilities, either, and while the real reason for this escapes me, I suspect it might have something to do with the very thing that Microsoft has been more than willing to destroy as of late.

For many gamers, the PC is the be-all, end-all of their hobby. They’ll have browsers open, they’ll have instant messengers running, and they might even be checking e-mail, all while speaking on a VoIP client with a handful of other gamers. The PC, while not the best fit for casual use, is very good at many things, and that’s where I think it will continue to shine. It took years for Apple to introduce some semblance of task switching in iOS, and even Android still suffers, in my opinion at least, from non-intuitive task switching. Simply put: Mobile devices don’t have an alt-tab. They instead attempt to weld multitasking onto a platform for which it wasn’t natively designed. For the PC, running dozens of applications and switching among them is an uninteresting problem. It was solved long ago, and the UI design for switching among them has long since been established. For mobile? Not so much, and there’s plenty of room for disagreement on how it should be managed in the future.

This limitation in mobile devices is due, at least in part, to the technical requirements of saving battery power. By giving the OS greater control over an application’s life cycle and suspending or terminating it when it’s not in use, the OS is able to control power consumption more directly. Halting a processor-hungry application while the user isn’t using it works great on a platform where available energy is at a premium until the next recharging cycle, but it isn’t much of a consideration for a computer that’s plugged into a wall outlet. This, I believe, is mobile’s Achilles’ heel. Whereas on the desktop a CPU-hungry task can run indefinitely, simultaneously sharing available processor time with other running tasks, on a mobile device such desires are fantasy at best and a dead battery at worst. The ability to use a computer for more than one thing at a time is what I believe will continue to breathe life into the desktop and may stay or entirely halt the encroachment of the mobile device into the realm of office work. That’s to say nothing of specialized use cases or niche fields (think software developers, monitoring stations, enterprise, and scientific use).
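If the lifecycle point seems abstract, here’s a toy sketch of the policy difference. This is not any real mobile OS API, just an illustration of the idea: whatever isn’t on screen gets frozen to spare the battery, whereas a desktop OS leaves background tasks running and simply shares processor time among them.

```python
from dataclasses import dataclass

@dataclass
class App:
    name: str
    foreground: bool = False
    suspended: bool = False

class ToyMobileOS:
    """Toy illustration only: the OS owns each app's lifecycle outright."""
    def __init__(self, apps):
        self.apps = apps

    def bring_to_foreground(self, name):
        for app in self.apps:
            app.foreground = (app.name == name)
            # Background apps are suspended to conserve battery; a desktop OS
            # would instead keep them scheduled alongside everything else.
            app.suspended = not app.foreground

apps = [App("browser"), App("mail"), App("game")]
sim = ToyMobileOS(apps)
sim.bring_to_foreground("game")
print([(a.name, "suspended" if a.suspended else "running") for a in apps])
# [('browser', 'suspended'), ('mail', 'suspended'), ('game', 'running')]
```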

To this end, I think the dream of carrying a tablet computer to work every morning (to say nothing of the security issues surrounding such a practice) will remain just that: a dream, akin to the prediction some 12 years ago that everyone would be using thin clients attached to a centralized server. Mobile devices are useful to a lot of people for a lot of things, but I highly doubt they’ll replace the desktop entirely. As complements, however, mobile devices shine like little stars in the blackest of nights.

Curiously, there’s one last thing that might hold back the mobile device beyond simply the logistics of convincing your workforce to take home a device and plug it into their workstation every morning. It’s also much more primal than any of the other reasons: Tactile feedback. Have you ever stopped to type a lengthy letter on a touch screen? It’s not fun. As I mentioned earlier, and it’s worth repeating here again for those of you who fell asleep due to my bloviating, touch typing on a mobile device sucks. I daresay it’s even a waste of time. Until someone can think up a way to use electric or magnetic forces to provide some sort of a tactile barrier above each key that provides feedback similar to a keyboard, lengthy writings will be limited to the desktop or a tablet of a different era–the paper tablet.

So what’s this got to do with Microsoft?

I’m glad you asked, and I’m sorry for making you suffer through thousands of words. I’ve never been concise, especially when I’m particularly vocal about a subject. Or excited. Or have a captive audience. (Don’t worry, I’ll let you go very shortly.)

It’s no surprise that Microsoft has been sitting on the sidelines watching the mobile universe speed by almost perpetually out of their grasp. It’s also no surprise that Microsoft has been desperate to snag some share of a rapidly growing market, and they’ve even gone so far as to alienate their entire installed base by grafting a mobile UI onto a desktop OS.

I’m talking, of course, about the blasphemy against everything Microsoft has ever done in recent memory called Windows Hate. Wait, no, that’s not right. Windows 8! There we go! I knew it was something that sounded like a bodily function. I’m just pleased to know that I remembered which function that happened to be.

Ignoring for a moment the love-hate relationship many users have with Windows 8 and the shameless fanboys who are undoubtedly paid to praise it for all its shortcomings, it’s been painfully obvious since its inception that Windows 8 represents a new, self-destructive era for Microsoft. The Windows Store, left unchecked, may threaten the vast ecosystem of third-party applications if Microsoft should ever choose to lock down the OS to install only certified applications. But, curiously, Windows 8 also represents what I think may be the evolution of Microsoft’s market strategy.

Earlier this year, Microsoft announced the closure of the Games for Windows Live marketplace, leaving dozens of games and DLC in a state of perpetual limbo. It remains to be seen what Microsoft intends to do with software that is now rendered unavailable to newcomers, but the fate of GfWL is already sealed: It’s due to close entirely by next year (2014).

At this point, only two possibilities remain. The first is that Microsoft plans to reopen the marketplace under a unified banner to serve Windows, Windows-based devices, and the Xbone. The second is that Microsoft plans to segregate Windows and Windows devices under the Windows Store, away from their gaming platform marketplace. While I’m hoping the latter won’t occur, part of me is awfully suspicious.

Microsoft has been fairly vocal about their plans for the Xbone, even reneging on promises based on consumer feedback. But I almost worry that their plan is to entirely cannibalize the gamer market with their Xbone offerings. If Microsoft refuses to reopen anything like GfWL again, abandoning dozens of titles in the process, then cannibalization might be the only thing in their strategy. Where else would the customers come from?

The only problem is that cannibalization is never a good strategy. The first hint to me that Microsoft is likely planning on killing off Windows as a gaming platform comes not from Microsoft but from Valve in the form of their Steam Machines. Working closely with dozens of vendors, gaming studios, and developers, the Steam Machines are Linux-based and likely to use a distribution model not all that dissimilar from Android. Valve won’t necessarily be making the hardware themselves, but, like Google, they’ll be releasing the operating system entirely for free to companies that do plan on making hardware. Undoubtedly, these companies will be afforded the opportunity to customize the OS (within reason; probably adhering to a specific set of standards) much as it appears they’ll be taking liberties in terms of hardware selection and capabilities. Applying the Android model to consoles is almost a brilliant maneuver, and it makes me wonder who will be left as the “iPhone” of consoles: The XBOX or Playstation? I think you can guess which company I’m betting on.

This leaves Microsoft in a precarious situation. By all but forcing the gaming crowd onto the Xbone by limiting Windows to a certain degree (either by no longer porting titles or by allowing dozens of established ones to stagnate) and attempting to create a homogeneous platform between desktop and mobile long before anyone else, Microsoft may wind up shooting themselves in the foot. Valve is already discreetly collecting various bits of productivity software in their Steam platform, so it’s not all that difficult to imagine a world where one simply needs to download SteamOS, log in, and have available all the software they’d ever need. I can also guarantee that SteamOS’ package manager will be indistinguishable from Steam itself, though some concessions may be made in terms of other popular package managers. We’ll know more in the coming weeks and months.

I’m just not sure how I feel about a DRM platform, like Steam, becoming the new walled garden courtesy of a highly customized variant of Linux. It’s almost ironic to consider that a free operating system, to which users may be driven due to a lack of freedom elsewhere, would ultimately become their internment.

I think it’s also somewhat ironic that in a battle between Apple and Android (more specifically, Google), one of the most notable casualties would be Microsoft. In an effort to reach for greater market share in the mobile world, like Icarus reaching for the sun, Microsoft may find themselves plummeting to earth. Having sacrificed the desktop for their mobile and Xbone divisions, the only thing that might cushion their fall is the enterprise (or business, or government), but if a handful of European states have taught us anything, it’s that switching isn’t so difficult after all.

That Microsoft’s empire could crumble at the hands of a war in which they were hardly targeted is indeed ironic, but it’s also a testament to where their leadership has driven them. In that sense, mobile may indeed destroy the desktop, if that desktop happens to carry a Windows logo.

***