nginx v1.4.5 and IPv6

I recently updated the VPS this blog is sitting on. Coincidentally, this also updated nginx to the latest version and broke everything. I didn’t think much of it at the time, but when I linked a friend to this post over on my fun blog, he was delivered to the default nginx page. Puzzled, I poked around for a while, mostly examining DNS records and server configurations. I couldn’t find anything wrong.

Then I had a eureka moment.

I’m on IPv6 at home. I have this site (and others) configured to use IPv6. It hadn’t occurred to me until then that it might be protocol related. Using curl (curl -4 and curl -6), I confirmed my suspicions. Although the server was listening on TCP and TCP6, it was only serving up the vhosts on IPv6 and not IPv4. IPv4 was receiving the standard welcome page.
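
If you want to reproduce that kind of check yourself, it's nothing fancy; something along these lines is all I did (the host name is just a placeholder, obviously):

    # Force IPv4: this was coming back with the default nginx welcome page
    curl -4 -I http://example.com/

    # Force IPv6: this returned the vhost as expected
    curl -6 -I http://example.com/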

I knew that I had configured the server appropriately for both stacks. I'd read through the docs and combed through dozens of blog posts documenting the process. I was convinced the server was correctly configured. I must've fiddled with it for a good hour or so, reviewing documentation and the like to no avail.

Infuriating.

Since nginx 1.2 or 1.3 (I can't remember precisely), it's been necessary to add ipv6only=off to the listen directives in order to support a dual-stack environment. It's my understanding this trick doesn't work on some BSDs, but I know for a fact it worked fine under Linux. Or so I thought. I had tried it successfully under both Arch and Ubuntu with identical results, except that I neglected to recall one minor detail: my Arch install updated to nginx 1.4-something well after I had configured my desktop for developing on a dual IPv4/IPv6 stack, so I suspect it's broken in the same manner. But I use that machine strictly for development, and I'm not particularly concerned whether or not it works over IPv4. I don't use the protocol much within my network, so why worry, right?

To continue: I decided to take another stab at it and discovered something curious. Previously, all that was required to enable dual-stack support in nginx was to add the following to whatever was configured as the default host:

    listen [::]:80 ipv6only=off default_server;

And then all subsequent vhosts simply required:

    listen [::]:80;

That's all. It used to work, like magic. But, sadly, magic eventually runs out. This is why electronics stop working once you let all the "magic smoke" escape. Sorry, it's an old electrical engineering joke my father has oft repeated. I guess it's rubbed off on me.

Anyway, here's the solution. You might find it contrary to some of the antiquated information lurking on various blogs dating from 2011 through the middle of 2013. It works for nginx 1.4.5 (and possibly earlier versions), but the trick is to add this to the default vhost configuration:

    listen [::]:80 ipv6only=on default_server;
    listen 80 default_server;

And for all subsequent vhosts:

    listen [::]:80;
    listen 80;

I should note it works fine without adding the ipv6only=on parameter, just like the generic vhost config (above). I believe I've read that this is because newer versions enable ipv6only by default. However, if you're running a slightly older version, you might need to keep it, which is why I'm not going to remove it from my examples. Better safe than sorry, right?

default_server is (hopefully) obvious, but only required if you want to provide a default site (or page) for users hitting your web server’s IP. Or for ancient browsers that haven’t been taught how to use the Host header. Are there any of those left?

So, the trick is that you need two listen directives. Period. Yes, even for TLS/SSL. If you omit one of these directives on any vhost, that vhost simply won't be bound on the missing protocol. I suspect this is documented somewhere. The problem, though, is that there are literally dozens of blogs pointing to the old instructions that used to work. These are now deprecated. Following them will only lead to sadness.
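
To make that concrete, here's roughly what a complete dual-stack vhost looks like under 1.4.5, TLS included. Treat it as a sketch: the server name, certificate paths, and document root are placeholders, not anything from my actual config.

    server {
        # One listen directive per protocol; nginx no longer binds
        # the other protocol for you.
        listen [::]:443 ssl;
        listen 443 ssl;

        server_name example.com;

        # Placeholder certificate paths
        ssl_certificate     /etc/nginx/ssl/example.com.crt;
        ssl_certificate_key /etc/nginx/ssl/example.com.key;

        root /srv/http/example.com;
    }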

Initial frustration aside, I find this meshes well with my preferences. It's more explicit, and there's no question which protocols nginx will use when binding to the configured port or ports. However, it will cause headaches for IPv6-enabled sites migrating from nginx 1.2. So, if you're running Ubuntu and have decided to update in order to gain access to newer features (websocket support, SPDY, et al.), expect breakage. More importantly, be absolutely certain you've independently tested all of your deployed sites using IPv4 and IPv6. Make liberal use of the -4 and -6 switches for curl. It'll save you from unpleasant surprises.
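
For what it's worth, a quick loop makes that testing less tedious. The host names below are placeholders, and note that checking the HTTP status code alone isn't enough, since the default page and a real vhost can both return 200; compare something from the body instead.

    # Check every deployed site over both protocols.
    for host in example.com blog.example.com; do
        for proto in -4 -6; do
            echo "== $host ($proto) =="
            curl -s "$proto" "http://$host/" | grep -i '<title>'
        done
    done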

***

A Lesson from Twitter

Today, I got a curious e-mail from Twitter:

Hi, zancarius

Twitter believes that your account may have been compromised by a website or service not associated with Twitter. We’ve reset your password to prevent others from accessing your account.

You’ll need to create a new password for your Twitter account. You can select a new password at this link: [redacted]

As always, you can also request a new password from our password-resend page: https://twitter.com/account/resend_password

Please don’t reuse your old password and be sure to choose a strong password (such as one with a combination of letters, numbers, and symbols).

In general, be sure to:

Always check that your browser’s address bar is on a https://twitter.com website before entering your password. Phishing sites often look just like Twitter, so check the URL before entering your login information!
Avoid using websites or services that promise to get you lots of followers. These sites have been known to send spam updates and damage user accounts.
Review your approved connections on your Applications page at https://twitter.com/settings/applications. If you see any applications that you don’t recognize, click the Revoke Access button.

For more information, visit our help page for hacked or compromised accounts.

(Before you ask, yes this did come from Twitter.)

It turns out that my Twitter account had been compromised. I hadn’t posted anything since 2011, and I seriously doubt I logged into Twitter any time recently on my browser (though I probably have it active on a mobile device–I just never check it). This was puzzling to me, as I thought I had used a random password on the account as per my usual habit.

Except that I hadn't. Instead, I had used a simple throwaway password that could've been relatively easy to brute force given sufficient time. This was entirely my fault, and while there's no excuse for it, I admit I hadn't ever thought enough of using Twitter to protect the account. Furthermore, the account was created circa 2009, when I used to use fairly simple passwords for throwaways and strong passwords for accounts I wanted to protect (my personal e-mail accounts use 40-70 character pass-phrases, for example). So, this was entirely my mistake, and while it's plausible that I may have given a third party access to tweet on my behalf, I suspect this isn't the case; there were no apps listed in the authorized application list, and the Twitter e-mail strongly hints that such apps remain listed until manually removed.

So, lesson learned I suppose.

However, this did present a unique opportunity to learn from one of the top social networking sites in the world. Rather than closing accounts or granting spammers free rein, Twitter resets the account password and sends a polite notice to the e-mail address registered for the account, indicating what the problem is and how to rectify it. It's a brilliant idea, I think, and I'd love it if more sites followed suit. After all, spammers are using similar tactics elsewhere (including YouTube) to exploit accounts that might otherwise hold good standing with the community in order to continue their nefarious activities. Plus, is it really fair to terminate someone's account that's been compromised, just because it was used to spam? I don't think so, not anymore.

The other lesson in all of this is to use strong passwords even for accounts you don't think you'll use again. It can affect your reputation, it can cause embarrassment, and it feels deeply violating to see spammy comments from an account with your picture on it. While my account was only used for two spam tweets before Twitter shut it down, that sense of violation still cut deep.

For a couple of years, I've been using the excellent KeePass password storage application (more specifically, the KeePassX v2 port) to generate and store random passwords. Generating random passwords is increasingly viable, and increasingly necessary, as forum software (like vBulletin) has exhibited such serious weaknesses that MD5-hashed passwords are no longer enough to protect against attackers with even modest resources. By using randomly generated passwords, even if one is compromised, you don't have to worry about an attacker gaining access to other accounts, or to the mental algorithm you use to generate passwords you can remember.

That said, for my most important accounts, I do use fairly lengthy pass-phrases. By mixing KeePass with pass-phrases, I can save my mental energy for remembering the passwords that matter most and offload the remainder of the work to the computer. So far, it's worked fairly well. Twitter is the only account I've had compromised, and only because I forgot to change its password to something random and left it with an older, somewhat "cutesy" (or so I thought) throwaway, which serves as a decent testament to the approach. It doesn't mean I won't have another account compromised, but it does dramatically reduce the probability. The fact that an account I seldom used was compromised pushed me into action: I reset some of my more important passwords and verified the ones I've collected to ensure they meet my criteria of strong and random.

So, even if you have an account you never think you’ll use again, be absolutely certain you use a strong (preferably random) password or pass-phrase. After all of this nonsense, I think I might have to go back to using my Twitter account. At least I didn’t lose it; all I lost was some face (but I have hardly any followers whom I don’t personally know in real life… so does it really matter?).

The other moral in all of this is that such compromises can hit anyone. Even you.

***

Updating an Ancient Arch Installation

A close friend of mine recently decided it was time to update his Arch Linux installation. It had been over a year since the core OS was completely updated, and I did everything I could to discourage him from trying the usual pacman -Syu. The problem, for all intents and purposes, is that I knew it wouldn't work. Much of the file system structure has changed dramatically over the course of the past year (the /lib move and the more recent merging of /bin and /sbin into their /usr counterparts), and a straightforward update was now difficult (but not impossible; more on this in a minute). Specifically, the glibc and filesystem packages have gone through several iterations since last July and would now permanently block each other thanks to these two moves, with no immediately obvious path forward.
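
If you've never seen it, the failure mode when those packages collide looks something like this. I'm paraphrasing from memory, so take the exact wording as illustrative rather than gospel:

    $ pacman -Syu
    ...
    error: failed to commit transaction (conflicting files)
    filesystem: /bin exists in filesystem
    filesystem: /sbin exists in filesystem
    Errors occurred, no packages were upgraded.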

I have observed some comments dismissing the update process from a pre-systemd installation as virtually impossible. While I suspect this is because the individual(s) asking for help are seen by others to be insufficiently experienced to do such an update, I’m not a particularly huge fan of the term “impossible” as it pertains to difficult feats–even for the inexperienced. After all, few things are truly impossible; it’s simply a matter of how much time and energy one is willing to invest in working toward a specific goal. Besides, if the recommended solution is to reinstall, why not figure out an optimal way to upgrade an “impossible” system? If it’s going to break either way, we may as well have some fun with it. (You have backups, right?) Even if you don’t have the necessary experience or knowledge relating to the underlying system, there’s no time like the present to begin the learning process.

So, Thursday evening, I set out looking for a solution. Fortuitously, I had an old Arch Linux virtual machine installed on my desktop (which, conveniently, is also Arch Linux) that dated back to approximately the same kernel revision as my friend's (v3.3.6 for mine; v3.3.4 for his) and had roughly analogous software. Like his system, my VM was also a pre-systemd installation and was still running glibc 2.15 and a very early version of filesystem-2012. In many regards, my VM was more or less in a state identical to that of the machine we needed to fix. However, the problem was compounded by an additional requirement. Console access was somewhat tedious for my friend to obtain because of the systems he had to go through, and he was exceptionally busy the afternoon we planned the updates. So, I had to keep network access (particularly SSH) up and running as best as I could.

Disclaimer

This narrative guide is intended as a general reference for anyone who might be in a similar situation: pre-systemd, pre-filesystem moves, and operating with a requirement that the machine be (mostly) network-accessible throughout the duration of the update. Be aware that nothing in this guide should be considered authoritative. I am a user of Arch Linux. There's still much about system internals (and Linux in general) that I don't know or don't fully understand. Consequently, I'm always learning, and there may be better alternatives to specific problems encountered during this update process. However, I do know enough about Arch to recommend that you never use --force unless you're specifically instructed to do so. At no point during this update process should you use it. Doing so will assuredly destroy your system, and you'll be forced to reinstall.

Secondly, this update process is not supported by the Arch Linux developers. If you have failed to maintain your system by keeping it up to date (and updating your Arch Linux installation is one of the core tenets of being an Arch user), you're unlikely to receive much help, since you've already established that your OS has been inadequately maintained. Furthermore, this update process relies on tricks using partial updates, which are also unsupported. The developers and users of Arch who frequent the forums are nice folks who have lives outside of Arch, and due to the tremendous task presented to the community of supporting a relatively large user base, it's impractical for them to spend a significant chunk of their time helping you with issues that, to avoid mincing words, are the fault of no one but yourself.

Thus, I hope this guide will be useful to those of you who may have a neglected Arch Linux box under your care. Be aware that I have targeted this guide to a system that was last updated somewhere in mid-2012. Although it should work for earlier systems, I’d highly recommend reading through the Arch news archives if you’re updating such a beast. You can usually determine where you need to begin by examining the version of your filesystem package, e.g. pacman -Qs filesystem. This guide should be general enough such that even if it doesn’t help you determine the exact update path for your system, you’ll be able to figure out where to get started.
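
For example, on a box in roughly the state described here, I'd expect output along these lines (the exact version shown is illustrative):

    $ pacman -Qs filesystem
    local/filesystem 2012.6-4
        Base filesystem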

Also, be sure to read through the guide in its entirety first. Check your installed packages, and make sure you're not downgrading them. filesystem and glibc should never be downgraded at any point during this update; otherwise you may exacerbate system breakage. This guide illustrates the update process with the x86_64 architecture. If you're using a 32-bit system, you'll need to replace all references to x86_64 with i686.

Read more…

***