Some time ago I needed some cheap web hosting for an inessential landing page of mine, with only a handful of visits a month. I found VDS6, which provides cheap IPv6-enabled virtual machines starting at just $4.75 per year. And since all the devices involved supported v6 anyway, why not go for the IPv6-only option?
The setup was as easy as expected: I went with the Debian image and configured nginx to serve different sites as virtual hosts, listening on a few different IP addresses VDS6 had allotted to me for a few cents extra (as far as I can remember). I can’t find that option right now, though, and the order page states that additional IPv6 addresses are free.
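One such vhost bound to a dedicated IPv6 address looks roughly like this (the address and domain are placeholders, not the ones I actually used):

```nginx
server {
    # bind this site to one specific IPv6 address
    listen [2001:db8::1]:80;
    server_name example.com;
    root /var/www/example.com;
}
```

Because each site listens on its own address literal, nginx keeps working even when name resolution on the host is broken, which becomes relevant later.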
Thankfully the Debian mirrors are reachable via IPv6, but as soon as you try to use anything Ruby, it all breaks down. The first thing Bundler usually does is try to connect to GitHub to resolve dependencies, and it will ultimately fail, as GitHub is only reachable via v4. With a bit of convincing and the help of Google’s DNS64 this can be overcome. Let’s Encrypt supports IPv6 and works without problems; certbot was even able to update the vserver’s configuration files and automatically reload the service for me 👍.
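The DNS64 part boils down to pointing the resolver at Google’s DNS64 service, which synthesizes AAAA records for v4-only hosts (the addresses below are from Google’s public DNS64 documentation):

```conf
# /etc/resolv.conf — Google Public DNS64 resolvers
nameserver 2001:4860:4860::6464
nameserver 2001:4860:4860::64
```

Note that DNS64 alone only fabricates addresses; the actual traffic still needs a NAT64 gateway for the well-known 64:ff9b::/96 prefix, which apparently was available here.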
Because everything had gone so smoothly up to that point, I even moved this blog there.
Connectivity is relative
The machine ran reliably, until one day it just didn’t anymore. I noticed it on my cell phone first, but didn’t think much of it, since on some rural towers you would sometimes only get v4 connectivity. But when the host continued to be unreachable, I began to investigate.
The hoster’s VMmanager backend showed the host as up, running, and reachable, which it still wasn’t from my end. nginx didn’t respond and several SSH connection attempts timed out. I then restarted the host, and when it showed up as back online, I still couldn’t connect. I was considering re-imaging the entire host when I found the button that opens a terminal from inside the host.
The v6 addresses had changed! All of them: the one main address that came free with the hosting, as well as the additional ones!
After begrudgingly updating the DNS records for all the domains on my end, updating the nginx configurations, and re-running the certbot script once for every domain, everything seemed to be working again. Except for SSH, which still didn’t want to connect properly. I left it at that at the time, because you can only put so much effort into tinkering with stuff, and the main purpose of the machine (namely hosting low-traffic landing pages) was met again. Plus, technically there still was shell access via the terminal in the management panel.
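Re-running certbot for each domain can be scripted as a small loop (the domain names are placeholders; drop the leading `echo` to actually invoke certbot’s nginx plugin):

```shell
# Hypothetical domain list — the post's actual domains aren't shown here.
domains="example.com example.org"
for d in $domains; do
  # prints the command; remove `echo` to really re-issue the certificate
  echo certbot --nginx -d "$d"
done
```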
Somewhat later I noticed that SSH was working after all, albeit with a really long wait before the connection was established. After a bit of searching and trying I found the 8.8.8.8 and 8.8.4.4 Google Public DNS resolvers in the resolv.conf! The only DNS resolvers configured on the IPv6-only host were v4 address literals, which meant there was no way the system could have resolved anything at all. That explains why nginx was still reachable, with its literally hard-coded IPv6 addresses in the vhost configurations, while SSH, which by default does a reverse DNS lookup on the connecting host, timed out.
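If you hit the same delay, disabling sshd’s reverse lookups on the server works around it (newer OpenSSH releases already default to this):

```conf
# /etc/ssh/sshd_config
# don't reverse-resolve connecting clients' addresses
UseDNS no
```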
“Fixing” the resolv.conf didn’t work, as systemd-networkd will overwrite the file with the same old non-functioning Google Public DNS entries at every chance it gets.
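One way to pin the resolvers despite networkd would be a .network file that ignores the DNS servers handed out over the wire (file name and interface name are assumptions; I haven’t verified this on that particular VM):

```ini
# /etc/systemd/network/50-static-dns.network (hypothetical name)
[Match]
Name=eth0

[Network]
DHCP=yes
# use a working IPv6 resolver instead of the broken v4 literals
DNS=2001:4860:4860::6464

[DHCPv6]
# ignore DNS servers assigned via DHCPv6
UseDNS=false

[IPv6AcceptRA]
# ignore DNS servers announced via router advertisements
UseDNS=false
```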
I couldn’t for the life of me figure out where the actual configuration for that comes from. None of the files mentioned in the documentation seem to exist on the VM. I don’t even know whether this is the doing of VDS6, of systemd, or of something I broke when setting up the host (though I didn’t do anything there, as the addresses were already mapped). Currently my best guess is that the invalid DNS resolvers are being assigned via DHCP.
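A couple of commands help narrow down who owns the resolver configuration (a sketch; `resolvectl` is assumed to be present on a systemd-based Debian):

```shell
# If /etc/resolv.conf is a symlink into /run/systemd/resolve/,
# systemd-resolved is managing it.
readlink -f /etc/resolv.conf

# Show what's currently in it
cat /etc/resolv.conf

# On systemd-resolved hosts this lists per-link DNS servers and where
# they came from (DHCP lease, RA, static config):
# resolvectl status
```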
Long story short: when you go for the cheapest option, you have to be willing to spend some of your own time troubleshooting and solving problems.