World Wide IPv6 allocation review

And they probably say RIPE isn’t political. Well, it is: it takes political pressure to get 1/4096th of the IPv6 address space.

The rest of the largest players, such as Deutsche Telekom and France Telecom, seem to be getting /19 blocks.

So we can still allocate a /13 to each of 4,095 operators, or 262,144 of these smaller /19 blocks.

But regardless of the huge size of IPv6, it seems such a waste. Then again, perhaps the policy is to allocate large portions to large operators, which can then themselves serve huge numbers of people.

But with numbers this huge I believe it would be possible to allocate an address for every conceivable object. As I calculated earlier, my own address space is not quite big enough to give each of my apartment’s oxygen molecules its own address, but it could, for example, give an address to every inode in my filesystems.

I too have 65,536 networks of /64, each with 18,446,744,073,709,551,616 addresses, so allocating subnets to filesystems and then addresses to inodes is not even a long-shot idea.
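Those numbers fall straight out of the prefix lengths; a quick sketch in Python (plain integer arithmetic, nothing assumed beyond the /48 allocation):

```python
# Sizes inside a /48 IPv6 allocation, derived from prefix lengths alone.
subnets = 2 ** (64 - 48)             # /64 networks inside a /48
hosts_per_subnet = 2 ** (128 - 64)   # addresses inside each /64

print(subnets)                        # 65536
print(hosts_per_subnet)               # 18446744073709551616
print(subnets * hosts_per_subnet)     # 2**80 addresses in total
```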

Debugging the SixXS latency problem

We can now see that the latency problem is not caused by an IPv4 (6to4) latency problem, but is independent of it.

There are no spikes in my immediate network latency:

graph_image (11)

Nor are there spikes on the way to Sweden:

graph_image (12)

But when you look at IPv6 latencies, they are everywhere:

graph_image (13)

graph_image (14)

graph_image (15)

So what on earth happened at 5:45am?

And, more interestingly, the IPv4 latencies to this SixXS endpoint remain steady during the spike:

graph_image (16)

I got in touch with the chief of DNA Oyj’s IP core networks at Lahti and let him know about this. Hopefully the problem is in their system, because otherwise I have made a fool of myself.

Two days of data

It is certainly some sort of early-morning process:

graph_image (18)

It happens at the same time and looks identical.

New graphs to monitor IPv4 and IPv6 latencies on primary network

Funet is the network of all the large scientific organizations in Finland, and basically the de facto reference point for all latencies within Finland. So I measure my ISP’s immediate network performance against this point.

graph_image (6) is my IPv6 endpoint. It is now quite stable, but I am expecting this to fluctuate heavily. Hopefully it will give me some information about what is going on and when. It is physically located about 150 km to the north, so the traffic must go from Helsinki to Lahti and back, and then out into the world.

graph_image (7)

IPv6 latencies

Already the latencies are not as stable:

graph_image (9)

graph_image (10)

SELinux and named/bind9 slave configuration

If you want to allow BIND to write its zone files (generally needed for dynamic DNS or zone transfers), you must turn on the named_write_master_zones boolean.

setsebool -P named_write_master_zones 1

This is required for a slave to be able to write the zones it fetches from the master.
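For context, a minimal slave zone stanza looks roughly like this (the zone name and master address are placeholders; on CentOS the writable directory is conventionally /var/named/slaves):

```
zone "example.com" IN {
    type slave;
    masters { 192.0.2.1; };
    // must point into a directory named is allowed to write to
    file "slaves/example.com.zone";
};
```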

I am currently rebuilding my rotten DNS setup with a brand new virtualized CentOS 7 and the new VPS in Sweden.

Sadly the new setup won’t include IPv6, since my secondary slave DNS does not support it: my ISP there doesn’t have IPv6 support. Or I might route SixXS there.


It took some time but now my DNS setup is fully IPv6 capable as well.

IPv4 and IPv6 are routed from different providers, because the one supplying IPv4 does not provide IPv6, which must instead be routed through the SixXS network. That tunnel in fact runs on top of the same IPv4, a completely unnecessary complication caused by the provider not working hard enough to give its customers native IPv6.

Also, the zone transfers seem to go over IPv6. Perhaps BIND prefers IPv6 when it is available; that would be my guess anyway.

Also, because this is a simple machine, automatic updates will be just fine.

Odd DNA SixXS latency behavior

SixXS works perfectly most of the time, but occasionally there seem to be weird latency spikes somewhere.

PING 56 data bytes
64 bytes from icmp_seq=1 ttl=57 time=12.7 ms
64 bytes from icmp_seq=2 ttl=57 time=10.0 ms
64 bytes from icmp_seq=3 ttl=57 time=90.2 ms
64 bytes from icmp_seq=4 ttl=57 time=19.0 ms
64 bytes from icmp_seq=5 ttl=57 time=22.5 ms
64 bytes from icmp_seq=6 ttl=57 time=124 ms
64 bytes from icmp_seq=7 ttl=57 time=7.42 ms
64 bytes from icmp_seq=8 ttl=57 time=67.3 ms
64 bytes from icmp_seq=9 ttl=57 time=7.50 ms
64 bytes from icmp_seq=10 ttl=57 time=7.51 ms
64 bytes from icmp_seq=11 ttl=57 time=19.0 ms
64 bytes from icmp_seq=12 ttl=57 time=8.87 ms
64 bytes from icmp_seq=13 ttl=57 time=121 ms
64 bytes from icmp_seq=14 ttl=57 time=68.3 ms
traceroute to (2001:708:10:55::53), 30 hops max, 80 byte packets
 1  2001:14b8:135:1::2 (2001:14b8:135:1::2)  0.455 ms  0.431 ms  0.409 ms
 2 (2001:14b8:100:38d::1)  10.681 ms  10.699 ms  10.685 ms
 3 (2001:14b8:0:3401::6)  33.281 ms  33.307 ms  33.335 ms
 4 (2001:14b8:0:3401::2)  33.423 ms  33.654 ms  34.035 ms
 5 (2001:14b8::2164)  33.594 ms  35.555 ms  35.543 ms
 6 (2001:14b8::74)  36.697 ms  35.623 ms  42.062 ms
 7 (2001:14b8::75)  48.797 ms  57.652 ms (2001:14b8::18)  47.988 ms
 8 (2001:14b8::9)  70.046 ms  61.649 ms  61.645 ms
 9 (2001:7f8:7::1741:1)  81.208 ms  81.217 ms  81.462 ms
10 (2001:708:10:55::53)  81.447 ms  81.623 ms  81.501 ms

And then the next moment:

traceroute to (2001:708:10:55::53), 30 hops max, 80 byte packets
 1  2001:14b8:135:1::2 (2001:14b8:135:1::2)  0.411 ms  0.387 ms  0.370 ms
 2 (2001:14b8:100:38d::1)  7.521 ms  7.530 ms  7.511 ms
 3 (2001:14b8:0:3401::6)  9.348 ms  9.350 ms  9.335 ms
 4 (2001:14b8:0:3401::2)  9.547 ms  9.799 ms  9.794 ms
 5 (2001:14b8::2164)  9.239 ms  9.439 ms  9.436 ms
 6 (2001:14b8::74)  9.689 ms  9.037 ms  9.011 ms
 7 (2001:14b8::75)  10.997 ms  7.724 ms  7.715 ms
 8 (2001:14b8::9)  7.697 ms  9.016 ms  8.989 ms
 9 (2001:7f8:7::1741:1)  8.958 ms  8.940 ms  7.919 ms
10 (2001:708:10:55::53)  8.247 ms  8.233 ms  8.483 ms

So there are some problems with this network, but traffic still moves.

End-point IPv4 latencies are stable:

$ ping
PING ( 56(84) bytes of data.
64 bytes from icmp_seq=1 ttl=56 time=3.89 ms
64 bytes from icmp_seq=2 ttl=56 time=4.06 ms
64 bytes from icmp_seq=3 ttl=56 time=3.91 ms
64 bytes from icmp_seq=4 ttl=56 time=3.86 ms
64 bytes from icmp_seq=5 ttl=56 time=3.96 ms
64 bytes from icmp_seq=6 ttl=56 time=3.95 ms
64 bytes from icmp_seq=7 ttl=56 time=3.87 ms
64 bytes from icmp_seq=8 ttl=56 time=3.95 ms
64 bytes from icmp_seq=9 ttl=56 time=3.99 ms
64 bytes from icmp_seq=10 ttl=56 time=3.92 ms
64 bytes from icmp_seq=11 ttl=56 time=3.84 ms
64 bytes from icmp_seq=12 ttl=56 time=4.46 ms
64 bytes from icmp_seq=13 ttl=56 time=4.09 ms
64 bytes from icmp_seq=14 ttl=56 time=3.92 ms
64 bytes from icmp_seq=15 ttl=56 time=3.88 ms
64 bytes from icmp_seq=16 ttl=56 time=3.93 ms
64 bytes from icmp_seq=17 ttl=56 time=3.91 ms
64 bytes from icmp_seq=18 ttl=56 time=3.92 ms
64 bytes from icmp_seq=19 ttl=56 time=3.89 ms
64 bytes from icmp_seq=20 ttl=56 time=3.93 ms

But once you go over IPv6, it starts to really bounce around:

$ ping6
PING 56 data bytes
64 bytes from icmp_seq=1 ttl=57 time=7.48 ms
64 bytes from icmp_seq=2 ttl=57 time=10.1 ms
64 bytes from icmp_seq=3 ttl=57 time=8.46 ms
64 bytes from icmp_seq=4 ttl=57 time=9.89 ms
64 bytes from icmp_seq=5 ttl=57 time=8.60 ms
64 bytes from icmp_seq=6 ttl=57 time=8.14 ms
64 bytes from icmp_seq=7 ttl=57 time=10.0 ms
64 bytes from icmp_seq=8 ttl=57 time=15.7 ms
64 bytes from icmp_seq=9 ttl=57 time=18.5 ms
64 bytes from icmp_seq=10 ttl=57 time=9.21 ms
64 bytes from icmp_seq=11 ttl=57 time=8.80 ms
64 bytes from icmp_seq=12 ttl=57 time=9.57 ms
64 bytes from icmp_seq=13 ttl=57 time=8.51 ms
64 bytes from icmp_seq=14 ttl=57 time=7.77 ms
64 bytes from icmp_seq=15 ttl=57 time=11.3 ms

Raw TCP performance with FTP over IPv6 gives me 4-6 MiB/s, while over IPv4 I get 11 MiB/s.

Postfix and IPv6

Since I now have a /64 network of IPv6 at my new VPS, I decided to finally make my e-mail IPv6 compatible.

Pretty straightforward. And just because I can, I will give it its own IPv6 address; there are plenty to spare.
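The relevant Postfix settings amount to just a couple of main.cf lines; roughly like this (the address below is a documentation-range placeholder, not my real one):

```
# /etc/postfix/main.cf (relevant lines only)
inet_protocols = ipv4, ipv6
# bind outgoing SMTP to the dedicated IPv6 address
smtp_bind_address6 = 2001:db8:1234::25
```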

And to mention it: I use ClouDNS as a free DNS provider. Great service, with a nice and simple old-school front-end for management.

Seems to work.


CakePHP install with Composer

Quite handy tool.


A couple of little problems with the network, because Composer has IPv6 support (or rather the host has). Since I am constantly changing my network to fit my needs, something is sometimes bound to not work; this time it was IPv6 on my virtual guests, which luckily has no impact, since nothing there really uses IPv6. radvd was still distributing addresses, but the firewall was configured improperly and did not allow the traffic.

But composer itself worked just fine.

And yes, we have a working CakePHP now:


Sort of. But for developers this is a working system.

A nice feature is the built-in web server, so there is no need to hassle with nginx, Apache, or lighttpd to get off the ground. Very convenient for hobby developers or back-end developers with little Linux experience, and it certainly cuts the time it takes to get going.

OK, I did literally almost nothing: I created a few database tables, ran some commands related to baking cakes, and now I am left with this:



Which is sort of scary, because it used to take a few hours to get something like this done, say, five, let alone ten years ago. And I got this while I was doing other things, not really paying any attention.

So far CakePHP has positively surprised me, and I have done practically nothing yet. The one problem above was from a missing mysql extension; once I installed it and restarted the built-in server, it showed a beautiful Welcome page.

And I am certain that the template can be modified with little effort. It seems the code is there, and there’s a lot of it, so it really speeds up development. Also, judging from the __ prefix, it seems to have translations built in, which is no surprise.

It also provides a nice debug kit:



It can be especially good for identifying bottlenecks. If the database is slow to respond, that should show up as long bars at those locations. I wonder if it shows function calls; it might. If you program and design the internal structures well, you can then immediately see which function has sub-par performance.

Size of /48 network of IPv6 addresses

A quick discussion sparked, because IPv4 addresses are running out like today, and I decided to do a little calculation of how large my /48 pool of networks actually is.

A /48 is 65 536 networks, each consisting of 18 446 744 073 709 551 616 addresses. In total that is about 1.2 × 10^24 addresses.

My apartment is 42 m², and the oxygen content of atmospheric air at sea level is approximately 21%. So with a 2.7 m room height the oxygen volume is (42 m² × 2.7 m) × 21% ≈ 23 m³.

23 m³ is 23 000 liters, and a liter of oxygen at 0 °C weighs 1.429 grams, so 23 m³ weighs 32.87 kg. Because a mole of oxygen weighs 16 grams, that gives 2 054 moles of oxygen.

2 054 moles × 6.022 × 10^23 atoms per mole is roughly 1 236 918 800 000 000 000 000 000 000 atoms.

Sadly I have only 1 208 925 819 614 629 174 706 176 IPv6 addresses, so 1 023 atoms must share one address. :(

Edit: the calculation is not quite right, because oxygen of course does not appear as free oxygen atoms but as molecules of two bonded atoms, in the form of O2.
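As a sanity check, here is the same back-of-the-envelope calculation in a few lines of Python, using the inputs quoted above (42 m² floor, 2.7 m ceiling, 21% O2, density 1.429 g/L at 0 °C) and counting O2 molecules as two atoms each:

```python
# Re-running the oxygen-per-address estimate, with O2 counted as molecules.
AVOGADRO = 6.022e23

o2_litres = 42 * 2.7 * 0.21 * 1000      # oxygen volume in the apartment, in liters
o2_grams = o2_litres * 1.429            # density of O2 at 0 degC is 1.429 g/L
o2_molecules = o2_grams / 32.0 * AVOGADRO  # molar mass of O2 is 32 g/mol
o_atoms = o2_molecules * 2              # two atoms per molecule

addresses = 2 ** (128 - 48)             # host addresses in a /48
print(f"oxygen atoms per address: {o_atoms / addresses:.0f}")
```

With unrounded intermediate values the ratio lands a bit above the post’s figure of 1 023, but the order of magnitude holds: roughly a thousand atoms per address.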

The other server

I have three servers, one of which is running 24 hours a day. That’s my main server, which mainly does routing and all the basic infrastructure. It is used to test all sorts of things, mainly network related, lately of course IPv6 and SixXS.

And that SixXS IPv6 address space they kindly gave me is, by the way, so big that it can accommodate 65,536 subnets, each roughly 18 million trillion addresses wide.

But the other server I was going to talk about is my HP DL140 G3, which is currently being equipped. I intend to replace the current low-end dual-core CPU with two high-end quad-core ones.

In addition, it will first get 16 GiB of memory, and later, if needed, another 16 GiB.

The disks are going to be enterprise level. It has two bays, so one is going to house a 3 TB WD Re, and the other either a generation 5 or generation 6 WD VelociRaptor 10K.

The Re is the expensive one, but it has over double the MTBF of any consumer-grade drive. The VelociRaptor of course is legendary and can compete neck and neck with an SSD with no trouble at all. That one is going to be either 300, 500, or maybe 600 GB, depending.

That is quite an expensive investment, at around $500 perhaps, but combined with 8 cores and the memory, I should be able to run a very generous number of virtual machines of any sort.

And the enterprise level of these disks should guarantee they last a long time.

The rationale for the one 3 TB drive is to use it for backups and long-term data, whereas the faster (access time) VelociRaptor will house the working data, such as virtual machine images.

And the CPU count is extremely important to me, since it is hard (practically impossible) to find SATA drives with hardware encryption. Sadly the CPUs I have do not have the x86 instruction set extension for the Advanced Encryption Standard.
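On Linux you can check for that extension by looking for the aes flag in /proc/cpuinfo; a minimal sketch (the path is the standard kernel interface, the helper itself is just illustration):

```python
import os

# Look for the AES-NI ("aes") flag among the CPU feature flags (Linux only).
def has_aes_ni(cpuinfo_path="/proc/cpuinfo"):
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                # the "flags" line lists the CPU's instruction set extensions
                return "aes" in line.split(":", 1)[1].split()
    return False

if os.path.exists("/proc/cpuinfo"):
    print("AES-NI available:", has_aes_ni())
```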

But given that the CPUs will be relatively modern, high-end Xeons, I expect enough throughput that the disks, not the processing power, will be the bottleneck.

And if one core must be sacrificed for the encryption, then so be it; there will still be 7 left for actual processing.