This page will rarely be changed, because the ship has sailed.
$Id: ipv6_harmful_eng.html,v 1.9 2012/01/29 19:39:04 netch Exp $
The "killer benefit" of IPv6 deployment is larger address space. Instead of 32-bit address (which gives approximately 3.7 billion world-routable unicast addresses), 2^125 (~4.2E37) world-routable unicast addresses are available due to general allocation scheme. This gives uncountable number of addresses to a person, and ~3000 addresses to an atom of Earth surface. Is it reasonable? Let's discuss.
History shows there was no real discussion of the correct address space size. In early 1994, two parallel projects named "SIPP-64" and "SIPP-128" were under discussion. After some event, SIPP-64 quietly disappeared without any clear notice. The mysterious event seems to be related to the following letter (posted in July 1994):
Message-ID: <9407072035.aa15952@sundance.itd.nrl.navy.mil>

Yes, you said the magic words, "FIXED LENGTH." I have espoused the virtues of fixed length addresses before. My biggest beef with variable length addresses is the issue of maintaining additional state or taking additional time to parse an address whose length I don't know at the start. [...]

< 8 byte fixed length address.

I have no problem with this, but this alienates many people. Also, there seems to be a trend toward low percentage of allocated address space used. (I offer NRL itself as an example, though not the worst, of such waste.) If we could pull it off with 8 bytes, I'd be all behind it.

< 16 byte fixed length address.

This eliminates much alienation. Furthermore, wasted address space is not a significant problem here. [...]

My position can be best summarized as: The SMALLEST possible fixed-length address. And that length better have a good justification for why it cannot be smaller.
While the arguments for a fixed-length address sound good (but see below for source routing in IPv6), the exact address length was never discussed (or the discussion was not recorded in the mailing list, which amounts to the same thing).
And how was the SIPP-128 project created? Returning to the mailing list archives: earlier versions constructed the address from two parts, a globally unique one and a locally unique one, each 64 bits and each usable as a complete address in its own space. Does one network node need two complete and possibly conflicting addresses?
1994 was a "good old time" with a quite small Internet and a naive Internet community. Most problems of the current Internet were not known even in embryonic form. Could one imagine, back then, that in 2005 spam would make up more than 90% of e-mail traffic, and that one could not leave an unpatched Windows XP on a direct Internet connection for more than 5 minutes without a huge risk of infection by viruses and trojans?
The biggest problem of the Internet is its size. The situation is new and unfamiliar to mankind: each Internet host is a few seconds (and most less than a second) away from any other host. This gives big advantages, but also big disadvantages. Leaving particular software aside, the main requirement for any system to survive on the Internet is the ability to achieve passive security against most classes of attacks, in a space where one has no resources even to track the attacking agents.
And now less than 1/6 of the available space (which is approximately 3.7 billion hosts) is used. Let's assume that only 1/2 of any big address space is usable due to allocation restrictions (power-of-two boundaries, reservations for the future); that is, less than 1/3 of the really available space is used. And what will happen when all available IPv4 address space is used up? One can see that life on the Internet will become much harder than now. I already receive many messages from people who are closing their e-mail boxes and returning to "snail mail" or other forms of messaging outside the Internet.
So, 2^32 Internet addresses seems to be a line which can't be crossed as easily as was assumed in the early, naive 1994.
Well, it is usually not the case that all hosts are directly connected to the Internet; networks have their own structure, with a few gateways to the Internet. And there can be other reasons to want a large address length. Let's consider them.
A link-local address is formed from a unique interface identifier (in the most widespread case, the MAC address) in some interface-specific way. The MAC address is supposed to be unique; nevertheless, some conflict detection is obligatory. The idea of carrying all 48 bits of the MAC address in the link-local IP address is good; but is it really necessary to form an EUI-64? A 16-bit prefix is enough to separate such addresses clearly and to avoid wasting address space.
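For reference, here is a minimal sketch in Python of the EUI-64 construction being questioned: the 48-bit MAC is inflated to 64 bits by flipping its universal/local bit and wedging 0xFFFE into the middle, then prefixed with fe80::/64 (the example MAC is arbitrary):

    import ipaddress

    def mac_to_link_local(mac: str) -> str:
        """RFC 4291 style: 48-bit MAC -> EUI-64 -> fe80::/64 link-local address."""
        octets = bytearray(int(x, 16) for x in mac.split(":"))
        octets[0] ^= 0x02                          # flip the universal/local bit
        eui64 = octets[:3] + b"\xff\xfe" + octets[3:]  # wedge FF:FE between OUI and NIC parts
        return str(ipaddress.IPv6Address(bytes.fromhex("fe80") + b"\x00" * 6 + bytes(eui64)))

    print(mac_to_link_local("00:1a:2b:3c:4d:5e"))  # fe80::21a:2bff:fe3c:4d5e

Sixteen of these bytes go on the wire for what is, informationally, a 48-bit identifier.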
Are mobile node identifiers real IP addresses? Clearly not. Only the uselessly big address space is a reason to place them in the same address space. OTOH, they could form another subspace, analogous to link-local addresses; 48 bits are definitely enough for them (and please don't cite the 640K precedent ;))
Is a MAC-based address applicable in all cases? Definitely not; it is foolish to use such an address for a public service visible from other networks. Marketing FUD was spread claiming that IPv6, as opposed to IPv4, doesn't use ARP, and that this is Very Good; but in reality a full ARP analog exists. It has merely been masked as a subpart of the Neighbor Discovery Protocol and has no separate L2 protocol identifier. In the same way, the ARP table remains, in masked form. The only benefit is the avoidance of DHCP in a typical local network: host addresses are stable, based on MAC addresses, and other parameters are obtained using Router Solicitation (see below). Well, that is a good benefit; but, again, it could be implemented in a 64-bit address space with the same result.
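To see how thin the mask is, here is a sketch (Python, standard library only) of the RFC 4291 solicited-node multicast group to which a Neighbor Solicitation is sent; it plays exactly the role of ARP's broadcast, only scoped a little tighter:

    import ipaddress

    def solicited_node(addr: str) -> str:
        """ff02::1:ff00:0/104 plus the low 24 bits of the target address."""
        low24 = int(ipaddress.IPv6Address(addr)) & 0xFFFFFF
        base = int(ipaddress.IPv6Address("ff02::1:ff00:0"))
        return str(ipaddress.IPv6Address(base | low24))

    # the solicitation itself travels as ICMPv6 type 135, instead of being
    # a separate L2 protocol the way ARP (EtherType 0x0806) is
    print(solicited_node("fe80::21a:2bff:fe3c:4d5e"))  # ff02::1:ff3c:4d5e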
128-bit addresses are nearly impossible to learn "by heart", i.e. to recall from memory without a written-down record, which is something system administrators need. It is no problem to learn 32-bit addresses, whether in decimal notation (the current IPv4 tradition) or in hexadecimal; it is almost no problem to learn 64-bit addresses in hexadecimal notation. 128-bit addresses are too big to fit in human memory as one word, so they will be split into parts and the parts learned separately.
Of course, there are people who can learn huge volumes of arbitrary, unstructured information by heart. But what kind of people are they? What kind of people can memorize long strings of digits and calculate with big numbers? They are very rare, and most of them are insane in some way. Do we want an Internet governed by a bunch of insane admins?
As shown above, the main argument which pushed the SIPP committee toward quiet adoption of the 128-bit address space was fixed address length. Of course, a fixed address length is better suited for fast routing; it is much more complicated even to parse a one-octet address-length field than to compare a fixed-size field at a fixed offset in the packet. But variable-sized addresses were reinvented anyway.
This reinvention of variable-sized addresses is well masked under the technique named "source routing" and the Routing header of the IPv6 packet (according to RFC 2460). But, compared to an explicit implementation, this one has a few large disadvantages.
First, all addresses in the routing address list are full IPv6 addresses. This gives a chance to use another host not as a gateway between internal and external networks, but as a proxy for arbitrary traffic. Of course, such a situation means incorrect administration; but there are far too many loosely administered hosts in the modern Internet, and the current policy of adding more and more home stations and home devices (fridges, microwave ovens...) to the Internet produces a huge flow of unfixable devices and brain-dead self-made admins. At the very least, distributed DoS is quite easy with such uncontrolled hosts.
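To make the cost concrete, here is a sketch in Python of how the RFC 2460 Type 0 Routing header packs its hop list; the field layout is from the RFC, the example addresses are documentation prefixes. (Notably, this header type was later deprecated by RFC 5095 for exactly the kind of abuse described above.)

    import struct, ipaddress

    def build_rh0(hops: list[str], next_header: int = 59) -> bytes:
        """RFC 2460 Type 0 Routing header: next header, length in 8-octet units
        (excluding the first 8 octets), routing type 0, segments left,
        4 reserved bytes, then the full 128-bit address of every hop."""
        hdr = struct.pack("!BBBB4x", next_header, 2 * len(hops), 0, len(hops))
        return hdr + b"".join(ipaddress.IPv6Address(h).packed for h in hops)

    rh0 = build_rh0(["2001:db8::1", "2001:db8::2"])
    print(len(rh0))  # 40 bytes of header for just two intermediate hops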
Yes, I suppose a 64-bit address space is strictly enough for now. The growth of the Internet will stop rather soon. Even if every coffee pot, fridge and car wheel got its own address, 200 billion (2.0E11) is a realistic ceiling on actual address usage. For a 64-bit address space, that means ~1/46,000,000 even of the usable half of the space (keeping the earlier assumption that only half of any big space is allocatable).
So, as long as mankind remains only on Earth, not even a millionth part of a 64-bit address space would ever be used. One can calculate the really used part of a 128-bit address space; the number can be written in floating-point notation but cannot be imagined in any way. Is IPv6 designed for a mankind that has settled the whole Galaxy?
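The fractions above are easy to check; in this sketch the 2.0E11 ceiling and the "half usable" assumption are the essay's own:

    ceiling = 2 * 10 ** 11              # assumed ceiling of real address usage
    usable64 = 2 ** 63                  # the usable half of a 64-bit space
    print(usable64 // ceiling)          # 46116860 -> the ~1/46,000,000 above
    print(f"{ceiling / 2 ** 125:.1e}")  # 4.7e-27 of the 128-bit unicast space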
A few changes took place, and some of them are cosmetic. TTL is renamed to Hop Limit. The underlying-protocol field is renamed to Next Header, reflecting the possible insertion of L3 (e.g. Routing) or IPsec information. Keeping the hop limit outside any checksum allows much faster forwarding. But if a checksum no longer has to be recalculated at each router, why leave checksumming so weak? Broken packets with a correct IP header checksum are all too common; CRC16 (not to speak of stronger sums) is easy to implement in current hardware.
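The weakness is easy to demonstrate. Below is a minimal sketch (Python; my own illustration) of the RFC 1071 ones'-complement checksum used by the IPv4 header and by TCP/UDP; being a pure sum, it cannot even detect reordered 16-bit words:

    def internet_checksum(data: bytes) -> int:
        """RFC 1071 ones'-complement sum over 16-bit words."""
        if len(data) % 2:
            data += b"\x00"
        total = sum(int.from_bytes(data[i:i + 2], "big") for i in range(0, len(data), 2))
        while total >> 16:                  # fold carries back in
            total = (total & 0xFFFF) + (total >> 16)
        return ~total & 0xFFFF

    pkt1 = b"\x12\x34\x56\x78"
    pkt2 = b"\x56\x78\x12\x34"  # same 16-bit words, swapped
    print(internet_checksum(pkt1) == internet_checksum(pkt2))  # True: the damage is invisible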
Again, we see that a protocol whose fundamentals were laid down in 1994 does not look even basically well-designed from the vantage point of 2005.
Router Solicitation is the IPv6 mechanism for achieving zero-administration configuration of a network in simple cases. A good idea. Well, it is good for hosts in networks with a single external router. On the other hand, consider a node with two network interfaces, one to the external network and one to the internal. Should this node act as a router for hosts on the external interface? Current implementations (at least KAME) provide no mechanism to restrict router advertisements to the set of interfaces where such a node really is the router for other hosts.
This problem is more acute with NAT (see below).
It has been said that IPv6 eliminates the need for NAT (network address translation, in all its forms and variants including PAT, PNAT, masquerading and other vendor-specific treatments). But advocates of this position overlook the fact that NAT is used not only to extend the address space, but (even more) to hide the internal address space and internal network details. External nodes should not know the details of the internal structure: the number of active nodes in the internal network, the grouping of activities, etc.; NAT masquerades all this activity as coming from one (or a few) visible nodes, and this is a good thing in the modern Internet.
On the contrary, the idea of eliminating NAT pushes us either to an open structure of the internal network (1), or to application proxies without forwarding (2). Both variants can be much better than NAT in some circumstances; but as things stand, the admin has more freedom to choose the appropriate variant. Elimination of NAT reduces the set of choices and makes gatewaying much harder. So, I think NAT will remain in use even after full IPv6 deployment.
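As a toy illustration of the hiding property (a sketch only, not a real translator; all addresses are from the documentation ranges):

    PUBLIC = "192.0.2.1"                     # the gateway's one visible address
    table: dict[tuple[str, int], int] = {}   # (internal ip, port) -> external port
    next_port = 40000

    def translate(src: str, port: int) -> tuple[str, int]:
        """Port-address translation: whoever sends, the outside sees only PUBLIC."""
        global next_port
        if (src, port) not in table:
            table[(src, port)] = next_port
            next_port += 1
        return PUBLIC, table[(src, port)]

    print(translate("10.0.0.5", 51000))  # ('192.0.2.1', 40000)
    print(translate("10.0.0.9", 51000))  # ('192.0.2.1', 40001): a second host, same face

From outside, nothing reveals how many hosts sit behind the gateway or how their activity is grouped.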
The MTU mechanism is one of the darkest corners of current IP. There is hardly a sysadmin who has never heard of MTU problems. It was natural to hope that a full redesign of IP (which is what IPv6 claims to be) would help with this problem. But what do we see? The problem becomes even harder:
(Yes, I know that there is a PMTUD specification specifically for IPv6, described in RFC 1981. But its mechanism is conceptually so close to RFC 1191 that there is no real difference between them.)
For now, IPv4 has an inefficient but working method to transport packets of any 16-bit size: don't set DF on them. This is commonly used for UDP and can be used in some IP stacks for TCP (by disabling PMTUD on the host or on the connection). It can be used for diagnosing MTU problems and for dealing with almost any environment in exceptional circumstances. IPv6 removes this option. Is that reasonable? Of course, fragmenting is expensive (and much more expensive in IPv6 than in IPv4, due to the more complicated packet structure). But there are many other expensive activities, e.g. replying with ICMP; Cisco IOS replies with ICMP very lazily. If routers were allowed (but not obliged) to fragment, this could help with MTU problems.
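The escape hatch that IPv6 removes looks like this on an IPv4 host. A Linux-only sketch: the constants come from linux/in.h and are supplied numerically in case the local Python build does not export them, and 198.51.100.7 is just a documentation placeholder:

    import socket

    IP_MTU_DISCOVER = getattr(socket, "IP_MTU_DISCOVER", 10)   # Linux value
    IP_PMTUDISC_DONT = getattr(socket, "IP_PMTUDISC_DONT", 0)  # never set DF; let routers fragment

    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.IPPROTO_IP, IP_MTU_DISCOVER, IP_PMTUDISC_DONT)
    s.sendto(b"x" * 2000, ("198.51.100.7", 9))  # larger than a typical MTU: goes out fragmented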
At least the following problems have accumulated in TCP since its creation:
Of course, the TCP problems aren't directly related to IPv6, but the invention of a new network-layer protocol was a good occasion and a good reason to redesign the main transport-layer protocol and the APIs involved in it. This moment is irrevocably lost.
You can mail me your feedback.