Thoughts on IPv6 /64 scanning and NDP cache size

December 24, 2013

Today I was talking with some friends about the possibility of mounting a DoS attack against an IPv6 router/switch from within the same /64 subnet, simply by sending packets that force the router to fill its neighbour cache with IPv6 NDP entries. The questions I then started thinking about: how many packets can I send per second, e.g. over a 1Gbit link? How many entries will the neighbour cache need to hold if the timeout is e.g. set to 120 sec? How long would it take to scan the whole /64? So I sat down and looked at the questions.

How many packets can I send in one direction over a 1Gbit Ethernet link?

The number of packets that can be sent over a link depends on their size. The smallest ones used for such calculations are 64-byte packets in the IP world, which is also the minimum Ethernet frame size. On the wire, the 7-byte preamble, the 1-byte start-of-frame delimiter and the 12-byte inter-frame gap come on top, which adds up to 84 octets per frame. Details can be found here. This leads to the following formula:

1,000,000,000 bit/s / 8 bit/octet / 84 octets/frame ≈ 1,488,095 frames/s

As only one packet fits in a frame, we can send 1,488,095 packets per second (often called pps), which is also often called line speed or wire speed. The calculation is true for pure Ethernet, but it changes if you use VLAN tags, QinQ or MPLS … in these cases take a look at this article.
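If you want to double-check that number, a quick sketch with bc does the job (plain integer arithmetic, so the result is truncated):

    $ # 1 Gbit/s = 1,000,000,000 bit/s, 8 bit per octet, 84 octets per frame
    $ echo '1000000000 / 8 / 84' | bc
    1488095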

How many entries will the neighbour cache need to hold if the timeout is e.g. 120 sec?

So now we know how many packets we can send at most. If we ignore the additional bytes that the NDP messages themselves would need, it is easy to derive the limit for the neighbour cache of our router:

1,488,095 packets/s × 120 s timeout = 178,571,400 entries ≈ 178 million entries

Let's say that this is only a RAM problem and everything else would work. Each entry contains at least the IP address and the MAC address (one possible optimization would be to store only the host part of the IP address). An IPv6 address has 128 bit = 16 byte and a MAC address has 48 bit = 6 byte, which leads to a total of 22 byte per entry. A router would need 3.6 GByte of RAM to store this table … not impossible, but not common either. 😉
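The same back-of-the-envelope calculation in bc; keep in mind that the 22 byte are only the raw address data from above, a real neighbour cache entry will need more:

    $ # entries: packets per second times 120 s timeout
    $ echo '1488095 * 120' | bc
    178571400
    $ # RAM in bytes: entries times 22 byte per entry (~3.6 GByte)
    $ echo '1488095 * 120 * 22' | bc
    3928570800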

How long would it take to scan the whole /64?

And as a bonus question, let's look at how long it would take to scan that many IP addresses. First we need the number of IP addresses a /64 can hold:

2^64 = 18,446,744,073,709,551,616 ≈ 1.844674407 × 10¹⁹ IP addresses

We know that we can scan 1,488,095 IP addresses per second, which leads to

1.844674407 × 10¹⁹ IP addresses / 1,488,095 packets/s / 60 / 60 / 24 / 365 ≈ 393,081 years

OK, not practical. But wait … with EUI-64 the host part of the address is derived from the 48-bit MAC address, so we only need to scan 2^48 candidates … this makes only 2.814749767 × 10¹⁴ IP addresses:

2.814749767 × 10¹⁴ IP addresses / 1,488,095 packets/s / 60 / 60 / 24 / 365 ≈ 6 years

Much smaller but still too long for my spare time. 😉
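Both scan times can be reproduced with bc as well; for the second one I set scale=2 to see that it really is just under 6 years:

    $ # full /64: 2^64 addresses at 1,488,095 probes per second
    $ echo '2^64 / 1488095 / 60 / 60 / 24 / 365' | bc
    393081
    $ # EUI-64 host parts only: 2^48 addresses, 31536000 seconds per year
    $ echo 'scale=2; 2^48 / 1488095 / 31536000' | bc
    5.99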


Some thoughts on NFS

December 23, 2013

In the last weeks I was working (from time to time 😉 ) on a new setup of my NAS at home. During this I learned some stuff I didn't know about NFS, which I want to share here. I assume that you got the basic NFS stuff working or know how it works and just want some additional tips and tricks.

  1. How to query an NFS server for the exported directories and their settings? Easy, use showmount -e <servername>. Here is an example:
    $ showmount -e 10.x.x.x
    Export list for 10.x.x.x:
    /data/home                   10.x.x.0/255.255.255.0
  2. Use fsid – why?
    1. NFS needs to be able to identify each file system that it exports. Normally it will use a UUID for the file system (if the file system has such a thing) or the device number of the device holding the file system (if the file system is stored on a device). If you export a different file system under the same path (e.g. a replacement HDD), the fsid changes and your clients have to remount (as they have a stale NFS mount). Read more on this topic here.
    2. Some file systems don't have a UUID, e.g. encfs does not … so use a separate fsid for each export!
  3. You can export multiple file systems with one mount request from the client if you use nohide. Normally, if a server exports two file systems, one of which is mounted on the other, the client has to mount both file systems explicitly to get access to them. If it just mounts the parent, it will see an empty directory at the place where the other file system is mounted. That file system is “hidden” – if you don't want that, use nohide, like in this example:
    /data/media     10.x.x.0/255.255.255.0(fsid=1,rw,async,no_subtree_check)
    /data/media/movies     10.x.x.0/255.255.255.0(fsid=2,rw,async,no_subtree_check,nohide)
  4. If you changed /etc/exports you don't need to restart your NFS daemon. If the init script provides a reload, that's good – if not, use exportfs -rav (see the example after this list).
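To make the last point concrete, here is roughly what a reload looks like with the exports from the nohide example above (the exact output of exportfs varies a bit between nfs-utils versions):

    $ # re-read /etc/exports and sync the kernel export table, verbose
    $ exportfs -rav
    exporting 10.x.x.0/255.255.255.0:/data/media/movies
    exporting 10.x.x.0/255.255.255.0:/data/media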

So far these are my newly learned tips and tricks … I think there was one more, but I can't remember it now. I will add it later if I remember.

