How to filter rogue DHCP servers on Ubiquiti Networks UniFi access points

June 25, 2015

This short post shows how to filter rogue DHCP servers that are connected to the network via WiFi. The UniFi management software allows you to block traffic between two clients connected to the same access point, a feature often called “client isolation”. But for seamless handover to another access point, all access points need to be in the same layer 2 network, so a rogue DHCP server can still serve clients connected to another access point. The setup below filters that traffic.

For this you need to put the following lines into a file called config.properties (most likely you need to create the file first).

config.system_cfg.1=ebtables.1.cmd=-A FORWARD -i ath* --protocol ipv4 --ip-protocol udp --ip-destination-port 68 -j DROP
config.system_cfg.2=ebtables.2.cmd=-A FORWARD -i ath* --protocol ipv4 --ip-protocol udp --ip-source-port 67 -j DROP
config.system_cfg.3=ebtables.3.cmd=-A FORWARD -i eth0 --protocol ipv4 --ip-protocol udp --ip-destination-port 67 -j DROP

The location of the file depends on the version of your UniFi management software.

  • Version 2: /opt/UniFi/data/config.properties
  • Version 3+: /opt/UniFi/data/sites/the_site/config.properties – to get the site id take a look at this article.
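If you want to try a rule by hand first, you can add the same ebtables rule directly on a single access point via SSH – the command is taken verbatim from the config lines above. This is only a temporary test; anything added manually is lost on the next re-provision or reboot.

# temporary test on one AP, via SSH
ebtables -A FORWARD -i ath* --protocol ipv4 --ip-protocol udp --ip-destination-port 68 -j DROP
ebtables -L FORWARD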

After that change you need to trigger a re-provision on the affected access points. You can do this for the entire site by enabling and disabling the guest portal, or on a per-access-point basis, for example by changing the TX power one access point at a time.

To verify that the configuration got deployed, log into the access point via SSH and check the ebtables rules – the output should look like this:

BZ.vx.x.x# ebtables -L
Bridge table: filter

Bridge chain: INPUT, entries: 0, policy: ACCEPT

Bridge chain: FORWARD, entries: 3, policy: ACCEPT
-p IPv4 -i ath* --ip-proto udp --ip-dport 68 -j DROP
-p IPv4 -i ath* --ip-proto udp --ip-sport 67 -j DROP
-p IPv4 -i eth0 --ip-proto udp --ip-dport 67 -j DROP

Bridge chain: OUTPUT, entries: 0, policy: ACCEPT

Securing your client network 2: Separate by device classes

June 16, 2015

The second article in the securing your client network series (after Enforce DHCP usage) is about separating different client device classes in the network. Enterprises typically put different departments into separate VLANs. If the VLANs are routed in the same VRF and no ACLs separate them, the security gained is negligible. If you are writing ACLs for this, you either have too much time on your hands or the rules are not tight. And the setup only works well if everything sits in one central office building and your network is not distributed over a city or even a country. So, after telling you what is not a good idea – what setup do I recommend for bigger networks (> 500 client switch ports; it works great for > 10,000 ports and more)?

Separate not by department, separate by device class

Yes, that’s the basic idea behind it. Why is that better?

  • less work
    Employees and departments move around. You need to keep your configuration up to date, and if part of a department moves to another location you either have to extend the layer 2 network or think about something else.
  • simpler and more secure firewall rules
    If the VoIP phones, PCs and printers of a department are in the same layer 2 network, you either need to keep track of the individual devices in the firewall rules or allow a printer the same access as a PC or a VoIP phone. If you put your printers into a separate network, the firewall rules for them are easy: every device in that network is a printer. The rules can be much stricter than in the PC network – a printer needs to talk to the print server (and DNS, DHCP, NTP) but nothing else, while a PC needs much more.
  • network authentication tailored to the device class
    MAC authentication works for any device, but 802.1x only works if the device supports it. Switching 802.1x on for all devices at the same time won’t work, and if even one device is allowed into a network area with MAC authentication only, it does not help that all the others use 802.1x – an attacker just fakes that MAC address. With a separation by device classes you can implement 802.1x for some networks and not for others. For example, 802.1x for Windows PCs with AD integration is not that complicated, so 802.1x could be required for the PC network, while MAC authentication is OK for the printer network. This is especially acceptable if the firewall rules in the printer network are much stricter – even if someone gets access to that network, he is not able to connect to the Exchange, database or file servers; only the print server is allowed to connect to the printers, and not the other way round.
  • separate systems with different patch intervals
    Most likely your Windows clients get updates every month, but when did your company last update the firmware of its printers? Separate them and an attacker can’t jump between systems that easily any more.
  • block client-to-client communication
    If a network area is only used for device classes that don’t need to (or shouldn’t) communicate directly with each other, you can simply block that communication with ACLs. The ACLs are the same for all layer 2 client access switches and are maintenance free. A classic example is the printer network: why should one printer talk to another printer – only the print server needs to be able to reach the printers. So if one printer gets pwned, it does not affect the other printers. The same is true for building automation networks (cameras, access control systems, attendance clocks), and maybe your PCs don’t need to talk to each other either – VoIP most likely does 😉

I hope I convinced you that it’s a good idea – but how is it technically done?

Dynamic VLAN assignment

I recommend using dynamic VLAN assignment via MAC or 802.1x authentication (via a RADIUS server). Let’s assume you have the following setup:

  • Edge: layer 2 edge switch to which the clients are connected
  • Distribution: Layer 3 switch which aggregates multiple Layer 2 edge switches in the same building
  • Core: aggregates the distribution switches in the data center
  • Firewall: firewall between the DMZs and between the different client network areas

[Diagram: dynamic VLAN assignment – edge, distribution, core and firewall]

The names of the VLANs on every edge switch are the same, just the VLAN IDs are different. This allows the RADIUS server to return the name of the VLAN the switch should assign to a port or MAC. As the name is the same for all switches, the RADIUS server does not need to know the VLAN IDs; it just has a table that tells it which MAC or common name (in the case of 802.1x EAP-TLS) goes into which VLAN. All your switches are configured exactly the same, only the management IP address and the VLAN IDs differ … that makes deploying and maintaining them really easy.
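As a concrete illustration, this is roughly what a per-device entry could look like on a FreeRADIUS server doing MAC authentication (a minimal sketch; the MAC address and VLAN name are made up, and whether the switch sends the MAC as user name/password depends on the switch configuration). The entry only references the VLAN by its name via the standard tunnel attributes, never by an ID:

# FreeRADIUS users file – hypothetical printer entry (MAC used as user name)
001122aabbcc    Cleartext-Password := "001122aabbcc"
                Tunnel-Type = VLAN,
                Tunnel-Medium-Type = IEEE-802,
                Tunnel-Private-Group-Id = "vlanPrinter"

The edge switch then maps the returned name “vlanPrinter” to whatever VLAN ID it uses locally.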

For getting the traffic from the edge to the data center I recommend using VRFs (Virtual Routing and Forwarding) and OSPF. Just put the PC VLANs into one VRF and the printer VLANs (vlanPrinter) into another VRF. The link from the core to the firewall is also tagged. The firewall is now the only way to get from the PC network to the printer network.

I hope that example makes the setup clear – if not, just write a comment.

Securing your client network 1: Enforce DHCP usage

June 14, 2015

In my last blog post I talked about going the full layer 3 way and not building complex layer 2 setups spanning your whole network. As many use security as the argument for building their networks that way, I thought I’d write down how I secure client networks. With client networks I mean the part of the network that client systems like PCs, phones, printers, … are connected to. Some of the articles and setups can also be used for data center networks, but that’s another story … 😉

All the setups I describe in this series I have implemented in production networks over the years, so this is not stuff that only works in theory – it works in real life and solves real-world problems. So let’s start with something easy that has real benefits, and not only for security: enforcing DHCP usage by all client systems.

Motivation

Sure, everybody knows what DHCP is used for, but let’s talk a little bit about the benefits besides not needing to configure each client manually.

  • If clients get their IP address via DHCP, it’s easier to move client systems to other subnets, so the need to extend a subnet over multiple switches decreases.
    Result: Helps you towards a more routed and therefore simpler and more stable network. Clients can move throughout your network and it just works.
  • It is also easier to change a client subnet if an upgrade or change of the network architecture requires it.
    Result: Makes it much more flexible to change your network.
  • If you enforce the use of DHCP you also get a log file showing which client had which IP address at a given time, and also to which switch port the client was connected. If there are static IP addresses in your network that you don’t control, your log file is incomplete.
    Result: Audit logs in case you need to do a forensic investigation into how and by which systems an attack was carried out. Most systems log the IP address, and you need to map that to a specific system/location.
  • Also, if you enforce the usage of DHCP, you can use the DHCP requests/replies for protection against (or at least mitigation of) ARP spoofing in your network.
    Result: An attacker cannot sniff the traffic of another client system in the same subnet.
  • If enforced, nobody can statically configure an IP address that is also in use dynamically.
    Result: A quieter life for you. 😉

Implementation

To enforce DHCP usage we need to make sure that not using DHCP simply does not work. How can we do that? Simple – disable ARP learning on the layer 3 switch that is the default gateway of the client subnet. ARP (Address Resolution Protocol) is used to resolve IP addresses to MAC addresses, so if the default gateway needs to send a packet to a client system and does not have its MAC address in the ARP table, it is not able to deliver the packet. Of course the setup still needs to work for systems that do use DHCP. How is this done? Also simple: the default gateway is most likely already configured as DHCP relay for the central DHCP server, so it sees every request and reply. The DHCP reply contains the IP address assigned by the server and the MAC address of the client. The layer 3 switch just needs to write that into its ARP table. From then on the IP address always resolves to that MAC address, until a new DHCP reply provides a new MAC address for the given IP address.

For Extreme Networks switches (XOS) it is as simple as typing these lines per client VLAN/subnet:

enable ip-security dhcp-snooping vlan <vlanClient> port <ClientPorts> violation-action drop-packet snmp-trap
configure trusted-ports <UpLinkPorts> trust-for dhcp-server  (only once needed)
enable ip-security arp learning learn-from-dhcp vlan <vlanClient> ports <ClientPorts>
enable ip-security arp gratuitous-protection vlan Default
disable ip-security arp learning learn-from-arp vlan <vlanClient> ports <ClientPorts>

If the clients are connected directly to the layer 3 switch (default gateway for the client subnet) I recommend changing the first command to

enable ip-security dhcp-snooping vlan <vlanClient> port <ClientPorts> violation-action drop-packet block-port permanently snmp-trap

That way, the guy who started a DHCP server in your network has to call you before his port works again. If the clients are not directly connected, I recommend configuring it this way on the edge switch the client is connected to instead.
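To convince yourself that the enforcement works, a quick test from a Linux client in that VLAN is enough (a sketch – interface name and addresses are placeholders): give it a static address that was never leased via DHCP and, per the behaviour described above, traffic through the gateway fails because the gateway never learns the ARP entry.

# hypothetical client subnet 10.1.20.0/24, gateway 10.1.20.1, server 10.1.30.10 behind it
ip addr flush dev eth0
ip addr add 10.1.20.99/24 dev eth0     # static address that was never handed out via DHCP
ip route add default via 10.1.20.1
ping -c 3 10.1.30.10                   # fails – the gateway has no ARP entry to deliver the replies
dhclient eth0                          # get a proper lease instead ...
ping -c 3 10.1.30.10                   # ... and now it works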

Optional

The following setups/configurations should be done to increase the security of this part even more:

  • Save the DHCP log files for a longer time period than the default – Windows DHCP servers rotate them every week – and make sure all the information you need is in the log file.
  • Enable ARP spoofing protection also on the client systems where possible (most likely only possible on PCs). Most enterprise endpoint protection systems allow such a configuration.
  • Integrate the configuration of DHCP reservations (e.g. for printers) into your network authentication solution. It already needs to know the MAC address of the client, so adding the IP address there is simple. It also keeps the DHCP scopes clean: if a client is removed from the network authentication, the reservation is automatically removed from the DHCP server. A side benefit is that your service desk employees can use this to create DHCP reservations without needing DHCP administrator privileges – and it is often easier to get an audit log of the changes this way than on the Windows DHCP server.

Layer 3 network architecture is the way to go

May 24, 2015

At some of the conferences I attended recently, other attendees showed their new network architectures in short presentations, and I talked with some of them – and something really puzzled me: their network designs were completely different from mine. I’m a strong believer in layer 3 network architectures – outside the data centers with their virtualisation, there is no need for layer 2 spanning more than one building. I even go as far as making the floor switches the only layer 2 switches – the building switch already routes, also between two floor switches in the same building. I have been designing networks this way at least since 2010. I’m talking about networks with more than 500 or 1,000 users in this blog post.

So let’s dive a little bit deeper to see what architecture they are generally coming from and what their new architectures look like.

Their old architectures

The old setup is a grown architecture which is not balanced. It is most often a mix of various device types and often also vendors. A department has its own VLAN even if it is distributed over multiple locations. Some even operate servers in the same VLAN. This leads to many layer 2 domains distributed over the whole network – in one example I remember, over 700 VLANs were tagged on one link. So it is already a complex setup, which gets additional complexity from spanning tree protocols (e.g. RSTP … so 700 separate spanning tree instances in this case – now you see why I remember it 😉) for redundancy throughout the whole network. The IP addresses of the devices in the client networks are not all assigned by DHCP – some are static.

Their new architectures

As designing and deploying a new architecture is already such a big project, they don’t want to change IP addresses if at all possible, since that would require the various department IT guys to change something and nobody knows how long that would take. The other problem is that their firewall rules are based on the IP addresses of the clients and of the servers in the client VLANs. This leads to a similar architecture in almost all of the presentations: big devices in the center (ASR if Cisco is the vendor), using L3VPN or VPLS to keep everything the same for the user while getting rid of spanning tree over the network backbone.

My opinion on that

My opinion is that they didn’t ask themselves why they need to keep everything the same. What are the reasons for this – can they be accomplished with other methods? These architectures are complicated, need big iron and are therefore expensive. Talking with them revealed several points.

The servers in the client networks are operated by the various departments, and they want to make sure that only their staff is able to access them. Even if a server is not in the client network but in a data center network, it should only be reachable by the given department’s employees. The simplest way to do that was to use a separate subnet per department and use it as the source address filter in the firewall rules. So basically they need a way to identify users for firewall rules and other stuff, and in their setup the client’s IP address is that way to identify the user.

But that identifies the device rather than the user in the first place – and secondly, there is a better way.

Identifying users

Forget about IP addresses for identifying a single user. They can be used to identify a big group, like all internal devices and users vs. the Internet, but otherwise you should not use them. I recommend using and enforcing DHCP for all devices in the client networks and using one of the following methods to identify devices and users:

Active Directory integration

If your institution/company is running an Active Directory, the simplest way to identify users is to install an agent on a Windows server which parses the domain controllers’ logs and sends the user name + client IP address to the firewalls. It does not matter in which building or to which subnet the user is connected. This allows you to set up firewall rules with AD groups as source addresses. The client IP address and user name combination is valid for a given time, e.g. 8h. This allows you to use the department’s AD group in firewall rules. A second example: a special group is used to allow access to a special application, and the same group can be used to let the traffic of the right persons through the firewall.

This setup provides another benefit for the network/firewall guys. Allowing a new employee access to an application is generally done via an identity management system, by a local IT guy or by the service desk of the IT department. In any case the firewall guys just got rid of a routine task – they only need to make sure that every firewall rule for clients accessing servers either applies globally to all users or uses an AD group. In both cases no manual adding/removing is needed, and if a user gets deleted in the AD his rights are removed too.

This setup also works for non-Windows clients … the users just need to mount an SMB share with a domain user so the domain controller can make the mapping. This is also possible on MacOS and Linux. If this is not possible, most firewalls provide a web interface the user needs to log on to once every e.g. 8h.

DNS Names

If you need to identify clients and not users, or in your setup client = user, then the following setup is possible. Configure the client to update the DNS record for its host name after getting an IP address via DHCP. Modern Windows clients can be configured to only do this in a secure way (secure dynamic updates). Most firewalls allow the usage of DNS names in firewall rules, and every e.g. 15 min the firewall resolves the DNS name to the current IP address. If you also make sure that a client can only use the IP address assigned by DHCP (disable ARP learning on the layer 3 switch and use DHCP snooping for the IP to MAC address mapping), this is also fairly secure for an internal network.

802.1x

802.1x provides the possibility to identify devices and users. On centrally managed Windows clients you can configure whether the device certificate or a user certificate is sent after logon. Otherwise just provide the user with the correct certificate … be it on Linux or Windows. If you provide per-user certificates, the RADIUS server has the user name to MAC address mapping (the MAC is part of the RADIUS requests). Combine that with the IP to MAC address mapping of the DHCP server and you get user name to IP via standard protocols – this works for any client operating system that supports 802.1x; just give the user the correct certificate. You can also use PEAP if you fear the complexity of EAP-TLS. Now you only need to provide that mapping to the firewall via an API, which most provide – add and remove IPs from an IP group on the firewall.
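To illustrate the combination step (a minimal sketch, not a finished integration – the two input files, their columns and how you export them from your RADIUS and DHCP servers are assumptions): once you have user-to-MAC pairs from the RADIUS accounting data and MAC-to-IP pairs from the DHCP leases, a simple join already produces the user-to-IP list you would push into the firewall’s IP groups.

#!/bin/bash
# hypothetical exports:
#   radius_user_mac.csv : <username>,<mac>   (from RADIUS accounting)
#   dhcp_mac_ip.csv     : <mac>,<ip>         (from the DHCP server)
join -t, -1 2 -2 1 \
    <(sort -t, -k2,2 radius_user_mac.csv) \
    <(sort -t, -k1,1 dhcp_mac_ip.csv) \
  | awk -F, '{ print $2 "," $3 }'            # output: <username>,<ip>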

Outlook

This post got longer than I thought, so I’ll stop here with solutions for identifying users – and with that, the “using IP addresses for authentication” requirement can be removed from the requirements list of a new network architecture.

Major website in Tirol deanonymizes users

February 9, 2015

Important: This is a new version of the post which does not contain the name of the website, as the owner removed the code and explained to me why he did it.

As you most likely know, I’m running NIDS (network intrusion detection systems) to monitor the traffic going into and out of multiple networks. Today I saw traffic which would be normal if the system were using VoIP via the Internet – but the source address was not using VoIP at that time.

[**] [1:2016149:2] ET INFO Session Traversal Utilities for NAT (STUN Binding Request) [**] [Classification: Attempted User Privilege Gain] [Priority: 1] {UDP} xxx.xxx.xxx.xxx:47865 -> 54.172.47.69:3478

Together with two friends I started investigating. The destination IP address belongs to Amazon AWS … even more interesting. So we took a look at the DNS request the PC had made that resolved to that IP address, which showed:

stun1.webrtc.us-east-1.prod.mozaws.net

So this originates from the Firefox browser and is connected to WebRTC. We went to the PC and looked through the browsing history, but the pages looked “normal”, so we started to access them again to find out which one triggers the request. And we found it: visiting thread pages on XXX triggered the requests. Even more interesting, we were able to reproduce the requests in a Firefox browser running NoScript! So we looked at the HTML code of the page and found the following:

[Screenshot: JavaScript code embedded in the page]

And at the end there are the following lines:

[Screenshot: the last lines of the embedded JavaScript]

So we took a look at the HTTP requests made by the client and at the DOM tree and found the following:

[Screenshot: DOM tree showing the image request containing the IP addresses]

This shows that the local IP addresses (behind a NAT router) and the external IP address used for the WebRTC request are sent to the server in a JPG image request. This seems to be aimed at deanonymizing the client if the user accesses the site via a VPN connection. Some weeks ago there was a post about something like this – deanonymizing Tor users – and a little searching revealed the following page. Looking at the source code there showed that most parts are identical, just adding it to the DOM tree was new. The exploit currently works for Firefox and Chrome – Internet Explorer does not support WebRTC so far.

After these findings I sent an email to the owner of the website and asked whether this was on purpose or whether the site had been hacked. He responded a day later and explained that it was needed against an attacker. Anyway, as some people rely on their anonymity, I wrote this post to get the word out: if you need anonymity you need to take action – not only for this page but for others too. The code is out in the wild – it will be used and misused.

Solutions:

  • The best solution is to set the Tor/VPN tunnel up on the router and not on the PC – that also protects against similar exploits in the future.
  • A fast solution is to install a special plugin: Firefox, Chrome (the Chrome one currently does not work with Chrome V40.0.2214.111, as a reader just reported to me – ScriptSafe does) – this is also a good idea even if you’re not using Tor or a VPN.
  • Verify that you’re secure on this page or this page

PS: I want to thank my two friends (Benjamin Kostner and one friend who wants to stay anonymous) for helping – that made the process of finding the source of the problem much easier and faster.

Do not rely on Windows DHCP server logs as security logs

January 18, 2015

Many companies I know back up their DHCP log files so that they are able to tie a MAC address to an IP address seen in a security incident. Sure, it is possible that an attacker uses a static IP address, but more often than not it is a dynamic one – just because it is easier, or because he does not possess the privileges to change it. Even if you’re just using a simple MAC address based network authentication solution, you’ll have log files which tie the MAC address to a specific Ethernet port and thus to a physical location.

So far so good, but there is a problem with this setup and the Windows DHCP server (at least in 2008R2 and newer) – I didn’t check other servers. Let’s take a look at the log file and how it normally looks.

Microsoft DHCP Service Activity Log
Event ID Meaning
...
10 A new IP address was leased to a client.
11 A lease was renewed by a client.
...
ID,Date,Time,Description,IP Address,Host Name,MAC Address,User Name, TransactionID, QResult,Probationtime, CorrelationID,Dhcid.
11,01/16/15,00:00:15,Renew,10.xxx.xxx.xxx,coolhostname.domain,940C6D4B992E,,373417312,0,,,

So we have a renew here and we’re able to tie the IP address to the MAC address. But sometimes you’ll see entries like this:

11,01/18/15,09:00:55,Renew,10.xxx.xxx.xxx,,1049406658305861646638,,2351324735,0,,,

or
11,01/18/15,12:49:12,Renew,10.xxx.xxx.xxx,coolhostname.domain,4019407634303263415422,,2657325422,0,,,

That does not look like a MAC address – so what is it?

I’ve seen this with some embedded devices and a Fedora 21 client, which put me on the right track. The following Bugzilla entry explains the problem:

“In Fedora 20, it sends a client identifier, and that client identifier is equal to the MAC address of the interface. This is recognized by the DHCP server’s static configuration and the Fedora 20 client gets an IP address.

Fedora 21 now sends a different client identifier that is not equal to the MAC address of the identifier. This new string format for client identifier doesn’t match anything in the static configuration of the DHCP server so it fails to get an IP address assigned.”

“Same issue here. I can confirm “send dhcp-client-identifier = hardware;” fixes the issue. DHCP server is a microsoft windows server and there’s nothing I can do to change its configuration.”

To show the difference between the various DHCP packets, here are some screenshots:

A DHCP request without a client identifier:

[Screenshot: DHCP request without a client identifier]

A DHCP request with a MAC address as client identifier:

[Screenshot: DHCP request with a MAC address as client identifier]

A DHCP request with a non MAC address as client identifier:

[Screenshot: DHCP request with a non-MAC client identifier]

To summarise: devices can use either the MAC address or a UID (client identifier) to identify themselves to the DHCP server. The problem is that in the latter case the Microsoft DHCP server does not log the MAC address anywhere, and you won’t find the UID in your network logs. But as you can see, all requests carry the client MAC address in the packet – Microsoft just does not write it into the log.
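If you want to see which of your clients send such a client identifier, a packet capture next to the DHCP server or relay shows it (a sketch; the interface name is an assumption) – with enough verbosity tcpdump prints the DHCP options, including the Client-ID option, alongside the client hardware address:

# capture DHCP traffic and print the options (option 61 is the Client-ID)
tcpdump -vvv -n -i eth0 'udp and (port 67 or port 68)'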

What’s funny is that the column in the Windows DHCP server log is called “MAC Address”, but sometimes it does not contain a MAC address. A discussion with Microsoft Premier Support revealed that this is an intended feature and not a bug. 😉

Filter traffic from and to Tor IP addresses automatically with Mikrotik RouterOS

November 30, 2014

Some newer malware communicates with its command and control servers via the Tor network; in a typical enterprise network no system should connect to the Tor network at all. Another scenario is that you’re providing services which don’t need to be accessible via the Tor network, but your servers get attacked from Tor exit nodes. In both cases it may be a good defence to filter/log/redirect that traffic on your router. With Mikrotik’s RouterOS this is possible, but you also need a small Linux/Unix server to help. This server needs to be a trustworthy one, as the router executes a script this server generates. This is required because RouterOS by itself can only parse text files up to 4096 bytes, and the Tor IP address list is longer.

Linux Part

So first we create the script /usr/local/sbin/generateAddTorIPsScript.sh on the Linux server with the following content:
#!/bin/sh
# the full path of the file we create
filename=/var/www/html/addTorIPs.rsc

# uncomment exactly one of the following two url lines:
# list of all current Tor server IP addresses
#url=http://torstatus.blutmagie.de/ip_list_all.php/Tor_ip_list_ALL.csv
# list of all current Tor exit node IP addresses
#url=http://torstatus.blutmagie.de/ip_list_exit.php/Tor_ip_list_EXIT.csv

echo "# This scrip adds Tor IP addresses to an address-list (list created: $(date))" > $filename
echo "/ip firewall address-list" >> $filename

/usr/bin/wget -q -O - $url | sort -u | /bin/awk --posix '/^[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}/ { print "add list=addressListTor dynamic=yes address="$1" " ;}' >> $filename

The filename path works on CentOS; on Ubuntu you need to remove the html directory from the path. Now make the file executable:

chmod 755 /usr/local/sbin/generateAddTorIPsScript.sh

and execute it

/usr/local/sbin/generateAddTorIPsScript.sh

No output is good. Make sure that the file is reachable via HTTP from the router (e.g. install httpd on CentOS). If everything works, make sure that the script is called once a day to update the list, e.g. by placing a symlink in /etc/cron.daily:

ln -s /usr/local/sbin/generateAddTorIPsScript.sh /etc/cron.daily/generateAddTorIPsScript.sh
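To sanity-check the generated file before the router fetches it, a quick look on the Linux server is enough (the path matches the filename variable set in the script above):

head -3 /var/www/html/addTorIPs.rsc    # the comment line, the address-list line and the first add command
wc -l /var/www/html/addTorIPs.rsc      # roughly the number of Tor IP addresses plus two header lines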

Mikrotik part

Copy and paste the following to get the script onto the router:

/system script
add name=scriptUpdateTorIPs policy=ftp,reboot,read,write,policy,test,password,sniff,sensitive source="# Script which will download a script which adds the Tor IP addresses to an address-list\
\n# Using a script to add this is required as RouterOS can only parse 4096 byte files, and the list is longer\
\n# Written by Robert Penz <[email protected]> \
\n# Released under GPL version 3\
\n\
\n# get the \"add script\"\
\n/tool fetch url=\"http://10.xxx.xxx.xxx/addTorIPs.rsc\" mode=http\
\n:log info \"Downloaded addTorIPs.rsc\"\
\n\
\n# remove the old entries\
\n/ip firewall address-list remove [/ip firewall address-list find list=addressListTor]\
\n\
\n# import the new entries\
\n/import file-name=addTorIPs.rsc\
\n:log info \"Removed old IP addresses and added new ones\"\
\n"

For a first test run, use the following command:

/system script run scriptUpdateTorIPs

If you didn’t get an error,

/ip firewall address-list print

should show many entries. Now you only need to run the script once a day, which the following command takes care of:

/system scheduler add interval=1d name=schedulerUpdateTorIPs on-event=scriptUpdateTorIPs start-date=nov/30/2014 start-time=00:05:00

You can use this address list now in various ways … the simplest is the following:

/ip firewall filter
add chain=forward comment="just the answer packets --> pass" connection-state=established
add chain=forward comment="just the answer packets --> pass" connection-state=related
add action=reject chain=forward comment="no internal system is allowed to connect to Tor IP addresses" dst-address-list=addressListTor
add chain=forward comment="everything from internal is ok --> pass" in-interface=InternalInterface

Google services seem to be down if you’re accessing them via an IPv6 tunnel provider [Update 2]

November 8, 2014

For one of my Internet connections I use Hurricane Electric as IPv6 tunnel broker, and the Google services (including Youtube) seem not to be accessible over it. I searched the Internet and it seems that this is a more widespread problem that also affects other tunnel brokers and other users. It is also interesting that the following works.

First the DNS request:
$ host www.google.com
www.google.com has address 188.21.9.57
www.google.com has address 188.21.9.56
www.google.com has address 188.21.9.59
www.google.com has address 188.21.9.53
www.google.com has address 188.21.9.52
www.google.com has address 188.21.9.55
www.google.com has address 188.21.9.54
www.google.com has address 188.21.9.58
www.google.com has IPv6 address 2a00:1450:4014:80b::1013

the ping to the IPv6 address works too:
$ ping6 2a00:1450:4014:80b::1013
PING 2a00:1450:4014:80b::1013(2a00:1450:4014:80b::1013) 56 data bytes
64 bytes from 2a00:1450:4014:80b::1013: icmp_seq=1 ttl=57 time=82.5 ms
64 bytes from 2a00:1450:4014:80b::1013: icmp_seq=2 ttl=57 time=93.3 ms
64 bytes from 2a00:1450:4014:80b::1013: icmp_seq=3 ttl=57 time=68.3 ms
64 bytes from 2a00:1450:4014:80b::1013: icmp_seq=4 ttl=57 time=75.5 ms

but an HTTP request runs into a timeout:
$ wget www.google.com
--2014-11-08 11:42:29-- http://www.google.com/
Resolving www.google.com (www.google.com)... 2a00:1450:4014:80b::1013, 188.21.9.56, 188.21.9.59, ...
Connecting to www.google.com (www.google.com)|2a00:1450:4014:80b::1013|:80... connected.
HTTP request sent, awaiting response... 302 Found
Location: http://www.google.de/?gfe_rd=cr&ei=lfNdVKbtEumk8wfgv4DgDg [following]
--2014-11-08 11:42:29-- http://www.google.de/?gfe_rd=cr&ei=lfNdVKbtEumk8wfgv4DgDg
Resolving www.google.de (www.google.de)... 2a00:1450:4014:80b::1017, 188.21.9.52, 188.21.9.56, ...
Connecting to www.google.de (www.google.de)|2a00:1450:4014:80b::1017|:80... connected.
HTTP request sent, awaiting response...

after the initial redirect … so small packets seem to go through but big ones don’t .. that looks like an MTU problem.

PS: yes,
$ ping6 2a00:1450:4014:80b::1017
PING 2a00:1450:4014:80b::1017(2a00:1450:4014:80b::1017) 56 data bytes
64 bytes from 2a00:1450:4014:80b::1017: icmp_seq=1 ttl=57 time=100 ms
64 bytes from 2a00:1450:4014:80b::1017: icmp_seq=2 ttl=57 time=63.6 ms

works too. 😉

Update:

Also take a look at the following links:

  • https://www.sixxs.net/forum/?msg=general-12626989
  • https://forums.he.net/index.php?topic=3281.0

Update 2:

The PMTUD (path MTU discovery) seems not to be working .. details on PMTUD, MTU and MSS can be found here. The workaround seems to be to set the MTU size to 1480 – it works for me – and in IPv6 that means an MSS of 1420 (60 bytes of headers instead of 40 in IPv4). On a Mikrotik RouterOS it works like this:

/ipv6 firewall mangle add action=change-mss chain=forward new-mss=1420 protocol=tcp tcp-flags=syn tcp-mss=!0-1420 comment="max MTU size in Tunnel 1480 .. workaround for google bug"

On Linux it is similar with iptables:
ip6tables -A FORWARD -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --set-mss 1420
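Alternatively (or additionally) you can lower the MTU of the tunnel interface itself to 1480, so the stack derives the correct MSS on its own – a sketch; “he-ipv6” is just a placeholder for whatever your tunnel interface is called:

ip link set dev he-ipv6 mtu 1480    # tunnel interface name is an assumption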

Mikrotik changed the update policy in their license agreement

October 13, 2014

Just as information for those of you using Mikrotik’s RouterOS who don’t monitor the wiki for changes or aren’t regular readers of the forum: Mikrotik changed its license concerning updates. Before this summer the following paragraph was on their license wiki page:

[Screenshot: the old upgrade paragraph from the license wiki page]

You can take a look at the full old version here. Anyway, this whole paragraph has been removed. Also, with RouterOS 6.20 the following got removed from the output of /system license print on the router:

[Screenshot: the removed line from /system license print]

So what does that mean now? jarda from the forums put it nicely:

Updates and upgrades will be unlimited till the end of the life of each device type. It means until the new versions will support your hardware, you can upgrade for free. But expect the problems like some mipsbe devices with 32MB ram are sometimes poorly running RouterOS 6.x. And old 1xx devices are not supported by 6.x. So actually moving upgradable-to field forward or keeping support to old hw in new versions is one like other. No certainty anyway.

My first IPv6 problem – multihoming my home network without NAT

August 10, 2014

Today I ran into my first IPv6 problem … everything had been easy for some years .. just configure it and it worked … but this weekend I deployed a second Internet connection for my home network. With IPv4, masquerading the internal IP addresses to the single IP address given by each provider, I was able to configure on the router which traffic goes out via which provider. If one provider fails, the route is withdrawn and everything goes over the other link. But now comes IPv6 and it is not that easy anymore, as my router does not support IPv6 NAT. The problem is described in detail in this nice blog post by Ivan Pepelnjak.

My router is able to do VRF (Virtual Routing and Forwarding) also for IPv6 (at least the documentation says so .. I didn’t try it so far). So the “best” option for me seems to be to advertise both provider prefixes to the clients and route source-based to the providers. Without VRFs I would depend on the providers not doing an RPF (Reverse Path Forwarding) check, which is also not a good idea. But this leads to the problem that the end device decides which uplink it uses, which is most likely not the one I would choose …
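For illustration only – a sketch of what the source-based routing part would look like on a Linux box, with documentation prefixes, made-up next hops and routing table numbers (my router is not Linux, this is just to show the concept): each packet leaves via the provider whose prefix it is sourced from.

# hypothetical: 2001:db8:a::/48 from provider A on eth1, 2001:db8:b::/48 from provider B on eth2
ip -6 rule add from 2001:db8:a::/48 table 101
ip -6 rule add from 2001:db8:b::/48 table 102
ip -6 route add default via fe80::a dev eth1 table 101
ip -6 route add default via fe80::b dev eth2 table 102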

Another idea was to use one of my servers in a data center to tunnel the traffic through: basically running two IP tunnels from my router to the server (one via each provider) and using IP addresses that are routed from the Internet to the server. But this is not a good solution either.

Anyway, I don’t have a good solution so far – maybe one of my readers does.

 
