March 8, 2014
Normally I use standard Linux distributions as NAS systems, but in this case it had to be a real NAS (size and price were more important than performance) and it was not at my place –> so I chose a Synology DS214se. But I still needed to set up a certificate-based OpenVPN where the NAS was the client, and it needed to stay connected all the time. First I thought that must be easily done in the GUI, as OpenVPN makes stuff like this easy ... but I was wrong. First, it is not possible to configure certificate-based authentication for OpenVPN in the Synology GUI, and secondly, if the connection got disconnected it stayed that way. But with some magic it was easily fixed:
Configure Certificate based authentication
First go to the VPN window in Control Panel and configure what is possible via the GUI, e.g. the CA certificate and the server IP address or DNS name. Use anything as username/password:
After that save it ... but don't connect, as it won't work yet. You need to log in via ssh (use username root and the admin user's password), change some files and upload some new ones.
An ls -al in /usr/syno/etc/synovpnclient/openvpn will give you something like this:
drwxr-xr-x 3 root root 4096 Feb 23 20:21 .
drwxr-xr-x 7 root root 4096 Mar 7 21:15 ..
-rwxr-xr-x 1 root root 1147 Feb 22 18:10 ca_234324321146.crt
-rw-r--r-- 1 root root 524 Mar 2 09:24 client_234324321146
-rw------- 1 root root 425 Feb 22 18:10 ovpn_234324321146.conf
The file without extension is the configuration for OpenVPN, which gets created from the GUI. The GUI config itself is stored in the .conf file. So our changes to the OpenVPN configuration file get overwritten if we change the GUI config, but we won't do that anymore ;-). Now we create a sub directory and upload our client (=NAS) certificate files. The long and hopefully good documentation on creating the certificates and how to configure OpenVPN on a standard distribution can be found here.
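If the keys sub directory doesn't exist yet, create it first (the path is the one used by the OpenVPN start command further down):

cd /usr/syno/etc/synovpnclient/openvpn
mkdir -p keys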
cat > keys/my_ds.crt (paste the certificate content and press CTRL-D on an empty line)
cat > keys/my_ds.key (paste the private key content and press CTRL-D on an empty line)
chmod 600 keys/my_ds.key
Now we change the file without extension so that it contains at least the following lines (other settings are also required but depend on your setup):
keepalive 10 120
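For orientation, a sketch of how the certificate-related part of that file could look, assuming the key files from above; remote, proto and dev are placeholders that depend on your server setup:

client
remote vpn.example.com 1194   # placeholder – use your server's address and port
proto udp
dev tun
ca ca_234324321146.crt
cert keys/my_ds.crt
key keys/my_ds.key
keepalive 10 120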
I recommend making a copy of the file after every change, so if someone changes something in the GUI you don't need to start from the beginning.
cp client_234324321146 client_234324321146.backup
For simple testing start OpenVPN like this (drop the --daemon option if you want it to stay in the foreground so you can stop it with CTRL-C):
/usr/sbin/openvpn --daemon --cd /usr/syno/etc/synovpnclient/openvpn --config client_234324321146 --writepid /var/run/ovpn_client.pid
And tune it until it works correctly. Now you can start it in the GUI and you’re finished with the first task.
Configure OpenVPN in a way that it keeps running
For this we write a script that gets called every five minutes to check if the OpenVPN connection is still working and, if not, restarts it.
cat > /root/checkAndReconnectOpenVPN
#!/bin/sh
# if tun0 exists its ifconfig output contains this all-zero HWaddr -> VPN is up
if echo `ifconfig tun0` | grep -q "00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00"
then
    echo "VPN up"
else
    echo 1 > /usr/syno/etc/synovpnclient/vpnc_connecting
    synovpnc reconnect --protocol=openvpn --name=XXXXXX
fi
Replace XXXXXX with the name the VPN connection has in the GUI (not sure if it is case sensitive or not, I kept the case anyway) and make the script executable:
chmod +x /root/checkAndReconnectOpenVPN
Try it out (e.g. once while the OpenVPN is running and once while it is not):
/root/checkAndReconnectOpenVPN
Now we only need to add a line to the crontab file (important: it is >> and not >, otherwise you overwrite the existing crontab):
cat >> /etc/crontab
then paste the following line and press CTRL-D on an empty line:
*/5 * * * * root /root/checkAndReconnectOpenVPN
Now we only need to restart the cron daemon.
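On the DSM version I used this was typically done via the crond init script – the path below is an assumption from memory and may differ between DSM releases:

/usr/syno/etc/rc.d/S04crond.sh stop    # path may vary with your DSM version
/usr/syno/etc/rc.d/S04crond.sh start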
and we're finished ... a certificate-based OpenVPN which also reconnects if the process fails/stops.
February 16, 2014
Originally I only wanted to look at the traffic to check why it took so long on my mobile, but then I found some bad security implementations.
1. The web service is password protected, but the password, which is the same for all copies of the app, is sent in the clear
Just look at the request which is sent via HTTP (not HTTPS) to the server. Take the string, run it through a base64 decoder and you get: client:xxxxxx – oh, that's the user name and password, and it's the same for every copy of the app.
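Just to illustrate the decoding step – the string below is a made-up example in the same Basic-Auth style, not the real value from the app:

$ echo "Y2xpZW50Onh4eHh4eA==" | base64 -d
client:xxxxxx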
2. We collect private data and don't tell our users what for
The app asks the following question at every launch until you say yes: “Um in den vollen Genuss der Vorzüge dieser App zu kommen, können Sie sich bei uns registrieren. Wollen Sie das jetzt tun?” (“To get the full use of the app you can register with us. Do you want to do that now?”)
But for which feature do you need to register? What happens with the data you provide? There is nothing about it in the legal notice of the app. I'm also missing the DVR number from the Austrian Data Protection Authority, and a quick search in the register didn't show anything either. Is it possible they forgot it?
3. We don’t care about private data which is given to us
The private data you're asked for at every launch until you provide it is sent in the clear over the Internet. Was an SSL certificate too expensive?
4. We generate incremented client IDs, making it easy to guess the IDs of other users
At the first launch of the app on a mobile, the app requests a unique ID from the server. That should be something random and unguessable, but no, it's just an incremented integer (could it be the primary key of the database table?), at least that's what my tests showed ... the value only got bigger, and not that much bigger, every time.
And as the image at point 3 shows, this number is everything someone needs to change the user data on the server for another user; a small script which counts from 1 up to 20,000 would be something nice ... the question is what else you can do with this ID? Should I dig deeper?
5. We’re using an old version of Apache Tomcat
The web service tells everyone who wants to know that it's running on Apache Tomcat/6.0.35. There are 7.0 and 8.0 releases out already, and the current patch release of the 6.0 branch is 6.0.39, released 31 January 2014. But it's worse than that: 6.0.35 was released on 5 Dec 2011 and replaced on 19 Oct 2012 by 6.0.36. Someone not patching for over 2 years? No, can't be, the app is not that old. So an old version was installed in the first place?
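If you want to check this kind of version disclosure yourself, two quick probes usually suffice – a sketch using the backend hostname from the traffic analysis; the version string may sit in the Server header or only on a default error page:

$ curl -sI http://tirol.lawine-app.com/ | grep -i '^server:'
$ curl -s http://tirol.lawine-app.com/this-page-does-not-exist | grep -i tomcat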
ps: If you're working with the Ubuntu 12.04 LTS package ... Tomcat is in universe, not main ... no official security patches.
These are my results after looking at the app for a short period of time ... I needed to do other stuff in between.
For some time now a mobile app for Android phones and iPhones has been advertised as the official app of Tirol's Avalanche Warning Service and the Tiroler Tageszeitung (Tirol daily newspaper), so I installed it on my Android phone some days ago. Yesterday I went on a ski tour (ski mountaineering), and on the way there in the car I tried to update the daily avalanche report, but it took really long and failed in the end. I thought that can't be, as the homepage of Tirol's Avalanche Warning Service worked without any problems and was fast.
So when I was home again I took a closer look at the traffic the app sends to and receives from the Internet ... as I wanted to know why it was so slow. I installed the app on my test mobile and traced the traffic it produced on my router while it launched for the first time. I was a little bit shocked when I looked at the size of the trace – it was 18Mbyte big. Ok, this makes it quite clear why it took so long on my mobile ;-) –> so part of this post series will be getting the size of the communication down. I opened the trace in Wireshark and took a look at it. First I checked where the traffic was coming from.
So my focus was on 22.214.171.124, which was the IP address of tirol.lawine-app.com, and it is hosted by a German provider called Hetzner (you can rent “cheap” servers there). As I opened the TCP stream I saw a misconfiguration at once: the client supports gzip, but the server does not send gzipped responses.
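You can check this from the command line as well – a minimal sketch against the server's root URL; the app's real request paths will differ:

$ curl -sI -H 'Accept-Encoding: gzip' http://tirol.lawine-app.com/ | grep -i 'content-encoding'

If the grep prints nothing, the answer was not compressed.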
Just to get a number for how much it would save without any other tuning, I gzipped the trace file and got from 18.5Mbyte down to 16.8Mbyte – 10% saved. Then I extracted all downloaded files: jpg files with 11Mbyte and png files with 4.3Mbyte ... so it seems that saving there will help the most. Looking at the biggest pictures led to the realization that the jpg images were saved with the lowest compression setting, e.g. 2014-02-10_0730_schneeabs.jpg (a command-line way to reproduce the comparison follows after the list):
- 206462 Bytes: original image
- 194822 Bytes: gimp with 90% quality (10% saving)
- 116875 Bytes: gimp with 70% quality (40% saving)
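To reproduce such a comparison without opening GIMP, ImageMagick does the re-encoding in a one-liner (my tooling suggestion, not what was used for the numbers above):

$ convert 2014-02-10_0730_schneeabs.jpg -quality 70 schneeabs_q70.jpg
$ ls -l 2014-02-10_0730_schneeabs.jpg schneeabs_q70.jpg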
Some questions also arose:
- Some information, like the legend, is always the same ... why not download it only once and reuse it until the legend gets updated?
- Some big parts of the pictures are only text, why not send the text and let the app render it?
- The other question is why the jpeg files are 771x566 while the png files are 410x238, showing the same map of Tirol? Downsizing would save 60% of the size (at the same compression level).
- Why are some maps done in PNG anyway? E.g. 2014-02-10_0730_regionallevel_colour_pm.png has 134103 Bytes; saving it as jpeg in gimp with 90% quality leads to 75015 Bytes (45% saving).
So I tried to calculate the savings without reducing the information that is transferred – just its representation – and it comes to over 60% ... so instead of 18Mbyte we would only need to transfer 7Mbyte. If the default setting were changed to 3 days instead of 7, it would go down even further, as I guess most people only look at the last 3 days, if even that. So it could come down to 3-4 Mbyte ... that would be Ok, so please optimize your software!
I only wanted to make one post about this app, but while looking at the traffic I found some security and privacy concerns I need to look into a bit closer ... so expect a part 2.
February 15, 2014
If you, like me, need to get some traffic off a Mikrotik router and
/tool sniffer quick doesn't cut it because you need more than just the headers, the best way is to stream the traffic to a Linux box. The Mikrotik configuration is easy, just set the server you want to stream to:
/tool sniffer set streaming-enabled=yes streaming-server=<ip_of_the_server>
Configure a filter as you don’t want to stream everything:
/tool sniffer set filter-ip-address=<an_example_filter_ip>
and now you need only to start it with
/tool sniffer start
and check with
/tool sniffer print
if everything is running.
But now comes the part that is not documented that well. Searching through the Internet I found some posts/articles on how to use Wireshark for capturing, but that does not work correctly – at least not for me.
If you configure the capture filter to udp port 37008 to get everything the router sends via TZSP, you will see the streamed packets in Wireshark. But if you then set the display filter to show only TZSP, some of these packets are not displayed any more. These packets contain information we need, and I was not able to configure Wireshark 1.10.2 to handle this correctly. If you know how to get it to work, please write a comment. So I changed my approach and used another program to write the packets to disk, to look at them later with Wireshark. And there is a program from Mikrotik directly which does exactly that. Go to the download page, download Trafr, then extract it and use it like this:
$ tar xzf trafr.tgz
usage: trafr <file | -s> [ip_addr]
-s write output to stdout. pipe it into tcpdump for example:
./trafr -s | /usr/sbin/tcpdump -r -
ip_addr use to filter one source router by ip address
$ ./trafr test.pcap <ip_of_the_router>
After you stop the program you can open the file in Wireshark, and no packets are missing.
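If you just want a quick sanity check of the written file without starting Wireshark, reading it with tcpdump is enough (test.pcap is the file name from the example above):

$ tcpdump -nn -r test.pcap | head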
January 19, 2014
There seems to be a virus wave here in Austria and Germany. I don't really know why, but somehow many people click on the links and download the malware. Maybe it's because the mail is a faked invoice from some well-known (mobile) telecommunication providers and is written in good German – normally spam like this is written in broken German. And it seems that the mail passed anti-spam systems, as I got the same mails on the corporate account and at home ... normally I don't get spam mails for months.
Anyway, while I was driving home today it was even in the local radio news ... one of the top items there. And when I was home, a relative who doesn't live that close by called me and asked me how to get rid of that virus. He got infected as his anti-virus initially didn't detect it. I recommended him the following link from Raymond: a comprehensive list of 26 bootable antivirus rescue CDs for offline scanning. I recommended him to use at least two of the following from the list:
- Bitdefender Rescue CD
- Kaspersky Rescue Disk
- F-Secure Rescue CD
- Windows Defender Offline
So if you get asked the same by your relatives, you don't need to search any further.
January 12, 2014
Last week we at work got a mail from CERT.at that 2 IP addresses in our AS were probably running misconfigured NTP servers, which can be abused for DDoS attacks via NTP reflection. But first we need to start with the background.
In the last weeks multiple DDoS attacks were using NTP reflection. The attackers are making use of the monlist command, which is enabled on older versions of the NTP daemon. With that command it is possible to get a list of up to the last 600 hosts/IP addresses which connected to the NTP daemon. As NTP is UDP based, an attacker fakes the source IP address and the answer packet from the NTP daemon is sent to the victim. Besides hiding the attacker's IP address from the victim, it amplifies the attack, as the request packet is much smaller than the answer packet. The other problem with the monlist command is that it releases potentially sensitive information (the IP addresses of the clients using NTP).
How to verify you’re vulnerable?
First you need to find your NTP servers – and that's not as easy as it seems. E.g. our 2 reported NTP servers were not our official NTP servers ... but more about that later. To find NTP servers which are reachable from the Internet, use e.g. nmap like this:
sudo nmap -p 123 -sV -sU -sC -P0 <your_network/subnet_mask>
This will return something like this for a Linux NTP server:
Nmap scan report for xxxxx (xxxxxxxx)
Host is up (0.00016s latency).
PORT STATE SERVICE VERSION
123/udp open ntp NTP v4
| receive time stamp: 2014-01-12T11:02:30
| version: ntpd [email protected] Wed Nov 24 19:02:17 UTC 2010 (1)
| processor: x86_64
| system: Linux/2.6.32-358.18.1.el6.x86_64
| leap: 0
| stratum: 3
| precision: -24
| rootdelay: 20.807
| rootdispersion: 71.722
| peer: 56121
| refid: 126.96.36.199
| reftime: 0xd67cedcd.b514b142
| poll: 10
| clock: 0xd67cf4be.9a6959a7
| state: 4
| offset: 0.042
| frequency: -3.192
| jitter: 0.847
| noise: 1.548
| stability: 0.163
|_ tai: 0
But you may also find something like this
Nmap scan report for xxxxx
Host is up (0.00017s latency).
PORT STATE SERVICE VERSION
123/udp open ntp NTP v4
|_ receive time stamp: 2014-01-12T11:02:55
from a system you didn't have on the list. After this, deactivate and/or filter the services you don't need – a running service which is not needed is always a bad idea. But surely you also want to know how to probe the NTP daemon for the monlist command – just like this:
ntpdc -n -c monlist <ip_address>
If the daemon is vulnerable you'll get a list of IP addresses which connected to the daemon. If the NTP daemon is running on a Linux, Cisco or Juniper system, take a look at this page, which describes how to configure it correctly.
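For a Linux ntpd the fix usually boils down to a few lines in /etc/ntp.conf – a minimal sketch, adjust the restrict statements to your own networks and clients:

# disable the monitor facility that answers monlist queries
disable monitor
# serve time but refuse mode 6/7 queries by default
restrict default kod nomodify notrap nopeer noquery
restrict -6 default kod nomodify notrap nopeer noquery
# always allow localhost
restrict 127.0.0.1
restrict -6 ::1

After changing the file, restart the NTP daemon and re-run the ntpdc test from above.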
But I guess you're curious which systems were running on the 2 IP addresses we got reported? They were Alcatel-Lucent switches, which have the NTP daemon activated by default, it seems. So it's really important to check all your IP addresses, not only the known NTP servers.
January 5, 2014
Sometimes you'll (at least if you're like me) want to know which other websites are hosted on the same server, or rather the same IP address. The search engine Bing provides a nice feature for this. Just enter
ip:188.8.131.52 to get a list of the websites which Bing knows to be running on that IP address.
But even better, Andrew Horton has written a Bash script called bing-ip2hosts which lets you check that from the command line:
$ ./bing-ip2hosts www.theregister.co.uk
[ 184.108.40.206 | Scraping 11-13 von 13 | Found 9 | / ]]
As hopefully many of my readers have already heard/read, multiple consumer routers contain a backdoor which allows an attacker to get the configuration of the router, which also contains the administrator password. I won't rewrite everything the big IT news sites have already written. Here are just the basics to get you up to speed if you haven't heard about it before:
- Eloi Vanderbeken found on his Linksys WAG200G router a process that was listening on TCP port 32764. After analyzing the code he figured out that it was possible to extract the configuration from the router via this process without knowing the password. The configuration also contains the password.
- After he posted the information to the net, other users stepped forward and told him that other manufacturers and models have the same backdoor. Don't say “conspiracy theory” now.
- On some routers the process is “only” listening on the internal network (which is still attackable via the user's browser), but some are also reachable from the Internet. Scanning for this on the Internet is easy with zmap ... only 45 min for the whole IPv4 address space.
- Click here to get the current list of affected routers – it's a long list containing vendors like Cisco, Linksys, Netgear, Diamond, LevelOne.
- To verify if your router is also affected, download this Python script (Linux normally has Python preinstalled; on Windows you need to install it) and call it like this:
python poc.py --ip <IP address of your router>
If it finds something, you can extract the configuration by adding --print_conf to the command line. (A quick port-level check from the LAN is shown after this list.)
- To check if the process is also reachable from the Internet, use a website like this.
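If you only want to see whether anything is listening on TCP port 32764 at all (without speaking the backdoor protocol), a quick check from inside your LAN is enough – 192.168.1.1 is just an example, use your router's address:

$ nmap -p 32764 192.168.1.1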
Possible workarounds to get the hole fixed fast:
- On some routers you can configure a local firewall which allows you to block port 32764. Depending on your router this is possible for the Internet interface and/or the internal interface.
- Install an open source firmware like OpenWRT.
- Install the new firmware release from your vendor when and if it gets released ... I wouldn't wait for this.
January 2, 2014
Basically it is simple, but the 64bit part makes it a little more difficult. It may not seem logical from the outside, but don't use the 64bit version of the TeamViewer package. Why? As described here, distributions with multiarch support can't resolve the ia32-libs package it depends on. But there is a simple solution to this which is not described there, as adding an additional architecture doesn't feel right to me.
Install gdebi (gdebi lets you install local deb packages, resolving and installing their dependencies; apt does the same, but only for remotely (http, ftp) located packages):
sudo apt-get install gdebi
Download the 32bit version
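At the time of writing the 32bit package could be fetched directly with wget – the URL is from memory and may have changed, so double-check it on the TeamViewer download page:

$ wget http://download.teamviewer.com/download/teamviewer_linux.deb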
Use gdebi to install and resolve the dependencies:
$ sudo gdebi teamviewer_linux.deb
Reading package lists... Done
Building dependency tree
Reading state information... Done
Building data structures... Done
Building data structures... Done
Requires the installation of the following packages: libxtst6:i386
TeamViewer (Remote Control Application)
TeamViewer is a remote control application. TeamViewer provides easy, fast and secure remote access to Linux, Windows PCs, and Macs.
TeamViewer is free for personal use. You can use TeamViewer completely free of charge to access your private computers or to help your friends with their computer problems.
To buy a license for commercial use, please visit http://www.teamviewer.com
Do you want to install the software package? [y/N]:y
Get:1 http://at.archive.ubuntu.com/ubuntu/ saucy/main libxtst6 i386 2:1.2.2-1 [13.8 kB]
Fetched 13.8 kB in 0s (0 B/s)
Selecting previously unselected package libxtst6:i386.
(Reading database ... 250452 files and directories currently installed.)
Unpacking libxtst6:i386 (from .../libxtst6_2%3a1.2.2-1_i386.deb) ...
Setting up libxtst6:i386 (2:1.2.2-1) ...
Processing triggers for libc-bin ...
Selecting previously unselected package teamviewer.
(Reading database ... 250454 files and directories currently installed.)
Unpacking teamviewer (from teamviewer_linux.deb) ...
Setting up teamviewer (9.0.24147) .
December 24, 2013
Today I was talking with some friends about the possibility of making a DoS attack against an IPv6 router/switch, if I were in the same /64 subnet, by simply sending IPv6 NDP packets to fill the neighbour cache on the router. The questions I was then thinking about were: how many packets can I send e.g. over a 1Gbit link per second? How many entries will the neighbour cache need to hold if the timeout is e.g. set to 120 sec? How long would it take to scan the whole /64? So I sat down and looked at the questions.
How many packets can I send in one direction over a 1Gbit Ethernet link?
The amount of packets which can be sent over a link depends on the size of the packets. The smallest size used for such calculations is 64 byte in the IP world. We need to put that into an Ethernet frame, which adds up to 84 octets on the wire; details can be found here. This leads to the following formula:
1.000.000.000 BitsPerSec / 8 BitsPerOctet / 84 OctetsPerFrame = 1.488.095 FramesPerSec
As only one packet fits in a frame, we can send 1.488.095 packets per second (often called pps), which is also often called line speed or wire speed. The calculation is true for pure Ethernet, but it changes if you use VLAN tags, QinQ or MPLS ... in these cases take a look at this article.
How many entries will the neighbour cache need to hold if the timeout is e.g. 120 sec?
So now we know how many packets we can send at most (let's ignore that an NDP packet needs some additional bytes), which makes it easy to calculate the required size of the neighbour cache of our router:
1.488.095 PacketsPerSecond * 120 SecondsTimeout = 178.571.400 Entries ≈ 178 Million Entries
Let's say that this is only a RAM problem and everything else would work. Each entry contains at least the IP address and the MAC address (an optimization would be possible by only storing the host part of the IP address). An IPv6 address has 128bit = 16byte and a MAC address has 48bit = 6byte, which leads to a total of 22byte per entry. A router would need about 3.6Gbyte of RAM to store this table ... not impossible, but not common either.
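A quick shell check of both numbers (same assumptions as above: 1.488.095 packets per second, 120 seconds timeout, 22 bytes per entry):

$ echo $(( 1488095 * 120 ))        # entries after 120 seconds
178571400
$ echo $(( 1488095 * 120 * 22 ))   # bytes of RAM needed, roughly 3.6 Gbyte
3928570800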
How long would it take to scan the whole /64?
And as a bonus question, let's look at how long it would take to scan that many IP addresses. First we need the amount of IP addresses a /64 can hold:
2^64 = 18.446.744.073.709.551.616 = 1,844674407×10¹⁹ IP Addresses
We know that we can scan 1.488.095 IP addresses per second which leads to
1,844674407×10¹⁹ IPaddresses / 1.488.095 packetsPerSec / 60 / 60 / 24 / 365 = 393081 years
Ok, not practical. But wait ... we only need to scan 2^48 addresses, as the host part is derived from the 48bit MAC address ... this makes only 2,814749767×10¹⁴ IP addresses:
2,814749767×10¹⁴ IPaddresses / 1.488.095 packetsPerSec / 60 / 60 / 24 / 365 = 6 years
Much smaller but still too long for my spare time.
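Both scan-time estimates can be double-checked with bc; the tiny differences to the numbers above are just rounding:

$ echo "2^64 / 1488095 / 60 / 60 / 24 / 365" | bc -l   # roughly 393081 years
$ echo "2^48 / 1488095 / 60 / 60 / 24 / 365" | bc -l   # roughly 6 years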