I only get an A- on the Qualys SSL Labs test – why? And what can I do?

April 19, 2014

In the last months more and more sysadmins have started looking into the SSL configuration of their HTTPS websites. One of the major sites used to rate/check the quality of the SSL configuration of a given HTTPS server is the Qualys SSL Labs SSL Server Test, which can be reached via this link. If a sysadmin gets a not-so-good rating, he searches through the Internet and uses something like these settings (Apache 2.2 on CentOS 6) to fix it:

SSLEngine on
SSLProtocol All -SSLv2 -SSLv3
Header add Strict-Transport-Security "max-age=15768000"
SSLCipherSuite ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES256-GCM-SHA384:AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:AES256-SHA:AES128-SHA:DES-CBC3-SHA:HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4
SSLHonorCipherOrder On

At the time of writing (Qualys SSL Labs changes the rating criteria from time to time) this leads to the following:

[Screenshot: ssl_rating]

And now you are wondering why you only get an A- and what the problem with your configuration is. To make your journey shorter: the problem is most likely not the SSL configuration, it is the software you're running. As you can see in the screenshot, the test reports that Forward Secrecy is not supported by all browsers, and if you take a look at the details,

[Screenshot: pfs]

you'll see that the problem is Internet Explorer and that Forward Secrecy works for all other browsers.

(Perfect) Forward Secrecy

But what is (Perfect) Forward Secrecy in the first place, and why should you care? PFS ensures that session keys stay secret even if the private key of the server gets compromised later. This is done by generating a separate, ephemeral session key for every new HTTPS session/connection.

Why should you care?

An attacker could record the SSL traffic for some time and later get hold of the private key; without PFS he would then be able to decrypt all the SSL traffic he was not able to look into before. Basically, without PFS, if a private key gets compromised you not only need to look at the present and the future, but also at the past, and consider everything that was encrypted/signed by this key as compromised. With PFS you can be sure that an attacker is not able to decrypt traffic recorded before he got the private key. Getting the private key that way was possible with the Heartbleed bug in OpenSSL, or simply by hacking the server.

The cipher suites (the ones you choose with SSLCipherSuite in the Apache configuration) that provide Perfect Forward Secrecy are those that use an ephemeral form of the Diffie-Hellman key exchange. Their disadvantage is a performance overhead, but the security is worth it and the overhead is not that big. With the elliptic curve variants (ECDHE) the performance is even better.
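
If you want to verify which key exchange a server actually negotiates, a quick check with openssl works well – a sketch only, replace www.example.com with your own host; the -cipher option restricts the handshake to ECDHE suites, so a failed connection means none were accepted:

# show protocol and negotiated cipher when only ECDHE suites are offered
openssl s_client -connect www.example.com:443 -cipher 'ECDHE' < /dev/null 2>/dev/null | grep -E 'Protocol|Cipher'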

A- and the workarounds

And now to the problem with the A- rating – I'll quote Shadow Zhang, who described it nicely in his post:

With Apache 2.2.x you have only DHE suites to work with, but they are not enough. Internet Explorer (in all versions) does not support the required DHE suites to achieve Forward Secrecy. (Unless you're using DSA keys, but no one does; that's a long story.) Apache does not support configurable DH parameters in any version, but there are patches you could use if you can install from source. Even if OpenSSL can provide ECDHE, the Apache 2.2 in Debian stable does not support this mechanism. You need Apache 2.4 to fully support Forward Secrecy.

Patching and compiling Apache is not the best idea, as you need to do it again for every security update. I see the following options (a quick way to check what your current stack supports is sketched below the list):

  • Use a distribution version which supports Apache 2.4
  • Use Nginx as a reverse proxy in front of Apache, because it fully supports ECDHE.
  • Change to a web server that is shipped with your distribution and that does support ECDHE.
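
Before you pick one of these options it may help to check what your current stack actually offers – a small sketch (the binary name, httpd vs. apache2, depends on your distribution):

# Apache version – the options above assume you need 2.4 for ECDHE support
httpd -v
# does the system OpenSSL offer ECDHE cipher suites at all?
openssl ciphers -v 'ECDHE' | head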

I hope this post helped and saved you some time looking through the internet for a solution.

Configure a Synology NAS as OpenVPN client with certificate authentication (and make it stable)

March 8, 2014

Normally I use standard Linux distributions as NAS systems, but in this case it had to be a real NAS (size and price were more important than performance) and it was not at my place –> so I chose a Synology DS214se. But I still needed to set up a certificate-based OpenVPN connection where the NAS is the client and stays connected all the time. First I thought that must be easy to do in the GUI, as OpenVPN is made for stuff like this … but I was wrong. Firstly, it is not possible to configure certificate-based authentication for OpenVPN in the Synology GUI, and secondly, if the connection got disconnected it stayed that way. But with some magic it was easily fixed:

Configure Certificate based authentication

First go to the VPN window in the Control Panel and configure what is possible via the GUI, e.g. the CA certificate and the server IP address or DNS name. Use anything as username/password:

[Screenshot: openvpn]

After that save it … but don't connect, as it won't work yet. You need to log in via SSH (use the username root and the admin user's password), change some files and upload some new ones.

cd /usr/syno/etc/synovpnclient/openvpn
ll

will give you something like this

drwxr-xr-x    3 root     root          4096 Feb 23 20:21 .
drwxr-xr-x    7 root     root          4096 Mar  7 21:15 ..
-rwxr-xr-x    1 root     root          1147 Feb 22 18:10 ca_234324321146.crt
-rw-r--r--    1 root     root           524 Mar  2 09:24 client_234324321146
-rw-------    1 root     root           425 Feb 22 18:10 ovpn_234324321146.conf

The file without extension is the configuration for OpenVPN, which gets created from the GUI; the GUI config itself is stored in the .conf file. So if we change the OpenVPN configuration file, it gets overwritten whenever we change something in the GUI – but we won't do that anymore ;-). Now we create a sub directory and upload our client (= NAS) certificate files. A long and hopefully good documentation on creating the certificates and on how to configure OpenVPN on a standard distribution can be found here.

mkdir keys
cat > keys/my_ds.crt    (paste the certificate content and press CTRL-D on an empty line)
cat > keys/my_ds.key    (paste the private key content and press CTRL-D on an empty line)
chmod 600 keys/my_ds.key

Now we change the file without extension so that it contains at least the following lines (other settings are also required but depend on your setup):

ca ca_234324321146.crt
cert keys/my_ds.crt
key keys/my_ds.key
keepalive 10 120
tls-client

I recommend making a copy of the file after every change, so if someone changes something in the GUI you don't need to start from the beginning.

cp client_234324321146 client_234324321146.backup

For simple testing start OpenVPN in the foreground like this (stop it with CTRL-C):

/usr/sbin/openvpn --cd /usr/syno/etc/synovpnclient/openvpn --config client_234324321146 --writepid /var/run/ovpn_client.pid
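
While testing, a quick way to confirm in a second SSH session that the tunnel really came up (assuming the usual tun0 device name; the target IP is just a placeholder for a host on the VPN side):

ifconfig tun0
ping -c 3 <ip_of_a_host_behind_the_vpn>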

And tune it until it works correctly. Now you can start it in the GUI and you’re finished with the first task.

Configure OpenVPN in a way that it keeps running

For this we write a script that gets called every five minutes, checks if the OpenVPN connection is still working and, if not, restarts it.

cat > /root/checkAndReconnectOpenVPN

#!/bin/sh
# the tun0 interface only exists while the VPN is up; its ifconfig output
# contains the all-zero HWaddr string we grep for here
if echo `ifconfig tun0` | grep -q "00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00"
then
    echo "VPN up"
else
    # tell the Synology VPN framework we are reconnecting and trigger it
    echo 1 > /usr/syno/etc/synovpnclient/vpnc_connecting
    synovpnc reconnect --protocol=openvpn --name=XXXXXX
fi
exit 0

Replace XXXXXX with the name the VPN connection has in the GUI (I'm not sure whether it is case sensitive, so I kept the case anyway) and make the script executable:

chmod +x /root/checkAndReconnectOpenVPN

Try it out (e.g. once while OpenVPN is running and once while it is not):

/root/checkAndReconnectOpenVPN

Now we only need to add a line to the crontab file (important: it is >> and not >, otherwise you overwrite the existing entries)

cat >> /etc/crontab

then paste the following line and press CTRL-D on an empty line:

*/5     *       *       *       *       root    /root/checkAndReconnectOpenVPN

Now we only need to restart the cron daemon with the following commands:

/usr/syno/etc/rc.d/S04crond.sh stop
/usr/syno/etc/rc.d/S04crond.sh start

and we're finished … a certificate-based OpenVPN connection which also reconnects if the process fails/stops.

Howto capture traffic from a Mikrotik router on Linux

February 15, 2014

If you, like me, need to get some traffic from a Mikrotik router and /tool sniffer quick doesn't cut it because you need more than just the headers, the best way is to stream the traffic to a Linux box. The Mikrotik configuration is easy, just set the server you want to stream to:

/tool sniffer set streaming-enabled=yes streaming-server=<ip_of_the_server>

Configure a filter as you don’t want to stream everything:

/tool sniffer set filter-ip-address=<an_example_filter_ip>

and now you need only to start it with

/tool sniffer start

and check with

/tool sniffer print

if everything is running.

But now comes the part that is not documented that well. Searching through the internet I found some posts/articles on how to use Wireshark for capturing, but that did not work correctly – at least not for me.

[Screenshot: capture_filter]

If you configure the capture filter to udp port 37008 to get everything the router sends via TZSP, you will see lines like the following:

[Screenshot: fragmented_ip_protocol2]

If you now set the display filter to show only TZSP, these packets are not displayed any more. These packets contain information we need, and I was not able to configure Wireshark 1.10.2 to handle this correctly. If you know how to get it to work, please write a comment. So I changed my approach and used another program to write the packets to disk, to look at them later with Wireshark. And I found a program from Mikrotik directly which does exactly that. Go to the download page, download Trafr, then extract and use it like this:

$ tar xzf trafr.tgz
$ ./trafr
usage: trafr <file | -s> [ip_addr]
-s      write output to stdout. pipe it into tcpdump for example:
./trafr -s | /usr/sbin/tcpdump -r -
ip_addr use to filter one source router by ip address
$ ./trafr test.pcap <ip_of_the_router>

After you stop the program you can open the file in Wireshark and no packets are missing.
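
If you would rather watch the traffic live instead of writing it to a file first, the stdout mode from the usage text above can also be piped straight into Wireshark – a sketch, assuming Wireshark is installed on the capturing box and a graphical session is available:

# -k starts capturing immediately, -i - reads the packets from stdin
./trafr -s <ip_of_the_router> | wireshark -k -i -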

At the Tone, the Time will be.

January 12, 2014

Last week we got a mail at work from CERT.at saying that 2 IP addresses in our AS were probably running misconfigured NTP servers, which can be abused for DDoS attacks via NTP reflection. But first we need to start with the background.

Background

In the last weeks multiple DDoS attacks were carried out using NTP reflection. The attackers are making use of the monlist command, which is enabled on older versions of the NTP daemon. With that command it is possible to get a list of up to the last 600 hosts / IP addresses which connected to the NTP daemon. As NTP is UDP based, an attacker fakes the source IP address and the answer packet from the NTP daemon is sent to the victim. Besides hiding the attacker's IP address from the victim, this amplifies the attack, as the request packet is much smaller than the answer packet. The other problem with the monlist command is that it leaks potentially sensitive information (the IP addresses of the clients using NTP).

How to verify you’re vulnerable?

First you need to find your NTP servers – and that's not as easy as it seems. E.g. our 2 reported NTP servers were not our official NTP servers … but more about that later. To find NTP servers which are reachable from the Internet use e.g. nmap in a way like this:

sudo nmap -p 123 -sV -sU -sC -P0 <your_network/subnet_mask>

For a Linux NTP server this will return something like this:

Nmap scan report for xxxxx (xxxxxxxx)
Host is up (0.00016s latency).
PORT    STATE SERVICE VERSION
123/udp open  ntp     NTP v4
| ntp-info:
|   receive time stamp: 2014-01-12T11:02:30
|   version: ntpd [email protected] Wed Nov 24 19:02:17 UTC 2010 (1)
|   processor: x86_64
|   system: Linux/2.6.32-358.18.1.el6.x86_64
|   leap: 0
|   stratum: 3
|   precision: -24
|   rootdelay: 20.807
|   rootdispersion: 71.722
|   peer: 56121
|   refid: 91.206.8.36
|   reftime: 0xd67cedcd.b514b142
|   poll: 10
|   clock: 0xd67cf4be.9a6959a7
|   state: 4
|   offset: 0.042
|   frequency: -3.192
|   jitter: 0.847
|   noise: 1.548
|   stability: 0.163
|_  tai: 0

But you may also find something like this

Nmap scan report for xxxxx
Host is up (0.00017s latency).
PORT    STATE SERVICE VERSION
123/udp open  ntp     NTP v4
| ntp-info:
|_  receive time stamp: 2014-01-12T11:02:55

from a system you did not have on the list. After this, deactivate and/or filter the services you don't need – a running service which is not needed is always a bad idea. But surely you also want to know how to probe the NTP daemon for the monlist command – just like this:

ntpdc -n -c monlist <ip_address>

If the daemon is vulnerable you'll get a list of IP addresses which connected to the daemon. If the NTP daemon is running on a Linux, Cisco or Juniper system, take a look at this page which describes how to configure it correctly.
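
For a Linux ntpd those recommendations essentially boil down to a few lines in /etc/ntp.conf – a sketch only, adapt the restrict rules to your own clients and restart the daemon afterwards:

# deny mode 6/7 queries (including monlist) but keep serving time
restrict default kod nomodify notrap nopeer noquery
restrict -6 default kod nomodify notrap nopeer noquery
# on older ntpd versions additionally switch off the monlist data collection
disable monitor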

But I guess you're curious which systems were running on the 2 IP addresses we got reported? They were Alcatel-Lucent switches, which seem to have the NTP daemon activated by default. So it's really important to check all your IP addresses, not only the known NTP servers.

Howto install Teamviewer 9.x on Ubuntu >= 12.04 64bit (in my case 13.10)

January 2, 2014

Basically it is simple, but the 64bit part makes it a little bit more difficult. It is not obvious from the outside, but don't use the 64bit version. Why? As described here, distributions with multiarch support can't resolve the ia32-libs package. But there is a simple solution to this which is not described there, as adding an additional architecture doesn't feel right.

Install gdebi (gdebi lets you install local deb packages, resolving and installing their dependencies; apt does the same, but only for packages from remote (http, ftp) repositories):

sudo apt-get install gdebi

Download the 32bit version

wget http://www.teamviewer.com/download/teamviewer_linux.deb

Use gdebi to install it and resolve the dependencies:

$ sudo gdebi teamviewer_linux.deb
Reading package lists... Done
Building dependency tree
Reading state information... Done
Building data structures... Done
Building data structures... Done
Requires the installation of the following packages: libxtst6:i386

TeamViewer (Remote Control Application)
TeamViewer is a remote control application. TeamViewer provides easy, fast and secure remote access to Linux, Windows PCs, and Macs.
.
TeamViewer is free for personal use. You can use TeamViewer completely free of charge to access your private computers or to help your friends with their computer problems.
.
To buy a license for commercial use, please visit http://www.teamviewer.com
Do you want to install the software package? [y/N]:y
Get:1 http://at.archive.ubuntu.com/ubuntu/ saucy/main libxtst6 i386 2:1.2.2-1 [13.8 kB]
Fetched 13.8 kB in 0s (0 B/s)
Selecting previously unselected package libxtst6:i386.
(Reading database ... 250452 files and directories currently installed.)
Unpacking libxtst6:i386 (from .../libxtst6_2%3a1.2.2-1_i386.deb) ...
Setting up libxtst6:i386 (2:1.2.2-1) ...
Processing triggers for libc-bin ...
Selecting previously unselected package teamviewer.
(Reading database ... 250454 files and directories currently installed.)
Unpacking teamviewer (from teamviewer_linux.deb) ...
Setting up teamviewer (9.0.24147) .

Some thoughts on NFS

December 23, 2013

In the last weeks I was working (from time to time ;-) ) on a new setup of my NAS at home. During this I learned some stuff I didn't know about NFS, which I want to share here. I assume that you have the basic NFS stuff working, or know how it works, and want some additional tips and tricks.

  1. How do you query an NFS server for the exported directories and their settings? Easy, use showmount -e <servername>. Here is an example:
    $ showmount -e 10.x.x.x
    Export list for 10.x.x.x:
    /data/home                   10.x.x.0/255.255.255.0
  2. Use fsid – why?
    1. NFS needs to be able to identify each filesystem that it exports. Normally it will use a UUID for the filesystem (if the filesystem has such a thing) or the device number of the device holding the filesystem (if the filesystem is stored on a device). If you export a different file system (e.g. a replacement HDD), the fsid changes and your clients have to remount (as they have a stale NFS mount). Read more on this topic here; a short sketch of pinning an fsid follows after this list.
    2. Some file systems don’t have a UUID, e.g. encfs does not … use a separate fsid for each export!
  3. You can export multiple file systems with one mount request by the client if you use nohide. Normally, if a server exports two file systems, one of which is mounted on the other, then the client will have to mount both file systems explicitly to get access to them. If it just mounts the parent, it will see an empty directory at the place where the other file system is mounted. That file system is “hidden” – if you don't want that, use nohide, like in this example:
    /data/media     10.x.x.0/255.255.255.0(fsid=1,rw,async,no_subtree_check)
    /data/media/movies     10.x.x.0/255.255.255.0(fsid=2,rw,async,no_subtree_check,nohide)
  4. If you changed /etc/exports you don't need to restart your NFS daemon. If the init script provides a reload, that's good – if not, use exportfs -rav.
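
As mentioned in point 2, here is a minimal sketch of checking for a UUID and pinning an fsid – the device path, network and fsid value are just examples:

# does the underlying filesystem have a UUID at all? (FUSE filesystems like encfs usually don't)
blkid /dev/sdb1
# pin a stable fsid in /etc/exports, e.g.:
#   /data/home   10.x.x.0/255.255.255.0(fsid=10,rw,async,no_subtree_check)
# then re-export without restarting the daemon
exportfs -rav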

So far these are my newly learned tips and tricks … I think there was one more, but I can't remember it now. I will add it later if I remember.

 

Howto restart a suspended KVM guest which can’t be resumed

November 3, 2013

I just had the problem that I was not able to resume a suspended KVM guest. It happened when I powered my KVM server down to add a new hard disk. My server did not power the guests down but instead suspended them. I only realized that after I had no "Run" option any more … just a "Restore" to choose from.

[Screenshot: kvm_restore]

When I tried to "Restore" it, I got the following:

[Screenshot: kvm_error]

The problem was that I had removed a mapped USB device some time ago, but when resuming, KVM still checked for it. The solution was to remove the corrupted suspended virtual machine session so I could boot the machine again – naturally I lost the suspended session, but that was OK.
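
If you prefer the command line over the GUI message, trying to start the guest with virsh should surface the same underlying libvirt error, since virsh start restores from the managed save image if one exists:

virsh start <NameOfGuest>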

[root@kvmserver ~]# virsh managedsave-remove <NameOfGuest>
Removed managedsave image for domain servicesint

Maybe there is a graphical way to do it, but I didn’t look further – as it worked.

Howto get your external IP address via command line

September 28, 2013

I just had to find out the external IP address (as seen from the Internet) of a Linux server which is behind a NAT router. The normal way of going to WhatsMyIP didn't work, as I was only connected to this server via SSH. But the solution is quite easy thanks to the guys from ipecho, just type:

wget http://ipecho.net/plain -O - -q ; echo

That's so easy! And even faster than using a browser in the first place ….. :-)
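
If curl is installed instead of wget, the equivalent one-liner would be (just a variant of the same idea):

curl -s http://ipecho.net/plain ; echo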

Session verification fails after update from PHP 5.3 to 5.4 with suhosin

September 15, 2013

I've just upgraded my PHP installation from 5.3.25 to 5.4.19 and ran into the problem that some PHP programs on my server stopped working. The first one I found to have a problem was Tiny Tiny RSS, as I use it myself. :-) I was not able to log into it anymore, and in the log file I found the following:

[Sun Sep 15 11:00:31 2013] [warn] [client xxx.xxx.xxx.xxx] mod_fcgid: stderr: PHP Warning:  Unknown: Failed to write session data (user). Please verify that the current setting of session.save_path is correct (/var/xxxxxx/sessions/) in Unknown on line 0, referer: https://xxxxxxxx/

After searching for a really long time I found out that it worked again if I disabled suhosin (which is a module to harden PHP) by editing /etc/php.d/suhosin.ini and putting a ; in front of

extension=suhosin.so
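
To double-check whether suhosin is really disabled (or still loaded) after such a change, a quick look at the loaded modules helps – just a small sanity check via the CLI, which normally picks up the same /etc/php.d ini files as the FastCGI SAPI:

php -m | grep -i suhosin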

But this is not a secure way to handle this, therefore I searched further and found a pull request on GitHub which solves the problem. OK, you need to patch and compile the module … but technically it is fixed ;-)

 

Howto access MTP devices via USB on Ubuntu 12.04

September 1, 2013

A friend asked me how he can access his Nexus 7 via USB on his Ubuntu 12.04 notebook. With Android versions below 4.0 that was simple, as the device registered itself as a mass storage device. The problem now is that stock Ubuntu 12.04 does not support MTP via GVFS (the virtual filesystem of the GNOME desktop). Newer Ubuntu versions, e.g. 13.04, already have a GVFS version which supports MTP, but these are not LTS versions of Ubuntu, which is what I recommend for the average user. Luckily it is quite easy to install a newer version of GVFS on Ubuntu 12.04 (and 12.10) that does support it.

First you need to start a terminal. For this, click on the Dash home icon (1), then type "terminal" (2) and you'll see the terminal icon – click on it (3).

[Screenshot: terminal]

Now copy and paste the following into the terminal (the PC needs to be connected to the Internet while going through these steps):

sudo add-apt-repository ppa:langdalepl/gvfs-mtp

Enter your user password and then you'll be shown the following text:

You are about to add the following PPA to your system:
These builds of gvfs have my native mtp backend backported from gvfs master. Use this to easily access MTP based devices with Nautilus.
More info: https://launchpad.net/~langdalepl/+archive/gvfs-mtp
Press [ENTER] to continue or ctrl-c to cancel adding it

Hit the Enter key. After this is done you need to type the following command, which updates the package list:

sudo apt-get update

After this was successful you need to upgrade the installed packages with:

sudo apt-get upgrade

It should show something like this:

Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages will be upgraded:
gvfs gvfs:i386 gvfs-backends gvfs-bin gvfs-common gvfs-daemons gvfs-fuse gvfs-libs gvfs-libs:i386 libmtp-common libmtp-runtime libmtp9
12 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Need to get 4,193 kB of archives.
After this operation, 4,157 kB of additional disk space will be used.
Do you want to continue [Y/n]?

Just press Enter here (the Y is the default selection) to install the packages.

Now you just need to restart your PC; after logging in, connect your Android device to the PC and the file manager Nautilus will open with your USB device.
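
If nothing opens automatically, two quick command-line checks help to see whether the device is detected at all (just a sanity check; the grep pattern is only an example):

# is the device visible on the USB bus?
lsusb
# has GVFS mounted it via MTP?
gvfs-mount -l | grep -i mtp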
