Howto set up Debian 9 with Proxmox and containers using as few IPv4 and IPv6 addresses as possible

August 4, 2017

My current Linux Root-Server needs to be replaced with a newer Linux version and should also be much cheaper than the current one. So first I looked at what I don’t like about the current one:

  • It is expensive at about 70 Euros / month. The following is responsible for that:
    • My own HPE hardware with 16GB RAM and a software RAID (hardware RAID would be even more expensive) – iLO (or something like it) is a must for me 🙂
    • 16 additional IPv4 addresses for the virtualized containers and servers
    • Enough backup space to go back some days.
  • A base OS which makes it hard to run newer Linux versions in the containers (sure, old ones like CentOS 6 still get updates, but that will change)
    • It’s time to move to newer Linux versions in the containers
  • OpenVZ based containers, which are not mainstream anymore

Then I looked what surrounding conditions changed since I did setup my current server.

  • I’ve got IPv6 at home and 70% of my traffic is IPv6 (thanks to Google (especially YouTube) and Cloudflare)
  • IPv4 addresses got even more expensive for Root-Servers
  • I’m now using Cloudflare for most of the websites I host.
  • Cloudflare is reachable via IPv4 and IPv6 and can connect back either with IPv4 or IPv6 to my servers
  • With unprivileged containers the need to use KVM for security lessens
  • Hosting providers now offer really cheap KVM servers with dedicated, reserved CPUs.
  • KVM servers can host containers without a problem

This led to the decision to try the following setup:

  • A KVM based Server for less than 10 Euro / month at Netcup to try the concept
  • No additional IPv4 addresses, everything should work with only 1 IPv4 and a /64 IPv6 subnet
  • Base OS should be Debian 9 (“Stretch”)
  • For ease of configuration of the containers I will use the current Proxmox with LXC
  • Don’t use my own HTTP reverse proxy, but use Cloudflare exclusively for all websites to translate from IPv4 to IPv6

After that decision was reached I searched for howtos which would allow me to just set it up without doing much research. Sadly that didn’t work out. Sure, there are multiple howtos which explain how to set up Debian and Proxmox, but when you get into the nifty parts, e.g. using only minimal IP addresses, working around MAC address filters at the hosting providers (which is quite an important security function, BTW) and IPv6, they will tell you: you need more IP addresses, you get a really complicated setup, or they just ignore the point entirely.

As you’re reading this blog post you know that I found a way, so expect a complete documentation on how to set up such a server. I’ll concentrate on the relevant parts to allow you to set up a similar server. Of course I also did some security hardening, like a secure ssh setup with only public keys, the right ciphers, …, which I won’t cover here.

Setting up the OS

I used the Debian 9 minimal install, which Netcup provides, changed the password and hostname, changed the language to English (to be more exact, to C) and moved the SSH port to a non-standard port. The last one I did not so much for security but because of the constant scans on port 22, which flood the logs.

passwd
vim /etc/hosts
vim /etc/hostname
dpkg-reconfigure locales
vim /etc/ssh/sshd_config
/etc/init.d/ssh restart

I followed that by making sure no firewall is active, and I installed net-tools so I have netstat and ifconfig available.

apt install net-tools

At last I checked whether any packages needed an update.

apt update
apt upgrade

Installing Proxmox

First I checked that the hostname resolves to the correct IP address, as otherwise the install fails and you need to start from scratch.

hostname --ip-address
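If it doesn’t return the correct public IP, an /etc/hosts entry along these lines fixes it (the address and names here are placeholder examples, not from the original post):

127.0.0.1       localhost
186.143.121.230 proxmox.example.com proxmox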

Adding the Proxmox Repos to the system and installing the software:

echo "deb http://download.proxmox.com/debian/pve stretch pve-no-subscription" > /etc/apt/sources.list.d/pve-install-repo.list
wget http://download.proxmox.com/debian/proxmox-ve-release-5.x.gpg -O /etc/apt/trusted.gpg.d/proxmox-ve-release-5.x.gpg
apt update && apt dist-upgrade
apt install proxmox-ve postfix open-iscsi

After that I rebooted into the Proxmox kernel and removed some packages I didn’t need anymore:

apt remove os-prober linux-image-amd64 linux-image-4.9.0-3-amd64

Now I did my first login to the admin GUI at https://<hostname>:8006/ and enabled the Proxmox firewall.

Then I set the firewall rules for protecting the host (I did that for the whole datacenter even if I only have one server at this moment). Ping, the web GUI and ssh are allowed.
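Expressed in the Proxmox firewall config, that could look like this minimal sketch of /etc/pve/firewall/cluster.fw (the rule set is my reconstruction, not from the original screenshots; adjust the ssh port to the non-standard one chosen above):

[OPTIONS]
enable: 1

[RULES]
IN ACCEPT -p icmp # allow ping
IN ACCEPT -p tcp -dport 8006 # Proxmox web GUI
IN ACCEPT -p tcp -dport 22 # ssh, use your non-standard port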

I made sure with

iptables -L -xvn

that the firewall was running.

BTW, if you don’t like the nagging window at every login telling you that you need a license, and if this is only a testing machine as mine currently is, type the following:

sed -i.bak 's/NotFound/Active/g' /usr/share/perl5/PVE/API2/Subscription.pm && systemctl restart pveproxy.service

Now we need to configure the network (vmbr0) for our virtual systems, and this is the point where my howto goes in another direction. Normally you’re told to configure vmbr0 and put the physical interface into the bridge. This bridging mode is normally the easiest, but it won’t work here.

Routing instead of bridging

Normally you are told that if you use public IPv4 and IPv6 addresses in containers you should bridge them. Yes, that’s true, but there is one problem: LXC containers have their own MAC addresses. So if they send traffic via the bridge to the datacenter switch, the switch sees the virtual MAC address. In an internal company network on a physical host that is normally not a problem. In a datacenter where different people rent their servers, that’s not good security practice. Most hosting providers will filter the MAC addresses on the switch (sometimes additional IPv4 addresses come with the right to use additional MAC addresses, but we want to save money here 🙂 ). As this server is a KVM guest OS, the filtering is most likely part of the virtual switch (e.g. for VMware ESX this is even the default).

With ebtables it would be possible to configure a SNAT for the MAC addresses, but that will get really complicated really fast – trust me, when it comes to networking stuff and I say complicated, it gets really complicated fast. 🙂

So, if we can’t use bridging we need to use routing. Yes, the routing setup on the server is not as easy, but it is clean and easy to understand.

First we configure the physical interface in the admin GUI

Two settings are different from a normal setup. The provider most likely gave you a /23 or /24, but I use the subnet mask /32 (255.255.255.255), as I only want to talk to the default gateway and not to the other servers of other customers. If the switch thinks the traffic is OK, it can route it for me. The provider switch will defend its IP address against ARP spoofing, I’m quite sure, as otherwise an incorrect configuration by one customer would break the network for all customers – the provider would make that mistake only once. For IPv6 we do basically the same with a /128, but in this case we also want to reuse the /64 subnet on our second interface.

As I don’t have additional IPv4 addresses, I’ll use a local subnet to provide IPv4 connectivity to the containers (via NAT); the IPv6 address gets configured a second time with the /64 subnet mask. This setup allows us to route with only one /64 – we’re cheap … no extra money needed.

Now we reboot the server so that the /etc/network/interfaces config gets written. We need to add some additional settings there, so it looks like this:
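A minimal sketch of such a routed /etc/network/interfaces – the 2a03:5000:3d:1ee::/64 prefix and the 186.143.x addresses are the examples used in this post, while the ::2 host address, the fe80::1 gateway and the 10.0.0.0/24 NAT subnet are my assumptions, as is the conntrack-zone line, reconstructed from the description below:

auto lo
iface lo inet loopback

auto ens3
iface ens3 inet static
        address 186.143.121.230
        netmask 255.255.255.255
        pointopoint 186.143.121.1
        gateway 186.143.121.1

iface ens3 inet6 static
        address 2a03:5000:3d:1ee::2
        netmask 128
        gateway fe80::1 # assumption: link-local gateway, common at KVM hosters

auto vmbr0
iface vmbr0 inet static
        address 10.0.0.1
        netmask 255.255.255.0
        bridge_ports none
        bridge_stp off
        bridge_fd 0
        # 1st command: make container traffic pass the SNAT rule (LXC specialty)
        post-up iptables -t raw -I PREROUTING -i fwbr+ -j CT --zone 1
        # 2nd command: simple SNAT of the private container subnet to our public IPv4
        post-up iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -o ens3 -j SNAT --to-source 186.143.121.230
        # last 2 commands: delete the iptables rules when the network is stopped
        post-down iptables -t raw -D PREROUTING -i fwbr+ -j CT --zone 1
        post-down iptables -t nat -D POSTROUTING -s 10.0.0.0/24 -o ens3 -j SNAT --to-source 186.143.121.230

iface vmbr0 inet6 static
        address 2a03:5000:3d:1ee::2
        netmask 64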

The first post-up command is needed to make sure that traffic from the containers passes the second rule – it’s some kind of LXC specialty. The second post-up command is just a simple SNAT to your public IPv4 address. The two post-down commands make sure that the iptables rules get deleted when you stop the network.

Now we need to make sure that the container traffic gets routed, so we enable forwarding for both IPv4 and IPv6 in /etc/sysctl.conf.
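A sketch of the standard switches to enable there (uncomment or add them):

net.ipv4.ip_forward = 1
net.ipv6.conf.all.forwarding = 1

They can be applied without a reboot via sysctl -p.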

Now we’re almost done. One point remains: the switch/router which is our default gateway needs to be able to send packets to our containers. For IPv6 it does something similar to an ARP request, which is called neighbor discovery, and as the network of the containers is routed, we need to answer these requests on the host system.

Neighbor Discovery Protocol (NDP) Proxy

We could now do this by using proxy_ndp, the IPv6 variant of proxy_arp. First enable proxy_ndp by running:

sysctl -w net.ipv6.conf.all.proxy_ndp=1

You can enable this permanently by adding the following line to /etc/sysctl.conf:

net.ipv6.conf.all.proxy_ndp = 1

Then run:

ip -6 neigh add proxy 2a03:5000:3d:1ee::100 dev ens3

This tells the host Linux system to generate Neighbor Advertisement messages in response to Neighbor Solicitation messages for 2a03:5000:3d:1ee::100 (e.g. our container with ID 100) that arrive on ens3.

While proxy_arp can be used to proxy a whole subnet, this appears not to be the case with proxy_ndp. To protect the memory of upstream routers, you can only proxy defined addresses. That’s not a simple solution if we need to add an entry for every container. But we’re saved from that, as Debian 9 ships with a daemon that can proxy a whole subnet: ndppd. Let’s install and configure it:

apt install ndppd
cp /usr/share/doc/ndppd/ndppd.conf-dist /etc/ndppd.conf

and write a config like this:

route-ttl 30000
proxy ens3 {
    router no
    timeout 500
    ttl 30000
    rule 2a03:5000:3d:1ee::/64 {
        auto
    }
}

Now enable it by default and start it:

update-rc.d ndppd defaults
/etc/init.d/ndppd start

Now it is time to boot the system and create your first container.

Container setup

The container setup is easy, you just need to use the Proxmox host as the default gateway.
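As an illustration, the network config inside a container could then look like this minimal sketch (the ::100 address is the example from above; 10.0.0.100 and the host addresses ::2 / 10.0.0.1 are assumptions matching the earlier sketch):

auto eth0
iface eth0 inet static
        address 10.0.0.100
        netmask 255.255.255.0
        gateway 10.0.0.1 # the Proxmox host, which SNATs to the public IPv4

iface eth0 inet6 static
        address 2a03:5000:3d:1ee::100
        netmask 64
        gateway 2a03:5000:3d:1ee::2 # the Proxmox host's vmbr0 address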

As you see, the setup is quite cool and it allows you to create containers without thinking about it. A similar setup is also possible with IPv4 addresses. As I don’t need it, I’ll just describe it quickly here.

Short info for doing the same for an additional IPv4 subnet

The following needs to be added to /etc/network/interfaces:

iface ens3 inet static
        pointopoint 186.143.121.1

iface vmbr0 inet static
        address 186.143.121.230 # our host will be the gateway for all containers
        netmask 255.255.255.255
        # add all single IPs from your /29 subnet
        up route add -host 186.143.34.56 dev vmbr0
        up route add -host 186.143.34.57 dev vmbr0
        up route add -host 186.143.34.58 dev vmbr0
        up route add -host 186.143.34.59 dev vmbr0
        up route add -host 186.143.34.60 dev vmbr0
        up route add -host 186.143.34.61 dev vmbr0
        up route add -host 186.143.34.62 dev vmbr0
        up route add -host 186.143.34.63 dev vmbr0
        .......

We’re reusing the ens3 IP address on vmbr0. Normally we would configure our additional IPv4 network, e.g. a /29, directly. The problem with that straightforward setup would be that we would lose 2 IP addresses (network base and broadcast). Also the pointopoint directive is important: it tells our host to send all requests to the datacenter IPv4 gateway – even if we want to talk to our neighbors later.

Then, for the container setup, you just need to replace the IPv4 config with the following:

auto eth0
iface eth0 inet static
        address 186.143.34.56 # any IP of our /29 subnet
        netmask 255.255.255.255
        gateway 186.143.121.230 # our host machine will do the job!
        pointopoint 186.143.121.230

Hope that saved you some time setting up your own system!

Accessing Mikrotik RouterOS via MAC Telnet from a Linux box

November 18, 2016

If you know Mikrotik routers you know that you’re able to access them via MAC Telnet (see here for more details) over Layer 2 with Winbox. But running Winbox via Wine on Linux is not that great for using MAC Telnet, and there is a better way: just use MAC-Telnet from Håkon Nessjøen. On Ubuntu/Debian you can just install the package with

sudo apt-get install mactelnet-client

and see its features like this:

$ mactelnet -h
MAC-Telnet 0.4.2
Usage: mactelnet <MAC|identity> [-h] [-n] [-a <path>] [-A] [-t <timeout>] [-u <user>] [-p <password>] [-U <user>] | -l [-B] [-t <timeout>]

Parameters:
  MAC            MAC-Address of the RouterOS/mactelnetd device. Use mndp to
                 discover it.
  identity       The identity/name of your destination device. Uses
                 MNDP protocol to find it.
  -l             List/Search for routers nearby (MNDP). You may use -t to set timeout.
  -B             Batch mode. Use computer readable output (CSV), for use with -l.
  -n             Do not use broadcast packets. Less insecure but requires
                 root privileges.
  -a <path>      Use specified path instead of the default: ~/.mactelnet for autologin config file.
  -A             Disable autologin feature.
  -t <timeout>   Amount of seconds to wait for a response on each interface.
  -u <user>      Specify username on command line.
  -p <password>  Specify password on command line.
  -U <user>      Drop privileges to this user. Used in conjunction with -n
                 for security.
  -q             Quiet mode.
  -h             This help.

So let’s give it a try, first by searching for my home router:

$ mactelnet -l
Searching for MikroTik routers... Abort with CTRL+C.
IP MAC-Address Identity (platform version hardware) uptime
10.x.x.x 0:xx:xx:xx:xx:xx jumpgate (MikroTik x.x.x. xxxx) up 139 days 5 hours XXXXX-XXXX vlanInternal

and then we’ll connect

$ mactelnet 0:xx:xx:xx:xx:xx

and we’re connected.

Howto live-sniffer traffic on a remote Linux system with Wireshark

October 2, 2016

You ask why you would need this at all? Easy: sometimes a tcpdump is not enough or not that easy to use.

Sure, it’s quite easy to sniff on a remote Linux box with tcpdump into a file and copy that over via scp to the local system to take a closer look at the traffic. But being used to the feature of my Mikrotik routers to stream traffic live to my local Wireshark, I thought something similar must also be possible with normal Linux boxes. And sure it is.

We just use ssh to pipe the captured traffic through to the local Wireshark. Sure, this is not the perfect method for GBytes of traffic, but often you just need a few packets to check something or to monitor some low volume traffic. Anyway, first we need to make sure that Wireshark is able to execute the dumpcap command as our current user, so we check the permissions:

ll /usr/bin/dumpcap
-rwxr-xr-- 1 root wireshark 88272 Apr 8 11:53 /usr/bin/dumpcap*

So on Ubuntu/Debian we need to add ourselves to the wireshark group and check that it got applied with the id command (you need to log off or start a new session with su - $user beforehand).
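A minimal sketch of those two steps (standard Debian/Ubuntu group name):

sudo usermod -a -G wireshark $USER
su - $USER   # or log off and on again
id           # the output should now contain the wireshark group

Now you can simply call: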

ssh root@<remote-host> 'tcpdump -f -i eth0 -w - not port 22' | wireshark -k -i -

[Screenshot: wireshark]

And now the really cool part comes. I’m using Ubiquiti UniFi access points in multiple setups and I sometimes need to look at the traffic a station exchanges with the access point on the wireless interface. With those commands I’m able to ssh into the access point and look at the live traffic of an access point and a station which are hundreds of kilometres away. You can ssh into the AP with your normal web GUI user (if not configured differently) and the bridge config looks like this:

BZ.v3.7.8# brctl show
bridge name     bridge id               STP enabled     interfaces
br0             ffff.00272250d9cf       no              ath0
                                                        ath1
                                                        ath2
                                                        eth0

You can choose one of those interfaces (or the bridge) for normal IP traffic, or go one level deeper with wifi0, which looks like this:

ssh <user>@<accesspoint> 'tcpdump -f -i wifi0 -w -' | wireshark -k -i -

[Screenshot: wireshark2]

That’s cool!?! 😉

Block Ransomware botnet C&C traffic with a Mikrotik router

March 14, 2016

In my last blog post I wrote about blocking, detecting and mitigating the Locky ransomware. There I referenced an earlier blog post of mine which shows how to block traffic to/from the Tor network. This blog post combines both: a way to block ransomware botnet C&C traffic on a Mikrotik router. The basis is the block lists from Abuse.ch, which also provide a nice statistic – Locky is not even the most common ransomware today.

[Screenshot: ransomware]

Linux part

You also need a small Linux/Unix server to help. This server needs to be a trustworthy one, as the router executes a script this server generates. This is required because RouterOS is only able to parse text files up to 4096 bytes by itself, and the IP address and domain lists are longer.

So first we create the script /usr/local/sbin/generateMalwareBlockScripts.py on the Linux server by downloading the following Python script. Open the file and change the paths to your liking. The file name path works on CentOS; on Ubuntu you need to remove the html directory. Now make the file executable

chmod 755 /usr/local/sbin/generateMalwareBlockScripts.py

and execute it

/usr/local/sbin/generateMalwareBlockScripts.py

No output is good. Make sure that the file is reachable via HTTP (e.g. install httpd on CentOS) from the router. If everything works, make sure that the script is called once every hour to update the list, e.g. by placing a symlink in /etc/cron.hourly:

ln -s /usr/local/sbin/generateMalwareBlockScripts.py /etc/cron.hourly/generateMalwareBlockScripts.py
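For illustration, the generated .rsc files contain plain RouterOS commands; a sketch with made-up example entries could look like this (note the fake DNS target 10.255.255.255, which the firewall rule further down matches on):

/ip firewall address-list
add list=addressListMalware address=192.0.2.10
add list=addressListMalware address=198.51.100.23

/ip dns static
add name=evil.example.com address=10.255.255.255 comment=addMalwareDomains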

Mikrotik part

Copy and paste the following to get the IP address script onto the router:

/system script
add name=scriptUpdateMalwareIPs owner=admin policy=ftp,reboot,read,write,policy,test,password,sniff,sensitive source="# Script which will download a script which adds the malware IP addresses to an address-list\
\n# Using a script to add this is required as RouterOS can only parse 4096 byte files, and the list is longer\
\n# Written by Robert Penz <[email protected]> \
\n# Released under GPL version 3\
\n\
\n# get the \"add script\"\
\n/tool fetch url=\"http://10.xxx.xxx.xxx/addMalwareIPs.rsc\" mode=http\
\n:log info \"Downloaded addMalwareIPs.rsc\"\
\n\
\n# remove the old entries\
\n/ip firewall address-list remove [/ip firewall address-list find list=addressListMalware]\
\n\
\n# import the new entries\
\n/import file-name=addMalwareIPs.rsc\
\n:log info \"Removed old IP addresses and added new ones\"\
\n"

and copy and paste the following for the DNS filtering script – surely you can combine them … I left them separate as maybe someone needs only one part:

/system script
add name=scriptUpdateMalwareDomains owner=admin policy=ftp,reboot,read,write,policy,test,password,sniff,sensitive source="# Script which will download a script which adds the malware domains as static DNS entry\
\n# Using a script to add this is required as RouterOS can only parse 4096 byte files, and the list is longer\
\n# Written by Robert Penz <[email protected]> \
\n# Released under GPL version 3\
\n\
\n# get the \"add script\"\
\n/tool fetch url=\"http://10.xxx.xxx.xxx/addMalwareDomains.rsc\" mode=http\
\n:log info \"Downloaded addMalwareDomains.rsc\"\
\n\
\n# remove the old entries\
\n/ip dns static remove [/ip dns static find comment~\"addMalwareDomains\"]\
\n\
\n# import the new entries\
\n/import file-name=addMalwareDomains.rsc\
\n:log info \"Removed old domains and added new ones\"\
\n"

For the first test run, use the following command

/system script run scriptUpdateMalwareIPs

and

/system script run scriptUpdateMalwareDomains

if you didn’t get an error

/ip firewall address-list print

and

/ip dns static print

should show many entries. Now you only need to run the scripts once an hour, which the following commands do:

/system scheduler add interval=1h name=schedulerUpdateMalwareIPs on-event=scriptUpdateMalwareIPs start-date=nov/30/2014 start-time=00:05:00

and

/system scheduler add interval=1h name=schedulerUpdateMalwareDomains on-event=scriptUpdateMalwareDomains start-date=nov/30/2014 start-time=00:10:00

You can now use the address list and DNS blacklist in various ways … the simplest is the following:

/ip firewall filter
add chain=forward comment="just the answer packets --> pass" connection-state=established
add chain=forward comment="just the answer packets --> pass" connection-state=related
add action=reject chain=forward comment="no Traffic to malware IP addresses" dst-address-list=addressListMalware log=yes log-prefix=malwareIP out-interface=pppoeDslInternet
add action=reject chain=forward comment="report Traffic to DNS fake IP address" dst-address=10.255.255.255 log=yes log-prefix=malwareDNS out-interface=pppoeDslInternet
add chain=forward comment="everything from internal is ok --> pass" in-interface=InternalInterface

If a client generates traffic to such DNS names or IP addresses, you’ll get the following in your log (and the traffic gets blocked):

20:07:16 firewall,info malwareIP forward: in:xxx out:pppoeDslInternet, src-mac xx:xx:xx:xx:xx:xx, proto ICMP (type 8, code 0), 10.xxx.xxx.xxx->104.xxx.xxx.xxx, len 84

or

20:09:34 firewall,info malwareDNS forward: in:xxx out:pppoeDslInternet, src-mac xx:xx:xx:xx:xx:xx, proto ICMP (type 8, code 0), 10.xxx.xxx.xxx ->10.255.255.255, len 84

PS: The Python script is written in a way that easily allows you to add other block lists too … e.g. I added the Feodo block list from Abuse.ch.

Setup Let’s encrypt TLS certificate for Proxmox [Update]

February 21, 2016

The graphical interface of Proxmox runs on port 8006 and uses HTTPS. By default this is a self-signed certificate, which is a problem when you log in from a client for the first time: you can’t be sure that there is no MitM attack going on. But there is a solution for this: Let’s Encrypt.

First you need to install git

apt-get install git

download the client software

cd /root
git clone https://github.com/letsencrypt/letsencrypt

and now you need to “patch” it, because with running containers the check for a free port 80 will most likely fail, as a container running a web server is quite common. Open the file /root/letsencrypt/letsencrypt/plugins/util.py with an editor and place

return False

as the first command in the already_listening method. Make sure that it is aligned with the lines above and below, as Python requires that. It should look like this:

[Screenshot: the patched already_listening method in util.py]
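In essence the patch boils down to this (a sketch – the method signature is from the letsencrypt client of that time and should be treated as an assumption):

def already_listening(port):
    # patched: always report the port as free, so the standalone
    # plugin does not abort when a container already uses port 80
    return False
    # ... the original body below is now unreachable ...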

PS: If you started to read this article after calling ./letsencrypt-auto for the first time, you also need to patch the following file the same way:

/root/.local/share/letsencrypt/local/lib/python2.7/site-packages/letsencrypt/plugins/util.py

[Update] In newer versions the client is called certbot, and so the path changed to

/root/.local/share/letsencrypt/local/lib/python2.7/site-packages/certbot/plugins/util.py

[/Update]

Now make sure that port 80 is not firewalled and therefore reachable from the Internet, and call the following commands:

cd /root/letsencrypt/
./letsencrypt-auto certonly --standalone --standalone-supported-challenges http-01 -d <dnsname>

Replace <dnsname> with the DNS name you use to connect to the management web GUI. If all worked, you should see the following:

IMPORTANT NOTES:
 - Congratulations! Your certificate and chain have been saved at
   /etc/letsencrypt/live/<hostname>/fullchain.pem. Your cert
   will expire on 2016-05-21. To obtain a new version of the
   certificate in the future, simply run Let's Encrypt again.
 - If you like Let's Encrypt, please consider supporting our work by:
   ....

Now we save the old certificate files with these commands …

mv /etc/pve/pve-root-ca.pem /etc/pve/pve-root-ca.pem.orig
mv /etc/pve/local/pve-ssl.key /etc/pve/local/pve-ssl.key.orig
mv /etc/pve/local/pve-ssl.pem /etc/pve/local/pve-ssl.pem.orig

and copying the new ones to the correct place …

cp /etc/letsencrypt/live/<hostname>/chain.pem /etc/pve/pve-root-ca.pem
cp /etc/letsencrypt/live/<hostname>/privkey.pem /etc/pve/local/pve-ssl.key
cp /etc/letsencrypt/live/<hostname>/cert.pem /etc/pve/local/pve-ssl.pem

followed by a restart of the processes:

service pveproxy restart
service pvedaemon restart


The last 5 commands are needed because the special file system Proxmox uses for /etc/pve does not support symlinks. You need to execute those 5 commands after each renewal. The client option for renewing is “renew”, and you can/should automate the renewal process.
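A sketch of how such an automated renewal could look (paths as used above, <hostname> to be replaced; whether you call letsencrypt-auto or certbot depends on your client version):

#!/bin/sh
# renew the certificate and copy it into the Proxmox locations
cd /root/letsencrypt && ./letsencrypt-auto renew
cp /etc/letsencrypt/live/<hostname>/chain.pem /etc/pve/pve-root-ca.pem
cp /etc/letsencrypt/live/<hostname>/privkey.pem /etc/pve/local/pve-ssl.key
cp /etc/letsencrypt/live/<hostname>/cert.pem /etc/pve/local/pve-ssl.pem
service pveproxy restart
service pvedaemon restart

Run it e.g. monthly from cron.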

Unifi upgrade 2.4.6 to 3.2.10: maps not working

July 8, 2015

So after the migration itself worked and I had updated the access points, I looked into the problem that my maps did not work anymore. Every map showed only white and I was not able to add new maps. After long googling I found part of the answer here. My solution is a bit different and works on the fly:

  1. Unifi management needs to be running
  2. check where your mongod is running … on my system it’s port 27117
    # netstat -pnl | grep mongod
    tcp 0 0 127.0.0.1:27117 0.0.0.0:* LISTEN 16760/bin/mongod
  3. login with /usr/bin/mongo --port 27117
  4. and type the following
  5. use ace followed by db.map.chunks.dropIndex("files_id_1_n_1")
  6. choose a new map in the drop down box and it should work again

The full communication looks like this:

# /usr/bin/mongo --port 27117
MongoDB shell version: 2.6.10
connecting to: 127.0.0.1:27117/test
> use ace
switched to db ace
> db.map.chunks.dropIndex("files_id_1_n_1")
{ "nIndexesWas" : 2, "ok" : 1 }
>
bye

Unifi upgrade 2.4.6 to 3.2.10: exception: remove needs a query at src/mongo/shell/collection.js

If you try to upgrade a Ubiquiti Networks UniFi system from version 2.4.6 to 3.2.10, it is possible that you’ll run into the following problem:

Exception in thread "launcher" org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'class.super': Instantiation of bean failed; nested exception is org.springframework.beans.factory.BeanDefinitionStoreException: Factory method [public final com.ubnt.A.new.B com.ubnt.A.new$$EnhancerByCGLIB$$ad7756e2.class.super()] threw exception; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'Ø00000': Invocation of init method failed; nested exception is com.mongodb.CommandResult$CommandFailure: command failed [$eval]: { "serverUsed" : "/127.0.0.1:27117" , "errmsg" : "exception: remove needs a query at src/mongo/shell/collection.js:299" , "code" : 16722 , "ok" : 0.0}

when trying to start it like this

/usr/bin/java -jar /opt/UniFi/lib/ace.jar start

Searching through the Internet shows that more people have this problem, but a working solution is not posted anywhere. So here it is:

The reason for the problem is quite easily found here: basically, the syntax of the command is not correct. So I started to search through the lib directory for a file that contains this incorrect string. It is within /opt/UniFi/lib/ace.jar, so I installed the jar command line utility (on CentOS 6)

yum install java-1.7.0-openjdk-devel

and extracted all files to find which one contains it. As I already found it, it’s easier for you – just type:

jar xf ace.jar com/ubnt/A/ooOO/OOoO.class

Now you need a Java class editor – we use this Java one. Download and extract it, and start it like this:

java -jar ce.jar

Now you need to find and change the values:

  1. Click on “Constant Pool”
  2. Type db.cache_device.remove in the search field and find line 459
  3. Activate the “Modify Mode” and change the values

[Screenshot: classeditor]

In the end it should look like this (the second one is a link to the first, just update it):

[Screenshot: classeditor2]

Now save the file and update the jar file:

jar uf ace.jar com/ubnt/A/ooOO/OOoO.class

Now the restart and migration should work. Hope this helps others … it took me one hour to find this solution, so I hope it’s now faster for you. 😉

A practical example how broken MD5 really is

November 5, 2014

Nat McHugh did a wonderful post with two completely different monochrome pictures which have the same MD5 sum. Take a look! MD5 as a secure hash function should provide the properties shown in this Wikipedia article. But as Nat says in his own words:

The two images above clearly demonstrate that MD5 lacks the final property (Collision resistance). MD5 is broken as a cryptographic hash function.

I believe he is correct: nothing shows better how broken MD5 is than two images with the same MD5 sum, and really nobody should use it anymore for security reasons. Using it for checking file corruption during transfer is OK, but the hashes for ISO files or packages of Linux distributions you download should not be checked with MD5 – CPU power is cheap nowadays. The big ones like Ubuntu, Debian and CentOS have already changed to also provide SHA1 and SHA256 hashes for all files. OpenSuse provides MD5 and SHA1 … SHA256 would be better too. Anyway, use SHA256 where possible to verify your downloads!!
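Checking a download is a one-liner; for example (the file name is just illustrative):

sha256sum debian-9.1.0-amd64-netinst.iso
# compare the printed hash with the one in the distribution's SHA256SUMS file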

[Screenshot: ubuntu]

[Screenshot: debian]

Howto get an A+-Rating at Qualys SSL Labs with Apache 2.2

November 1, 2014

One of my HTTPS servers currently gets an A- in the Qualys SSL Labs test, as I’m running Ubuntu 12.04 LTS with Apache 2.2, which does not support the ECDHE cipher suites required for Perfect Forward Secrecy with Internet Explorer.

[Screenshot: aminusrating]

Upgrading to Ubuntu 14.04 needs some major rework for which I currently don’t have the time. But there is now a trick to get that A rating, and it is called TLS Interposer. It uses LD_PRELOAD to intercept the OpenSSL API calls and adds some additional features and security settings.

Currently there is no deb package for Ubuntu 12.04, so we need to compile it ourselves:

wget https://github.com/Netfuture/tlsinterposer/archive/master.zip
unzip master.zip
cd tlsinterposer-master/
make

Possible errors:

  • make: cc: Command not found -> install gcc (apt-get install gcc)
  • tlsinterposer.c:29:25: error: openssl/ssl.h: No such file or directory -> install the OpenSSL development package (apt-get install libssl-dev)

Now we only need a make install and we’re ready to try it. For this we add

export LD_PRELOAD=/usr/local/lib/libtlsinterposer.so

at the end of

/etc/apache2/envvars

and restart Apache with

/etc/init.d/apache2 restart

And you get

[Screenshot: aplusrating]

Success!!!

You also need the following for an A+ rating:

  • The following still needs to be in the Apache config:
    SSLProtocol ALL -SSLv2 -SSLv3
    SSLHonorCipherOrder On
    SSLCompression Off
    # SSLCipherSuite settings will be ignored
  • You need HSTS configured; check this link for how to enable it on Apache 2.2 – see the sketch after this list
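A sketch of the usual HSTS header for Apache (requires mod_headers; the max-age value is a common choice, not taken from the linked article):

# inside the HTTPS vhost
Header always set Strict-Transport-Security "max-age=31536000; includeSubDomains"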

 

So this is with Ubuntu 12.04 … I’ve tried the same with CentOS 6, but I didn’t have success. The following problems arose:

1. Makefile

The Makefile does not support the names of the ssl libs on CentOS 6 – when you compile, you get:

tlsinterposer.c:85: error: ‘DEFAULT_SSLLIB’ undeclared here (not in a function)

The Makefile has a regex that does not work with CentOS 6. I changed the following

# diff Makefile.orig Makefile
32c32
<       ldconfig -p | sed -n -e 's/^\t*\(libssl\.so\.[0-9]\.[0-9]\.[0-9]\).*/#define DEFAULT_SSLLIB "\1"/p' > $@
---
>       ldconfig -p | sed -n -e 's/^\t*\(libssl\.so\.[0-9][0-9]\).*/#define DEFAULT_SSLLIB "\1"/p' > $@

and deleted the file ssl-version.h and called make again, and it compiled. I’ve reported that to the author.

2. Application's cipher is not overwritten

Loading the TLS Interposer by putting it in /etc/sysconfig/httpd and then doing an /etc/init.d/httpd restart worked, but the application’s ciphers didn’t get changed. I could verify that with the test scripts which come with TLS Interposer:

# ./run_tests
gcc -O2 -Wall -Wextra simple_server.c -lcrypto -lssl -o simple_server
Test 1a pass
Test 1b FAIL!
Test 1c pass
Test 1d pass
Test 1e pass
Test 2a pass
Test 2b pass
Test 2c pass
Test 3a pass
Test 3b pass
Test 4a pass
Test 4b pass
Test 4c pass
Test 5a pass
Test 5b pass
Test 5c pass
Test 5d pass

I’ve reported that to the author as well. If I get an update on this, I’ll report about it in my blog.

Take care if using Ubuntu 12.04 as a client – TLS 1.2 is not enabled by default

October 24, 2014

It got fixed with Ubuntu 14.04, but 12.04 is still supported and many people are still using it – and even with the OpenSSL package update (2014-10-02), TLS 1.2 is not enabled by default. Take a look at this bug report and the statement from Marc Deslauriers (Ubuntu Security Engineer):

That USN doesn’t re-enable TLSv1.2 by default for clients in Ubuntu 12.04. It simply fixes an issue if someone _forced_ TLSv1.2 to be enabled.
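You can check the difference yourself with openssl s_client (a sketch; the target host is just an example). The first call shows what gets negotiated by default, the second forces TLS 1.2:

openssl s_client -connect www.google.com:443 < /dev/null 2>/dev/null | grep -i protocol
openssl s_client -connect www.google.com:443 -tls1_2 < /dev/null 2>/dev/null | grep -i protocol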

You’re asking why we got into that problem in the first place … Marc tells us that too:

Ubuntu 12.04 contains openssl 1.0.1, which supports TLS v1.2. Unfortunately, because of the large number of sites which incorrectly handled TLS v1.2 negotiation, we had to disable TLS v1.2 on the client.

So someone thought again that he is smarter than the OpenSSL guys … and this was not the first time … let’s remember this “optimization” of OpenSSL by the Debian guys. Could they please clean up their mess and enable TLS 1.2 by default, as in 14.04?
