Kpartx: a tool for mounting partitions within an image file

July 12, 2008

Kpartx can be used to set up device mappings for the partitions of any partitioned block device. It is part of the Linux multipath-tools. With kpartx -l imagefile you get an overview of the partitions in the image file, and with kpartx -a imagefile the partitions become accessible via /dev/mapper/loop0pX (X is the number of the partition). You can then mount one with mount /dev/mapper/loop0pX /mnt/ -o ro (the loop option is not needed, as the mapping already goes through a loop device). After unmounting you can disconnect the mapper devices with kpartx -d imagefile.
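A typical session looks like this (a sketch; disk.img is an example name for a raw disk image whose first partition holds a filesystem, and all of it needs root):

```shell
# show the partitions inside the image
kpartx -l disk.img

# create the /dev/mapper/loop0pX device nodes
kpartx -a disk.img

# mount the first partition read-only and look around
mount /dev/mapper/loop0p1 /mnt/ -o ro
ls /mnt/

# clean up: unmount, then remove the mappings
umount /mnt/
kpartx -d disk.img
```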

There are packages for Debian and Ubuntu.

Fix for Kopete 0.12.7 to work again with ICQ

July 2, 2008

Yesterday my Kopete stopped working with the ICQ network. ICQ told me that my client version is too old. Here is the fix to make it work again: look in the ~/.kde/share/config/kopeterc file and change the values of the variables to the following (which are from trunk):

[ICQVersion]
Build=0x17AB
ClientId=0x010A
ClientString=ICQ Client
Country=us
Lang=en
Major=0x0006
Minor=0x0000
Other=0x00007535
Point=0x0000

After a restart of Kopete everything works again for me.

Zattoo as backup for satellite TV

June 25, 2008

Today is the first semifinal of the EURO 2008 (=soccer – Germany vs Turkey), which is a big deal here in Europe, and it was a really sunny day. But just 1h before the game started it began raining heavily in my home town, together with lightning. This led to bad reception on my satellite TV. As the internet via ADSL was working without any problems, I started searching for a backup solution and found Zattoo. And I couldn’t believe it: they support Linux, especially (K)Ubuntu! Wow! I downloaded the .deb file for the 3.20 version, but it didn’t work; I got:

robert@darksun:~$ zattoo_player
(process:9626): GLib-GObject-CRITICAL **: /build/buildd/glib2.0-2.14.1/gobject/gtype.c:2242: initialization assertion failed, use IA__g_type_init() prior to this function
(process:9626): GLib-GObject-CRITICAL **: g_object_new: assertion `G_TYPE_IS_OBJECT (object_type)' failed
(process:9626): GLib-GObject-CRITICAL **: g_object_ref: assertion `G_IS_OBJECT (object)' failed

I searched a little bit on the internet and found out that version 3.11 should work, which I downloaded from here. And yes, it worked without any problems. One important side note: your IP address needs to be in one of the countries for which the service is available. Ah, and as I use Kubuntu and not Ubuntu, I installed the following packages before installing Zattoo:

apt-get install libgtkglext1 libgnome-keyring0 libgnomeui-0 libcurl3 libxul0d libgdk-pixbuf-dev

Do you know what a Host Protected Area (HPA) is?

June 17, 2008

It is sometimes also called Hidden Protected Area, and it is an area of your hard disk which is normally not visible to the operating system and therefore to applications. It was first introduced in the ATA-4 standard and is defined in ATA-5 as an optional feature, which is supported by most modern hard disks. The normal use case is system recovery and the backup of important configuration data.

So why is this security relevant? For law enforcement agencies and forensic experts it is important to detect HPAs and recover data from them. For one, someone could hide sensitive data in it; also, there could be evidence or traces there if the owner does not know about the HPA.

But it is also important for any business and home user: e.g. if you want to fully wipe your hard disk, you need to make sure you also wipe the HPA. If you’re a user of a current Linux kernel you’re lucky – the kernel will temporarily deactivate the HPA during boot, so you can overwrite everything without problems.

Here are some links which will help you detect / remove the HPA from your hard disk:
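A quick check is also possible with hdparm (a sketch; /dev/sda and the sector count are example values, and this needs root):

```shell
# report the current and native max sector count;
# if current < native, an HPA is configured
hdparm -N /dev/sda

# temporarily disable the HPA by setting the visible size to the
# native maximum (example value; prefix the number with "p" to
# make the change permanent)
hdparm -N 156301488 /dev/sda
```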

The fallout of the Debian OpenSSL security problem

May 24, 2008

Today I don’t want to write about how the Debian security problem occurred, or whether it was the fault of the Debian maintainer or the OpenSSL guys. We’re all human, so errors can occur, but this shows a much bigger problem we have: our SSL infrastructure!!

I’m quite sure you read about the weak SSL key whitehouse.gov had and that the White House does not handle their SSL stuff themselves; instead they use Akamai for this. If you don’t know Akamai: it is a major content distribution network. They have tens of thousands of servers distributed all over the world, and the content is served by the closest server, to provide higher download speeds for the customers and to make a DDoS attack much harder if not impossible. Their customer list includes Microsoft, the New York Times and so on.

So basically the SSL keys of the Akamai servers were weak, and it was possible to take their public keys (they are sent as part of the SSL handshake) and calculate the private ones from them (there are only 32k possible keys). I know that at least one person did it and sent the keys to the CCC, which verified the authenticity of the keys. Sure, Akamai replaced their keys immediately. BUT. There is no way to revoke the SSL keys!!

What most people don’t seem to understand so far is that these keys are signed by a CA which is trusted by every browser. This means that a man in the middle attack can easily be performed with this. As ATI is also a customer of Akamai, someone could send you a Trojan horse instead of the newest ATI driver you wanted. SSL has in theory two defenses against this:

  • Keys expire; the Akamai key does in October 2008
  • Originally SSL had the idea that CAs publish a list of compromised keys (a revocation list), and as part of the SSL handshake the browser should check if a key is on the list. The problem is that this does not scale and is a privacy problem too. Browsers don’t implement this, or don’t activate it by default.

So we’re out in the open with this key until this fall, but that won’t be the end of it, as e.g. GoDaddy allows signing keys for a longer period of time, e.g. 3 years. And the same problem occurs if a private key is leaked by other means. The whole foundation of our web security infrastructure is built on sand. We need something new!!
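By the way, you can look at the public key a server presents during the handshake yourself; a sketch with openssl (www.example.com is just a placeholder for any HTTPS host):

```shell
# grab the server certificate from the TLS handshake and
# print the RSA modulus of its public key
echo | openssl s_client -connect www.example.com:443 2>/dev/null \
    | openssl x509 -noout -modulus
```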

PS: I want to stress that Akamai did nothing wrong here. They did everything right and still have a problem!

iptables dynamic port script for NFS

May 10, 2008

Some days ago I talked with a friend (here a link to his homepage) about firewalls and file servers, and he told me he has an iptables script which adapts to the NFS ports automatically. I asked him for this script and here it is. Thx Hannes for the script.


# rpcinfo -p prints a list of all registered RPC programs
# sed -e '1d' removes the headline
# tr -s ' ' '\t' replaces repeated spaces with a single tab
# cut -f 4,5 keeps only the protocol and port columns
# sort | uniq removes duplicate lines
# now each line holds the needed protocol and port, but the for loop
# splits these lines into single words, so we have to store the protocol
for l in `rpcinfo -p | sed -e '1d' | tr -s ' ' '\t' | cut -f 4,5 | sort | uniq`
do
  case $l in
    tcp)
      SYN=--syn
      PROTOCOL=$l
      ;;
    udp)
      SYN=
      PROTOCOL=$l
      ;;
    *)
      iptables -A INPUT -p $PROTOCOL --dport $l $SYN -j ACCEPT
      ;;
  esac
done
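To see what the loop actually iterates over, you can feed the pipeline some canned rpcinfo output (a made-up sample; real output differs per host):

```shell
# a hypothetical sample of what `rpcinfo -p` prints
sample='   program vers proto   port  service
    100000    2   tcp    111  portmapper
    100000    2   udp    111  portmapper
    100003    3   tcp   2049  nfs'

# same pipeline as in the script: each output line is "proto<TAB>port",
# which the for loop then splits into single words
printf '%s\n' "$sample" | sed -e '1d' | tr -s ' ' '\t' | cut -f 4,5 | sort | uniq
```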

Kubuntu 8.04 hardy additional packages install script

This script is for my friends, most of whom already know the previous versions. It installs additional packages for Kubuntu 8.04 Hardy. I use it for the initial setup of a desktop system: first install Kubuntu from CD, then use this script to get a system which has all codecs and commonly used programs (be it free or non-free software) installed. So this blog entry is for my own reference and for my friends. Basically, after running this script you’ll have a system which is ready for use by a standard user.

Insecurity of Virtual Appliances and some thoughts on 7-zip compression

May 3, 2008

This week I looked for an Ubuntu Server 8.04 LTS virtual appliance for VMware – I found one here. But before I could start testing it I needed to extract the .7z file on my VMware server. The first thing I thought was: why the heck 7-zip? Why not use bzip2, which is standard on Linux (besides the faster, but less compressing gzip)?

But I was proven wrong by the first entries of my Google search – most of the time 7-zip compresses better and is not much slower than bzip2. And there is even an open source command line tool for Linux, called p7zip. The only thing which prevents me from using it is that it is not supported by tar so far; as soon as that happens I will start using it.

But now to something security related. Almost every virtual appliance I download has OpenSSH installed as sshd daemon. Am I the only guy who thinks this is a bad idea? The host keys are the same for all copies of a given virtual appliance. So anyone who knows which virtual appliance I used to set up my server can use this knowledge to perform a man in the middle attack and get my login name and password. This bad habit shows up in almost all virtual appliances I have gotten my hands on. My solution so far on Ubuntu and Debian systems is the following:


apt-get --purge remove openssh-server && apt-get install ssh

This way I’ve a clean config and new keys (ssh is a meta package for openssh-client and openssh-server). So there is an easy workaround, but how many administrators will think about that? I think virtual appliances are made to ease the life of administrators, or to allow even non-experts to provide a service based on the appliance. With this goal comes also the responsibility to make the system safe by default.
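If you don’t want to reinstall the package, you can also regenerate the host keys by hand with ssh-keygen. A sketch (using a temp directory here so it can be tried safely; on a real appliance you’d delete the old keys in /etc/ssh, recreate them there as root and restart sshd):

```shell
# regenerate an RSA host key pair with an empty passphrase,
# as sshd expects for host keys
KEYDIR=$(mktemp -d)
ssh-keygen -q -t rsa -N '' -f "$KEYDIR/ssh_host_rsa_key"
ls "$KEYDIR"
```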

LED blinking on your switch

April 9, 2008

Did you ever have the problem that you didn’t know to which switch port a given ethernet cable is connected? Wouldn’t it be cool if the LED of the switch port would blink, so you know which one is the correct one?

You’re lucky – it is possible with Linux. There are even two ways. With some chipsets ethtool -p eth0 works, but not with all. The following script, however, helps in any case:

#!/bin/bash
# usage example: blink.sh eth0

while true ; do
  ifconfig $1 down
  sleep 2
  ifconfig $1 up
  sleep 2
done

Put that script into /usr/local/sbin/blink.sh and set the execution permissions. Call it with the device as parameter. Don’t set the blinking interval below 2 sec, as the connection negotiation can take up to that amount of time.
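For the chipsets where it is supported, ethtool can also blink for a fixed time instead of until you interrupt it; a sketch (eth0 is an example device, and this needs root):

```shell
# let the port LED blink for 30 seconds to locate the cable
ethtool -p eth0 30
```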

Howto setup Asterisk for recording a podcast over the Internet

April 6, 2008

Some friends and I are planning to make a podcast, and as IT guys we needed something to support our distributed recordings over the Internet. One of my friends lives about 200km away, but even with the others it would not be easy to get everyone into one room at the same time. When we looked around for various ways to do distributed recordings we found mostly Skype howtos, and we knew only a few podcasts which did the recording with Asterisk (in a not-that-good audio quality, I think). But we didn’t find what we really wanted, so I started to look into the topic. At first I wrote our requirements down:

  • the recording should be done centrally on a server, automatically and without user interaction (so it is not forgotten, and to minimize lag and sound quality problems)
  • the recorded files should be easily available as OGG files to all podcast members after the recording, as post production may not always be done by the same people
  • various client operating systems and VoIP clients should be supported, therefore an open standard protocol should be used
  • possibility to record each participant separately, to allow volume changes or applying audio filters to a single track
  • it should be possible to invite guests for interviews via the same system, without requiring more than a VoIP client which supports the chosen protocol (no registration somewhere)
  • optionally it should be possible to connect the system to the POTS (plain old telephone service) for interviews with people who cannot use a VoIP client
  • and, as a requirement from my knowledge and existing infrastructure, the server should run on one of my Linux servers in a big data center, within an OpenVZ virtual environment

After some research I decided to go with Asterisk. This howto describes what I’ve done to reach the above goals. After completing this howto you should have the following:

  • A SIP server your participants can connect to and talk with each other.
  • As soon as they go into a special virtual conference room, everything they say will be recorded.
  • After they leave the conference room, a background process will reencode the recorded WAV files to OGG and make them available via web.
  • Every participant gets their own OGG file with the starting timestamp in the filename, so you can place the recording at the correct position in post production.
  • The inclusion of a SIP provider with a connection to the POTS network is not described in this howto, as there are many others describing it.

Some points of this howto are specific to my OpenVZ setup and the chosen distribution, but most are generic and should work for any setup. Anyway, here is the software I used:

  • OpenVZ for virtualization
  • Ubuntu 8.04 Hardy (x86_64) as distribution for the virtual environment
  • Asterisk and Zaptel for VoIP part (Ubuntu packages)
  • sox for translating the WAV files to OGG (Ubuntu package)
  • lighttpd as small and fast web server for the OGG files (Ubuntu package)
  • Twinkle as SIP client under Linux. (apt-get install twinkle)

Part 1: Hardware node setup

You can skip this part if you don’t use OpenVZ and your kernel/distribution comes with the ztdummy module. The hardware node in my case runs CentOS 4 (x86_64). This is important, as Asterisk needs the ztdummy kernel module, which comes with zaptel, for the meetme Asterisk module which is used for the conference rooms. As it is not possible to load kernel modules within a VE (virtual environment) (that’s a security feature!), I needed it on my hardware node. As the kernel of the hardware node is an OpenVZ patched kernel and CentOS 4 does not come with a ztdummy module anyway, I needed to compile it.

I used the same version of zaptel as Ubuntu 8.04 does, and it is also very important that you use a 64bit VE if your hardware node is 64bit, otherwise the device cannot be accessed correctly.

The install is quite straightforward:

# tar xzf zaptel-1.4.8.tar.gz
# cd zaptel-1.4.8
# ./install_prereq test

Install the required packages and then continue with:

# ./configure
# make
# make install

but no “make config”, as we don’t need init scripts and that stuff. Now load the kernel module with modprobe ztdummy (and make sure that this is done after every boot, before the VEs start). Make sure the device is working with:
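One way to load it at every boot (a sketch; I’m assuming the RHEL/CentOS convention that /etc/rc.modules, if present and executable, is run early by rc.sysinit, well before the services and thus the VEs start – verify the boot order on your system):

```shell
# make the module load part of the early boot sequence
echo "modprobe ztdummy" >> /etc/rc.modules
chmod +x /etc/rc.modules
```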

# ztcfg -v

The output should be something like:

Zaptel Version: 1.4.8
Echo Canceller: MG2
Configuration
======================
0 channels to configure.

At last the VE needs to be able to access the ztdummy device, so we have to tell OpenVZ this:

# for x in `ls /dev/zap`; do /usr/sbin/vzctl set XXX --devnodes zap/${x}:rw --save; done

Replace the XXX with the ID of your podcast VE. Now we’re done with the hardware node, and we can take a look at the user space stuff.

Part 2: Virtual Environment setup

At first we install the packages we need with the following command:

# apt-get install asterisk-h323 asterisk-doc speex vpb-utils sox libsox-fmt-all lighttpd zaptel

Now we configure zaptel and check if it works:

# genzaptelconf
# ztcfg -v
# ztcfg -d

No error should be given. If a device is not found, check if it got created by vzctl. After that, make the devices in /dev/zap readable and writable for the asterisk user:

# chown root:asterisk /dev/zap/*
# chmod 660 /dev/zap/*

Now we can work on the Asterisk configuration. We set the following values in /etc/default/asterisk:

RUNASTERISK=yes
AST_REALTIME=no

The realtime stuff does not work in a VE and causes audio problems. Now we need to do some configuration for NAT users in /etc/asterisk/sip.conf:

externip = your_external_IP ; this is needed as asterisk has problems with the venet0 stuff otherwise
localnet=192.168.0.0/255.255.0.0; All RFC 1918 addresses are local networks
localnet=10.0.0.0/255.0.0.0 ; Also RFC1918
localnet=172.16.0.0/12 ; Another RFC1918 with CIDR notation
localnet=169.254.0.0/255.255.0.0 ;Zero conf local network

nat=yes
qualify=yes
canreinvite=no

After this global setup we configure one section for each of our podcasters, as shown here:

[firstPodcaster] ; this is also the user name
type=friend
context=sip
secret=the_password_of_this_user
callerid="Your_Name" <1> ; it is recommended to use no spaces here, as we use this as part of the filename. You need the quotes and angle brackets exactly as shown here
host=dynamic
dtmfmode=info
disallow=all
allow=alaw
callingpres=allowed_passed_screen

We only support alaw, so every client uses G.711a and we don’t need to transcode. I believe in the US you would use G.711u and therefore ulaw. Now we need conference rooms, for which we add the following lines to /etc/asterisk/meetme.conf:

conf => 10
conf => 20

Now we need to tie that together with the dial plan in /etc/asterisk/extensions.conf:

[globals]
MONITOR_EXEC=/usr/local/bin/wavIn2ogg.sh

And add the following section:

[sip]

; so we can talk directly with one another
exten => 31,1,Dial(SIP/firstPodcaster,20,tr)
exten => 32,1,Dial(SIP/secondPodcaster,20,tr)
exten => 33,1,Dial(SIP/thirdPodcaster,20,tr)

; this conference room is not recorded, for preparations
exten => 10,1,Answer
exten => 10,2,Wait(1)
exten => 10,3,Meetme(10,s)

; this conference room records automatically
exten => 20,1,Answer
exten => 20,2,Wait(1)
exten => 20,3,Set(CALLFILENAME=podcast_X_${CALLERID(name)}-${STRFTIME(${EPOCH},,%Y%m%d-%H%M%S)})
exten => 20,4,Monitor(wav,${CALLFILENAME},m)
exten => 20,5,Meetme(20,s)

; from the demo code – useful to look at the quality of your connection
; Create an extension, 60, for evaluating echo latency.
exten => 60,1,Playback(demo-echotest) ; Let them know what's going on
exten => 60,n,Echo ; Do the echo test
exten => 60,n,Playback(demo-echodone) ; Let them know it's over
exten => 60,n,Goto(s,60) ; Start over

Now you can restart Asterisk and connect with your SIP client. Call 60 to check if the audio stream works in both directions (try it without a firewall on the Asterisk server if you don’t hear anything). After that, go into the conference room 10 with 1 or 2 friends and test it. If that all works you can move on to the recording stuff. Create the following script as /usr/local/bin/wavIn2ogg.sh (don’t forget chmod 755):

#!/bin/bash
# wavIn2ogg.sh - creates ogg of the input mono stream
# used for recording each participant in a meeting room separately
# Written by Robert Penz

SOX=/usr/bin/sox
NICE=/usr/bin/nice

InFile="$1"
OutFile="$2"
OggFile="${3%.wav}.ogg"
FinalDir=/var/www/production

#test if input files exist
test ! -r "$InFile" && exit
test ! -r "$OutFile" && exit

$NICE -n 19 $SOX -t wav "$InFile" -t vorbis "$OggFile"

#remove input files if successful
test -r "$OggFile" && rm "$InFile" "$OutFile"

# at last set the permissions and move it in an atomic way
chmod 644 "$OggFile"
mv "$OggFile" "$FinalDir/."
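Asterisk’s Monitor calls the script with three arguments: the in-leg WAV, the out-leg WAV and the name of the combined file. A manual run for debugging would therefore look something like this (the filenames are made up for illustration):

```shell
# run the conversion by hand, as Asterisk would invoke it
cd /var/spool/asterisk/monitor
/usr/local/bin/wavIn2ogg.sh \
    podcast_X_Robert-20080406-210000-in.wav \
    podcast_X_Robert-20080406-210000-out.wav \
    podcast_X_Robert-20080406-210000.wav
```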

Create the directory /var/www/production and set the correct permissions:

# mkdir /var/www/production
# chown asterisk:www-data /var/www/production

Now go to the conference room 20, say something and disconnect. If it worked, you should see the recorded OGG file(s) with your browser under http:///production/. If there are none, take a look at /var/spool/asterisk/monitor/ to see if there are 2 WAV files. If so, call the wavIn2ogg.sh script by hand and look for errors.

So that’s the end of the story – you now have a system for recording podcasts over the internet in a cool way! Any comments, ideas or questions? Post them here.
