June 28, 2014
While tracing/sniffing for something else, I mirrored all packets of my mobile phone to Wireshark and was really astonished to see many multicast DNS requests (_googlecast._tcp.local) coming from my mobile …
As you see, these are more than 15 packets per second, which immediately led to the following three thoughts:
- That can’t be good for the battery
- The mobile is surely sending this not only in my home network but also in hotspot networks … I don’t like that for security/privacy reasons (especially what happens if the phone gets an answer and maybe sends more info about itself)
- I’m not using Chromecast anywhere
Which immediately raised the question:
- How can I disable this?
So I went searching through the Internet … but I was not able to find a solution. So a question to the community: does anyone have an idea how I can disable this?
PS: I found only one other person asking the same question, in the xda-developers forum
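Out of curiosity, here is a minimal Python sketch (my own, not anything extracted from the phone) of what such a discovery query looks like on the wire: a plain DNS question for the PTR record of _googlecast._tcp.local, which devices send to the mDNS multicast group 224.0.0.251 on port 5353.

```python
import struct

def mdns_query(name, qtype=12):
    """Build a raw multicast-DNS query packet (RFC 6762).
    qtype 12 = PTR, the record type used for service discovery."""
    # Header: ID=0 (mDNS queries use 0), flags=0, one question, no other records
    header = struct.pack(">HHHHHH", 0, 0, 1, 0, 0, 0)
    # QNAME: each label prefixed with its length, terminated by a zero byte
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in name.split(".")
    ) + b"\x00"
    # QTYPE and QCLASS (1 = IN) complete the question section
    return header + qname + struct.pack(">HH", qtype, 1)

packet = mdns_query("_googlecast._tcp.local")
# a sender would now write `packet` to UDP 224.0.0.251, port 5353
```

A phone firing 15+ of these per second is sending a constant stream of such 40-byte questions into whatever network it is attached to.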
February 16, 2014
For some time now a mobile app for Android phones and iPhones has been advertised as the official app of Tyrol’s Avalanche Warning Service and the Tiroler Tageszeitung (Tyrol daily newspaper), so I installed it on my Android phone a few days ago. Yesterday I went on a ski tour (ski mountaineering), and in the car on the way there I tried to update the daily avalanche report, but it took really long and failed in the end. I thought that couldn’t be, as the homepage of Tyrol’s Avalanche Warning Service worked without any problems and was fast.
So when I was home again I took a closer look at the traffic the app sends to and receives from the Internet, as I wanted to know why it was so slow. I installed the app on my test mobile and captured the traffic it produced on my router during its first launch. I was a little shocked when I looked at the size of the trace: 18 MByte. That makes it quite clear why it took so long on my mobile, so part of this post series will be about getting the size of the communication down. I opened the trace in Wireshark and took a look at it. First I checked where the traffic was coming from.
My focus was on 22.214.171.124, the IP address of tirol.lawine-app.com, which is hosted by a German provider called Hetzner (you can rent “cheap” servers there). As soon as I opened the TCP stream I saw a misconfiguration: the client supports gzip, but the server does not send its responses gzipped.
Just to get a number for how much this alone would save, without any other tuning, I gzipped the trace file: it went from 18.5 MByte down to 16.8 MByte, about 10% saved. Then I extracted all downloaded files: jpg files with 11 MByte and png files with 4.3 MByte … so it seems that saving there will help the most. Looking at the biggest pictures led to the realization that the jpg images were saved at the lowest compression level, e.g. 2014-02-10_0730_schneeabs.jpg:
- 206462 Bytes: original image
- 194822 Bytes: gimp with 90% quality (10% saving)
- 116875 Bytes: gimp with 70% quality (40% saving)
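To reproduce the kind of gzip numbers above, the saving on any payload can be measured with a few lines of Python (the sample string below is made up for illustration; it is not taken from the app):

```python
import gzip

def gzip_saving(data):
    """Percentage saved by gzip-compressing `data` at the highest level."""
    return 100.0 * (1 - len(gzip.compress(data, compresslevel=9)) / len(data))

# made-up, text-like payload; real XML/JSON report data behaves similarly
sample = b"<region id='AT-07' risk='3'>considerable</region>" * 100
print(f"{gzip_saving(sample):.0f}% saved")
# already-compressed JPEGs gain almost nothing from gzip, which is why
# gzipping the whole 18.5 MByte trace only saved about 10%
```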
Some questions also arose:
- Some information, like the legend, is always the same … why not download it only once and reuse it until the legend gets updated?
- Some big parts of the pictures are only text; why not send the text and let the app render it?
- Why are the jpg files 771×566 but the png files 410×238, although they show the same map of Tyrol? Downsizing would save 60% of the size (at the same compression level).
- Why are some maps PNGs anyway? e.g. 2014-02-10_0730_regionallevel_colour_pm.png has 134103 Bytes; saving it as jpeg in gimp with 90% quality leads to 75015 Bytes (45% saving).
So I tried to calculate the savings without reducing the information that is transferred, just its representation, and I get over 60%: instead of 18 MByte we would only need to transfer 7 MByte. If the default setting were changed to 3 days instead of 7, it would go down even further, as I guess most people only look at the last 3 days, if even that. So it could come down to 3–4 MByte … that would be OK, so please optimize your software!
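The back-of-the-envelope arithmetic behind those numbers (the 18 MByte is the observed trace size; the 60% is my estimate from the measurements above):

```python
trace_mb = 18.0               # observed size of the app's first launch
representation_saving = 0.60  # estimate: gzip + sane JPEG quality + right image sizes

optimized = trace_mb * (1 - representation_saving)
with_three_days = optimized * 3 / 7   # default report range of 3 days instead of 7

print(f"{optimized:.1f} MByte")        # 7.2 MByte
print(f"{with_three_days:.1f} MByte")  # 3.1 MByte
```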
I only wanted to write one post about this app, but while looking at the traffic I found some security and privacy concerns I need to look into a bit closer … so expect a part 2.
January 5, 2014
Sometimes you’ll want to know (at least if you’re like me) which other websites are hosted on the same server, or rather on the same IP address. The search engine Bing provides a nice feature for this. Just enter
ip:126.96.36.199 to get a list of the websites which Bing knows to run on that IP address.
But even better, Andrew Horton has written a Bash script which allows you to check that from the command line:
$ ./bing-ip2hosts www.theregister.co.uk
[ 188.8.131.52 | Scraping 11-13 von 13 | Found 9 | / ]]
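If you just want the raw search URL that gets scraped, here is a small Python sketch; the `first` parameter as Bing’s result-paging offset is an assumption on my side, based on how such scripts loop through result pages:

```python
from urllib.parse import urlencode

def bing_ip_search_url(ip, offset=1):
    """Build the URL of a Bing search using the ip: operator.
    `offset` (Bing's `first` parameter) pages through the results."""
    return "https://www.bing.com/search?" + urlencode({"q": "ip:" + ip, "first": offset})

print(bing_ip_search_url("126.96.36.199"))
```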
December 15, 2012
I’m reposting the full advance notice from the University of Maryland (which administers the D root DNS server).
Here is the original post:
This is advance notice that there is a scheduled change to the IPv4 address for one of the authorities listed for the DNS root zone and the .ARPA TLD. The change is to D.ROOT-SERVERS.NET, which is administered by the University of Maryland.
The new IPv4 address for this authority is 199.7.91.13. The current IPv6 address for this authority is 2001:500:2d::d, and it will remain unchanged.
This change is anticipated to be implemented in the root zone on 3 January 2013; however, the new address is already operational. It will replace the previous IP address of 128.8.10.90 (also once known as TERP.UMD.EDU).
We encourage operators of DNS infrastructure to update any references to the old IP address, and replace it with the new address. In particular, many DNS resolvers have a DNS root “hints” file. This should be updated with the new IP address.
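For illustration, the D-root lines in such a hints file (BIND’s named.root format) would read roughly as follows after the change – an excerpt only, using the new addresses from the announcement:

```
.                      3600000   NS     D.ROOT-SERVERS.NET.
D.ROOT-SERVERS.NET.    3600000   A      199.7.91.13
D.ROOT-SERVERS.NET.    3600000   AAAA   2001:500:2d::d
```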
New hints files will be available at the following URLs once the change has been formally executed:
The old address will continue to work for at least six months after the transition, but will ultimately be retired from service.
April 7, 2012
I’ve just activated IPv6 for my blog. You should now get both A and AAAA records for the DNS name robert.penz.name. I hope it doesn’t break anything, but you have to move with the times, as the saying goes.
A query should show the following:
$ host robert.penz.name
robert.penz.name has address 18.104.22.168
robert.penz.name has address 22.214.171.124
robert.penz.name has IPv6 address 2400:cb00:2048:1::adf5:3d3a
robert.penz.name has IPv6 address 2400:cb00:2048:1::adf5:3d95
robert.penz.name mail is handled by 10 mail.penz.name.
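If you prefer to check this programmatically instead of via host, here is a small Python sketch using the system resolver (the function name is my own):

```python
import socket

def dns_records(hostname):
    """Collect the distinct IPv4 (A) and IPv6 (AAAA) addresses the
    system resolver returns for a hostname."""
    records = {"A": set(), "AAAA": set()}
    for family, _type, _proto, _canon, sockaddr in socket.getaddrinfo(hostname, None):
        if family == socket.AF_INET:
            records["A"].add(sockaddr[0])
        elif family == socket.AF_INET6:
            records["AAAA"].add(sockaddr[0])
    return records

# dns_records("robert.penz.name") should now list addresses under
# both "A" and "AAAA" on a dual-stack system
```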
July 25, 2009
I thought I’d share the Firefox plugins I use, which are the reason I use Firefox in the first place. Without these plugins Firefox would be just one browser of many, and the WebKit browsers render faster on my Kubuntu ;-). So these plugins make the difference for me.
- Cookie Monster: This plugin allows me to manage my cookies. I can set which kinds I accept from which domains, e.g. I accept cookies from a domain only for the session, and only if they are needed.
- Xmarks: I use this plugin to sync my bookmarks between systems and also to have a backup of them at all times. You can also use it to sync/save your stored passwords securely, and you can use your own server.
- DownloadHelper: You never know when you’d like to download a flash movie or something similar onto your PC. This plugin enables you to do so.
- Yip: If you’re using something like meebo.com for instant messaging, you will surely want to get notifications of new messages outside the browser tab too, as most likely you’re working in another program or another tab. If so, take a look at Yip, as it supports Fluid and Prism, which cover the large majority (100%?) of currently implemented notifications.
December 14, 2008
I’ve been using Google Analytics for some time now. It basically works, but it has some shortcomings: the reports only get updated every 24h, and it is not able to track outbound links without extra work on my side. But the most important part is that I don’t want Google to know everything. So I started to look for a valid alternative. I tried some locally installable open source tools but decided to go with another SaaS. If you’re using NoScript in your Firefox you might know it already: I started using Clicky Web Analytics. Take a look at this screenshot; it looks like most Web 2.0 sites, a simple, clean design with a white background.
What’s nice is that you can do real-time campaign and goal tracking, and that you can track every visitor who comes to your web site and, if they accept cookies, their whole history. This shows you how much power cookies give website providers. You should really think about disabling them, or removing them on every start of your browser. But as long as most users have them activated, I will also take a look at the data and have a nice showcase for the people I talk to about this.
December 7, 2008
I’m going to get an Asus Eee PC 901go, which has a solid state disk (SSD) instead of a normal hard disk (HD). As those who know me would expect, I’ll remove the preinstalled Linux and install my own Kubuntu. I soon started to look at the best way to install my Kubuntu, and I found the following recommendations copied and pasted across various sites:
- Never choose to use a journaling file system on the SSD partitions
- Never use a swap partition on the SSD
- Edit your new installation fstab to mount the SSD partitions “noatime”
- Never log messages or error log to the SSD
Are they really true, or just copied and pasted without knowledge? But first, why should this be a problem at all? SSDs have limited write (erase) cycles. Depending on the type of flash memory cells, they will fail after only 10,000 (MLC) or up to 100,000 write cycles (SLC), while high-endurance cells may survive 1–5 million write cycles. Special file systems (e.g. jffs, jffs2, logfs for Linux) or firmware designs can mitigate this problem by spreading writes over the entire device (so-called wear leveling) rather than rewriting files in place. So theoretically there is a problem, but what does this mean in practice?
The experts at storagesearch.com have written an article, SSD Myths and Legends – “write endurance”, which takes a closer look at this topic. They provide the following simple calculation:
- One SSD, 2 million cycles, 80 MByte/sec write speed (these are the fastest SSDs on the market), 64 GB (entry level for enterprise SSDs – if you get more, the lifetime increases)
- They assume perfect wear leveling, which means you need to fill the disk 2 million times to reach the write-endurance limit.
- 2 million (write endurance) x 64 GB (capacity) divided by 80 MBytes/sec gives the endurance-limited life in seconds.
- That’s a meaningless number – it needs to be divided by the seconds in an hour, the hours in a day, etc. to give…
The end result is 51 years!
OK, that’s for servers, but what about my Asus 901go?
- Let’s take the benchmark values from eeepc.it, which come to a maximum of 50 MByte/sec. But that is sequential writing, which is not the write profile of our atime, swap, journaling… stuff. Those are typically 4k blocks, which leads to 2 MByte/sec. (Side note: the EeePC 901go mounts the same SSD as the EeePC S101, to be precise the model ASUS SATA JM-chip Samsung S41.)
- We also stay with the 2 million cycles and assume a 16GB SSD
- With 50 MByte/sec we get 20 years!
- With 2 MByte/sec we get 519 years!
- And even if we reduce the write cycles to 100,000 and write at 2 MByte/sec all the time, we’re still at 26 years!!
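The same arithmetic in a few lines of Python, using binary units (GiB/MiB); the exact years differ slightly from the numbers above depending on which units and rounding you pick:

```python
SECONDS_PER_YEAR = 365 * 24 * 3600
GiB, MiB = 2 ** 30, 2 ** 20

def endurance_years(cycles, capacity_bytes, write_bytes_per_sec):
    """Years until the write-endurance limit is hit, assuming perfect
    wear leveling and writing flat out around the clock."""
    return cycles * capacity_bytes / write_bytes_per_sec / SECONDS_PER_YEAR

print(endurance_years(2_000_000, 64 * GiB, 80 * MiB))  # enterprise SSD: roughly 52 years
print(endurance_years(2_000_000, 16 * GiB, 50 * MiB))  # 901go, sequential: roughly 21 years
print(endurance_years(2_000_000, 16 * GiB, 2 * MiB))   # 901go, 4k writes: roughly 520 years
print(endurance_years(100_000, 16 * GiB, 2 * MiB))     # only 100,000 cycles: roughly 26 years
```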
And all this assumes writing all the time; even ext3 writes its journal only every 30 secs if no data needs to be written. So the recommendation to safeguard SSDs because they supposedly cannot handle that many writes is bullshit!!
So let’s take a closer look at the four points from the beginning of this blog post.
- Never choose to use a journaling file system on the SSD partitions: Bullshit, you’re just risking the integrity of your data. Stay with ext3.
- Never use a swap partition on the SSD: If you’ve got enough space on your SSD, use a swap partition. It will not be written to until there is too little RAM, and in that case it lets you run a program/perform a task which you otherwise could not. And take a look at this article.
- Edit your new installation’s fstab to mount the SSD partitions “noatime”: That is a good idea if all your programs work with this setting, as it will speed up your read performance, especially with many small files. Take a look at nodiratime too.
- Never log messages or error logs to the SSD: Come on, how many log entries do you get on a netbook? This is not an email server with > 1000 log lines per second.
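For the noatime point, the corresponding fstab line would look something like this sketch (the device name /dev/sda1 and the extra options are assumptions, adapt them to your installation):

```
# /etc/fstab – SSD root partition mounted without access-time updates
/dev/sda1   /   ext3   noatime,nodiratime,errors=remount-ro   0   1
```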
Please write a comment if you disagree or even agree with my blog post. Thx!
November 1, 2008
A friend of mine has done a comparison of different browsers on a state-of-the-art system. The system runs Windows XP SP3 on a Core 2 Quad CPU (Q9450, 2.66 GHz) with 3.5 GB RAM. He used the following test. Smaller bars are better, as the browser was able to process the data faster. The x-axis shows the seconds a browser took for the test.
As you can see there are quite some differences, which you should also be able to “feel” on current AJAX-driven sites. Especially the new JIT for Firefox (currently only in beta and not activated by default) should make it the performance leader.
September 28, 2008
Here is something that helps you: WikiVS is the one-stop site for up-to-date comparisons, be it MySQL vs PostgreSQL, Lighttpd vs Apache or Qt vs GTK. This website has everything to help you base your decision on facts.
What are the benefits of such a site for you? The comparisons are kept up to date, and you don’t need to dig through long threads (some of them flame wars) discussing the topic. And you can also contribute to the comparisons yourself.
So it’s the open source / community way of doing something like this, and I think that’s great!