December 15, 2012
I am reposting the full advance notice from the University of Maryland (which administers the D root DNS server).
Here is the original post:
This is advance notice that there is a scheduled change to the IPv4 address for one of the authorities listed for the DNS root zone and the .ARPA TLD. The change is to D.ROOT-SERVERS.NET, which is administered by the University of Maryland.
The new IPv4 address for this authority is 199.7.91.13. The current IPv6 address for this authority is 2001:500:2d::d, and it will remain unchanged.
This change is anticipated to be implemented in the root zone on 3 January 2013; however, the new address is already operational. It will replace the previous IP address of 128.8.10.90 (also once known as TERP.UMD.EDU).
We encourage operators of DNS infrastructure to update any references to the old IP address, and replace it with the new address. In particular, many DNS resolvers have a DNS root “hints” file. This should be updated with the new IP address.
New hints files will be available at the following URLs once the change has been formally executed:
The old address will continue to work for at least six months after the transition, but will ultimately be retired from service.
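If you want to check whether your resolver already hands out the new address, a quick lookup is enough. Here is a minimal Python sketch using only the standard library and the system resolver; the expected address is simply the one from the announcement above:

import socket

NEW_ADDRESS = "199.7.91.13"

# Ask the system resolver for the IPv4 addresses of D.ROOT-SERVERS.NET.
infos = socket.getaddrinfo("d.root-servers.net", 53, socket.AF_INET)
addresses = {info[4][0] for info in infos}

print("resolved:", addresses)
print("up to date:", NEW_ADDRESS in addresses)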
April 7, 2012
I’ve just activated IPv6 for my blog. You should now get A and AAAA records for the DNS name robert.penz.name. I hope it does not break anything, but you have to move with the times, as the saying goes.
A query should show the following:
$ host robert.penz.name
robert.penz.name has address 126.96.36.199
robert.penz.name has address 188.8.131.52
robert.penz.name has IPv6 address 2400:cb00:2048:1::adf5:3d3a
robert.penz.name has IPv6 address 2400:cb00:2048:1::adf5:3d95
robert.penz.name mail is handled by 10 mail.penz.name.
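If you want to check a name for dual-stack operation yourself, a few lines of Python will list both address families as the system resolver sees them; just a small sketch:

import socket

def addresses(name):
    # Collect the IPv4 (A) and IPv6 (AAAA) addresses of a name.
    result = {"A": set(), "AAAA": set()}
    for family, key in ((socket.AF_INET, "A"), (socket.AF_INET6, "AAAA")):
        try:
            for info in socket.getaddrinfo(name, None, family):
                result[key].add(info[4][0])
        except socket.gaierror:
            pass  # no records of this address family
    return result

print(addresses("robert.penz.name"))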
July 25, 2009
I thought I’d share the Firefox plugins I use, which are what make me use Firefox in the first place. Without these plugins Firefox would be just one browser among many, and the WebKit browsers render faster on my Kubuntu anyway. So these plugins make the difference for me.
- Cookie Monster: This plugin allows me to manage my cookies. I can set which kinds of cookies I accept from which domains, e.g. accept cookies from a domain only for the session if they are needed.
- Xmarks: I use this plugin to sync my bookmarks between systems and also to have a backup of them at all times. You can also use it to sync/save your stored passwords securely, and you can even use your own server.
- DownloadHelper: You never know when you would like to download a Flash movie or something like that onto your PC. This plugin enables you to do so.
- Yip: If you’re using something like meebo.com for instant messaging, you surely would like to get notifications of a new message outside the browser tab as well, as it most likely happens that you’re working in another program or another tab. If so, take a look at Yip, as it supports Fluid and Prism, which cover the large majority (100%?) of currently implemented notifications.
December 14, 2008
I’ve been using Google Analytics for some time now. It basically works, but it has some shortcomings: the reports only get updated every 24 hours, and it cannot track outbound links without extra work on my side. But the most important point is that I don’t want Google to know everything. So I started to look for a valid alternative. I tried some locally installable open source tools, but decided to go with another SaaS. If you’re using NoScript in your Firefox you might know it already: I started using Clicky Web Analytics. Take a look at this screenshot; it looks like most Web 2.0 sites, a simple, clean design on a white background.
What’s nice is that you can do real-time campaign and goal tracking, and that you can track every visitor who comes to your website and, if they accept cookies, their whole history. This shows you how much power cookies give website providers. You should really think about disabling them, or removing them on every start of your browser. But as long as most users have them activated, I will take a look at that data too and have a nice showcase for the people I talk to about this.
December 7, 2008
I’m going to get an Asus Eee PC 901go, which has a solid state disk (SSD) instead of a normal hard disk (HD). As you know me, I’ll remove the preinstalled Linux and install my own Kubuntu. I soon started to look at the best way to install my Kubuntu, and I found the following recommendations copied and pasted on various sites:
- Never choose to use a journaling file system on the SSD partitions
- Never use a swap partition on the SSD
- Edit your new installation fstab to mount the SSD partitions “noatime”
- Never log messages or error log to the SSD
Are they really true, or just copied and pasted without knowledge? But first, why should this be a problem at all? SSDs have limited write (erase) cycles. Depending on the type of flash memory cells, they will fail after only 10,000 (MLC) or up to 100,000 (SLC) write cycles, while high-endurance cells may last 1–5 million write cycles. Special file systems (e.g. JFFS, JFFS2, LogFS for Linux) or firmware designs can mitigate this problem by spreading writes over the entire device (so-called wear leveling) rather than rewriting files in place. So theoretically there is a problem, but what does this mean in practice?
The experts at storagesearch.com have written an article, SSD Myths and Legends – “write endurance”, which takes a closer look at this topic. They provide the following simple calculation:
- One SSD, 2 million cycles, 80 MB/sec write speed (that is the fastest SSD on the market), 64 GB capacity (entry level for enterprise SSDs; if you get more, the lifetime increases)
- They assume perfect wear leveling, which means the disk has to be filled 2 million times to reach the write endurance limit.
- 2 million (write endurance) x 64 GB (capacity) divided by 80 MB/sec gives the endurance-limited life in seconds.
- That’s a meaningless number on its own, so divide it by the seconds in an hour, the hours in a day, etc. to get…
The end result is 51 years!
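For those who like to verify such numbers, the whole calculation fits into a few lines of Python. This is just a back-of-the-envelope sketch with the figures from the article:

SECONDS_PER_YEAR = 60 * 60 * 24 * 365

def endurance_years(cycles, capacity_bytes, write_speed_bps):
    # Lifetime in years, assuming perfect wear leveling and
    # writing at full speed around the clock.
    return cycles * capacity_bytes / write_speed_bps / SECONDS_PER_YEAR

# 2 million cycles, 64 GB capacity, 80 MB/sec sustained writes:
print(endurance_years(2000000, 64e9, 80e6))  # ~50.7, i.e. about 51 years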
OK, that’s for servers, but what about my Asus 901go?
- Let’s take the benchmark values from eeepc.it, which show a maximum of 50 MByte/sec. But that is sequential writing, which is not the write profile of our atime, swap, journaling… stuff. Those are typically 4k blocks, which leads to more like 2 MByte/sec. (Side note: the Eee PC 901go uses the same SSD as the Eee PC S101, to be precise the model ASUS SATA JM-chip Samsung S41.)
- We also stick with the 2 million cycles and assume a 16 GB SSD.
- With 50 MByte/sec we get 20 years!
- With 2 MByte/sec we get 519 years!
- And even if we reduce the write cycles to 100,000 and write with 2 MByte/sec all the time, we’re at 26 years!! (A quick recomputation follows below.)
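Plugging the netbook numbers into the same endurance_years() helper from above gives the same ballpark; the exact figures depend on rounding and on GB vs GiB, which explains the small differences from the list:

print(endurance_years(2000000, 16e9, 50e6))  # ~20 years
print(endurance_years(2000000, 16e9, 2e6))   # ~507 years
print(endurance_years(100000, 16e9, 2e6))    # ~25 years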
And all this assumes writing at full speed all the time; even ext3 writes its journal only every 30 seconds if no data needs to be written. So the recommendation to safeguard SSDs on the grounds that they cannot handle frequent writes is bullshit!!
So let’s take a closer look at the 4 points from the beginning of this blog post.
- Never choose to use a journaling file system on the SSD partitions: Bullshit, you’re just risking the integrity of your data. Stay with ext3.
- Never use a swap partition on the SSD: If you’ve got enough space on your SSD, use a swap partition. Nothing will be written to it until you run low on RAM, and in that case you can run a program or perform a task which you otherwise could not. And take a look at this article.
- Edit your new installation fstab to mount the SSD partitions “noatime”: That is a good idea if all your programs work with this setting, as it will speed up your read performance, especially with many small files. Take also a look at nodiratime. An example fstab line follows after this list.
- Never log messages or error logs to the SSD: Come on, how many log entries do you get on a netbook? This is not an email server with >1000 log lines per second.
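For the noatime point, such an fstab entry could look like the following line; the device name and mount point are made up for illustration, so adapt them to your system:

/dev/sda1  /  ext3  defaults,noatime,nodiratime  0  1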
Please write a comment if you disagree, or even if you agree, with my blog post. Thx!
November 1, 2008
A friend of mine has done a comparison of different browsers on a state-of-the-art system. The system runs Windows XP SP3 on a Core 2 Quad CPU (Q9450, 2.66 GHz) with 3.5 GB RAM. He used the following test. Smaller bars are better, as the browser was able to process the data faster. The x-axis shows the seconds a browser took for the test.
As you can see there are quite some differences, which you should also be able to “feel” on current AJAX-driven sites. Especially the new JIT for Firefox (currently only in beta and not activated by default) should make it the performance leader.
September 28, 2008
Here is something that helps you: WikiVS is the one-stop site for up-to-date comparisons. Be it a comparison of MySQL vs PostgreSQL, Lighttpd vs Apache or Qt vs GTK, this website has everything to help you base your decision on facts.
What are the benefits of such a site for you? The comparisons should be up-to-date, and you don’t need to look through long threads (some of them flame wars) discussing the topic. Lastly, you can also contribute to the comparisons.
So it’s the open source / community way of doing something like this, and I think that’s great!
September 3, 2008
The Austrian ISP UPC (Chello, Inode, Telesystem) has activated a system which redirects your browser to a UPC site if a domain cannot be resolved. They say that this helps their less tech-savvy customers, but I believe it helps them more, because they can put ads on this site. They are not the first to try this: in 2003 VeriSign tried something similar (called Site Finder), but it was stopped by ICANN and user protests. But that was a registry, not a provider.
The system is Opt-Out and not Opt-In. You need five clicks, a form to fill out, and a wait for a support employee to get it deactivated. You should really opt out, as the system can lead to problems: if a DNS server responds too slowly, the system tells you that you’ve got a wrong domain name. The other question is what happens with the data gathered by the search engine on this site, which tries to guess what you meant.
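You can test yourself whether a resolver rewrites NXDOMAIN answers: a long random name directly under a TLD should not resolve, so if it does, something in between is lying. A small Python sketch; the random label is generated fresh on each run:

import socket
import uuid

# A 32-character random label directly under .at, which is
# virtually guaranteed not to exist.
name = uuid.uuid4().hex + ".at"
try:
    address = socket.gethostbyname(name)
    print(name, "resolved to", address, "- the resolver likely rewrites NXDOMAIN")
except socket.gaierror:
    print(name, "correctly returned NXDOMAIN - no hijacking detected")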
This site (German) contains all the info on how you can opt out.
April 24, 2008
Starting at 18:00 CET (23.04.2008), someone launched a distributed denial of service attack against my blog. The UDP flood attack was carried out, as my investigation showed, by hacked servers and not by zombie Windows clients. At the time of writing the attack is still underway, but it got weaker after the first 24 hours.
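To give an idea of what such an investigation can look like: if you capture the flood with something like tcpdump -n udp > udp.log, a short Python script can count the top senders. This is a rough sketch; the log file name is an assumption, and the regex matches tcpdump’s default text output:

import re
from collections import Counter

# tcpdump's default text output looks like:
# "12:00:00.000000 IP 1.2.3.4.1234 > 5.6.7.8.80: UDP, length 100"
pattern = re.compile(r"IP (\d+\.\d+\.\d+\.\d+)\.\d+ >")
counts = Counter()

with open("udp.log") as log:
    for line in log:
        match = pattern.search(line)
        if match:
            counts[match.group(1)] += 1

for ip, hits in counts.most_common(10):
    print(hits, ip)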
The traffic accounting reports >750 GB of incoming traffic so far, but in reality it will be even higher, as not every packet was counted at the beginning of the attack, when it consumed large amounts of network resources. The data center my server is located in removed the route for the subnetwork from the border gateways, so the operation of the whole data center was not affected. I guess some network admins then detected that some of their machines were being misused for a DDoS and shut them down, so the traffic went down. After that happened the subnetwork was reactivated, and the blog is online again.
But why would someone attack my little blog in the first place? I hadn’t posted in the last 14 days. The only idea I have is that the hacker I found on the server of a friend, and wrote about, wanted to get even. What supports this theory is that the attack is carried out by hacked servers, from and to random UDP ports – a feature the bot I found also has.
I’ll investigate further and report in my blog about it.
Update: The following IPs are still attacking me after >30h… it seems to be time to try to contact their admins.
184.108.40.206 (Pakistan) - informed - not active anymore after 48h
220.127.116.11 (Korea) - informed - not active anymore after 48h
18.104.22.168 (USA) - informed - reacted within 12h
22.214.171.124 (Germany) - informed - reacted within 12h
126.96.36.199 (Hungary) - informed - still active after 3 days
188.8.131.52 (Spain) - informed - reacted within 24h
184.108.40.206 (Korea) - informed - still active after 3 days
Update 2: Three days after the start of the attack it still continues, though only from 2 lonely systems whose admins don’t seem to care about the attack or my mail. What’s the reason for this? Did the hacker lose control over them? What does he gain from it? The site is online without any problems for the users. Does anyone have an idea?
March 22, 2008
A normal calculator would know the correct answer, but not a Sequoia voting machine, which was used in a New Jersey election. Take a look at the post “Evidence of New Jersey Election Discrepancies”, which shows a summary tape for the presidential primary election. Now that the word is out, what is the reaction of Sequoia? Sure, threaten the guy who had the insolence to recalculate the numbers on the summary tape, so that he buckles under rather than show how poorly designed Sequoia’s e-voting machines are. But what do we know about bloggers? That this will evoke the Streisand effect, as bloggers around the world will now hear about it and blog about it.
This shows once again that we can’t let something as important as our democracy depend on trade secrets. Voting computers are just a bad idea, as every citizen needs to be able to verify that the votes are counted correctly. Sure, most won’t do it, but they could, and some even will, especially in turbulent times, when it counts the most.
Take also a look at this humorous little video (which I found here) about how insecure voting machines are.