A Kuhnian look at the iPad

In case anyone has missed it: Apple announced their new tablet computer last week. They call it the iPad. This has unleashed a massive wave of comments of all kinds across the internet, where some have called the device over-hyped and under-performing, others have hailed it as the device that will change mobile computing, and still others haven’t got the point at all (Stoke’s comparison is hilarious; he obviously doesn’t see what Apple is aiming for). But I don’t aim to go into detail on whether the iPad is a good or bad product, at least not before I’ve touched one myself. Instead, I will use two iPad-related articles as a starting point for a discussion of the paradigms of computer environments, in a Kuhnian manner.

The philosophy of Kuhn

Thomas Samuel Kuhn was a physicist best known for his work on the theory of science and the book “The Structure of Scientific Revolutions”. His main idea was that science periodically undergoes what he called paradigm shifts. A new paradigm introduces a new set of concepts and interpretations that can explain outstanding problems in the previous paradigm(s). The new paradigm can also bring a number of problems that scientists have never considered before. This paradigm view invalidates the commonly perceived notion that science progresses linearly and continuously. Kuhn was, for example, sceptical about science textbooks, which he thought distorted how science is actually carried out. He also argued that science can never be done in a completely objective way; instead, findings will always be interpreted within the present paradigm, making our conclusions subjective.

The iPad

The iPad combines the touch interface of the iPhone and iPod Touch with a larger screen and more powerful hardware, which enables desktop-like applications to be run on the tablet. While this is not revolutionary in itself, Dan Moren of Macworld argues that the philosophy of the iPad user interface is revolutionary for how we use computers in general. He concludes: “Regardless of how many people buy an iPad, it’s not hard to look forward a few years and imagine a world where more and more people are interacting with technology in this new way. Remember: even if it often seems to do just the opposite, the ultimate goal of technology has always been to make life easier.”

Steven F takes it one step further in his article reflecting on the iPad, arguing that “In the Old World, computers are general purpose, do-it-all machines (…) In the New World, computers are task-centric”, and concluding his introduction with the words: “Is the New World better than the Old World? Nothing’s ever simply black or white.” Steven then makes a very interesting argument about why task-centric, easy-to-use computers will slowly replace today’s multi-purpose devices.

Both of these articles are very much worth reading and bring up numerous important views on the future of computing; I highly recommend reading them thoroughly. Even though they both use the iPad as a starting point, Apple’s new gadget is not really their subject of study. Instead, what both Steven and Dan are reflecting on is the future of computer user interfaces, which is a much more important subject than the success of the iPad, or whether it is a good product or not.

The Old World Paradigm

To use Steven’s terminology, we have for the last 20-25 years resided in the Old World Paradigm of computing. This paradigm is what people usually think of as the normal graphical interface of a computer. It may be Windows, Mac OS X or any common Linux distribution – they are all essentially used in more or less exactly the same way: the same way as the original Macintosh introduced in 1984. Not much has really changed. Using my MacBook today is essentially the same as using my Mac Plus in 1990. It’s just easier to carry with me, slightly faster, and (importantly) has access to the internet. But I interact with it using basically the same metaphors: windows, menus, desktop icons, a pointer, buttons, etc.

Before the Mac, most computing was done using command-line tools in a DOS or Unix environment. While this was convenient for many purposes, it was a huge abstraction to the new computer user, and scared people off. (I still have access to the Unix shell on my MacBook, through the Terminal application, which I use frequently in my bioinformatics work.) As computers became more common, this kind of abstraction became a great wall, and a crisis in computer interface development started to surface. And according to Kuhn, a crisis feeds the birth of a new paradigm. In 1984, Apple kicked off this paradigm, using techniques partially taken from research made at the Xerox labs. Five to ten years later, most computer users were taking part in this graphical paradigm (using Windows 3.1, Windows 95, or Mac System 6 and 7).

The things that scared people off the command line were to a large extent solved by the point-and-click metaphors. However, a lot of people still find computers hard to use. Computers need virus cleaning and re-installation. They get bogged down by running too many applications, and leaving them on for too long leads to memory fragmentation. Just keeping the computer running is a hard task for many people. This creates another barrier, further fuelled by the addition of extra buttons and extra functions directed at power users. Such extra features are just confusing for new computer users. And while Mac OS X and Linux are not as plagued by viruses and malware as Windows is, they are still very complex. Desktops get cluttered quickly by documents, and the screen is too small to handle the windows of five applications running simultaneously. With increasing computing power, a lot of extra functionality is added, which often just obscures the main task of the computer. A new crisis is emerging.

The New World Paradigm: An era of simplicity

Apple sees this crisis, and also has a solution for it. They call it simplicity. For Apple, the most important thing is not whether we can access the latest technology to play around with its internals. Apple wants its average users to never have to worry about the internals of the device. It should just work. Steven nails it like this: “In the New World, computers are task-centric. We are reading email, browsing the web, playing a game, but not all at once. Applications are sandboxed, then moats dug around the sandboxes, and then barbed wire placed around the moats. As a direct result, New World computers do not need virus scanners, their batteries last longer, and they rarely crash, but their users have lost a degree of freedom. New World computers have unprecedented ease of use, and benefit from decades of research into human-computer interaction.” This means that we computer-savvy guys of the old paradigm will lose something. We lose our freedom to tinker. But this loss comes with great gains.

Kuhn would say that personal computing has reached a new crisis, which opens the door to a new paradigm. Apple is among the first companies trying to create a device that defines this paradigm, but they are not alone. Google’s Chrome OS aims at the same thing – to define the next paradigm in computing. And both Apple and Google are willing to bet that the kids born today, who never saw the Old World Paradigm of computing, will never miss it. They will never ask what happened to the file system metaphors, the window metaphors, and the multi-tasking of today’s computers, because they will never have seen them. Instead, they will ask how we could stand using the buggy and unstable computers we have today.

Apple redefined the smartphone three years ago. They definitely have the potential to redefine the experience of computing. Not that this new paradigm would mean the end of Unix command-line tools. Not that it would mean that current desktop computers will immediately die out. But what we first think of as a computer in ten years might very likely be a much more task-centric device than the laptops we use today. And even though this means a loss of freedom, it will surely be a great gain in usability. Until the next paradigm comes along…

HMMER3 released – with a pre-compiled binary!

As I have recently complained about open source software coming without pre-compiled binaries, I salute the release of the HMMER 3 beta, which comes with a pre-compiled Intel/Linux tarball. This is exactly the kind of convenience measure I have asked for, and I wanted to state that clearly here. Well done, Sean Eddy and company!

HMMER is a bioinformatics package for finding sequence homology. People into bioinformatics may appreciate some of the new features, like multicore support and better-than-BLAST speeds (unbelievable but true!). For those of you who are interested, the full range of features, as well as the software download, can be found here:

HMMER 3.0b3: the final beta test release

Swine flu vaccination program – is it worth it?

One might ask oneself whether the public vaccination program in Sweden (and in many other countries) is worth its costs. Of course, this is not an easy question, as many parameters play a role in this decision: how important is the saving of lives, how severe are the side effects, how much is an individual person’s health worth, etc. On a global scale we will have to wait a year or more until the pandemic is over, or at least until next spring, before we can draw any conclusions on whether the vaccination program has been effective enough, or whether it was necessary at all. However, as the spread of the disease has increased dramatically over the last weeks, e.g. in Ukraine but also in Sweden, the importance of the vaccination program, and of the government acting fast, has been painfully underscored. My purpose in this article, though, is not to discuss the vaccine itself, its side effects, or anything else related to the vaccine. Instead, I am asking whether the costs of the program will, in real numbers, pay off for the government, and thus the Swedish economy.

Simply put, I will just do my math homework using the swine flu statistics. I will make the following assumptions:

  • If nobody was vaccinated, 10% of the Swedish population would suffer from the swine flu. This seems to be a rather safe assumption; it is more likely that far more than 10% would get sick without any vaccinations, but let’s keep the numbers within safe margins.
  • None of these people will die or require more complicated medical care. We already know that this statement is untrue. However, it makes the math much easier. Also, if people require more complicated care, the costs would go up, not down, so just keep in mind that my estimates are (again) set too low.
  • A typical influenza victim needs to stay home from work for two weeks, and is then fully recovered. Let’s just assume that people are smart and stay home until they are no longer contagious.
  • The estimated cost of the vaccination program is 3 billion Swedish kronor (about 430 million USD). This is based on an estimate made in August by Sveriges Kommuner och Landsting (SKL). The real costs of the program will not be known until some time next year. Of these 3 billion, 1.3 billion is the actual cost of the vaccine – the rest is administrative costs, according to Göran Stiernstedt at SKL.

Given these assumptions, how much would it cost not to run a vaccination program in Sweden? Well, Sweden’s GDP (gross domestic product) was estimated to be $348.6 billion in 2008, which is about 2,437 billion Swedish kronor. So, if 10% of the population gets sick and stays home from work for two weeks, the loss in GDP would be (approximately):

0.10 * 2/52 * 2,437,000,000,000 kronor ≈ 9,373,000,000 kronor

So, even with the low assumptions I made above, Sweden would lose about 9 billion kronor just from people not coming to work. That’s a lot of money compared to the estimated cost of three billion. And we have still not included the increased costs of medical care in our figures. As an additional note: the Swedish tax level is almost 50% of GDP, which means that almost 4.5 of these 9 billions would end up in the government’s hands. In that perspective, the invested three billion seem to be well-spent money.
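For the curious, here is a minimal Python sketch of the same back-of-the-envelope calculation, using only the numbers from the assumptions above:

```python
# Back-of-the-envelope estimate of GDP lost to sick leave,
# using the assumptions stated in the post.
gdp_sek = 2_437e9        # Sweden's 2008 GDP, in kronor
sick_fraction = 0.10     # assumed share of the population falling ill
weeks_off = 2            # assumed sick leave per flu victim
tax_share = 0.50         # rough share of GDP that ends up as tax revenue

gdp_loss = sick_fraction * (weeks_off / 52) * gdp_sek
tax_loss = tax_share * gdp_loss

print(f"GDP loss: {gdp_loss / 1e9:.1f} billion kronor")   # about 9.4
print(f"Tax loss: {tax_loss / 1e9:.1f} billion kronor")   # about 4.7
```

Changing the assumptions (say, a 20% attack rate, or one week of sick leave) is a one-line edit, which is the point of writing it down like this.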

An unusual error?

I just ran into a problem using HMMER, where it aborts the building of an HMM profile (using hmmbuild) with this line: “FATAL: illegal state transition B->E in traceback”. Has anyone seen this HMMER error before? Does anyone know what it means and/or how to solve it? Please let me know as soon as possible.

hmmbuild - build a hidden Markov model from an alignment
HMMER 2.3.2 (Oct 2003)
Copyright (C) 1992-2003 HHMI/Washington University School of Medicine
Freely distributed under the GNU General Public License (GPL)
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Alignment file: alignment1.aln
File format: Clustal
Search algorithm configuration: Multiple domain (hmmls)
Model construction strategy: MAP (gapmax hint: 0.50)
Null model used: (default)
Prior used: (default)
Sequence weighting method: G/S/C tree weights
New HMM file: alignment1.hmm
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

Alignment: #1
Number of sequences: 5517
Number of columns: 2305

WARNING: Looks like amino acid sequence, hope that's right
Determining effective sequence number ... done. [1]
Weighting sequences heuristically ... [big alignment! doing PB]... done.
Constructing model architecture ...
FATAL: illegal state transition B->E in traceback

Solving problems in seconds

Sometimes the solution to a problem lies much closer at hand than you expect. In my work I usually perform the same task repeatedly on between 6 and 50 files. Even though Unix is very efficient in many ways, this still takes time to do by hand. I have thought of various ways around the problem, including using wildcards (*), but never got fully satisfied. This week, I finally came up with the simplest solution so far, and it took about a minute or two to implement. I don’t know why I didn’t think of this a year ago. Maybe I thought that I would only do these repetitive tasks a couple of times. I was wrong, but thanks to Perl I can now be much more efficient (and write this instead of typing Unix commands…). The good thing about my implementation (in my opinion) is that it’s so flexible. Here’s my code – please comment if you feel there are more efficient ways. “{}” is replaced by a number in each file name:


#!/usr/bin/perl
use strict;
use warnings;

my $versionID = "Version 1.0";
print "LoopCommand\n";
print "Version $versionID\n";
print "Written by Johan Bengtsson, October 2009\n";
print "-----------------------------------------\n";

print 'Execute command: ';
chomp(my $command = <STDIN>);
print 'From number: ';
chomp(my $start = <STDIN>);
print 'To number: ';
chomp(my $end = <STDIN>);

for (my $i = $start; $i <= $end; $i++) {
    my $exec = $command;
    $exec =~ s/\{\}/$i/g;    # replace every "{}" with the current number
    my $result = `$exec`;    # run the command and capture its output
    print $result;
}

Why the LP sounds better than the CD

One would think that the refinement of digital audio would make today’s CDs sound absolutely fantastic compared to the now sort-of-dead LP. It doesn’t, which I personally realised when I reconnected the old LP player to the stereo to listen to Behaviour (Pet Shop Boys’ fourth album, which I own in three different copies – the LP, the original CD master, and the 2001 remastered CD). I realised that the LP sounds so warm and dynamic compared to the CD. Yes, it pops and occasionally sways, but it is clearly more alive than the modern CD sound. But why is this?

The frequency range
The answer does not lie in the frequency range of the LP. Audiophiles have proposed this explanation, based on the assumption that the LP can deliver frequencies from about 10 Hz up to 100,000 Hz, while a CD, sampled at 44,100 Hz, can only reproduce frequencies up to about 22,000 Hz. While this is probably true (the LP numbers are much harder to pin down, as they are not fixed in hardware the way the CD’s are), it should be noted that we can only hear frequencies up to about 20,000 Hz. As a matter of fact, most of us don’t even have ears that good: a normal adult perceives frequencies only up to 16,000 or 18,000 Hz.

The factor of two between the CD’s 44,100 Hz sampling rate and its roughly 22,000 Hz frequency ceiling comes from the Nyquist sampling theorem: at least two samples are needed per cycle of the highest reproducible frequency. (It is not, as is sometimes claimed, because the signal is divided into two stereo channels – each channel is stored at the full sampling rate.) The LP, being analog, has no such hard sampling limit. This has led audiophiles to speculate that the CD sounds worse because of a lack of overtones above this ceiling. While that might be true for high-end hi-fi equipment, for most people with normal stereos the difference would be unnoticeable. It was, however, a real problem with early mp3 compression, which stripped away much of the higher frequencies, leaving a very “flat” sound with little “headroom”. Lack of headroom is not something that is usually complained about on CDs, though.
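The sampling-rate arithmetic fits in a couple of lines of Python (a sketch using the standard CD figure of 44,100 Hz):

```python
# Nyquist limit: a signal sampled at rate f_s can only represent
# frequencies up to f_s / 2, per channel (stereo does not halve this,
# since each channel is stored at the full sampling rate).
def nyquist_limit_hz(sample_rate_hz):
    return sample_rate_hz / 2

cd_limit = nyquist_limit_hz(44_100)   # CD audio
print(cd_limit)                        # 22050.0 Hz, just above human hearing
```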

Muddiness and lack of dynamics
Instead, when people (myself included) complain about the CD sound, the problem is rather related to muddiness and a lack of dynamics. Does this have to do with the medium itself? I have tried recording copies of some of my LPs and then burning them to CDs again. And amazingly enough, they sound like the LP originals. The CD is a very good medium for preserving the sound, and even the feeling – the pops and humming – of the LP. So the problem does not lie with the CD medium itself. But if it’s not the medium, what can it be?

Modern listening experience
The answer is deeply connected to how most people listen to music today, compared to how we did in the era of the LP. Back then, until LP sales started to decline in favour of CDs, people mostly listened to music on their stereo at home. Occasionally we brought music to the car radio, preferably on tapes, but that’s about as mobile as we got. With the arrival of the CD and cheap walkman-style music players, things changed dramatically. Today, most people listen to music on the run, using iPods or phones. This means a dramatic change in the surrounding sound environment.

To get a good listening experience, the modern music listener has to turn up the volume until no other sound can get in. This of course has an impact on hearing, but also on how music is mastered. For a very long time in record production, the last step has been mastering the finished mix into a version suitable for LP or CD (or another medium). When people mostly listened to music at home, this process was generally concerned with preserving the dynamics of the recording. But as it was also important to be heard on radio stations, master versions slowly progressed towards higher volume levels – just to get better radio impact. As any given medium has a limited resolution (this applies to both LP and CD), the drawback of an increased overall volume is a smaller dynamic range. This happens because the softest parts of the songs are amplified to a higher level, while the loudest parts can’t be amplified above the “roof” of the medium.

Mastering to be heard (and bought)
During the 1990s, and accelerating in the last decade, the volume issue has become even more important and complex in the mastering business. As people tend to listen to music in more crowded environments, often filled with other noise, it has become dramatically more important for a song to be heard. Thus, there is no room for close-to-quiet parts of a song; such parts would simply be drowned out. The CD mastering companies deal with this by amplifying those parts so that they can be heard even in noisy environments. But as explained above, this results in a loss of dynamics. Thus, modern CDs, adapted for a mobile audience, sound muddy and dull.

You can blame the music industry, or the MP3 player manufacturers, but it won’t really help. This is a matter of being heard or being forgotten. There are, however, some artists who go against the stream. Damien Rice’s two albums are examples of highly dynamic CDs, which are very hard to listen to on the bus with an iPod and headphones. The upside is that they sound great on the home stereo – even better than most LPs.

Comparison of sound quality
That this is the case can easily be heard if you have access to the same recording in different issues. To use an album I own in several formats as an example: compare the LP issue of Pet Shop Boys’ “Actually” to the original CD master, the CD I recorded from the LP, and the modern 2001 remaster of the album. The LP sounds great, but has some problems with pops and hums. It shares these problems with the CD I recorded from the LP. Then compare the original CD master to the LP: it sounds more or less the same. Maybe (and this is a big maybe) the dynamics have decreased a little, but that’s close to inaudible. Then compare with the 2001 CD remaster. The difference is striking. Suddenly, everything has become muddy, and it’s much harder to distinguish the individual instruments. The bass isn’t as deep. The higher frequencies seem to lack free air. Of course the technical quality is better than on the LP, but a lot of the musical density has been lost. This does not only apply to this album; it’s typical of modern post-2000 CD masters.

For sure, LPs would have suffered from the same problem, had the CD not come around and had people increasingly played their music on the go using walkmans. But digital music has clearly outpaced its analog counterpart, leaving us with some good old nice-sounding LPs, and a bunch of really crappy CD remasters that only sound good on the iPod in a noisy tram. Sometimes, life’s a bitch.

LogoMat-M, or how I started to hate source code and opted for precompiled binaries

LogoMat-M and its uses
I have recently struggled to install a bioinformatics program called LogoMat-M. LogoMat-M is a command-line program that creates visual representations of HMM profiles. An excellent example of the program in action can be viewed at Sanger’s LogoMat-M website. It creates images that look a bit like this:

The resulting images make it easy to interpret how common a given amino acid is at each position of a sequence alignment, where the alignment usually represents a protein family. So far, so good.

The problem is that the web service was not designed to work with large numbers of sequences, and thus returns nada when such sequence alignments are used. To solve this problem, I thought I would try to install the program locally, on my own computer, at least to receive a proper error message. This was a big mistake.

The “install” process
I started by downloading the LogoMat-M package (i.e. the source code – this is open source software, which often means there are no pre-compiled binaries). However, the build files complained that my computer lacked certain libraries and programs required for the LogoMat software to compile. Well, alright, I went out to find the missing pieces of software. I quickly tracked down the two missing components and downloaded them. Once again, these were open source programs – meaning no pre-compiled binaries. I tried to compile the first of them and was promptly told that a component called PDL was required, and that it could be obtained via a service called CPAN.

I started to get a bit frustrated, since I didn’t want to spend the whole day installing software – I wanted to construct images like the one above. However, I did as the instructions said, and text started flashing down my screen. Suddenly, cpan exited and said: “Could not compile. Compiler returned bad status.” Wow. How informative! How do you expect me to know what caused that?! So, now I was stuck. I could not compile LogoMat because I was missing another required program, and I couldn’t install that program because I lacked a component that, for some unknown reason, wouldn’t compile.

Now, the big problem here is that there is no way for me to get around this, because the documentation does not mention this kind of situation. I could, of course, contact the developers, but I was on a tight time schedule, and needed this to work. It was possible, if not likely, that it would take days for the developers (who do not get paid for this software, i.e. there is no official support channel) to sort out my problem.

Again, a mentality problem
Many times, open source software is praised for being open, but what people tend to forget is that a lot of this software is not at all easy to use. Or, in this case, even install. On Windows or Mac OS X, I would have fired up an installer, which would have installed a working pre-compiled binary on my system, with all its required libraries. It would work out-of-the-box. And if it didn’t, there would be someone to call.

Now, I don’t want to call for open source developers to set up call centres to support their programs; that would just be ridiculous. But I beg you: please make pre-compiled, working versions, including the required libraries, and supply these for at least the most common platforms. Depending on the kind of software, that could be Windows, Mac OS X, Ubuntu and Red Hat Linux, for example. Don’t bother with pre-compiled software for strange and uncommon architectures; people running those probably know how to compile their software anyway. But please supply some easy-to-use, pre-compiled programs for the rest of us. Because otherwise we will never be able to get our work done using open source alternatives, and that benefits neither our work nor the open source community in general. The situation described above only benefits big corporations selling overpriced software. And that is really, really sad.