Efficiency and Computability

Some simple programs take an incredibly long time to execute. Consider the following function.

// returns 1 for n < 3; otherwise combines the three previously computed values
int slow(int n)
{       if(n < 3) return 1;
        return slow(n-3) - slow(n-2) + 2*slow(n-1);
}

This recursive function takes about twice as long to compute a value for n = 25 as it does for n = 24. If you execute the following fragment you will see what I mean.

int i = 0;
while(i < 30)
{       System.out.println( slow(i) );
        i++;
}

The first few lines show up very quickly. For these, the execution time is dominated by the time needed to do the output. But then the computation of the function slow starts to take more and more time, until it becomes the dominant part of the work. If it takes about 5 seconds to compute slow(24) and a bit less than 10 seconds to compute slow(25), and if this doubling continues, how long will it take to compute slow(30)? How about slow(45)? How long is ten million seconds? How many seconds have you been alive?
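
If you want to check the arithmetic behind these questions, the following fragment extrapolates under the doubling assumption. It is only a rough sketch: the 5 seconds for slow(24) and the exact doubling are assumptions taken from the paragraph above, not measurements from your machine.

double seconds = 5.0;                          // assumed time for slow(24)
for(int n = 25; n <= 45; n++)
{       seconds = seconds * 2.0;               // assume the time doubles with each n
        if(n == 30 || n == 45)
                System.out.println("slow(" + n + ") would take about " + seconds
                        + " seconds, or " + seconds/86400.0 + " days");
}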


If you want to time the above program you can use a StopWatch object to do so.

StopWatch timer = new StopWatch();
timer.start();
timer.mark();
int i = 0;
while(i < 30)
{       System.out.println("" + i + ' ' + slow(i) );
        timer.mark();
        i++;
}
timer.stop();

After a stopwatch is created and you start it, the mark method will print out the elapsed time since it was last started and the "lap time," which is the elapsed time since the last mark.
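
The StopWatch class used here is supplied with the materials for this text. If you do not have it, a minimal sketch along the following lines behaves roughly as just described; the method names start, mark, and stop are taken from the usage above, but the real class may differ in detail.

public class StopWatch
{       private long startTime;                // time of the last start()
        private long lastMark;                 // time of the last mark()

        public void start()
        {       startTime = System.currentTimeMillis();
                lastMark = startTime;
        }

        public void mark()                     // print elapsed time and lap time, in seconds
        {       long now = System.currentTimeMillis();
                System.out.println("elapsed: " + (now - startTime)/1000.0
                        + "  lap: " + (now - lastMark)/1000.0);
                lastMark = now;
        }

        public void stop()                     // one final report
        {       mark();
        }
}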

Efficiency

Efficiency is the science of measuring the time and space resources needed by a program. The program above is said to have exponential running time, since the time taken to execute it, viewed as a function of one of its important inputs (its parameter in this case), has the form of an exponential function.

One point must be carefully emphasized. An algorithm can have an efficiency that is worse than that of other algorithms for the same problem. For example, the value computed by the slow function above can be found in linear time if we simply start our computation at k = 3, work up to n, and save the three most recently computed values, as in the sketch below. This means that the problem has (at worst) linear time requirements, even though it has at least one solution that is exponential. Carefully distinguish between the efficiency required by a problem and the efficiency of a particular algorithm that solves that problem. Each problem has a "best possible" efficiency, though that may not be known in a particular case.
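
Here is one way to write that linear-time version. The name fast and the variable names are mine, not part of the original program; the point is only that each value from 3 up to n is computed exactly once, from the three values just before it.

int fast(int n)
{       if(n < 3) return 1;
        int a = 1, b = 1, c = 1;               // slow(k-3), slow(k-2), slow(k-1)
        int result = 1;
        for(int k = 3; k <= n; k++)
        {       result = a - b + 2*c;          // same formula as the recursive version
                a = b;                         // slide the window of saved values
                b = c;
                c = result;
        }
        return result;
}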

Computability

Computability is the science of determining what can and cannot be computed. It is a surprise to many that there are functions that cannot be computed on any computational device. There are two aspects to this; one is related to efficiency and the other is not.

A problem can be so difficult and can require so much time to compute that there may not be time or resources available (or even potentially available) to compute it. For example, the "n-body problem" of determining the gravitational behavior of a large number of bodies in motion is beyond our capability to compute exactly. It might take billions of years, using all of the computational power that might possibly be created (even if a computer with the power of a Cray could be built out of a single atom), to solve such a problem exactly. We must then settle for approximate solutions, which may or may not be sufficient.

There is one class of problems that is especially interesting for theoretical reasons. Problems like the traveling salesman problem and integer programming have best known solutions that are exponential, but no one has ever shown that there isn't a better-than-exponential solution. In fact, if any one of these problems (very many are known) could be solved in polynomial time, then they all could be. The proof of this latter fact exploits transformations that show how an instance of one of them can be transformed in polynomial time into an instance of another. These problems are called NP-complete.
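
To get a feeling for what exponential (or worse) growth means here, the following sketch tries every ordering of the cities in a tiny traveling salesman instance and keeps the cheapest tour. It is only an illustration of the brute-force approach, not the best known algorithm, and the four-city distance table is invented for the example; the number of orderings it examines grows as n factorial.

public class TspBruteForce
{       static int[][] dist = { {0, 2, 9, 10},
                                {2, 0, 6, 4},
                                {9, 6, 0, 3},
                                {10, 4, 3, 0} };       // made-up symmetric distances
        static int best = Integer.MAX_VALUE;

        public static void main(String[] args)
        {       int[] tour = {0, 1, 2, 3};
                permute(tour, 1);                      // city 0 is fixed as the start
                System.out.println("cheapest tour costs " + best);
        }

        // Generate every ordering of tour[k..], scoring each complete tour.
        static void permute(int[] tour, int k)
        {       if(k == tour.length)
                {       best = Math.min(best, cost(tour));
                        return;
                }
                for(int i = k; i < tour.length; i++)
                {       swap(tour, k, i);
                        permute(tour, k+1);
                        swap(tour, k, i);
                }
        }

        static int cost(int[] tour)                    // tour length, returning to the start
        {       int sum = 0;
                for(int i = 0; i < tour.length; i++)
                        sum += dist[tour[i]][tour[(i+1) % tour.length]];
                return sum;
        }

        static void swap(int[] a, int i, int j)
        {       int t = a[i]; a[i] = a[j]; a[j] = t;
        }
}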

The other aspect of computability is that there are many problems that have no computable solution at all. The proof of this is quite interesting and involves some facts about Turing machines. A Turing machine is a very abstract model of computation. There are good reasons to believe that the Turing machine (named for Alan Turing) is a universal model of computation: many other models of computation have been proposed, but all have been shown to be equivalent in power to the Turing machine.

One interesting fact about Turing machines is that they can simulate the execution of other Turing machines. To do this, the Turing machine to be simulated must be encoded, perhaps as a sequence of zeros and ones. Any Turing machine can be so encoded in a finite string of symbols. There is only a countable infinity of such encodings, hence only a countable number of Turing machines, and hence only a countable number of computable functions. (The integers have a countably infinite number of elements, but the real numbers form a set with a larger "cardinality.") However, it can also be shown that there are uncountably many functions (problems). Therefore there are a great many (uncountably many) problems with no computable solution. However, it is unlikely that many of these particular problems would be of interest in the "real world," as examples are very difficult to construct and are somewhat contrived.
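
The counting argument above can be stated compactly. The following is only a sketch of the usual reasoning, written out in LaTeX, not a full proof.

\documentclass{article}
\usepackage{amssymb}
\begin{document}
Each Turing machine has a finite encoding over a finite alphabet, so the set of
Turing machines injects into the set of finite binary strings:
\[ |\{\text{Turing machines}\}| \;\le\; |\{0,1\}^{*}| \;=\; \aleph_0 . \]
By Cantor's diagonal argument, the functions from $\mathbb{N}$ to $\{0,1\}$
already form an uncountable set:
\[ |\{\, f : \mathbb{N} \to \{0,1\} \,\}| \;=\; 2^{\aleph_0} \;>\; \aleph_0 . \]
Hence there are functions that no Turing machine computes.
\end{document}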

Investigations

In a good book on computability, investigate and discuss the following two concepts, with examples.

Big O notation

Recurrence relations