What Is The P Vs NP Problem


Setting the scene with addition

So we're going to talk about how efficient and effective algorithms are, what that means, how we talk about it, and some interesting conclusions.

We'll start with some school arithmetic. We learn how to add two numbers that each have a single digit. What is 3 plus 5? What is 4 plus 9? We learn these single digit sums, and if we forget we can just add things up on our fingers (and sometimes our toes).

But when it comes time to add bigger numbers, we run out of fingers and toes, and we are forced to use marks on paper. We can use tally marks, but when you have 643 sheep it's a bit tedious to have to put a mark for each one. It's better to have the numbers, and then we want to be able to do arithmetic.

So we learn long addition. (No, I'm not going to talk about long division!) We learn that we write the two numbers out, one above the other, and we add them in columns a digit at a time. If the total for two of these numbers is more than 9 then a "carry" spills over to the next column, and so on.
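The column-at-a-time procedure can be sketched in code. This is just an illustration (the function name `long_add` is my own), working on the decimal digits directly rather than using the machine's built-in addition:

```python
# School long addition: add two numbers a column at a time, right to
# left, letting any total above 9 "carry" into the next column.
def long_add(x, y):
    a, b = str(x), str(y)
    k = max(len(a), len(b))
    a, b = a.rjust(k, "0"), b.rjust(k, "0")   # pad to equal length
    carry, digits = 0, []
    for i in range(k - 1, -1, -1):            # right-most column first
        total = int(a[i]) + int(b[i]) + carry
        digits.append(str(total % 10))        # the digit for this column
        carry = total // 10                   # spills into the next column
    if carry:
        digits.append(str(carry))
    return int("".join(reversed(digits)))
```

Each digit is visited exactly once, which is the informal reason the time taken is linear in the number of digits.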

It's pretty clear that bigger numbers take longer to add, because there is more to do. Twice the number of digits, twice the time taken. You might wonder if there would be a faster way to do this, a question we will return to time and time again, but since we have to look at every digit it seems reasonable that we can't do better.

We could ask about using binary instead of decimal, or base 8, or base 60 (as the Babylonians did), and in fact if you've remembered all of your base 60 single digit sums, then because the number is shorter in base 60 it will take less time. But it's still the case that twice the digits will take twice the time.

Similarly, if we do it in binary each column will be quicker, but on average a number written in binary has three and a bit times as many digits as it does in base 10, so we'll take three and a bit times as long. But it's still the case that twice the digits will take twice the time.
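For a quick sanity check on that "three and a bit" (it's $\log_2 10\approx 3.32$), here's a small experiment; the number chosen is arbitrary:

```python
import math

# A 20-digit decimal number needs about 20 * log2(10) = 66.4... binary
# digits, i.e. 67 bits.
n = 10**20
print(len(bin(n)) - 2)   # binary digits ("0b" prefix stripped) -> 67
print(math.log2(10))     # the "three and a bit" factor, about 3.32
```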

Time taken is some constant times the number of digits. The constant will be different for different bases, etc, but it will still be:

$$T = cS$$

where $T$ is the time taken, $S$ is the size of the numbers involved, and $c$ is some constant.

Moving up to multiplication

Now let's have a look at long multiplication. Here's an example:

               3   6   5
               8   7   2
              =========== \
               6  12  10   \   Central
          21  42  35        >  Section
      24  48  40           /
     ==================== /
      24  69  88  47  10

      31   8   2   8   0

I've done this in a slightly unusual way to emphasise that the central section of the table is made up of the products of the individual digits of the initial numbers. The final number is made up by adding the columns and carrying digits as necessary.

We can ask how long this will take. Well, if we have $n$ digits in the first number, and $m$ digits in the second number, we have $nm$ products in the central section, and then we have up to $n+m$ columns to add, each with up to $\min(m,n)$ numbers.

That sounds complicated.

But this is where a simplification comes to our rescue - we choose to ignore the constant involved, and just look at the general shape of the time taken. In short, the time taken to multiply will be dictated by the number of cells we have to complete in the above table, and that's proportional to $mn.$ As a result we can say that the time taken will be given by:

$$T = c_2k^2 + c_1k + c_0$$

where $c_2,\;c_1,$ and $c_0$ are constants, and $k$ is the number of digits in the larger number.

As the numbers here get larger and larger, so this time gets totally dominated by the first term, $c_2k^2,$ so we say that the algorithm here is $k^2$ in the size of the input numbers. It's quadratic.
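The whole table-filling procedure can be sketched as code, again just as an illustration (the name `long_multiply` is my own). The nested loop is exactly the $nm$-cell central section, and the second loop is the column-adding with carries:

```python
# School long multiplication: every pair of digits is multiplied once,
# so an n-digit by m-digit product costs about n*m steps.
def long_multiply(x, y):
    xs = [int(c) for c in reversed(str(x))]   # least significant first
    ys = [int(c) for c in reversed(str(y))]
    cols = [0] * (len(xs) + len(ys))          # one slot per column
    for i, dx in enumerate(xs):
        for j, dy in enumerate(ys):           # the n*m "central section"
            cols[i + j] += dx * dy
    carry, out = 0, []
    for c in cols:                            # add columns, carry as needed
        total = c + carry
        out.append(total % 10)
        carry = total // 10
    while carry:
        out.append(carry % 10)
        carry //= 10
    return int("".join(str(d) for d in reversed(out)))
```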

Is there a better way?

Since the answer has at most twice the number of digits as the larger number, and we have to look at each digit, it's certainly true that we can't multiply numbers in better than time that's linear in the size (in digits) of the numbers involved. The usual school algorithm is quadratic. Can we do better?

As it happens, yes we can. There are algorithms that are significantly faster, but for smaller numbers the extra house-keeping is horrendous and absolutely not worth it. However, in cryptography we are often dealing with very, very large numbers, and in that case it really can make a difference. So there are algorithms that are much faster for really, really large numbers. We'll mention one later.

Is general multiplication harder than squaring?

We've been looking at general multiplication, but we might wonder if it's easier in the special case of the two numbers being equal. It's obvious that if we can do general multiplication then we can square, but what if we can square things quickly?

What if we had an oracle to square numbers instantly? Could we use the oracle somehow to multiply any two numbers?

The answer is, perhaps surprisingly, yes, we can. Here's how.

Start with two numbers, $a$ and $b,$ and to start with let's assume that they are either both odd or both even. As an example I'll use 256 and 138, but I suggest you pick your own and play along.

We compute their mid-point, $m=(a+b)/2,$ which in my case is $197.$ We also compute the difference from the mid-point to one of our numbers: $d=a-m,$ which in my example is $59.$

Now we use our oracle to instantly calculate $m^2$ and $d^2,$ which in my case are $38809$ and $3481.$ And now the small piece of magic is that the product $ab$ is the difference between $m^2$ and $d^2.$ This works because $ab=(m+d)(m-d)=m^2-d^2.$ So in this case you can check that $256{\times}138$ is $38809-3481.$

And it is.

Left as an exercise for the interested reader - what if the numbers are one even, one odd? What might you do then?

So that means that if we have a squaring oracle we can, with just some extra addition and subtraction, work out any product we want to. So in some sense we don't have to know how to do general multiplication; we can just learn how to square, and then use the above technique.
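As a sketch, here's the trick as code; `square` stands in for the oracle (here it just multiplies, since we don't have a real oracle to hand), and this version assumes the two numbers have the same parity, as above:

```python
# Multiplying via a squaring "oracle", assuming a and b are either
# both odd or both even, so the mid-point m is a whole number.
def square(n):          # stand-in for the instant squaring oracle
    return n * n

def multiply_by_squares(a, b):
    assert (a - b) % 2 == 0, "a and b must be both odd or both even"
    m = (a + b) // 2    # mid-point
    d = a - m           # distance from the mid-point
    # ab = (m + d)(m - d) = m^2 - d^2
    return square(m) - square(d)
```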

Faster multiplication (optional section)

Now we can see one method for faster multiplication. We can square numbers with $k$ digits in time faster than $k^2$ as follows.

Suppose $N$ has $k$ digits, and split $N$ into two pieces, so $N=a{\times}10^h+b$ where $2h=k.$ (We round $h$ up if necessary). Then we can compute:

$$N^2 = a^2{\times}10^{2h} + 2ab{\times}10^h + b^2$$

We can accomplish this with two squarings and one multiplication, in each case using numbers that are half the size of the original. We then repeat the process on these smaller numbers, eventually getting to single digits. An analysis of this shows that the time now is proportional to $k^{\log_2 3}\approx k^{1.585},$ which grows more slowly than $k^2.$ The cost is all the extra fiddling about and remembering intermediate results.

But if the numbers are huge to start with, this is definitely worth it.
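Here's a sketch of the recursion. One caveat: for clarity this version also reduces the half-size product $ab$ to squarings, via $4ab=(a+b)^2-(a-b)^2,$ which gives four half-size squarings and so doesn't, by itself, beat $k^2$; keeping the count down to three half-size operations is what delivers the speed-up.

```python
# Recursive squaring: split N = a*10^h + b, then
#   N^2 = a^2 * 10^(2h) + 2ab * 10^h + b^2,
# with the cross product ab obtained from squarings:
#   4ab = (a+b)^2 - (a-b)^2.
def fast_square(n):
    n = abs(n)                       # (a - b) below may be negative
    s = str(n)
    if len(s) == 1:                  # single digits: just look it up
        return n * n
    h = len(s) // 2
    a, b = n // 10**h, n % 10**h     # the two pieces of n
    ab = (fast_square(a + b) - fast_square(a - b)) // 4
    return fast_square(a) * 10**(2*h) + 2 * ab * 10**h + fast_square(b)
```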

Clarifying the words we use

In the case of multiplication and squaring, we've not really said what we meant by "problem". We talk about the "problem" of multiplication and the "problem" of squaring, so what do we actually mean?

In the case of squaring (which we've seen is, with a little extra work, the same as multiplying) we have a(n infinite) collection of numbers, and for each of them we want to know their square. So the "problem" of squaring is a collection of questions, and in each case we want to know the answer.

This turns out to be a useful way of thinking about this. So we take the technical definition to be that:


A *Problem* is a collection of questions.

An *Algorithm* for a problem is a process or set of rules that, when followed, and when given the question, will give back the desired answer.

The *Time Complexity* of an algorithm is the function that tells us, as a worst case, how long it will take to find the answer for a given question.


So we can talk about the "problem of squaring" as being the (infinite) collection of numbers to be squared. As we have seen above, the time complexity of addition is linear, the time complexity of multiplication is at worst quadratic (although it's actually at worst $n^{\log_2 3}\approx n^{1.585},$ and in fact it's the subject of on-going research, and is now known to be better than that), and the time complexity of squaring is the same as that for multiplication.

A related problem - going the other way

So let's look at another problem: Factoring integers. In this case we are given an integer and we are asked to find a factor. There is a simple algorithm to solve this: Simply start at 2 and see if that's a divisor. If so, stop, otherwise ask about 3, then 4, then 5, and so on. Since every number is divisible by itself this will eventually stop, and so you have found a (possibly trivial) divisor.

The problem with this is that it's slow. Really slow. If you have a five decimal digit number then you might have to perform a huge number of tests. In the worst case the number you're given is prime, and so you have to test every number up to that.

Of course we can improve this enormously. If a number is prime then we don't need to look for a factor, and there are fairly quick ways to decide if a number is prime (with near certainty - long story). Then if the number isn't divisible by 2 then it certainly isn't divisible by any even number, so we don't have to bother dividing by 4, 6, 8, and so on. Taking that argument to its logical conclusion, we only need to test primes as potential divisors.

Finally, if there is a factor then at least one factor must be no more than $\sqrt{N}$ where $N$ is the number we are trying to factor, so we can abort as soon as we get to $\sqrt{N}.$
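Putting those improvements together (except for testing only primes, which would need a list of primes to hand), a sketch of the improved trial division looks like this; `smallest_factor` is my own name for it:

```python
# Improved trial division: test 2, then odd numbers only, stopping at
# sqrt(N).  Returns the smallest non-trivial factor, or None if N is
# prime.
def smallest_factor(n):
    if n % 2 == 0:
        return 2
    d = 3
    while d * d <= n:       # a factor, if any, is at most sqrt(n)
        if n % d == 0:
            return d
        d += 2              # skip the even candidates
    return None             # no factor found: n is prime
```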

Even so, for a number with $k$ digits, we still need to perform some $b^k$ trial divisions, where $b$ is some constant. The time complexity of this algorithm for factoring is not polynomial in the size of the question, it is exponential.

The interesting thing here is that this is sort of an inverse to the question of multiplication. So we have a problem that's relatively easy going one way (multiplying) and yet apparently difficult going the other way (factoring).

And I say "apparently" because we just don't know how bad factoring really is. There are some phenomenally clever algorithms now, much cleverer than the simple trial division, that take time that is less than exponential, but more than polynomial. That's a little hard to take in, and the exact circumstances and expressions are tricky, but so far, with our current theoretical understanding, and using classical (non-quantum) computers, the time complexity of integer factoring is super-polynomial, but sub-exponential.

We need to remember here that when we talk about the time complexity we are talking about the worst case. Most random numbers that you pick can be factored quite quickly - the proportion of numbers that are actually difficult to factor is quite small, and gets smaller as you deal with larger and larger numbers. After all, if you pick a large random number there's a 50% chance it will be even, and hence trivial to find a factor. Even if it's odd there's a 33% chance it will have 3 as a factor. And so on. In a very real sense, most numbers are easy to factor.

But some are hard (with our current understanding), and those are the ones we have to worry about when we discuss the time complexity of an algorithm.

Pause for a summary so far ...

Factorisation versus Multiplication

Clearly we have a connection between factorisation and multiplication. They are inverse problems, and we might wonder if there's a way of using one to solve the other.

Well, in a way there is. Let's suppose we can multiply really, really fast (which in a sense we can) and we want to factorise $N.$ We use the following algorithm:

    Guess a number $d$ with $1<d<N$;
    divide $N$ by $d$;
    if the remainder is zero then $d$ is a factor, and we're done;
    otherwise, guess again.

Now you'll see that this algorithm doesn't use multiplication, it uses division. Just as we can take an algorithm for squaring and use it to create an algorithm for general multiplication, so we can take an algorithm for multiplication and use it to create an algorithm for division (with a bit of extra work). I'm not going to do that here; you could regard it as an exercise for the interested reader.

We can analyse this algorithm and see that it takes a polynomial amount of time. The problem is that we might have to make a lot of guesses to make it work! And that's where it ends up going wrong.
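The guessing version can be sketched like this; each individual attempt is cheap (one division), and all the difficulty has been moved into how many guesses we might need:

```python
import random

# Guess-and-divide factoring: each guess is checked with a single
# division, but we may need very many guesses before one succeeds.
def guess_a_factor(n, attempts=100000):
    for _ in range(attempts):
        d = random.randrange(2, n)   # the non-deterministic "guess"
        if n % d == 0:
            return d                 # a right guess verifies instantly
    return None                      # ran out of patience
```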

Emergence of the definition of "NP" ...

Even so, we have here a "non-deterministic" algorithm - one that takes a polynomial amount of time, provided we guess right. As such, this is a "non-deterministic polynomial time algorithm." And hence it's called an NP time algorithm, or simply an NP problem.

So just to recap, an NP problem is one that has an algorithm like this:

    Guess a potential answer;
    check, in polynomial time, whether that answer is correct.

So we can see that Integer Factorisation falls into this category. We can just guess a solution and then divide or multiply to see if it is right. So Integer Factorisation (FAC) is an NP problem.
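The defining feature is that checking a guess is cheap, whatever finding one costs. As a sketch (the function name is mine):

```python
# Checking a guessed certificate for factorisation: one comparison and
# one division, so polynomial in the size of n.
def check_factor_guess(n, guess):
    # the guess must be a non-trivial divisor of n
    return 1 < guess < n and n % guess == 0
```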

This sounds stupid, but it's important

So let's consider a problem that can be solved in polynomial time. To abuse the notation, and at the risk of confusion, we'll call the problem $P.$ So $P$ is a polynomial problem, and that means we have an algorithm $\mathscr{A}$ that can solve a question from $P$ in polynomial time.

Is problem $P$ in the class of problems $\mathscr{NP}$ ?
