Perrin Sequence


This is a Work In Progress ...


I'm going to start here with something that seems to just come from nowhere. It doesn't, as you might imagine, but it seems to. So stick with me for a minute.

Let's write down the numbers from 1 to 20. Underneath 1 we write the number 0. Underneath 2 we write 2, and underneath 3 we write 3. These numbers in the second row, 0, 2, 3, are the start of a sequence. To get the next number, look back, skip over the most recent number, and then add the two before it. So to get the number under the 4 we skip over the 3, and add 0 and 2. So that's just 2.

   1   2   3   4   5   6   7   8   9  10  11  12  13  14  15  16  17  18  19  20
   0   2   3   2
               ^ skip the 3, add 0 + 2

We do this again and again, each time skipping over the most recent number and adding the two before. So the new sequence goes: 0, 2, 3, 2, 5, 5, 7, 10, 12, 17, 22, 29, 39, and so on.

   1   2   3   4   5   6   7   8   9  10  11  12  13  14  15  16  17  18  19  20
   0   2   3   2   5   5   7  10  12  17  22  29  39  51  68  90 119 158 209 277
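
If you want to play along at home, here's a minimal sketch in Python (the function name is my own) that builds the second row by exactly this skip-and-add rule:

    # Build the second row: start with 0, 2, 3, then each new term skips
    # the most recent number and adds the two before it.
    def second_row(count):
        k = [0, 2, 3]                    # the numbers under 1, 2, 3
        while len(k) < count:
            k.append(k[-2] + k[-3])      # skip k[-1], add the two before it
        return k

    print(second_row(20))
    # [0, 2, 3, 2, 5, 5, 7, 10, 12, 17, 22, 29, 39, 51, 68, 90, 119, 158, 209, 277]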

As you might imagine, I wouldn't ask you to do this if there were nothing interesting. Can you see anything interesting? Let me help.

Under the 2 is a 2. Under the 3 is a 3. Under the 5 is a 5, and under the 7 is a 7. Let's circle the numbers in the top row: 2, 3, 5, 7.

   1 (2) (3)   4 (5)   6 (7)   8   9  10  11  12  13  14  15  16  17  18  19  20
   0   2   3   2   5   5   7  10  12  17  22  29  39  51  68  90 119 158 209 277

Some readers will think "Primes!" and look at the 11. Alas, the number under that is not 11. But it is 22, which is a multiple of 11.

Hmm. What about 13?

Ah! The number under the 13 is 39, which, again, is a multiple. I wonder if it always happens that the number under a prime is a multiple of that prime.

So we start checking. And checking, and checking, and checking some more. And yes, it does seem that the number under a prime is always a multiple of that prime.

   1   2   3   4   5   6   7   8   9  10  11  12  13  14  15  16  17  18  19  20
   0   2   3   2   5   5   7  10  12  17  22  29  39  51  68  90 119 158 209 277
       *   *       *       *               *       *               *       *

Isn't that odd.

What about the others? What about the composites? Does it ever work for them?

Well, you can try it up to 100,000 or so and it seems that the number underneath is never a multiple of the number above when the number above is composite.
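
If you want to run that check yourself, here's a rough sketch in Python (my own names throughout, with a simple trial-division primality test, and only the first few thousand terms because the numbers grow quickly):

    # Compare "n divides k(n)" with "n is prime" for n = 2, 3, 4, ...
    def is_prime(n):
        if n < 2:
            return False
        d = 2
        while d * d <= n:
            if n % d == 0:
                return False
            d += 1
        return True

    k = [0, 2, 3]                        # k(1), k(2), k(3)
    while len(k) < 5000:
        k.append(k[-2] + k[-3])

    for n in range(2, 5001):
        if (k[n - 1] % n == 0) != is_prime(n):
            print("Mismatch at", n)      # prints nothing for n up to 5000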

Isn't that strange.

Does it keep working? Does that go on forever? Does it ever go wrong?

As you compute $k(n)$ [for that is what we call the second row of numbers] you'll find the values get pretty large. In fact, they grow exponentially (a term bandied about incorrectly by so many people). For all but the first few values of $n,$ $k(n)$ is literally the closest integer to $\alpha^n$ for the right value of $\alpha.$ That makes it hard to work with these numbers. In fact, a little arithmetic shows that $k(n)$ has roughly $n/8$ digits, so these numbers get pretty big, pretty fast.
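
As a quick sanity check of that digit count, here's a Python sketch (the numerical value used for $\alpha$ is the real root of $x^3 = x + 1,$ quoted to ten decimal places):

    # k(n) should have roughly 0.12*n decimal digits.
    from math import log10

    k = [0, 2, 3]                        # k(1), k(2), k(3)
    while len(k) < 800:
        k.append(k[-2] + k[-3])

    alpha = 1.3247179572                 # the real root of x^3 = x + 1
    print(len(str(k[799])))              # digits of k(800): 98
    print(round(800 * log10(alpha)))     # predicted: 98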

These days modern computers have no trouble working with numbers that big, and you can find out after a while that it does, in fact, fail. There is a composite number that is incorrectly predicted as prime, that is to say, there is a number $n$ such that $k(n)$ is a multiple of $n.$

And for some people that would be the end of it. But for me, the nagging questions remained.

So I started to dig a little more.

First I worked on the computational aspect: How can we compute $k(n)$ efficiently? It turns out there are several tricks we can use.

The first problem to overcome is the sheer size of the numbers. It doesn't take long before we're dealing with numbers with thousands of digits, and if we want to get serious then the numbers will get huge.

But in the end, all we want to know is whether $n$ divides $k(n).$ So for a given $n$ we can compute $k(n)$ modulo n. If the result is zero, then $n$ divides $k(n)$ (and hence is predicted to be prime).

Well, that's handy. If we do that, the numbers never get bigger than $2n$ (remember, we add two numbers that might each be nearly $n$), and numbers up to $2n$ aren't so bad. The problem is that if we then want to test $n+1$ we have to start all over again, because the calculation modulo $n$ doesn't help at all.
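
As a sketch (Python, my own naming), running the recurrence modulo $n$ looks like this:

    # k(n) modulo n, computed by running the whole recurrence modulo n,
    # so no intermediate value ever exceeds 2n.
    def k_mod(n):
        triple = [0, 2 % n, 3 % n]       # k(1), k(2), k(3) modulo n
        if n <= 3:
            return triple[n - 1]
        a, b, c = triple
        for _ in range(n - 3):           # step forward until c is k(n)
            a, b, c = b, c, (a + b) % n
        return c

    print([n for n in range(2, 60) if k_mod(n) == 0])
    # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59] -- the primes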

So we've made things worse. Now to compute the results for $n$ and $n+1$ we have to do $n+(n+1)$ additions, and not just $(n+1).$ That's a bad deal that rapidly makes things slow to a crawl.

But it turns out we don't have to test every number. We can sieve out some numbers very quickly, and just deal with the remainder.

Hang on to your hats, this gets a little tricky (or just skip to the next section.)

Suppose we want to see if $n$ divides $k(n).$ If $n$ is not prime, we expect the answer to be no. Let's suppose, for example, that 3 divides $n.$ If 3 does not divide $k(n),$ then $n$ cannot divide $k(n)$ either, so we can immediately stop and move on.

So let's compute the sequence $k(n)$ modulo 3, and let's do that in advance. We will find that it will repeat (because there are only 27 possible triples, and once one of them repeats it will be stuck in a loop). So compute the cycle of $k(n)~(mod~3),$ and that small amount of work allows us to work out what $k(n)~(mod~3)$ will be for any value of $n.$

If that $k(n)~(mod~3)$ is not zero, and if 3 divides $n,$ then $n$ does not divide $k(n),$ and we're done with almost no work.

This is magic. If we pre-compute $k(n)$ modulo 2, 3, 5, 7, 11, 13, 17, and 19, then we have a very fast test to reject (maybe) any number that is divisible by one of them, and that's most numbers. Granted, a few will pass the test and need to be checked properly, and granted, some numbers won't be divisible by any of them and will need checking, but this process will sieve out 90% of all numbers, allowing us to concentrate on only the few remaining.
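
Here's a sketch of that pre-computation in Python (my own names; the period-finding is the simple-minded "step until the starting triple comes round again"):

    # For a small prime p, pre-compute the repeating cycle of k(n) mod p.
    # The triple (k(n), k(n+1), k(n+2)) mod p determines everything that
    # follows, and there are only p**3 possible triples, so it must cycle.
    def cycle_mod(p):
        start = (0 % p, 2 % p, 3 % p)     # (k(1), k(2), k(3)) mod p
        a, b, c = start
        values = []
        while True:
            values.append(a)              # k(1) mod p, k(2) mod p, ...
            a, b, c = b, c, (a + b) % p
            if (a, b, c) == start:
                return values             # k(n) mod p == values[(n - 1) % len(values)]

    CYCLES = {p: cycle_mod(p) for p in (2, 3, 5, 7, 11, 13, 17, 19)}

    def quickly_rejected(n):
        # If some small prime p divides n but does not divide k(n),
        # then n cannot divide k(n) either, so n is rejected cheaply.
        for p, cycle in CYCLES.items():
            if n % p == 0 and cycle[(n - 1) % len(cycle)] != 0:
                return True
        return False

    print(quickly_rejected(6))            # True:  2 divides 6, but not k(6) = 5
    print(quickly_rejected(97))           # False: 97 needs the full test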

But those will still take a long time each, and that's where we use a variant of Russian Peasant Multiplication.

Er, what?

Yes. An ancient algorithm for multiplying numbers will help us compute $k(n)$ amazingly quickly, with the aid of matrices.

Er, what?

Yes, we will adapt Russian Peasant Multiplication (hereinafter "RPM") to compute large powers of matrices, and that will let us compute $k(n)$ very efficiently.

Er, what?

Let's take a quick diversion.

Here's RPM. Let's compute 13 * 17. Start with a running total of 0. At each step, if the first number is odd, add the second number into the running total; then halve the first number (rounding down) and double the second. Stop when the first number reaches 0.

F :   13    6    3    1    0
S :   17   34   68  136  272
R :    0   17   17   85  221

The final running total is 221, which is 13 times 17.

Which is right.

Which is good.

So why does this work? As with all algorithms, it's worth trying to understand it two ways.

Let F be the first number, S be the second number, R be the running total, and look at this sum:

F x S + R = Answer

When we start, R=0, so our desired answer is FxS. If F is even then we can halve F and double S, and the equation is still true. But what if F is odd? Let H be half of F, rounded down. Then F = 2H+1, and so:

(2H + 1) x S + R = 2H x S + (S + R) = Answer

Adding S into R and writing that as N leaves us with

2H x S + N = Answer

So now we again have an even number in the first column, and we can halve that, double the second, and carry on.

Finally we end up with:

0 x S + R = Answer

and so the running total is the answer we want.

Another way to think about this is that it's a binary multiplication. The first number in this example, 13, is 1101 in binary, or 8+4+0+1. That means we want 8xS + 4xS + 0xS + 1xS. Working through the above table you can see that happening as we double up S and add in the appropriate multiples.
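
Here's the whole process as a short sketch in Python, with the halving, doubling, and running total spelled out:

    # Russian Peasant Multiplication: repeatedly halve F and double S,
    # adding S into the running total whenever F is odd.
    def rpm(f, s):
        total = 0
        while f > 0:
            if f % 2 == 1:               # F is odd: add S into the total
                total += s
            f //= 2                      # halve F, rounding down
            s *= 2                       # double S
        return total

    print(rpm(13, 17))                   # 221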

But how does this help compute k(n)? And what does it have to do with matrices?

Let's see how matrices come into it. Let's write k(1), k(2), and k(3) in a column: [0,2,3]', and see what happens when we multiply by the matrix M=[ 0, 1, 0 / 0, 0, 1 / 1, 1, 0 ]. We get [2,3,2]'. That's k(2), k(3), k(4). Multiplying by M has stepped our triple on by one place. Try again, and we get $[3,2,5]'.$ Once more and we get $[2,5,5]'.$ So let's write $C_n~=~[k(n),k(n+1),k(n+2)]'.$ The recurrence forces $k(0)=3$ (because $k(3)=k(1)+k(0)$), so $C_0~=~[3,0,2]',$ and we get the matrix equation:

$C_n~=~M^n~C_0.$
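
Here's that stepping checked in a quick sketch (Python, with plain lists rather than a matrix library):

    # Multiplying the column [k(n), k(n+1), k(n+2)]' by M steps it on one place.
    M = [[0, 1, 0],
         [0, 0, 1],
         [1, 1, 0]]

    def mat_vec(m, v):
        return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

    col = [0, 2, 3]                      # k(1), k(2), k(3)
    for _ in range(3):
        col = mat_vec(M, col)
        print(col)                       # [2, 3, 2], then [3, 2, 5], then [2, 5, 5]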

So if we can compute powers of matrices quickly, we can compute k(n) quickly.

And we can. Just two more steps.

We go back to RPM, but instead of multiplying, we adapt it to take powers. The changes are simple. Just as we can think of multiplication as repeated addition, we can think of powers as repeated multiplication. So we use the same algorithm, but wherever we add or double we now multiply or square. Let's take $2^{13}:$

Start with a running product of 1. At each step, if the first number (the exponent) is odd, multiply the running product by the second number; then halve the first number (rounding down) and square the second. Stop when the first number reaches 0.

F :     13      6      3      1      0
S :      2      4     16    256  65536
R :      1      2      2     32   8192

The final running product is 8192, which is indeed $2^{13}.$

We're done. The wonderful thing is that this works with taking powers of matrices. The only difference is we start with a running product of I, the identity matrix consisting of all zeroes except for 1 down the diagonal. Then it all just works.

So now we can take the $n^{th}$ power of M, multiply by $C_0,$ and we get $C_n.$ The first entry of that is k(n), and we're done.
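
Putting the pieces together, here's a sketch in Python (names are my own) that combines the matrix, the RPM-style squaring, and the reduction modulo $m$:

    # k(n) modulo m, via the square-and-multiply power of the matrix M.
    def mat_mul(a, b, m):
        return [[sum(a[i][k] * b[k][j] for k in range(3)) % m
                 for j in range(3)] for i in range(3)]

    def mat_pow(a, e, m):
        result = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]    # the identity matrix I
        while e > 0:
            if e % 2 == 1:
                result = mat_mul(result, a, m)        # "add in" becomes "multiply"
            a = mat_mul(a, a, m)                      # "double" becomes "square"
            e //= 2
        return result

    def perrin_mod(n, m):
        M = [[0, 1, 0], [0, 0, 1], [1, 1, 0]]
        C0 = [3, 0, 2]                                # k(0), k(1), k(2)
        P = mat_pow(M, n, m)
        return sum(P[0][j] * C0[j] for j in range(3)) % m   # first entry of M^n C_0

    print(perrin_mod(20, 10**9))                      # 277, matching the table above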

Let's see it in action as we compute $k(97).$ Remember, we want to know if 97 divides $k(97)$ (and 97 is prime, so it should) so we do the sums mod 97 and see if we get an answer of 0. Wouldn't that be something?

(Insert calculation)
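
In the meantime, here's what that calculation looks like using the perrin_mod sketch from above (again, the name is my own):

    print(perrin_mod(97, 97))            # 0, so 97 divides k(97), as it should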


More to come ...

The integer sequence defined by $P(1)=0,$ $P(2)=2,$ $P(3)=3,$ and $P(n)=P(n-2)+P(n-3)$ for $n>3.$

There is a closed form formula: $P(n)=\alpha^n+\beta^n+\gamma^n$ where $\alpha,$ $\beta,$ and $\gamma$ are the solutions to the cubic equation $x^3-x-1=0.$ Since $\beta$ and $\gamma$ have modulus less than 1, this gives $P(n)\approx\alpha^n$ where $\alpha\approx1.3247179572.$
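
As a quick check of that closed form, here's a sketch using numpy's polynomial root finder:

    # The three roots of x^3 - x - 1 = 0, and P(n) as the sum of their n-th powers.
    import numpy as np

    roots = np.roots([1, 0, -1, -1])      # coefficients of x^3 + 0x^2 - x - 1
    for n in range(1, 9):
        print(n, round(sum(r ** n for r in roots).real))
    # 1 0, 2 2, 3 3, 4 2, 5 5, 6 5, 7 7, 8 10 -- matching the table below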

Taking $\log_{10}(\alpha)=0.1221234...$ we can see that the $n^{th}$ term will have roughly $0.12n$ decimal digits. Thus the sequence exhibits exponential growth.

The Perrin Sequence has the amazing property that it seems that $n$ divides $P(n)$ if and only if $n$ is a prime number. This conjecture seems solid, certainly holding for $n$ up to $10^5,$ but it fails for $n=271441.$ This is a great example of how patterns can fail for large enough cases.

Here are the first few values ...
 n   P(n)  Divides
 1      0  Yes
 2      2  Yes
 3      3  Yes
 4      2  No
 5      5  Yes
 6      5  No
 7      7  Yes
 8     10  No
 9     12  No
10     17  No
11     22  Yes
12     29  No
13     39  Yes
14     51  No
15     68  No
16     90  No
17    119  Yes
18    158  No
19    209  Yes
20    277  No
21    367  No
22    486  No
23    644  Yes
24    853  No



