
# Russian Peasant Multiplication - 2015/05/18

Sometimes simply called "Peasant Multiplication," sometimes called "Ancient Egyptian multiplication," sometimes called "Ethiopian multiplication," sometimes called "Multiplication by Doubling and Halving," this algorithm is well-known to some, a mystery to others, and more useful than you might think, being applicable not just to multiplication of numbers, but also useful for exponentiation, and for matrices.

This is described in detail in lots and lots of places, and a simple search for any of the above terms will turn up many references, but there are a few things I haven't seen elsewhere, so even if you have seen this before, I hope to show you something new.

## The algorithm

```
 13 x  19 ->    0
  6    38      19
  3    76 ->   19
  1   152 ->   95
  0   304     247
              ^^^
```
So here's the algorithm with an example, shown above. We start by putting our numbers to be multiplied at the top of two columns. We head the third column with a 0, although we don't always list that explicitly. This third column will be a running total.

When the number in the left column is odd we add the number in the second column into the running total. That's why the third column starts with a (possibly implicit) 0.

Next, we halve the number in the left column, discarding any remainder. In this case we halve 13 to get 6. Then we double the number in the second column, in this case going from 19 to 38.

The number in the left column is now even, so we don't do any addition into the running total.

Repeat. Halve the left, double the middle. Now we have 3 in the left column, which is odd, so we add the 76 to the 19 giving 95.

A quick check: the average of 13 and 19 is 16, and 16 squared is 256. 16 minus 13 is 3, and 3 squared is 9. 256 minus 9 is 247, so that's OK.
One final loop through gets 247 in the last column, and that's our answer. If we carry on, the left column becomes and remains 0 forever, which is even, so we never add anything into the last column ever again. So once the left column becomes 1 and we process that row, the running total has our answer.
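The steps above can be sketched directly in Python (`peasant_multiply` is a name I've chosen for illustration, not anything standard):

```python
def peasant_multiply(a, b):
    # Multiply two non-negative integers by the
    # halve-and-double method described above.
    total = 0            # the running-total column
    while a > 0:
        if a % 2 == 1:   # left column is odd:
            total += b   # add the middle column in
        a //= 2          # halve, discarding any remainder
        b *= 2           # double
    return total

print(peasant_multiply(13, 19))  # 247
```

Tracing it on 13 and 19 reproduces the table: the running total passes through 0, 19, 19, 95, and finally 247.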

## Proof that it works

So the first question is why this works. Some people will say that it is just the usual long multiplication technique done in binary, and that's true. However, there is another explanation which is longer, but opens the door to a generalisation or two. We look at this with the aid of an invariant.

This will seem complicated, but it's more detailed than complicated. The technique of proof is important - you'll see that in the next section.

So call the number in the first column A, the second column B, the running total R, and the answer we want P (for product). With the way we've set things up, the following is true:

I've been discussing this with some people, and I've been intrigued to learn that some highly intelligent people I know are really, really not comfortable with this sequence of equations. That bothers me, although I know that to some extent I have to accept that it's true. I've made this as simple as possible, making the steps as clear as possible, and yet some clever people still struggle to follow it in detail. So if you're having trouble following it, don't think it's because you're stupid, because there are clever people who have trouble with it. You may very well be one of them.

(A.B) + R = P

Now if A is even then A=2k for some k. So we can write this as:

(2k.B) + R = P

And that, of course, is the same as:

(k.2B) + R = P

So if we replace A with half its value, and B with double its value, and call those A' and B' then we have:

(A'.B') + R = P

In other words, when A is even, we can halve the first number and double the second, and our condition is still true. What about when A is odd? Then we can write A=2k+1 for some k. Things are a little messier now. To make things simpler I'll use the usual convention that multiplication is done before addition; that way I can omit most of the brackets. Also, writing things immediately next to each other means "multiply these," so I can use brackets and the "." to emphasise what's going on:

A.B + R = P

(2k+1).B + R = P

2k.B + B + R = P

2k.B + (B+R) = P

k.2B + (B+R) = P

A'.B' + (B+R) = P

Again, we've used A' for half of A, and B' for the double of B.

So this is still true provided we:

• Add the number in the second column to the running total
• Halve the number in the first column
• Double the number in the second column

So we've now seen that following the steps in the algorithm leaves this equation balanced - it truly is an invariant.

But when we finish, the number in the first column is 0. Then we have

0 x B + R = P

Or simply R=P. Our running total is the (previously unknown) product that we wanted.
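The proof can also be checked mechanically: assert the invariant A.B + R = P on every pass of the loop. A minimal sketch (the function name is mine):

```python
def peasant_multiply_checked(a, b):
    p = a * b                      # the product P we claim to compute
    total = 0                      # the running total R
    while a > 0:
        assert a * b + total == p  # the invariant: A.B + R = P
        if a % 2 == 1:
            total += b
        a //= 2
        b *= 2
    return total                   # A is now 0, so R = P
```

Running this on, say, 13 and 19 exercises the invariant at every step and returns 247.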

## Generalisation 1: Exponentiation

So now let's generalise this to compute $2^{13}$. In exponentiation we multiply things together as opposed to adding them, so we generalise the technique by:

• Replace doubling with squaring.
• In other words, replace "multiply by 2" with "power of 2."

```
exp   base
===   ====
 13      2 ->     1
  6      4        2
  3     16 ->     2
  1    256 ->    32
  0  65536     8192
               ^^^^
```
Here are the calculations. We start with a running product, and we initialise that with 1. 13 is odd, so we multiply the second column into our running product giving 2. Then we halve 13 and square 2.

Next step: 6 is even so don't multiply into our running total. Then halve the 6 and square the 4 to get 16.

Now 3 is odd, so multiply 16 into the running product to give 32. Then halve the first column and square the second.

Final step: 1 is odd, so multiply 256 by 32 to give 8192, which is our final answer.

The cool thing is that the proof we have above works for this too. Our invariant is now:

$B^A \cdot R = E$

where E is our desired exponentiation. Using that invariant, the proof goes through with minor modifications and is left as an exercise for the mythical interested reader.
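For anyone wanting a head start on that exercise, the odd case once again does the real work: if $A = 2k+1$, then

$B^A \cdot R \;=\; B^{2k+1} \cdot R \;=\; (B^2)^k \cdot (B \cdot R)$

so squaring the base, halving the exponent, and multiplying the base into the running product leaves the invariant intact, exactly mirroring the additive version.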

## Generalisation 2: Matrices

```python
def fast_exp(b, e, I=1):
    #
    # Compute b^e where e is a non-negative integer.
    # We start with a running product of I, so this
    # works with numbers and with matrices (pass the
    # identity matrix as I).
    #
    result = I
    while e > 0:
        if e % 2 == 1:   # e is odd
            result *= b
        b *= b
        e //= 2          # halve, discarding any remainder
    return result
```
And finally the real payoff. We don't need to take powers of numbers. We can use this algorithm to take powers of matrices as well. We start our running product with the identity matrix, and put the matrix whose power we want in the second column. And it just works.

The exponentiation routine, coded in Python above, works for any non-negative integer exponent, and any "base" of a type that supports multiplication, where that multiplication is associative. In other words, it works for anything whose multiplication forms a monoid.

But you don't need to know that, all you need to know is that it works for matrices, and that's what we'll use to speed up our calculation of Perrin numbers.
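As a concrete sketch of the matrix case (the helper names `mat_mul` and `mat_pow` are mine, not from any library), here is the same halve-and-square loop applied to 2x2 matrices, using the Fibonacci Q-matrix as the example:

```python
def mat_mul(X, Y):
    # Multiply two 2x2 matrices given as nested lists.
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_pow(M, e):
    # Raise a 2x2 matrix to a non-negative integer power
    # by halving the exponent and squaring the matrix.
    result = [[1, 0], [0, 1]]   # the identity matrix: our running product
    while e > 0:
        if e % 2 == 1:
            result = mat_mul(result, M)
        M = mat_mul(M, M)
        e //= 2
    return result

# Powers of the Fibonacci Q-matrix contain Fibonacci numbers:
print(mat_pow([[1, 1], [1, 0]], 10))  # [[89, 55], [55, 34]]
```

The answer arrives in about log2(e) squarings rather than e multiplications, which is exactly the speed-up we'll want for Perrin numbers.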

## References

To save you the effort of looking it up yourself, here are the searches for the algorithm on-line, and the Wikipedia page:

