Binary Search Reconsidered 


2011/04/18

"Binary Search" was made popular as an interesting test problem by Jon Bentley in his book Programming Pearls. There he says that it's an interesting problem, and baits the hook by saying:
And so the problem has attained a certain status. I'm not going to examine all the issues here. There is a superb series of blog posts by Mike Taylor that can be found starting here:
(Note: there are a couple of extra posts interspersed ...)

Some people object to being made to examine this problem. They say it's not relevant to modern programming, and that very few people are called upon to write algorithmic code from scratch. To say that, I believe, is to miss the point. The issue is addressed eloquently in one of the comments.

Further, some of the comments are quite vocal about how the ideas of "proving code" interact with those of "testing code." These questions are covered quite comprehensively in the posts and other comments.

The thing is, the problem is subtle. Again quoting from Bentley:
Even then, as recently as 2006 the question arose again:
As such, examining this particular problem serves as a microcosm in which to explore some aspects of programming in general. Again, the referenced posts cover the code and techniques in considerable depth. What I want to talk about here is a small simplification of the code that Mike Taylor offers in his posts. So let's look at some actual code.

Algorithm description

The idea is simple. You're looking for an item in a sorted list, so you look in the middle. If the thing you find is bigger than what you're looking for, then the item (if it's there at all) must be in the first half; otherwise it's in the second half. Lather, rinse, repeat.

You'd think it would be simple, but the evidence of all the discussion, and the more recent article from 2006, suggests that it's harder than you think. I strongly advise you to try the challenge as laid out in the first blog post:
Spoilers follow ...

Example code ...

Mike starts with the invariant:
(Note: I've taken the liberty of reformatting it and making non-functional changes. Refer to the original if you're concerned.)
int binary_search( const int a[], const int size, const int val ) {
  int lower = 0;
  int upper = size-1;
  while (lower <= upper) {
    int i = lower + (upper-lower)/2;
    if (val == a[i]) return i;
    if (val < a[i]) upper = i-1;
    else /* val > a[i] */ lower = i+1;
  }
  return -1;
}

I was stupid; I claimed:
My first version

So here is the invariant I had in mind; it's very similar to Mike's:
int find( int a[], int size, int val ) {
  int lower = 0;
  if (size<=0) return -1;
  while (1) {
    if (size==1) return (val==a[lower]) ? lower : -1;
    int m = size/2;
    if (val<a[lower+m]) size = m;
    else size -= m, lower += m;
  }
  return -1; /* Never used */
}

When we enter the while loop we know size is at least 1. After the test, size is at least 2, so 1 <= m < size. Therefore it's valid to probe that location, and we can see that size constantly gets smaller, and so the test eventually succeeds and the routine returns. We can check the invariant like this:
Using explicit lower and upper bounds ...

This can all equivalently be written using explicit upper and lower bounds. Some people find this a retrograde step in simplicity and clarity, but it serves as a stepping stone.
int find( int a[], int size, int val ) {
  int lower = 0, upper = size;
  if (upper<=0) return -1;
  while (1) {
    if (upper-lower==1) return (val==a[lower]) ? lower : -1;
    int m = (upper-lower)/2;
    if (val<a[lower+m]) upper = lower+m;
    else lower += m;
  }
  return -1; /* Never used */
}

Here size has been replaced by upper-lower everywhere, with appropriate algebraic simplifications.

Simplifying ...

We can now move the return statement outside the while loop:
int find( int a[], int size, int val ) {
  if (size<=0) return -1;
  int lower = 0, upper = size;
  while (upper-lower>1) {
    int m = lower + (upper-lower)/2;
    if (val<a[m]) upper = m;
    else lower = m;
  }
  return (val==a[lower]) ? lower : -1;
}
There are other things to check. The arithmetic can never overflow, and we never access the array outside of the bounds that the parameters guarantee exist. The value of size (or its equivalent, upper-lower) is strictly decreasing and positive, so the loop terminates when it gets to 1. It looks pretty bulletproof. (Famous last words.)

Note that I don't need to worry about signedness of variables or possible overflow conditions, regardless of the language variant being used, because the structure of the code ensures that I never go outside the boundaries guaranteed by the values passed in.

Taking stock, and summing up.

So let's think about the differences between this and the version in the blog post. One difference is that this never terminates early. It always takes log(n) steps, and if there are equal keys, it always returns the last one. That's an interesting change in behaviour, and should be kept in mind. It makes the time behaviour consistent, but it means you don't get a quick result if you happen to be lucky on an early guess.

But looking at the code, we can see that we never set either of the bounds (lower and lower+size) outside the potential permitted probe locations. Mike's code sets lower to i+1, and you might need to worry about whether that overflows.
But none of that is conclusive. Is it simpler? Personally, I find the detailed reasoning from this invariant easier than in Mike's case, but perhaps that's just selection bias. After all, I wrote this code, and Mike wrote his, so perhaps the code reflects our respective ways of thinking. What do you think?
