And you always start off knowing the total length of the array, and the width of the datatype.
Actually deciding what to do with that information without incurring a bunch more cache misses in the process may be tricky.
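For what it's worth, one cheap way to use just those two facts is to stop bisecting once the remaining range fits in a cache line and finish with a linear scan, since that scan costs at most one more miss. A sketch of the idea, assuming a sorted array of int32_t keys and a 64-byte line (both assumptions on my part):

```c
/* Sketch only: use "total length + element width" to know when the live
 * range fits in (about) one cache line, then walk it instead of halving. */
#include <stdint.h>
#include <stddef.h>

#define CACHE_LINE 64                    /* assumed line size */

static ptrdiff_t search_line_aware(const int32_t *a, size_t n, int32_t key)
{
    size_t per_line = CACHE_LINE / sizeof(int32_t);
    size_t lo = 0, hi = n;

    while (hi - lo > per_line) {
        size_t mid = lo + (hi - lo) / 2;
        if (a[mid] == key) return (ptrdiff_t)mid;
        if (a[mid] < key)  lo = mid + 1;
        else               hi = mid;
    }
    /* Remaining range is roughly one cache line: linear scan it. */
    for (size_t i = lo; i < hi; i++)
        if (a[i] == key) return (ptrdiff_t)i;
    return -1;
}
```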
Is the disconnect here that in many datasets there is some implicit distribution? For example, if we are searching for English words we can assume that very few words or sentences start with "Q" or "Z" while many start with "T". Or, if the first three lookups in a binary search all start with "T", we are probably being asked to search just the "T" section of a dictionary.
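That amounts to a distribution-biased first probe. A rough sketch of what I mean, where the cumulative first-letter table (`cum_below`) is hypothetical and just passed in for illustration; a bad guess only wastes one comparison, the search stays correct:

```c
/* Sketch: jump near where the key "should" sit given an assumed first-letter
 * distribution, then fall back to ordinary bisection over the smaller range.
 * cum_below[c - 'a'] = assumed fraction of entries sorting before letter c. */
#include <stddef.h>
#include <string.h>

static ptrdiff_t find_word(char *const *words, size_t n,
                           const char *key, const double cum_below[26])
{
    size_t lo = 0, hi = n;

    /* Biased first probe based on the key's first letter. */
    if (n > 0 && key[0] >= 'a' && key[0] <= 'z') {
        size_t guess = (size_t)(cum_below[key[0] - 'a'] * (double)(n - 1));
        int cmp = strcmp(words[guess], key);
        if (cmp == 0) return (ptrdiff_t)guess;
        if (cmp < 0) lo = guess + 1; else hi = guess;
    }

    /* Ordinary binary search over whatever range is left. */
    while (lo < hi) {
        size_t mid = lo + (hi - lo) / 2;
        int cmp = strcmp(words[mid], key);
        if (cmp == 0) return (ptrdiff_t)mid;
        if (cmp < 0) lo = mid + 1; else hi = mid;
    }
    return -1;
}
```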
Depending on the problem space, such assumptions can prove right often enough to be worth using despite sometimes being wrong. Of course, if you've got the compute to throw at it (and the problem is large), take the Contact approach: why do one when you can do two in parallel for twice the price (cycles)?
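One single-core reading of "do two for twice the price": while comparing the current midpoint, prefetch both possible next midpoints so the memory system is already fetching whichever half you end up descending into. A sketch, assuming GCC/Clang's `__builtin_prefetch` and a sorted int32_t array:

```c
/* Sketch: speculatively prefetch the midpoints of BOTH halves while the
 * current comparison is in flight, hiding part of the next miss either way. */
#include <stdint.h>
#include <stddef.h>

static ptrdiff_t search_prefetch(const int32_t *a, size_t n, int32_t key)
{
    size_t lo = 0, hi = n;
    while (lo < hi) {
        size_t mid = lo + (hi - lo) / 2;

        /* Midpoints of the two halves we might visit next. */
        __builtin_prefetch(&a[lo + (mid - lo) / 2]);
        __builtin_prefetch(&a[mid + 1 + (hi - (mid + 1)) / 2]);

        if (a[mid] == key) return (ptrdiff_t)mid;
        if (a[mid] < key)  lo = mid + 1;
        else               hi = mid;
    }
    return -1;
}
```

The extra prefetch is wasted half the time, which is the "twice the price" part, but when the array is far bigger than cache the hidden latency on the taken half can more than pay for it.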