I think it is the `atan` function. Sin is almost a lookup query.

On modern machines, looking things up can be slower than recomputing them when the computation is simple. This is because memory is much slower than the CPU, which means you can often compute something many times over before the answer from memory arrives.
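To get a feel for the trade-off, here is a rough, machine-dependent micro-benchmark sketch (my own illustration, not from the thread): a nearest-entry sine table versus calling `math.sin` directly. Which side wins depends entirely on your CPU and caches.

```python
# Rough, machine-dependent micro-benchmark sketch: compares a precomputed
# sine lookup table against recomputing sin directly.
import math
import timeit

# 4096-entry table over [0, 2*pi); ~32 KB of doubles -- already a sizeable
# chunk of a typical 32-64 KB L1 data cache.
N = 4096
TABLE = [math.sin(2 * math.pi * i / N) for i in range(N)]

def sin_lut(x):
    # Nearest-entry lookup (no interpolation), just to show the access pattern.
    return TABLE[int(x * N / (2 * math.pi)) % N]

xs = [i * 0.001 for i in range(10000)]
t_lut = timeit.timeit(lambda: [sin_lut(x) for x in xs], number=10)
t_sin = timeit.timeit(lambda: [math.sin(x) for x in xs], number=10)
print(f"LUT: {t_lut:.4f}s  math.sin: {t_sin:.4f}s")  # which wins varies by machine
```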

Unless your lookup table is small enough to only use a portion of your L1 cache and you're calling it so much that the lookup table is never evicted :)

It's still less space for other things in the L1 cache, isn't it?

Even that is not necessarily needed; I have gotten major speedups from LUTs even as large as 1 MB, because the lookup distribution was not uniform. Modern CPUs have high cache associativity and fast transfers between L1 and L2.

L1D caches have also gotten bigger -- as big as 128KB. A Deflate/zlib implementation, for instance, can use a brute force full 32K entry LUT for the 15-bit Huffman decoding on some chips, no longer needing the fast small table.
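For the curious, the "brute force full table" trick can be sketched like this (a toy illustration with a 4-bit code, not actual zlib code): precompute, for every possible window of MAXBITS input bits, which symbol starts it and how long its code is, so decoding one symbol is a single lookup instead of a bit-by-bit tree walk.

```python
# Toy sketch of a brute-force Huffman decode table (not real zlib code).
MAXBITS = 4  # real Deflate uses 15 -> a 32768-entry table

# Toy prefix code, MSB-first: A=0, B=10, C=110, D=111
codes = {"A": ("0", 1), "B": ("10", 2), "C": ("110", 3), "D": ("111", 3)}

table = [None] * (1 << MAXBITS)
for sym, (bits, length) in codes.items():
    prefix = int(bits, 2) << (MAXBITS - length)
    # Every window sharing this prefix decodes to the same symbol.
    for fill in range(1 << (MAXBITS - length)):
        table[prefix | fill] = (sym, length)

def decode(bitstring):
    out = []
    while bitstring:
        # Pad the trailing window with zeros (fine for this toy example).
        window = bitstring[:MAXBITS].ljust(MAXBITS, "0")
        sym, length = table[int(window, 2)]
        out.append(sym)
        bitstring = bitstring[length:]
    return "".join(out)

print(decode("0" + "10" + "111"))  # -> "ABD"
```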

Not just modern machines: the Nintendo 64 was memory-bound under most circumstances, and as such many traditional optimizations (lookup tables, loop unrolling) can be slower on the N64. The loop-unrolling case is interesting: because the CPU has to fetch more instructions, it puts more strain on the memory bus.

If curious: on an N64 the graphics chip is also the memory controller, so everything the CPU can do to stay off the memory bus has an additive effect, allowing the graphics chip to do more graphics. This is also why the N64 has weird 9-bit RAM: it is so they could use an 18-bit pixel format, taking only two (9-bit) bytes per pixel; for CPU requests the memory controller ignored the 9th bit, presenting a normal 8-bit byte.

They were hoping that by having high-speed memory (250 MHz, while the CPU ran at 90 MHz) it could provide for everyone, and it did OK; there are some very impressive games on the N64. But on most of them the CPU is running fairly light: gotta stay off that memory bus.

https://www.youtube.com/watch?v=xFKFoGiGlXQ (Kaze Emanuar: Finding the BEST sine function for Nintendo 64)

The N64 was a particularly unbalanced design for its era, so nobody was used to writing code like that yet. Memory bandwidth wasn't a limitation on previous consoles, so it's as if nobody had thought of it.

> This is also why the N64 has weird 9-bit RAM: it is so they could use an 18-bit pixel format, taking only two (9-bit) bytes per pixel; for CPU requests the memory controller ignored the 9th bit, presenting a normal 8-bit byte.

The Ensoniq EPS sampler (the first version) used 13-bit RAM for sample memory. Why 13 and not 12? Who knows? Possibly because they wanted it "one louder", possibly because the Big Rival in the E-Mu Emulator series used μ-law codecs which have the same effective dynamic range as 13-bit linear.

Anyway, you read a normal 16-bit word using the 68000's normal 16-bit instructions, but only the upper 13 bits were actually valid data from the RAM; the rest were tied low. Haha, no code space for you!

It may be, especially when it comes to unnecessary cache use. But I think `atan` is almost brute force. Lookup is nothing compared to that.

Sin/cos must be borders of sqrt(x²+y²). It is also cached indeed.

What do you mean brute force?

We can compute these things using iteration or polynomial approximations (sufficient for 64-bit floats).
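As a concrete example of the iterative route, here is a sketch of CORDIC (a classic shift-and-add method; this is an illustration, not what any particular libm actually does): rotate the vector (1, 0) toward angle x by the fixed angles atan(2^-i), converging roughly one bit per iteration.

```python
# CORDIC sketch: sin/cos via iterative rotation by atan(2^-i) steps.
import math

ANGLES = [math.atan(2.0 ** -i) for i in range(40)]
# Each rotation stretches the vector by sqrt(1 + 2^-2i); pre-scale by the
# product of the inverses so the final vector lands on the unit circle.
K = 1.0
for i in range(40):
    K /= math.sqrt(1 + 2.0 ** (-2 * i))

def cordic_sincos(x):
    """sin/cos for x in [-pi/2, pi/2] via CORDIC rotation mode."""
    cx, cy, angle = K, 0.0, 0.0
    for i, a in enumerate(ANGLES):
        d = 1.0 if angle < x else -1.0  # rotate toward the target angle
        cx, cy = cx - d * cy * 2.0 ** -i, cy + d * cx * 2.0 ** -i
        angle += d * a
    return cy, cx  # (sin x, cos x)

s, c = cordic_sincos(0.5)
```

In hardware the multiplications by 2^-i are just bit shifts, which is why CORDIC was popular in calculators and FPGAs.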

There is a loop of "is it close enough or not", something like that. It is brute force. Atan2 purely looks like that to me.

> Sin/cos must be borders of sqrt(x²+y²). It is also cached indeed

This doesn't make a ton of sense.

In what way do you think a sin function is computed? It is something that is computed and cached, in my opinion.

I think it is stored like sintable[deg]. The degree is index.

> In what way do you think a sin function is computed?

In some way vaguely like this: https://github.com/jeremybarnes/cephes/blob/master/cmath/sin...
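In rough outline, the polynomial approach looks like this (a simplified sketch with plain Taylor coefficients; real implementations like cephes use minimax coefficients and far more careful range reduction):

```python
import math

# Sketch of the polynomial approach: reduce the argument to a small
# interval, then evaluate a short polynomial there.
def poly_sin(x):
    x = math.remainder(x, 2 * math.pi)  # reduce to [-pi, pi]
    # Fold into [-pi/2, pi/2] using sin(pi - x) = sin(x)
    if x > math.pi / 2:
        x = math.pi - x
    elif x < -math.pi / 2:
        x = -math.pi - x
    x2 = x * x
    # Taylor terms up to x^13, evaluated Horner-style
    return x * (1 + x2 * (-1/6 + x2 * (1/120 + x2 * (-1/5040
        + x2 * (1/362880 + x2 * (-1/39916800 + x2 / 6227020800))))))
```

No table, no loop "until close enough": after range reduction it is a fixed handful of multiplies and adds, which is exactly why it beats a memory access on modern hardware.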

> I think it is stored like sintable[deg]. The degree is index.

I can think of a few reasons why this is a bad idea.

1. Why would you use degrees? Pretty much everybody uses and wants radians.

2. What are you going to do about fractional degrees? Some sort of interpolation, right?

3. There's only so much cache available, are you willing to spend multiple kilobytes of it every time you want to calculate a sine? If you're imagining doing this in hardware, there are only so many transistors available, are you willing to spend that many thousands of them?

4. If you're keeping a sine table, why not keep one half the size, and then add a cosine table of equal size. That way you can use double and sum angle formulae to get the original range back and pick up cosine along the way. Reflection formulae let you cut it down even further.
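To make point 2 concrete, here is what the `sintable[deg]` idea looks like once you bolt on the interpolation it forces (an illustration, not how any real libm works):

```python
import math

# Degree-indexed sine table plus linear interpolation for fractional degrees.
SINTABLE = [math.sin(math.radians(d)) for d in range(361)]  # ~2.9 KB of doubles

def table_sin_deg(deg):
    deg = deg % 360.0
    i = int(deg)
    frac = deg - i
    # Linear interpolation between the two surrounding whole-degree entries.
    return SINTABLE[i] + frac * (SINTABLE[i + 1] - SINTABLE[i])
```

Even with interpolation, the worst-case error over a 1-degree step is around 4e-5, nowhere near the full precision of a double, so you would need a much denser table (and much more cache) to compete with a polynomial.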

There's a certain train of thought that leads from (2).

a. I'm going to be interpolating values anyway

b. How few support points can I get away with?

c. Are there better choices than evenly spaced points?

d. Wait, do I want to limit myself to polynomials?

Following it you get the answers "b: just a handful", "c: oh yeah!", and "d: you can if you want, but you don't have to". Then if you do a bunch of thinking you end up with something very much like what everybody else in these two threads has been talking about.

It isn't a good idea to store such values in code. I think it is something that is computed when a programming environment boots up, e.g. when you run `python` or install `python`.

I'm trying to understand how Math.sin works. There is Math.cos; it is sin shifted by +90 degrees. So not all of them are separate pieces completing a big puzzle.
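That sin/cos relationship is easy to check directly:

```python
import math

# cos(x) == sin(x + 90 degrees), so a library only needs one core
# approximation plus a phase shift (or the half-size-table trick above).
for x in [0.0, 0.3, 1.0, 2.5]:
    assert math.isclose(math.cos(x), math.sin(x + math.pi / 2))
```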