SNK were the gods of the 68000. I still remember back in the day getting a bug report on my 68000 emulator:
When playing King of Fighters, the time counter would go down to 0 and then wrap around to 99, effectively preventing the round from ending.
Eventually I tracked it down to the behavior of SBCD (Subtract Binary Coded Decimal): Internally, the chip actually does update the overflow flag reliably (it's marked as undefined in the docs). SNK was checking the V flag and ending the round when it got set.
https://github.com/kstenerud/Musashi/blob/master/m68k_in.c#L...
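The 0→99 wrap falls straight out of digit-wise BCD arithmetic. A minimal illustrative sketch in C (my own, not Musashi's actual code, and ignoring the X and V flag handling):

```c
#include <stdint.h>

/* Illustrative per-digit BCD subtraction: subtract the low nibbles,
 * apply a decimal correction when a digit borrows, and propagate the
 * borrow into the tens digit. Subtracting 1 from 0x00 wraps to 0x99,
 * which is exactly the timer behaviour described above. */
static uint8_t bcd_sub(uint8_t dst, uint8_t src)
{
    int lo = (dst & 0x0f) - (src & 0x0f);
    int hi = (dst >> 4) - (src >> 4);
    if (lo < 0) { lo += 10; hi -= 1; }  /* decimal borrow from tens */
    if (hi < 0) { hi += 10; }           /* wrap past 00 back to 9x */
    return (uint8_t)((hi << 4) | lo);
}
```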
SBCD was an old throwback instruction that was hardly used anymore, and the register variant took 6 cycles to complete (vs 4 for binary subtraction).
HOWEVER... For displaying the timer counter on-screen, they saved a ton of cycles with this scheme because extracting the digits from a BCD value is a simple shift by 4 bits (6 cycles) rather than a VERY expensive divide (140 cycles).
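The digit extraction can be sketched in C: with the timer kept in BCD, each on-screen digit is just a nibble (illustrative; on the 68000 this is the equivalent shift-and-mask instead of a divu):

```c
#include <stdint.h>

/* Each BCD nibble is one decimal digit: one shift and one mask per
 * digit, no division anywhere. */
static void bcd_digits(uint8_t bcd, uint8_t *tens, uint8_t *ones)
{
    *tens = bcd >> 4;    /* high nibble: tens digit */
    *ones = bcd & 0x0f;  /* low nibble: ones digit */
}
```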
You don't need division to convert to decimal, though it will still be slower than using BCD operations.
Technically no, but they were also always fighting against the ROM size, trying to keep costs down. Every byte helped.
Oh? How do you do it? Some kind of lookup table?
https://en.wikipedia.org/wiki/Double_dabble
A software implementation with masks and shifts will beat traditional CISC dividers.
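A minimal C sketch of double dabble for an 8-bit value (my own illustrative version): any BCD digit that is 5 or more gets +3 before each shift, so the subsequent doubling carries correctly into the next decimal digit.

```c
#include <stdint.h>

/* Double dabble: binary-to-packed-BCD using only shifts, masks and
 * adds -- no division. The binary value sits in the low byte of the
 * scratch word and the BCD result accumulates above it. */
static uint16_t to_bcd(uint8_t bin)
{
    uint32_t scratch = bin;
    for (int i = 0; i < 8; i++) {
        /* Adjust each BCD digit (bits 8-11, 12-15, 16-19) pre-shift. */
        for (int shift = 8; shift <= 16; shift += 4) {
            uint32_t digit = (scratch >> shift) & 0x0f;
            if (digit >= 5)
                scratch += 3u << shift;
        }
        scratch <<= 1;
    }
    return (uint16_t)(scratch >> 8);  /* up to three BCD digits */
}
```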
Interestingly, gcc-amigaos-gcc 6.5 uses dbra without having to jump through any of those contortions, as long as the optimisation level is set to at least -O1:
One thing that I heard from folks who do development for retro Atari platforms is that the 68k support in GCC has been getting worse as time has gone on, and it's very difficult to get the maintainers to accept patches to improve it, since 68k is not exactly widely used at this point.
Specifically, I heard that the 68k backend keeps getting worse, whilst the front-end keeps getting better. So choosing a GCC version is a case of examining the tradeoffs between getting better AST-level optimisations from a newer version, or more optimised assembly language output from an earlier version.
I imagine GCC 6.5 probably has a backend that makes better use of the 68k chip than the GCC 11.4 that ngdevkit uses (such as knowing when to use dbra) but is probably worse in other ways due to an older and less capable frontend.
I tried this with the old SAS/C Amiga compiler. It put the address in A0 and then moved the value into (A0) in the next instruction, so the setup part was a bit less efficient. And it refused to use "dbra" no matter what I tried.
I was once into 68k so I may be rusty, but shouldn't it be move.w d1,(a0)+ (increment the target address after each step)?
The hardware increments an internal pointer after each access, so the software keeps writing to the same address; the view to that address is through the value in a0.
> Note how gcc is smart enough to detect that the expression ((0xc<<12) | 0xafe) is constant, so it can skip shifts and bitwise assembly operations and just emit the resulting immediate value at line 14. The same goes for the loop condition, gcc emits constant 1280 at line 10 in place of the multiplication 40x32. A classic compiler optimization called constant folding, but nice nonetheless.
This is actually required rather than an optimisation for any C compiler, and has been from early on: C semantically allows constant expressions, not just literal constants, for statically allocated sizes and the like. While the 'optimisation' is not guaranteed, you'll see even at -O0 that the constant was evaluated at compile time, because it's harder to fold constant expressions only sometimes than to always fold them with the machinery the required constant-expression features already need.
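A small example of why the compiler must be able to evaluate these: the same expressions from the article, used in contexts where C requires a constant expression (the names here are mine, for illustration):

```c
/* An enum constant and a static array size both require the compiler
 * to fold the expression at compile time, even at -O0. */
enum { VRAM_ADDR = (0xc << 12) | 0xafe };  /* folds to 0xcafe */
static char buf[40 * 32];                  /* folds to 1280 */
```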
Significant further gains are possible by simply unrolling the loop eight or 16 times, to lower the per-word overhead of the DBF.
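A sketch of that unrolling in C (illustrative; for brevity it assumes the count is a multiple of 8):

```c
#include <stdint.h>

/* 8x unrolled fill: the loop-maintenance cost (decrement + branch,
 * i.e. the DBF on the 68000) is paid once per eight stores instead
 * of once per store. */
static void fill_words(uint16_t *dst, uint16_t value, int count)
{
    for (int i = count / 8; i > 0; i--) {
        *dst++ = value; *dst++ = value; *dst++ = value; *dst++ = value;
        *dst++ = value; *dst++ = value; *dst++ = value; *dst++ = value;
    }
}
```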
The step with declaring hw registers in assembly reminds me how converting an integer value to a pointer is IIRC at best implementation-defined, and at worst UB, and playing around with volatile doesn't save you from a zealous optimizer.
Arguably every hardware register should be declared that way as a symbol
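The usual C idiom is to hide the integer-to-pointer conversion behind one volatile-qualified macro per register; the alternative the parent suggests is defining the symbol in assembly or a linker script, so C only ever sees an extern volatile object and never performs the cast itself. A sketch of the macro style (register name and backing storage are hypothetical; a real register would use its fixed address, which can't run on a host):

```c
#include <stdint.h>

/* One volatile-qualified access macro per register. */
#define HW_REG(addr)  (*(volatile uint16_t *)(uintptr_t)(addr))

/* For this self-contained sketch the "register" is backed by a plain
 * variable so the access pattern can actually execute on a host; on
 * real hardware the argument would be the register's fixed address. */
static uint16_t fake_reg_storage;
#define REG_VRAM_DATA HW_REG(&fake_reg_storage)
```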
That was a common feature on Borland and Microsoft compilers for MS-DOS.
I mean, at its most basic, it's a feature from even the earliest compilers for C.
Not sure if Borland or MS shipped big fat symbol tables for all hardware registers of an IBM PC though?
You can optimize further by unrolling the loop. For example:
But what about the effect on cache… oh, wait!
;-)
This is cool, but at that point, why write it in C at all? Why not just hand-roll some assembler? It's targeted at a specific platform anyway.
With Assembly.