It's not cheap. It costs anywhere from millions to around $100 million, depending on the model. I was responding to this tradeoff:

"A 10x increase in training costs is not necessarily prohibitive if you get a 10x decrease in inference costs."

Given costs of millions and up, I'd rather training were 10x cheaper even if inference were 10x more expensive. Then it could do research or coding for me at $15/hr instead of $1.50/hr, and I'd just use it carefully with batching.
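To make that trade concrete, here's the back-of-envelope arithmetic. All numbers (the $100M run, the $1.50/hr baseline) are illustrative assumptions, not real pricing:

```python
# Back-of-envelope for the trade I'd prefer. All numbers are assumed
# for illustration, not real pricing.
base_training_cost = 100e6    # a frontier-scale training run, $
base_inference_rate = 1.50    # $/hr of useful agent work (assumed)

training_cost = base_training_cost / 10    # 10x cheaper training -> $10M
inference_rate = base_inference_rate * 10  # 10x costlier inference -> $15/hr

print(f"training ~${training_cost:,.0f}, inference ~${inference_rate:.2f}/hr")
# $15/hr is still far below what the same research or coding work costs
# from a human, so careful, batched use stays economical.
```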

Calculating the gradient requires a forward pass (inference) and a backward pass (back propagation).

They're on the same order of cost, with the backward pass being maybe twice as expensive as the forward pass. So let's say a full training step costs roughly three times a forward pass.
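That matches the common FLOP accounting from the scaling-law literature: roughly 2 FLOPs per parameter per token for the forward pass and 4 for the backward pass. A quick sketch, treating that ratio as an assumption rather than a measurement:

```python
# Rough FLOP tally per token for a dense model with P parameters, using
# the common approximation: forward ~ 2*P FLOPs, backward ~ 4*P FLOPs.
# The exact ratio varies by architecture; treat it as an assumption.
def step_flops(params: int, tokens: int) -> dict:
    forward = 2 * params * tokens        # the inference cost
    backward = 4 * params * tokens       # roughly 2x the forward pass
    return {"forward": forward,
            "backward": backward,
            "train_step": forward + backward}  # ~3x a forward pass

c = step_flops(params=7_000_000_000, tokens=1)
print(c["train_step"] / c["forward"])  # -> 3.0
```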

You can't make training faster by making inference slower.

I was responding to their claim by provisionally assuming it might be correct; I don't have the cost data myself. Now I'll assume what you say is true.

That leaves the computation and memory use of the two passes, plus the communication between layers.

I don't think backpropagation occurs in the brain, since the brain appears to use local learning; global optimization probably happens during sleep/dreaming. I have a lot of papers on removing backpropagation, Hebbian learning, and local learning rules.
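For a feel of what "local" means here, a minimal sketch of Oja's variant of the Hebbian rule (my own toy example, not taken from any of those papers): each weight update uses only the pre- and post-synaptic activity, with no backpropagated error signal.

```python
import numpy as np

# Oja's variant of the Hebbian rule for one unit: every update uses only
# the local pre-synaptic activity x and post-synaptic activity y; no
# error signal is propagated backward. Toy example, not a training recipe.
rng = np.random.default_rng(0)
w = rng.normal(size=8) * 0.1    # incoming weights of one unit
lr = 0.01

for _ in range(2000):
    x = rng.normal(size=8)      # pre-synaptic activity
    y = w @ x                   # post-synaptic activity
    # Hebbian term (y * x) plus a decay (y^2 * w) that bounds ||w||
    w += lr * y * (x - y * w)

print(np.linalg.norm(w))        # -> ~1.0; w aligns with the input's
                                # principal component
```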

From there, many groups are publishing ways to train at 8-bit precision and below. A recent one mixed low-bit training with sub-1-bit storage for the weights. The NoLayer architecture might handle the interlayer communication better.
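As a baseline for what those methods build on, here's symmetric int8 "fake quantization" of a weight tensor (a generic sketch, not that paper's method; sub-8-bit and sub-1-bit storage schemes are far more involved):

```python
import numpy as np

# Symmetric int8 "fake quantization": quantize weights to 8 bits, then
# dequantize for compute. This is the generic building block under most
# low-bit training schemes.
def fake_quant_int8(w: np.ndarray) -> np.ndarray:
    scale = np.abs(w).max() / 127.0              # map max |w| onto int8
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q.astype(np.float32) * scale          # dequantized weights

w = np.random.default_rng(0).normal(size=(4, 4)).astype(np.float32)
w_q = fake_quant_int8(w)
print(np.abs(w - w_q).max())   # worst-case error, bounded by scale / 2
```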

People keep trying to build analog accelerators, but there are mismatches between the features standard NNs rely on and what the hardware can actually do. Recent works have come up with analog NNs that map well onto analog hardware.
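One concrete mismatch is that analog devices store weights noisily; analog-friendly designs typically inject similar noise during training so the network tolerates it. A toy sketch (the 5% noise level is an arbitrary assumption):

```python
import numpy as np

# One concrete analog/digital mismatch: the stored weights are noisy.
# Analog-friendly networks are typically trained with similar noise
# injected so they tolerate it at inference. The 5% noise level is an
# arbitrary assumption for illustration.
def analog_matvec(w, x, noise_std=0.05, rng=np.random.default_rng(0)):
    w_noisy = w * (1.0 + rng.normal(scale=noise_std, size=w.shape))
    return w_noisy @ x   # roughly what an analog crossbar computes

w = np.eye(3)
x = np.array([1.0, 2.0, 3.0])
print(analog_matvec(w, x))  # close to x, perturbed by device noise
```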

A combination of those would likely get costs down dramatically on both inference and training, and would lower energy use as well.