We also need to take into account that CGI only consumes energy when the actual creation of a particular video happens.
"AI" consumes energy before the user has even started (during training).
That is on top of the comparison for each particular case.
Right idea, but the application is incorrect.
Model training is similar to the creation of the CGI for a movie. Both happen before anyone consumes the output, and both represent an up-front cost for the producer.
Both a movie and a language model can cost tens or hundreds of millions of dollars to produce.
In both cases additional infrastructure is needed for efficient usage: movie theaters or streaming platforms for movies, and data centers with GPUs for LLMs. These are also upfront (capex) costs.
At consumption time, the movie requires some additional resources per viewing, whether in a theater or via streaming. Likewise, an LLM consumes some resources at inference time. These are opex. In both cases, the marginal cost of inference/consumption is quite low.
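To make the amortization point concrete, here's a back-of-envelope sketch in Python. Every number in it is a made-up placeholder, not a measurement; the point is only the shape of the math: the up-front (capex) term shrinks as usage grows, so the small per-use (opex) term dominates.

```python
# Back-of-envelope comparison of up-front vs per-use energy for a CGI movie
# and an LLM. All figures below are purely illustrative assumptions.

def amortized_energy_kwh(upfront_kwh: float, per_use_kwh: float, uses: int) -> float:
    """Energy per use once the up-front cost is spread over all uses."""
    return upfront_kwh / uses + per_use_kwh

# Hypothetical figures (assumptions for illustration only):
cgi_render_kwh = 1_000_000       # render-farm energy for the whole movie
playback_kwh = 0.1               # streaming one viewing
viewings = 50_000_000

training_kwh = 10_000_000        # training an LLM once
inference_kwh = 0.001            # serving one request
requests = 10_000_000_000

print(f"movie: {amortized_energy_kwh(cgi_render_kwh, playback_kwh, viewings):.4f} kWh/viewing")
print(f"LLM:   {amortized_energy_kwh(training_kwh, inference_kwh, requests):.4f} kWh/request")
# In both cases the upfront_kwh / uses term tends toward zero as uses grows,
# leaving the (small) marginal per-use cost as the dominant factor.
```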
We're clearly exploring different questions.
And that energy costs money, both at the training/CGI stage and at the inference/consumption stage. It's not even an externality.
CGI renders do use a lot of electricity relative to playing back the movie for individual viewers. It's perfectly analogous.
I can't believe you're stretching this in good faith.
But if you are - well, you certainly have a unique perspective.