The term “model” is heavily overloaded. Depending on the conversation, it can mean:

- a product (most accurate here imo)

- a specific set of weights in a neural net

- a general architecture or family of architectures (BERT models)

So while you could argue this is a “model” in the broadest sense of the term, it’s probably more descriptive to call it a product. Similarly, we call LLMs “language” models even though they can do a lot more than language, such as generating images.

I'm pretty sure only the second is properly called a model; "BERT models" are simply models that use the BERT architecture.