Wouldn't it be better to have the model write a script that calls a time-series (TS) library and give it access to an interpreter to run it? That's how a human would do it. I'm not convinced of the need to bake this into the model. What can you do with native TS capability that you can't do by tool calling?
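For concreteness, a minimal sketch of the pattern I mean, assuming a Python interpreter tool and statsmodels (the ARIMA model and the synthetic data are just illustrative):

```python
# A script the model could emit, then execute via an interpreter tool.
# Assumes statsmodels is installed; ARIMA(1,1,1) is an arbitrary choice
# for illustration, not a recommendation.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Stand-in for data the agent would load from its environment.
rng = np.random.default_rng(0)
series = np.cumsum(rng.normal(size=200))

fit = ARIMA(series, order=(1, 1, 1)).fit()

# Print compact, interpretable output for the model to read back.
print("AIC:", round(fit.aic, 2))
print("10-step forecast:", np.round(fit.forecast(steps=10), 3))
```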
Anthropic is encouraging the "have the model write a script" technique as well. Buried in their latest announcement of the Claude Agent SDK, this stuck with me:
> The Claude Agent SDK excels at code generation—and for good reason. Code is precise, composable, and infinitely reusable, making it an ideal output for agents that need to perform complex operations reliably.
> When building agents, consider: which tasks would benefit from being expressed as code? Often, the answer unlocks significant capabilities.
https://www.anthropic.com/engineering/building-agents-with-t...
Does it actually have a concept of time? Does it understand causality?
There are papers on that, such as https://arxiv.org/abs/2410.15319. Time series modeling will not bring about an understanding of causality except in a weak sense (Granger causality: https://en.wikipedia.org/wiki/Granger_causality). To truly connect cause and effect you need a graphical model. And automated causal discovery, the hardest part of which is proposing the nodes of the graph, is a nascent field.
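To show what that weak sense looks like in practice, here is a small sketch using statsmodels' Granger test on synthetic data (the data and coefficients are mine, purely illustrative):

```python
# Granger causality with statsmodels: x "Granger-causes" y if lagged x
# improves predictions of y. This is predictive, not structural, causality.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.6 * x[t - 1] + rng.normal(scale=0.5)  # y driven by lagged x

# Column order matters: test whether the 2nd column Granger-causes the 1st.
res = grangercausalitytests(np.column_stack([y, x]), maxlag=2)
f_stat, p_value, _, _ = res[1][0]["ssr_ftest"]
print(f"lag-1 F-test: F={f_stat:.2f}, p={p_value:.3g}")
```

A low p-value here says only "past x helps predict y", nothing about intervention or mechanism.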
I think you missed the point. Would you call an image analysis library to describe an image or reason over a sequence of images? Check out some of the plots in the paper to see what these models can do.
I would if the image analysis library were backed by a VLM. I have not fully read the paper, but couldn't Figure 6 have been done by an LLM writing a script that calls libraries for time series feature extraction and runs a hypothesis test or whatever? The libraries would do the heavy lifting and return a likelihood ratio or some statistic that is interpretable to an LLM.
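Roughly this kind of throwaway script, as a sketch assuming scipy/statsmodels are available (the specific features and tests are illustrative, not what the paper used):

```python
# The sort of analysis script an LLM could write: extract a few features,
# run standard tests, and report numbers it can reason over.
import numpy as np
from scipy import stats
from statsmodels.tsa.stattools import acf, adfuller

rng = np.random.default_rng(2)
series = np.cumsum(rng.normal(size=300))  # stand-in for the real series

# Simple features.
print("mean:", round(series.mean(), 3), "std:", round(series.std(), 3))
print("lag-1 autocorrelation:", round(acf(series, nlags=1)[1], 3))

# Trend: slope of a linear fit against time, with a p-value.
t = np.arange(len(series))
lin = stats.linregress(t, series)
print(f"trend slope={lin.slope:.4f}, p={lin.pvalue:.3g}")

# Stationarity: augmented Dickey-Fuller statistic and p-value.
adf_stat, adf_p, *_ = adfuller(series)
print(f"ADF stat={adf_stat:.3f}, p={adf_p:.3g}")
```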