Unfortunately not really, but we've found (and used in production for a year) that Claude 3.5 is perfectly capable of identifying anomalies and other points of interest in very large sets of time series data.
Think of 100-200K tokens' worth of data formatted like this:
<Entity1>-<Entity2> <Dimension> <ISO 8601 time> <value>
<Entity1>-<Entity2> <Dimension> <ISO 8601 time +1> <value>
<Entity1>-<Entity2> <Dimension> <ISO 8601 time +2> <value>
<Entity1>-<Entity2> <Dimension2> <ISO 8601 time> <value>
<Entity1>-<Entity2> <Dimension2> <ISO 8601 time +1> <value>
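
For illustration, a minimal sketch of how such lines could be produced, assuming the series live in a dict keyed by entity pair and dimension (the names, sample values, and Python structure here are assumptions for the example, not the actual pipeline):

    from datetime import datetime, timedelta, timezone

    # Hypothetical sample input: {((entity1, entity2), dimension): [(timestamp, value), ...]}
    series = {
        (("svc-a", "svc-b"), "latency_p99"): [
            (datetime(2024, 1, 1, tzinfo=timezone.utc) + timedelta(minutes=i), v)
            for i, v in enumerate([120.0, 118.5, 430.2, 121.0])
        ],
    }

    def to_prompt_lines(series):
        # One "<Entity1>-<Entity2> <Dimension> <ISO 8601 time> <value>" line per point.
        lines = []
        for ((e1, e2), dimension), points in series.items():
            for ts, value in points:
                lines.append(f"{e1}-{e2} {dimension} {ts.isoformat()} {value}")
        return "\n".join(lines)

    print(to_prompt_lines(series))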
The only pre-filtering we do is eliminating "obviously non-relevant" data, such as series whose value stays completely flat the whole time, but we do that to fit more useful data into the context, not because Claude struggles with flat series (it doesn't).
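
A minimal sketch of what that flat-series filter could look like, reusing the assumed series structure from the example above (the `tolerance` knob is purely illustrative):

    def drop_flat_series(series, tolerance=0.0):
        # Keep only series whose values actually move; `tolerance` is an
        # assumed parameter (0.0 drops strictly constant series, a small
        # epsilon would also drop near-flat ones).
        kept = {}
        for key, points in series.items():
            values = [value for _, value in points]
            if values and max(values) - min(values) > tolerance:
                kept[key] = points
        return kept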