Google's 200M-parameter time-series foundation model with 16k context

https://github.com/google-research/timesfm

GitHub - google-research/timesfm: TimesFM (Time Series Foundation Model) is a pretrained time-series foundation model developed by Google Research for time-series forecasting.

I somehow find the concept of a general time series model strange. How can the same model predict egg prices in Italy, and global inflation in a reliable way?

And how would you even use this model, given that there are no explanations that help you trust where the prediction comes from…

> How can the same model predict egg prices in Italy, and global inflation in a reliable way?

How can the same lossy compression algorithm (eg JPG) compress pictures of everything in a reliable way?

It can't compress pictures of everything in a reliable way.

Text and anything else with lots of high-frequency components look terrible.

Granted, it still doesn't do well on text. But we have newer formats and ideas that would deal with that, too. (To be really dead simple: have a minimal container format that chooses between PNG and JPG, and use PNG for text.)

However: white noise is where it really struggles. But real pictures of the real world don't look like white noise. Even though in some sense white noise is the most common type of picture a priori.

Similar for real world time series: reality mostly doesn't look like white noise.

White noise is random, so it's incompressible by definition. By JPG or by any other method no matter how clever.
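The uniform case is easy to check empirically. A quick sketch (my own, not from the thread) comparing how a general-purpose compressor handles uniformly random bytes versus highly structured bytes:

```python
import os
import zlib

# "White noise": uniformly random bytes. "Structured": a repeating pattern.
noise = os.urandom(100_000)
structured = b"abcdefgh" * 12_500  # also 100,000 bytes

# Random data can't be compressed; zlib actually adds a little overhead.
print(len(zlib.compress(noise, 9)))       # slightly more than 100,000
# Structured data collapses to a tiny fraction of its original size.
print(len(zlib.compress(structured, 9)))  # a few hundred bytes at most
```

The same limit applies to any compressor, however clever: on average you cannot encode uniformly random input in fewer bits than it contains.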

I have a very peculiar coin. With 1% probability it turns up heads and with 99% probability it turns up tails.

A string of flips is random, but it's very compressible.
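The distinction the coin example is making is between "random" and "uniformly random": a biased source is still random, but its Shannon entropy, not its length, sets the compression limit. A small sketch (assuming the 1%/99% coin from the comment) that computes that limit and checks it empirically:

```python
import math
import random
import zlib

# Shannon entropy of a 1%/99% coin: the theoretical limit in bits per flip.
p = 0.01
h = -(p * math.log2(p) + (1 - p) * math.log2(1 - p))
print(f"{h:.3f} bits/flip")  # roughly 0.08, far below 1 bit per flip

# Empirically: even zlib, with no arithmetic coder, compresses such a
# string of flips to a small fraction of its raw size.
random.seed(0)
flips = bytes(random.random() < p for _ in range(100_000))
print(len(zlib.compress(flips, 9)))  # a few kB instead of 100 kB
```

So a 100,000-flip string carries only about 8,000 bits of information, which is why it compresses so well despite being genuinely random.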

In any case, my point was that reality ain't uniformly random. And not only that: pretty much everything you can point your camera at shares enough distributional similarity that we effectively have universal compression algorithms for real-world data.