
That's what traditional time-series modelling does. This is a foundation model, which means it's just a neural network trained on lots of time series. (So maybe OP's question still stands? But it's the same question as "how can LLMs be good at so many different kinds of conversations?")

Because traditional time-series modelling (ARIMA, GARCH, ...) is too "simple" and "strict". Just like "simple" computer vision (OpenCV, edge detection, ...) was crushed by neural networks when it had to deal with real-world images.
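
For reference, the "traditional" route is a handful of explicit parameters. A minimal statsmodels sketch on synthetic data (the order=(1, 1, 1) choice here is arbitrary):

    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(0)
    y = np.cumsum(rng.normal(0, 1, 200))  # synthetic random-walk series

    # Classic Box-Jenkins workflow: fit a small, interpretable model
    model = ARIMA(y, order=(1, 1, 1)).fit()
    print(model.forecast(steps=5))  # five-step-ahead point forecasts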

This seemed like a good answer at first. But on further thought, images on the whole really do seem to have quite a bit more standard structure / "grammar" to exploit than arbitrary time series. Many images are of the world, where there is gravity, so you might see a preponderance of blobs at the bottom, or repeated object types like people, animals, faces, eyes. Even wildly abstract images still have some continuity: pixels in a neighborhood are likely to be similar.

Time series in general have none of this kind of structure guaranteed. I'm sure that many real-world sensors typically have some Gaussian distribution aspects plus noise, and/or smoothness and locality assumptions that are pretty safe, but presumably that simple stuff is exactly what traditional time-series modelling was already exploiting.

Maybe the real question is just what kind of time series are in the training data, and why we think whatever implicit structure is there actually generalizes. I mean, you can see how any training that mixes pictures of dogs and cats with pictures of people could maybe improve drawing hair, detecting hair, or let you draw people AND dogs. It's less clear to me how mixing sensor data / financial data / anything else together could be helpful.


> It's less clear to me how mixing sensor data / financial data / anything else together could be helpful.

Because many of these have the same underlying causal structures - humans doing things, weather correlations, holidays.

Well-studied behavioral stuff like "the stock market takes the stairs up and the elevator down", which is not really captured by "traditional" modelling tools.

I'm sure people will be doing mechanistic interpretability on these models to extract what they pattern match on for prediction.


Personally, coming from an EE background and not finance or statistics, I would go about identifying these patterns with a Signals & Systems toolbox: system identification, various matched filters/classifiers.

This might be a totally wrong approach, but I think it might make sense to model a matched filter based on previous stock selloff/bull-run trigger events, and then see if it has any predictive ability. Likewise, the market reaction usually seems to be some sort of delayed impulse-like activity, with the whales reacting quickly and then a distribution of less savvy investors following up on the signal with various delays.
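
Something like this toy sketch, maybe (the template shape and every number in it are made up purely for illustration):

    import numpy as np
    from scipy.signal import correlate

    # Hypothetical selloff "template": sharp drop, slow partial recovery
    template = np.concatenate([np.linspace(0, -1, 5), np.linspace(-1, -0.3, 20)])
    template -= template.mean()  # zero-mean so flat regions score ~0

    # Synthetic return series with one embedded event at t=200
    rng = np.random.default_rng(1)
    returns = rng.normal(0, 0.1, 500)
    returns[200:225] += template

    # Matched filtering = cross-correlation against the template;
    # peaks mark where the series resembles the trigger shape
    score = correlate(returns, template, mode="same")
    print(np.argmax(score))  # should land near the embedded event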

I'm sure other smarter people have explored this approach much more in depth before me.


You're crafting features. The modern approach to ML (deep learning) is to use over-parameterized models and let them learn the features. Perhaps you remember this? https://www.nytimes.com/2012/06/26/technology/in-a-big-netwo...

Except that their success in the time-series domain has been rather lackluster and elusive. It is one of the few domains where old-school models are not only less work to maintain but also more accurate. There are a few exceptions here and there. Every year there are a few neural-net-based challengers. You can follow the M series of competitions from its start to see this evolution.

Maybe because useful time-series modeling is usually really about causal modeling? My understanding is that mediated causality in particular is still very difficult, where adding extra hops in the middle takes CoT performance from like 90% to 10%.

Yes, causal models are hard.

NNs do ok on those time series problems where it is really about learning a function directly off time. This is nonlinear regression where time is just another input variable.
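
I.e., something like this toy sketch (synthetic data, arbitrary hyperparameters):

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    # Toy series: learn y(t) directly, with time as the only input feature
    t = np.linspace(0, 10, 500).reshape(-1, 1)
    y = np.sin(2 * np.pi * t).ravel() + 0.1 * np.random.default_rng(0).normal(size=500)

    net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000)
    net.fit(t, y)                 # plain nonlinear regression on t
    print(net.predict([[9.9]]))   # query the fitted function at a time point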

Cases where one has to adjust for temporally correlated errors seem to be harder for NNs. BTW, I am talking about accuracies beyond what typical RNN variants will achieve, which is pretty respectable. It's the case that more complicated DNNs don't seem to do much better in spite of their significant model complexity.


LightGBM won M5 and it wasn't even a competition.
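
For the unfamiliar, the core of that recipe is lag/rolling features fed to a GBM. A minimal sketch on synthetic data (nothing like the actual winning pipeline, which engineered far more features):

    import numpy as np
    import pandas as pd
    import lightgbm as lgb

    rng = np.random.default_rng(0)
    # Synthetic daily series with weekly seasonality (stand-in for M5-style sales)
    t = np.arange(730)
    y = pd.Series(10 + 3 * np.sin(2 * np.pi * t / 7) + rng.normal(0, 1, len(t)))

    # Lag and rolling-window features: the core of most GBM forecasting pipelines
    X = pd.DataFrame({
        "lag_1": y.shift(1),
        "lag_7": y.shift(7),
        "roll_mean_7": y.shift(1).rolling(7).mean(),
    }).dropna()

    model = lgb.LGBMRegressor(n_estimators=300, learning_rate=0.05)
    model.fit(X, y.loc[X.index])
    print(model.predict(X.tail(1)))  # one-step-ahead style prediction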

The task was slightly different and favored GBMs. Note they aren't NNs, whose underwhelming performance was what my comment was about.

The M series of competitions changes the tasks every year to explore which models perform best under different scenarios. As I mentioned, neural-network-based models win here and there, but performance is very spotty overall.


> Because many of these have the same underlying causal structures - humans doing things, weather correlations, holidays.

Or, you know, maybe they aren't. Thermometers and photon counts are related to weather sometimes, but not holidays. Holidays are related to traffic sensors and to markets, but not Geiger counters.

> Well-studied behavioral stuff like "the stock market takes the stairs up and the elevator down", which is not really captured by "traditional" modelling tools.

Prices are the opposite: up like a shot during shocks, falling slowly like a feather. So that particular pattern seems like a great example of over-fitting danger, and why you wouldn't expect mixing series of different types to work very well.


Electricity demand is influenced very strongly by holidays, strongly by weather and from weak to strong by geopolitics (depending on location).

The model will have a library of patterns, and will be able to pattern match subtle ones to deduce "this time series has the kind of micro-patterns which appear in strongly weather influenced time-series", and use this to activate the weather pattern cluster.

To use your example, when served thermometer data, the model notices that the holiday pattern cluster doesn't activate/match at all, and will ignore it.

And then it makes sense to train it on the widest possible variety of time series, so it can build a vast library of patterns and find correlations of activation between them.


Sometimes you want inductive bias. No universally true claim like this can be made.

Nice idea, would be good to add a third option for "these look indistinguishable" (and then I guess they could be bundled together in later stages).

I expect that OP just meant "native language"


Maybe a dumb question, but isn't this trivially solved with this .gitconfig?

    [user]
         name = lordgrenville
         email = <some_kind_of_id>+lordgrenville@users.noreply.github.com


Sure, as long as you want to rewrite all of the history of all of your public repositories.
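
With git-filter-repo (a third-party tool), that rewrite would look roughly like this; the addresses are placeholders, and every touched commit gets a new hash:

    # Destructive: rewrites every matching commit, so all hashes change
    git filter-repo --email-callback '
        return email.replace(b"old@example.com",
                             b"12345+user@users.noreply.github.com")'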


Oh yeah, I have always had this as it was pretty clear to me that the info in the email field is public.


For commits you author.

Kernel guidelines now have a more verbose section about tagging: https://www.kernel.org/doc/html/latest/process/submitting-pa...


Not all projects are hosted on GitHub. You also might want to receive relevant mail from fellow developers.


Fair point. Pretty sure there is a way to have a few .gitconfig files, with the active one based on the remote URL domain, but it is more work.
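
For the record, I think something like this does it (untested sketch; `hasconfig` includes need git >= 2.36, and the included filename is arbitrary):

    # ~/.gitconfig
    [includeIf "hasconfig:remote.*.url:https://github.com/**"]
        path = ~/.gitconfig-github
    # (an ssh remote like git@github.com:... needs its own matching pattern)

    # ~/.gitconfig-github
    [user]
         name = lordgrenville
         email = <some_kind_of_id>+lordgrenville@users.noreply.github.com

Any repo whose remote matches then picks up the noreply identity automatically.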


Perhaps, but it doesn't change the fact that this is bad behavior on the part of the company sending the email. Since Y Combinator funded this company, it makes sense that YC would want to know how they are conducting business.


And as a bonus, sometimes the information is correct!


Yes, I noticed that occasionally, but I'm curious: which one did you find was incorrect?


Oh this was just snark.


I asked AI and it found me this repo: https://github.com/1997cui/envelope

The linked site has a nice FAQ section: https://envelopetracker.com/intro


thanks! :)


Agreed. This website seems to prepend the blog name to each page's document.title

Would suggest that one of the mods remove it


Nothing about this is specific to GNOME, right? ImageMagick is cross-platform.


I guess the GNOME-specific part is that GNOME comes with the Nautilus file browser, and the instructions add a script for Nautilus.

But yeah, this will work as long as you have ImageMagick and Nautilus installed.
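
For anyone wiring this up by hand, Nautilus picks scripts up from a well-known directory (the script name below is a placeholder):

    # Install any executable script as a Nautilus right-click action
    mkdir -p ~/.local/share/nautilus/scripts
    cp my-script.sh ~/.local/share/nautilus/scripts/
    chmod +x ~/.local/share/nautilus/scripts/my-script.sh
    # Selected files are passed via $NAUTILUS_SCRIPT_SELECTED_FILE_PATHS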


Oh I missed that part, was just looking at the script


Or just run the script with the input PDF as an argument...


Mildly surprised that this domain belongs to the Farm Bureau. Maybe they should sell it to Meta and donate the proceeds to the money-losing farms...


This is fun. I asked AI to come up with some modern ones to check that someone is over 30: Zune, Friends, early memes, Who Wants to Be a Millionaire, etc. https://chat.deepseek.com/share/v9d5ckb8gv9rahwetq


>"Dental plan! Lisa needs braces!" is a workplace chant from...

OMG, that's absolutely unhinged to describe something that takes place entirely in Homer's head as a "workplace chant."

