They introduce “height” as another architectural dimension, alongside the usual width and depth. If you imagine the usual diagram of a neural network, the difference when a neural net is of height 2 is that in the middle layers, each individual node contains another network inside it, and that inner network has the same structure as the top-level network. For height 3, each node has an inner network, and each of those inner networks is composed of nodes that have their own inner networks as well. And so on, recursively, for greater heights. There’s a diagram on page 3.
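To make the recursion concrete, here is a toy sketch of the "height" idea (my own illustration, not the paper's exact construction): a height-1 net is an ordinary ReLU MLP, and at height h > 1 each hidden node's output is produced by an inner net of the same width/depth, fed that node's pre-activation. Weights here are random and untrained, purely to show the wiring.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(x, height, width=3, depth=3):
    """Toy nested net: height 1 is a plain ReLU MLP; at greater heights,
    each node delegates its activation to an inner net of the same shape."""
    for _ in range(depth):
        z = (rng.standard_normal((width, width)) * 0.5) @ x
        if height == 1:
            x = np.maximum(z, 0.0)  # ordinary ReLU node
        else:
            # each node runs an inner network on its own pre-activation
            x = np.array([forward(np.full(width, zi), height - 1).mean()
                          for zi in z])
    return x

out = forward(np.ones(3), height=2)
```

A real NestNet would share/train these weights rather than sampling them per call; this only shows how the node-contains-a-network recursion bottoms out at height 1.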
Couldn't the fully connected 3x3x3 network in Figure 2c simply be reformulated as an equivalent 9x9 network that is not fully connected? Connections between the layers of the sub-networks would then only occur every 3 layers.
Interesting, but this approach is still about scaling in size.
There are other, more promising approaches that actually reduce the size of the network while delivering better performance: closed-form continuous-time neural networks [1], also known as liquid neural networks.
They have proven effective in navigation (cars and even drones) with as few as 15 neurons in the network. The smaller scale not only makes them highly efficient, but also makes it easier to understand what the network is doing, thanks to the lower complexity.
It does re-ignite an old interest of mine in what I think of (I don't know the right name for it) as “seafloor” neural networks, where the layers aren't equally deep all across the network (as opposed to “swimming pool” nets, which have a constant layer depth at every point). From my limited attempts at neural net inspection, I recall seeing some nodes acting like passthroughs: the network seemed able to do the operation in, e.g., five steps and “didn't need” the other three, so those three layers had been trained to change the values they were given as little as possible. No idea whether a network would be able to “push around” operations so the simpler ones end up in the “shallower” parts of the network, but gradient descent is pretty amazing sometimes.
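One rough way to hunt for those “passthrough” layers is to measure how far each square layer sits from the identity map. This is a toy sketch with made-up shapes and data, not a real inspection tool:

```python
import numpy as np

rng = np.random.default_rng(0)

def passthrough_score(W, b):
    """~0 means the layer mostly passes its input through unchanged."""
    return np.linalg.norm(W - np.eye(W.shape[0])) + np.linalg.norm(b)

# 5 "working" layers followed by 3 near-identity ("didn't need") layers
layers = [(rng.standard_normal((4, 4)), rng.standard_normal(4))
          for _ in range(5)]
layers += [(np.eye(4) + 1e-3 * rng.standard_normal((4, 4)), np.zeros(4))
           for _ in range(3)]

scores = [passthrough_score(W, b) for W, b in layers]
```

The last three scores come out orders of magnitude smaller than the first five, which is the signature I remember seeing.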
I really love small neural networks. They have some nice properties that people overlook. The training speed record (warning: self-promo) for CIFAR10 to 94% uses a very tiny neural network (<10 MB if just saved raw to disk as a definition file). It's at https://github.com/tysam-code/hlb-CIFAR10.
You could make it even smaller if you wanted to, though I suspect this network is already pushing a little further into diminishing-returns territory in some areas than I'd like (bound by the powers of 2 / multiples of 8 / multiples of 64 favored by GPUs :'(((( ).
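The alignment constraint mentioned above is just rounding layer widths up to hardware-friendly multiples; a tiny helper (my own illustration, the multiple of 64 is one common choice, not a universal rule) makes the padding cost visible:

```python
def gpu_round_up(n, multiple=64):
    # smallest multiple of `multiple` that is >= n
    return ((n + multiple - 1) // multiple) * multiple

# a "natural" width of 100 gets padded up, wasting 28 lanes
padded = gpu_round_up(100)
```

This is why shrinking a width from, say, 128 to 100 often buys you nothing on real hardware.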
I think a really fun future challenge (yes, I know, this takes us back to 90's-2000's challenge territory, but I believe it has much more practical use than a lot of other modern-day benchmarks) would be finding the fastest-training network that hits 94% at inference in under 1 MB. I certainly believe it's possible, but with Pareto laws being what they are, it would take a whole lot longer to train and might not be as fast on a GPU during inference as the main net (despite having fewer parameters). That might not be true, however.
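For anyone attempting the challenge, the budget math is just parameter count times bytes per weight (container overhead ignored); a back-of-the-envelope sketch with made-up example numbers:

```python
def model_bytes(param_count, bytes_per_weight=4):
    """Raw on-disk size of a dense network's weights, nothing else."""
    return param_count * bytes_per_weight

MB = 1_000_000
fits = model_bytes(240_000) <= MB       # 240k float32 params: 960 kB, just fits
half = model_bytes(240_000, 2)          # same net stored as float16: 480 kB
```

So roughly 250k float32 parameters, or 500k in half precision, is the ceiling before any compression tricks.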
There are a few NP-hard problems in this space that not many people talk about, but that I feel will be considered a core part of the theory of training neural networks at some point in the future. Network size is a very interesting tradeoff that opens up mathematically interesting properties at either end of the spectrum. Bigger is not always better, though it is simpler, and simple oftentimes survives, especially (especially especially especially) when skilled technical workers in a very particular niche live in the shallow long tail of [insert correct distribution of skills rarity here].
One of the common threads (it might be “common”; I'm not sure, to be honest, as I live in my own personal bubble of research interests and community) is the dimensionality of the problem at hand. That plays into the scale of the network used to solve it. I remember some discussion being sparked a while back by some Uber research on the inherent dimensionality of a particular problem for a given neural network (though of course it's naturally linked to your inductive bias, so please take that as you will). As you noted, some networks do quite well with very few neurons; 15 is a record, however, from what I've heard (and I'd love to see that; I have a guess as to which particular method, or at least method family, it is... ;P I'm... casually interested in that arena of research).
In any case, as you can see I am quite interested and passionate about this topic and am happy to discuss it at length further.
> We propose the nested network architecture since it shares the parameters via repetitions of sub-network activation functions. In other words, a NestNet can provide a special parameter-sharing scheme. This is the key reason why the NestNet has much better approximation power than the standard network.
It would be interesting to see an experiment that compares their CNN2 model with other parameter-sharing schemes such as networks using hyper-convolutions [0][1][2].
[0] Ma, T., Wang, A. Q., Dalca, A. V., & Sabuncu, M. R. (2022). Hyper-Convolutions via Implicit Kernels for Medical Imaging. arXiv preprint arXiv:2202.02701.
[1] Chang, O., Flokas, L., & Lipson, H. (2019, September). Principled weight initialization for hypernetworks. In International Conference on Learning Representations.
[2] Ukai, K., Matsubara, T., & Uehara, K. (2018, November). Hypernetwork-based implicit posterior estimation and model averaging of CNN. In Asian Conference on Machine Learning (pp. 176-191). PMLR.
Paper was submitted almost exactly 1 year ago, and last revised in Jan 2023.
Not sure if the title needs a (2022); just pointing out the above in case anyone else, like me, read “19 May” and mistakenly thought it was a 2-day-old paper :)
Probably not. The paper was accepted into NIPS 2022 [0]. In case anyone is wondering, I did a diff (add "diff" after "arxiv" and before ".com") between V3 (16 Oct 2022) and V4 (latest: 14 Jan 2023), and the changes are just a few typos and a sign flip in the appendix (page 17: v3 has f - phi, now reversed).
> just pointing out the above in case anyone else like me read “19 May” and mistakenly thought it was a 2 day old paper
Is this common? Maybe because the oldest date is on top? Reading the dates at the bottom gives the best results (and obviously the year helps too).
The almost-exactly-1-year timing is probably because NIPS '23 submissions just closed (supplementary material is still open, btw).