> GAs in general are explored in academia because they are a fad that's easily publishable.
I would say 90% of GA papers are because of this.
> Other than that this class of methods is in general very computationally expensive and very inefficient, and don't provide any advantage over plain old adaptive sampling, or even dumb regular lattice sampling.
Definitely false. The vast majority of OR-type papers in good journals include comparison to weak baselines like those, and wouldn't be published (in those venues) if they didn't win.
> Definitely false. The vast majority of OR-type papers in good journals include comparison to weak baselines like those, and wouldn't be published (in those venues) if they didn't win.
That statement is not correct. Although it's customary to accompany papers with benchmarks, these benchmarks focus practically exclusively on popular evolutionary algorithms. Worse, the "no free lunch" theorem grants authors the freedom to cherry-pick which benchmark problems are used.
Hmmm, maybe we're reading different papers. Ok, I admit that a lot of papers compare only against other metaheuristics, but only in cases where it is already accepted that those other metaheuristics far out-perform weaker methods.
About NFL, I agree that a lot of authors misuse it in that way. But again, in most papers in good journals, what we see is either a comprehensive experiment with a wide enough range of instances, or a real industry problem, not cherry-picking.
> Ok, I admit that a lot of papers compare only against other metaheuristics, but only in cases where it is already accepted that those other metaheuristics far out-perform weaker methods.
Those methods are invariably other evolutionary methods, and benchmarks are cherry-picked to show only encouraging results under the convenient guise of the "no free lunch" theorem. That's pretty much the norm, as in the repetitive recipe for inventing a metaheuristic: a) come up with a clever nature-inspired metaphor with a catchy name, b) put together an algorithm that is arguably inspired by the metaphor, c) come up with a benchmark that arguably portrays the algorithm as an improvement, even if only on a fortuitous corner case.
No, I'm using "weaker methods" in the technical sense, not to mean "methods that perform worse".
I agree that many papers of that recipe type exist -- Sorensen and Weyland have skewered them effectively -- they are just froth, to be ignored in a discussion of the true merits of evolutionary computation.
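For what it's worth, here's a toy illustration of the kind of baseline comparison this whole thread is arguing about: a bare-bones GA versus "dumb" regular lattice sampling under the same evaluation budget. The test function (2-D Rastrigin), the GA parameters, and the budget are all illustrative assumptions on my part, not taken from any particular paper, and nothing here settles which approach wins in general.

```python
# Sketch: toy GA vs. regular lattice (grid) sampling on 2-D Rastrigin,
# both given the same budget of 1024 function evaluations.
# All parameter choices below are illustrative, not from any paper.
import math
import random

def rastrigin(x, y):
    """Standard 2-D Rastrigin test function; global minimum 0 at (0, 0)."""
    return (20 + (x * x - 10 * math.cos(2 * math.pi * x))
               + (y * y - 10 * math.cos(2 * math.pi * y)))

def lattice_baseline(n_per_axis=32, lo=-5.12, hi=5.12):
    """Evaluate on a regular 32x32 grid (1024 evals); return best value found."""
    step = (hi - lo) / (n_per_axis - 1)
    pts = [lo + i * step for i in range(n_per_axis)]
    return min(rastrigin(x, y) for x in pts for y in pts)

def simple_ga(budget=1024, pop_size=32, lo=-5.12, hi=5.12, seed=0):
    """Steady-state GA: tournament selection, uniform crossover, Gaussian mutation."""
    rng = random.Random(seed)
    pop = [(rng.uniform(lo, hi), rng.uniform(lo, hi)) for _ in range(pop_size)]
    fit = [rastrigin(x, y) for x, y in pop]
    evals = pop_size
    while evals < budget:
        # Two binary tournaments pick the parents.
        i, j = rng.randrange(pop_size), rng.randrange(pop_size)
        a = pop[i] if fit[i] < fit[j] else pop[j]
        i, j = rng.randrange(pop_size), rng.randrange(pop_size)
        b = pop[i] if fit[i] < fit[j] else pop[j]
        # Uniform crossover plus Gaussian mutation, clipped to the domain.
        child = tuple(
            min(hi, max(lo, (p if rng.random() < 0.5 else q) + rng.gauss(0, 0.3)))
            for p, q in zip(a, b)
        )
        # Child replaces the current worst individual.
        worst = max(range(pop_size), key=lambda k: fit[k])
        pop[worst], fit[worst] = child, rastrigin(*child)
        evals += 1
    return min(fit)

print("lattice best:", lattice_baseline())
print("GA best:     ", simple_ga())
```

The point of the sketch is only that the comparison is trivial to set up, which is why reviewers at good venues can and do ask for it.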