American Fuzzy Lop[0] uses GAs to guide its fuzzing toward inputs that reach new code paths.
But in general the "No Free Lunch Theorem"[1] shows that no optimisation algorithm is "better" than any other across the universe of all possible problems.
So it's going to be a balance of three things: your familiarity with GAs; whether your problem domain maps nicely onto a simple input structure while still having a relatively gnarly surface to search; and whether you could do better some other way.
These days the clear winner is neural nets of various kinds, largely because they reduce to matrix multiplication, which made them a natural fit first for GPUs and now for far more specialised hardware.
I've had GAs in mind for searching autoscaler parameters, mostly because I find it easier to reason about a GA's optimisations than about a series of matrix multiplications that yields a cloud of floating-point numbers.
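To make that concrete, here's a minimal GA sketch for tuning two autoscaler parameters. Everything here is hypothetical: the parameter names (a scale-up CPU threshold and a cooldown period), their ranges, and the fitness function, which is a stand-in for what would really be a replay of load traces against a simulator.

```python
import random

def fitness(genome):
    """Hypothetical objective: prefer a CPU threshold near 0.7 and a
    cooldown near 120s. A real fitness would score simulated scaling
    behaviour (latency, cost, flapping) against recorded load."""
    threshold, cooldown = genome
    return -((threshold - 0.7) ** 2 + ((cooldown - 120) / 600) ** 2)

def mutate(genome, rate=0.2):
    """Jitter each gene with probability `rate`, clamped to its range."""
    threshold, cooldown = genome
    if random.random() < rate:
        threshold = min(1.0, max(0.0, threshold + random.gauss(0, 0.05)))
    if random.random() < rate:
        cooldown = min(600.0, max(0.0, cooldown + random.gauss(0, 30)))
    return (threshold, cooldown)

def crossover(a, b):
    """Uniform crossover: take each gene from a randomly chosen parent."""
    return tuple(random.choice(pair) for pair in zip(a, b))

def evolve(generations=50, pop_size=30, seed=0):
    random.seed(seed)
    # Random initial population: (threshold in 0..1, cooldown in 0..600s).
    pop = [(random.random(), random.uniform(0, 600)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 5]  # keep the fittest 20% unchanged
        children = [
            mutate(crossover(random.choice(elite), random.choice(elite)))
            for _ in range(pop_size - len(elite))
        ]
        pop = elite + children
    return max(pop, key=fitness)

best = evolve()
```

The appeal is exactly the point above: each genome is a directly readable pair of parameters, so you can inspect the winner and the elite pool and see *why* they scored well, rather than staring at a weight matrix.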
[0] http://lcamtuf.coredump.cx/afl/
[1] https://en.wikipedia.org/wiki/No_free_lunch_theorem