Maximum likelihood (which underpins many frequentist methods) is essentially Bayesian inference with a uniform prior on the parameters. And since the "shape" of a prior depends on the chosen parametrization, in principle you can account for non-flat priors this way as well.
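To make the first claim concrete, here is a toy sketch of my own (made-up coin-flip data, not from any textbook): for a binomial likelihood, the maximum-likelihood estimate and the posterior mode under a uniform prior coincide, because the flat prior contributes a constant to the log-posterior.

```python
import math

# Made-up data: 7 heads in 10 flips.
k, n = 7, 10

def log_likelihood(p):
    return k * math.log(p) + (n - k) * math.log(1 - p)

def log_posterior(p):
    # Uniform prior on (0, 1): log 1 = 0, so the posterior mode
    # is found at exactly the same point as the MLE.
    return log_likelihood(p) + 0.0

grid = [i / 1000 for i in range(1, 1000)]
mle = max(grid, key=log_likelihood)
map_est = max(grid, key=log_posterior)
print(mle, map_est)  # both 0.7, i.e. k/n
```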
IMHO, the discussion should not be so much about whether to teach Bayesian statistics or maximum likelihood, but about whether to teach generative models or to keep going with hypothesis tests, which are generally presented to students as a bag of tricks.
Generative models (implemented in e.g. Stan, PyMC, Pyro, Turing, etc.) separate the model from the inference algorithm, so one can switch from maximum likelihood to variational inference or MCMC quite easily.
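The model/inference split can be sketched in a few lines of plain Python, without any of those libraries (this is my own hypothetical mini-example, not how any of them is implemented): the model is just a log-density function, and two different inference routines consume it unchanged.

```python
import math
import random

# Made-up observations; model: Normal likelihood with known sigma=1
# and a flat prior on mu. The model is ONLY this function.
data = [2.1, 1.9, 2.4, 2.0, 1.8]

def log_density(mu):
    return -0.5 * sum((x - mu) ** 2 for x in data)

# Inference routine 1: MAP via brute-force grid search.
def map_estimate(logp, grid):
    return max(grid, key=logp)

# Inference routine 2: a crude Metropolis sampler.
def metropolis(logp, start, steps=5000, scale=0.5, seed=0):
    rng = random.Random(seed)
    x, samples = start, []
    for _ in range(steps):
        prop = x + rng.gauss(0, scale)
        if math.log(rng.random()) < logp(prop) - logp(x):
            x = prop
        samples.append(x)
    return samples

grid = [i / 100 for i in range(0, 401)]
print(map_estimate(log_density, grid))  # the sample mean, 2.04
samples = metropolis(log_density, start=2.0)
print(sum(samples) / len(samples))      # posterior mean, also near 2.04
```

Swapping grid search for the sampler requires no change to the model itself, which is the point: in Stan or PyMC the same model block can be handed to optimization, variational inference, or NUTS.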
Generative models, beginning with regression, make a lot more sense to students and yield much more robust inference. Most people I know who publish research articles regularly do not know that a p-value is not a measure of effect size, which shows how badly current education has failed.
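The p-value point is easy to demonstrate with toy numbers of my own choosing: hold a negligible effect fixed and the two-sided p-value still collapses to near zero as the sample size grows, so a small p-value alone tells you nothing about how big the effect is.

```python
import math

def two_sided_p(effect, sd, n):
    # z-test p-value for a fixed true effect at sample size n.
    z = effect / (sd / math.sqrt(n))
    return math.erfc(abs(z) / math.sqrt(2))

effect, sd = 0.01, 1.0  # a practically negligible effect
for n in (100, 10_000, 1_000_000):
    print(n, two_sided_p(effect, sd, n))
# The effect never changes, yet p goes from ~0.92 at n=100
# to astronomically small at n=1,000,000.
```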