The crisis (if it exists) is not triggered by a belief in beauty, but by the simple fact that the LHC hasn't yet seen anything "new" (beyond the very important success of the Higgs discovery).
The observation of the apparent non-existence of something (supersymmetry, in this case) is at least as important as the existence of something. Supersymmetry is one of high-energy physics' best guesses as to the nature of the universe. If that turns out not to be true, we have learned something very important.
"Thirty spokes share the wheel's hub;
It is the center hole that makes it useful.
Shape clay into a vessel;
It is the space within that makes it useful.
Cut doors and windows for a room;
It is the holes which make it useful.
Therefore profit comes from what is there;
Usefulness from what is not there."
Tao Te Ching - Lao Tzu - Chapter 11
(translation by Gia-fu Feng and Jane English)
Physics is alive and well; there are plenty of big open problems with strong experimental backing:
How does gravity connect with the rest of physics?
What is the nature of dark energy?
What is the nature of dark matter?
Where is the rest of the CP violation?
What gives neutrinos mass?
(And, the doozies that are so hard to answer that nobody touches them: Why does time have a direction? Is there something underlying quantum mechanics?)
The crisis that Hossenfelder is addressing is that there has been no significant progress that's been experimentally verified in fundamental physics since the birth of string theory in the 1980s. Yes, there are a lot of big open problems, but they remain open. Yes, we have recently found experimental evidence for things like the Higgs boson, but that was first predicted in the 1960s. Yes, we can rule out some versions of string theory thanks to LHC results, but there are a vast multitude of string theories, and that's not a lot to show for nearly 40 years.
* The value of the magnetic moment of the muon (the g-2 anomaly) comes out too high. IIRC this result is only at 2.5 or 3 sigma, so it's far from confirmed.
* There are some experiments trying to show that the neutrino is a Majorana particle, i.e. that the neutrino and the anti-neutrino are the same particle. I don't remember any solid announcement, so I think the experiments either have only early results or the results are not interesting. [I don't like this theory, but many people who know more than me like it.]
That kind of emphasizes the point rather than disproves it, though. Nearly 40 years of physics to pull your examples from, and you pull something from last month, something not confirmed (there's been tons of 2.5-3 sigma results over the past 40 years, but there's a reason physicists have learned to wait for more sigma), and something that is unannounced and may be uninteresting.
I believe there are some other things in that time frame. I believe the discovery that the universe is accelerating in its rate of separation fits in that time frame, and that's pretty big. We don't know how to fit it in with anything, but it's big. And I'd still say confirming Higgs is big.
But... yeah... it is a pretty short list compared to preceding decades.
When I spoke to an EXO collaborator, he told me that EXO put a really tight limit on the rate of neutrinoless double beta decay. If I were a betting man I would put it in the same category as proton decay, another theoretical-physics favorite.
The fact that the LHC hasn't seen anything new is the entire point here because it yields evidence against SUSY. That puts almost all of mainstream HEP theory in bad position because it has been built around SUSY being real and an accepted hypothesis.
As usual, theoretical physics is taking the wrong message from this, sticking to its guns and claiming SUSY evidence lies above 10 TeV, instead of stepping back and re-evaluating its fundamental assumptions. Of course they won't, because many have built their careers on this hypothesis.
I foresee a reckoning for the whole field (which will take on economic dimensions) in the future.
I agree with you, and I would add that, from a Popperian view, the LHC has helped falsify some of the possible string-theoretic universes from being our universe, by failing to pick the lower-hanging particle-fruits of supersymmetry from its relatively low-powered tests.
Any theory that predicts something other than the "expected" can be falsified by observing the "expected".
To use the swan argument from down-thread: If a theory predicts a) All swans are drawn from a single distribution of colors and b) 1% of swans will be black, then the observation of a few thousand white swans will be sufficient to rule out said theory.
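As a back-of-envelope check of that swan arithmetic (the numbers here are made up for illustration): under a theory that says swans are independent draws from one color distribution with 1% black, the probability of seeing a few thousand swans that are all white is vanishingly small.

```python
# Likelihood of observing n swans, all white, under a hypothetical theory
# that says 1% of swans are black and swans are independent draws from
# a single color distribution.
def likelihood_all_white(n_observed, black_fraction=0.01):
    return (1.0 - black_fraction) ** n_observed

p = likelihood_all_white(3000)
print(p)  # ~8e-14: a few thousand white swans effectively rule the theory out
```

So the theory isn't logically disproven, but its likelihood is crushed.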
In an idealised world the thousand white swans and 10 black swans would be mixed randomly, and you would be right. In the real world birds move in flocks, and the presence of one group of birds frightens off different birds. You may live in the hemisphere with only white swans and never see a black one as long as you live, while just over the hill there is a big population; it just lives in a place you haven't looked. In natural systems, observations are biased. Your reasoning is based on ideal assumptions.
Perhaps not absolutely, but in a Bayesian sense it should make us less sure of the theory's truth. "Falsification" is really just severe scepticism, after all, nothing can be absolutely proven in any direction.
I think that Bayesian reasoning applies to a different category of knowledge; there are things for which we have no predictive theories, which we have to reason about probabilistically, and Bayes is good for that. But pairing claims about how things are with probability seems a poor duo. After all, we can end up 99.999% sure that something is true, observe one counterexample, and now know that it is 100% false... Bayes doesn't do that.
MM (Michelson-Morley) is a great historical example that the field should remember. However, culturally, the current and previous generations of physicists aren't keen on their history, and even if they are, they definitely can't place themselves in it.
Yes, and its results confused everyone in physics and made people wonder what was really going on. But it could have been that another "amended" version of aether would have worked out, or that there was some mess with the experiment, or something else, (perhaps) a bit like gravity now (CDM, or something else?). When Einstein came up with relativity, all of that was blown away, because relativity came with positive predictions that aether didn't come with, and those predictions were then observed ("look ma, that swan is black, I told ya"). Now, who out there actually believes that cold dark matter is wrong because no one has had a good direct observation so far? It might be wrong, but it isn't ruled out at all.
You're losing the plot here. Recall the original claim:
> Nothing is falsified by the failure of an experiment to observe the expected.
MM definitively falsified the then-prevalent theory that space and time are absolute, and that the earth is moving with respect to the medium through which light waves propagate. The fact that it did not falsify all possible theories of the luminiferous aether does not change this fact.
I can sit day after day not observing a black swan. This proves nothing; it does not prove the theory "there is no non-white swan". But if I see a black swan, then the theory that all swans are white is up the chute.
Hypothesis 0 - all adult swans have largely white plummage.
Expectation - all observed swans "are white".
Observation - some swans "are black".
>"Nothing is falsified by the failure of an experiment to observe the expected." //
The experiment failed to observe the expected. The hypothesis was falsified.
Hypothesis 1 - all swans are white or black.
---
Even with something like "we expected to find a Top quark at ~100GeV and found nothing", the experiment has falsified that there is a Top that would appear in such an experimental framework. We usually project that result beyond its technical bounds, to say there is no Top below that limit. Is that what you're trying to get at?
If you mean experiments aren't perfect then I don't see how that truism is useful. There have been a couple of potentially massive results in recent years that have fallen to poor experimental process.
When a particle is observed at some energy, the theories that predict no such particle can exist are invalidated. A failure of the experiment to see something is just that: the experiment has failed. The cause of the failure is unknown; likewise, if you do make the observation, the cause of the observation is unknown. Newton's observations, and all the observations made by people who believed in a Newtonian universe, were not un-observed when relativity came along. I believe that the probabilistic approach to observation, and the observe-to-rule-out approach in HEP now, is a big error.
There's a distinction here. An experiment may measure a parameter to be zero instead of some nonzero value that everyone expected. That isn't subject to the black swan fallacy. Only when you are sampling is the black swan fallacy relevant.
The problem here is that in Aristotelian logic, which deals strictly in statements that are true or false and nothing in between, non-existent evidence is not evidence of non-existence. But this cute phrase, over-quoted and generally badly misunderstood, is not true for the reason the obvious, naive reading would lead you to believe... it is true because Aristotelian binary logic doesn't deal in anything like "evidence" at all. Non-existent evidence is not evidence of non-existence because, under this logic, there is straight-up no such thing as evidence. There is only proof.
However, while this form of logic is a great thing to teach children as their first (and likely only) one, and while your fancier logics probably ought to have some sort of Correspondence Principle for this style of logic, this rigid style is not terribly useful in the real world, where statements of rigid, solid, 100%-confidence mathematical truth are in short supply. In any probabilistic logic, non-existent evidence is evidence of non-existence. It is not proof. But it is evidence. It is true the human mind may have a tendency to overweight such evidence, but that's fixed by math.
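This "evidence, not proof" distinction can be sketched with a minimal Bayesian update (the probabilities here are made up for illustration): if a theory, when true, would make each search turn up evidence with some probability, then every null result lowers the posterior a little, without ever reaching zero.

```python
# Minimal Bayesian sketch of "absence of evidence is (weak) evidence of
# absence". Assumed, illustrative numbers: the theory, if true, would make
# each experiment find evidence with probability 0.2; if false, never.
def update_on_null_result(prior, p_find_if_true, p_find_if_false=0.0):
    p_null_true = 1.0 - p_find_if_true     # P(no evidence | theory true)
    p_null_false = 1.0 - p_find_if_false   # P(no evidence | theory false)
    marginal = prior * p_null_true + (1.0 - prior) * p_null_false
    return prior * p_null_true / marginal  # Bayes' rule

posterior = 0.5
for _ in range(10):  # ten failed searches in a row
    posterior = update_on_null_result(posterior, p_find_if_true=0.2)
print(round(posterior, 3))  # ~0.097: much less confident, but not disproven
```

Each null result multiplies the odds by 0.8 here, which is exactly the "evidence but not proof" behavior described above.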
So, while there is a sense in which the oft-trotted out phrase is true, in logics useful in the real world, it is simply false.
After all... what other evidence of non-existence do you expect to gather? If you do not believe in the invisible, intangible unicorn that lives in your garage that Sagan liked to talk about, what other actual evidence for that belief do you have other than a complete lack of evidence it exists? There is no way for you to disprove it under Aristotelian logic. (And this style of logic doesn't have a great story for statements it can neither prove nor disprove, which is another reason why mathematics moved past it a long, long time ago.)
If string theory is wrong, what other evidence for that can we gather other than the predictions of string theory being false? Remember, we can build physics models based on, say, Loop Quantum Gravity that excludes strings and may seem to predict the universe correctly, but that does not logically prove that there are not also strings in the universe. We exclude strings or other broken models only by consistently failing to collect evidence they are true.
>>We exclude strings or other broken models only by consistently failing to collect evidence they are true
Which is not a reason to consider them excluded. There is no reason to believe in the unicorn, or in strings, if there are powerful predictive alternate theories that explain the observed facts. But science creates contingent knowledge, not truth; that is all.
"Remember, we can build physics models based on, say, Loop Quantum Gravity that excludes strings and may seem to predict the universe correctly, but that does not logically prove that there are not also strings in the universe." - from just the previous sentence.
Incidentally, that post gored some oxen, apparently. Despite the firm mathematical foundation I stand on with great confidence for the entire post, it's sitting at -2. Seems a lot of people have a very emotional reaction to hearing that "absence of evidence is not evidence of absence" is untrue.
None of what you say contradicts that HEP Theory is in a crisis. That word was thrown around before the LHC confirmed the Standard Model, it's become more acute since.
The crisis is that the foundational ideas about how to make progress in physics (elegance, beauty, naturalness, symmetries...) are not yielding any insights on the questions you raise. And yet CERN still argues, in their press release today, that the HL-LHC is important for detecting supersymmetry.
The technical core of Hossenfelder's argument is here:
"Thirty spokes share the wheel's hub;
It is the center hole that makes it useful.
Shape clay into a vessel;
It is the space within that makes it useful.
Cut doors and windows for a room;
It is the holes which make it useful.
Therefore profit comes from what is there;
Usefulness from what is not there."
Tao Te Ching - Lao Tzu - Chapter 11
(translation by Gia-fu Feng and Jane English)
---
(YC people: it is very telling that you can't find the will/resources to fix this blockquote on mobile problem. I would say it is damning of the whole ethos of YC/HN, in fact, but I'm marginal.)
One dev intern could tinker with HN's codebase to improve it. It's not as if YC is a startup that has to have a laser focus on one main chance. This is big business now.
There's plenty of research on the problem of the arrow of time, so I don't think it's fair to say nobody touches it. And what did you mean by "Is there something underlying quantum mechanics?"
> The observation of the apparent non-existence of something (supersymmetry, in this case) is at least as important as the existence of something.
Human imagination can create a large number of theories. It isn't as valuable to prove the non-existence of something you have imagined up as it is to prove the existence of something you have imagined up.
Sure, in this case it was pretty important to prove an idea (supersymmetry) wrong, because it was so widely accepted. But insofar as physics is an empirical science, it's important to reflect: how did it happen in the first place that an idea with no empirical support became so widely held? What can be improved in the culture and habits of physics and physics funding? Which traditions should be reconsidered?
Supersymmetry was "beautiful" (i.e. it was produced by the conscious mind) and hence in a sense more "likely", so disproving it is as valuable as proving other "beautiful" theories (e.g. GR and Mercury's orbit).
“Why should the laws of nature care about what I find beautiful?”
My suspicion is this is something we will forever wrestle with because "beauty" is a proxy shorthand measurement for things of real value, but it is confounded by an enormous and oft abused potential to use this fact fraudulently. That which has real value is often appreciated for its beauty, but the street doesn't run both ways. Seeking to make something look good doesn't necessarily give it the more valuable underlying properties.
In GIS school, I learned that a good map will typically be described as beautiful, but a beautiful map isn't necessarily good. A good map is elegantly designed to effectively convey information. When the goal is achieved, high quality design also has inherent aesthetic appeal. But trying to just make a map pretty doesn't make it more useful. In fact, it often makes it less useful.
I think the same principle generalizes. I was born with serious respiratory problems. I always had really lousy fingernails. With getting healthier, my fingernails have grown stronger and prettier. This makes me suspect that manicures and painted nails are about trying to enhance a signal of baseline good health that is inherently valuable and attractive. But painting your nails doesn't actually improve your respiratory health. Fake glued on nails can give an appearance of health that isn't real.
Maybe we should care about beauty because Nature cares about beauty the way she cares about symmetry. I'm just spitballing here, but maybe Emmy Noether's intellectual heir will find a link between the conservation of energy and Kolmogorov complexity (if this hasn't already been shown).
Maybe beauty is merely a word for "my brain has crunched vast amounts of data and concluded that something very, very fundamentally appealing is going on here and I shall call that by a word suggesting visual appeal because I assessed that appeal with my eyes."
Maybe "beauty" is ultimately a name for symmetry, where symmetry is a name for invariance under some transformation. And there is always a new type of transformation to consider and therefore a new and perhaps more subtle form of beauty.
"Beauty", in this context, means "concise". Basic physical laws used to be very short, with no arbitrary constants. "F = ma". "V = IR". "E = mc^2". Kepler's laws. Maxwell's equations. The Schroedinger wave equation. When researchers discovered the general rules that described how things worked, they could be expressed as simple expressions. Physics progressed by discovering such simple expressions.
Maybe I'm speaking from ignorance, so please show me why if this is wrong, but I find physics to be much less about understanding the universe than about observing it (and happening to be able to use the language of maths to describe those observations).
The naturalness/beauty described by the author is an approach which seeks to formulate theories which help us understand the universe.
Perhaps the apparent naturalness/beauty in the equations you state has less to do with understanding what makes the universe work, and more to do with our minds' predisposition: we would tend to observe those relationships first.
Many physical laws, AFAIK, appear less as absolute truths, or keys to understanding the universe, and more as just observations about what to expect. E.g. Newtonian gravity works all the time and is described beautifully in maths, but at certain scales it stops working. Does it really tell us anything about what makes the universe tick?
Anyways here is a clip I like from Feynman discussing the idea of beauty in figuring out how nature works: https://youtu.be/MEqMM2Co_9c
Every theory "explains data we don't have," for example F=ma makes a prediction about the motion of a 10^100^100^100kg particle under a 10^100^100^100N force.
I may be biased by the selection of physicists whose lectures I watched, but they would probably tend to disagree with the statement that physics got lost in mathematics. Especially Nima Arkani-Hamed convincingly argues that we are fundamentally looking at the universe from the wrong perspective. Relativity and quantum theory are built around a few very fundamental principles, like locality and unitarity, and somewhat surprisingly those principles single out a very small set of possible theories that are compatible with them.
But on the other hand those principles seem to obscure a more fundamental and simpler truth behind them. For example calculating scattering amplitudes may require hundreds or thousands of terms from different Feynman diagrams with more and more virtual particles but in the end they all cancel out. Surprisingly again there are ways to arrive at the same results - BCFW recursion relations and the amplituhedron - but with comparatively extremely simple calculations. But unlike Feynman diagrams, which suggest a picture of particles interacting locally in space, those calculations provide no picture that can be easily matched against recognizable things.
The results are the same but there is nothing that looks like locally interacting particles in there. This then suggests that things we currently considered fundamental, for instance locality, are not fundamental, that they emerge from something more fundamental. And this is where mathematics might be really useful, you try to reformulate existing theories in new ways and maybe you find a representation that is much simpler than what we have and maybe that is what the universe is really like as compared to what it looks like to us. And maybe that will also suggest new experiments to be done and which do not require energies far beyond our current reach.
I have a question that didn't get answered when I talked to people about the amplituhedron. Do the higher-k (dimension? I'm unsure) Grassmannian calculations have any bearing on a world without supersymmetry? If Nima's tool is only useful for supersymmetric calculations, then honestly it's not very compelling. I want to be wrong, it's very beautiful math, but it seems wholly impractical for "real world" Standard Model physics.
I am a total layman, so I don't know. But if I understood it right, the nice final result relies on the use of Grassmann variables which eventually cancel out. Whether or not there is a path to non-supersymmetric theories, I have no idea. But even if it fundamentally requires supersymmetry, the jury is still out on whether or not the universe is supersymmetric, even though the best versions have now been ruled out.
Having studied both pure physics and pure mathematics, my impression is that physicists don't really do mathematics, they use a pidgin form of symbolic expression related to mathematics with the specific goal of studying natural systems. And this is a pretty subtle point to explain to anyone who has not experienced both cultures significantly. But the MO of studying systems that can be measured in physical reality, is a constraining principle. Mathematics is not constrained to so-called reality and as a result can fly higher and see farther; it has a better imagination, if you will.
Burden me not with any reminders about reality, people needing to get up to make the donuts or whatever. I know all that. I'm not knocking reality, reality's great. Nor do I need reminding how beautiful and strange and wild and cool some of the math in physics really can be. Utterly, utterly rad, without a doubt. But working with such high-dimensional spaces, such high-rank/high-variable transformations, and such high-density symbolic representations, as physics seeks to do, requires a much freer and more "artistic" approach than some kind of mental slavery to what can be seen. (By which I mean, what can be measured.) Physicists need more pure math, and when I say pure, I actually prefer the term theoretical math. Because physicists need to design their own math, and that is in one sense what theoretical mathematics means.
Physicists are better at what laypeople think of as mathematics--huge whiteboards, filled with esoteric symbols, furrowed brows and chalk-stained hands jittering through the air in some magnificent, halting dance of frustration & eureka. That's what people think of when they think of "doing mathematics". But physicists don't really do what mathematicians do, not really, not completely. And all those purely esoteric maths that no one except pure mathematicians ever get to see, say, topology, homology, algebraic geometry, abstract algebra, etc, are hiding some real gems of thought.
I'd like to see Physics, finding itself at a halt, go and start to study all the Mathematics it's been putting off.
I'm pretty sure guys like Ed Witten are doing real math. Or at least he fooled the mathematicians thoroughly enough that they gave him a fields medal. I'm neither a physicist nor a mathematician, but your characterization seems a bit unfair.
> I'd like to see Physics, finding itself at a halt, go and start to study all the Mathematics it's been putting off.
And that kind of approach is exactly what Sabine Hossenfelder criticizes in the article, in my opinion significantly unfairly. The critics (she is surely not the only one) complain precisely that, for example, string theory is more math than physics because it can't be "immediately verified," or that most of the experiments are "not confirming" the most obvious variants of the expected results of the new theory candidates. But science shouldn't be reduced to short-term goals and only to the processes which are "guaranteed to work." That's exactly how we end up unable to see further than our noses.
And if the critics say that the "alternative theories" don't get enough funding, I posit that the average "alternative theory" is typically even more conservative than the widely accepted physics (by virtue of the accepted physics being already "unintuitive" enough, and having a steeper learning curve than practically all "alternatives" are ready to accept).
We should all appreciate that one of the most impressive achievements of 20th-century physics, the Special Theory of Relativity, has its fundamental support in the famous Michelson-Morley experiments, which also didn't confirm the expectations of 19th-century physicists.
So we have to accumulate enough experimental results against some approaches, and with enough precision, to even have a chance of finding the new rules, if new rules in the form we're used to could even be found. We must be open to the experiments, and be happy even when the "most hoped for" results don't happen.
On another side, when Sabine Hossenfelder is more precise and when she addresses some specific aspects, I can surely agree with some of her statements, for example:
"The criticism of heliocentrism based on the argument that the absence of observable parallax implied the stars had to be “unnaturally” far away was wrong for exactly this reason: They had no probability distribution but erroneously postulated one by assuming that the stars should be likely to have similar distances to the planets as the planets have among each other. We now understand the distribution of stars and their typical distances comes about dynamically during structure formation and that there is nothing “unnatural” about the distance of our Sun to the other suns."
Her criticism, however, is that currently physicists are "looking for the lost keys under the street lamp" because "in the dark they can't see them." But even if it sounds funny or misguided, it is true that in the dark not much could be seen, and before we actually check the already properly lit areas we can't expect more from looking into the dark where we really see too little. Investing in the flashlights can be reasonable, but also only once the lit areas are actually checked.
And we should also not forget that the current physics already enlightened immense parts of the universe. The "dark areas" were never so amazingly small as they are now.
> “Why should the laws of nature care about what I find beautiful?”
This article is a nice counterpoint to Paul Dirac's speech in favour of mathematical beauty in physical theories. [0]
Some quotes from Dirac:
> What makes the theory of relativity so acceptable to physicists in spite of its going against the principle of simplicity is its great mathematical beauty. [...] We now see that we have to change the principle of simplicity into a principle of mathematical beauty.
Interestingly, he also says:
> For example, only four-dimensional space is of importance in physics, while spaces with other numbers of dimensions are of about equal interest in mathematics.
> It may well be, however, that this discrepancy is due to the incompleteness of present-day knowledge, and that future developments will show four-dimensional space to be of far greater mathematical interest than all the others.
His prediction here was almost correct. Except it was physics that started to take an interest in a higher number of dimensions.
You get this in mathematics too. The four colour theorem for example has an "ugly" brute forced machine checked proof that's impractical for humans to check.
Elegance and simplicity should be sought after, and ugly proofs can be an indication that you're missing the right abstraction but I don't see why all true statements should have an elegant proof.
Supersymmetry has not been found; however the top quark is an encouraging precedent.
The top quark was predicted in the early 70s, and was expected to be found soon. However colliders failed to find it, and its minimum mass kept getting pushed up. It wasn't found until 1995, at a much higher energy than was initially expected.
The presence of Top was predicted with the finding of Bottom, in order to maintain symmetry. It was expected to have a higher energy, otherwise it would have already been found... but the energy turned out to be much higher still. It wasn't that it was predicted to be low energy and found to have a higher energy; it was that we knew it had higher energy than Bottom, just not how much higher. Like climbing a convex hill covered in cloud: one can't see the top, and one isn't sure how high it goes until one reaches it.
It's my understanding that the unpredictably high energy hasn't properly been accounted for but is believed to relate to Yukawa couplings.
The energy was predicted within certain lower and upper bounds in '94, just prior to the confirmation of the Top in '95, and a Nobel was awarded for that work [relating to T parameters (https://en.wikipedia.org/wiki/Peskin%E2%80%93Takeuchi_parame...) which I don't claim to understand! 't Hooft and someone, erm, ...].
In part I believe it relates to how the Higgs works and whether the Higgs is composite - possibly being comprised of Top and Anti-Top in one theory.
Beauty is a feature of our Universe. It's not that surprising to find it also in descriptions (equations, theories). Beauty is a Universe-y characteristic.
I imagine if we were in a different universe then our concept of beauty would differ and perhaps, perhaps, conform more to the parameters of theories that described that universe.
On the contrary: she suggests adhering to it. The notion of whether a theory is elegant or beautiful is exogenous to the question of whether it describes nature, and hence should be cut by the razor.
In technical terms, there is no need for a prior on theory space:
A probability distribution from which to calculate the most likely choice of parameter adds unnecessary structure to the theory and is thus in conflict with the dictum of simplicity. We could have chosen a parameter and be done with it. The probability distribution and all the not-observed values of the parameters are unnecessary for the derivation of any observable and they should therefore be stripped by Occam’s razor.
My current view is that parsimony is a heuristic you can use, but weak evidence at best. Plus, parsimony is not unambiguously defined, so comparing hypotheses in terms of it can still be subjective even with objective criteria, as there is no agreement on which approach is best. It's most justified to say that extremely complex models are unlikely if the complexity is not necessary.
I’d use another, less charitable, word. Finding the laws of physics isn’t any different than finding a winning play in Go, except the search space is vastly larger. If we aren’t going to pay for things like the Superconducting Supercollider, we aren’t going to pay for a brute-force search to find the laws of physics. We have to use heuristics to guide the search. “Beauty” has worked in the past (Gell-Mann has a TED talk on this) so there’s no reason to abandon it - until it doesn’t work. Then we’ll try something else. In addition it took, what, 300 years to prove Fermat’s Last Theorem? Good grief. For hard problems, 40 years is nothing.
Brute force search isn't what solved Go. What was needed was a model with the right information, not all the information or the most elegant or compressive version of the information, but the right information. What "right" means in this context is what we need to figure out!
What is special about Occam's Razor? Why not get rid of it? Do you know why Scotus and Occam and the rest of the gang came up with it? Very few people have actually thought about why and what it was for, so why do so many people cleave to it?
I have strong ideas that are somewhat better informed than most people I talk to, but then again I am probably talking to the wrong people. As far as I know, Occam's Razor was invented about 800 years ago to address the problem of the Holy Trinity. Medieval philosophers were stumped by the number three: one was fine, since unity was perfection, but more than one was a problem. Why would God have a certain number of aspects? So the school of Scotus formulated the idea that the number of entities was exactly as many as necessary, because God knew what was up with the universe and would therefore produce aspects only to necessity. This introduced the idea that there were elegant and sufficient numbers of things in nature, which was quite contrary to the old way of looking at the infinite and mystical aspects of the world, inspired by ideas of irrational numbers and the right-angled triangle with a hypotenuse of length √2. The idea has been a handy heuristic for science (not maths) ever since, but it has no actual basis apart from avoiding the bonfire if you were a Christian philosopher in the thirteenth century (I think William of Occam was nearly burned at Avignon, but talked his way out of it).
When people use Occam's Razor today they typically mean "prefer the simplest, most elegant explanation". The reason we have it is that for any kind of observation you can make, there are an infinite number of different theories that could explain your observations. So we need some kind of rule that tells us which of these theories we should prefer.
It's not only a heuristic though. If you have a scientific theory modeling 10 parameters, but a theory modeling the exact same phenomena can be formed using only 7 parameters, the 7 parameter theory is preferable for a number of very rational and non-aesthetic reasons.
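One of those non-aesthetic reasons can be made concrete with an information criterion such as AIC, which penalizes every extra parameter: a fit only justifies added complexity if it improves enough to pay for it. The data and polynomial models below are hypothetical illustrations, not a claim about any particular physical theory.

```python
import numpy as np

def aic(n, rss, k):
    """Akaike information criterion for a least-squares fit of n points
    with residual sum of squares rss and k parameters; lower is better."""
    return n * np.log(rss / n) + 2 * k

# Hypothetical data: a quadratic trend plus a little noise.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
y = 1.0 + 2.0 * x - 3.0 * x**2 + rng.normal(0.0, 0.05, x.size)

def fit_rss(degree):
    """Residual sum of squares of a degree-`degree` polynomial fit."""
    coeffs = np.polyfit(x, y, degree)
    return float(np.sum((y - np.polyval(coeffs, x)) ** 2))

aic_simple = aic(x.size, fit_rss(2), 3)    # 3-parameter model
aic_complex = aic(x.size, fit_rss(9), 10)  # 10-parameter model
# The richer model always fits at least as tightly, but its seven extra
# parameters must each earn their keep; on data like this they usually don't.
print(aic_simple, aic_complex)
```

The same comparison can be run with BIC or cross-validation; the shared idea is that the penalty term is a rational, non-aesthetic stand-in for "prefer 7 parameters over 10 when both model the same phenomena".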
I think it's an eye-grabbing title, but not really honest, as it shifts from 'beauty is bad' to 'why has it failed us'. It's a question worth asking, because obviously answering it would be progress. But I don't think anyone is actually arguing against Occam's Razor, or against the goal of simpler theories that solve more.
I read the article, and I don't see any insightful answers; perhaps we're supposed to buy the book? Or perhaps people should devise stricter criteria for theories to be experimentally demonstrable...
Why has beauty failed us? Such mathematical beauty is produced by the conscious mind, and the mind is created (via the brain) from the action of the laws of physics. Perhaps it's because the self-referential circuit Math -> Physics -> Mind -> Math can't exist that Math can't fully explain Physics, and e.g. GR and QM will never be united via a "beautiful" mathematical theory.
Not sure. Look at any symmetric figure, and compare it with a real flower, or a leaf, or a bee. Which is more beautiful? The former looks dead in comparison. I think the laws of nature are more similar in their "beauty" to the latter.
I'm thinking vaguely of how, in a song, a "perfect rhyme" can be as jarring as the lack of a rhyme where expected. I started listening for them and it seems as though "general rhymes" are the rule. There's something about imperfect symmetry in nature.
>“Why should the laws of nature care about what I find beautiful?”
I find this question to be missing the fundamental point of scientific endeavour. Nature obviously does not care about beauty, but scientists do because extracting meaning and order (laws) from observations and building models is the very essence of scientific insight.
The goal of science is not to replicate nature, it is to build a model of nature from which we can collect insights and make predictions; the model does not need to 'conform to' nature. That's not the point. If that were the case we could just dump the LHC data into a textbook. There is a great story written by Borges called On Exactitude in Science where he creates the analogy of a large but useless map:
"... In that Empire, the Art of Cartography attained such Perfection that the map of a single Province occupied the entirety of a City, and the map of the Empire, the entirety of a Province. In time, those Unconscionable Maps no longer satisfied, and the Cartographers Guilds struck a Map of the Empire whose size was that of the Empire, and which coincided point for point with it. The following Generations, who were not so fond of the Study of Cartography as their Forebears had been, saw that that vast map was Useless, and not without some Pitilessness was it, that they delivered it up to the Inclemencies of Sun and Winters. In the Deserts of the West, still today, there are Tattered Ruins of that Map, inhabited by Animals and Beggars; in all the Land there is no other Relic of the Disciplines of Geography."
I think you misunderstand the issue here. No one is arguing against reductionism, and equations, and extracting meaning. You have a different notion of beauty in mind than the one that is being criticized here.
This is an overdue debate within the context of theoretical physics. No one wants to throw out the process of deriving equations and models that explain data. If anything Hossenfelder is the classicist here, rejecting ideas that want to go beyond empirical truth in order to salvage preconceived notions of what the laws of physics "should look like".
Such a map is far from useless: if you want to see any other area you don't have to travel there; you can "visit" all areas, you can duplicate any one area and experiment on how changes will affect it, and so on. It's only the bulk of the map that makes it impractical, and we actually now have maps that are pretty close to that; we just found a way to make them less bulky.
I think the end goal of science actually is very close to the 1:1 scale map in Borges' story [1] - we want to be able to say in any situation "if I take this scenario and run it forward then what are the outcomes".
We can do this to some extent. We can predict paths of simple objects, we can predict how macroscopic systems will play out to some degree of accuracy. We can even predict the existence of particles and confirm the consistency of such predictions in later observations.
Surely the goal is to make our predictions better, up to any limit the universe gives. We want to be able to perfectly simulate the world - we don't just want the 1:1 map, we want a 1x10^20:1 map with moving people, with weather, with every possible feature that we can measure so that instead of travelling to Ulaanbaatar and dropping a rocket motor from the troposphere we can do that in the "map", and measure the effects in the map and so know what the effects would be if that were to happen for real.
We can't use the Universe as its own map: we can't arbitrarily look at it at any scale, we can't run it back or duplicate it when we want to re-run experiments, we can't visit any part without travelling, etc. - so we seek to simulate it, through simplification, because we can't currently simulate it any other way. As we uncover ever better models we learn to simulate limited sections or limited facets to a greater and greater degree.
But the predictability in this example comes from the function that maps one state of the map to the next through time, not from any visible or high-resolution feature in any particular snapshot of the map. You don't determine the function and the skeleton of the map by increasing its resolution; you obtain its functions by abstracting the unnecessary details away to figure out what the important objects and regularities are.
If you want to figure out how the tube in London works you don't need a photorealistic, lifelike copy of the subway system, you need an abstraction of the network, its congestion, routes and so forth. Those are idealised mental models that are simplifying the real system, but they are much more useful to you than the real thing.
Assuming you want to know how the tube works, as a passenger, independently of other systems, then yes.
But if you're building a grand unified model of how London works then not only do you want to be able to abstract certain facets, you also want to be able to combine the models those abstractions create with data points to build a more complex systemic model. You want to look at where the stations are in relation to others, how the different transport networks link, what happens when a tube train stops - how does the effect ripple through the system and alter what time a particular Pret-a-manger has to restock the sandwiches.
At that point the abstraction into a simplified system that only describes the tube is useless; you need to combine the abstractions to model the entire system. And you may be able to get there, except your model missed out sunspot activity, a bus driver turned the wrong way because their GPS was marginally out, the Pret-a-manger manager missed the start of the shift, and now you have to eat cheese instead of Mexican chicken.
In short, you want your abstraction to be able to construct a model that is as close as necessary, for the purpose at hand, to the "photorealistic, lifelike copy" of whatever it is you're seeking to predict. If you only wish to predict how many stops you'll need to stay on the tube for, then a simple tube map suffices. If you want to predict airflow through the tube system and how it affects heat exchange, then a more complex model will be needed, with different abstractions (preserving the length of tunnels, for example).
If you want to predict everything that it's possible to predict ...
So, science is a contraction mapping of reality onto its subset (some mathematical theories). For it to work, it should remove some redundancy, some predictable structure. Finding and describing this predictable structure is the essence of science.
Lack of a predictable structure is lack of beauty: a stream of random numbers is both incompressible and aesthetically disappointing. This is why looking for signs of beauty is a good heuristic for finding that hidden structure. The same applies to engineering, BTW; the father of the Soviet space program (Sputnik, the first man in space), S. Korolev, said: "An ugly aircraft will not fly".
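The incompressibility point can be made concrete with an off-the-shelf compressor: a stream with hidden regularity shrinks, while a random stream does not. This is a sketch of the compressibility claim only, not a definition of beauty; the byte streams are arbitrary examples.

```python
import random
import zlib

random.seed(42)

# A structured stream: a repeating 256-byte pattern (hidden regularity).
structured = bytes(range(256)) * 64                      # 16 KiB, highly regular
# A random stream: no regularity for the compressor to exploit.
noise = bytes(random.randrange(256) for _ in range(16384))

# Compression ratio (compressed size / original size).
print(len(zlib.compress(structured)) / len(structured))  # far below 1
print(len(zlib.compress(noise)) / len(noise))            # close to (or above) 1
```

In this sense a general-purpose compressor is a crude detector of "predictable structure": whatever it can squeeze out is exactly the redundancy a scientific description would try to capture.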
Software doesn't have to obey the laws of physics in the same way; it just has to obey the laws of a dirty, messy, virtual environment created by humans. Obviously the physical world is below all the human-made layers, but it's far enough removed that I don't think that ugly yet functional software alone refutes the point nine_k (the GP) was making. Just a thought.
Every major advance in physics has arrived with a new paradigm of mathematics, usually more than one new paradigm. Beauty may be in the eye of the beholder, but I doubt quantum gravity is going to be solved without the invention of major new mathematics. It's not going to be found by tweaking some kludge, no matter how "ugly", nor hidden inside some long-familiar object (e.g., the E8 group...and for the record, Garrett Lisi is NOT a highly respected physicist.)