
In modern terms I think you could make this critique from a perspective of learning theory and combinatorics. Methodological monism implies that the fitness landscape being explored by science is "well behaved" and can be traversed using a simple stepwise or gradient descent function. We have no reason to believe this, and many reasons to believe that "unjustified leaps" and other heresies may be required to escape local maxima or traverse unconnected regions of state space.
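
To make that concrete, here's a toy sketch (a made-up one-dimensional "fitness" function, nothing more): simple hill climbing stalls at whichever peak is nearest its starting point, and nothing in the update rule can carry it across the valley to the taller peak.

    import math

    def fitness(x):
        # Made-up landscape: a small peak near x=1 and a much taller one near x=6.
        return math.exp(-(x - 1) ** 2) + 3 * math.exp(-(x - 6) ** 2)

    def hill_climb(x, step=0.01, iters=10_000):
        for _ in range(iters):
            if fitness(x + step) > fitness(x):
                x += step
            elif fitness(x - step) > fitness(x):
                x -= step
            else:
                break  # no uphill neighbor: stuck at a (possibly local) maximum
        return x

    print(hill_climb(0.0))  # ends near 1.0 and never finds the taller peak at 6
    print(hill_climb(4.0))  # started in the right basin, ends near 6.0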

There is probably no deterministic "for(;;) { do X }" formula for learning in complex real-world domains. You could make an anthropic argument here: if there were, life and intelligence would not exist, since whatever thermodynamically favored autocatalytic processes life carries out could be replaced by non-living systems of lower entropy and higher probability. The existence of life as the chaotic, ever-changing, evolving system that it is is itself evidence for the non-closed-form nature of evolutionary learning.

I've thought for some time that the slowdown in "fundamental invention" that some have observed since roughly the late 1960s to early 1970s is the result of "scientific fundamentalism" and Skepticism. Look into the backgrounds and thoughts of people like Einstein, Tesla, Engelbart, Turing, Edison, or Schrödinger, and you generally will not find them to be fundamentalist-positivist Skeptics.

It seems like the general intellectual trend from roughly 1970 until maybe 2010 or so was the rise of fundamentalism of every kind, both religious and secular... maybe due to an emotional desire for the comfort of absolute certainty in the face of rapid dislocating change.

There also seems to be an economic drive to commoditize humans. If learning can be completely systematized, then science can be rendered into an assembly line process that can be run by bureaucracies, scaled as needed, and in which individual workers can be treated as a 'human resource' commodity. Bureaucracies cannot deal with multi-modality, unpredictable and eccentric 'genius,' etc., so to admit that these things are required for the progress of science is to admit that bureaucracies are inherently limited.

Of course this being startup central, we all know this. If learning could be systematized and bureaucratized, there would be no startups. Big companies, VC funds, and banks would just execute whatever deterministic steps are required to yield progress and maintain 100% ownership of everything.



>There also seems to be an economic drive to commoditize humans. If learning can be completely systematized, then science can be rendered into an assembly line process that can be run by bureaucracies, scaled as needed, and in which individual workers can be treated as a 'human resource' commodity. Bureaucracies cannot deal with multi-modality, unpredictable and eccentric 'genius,' etc., so to admit that these things are required for the progress of science is to admit that bureaucracies are inherently limited.

I think the drive to commoditize everything is probably the bigger issue than a "scientific fundamentalism". There are plenty of crank scientists, and always have been! If it were just a matter of generating sufficiently many sufficiently non-mainstream theories to achieve a paradigm leap in a whole field, you would expect that sufficient production of cranks would produce geniuses by chance.


First of all, I think you're claiming a little too much about science.

> the fitness landscape being explored by science is "well behaved" and can be traversed using a simple stepwise or gradient descent function

This is false! Non-obvious, I know, but it has been shown experimentally and holds for a wide class of high-dimensional landscapes. It's well explained here (he talks about several novel ideas/insights, actually; worth watching):

https://www.youtube.com/watch?v=7KCWcx-YIRI

> There is probably no deterministic "for(;;) { do X }" formula for learning in complex real-world domains

This is true in a rather straightforward manner when limited to mathematics, from the halting problem (and some mild computational assumptions about the universe): if no finite algorithm can solve arbitrary instances of the halting problem, clearly no finite algorithm can solve arbitrary problems (that includes algorithms that try to "self-improve", of course).
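
For completeness, the usual diagonalization behind that claim fits in a few lines. The halts() below is a hypothetical oracle, not a real function; the sketch only shows why no correct one can exist:

    def paradox(program):
        # halts(p, i) is a hypothetical oracle answering "does p halt on input i?"
        # No such total, always-correct function can exist; that's the point.
        if halts(program, program):
            while True:   # the oracle says it halts, so loop forever
                pass
        return            # the oracle says it loops forever, so halt immediately

    # Consider paradox(paradox): whatever halts(paradox, paradox) answers,
    # paradox does the opposite, so a correct halts() is impossible.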

I think this bit of scientific realism is indeed very important to those in the field.

> There also seems to be an economic drive to commoditize humans. If learning can be completely systematized, then science can be rendered into an assembly line process...

I tend to think the opposite: the more we're able to replace humans with machines, the more power we get to do whatever we want. We just have to make sure this surplus power is made widely available.


That talk looks fascinating... bookmarked it for later. Quite a few people in all kinds of areas would benefit from a study of machine learning theory -- I think some of these findings are philosophically profound.


The reason simple descent (in this case a modified Newton's method) works is very simple... the curvatures of the fitness landscape at each point are mostly uncorrelated across dimensions, and a local minimum occurs only when the curvature along every dimension is positive, which is exponentially unlikely in the dimension. I hadn't heard such a simple and useful insight in a while... this is exactly what theory is for! Anyway, that talk is very interesting.
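
A back-of-the-envelope version of that counting argument (my own toy model, not taken from the talk): if the sign of the curvature along each dimension were an independent coin flip, the chance that a critical point is a true local minimum would fall off as 2^-d.

    import random

    def is_local_min(d, p_positive=0.5):
        # Toy model: each of the d curvatures is independently positive with
        # probability p_positive; a local minimum needs all of them positive.
        return all(random.random() < p_positive for _ in range(d))

    trials = 200_000
    for d in (2, 5, 10, 15):
        hits = sum(is_local_min(d) for _ in range(trials))
        print(f"d={d:2d}  empirical={hits / trials:.6f}  theory={0.5 ** d:.6f}")

Already at d=15 essentially every critical point in this toy model is a saddle rather than a trap, which is the intuition for why simple descent keeps making progress in high dimensions.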


The obvious falsehood in this concept is that science is based on idea nullification: it seeks out the places where a theory breaks down. In physics, for example, it often seems that adding yet another decimal to some constant is more or less a waste of time, but precision has broken many theories, and as such it's not 'safe' territory.

PS: Abstractly, gradient descent seems to get you stuck in a local optimum, but fractals, for example, are not 'safe' places. Look too closely at the head of a pin and there be dragons.


The two are not mutually exclusive. To say that all scientific progress can't be attained by just following a single scientific method does not imply that following that method is fruitless or wrong. It obviously does work sometimes. The fitness landscape is complex and multifaceted, and different approaches to learning will work to varying degrees at different times.

The danger is in fundamentalism -- in the elevation of a single method or philosophy of science to an absolute One True Way and the (always attendant) purging and blacklisting of anyone who doesn't follow it.

It seems to me that a lot of people, especially CS and math types, are really deeply uncomfortable with the sort of multi-modality I'm talking about. Studying biology deeply made me really comfortable with it. It's really very rational too. If you understand learning, search, combinatorics, and evolutionary theory, the absolute requirement for multi-modality is easy to grasp... even intuitive to visualize. One would not traverse a desert with snowshoes, an ocean with boots, or a mountain with a road bike. Fitness landscapes are also varied and demand a varying set of strategies to cope with different types of terrain. We need an epistemological toolbox with all kinds of different tools in it.


I think you'll find pure mathematicians and CS theorists more amenable to the creativity and flexibility you're speaking of; it's the applied ones, who have strayed too close to engineering and had their minds burned into a particular mode, that I've found are altogether uncomfortable with multi-modality.


You missed my point: methods are just another idea that can be falsified. Ex: http://www.nature.com/news/first-results-from-psychology-s-l...

At its core, science is really three things: experiment, bookkeeping, and continuing after the old guard dies off. That does not mean it's fast, but give it 500 years and phrenology looks like a poor joke.


I'm not sure what you mean by "fundamentalist-determinist" but Evelyn Fox Keller's account of the work of Barbara McClintock might be relevant here.

I can only find the following article online, which is critical of Keller's conclusions. I'd say Feyerabend is more philosophically sophisticated than McClintock.

http://www.americanscientist.org/bookshelf/pub/demythologizi...


> fundamentalist-positivist Skeptics

This seems to lump several different things together. Could you give us an example of this sort of character?


Not the parent poster, but David Deutsch in "The Fabric of Reality": both outward-looking and utterly married to Popper.


I'm not familiar with that work, but there doesn't seem to be much skepticism to it?


The problem is that "methodological monism" has never been a real approach in science, as Feyerabend shows. It's just an idealized approach one can cull from some philosophy-of-science theories.

The idea that you can get some more robust notion of truth from a fuzzier learning approach is interesting, but it has no relationship to Feyerabend and little relation to science as such.


I believe that a field needs crackpots as much as it needs fundamentalists, for the optima-breaking reasons you outline. I'm not sure if you could institutionalize crackpottery – it seems like the kind of thing that, when measured, goes away. But I do think that one should take it upon oneself to "take one for the team" and occasionally entertain odd ideas, given that one understands why one is doing so and maintains a high standard of first-order evidence (NOT consistency/coherence/second-order evidence).


The problem is human nature, specifically that we become emotionally attached to our ideas and our methods/tools and form cliques around them. If that weren't the case, a "crackpot" who followed their own peculiar methods and a systematic logical/positivist type could get along just fine and could even be objective about each other's ideas. I also blame monotheism. A polytheist culture might have an easier time with the idea that there are multiple ways of thinking that can prove fruitful in different situations.

I've seen people who can pull this off, and it results in profoundly interesting debates. But most people can't. For most people their beliefs equal their identity and their sense of self, and other beliefs are a threat.


I think you might be overestimating the role emotions and identity play in maintaining the integrity of a system of epistemic statements. There are structural reasons, beyond simply how much I like my ideas or other incentives to keep them, that make scientific reasoning hard. Devoid of emotion and human-nature concerns, we still end up with optimization problems which feel fuzzy because they are fuzzy, not because there's anything clouding our vision.

The epicycle models that dominated cosmology's explanations of orbits come to mind, if only because they're a good example that is obvious enough in hindsight as to be uncontroversial. Epicycles carry such strong descriptive power that it does make sense to add more epicycles if you're merely trying to explain your data. But because epicycles don't require information from gravitation, it feels like crackpottery to introduce such information even if it results in a simpler model.

Another example that's more obviously structural is building set theory without any notion of self-reflective statements. Russell's paradox undid the first iteration of set theory despite that system being quite descriptive.
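
For reference, the offending set fits in one line: R = { x : x ∉ x }. Unrestricted comprehension lets you form it, and then R ∈ R holds exactly when R ∉ R, which is the contradiction.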

If you built a whole system of thought on a powerful, incorrect statement, you're fooling yourself but it's understandable.

I can't evaluate it well enough to say if it's true, but some crackpots are coming up with models of black hole phenomena without event horizons, based on a problem similar to the one epicycles might have had. That's all I'll say, because I think I've exhausted the controversy I can put into one post. :)



