
This rant unfortunately fails to clearly demonstrate how Kurzweil errs. It goes on about complexity and emergence, but why would complex interactions not emerge from a computer simulation just as they do in the real biochemical system?

Nevertheless, my gut feeling, too, is that Kurzweil is mistaken. I can't quite put my finger on it yet, but at least one problem I see is this: Kurzweil seems to suggest that the observation that the genome compresses to only 50MB of data somehow gives us an upper bound on the complexity of the system. I'd suspect it rather gives us a lower bound: factor in all the epigenetics, the external interactions, the not-necessarily-simple rule set provided by physical chemistry (which is not in the genome, obviously), etc. etc., and the problem may be quite a bit larger.

Take for example the way we currently believe gene transcription promoter networks to work. The combinatorial nature of those interactions means that even though the underlying data is "only" a few megabytes, the system you end up simulating gets very big very quickly.
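To put a toy number on that combinatorial blow-up, here's a minimal Python sketch. The factor counts are made up for illustration, not real biology: a promoter regulated by n transcription factors, each of which can be bound or unbound, has 2^n distinct regulatory states, so the state space dwarfs the size of the description.

```python
# Toy illustration (hypothetical numbers, not real biology): each of
# n transcription factors at a promoter is either bound or unbound,
# so a single promoter alone has 2**n distinct regulatory states.
def promoter_states(n_factors: int) -> int:
    """Count the bound/unbound combinations at one promoter."""
    return 2 ** n_factors

for n in (5, 10, 20, 30):
    print(f"{n} factors -> {promoter_states(n)} states")
```

Thirty interacting factors already give about a billion states per promoter, which is the sense in which a few megabytes of underlying data can imply a very large system to simulate.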



why would complex interactions not emerge from a computer simulation

One answer is "They will, and surprisingly quickly. But they will be a completely different set of complex interactions than are observed in the real world, because of some roundoff error in the binary representation of the Nth digit of some apparently unimportant constant. Unfortunately, because the system is complex, you'll probably spend the rest of your career trying to track down that error, and fail."
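The roundoff scenario is easy to demonstrate on a toy chaotic system. A minimal Python sketch, using the logistic map as a stand-in for the "apparently unimportant constant" (the map and the numbers are illustrative, not a claim about any biochemical model):

```python
# Two runs of the chaotic logistic map whose starting points differ only
# at roughly the 15th decimal digit -- comparable to double-precision
# roundoff. The trajectories track each other briefly, then decorrelate.
def logistic(x, r=4.0):
    return r * x * (1.0 - x)

a, b = 0.3, 0.3 + 1e-15
max_sep = 0.0
for step in range(80):
    a, b = logistic(a), logistic(b)
    max_sep = max(max_sep, abs(a - b))

print(f"max separation after 80 steps: {max_sep:.3f}")
```

With r = 4 the map roughly doubles small perturbations each step, so the 1e-15 initial difference grows to order one within a few dozen iterations.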

Another answer is: They would, if the simulation was comprehensive enough. Unfortunately, phase space is big. You just won't believe how vastly, hugely, mind-bogglingly big it is. Seriously: Your mind reels when confronted with the number of different molecular interactions going on inside the "simplest" single-celled prokaryote, so you abstract it away, almost as a reflex, to stop yourself from going mad. Then you abstract away the first-order abstraction. Then you keep going. Soon you begin to imagine that you can model an entire collection of a trillion organisms, just as a naive programmer imagines that they can rewrite Windows in three days if only they use a powerful enough language. It's a mere matter of programming!


If you want an exact simulation, the n-th digit is important. But if you just want some system in which some other (possibly intelligent) behaviour emerges, the n-th digit is not important.


Well, it would by all means be awesome if we could build an intelligent system that didn't match the one we already have.

And we even have an existence proof that such a thing is possible, given enough design time. Unfortunately, the existence proof says nothing about the odds of doing so very quickly -- in less than, say, a million years, which is very quick by historical standards.


As for the first answer: we all have to learn to deal with roundoff error. That's not an obstacle in principle.

The other answer contains an implicit assumption that's not obviously correct: you suggest that complexity only arises when you enumerate every possible dimension of phase space. But physical simulations have been very successful at reproducing complex behaviour from simple rules, without taking into account every particle's state vector.

Finally, did you really try to equate my statement to the statement that Windows could be written in three days? ...


No, I'm poking fun at Kurzweil's 2020 deadline, not at you. You appear to have been wise enough not to specify a deadline...

As for this statement:

physical simulations have been very successful at reproducing complex behaviour from simple rules

Absolutely, but it doesn't follow that every complex behavior can be reproduced from simple rules. To overgeneralize from success in one field is the occupational illness of futurists. It's certainly a key problem for Singularitarians, who tend to get so enthusiastic about Moore's Law that they forget that most of the world has nothing to do with microelectronics.


Haha, well I guess the fact that for a moment I felt that was condescending mostly says a lot about me ;)


Biological systems are approximate. I would assume that they can deal with a little rounding error.


At the lowest level, biology is physics. And physical systems are not approximate; they are exactly what they are, and a small round-off error can produce a huge difference in the specific outcome, even if on a macroscopic level that outcome might be indistinguishable from the 'real' one.

So the real question becomes: does biology tolerate running on an approximation of the underlying physics, and does that simulated biology still have the ability to exhibit intelligence? I think the first is a maybe and the second a yes, though I couldn't give you any reasons why, other than that our biology may need 'its' physics to operate, and that probably anything Turing-complete has the potential to exhibit intelligence, whether or not we find a way to achieve it.


This rant unfortunately fails to clearly demonstrate how Kurzweil errs. It goes on about complexity and emergence, but why would complex interactions not emerge from a computer simulation just as they do in the real biochemical system?

Because so far, despite the best efforts of many geniuses and heroic computing resources, our simulations don't even reliably predict the real-world outcomes of far, far simpler systems. Ilya Prigogine won the Nobel Prize in chemistry for demonstrating that sufficiently complex systems display emergent behaviors that can never be entirely predicted by studying their components in isolation: http://en.wikipedia.org/wiki/Ilya_Prigogine#The_End_of_Certa...

Kurzweil is hopelessly out of his depth in these arguments and is talking nonsense. Personally I'll be very surprised if we can construct anything approaching human intelligence in my lifetime.


BTW, how do you define human intelligence? What is it that we wouldn't be able to construct?

Put another way, what construct would qualify as approaching human intelligence?


My definition of human intelligence is intelligence that is broadly adaptable, self-training, and able to communicate via a system of symbols as ambiguous and semantically rich as human language.

For instance, an intelligence that could be taught to read English and then have a reasonable conversation about a contemporary novel, with its own insights into the style and themes of the book, would qualify.

That said, I do expect to see great strides in the sophistication of machines in the next 30 years. They don't have to think like people to be useful.


"It goes on about complexity and emergence, but why would complex interactions not emerge from a computer simulation just as they do in the real biochemical system?"

I think the point he is making is that if your goal is to simulate the human brain you also have to simulate and thus understand all the little details of biology because transistors don’t magically have the same properties as proteins.


Yes, I think that's his point too. But if you believe that the little details of biology emerge from the underlying physics, then maybe you only need to code the fundamental rules and get all the rest automatically.


I think his point was that we can't code the fundamentals, because those would require linear-time protein folding -- and protein folding is one of those hard CS problems.


At this point we don't even have accurate models for most proteins, let alone knowing how to predict what they do.

My current project is a search engine for protein chain geometry. We only have ~20% of the known proteins in our database because the data on the other 80% isn't accurate enough to be useful.


Well that's exactly my criticism: most of the author's objections boil down to "it's hard". Note that I don't disagree with him (nor with you), but this just doesn't help to expose the real issues with Kurzweil's estimates. To "protein folding is difficult" one can always reply "but we'll solve it in the next 10 years" - which I think is what the singularity-folks would say.

My simpler point is that Kurzweil isn't using a useful measure of the size of the system we're solving. (By the way, he plays the same kind of trick on his audience when he points out that there are only a few billion neurons in the brain -- as if that were the only level of complexity in the brain.)


Well that's exactly my criticism: most of the author's objections boil down to "it's hard"

No, the common English use of "it's hard" means something completely different from the CS sense of "hard". CS-hard means NP-complete, which translates into plain English as "effectively impossible" -- impossible for well-understood mathematical reasons.

Quantum computers may solve it; indeed, real-life protein folding may have quantum-computer-like properties.


Well, this article addresses a specific point which Kurzweil is making and says that what Kurzweil said is simply wrong, not just hard.


Yeah, you "only" need to perfectly simulate the entire universe. No big deal.


I think his point that the genome is not the program but the data was very illustrative.

If you're a computer guy, you should clearly understand what his fundamental disagreement with Kurzweil is about.


Actually I didn't find that very enlightening at all. Code or data - I'd say it's both, depending on your perspective. You need those instructions to build your proteins, after all. It's very lispy ;)

The point is, he seems to suggest that the genome is all you need, when clearly that's not true.


Another sleight of hand with that statement: Kurzweil's converting compressed data to lines of code, but humans don't write in high-entropy binary. So it's 800MB of code, not 50MB.
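The entropy point is easy to check in a few lines of Python: ordinary source text compresses heavily, which is why converting a compressed size back into "lines of code" multiplies the figure. (The repeated snippet below is just a stand-in for real source; the exact ratio will vary with the input.)

```python
import zlib

# Stand-in for human-written source: low-entropy, repetitive text.
source = b"def add(a, b):\n    return a + b\n" * 1000
compressed = zlib.compress(source, level=9)
ratio = len(source) / len(compressed)
print(f"{len(source)} bytes -> {len(compressed)} bytes (ratio ~{ratio:.0f}x)")
```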


Not only that: it may be "only" 800 MB of code, but we have no access to the CPU it runs on.


I took a few stabs at expressing that idea before giving up. I like your attempt better than mine:

Even the 800MB of base pairs may have higher entropy than the machine language we're used to. 2000 lines of lisp or haskell are worlds away from 2000 lines of assembly.


Just look at compressed source code, too.



