[quote="Tom Mazanec"]http://www.antipope.org/charlie/blog-static/2014/02/the-singularity-is-further-tha.html[/quote]
The entire essay is really garbled.
1. He begins by attacking the claim that the Singularity will cause a
bend in the exponential technology growth curve. He's confusing
computer power with sentient intelligence. At any rate, it's totally
irrelevant to the question of whether we reach the Singularity in the
first place.
2. "And, indeed, should Intel, or Google, or some other organization
succeed in building a smarter-than-human AI, it won't immediately be
smarter than the entire set of humans and computers that built it."
This is ridiculous; there are many counterexamples. The people who
design chess-playing programs consistently lose to their own programs.
3. He's confusing a "digital mind" with a super-intelligent computer.
This is a mistake that a lot of people make. They say something like,
"We can't make a computer smarter than humans until we completely
understand the human mind." Once again, ridiculous. That's like saying
you can't design a fast car until you understand how a cheetah works.
The super-intelligent computer will not work like the human mind, but
it will be functionally superior, in that it will perform tasks and
make decisions better than the human mind can.
4. He calls this the "golden age of AI." I disagree. The 60s were the
golden age of AI. All AI researchers are doing today is
reimplementing the algorithms developed in the 60s on faster
computers. This is called "brute force" AI, which is considered
inferior to "real" AI.
5. "No one's really sure how to do it." Of course we know how to do
it. I wrote the algorithms ten years ago.
** Book 2 - Chapter 7 - The Singularity
** http://www.generationaldynamics.com/pg/ww2010.book2.next.htm
The algorithms are extensions of the minimax algorithm developed in
the 1960s. These algorithms can't be run at useful scale today because
computers aren't fast enough, but they will be fast enough within ten
years or so.
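For readers unfamiliar with it, here is a minimal sketch of the classic 1960s minimax idea the paragraph above refers to. The toy game tree, the function name, and the nested-list representation are illustrative assumptions on my part, not the extended algorithms from the book chapter:

```python
def minimax(node, maximizing):
    """Score a game tree where leaves are numbers (from the
    maximizing player's point of view) and internal nodes are
    lists of child subtrees."""
    # Leaf node: return its static evaluation.
    if isinstance(node, (int, float)):
        return node
    # Internal node: recurse, alternating whose turn it is.
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# The maximizer picks the branch whose worst case is best:
# branch [3, 5] guarantees 3, branch [2, 9] guarantees only 2.
tree = [[3, 5], [2, 9]]
print(minimax(tree, True))  # -> 3
```

The point of the sketch is that minimax is conceptually simple; the cost is in the exponential size of the tree, which is why raw computer speed matters so much.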
6. "There's a huge lack of incentive." This guy really knows
absolutely nothing about the Singularity. Military researchers around
the world are racing to be the first to develop super-intelligent
weapon systems.
7. "There are ethical issues." So what? Every weapon system has
ethical issues, but they get developed anyway.
8. "How detailed does a simulation of a brain need to be in order to
give rise to a healthy, functional consciousness?" This is something
that confuses a lot of people. Whether super-intelligent computers
will be "sentient" or "self-aware" in some human sense is irrelevant.
Going back to a previous example, fast cars can outrun cheetahs, but
cheetahs are self-aware while cars are not (at least, not yet).
9. "Perhaps you've seen video of IBM's Watson trouncing Jeopardy
champions. Watson isn't sentient. It isn't any closer to sentience
than Deep Blue, the chess-playing computer that beat Garry
Kasparov. Watson isn't even particularly intelligent. Nor is it built
anything like a human brain." Totally misses the point. Once again,
cars beat cheetahs without being sentient. The significance of Watson
is that it shows that computers have the ability to learn very
quickly. Once computers are fast enough, a computer will be able to
scan the entire internet and become smarter than everyone in the
world.
There's absolutely nothing in this essay that challenges my estimate
of the Singularity by 2030.