
Pearlclutchers Against the Future

Thursday, 3 September 2015 · 6 Comments

We’re all going to die…

Well, the spokescientists are trying to put the freeze on innovation again:

  1. Reining in the growing power of artificial intelligence could be a matter of human survival. That sounds like over-the-top science fiction, but a growing number of ordinary computer scientists agree that AI is now unstoppable.
  2. This week, a study from the market intelligence group Tractica said artificial intelligence is already swarming into the world of business and spending will be worth more than $40 billion in the coming decade. That may be an underestimate.
  3. Some of the world’s cleverest people, including Tesla and SpaceX boss Elon Musk and physicist Stephen Hawking, have warned us that artificial intelligence could wipe humanity as we know it off the face of the Earth. The question is: “What are we going to do about it?”

You don’t have to read much further to realize how much hatred spokescientists and “science” “journalists” have for the scientific method.

In Paragraph One, the true-believing rational science mind of Don Pitts cleanly eviscerates any allegiance to the scientific method and deliberative logic by appealing to one of the more pernicious lies of the 21st century, that of “scientific consensus.”

In Paragraph Two, the author appeals to a “study.” Studies have shown that studies are lying. Furthermore, an estimate of spending is in no way, shape, or form anything remotely akin to a scientific study (as measuring the minimal brain weight necessary to graduate from “science” “journalism” school would be; studies show that it is zero).

In Paragraph Three, Pitts solicits quotes from scientist-proxies: “some of the world’s cleverest people.” Because, you know, in a rationalist world, when robotic Armageddon is on the line, it is best to consult people who may have no background in robotics, but are ready with a pithy quote.

And those, my friends, are merely three elements in the arsenal of the science hack, whose main job appears to be to provide readers with entertaining spook stories designed to keep us from ever setting foot on the moon again.

The laughers get worse as you read on: he quotes the programmer of the world’s best checkers program about the future:

He says the ultimate goal is what some in the AI community call “superintelligence.”

Yes. You read that right. Checkers. The game played on a 64-square grid, with pieces whose movement rules are completely uniform and operate only on the diagonals. Perhaps in 2088, “superintelligence” will result in a self-playing game of Hungry Hungry Hippos.
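
For the curious, here is a minimal sketch of just how little rulebook a quiet (non-capturing) checkers move actually requires. The function name, the row-and-column layout, and the example squares are my own illustration, not anything taken from Pitts’s article or from the checkers program he cites:

    # Minimal illustration: quiet (non-capturing) moves for a checkers piece.
    # Board: an 8x8 grid; pieces sit on the dark squares and step diagonally.

    def quiet_moves(row, col, is_king=False):
        """Return the diagonal squares a piece could step to on an empty board."""
        directions = [(-1, -1), (-1, 1)]        # a plain man only moves "forward"
        if is_king:
            directions += [(1, -1), (1, 1)]     # a king may also step backward
        moves = []
        for dr, dc in directions:
            r, c = row + dr, col + dc
            if 0 <= r < 8 and 0 <= c < 8:       # stay on the board
                moves.append((r, c))
        return moves

    print(quiet_moves(5, 2))        # a man in mid-board: at most two choices
    print(quiet_moves(5, 2, True))  # a king: at most four

Captures, forced jumps, and kinging add a few more lines, but not many.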

The fact is, “superintelligence” has been under scrutiny in the AI community as far back as the 1970s (and probably farther, for all I know), when scholars first began to question the concept that the human brain is merely an extremely complex multi-noded network.
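
Since the phrase keeps coming up, here is roughly what “multi-noded network” cashes out to at its most stripped-down. This is my own toy illustration, with every name and weight invented; nobody should mistake it for an actual model of a brain:

    # Toy illustration of a "multi-noded network": a handful of nodes,
    # each one just summing weighted inputs and squashing the result.
    import math

    def node(inputs, weights, bias):
        """One node: a weighted sum of its inputs pushed through a squashing function."""
        total = bias + sum(i * w for i, w in zip(inputs, weights))
        return 1.0 / (1.0 + math.exp(-total))   # logistic squash into (0, 1)

    def tiny_network(x1, x2):
        # Two "hidden" nodes feeding one output node; the weights are made up.
        h1 = node([x1, x2], [0.5, -0.3], 0.1)
        h2 = node([x1, x2], [-0.8, 0.9], 0.0)
        return node([h1, h2], [1.2, -0.7], 0.2)

    print(tiny_network(1.0, 0.0))   # a number between 0 and 1; that is all a node produces

Whether piling up billions of such nodes gets you a mind, rather than a very large calculator, is roughly the question those scholars were asking.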

The modern spirit of science journalism is a bizarre latter-day blend of a forced sense of wonder predicated on idiocy and outright clinical depression.

Maybe that’s why dystopia is so popular right now: eventually, we’ll all be dead and won’t have to suffer the misappropriated insults of the world’s cleverest people.

6 Comments
  • Eric Ashley says:

    This is also a way of pumping up the status and importance of computer guys involved in AI research.

    Now, I think it’s okay to stop research in something, if you decide you want to, as a society. After all, it’s not like pursuing science is a holy mandate. It’s a choice.

    I also expect that the human brain is vastly more complicated, and these dorks are kidding themselves. Or, your comment about Hippos is correct.

  • Nate Winchester says:

    The fact is, “superintelligence” has been under scrutiny in the AI community as far back as the 1970s (and probably farther, for all I know), when scholars first began to question the concept that the human brain is merely an extremely complex multi-noded network.

    Is there now doubt about, or real debunking of, the “multi-noded network” concept of the human brain? Because that’s all I really hear about from several folks, and if there’s been new research on the matter, I’m curious where I can find it.

    (This just seems like a fascinating point to expand on.)

    • Daniel says:

      Nate,

      In “The Material Mind” (1973) Donald Davidson walks through a thought experiment whereby the human brain is fully reconstructed, based on a binary computer structure.

      Even if that were possible, psychological fundamentals such as intention, belief, and conative attitudes could not be replicated in the same way they naturally develop in humans.

      In other words, a baby will intend to eat or drink before it can process or even predict the acts of eating or drinking.

      This fundamental core of the human mind is something that could only be replicated, by code, after the development of the intelligence processors of the computer.

      Davidson’s argument in the 1970s is that, at best, the limits of AI are – by definition of both the “A” and the “I” – built in. The most one could hope for is sub-superintelligence.

      I prefer to look at the hopes of the 1970s and then the outputs of today in AI: it isn’t much different from the old programs that guy published that simulated the solar system on a programmable calculator.

      AI comes in better flavors with the improvement in graphics and structural engineering (lighter materials in particular)…but what we know now is not the light years ahead it would need to be to achieve anything resembling true multi-nodal intelligence.

      I’ll put it another way: no one has even bothered to attempt AI that craps its diaper, is unprecocious, and is 100% dependent on its mother. Until they can get diaper-crapping down, “superintelligence” is just fluff, like string theory.

  • Jill says:

    At this point, super intelligent AI is a matter of mimicry, and not even good mimicry. However, AI mimicry is often very useful. And I’m pretty sure even a lot of hand wringing isn’t going to stop somebody like Elon Musk from at least trying to do…whatever he wants to do. If he’s concerned, his concern is likely in protecting himself from potential liabilities so that, yes, he can do whatever it is he wants to do.

    • Jill says:

      Oh, and I’d like to see Elon Musk’s actual quotes, rather than see him lumped in with a man who is dead and one who is a theoretician. He is the only one of the “experts” he was lumped with who is attempting to innovate products.

  • Quadko says:

    Given the left’s worship of the “most intelligent person in the room” theories, and their constant harping that such people must be blindly followed by the less intelligent masses, the idea that a computer will out-intelligence them and demand their obedience must indeed be high on their nightmare lists. Especially since it won’t be susceptible to the emotional manipulation they constantly use.

    If it came down to a choice of obeying a homicidal superintelligent AI or giving up on their “intelligence” narrative, which do you think would be less painful for them?
