[Artificial Intelligence] may well be the most vital of all commodities, surpassing water, food, heat and light. Without it, we will certainly not survive as a species.

One of our problems is data - masses of it. A few hundred years of scientific inquiry and the invention of the data-generating and sharing mechanism that is the internet have left reams of crucial information unused and unanalysed.

AI is not about sentient robots, but machines that mimic our organic intelligence by adapting to, as well as recognising, patterns in data. AI is about making machines understand.
Jamie Carter / Peter Cochrane, { South China Morning Post }

WHY GENERAL ARTIFICIAL INTELLIGENCE HAS FAILED AND HOW TO FIX IT

Excerpts from an essay by David Deutsch:

It is uncontroversial that the human brain has capabilities that are, in some respects, far superior to those of all other known objects in the cosmos.

It is the only kind of object capable of understanding that the cosmos is even there, or why there are infinitely many prime numbers, or that apples fall because of the curvature of space-time, or that obeying its own inborn instincts can be morally wrong, or that it itself exists. Nor are its unique abilities confined to such cerebral matters.

The cold, physical fact is that it is the only kind of object that can propel itself into space and back without harm, or predict and prevent a meteor strike on itself, or cool objects to a billionth of a degree above absolute zero, or detect others of its kind across galactic distances.

But no brain on Earth is yet close to knowing what brains do in order to achieve any of that functionality.

The enterprise of achieving it artificially — the field of ‘artificial general intelligence’ or AGI — has made no progress whatever during the entire six decades of its existence.

What is needed is nothing less than a breakthrough in philosophy, a theory that explains how brains create explanations

… and hence defines, in principle, without ever running them as programs, which algorithms possess that functionality and which do not.

Despite this long record of failure, AGI must be possible. And that is because of a deep property of the laws of physics, namely the universality of computation.

This entails that everything that the laws of physics require a physical object to do can, in principle, be emulated in arbitrarily fine detail by some program on a general-purpose computer, provided it is given enough time and memory.

Emphases mine.
Abridged version via { Kurzweil AI }.
Full version at { Aeon Magazine }.
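
A concrete gloss on that last claim (my sketch, not Deutsch's): a few lines of code can interpret any Turing-machine rule table, and by the Church-Turing thesis every computable process can be written as such a table. That is the sense in which one general-purpose machine, given enough time and memory, can emulate all the others.

```python
# A tiny Turing-machine interpreter. Any computable process can, per the
# Church-Turing thesis, be written as a rule table and fed to a loop like
# this; that is what "universality" buys us.

def run(rules, tape, state="start", pos=0, max_steps=10_000):
    cells = dict(enumerate(tape))          # sparse, unbounded tape
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(pos, "_")       # "_" is the blank symbol
        write, move, state = rules[(state, symbol)]
        cells[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# Example rule table: invert every bit, halt at the first blank.
flipper = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run(flipper, "10110"))  # -> 01001_
```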

Robot learns self-awareness
August 24, 2012

“Only humans can be self-aware.”

Another myth bites the dust. Yale roboticists have programmed Nico, a robot, to be able to recognize itself in a mirror.

via { KurzweilAI }

••••••

This is huge news.

It’s not only important because “robots will need to learn about themselves and how they affect the world around them — especially people,” as stated in the original article; it also has massive implications for the way we think of ourselves, of what we are, and what we can do.

There’s an argument that often comes up among laymen at any scientific gathering: that humans are “special” because we have consciousness, we recognize ourselves, we have thoughts, Minds.

While that is amazing, for the scientifically literate person it’s more like this: what we know to be “the mind” emerges from a system of integrated parts — from “bits” of information, if you like. A lot of little pieces come together into a whole synergistically, and that system-of-parts comes to “know itself” via interaction with the larger system (environment, universe). It’s incredible, but it’s not magic. It makes sense for a thing to be self-aware to some extent, if it’s to function as a whole in a world at all.

Certainly a human is very complex, but again, the complexity is an emergent property.

An illustration: It’s like the images we see on our monitors. What looks to us to be a 17th-century painting, our friend, or the comic above is just a set of cleverly arranged 1’s, 0’s, and some physical equipment that, combined, creates something that looks like an image — not like its components. An even simpler example: a Pointillist painting up close vs. far away.
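
To make the monitor example literal, here is a toy sketch (purely illustrative): none of the individual bits below contains the picture; the image exists only in their arrangement plus a rendering rule.

```python
# None of these bits "contains" the picture; the image only exists in
# their arrangement plus a rendering rule.
bits = [
    "0011110000111100",
    "0100001001000010",
    "0100001001000010",
    "0011110000111100",
    "0000000000000000",
    "1000000000000001",
    "0100000000000010",
    "0011111111111100",
]

# The rendering rule: 1 -> ink, 0 -> blank. Change the rule and the very
# same bits show a different picture, or none at all.
for row in bits:
    print("".join("#" if bit == "1" else " " for bit in row))
```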

So, to build a robot with this functionality is… expected, really. We should expect that “unconscious” parts can become aware if they’re built to do so.

The technology may be in its infancy, but it’s a great representation of the above (systemic perception) in action.

••••••

{ memeengine }:

I like the photo, and the idea. But… I think recognizing one’s own physical self doesn’t have much to do with self-awareness. We could train REALLY simple systems to recognize any specific shape and name it “self”.

OS RE ME:

I should have made it clearer; of course I’m stretching it here, and consciousness =/= self-recognition. It’s a baby step. But I think it’s possible to do, eventually. Most essentially, I’m referencing the ideas of abiogenesis and artificial intelligence.

But, also, do you have an example of such trainable simple systems? Curious.

Thanks!
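
For what it’s worth, the simplest trainable system of the sort memeengine describes might look like this (an illustrative sketch, not anyone’s actual robot code): a perceptron that learns to answer “self” for one fixed 3x3 shape and “not self” for everything else. It recognizes a pattern; there is no self-model behind it.

```python
# A deliberately trivial "trainable system": a perceptron that learns to
# label one fixed 3x3 pixel shape "self" and everything else "not self".
# This is pattern recognition only; there is no self-model behind it.

SELF = (0, 1, 0,
        1, 1, 1,
        0, 1, 0)  # a plus sign, standing in for "my own reflection"

def predict(w, b, x):
    # Fire (1) if the weighted pixel sum crosses the threshold.
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

def flipped(p, i):
    q = list(p)
    q[i] = 1 - q[i]
    return tuple(q)

# Training set: SELF, plus "not self" examples (every one-pixel
# perturbation of SELF, the blank frame, and the full frame).
examples = [(SELF, 1)] + \
           [(flipped(SELF, i), 0) for i in range(9)] + \
           [((0,) * 9, 0), ((1,) * 9, 0)]

# Classic perceptron rule; terminates because one pattern vs. the rest
# is linearly separable.
w, b = [0.0] * 9, 0.0
while any(predict(w, b, x) != y for x, y in examples):
    for x, y in examples:
        err = y - predict(w, b, x)
        w = [wi + err * xi for wi, xi in zip(w, x)]
        b += err

print("self" if predict(w, b, SELF) else "not self")      # -> self
print("self" if predict(w, b, (1,) * 9) else "not self")  # -> not self
```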

Ray Kurzweil (figurehead of the { futurist } & { transhumanist } movements) responds to { The Singularity Isn’t Near }, the response by Paul Allen (co-founder of Microsoft & chairman of Vulcan) & Mark Greaves (computer scientist, Vulcan’s director for knowledge systems) to Kurzweil’s original essay “The Law of Accelerating Returns” & the theory discussed in his book, { The Singularity Is Near }.

IBM unveils cognitive computing chips, combining digital ‘neurons’ and ‘synapses’
August 18, 2011, { Kurzweil AI }

IBM researchers unveiled today a new generation of experimental computer chips designed to emulate the brain’s abilities for perception, action and cognition.

In a sharp departure from traditional von Neumann computing concepts in designing and building computers, IBM’s first neurosynaptic computing chips recreate the phenomena between spiking neurons and synapses in biological systems, such as the brain, through advanced algorithms and silicon circuitry.

The technology could yield many orders of magnitude less power consumption and space than used in today’s computers, the researchers say. Its first two prototype chips have already been fabricated and are currently undergoing testing.

Called cognitive computers, systems built with these chips won’t be programmed the same way traditional computers are today. Rather, cognitive computers are expected to learn through experiences, find correlations, create hypotheses, and remember — and learn from — the outcomes, mimicking the brain’s structural and synaptic plasticity.

“This is a major initiative to move beyond the von Neumann paradigm that has been ruling computer architecture for more than half a century,” said Dharmendra Modha, project leader for IBM Research.
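
The article doesn’t say what IBM’s neuron circuits actually look like, but the textbook minimal model of a spiking neuron is the leaky integrate-and-fire unit; a sketch of the dynamic such chips implement in silicon:

```python
# A minimal leaky integrate-and-fire neuron, the textbook model of
# "spiking neuron" behavior. (Illustrative only; the article does not
# describe IBM's actual circuit design.)

def simulate(inputs, leak=0.9, threshold=1.0):
    """Integrate input current each step, leak charge, spike on threshold."""
    v = 0.0          # membrane potential
    spikes = []
    for current in inputs:
        v = leak * v + current   # leaky integration of incoming current
        if v >= threshold:       # fire and reset, like a biological neuron
            spikes.append(1)
            v = 0.0
        else:
            spikes.append(0)
    return spikes

# Weak input alone decays away; a burst of stronger input makes it fire.
print(simulate([0.3, 0.3, 0.3, 0.0, 0.6, 0.6]))  # -> [0, 0, 0, 0, 1, 0]
```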

How Computational Complexity Will Revolutionize Philosophy

The theory of computation has had a profound influence on philosophical thinking. But computational complexity theory is about to have an even bigger effect, argues one computer scientist.

KFC 08/10/2011 
{ Technology Review }

Read the short paper by Scott Aaronson (who has proposed these ideas):
{ Cornell University Library }
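
To give a flavor of the argument (my toy example, in the spirit of the paper’s lookup-table discussion): computability theory treats a giant lookup table and a short algorithm as the same function, but complexity theory notices that the table grows exponentially while the algorithm stays tiny, and argues that this difference is philosophically meaningful.

```python
from itertools import product

# Computability theory treats these two as the same function; complexity
# theory notices that one is a constant-size rule and the other is rote
# memory that grows exponentially with the input length.

def parity(bits):
    # A tiny algorithm: handles parity for inputs of any length.
    return sum(bits) % 2

def build_table(n):
    # Rote lookup: one memorized answer per possible n-bit input.
    return {b: sum(b) % 2 for b in product((0, 1), repeat=n)}

for n in (4, 8, 12, 16):
    print(n, "->", len(build_table(n)), "table entries")
# 4 -> 16, 8 -> 256, 12 -> 4096, 16 -> 65536 ... while parity() never grows.
```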

wildcat2030
The story of humans’ sense of self is, you might say, the story of failed, debunked versions of The Sentence. Except now it’s not just the animals that we’re worried about. We once thought humans were unique for using language, but this seems less certain each year; we once thought humans were unique for using tools, but this claim also erodes with ongoing animal-behavior research; we once thought humans were unique for being able to do mathematics, and now we can barely imagine being able to do what our calculators can. We might ask ourselves: Is it appropriate to allow our definition of our own uniqueness to be, in some sense, reactive to the advancing front of technology? And why is it that we are so compelled to feel unique in the first place? “Sometimes it seems,” says Douglas Hofstadter, a Pulitzer Prize–winning cognitive scientist, “as though each new step towards AI, rather than producing something which everyone agrees is real intelligence, merely reveals what real intelligence is not.” While at first this seems a consoling position—one that keeps our unique claim to thought intact—it does bear the uncomfortable appearance of a gradual retreat, like a medieval army withdrawing from the castle to the keep. But the retreat can’t continue indefinitely. Consider: if everything that we thought hinged on thinking turns out to not involve it, then … what is thinking? It would seem to reduce to either an epiphenomenon—a kind of “exhaust” thrown off by the brain—or, worse, an illusion. Where is the keep of our selfhood?

Mind vs. Machine, The Atlantic (via wildcat2030)

••••••

OS: Why are we always so worried about retaining our “selfhood”? Is it so important?