The discovery of the role the Phenomenon Objective plays in the functioning of the human brain

Artificial Intelligence

Garry Kasparov writes in his book “Deep Thinking: Where Machine Intelligence Ends and Human Creativity Begins” (p. 75):

The basic suppositions behind Alan Turing’s dreams of artificial intelligence were that the human brain is itself a kind of computer and that the goal was to create a machine that successfully imitates human behaviour.

This concept has been dominant for generations of computer scientists. It’s a tempting analogy – neurons as switches, cortexes as memory banks, etc. But there is a shortage of biological evidence for this parallel beyond the metaphorical and it is a distraction from what makes human thinking different from machine thinking.

The terms which I (Garry Kasparov) prefer in order to highlight these differences are: “understanding” and “purpose”.

Andrew McAfee and Erik Brynjolfsson:

Computers and robots can - despite their intelligence - understand little of the human condition, of the unique human perception of the world.

My (Hans Damen) description of the Phenomenon Objective describes the essence of the mental part of the human condition.

An answer to the question “What is understanding?” might be derived from my answer to the question “What is language?”

History

In 1956, at the ‘Dartmouth Conference’, a group of prominent scientists began thinking about what they called “Artificial Intelligence” (A.I.).

Herbert Simon, one of the attendees of that conference, predicted in 1965:

“machines will be capable, within 20 years, of doing any work a man can do”.

Marvin Minsky, another attendee of that conference, agreed, writing in 1967:

“within a generation … the problem of creating Artificial Intelligence will substantially be solved”.

In 1973 it had become obvious that these scientists had grossly underestimated the difficulty of building a truly intelligent machine, and funding of undirected research in Artificial Intelligence (A.I.) was stopped in the USA and the UK.

Garry Kasparov (p.99):

A.I. would not see its spring until a movement arose that gave up on grandiose dreams of imitating human cognition.
The field was “machine learning”.

The basic concept of “machine learning” is that you don’t give the machine a bunch of rules to follow, the way you might try to learn a second language by memorising grammar and conjugation rules.
Instead of telling it [the rules of] the process, you provide the machine with lots of examples of that process and let the machine figure out the rules, so to speak.

Language translation is a good illustration. Google Translate is powered by machine learning, and it knows hardly anything about the rules of the dozens of languages it works with.
They feed the system examples of correct translations, millions and millions of examples, so the machine can figure out what’s likely to be right when it encounters something new.
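The “examples instead of rules” idea can be illustrated with a toy word-alignment sketch. The example sentence pairs and the scoring method below are my own illustrative assumptions, vastly simpler than what Google Translate actually does; the point is only that the program is never told any grammar or dictionary, yet it recovers word correspondences from examples alone:

```python
# Toy "machine learning" for translation: learn French -> English word
# correspondences purely by counting which words occur together in
# example sentence pairs. No rules of either language are given.
from collections import Counter

examples = [
    ("le chat dort", "the cat sleeps"),
    ("le chien dort", "the dog sleeps"),
    ("le chat mange", "the cat eats"),
    ("le chien mange", "the dog eats"),
]

src_count, tgt_count, cooc = Counter(), Counter(), Counter()
for src, tgt in examples:
    s_words, t_words = src.split(), tgt.split()
    src_count.update(s_words)
    tgt_count.update(t_words)
    for s in s_words:
        for t in t_words:
            cooc[(s, t)] += 1          # how often s and t appear together

def best_target(s):
    # Dice coefficient: rewards word pairs that occur together often
    # and apart rarely. (Unseen source words are not handled here.)
    return max(tgt_count,
               key=lambda t: 2 * cooc[(s, t)] / (src_count[s] + tgt_count[t]))

def translate(sentence):
    return " ".join(best_target(w) for w in sentence.split())

print(translate("le chien mange"))  # -> the dog eats
```

Nothing in the code “knows” that “chien” means “dog”; that regularity was figured out from the examples, which is the essence of the approach Kasparov describes.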

Looking back one could say that “machine learning” rescued A.I. from insignificance, because it worked and it was profitable.

Future

Garry Kasparov (p.247 and 248):

Intelligent machines have been making great advances thanks to “machine learning” and other techniques, but in many cases they are reaching the practical limits of data-based intelligence.

Going from a few thousand examples to a few billion examples makes a big difference. Going from a few billion to a few trillion may not.

In response, in an ironic twist after decades of trying to replace human intelligence with algorithms, the goal of many companies and researchers now is to get the human mind back into the process of analysing and deciding in an ocean of data.

Humans do many things better than machines, from visual recognition to interpreting meaning, but how to get the humans and machines working together in a way that makes the most of the strength of each without slowing the computer to a crawl?

Thinking about the future of Artificial Intelligence is

Replacing a human with a robot

A person who wants to reach an objective usually has a plan for reaching that objective in her brain.

Such a plan is a prediction: a sequence of “in between objectives” (“milestones”) along that person’s road to that objective.

After a person has finished a particular job, she can compare the planned sequence of “in between objectives” with the sequence she actually passed.

Such a comparison shows that these sequences are different.

These sequences are different because, for example, that person often got into an unpredicted situation, in which she had to make a new plan.

This means that a person makes new sequences of “in between objectives” (= “new plans”) for pursuing her current objective many times a day.

Note:

Each of the computers “Deep Blue”, “AlphaGo”, “Watson”, and “Google Translate” was given a plan at the start, and the computer concerned did not alter that plan while pursuing its objective.
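The contrast in this note, between following a fixed initial plan and making a new plan whenever an unpredicted situation turns up, can be sketched in a toy grid world. Everything here (the grid, the function names, the blocked cell) is my own illustrative assumption, not something from the source:

```python
# A plan is a sequence of "in between objectives" (cells) from the current
# position to the objective, found by breadth-first search over a small grid.
from collections import deque

def make_plan(start, goal, known_blocked, size=3):
    frontier, came_from = deque([start]), {start: None}
    while frontier:
        cell = frontier.popleft()
        if cell == goal:                      # reconstruct the milestone sequence
            path = []
            while cell != start:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        x, y = cell
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nxt[0] < size and 0 <= nxt[1] < size
                    and nxt not in known_blocked and nxt not in came_from):
                came_from[nxt] = cell
                frontier.append(nxt)
    return None                               # no path (assumed not to happen here)

def pursue(start, goal, actually_blocked):
    known = set()                             # surprises not yet encountered
    plan = make_plan(start, goal, known)      # the initial plan "in the brain"
    original_plan, followed, position = list(plan), [], start
    while position != goal:
        milestone = plan.pop(0)
        if milestone in actually_blocked:     # an unpredicted situation...
            known.add(milestone)
            plan = make_plan(position, goal, known)   # ...so make a new plan
        else:
            position = milestone
            followed.append(milestone)
    return original_plan, followed

original, followed = pursue((0, 0), (2, 2), actually_blocked={(1, 0)})
print(original)   # the planned sequence of "in between objectives"
print(followed)   # the sequence actually passed - not the same
```

A computer that, like “Deep Blue” in the note above, never calls `make_plan` again would simply be stuck at the blocked cell; the replanning agent reaches the objective, and comparing `original` with `followed` shows exactly the difference between the two sequences that the text describes.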

Before deciding to spend billions of dollars on designing a computer which is capable of making new sequences of “in between objectives” (= “new plans”) for pursuing its current objective many times a day, one obviously should at least know the answers to the questions:

