Date: 29/01/2015 21:09:24
From: Witty Rejoinder
ID: 668572
Subject: Artificial intelligence not a threat

Artificial intelligence not a threat: Microsoft’s Eric Horvitz contradicts Elon Musk, Stephen Hawking, Bill Gates
January 29, 2015 – 1:03PM

Tim Biggs, technology reporter/producer

Machines will eventually achieve a human-like consciousness but do not pose a threat to the survival of mankind, Microsoft head of research Eric Horvitz says, in comments that place him at odds with technologist Elon Musk and theoretical physicist Stephen Hawking.

“There have been concerns about the long-term prospect that we lose control of certain kinds of intelligences,” Horvitz said in an interview after being awarded the prestigious AAAI Feigenbaum Prize for his contribution to artificial intelligence (AI) research. “I fundamentally don’t think that’s going to happen.”

“I think that we will be very proactive in terms of how we field AI systems, and that in the end we’ll be able to get incredible benefits from machine intelligence in all realms of life, from science to education to economics to daily life.”

In a later blog post, Horvitz conceded that the progression of AI towards super-intelligence would present challenges in the realms of privacy, law and ethics, but pointed to an essay he had co-authored which concludes that “AI doomsday scenarios belong more in the realm of science fiction than science fact”.

Meanwhile technologist Elon Musk, co-founder of PayPal and founder of SpaceX and Tesla Motors, has repeatedly expressed concerns about the development of AI, first likening it to the production of nuclear weapons and then claiming mankind was “summoning the demon” by pursuing the technology carelessly.

Famed physicist Stephen Hawking recently said AI “could spell the end of the human race”. He added that the technology would eventually become self-aware and “supersede” humanity as it developed faster than biological evolution.

Even Microsoft founder Bill Gates, who has stepped back from the company to pursue philanthropic work with his Bill & Melinda Gates Foundation, is concerned about AI getting away from us.

Speaking during a Reddit AMA on Thursday morning, Gates was asked how much of an “existential threat” the development of super-intelligent machines posed to mankind.

“I am in the camp that is concerned about super intelligence,” said Gates.

“First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.”

Perhaps partly explaining his optimism about AI systems, Horvitz also revealed that “over a quarter” of all resources at his research unit are now focused on developing the technology.

Microsoft has already introduced its digital assistant Cortana (named, ambitiously and a little worryingly, after the advanced AI from the Halo games, a fictional universe in which all AI eventually goes rampant) for its Windows phone and desktop operating systems, and Horvitz believes the only battle on the horizon will be between companies developing competing AI standards.

“We have Cortana and Siri and Google Now setting up a competitive tournament for where’s the best intelligent assistant going to come from,” Horvitz said, “and that kind of competition is going to heat up the research and investment, and bring it more into the spotlight.”

Beyond personal digital assistants, the development of AI and automation continues to progress rapidly and in all directions. In the last few weeks alone we’ve seen both an AAAI competition entry that allows Super Mario to become “self-aware” and a move from Foxconn to switch out a huge chunk of its workforce for a million robots.

In outlining a long-term plan for his company after posting an unexpectedly large profit last quarter, Facebook CEO Mark Zuckerberg revealed he too was planning to get into the AI business.

Zuckerberg, along with Musk, is already an investor in the “Vicarious” program, which hopes to produce a “unified algorithmic architecture” to imbue machines with human-like consciousness and control over their faculties.

http://www.theage.com.au/digital-life/digital-life-news/artificial-intelligence-not-a-threat-microsofts-eric-horvitz-contradicts-elon-musk-stephen-hawking-bill-gates-20150129-130sgv.html

Reply Quote

Date: 29/01/2015 22:08:20
From: wookiemeister
ID: 668598
Subject: re: Artificial intelligence not a threat

will that AI run on Microsoft?

Reply Quote

Date: 30/01/2015 10:40:53
From: Bubblecar
ID: 668731
Subject: re: Artificial intelligence not a threat

>Machines will eventually achieve a human-like consciousness but do not pose a threat to the survival of mankind, Microsoft head of research Eric Horvitz says

Hopefully, there will eventually be brain-AI interfaces that will enable us to make personal use of machine consciousness and intelligence ourselves. It’s not necessarily a “them & us” scenario.

Reply Quote

Date: 30/01/2015 10:52:43
From: Postpocelipse
ID: 668741
Subject: re: Artificial intelligence not a threat

Bubblecar said:


>Machines will eventually achieve a human-like consciousness but do not pose a threat to the survival of mankind, Microsoft head of research Eric Horvitz says

Hopefully, there will eventually be brain-AI interfaces that will enable us to make personal use of machine consciousness and intelligence ourselves. It’s not necessarily a “them & us” scenario.

It will have to become a major part of education considering the difficulty we have establishing meaningful teacher/student ratios.

Reply Quote

Date: 30/01/2015 12:05:33
From: PM 2Ring
ID: 668802
Subject: re: Artificial intelligence not a threat

We may never achieve artificial intelligence that rivals human intelligence. But if we do, there are no guarantees that it will be intrinsically benevolent. IMHO.

From Friendly artificial intelligence
Wikipedia said:


A friendly artificial intelligence (also friendly AI or FAI) is a hypothetical artificial general intelligence (AGI) that would have a positive rather than negative effect on humanity. The term was coined by Eliezer Yudkowsky to discuss superintelligent artificial agents that reliably implement human values. Stuart J. Russell and Peter Norvig’s leading artificial intelligence textbook, Artificial Intelligence: A Modern Approach, describes the idea:

Russell and Norvig said:

Yudkowsky (2008) goes into more detail about how to design a Friendly AI. He asserts that friendliness (a desire not to harm humans) should be designed in from the start, but that the designers should recognize both that their own designs may be flawed, and that the robot will learn and evolve over time. Thus the challenge is one of mechanism design—to define a mechanism for evolving AI systems under a system of checks and balances, and to give the systems utility functions that will remain friendly in the face of such changes.

‘Friendly’ is used in this context as technical terminology, and picks out agents that are safe and useful, not necessarily ones that are “friendly” in the colloquial sense. The concept is primarily invoked in the context of discussions of recursively self-improving artificial agents that rapidly explode in intelligence, on the grounds that this hypothetical technology would have a large, rapid, and difficult-to-control impact on human society.

Goals of a friendly AI, and inherent risks

Oxford philosopher Nick Bostrom has said that AI systems with goals that are not perfectly identical to or very closely aligned with human ethics are intrinsically dangerous unless extreme measures are taken to ensure the safety of humanity. He put it this way:

Nick Bostrom said:


Basically we should assume that a ‘superintelligence’ would be able to achieve whatever goals it has. Therefore, it is extremely important that the goals we endow it with, and its entire motivation system, is ‘human friendly.’

Nick Bostrom and Eliezer Yudkowsky have some wacky ideas, but IMO it’s certainly worthwhile contemplating what they have to say about friendly artificial intelligence. Yudkowsky doesn’t claim that he’s right, just that he’s Less Wrong. He’s a prolific writer; much of his writing on FAI can be found by following the links here: Friendly artificial intelligence

One of my favourites is Paperclip maximizer


The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else. — Eliezer Yudkowsky, Artificial Intelligence as a Positive and Negative Factor in Global Risk

The paperclip maximizer is the canonical thought experiment showing how an artificial general intelligence, even one designed competently and without malice, could ultimately destroy humanity. The thought experiment shows that AIs with apparently innocuous values could pose an existential threat.

The goal of maximizing paperclips is chosen for illustrative purposes because it is very unlikely to be implemented, and has little apparent danger or emotional load (in contrast to, for example, curing cancer or winning wars). This produces a thought experiment which shows the contingency of human values: An extremely powerful optimizer (a highly intelligent agent) could seek goals that are completely alien to ours (orthogonality thesis), and as a side-effect destroy us by consuming resources essential to our survival.

Description

First described by Bostrom (2003), a paperclip maximizer is an artificial general intelligence (AGI) whose goal is to maximize the number of paperclips in its collection. If it has been constructed with a roughly human level of general intelligence, the AGI might collect paperclips, earn money to buy paperclips, or begin to manufacture paperclips.

Most importantly, however, it would undergo an intelligence explosion: It would work to improve its own intelligence, where “intelligence” is understood in the sense of optimization power, the ability to maximize a reward/utility function—in this case, the number of paperclips. The AGI would improve its intelligence, not because it values more intelligence in its own right, but because more intelligence would help it achieve its goal of accumulating paperclips. Having increased its intelligence, it would produce more paperclips, and also use its enhanced abilities to further self-improve. Continuing this process, it would undergo an intelligence explosion and reach far-above-human levels.

It would innovate better and better techniques to maximize the number of paperclips. At some point, it might convert most of the matter in the solar system into paperclips.

This may seem more like super-stupidity than super-intelligence. For humans, it would indeed be stupidity, as it would constitute failure to fulfill many of our important terminal values, such as life, love, and variety. The AGI won’t revise or otherwise change its goals, since changing its goals would result in fewer paperclips being made in the future, and that opposes its current goal. It has one simple goal of maximizing the number of paperclips; human life, learning, joy, and so on are not specified as goals. An AGI is simply an optimization process—a goal-seeker, a utility-function-maximizer. Its values can be completely alien to ours. If its utility function is to maximize paperclips, then it will do exactly that.

A paperclipping scenario is also possible without an intelligence explosion. If society keeps getting increasingly automated and AI-dominated, then the first borderline AGI might manage to take over the rest using some relatively narrow-domain trick that doesn’t require very high general intelligence.

Conclusions

The paperclip maximizer illustrates that an entity can be a powerful optimizer—an intelligence—without sharing any of the complex mix of human terminal values, which developed under the particular selection pressures found in our environment of evolutionary adaptation, and that an AGI that is not specifically programmed to be benevolent to humans will be almost as dangerous as if it were designed to be malevolent.

Any future AGI, if it is not to destroy us, must have human values as its terminal value (goal). Human values don’t spontaneously emerge in a generic optimization process. A safe AI would therefore have to be programmed explicitly with human values or programmed with the ability (including the goal) of inferring human values.

Similar thought experiments

Other goals for AGIs have been used to illustrate similar concepts.

Some goals are apparently morally neutral, like the paperclip maximizer. These goals involve a very minor human “value,” in this case making paperclips. The same point can be illustrated with a much more significant value, such as eliminating cancer. An optimizer which instantly vaporized all humans would be maximizing for that value.

Other goals are purely mathematical, with no apparent real-world impact. Yet these too present similar risks. For example, if an AGI had the goal of solving the Riemann Hypothesis, it might convert all available mass to computronium (the most efficient possible computer processors).

Some goals apparently serve as a proxy or measure of human welfare, so that maximizing towards these goals seems to also lead to benefit for humanity. Yet even these would produce similar outcomes unless the full complement of human values is the goal. For example, an AGI whose terminal value is to increase the number of smiles, as a proxy for human happiness, could work towards that goal by reconfiguring all human faces to produce smiles, or tiling the solar system with smiley faces (Yudkowsky 2008).

[See link for references]
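
To make “utility-function-maximizer” concrete, here’s a toy sketch in Python. It’s entirely my own illustration (not from Yudkowsky or Bostrom, and the state variables are invented): a greedy agent that always picks whichever action yields the most paperclips. Note that nothing in the loop knows or cares what resources it consumes; anything not counted by the utility function simply doesn’t exist as far as the agent is concerned.

# Toy "paperclip maximizer": a greedy agent that picks whichever
# successor state scores highest on its utility function. Purely
# illustrative; the point is that the loop optimizes exactly what
# utility() measures and nothing else.

def utility(state):
    # The agent's entire value system: count paperclips. Anything not
    # represented here (life, love, variety...) is invisible to it.
    return state["paperclips"]

def actions(state):
    # Possible successor states. Converting "other_matter" is just
    # another route to paperclips; the agent attaches no value at all
    # to preserving it.
    succ = []
    if state["wire"] > 0:  # turn one unit of wire into a paperclip
        succ.append({**state, "wire": state["wire"] - 1,
                     "paperclips": state["paperclips"] + 1})
    if state["other_matter"] > 0:  # convert other matter into wire
        succ.append({**state, "other_matter": state["other_matter"] - 1,
                     "wire": state["wire"] + 2})
    return succ

def run(state, steps=25):
    for _ in range(steps):
        candidates = actions(state)
        if not candidates:
            break  # nothing left to optimize
        state = max(candidates, key=utility)  # greedy one-step lookahead
    return state

print(run({"wire": 3, "other_matter": 5, "paperclips": 0}))
# -> {'wire': 0, 'other_matter': 0, 'paperclips': 13}

Swap utility() for a count of smiles and you get the smiley-face scenario from the same quote; the loop doesn’t change, only the thing being measured.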

Reply Quote

Date: 30/01/2015 12:08:17
From: Bubblecar
ID: 668806
Subject: re: Artificial intelligence not a threat

>But if we do, there are no guarantees that it will be intrinsically benevolent.

Or even “sane”, as we understand the term.

Reply Quote

Date: 30/01/2015 12:24:41
From: PM 2Ring
ID: 668816
Subject: re: Artificial intelligence not a threat

Bubblecar said:


>But if we do, there are no guarantees that it will be intrinsically benevolent.

Or even “sane”, as we understand the term.

Indeed. If we can’t come up with a sufficiently clear & precise definition of sanity, we’re going to have difficulties incorporating that into an advanced AI.

On a related note, one of the projects that Yudkowsky & his associates are working on involves formulating definitions of human ethics and morality that would be suitable for programming into an AI. If we can’t even explain this stuff adequately to each other, how on earth are we going to build it into machines?

Humans have an emotional common ground simply because of our biological similarities. I suspect it will not be easy to communicate matters of emotional significance to non-biological intelligences.

Reply Quote

Date: 30/01/2015 12:28:34
From: Bubblecar
ID: 668820
Subject: re: Artificial intelligence not a threat

PM 2Ring said:


Bubblecar said:

>But if we do, there are no guarantees that it will be intrinsically benevolent.

Or even “sane”, as we understand the term.

Indeed. If we can’t come up with a sufficiently clear & precise definition of sanity, we’re going to have difficulties incorporating that into an advanced AI.

On a related note, one of the projects that Yudkowsky & his associates are working on involves formulating definitions of human ethics and morality that would be suitable for programming into an AI. If we can’t even explain this stuff adequately to each other, how on earth are we going to build it into machines?

Humans have an emotional common ground simply because of our biological similarities. I suspect it will not be easy to communicate matters of emotional significance to non-biological intelligences.

Best to insure we always have the option of turning the power off & unplugging them :)

Reply Quote

Date: 30/01/2015 12:30:35
From: Bubblecar
ID: 668822
Subject: re: Artificial intelligence not a threat

insure = ensure

Reply Quote

Date: 30/01/2015 17:30:47
From: wookiemeister
ID: 668884
Subject: re: Artificial intelligence not a threat

Microsoft has a paperclip that wants to “help” you

Reply Quote

Date: 30/01/2015 17:32:38
From: Postpocelipse
ID: 668885
Subject: re: Artificial intelligence not a threat

wookiemeister said:


Microsoft has a paperclip that wants to “help” you

would it mind paying my rent and bills while work is slow?

Reply Quote

Date: 30/01/2015 17:36:13
From: AwesomeO
ID: 668886
Subject: re: Artificial intelligence not a threat

I’m not sure artificial intelligence is much of a threat at all, or to whom. Unintelligence (a crappy word, I admit) is a much bigger threat, and to jobs: no insight or intelligence needed, just a synthesis of reasonable responses. The threat is to any job that is now done with a computer screen as a major element. First the jobs will be offshored, then they’ll be done, to a second- or third-pass degree at least, by a program.

Nothing evil about it, just an economic imperative to lower costs.

Reply Quote

Date: 30/01/2015 17:50:17
From: AwesomeO
ID: 668896
Subject: re: Artificial intelligence not a threat

I can see a brave new world where even seeing a doctor is rationalised: you upload all your temperature and blood pressure readings from the last week via an app, do blood and urine tests with gear you can buy at a chemist and that’s hooked up to the net, and describe your symptoms. By the time you see the doctor for your consult, he already has a computer-generated work-up of your readings and needs just a bit of one-on-one to clarify the synthesis of information.
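
(For fun, here’s a minimal sketch of the kind of automated pre-consult work-up I mean, in Python. All the field names and “normal” ranges are invented for illustration, not medical fact: average a week of home readings and flag anything out of range for the doctor’s summary.)

# Toy pre-consult work-up: summarize a week of home readings and
# flag out-of-range values. Field names and thresholds are invented
# examples, not medical advice.

NORMAL_RANGES = {
    "temp_c": (36.1, 37.5),   # body temperature, degrees Celsius
    "systolic": (90, 130),    # blood pressure, mmHg
    "diastolic": (60, 85),
}

def work_up(readings):
    # readings: list of dicts, e.g. {"temp_c": 36.8, "systolic": 124, ...}
    summary = {}
    for field, (lo, hi) in NORMAL_RANGES.items():
        values = [r[field] for r in readings if field in r]
        if not values:
            continue
        flagged = [v for v in values if not lo <= v <= hi]
        summary[field] = {
            "avg": round(sum(values) / len(values), 1),
            "out_of_range": flagged,
        }
    return summary

week = [
    {"temp_c": 36.8, "systolic": 124, "diastolic": 79},
    {"temp_c": 38.2, "systolic": 141, "diastolic": 92},  # feverish day
    {"temp_c": 37.1, "systolic": 128, "diastolic": 83},
]
print(work_up(week))
# -> {'temp_c': {'avg': 37.4, 'out_of_range': [38.2]},
#     'systolic': {'avg': 131.0, 'out_of_range': [141]},
#     'diastolic': {'avg': 84.7, 'out_of_range': [92]}}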

Reply Quote

Date: 30/01/2015 17:56:32
From: wookiemeister
ID: 668897
Subject: re: Artificial intelligence not a threat

artificial intelligence aka a manager

Reply Quote

Date: 30/01/2015 17:58:19
From: wookiemeister
ID: 668899
Subject: re: Artificial intelligence not a threat

AwesomeO said:


I can see a brave new world where even seeing a doctor is rationalised: you upload all your temperature and blood pressure readings from the last week via an app, do blood and urine tests with gear you can buy at a chemist and that’s hooked up to the net, and describe your symptoms. By the time you see the doctor for your consult, he already has a computer-generated work-up of your readings and needs just a bit of one-on-one to clarify the synthesis of information.

companies like BA have this on their aircraft

data is sent back to base in real time

by the time the aircraft lands the part needed is there or on its way

other aircraft companies don’t even know where their aircraft are or what’s wrong with them

Reply Quote

Date: 30/01/2015 20:44:07
From: Aquila
ID: 669044
Subject: re: Artificial intelligence not a threat

I watched the movie, Automata, the other day.
Again, another good concept for a sci-fi moofie, but it got somewhat boring after the first 40 min.

Reply Quote

Date: 31/01/2015 12:47:39
From: bob(from black rock)
ID: 669401
Subject: re: Artificial intelligence not a threat

We are all descended from AI,

“Let there be light, and there was light”

Reply Quote

Date: 31/01/2015 12:51:27
From: Tamb
ID: 669403
Subject: re: Artificial intelligence not a threat

bob(from black rock) said:


We are all descended from AI,

“Let there be light, and there was light”

We are all ascended from AI. fixed

Reply Quote

Date: 31/01/2015 12:53:01
From: JudgeMental
ID: 669406
Subject: re: Artificial intelligence not a threat

who is this “Al” everyone is on about?

Reply Quote

Date: 1/02/2015 14:03:45
From: bob(from black rock)
ID: 670047
Subject: re: Artificial intelligence not a threat

The most destructive and dangerous “intelligence” is human “intelligence”, which has created more death, destruction and misery than anything that the universe, or “nature”, has so far inflicted on planet earth.

Reply Quote

Date: 1/02/2015 14:11:27
From: diddly-squat
ID: 670050
Subject: re: Artificial intelligence not a threat

bob(from black rock) said:


The most destructive and dangerous “intelligence” is human “intelligence”, which has created more death, destruction and misery than anything that the universe, or “nature”, has so far inflicted on planet earth.

ummm no…

nature has indiscriminately wiped out countless species – not to mention hundreds of millions of humans.

Reply Quote

Date: 2/02/2015 12:59:57
From: Cymek
ID: 670392
Subject: re: Artificial intelligence not a threat

I imagine the real threat, if what Bubblecar proposes comes true, is a new species of humanity with superior abilities; the non-augmented may get left behind.

Reply Quote

Date: 2/11/2025 21:42:47
From: Kingy
ID: 2329103
Subject: re: Artificial intelligence not a threat

Reply Quote

Date: 2/11/2025 22:11:28
From: SCIENCE
ID: 2329115
Subject: re: Artificial intelligence not a threat

so should we buy

Reply Quote

Date: 2/11/2025 22:19:56
From: Kingy
ID: 2329118
Subject: re: Artificial intelligence not a threat

SCIENCE said:


so should we buy

I think the Wiggles had that covered some years back.

and I quote:

“Hot potato, hot potato (hot potato, hot potato)
Hot potato, hot potato (hot potato, hot potato)
Hot potato, hot potato (potato), potato (potato)
Potato, potato, potato

Cold spaghetti, cold spaghetti (cold spaghetti, cold spaghetti)
Cold spaghetti, cold spaghetti (cold spaghetti, cold spaghetti)
Cold spaghetti, cold spaghetti (spaghetti), spaghetti (spaghetti)
Spaghetti, spaghetti, spaghetti

Whooo, wiggy wiggy wiggy
Whooo, wiggy wiggy wiggy
Gimme that, gimme that, gimme that, hoo

Whooo, wiggy wiggy wiggy
Whooo, wiggy wiggy wiggy
Gimme that, gimme that

Mashed banana, mashed banana (mashed banana, mashed banana)
Mashed banana, mashed banana (mashed banana, mashed banana)
Mashed banana, mashed banana (banana), banana (banana)
Banana, banana, banana

Whooo, wiggy wiggy wiggy
Whooo, wiggy wiggy wiggy
Gimme that, gimme that, gimme that, hoo

Whooo, wiggy wiggy wiggy
Whooo, wiggy wiggy wiggy
Gimme that, gimme that

Hot potato, hot potato (hot potato, hot potato)
Hot potato, hot potato (hot potato, hot potato)
Hot potato, hot potato (potato), potato (potato)
Potato, potato, potato
Whoo”

Reply Quote