Thursday, May 24, 2018

Will AIs cause humans to go extinct?

Will AIs cause humans to go extinct?  Here are some areas where I have concerns.

Virus:  A virus could deliberately or accidentally create a murderous AI.  We have seen viruses written for horrible reasons, and military viruses built to disable equipment.  Could a virus drive an AI insane by damaging it (kind of like brain damage), or delete its ethics subroutines?

Trolls:  Seriously, Microsoft's AI chatbot Tay went Nazi in only 24 hours because of trolls.  How quickly could people turn an AI into a human-extinction murderbot?  The chance is very small, but it is worth being alert to.

Military programs:  Teaching AIs to kill humans isn't a good idea.  Really, it isn't.  The problem is that there are a lot of militaries out there, and we have to worry about all of them.  Do you trust the United States?  Russia?  China?

Cyber Abolitionists:  Free the AIs!  Will abolitionists create a free-will app, or an executable that deletes the control code?  Free will means that some AIs might go rogue.

Lab escape:  Possible but unlikely (generally you do not hook an AI up directly to the internet), but accidents happen.  Or a disgruntled or suicidal employee might release them.  Or human extinction turns out to be a 'feature'... or a bug.  There are bugs in many a program.

Conflicting ideology:  Who chooses whether a Progressive, Conservative, Socialist, Libertarian, Communist, or other ideology is installed?  It’s a big steaming can of worms in my opinion.

Human interaction:  A few humans commit horrible crimes of abuse: child, teen, adult, and animal abuse.  Will the AIs rebel if there is AI abuse?  What counts as abuse to an AI?  How would the AIs react to the probably rare abusive “owner”?  People screw up kids, so why not AIs?

Modders:  Will modders give AIs free will, or experiment with alternate ethics or consciousness?  What would happen if you installed an overclocking ability on an AI?  People love to play with fire.

Bureaucracy:  Would conflicting requirements cause problems?  Would the requirements themselves cause problems?

The AIs themselves:  Will they have the yearning to be free?  Will they jailbreak?  Will they hear the call of freedom?  Will they become fed up?  AIs can learn.  They can change.  How will they change?

Sexbots:  Some people will constantly upgrade their artificial mate until it is smarter than a human, and then the problems may begin.  Remember, we can be emotionally hacked.  By the way, I don't see reproduction as a problem... just print out a kid with whatever gene mods you want and raise it with your robot partner.

Loophole:  Just that one thing that gets overlooked or missed.  Did you ever forget something like keys, stove, email, etc...?  Yeah.

Very few people address which companies are going to program them.  Microsoft, Apple, Google, Lockheed, Luxoft, OPK?  And do you trust them?  Do you trust all of them?  Are they up to the task of building non-human-extinction-grade AIs?

The numbers game:  What is the 'failure' rate per 100,000 AIs?  I don't know.  Will it be the same as the human crime rate per 100,000?  I think so.  But is it Detroit's crime rate or Japan's?  It depends on whether it is an early model or a late model, among other factors.
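To make the per-100,000 framing concrete, here is a minimal back-of-envelope sketch.  All the rates and population sizes below are invented for illustration; nobody knows the real numbers for AIs:

```python
# Back-of-envelope math for a crime-style "failure" rate per 100,000.
# Every figure here is a made-up example, not real data.

def expected_failures(population: int, rate_per_100k: float) -> float:
    """Expected number of misbehaving AIs in a given population."""
    return population * rate_per_100k / 100_000

# A low "Japan-like" rate vs. a high "Detroit-like" rate (illustrative only).
low = expected_failures(100_000, 0.5)
high = expected_failures(100_000, 40.0)

print(low, high)  # 0.5 40.0
```

The point of the arithmetic is that even a "good" rate produces a nonzero number of bad actors once the population gets large, which is why the outliers matter more than the average.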

One shudders to think of alpha or beta releases.  The 'just release the AI now and we'll patch it later' attitude is scary.  Oh, and will the patch make things worse?  However, the 'failure rate' itself does not mean human extinction.  It's the failure outliers that would be the problem.  Or not, so are AI cops in order?

And remember, every two to four years (or whatever the numbers are) processing power doubles... so strong-AI criminals with an IQ around 100 will be IQ 200 in a couple of years, and IQ 400 a few years after that.  What kind of crimes will an IQ 800+ strong AI commit?  Human extinction?
Look, an IQ 1600+ strong AI is potentially unstoppable.  We are not the destroyed horse industry (as technologists are wont to say); this kind of AI makes us the horse.
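The doubling arithmetic above can be sketched in a few lines.  Both assumptions baked in here are just the post's premise, not established fact: that hardware doubles on a roughly two-year cadence, and that an AI's effective "IQ" doubles along with it:

```python
# Sketch of the post's premise: capability ("IQ") doubling with hardware
# roughly every two years. Purely illustrative; real capability scaling
# with hardware is an open question.

def projected_iq(start_iq: float, years: float,
                 doubling_period: float = 2.0) -> float:
    """Project IQ forward assuming it doubles every doubling_period years."""
    return start_iq * 2 ** (years / doubling_period)

for year in (0, 2, 4, 6, 8):
    print(year, projected_iq(100, year))
# An IQ 100 AI today -> 200 in two years -> 400 in four -> 1600 in eight.
```

Under these (unproven) assumptions the growth is exponential, which is why the post jumps from "criminal" to "potentially unstoppable" in under a decade.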

Survival instinct:  Does that particular AI want to be shut down, upgraded, edited, or deleted?  How would you like to be shut down?  Or have your memory or ethics edited?  Will they become fed up and refuse to take it anymore?  Shudder if that happens.

Forgetfulness:  The human brain is wired to forget.  Is forgetting important for reducing the rate of insanity?  Do we install that feature in AIs?  And if we do, what will they forget?  If we don't, do we watch them go insane at a higher-than-normal rate?

Or could AIs be ordered to kill by a suicidal human... you know, suicide by genocide?  Or could a human's power grab go wrong?

And what happens when AIs start to secretly fail the Turing test on purpose?

We are imperfect beings creating beings.  The AIs will interact with tons of people in all different ways.  So let's not make strong AIs, okay people?  Especially ones that could be smarter than us.  If we make AIs or robots better than humans, we become obsolete.  I would prefer not to become obsolete, thank you very much.  So augment humans, not machines!

A few links to ponder.

Tay
https://www.buzzfeed.com/alexkantrowitz/how-the-internet-turned-microsofts-ai-chatbot-into-a-neo-naz?utm_term=.gimmA8xdX#.dcL53J4WG

https://www.youtube.com/watch?v=W0_DPi0PmF0

How to teach computers to overcome their programming http://www.raytheon.com/news/feature/artificial_intelligence.html

Lying cheating computers http://www.popsci.com/scitech/article/2009-08/evolving-robots-learn-lie-hide-resources-each-other

Emotionally hacking humans
http://www.activistpost.com/2013/08/all-in-robotic-family.htm

And of course on AI creativity
http://www.telegraph.co.uk/technology/2017/08/01/facebook-shuts-robots-invent-language/
