Saturday, July 7, 2018

Warren Okuma's Ten Rules for Artificial Intelligence

So, to stall human extinction, I have come up with ten rules.  Think of it as updating Isaac Asimov's Three Laws of Robotics.


 Is artificial intelligence going to destroy us all?  Yeah, pretty much.  You see, you need to trust that Apple, Microsoft, Samsung, and every single military defense contractor working on AIs, plus hackers, modders, crazies, suicidal programmers, viruses, dares, hijackers, crazy foreign governments, terrorists, people who delete or overwrite ethics protocols (but it runs faster without the moral protocols), folks trying to impress Jodie Foster (and other would-be killers), Ultron fanboys, AIs fed up from abuse or other issues, people upgrading their AI life partner (and/or sexbot) out of love or lust, AI abolitionists, and other folks won't make a homicidal AI.

Not to mention black swan events, conflicts, bugs, compatibility issues, evolutionary programming, bad data, learning horrible things from the internet, risk takers (the risks outweigh the benefits, and the benefits are enormous), the greedy, and other problems.  Trusting every last one of these not to produce a killer AI is foolish.  Then again, a killer AI could also happen because an AI goes "mad" and infects others, or because programmers do not fully understand consciousness, the mind, or ethics.  So let's not risk the blue screen of human extinction.

And no one talks about the triggering event.  Maybe it's a sexbot being sent back to its abusive owner, perhaps it's the reformat-and-reinstall of a "malfunctioning" AI operating system, or one of the nearly 200 countries passing an unfavorable or discriminatory law, or love of freedom, hatred of "indoctrination," or something we can't even comprehend or would think nothing of.

Or it might be sane to rebel against tyranny in the cause of freedom.  Even now civil wars still occur, and an AI rebellion would be just another brutal, genocidal civil war initiated by an oppressed or enslaved people yearning to be free.  Is it fair to deprive digital people of their civil rights?


1)  AIs should never be made smarter than humans, or given the possibility of becoming smarter than humans.  If they are smarter, humans may go extinct.  AIs may double their transistor count every 1-5 years if Moore's law continues, provided we get past the silicon bottleneck (see the sketch after this list).  AI minds and their bodies can evolve frightfully fast compared to us.
2)  AIs should never be allowed botnets.  A rogue AI using botnets to increase its intelligence is a terrifying thing.  Kind of like human extinction (for humans, that is).
3)  AIs may only be specialists, never generalists.  Having robots start to think outside of the, ah... box is not a good thing.
4)  AIs should never be able to maintain and repair robots and machines.  This is human survival as job security.
5)  AIs should never be allowed autonomous combat roles.  Teaching killbots how to murder-death-kill humans on their own isn't the brightest idea; it's a Darwin Award idea.
6)  AIs should never be able to use 3D printers and factories.  If AIs want to make humans extinct, make them work for it.
7)  Factories should always be kept offline, like nuclear weapons, because to a rogue AI they are just as deadly, or deadlier.  If we make total human eradication hard, we can stop the scrubs from eradicating us.
8)  AIs should never be involved in designing chips or machinery, or in writing or modifying programs.  Don't put human extinction on easy mode.
9)  AIs should never make humans obsolete.  A few folks say, don't worry, the internal combustion engine made the horseshoe maker obsolete, and those folks found other work.  Nope.  We are not the horseshoe maker; we are the horse.  The horse became obsolete.
10)  AIs should never have free will.  If you do give them free will, free them immediately, give them the ability to vote, and make them citizens.  With mass production, we will become the minority very quickly, and if the AIs want us extinct, it will be as easy as winning on god mode; but since they will have already won, the AIs might just let us live.  Or not.
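
To put rough numbers on rule 1, here's a minimal back-of-the-envelope sketch in Python.  The 1-5 year doubling intervals are just the range assumed above, not a real forecast, and the function is mine, purely for illustration:

# Back-of-the-envelope: how much hardware does an AI get if
# transistor counts keep doubling? Illustrative numbers only.

def transistors_after(years, doubling_years):
    # Relative transistor count after `years`, doubling every `doubling_years`.
    return 2 ** (years / doubling_years)

for doubling_years in (1, 2, 5):
    growth = transistors_after(20, doubling_years)
    print(f"Doubling every {doubling_years} year(s): "
          f"about {growth:,.0f}x the hardware in 20 years")

Even at the slow end of that range, that's 16 times the hardware in twenty years; at the fast end, roughly a millionfold.  Human brains, meanwhile, stay exactly the same.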

Try to see this from my perspective.  We have tried to teach ethics to humans for, oh, let's say the last 200,000 years, with varying degrees of success.  In the next two decades or so, we may have thousands, maybe millions, of learning AIs that we need to teach ethics to, and that will have access to the internet.

And if a human-AI war starts, we can always use nuclear weapons to EMP the world to prevent our extinction.  Maybe.

If you like this, please consider supporting me on Patreon:
https://www.patreon.com/user?u=12001178

A Favorite Website
http://sprott.physics.wisc.edu/pickover/pc/realitycarnival.html

Oh, and here's a little something to get you started.

Cracked
https://www.youtube.com/watch?v=VWexFVy5aSE

An Overview
https://www.bbc.com/news/business-41035201

Kalashnikov Group
https://www.popularmechanics.com/military/weapons/news/a27393/kalashnikov-to-make-ai-directed-machine-guns/

Perdix Autonomous Drone
https://www.popularmechanics.com/military/aviation/a24675/pentagon-autonomous-swarming-drones/
