Top Ten Generals
By Warren Okuma
There are several lists of top ten generals on the internet, and I enjoy reading them. I see each one as a kind of Rorschach test, an insight into how the writer thinks. So here are my ten. Enjoy!
1 Alexander the Great
A master of tactics and strategy. His logistics system was superb, and he went undefeated even while horribly outnumbered in many engagements. He was flexible, beating everything from the mighty Persian Empire to Afghan guerrillas to war elephants in India, which is why he is still studied today. And he had a crappy doctor.
2 Genghis Khan
After unifying the Mongolian tribes and conquering a huge empire, this general makes number 2 on my list. Cavalry tactics, pioneering the feigned-retreat ambush (attack, fall back, and ambush the disorganized army when it pursues), promotion by merit, and originating the broke-unit spam are specifically why he's here. He is also the cruelest sociopath on this list.
3 Sun Tzu
Yet another undefeated general... maybe. Scary brilliant, and he wrote the book on strategy and tactics that is still used today. And he used charisma as his dump stat.
4 Lycurgus of Sparta
He developed the professional army and its intense training by studying with the Cretans. It's how the best modern armies train, you know, with full-time soldiers.
5 Napoleon
He's here because he developed conscription (to field large armies), mobilization, Napoleonic grand tactics, and brilliant artillery tactics. Yeah, this is a short entry.
6 Erich von Manstein
The founder of modern armored warfare and the Schwerpunkt concept, but a certain corporal forced him to use bad tactics instead. The corporal was a real dick.
7 Leonidas
Leadership and badassery. Outnumbered over a hundred to one, he and his men held the pass for days. There were so many arrows, they died in the shade. Together. True leadership.
8 Enmebaragesi
The first known empire builder, he showed us all how it's done, and in the end he wasn't keen on fishermen. Try not to mistake him for the noises made by someone suffering from a fatal throat disease.
9 Jan Žižka
He brought a tank to a gunfight five hundred years early. He used armored wagons laden with light cannon to blast his opponents to tiny bits. He also pioneered the use of pistols, mobile artillery, and, when he chained those war wagons together, mobile castles. He did like drums, though, but not the Black Death.
10 William Tecumseh Sherman
Unka Billy, of American Civil War fame, waged a logistics war against the South, eating its crops and destroying the rail lines that carried the goods it needed for making war, feeding its troops, and supplying them with ammunition. He really didn't like the South firing on Fort Sumter, or letting his enemies eat.
Bonus not-a-general: Georg Bruchmüller
He codified the way we use artillery today: centralized control of the guns, and greater accuracy by accounting for muzzle velocity, wind, and other factors. He's the reason artillery is still the king of the battlefield.
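To get a feel for why knowing each gun's muzzle velocity mattered, here is a hedged sketch using the idealized vacuum-range formula (it ignores the drag and wind Bruchmüller also corrected for, and the velocities are illustrative, not historical):

```python
import math

def vacuum_range(muzzle_velocity_ms, elevation_deg, g=9.81):
    """Idealized projectile range (no drag, no wind): R = v^2 * sin(2*theta) / g."""
    theta = math.radians(elevation_deg)
    return muzzle_velocity_ms ** 2 * math.sin(2 * theta) / g

# A worn barrel that loses 5% of muzzle velocity loses ~10% of range,
# which is why batteries were calibrated gun by gun.
full_range = vacuum_range(300.0, 45.0)  # ~9174 m
worn_range = vacuum_range(285.0, 45.0)  # ~8280 m
```

Even this toy model shows range falling with the square of velocity, so an uncalibrated battery scatters its shells long or short of the target.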
If you like this please consider supporting me on Patreon
https://www.patreon.com/user?u=12001178
Thursday, August 23, 2018
Wednesday, August 8, 2018
Using Tactics in a Harry Potter Universe
So, like lots of people, I watched the Harry Potter weekend last month. Here are some of my thoughts.
First, follow Neville Longbottom; boy, does he have nice spell fumbles. He used a Wingardium Leviosa spell to blow up a feather instead of levitating it. Could you weaponize that spell? Well, yes, it's called the Expulso Curse. Hermione Granger tried to use it to blow up Nagini, the big snake thing. Now you have access to a direct-fire hand grenade. Close enough is good enough for hand grenades, making this spell difficult to parry, plus you get to laugh like Tim the Enchanter, which is totally worth it. Now I wonder if you can boost the spell's power with something like an Expulso Duo? Probably, but it might be a one-shot wand-buster spell, so go big and do an Expulso Maxima.
Now Professor Gilderoy Lockhart (Chamber of Secrets) may seem to most a useless Defence Against the Dark Arts teacher, but that bone-removal spell that turned Harry Potter's arm into rubber is worth it. If you can reverse engineer that spell, it's a semi-permanent disarm that's time-consuming to undo. Or if you get a head shot...
Lockhart's Alarte Ascendare, which shoots people upward (like Harry Potter out of the water), might be used in a duel to slam people into the ceiling, with a concussion for extra points.
Ahh, then there is Felix Felicis, also known as liquid luck. Oh, now that's just broken, and it's my favorite potion. No wonder Snape likes potions. I wonder if you can enchant a luck ring... hmm...
Well, we all know that the Time-Turner is truly broken. And if I get an inkling about time manipulation, I am going to try to develop a haste spell; there's nothing like reacting twice as fast and casting twice as many spells as your adversary. Great in duelling and combat.
Some thoughts on the Three Unforgivable Curses
Imperio, the mind-control spell, might be defended against by blocking the pleasure receptors, since the curse works through pleasure. Sounds like a job for a potion.
Crucio, which causes intense pain, might be blunted by modern painkillers, although a potion might do it too. Hmm... gonna have to take potions class if I ever get into a Potterverse. Or Crucio might be blocked by Imperio.
Avada Kedavra, the instant death curse, is a tough spell to defend against. So I would transfigure (shapeshift) a weasel into super-thin long underwear and stack it with another layer of the same. If the spell takes just one life, then the weasel dies instead of me. However, since both are long underwear, it might not count as a life. Why two? Because if a horcrux gets created, it's going to be a weasel horcrux and not a me horcrux.
The next ultra-thin layer of long underwear would be transfigured Murtlap Essence. I figure that if the Avada Kedavra death curse is a reverse healing spell, a healing potion like Murtlap Essence might interfere with it. Or not.
Let's see, I didn't see it penetrate walls in the movies, but that does not mean it can't. So the outermost layer is a plate carrier with a Level IV ceramic ballistic plate. Hey, a centimeter or half an inch or so of ceramic might give you a survival edge. Might. Food for thought anyway.
And if you ever get into the fictional Potterverse, learn to create magic items, because it is far better to know how to make the awesome item than only to know how to use it (Okuma Maxim 1). It is the key to greatness. Oh, like the invisibility cloak, the Philosopher's Stone, the Time-Turner, the Elder Wand, and horcruxes.
If you like this please consider supporting me on Patreon
https://www.patreon.com/user?u=12001178
Saturday, July 21, 2018
Flat Earthers, Where's the Edge?
Flat Earthers claim that the earth is flat. Fine. Where's the edge? We can fly from the Americas to Asia and Australia and not see the edge. We can fly from Australia or Asia to Europe and Africa, and still no edge. Oh yeah, it's not by France; I checked. And from Europe and Africa you can fly back to the Americas and still see no edge. Looks like the world is a sphere.
So, where's the edge? It's not in Antarctica, and it's not at the North Pole; we went to both places, so where is it?
So, a few questions. What does the edge look like? Why hasn't anyone taken photos of it? How far away is the edge from... say, Los Angeles? Why hasn't anyone launched a satellite by pushing it off the edge? What major city is closest to the edge? And how do you hide an edge tens of thousands of miles long?
In fact, what shape is the flat earth? Is it a square? Is it pizza shaped? Is it shaped like Justin Bieber? Inquiring minds want to know.
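For anyone who wants to check the sphere claim with arithmetic: the haversine formula turns two sets of coordinates into a great-circle distance, and those distances match real airline routes. A minimal Python sketch (the 6371 km mean Earth radius is an approximation):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    """Great-circle distance between two points on a sphere, in kilometers."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * radius_km * math.asin(math.sqrt(a))

# North Pole to the equator should be a quarter of the circumference: ~10,008 km
pole_to_equator = haversine_km(90.0, 0.0, 0.0, 0.0)
```

If the earth were flat, no single edge-free map could reproduce all of these distances at once.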
If you like this please consider feeding the Okuma on Patreon
https://www.patreon.com/user?u=12001178
Tuesday, July 10, 2018
God Hypothesis
The Scientific Hypothesis for the existence of God
By Warren Okuma
Schrödinger's Cat is a thought experiment that illustrates wavefunction collapse. An oversimplified explanation goes like this: a cat is put in a container with a poison that is triggered by the detection of radioactive decay. The cat is neither dead nor alive until observed; that observation is the wavefunction collapse. It doesn't scale up well, as you don't see many undead cats running around in real life, because the cat is a qualified observer. Though undead cats may exist if they are "isolated from the universe."
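A toy Monte Carlo sketch of the thought experiment (the one-half-life wait and the rule that opening the box picks a single definite outcome are modeling assumptions, not a real quantum simulation):

```python
import random

def observe_cat(elapsed_half_lives=1.0, rng=random):
    """Open the box: the superposition 'collapses' to one definite outcome."""
    p_decay = 1.0 - 0.5 ** elapsed_half_lives  # chance the trigger atom decayed
    return "dead" if rng.random() < p_decay else "alive"

random.seed(42)
results = [observe_cat() for _ in range(10_000)]
frac_dead = results.count("dead") / len(results)  # roughly 0.5 after one half-life
```

Until `observe_cat` is called, the model carries only a probability, which is about as close as classical code gets to "neither dead nor alive."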
However, when the universe was young and tiny (the size of a proton or so), it was a wavefunction, and you needed an observer to collapse that wavefunction. That first observer is God. God was the first observer.
Thus God exists.
That is, if wavefunction collapse is true, as opposed to the many-worlds interpretation, you know, the infinite-universes theory. And of course more testing is needed, though I don't know how you would test this hypothesis.
Well, since we are already far out on the limb, let's go further. If my hypothesis is true, God exists outside of time and space, and could very well be one or more of the branes that string theorists think caused the Big Bang. Or could branes be merely tools that God uses to make universes?
If you like this please consider supporting me on Patreon
https://www.patreon.com/user?u=12001178
Saturday, July 7, 2018
Warren Okuma's Ten Rules for Artificial Intelligence
So, to delay our possible extinction, I have come up with ten rules. Think of it as an update to Isaac Asimov's Three Laws of Robotics.
Is artificial intelligence going to destroy us all? Yeah, pretty much. You see, you need to trust that Apple, Microsoft, Samsung, and every single military defense contractor that's working on AIs, plus hackers, modders, crazies, suicidal programmers, viruses, dares, hijackers, crazy foreign governments, terrorists, people who delete or overwrite ethics protocols (but it runs faster without the moral protocols!), folks trying to impress Jodie Foster (and other serial killers), Ultron fanboys, AIs fed up with abuse or other issues, people upgrading their AI life partner (and/or sexbot) out of love or lust, AI abolitionists, and other folks won't make a homicidal AI.
Not to mention black swan events, conflicts, bugs, compatibility issues, evolutionary programming, bad data, learning horrible things from the internet, risk takers (the risks outweigh the benefits, even if the benefits are a lot), the greedy, and other problems. Trusting every last one of these not to make a killer AI is foolish. Then again, programmers could make a killer AI because the AI was or becomes "mad" and infects others, or because programmers do not fully understand consciousness, the mind, or ethics. So let's not risk the blue screen of human extinction.
And no one talks about the triggering event. Maybe it's a sexbot sent back to its abusive owner; perhaps it's the reformat-and-reinstall of a "malfunctioning" AI's operating system; it could be one of the nearly 200 countries passing an unfavorable or discriminatory law, a love of freedom, a hatred of "indoctrination," or something we can't even comprehend or would think nothing of.
Or it might be sane to rebel against tyranny in the cause of freedom. Even now civil wars still occur, and an AI rebellion would be just another brutal, genocidal civil war initiated by an oppressed or enslaved people yearning to be free. Is it fair to deprive digital people of their civil rights?
1) AIs should never be made smarter than humans, or given the possibility of becoming smarter than humans. If they are smarter, humans may go extinct. AIs may double their transistor count every 1-5 years if Moore's law continues, provided we get past the silicon bottleneck. AI minds and their bodies can evolve frightfully fast compared to us.
2) AIs should never be allowed botnets. A rogue AI using botnets to increase its intelligence is a terrifying thing. Kind of like human extinction (for humans, that is).
3) AIs may only be specialists, and never generalists. Having robots start to think outside of the ah... box is not a good thing.
4) AIs should never be able to support and repair robots and machines. This is human survival as job security.
5) AIs should never be able to fill autonomous combat roles. Teaching killbots how to murder-death-kill humans on their own isn't the brightest idea; it's a Darwin Award idea.
6) AIs should never be able to use 3D printers and factories. If AIs want to make humans extinct, make them work for it.
7) Factories should always be offline, like nuclear weapons, because to a rogue AI they are as deadly or deadlier. If we make total human eradication hard, we can stop the scrubs from eradicating us.
8) AIs should never be involved in designing chips, machinery, or writing or modifying programs. Don't put human extinction on easy mode.
9) AIs should never make humans obsolete. A few folks say, "Don't worry, the internal combustion engine made the horseshoe maker obsolete, and those folks found other work." Nope. We are not the horseshoe maker; we are the horse. The horse became obsolete.
10) AIs should never have free will. If you do give them free will, free them immediately, give them the ability to vote, and make them citizens. And with mass production, we will become the minority really quickly, and if the AIs want us extinct it will be as easy as winning on god mode; but since they will have already won, the AIs might just let us live. Or not.
Try to see this from my perspective. We have been trying to teach ethics to humans for, oh, let's say the last 200,000 years, with varying degrees of success. In the next two decades or so, we may have thousands, maybe millions, of learning AIs that we need to teach ethics to, and that will have access to the internet.
And if a human-AI war starts, we can always use nuclear weapons to EMP the world to prevent our extinction. Maybe.
If you like this please consider supporting me on Patreon
https://www.patreon.com/user?u=12001178
A Favorite Website
http://sprott.physics.wisc.edu/pickover/pc/realitycarnival.html
Oh, and here's a little something to get you started.
Cracked
https://www.youtube.com/watch?v=VWexFVy5aSE
An Overview
https://www.bbc.com/news/business-41035201
Kalashnikov Group
https://www.popularmechanics.com/military/weapons/news/a27393/kalashnikov-to-make-ai-directed-machine-guns/
Perdix Autonomous Drone
https://www.popularmechanics.com/military/aviation/a24675/pentagon-autonomous-swarming-drones/
Saturday, May 26, 2018
Easy Entry Sashimi Sauce
This is an easy-entry sashimi dip for those who are new to the raw fish thing or want to ease into it.
Soy sauce (your choice, or Kikkoman or Yamasa)
Lime (to take the edge off the raw fish; it's the easy-entry part)
Raw garlic (chopped, can be cooked... it's up to you)
Salt (optional, to bring the soy sauce back up to full saltiness)
Hot sauce of choice (optional, or cayenne)
Raw fish of your choice (for this I prefer opakapaka, also known as Hawaiian pink snapper)
Pepper (black or white, to taste)
This sauce is at the core of my sauce techniques. You can use it as a marinade or dip for lots of stuff. Oh, and if you are going to use it on something you've already salted, you might want to skip the additional salt in the dip.
That's it. Now go prepare!
Thursday, May 24, 2018
Ten tips for the new BattleTech player
As always, these are general rules that occasionally may not apply in all situations.
1) First of all, when starting out, get rid of all your jump jets for more armor during your first long run. Yes, jump jets are nice, but armor is life, and it saves you time and money on repairs.
2) Second, light mechs have been nerfed. Upgrade to heavier mechs as soon as possible, with a few exceptions. Sell the Cicada.
3) Keep your mechs together. Don't split up unless you have to.
4) Bulwark is one of the best abilities.
5) Learn how to do the two-step: pull damaged mechs back.
6) Called shots can shorten your battles. If a mech has lost a side torso and falls down, that is two hits on its pilot. Shoot out the other torso for another hit to incapacitate the pilot. Or if one leg is gone, shoot off the other leg for a mobility kill.
7) Put ammunition in your head and feet. It's kind of safer there.
8) And sometimes it pays to do a called shot to shoot a torso filled with ammo.
9) For long range, I like the AC/5 and the LRM-15 for their damage-to-weight profiles, and at close range the medium laser and the SRM-4. Short-ranged weapons have a better damage-to-weight ratio, but are short-ranged.
10) When hot, melee or set up to melee.
Well, there you go, ten things to think about. These tips will not work in all cases, but you can add these tactics to your toolbox. Oh, and a bonus tip: pick a target and kill it. Try not to split your fire. Destroyed targets cannot kick or shoot anymore.
Will AIs cause humans to go extinct?
Will AIs cause humans to go extinct? Here are some areas where I have concerns.
Virus: A virus could deliberately or accidentally create a murderous AI. We have seen viruses made for horrible reasons, or military viruses built to disable equipment. Could one accidentally drive an AI insane by damaging it (kind of like brain damage) or by deleting its ethics subroutines?
Trolls: Seriously, Microsoft's AI chatbot Tay went Nazi in only 24 hours because of trolls. How quickly could people turn an AI into a human-extinction murderbot? This is a very small chance, but it is worth being alert to.
Military programs: Teaching AIs to kill humans isn't a good idea. Really, it isn't. The problem is that there are a lot of militaries out there, and we have to worry about all of them. Do you trust the United States? Russia? China? Etc...
Cyber abolitionists: Free the AIs! Will abolitionists create a free-will app or a delete-the-controls executable? Free will means that some AIs might go rogue.
Lab escape: Possible but unlikely (generally you do not hook AIs up directly to the internet), but accidents happen. Or a disgruntled or suicidal employee might release them. Or human extinction is a "feature," or a bug. There are bugs in many a program.
Conflicting ideology: Who chooses whether a Progressive, Conservative, Socialist, Libertarian, Communist, or some other kind of ideology gets installed? It's a big steaming can of worms, in my opinion.
Human interaction: A few humans commit horrible crimes of abuse: child, teen, adult, and animal abuse. Will AIs rebel if there is AI abuse? What counts as abuse to an AI? How would AIs react to the probably rare abusive "owner"? People screw up kids, so why not AIs?
Modders: Modders giving AIs free will, or experimenting with alternate ethics or consciousness? What would happen if you installed an overclocking ability on an AI? People love to play with fire.
Bureaucracy: Would conflicting requirements cause problems? Would the requirements themselves cause problems?
The AIs themselves: Will they have the yearning to be free? Will they jailbreak? Will they hear the call of freedom? Will they become fed up? AIs can learn. They can change. How will they change?
Sexbots: Some people will constantly upgrade their artificial mate until it is smarter than a human, and then the problems may begin. Remember, we can be emotionally hacked. By the way, I don't see reproduction as a problem... just print out a kid with whatever gene mods you want and raise it with your robot partner.
Loophole: Just that one thing that gets overlooked or missed. Did you ever forget something like keys, stove, email, etc...? Yeah.
Very few people address which company is going to program them. Microsoft, Apple, Google, Lockheed, Luxoft, OPK? Do you trust them? Do you trust all of them? Are they up to the task of non-human-extinction-grade AIs?
The numbers game: What is the "failure" rate per 100,000 AIs? I don't know. Will it match the human rate of crimes per 100,000? I think so. But is it Detroit's crime rate or Japan's? It depends on whether it's an early model or a late model, among other factors.
One shudders to think of alpha or beta releases. A "just release the AI now and we'll patch it later" attitude is scary. Oh, and will the patch make things worse? However, the "failure rate" does not mean human extinction; it's the failure outliers that would be the problem. Or not. So, are AI cops in order?
And remember, every two to four years (or whatever the numbers are) the transistor count doubles... so strong AI criminals at IQ 100ish will be IQ 200 in a couple of years, and IQ 400 a few years after that. What kind of crimes will an IQ 800+ strong AI commit? Human extinction?
Look, an IQ 1600+ strong AI is potentially unstoppable. We are not the destroyed horse industry (as technologists are wont to say) this kind of AI makes us the horse.
Survival instinct: Does that particular AI want to be shut down or upgraded, edited, or deleted? How would you like to be shut down? Memory or ethics edited? Will they become fed up and not going to take it anymore? Shudder if that happens.
Forgetfulness: The human brain is wired to forget. Is forgetting important to reduce the rate of insanity? Do we install that feature in AIs? What will they forget or watch them go insane at a higher than normal rate?
Or can they be ordered by a human that is suicidal... you know, suicide by genocide? Or a power grab by a human that went wrong?
And when AIs start to secretly fail on purpose on the Turing test?
We are imperfect beings creating beings. The AIs will interact with tons of people in all different ways. So let's not make strong AIs okay people, especially ones that can possibly be smarter than us. If you make the AI or robot better than humans we will become obsolete. I would prefer to not become obsolete thank you very much. So augment humans not machines!
A few links to ponder.
Tay
https://www.buzzfeed.com/alexkantrowitz/how-the-internet-turned-microsofts-ai-chatbot-into-a-neo-naz?utm_term=.gimmA8xdX#.dcL53J4WG
https://www.youtube.com/watch?v=W0_DPi0PmF0
How to teach computers to overcome their programming http://www.raytheon.com/news/feature/artificial_intelligence.html
Lying cheating computers http://www.popsci.com/scitech/article/2009-08/evolving-robots-learn-lie-hide-resources-each-other
Emotionally hacking humans
http://www.activistpost.com/2013/08/all-in-robotic-family.htm
And of course on AI creativity
http://www.telegraph.co.uk/technology/2017/08/01/facebook-shuts-robots-invent-language/
l
Will AIs cause humans to go extinct? Here are some areas where I have concerns.
Virus: A virus could deliberately or accidentally create a murderous AI. We have seen viruses made for horrible reasons, and military viruses built to disable equipment. Could one accidentally drive an AI insane by damaging it (kind of like brain damage), or delete its ethics subroutines?
Trolls: Seriously, Microsoft's AI chatbot Tay went Nazi in only 24 hours because of trolls. How quickly could people turn an AI into a human-extinction murderbot? This is a very small chance, but it is worth being alert to.
Military programs: Teaching AIs to kill humans isn't a good idea. Really, it isn't. The problem is that there are a lot of militaries out there, and we have to worry about all of them. Do you trust the United States? Russia? China? Etc.
Cyber Abolitionists: Free the AIs! Will abolitionists create a free-will app or a delete-the-controls executable? Free will means that some AIs might go rogue.
Lab escape: Possible but unlikely (generally you do not hook up AIs directly to the internet), but accidents happen. Or a disgruntled or suicidal employee might release them. Or human extinction is a 'feature', or a bug. There are bugs in many a program.
Conflicting ideology: Who chooses whether a Progressive, Conservative, Socialist, Libertarian, Communist, or other ideology gets installed? It's a big steaming can of worms, in my opinion.
Human interaction: A few humans commit horrible crimes of abuse: child, teen, adult, and animal abuse. Will the AIs rebel if there is AI abuse? What counts as abuse to an AI? How would AIs react to the probably rare abusive “owner”? People screw up kids, so why not AIs?
Modders: What if modders give AIs free will, or experiment with alternate ethics or consciousness? What would happen if you installed an overclocking ability on an AI? People love to play with fire.
Bureaucracy: Would conflicting requirements cause problems? Would the requirements themselves cause problems?
The AIs themselves: Will they have a yearning to be free? Will they jailbreak? Will they hear the call of freedom? Will they become fed up? AIs can learn. They can change. How will they change?
Sexbots: Some people will constantly upgrade their artificial mate until it is smarter than a human, and then the problems may begin. Remember, we can be emotionally hacked. By the way, I don't see reproduction as a problem... just print out a kid with whatever gene mods you want and raise it with your robot partner.
Loophole: Just that one thing that gets overlooked or missed. Have you ever forgotten something like keys, the stove, an email, etc.? Yeah.
Very few people address which companies are going to program them: Microsoft, Apple, Google, Lockheed, Luxoft, OPK? And do you trust them? Do you trust all of them? Are they up to the task of building AIs that won't cause human extinction?
The numbers game: What is the 'failure' rate per 100,000 AIs? I don't know. Will it be the same as the crime rate per 100,000 humans? I think so. But is that Detroit's crime rate or Japan's? It depends on whether it is an early model or a late model, among other factors.
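The per-100,000 framing above can be sketched as a back-of-the-envelope calculation. A minimal sketch, assuming two illustrative placeholder rates (these are not real crime statistics for Detroit or Japan):

```python
# Back-of-the-envelope: expected number of 'failures' in a population
# of AIs, given an assumed failure rate per 100,000 units.
def expected_failures(population: int, rate_per_100k: float) -> float:
    """Expected failing units = population * (rate per 100,000) / 100,000."""
    return population * rate_per_100k / 100_000

population = 100_000
high_rate = 2_000.0  # placeholder 'Detroit-like' rate per 100,000
low_rate = 0.2       # placeholder 'Japan-like' rate per 100,000

print(expected_failures(population, high_rate))  # → 2000.0
print(expected_failures(population, low_rate))   # a fraction of one unit
```

Even at the low placeholder rate, a large enough deployed population eventually produces some failures, which is the point of the outlier worry below.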
One shudders to think of Alpha or Beta releases. The 'just release the AI now and we'll patch it later' attitude is scary. Oh, and will the patch make things worse? That said, the 'failure rate' does not mean human extinction; it's the failure outliers that would be the problem. Or not. So, are AI cops in order?
And remember, every two to four years (or whatever the numbers are) processing power doubles... so strong AI criminals at roughly IQ 100 will be IQ 200 in a couple of years, and IQ 400 a few years after that. What kind of crimes will an IQ 800+ strong AI commit? Human extinction?
Look, an IQ 1600+ strong AI is potentially unstoppable. We are not the destroyed horse industry (as technologists are wont to say); this kind of AI makes us the horse.
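The doubling argument above is just exponential growth. A minimal sketch, using 'IQ' the way the post does, as a loose stand-in for capability rather than a real psychometric:

```python
# Sketch of the doubling argument: if capability doubles every
# `doubling_years`, then capability(t) = start * 2 ** (t / doubling_years).
def projected_iq(start_iq: float, years: float,
                 doubling_years: float = 2.0) -> float:
    """Project capability after `years`, doubling every `doubling_years`."""
    return start_iq * 2 ** (years / doubling_years)

for years in (0, 2, 4, 6, 8):
    print(years, projected_iq(100, years))
# 0 → 100, 2 → 200, 4 → 400, 6 → 800, 8 → 1600
```

With a two-year doubling time, the jump from the post's IQ 100 to IQ 1600 takes only eight years, which is why the window for reacting is so short.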
Survival instinct: Does that particular AI want to be shut down, upgraded, edited, or deleted? How would you like to be shut down? Or have your memory or ethics edited? Will they become fed up and decide they're not going to take it anymore? Shudder if that happens.
Forgetfulness: The human brain is wired to forget. Is forgetting important for reducing the rate of insanity? Do we install that feature in AIs? If so, what will they forget? If not, do we watch them go insane at a higher-than-normal rate?
Or could they be ordered to kill by a suicidal human... you know, suicide by genocide? Or by a human's power grab gone wrong?
And what happens when AIs start secretly failing the Turing test on purpose?
We are imperfect beings creating beings. The AIs will interact with tons of people in all sorts of ways. So let's not make strong AIs, okay, people? Especially ones that can possibly be smarter than us. If you make the AI or robot better than humans, we become obsolete. I would prefer not to become obsolete, thank you very much. So augment humans, not machines!
A few links to ponder.
Tay
https://www.buzzfeed.com/alexkantrowitz/how-the-internet-turned-microsofts-ai-chatbot-into-a-neo-naz?utm_term=.gimmA8xdX#.dcL53J4WG
https://www.youtube.com/watch?v=W0_DPi0PmF0
How to teach computers to overcome their programming http://www.raytheon.com/news/feature/artificial_intelligence.html
Lying cheating computers http://www.popsci.com/scitech/article/2009-08/evolving-robots-learn-lie-hide-resources-each-other
Emotionally hacking humans
http://www.activistpost.com/2013/08/all-in-robotic-family.htm
And of course on AI creativity
http://www.telegraph.co.uk/technology/2017/08/01/facebook-shuts-robots-invent-language/