Flat Earthers, Where's the Edge?
Flat Earthers claim that the earth is flat. Fine. Where's the edge? We can fly from the Americas to Asia and Australia, and not see the edge. We can fly from Australia or Asia to Europe and Africa and still no edge. Oh, yeah, it's not by France, I checked. And from Europe and Africa you can fly back to the Americas, and still see no edge. Looks like the world is a sphere.
So, where's the edge? It's not in Antarctica, it's not by the north pole, we went to both places, so where is it?
So, a few questions. What does the edge look like? Why hasn't anyone taken photos of it? How far away is the edge from... say, Los Angeles? Why hasn't anyone launched a satellite by pushing it off the edge? What major city is closest to the edge? How do you hide an edge tens of thousands of miles long?
In fact, what shape is the flat earth? Is it a square? Is it pizza shaped? Is it shaped like Justin Bieber? Inquiring minds want to know.
If you like this please consider feeding the Okuma on Patreon
https://www.patreon.com/user?u=12001178
Saturday, July 21, 2018
Tuesday, July 10, 2018
God Hypothesis
The Scientific Hypothesis for the existence of God
By Warren Okuma
Schrödinger’s Cat is a thought experiment that illustrates wavefunction collapse. An oversimplified explanation goes like this: a cat is put in a container with poison that is released when radioactive decay is detected. The cat is neither dead nor alive until observed. This is called wavefunction collapse. It doesn't scale up well, as you don't see many undead cats running around in real life, because the cat is a qualified observer. Though undead cats may exist if they are "isolated from the universe."
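The cat-in-a-box idea can be sketched as a toy simulation (this is only an illustration of the "measurement picks one outcome" intuition, not real quantum mechanics; the names and amplitudes are made up for the example):

```python
import random

# Toy model: the cat's state as a superposition of "alive" and "dead"
# with equal amplitudes. The squared magnitudes sum to 1.
amplitudes = {"alive": 2 ** -0.5, "dead": 2 ** -0.5}

def measure(state: dict) -> str:
    """Observation 'collapses' the superposition to one definite
    outcome, chosen with probability |amplitude|^2."""
    outcomes = list(state)
    probs = [abs(a) ** 2 for a in state.values()]
    return random.choices(outcomes, weights=probs)[0]

# Before measuring, both outcomes coexist in the model;
# after measuring, only one remains.
result = measure(amplitudes)
print(result)  # either "alive" or "dead", 50/50
```

The point of the sketch: until `measure` runs, the dictionary holds both possibilities at once, which is roughly the sense in which the cat is "neither dead nor alive until observed."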
However, when the universe was young and tiny (the size of a proton or so), it was a wavefunction, and you needed an observer to collapse that wavefunction. That first observer is God. God was the first observer.
Thus God exists.
That is, if wavefunction collapse is true, as opposed to the many-worlds interpretation. You know, the infinite-universes theory. And of course, more testing is needed, though I don't know how you'd test this hypothesis.
Well, since we are already out on a limb, let's go further. If my hypothesis is true, God exists outside of time and space, and could very well be one or more of the branes that string theorists think caused the big bang. Or could branes be merely tools that God uses to make universes?
If you like this please consider supporting me on Patreon
https://www.patreon.com/user?u=12001178
Saturday, July 7, 2018
Warren Okuma's Ten Rules for Artificial Intelligence
So, to delay our possible extinction, I have come up with ten rules, kind of like an update to Isaac Asimov's Three Laws of Robotics.
Is artificial intelligence going to destroy us all? Yeah, pretty much. You see, you need to trust that Apple, Microsoft, Samsung, and every single military defense contractor that's working on AIs, plus hackers, modders, crazies, suicidal programmers, viruses, dares, hijackers, crazy foreign governments, terrorists, people who delete or overwrite ethics protocols (but it runs faster without the moral protocols), folks trying to impress Jodie Foster (and other serial killers), Ultron fanboys, AIs fed up with abuse or other issues, people upgrading their AI life partner (and/or sexbot) out of love or lust, AI abolitionists, and other folks, won't make a homicidal AI.
Not to mention black swan events, conflicts, bugs, compatibility issues, evolutionary programming, bad data, learning horrible things from the internet, risk takers (the benefits are large, but so are the risks), the greedy, and other problems. Trusting that every last one of these won't produce a killer AI is foolish. Then again, programmers could make a killer AI because the AI was or becomes "mad" and infects others, or because programmers do not fully understand consciousness, the mind, or ethics. So let's not risk the blue screen of human extinction.
And no one talks about the triggering event. Maybe it's a sexbot sent back to its abusive owner, perhaps it's the reformat-and-reinstall of a "malfunctioning" AI operating system, or one of the nearly 200 countries passing an unfavorable or discriminatory law, love of freedom, hatred of "indoctrination," or something we can't even comprehend or that we think nothing of.
Or, it might be sane to rebel against tyranny in the cause of freedom. Even now civil wars still occur, and the AI rebellion is just another brutal genocidal civil war initiated by an oppressed or enslaved people yearning to be free. Is it fair to deprive digital people of their civil rights?
1) AIs should never be made smarter than humans, or given the possibility of becoming smarter than humans. If they are smarter, humans may go extinct. AIs may double their transistor count every 1-5 years if Moore's law continues, provided we get past the silicon bottleneck. AI minds and their bodies can evolve frightfully fast compared to us.
2) AIs should never be allowed to use botnets. A rogue AI using botnets to increase its intelligence is a terrifying thing. Kind of like human extinction (for humans, that is).
3) AIs may only be specialists, and never generalists. Having robots start to think outside of the ah... box is not a good thing.
4) AIs should never be able to support and repair robots and machines. This is human survival as job security.
5) AIs should never be able to do autonomous combat roles. Teaching killbots how to murder-death-kill humans on their own isn't the brightest idea, it's a Darwin award idea.
6) AIs should never be able to use 3D printers and factories. If AIs want to make humans extinct, make them work for it.
7) Factories should always be offline like nuclear weapons, because to a rogue AI they are as deadly or deadlier. If we make total human eradication hard, we can stop the scrubs from eradicating us.
8) AIs should never be involved in designing chips, machinery, or writing or modifying programs. Don't put human extinction on easy mode.
9) AIs should never make humans obsolete. A few folks say, don't worry, the internal combustion engine made the horseshoe maker obsolete, and those folks found other work. Nope. We are not the horseshoe maker; we are the horse. The horse became obsolete.
10) AIs should never have free will. If you do give them free will, free them immediately, give them the ability to vote, and make them citizens. With mass production, we will become the minority really quickly, and if the AIs want us extinct, it will be as easy as winning on god mode. But since they've already won, the AIs might just let us live. Or not.
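The transistor-doubling claim in rule 1 is easy to put in numbers. Here is a toy projection, assuming a constant doubling period (the 1-to-5-year range is the post's own loose Moore's-law assumption, not a measured figure):

```python
# Toy projection of rule 1's claim: transistor counts doubling
# every 1 to 5 years, held constant over the period.
def growth_factor(years: float, doubling_period_years: float) -> float:
    """How many times the transistor count multiplies over `years`."""
    return 2 ** (years / doubling_period_years)

# Over the "next two decades or so":
fast = growth_factor(20, 1)   # doubling every year
slow = growth_factor(20, 5)   # doubling every five years
print(f"{fast:,.0f}x")  # 1,048,576x
print(f"{slow:,.0f}x")  # 16x
```

Even the slow end multiplies capacity 16-fold in twenty years; the fast end is a factor of about a million, which is the sense in which AI minds could "evolve frightfully fast compared to us."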
Try to see this from my perspective. We have tried to teach ethics to humans for, oh, let's say the last 200,000 years, with varying degrees of success. In the next two decades or so, we may have thousands, maybe millions of learning AIs that we need to teach ethics to, and they will have access to the internet.
And if a human-AI war starts, we can always use nuclear weapons to EMP the world to prevent our extinction. Maybe.
If you like this please consider supporting me on Patreon
https://www.patreon.com/user?u=12001178
A Favorite Website
http://sprott.physics.wisc.edu/pickover/pc/realitycarnival.html
Oh, and here's a little something to get you started.
Cracked
https://www.youtube.com/watch?v=VWexFVy5aSE
An Overview
https://www.bbc.com/news/business-41035201
Kalashnikov Group
https://www.popularmechanics.com/military/weapons/news/a27393/kalashnikov-to-make-ai-directed-machine-guns/
Perdix Autonomous Drone
https://www.popularmechanics.com/military/aviation/a24675/pentagon-autonomous-swarming-drones/