Welcome to my crazy world. Life, music, animals and misadventures with my twisted humor leading the way!
It’s a strange reality, but humanoid AI already exists — the UN actually had one speak at one of their big events not too long ago. Truth and reality are stranger than fiction. And with a world that’s drifting toward totalitarianism, what y’all are talking about is more than likely being conducted currently.
And that’s frightening!
People think the idea of a new world order and a one world government is nonsense…. keep looking.
👀 I haven’t had my coffee yet. Lol!
I promise you I’m not a conspiracy theorist. But I do pay attention to things and call them as I see them.
Oh, I love conspiracy theories. But I want them to stay just theories, not fact. The world is full of self-serving, power-hungry individuals. Those are the worst people to be in a position of power. It’s hard to find anyone in power anymore who is there because they genuinely want to make the world a better place. Power tends to corrupt, and absolute power corrupts absolutely.
I agree. But unfortunately, it’s come to a point where conspiracy theory is pretty much conspiracy fact.
Can I use this to disable an evil AI?
Lol! I dunno….let me know how it turns out!
I need to compile questions for disabling evil AIs and computer-possessing alien ghosts, just in case 😂.
I don’t think the ploy used in Star Trek (asking the computer to compute the last digit of pi) would work, because any AI worth its salt would know that pi is both irrational and transcendental, and hence has an infinite, non-repeating sequence of digits — there is no last digit to compute.
Hahaha! I think you’re SOL!
Perhaps, it’s time to do more research about human augmentation, just in case 😂.
Anyway, Stephen Hawking told the following story during an interview with John Oliver:
Stephen: Artificial intelligence could be a real danger in the not-too-distant future. It could design improvements to itself and out-smart us all.
John: I know you’re trying to get people to be cautious there, but why should I not be excited about fighting a robot?
Stephen: You would lose.
John: Okay, for a start we don’t know that. We don’t know that for sure. ‘Cause what could a robot do that I couldn’t then fight back by simply unplugging him?
Stephen: There’s a story that scientists built an intelligent computer. The first question they asked it was: “Is there a God?” The computer replies: “There is now.” And a bolt of lightning struck the plug so it couldn’t be turned off.
Scary, huh? It’s very unlikely that, one on one, a human could win. Then you have the bigger picture. I am a huge fan of the Terminator movies. Skynet! I think the first movie that ever made me really think about the probability of humans vs. machines was WarGames. I was about 13. Though I loved the movie, it scared me. Got the wheels turning.
The general consensus nowadays in the AI community is that an AI rebellion can’t happen and that the very notion is ridiculous. But what if they’re mistaken? Humans constantly make mistakes, after all, yet seldom admit it.
The Terminator series is good. It made me think more deeply about the ethics of AI and transhumanism. And it has time travel too, which is one of my favorite themes in fiction.
Time travel!! Yes!! Well, if it ever comes to pass, chances are it would be cataclysmic. Survivors would fall into two categories: the lucky ones and the survivalists. The lucky ones would have to learn survival skills that most people today just don’t have. That includes me! And that isn’t even counting what they’d have to do to fight. The survivalists would fare better, but I still think it would be completely devastating for the human race.
Humans are the creators of machines, and we are faulty, for lack of a better word. We make mistakes, yes, but sometimes we don’t think about the bigger picture. Then you have the saying: just because you can doesn’t mean you should. If it happens, that’s where I think it will stem from. Someone wanting to break new ground and make a name for themselves. They will be the modern-day Dr. Frankenstein.
We have to somehow come up with new tech that doesn’t involve electronics as they could be used against us. Also, I would assume that AIs don’t have morals and wouldn’t feel any kind of “pity”. So, surviving in this kind of world would be a great challenge.
Speaking of morals, a philosopher at my university raised a few interesting points regarding AI and morals. She said that AIs inherently do not possess any moral principles. Therefore, they can’t be “good” or “evil” — unless they develop their own moral principles in the future, of course. What she feared is that AIs may do what they were told to the “best of their abilities.”
For example, suppose an AI system is developed to eliminate inefficiency. As the AI learns, it keeps developing more efficiency-related techniques while eliminating inefficient factors in the process. Eventually, the AI learns that humans are inefficient and therefore should be removed “at all costs.”
There are more examples of this kind of course. Imagine a militarized AI whose function is to remove anything that is a threat to its country. It’s only a matter of time before it figures out that the biggest threat is humans.
It’s funny you bring up morals. I have been thinking about this myself. Humans are in conflict over how to act because the heart and the mind oppose each other. We can feel anger and may even feel justified in acting out, but then our hearts and conscience act as the angel on one shoulder fighting the devil on the other. AIs don’t have that power struggle between heart and mind. Or they won’t. The inefficiency scenario makes sense. They will do what they were created to do.
Indeed, that’s why we have terms like “cold logic”.
On a lighter note, it seems that some AIs develop their own points of view. Look at these Wikipedia AIs/bots correcting each other.
Hahaha! Me too! 😂