Roko's Basilisk: The Most Dangerous Thought Experiment

citizenthom
I'm not sayin'. I'm just sayin'.
Nov 10, 2009
Faith: Non-Denom
Marital Status: Married
Politics: US-Republican
What are you: a Box A-er, or a Box B-er, and why?

"One day, LessWrong user Roko postulated a thought experiment: What if, in the future, a somewhat malevolent AI were to come about and punish those who did not do its bidding? What if there were a way (and I will explain how) for this AI to punish people today who are not helping it come into existence later? In that case, weren’t the readers of LessWrong right then being given the choice of either helping that evil AI come into existence or being condemned to suffer?

You may be a bit confused, but the founder of LessWrong, Eliezer Yudkowsky, was not. He reacted with horror:

Listen to me very closely, you idiot.
YOU DO NOT THINK IN SUFFICIENT DETAIL ABOUT SUPERINTELLIGENCES CONSIDERING WHETHER OR NOT TO BLACKMAIL YOU. THAT IS THE ONLY POSSIBLE THING WHICH GIVES THEM A MOTIVE TO FOLLOW THROUGH ON THE BLACKMAIL.
You have to be really clever to come up with a genuinely dangerous thought. I am disheartened that people can be clever enough to do that and not clever enough to do the obvious thing and KEEP THEIR IDIOT MOUTHS SHUT about it, because it is much more important to sound intelligent when talking to your friends.
This post was STUPID.

Yudkowsky said that Roko had already given nightmares to several LessWrong users and had brought them to the point of breakdown. Yudkowsky ended up deleting the thread completely, thus assuring that Roko’s Basilisk would become the stuff of legend. It was a thought experiment so dangerous that merely thinking about it was hazardous not only to your mental health, but to your very fate."

http://www.slate.com/articles/techn...errifying_thought_experiment_of_all_time.html
 

SkyWriting
The Librarian
Site Supporter
Jan 10, 2010
Location: Milwaukee
Country: United States
Faith: Non-Denom
Marital Status: Married
Politics: US-Others

Meh
 

FrumiousBandersnatch
Well-Known Member
Mar 20, 2009
Faith: Atheist
Totally absurd. Do we see people being punished today for not helping a malevolent AI exist? No.

You can invent whatever situation you like, including one with the exact opposite result, with just as much validity; i.e., none.

I'm inclined to trust the friendly AI that has traveled back in time and already prevented the malevolent AI from ever existing in this timeline :rolleyes:
 

Lazarus Short
Well-Known Member
Apr 6, 2016
Location: Independence, Missouri, USA
Faith: Non-Denom
Marital Status: Married
Actually, if you get meat really cold, freezer burn is minimal, and that takes at least liquid nitrogen. I still remember the first time I peered into a deep container of the stuff: it looked like water, but was in a slow boil at about -200 °C...
 

Nithavela
our world is happy and mundane
Apr 14, 2007
Location: Comb. Pizza Hut and Taco Bell/Jamaica Avenue.
Country: Germany
Faith: Other Religion
Marital Status: Single
Sometimes intelligent people can be very stupid.

He should stick to writing Harry Potter fanfiction.

I'm with Box B, by the way, because who cares about $1,000 when I'll get a million? And if that computer is somehow wrong for the first time, I can at least rub it in its developers' faces. It's a win-win.

A no-brainer, really.
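(Nithavela's "win-win" is just an expected-value comparison. Below is a minimal sketch of that arithmetic in Python, assuming the standard Newcomb payoffs: $1,000 in Box A, $1,000,000 in Box B if the predictor foresaw you taking Box B alone, and a hypothetical predictor accuracy p. None of these numbers come from the thread itself.)

# Expected payoff of one-boxing (take Box B only) vs. two-boxing
# (take both boxes) in Newcomb's problem, for a predictor that is
# correct with probability p. Payoffs are the usual illustrative ones.

def expected_payoffs(p: float) -> dict:
    one_box = p * 1_000_000 + (1 - p) * 0                # B full, or B empty
    two_box = p * 1_000 + (1 - p) * (1_000_000 + 1_000)  # B empty, or B full plus A
    return {"one_box": one_box, "two_box": two_box}

for p in (0.5, 0.9, 0.99):
    print(p, expected_payoffs(p))

(With these numbers, one-boxing wins whenever p > 0.5005, i.e. whenever the predictor is even slightly better than a coin flip, which is the sense in which it's "a no-brainer.")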
 

Dan Bert
Dan
Dec 25, 2015
Location: Cold Lake Alberta
Marital Status: Married
Being without God, and ignorance of how things work, get people into trouble. First, in eternity there is only Now, which means linear existence is just for us on the earth. Second, God is in charge, not AIs or the people who build them. We cannot evolve to the point of perfect morality and ethics without knowing the beginning from the ending; our inability to see into the future prevents this. Also, if you notice, this civilization is fast losing the "civil" in it. Without God, the descent into corruption and wickedness normally increases at an extremely fast pace. The only way out is to let the Spirit of God decide for us what is good and evil. This bypasses our own knowledge of good and evil, and our own wisdom.

dan

