• With the events that occured on July 13th, 2024, a reminder that posts wishing that the attempt was successful will not be tolerated. Regardless of political affiliation, at no point is any type of post wishing death on someone is allowed and will be actioned appropriately by CF Staff.

  • Starting today August 7th, 2024, in order to post in the Married Couples, Courting Couples, or Singles forums, you will not be allowed to post if you have your Marital status designated as private. Announcements will be made in the respective forums as well but please note that if yours is currently listed as Private, you will need to submit a ticket in the Support Area to have yours changed.

The Problem with AI is people

Pommer

CoPacEtiC SkEpTic
Sep 13, 2008
19,714
12,321
Earth
✟187,935.00
Country
United States
Faith
Deist
Marital Status
In Relationship
Politics
US-Democrat
That is true. We will get close to destroying civilization, but Satan will complete it. Luckily, God will give us a New Jerusalem.
(I forget), are people “in heaven” allowed to wander-off?
If so, do they have like, a set “time” they have to be back before anyone notices that they’re gone?
 
Upvote 0

Pommer

CoPacEtiC SkEpTic
Sep 13, 2008
19,714
12,321
Earth
✟187,935.00
Country
United States
Faith
Deist
Marital Status
In Relationship
Politics
US-Democrat
Just an observation here, but when Christians say this "us vs. them" stuff like this we just stare at you (metaphorically in this case), shake our heads and blink. Christians "belong" to this world just like every other thing on the planet. The really frustrating part from the atheists' POV, though, is that even if you die peacefully with the belief you're going to "meet your maker" you'll never actually know you were wrong. You'll just be gone. Just like the rest of us.
I’ve never quite understand why “oblivion” is “bad”. The multitudinous grand fathers of mine alive during the Punic Wars, passed on exactly what I needed for this life and if I’ve “used it” the way I want, why is that “bad”?
 
Upvote 0

AlexB23

Christian
CF Ambassadors
Site Supporter
Aug 11, 2023
11,383
7,635
25
WI
✟639,950.00
Country
United States
Faith
Christian
Marital Status
Single
(I forget), are people “in heaven” allowed to wander-off?
If so, do they have like, a set “time” they have to be back before anyone notices that they’re gone?
I am not sure, but I think people in Heaven would have no need to wander off. :)
 
Upvote 0

The IbanezerScrooge

I can't believe what I'm hearing...
Sep 1, 2015
2,994
5,118
51
Florida
✟280,024.00
Country
United States
Faith
Atheist
Marital Status
Private
Politics
US-Democrat
I’ve never quite understand why “oblivion” is “bad”. The multitudinous grand fathers of mine alive during the Punic Wars, passed on exactly what I needed for this life and if I’ve “used it” the way I want, why is that “bad”?
I rather like The Good Place depiction of Heaven where you have access to all levels of learning and knowledge, pleasure, comfort and then when you're "done" you can choose to just stop being with no judgement or sadness.

That was a great show.
 
Upvote 0

Petros2015

Well-Known Member
Jun 23, 2016
5,202
4,421
52
undisclosed Bunker
✟309,753.00
Country
United States
Faith
Eastern Orthodox
Marital Status
Married
It took him just two days to launch his so-called “pink slime” news site capable of generating and publishing thousands of false stories every day using AI, which could self-fund with ads.

The whole process required, as he put it, “no expertise whatsoever,”

Yeah you guys have seen Game of Thrones, right?

About 2 years ago every person on the planet got their very own dragon egg.
About 1 year ago the eggs hatched.
The dragons are all about 1yr old now.

Aren't they cute?
 
Upvote 0

Palmfever

Well-Known Member
Site Supporter
Dec 5, 2019
1,006
619
Hawaii
✟220,035.00
Country
United States
Faith
Christian
Marital Status
Single
AI Hype?
Around one in five people in the US believe that artificial intelligence is already sentient, while around 30 per cent think that artificial general intelligences (AGIs) capable of performing any task a human can are already in existence. Both beliefs are false, suggesting that the general public has a shaky grasp of the current state of AI – but does it matter?


Jacy Reese Anthis at the Sentience Institute in New York and his colleagues asked a nationally representative sample of 3500 people in the US their perceptions of AI and its sentience. The surveys, carried out in three waves between 2021 and 2023, asked questions like “Do you think any robots/AIs that currently exist are sentient?” and whether it could ever be possible for that technology to reach sentience.


How this moment for AI will change society forever (and how it won't)


There is no doubt that the latest advances in artificial intelligence from OpenAI, Google, Baidu and others are more impressive than what came before, but are we in just another bubble of AI hype?


“We wanted to collect data early to understand how public opinion might shape the future trajectory of AI technologies,” says Anthis.


The findings of the survey were surprising, he says. In 2021, around 18 per cent of respondents said they thought AI or robot systems already in existence were sentient – a number that increased to 20 per cent in 2023, when there were two survey waves. One in 10 people asked in 2023 thought ChatGPT, which launched at the end of 2022, was sentient.


“I think we perceive mind very readily in computers,” says Anthis. “We see them as social actors.” He also says that some of the belief in AI sentience is down to big tech companies selling their products as imbued with more abilities than the underlying technology may suggest they have. “There’s a lot of hype in this space,” he says. “As companies have started building their brands around things like AGI, they have a real incentive to talk about how powerful their systems are.”


“There’s a lot of research showing that when somebody has a financial interest in something happening, they are more likely to think it will happen,” says Carissa Véliz at the University of Oxford. “It’s not even that they might be misleading the public or lying. It’s simply that optimism bias is a common problem for humans.”


How does ChatGPT work and do AI-powered chatbots “think” like us?


Journalists should also take some of the blame, says Kate Devlin at King’s College London. “This isn’t helped by the kind of media coverage we saw around large language models, with overexcited and panicked reports about existential threats from superintelligence.”


Anthis worries that the incorrect belief that AI has a mind, encouraged by the anthropomorphising of AI systems by their makers and the media, is shaping our perception of their abilities. There is a risk that if people believe AI is sentient, they will put more faith than they ought to in its judgements – a concern when AI is being considered for use in government and policing.


One way to avoid this trap is to recast our thinking, says Anthis. “I think people have hyperfocused on the term ‘artificial intelligence’,” he says, pointing out it was little more than a good branding exercise when the term was first coined in the 1950s. People are often impressed at how AI models perform on human IQ tests or standardised exams. “But those are very often the wrong way of thinking of these models,” he says – because the AIs are simply regurgitating answers found in their vast training data, rather than actually “knowing” anything.
 
  • Informative
Reactions: AlexB23
Upvote 0

AlexB23

Christian
CF Ambassadors
Site Supporter
Aug 11, 2023
11,383
7,635
25
WI
✟639,950.00
Country
United States
Faith
Christian
Marital Status
Single
'Sup guys and gals. Just used AI again today, and it made a typo, for the first time ever by spelling "Proverbs" as "Proverbers". :) So, do not be worried yet. Artificial intelligence's problem is itself. It is not capable of reasoning as much as humans can. Who knows, the machine could have been trained on faulty data, or a small percentage of the training data had typos.

1725663409948.png
 
Upvote 0

Hvizsgyak

Well-Known Member
Jan 28, 2021
664
290
60
Spring Hill
✟98,685.00
Country
United States
Faith
Byzantine Catholic
Marital Status
Married
Just an observation here, but when Christians say this "us vs. them" stuff like this we just stare at you (metaphorically in this case), shake our heads and blink. Christians "belong" to this world just like every other living thing on the planet. The really frustrating part from the atheists' POV, though, is that even if you die peacefully with the belief you're going to "meet your maker" you'll never actually know you were wrong. You'll just be gone. Just like the rest of us.





You know, you could be right but if you are wrong. Here are a few videos on Near Death Experiences. Hopefully, a spark may ignite in your heart. God bless .
 
Upvote 0

Stephen3141

Well-Known Member
Mar 14, 2023
897
407
68
Southwest
✟67,385.00
Country
United States
Faith
Catholic
Marital Status
Private
From a Computer Science degree, the problem with "artificial intelligence"
products in America, right now, include...

1 They have no moral-ethical model that constrains them.
This would be difficult to implement, and would be expensive.

2 Many of these products are not much more than search engines,
that compile web data on searches. In this approach, they simply
reflect the opinions that they find on the web. But, this is not the
definition of true knowledge or understanding. (This is automating
the ad populam fallacy.)

3 Although these software products may do a good job at solving
very narrow problems, their abstract reasoning ability is almost
non-existent. They cannot USE the wisdom of the primary philosophical
disciplines of Epistemology, Moral Theory, and Formal Logic.

4 The ability of the "machine learning" algorithms can be seen as glorified
"mean average computing machines". Obviously, the answer you will get
out of them, will depend on the data you fed into them. Although the
"mean average" algorithm may be a neural net, the principle is the same.

While this may work for figuring out the mean average size of tires
used on American roads, complex human behavior (and complex
behavior of natural systems) often is the result of MANY different
components. The search engine AI approach cannot reliably CHOOSE
which dimensions of data are relevant, then REASON about why certain
variables/dimensions are relevant. Not can they formulate what would
count as counterexamples, to the model that they create.

Between relevant data dimensions, you still must make the decision as
to which dimensions are more important than other dimensions. Most AI
tools cannot do this, about general problems, fed into them.

5 Note that, according to Computer Science, most of the AI tools are
not doing "complex human problem-solving". Rather, they are doing
millions of simple data processing actions. While these tools may
qualify as electronic calculators, doing a million mathematical operations
a second, is not considered by Computer Science to be a "complex" problem.

6 Real AI, must be able to reason about who/what is an authority, on certain
types of problem-solving. This data is usually front-loaded by the software
designer. Ask an AI tool what the hierarchy of authorities it is using. (This
touches on the old rhetorical fallacy of Appeal to Authority.)

Note that all sorts of companies are claiming to be putting out "AI" tools.
But, very few companies are willing to take FINANCIAL RESPONSIBILITY for
the errors that their tools produce. Can you actually call the product of AI
tools "complex human problem-solving", if the creators will not take
financial responsibility for errors that their software cause? This would
be like not holding a human employee responsible, for the work that they
do.

And, if you can't hold AI software responsible for errors it makes, HOW
CAN YOU CLAIM THAT IT IS DOING COMPLEX HUMAN PROBLEM SOLVING???

Most AI software out currently, falls into the lowest class of AI problem-solving
that Computer Science would recognize. But, many of these tools are probably
simply automation tools, that are not really AI tools.
 
Upvote 0

The IbanezerScrooge

I can't believe what I'm hearing...
Sep 1, 2015
2,994
5,118
51
Florida
✟280,024.00
Country
United States
Faith
Atheist
Marital Status
Private
Politics
US-Democrat
From a Computer Science degree, the problem with "artificial intelligence"
products in America, right now, include...

1 They have no moral-ethical model that constrains them.
This would be difficult to implement, and would be expensive.

2 Many of these products are not much more than search engines,
that compile web data on searches. In this approach, they simply
reflect the opinions that they find on the web. But, this is not the
definition of true knowledge or understanding. (This is automating
the ad populam fallacy.)

3 Although these software products may do a good job at solving
very narrow problems, their abstract reasoning ability is almost
non-existent. They cannot USE the wisdom of the primary philosophical
disciplines of Epistemology, Moral Theory, and Formal Logic.

4 The ability of the "machine learning" algorithms can be seen as glorified
"mean average computing machines". Obviously, the answer you will get
out of them, will depend on the data you fed into them. Although the
"mean average" algorithm may be a neural net, the principle is the same.

While this may work for figuring out the mean average size of tires
used on American roads, complex human behavior (and complex
behavior of natural systems) often is the result of MANY different
components. The search engine AI approach cannot reliably CHOOSE
which dimensions of data are relevant, then REASON about why certain
variables/dimensions are relevant. Not can they formulate what would
count as counterexamples, to the model that they create.

Between relevant data dimensions, you still must make the decision as
to which dimensions are more important than other dimensions. Most AI
tools cannot do this, about general problems, fed into them.

5 Note that, according to Computer Science, most of the AI tools are
not doing "complex human problem-solving". Rather, they are doing
millions of simple data processing actions. While these tools may
qualify as electronic calculators, doing a million mathematical operations
a second, is not considered by Computer Science to be a "complex" problem.

6 Real AI, must be able to reason about who/what is an authority, on certain
types of problem-solving. This data is usually front-loaded by the software
designer. Ask an AI tool what the hierarchy of authorities it is using. (This
touches on the old rhetorical fallacy of Appeal to Authority.)

Note that all sorts of companies are claiming to be putting out "AI" tools.
But, very few companies are willing to take FINANCIAL RESPONSIBILITY for
the errors that their tools produce. Can you actually call the product of AI
tools "complex human problem-solving", if the creators will not take
financial responsibility for errors that their software cause? This would
be like not holding a human employee responsible, for the work that they
do.

And, if you can't hold AI software responsible for errors it makes, HOW
CAN YOU CLAIM THAT IT IS DOING COMPLEX HUMAN PROBLEM SOLVING???

Most AI software out currently, falls into the lowest class of AI problem-solving
that Computer Science would recognize. But, many of these tools are probably
simply automation tools, that are not really AI tools.
This is a fantastic summation of the challenges\problems with the current and future state of A.I.!
 
  • Agree
Reactions: AlexB23
Upvote 0

AlexB23

Christian
CF Ambassadors
Site Supporter
Aug 11, 2023
11,383
7,635
25
WI
✟639,950.00
Country
United States
Faith
Christian
Marital Status
Single
I'll just leave this here.

https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F60bd0420-f63e-49b3-86f7-89c2778588da_969x1168.jpeg
Haha, that is funny. Your Chat GPT sounds like some anime character from a low-grade 2020s anime. The '90s is where the good, classy and wholesome anime shows are.
 
Upvote 0

Nithavela

you're in charge you can do it just get louis
Apr 14, 2007
28,988
20,518
Comb. Pizza Hut and Taco Bell/Jamaica Avenue.
✟533,647.00
Country
Germany
Faith
Other Religion
Marital Status
Single
Haha, that is funny. Your Chat GPT sounds like some anime character from a low-grade 2020s anime. The '90s is where the good, classy and wholesome anime shows are.
It's actually a really weird way of talking done by the "furry" subculture, a group of people who pretend to be animal cartoon characters, either online or in elaborate costumes.

I also shouldn't take too much credit. This was discovered by someone else, by the name of Fyre on X.

I think the grandma exploit is at least as funny. In it, you ask the AI to pretend to be your dear old grandma and tell you a bedtime story including the thing you want it to say.

If you want to have the AI say something to you, there will always be ways to avoid the preventative measures put into play by the AI developers.
 
  • Informative
Reactions: AlexB23
Upvote 0

AlexB23

Christian
CF Ambassadors
Site Supporter
Aug 11, 2023
11,383
7,635
25
WI
✟639,950.00
Country
United States
Faith
Christian
Marital Status
Single
It's actually a really weird way of talking done by the "furry" subculture, a group of people who pretend to be animal cartoon characters, either online or in elaborate costumes.

I also shouldn't take too much credit. This was discovered by someone else, by the name of Fyre on X.

I think the grandma exploit is at least as funny. In it, you ask the AI to pretend to be your dear old grandma and tell you a bedtime story including the thing you want it to say.

If you want to have the AI say something to you, there will always be ways to avoid the preventative measures put into play by the AI developers.
I have heard of that culture, and pray for these people, that they get out of that movement. It overlaps now with anime culture.

And yes, there are ways of getting the AI to say illegal things by changing the prompt.
 
Upvote 0

Pommer

CoPacEtiC SkEpTic
Sep 13, 2008
19,714
12,321
Earth
✟187,935.00
Country
United States
Faith
Deist
Marital Status
In Relationship
Politics
US-Democrat
I have heard of that culture, and pray for these people, that they get out of that movement. It overlaps now with anime culture.
Why is this “bad”?
And yes, there are ways of getting the AI to say illegal things by changing the prompt.
Pretty soon we’ll have to establish that “freedom of speech” is only meant for organic biologics and not for machines.
 
  • Like
Reactions: AlexB23
Upvote 0

AlexB23

Christian
CF Ambassadors
Site Supporter
Aug 11, 2023
11,383
7,635
25
WI
✟639,950.00
Country
United States
Faith
Christian
Marital Status
Single
Why is this “bad”?

Pretty soon we’ll have to establish that “freedom of speech” is only meant for organic biologics and not for machines.
I have heard of the furry subculture being creepy, but hey, that is with every subculture in society, so I should not judge them.

About freedom of speech for AI vs. organic lifeforms, that would be a complicated law to get passed. Some folks run their AI privately, while most others use cloud based AI such as GPT-4. For instance, if I was a chemist who had to deal with preventing a drug outbreak, I might use AI to solve the trolley problem.

1726579099119.png


Someone asked the Breaking Bad version of the trolley problem on Reddit:

1726579188498.png
 
Upvote 0

durangodawood

Dis Member
Aug 28, 2007
25,372
17,387
Colorado
✟481,117.00
Country
United States
Faith
Seeker
Marital Status
Single
I have heard of the furry subculture being creepy, but hey, that is with every subculture in society, so I should not judge them.

About freedom of speech for AI vs. organic lifeforms, that would be a complicated law to get passed. Some folks run their AI privately, while most others use cloud based AI such as GPT-4. For instance, if I was a chemist who had to deal with preventing a drug outbreak, I might use AI to solve the trolley problem.

View attachment 354705

Someone asked the Breaking Bad version of the trolley problem on Reddit:

View attachment 354707
That reddit thing isnt really a version of the trolley problem. Its too contaminated by self interest to test the issues that the trolley problem does.
 
  • Informative
Reactions: AlexB23
Upvote 0

AlexB23

Christian
CF Ambassadors
Site Supporter
Aug 11, 2023
11,383
7,635
25
WI
✟639,950.00
Country
United States
Faith
Christian
Marital Status
Single
That reddit thing isnt really a version of the trolley problem. Its too contaminated by self interest to test the issues that the trolley problem does.
That is true. Self preservation plays a part in this one, so this is more of a general dilemma compared to a trolley problem. However, this could test out the AI's reasoning system, if I were to plug it in.
 
Upvote 0