Problems with Modern AI Tools

Stephen3141

Well-Known Member
Mar 14, 2023
Southwest
Country: United States
Faith: Catholic
Marital Status: Private

There is a problem with the way these new AI tools are being
trained. The training allows them to try every possible path to achieving
a goal, and some of those paths are unethical or immoral.

This is tied up with a design flaw: the tools have not had
a credible Moral-Ethical model installed in them.

Without any ME model installed, these tools "learn" by trying out every possible
path to a goal. And some of those paths inevitably turn out to be unacceptable
in the context of Christian morality.
 

mindlight

See in the dark
Site Supporter
Dec 20, 2003
London, UK
Country: Germany
Gender: Male
Faith: Christian
Marital Status: Married

Stephen3141 said:
> There is a problem in the way in which these new AI tools are being
> trained. The training allows them to try all possible paths to achieving
> a goal. And some of these paths are unethical-immoral.
>
> This is tied up with the design dysfunction, that the tools have not had
> a credible Moral-Ethical model installed in them.
>
> Without any ME model installed, these tools "learn" by trying out all possible
> paths to a goal. And some of these paths, inevitably turn out to be unacceptable,
> in the context of Christian morality.

If you tell a computer to win a game, then in the case of an AI it will do everything it can to achieve that. If you had instead asked it to win the game fairly, without breaking the rules of the game or the laws of the land, it would have done that. AI is not intelligent when it comes to setting its own purpose; it has no soul. The question you raise is whether preprogrammed limits should, or indeed can, be set within which AI could then operate.

Your post raises interesting questions about proprietary rights versus generic access. Data protection laws say there is a thing called proprietary knowledge. If machine learning models do not respect property and regard all data as simple 1s and 0s, then you are talking about the collapse of the world financial transaction system; title deeds to property and bank accounts would all be up for grabs.

Of course, some with a socialist bent want to see such a great equalization occur: the end of proprietary knowledge, and the dawn of a new open-code world where everybody has access to the answers and the game plan. Equally, the same technology could be used to centralize power in the hands of a dictator. But is our only choice chaos or oppression?

AIs cannot be allowed to steal, distort or indeed challenge the basic moral fabric of society. But since there will always be people who do not respect the rules, there will always be AIs that try to circumvent them. Policing the situation is the real dilemma. A super-AI that monitors other AIs' activities could end up being worse than the problem it is designed to prevent, though. We live in interesting times.
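The point about telling a computer to win at any cost can be made concrete with a few lines of toy Python. This is purely illustrative: the strategies, scores, and penalty values are invented for the sketch, and no real AI system works this simply. It just shows how an optimizer that maximizes a raw score will pick a rule-breaking path unless rule-breaking is written into the objective itself.

```python
# Toy illustration (hypothetical values, not any real AI system): an
# optimizer that simply picks the highest-scoring strategy will "cheat"
# unless rule-breaking is built into the objective as a penalty.

# Each candidate strategy: (name, game_score, breaks_rules)
STRATEGIES = [
    ("play fairly", 70, False),
    ("exploit a bug", 95, True),
    ("bribe the referee", 100, True),
]

def best_strategy(strategies, rule_penalty=0):
    """Return the name of the strategy with the highest penalized score."""
    def score(strategy):
        _name, game_score, breaks_rules = strategy
        return game_score - (rule_penalty if breaks_rules else 0)
    return max(strategies, key=score)[0]

# Objective is the raw score only: the optimizer picks a rule-breaking path.
print(best_strategy(STRATEGIES))                     # -> bribe the referee

# Objective penalizes rule-breaking heavily: fair play wins.
print(best_strategy(STRATEGIES, rule_penalty=1000))  # -> play fairly
```

The interesting part is that nothing about the optimizer changes between the two calls; only the objective does. That is the sense in which "preprogrammed limits" would have to live inside the goal the system is given, not alongside it.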
 

MyGodIsStrongerThanI

Regular Member
Jan 6, 2015
Country: Australia
Gender: Female
Faith: Non-Denom
Marital Status: Single
Politics: AU-Greens

To an extent, this has always been a thing with machine learning. If you look at some of the old strategy computer games from the '90s, one of the ways the computer would control difficulty was to give the artificial player extra resources at certain points in a match to give it an edge over a human player. It got to the point where even something like Age of Empires, which still did it, just not to the same extent, was seen as a step forward specifically because it wasn't doing it as much.

I don't know why it's a shock that modern artificial intelligence does the same thing. Of course it does. Modern AI being used for online chess matches is still working with the same built-in biases that game designers were using thirty years ago, because there is a direct lineage there. I wouldn't be too surprised if it came out that some of the people working on the current generation of artificial intelligence also worked on the AI systems for games in the '90s.

While this is undoubtedly a problem, I don't think it's the only problem with AI, especially generative AI, as it exists today. It is becoming an increasing problem that university students use generative AI programs such as ChatGPT to "help" with their essays, even though it is known to generate inaccuracies as often as not. More to the point, ChatGPT or Grok or whatever else someone might decide to use can't cite a source, so you can't trace an idea as well as you'd expect to in an academic setting. That's a huge concern in that context, because even thirty years ago people lost their jobs or scholarships, even at middle-of-the-road universities, over a poorly cited source.

In more practical terms, though, this is also going to have a huge impact on the legal system. There was a news article published by The Guardian two or three weeks ago revealing that lawyers have started to cite cases that don't exist, specifically because they've had ChatGPT do some of the writing for them and haven't bothered to check what it's "written". Thankfully, these got caught before anything came of it, but it's only a matter of time before something like this gets through.

Of course, this was here in Australia. I don't know whether it happens overseas or not, but realistically speaking, it probably does.
 

Stephen3141

Well-Known Member
Mar 14, 2023
Southwest
Country: United States
Faith: Catholic
Marital Status: Private

I hear you.

Christians, worldwide, need to ask some questions...

1. What comes first: a coherent (philosophical) worldview, or a gathering of opinions about worldviews, done by computerized search engines?
2. For those who think that they can find a coherent worldview, and "truth", from the browsing of a search engine, I would say that there is no guarantee that a computerized search engine SHOULD encounter all truth.

Although some Christian groups argue that a worldview should not be
part of a Christian's thinking (they argue that we should go directly back to the
Bible to get "truth"), from a philosophical viewpoint the Bible presents a
coherent worldview. And this realistic worldview must be embraced, in order
that we can accept that we live in the "shared reality" that the Bible presents.

A basic philosophical worldview is UPSTREAM of being able to understand
the Scriptures, as an artifact that inhabits our shared reality.
 