The EU is Trying to be Responsibly AI-Friendly

Stephen3141


For a few years now, I have openly criticized the algorithms that are (very loosely)
called "artificial intelligence" in America. These comments come from a person
with an M.S. in Computer Science and Artificial Intelligence.

My criticism has been that the "neural net" model algorithms are the WEAKEST
form of artificial inference, but probably the MOST ACCESSIBLE to people who
like to play with mathematical distributions (whether or not those distributions
reflect relevant inputs).

The European Union HAS recognized that these "AI" tools can be used to automate
massive crime rings, and need to be regulated. This, the EU has begun to do.

The PROBLEM with these neural net algorithms is that THEY MUST BE
TRAINED. The answers they put out depend on the information they
are trained on, and on massive brute-force search through piles
of "data".

I point out many similarities between trying to integrate a moral-ethical
model (specifically a Christian one) into formal logic, and the problem of trying
to integrate a moral-ethical model into computer algorithms that run on
statistical distributions (such as neural nets)...
---------- ----------

"Machine Learning Parallels

In a strange way, the problems that Computer Science has encountered when working on Artificial Intelligence algorithms and machine learning (ML) algorithms, can shed light on the problem that human beings have in complex problem-solving. And, the challenges of getting morality/ethics into artificial algorithms directly parallel getting ME principles into formal arguments/proofs (and into the heads of students).

Studying the errors in machine learning algorithms also shines a light on other aspects of dysfunctional arguments. An example is arguments that try to demonstrate conclusions based on types of information that are not relevant to the conclusions. These are often the same errors made by those who invoke arbitrary blame groups to explain all the troubles in their life. (Determining true causality is at the heart of deductive logic.)

The machine learning problem underlines other problems that philosophers have encountered for centuries.

Much of what the public informally calls Artificial Intelligence (AI) does not meet the high standards in Computer Science as to what AI is: artificial algorithms that emulate the complex problem-solving that human beings can do. Much of today's "AI" is a simplistic automation of tasks that does not do complex problem-solving. This is analogous to people who use the surface language of logic and proofs, but do not understand what complex problem-solving is.

Human language is based on logic. Computers do not understand human language. Computer algorithms can manipulate numbers, and use statistics and probability theory. But probabilities do not determine what is logically valid, or invalid (any more than the majority vote of some group of humans determines what is logically valid or invalid).

Artificial Intelligence algorithms can be divided into logical, and sublogical algorithms. Logical algorithms operate on concepts that can be stated in human language (as humans would use it), such as rule-based algorithms. If you think of the 10 Commandments, you have statements in a logical algorithm:

you shall not commit murder
you shall not lie
you shall not make idols and worship them…

Formal logic deals with arguments/proofs that can be explained in human language. The moral/ethical system that the Bible presents, can be explained in human language. (It is, after all, presented to us in the Bible, which was written in human language.)

Sublogical algorithms operate on systems of computation that do not follow common concepts in human language, but follow systems of numerical weighting and numerical adjustments. When sublogical algorithms arrive at “solutions” to a problem, they often cannot be explained in human language. This also means that how the algorithm arrived at the solution, often cannot be explained. This is a problem, if the computerized system is controlling lethal assets (such as weapon systems, or driving a car).

For all these reasons, we can look at the history of machine learning (automated methods that attempt to “teach” a computer algorithm how to arrive at the correct answer to a problem), and observe all sorts of errors that surface when a method works on statistics, but does not understand meanings of concepts.

How to ensure that automated models “…capture our norms and values, understand what we mean or intend, and above all, do what we want…” is the alignment problem. [Alignment, 13]" [Christian Logic, Wuest, 2024, 416-417]
---------- ----------
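The quoted distinction between logical and sublogical algorithms can be sketched in a few lines of code. This is my own illustrative example, not from Wuest's book: a hypothetical rule-based checker whose every verdict cites a rule statable in human language, next to a tiny weighted-sum network (with made-up weights) whose output is just arithmetic that names no concept.

```python
# Illustrative sketch (hypothetical, not from the quoted book): a logical
# (rule-based) algorithm versus a sublogical (numerically weighted) one.

# Logical: every verdict can cite a rule stated in human language.
RULES = {
    "murder": "you shall not commit murder",
    "lie": "you shall not lie",
}

def judge(action):
    """Return a verdict that quotes the rule it applied."""
    if action in RULES:
        return f"forbidden: {RULES[action]}"
    return "no rule forbids this"

# Sublogical: the verdict is arithmetic on weights. No individual number
# corresponds to a concept a human could name, so "why?" has no answer
# expressible in human language -- only a recitation of the arithmetic.
def tiny_network(x1, x2):
    w_hidden = [(0.8, -0.3), (0.1, 0.9)]   # arbitrary "trained" weights
    w_out = (0.5, -0.7)
    hidden = [max(0.0, a * x1 + b * x2) for a, b in w_hidden]  # ReLU layer
    return sum(w * h for w, h in zip(w_out, hidden))

print(judge("lie"))            # forbidden: you shall not lie
print(tiny_network(1.0, 2.0))  # a bare score; the "reason" is only arithmetic
```

The first function is explainable by construction; the second can only be "explained" by replaying its multiplications, which is the opacity problem the quoted passage raises for systems controlling lethal assets.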

The "answers" that these neural net programs put out are COMPLETELY
dependent on the data they are trained on.

The CHOICES and VALUES and "AUTHORITIES" that the training data
contain are completely governed by those who assemble the training data.
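This dependence can be made concrete with a toy sketch (my own hypothetical example, not anyone's real system): the very same learning procedure, fed two differently curated data sets, produces opposite "answers" about the same word.

```python
from collections import Counter

# Toy "learner" (hypothetical): it labels each word with the majority
# label seen in training. The procedure is identical in both runs below;
# only the curator's choice of training examples differs.

def train(examples):
    counts = {}
    for text, label in examples:
        for word in text.split():
            counts.setdefault(word, Counter())[label] += 1
    # For each word, keep the most frequently seen label.
    return {w: c.most_common(1)[0][0] for w, c in counts.items()}

curator_a = [("crypto is innovation", "good"), ("crypto builds wealth", "good")]
curator_b = [("crypto enables fraud", "bad"), ("crypto funds crime", "bad")]

print(train(curator_a)["crypto"])  # good
print(train(curator_b)["crypto"])  # bad
```

Nothing in the algorithm changed between the two runs; the "answer" is entirely an artifact of which examples the designer of the training set chose to include.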

No Christian should blindly accept the answers of these "AI tools"
without knowing the worldview of the designer of the training data
set that the AI tool was trained on.

[Alignment] The Alignment Problem: Machine Learning and Human Values, Brian Christian, W. W. Norton & Company, 2020.