As a recent USA Today piece observed, AI on its own is clearly still working out some kinks. But, of course, AI is not on its own, so at least for now, the
real threat remains fallen humans using AI to spread falsehood, gain power, hurt others, and try to become superhuman.
Examples of this also abound.
Writing in The Wall Street Journal, Jack Brewster recently described “How I Built an AI-Powered, Self-Running Propaganda Machine for $105.” It took him just two days to launch his so-called “pink slime” news site, which used AI to generate and publish thousands of false stories every day and could fund itself with ads.
The whole process required, as he put it, “no expertise whatsoever,” and could be tailored to suit whatever political bias or candidate he chose. Readers would have no clear way of distinguishing the auto-generated fake news from real journalism, or of knowing that
Buckeye State Press, the name Brewster assigned his phony website, was only a computer making things up according to his political specifications. Even worse, the one human that Brewster spoke with in the process of setting up the site was a web designer in Pakistan who claimed to have already built over 500 similar sites (and likely
not for reporters interested in exposing this problem). The news and information rating service NewsGuard
has identified over a thousand such “pink slime” news sites so far and claims that many are “secretly funded and run by political operatives.”
Questions like “What is the truth?” and “Who is actually telling it?” will become more important than ever as AI technology takes off and is used by unscrupulous people to flood the internet, newsfeeds, and airwaves with misinformation. Christians will need a great deal more discernment than we currently cultivate, and a hesitancy to believe everything we see, especially when it reinforces our biases and assumptions.
More importantly, we’ll need to carefully weigh which tasks and activities are irreducibly human and shouldn’t be outsourced to machine learning. This question is already urgent.
The Associated Press reported last year on an AI avatar that “preached” a “sermon” to a gathering of German liberal protestants. Recently,
a major Catholic apologetics website announced an “interactive AI” chatbot named “Father Justin,” which supposedly provides “a new and appealing way for searchers to begin or continue their journey of faith.” It didn’t take long for “Father Justin” to
be “demoted.” If it needs to be said, following spiritual advice from AI is a terrible idea.
In his book
2084: Artificial Intelligence and the Future of Humanity, Dr. John Lennox argued that it’s not alarmist to note how some of the main AI pioneers openly espouse transhumanism. At the heart of this worldview is a very old lie, first whispered by a snake in a Garden, that humans “shall be like God.” We can acknowledge that without denying the legitimate, helpful, and humane uses of AI.