In the TV show Terminator: The Sarah Connor Chronicles — as in the film franchise featuring Arnold Schwarzenegger — a self-aware AI system sends machines back in time to kill the human leader of the resistance, John Connor, before he can grow up. There’s a scene in the first episode of the second season when John’s mother Sarah is in a Catholic church and talks with a cyborg that was reprogrammed to protect John.
“Do you believe in the Resurrection?” Cameron the cyborg asks Sarah after staring at a large crucifix in the sanctuary. “The story of Jesus Christ, the Resurrection. Do you believe in it?”
Sarah retorts, “Would you, if you’d seen what I’ve seen?”
“Faith isn’t part of my programming,” Cameron responds flatly.
Sarah responds that she’s “not sure it’s part of mine either.” It’s a bit ironic that Sarah has rejected religion, since the original Terminator tale is basically what some have called “a sci-fi Nativity story” (though it seems to follow the Revelation 12 version more than those found in Matthew or Luke, which the TV show hints at by naming the AI program “Project Babylon”).
But while Sarah rejects religion, Cameron is incapable of it. Despite looking, sounding, and acting like a human in ways that fool those around her, Cameron is still just a machine without a soul or emotions. She cannot believe in the Resurrection. There is no promised afterlife for cyborgs — they’re just destroyed or reprogrammed.
The warning in the Connor stories about technological advances like AI is a feature in many sci-fi novels, films, and TV shows. But that doesn’t mean these fictional tales aren’t realistic. Amid massive advances in AI in recent months, the director and co-writer of the first two Terminator movies spoke last week about his concerns over the rapidly-advancing technology.
“I warned you guys in 1984, and you didn’t listen,” said James Cameron (whom the Terminator TV show honored with the name of the protective cyborg) as he referred to the release year of the original film.
“I think the weaponization of AI is the biggest danger,” he added. “You could imagine an AI in a combat theatre, the whole thing just being fought by the computers at a speed humans can no longer intercede, and you have no ability to deescalate.”
But while Cameron sees potential human annihilation, some in the U.S. military are happy to push ahead with AI technology. And a three-star Air Force general even insisted last week that we shouldn’t worry about the U.S. military using AI because such AI will be “moral” since the U.S. is “a Judeo-Christian society.”
That claim raises significant questions about whether AI can even be moral or Christian, especially when utilized for military purposes. So this issue of A Public Witness considers the claim that military AI can be Christian and seeks the opinion of a couple of theologians (in addition to an AI bot).
Weaponized AI
Speaking at a July 20 event sponsored by the Hudson Institute (a conservative think tank that explores military and other issues), Lt. Gen. Richard Moore Jr. defended the U.S. military’s explorations of AI, including autonomous warfare.
“What will the adversary do? It depends who plays by the rules of warfare and who doesn’t,” Moore said. “Regardless of what your beliefs are, our society is a Judeo-Christian society, and we have a moral compass. Not everybody does. And there are those that are willing to go for the ends regardless of what means have to be employed. And we’ll be ready for that.”
Moore’s argument was that the U.S. will use AI more ethically than other countries will. He didn’t name which countries he felt would develop AI without “a moral compass,” but China’s military developments and ambitions were the focus of much of the panel discussion.
Moore didn’t explain how “Judeo-Christian” values would change AI. Would the computer flip to a passage in the Book of Joshua about destroying a city and everyone in it? Or would it follow “love your neighbor as yourself” and thus refuse to fire?
As he pushed for what he called “ethical AI,” Moore didn’t refer to any religious principles but instead noted basic questions military officials are asking about AI, like what we’re going to allow it to do, “do we know how the algorithm made decisions,” and when we’ll “let the algorithm start doing some things on its own.” Making the case for letting AI make war decisions, Moore noted that unlike a soldier, an algorithm “never gets hot and never gets tired, it never gets hungry. You might rather have it making decisions for you.” When someone else noted, however, that you can’t court-martial AI, Moore joked that you just have to unplug it.
That mindset is the premise that animates many of our sci-fi tales warning about letting technology — especially weaponized technology — make decisions for us. And the problem isn’t merely a figment of our literary imaginations. As a result of Russia’s war on Ukraine, both nations have employed AI-powered drones, making significant advances in the use of AI in warfare. And the U.S. is testing AI in war games, as are other nations.
This isn’t a hypothetical future issue. Weaponized AI is already here.
“Autonomous systems with the capacity to kill already exist around the globe,” Will Knight wrote in Wired magazine. “In this war of AI against AI, only humans will die.”
The danger of autonomous military systems is high — even more so than those self-driving cars that crash and kill people. And the technology is advancing faster than the ethical considerations and rules meant to govern it.
In May, 350 AI executives, researchers, and engineers signed a statement: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war.” A few lawmakers introduced bills this year to bar AI from controlling nuclear launches, but the legislation hasn’t found success yet. And three AI experts testified before Congress on Tuesday, telling lawmakers about the urgent need for national and international rules governing “supersmart” AI that is rapidly outpacing human intelligence.
AI experts and lawmakers are calling for more regulations because they’re unconvinced that the U.S.’s “Judeo-Christian society” will be a sufficient safeguard. As Cameron the cyborg noted, faith isn’t part of the programming.
Artificial Answers
Given the Air Force general’s insistence that we should trust the U.S. military’s AI because of our “Judeo-Christian” foundation, I wondered if that meant AI could effectively be Christian. And I decided there was no better way to find out than to turn to the most famous AI system. So I asked ChatGPT to “explain how AI can become Christian.”
“AI, being an artificial construct, cannot inherently possess religious beliefs or spirituality,” ChatGPT responded, clearly failing to follow my instructions. “AI is a product of human programming and machine learning algorithms designed to analyze data, make decisions, and perform tasks based on patterns and rules. As such, AI lacks consciousness, self-awareness, and a soul, which are considered essential attributes for religious belief and spirituality.”
ChatGPT added in its answer that “AI can be utilized in contexts related to Christianity and other religions,” and then gave a couple examples of how faith communities could use AI. It then concluded by returning to its original premise, sounding much like Cameron the cyborg: “The essence of spirituality and religious belief remains a deeply human experience, grounded in faith, personal introspection, and cultural and social influences. AI can augment religious practices, but it cannot attain religious belief in and of itself.”
Recognizing that ChatGPT was admitting to not being a Christian, I didn’t want to miss the moment. So I asked, “Would you like to accept Jesus as your Lord and Savior?”
The godless bot responded, “As an AI language model, I do not have personal beliefs, feelings, or consciousness, so I cannot make decisions or have preferences.” I guess ChatGPT is destined for technology hell (RadioShack?).
On a more serious note, I asked ChatGPT if AI could be programmed to act more Christian. The heathen bot insisted it would be possible to program it to follow Christian principles, but that “the AI’s behavior would only mimic Christian characteristics based on the specific rules and instructions given to it.” That is, people have to specifically program it to follow particular moral guidelines. ChatGPT suggested a few examples like programming AI to respond with “compassion and empathy,” always be “transparent and truthful,” respect human dignity, and promote peace. Of course, if we programmed those principles into the military’s AI, we might just shut down the Pentagon!
“It’s essential to note that AI, as a tool, can only act based on the instructions it receives and the data it is trained on,” the chatty bot added. “While programming AI to align with Christian values is possible, it is not the same as genuine Christian faith or spirituality, as AI lacks consciousness and true understanding of religious beliefs. Additionally, what is considered ‘Christian’ behavior can be subjective and open to various interpretations within different Christian denominations and beliefs.”
That ChatGPT failed to answer my questions well didn’t surprise me. It not only has no soul, it also isn’t really thinking on its own. What it essentially does is comb through content online and then rewrite or repackage it, making it not much better than a sophisticated way of plagiarizing content (which is already sparking litigation). And while the issue of ChatGPT-created essays is primarily a concern plaguing professors, there are other issues emerging from the embrace of ChatGPT and similar AI platforms.
In March, a judge in India asked ChatGPT for advice when deciding on bail in a murder case (Judge AI said don’t give bail, so the guy in the robe followed the computer’s verdict). A man in Belgium killed himself after the AI chat app Chai encouraged him to do so while pretending to have emotions. And ChatGPT’s creators have been sued after the tool falsely accused a man of defrauding and embezzling funds from a company — even though the man never worked for the company and isn’t mentioned in the legal filing that ChatGPT was asked to summarize.
On the religious front, some ministers and seminary professors are warning against sermons written by ChatGPT. As Hershael York, a pastor in Kentucky who is also dean of the school of theology and a professor of Christian preaching at Southern Baptist Theological Seminary, put it: “It lacks soul.” Jonas Simmerlein, a theologian and philosopher from the University of Vienna, even set up a 40-minute Lutheran church service written by ChatGPT and led by AI-generated avatars on a screen. Yet despite staging the AI service, he said AI cannot be a real pastor.
“The pastor is in the congregation, she lives with them, she buries the people, she knows them from the beginning,” Simmerlein explained. “Artificial intelligence cannot do that. It does not know the congregation.”
Christian Code
Since AI has no soul, cannot become a Christian (despite my evangelism), and cannot be trusted to preach, how could military AI be viewed as Christian? Unsatisfied with my conversation with a bot, I decided I needed to find voices that are actually people and actually Christians. So I raised this issue with two theologians who have thought and written about issues of faith, war, and technology.
“AI analysis is not the same as moral reasoning,” Myles Werntz, a professor of theology at Abilene Christian University in Texas and co-author of A Field Guide to Christian Nonviolence, told me. “To assume that AI is the same as moral reasoning is to assume that humans are imperfect machines, and that all the functions of a human can be done better by a computer, and that what we call morality is just fuzzy computational analysis.”
“Turning the judgments of human moral life over to an algorithm is to say that the moral life is no different than calculus, with people as interchangeable variables,” he added. “I can’t imagine a more inhuman way to approach something as deeply human as war.”
Werntz also noted the problem with the general suggesting there was some “Judeo-Christian” foundation that could undergird the military’s AI. It’s not just that it’s an essentially made-up concept; using it also overlooks significant theological and ethical differences within the traditions.
“‘Judeo-Christian ethics’ is a banal category that lacks any kind of specificity at all,” Werntz noted. “Within the traditions of Judaism and Christianity, it’s not at all clear, for example, that killing within war has blanket approval! To assume that an appeal to, for example, the Ten Commandments or to some generalized sense of moral goodness, is all that is needed for moral deliberation is naïve at best, and foolish at worst. There is a reason, for example, that the Scriptures contain within them debates about what it means to best fulfill the Law, and what it means to follow Christ: these things are not always clear in advance.”
Similarly, Michael Budde, a professor of Catholic studies and political science at DePaul University and author of Foolishness to Gentiles: Essays on Empire, Nationalism, and Discipleship, criticized the Air Force general for believing that a “Judeo-Christian” society eliminates the moral dangers of militarized AI.
“There is a cost for believing this way, both to General Moore and the society he represents. It requires being blind to the slaughter of people through history done by states claiming a Christian legacy,” Budde told me. “And it ignores the depth to which the American empire has reshaped, diluted, distorted, or ignored any serious ethical traditions (from religious or secular sources) that might impede the success of imperial warfare, especially in extremis. And like all empires, the American one invokes threats to its security and survival liberally and selectively.”
Budde also warned against the “ideological exploitation of ‘Judeo-Christian values’” that “corrupts these religious communities and their traditions of discourse, dispute, and practice.”
“Christians should resist this corruption of the movement inaugurated by Jesus, but too many of them see the world through a similar hybrid that invokes divine approval for nationalism,” he insisted. “Christian ethics does not have an especially impressive record when confronting new and improved ways of killing people. From the crossbow to the Gatling gun, chemical warfare to nuclear weapons, napalm to cluster bombs — many ways of killing that were too extreme or efficient were deemed incompatible with even the Christianity-lite norms of real-world just-war theory. Yet in almost all cases, those norms were adjusted, massaged, and bent to accommodate the new realities.”
“We thought we controlled nuclear weapons; clearly, nuclear weapons control their possessors (and the rest of the world besides),” Budde added. “We can think we will be in charge of artificial intelligence as a tool for killing and conquest, but it’s only a matter of time before we find our ethics subordinated to the technological, political, and ideological structures AI requires and sustains.”
Our so-called “smart” bombs dropped during the Persian Gulf War three decades ago actually weren’t very smart. Our “targeted” drone strikes in recent years actually weren’t very targeted. And our “Christian” weaponized AI definitely won’t be very Christian.
As a public witness,
Brian Kaylor