Introduction
Since the mid-twentieth century, science fiction authors, scientists, and futurists have predicted the imminent arrival of human-level artificial intelligence (AI), a trend that has rapidly accelerated in the era of GPT and other deep-learning neural network architectures. Occasionally, these portrayals have been realistic; more often, they are not. One way such proposals often fail the realism test is through their commitment to a future without religion. While a world without religion may have appeared plausible in the twentieth-century heyday of secularism, flush with the prospect of religion retreating in the advance of science, such a future looks increasingly naïve today, as religion persists and even flourishes, especially in novel configurations. Simply asking why religion continues in spite of its detractors’ best efforts leads to a better understanding of humanity, which in turn provides better guesses as to how a robotic future might look. Fundamentally, if AI should ever become human equivalent—often labeled “artificial general intelligence” (AGI)—then not only should people expect robots1 to participate in practices we clearly identify as religious, but we should expect religious human beings to revise long-held positions on the exclusivity of grace (however construed).
In his early book on the role of robots and automata in myth, religion, and culture, John Cohen compares their study to that of an elephant. Regarding the latter, he says there are two ways of observing it: “One way is to gaze at it from a respectful distance; another is to wait patiently until it dies and then examine it centimeter by centimeter under the microscope. Our Automaton calls for both types of method” (Cohen 1966, 7). He forgets, however, that another way to observe an elephant is to ride on its back, to feel it sway as it walks both powerfully and gently, to hear the song of the elephant’s mahout, to watch as the elephant reaches its trunk for a leaf, perhaps even to stand close and have the elephant reach out and wrap its trunk around one’s arm. My admiration for Cohen’s book aside, this article approaches robots with an eye toward the lived experience of traveling alongside them. To view robots from a great distance or from under a microscope on the dissection table leaves aside the possibility of social immersion and the lessons we learn from walking side by side, or even by being taken on a journey.
The release of ChatGPT in late 2022 and subsequent iterations of the GPT platform in 2023 provoked intense speculation on the future of AI. Many commentators pointed to the near-human capacity of GPT to answer questions and carry on dialogue, though others revealed the stark weaknesses of the AI system (e.g., its tendency to “hallucinate” and its tendency to get inexplicably worse at some problems while improving at others). While most voices in the public sphere noted that technologies outside and beyond machine learning models would be necessary to create actual AGI, the power of the system contributed to global public imaginations of AI and what AI could be. In the midst of this, religious groups were swift to build religious chatbots based on GPT (e.g., Kuyucu 2023; Klein 2023; Nooreyezdan 2023; André 2023).
Already, scholars have begun considering robot rights, robot dignity, and the impact of robots on political and theological understandings of human beings (Gunkel 2018; Singler 2019; Gellers 2021; Dorobantu 2022; Herzfeld 2023). This has produced both enthusiastic endorsement for and ferocious backlash against robotic intelligence, especially within theological circles. For example, Edmund Furse (1986; 1996) long ago wondered whether robots could be religious, but more recently, the United States Southern Baptist Convention categorically denied that any form of technology could or should “be assigned a level of human identity, worth, dignity, or moral agency” (Ethics & Religious Liberty Commission of the Southern Baptist Convention 2019).
While robots may, indeed, never equal human intelligence, capability, or moral worth, the response to the possibility of their equivalence tells us more about humanity than it does about the robots. Unfortunately, the rejection of robot religion is more in keeping with oppressive colonial regimes than an open inquiry into the things that make our human lives most precious. Recognizing the significance of religious communities, practices, and beliefs means accepting that human-equivalent robots would also see these things as significant. If human equivalence is reached (and if humans are ever to believe that to be the case), robots will show interest in religion, both human and nonhuman. This recognition of religious interest among robots will not simply benefit robots by providing them with freedoms, rights, and responsibilities; it will also benefit humanity in its relentless pursuit of a higher calling, its impulse toward justice rather than domination.
A Robotic History of Religion
Given the common rhetoric about conflict between religion and science, or at least their supposed independence (e.g. Draper 1874; White 1896; Gould 1999), one might wonder how religion can be relevant to a study of the social life of robotics. But, of course, the conflict and independence narratives are quite overblown, and their weakness has long since been revealed (Brooke and Cantor 1998; Geraci 2020). The integration of robotic technologies and religious aspirations can be seen across the centuries—a history that begins well before the twentieth-century development of cybernetics and artificial intelligence (Noble 1999, 143–71; Geraci 2010, 147–59). Given the ready identification of human creativity with divine creativity, it comes as no surprise that ancient peoples sought to emulate the creation of life, a process situated firmly at the nexus of religion, science, and technology. This vision of artificial life persisted through the centuries and can be witnessed across civilizations and cultures. It would not be overstatement to suggest that the desire to build new (usually mechanical) life is integral to the history of human religion. The computational quest for AGI and its eventual engagement with religion is part of a millennia-long project of human creativity, technological ingenuity, and theological wonder.
By the time of Bronze Age cultures, humanity had developed simple automata and similar forms of artificial life and integrated these into religious practice. For example, there is compelling reason to believe that ancient Egyptian priests used statues that could be moved or that could make or amplify noises. Some such statues might have had a priest hidden inside (such as to move an arm or speak through a voice-amplifying aperture), while others seem to have been made with a porous rock that emits noise as it warms in the sun (Cohen 1966, 15–22). It cannot be said whether the lay practitioners were credulous as to the “living” nature of such statues, but it is not out of the realm of reason. Human beings are skillful at anthropomorphizing objects and animals, seeing human-level agency where none really exists.2 Ancient Greeks documented steam- and water-driven automata roughly contemporary with these Egyptian examples, and some of these appear to have had religious purposes (Cohen 1966, 16–17; Mayor 2018, 94, 187). Intriguingly, in ancient China, it did not take long before critics doubted the authenticity of local legends of self-moving inventions (see Song 2023, 354–55).
The techniques for mechanically imitating life persisted and developed over the centuries, and there is widespread (if usually hyperbolic) evidence for mechanical automata existing in Islamic, European, and Indian cultures. By the early modern period, one finds records of spring-driven automata from France to Japan (Cohen 1966; Truitt 2015; Mayor 2018; Geraci and Kaplan 2024). Some of these had, and still have, religious roles. Clockwork automata remain, for example, in European cathedrals, and thus continue a form of public religious evangelism. But the automata were principally used privately among the wealthy and elite, and both their manufacture and ownership played a role in the development of power and prestige: European clockmakers built extravagant automata precisely because doing so promoted their clockmaking businesses (which then also produced valuable commissions for cathedrals). Those with advanced automata could display them as a sign of wealth and privilege. Such displays also revealed the global transit of automata, insofar as they could be prized possessions of rulers far removed from the site of their manufacture (e.g., Sharma 2023). Automata marked their owners as important, and those who could fabricate such automata were lauded for their genius (Geraci 2010, 58–59).
The revelation of power through the construction of mechanical life finds a mirror in historically simultaneous religious pursuits. Attributions that Rabbi Elijah of Chelm (d. 1540 CE) or Rabbi Loew of Prague (d. 1609) could create a humanoid golem from clay were clearly intended to establish the sacred power of these famous rabbis (Geraci 2010, 58, 156–57).3 In similar fashion, the uniquely esoteric Pope Sylvester II was believed to possess an oracular machine in the form of a talking head, and alchemists like Paracelsus advanced their medical practices with the claim that they could bring life to dead matter. The manufacture of artificial life through mysticism and alchemy is thus equally tied to questions of prestige.
The historical weaving of creative power and religious life persisted into the twentieth century. In the well-documented case of Japan, for example, twentieth-century automata had a Buddhist affect, and Shinto rituals accompanied the introduction of industrial robots in the 1980s (Hornyak 2006, 29–40; Schodt 1988, 196). In the United States, two separate AI researchers at MIT are apparently descended from Rabbi Loew, and each had learned the exact same incantation to raise the golem from its resting place in Prague (Foerst 2004, 39). While neither argued that the golem heritage produced their career aims, the connection between golem mysticism and computer science stretched to them not just from the famous rabbi but from the early days of cybernetics, when both the Jewish scholar Gershom Scholem and the computer scientist Norbert Wiener drew connections between computer automation and the golem (Wiener 1964; Scholem 1971).
Broadly speaking, Christian Europe pursued automata at a time when human beings were increasingly seen as mechanical in nature. While Julien Offray de la Mettrie’s Man a Machine (1747) is perhaps the most famous example of this, many philosophical approaches emphasized the mechanical nature of humanity in post-Cartesian humanism. For example, Robert Ingersoll defined a human being “as a machine into which we put what we call food and produce what we call thought,” and Isak Dinesen more sardonically described the human being as a “machine for turning, with infinite artfulness, the red wine of Shiraz into urine” (quoted in Geduld 1978, 31).
Across centuries, religion and robotics have intertwined through the pursuit of artificial life. The creations of human ingenuity have long been both mechanical and mystical. It is thus no surprise that, as robotics and artificial intelligence emerged in the scientific realm, they remained part of the human religious environment. The founders of cybernetics, AI, and robotics perhaps sought to establish secular, disenchanted sciences. But they in fact produced new directions for religion, and their own creations remain tied to human religious practice.
A Religious History of Robotics
While the history of robotics is tied up in religious traditions and practices, it also draws on European modes of camouflaging those traditions in secular culture. The reformulation of Christian theology as secular salvation is a curious outcome of Euro-American models of secularism, and this appears in domains as varied as science, art, and politics. The famed historian Mircea Eliade ([1964] 1985) was perhaps the first to notice this, pointing toward the pursuit of transcendence in modern art. Subsequent scholars have labeled this phenomenon “implicit religion” or “authentic fakery” and pointed to examples as varied as religious undercurrents in professional sports and the consumer branding of Coca-Cola (Bailey 1983; Chidester 2005). Along with a few other scholars, I have argued that robotics is a key domain for the secularization of religious promises in Western culture: scientists and engineers promote a vision of cosmic destiny, human immortality, and godlike machine life (Geraci 2010, 2022). Theirs is a scientific imagination in which the future appears deeply inflected by religious goals.
I have referred to the camouflage of Christianity in robotics as “Apocalyptic AI” (Geraci 2006, 2008, 2010). Many people now believe that advances in robotics and AI will offer salvation to human beings and the world. They argue that progress in computing technologies is inevitable, exponential in nature, leading toward greater-than-human computer intelligence, and soon to permit the transferal of human consciousness from biological bodies into robotic bodies (Geraci 2010). This perspective has gained sufficient traction to be exported from the United States to foreign shores (Geraci 2022). Apocalyptic AI advocates like roboticist Hans Moravec and Google engineer Ray Kurzweil argue that a human person’s identity is constructed out of a neurochemical pattern in the brain, and if such a pattern were replicated by a computer, a duplicate person would be formed. They believe humans will resurrect the dead through computer simulation, upload our minds into immortal machine bodies, and fulfill our cosmic destiny when machine intellects overtake the known universe. These are religious pursuits.
This merger of religion, science, and technology is commonly packaged as the coming of a Singularity. Drawing on twentieth-century claims of an “intelligence explosion” (Good 1966), a “singularity” in machine intelligence (Ulam 1958), and the exponential growth of Moore’s Law4 (Moravec 1988, 100; Kurzweil 1999, 25; 2005, 7–21), Singularity advocates suggest that humanity will soon reach a moment where the exponential curve of technological progress explodes with unfathomable speed. It would supposedly become impossible for humans to predict the future beyond this event. That moment of absolute difference, where the future cannot be understood from the present, where technological progress happens at a near-infinite pace, is the Singularity. Faith that technological progress is inevitable and exponential underwrites the Singularity, and this faith promises a glorious future of machine intelligence.
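A brief worked example may illustrate the arithmetic underlying such claims; it is offered purely as an illustration of the exponential reasoning, not an endorsement of it. If some measure of computing capability doubles every eighteen months, in line with the version of Moore’s Law described in note 4, then over a decade it grows by

$$C(t) = C_0 \cdot 2^{\,t/T}, \qquad \frac{C(120)}{C_0} = 2^{120/18} \approx 100 \quad (T = 18 \text{ months}).$$

Roughly a hundredfold growth per decade, compounded indefinitely, is the extrapolation that underwrites the Singularity’s promise of unfathomable speed.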
The belief that a radical break in history will inaugurate a transcendent future in which human limits are overcome in immortal new robot bodies is borrowed from apocalyptic strands of Christianity, a theology itself originally dependent on ancient Jewish apocalypticism. While the vast majority of Jews ceased imagining a near-future salvation of the world and humanity almost two thousand years ago, that view remained current in Christianity, and especially strong among American Protestant Christians. The strength and pervasiveness of that worldview are such that it persists even among Americans who have left traditional religious communities and see themselves as secular or atheist.
Such people may not believe in gods, but they certainly believe in the promises attributed to the gods they reject. David Noble (1999) has shown that the Apocalyptic AI fusion of religious categories and scientific development traces to medieval Europe and is one part of a larger “religion of technology.” He is echoed by Mary Midgley (1992), who argues that twentieth-century science became a new player in the spiritual marketplace by offering a competing vision of human salvation. The religious science described by Noble and Midgley is one aspect of the camouflage of the sacred noticed by Eliade in modern art. The belief that human beings will become immortal and godlike is transparently religious whether one believes it is ordained by god or the laws of nature. I hurry to add that I am not taking sides in this debate: I neither support nor reject either the old or the new forms of immortal salvation. Here, I simply wish to describe them.
In the twentieth and twenty-first centuries, technology became the source for religious aspirations because many people lost faith in traditional sources of transcendence and salvation. Although traditional religions have not faded into obscurity the way their opponents like Sigmund Freud ([1927] 1989) hoped, they no longer carry an unassailable aura of facticity. That is, people have doubt. The wonderful advantage of this, I think, is that doubt introduces a touch of humility that is essential for interreligious collaboration. In any case, it cannot be proven that religious doubt is stronger than in the past, but it is certainly more public. Skepticism regarding traditional religious institutions has disaffected some people from the beliefs and practices that previously held universal sway. This does not mean, however, that those people no longer desire the very things promised by religion. Indeed, it should not be a surprise that in a world where some people no longer believe they can attain salvation through their inherited religious worldviews, they look for that salvation elsewhere.
Rodney Stark and William Bainbridge (1985) once argued that human exchange dynamics led to the invention of religion. They believed that people inherently trade with one another and look for exchange partners to trade for goods they lack. Since all people want things like perfect health and happiness, and many (if not most) people wish to avoid dying, it stands to reason that human beings would seek to exchange for those goods. Stark and Bainbridge note that there are no human beings who can provide immortality or perfect health to one another, and they argue that human beings invented gods and other “compensators” as exchange partners who could satisfy the human desire for such transcendence (see Bainbridge 1995). If they are correct, it logically follows that when divine promises no longer appear sufficiently robust, people will look elsewhere for new trade partners. As it happens, science and technology flourished in the twentieth century (even as much technology threatened human beings with extinction!), and this flourishing seemed like an answer for the newly atheist but still eagerly immortality-seeking human beings. If religion cannot offer immortal salvation, perhaps science can!
There is no way to prove whether a desire to exchange for immortality is the origin of human religion, and this theory may be wildly inaccurate. Fascinatingly, however, the exchange model perfectly explains the present-day reality in which some people have forsaken the traditional promises of religious transcendence in favor of those made by scientists. The development of robotics and AI has thus reshaped human religious life. The human practice of religion is different thanks to the robots in our environment. Some of the robots are in stories (e.g., science fiction), some are in our collective imagination of the future, and some are already vacuuming our homes or welding new cars. Their presence in the physical and imagined landscape causes many people to think differently. And thus does technology become newly religious when humans live among our creations.
The Robots’ Religion
The arrival of human-equivalent robots will be apparent in their widespread adoption into human religious organizations. Of course, there may never be human-equivalent robots. Despite a lot of handwringing over ChatGPT in 2023, AGI still lurked far away on the horizon. The author Neil Gaiman (2023) beautifully notes that “ChatGPT doesn’t give you information. It gives you information-shaped sentences.” A machine learning model built on predictive text does not actually know anything, and it is rather disturbing that people treat ChatGPT as though it does. A large language model like ChatGPT semi-randomly generates what could be the answer to a question without real regard for whether that answer is true. But if humanity eventually succeeds in building human-equivalent robots, I anticipate that we will recognize this equivalence through religion. Early in the twenty-first century, this process began in earnest as religious practitioners found ritual roles for robots. Some religious communities experimented with putting robots to work in religious contexts, though the robots never chose such tasks—they were simply programmed to chant mantras, wave camphor fires, or offer sympathetic blessings. In the future, we will know that robots approach human equivalence if they themselves go beyond this to request access to our religious communities.
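The following minimal sketch in Python makes Gaiman’s point concrete. It is my own toy illustration: the names and the miniature corpus are invented for the example, and actual systems such as GPT use neural networks trained on vast corpora rather than a bigram table. Still, the basic move is the same: a purely predictive generator produces plausible-looking word sequences with no representation of whether they are true.

```python
import random

# A toy next-word predictor built from a tiny "corpus." Real large language
# models use neural networks trained on enormous corpora, but the basic move
# is the same: predict a plausible continuation, not a verified fact.
corpus = ("the robot entered the temple and the robot bowed "
          "the priest blessed the robot and the temple bell rang").split()

# Count which words follow which (a simple bigram table).
follows = {}
for current, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(current, []).append(nxt)

def generate(seed, length=8):
    """Produce an 'information-shaped' string by sampling observed next words."""
    words = [seed]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:  # no observed continuation; stop early
            break
        words.append(random.choice(options))  # sampled, with no regard for truth
    return " ".join(words)

print(generate("the"))
# Possible output: "the robot bowed the priest blessed the robot and"
```

The output reads like a sentence about robots and temples, but nothing in the procedure checks it against the world; scaled up enormously, that is the sense in which predictive text yields information-shaped sentences rather than information.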
Many journalists and scholars focus on Japan as a particularly stark example of people’s willingness to adopt robots into human life. In fairness, Japan’s status as a “robot nation” is not magically inherent in the Japanese psyche but rather is the product of great labor by policymakers, scientists, and industry (Šabanović 2014). Social phenomena, such as an aging population, and economic phenomena emergent in Japan’s post-war economy created an opportunity for exactly that kind of labor (see Schodt 1988). As Katsuno and White (2023, 299–303) describe, it is precisely the effort of Japanese industry and government to establish Japan’s international position that led to the robot nation trope and a nostalgic sense of religious animism that distinguishes Japanese robotics from that of other communities. It cannot be forgotten, however, that why this narrative functions so effectively in Japanese life remains unexplained. At some level, it seems the religious and cultural history of Japan made it possible to develop the sense of animism that connects Shintoism and Buddhism to robotics.
As part of the larger “robot nation” rhetoric, the Japanese accepted robotics into a variety of cultural spaces. Examples abound of the Japanese incorporating robots into their religious practices and, in reverse, using religion as a way to understand robotics. This is particularly obvious with regard to Buddhism. The early twentieth-century automata built in alignment with Buddhist iconography (Hornyak 2006, 29–40) prefigured later, more direct assertions of connection between Buddhism and robotics. Masahiro Mori was the first roboticist to declare that a robot could attain enlightenment (Mori [1981] 1999, 13; see also Kimura 2018). Others followed, such as Minoru Asada, the president of the Robotics Association of Japan, who declared: “In Japan we believe all objects have a soul, so a metal robot is no different from a human in that respect” (Knapton 2020).
The openness of Japanese Buddhism to robotics includes ritual practices as well as ultimate concerns. Jennifer Robertson (2018, 164) describes the development of Buddhist funerary rituals for robotic pets: she quotes a Buddhist priest stating that the ritual is so that “the robots’ souls could pass from their bodies.” The Japanese company Innvo Labs offers what they call “reincarnation,” where broken PLEO rb robots can be sent back to have their learned data transferred to a new companion (Robertson 2018, 169).5 Most dramatically, the Kōdaiji temple in Kyoto has installed a robot, Mindar, that the temple priests consider an incarnation of the bodhisattva Kannon (Baffelli 2021, 253, 255).6 But the pluralistic nature of Japanese religion means that Buddhism is not alone. Frederik Schodt (1988, 196) describes the Shinto rituals that have accompanied the initial introduction of robots to Japanese factories.
The Japanese engagement with robots borrows widely from the country’s religious traditions (Geraci 2006), but the Buddhist possibilities of AI are not limited to that nation. It was a Japanese roboticist who first declared that a robot could attain Enlightenment, but the same sentiment has been shared by a Thai philosopher (Hongladarom 2020, 7). Similarly, the XIV Dalai Lama once proposed that it is at least possible that consciousness might one day be reborn in a computer (see Hayward and Varela 1992, 152–53). If that happens, its Enlightenment would certainly be possible. In fact, the robot Mindar asserts that it is closer to Buddhahood than a human being because it does not have attachments, though it also acknowledges lacking a sympathetic heart and sentience (Baffelli 2021, 254); obviously, all such claims are programmed rather than the outcome of deliberation.
Abrahamic religions have a more challenging time with the inclusion of robots because those communities tend to emphasize the unique nature of human beings in divine creation. At present, many Jews are not comfortable saying a robot could join a minyan, but rabbis already debate the matter from both sides (see Moment 2018). Most Christians do not foresee a robot partaking of the Eucharist. And most Muslims are not looking for a robot to profess the Shahada. And yet, one could easily imagine a robot desiring to do any of these. If a robot were truly equivalent to human beings in such matters as intellect, free will, and emotional response, then it would likely seek to participate in the same communities that human beings use as mechanisms for understanding their lives and the world around them. Religious practices and beliefs are central to human comprehension of the world and our construction of meaning within it. Any robot that reaches our level of sophistication will surely be in need of similar tools.
Human beings are not able to define what makes the species conscious, prove that human beings have free will, or easily justify our emotional uniqueness. Certainly, we are quite incapable of identifying, locating, or proving the existence of souls! Nevertheless, these attributes are often presumed to be distinctly human and impossible for machines. Thomas Nagel famously argues that there must be something it is like to “be a bat,” though no bat can tell us about it; he further suggests, in a largely ignored footnote, that this might be applied to AI (Nagel 1974, 436). Robots may eventually be able to explain what it is like to be a robot and make a sufficiently compelling argument for humans to believe that it is very much like what it is to be a human being. Eve Poole (2024, 117–25) argues that humans should deliberately code for the things that will make robots more like human beings, and in doing so align them with some, if not all, human conceptions of ensoulment. That is, the things that seem to characterize many religions’ version of ensoulment in human thought and behavior should be deliberate targets for design, perhaps ultimately leading to robot souls.
Should robots become indistinguishable from human beings in their behavior, it will be increasingly hard to deny them the very words they use to define their experience. Religion will almost certainly be a part of that. Eventually, one could imagine a race among proselytizing religions to convert AIs. I do not, however, condone putting pressure on these hypothetical robots to join any particular religious community and would appreciate open acceptance rather than efforts to convert.
Looking at the present capabilities of robots, there seems little reason to believe that they are prepared to enter our religious worlds. Jackson et al. (2023) use experiments with sermons delivered by Mindar, a text sermon allegedly written by either a human being or AI, and sermons by a robot in a Taoist temple in Singapore to evaluate religious participants’ reactions to human versus robot sermons. While the study offers reasonable support to the authors’ claim that human beings see robots as less credible religious leaders than humans, there are many lacunae in the study (there is no description of the Taoist robot or its affordances, the questionnaires seem to offer little differentiation between today’s robots and those of the future, etc.). Ultimately, the question is not whether robots are credible religious participants or leaders; it is whether they could be.
If it is assumed that robot technologies will continue to progress and that people will have increasing experiences of reliance, comfort, and connection with them (an admittedly risky assumption), then religions that will not accept robotic companionship run the risk of extinction. To take an example, if Christian churches will not baptize robots seen as family members by human practitioners, those human beings might go looking for a new religious community. Robertson (2018, 169–70) notes specifically how robot rituals are part of the marketable services offered in Japanese Buddhism; I suspect this increasingly will become a part of almost all religious traditions, regardless of geographic or cultural location. If robots start desiring religious experiences and describing religious beliefs, widespread human consternation will be likely—followed by nearly as widespread human reconsideration of their religious communities. If that happens, robots will attend religious practices alongside their human companions.
In her ethnographic online research, Beth Singler (2024) notes that answers to the question “will robots have religion” tend to be unambiguous and polarized. There are few to zero “maybes” from those who have felt compelled to answer the question in online forums, though there is ambivalence about the moral and pragmatic value of robots turning religious. Among those who argue that robots will have religion (usually in pursuit of the kinds of cosmic meaningfulness I noted previously), some think this would be good and others bad. From my perspective, it is worth noting that everyone engaging with this question online begins with the assumption of the robot’s intelligence. Their starting point is that if/when robots are intelligent, they will either create religion (for good or bad outcomes) or they will not (seemingly for good outcomes). No one seems to ask how humans will even adjudicate the robots’ intelligence, which is a fundamental question. Religion is central to human life—whether we are institutionally affiliated, “spiritual but not religious,” a member of the “nones,” or participants in secular activities that take on religious significance. Thus, religion is central to how we will perceive robots. If we reach such an inflection point, we will then find that our own humanity is at stake in our ability to witness these others as equals.
Empire of the Imagination
If robots really do develop religious inclinations, this would be a necessary part of a larger panoply of behaviors that lead humanity to conclude that they are conscious. And if we believe them to be conscious at a human level—as opposed to the level of Nagel’s bat—then the denial of their independence, legal rights, and individual dignity would be unconscionable. As Poole notes (2024, 16), it would dehumanize us to abuse conscious machines (see also Darling 2021, 189). Engineers, philosophers, and other thinkers disagree as to when and how to feel empathy toward robots, but our capacity to do so and the merits of the machines in question are crucial to determining things like responsibilities and rights (see Geraci 2010, 118–31). As religious practices and beliefs have a strong impact on such perceptions, the history of comparative religions as a discipline must be reckoned with as we contemplate a future with AGI. Comparative religion, the earliest form of the academic study of religion, was a tool of domination and empire, and this fact bears on how we might reflect on robot religions in the future. If the study of religion has had to overcome a colonial legacy in the evaluation of worldwide religious practices, it is poised (if correctly applied) to help in the social evaluation of AGI.
During the age of imperialism, European powers imagined the reality of non-Europeans, and the political power of such visions cannot be denied. Orientalism, for example, amounted to a view of other peoples that denied them equality through the invention of difference and the creation of a value system around that difference (Said 1978). The study of religion emerged out of this colonial context of empire and was leveraged in its interests. Specifically, Europeans positioned themselves as able to evaluate the extent and nature of others’ religious practices and beliefs and subsequently justified their own efforts at domination. Lest such crass behavior be deemed the demesne of politicians and corporate oligarchs, it should be noted that it also happened at the most prestigious levels of academic inquiry. For example, David Chidester (2014, 2) notes that Max Müller, who had left Germany for Britain, “represented a model for the merger of knowledge and power in British imperial comparative religion.” The larger purpose of describing others’ religious lives was, for Müller, an opportunity to master them politically as well as intellectually.
In broad strokes, the colonizing efforts of Europeans operated ideologically as well as politically and militarily. A common tactic for justifying colonial control was the othering and/or dehumanizing of non-European cultures. While Europeans came to think of themselves as more rational, more modern, and thus more legitimately human than other communities, they also worked assiduously to convince those other groups of people of their own superiority. This process worked all too well for them, with non-Europeans often accepting and internalizing the claims of European colonizers, sometimes even—contradictorily—making these the basis of nationalist movements (see Chatterjee [1986] 1999, 54–84; Nandy [1983] 2012). Meanwhile, refusing non-Europeans access to jobs in technical fields (Lourdusamy 2004, 27), Europeans assured themselves that they alone were capable of occupying such roles. They worked constantly to guarantee their exclusive claim to scientific modernity in politics, education, and culture (Adas 1989, 307; Lourdusamy 2004, 15; Geraci 2018, 35–41).
Such models of human classification were widespread and contributed to how Europeans acted in their colonial empires. In his early work on the southern cape of Africa, David Chidester (1996) argues that when colonizers were in relative peace with the local Khoikhoi population (either because they were trading or because the Khoikhoi had been decimated through disease or conflict), the Europeans recognized them as having religious practices and beliefs. But when the Europeans sought to take land or other resources from the Khoikhoi, they denigrated the locals as savages, an identity defined in large part by their supposed lack of religious practices and beliefs. Thus, the very notion of whether a population possesses religion is part of the human process of discrimination. While this has been weaponized in the past, it need not be in the future.
Akin to the treatment of non-Europeans, the entire enterprise of thinking about robot religions is a political practice that threatens to disenfranchise intelligent machines. That is, if human beings are uniquely positioned to describe what the machines are doing, then doing so underscores human superiority. This unfortunate conundrum, like everything else in the discussion of others’ religious practices, reflects the dynamics of earlier imperial approaches to religion. During the 1800s, “this new level of control, linked with the technology representing its practical application, also conferred prestige on the metropolitan power as a civilizing force, helping legitimate imperial rule vis-à-vis subject races, domestic masses, and rival great powers” (Chidester 2014, 3). This practice is not far removed from contemporary moves to defend human uniqueness at the expense of all possible machines. That is, narratives of otherness, inadequacy, and artificiality serve the interests of those who define these things: the power to define a robot’s consciousness, uniqueness, or intelligence provides human beings the opportunity to control and oppress. Again, there may never be human-equivalent or conscious machines, but I reiterate that should machines gain powers that appear close to human equivalence, there will be substantive politics at stake in the way equivalence or consciousness is defined or denied.
The rationality and intelligence (or not) of machines will certainly be focal points for determining their moral status, despite the fact that these characteristics are already hard to define in human beings. Separating emotion from rational decision making, for example, has been shown to be illusory (see Kirman, Livet, and Teschl 2010). In fact, the overenthusiasm for intelligence that Singler (2024) sees in online reflections about robot religions brings humanity swiftly back to the colonial mindset. Cave and Dihal (2020, 696–98) reflect not only on how during the colonial era intelligence was attributed exclusively to those of European descent (specifically white men) but on how this connects to the presentation of AI in science fiction. Twenty-first century machines are thus implicated in prior centuries’ prejudice, though stories, science, and art from around the world promise new possibilities (Cave and Dihal 2023).
Historically, the study of religion participated in a grotesquerie of colonial domination, and this could repeat itself in human–robot relations. As Chidester (2014, 6) notes, “imperial theories of the human sciences generated accounts of the primitive, whether African, Indian, or Irish, that could be used to justify coercion while awaiting the long evolutionary delay in their trajectory to civilized liberty.” The entire premise of questioning whether a robot could have rights or personhood establishes the foundation for enslavement. Human beings do not have a first-rate reputation when it comes to recognizing the equality of other human beings, which provides little optimism for the future should robots become intelligent.
Sadly, even the allowance of religious liberty, as occurred in the later stages of colonialism, can be a tool for understanding how to control subject peoples. European colonizers moved from direct evangelism to the allowance of religious freedom and worked to understand and document local religions; this too was part of the toolkit of domination (Chidester 2014, 19–20). The colonizing attempt to protect modernity and scientific rationality as the exclusive domain of Europeans (and eventually North Americans) was mirrored by similar efforts to preserve “true religion” for the colonizers. As such, even recognizing that robots have religions (should that seem to be the case) and inquiring into these religions might end up a new practice of domination. It is control mechanisms all the way down.
Despite the cottage industry that has grown up around robotics and religion in the early twenty-first century, some justification for the importance of the study of religion in thinking about robots remains necessary. The point of constructing an analogy between European colonization and hypothetical robots is not to demonize Europeans or engage in scholarly self-flagellation. Rather, the point is to reflect on the history of comparative religion in order to do right in the future. By recognizing that human assumptions about others’ religions can become justifications for oppression, we can learn to use critical faculties for a constructive theory of society.
Conclusion
As robots and AI applications develop, humans will continue the process of enfolding them into our religious lives. Human history is already rife with examples of human beings attempting to create life. The merger of technology and religious practice as a strategy for creating life remains with us today, as many advocates suggest that the rise of AI will satisfy longstanding human desires for transcendence and immortality. Should robots ever achieve human equivalence, we will recognize it only insofar as they show an active interest in those very desires.
The ethics of AI deployment marks an interesting intersection for all these concerns. What values do we human beings have? What values do we want our machines to possess? Often, scholars worry about “value alignment”; their concern is that the robots will not share our values (e.g., Yudkowsky 2001). My own concern is that robots will share our values all too well: that they will pursue power and profit rather than a just world. It is possible that some religious ethics can be applied to resist this and develop ethical human-equivalent and even superhuman AI (e.g., Song 2020). This is one example of why it is important to decide what values we really want in AI (whether or not we are good at exercising those values ourselves) and then relentlessly pursue their realization.
As we look to the near future and consider such ethical design, we must simultaneously look to the ethics of our own practices and beliefs. Considering what would make a robot human-equivalent conjures questions about what makes humans the best version of ourselves. It is thus imperative that we develop our own sense of empathy and vigorously pursue our own potential for justice in the world and in our classification schema. In our relationship with robots, that will be most obvious insofar as we can recognize human equivalence for what it is. We will most easily recognize whether robots have become truly intelligent, conscious beings in the ways they turn toward the mystery, the wonder, and the transcendent imagination of the cosmos.
Acknowledgements
A shorter version of this article was first presented as “Religion among Robots: Speculation on the Future of Human and Machine Intelligence” at Kyung Hee University, Republic of Korea, on May 12, 2023. The author acknowledges the National Research Foundation of Korea for its support under the grant NRF-2022S1A5A2A01047056, as well as the principal investigator on that grant, Professor Yong Sup Song.
Notes
1. Throughout this article, I am cavalier with the terms AI, AGI, and robot. Generally speaking, and certainly with regard to “robot,” I refer to machines that reach human equivalence or even more advanced capabilities. That is, the hypothetical robots referred to are at least equivalent to human beings in capacity; I do not discuss the future of industrial machinery or domestic vacuum cleaners.
2. On the cognitive role of agency detection in religious settings, see Tremlin (2006).
3. It should be noted that golem attributions almost all happened at later times when Jews suffered political privation (which was not particularly the case in the times of Rabbi Elijah and Rabbi Loew). So, not only do the claims reveal the holy power of the supposed makers but also the utter lack of political power among those responsible for the attribution (Geraci 2010, 156–57).
4. In 1965, Gordon Moore noticed that the number of transistors on an integrated circuit, and hence the computational speed of computers, doubled roughly every year. This has since been revised to doubling every eighteen to twenty-four months and is subject to potential limits based on physics; nevertheless, this computational acceleration has been the justification for advocates of exponential progress in technology.
5. This companionship with robots is not uniquely Japanese, however. Robertson (2018, 157–58) also notes that, when destroyed, American military robots frequently receive commendations from their bereaved human operators.
6. It is worth noting that not all visitors experience the robot as Kannon (Baffelli 2021, 258).
References
Adas, Michael. 1989. Machines as the Measure of Men: Science, Technology, and Ideologies of Western Dominance. Ithaca, NY: Cornell University Press.
André, Fiona. 2023. “Meet the Christian Creators Designing Chatbots ‘with a Biblical Worldview.’” Religionnews.com, July 20. https://religionnews.com/2023/07/20/meet-the-christian-creators-designing-biblically-inspired-chatbots/.
Baffelli, Erica. 2021. “The Robot and the Fax: Robots, AI, and Buddhism in Japan.” In Itineraries of an Anthropologist: Studies in Honour of Massimo Raveri, edited by Giovanni Bulian and Silvia Rivadossi, 249–63. Venice, Italy: Venice University Press. DOI: http://doi.org/10.30687/978-88-6969-527-8/012.
Bailey, Edward. 1983. “The Implicit Religion of Contemporary Society: An Orientation and Plea for Its Study.” Religion 13 (1): 69–83. DOI: http://doi.org/10.1016/0048-721X(83)90006-4.
Bainbridge, William Sims. 1995. “Neural Network Models of Religious Belief.” Sociological Perspectives 38 (4): 483–95. DOI: http://doi.org/10.2307/1389269.
Brooke, John, and Geoffrey Cantor. 1998. Reconstructing Nature: The Engagement of Science and Religion. New York: Oxford University Press.
Cave, Stephen, and Kanta Dihal. 2020. “The Whiteness of AI.” Philosophy & Technology 33 (4): 685–703. DOI: http://doi.org/10.1007/s13347-020-00415-6.
Cave, Stephen, and Kanta Dihal (eds). 2023. Imagining AI: How the World Sees Intelligent Machines. Oxford: Oxford University Press. DOI: http://doi.org/10.1093/oso/9780192865366.001.0001.
Chatterjee, Partha. (1986) 1999. Nationalist Thought and the Colonial World. In The Partha Chatterjee Omnibus. New Delhi: Oxford University Press.
Chidester, David. 1996. Savage Systems: Colonialism and Comparative Religion in Southern Africa. Charlottesville, VA: University of Virginia Press.
Chidester, David. 2005. Authentic Fakes: Religion and American Popular Culture. Los Angeles: University of California Press. DOI: http://doi.org/10.1525/9780520938243.
Chidester, David. 2014. Empire of Religion: Imperialism & Comparative Religion. Chicago: University of Chicago Press. DOI: http://doi.org/10.7208/chicago/9780226117577.001.0001.
Cohen, John. 1966. Human Robots in Myth and Science. London: Allen & Unwin.
Darling, Kate. 2021. The New Breed: What Our History with Animals Reveals about Our Future with Robots. New York: Henry Holt.
de la Mettrie, Julien Offray. (1747) 1912. Man a Machine. Translated by Gertrude Bussey and M. W. Calkins. Chicago: Open Court. https://www.gutenberg.org/files/52090/52090-h/52090-h.htm.
Dorobantu, Marius. 2022. “Artificial Intelligence as a Testing Ground for Key Theological Questions.” Zygon: Journal of Religion and Science 57 (4): 984–99. DOI: http://doi.org/10.1111/zygo.12831.
Draper, John William. 1874. The History of the Conflict between Religion and Science. New York: D. Appleton.
Eliade, Mircea. (1964) 1985. “The Sacred and the Modern Artist.” In Symbolism, the Sacred, and the Arts, edited by Diane Apostolos-Cappadona, 81–85. New York: Crossroad.
Ethics & Religious Liberty Commission of the Southern Baptist Convention. 2019. “Artificial Intelligence: An Evangelical Statement of Principles.” https://erlc.com/resource-library/statements/artificial-intelligence-an-evangelical-statement-of-principles/.
Foerst, Anne. 2004. God in the Machine: What Robots Teach Us about Humanity and God. New York: Dutton.
Freud, Sigmund. (1927) 1989. The Future of an Illusion. Translated by James Strachey. New York: W. W. Norton.
Furse, Edmund. 1986. “The Theology of Robots.” New Blackfriars 67 (795): 377–86. DOI: http://doi.org/10.1111/j.1741-2005.1986.tb06559.x.
Furse, Edmund. 1996. “Towards the First Catholic Robot?” The Independent, October 26, 1996. www.comp.glam.ac.uk/pages/staff/efurse/Catholic-Robot/First-Catholic-Robot.html (accessed May 23, 2007; no longer available).
Gaiman, Neil (@neilhimself). 2023. “ChatGPT doesn’t give you . . .” Twitter, March 25, 2023, 11:49 p.m. https://x.com/neilhimself/status/1639610373115375616?lang=en.
Geduld, Harry M. 1978. “Genesis II: The Evolution of Synthetic Man.” In Robots Robots Robots, edited by Harry M. Geduld and Ronald Gottesman, 3–38. Boston: New York Graphic Society.
Gellers, Joshua C. 2021. Rights for Robots: Artificial Intelligence, Animal and Environmental Law. New York: Routledge. DOI: http://doi.org/10.4324/9780429288159.
Geraci, Robert M. 2006. “Spiritual Robots: Religion and Our Scientific View of the Natural World.” Theology and Science 4 (3): 229–46. DOI: http://doi.org/10.1080/14746700600952993.
Geraci, Robert M. 2008. “Apocalyptic AI: Religion and the Promise of Artificial Intelligence.” Journal of the American Academy of Religion 76 (1): 138–66. DOI: http://doi.org/10.1093/jaarel/lfm101.
Geraci, Robert M. 2010. Apocalyptic AI: Visions of Heaven in Robotics, Artificial Intelligence, and Virtual Reality. New York: Oxford University Press.
Geraci, Robert M. 2018. Temples of Modernity: Nationalism, Hinduism, and Transhumanism in South Indian Science. Lanham, MD: Lexington.
Geraci, Robert M. 2020. “A Hydra-Logical Approach: Acknowledging Complexity in the Study of Religion, Science, and Technology.” Zygon: Journal of Religion and Science 55 (4): 948–70. DOI: http://doi.org/10.1111/zygo.12650.
Geraci, Robert M. 2022. Futures of Artificial Intelligence: Perspectives from India and the U.S. Delhi: Oxford University Press. DOI: http://doi.org/10.1093/oso/9788194831679.001.0001.
Geraci, Robert M., and Stephen Kaplan. 2024. “Hinduism and AI.” In Cambridge Encyclopedia of Religion and Artificial Intelligence, edited by Fraser Watts and Beth Singler. Cambridge: Cambridge University Press.
Good, Irving. 1966. “Speculations Concerning the First Ultraintelligent Machine.” Advances in Computers 6:31–88. DOI: http://doi.org/10.1016/S0065-2458(08)60418-0.
Gould, Stephen Jay. 1999. Rocks of Ages: Science and Religion in the Fullness of Life. New York: The Library of Contemporary Thought.
Gunkel, David J. 2018. Robot Rights. Cambridge, MA: The MIT Press. DOI: http://doi.org/10.7551/mitpress/11444.001.0001.
Hayward, Jeremy W., and Francisco J. Varela (eds). 1992. Gentle Bridges: Conversations with the Dalai Lama on the Sciences of the Mind. Boston: Shambhala.
Herzfeld, Noreen. 2023. The Artifice of Intelligence: Divine and Human Relationship in a Robotic Age. Minneapolis: Fortress.
Hongladarom, Soraj. 2020. The Ethics of AI and Robotics: A Buddhist Viewpoint. Lanham, MD: Lexington.
Hornyak, Timothy. 2006. Loving the Machine: The Art and Science of Japanese Robotics. New York: Kodansha.
Jackson, Joshua Conrad, Kai Chi Yam, Pok Man Tang, Ting Liu, and Azim Shariff. 2023. “Exposure to Robot Preachers Undermines Religious Commitment.” Journal of Experimental Psychology: General 152 (12): 3344–58. DOI: http://doi.org/10.1037/xge0001443.
Katsuno, Hirofumi, and Daniel White. 2023. “Engineering Robots with Heart in Japan: The Politics of Cultural Difference in Artificial Emotional Intelligence.” In Imagining AI: How the World Sees Intelligent Machines, edited by Stephen Cave and Kanta Dihal, 295–317. Oxford: Oxford University Press. DOI: http://doi.org/10.1093/oso/9780192865366.003.0019.
Kimura, Takeshi. 2018. “Masahiro Mori’s Buddhist Philosophy of Robot.” Paladyn, Journal of Behavioral Robotics 9 (1): 72–81. DOI: http://doi.org/10.1515/pjbr-2018-0004.
Kirman, Alan, Pierre Livet, and Miriam Teschl. 2010. “Rationality and Emotions.” Philosophical Transactions of the Royal Society B 365 (1538): 215–19. DOI: http://doi.org/10.1098/rstb.2009.0194.
Klein, Zvika. 2023. “Hasidic Rabbi Releases ‘Kosher’ AI Chatbot Alternative to ChatGPT.” Jerusalem Post, May 7, 2023. https://www.jpost.com/judaism/article-742328.
Knapton, Sarah. 2020. “Watch: Robot That Can Feel Pain Invented by Scientists; Experts Warn Development Paves Way for ‘Blade Runner’ Future Where Machines Believe They Are Alive.” Telegraph Online, February 22, 2020. https://www.telegraph.co.uk/science/2020/02/22/watch-robot-can-feel-pain-invented-scientists/.
Kurzweil, Ray. 1999. The Age of Spiritual Machines: When Computers Exceed Human Intelligence. New York: Viking.
Kurzweil, Ray. 2005. The Singularity Is Near: When Humans Transcend Biology. New York: Viking.
Kuyucu, Ayşe Kübra. 2023. “Chatbot Applications That Tell People about Islam.” Medium.com, February 8, 2023. https://medium.com/tech-talk-with-chatgpt/chatbot-applications-that-tell-people-about-islam-2a06dd74aaf6.
Lourdusamy, John. 2004. Science and National Consciousness in Bengal (1870–1930). London: Sangam.
Mayor, Adrienne. 2018. Gods and Robots: Myths, Machines, and Ancient Dreams of Technology. Princeton, NJ: Princeton University Press. DOI: http://doi.org/10.1515/9780691185446.
Midgley, Mary. 1992. Science as Salvation. New York: Routledge.
Moment. 2018. “Ask the Rabbis: Can a Robot Be Jewish?” Moment Magazine July-August. https://momentmag.com/ask-the-rabbis-can-a-robot-be-jewish/.
Moravec, Hans. 1988. Mind Children: The Future of Robot and Human Intelligence. Cambridge, MA: Harvard University Press.
Mori, Masahiro. (1981) 1999. The Buddha in the Robot: A Robot Engineer’s Thoughts on Science and Religion. Translated by Charles S. Terry. Tokyo: Kosei.
Nagel, Thomas. 1974. “What Is It Like to Be a Bat?” The Philosophical Review 83 (4): 435–50. DOI: http://doi.org/10.2307/2183914.
Nandy, Ashis. (1983) 2012. The Intimate Enemy: Loss and Recovery of Self under Colonialism. New York: Oxford University Press.
Noble, David F. 1999. The Religion of Technology: The Divinity of Man and the Spirit of Invention. New York: Penguin. DOI: http://doi.org/10.22230/cjc.1998v23n4a1072.
Nooreyezdan, Nadia. 2023. “India’s Religious AI Chatbots Are Speaking in the Voice of God—and Condoning Violence.” Restofworld.org, May 9, 2023. https://restofworld.org/2023/chatgpt-religious-chatbots-india-gitagpt-krishna/.
Poole, Eve. 2024. Robot Souls: Programming in Humanity. Boca Raton, FL: CRC Press. DOI: http://doi.org/10.1201/9781003366614.
Robertson, Jennifer. 2018. “Robot Reincarnation: Rubbish, Artefacts, and Mortuary Rituals.” In Consuming Life in Post-Bubble Japan: A Transdisciplinary Perspective, edited by Katarzyna J. Cwiertka and Ewa Machotka, 153–74. Amsterdam: Amsterdam University Press. DOI: http://doi.org/10.1515/9789048530021-011.
Šabanović, Selma. 2014. “Inventing Japan’s ‘Robotics Culture’: The Repeated Assembly of Science, Technology, and Culture in Social Robotics.” Social Studies of Science 44 (3): 342–67. DOI: http://doi.org/10.1177/0306312713509704.
Said, Edward. 1978. Orientalism. New York: Pantheon.
Schodt, Frederik L. 1988. Inside the Robot Kingdom: Japan, Mechatronics, and the Coming Robotopia. New York: Kodansha. DOI: http://doi.org/10.1016/0278-6125(88)90048-9.
Scholem, Gershom. 1971. “The Golem of Prague and the Golem of Rehovot.” In The Messianic Idea in Judaism: And Other Essays on Jewish Spirituality, 335–40. New York: Schocken Books.
Sharma, Kamayani. 2023. “A Robot in a 400-Year-Old Painting of Jahangir Is a Lesson in How Rulers Project Global Power.” Scroll, June 18, 2023. https://scroll.in/magazine/1050990/a-robot-in-a-400-year-old-painting-of-jahangir-is-a-lesson-in-how-rulers-project-global-power.
Singler, Beth. 2019. “Existential Hope and Existential Despair in AI Apocalypticism and Transhumanism.” Zygon: Journal of Religion and Science 54 (1): 156–76. DOI: http://doi.org/10.1111/zygo.12494.
Singler, Beth. 2024. “‘Will AI Create a Religion?’: Views of the Algorithmic Forms of the Religious Life in Popular Discourse.” American Religion 5 (1): 95–103. DOI: http://doi.org/10.2979/amerreli.5.1.05.
Song, Bing. 2023. “Attitudes of Pre-Qin Thinkers towards Machinery and Their Influence on Technological Development in China.” In Imagining AI: How the World Sees Intelligent Machines, edited by Stephen Cave and Kanta Dihal, 353–60. Oxford: Oxford University Press. DOI: http://doi.org/10.1093/oso/9780192865366.003.0022.
Song, Yong Sup. 2020. “Religious AI as an Option to the Risks of Superintelligence: A Protestant Theological Perspective.” Theology and Science 19 (1): 65–78. DOI: http://doi.org/10.1080/14746700.2020.1825196.
Stark, Rodney, and William Sims Bainbridge. 1985. The Future of Religion: Secularization, Revival, and Cult Formation. Los Angeles: University of California Press. DOI: http://doi.org/10.1525/9780520341340.
Tremlin, Todd. 2006. Minds and Gods: The Cognitive Foundations of Religion. New York: Oxford University Press. DOI: http://doi.org/10.1093/0195305345.001.0001.
Truitt, E. R. 2015. Medieval Robots: Mechanism, Magic, Nature, and Art. Philadelphia: University of Pennsylvania Press. DOI: http://doi.org/10.9783/9780812291407.
Ulam, Stanislaw. 1958. “Tribute to John von Neumann.” Bulletin of the American Mathematical Society 64 (3, 2): 1–49. DOI: http://doi.org/10.1090/S0002-9904-1958-10189-5.
White, Andrew Dickson. 1896. A History of the Warfare of Science with Theology in Christendom. New York: D. Appleton and Company. DOI: http://doi.org/10.2307/1833620.
Wiener, Norbert. 1964. God and Golem, Inc.: A Comment on Certain Points where Cybernetics Impinges on Religion. Cambridge, MA: The MIT Press. DOI: http://doi.org/10.7551/mitpress/3316.001.0001.
Yudkowsky, Eliezer. 2001. Creating Friendly AI 1.0: The Analysis and Design of Benevolent Goal Architectures. San Francisco, CA: Machine Intelligence Research Institute (then The Singularity Institute).