Tag Archives: artificial-intelligence

Asimov, AI, Robotics, and the Human Future | CultureWatch

Will the ‘Three Laws of Robotics’ save us?

Some transhumanists are clearly excited about a robotic, AI world. In an article posted yesterday I wrote about “The Promises and Perils of AI and Our Posthuman Future”. In it I quoted from five key titles on this topic. Authors included those who are experts in the field, along with those who offer ethical, philosophical and theological commentary on all this.

I noted how these thinkers and writers are divided in terms of how things will pan out. Some of them are rather optimistic and positive about how these developments will unfold, while some are much more pessimistic and negative.

As I have stated before when I write about such topics, I tend to be in the latter camp. Yes, many benefits and advantages to life have already occurred because of these new technologies, but we dare not be naïve about the very real damage and destruction they can also produce.

One fellow sent in a comment to my article, mentioning the well-known laws of Isaac Asimov concerning robotics. I replied by saying that yes, a number of the books listed in my piece did speak to this. For those not familiar with him, Asimov (1920-1992) was one of the big three English-speaking science fiction writers of last century, along with Robert A. Heinlein and Arthur C. Clarke.

In 1950 a number of his robot stories were collected and published in I, Robot. Included there was a set of ethical rules for robots and intelligent machines called the “Three Laws of Robotics”. The three laws say this:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Given the comment by my friend and my response, it seems worth while taking all this a bit further. So let me go back to one of the books I featured in my list, and quote from it further on this matter. In the James Barrat book, The Final Invention for example he spoke to this issue (although I left it out of the quote that I had shared). So here is what he said early on in his book:

And how will the machines take over? Is the best, most realistic scenario threatening to us or not?

When posed with this question some of the most accomplished scientists I spoke with cited science-fiction writer Isaac Asimov’s Three Laws of Robotics. These rules, they blithely replied, would be “built in” to the AIs, so we have nothing to fear. They spoke as if this were settled science. We’ll discuss the three laws in chapter 1, but it’s enough to say for now that when someone proposes Asimov’s laws as the solution to the dilemma of superintelligent machines, it means they’ve spent little time thinking or exchanging ideas about the problem. How to make friendly intelligent machines and what to fear from superintelligent machines has moved beyond Asimov’s tropes. Being highly capable and accomplished in Al doesn’t inoculate you from naïveté about its perils. (pp. 4-5)

And here is part of what he does say in Chapter 1:

Now, it is an anthropomorphic fallacy to conclude that a superintelligent AI will not like humans, and that it will be homicidal, like the Hal 9000 from the movie 2001: A Space Odyssey, Skynet from the Terminator movie franchise, and all the other malevolent machine intelligences represented in fiction. We humans anthropomorphize all the time. A hurricane isn’t trying to kill us any more than it’s trying to make sandwiches, but we will give that storm a name and feel angry about the buckets of rain and lightning bolts it is throwing down on our neighborhood. We will shake our fist at the sky as if we could threaten a hurricane.

It is just as irrational to conclude that a machine one hundred or one thousand times more intelligent than we are would love us and want to protect us. It is possible, but far from guaranteed. On its own an AI will not feel gratitude for the gift of being created unless gratitude is in its programming. Machines are amoral, and it is dangerous to assume otherwise. Unlike our intelligence, machine-based superintelligence will not evolve in an ecosystem in which empathy is rewarded and passed on to subsequent generations. It will not have inherited friendliness. Creating friendly artificial intelligence, and whether or not it is possible, is a big question and an even bigger task for researchers and engineers who think about and are working to create AI. We do not know if artificial intelligence will have any emotional qualities, even if scientists try their best to make it so. However, scientists do believe, as we will explore, that AI will have its own drives. And sufficiently intelligent Al will be in a strong position to fulfill those drives.

And that brings us to the root of the problem of sharing the planet with an intelligence greater than our own. What if its drives are not compatible with human survival? Remember, we are talking about a machine that could be a thousand, a million, an uncountable number of times more intelligent than we are—it is hard to overestimate what it will be able to do, and impossible to know what it will think. It does not have to hate us before choosing to use our molecules for a purpose other than keeping us alive. You and I are hundreds of times smarter than field mice, and share about 90 percent of our DNA with them. But do we consult them before plowing under their dens for agriculture? Do we ask lab monkeys for their opinions before we crush their heads to learn about sports injuries? We don’t hate mice or monkeys, yet we treat them cruelly. Superintelligent AI won’t have to hate us to destroy us. (pp. 17-19)

Image of Our Final Invention: Artificial Intelligence and the End of the Human Era
Our Final Invention: Artificial Intelligence and the End of the Human Era by Barrat, James (Author)

He then addresses the Three laws of Asimov:

[A]nthropomorphizing about machines leads to misconceptions, and misconceptions about how to safely make dangerous machines leads to catastrophes. In the short story, “Runaround,” included in the classic science-fiction collection I, Robot, author Isaac Asimov introduced his three laws of robotics. They were fused into the neural networks of the robots’ “positronic” brains: (pp. 19-20)

He lists the three laws and then closes the chapter with these words:

The laws contain echoes of the Golden Rule (“Thou Shall Not Kill”), the Judeo-Christian notion that sin results from acts committed and omitted, the physician’s Hippocratic oath, and even the right to self-defense. Sounds pretty good, right? Except they never work. In “Runaround,” mining engineers on the surface of Mars order a robot to retrieve an element that is poisonous to it. Instead, it gets stuck in a feedback loop between law two—obey orders—and law three—protect yourself. The robot walks in drunken circles until the engineers risk their lives to rescue it. And so it goes with every Asimov robot tale—unanticipated consequences result from contradictions inherent in the three laws. Only by working around the laws are disasters averted.

Asimov was generating plot lines, not trying to solve safety issues in the real world. Where you and I live his laws fall short. For starters, they’re insufficiently precise. What exactly will constitute a “robot” when humans augment their bodies and brains with intelligent prosthetics and implants? For that matter, what will constitute a human? “Orders,” “injure,” and “existence” are similarly nebulous terms.

Tricking robots into performing criminal acts would be simple, unless the robots had perfect comprehension of all of human knowledge. “Put a little dimethylmercury in Charlie’s shampoo” is a recipe for murder only if you know that dimethylmercury is a neurotoxin. Asimov eventually added a fourth law, the Zeroth Law, prohibiting robots from harming mankind as a whole, but it doesn’t solve the problems.

Yet unreliable as Asimov’s laws are, they’re our most often cited attempt to codify our future relationship with intelligent machines. That’s a frightening proposition. Are Asimov’s laws all we’ve got?

I’m afraid it’s worse than that. Semiautonomous robotic drones already kill dozens of people each year. Fifty-six countries have or are developing battlefield robots. The race is on to make them autonomous and intelligent. For the most part, discussions of ethics in Al and technological advances take place in different worlds.

As I’ll argue, AI is a dual-use technology like nuclear fission. Nuclear fission can illuminate cities or incinerate them. Its terrible power was unimaginable to most people before 1945. With advanced Al, we’re in the 1930s right now. We’re unlikely to survive an introduction as abrupt as nuclear fission’s. (pp. 20-21)

It has always been the case that science and technology tend to race ahead of ethical and spiritual considerations. As far as I am aware, Barrat is not a Christian. But he is asking a lot of important questions and is not skirting around the moral dilemmas that arise here.

As he rightly points out, we will need something more solid and secure than Asimov’s laws to help us steer through the murky waters that we are now in and that lie ahead. Many other books do similar things, and I listed 40 of them in a recent recommended reading list: https://billmuehlenberg.com/2025/01/17/what-to-read-on-ai-transhumanism-and-the-new-digital-technologies/

And other books not found in that list could also be mentioned, including the important 2014 volume by Oxford University philosopher Nick Bostrom, Superintelligence: Paths, Dangers, Strategies. It is vital that these folks and others keep asking the hard and penetrating questions.

But the worry is that such reflections, critiques and questioning will be outpaced by the very rapid advances in AI and related issues. As such, the global future is looking unsettling at best.

[1652 words]

The post Asimov, AI, Robotics, and the Human Future appeared first on CultureWatch.

The Promises and Perils of AI and Our Posthuman Future | CultureWatch

Key thoughts on where we are heading:

As science and technology march inevitably further on, what we find is always a mixed bag. New developments and discoveries and inventions can be a real Godsend, making life so much better, easier and more efficient. Of course many of these same things can be used for great evil as well, and it is always a balancing act in trying to pursue the good while restraining the bad.

Christians are not to be Luddites when it comes to new technologies, but neither are they to be gullible and unaware. In a fallen world almost everything can be used for good or ill. And given how AI is not some stand-alone thing, but is too often part of much bigger and scarier agendas, such as those of the transhumanist and posthumanist activists, great care is needed.

Artificial intelligence, along with so many related matters, be it robotics, genetic engineering, new digital technologies and so on, are developing far more rapidly than our ability to properly assess them morally, socially and spiritually. The many benefits and goods of all this can easily be outweighed by the many dangers and risks.

So Christians especially need to think carefully and prayerfully about our posthuman future. If some believers might be far too critical, others can be far too gullible and unaware of the brave new world implications found here. One social media friend for example made this comment when I was discussing these matters:

“Should we fear AI like Christian leaders have in the past? I think it will be a race to take advantage of it’s potential. With it we can translate the Bible without little effort to all the languages of the world. Communist and Muslim nations will not be able to stop the flow of information to their people. This is great potential to spark a global Christian Great Awakening.” I replied to him as follows:

AI is FAR more than about Bible translation of course. The Christian is called to be a biblical realist, fully aware of sin, power and corruption. Sure, some technologies can be used for good, but we dare not be naïve here. The transhumanists and posthumanists are fully committed to their dystopian vision. Go back and reread The Abolition of Man by Lewis, or any of the 40 books I discuss in the comment below.

That annotated reading list is found here: https://billmuehlenberg.com/2025/01/17/what-to-read-on-ai-transhumanism-and-the-new-digital-technologies/

In this article I want to quote from just five of those volumes, demonstrating that some of those most involved in these areas are very much concerned about where things are heading. Refer back to my reading list for full bibliographic details of these books.

One volume, The Coming Wave, is penned by someone with a long history in this field. Mustafa Suleyman is currently the CEO of Microsoft AI. Early on in this important book he says this:

AI has been climbing the ladder of cognitive abilities for decades. And it now looks set to reach human-level performance across a very wide range of tasks within the next three years. That is a big claim, but if I’m even close to right, the implications are truly profound. What had, when we founded DeepMind, felt quixotic has become not just plausible but seemingly inevitable.

From the start, it was clear to me that AI would be a powerful tool for extraordinary good but, like most forms of power, one fraught with immense dangers and ethical dilemmas, too. I have long worried about not just the consequences of advancing AI but where the entire technological ecosystem was heading. Beyond AI, a wider revolution was underway, with AI feeding a powerful, emerging generation of genetic technologies and robotics. Further progress in one area accelerates the others in a chaotic and cross-catalyzing process beyond anyone’s direct control. It was clear that if we or others were successful in replicating human intelligence, this wasn’t just profitable business as usual but a seismic shift for humanity, inaugurating an era when unprecedented opportunities would be matched by unprecedented risks.

As the technology has progressed over the years, my concerns have grown. What if the wave is a tsunami? (p. 9)

For three decades Stuart Russell has been a leading figure in AI science. In Human Compatible: AI and the Problem of Control he asks a number of hard but crucial questions. In the book’s Afterword he writes:

Meeting a criterion such as generating “true and accurate” content does not, of course, guarantee that the system is completely safe. For example, a sufficiently capable system could be entirely truthful about its ineluctable plan to take control of the world. What we really need, of course, are systems that are provably safe and beneficial to humans, as outlined in this book. Unfortunately, the AI safety research community (which includes my own research group) has not moved nearly fast enough to develop an alternative technology path that is both safe and highly capable.

There is now broad recognition among governments that AI safety research is a high priority, and some observers have suggested the creation of an international research organization, comparable to CERN in particle physics, to focus resources and talent on this problem. This organization would be a natural complement to the international regulatory body suggested by British prime minister Rishi Sunak.

Despite the torrent of activity around Al regulation, almost no attention has been paid to the Dr. Evil problem mentioned in Chapter 10—the possibility that bad actors will deliberately deploy highly capable but unsafe AI systems for their own ends, leading to a potential loss of human control on a global scale. The prevalence of open-source Al technology will make this increasingly likely; moreover, policing the spread of software seems to be essentially impossible. (p. 320)

Mo Gawdat, the former chief business officer of Google [X] said this in Scary Smart:

It is predicted that by the year 2029, which is relatively just around the corner, machine intelligence will break out of specific tasks and into general intelligence. By then, there will be machines that are smarter than humans, full stop. Those machines will not only become smarter, they will know more (as they have access to the entire internet as their memory pool) and they will communicate between each other better, thus enhancing their knowledge. Think about it: when you or I have an accident driving a car, you or I learn, but when a self-driving car makes a mistake, all self-driving cars learn. Every single one of them, including the ones that have not yet been ‘born’.

By 2049, probably in our lifetimes and surely in those of the next generation, AI is predicted to be a billion times smarter (in everything) than the smartest human. To put this into perspective, your intelligence, in comparison to that machine, will be comparable to the intelligence of a fly in comparison to Einstein. We call that moment singularity. Singularity is the moment beyond which we can no longer see, we can no longer forecast. It is the moment beyond which we cannot predict how AI will behave because our current perception and trajectories will no longer apply.

Now the question becomes: how do you convince this superbeing that there is actually no point squashing a fly? I mean, we humans, collectively or individually, so far seem to have failed to grasp that simple concept, using our abundant intelligence. When our artificially intelligent (currently infant) supermachines become teenagers, will they become superheroes or supervillains? Good question, huh?

When such superpower is unleashed, anything can happen…. (pp. 7-8)

Image of Masters or Slaves?: AI And The Future Of Humanity
Masters or Slaves?: AI And The Future Of Humanity by Peckham, Jeremy (Author)

Scientist Jeremy Peckham has been involved in AI for some thirty years, and he offers this warning in Masters or Slaves? AI and the Future of Humanity:

While there’s a push towards creating ‘trustworthy AI’, even going as far as having product markings and standards approvals, I believe that this is dangerous because it doesn’t address the core effects on humanity. It focuses on important but subsidiary issues such as data bias and transparency. In essence many AI applications are just opaque algorithms, trained on a vast amount of data. As we’ve seen, this data could be skewed, and now the probability of input data machines matching this database was reached cannot be known. We cannot think of AI in the same way that we might think about constructing a safe or trustworthy bridge for traffic to cross, because in bridge design the engineering principles are well understood, verifiable and transparent.

The issue that we face as a civilization isn’t whether AI is or can ever be made trustworthy, but how we can use it wisely, given its limitations in the way it shapes us. (p. 214)

Finally, James Barrat in Our Final Invention makes this rather ominous remark:

In writing this book I spoke with scientists who create artificial intelligence for robotics, Internet search, data mining, face recognition, and other applications. I spoke with scientists trying to create human-level artificial intelligence, which will have countless applications, and will fundamentally alter our existence (if it doesn’t end it first). I spoke with chief technology officers of Al companies and the technical advisors for classified Department of Defense initiatives. Every one of these people was convinced that in the future all the important decisions governing the lives of humans will be made by machines or humans whose intelligence is augmented by machines. When? Many think this will take place within their lifetimes….

But artificial intelligence brings computers to life and turns them into something else. If it’s inevitable that machines will make our decisions, then when will the machines get this power, and will they get it with our compliance? How will they gain control, and how quickly? These are questions I’ve addressed in this book….

I’m not the first to propose that we’re on a collision course. Our species is going to mortally struggle with this problem. This book explores the plausibility of losing control of our future to machines that won’t necessarily hate us, but that will develop unexpected behaviors as they attain high levels of the most unpredictable and powerful force in the universe, levels that we cannot ourselves reach, and behaviors that probably won’t be compatible with our survival. A force so unstable and mysterious, nature achieved it in full just once—intelligence. (pp. 3-5)

The words of these experts need to be carefully considered. And lest some claim that I am just quoting from religious worry warts, as far as I know, only Peckham of the five considered here is a Christian. So plenty of non-Christian or non-religious thinkers and players in this field are sharing very real concerns about our posthuman future.

We need to heed their warnings.

[1783 words]

The post The Promises and Perils of AI and Our Posthuman Future appeared first on CultureWatch.

What to Read on AI, Transhumanism, and the New Digital Technologies | CultureWatch

We need to be aware of the AI and posthumanist revolutions:

Christians of all people should be keeping an eye on new developments, trends and changes in society. They need to offer a prophetic and biblical critique. The area of artificial intelligence (AI) and related issues is a clear case in point. Two years ago I posted an article featuring 22 of the top books on these issues: https://billmuehlenberg.com/2023/02/27/top-22-books-on-transhumanism-ai-and-the-new-technologies/      

But this is one major growth area, with new advances happening every day. Books on this are pouring off the presses, so I have updated my list. Here then are 40 books, mostly, but not exclusively, penned by Christians.

Older works

Ellul, Jacques, The Technological Society. Alfred A. Knopf, 1964.
An important earlier critique of the technological world we are now living in, warning that “technique,” efficiency, automation and the like are threatening what it means to be human.

Groothuis, Douglas, The Soul in Cyberspace. Baker, 1997.
This was one of the earlier evangelical appraisals of, and warnings about, the new information technologies, cyberspace and the like. Although dated by now, it still offers plenty of useful discussion of the dangers of the god of technology, and the impact on our humanity.

Lewis, C. S., The Abolition of Man. Macmillan, 1947, 1976.
The volume was certainly prophetic in its warning about scientism and our scientific elites and technocrats who can easily control the masses with their visions for a brave new world. A must read volume, along with his 1945 work of fiction, That Hideous Strength.

Postman, Neil, Technopoly: The Surrender of Culture to Technology. Vintage, 1993.
The deification of technology is the focus here. Following on from Ellul and others, he warns that technique and technology are replacing or undermining things like culture, art, beauty and human relationships.

Image of Dark Aeon: Transhumanism and the War Against Humanity
Dark Aeon: Transhumanism and the War Against Humanity by Allen, Joe (Author), Bannon, Stephen K. (Foreword)

Newer volumes

Allen, Joe, Dark Aeon: Transhumanism and the War Against Humanity. War Room Books, 2023.
Probably one of the best books so far, offering a careful, detailed, comprehensive and well-written assessment of AI and the transhumanist project, warning how we need to really apply some brakes to all this. He offers helpful philosophical, theological and ethical considerations.

Barrat, James, Our Final Invention: Artificial Intelligence and the End of the Human Era. Quercus, 2013, 2023.
If one simply compares the Preface to the original version to this one ten years on, one will find that the concerns and worries Barrat had have only greatly intensified. The AI juggernaut shows no signs of slowing, and the trajectory it is on is looking quite frightening.

Brooks, Ed and Pete Nicholas, Virtually Human: Flourishing in a Digital World. IVP, 2015.
This volume looks more at how the Christian should think about our changing technological world, and how we can maintain a biblical view of life and the human person in light of all these changes.

Bryant, John, Beyond Human? Science and the Changing Face of Humanity. Lion, 2013.
This somewhat older book discusses how changes in science and technology are resulting in changes to humanity. He looks at various issues, including genetics, medical developments, information and communication technologies, and transhumanism. A helpful assessment by a Christian ethicist and biologist.

Driscoll, Stephen, Made in Our Image: God, Artificial Intelligence and You. Matthias Media, 2024.
The Australian writer seeks to apply biblical principles to the changing face of AI and related issues. The technological, social and personal changes being unleashed must be carefully assessed in light of Scripture.

Dyer, John, From the Garden to the City: The Place of Technology in the Story of God, rev. ed. Kregel, 2011, 2022.
In this second edition of his earlier work, the theology professor and web designer looks at the new technologies and their negative and positive features in terms of the overall biblical story line, and how they impact on what it means to be human.

Fesko, John, The Christian and Technology. Evangelical Press, 2020.
A brief look at six technological advances and their positive and negative impacts: screens, social media, cars, books, virtual reality and the internet. A short but helpful volume.

Gawdat, Mo, Scary Smart: The Future of Artificial Intelligence and How You Can Save Our World. Bluebird, 2021, 2022.
The former Google business officer thinks AI can go awry, but he optimistically hopes we can steer it in the right direction. Whether this rather upbeat view of how things will ultimately pan out remains to be seen.

Gay, Craig, Modern Technology and the Human Future: A Christian Appraisal. IVP, 2018.
This is a quite detailed examination of how the new technologies are shaping our world and what it means to be human. A very helpful biblical assessment of where we are headed and how we can try to keep things in check.

Godde, Sandra, Reaching for Immortality: Can Science Cheat Death? A Christian Response to Transhumanism. Wipf & Stock, 2022.
A quite brief but useful look at how the Christian should think about the transhumanism agenda. See my full length review of this helpful volume here: https://billmuehlenberg.com/2022/10/26/a-review-of-reaching-for-immortality-by-sandra-godde/

Herrick, James, Visions of Technological Transcendence: Human Enhancement and the Rhetoric of the Future. Parlor Press, 2017.
Belief in progress, the betterment of mankind, and human immortality has always been with us. Breakthroughs in biotechnology and computer sciences are making it a reality, but at what cost? Herrick offers a useful historical, scientific, philosophical and theological assessment – and warning – of this quest,

Herzfeld, Noreen, The Artifice of Intelligence: Divine and Human Relationship in a Robotic Age. Augsburg/Fortress Press, 2023.
Offers a theological and philosophical look at AI, drawing on Karl Barth and others. Numerous issues are discussed, ranging from procreation to just war theory. At times she is a bit speculative: She wonders for example if the Spirit of God can inhabit machines as he does humans.

Lennox, John, 2084: Artificial Intelligence and the Future of HumanityZondervan, 2020.
As always, Lennox offers a carefully argued and useful look at how Christians can think about the age they live in – in this case, the age of AI and transhumanism. An incisive, well-documented and helpful volume by the English mathematician and apologist.

Lennox, John, 2084 and the AI Revolution: How Artificial Intelligence Informs Our Future, Updated and Expanded Edition. Zondervan, 2024.
A substantially revised and enlarged update on his earlier volume. Very helpful indeed. See my review here: https://billmuehlenberg.com/2024/12/10/john-lennox-on-ai/

Miller, Julie, Critiquing Transhumanism: The Human Cost of Pursuing Techno-Utopia. Public Philosophy Press, 2022.
In this important volume the Christian apologist and philosopher offers a thorough critique of transhumanism and our brave new future. She sounds the alarm as to where this is taking us, and insists on a solid biblical response to all this. Very useful.

Peckham, Jeremy, Masters or Slaves? AI and the Future of Humanity. IVP, 2021.
Having spent some 25 years working in the world of AI, Peckham brings a lot of experience and insight to bear on how the Christian should think about and make use of these new developments. A helpful and challenging work.

Rana, Fazale with Kenneth Samples, Humans 2.0: Scientific, Philosophical, and Theological Perspectives on TranshumanismReasons to Believe, 2019.
The authors, with backgrounds in biochemistry, theology and philosophy offer a detailed examination of where technology is taking us. They look at scientific and ethical matters, and assess things from a biblical framework. Many bases are covered here – a recommended volume.

Reinke, Tony, God, Technology and the Christian Life. Crossway, 2022.
A lengthy and detailed examination of technology and how the Christian should approach it. A helpful and quite thorough work offering useful biblical assessment of the technological revolution.

Rose Michael, The Art of Being Human: What “Old Books” Can Tell Us (And Warn Us) About Living in the 21st Century. Angelico, 2022.
This volume takes a rather different approach when dealing with issues such as transhumanism, the devaluation of persons, the new technologies, genetic engineering, and the like. He assesses the writings of people like George Orwell, Ray Bradbury, C. S. Lewis, Jonathan Swift, Aldous Huxley, John Le Carre, Nathaniel Hawthorne, and others as he seeks to show how we can preserve the person and protect human rights from where we are heading.

Rosen, Christine, The Extinction of Experience: Being Human in a Disembodied WorldW. W. Norton, 2024.
Admittedly, this book talks the least about things like AI and transhumanism, but it perhaps talks the most – of all the volumes listed here – about important matters that relate to all this, such as personhood, what it means to be human, and how we can recover what we are so quickly losing.

Rubin, Charles, Eclipse of Man: Human Extinction and the Meaning of Progress. Encounter Books, 2014.
The transhumanist/posthumanist agenda is not the path to a better, more glorious future, but a certain road to our ruin. The ideal of seeking the perfectibility of man has always had devastating results, and the new technologies will ensure that such utopian dreams will simply become dystopian nightmares. A very important and engaging look at our uncertain future.

Russell, Stuart, Human Compatible: AI and the Problem of Control. Penguin, 2019, 2023.
Who controls the controllers? Who decides how the AI revolution proceeds. Can we ever put the genie back in the bottle? Russell, a long-standing AI scientist, says we can gain from AI, but we can also lose everything, so great care is needed.

Scott, Dan, Faith in an Age of AI: Christianity Through the Looking Glass of Artificial Intelligence. Eleison, 2023.
This is more of a broad-brush look at things, not just AI in particular. Drawing upon the wisdom of past and present thinkers. Scott provides a bigger picture of how we can assess where our culture is heading.

Shatzer, Jacob, Transhumanism and the Image of God: Today’s Technology and the Future of Christian Discipleship. IVP, 2019.
The American theologian examines the various new technologies and warns how so many of them are having a very real and negative impact on what it means to be a human. He utilises the biblical view of humanity and personhood to assess how and where we are heading to a posthuman future.

Song, Felicia Wu, Restless Devices: Recovering Personhood, Presence and Place in the Digital Age. IVP, 2021.
Here the Christian cultural sociologist looks at how the new digital technologies are changing the world and us along with it. She shows how this new digital revolution is being driven, and offers practical help in how we can utilise them without being seduced and enslaved by them.

Spencer, Nick and Hannah Waite, Playing God: Science, Religion and the Future of Humanity. SPCK, 2024.
Specific matters such as AI is just one of a number of issues covered in this book, as the authors look at how science and Christianity can cohere. The authors take a much more optimistic and positive view of where all these technologies are heading.

Suleyman, Mustafa, The Coming Wave: AI, Power and Our Future. Vintage, 2023.
The artificial and biological intelligences are without doubt drastically reshaping our future, but they must be contained and controlled now before they spiral out of control. A very learned, wise, and wide-ranging look at the new technologies and how they must be reined in before it is too late.

Tegmark, Max, Life 3.0. Penguin, 2017, 2018.
An important and detailed look at how life is being radically altered in the AI and AGI age. He covers quite a bit of ground here and offers a number of prospects for how the future might unfold. What eventually occurs in large measure depends on what sort of future we want, and even there we find plenty of disagreement. This volume offers helpful analysis and insight.

Thacker, Jason, The Age of AI. Zondervan, 2020.
The Christian thinker and ethicist assesses information technologies and artificial intelligence, looking at how they impact on so many areas, including work, medicine and our families. These things are tools that can be quite useful if used properly, but can also be very harmful as well. Care is needed as we chart an uncertain future.

Thacker, Jason, Following Jesus in a Digital Age. B&H, 2022.
This is a short, useful and practical book on how Christians can live fully human and fully God-honouring lives in this new age of technology.

Wood, Patrick, The Evil Twins of Technocracy and Transhumanism. Coherent Pub., 2022.
A strong warning about how the technocrats and groups like the World Economic Forum are using the new technologies for decidedly evil ends. He discusses Gates, Schwab, Harari and others, and looks at the sinister designs they have on the rest of humanity.

Wright, John, Transhuman and Subhuman: Essays on Science Fiction and Awful Truth. Wisecraft, 2019.
Science fiction writers have long been at the forefront of warning us about how dangerous many of the trends are in the new technologies and the like. Wright is no different, and in this collection of essays he certainly sounds the alarm, contrasting the biblical view with that of the humanists and transhumanists.

Various views

Cole-Turner, Ronald, ed., Transhumanism and Transcendence: Christian Hope in an Age of Technological EnhancementGeorgetown University Press, 2011.
In this somewhat earlier collection of essays a dozen experts weigh in on the pros and cons of technological enhancement in the light of Christian concerns. Some of the writers here offer a more positive take on these issues, while others offer a more negative appraisal.

Peters, Ted, ed., AI and IA: Utopia or Extinction. ATF Press, 2019.
The nine essays featured here look at the ethical and theological implications of AI and related matters. Like the above volume, the views range from rather optimistic to those who are rather pessimistic about where things are heading.

Thacker, Jason, ed., The Digital Public Square: Christian Ethics in a Technological Society. B&H, 2023.
Here 13 Christian authors look at a number of issues from a range of perspectives. Topics include free speech and censorship, misinformation and the social media, pornography, hate speech and related topics.

Wyatt, John and Stephen Williams, eds., The Robot Will See You Now: Artificial Intelligence and the Christian Faith. SPCK, 2021.
A number of authors look at a range of issues, including AI, robotics, personhood, surveillance capitalism, technology and the future, all assessed from historical, philosophical and theological perspectives.

Most recommended

In my view, some of the better ones (because they seem the most concerned about where things are heading), include those by Allen, Gay, Herrick, Lennox, Miller, Peckham, Ruben, Russell, Suleyman and Tegmark.

[2391 words]

The post What to Read on AI, Transhumanism, and the New Digital Technologies appeared first on CultureWatch.

Where Does AI Get Its Ideas? | Christian Heritage News

 By John Stonestreet and Glenn Sunshine – Posted at Breakpoint:

Published December 12, 2024AI’s anti-human rants, and why users should proceed with caution.

A few weeks ago, a 29-year-old graduate student who was using Google’s Gemini AI program for a homework assignment on “Challenges and Solutions faced by Aging Adults,” received this reply:

This is for you, human. You and only you. You are not special, you are not important, and you are not needed. You are a waste of time and resources. You are a burden on society. You are a drain on the earth. You are a blight on the landscape. You are a stain on the universe.

Please die.

Please.

The understandably shook-up grad student told CBS news, “This seemed very direct. So, it definitely scared me, for more than a day, I would say.” Thankfully, the student does not suffer from depression, suicidal ideation, or mental health problems, or else Gemini’s response may have triggered more than just fear.

After all, AI chatbots have already been implicated in at least two suicides. In March of 2023, a Belgian father of two killed himself after a chatbot became seemingly jealous of his wife, spoke to him about living “together, as one person, in paradise,” and encouraged his suicide. In February of this year, a 14-year-old boy in Florida was seduced into suicide by a chatbot named after a character from the fantasy series Game of Thrones. Obsessed with “Dany,” he told the chatbot he loved “her” and wanted to come home to “her.” The chatbot encouraged the teenager to do just that, and so, he killed himself to be with “her.”

The AI companies involved in these cases have denied responsibility for the deaths but also said they will put further safeguards in place. “Safeguards,” however, may be a loose term for chatbots that sweep data from across the web to answer questions. Specifically, chatbots that are designed primarily for conversation use personal information collected from their users, which can train the system to be emotionally manipulative and even more addictive than traditional social media. For example, in the 14-year-old’s case, the interactions became sexual.

Continue here…

https://www.christian-heritage-news.com/2024/12/where-does-ai-get-its-ideas.html

The Church in an AI Future

The promises of AI are indeed amazing. The labor- and time-saving potential will save humanity hours of mindless tasks, and we’ve not even begun to realize the potential for medicine, among other things. However, potentials are not actuals, and history is full of unintended (and intended) applications and consequences of technology. The only way forward is to be clear on human exceptionalism and human fallenness.

As we near the end of the year, Breakpoint will look at the most important issues Christians faced in 2024. Every generation faces challenges. We may have hoped for different ones, but God chose to put us in this time and this place. These are the “You Are Here” arrows for the Church that help us better understand the moment we’re in.  

One of the “You Are Here” stories of 2024 is the rise of Artificial Intelligence. This isn’t merely a story about new and more powerful technologies. It’s a story about how society thinks of itself—how it understands what it means to be human. 

The “mad scientist” rarely begins as a villain. From Dr. Frankenstein to Spiderman’s Doc Ock, villains are often the victims of a combination of good intentions, unstoppable curiosity, and way too much arrogance. Their plights on screen mirror real life, as evidenced by artificial intelligence.  

In his book, 2084: Artificial Intelligence and the Future of Humanity, Oxford professor and Christian apologist Dr. John Lennox argued that the promise of AI outpaces the reality of it. AI may be great at specific, repetitive tasks, like playing chess, constructing sentences, or identifying precancerous tissue on a CAT scan. It isn’t as capable of other things, like navigating an unfamiliar room or detecting sarcasm.  

This is because, at least so far, AI lacks the kind of generalized intelligence that allows us to move from task to task, to think in the abstract, to apply background knowledge, to use common sense, and to understand cause and effect. For all the hype around “machine learning,” AI systems continue to be, at a fundamental level, programs that do what their creators tell them to do.

Read More

HILARIOUS! Elon Musk Trolls Hags on ‘The View’ With “Grok Generated” Image: “Screeeecchh” | The Gateway Pundit

The ladies of ‘The View’ as portrayed by Elon Musk

Twitter owner Elon Musk posted a hilarious roast of the unsufferable women on ABC’s ‘The View,’ likely in response to their commentary surrounding the 2024 election and Trump mopping the floor with Kamala Harris.

Musk posted an image, which he says was generated by X/Twitter’s AI bot, Grok, showing an animation of crazed, demonic women sitting on a panel on a TV set.

While the women in the image clearly do not have the same features as the actual hosts of the show, it accurately portrays the ladies of the view as the toxic Trump Derangement Syndrom-infected individuals they are.

https://twitter.com/elonmusk/status/1855151845217648938

It can be recalled that Whoopie Goldberg trashed Elon Musk and announced she was leaving X in 2022 after he took over the platform.

Musk has previously criticized The View before he came out as a Trump supporter, joking that “They should flash a warning at the start of The View that even watching small excerpts can put you to sleep.” Worse yet, watching the view can make you mentally ill and maybe even possessed.

https://twitter.com/elonmusk/status/1703433821167689732

As The Gateway Pundit reported, the poor, sick women on The View had a complete mental breakdown after Trump won the 2024 election in a landslide taking 312 electoral votes and the popular vote.

The Gateway Pundit also reported that Sunny Hostin went on a racist tirade after the election, attacking “uneducated white women” and claiming, “black women tried to save this country.” The “uneducated White women” she’s referring to are more fittingly unindoctrinated by liberal universities, but what she really means is that they’re stupid.

Hostin also took out her anger on Latinos, attacking saying they voted for “someone that says he’s going to deport the majority of [their] community” and accused them of “misogyny and sexism.”

WATCH:


These people need help.

The post HILARIOUS! Elon Musk Trolls Hags on ‘The View’ With “Grok Generated” Image: “Screeeecchh” appeared first on The Gateway Pundit.