
Warnings From Sci Fi – and Captain Kirk | CultureWatch

We can learn much from science fiction:

For years now I and many others have been warning about things like genetic engineering, the new reproductive technologies, human cloning, AI, transhumanism, cyborgs, the totalist state, and so on. Philosophers, theologians, ethicists and scientists are among those who have sounded the alarm.

But what is of interest is the fact that science fiction writers are often well ahead of the game when it comes to telling us about where unrestrained science and the new technologies are taking us. They have long been warning us to beware of the many inherent dangers in all this.

Plenty of books and films can be mentioned here. A few movies – among many – would include:

Metropolis (1927)
Soylent Green (1973)
The Boys From Brazil (1978)
Blade Runner (1982)
Jurassic Park (1993)
The Island of Dr. Moreau (1996)
Gattaca (1997)
Bicentennial Man (1999)
The Matrix (1999)
The Sixth Day (2000)
AI (2001)
Terminator 3: Rise of the Machines (2003)
I, Robot (2004)
The Island (2005)
Elysium (2013)
Her (2013)
Replicas (2018)

Not just books and films are involved in this. Many television programs have dealt with these themes. Think of some of the old Twilight Zone episodes for example. And the popular sci fi series Star Trek also focused on such matters as well. Let me speak to just one episode of the latter that I happened upon the other day.

In the first season of Star Trek, “What Are Little Girls Made Of?” aired on October 20, 1966. It dealt with things like the artificial duplication of human beings and unhinged scientists seeking to create a “better” world. In this episode Captain Kirk is duplicated – temporarily.

The storyline is this: Nurse Christine Chapel searches for her long-lost fiancé, exobiologist Dr. Roger Korby. He is found on a barren ice planet, and she uncovers his secret plan to create sophisticated androids for galactic conquest. He creates an android duplicate of Kirk as she looks on. The scene I want to highlight takes place in a dining room where an android, Andrea, serves some food:

ANDREA: I am now programmed to please you also. Is the food appealing?
CHAPEL: Yes, thank you.
ANDREA: Please sit, Captain.
KIRK2: Thank you. I’m more or less on parole, I understand.
ANDREA: Doctor Korby suggested that you have lunch. He thought you might have a few things to talk about.
CHAPEL: Captain.
KIRK2: I know, I know. I’d hate to be torn between commander and fiancée myself.
CHAPEL: No, I’m not torn. I’m puzzled. I’m worried.
KIRK2: Has he confided in you?
CHAPEL: Nothing he hasn’t told you. I know it doesn’t make sense. What he’s done may seem wrong, but he is Roger Korby, whatever he seems to be doing.
KIRK2: Unless something’s gone wrong with his mind.
CHAPEL: No. You’re forgetting how well I know him. He’s as sane as you or I.
KIRK2: Nurse, if I gave you a direct order to betray him…
CHAPEL: Please, don’t ask me to make that choice. I’d much rather you push me off the same precipice where Matthews died. Oh, I can’t. (pushes plate away) Please, go ahead and eat.
KIRK2: Androids don’t eat, Miss Chapel.
KORBY: He even has your sense of humour.

An especially important bit of dialogue begins at this point:

KIRK: Well, there’s one difference between us. I’m hungry.
KIRK2: The difference is your weakness, Captain, not mine.
KORBY: One at a time, gentlemen. Captain?
KIRK: Eating is a pleasure, sir. Unfortunately, one you will never know.
KIRK2: Perhaps, but I will never starve, sir.
KIRK: He’s an exact duplicate?
KORBY: In every detail.
KIRK: What about memory? Tell me about Sam.
KIRK2: George Samuel Kirk, your brother. Only you call him Sam.
KIRK: He saw me off on this mission.
KIRK2: Yes, with his wife and three sons.
KIRK: He said he was being transferred to Earth colony two research station.
KIRK2: No, Captain. He said he was continuing his research and that he wanted to be transferred to Earth colony two.

And this next bit of dialogue well expresses my primary emphasis:

KORBY: You might as well try to outthink a calculating machine.
KIRK: Obviously, I can’t, but we do have some interesting differences.
KORBY: Totally unimportant ones. You may leave now. (Kirk2 leaves) You haven’t guessed the rest? Not even you, Christine? What you saw was only a machine, only half of what I could’ve accomplished. Do you understand? By continuing the process I could’ve transferred you, your very consciousness into that android. Your soul, if you wish. All of you. In android form, a human being can have practical immortality. Can you understand what I’m offering mankind?
KIRK: Programming. Different word, but the same old promises made by Genghis Khan, Julius Caesar, Hitler, Ferris, Maltuvis.
KORBY: Can you understand that a human converted to an android can be programmed for the better? Can you imagine how life could be improved if we could do away with jealousy, greed, hate?
KIRK: It can also be improved by eliminating love, tenderness, sentiment. The other side of the coin, Doctor.
KORBY: No one need ever die again. No disease, no deformities. Why even fear can be programmed away, replaced with joy. I’m offering you a practical heaven, a new paradise, and all I need is your help.
KIRK: All you wanted before was my understanding.
KORBY: I need transportation to a planet colony with proper raw materials. I’m sure there are several good possibilities among your next stops. No diversion from your route. I want no suspicions aroused. I’ll begin producing androids carefully, selectively.
KIRK: Yes, yes. No one need know, only to frighten uninformed minds.
KORBY: They must be strongly infiltrated into society before the android existence is revealed. I want no wave of hysteria to destroy what is good and right. You with me, Captain?
KIRK: You’ve created your own Kirk. Why do you need me?
KORBY: I created him to impress you, not to replace you.
KIRK: I’m impressed, Doctor, but not the way you think!

See most of that segment here: https://www.youtube.com/watch?v=nHxNZzzDu3Y

Of course the program ends with the real Kirk saving the day and dealing with Korby. But the things being warned about are more than just fiction. The promise of a better world and human immortality is the same dangerous nonsense that gave us the French Revolution, Communism, Nazism, and now the push for transhumanism and a utopian AI future.

The promise of a better humanity always comes at the expense of real humans. That must always be the case, since only real progress and real eternal life can come when mankind is in conformity to God and his will, and not acting in rebellion against him.

The interesting thing is how so many sci fi writers and those who make TV shows and films seem to get it – at least the first half of the equation: man’s manic search for perfection and eternity just leads to a dead end and to countless dead humans. Captain James Kirk understood this – or at least the show’s writers did.

So our mortality is now a good thing, given our sinful condition. The only hope of achieving heaven on earth is to get right with our Creator and submit to his purposes and plans. If sci fi can move us along in that direction, it has served a useful purpose.

But both parts of the message are needed: warnings against where secular humanism is taking us, as well as the good news of how God in Christ can deliver us from this and give us what we are really in need of.

[1272 words]

The post Warnings From Sci Fi – and Captain Kirk appeared first on CultureWatch.


Saturday Selections – June 28, 2025 | Reformed Perspective

The Franz Family and “Somewhere in glory”

If you liked the soundtrack to O Brother, Where Art Thou?, you’ll love this bit of bluegrass gospel…

Tim Challies with 4 good questions to ask your tech

  1. Why were you created?
  2. What is the problem to which you are the solution, and whose problem is it?
  3. What new problem will you bring?
  4. What are you doing to my heart?

Canada’s Tax Freedom Day was June 21

June 21 was, according to the Fraser Institute, the day when – averaged across the country – Canadians stopped working for the government; the money they earn for the rest of the year is money they get to keep for their own households.
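As a back-of-envelope check (this is only the implied arithmetic, not the Fraser Institute’s actual methodology), a Tax Freedom Day of June 21 corresponds to an average total tax rate of roughly 47%:

```python
import datetime

# Back-of-envelope Tax Freedom Day arithmetic (illustrative only; the
# Fraser Institute's own methodology is more involved). If governments
# take x% of income on average, the "free" part of the year begins
# after x% of its days have passed.
tax_freedom_day = datetime.date(2025, 6, 21)
day_number = tax_freedom_day.timetuple().tm_yday  # June 21 is day 172 of 2025
implied_avg_tax_rate = day_number / 365

print(f"Implied average total tax rate: {implied_avg_tax_rate:.1%}")  # 47.1%
```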

7 great questions to ask fellow believers

Want to get a deeper conversation started? Some of these could be great to pull out when you have a few couples over, or a group of friends.

How to get people to be friends with machines in 3 easy steps

The author of Digital Liturgies warns that AI “friendships” could be addictive in a way that goes even beyond pornography.

Government-mandated small business destruction

With the stroke of a pen a government can destroy a business that the owner might have spent a lifetime building up. Given the destructive potential of government interference in the marketplace, you might think those in power would tread very lightly, using their fearsome powers only when they had to. But, as this latest incident highlights, that isn’t always so.

A Quebec language law, if enforced, could cause all sorts of problems for board game stores in that province, since their niche games might not have any French on them at all, or not enough.


Transhumanism: Using technology to “Upgrade” People | VCY

Date: February 24, 2025
Host: Jim Schneider
​Guest: Alex Newman

https://embed.sermonaudio.com/player/a/816232114295755/

Some of the most powerful people on earth believe that one day they’ll be able to “upgrade” at least some human beings through genetic engineering and technological schemes such as brain implants. They’re so confident in their schemes they’re touting benefits as wild as eternal life and the ability to evolve into gods.

Returning to bring listeners details on this issue, Crosstalk welcomed Alex Newman. Alex is an award-winning international freelance journalist, author, researcher, educator and consultant. He is senior editor for The New American, co-author of Crimes of the Educators and author of Deep State: The Invisible Government Behind the Scenes. He’s also founder of Liberty Sentinel.

Alex pulled no punches in his opening comments. He described this as a diabolical agenda. This doesn’t mean that everyone involved is evil or even understands the implications. However, if you listen to the leaders of this movement, people like Ray Kurzweil, Yuval Noah Harari and Klaus Schwab, they explain how they feel they’ll become gods and achieve immortality without Christ by merging with technology through things like brain implants, genetic engineering, or uploading their minds into computer systems.

Christians realize how blasphemous this is and how it’s the oldest lie in history if you go back to Genesis 3 where Satan told Eve she could be like God and that she would not surely die.  

Klaus Schwab, the head of the World Economic Forum, has touted what he calls “The Fourth Industrial Revolution.” Alex noted that Schwab first mentioned this in Foreign Affairs magazine, published by the Council on Foreign Relations. It comes down to two simple choices. Schwab said the options are (A) humanity will be robotized, deprived of our hearts and souls, or (B) the technology will complement the best parts of humanity, driving us into a new moral and ethical paradigm where we share a collective sense of destiny. This will take place as we merge with our smartphones while also genetically engineering people.

For some this may sound great but in reality, it’s an attempt to overturn the moral order authored by God.  

This portion of the discussion is highlighted by audio from May 2022 in which Pekka Lundmark, the CEO of Nokia, predicted that by 2030 the smartphone as we know it today will no longer be the most common interface and that many of these functions will be built into our bodies.

If there was ever a time when it seemed like science fiction was becoming reality, this is it.  Hear more about where technology is taking us, and how this may affect your biblical worldview, when you review this edition of Crosstalk.

More Information

thenewamerican.com

libertysentinel.org

Asimov, AI, Robotics, and the Human Future | CultureWatch

Will the ‘Three Laws of Robotics’ save us?

Some transhumanists are clearly excited about a robotic, AI world. In an article posted yesterday I wrote about “The Promises and Perils of AI and Our Posthuman Future”. In it I quoted from five key titles on this topic. Authors included those who are experts in the field, along with those who offer ethical, philosophical and theological commentary on all this.

I noted how these thinkers and writers are divided in terms of how things will pan out. Some of them are rather optimistic and positive about how these developments will unfold, while some are much more pessimistic and negative.

As I have stated before when I write about such topics, I tend to be in the latter camp. Yes, many benefits and advantages to life have already occurred because of these new technologies, but we dare not be naïve about the very real damage and destruction they can also produce.

One fellow sent in a comment to my article, mentioning the well-known laws of Isaac Asimov concerning robotics. I replied by saying that yes, a number of the books listed in my piece did speak to this. For those not familiar with him, Asimov (1920-1992) was one of the big three English-language science fiction writers of the last century, along with Robert A. Heinlein and Arthur C. Clarke.

In 1950 a number of his robot stories were collected and published in I, Robot. Included there was a set of ethical rules for robots and intelligent machines called the “Three Laws of Robotics”. The three laws say this:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
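The strict priority ordering the laws impose can be sketched as a toy decision function (the predicates here are invented stand-ins; no real system has a reliable oracle for “harms a human,” which is precisely the kind of vagueness critics point to):

```python
# Toy sketch of Asimov's Three Laws as a strict priority filter.
# The three predicates are hypothetical stand-ins: nothing like a
# dependable "harm" oracle actually exists.

def permitted(action, harms_human, ordered_by_human, endangers_robot):
    """Decide whether `action` is allowed, checking the laws in priority order."""
    if harms_human(action):
        return False   # Law 1 overrides everything else
    if ordered_by_human(action):
        return True    # Law 2: obey orders (Law 1 already cleared the action)
    if endangers_robot(action):
        return False   # Law 3: self-preservation, lowest priority
    return True

# An ordered action that endangers the robot is still permitted,
# because Law 2 outranks Law 3.
print(permitted("fetch", lambda a: False, lambda a: True, lambda a: True))  # True
```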

Given the comment by my friend and my response, it seems worthwhile to take all this a bit further. So let me go back to one of the books I featured in my list and quote from it further on this matter. In James Barrat’s book Our Final Invention, for example, he speaks to this issue (although I left it out of the quote that I had shared). So here is what he said early on in his book:

And how will the machines take over? Is the best, most realistic scenario threatening to us or not?

When posed with this question some of the most accomplished scientists I spoke with cited science-fiction writer Isaac Asimov’s Three Laws of Robotics. These rules, they blithely replied, would be “built in” to the AIs, so we have nothing to fear. They spoke as if this were settled science. We’ll discuss the three laws in chapter 1, but it’s enough to say for now that when someone proposes Asimov’s laws as the solution to the dilemma of superintelligent machines, it means they’ve spent little time thinking or exchanging ideas about the problem. How to make friendly intelligent machines and what to fear from superintelligent machines has moved beyond Asimov’s tropes. Being highly capable and accomplished in AI doesn’t inoculate you from naïveté about its perils. (pp. 4-5)

And here is part of what he does say in Chapter 1:

Now, it is an anthropomorphic fallacy to conclude that a superintelligent AI will not like humans, and that it will be homicidal, like the Hal 9000 from the movie 2001: A Space Odyssey, Skynet from the Terminator movie franchise, and all the other malevolent machine intelligences represented in fiction. We humans anthropomorphize all the time. A hurricane isn’t trying to kill us any more than it’s trying to make sandwiches, but we will give that storm a name and feel angry about the buckets of rain and lightning bolts it is throwing down on our neighborhood. We will shake our fist at the sky as if we could threaten a hurricane.

It is just as irrational to conclude that a machine one hundred or one thousand times more intelligent than we are would love us and want to protect us. It is possible, but far from guaranteed. On its own an AI will not feel gratitude for the gift of being created unless gratitude is in its programming. Machines are amoral, and it is dangerous to assume otherwise. Unlike our intelligence, machine-based superintelligence will not evolve in an ecosystem in which empathy is rewarded and passed on to subsequent generations. It will not have inherited friendliness. Creating friendly artificial intelligence, and whether or not it is possible, is a big question and an even bigger task for researchers and engineers who think about and are working to create AI. We do not know if artificial intelligence will have any emotional qualities, even if scientists try their best to make it so. However, scientists do believe, as we will explore, that AI will have its own drives. And sufficiently intelligent AI will be in a strong position to fulfill those drives.

And that brings us to the root of the problem of sharing the planet with an intelligence greater than our own. What if its drives are not compatible with human survival? Remember, we are talking about a machine that could be a thousand, a million, an uncountable number of times more intelligent than we are—it is hard to overestimate what it will be able to do, and impossible to know what it will think. It does not have to hate us before choosing to use our molecules for a purpose other than keeping us alive. You and I are hundreds of times smarter than field mice, and share about 90 percent of our DNA with them. But do we consult them before plowing under their dens for agriculture? Do we ask lab monkeys for their opinions before we crush their heads to learn about sports injuries? We don’t hate mice or monkeys, yet we treat them cruelly. Superintelligent AI won’t have to hate us to destroy us. (pp. 17-19)


He then addresses Asimov’s Three Laws:

[A]nthropomorphizing about machines leads to misconceptions, and misconceptions about how to safely make dangerous machines leads to catastrophes. In the short story, “Runaround,” included in the classic science-fiction collection I, Robot, author Isaac Asimov introduced his three laws of robotics. They were fused into the neural networks of the robots’ “positronic” brains: (pp. 19-20)

He lists the three laws and then closes the chapter with these words:

The laws contain echoes of the Golden Rule (“Thou Shall Not Kill”), the Judeo-Christian notion that sin results from acts committed and omitted, the physician’s Hippocratic oath, and even the right to self-defense. Sounds pretty good, right? Except they never work. In “Runaround,” mining engineers on the surface of Mars order a robot to retrieve an element that is poisonous to it. Instead, it gets stuck in a feedback loop between law two—obey orders—and law three—protect yourself. The robot walks in drunken circles until the engineers risk their lives to rescue it. And so it goes with every Asimov robot tale—unanticipated consequences result from contradictions inherent in the three laws. Only by working around the laws are disasters averted.

Asimov was generating plot lines, not trying to solve safety issues in the real world. Where you and I live his laws fall short. For starters, they’re insufficiently precise. What exactly will constitute a “robot” when humans augment their bodies and brains with intelligent prosthetics and implants? For that matter, what will constitute a human? “Orders,” “injure,” and “existence” are similarly nebulous terms.

Tricking robots into performing criminal acts would be simple, unless the robots had perfect comprehension of all of human knowledge. “Put a little dimethylmercury in Charlie’s shampoo” is a recipe for murder only if you know that dimethylmercury is a neurotoxin. Asimov eventually added a fourth law, the Zeroth Law, prohibiting robots from harming mankind as a whole, but it doesn’t solve the problems.

Yet unreliable as Asimov’s laws are, they’re our most often cited attempt to codify our future relationship with intelligent machines. That’s a frightening proposition. Are Asimov’s laws all we’ve got?

I’m afraid it’s worse than that. Semiautonomous robotic drones already kill dozens of people each year. Fifty-six countries have or are developing battlefield robots. The race is on to make them autonomous and intelligent. For the most part, discussions of ethics in AI and technological advances take place in different worlds.

As I’ll argue, AI is a dual-use technology like nuclear fission. Nuclear fission can illuminate cities or incinerate them. Its terrible power was unimaginable to most people before 1945. With advanced AI, we’re in the 1930s right now. We’re unlikely to survive an introduction as abrupt as nuclear fission’s. (pp. 20-21)
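The “Runaround” deadlock Barrat summarizes can be caricatured in a few lines of code (an illustrative sketch only; the distances and step sizes are invented): a robot ordered toward a hazardous target advances under Law 2, retreats under Law 3 once the danger rises, and so oscillates forever instead of completing the task.

```python
# Toy illustration of the "Runaround" feedback loop: Law 2 (obey the
# order to approach the target) and Law 3 (self-preservation near the
# hazard) trade dominance, so the robot circles the danger boundary.

def next_position(pos: float, target: float = 0.0, danger_radius: float = 5.0) -> float:
    """One decision step. Neither law ever decisively wins."""
    if abs(pos - target) < danger_radius:
        return pos + 1.0   # Law 3 dominates: retreat from the hazard
    return pos - 1.0       # Law 2 dominates: advance on the order

pos = 10.0
trail = []
for _ in range(12):
    pos = next_position(pos)
    trail.append(pos)

# The robot never reaches the target at 0.0; it ends up bouncing
# between 5.0 and 4.0 at the edge of the danger radius.
print(trail)
```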

It has always been the case that science and technology tend to race ahead of ethical and spiritual considerations. As far as I am aware, Barrat is not a Christian. But he is asking a lot of important questions and is not skirting around the moral dilemmas that arise here.

As he rightly points out, we will need something more solid and secure than Asimov’s laws to help us steer through the murky waters that we are now in and that lie ahead. Many other books do similar things, and I listed 40 of them in a recent recommended reading list: https://billmuehlenberg.com/2025/01/17/what-to-read-on-ai-transhumanism-and-the-new-digital-technologies/

And other books not found in that list could also be mentioned, including the important 2014 volume by Oxford University philosopher Nick Bostrom, Superintelligence: Paths, Dangers, Strategies. It is vital that these folks and others keep asking the hard and penetrating questions.

But the worry is that such reflections, critiques and questioning will be outpaced by the very rapid advances in AI and related issues. As such, the global future is looking unsettling at best.

[1652 words]
