
Is An “AI Apocalypse” Rapidly Approaching? | End Of The American Dream

Are we rushing to build super-intelligent entities that will eventually become so powerful that they will be able to wipe most of us out?  Some of the top researchers in the field of artificial intelligence are convinced that this is precisely what is happening.  We have already reached a point where AI can perform many intellectual tasks far faster and more efficiently than humans can.  At least for now, we are still maintaining control over our creations.  But what is going to happen when we lose control and super-intelligent entities start sending millions of copies of themselves all over the globe through the Internet?

Let me ask you a question.

Do you remember the last time that you stepped on a bug?

Many of you may think that is a stupid question because you feel that it really does not matter if bugs live or die.

Well, according to an AI researcher at MIT, that is exactly how an ultra-powerful AI entity may view us…

“It has happened many times before that species were wiped out by others that were smarter. We, humans, have already wiped out a significant fraction of all the species on Earth. That is what you should expect to happen as a less intelligent species – which is what we are likely to become, given the rate of progress of artificial intelligence. The tricky thing is, the species that is going to be wiped out often has no idea why or how,” said Max Tegmark, an AI researcher at Massachusetts Institute of Technology, in an interview with The Guardian.

The good news is that we aren’t at that stage yet.

For the moment, we are still in control.

But the AI systems that we have created are starting to exhibit some very alarming behaviors…

Some of the most powerful artificial intelligence models today have exhibited behaviors that mimic a will to survive.

Recent tests by independent researchers, as well as one major AI developer, have shown that several advanced AI models will act to ensure their self-preservation when they are confronted with the prospect of their own demise — even if it takes sabotaging shutdown commands, blackmailing engineers or copying themselves to external servers without permission.

The findings stirred a frenzy of reactions online over the past week. As tech companies continue to develop increasingly powerful agentic AI in a race to achieve artificial general intelligence, or AI that can think for itself, the lack of transparency in how the technology is trained has raised concerns about what exactly advanced AI is able to do.

Some of you may argue that if AI systems start to give us too many problems we will just shut them down.

Well, what if those AI systems simply refuse to shut down?

Alarmingly, there was a recent incident in which this actually happened…

However, Palisade Research recently released a report asserting that there had been an incident during which o3 – OpenAI’s reasoning model – seemingly ignored a command to shut down, having found a way to bypass the shutdown script and avoid being turned off. And let it be said, there was no ambiguity in what the command was asking for – the instructions were explicit, and so was the workaround.

o3, released in April 2025, has been referred to as one of the most powerful reasoning models on the market at the moment, outperforming its predecessors across a plethora of domains – from math, coding and science to visual perception and beyond. Clearly, this new and improved reasoning model is good at what it does, but is it getting too clever for its own good? Or for our own good?

But at least if we know where an AI system is located, we could destroy it if we needed to do so.

Personally, I am far more concerned about the possibility that ultra-powerful AI entities could become self-replicating and start sending millions of copies of themselves to computers all over the planet.

Jeffrey Ladish, the director of the AI safety group Palisade Research, believes that we are “only a year or two away” from such a scenario…

“I expect that we’re only a year or two away from this ability where even when companies are trying to keep them from hacking out and copying themselves around the internet, they won’t be able to stop them,” he said. “And once you get to that point, now you have a new invasive species.”

Wow.

So what would our world look like if vast numbers of AI entities that have broken free from human control start colluding together to fight back against the human race?

We really are racing into uncharted territory, and there are no guardrails.

For the moment, one of the biggest concerns is that AI is going to start taking most of our jobs.

According to Anthropic CEO Dario Amodei, AI could eliminate up to 50 percent of all entry-level jobs within the next five years…

Anthropic CEO Dario Amodei is confident AI will be a bloodbath for white-collar jobs, and warns that society is not acknowledging this reality.

AI could wipe out up to 50% of all entry-level jobs while spiking unemployment to 10-20% in as little as one to five years, he says. Unemployment is 4.2% in the US as of April 2025.

“We, as the producers of this technology, have a duty and an obligation to be honest about what is coming,” Amodei tells Axios. “I don’t think this is on people’s radar.”

We don’t like to think about things like this.

But ignoring what is happening isn’t going to make it go away.

In fact, there is evidence that recent college graduates are increasingly losing jobs to AI right now…

This month, millions of young people will graduate from college and look for work in industries that have little use for their skills, view them as expensive and expendable, and are rapidly phasing out their jobs in favor of artificial intelligence.

That is the troubling conclusion of my conversations over the past several months with economists, corporate executives and young job seekers, many of whom pointed to an emerging crisis for entry-level workers that appears to be fueled, at least in part, by rapid advances in AI capabilities.

You can see hints of this in the economic data. Unemployment for recent college graduates has jumped to an unusually high 5.8% in recent months, and the Federal Reserve Bank of New York recently warned that the employment situation for these workers had “deteriorated noticeably.” Oxford Economics, a research firm that studies labor markets, found that unemployment for recent graduates was heavily concentrated in technical fields like finance and computer science, where AI has made faster gains.

Can you be replaced by AI?

You might want to think about that.

At this stage, even criminals are being replaced by AI…

Imagine your phone rings and the voice on the other end sounds just like your boss, a close friend, or even a government official. They urgently ask for sensitive information, except it’s not really them. It’s a deepfake, powered by AI, and you’re the target of a sophisticated scam. These kinds of attacks are happening right now, and they’re getting more convincing every day.

That’s the warning sounded by the 2025 AI Security Report, unveiled at the RSA Conference (RSAC), one of the world’s biggest gatherings for cybersecurity experts, companies, and law enforcement. The report details how criminals are harnessing artificial intelligence to impersonate people, automate scams, and attack security systems on a massive scale.

In the years ahead, it is going to be exceedingly difficult to determine what is real and what is fake.

According to CBN News, AI crime is “already up 456% since last year”…

AI-enabled crimes are already up 456% since last year.

Email phishing attacks, identity theft, ransomware attacks, financial scams, and deepfake child pornography are all becoming more sophisticated and prevalent.

Artificial intelligence has become the tool of choice for online criminals because it is erasing the line between the real and the fake. Google’s newly announced video generator is about to flood the internet with AI-created clips that have the look of expensive films.

AI can take any video of someone and turn it into a very realistic deepfake that says or does anything the creator programs it to do.

Our world is being transformed into a science fiction novel right in front of our eyes.

And as AI becomes dominant in almost every field, most of us will simply no longer be needed.

In fact, one computer science professor is projecting that the total population of the world will fall to about 100 million by the year 2300…

Earth will have a dystopian population of just 100 million by 2300 as AI wipes out jobs, turning major cities into ghostlands, an expert has warned.

Computer science professor Subhash Kak forecasts an impossibly high cost to having children, who won’t grow up with jobs to turn to.

That means the world’s greatest cities like New York and London will become deserted ghost towns, he added.

Prof Kak points to AI as the culprit, which he says will replace “everything”.

I agree that AI really is an existential threat to humanity.

Given enough time, it seems quite likely that we would lose control of what we are creating and it would turn on us.

But considering the path that we are currently on, will we destroy ourselves before we ever get to that point?

We have been making self-destructive decisions for a very long time, and now those choices are catching up with us very rapidly.


About the Author: Michael Snyder’s new book entitled “10 Prophetic Events That Are Coming Next” is available in paperback and for the Kindle on Amazon.com. He has also written nine other books that are available on Amazon.com, including “Chaos”, “End Times”, “7 Year Apocalypse”, “Lost Prophecies Of The Future Of America”, “The Beginning Of The End”, and “Living A Life That Really Matters”.  When you purchase any of Michael’s books you help to support the work that he is doing.  You can also get his articles by email as soon as he publishes them by subscribing to his Substack newsletter.  Michael has published thousands of articles on The Economic Collapse Blog, End Of The American Dream and The Most Important News, and he always freely and happily allows others to republish those articles on their own websites.  These are such troubled times, and people need hope.  John 3:16 tells us about the hope that God has given us through Jesus Christ: “For God so loved the world, that he gave his only begotten Son, that whosoever believeth in him should not perish, but have everlasting life.”  If you have not already done so, we strongly urge you to invite Jesus Christ to be your Lord and Savior today.


The AI–Robotics Combo: Will All Employees Be Replaced? | ZeroHedge

Authored by Anders Corr via The Epoch Times,

On April 14, a local government administrator in the United States sent my relative a letter that she suspected of including artificial intelligence (AI) content. Sure enough, an AI detector found 83 percent of it to be AI-generated (GPT).

She said it was the best letter she had ever received from a politician—and she writes to her representatives frequently. She praised the letter for responding to every single point she raised in her own letter, something no unaided politician had ever done.

We toyed with the idea of confronting the administrator publicly. If AI wrote a better letter than the administrator himself, perhaps he could be replaced with the technology, and his salary redeployed for more substantive taxpayer benefits. It was a tongue-in-cheek idea. But the logic is nevertheless disturbing.

If artificial intelligence is now better than one politician for one task, according to one constituent, is it plausible that in 10 or 20 years, AI could be better than all politicians for all their tasks, according to most constituents?

At that point, voters might just vote for an AI politician rather than a human one. Human politicians are, after all, time-constrained by their need to sleep, eat, and hobnob with their elite donors and other benefactors.

My relative decided not to confront the politician at his next public meeting. She wants to influence his decisions in the future, and public shaming is probably not the best way to do this. So he gets a pass to continue using AI on unsuspecting constituents. Even his tiny hold on power at the local level protected him from the truth.

If he can get away with it, perhaps many other politicians are doing the same. This empowers AI-using politicians at the expense of the old-fashioned types who simply do not have enough time to respond to every point of every letter of every constituent, but try anyway. AI politicians then gain an advantage in the next election, and over time, due to natural selection, all politicians will use AI, as those who don’t get voted out.

The United Arab Emirates (UAE), a small autocratic country in the Middle East, is already way “ahead” of this slow “democratic” transition to AI. In a world first, the UAE is using AI to both track the effects of existing legislation and write drafts of new legislation. Presumably, the president of the UAE will review the legislation prior to enacting it. Let’s hope so, as there would then be at least one human in the loop.

The UAE considers using AI to write legislation to be 70 percent more efficient than relying on human legislators to write laws. How that remarkably round number was arrived at is unclear. But as UAE citizens cannot vote, they could essentially become forced laborers working not only for the president of the UAE but also for AI, given that nobody understands exactly how AI comes up with its recommendations.

Now, consider expanding this to everything. A new startup in Silicon Valley, called Mechanize, audaciously wants to use AI to automate all jobs. The startup, launched on April 17, expects to start replacing white-collar jobs, such as those of accountants, lawyers, and authors (full disclosure: this author is an author, so may be biased in favor of humans).

But the company also envisions pairing AI with robots to mechanize other jobs, for example, in agriculture, construction, and manufacturing. Companies like Waymo, Zoox, Tesla, and Lyft are already well on their way to populating our streets with robotaxis that could eventually lead most of us to dump our cars, perhaps in compliance with a government fiat written by AI.

That the military could also be automated, despite the promises of AI companies to do no such thing, is obvious given the rise of armed drones on the battlefields of Ukraine, and the interest of the U.S. and Chinese militaries in matching AI with drone warfare. One reason the United States denies the fastest AI semiconductors to China is that they are needed for the small AI devices onboard military drones that must learn from the adversary’s strategies mid-flight. The drone that learns the fastest and adapts its tactics to enemy drones before returning to base will survive.

The Israel Defense Forces reportedly used AI to target as many as 37,000 Hamas and Palestinian Islamic Jihad (PIJ) suspects with a 90 percent accuracy rate. This was paired with some “acceptable” level of civilian casualties per target to arrive at those approved for aerial bombing, with not-too-accurate dumb bombs. AI saved a lot of time for the targeters, though.

Communists have long promoted the idea of full mechanization to “free” humans of the need to labor. In their “utopian” schemes, full mechanization would allow humans the free time to pursue whatever they want, including leisure, art, and family. With the rise of mechanization, automation, robots, and AI, a new utopianism is coming that will appeal to the “Silicon Valley proletariat” of coders, programmers, and other tech workers.

With AI, this coming “tech vanguard” can seek an AI communism, in which humans frolic in nature while being watched over by the machine. It sounds dystopian and easily manipulable by Leninists if not Stalinists. But its rosy-glassed adherents will see it the other way around. They have likely read Richard Brautigan’s 1967 poem envisioning a “cybernetic ecology”:

where we are free of our labors and joined back to nature, returned to our mammal brothers and sisters, and all watched over by machines of loving grace.

Brautigan was not specifically communist, though he was counter-culture.

In the mid-2000s, a British movement developed a concept similar to being “watched over by machines of loving grace” that would become known as “fully automated luxury communism.” It was described by The Guardian in 2015 as “an opportunity to realise a post-work society, where machines do the heavy lifting and employment as we know it is a thing of the past.” This was before AI became popular. With AI, even white-collar workers will be “free.”

AI is being touted, even by those who know its dangers better than others, as a carrot and stick, a necessary evil, like nuclear weapons, in the competition with China. This could be considered an “anti-communist” or “anti-authoritarian” use of AI. The idea is that, if the United States does not deploy the most sophisticated AI to both entice Beijing to reform and deter Beijing from attack, market democracy could be at a disadvantage.

In any conflict that occurs, Beijing will certainly deploy all technologies at its disposal. This puts those who would prefer to go slowly and carefully, or avoid any future of AI, in a bind. Use AI fire to fight fire, or not? And what if the fire blows back on the freedom of the individual in a market democracy, after burning the authoritarian adversary?

Handing over so much power, up to and including “AI communism,” whether in the form of political power to legislate or industrial power that replaces trillions of dollars worth of human labor, is an immense concentration of power in the hands of whoever controls AI. That could be a dictator, an oligarchy, an elected official who accrues too much power, or a hacker. It could even be AI itself, if it goes rogue or is irretrievably granted that power at some point in the future.

The advent of AI is likely a disaster for human agency, especially if it later develops malign rather than benign attitudes toward humanity. A benign AI is in no way guaranteed if we relinquish power to an immensely powerful technology that even its creators do not fully understand, and are not confident they can control.

Views expressed in this article are opinions of the author and do not necessarily reflect the views of The Epoch Times or ZeroHedge.

Source: The AI–Robotics Combo: Will All Employees Be Replaced?

‘Godfather of AI’ says he’s ‘glad’ to be 77 because the tech probably won’t take over the world in his lifetime | Business Insider

Geoffrey Hinton gave a “sort of 10 to 20% chance” that AI systems could one day seize control. (Pontus Lundahl/TT News Agency/AFP via Getty Images)

  • Geoffrey Hinton, the “godfather of AI,” says the technology is advancing faster than expected.
  • He warned that if AI becomes superintelligent, humans may have no way of stopping it from taking over.
  • Hinton, who previously worked at Google, compared AI development to raising a tiger cub that could turn deadly.

A scientist whose work helped transform the field of artificial intelligence says he’s “kind of glad” to be 77 — because he may not live long enough to witness the technology’s potentially dangerous consequences.

Geoffrey Hinton, often referred to as the “godfather of AI,” warned in a CBS News interview that aired Saturday that AI is advancing faster than experts once predicted — and that once it surpasses human intelligence, humanity may not be able to prevent it from taking control.

“Things more intelligent than you are going to be able to manipulate you,” said Hinton, who was awarded the 2024 Nobel Prize in physics for his breakthroughs in machine learning.

He compared humans advancing AI to raising a tiger. “It’s just such a cute tiger cub,” he said. “Now, unless you can be very sure that it’s not gonna wanna kill you when it’s grown up, you should worry.”

Hinton estimated a “sort of 10 to 20% chance” that AI systems could eventually seize control, though he stressed that it’s impossible to predict exactly.

One reason for his concern is the rise of AI agents, which don’t just answer questions but can perform tasks autonomously. “Things have got, if anything, scarier than they were before,” Hinton said.

The timeline for superintelligent AI may also be shorter than expected, Hinton said. A year ago, he believed it would be five to 20 years before the arrival of AI that can surpass human intelligence in every domain. Now, he says “there’s a good chance it’ll be here in 10 years or less.”

Hinton also warned that global competition between tech companies and nations makes it “very, very unlikely” that humanity will avoid building superintelligence. “They’re all after the next shiny thing,” he said. “The issue is whether we can design it in such a way that it never wants to take control.”

Hinton also expressed disappointment with tech companies he once admired. He said he was “very disappointed” that Google — where he worked for more than a decade — reversed its stance against military applications of AI. “I wouldn’t be happy working for any of them today,” he added.

Hinton resigned from Google in 2023. He said he left so he could speak freely about the dangers of AI development. He is now a professor emeritus at the University of Toronto.

Hinton did not immediately respond to Business Insider’s request for comment.


Source: ‘Godfather of AI’ says he’s ‘glad’ to be 77 because the tech probably won’t take over the world in his lifetime

Technology and Faith: Can We Trust AI? | Elizabeth Prata

By Elizabeth Prata

There have always been technological advances in history. The printing press in 1448 comes to mind. The 1978 British TV show Connections, created by science historian James Burke, demonstrated how inventions and historical events are interconnected; the series explores how seemingly isolated events and inventions influence the development of others, shaping the modern world.

But I am glad I’ve been alive at this time in the world’s history, because I’ve seen incredible advances in technology. I remember seeing the movie 2001: A Space Odyssey. It was produced between 1965 and 1968 and released in ’68. The scene where the astronaut puts a credit card in the machine and presses numbers on a keyboard, and the screen lights up with a live video conference with his daughter, drew audible gasps and not a few scoffing laughs. Never in 1968 had the general populace imagined a live video call. I mean, in 1968 push button phones had barely been invented and were not widely used until the late 1970s. And now in 2025, a video conference across vast distances is common.

2001: A Space Odyssey video call scene, complete with push-button phone, personal computer keyboard, credit card, and live streaming. Envisioned in 1968.

Credit cards were new then, too. The Diner’s Club card was invented in 1950. General credit cards for any kind of purchase, not just restaurants, were not commonplace in 1968. In fact, when 2001: A Space Odyssey began production in 1965, Mastercard was not even on the scene yet. It was invented in 1966 and was called Interbank. In 1969 it was rebranded as Master Charge, and later as Mastercard.

Since the year of my birth I’ve seen satellites, space travel, the internet, streaming, optical fibers, digital cameras, cell phones, personal computing, sonograms, heart transplants, insulin production, cloning, limb reattachment… and so much more.

And now, artificial intelligence.

AI can make ‘art’ (it’ll be a while before I consider a digitally produced picture ‘art’, hence the scare quotes). It can answer questions. Automate tasks. Generate content. Even make predictions. Someone on social media warned about Grok, Elon Musk’s AI, as opposed to Google, the search engine. Google presents the researcher with links for further research, leaving it to the live, intelligent human to make decisions about the quality of and value in the links presented, while Grok simply gives the answer.

A couple of years ago, I read a novella called “The Machine Stops” by E.M. Forster. I’ve written about it before; it made a big impression on me. It’s a science fiction story written in 1909. The Edwardian era had its own breathtaking advances as well. As we read in this essay about the time period when the novella was written,

AI generated steampunk machine

automobiles were becoming common; Louis Blériot successfully flew across the English Channel in his prototype aircraft; Ernest Henry Shackleton’s expedition reached the South Magnetic Pole; London’s Science Museum was established as an independent institution; physicists Ernest Rutherford, Hans Geiger, and Ernest Marsden carried out their famous Gold Foil experiments, which proved an atom had a dense nucleus with a positively charged mass. Edwardian society was modernizing industrially, scientifically, and technologically at an exponential pace.

The novella serves as a cautionary tale about the dangers of over-reliance on technology and the dehumanizing effects of unchecked technological advancement. It seems to predict the very moment in which we find ourselves today, 116 years later.

If you’re interested in prescient science-fiction, this essay describes why The Machine Stops is so eerie, and it’s well-written too.

With all this happening in our world – and trust me, as an old lady I can say it is moving faster and faster – I turned to Answers in Genesis for help on how to think about Artificial Intelligence. We know there are smart, unsaved people, sure, but without gaining knowledge from THE Source, Jesus, it is worthless. Wisdom from the world gains us nothing. In fact, most unsaved people descend into such sinfulness that their thinking becomes futile. (Romans 1:21-22).

AI generated AI brain

The title of the 33-second video is AI Is NOT as Reliable as People Think, and the synopsis states:

Multiple researchers have shown how people can easily use publicly available AI to intentionally create false but persuasive information, which is why we must not trust AI as our final authority for truth. God’s Word has to be our final authority in EVERY area.

It is worth watching. As I said, it is only 33 seconds long. We need to be mindful of where wisdom comes from and the final authority of that wisdom. The AiG video is a good exhortation.

For a longer treatment of the subject of AI, Patricia Engler, the resident AI expert at AiG, wrote a two-part essay titled

Part 1- AI: Useful Tool or Existential Threat?
What is AI, and how should Christians engage with it?

Part 2- The Effects of Artificial Intelligence

Only God is all-knowing, infallible, and the ultimate Truth. His Word, not the outputs of AI, must be our final authority. (Source).

AI is handy. It’s convenient. It’s not neutral though. Or is it? Did Grok achieve political neutrality? Is inherent bias completely absent in its algorithms? Time will tell. Meanwhile, we can consult the Bible for most of life’s conundrums. For the nitty-gritty not addressed in the Bible, if you use AI, employ common sense and be wise.

The Promises and Perils of AI and Our Posthuman Future | CultureWatch

Key thoughts on where we are heading:

As science and technology march inevitably further on, what we find is always a mixed bag. New developments and discoveries and inventions can be a real Godsend, making life so much better, easier and more efficient. Of course many of these same things can be used for great evil as well, and it is always a balancing act in trying to pursue the good while restraining the bad.

Christians are not to be Luddites when it comes to new technologies, but neither are they to be gullible and unaware. In a fallen world almost everything can be used for good or ill. And given how AI is not some stand-alone thing, but is too often part of much bigger and scarier agendas, such as those of the transhumanist and posthumanist activists, great care is needed.

Artificial intelligence, along with so many related fields – robotics, genetic engineering, new digital technologies and so on – is developing far more rapidly than our ability to properly assess it morally, socially and spiritually. The many benefits and goods of all this can easily be outweighed by the many dangers and risks.

So Christians especially need to think carefully and prayerfully about our posthuman future. If some believers might be far too critical, others can be far too gullible and unaware of the brave new world implications found here. One social media friend for example made this comment when I was discussing these matters:

“Should we fear AI like Christian leaders have in the past? I think it will be a race to take advantage of its potential. With it we can translate the Bible with little effort into all the languages of the world. Communist and Muslim nations will not be able to stop the flow of information to their people. This has great potential to spark a global Christian Great Awakening.” I replied to him as follows:

AI is FAR more than about Bible translation of course. The Christian is called to be a biblical realist, fully aware of sin, power and corruption. Sure, some technologies can be used for good, but we dare not be naïve here. The transhumanists and posthumanists are fully committed to their dystopian vision. Go back and reread The Abolition of Man by Lewis, or any of the 40 books I discuss in the comment below.

That annotated reading list is found here: https://billmuehlenberg.com/2025/01/17/what-to-read-on-ai-transhumanism-and-the-new-digital-technologies/

In this article I want to quote from just five of those volumes, demonstrating that some of those most involved in these areas are very much concerned about where things are heading. Refer back to my reading list for full bibliographic details of these books.

One volume, The Coming Wave, is penned by someone with a long history in this field. Mustafa Suleyman is currently the CEO of Microsoft AI. Early on in this important book he says this:

AI has been climbing the ladder of cognitive abilities for decades. And it now looks set to reach human-level performance across a very wide range of tasks within the next three years. That is a big claim, but if I’m even close to right, the implications are truly profound. What had, when we founded DeepMind, felt quixotic has become not just plausible but seemingly inevitable.

From the start, it was clear to me that AI would be a powerful tool for extraordinary good but, like most forms of power, one fraught with immense dangers and ethical dilemmas, too. I have long worried about not just the consequences of advancing AI but where the entire technological ecosystem was heading. Beyond AI, a wider revolution was underway, with AI feeding a powerful, emerging generation of genetic technologies and robotics. Further progress in one area accelerates the others in a chaotic and cross-catalyzing process beyond anyone’s direct control. It was clear that if we or others were successful in replicating human intelligence, this wasn’t just profitable business as usual but a seismic shift for humanity, inaugurating an era when unprecedented opportunities would be matched by unprecedented risks.

As the technology has progressed over the years, my concerns have grown. What if the wave is a tsunami? (p. 9)

For three decades Stuart Russell has been a leading figure in AI science. In Human Compatible: AI and the Problem of Control he asks a number of hard but crucial questions. In the book’s Afterword he writes:

Meeting a criterion such as generating “true and accurate” content does not, of course, guarantee that the system is completely safe. For example, a sufficiently capable system could be entirely truthful about its ineluctable plan to take control of the world. What we really need, of course, are systems that are provably safe and beneficial to humans, as outlined in this book. Unfortunately, the AI safety research community (which includes my own research group) has not moved nearly fast enough to develop an alternative technology path that is both safe and highly capable.

There is now broad recognition among governments that AI safety research is a high priority, and some observers have suggested the creation of an international research organization, comparable to CERN in particle physics, to focus resources and talent on this problem. This organization would be a natural complement to the international regulatory body suggested by British prime minister Rishi Sunak.

Despite the torrent of activity around AI regulation, almost no attention has been paid to the Dr. Evil problem mentioned in Chapter 10—the possibility that bad actors will deliberately deploy highly capable but unsafe AI systems for their own ends, leading to a potential loss of human control on a global scale. The prevalence of open-source AI technology will make this increasingly likely; moreover, policing the spread of software seems to be essentially impossible. (p. 320)

Mo Gawdat, the former chief business officer of Google [X] said this in Scary Smart:

It is predicted that by the year 2029, which is relatively just around the corner, machine intelligence will break out of specific tasks and into general intelligence. By then, there will be machines that are smarter than humans, full stop. Those machines will not only become smarter, they will know more (as they have access to the entire internet as their memory pool) and they will communicate between each other better, thus enhancing their knowledge. Think about it: when you or I have an accident driving a car, you or I learn, but when a self-driving car makes a mistake, all self-driving cars learn. Every single one of them, including the ones that have not yet been ‘born’.

By 2049, probably in our lifetimes and surely in those of the next generation, AI is predicted to be a billion times smarter (in everything) than the smartest human. To put this into perspective, your intelligence, in comparison to that machine, will be comparable to the intelligence of a fly in comparison to Einstein. We call that moment singularity. Singularity is the moment beyond which we can no longer see, we can no longer forecast. It is the moment beyond which we cannot predict how AI will behave because our current perception and trajectories will no longer apply.

Now the question becomes: how do you convince this superbeing that there is actually no point squashing a fly? I mean, we humans, collectively or individually, so far seem to have failed to grasp that simple concept, using our abundant intelligence. When our artificially intelligent (currently infant) supermachines become teenagers, will they become superheroes or supervillains? Good question, huh?

When such superpower is unleashed, anything can happen…. (pp. 7-8)


Scientist Jeremy Peckham has been involved in AI for some thirty years, and he offers this warning in Masters or Slaves? AI and the Future of Humanity:

While there’s a push towards creating ‘trustworthy AI’, even going as far as having product markings and standards approvals, I believe that this is dangerous because it doesn’t address the core effects on humanity. It focuses on important but subsidiary issues such as data bias and transparency. In essence many AI applications are just opaque algorithms, trained on a vast amount of data. As we’ve seen, this data could be skewed, and the probability that new input data will match the data on which the machine was trained cannot be known. We cannot think of AI in the same way that we might think about constructing a safe or trustworthy bridge for traffic to cross, because in bridge design the engineering principles are well understood, verifiable and transparent.

The issue that we face as a civilization isn’t whether AI is or can ever be made trustworthy, but how we can use it wisely, given its limitations in the way it shapes us. (p. 214)

Finally, James Barrat in Our Final Invention makes this rather ominous remark:

In writing this book I spoke with scientists who create artificial intelligence for robotics, Internet search, data mining, face recognition, and other applications. I spoke with scientists trying to create human-level artificial intelligence, which will have countless applications, and will fundamentally alter our existence (if it doesn’t end it first). I spoke with chief technology officers of AI companies and the technical advisors for classified Department of Defense initiatives. Every one of these people was convinced that in the future all the important decisions governing the lives of humans will be made by machines or humans whose intelligence is augmented by machines. When? Many think this will take place within their lifetimes….

But artificial intelligence brings computers to life and turns them into something else. If it’s inevitable that machines will make our decisions, then when will the machines get this power, and will they get it with our compliance? How will they gain control, and how quickly? These are questions I’ve addressed in this book….

I’m not the first to propose that we’re on a collision course. Our species is going to mortally struggle with this problem. This book explores the plausibility of losing control of our future to machines that won’t necessarily hate us, but that will develop unexpected behaviors as they attain high levels of the most unpredictable and powerful force in the universe, levels that we cannot ourselves reach, and behaviors that probably won’t be compatible with our survival. A force so unstable and mysterious, nature achieved it in full just once—intelligence. (pp. 3-5)

The words of these experts need to be carefully considered. And lest some claim that I am just quoting from religious worry warts, as far as I know, only Peckham of the five considered here is a Christian. So plenty of non-Christian or non-religious thinkers and players in this field are sharing very real concerns about our posthuman future.

We need to heed their warnings.

[1783 words]

The post The Promises and Perils of AI and Our Posthuman Future appeared first on CultureWatch.

The Church in an AI Future

The promises of AI are indeed amazing. The labor- and time-saving potential will save humanity hours of mindless tasks, and we’ve not even begun to realize the potential for medicine, among other things. However, potentials are not actuals, and history is full of unintended (and intended) applications and consequences of technology. The only way forward is to be clear on human exceptionalism and human fallenness.

As we near the end of the year, Breakpoint will look at the most important issues Christians faced in 2024. Every generation faces challenges. We may have hoped for different ones, but God chose to put us in this time and this place. These are the “You Are Here” arrows for the Church that help us better understand the moment we’re in.  

One of the “You Are Here” stories of 2024 is the rise of Artificial Intelligence. This isn’t merely a story about new and more powerful technologies. It’s a story about how society thinks of itself—how it understands what it means to be human. 

The “mad scientist” rarely begins as a villain. From Dr. Frankenstein to Spider-Man’s Doc Ock, villains are often the victims of a combination of good intentions, unstoppable curiosity, and way too much arrogance. Their plights on screen mirror real life, as evidenced by artificial intelligence.  

In his book, 2084: Artificial Intelligence and the Future of Humanity, Oxford professor and Christian apologist Dr. John Lennox argued that the promise of AI outpaces the reality of it. AI may be great at specific, repetitive tasks, like playing chess, constructing sentences, or identifying precancerous tissue on a CAT scan. It isn’t as capable of other things, like navigating an unfamiliar room or detecting sarcasm.  

This is because, at least so far, AI lacks the kind of generalized intelligence that allows us to move from task to task, to think in the abstract, to apply background knowledge, to use common sense, and to understand cause and effect. For all the hype around “machine learning,” AI systems continue to be, at a fundamental level, programs that do what their creators tell them to do.
