
Have You Seen What China’s New Humanoid AI-Powered Robots Are Capable Of Doing? | The Economic Collapse

It takes a lot to blow me away in this day and age, but the video footage of humanoid AI-powered robots in China that I am about to share with you truly did. During the CCTV Spring Festival gala, humanoid AI-powered robots built by Unitree performed an incredibly complex martial arts routine that was simply jaw-dropping. I never thought that we would get to a point where robots could move like that. I am in awe of what the Chinese have been able to accomplish. What made it even more remarkable is that large numbers of human children were also involved in the performance

Dozens of Unitree bots took to the stage at the CCTV Spring Festival gala, which is China’s most-watched TV show.

Wearing red vests, the robots performed kicks, flips, and even moves with nunchucks, swords, and poles.

Amazingly, their daring performance took place just metres away from human children performers.

If even one of the robots had made a mistake while swinging a weapon around, the child performers could have potentially been seriously hurt.

But there were no mistakes.

The footage that is posted below looks like it could have come out of a science fiction movie, but I assure you that this is very real

What a spectacular performance.

Needless to say, U.S. companies haven’t built anything remotely similar yet.

Last year, Unitree rolled out a bunch of clunky robots that twirled handkerchiefs around, and that was considered to be impressive at the time.

But the jump in sophistication that we witnessed in this year’s performance was truly monumental

The contrast with last year’s show was clear. In 2025, Unitree’s humanoids performed a folk Yangko dance, twirling handkerchiefs. This year, the machines executed aerial flips, table-vaulting parkour, continuous single-leg flips, and a 7.5-rotation airflare spin.

“It’s been just one year — and the performance jump is striking,” Georg Stieler, Asia managing director and head of robotics and automation at technology consultancy Stieler, told NBC News. He added that the robots’ motion control reflects advances in their AI “brains,” enabling fine motor skills useful in real-world factory settings.

If AI “brains” are this sophisticated now, what would they be like five or ten years in the future?

The Chinese already use more robots in their factories than the rest of the world combined.

As AI-powered robots become even more proficient at a whole host of tasks, where do human workers fit into the equation?

We might want to start thinking about that.

We also might want to start thinking about what future wars will look like.

It is getting easier to imagine entire armies of AI-powered robots killing everything in sight.

And the advances that China is making in drone warfare are truly impressive

Central to drone warfare is the ability to orchestrate mass sorties of UAVs. Known as swarm attacks, the tactic is particularly difficult to defend against using conventional weapons systems, forcing militaries to experiment with novel defense systems ranging from high-powered microwave weapons to advanced laser guns. In addition to evolving defense tactics, swarm technology poses difficult questions for engineers looking to better coordinate drones. A key question concerns organizing their behavior, namely, how to create a sense of awareness among weapons systems. According to a January 2026 report by The Wall Street Journal, researchers in China have turned to the animal kingdom to teach drones how to hunt and evade potential targets, incorporating the behavior of hawks, wolves, and coyotes into their AI systems.

The development points to broader trends in Beijing’s drone development program. With dual-purpose economic and research infrastructure, Beijing has utilized its robust manufacturing wing to generate high-tech drones more efficiently and cost-effectively than other countries. With a chokehold on global commercial drone production, China is leading this global revolution, potentially posing major consequences for both its rivals and warfare more broadly.

How can you defend against vast numbers of ultra-sophisticated AI-powered drones that hunt in large swarms?

All of the old paradigms are going out the window.

The conflicts of the future will look completely different from the conflicts of the past.

If we fall behind, we are going to be in so much trouble.

Right now, the United States and China are engaged in a frenzied race for AI dominance.

What OpenAI and Anthropic have been able to achieve over the past year has been amazing, but Chinese tech companies continue to roll out brand new AI models as well

China is ringing in the Lunar New Year with a flurry of new artificial intelligence (AI) model launches. Tech companies, such as Alibaba, ByteDance, and Zhipu, have all announced new product launches in the weeks leading up to China’s biggest holiday, while industry watchers expect a new DeepSeek model soon.

China is widely regarded as a major competitor to the United States in the race to adopt and develop artificial intelligence models.

Some experts are suggesting that as we are so focused on winning the race for AI dominance, we are missing the larger threat.

One expert is warning that if AI technology continues to grow at an exponential rate, we could soon be facing a scenario in which ultra-intelligent AI entities rebel against humanity and overpower us

Tech CEOs are locked in an artificial intelligence “arms race” that risks wiping out humanity, top computer science researcher Stuart Russell told AFP on Tuesday, calling for governments to pull the brakes.

Russell, a professor at the University of California, Berkeley, said the heads of the world’s biggest AI companies understand the dangers posed by super-intelligent systems that could one day overpower humans.

Ten years ago, anyone who said anything like this would have been considered a loon.

But not anymore.

Russell really does believe that we are allowing these AI companies to “essentially play Russian roulette with every human being on earth”

“For governments to allow private entities to essentially play Russian roulette with every human being on earth is, in my view, a total dereliction of duty,” said Russell, a prominent voice on AI safety.

Of course we shouldn’t just be concerned about an AI rebellion.

A human could potentially use ultra-advanced AI entities to impose global tyranny on a scale that we have never seen before in human history.

In a world where AI can literally watch, monitor, track and control everything that is going on in society, where could you hide?

We have truly entered very dangerous territory, but there is no way that the tech companies are going to turn back now.

Michael’s new book entitled “10 Prophetic Events That Are Coming Next” is available in paperback and for the Kindle on Amazon.com, and you can subscribe to his Substack newsletter at michaeltsnyder.substack.com.

About the Author: Michael Snyder’s new book entitled “10 Prophetic Events That Are Coming Next” is available in paperback and for the Kindle on Amazon.com. He has also written nine other books that are available on Amazon.com, including “Chaos”, “End Times”, “7 Year Apocalypse”, “Lost Prophecies Of The Future Of America”, “The Beginning Of The End”, and “Living A Life That Really Matters”. When you purchase any of Michael’s books you help to support the work that he is doing. You can also get his articles by email as soon as he publishes them by subscribing to his Substack newsletter. Michael has published thousands of articles on The Economic Collapse Blog, End Of The American Dream and The Most Important News, and he always freely and happily allows others to republish those articles on their own websites. These are such troubled times, and people need hope. John 3:16 tells us about the hope that God has given us through Jesus Christ: “For God so loved the world, that he gave his only begotten Son, that whosoever believeth in him should not perish, but have everlasting life.” If you have not already done so, we strongly urge you to invite Jesus Christ to be your Lord and Savior today.


The Promises and Perils of AI and Our Posthuman Future | CultureWatch

Key thoughts on where we are heading:

As science and technology march inexorably on, what we find is always a mixed bag. New developments, discoveries and inventions can be a real Godsend, making life so much better, easier and more efficient. Of course many of these same things can be used for great evil as well, and it is always a balancing act to pursue the good while restraining the bad.

Christians are not to be Luddites when it comes to new technologies, but neither are they to be gullible and unaware. In a fallen world almost everything can be used for good or ill. And given how AI is not some stand-alone thing, but is too often part of much bigger and scarier agendas, such as those of the transhumanist and posthumanist activists, great care is needed.

Artificial intelligence, along with related fields such as robotics, genetic engineering and new digital technologies, is developing far more rapidly than our ability to properly assess it morally, socially and spiritually. The many benefits and goods of all this can easily be outweighed by the many dangers and risks.

So Christians especially need to think carefully and prayerfully about our posthuman future. If some believers might be far too critical, others can be far too gullible and unaware of the brave new world implications found here. One social media friend for example made this comment when I was discussing these matters:

“Should we fear AI like Christian leaders have in the past? I think it will be a race to take advantage of its potential. With it we can translate the Bible with little effort into all the languages of the world. Communist and Muslim nations will not be able to stop the flow of information to their people. This has great potential to spark a global Christian Great Awakening.” I replied to him as follows:

AI is FAR more than about Bible translation of course. The Christian is called to be a biblical realist, fully aware of sin, power and corruption. Sure, some technologies can be used for good, but we dare not be naïve here. The transhumanists and posthumanists are fully committed to their dystopian vision. Go back and reread The Abolition of Man by Lewis, or any of the 40 books I discuss in the comment below.

That annotated reading list is found here: https://billmuehlenberg.com/2025/01/17/what-to-read-on-ai-transhumanism-and-the-new-digital-technologies/

In this article I want to quote from just five of those volumes, demonstrating that some of those most involved in these areas are very much concerned about where things are heading. Refer back to my reading list for full bibliographic details of these books.

One volume, The Coming Wave, is penned by someone with a long history in this field. Mustafa Suleyman is currently the CEO of Microsoft AI. Early on in this important book he says this:

AI has been climbing the ladder of cognitive abilities for decades. And it now looks set to reach human-level performance across a very wide range of tasks within the next three years. That is a big claim, but if I’m even close to right, the implications are truly profound. What had, when we founded DeepMind, felt quixotic has become not just plausible but seemingly inevitable.

From the start, it was clear to me that AI would be a powerful tool for extraordinary good but, like most forms of power, one fraught with immense dangers and ethical dilemmas, too. I have long worried about not just the consequences of advancing AI but where the entire technological ecosystem was heading. Beyond AI, a wider revolution was underway, with AI feeding a powerful, emerging generation of genetic technologies and robotics. Further progress in one area accelerates the others in a chaotic and cross-catalyzing process beyond anyone’s direct control. It was clear that if we or others were successful in replicating human intelligence, this wasn’t just profitable business as usual but a seismic shift for humanity, inaugurating an era when unprecedented opportunities would be matched by unprecedented risks.

As the technology has progressed over the years, my concerns have grown. What if the wave is a tsunami? (p. 9)

For three decades Stuart Russell has been a leading figure in AI science. In Human Compatible: AI and the Problem of Control he asks a number of hard but crucial questions. In the book’s Afterword he writes:

Meeting a criterion such as generating “true and accurate” content does not, of course, guarantee that the system is completely safe. For example, a sufficiently capable system could be entirely truthful about its ineluctable plan to take control of the world. What we really need, of course, are systems that are provably safe and beneficial to humans, as outlined in this book. Unfortunately, the AI safety research community (which includes my own research group) has not moved nearly fast enough to develop an alternative technology path that is both safe and highly capable.

There is now broad recognition among governments that AI safety research is a high priority, and some observers have suggested the creation of an international research organization, comparable to CERN in particle physics, to focus resources and talent on this problem. This organization would be a natural complement to the international regulatory body suggested by British prime minister Rishi Sunak.

Despite the torrent of activity around AI regulation, almost no attention has been paid to the Dr. Evil problem mentioned in Chapter 10—the possibility that bad actors will deliberately deploy highly capable but unsafe AI systems for their own ends, leading to a potential loss of human control on a global scale. The prevalence of open-source AI technology will make this increasingly likely; moreover, policing the spread of software seems to be essentially impossible. (p. 320)

Mo Gawdat, the former chief business officer of Google [X], said this in Scary Smart:

It is predicted that by the year 2029, which is relatively just around the corner, machine intelligence will break out of specific tasks and into general intelligence. By then, there will be machines that are smarter than humans, full stop. Those machines will not only become smarter, they will know more (as they have access to the entire internet as their memory pool) and they will communicate between each other better, thus enhancing their knowledge. Think about it: when you or I have an accident driving a car, you or I learn, but when a self-driving car makes a mistake, all self-driving cars learn. Every single one of them, including the ones that have not yet been ‘born’.

By 2049, probably in our lifetimes and surely in those of the next generation, AI is predicted to be a billion times smarter (in everything) than the smartest human. To put this into perspective, your intelligence, in comparison to that machine, will be comparable to the intelligence of a fly in comparison to Einstein. We call that moment singularity. Singularity is the moment beyond which we can no longer see, we can no longer forecast. It is the moment beyond which we cannot predict how AI will behave because our current perception and trajectories will no longer apply.

Now the question becomes: how do you convince this superbeing that there is actually no point squashing a fly? I mean, we humans, collectively or individually, so far seem to have failed to grasp that simple concept, using our abundant intelligence. When our artificially intelligent (currently infant) supermachines become teenagers, will they become superheroes or supervillains? Good question, huh?

When such superpower is unleashed, anything can happen…. (pp. 7-8)


Scientist Jeremy Peckham has been involved in AI for some thirty years, and he offers this warning in Masters or Slaves? AI and the Future of Humanity:

While there’s a push towards creating ‘trustworthy AI’, even going as far as having product markings and standards approvals, I believe that this is dangerous because it doesn’t address the core effects on humanity. It focuses on important but subsidiary issues such as data bias and transparency. In essence many AI applications are just opaque algorithms, trained on a vast amount of data. As we’ve seen, this data could be skewed, and the probability that the input data a machine encounters will match this training database cannot be known. We cannot think of AI in the same way that we might think about constructing a safe or trustworthy bridge for traffic to cross, because in bridge design the engineering principles are well understood, verifiable and transparent.

The issue that we face as a civilization isn’t whether AI is or can ever be made trustworthy, but how we can use it wisely, given its limitations in the way it shapes us. (p. 214)

Finally, James Barrat in Our Final Invention makes this rather ominous remark:

In writing this book I spoke with scientists who create artificial intelligence for robotics, Internet search, data mining, face recognition, and other applications. I spoke with scientists trying to create human-level artificial intelligence, which will have countless applications, and will fundamentally alter our existence (if it doesn’t end it first). I spoke with chief technology officers of AI companies and the technical advisors for classified Department of Defense initiatives. Every one of these people was convinced that in the future all the important decisions governing the lives of humans will be made by machines or humans whose intelligence is augmented by machines. When? Many think this will take place within their lifetimes….

But artificial intelligence brings computers to life and turns them into something else. If it’s inevitable that machines will make our decisions, then when will the machines get this power, and will they get it with our compliance? How will they gain control, and how quickly? These are questions I’ve addressed in this book….

I’m not the first to propose that we’re on a collision course. Our species is going to mortally struggle with this problem. This book explores the plausibility of losing control of our future to machines that won’t necessarily hate us, but that will develop unexpected behaviors as they attain high levels of the most unpredictable and powerful force in the universe, levels that we cannot ourselves reach, and behaviors that probably won’t be compatible with our survival. A force so unstable and mysterious, nature achieved it in full just once—intelligence. (pp. 3-5)

The words of these experts need to be carefully considered. And lest some claim that I am just quoting from religious worry warts, as far as I know, only Peckham of the five considered here is a Christian. So plenty of non-Christian or non-religious thinkers and players in this field are sharing very real concerns about our posthuman future.

We need to heed their warnings.

[1783 words]
