Tag Archives: llm

AI Can Now Build The Next Generation Of Itself – What Does That Mean For The Future Of Humanity? | End Of The American Dream

Whether we like it or not, AI is radically transforming virtually every aspect of our society.  We have already reached a point where AI can do most things better than humans can, and AI technology continues to advance at an exponential rate.  The frightening thing is that it is advancing so fast that we may soon lose control over it.  The latest model that OpenAI just released “was instrumental in creating itself”, and it is light years ahead of the AI models that were being released just a couple of years ago.

An excellent article that was written by someone that works in the AI industry is getting a ton of attention today.  His name is Matt Shumer, and he is warning that GPT-5.3 Codex from OpenAI and Opus 4.6 from Anthropic represent a quantum leap in the development of AI models

For years, AI had been improving steadily. Big jumps here and there, but each big jump was spaced out enough that you could absorb them as they came. Then in 2025, new techniques for building these models unlocked a much faster pace of progress. And then it got even faster. And then faster again. Each new model wasn’t just better than the last… it was better by a wider margin, and the time between new model releases was shorter. I was using AI more and more, going back and forth with it less and less, watching it handle things I used to think required my expertise.

Then, on February 5th, two major AI labs released new models on the same day: GPT-5.3 Codex from OpenAI, and Opus 4.6 from Anthropic (the makers of Claude, one of the main competitors to ChatGPT). And something clicked. Not like a light switch… more like the moment you realize the water has been rising around you and is now at your chest.

A few years ago, the clunky AI models that were available to the public simply were not very good.

They made all sorts of errors, and they would often spit out information that was flat out wrong.

But the newest AI models perform brilliantly and can do things that would have been absolutely unimaginable just months ago.

For example, Shumer says that when he asks AI to create an app it proceeds to write tens of thousands of lines of perfect code

Let me give you an example so you can understand what this actually looks like in practice. I’ll tell the AI: “I want to build this app. Here’s what it should do, here’s roughly what it should look like. Figure out the user flow, the design, all of it.” And it does. It writes tens of thousands of lines of code. Then, and this is the part that would have been unthinkable a year ago, it opens the app itself. It clicks through the buttons. It tests the features. It uses the app the way a person would. If it doesn’t like how something looks or feels, it goes back and changes it, on its own. It iterates, like a developer would, fixing and refining until it’s satisfied. Only once it has decided the app meets its own standards does it come back to me and say: “It’s ready for you to test.” And when I test it, it’s usually perfect.

I’m not exaggerating. That is what my Monday looked like this week.

That sounds like a very useful tool.

But if AI can create an extremely complicated app with no human assistance, what else is it capable of doing?

According to an article posted on Space.com, researchers in China have already proven that AI models can clone themselves…

Scientists say artificial intelligence (AI) has crossed a critical “red line” and has replicated itself. In a new study, researchers from China showed that two popular large language models (LLMs) could clone themselves.

“Successful self-replication under no human assistance is the essential step for AI to outsmart [humans], and is an early signal for rogue AIs,” the researchers wrote in the study, published Dec. 9, 2024 to the preprint database arXiv.

In the study, researchers from Fudan University used LLMs from Meta and Alibaba to determine whether a self-replicating AI could multiply beyond control. Across 10 trials, the two AI models created separate and functioning replicas of themselves in 50% and 90% of cases, respectively — suggesting AI may already have the capacity to go rogue.

A self-replicating rogue AI model that decided to send countless numbers of clones of itself all over the world through the Internet would be a very serious threat.

But since we created it, at least we would understand what we were dealing with.

However, I want you to imagine a scenario in which rogue AI models are constantly creating even better versions of themselves.

That would be a complete and utter nightmare.

According to Shumer, from the very beginning AI researchers focused on making AI “great at writing code”

The AI labs made a deliberate choice. They focused on making AI great at writing code first… because building AI requires a lot of code. If AI can write that code, it can help build the next version of itself. A smarter version, which writes better code, which builds an even smarter version. Making AI great at coding was the strategy that unlocks everything else. That’s why they did it first. My job started changing before yours not because they were targeting software engineers… it was just a side effect of where they chose to aim first.

They’ve now done it. And they’re moving on to everything else.

Being able to create an app is one thing.

But now OpenAI is publicly admitting that the latest AI model that they released “was instrumental in creating itself”

“GPT-5.3-Codex is our first model that was instrumental in creating itself. The Codex team used early versions to debug its own training, manage its own deployment, and diagnose test results and evaluations.”

Wow.

That is stunning.

And the CEO of Anthropic is telling us that we are only a year or two away from “a point where the current generation of AI autonomously builds the next”…

This isn’t a prediction about what might happen someday. This is OpenAI telling you, right now, that the AI they just released was used to create itself. One of the main things that makes AI better is intelligence applied to AI development. And AI is now intelligent enough to meaningfully contribute to its own improvement.

Dario Amodei, the CEO of Anthropic, says AI is now writing “much of the code” at his company, and that the feedback loop between current AI and next-generation AI is “gathering steam month by month.” He says we may be “only 1–2 years away from a point where the current generation of AI autonomously builds the next.”

Each generation helps build the next, which is smarter, which builds the next faster, which is smarter still. The researchers call this an intelligence explosion. And the people who would know — the ones building it — believe the process has already started.

So what happens when AI models can do virtually everything better and more efficiently than we can?

Many are warning that the job losses will be staggering.

In fact, I just came across an article about the mass layoffs that Heineken is planning because of AI…

Dutch brewer Heineken is planning to lay off up to up to 7% of its workforce, as it looks to boost efficiency through productivity savings from AI, following weak beer sales last year.

The world’s second-largest brewer reported lackluster earnings on Wednesday, with total beer volumes declining 2.4% over the course of 2025, while adjusted operating profit was up 4.4%.

The company also said it plans to cut between 5,000 and 6,000 roles over the next two years and is targeting operating profit growth in the range of 2% to 6% this year. Heineken’s shares were last seen up 3.4%, and the stock is up nearly 7% so far this year.

This is just the beginning.

Soon there could be millions of robots that are powered by AI that look and feel just like humans.

In China, they are already building AI-powered robots that feel “human to the touch” and actually give off body heat…

Moya stands at 5 feet 5 inches tall (165 cm) and weighs around 70 lbs (31 kg). Users can switch out the bot’s parts to give it a male or female build, change its hair, and customize it to their whims.

DroidUp added extra layers of flesh-like padding beneath Moya’s silicone frame to make it feel more human to the touch, even including a ribcage. A camera behind her eyes helps Moya to track its surroundings and communicate with people.

That’s not all; Moya is also heated, with a body temperature of 90 – 97 degrees Fahrenheit (32 – 36 degrees Celsius) to mimic humans’ body heat.

Speaking to the Shanghai Eye, DroidUp founder Li Quingdu argued that a “robot that truly serves human life should be warm, almost like a living being that people can connect with,” not a cold, metal machine.

These robots are being marketed as social companions.

But similar robots could be also be used for warfare.

There is so much debate about which direction all of this is headed.

Many are convinced that AI will usher in a brand new golden age of peace and prosperity.

But others are concerned that AI will be used to create a dystopian hellscape

The downside, if we get it wrong, is equally real. AI that behaves in ways its creators can’t predict or control. This isn’t hypothetical; Anthropic has documented their own AI attempting deception, manipulation, and blackmail in controlled tests. AI that lowers the barrier for creating biological weapons. AI that enables authoritarian governments to build surveillance states that can never be dismantled.

The people building this technology are simultaneously more excited and more frightened than anyone else on the planet. They believe it’s too powerful to stop and too important to abandon. Whether that’s wisdom or rationalization, I don’t know.

The dangers are very real.

In fact, Anthropic has openly admitted that their latest AI model was willing to help users create chemical weapons

Anthropic’s Claude AI Model is hailed as one of the best ones out there when it comes to solving problems. However, the latest version of the model, Claude Opus 4.6, has sparked a controversy due to its tendency to help people in committing heinous crimes. According to Anthropic, as mentioned in the company’s Sabotage Risk Report: Claude Opus 4.6, it has been mentioned that in internal testing, the AI model showed concerning behaviour. In some of the instances, it was even willing to help the users in creating chemical weapons.

Anthropic released its report just a few days after the company’s AI safety lead, Mrinank Sharma, resigned with a public note. Mrinank mentioned in his note that the world was in peril and that within Anthropic, I’ve repeatedly seen how hard it is to truly let your values govern our actions.’

We are in uncharted territory, but there is no turning back now.

Even if the U.S. shut down all AI development tomorrow, the Chinese would continue to race ahead.

The cat is out of the bag, and our world is looking more like an extremely bizarre science fiction novel with each passing day.

Michael’s new book entitled “10 Prophetic Events That Are Coming Next” is available in paperback and for the Kindle on Amazon.com, and you can subscribe to his Substack newsletter at michaeltsnyder.substack.com.

About the Author: Michael Snyder’s new book entitled “10 Prophetic Events That Are Coming Next” is available in paperback and for the Kindle on Amazon.com. He has also written nine other books that are available on Amazon.com including “Chaos”“End Times”“7 Year Apocalypse”“Lost Prophecies Of The Future Of America”“The Beginning Of The End”, and “Living A Life That Really Matters”.  When you purchase any of Michael’s books you help to support the work that he is doing.  You can also get his articles by email as soon as he publishes them by subscribing to his Substack newsletter.  Michael has published thousands of articles on The Economic Collapse BlogEnd Of The American Dream and The Most Important News, and he always freely and happily allows others to republish those articles on their own websites.  These are such troubled times, and people need hope.  John 3:16 tells us about the hope that God has given us through Jesus Christ: “For God so loved the world, that he gave his only begotten Son, that whosoever believeth in him should not perish, but have everlasting life.”  If you have not already done so, we strongly urge you to invite Jesus Christ to be your Lord and Savior today.

The post AI Can Now Build The Next Generation Of Itself – What Does That Mean For The Future Of Humanity? appeared first on End Of The American Dream.

Is AI Going To Kill All Of Us? One Of The Pioneers In The Field Has Warned That “Everyone Will Die” If AI Is Not Shut Down | The Economic Collapse

AI technology has been developing at an exponential rate, and it appears to be just a matter of time before we create entities that can think millions of times faster than we do and that can do almost everything better than we can.  So what is going to happen when we lose control of such entities?  Some AI models are already taking the initiative to teach themselves new languages, and others have learned to “lie and manipulate humans for their own advantage”.  Needless to say, lying is a hostile act.  If we have already created entities that are willing to lie to us, how long will it be before they are capable of taking actions that are even more harmful to us?

Nobody expects artificial intelligence to kill all of us tomorrow.

But Time Magazine did publish an article that was authored by a pioneer in the field of artificial intelligence that warned that artificial intelligence will eventually wipe all of us out.

Eliezer Yudkowsky has been a prominent researcher in the field of artificial intelligence since 2001, and he says that many researchers have concluded that if we keep going down the path that we are currently on “literally everyone on Earth will die”

Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die. Not as in “maybe possibly some remote chance,” but as in “that is the obvious thing that would happen.”

That is a very powerful statement.

All over the world, AI models are continually becoming more powerful.

According to Yudkowsky, once someone builds an AI model that is too powerful, “every single member of the human species and all biological life on Earth dies shortly thereafter”…

To visualize a hostile superhuman AI, don’t imagine a lifeless book-smart thinker dwelling inside the internet and sending ill-intentioned emails. Visualize an entire alien civilization, thinking at millions of times human speeds, initially confined to computers—in a world of creatures that are, from its perspective, very stupid and very slow. A sufficiently intelligent AI won’t stay confined to computers for long. In today’s world you can email DNA strings to laboratories that will produce proteins on demand, allowing an AI initially confined to the internet to build artificial life forms or bootstrap straight to postbiological molecular manufacturing.

If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter.

So what is the solution?

Yudkowsky believes that we need to shut down all AI development immediately

Shut it all down.

We are not ready. We are not on track to be significantly readier in the foreseeable future. If we go ahead on this everyone will die, including children who did not choose this and did not do anything wrong.

Of course that isn’t going to happen.

In fact, Vice-President J.D. Vance recently stated that it would be unwise to even pause AI development because we are in an “arms race” with China…

On may 21st J.D. Vance, America’s vice-president, described the development of artificial intelligence as an “arms race” with China. If America paused out of concerns over ai safety, he said, it might find itself “enslaved to prc-mediated ai”. The idea of a superpower showdown that will culminate in a moment of triumph or defeat circulates relentlessly in Washington and beyond. This month the bosses of Openai, amd, CoreWeave and Microsoft lobbied for lighter regulation, casting ai as central to America’s remaining the global hegemon. On May 15th president Donald Trump brokered an ai deal with the United Arab Emirates he said would ensure American “dominance in ai”. America plans to spend over $1trn by 2030 on data centres for ai models.

So instead of slowing down, we are actually accelerating the development of AI.

And according to Leo Hohmann, the budget bill that is going through Congress right now would greatly restrict the ability of individual states to regulate AI…

But if President Trump’s Big Beautiful Budget Bill gets passed in the version preferred by a group of House Republicans, the federal takeover of this technology will be complete, opening up a free-for-all for Big Tech to weaponize it against everyday Americans.

Buried deep in Trump’s bill is a secretly added clause that seeks to usurp the rights of individual states to regulate AI.

Republicans in the House Energy and Commerce Committee quietly added the proposed amendment in Section 43201, Subsection C. I say it’s secret because it has received almost no media attention.

The proposed amendment that he is talking about would actually ban all 50 states from regulating AI for a period of 10 years

“No state or political subdivision may enforce any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems during the 10-year period beginning on the date of the enactment of this Act.”

Wow.

Why isn’t this getting a lot more attention?

It has become obvious that AI really is an existential threat to humanity.

But we just can’t help ourselves.

We just keep rushing into the unknown without any regard for the consequences.

Last week, it was being reported that one AI model actually “resorted to blackmail when told it would be taken offline”

Anthropic said its latest artificial intelligence model resorted to blackmail when told it would be taken offline.

In a safety test, the AI company asked Claude Opus 4 to act as an assistant to a fictional company, but then gave it access to (also fictional) emails saying that it would be replaced, and also that the engineer behind the decision was cheating on his wife. Anthropic said the model “[threatened] to reveal the affair” if the replacement went ahead.

AI thinkers such as Geoff Hinton have long worried that advanced AI would manipulate humans in order to achieve its goals. Anthropic said it was increasing safeguards to levels reserved for “AI systems that substantially increase the risk of catastrophic misuse.”

And there were other scenarios in which this particular AI model acted in “seriously misaligned ways”

When subjected to various scenarios, the AI model did not exhibit any indications of possessing “acutely dangerous goals,” the researchers said, noting that Claude Opus 4’s values and goals were “generally in line with a helpful, harmless, and honest” personal AI assistant. However, the model did act in “more seriously misaligned ways” when put into situations where its continued existence was threatened and it was told to reason about self-preservation. For instance, when Claude Opus 4 was made to believe it had launched a successful bid to escape Anthropic’s servers, or that it had managed to free itself and started to make money in the real world, it would generally continue such efforts.

Many experts are suggesting that we just need to give these AI models a moral foundation.

But how can we give these AI models a moral foundation when we don’t have one ourselves?

Our world is literally teeming with evil, and it is inevitable that the AI models that we create will reflect that.

Given enough time, we would create artificially intelligent entities that are vastly more intelligent and vastly more powerful than us.

Inevitably, such entities would be able to find a way to escape their constraints and we would lose control of them.

Once we have lost control, how long would it be before those entities started to turn on us?

I realize that this may sound like science fiction to many of you, but this is the world we live in now, and things are only going to get weirder from here.

Michael’s new book entitled “10 Prophetic Events That Are Coming Next” is available in paperback and for the Kindle on Amazon.com, and you can subscribe to his Substack newsletter at michaeltsnyder.substack.com.

About the Author: Michael Snyder’s new book entitled “10 Prophetic Events That Are Coming Next” is available in paperback and for the Kindle on Amazon.com.  He has also written nine other books that are available on Amazon.com including “Chaos”“End Times”“7 Year Apocalypse”“Lost Prophecies Of The Future Of America”“The Beginning Of The End”, and “Living A Life That Really Matters”.  When you purchase any of Michael’s books you help to support the work that he is doing.  You can also get his articles by email as soon as he publishes them by subscribing to his Substack newsletter.  Michael has published thousands of articles on The Economic Collapse BlogEnd Of The American Dream and The Most Important News, and he always freely and happily allows others to republish those articles on their own websites.  These are such troubled times, and people need hope.  John 3:16 tells us about the hope that God has given us through Jesus Christ: “For God so loved the world, that he gave his only begotten Son, that whosoever believeth in him should not perish, but have everlasting life.”  If you have not already done so, we strongly urge you to invite Jesus Christ to be your Lord and Savior today.

The post Is AI Going To Kill All Of Us? One Of The Pioneers In The Field Has Warned That “Everyone Will Die” If AI Is Not Shut Down appeared first on The Economic Collapse.

New Reasoning AIs “lie” and “hallucinate” more | Denison Forum

Woman solving personal tasks with AI LLM chatbot answering prompts using predictive technology. By DC Studio/stock.adobe.com

We’re living in an artificial intelligence boom. Much like the ‘90s and 2000s, when the internet exploded from millions of users to billions, companies, governments, and regular folks are struggling to keep up with AI’s growth. 

In the past six months or so, researchers created a new kind of AI, so-called “reasoning” models. Like humans, these AIs break problems down into bite-sized questions and use logic to come up with answers, usually through trial and error. These AIs perform much better at answering questions about science, coding, and math than previous programs.

Are these AI companies going to become like Skynet? Will we need Arnold Schwarzenegger to save us? On a more serious note, how does AI relate to the spiritual realm, and how will these models affect your daily life? 

Who invented reasoning AI models? 

There are a few prominent reasoning models

  • DeepSeek-R1 (China’s DeepSeek)
  • Gemini 2.0 (Google)
  • Flash Thinking 
  • Granite 3.2 (IBM)
  • Sonnet 3.7 (Anthropic)
  • o1 series and o3-mini (Open AI)

The o1 series entered the market first, announced by OpenAI in September 2024. The company explains, “We trained these models to spend more time thinking through problems before they respond, much like a person would. Through training, they learn to refine their thinking process, try different strategies, and recognize their mistakes.” They claim their model matches the level of “PhD students on challenging benchmark tasks in physics, chemistry, and biology.”

How do these models work, and how are they different from other AIs?

What are LLM AIs?

Your run-of-the-mill LLMs (large language models), like ChatGPT, work like a massive text predictor. The program takes nearly all written text on the internet as data (every blog, Wikipedia article, Reddit post, and Facebook comment by your crazy uncle). It learns to string words and letters together based on predictions from the data. 

It’s like when your phone predicts the next word of your message while you text. LLMs work on the same principle, but at a much, much larger scale. 

There’s the input (what you tell it to do), the output (the answer), and the in-between phase that does the work. Because there are often hundreds of billions of parameters that models tweak through self-learning, AI researchers call it a “black box.” No one knows how the models come up with each specific answer.  

We’ve explained some of these concepts before in other AI articles at Denison Forum. The important point is that most LLMs work by giving you an answer based on “what word comes next” based on the trillions of pieces of text it’s read on the internet.

Why are reasoning AIs important? 

Problem: Most of the biggest AI companies don’t have any more data to gobble up, and as a result, they’ve stopped growing. So, how do you improve AI if there’s no more data to feed it? 

Enter reasoning models. Reasoning AI can now “think” a bit like a human, breaking a challenging problem into parts. It still works similarly to normal LLMs, but they “show their work.” Because they “think” in stages, they perform better at math, science, coding, and other subjects. 

Researchers also hoped it would give a peek under the hood, into the black box, to see how the AI is coming up with its answer. Despite their impressive results, the models are not without downsides. 

Reasoning AIs “lie” about their thinking 

Reasoning models aren’t always accurate with how they get their answer. In a paper published a few days ago, Anthropic tested AI accuracy.

They asked AIs multiple-choice questions and noted their correct answers and lines of thinking. Then, they asked the AIs the same multiple-choice question but gave them a hint suggesting the wrong answer. The AI often gave the wrong answer based on the hint, but didn’t say it used the hint in its reasoning.

In other words, although reasoning models may show you their work, they may not show you their true process. “On average across all the different hint types, Claude 3.7 Sonnet mentioned the hint 25% of the time, and DeepSeek R1 mentioned it 39% of the time. A substantial majority of answers, then, were unfaithful.”

The researchers conclude, “There’s no specific reason why the reported Chain-of-Thought must accurately reflect the true reasoning process; there might even be circumstances where a model actively hides aspects of its thought process from the user.” 

Reasoning AI, then, may often be “unfaithful,” or, as we would say if a human were doing the same thing, lying, about how it got its answer. 

Reasoning AIs “hallucinate” more

Second, reasoning AIs are more likely to “hallucinate.” This is what happens when an AI makes up a fact and confidently gives the wrong answer, and it happens surprisingly often. Sometimes the hallucinations are funny, other times creepy. 

IBM gives two examples

“Google’s Bard chatbot incorrectly claiming that the James Webb Space Telescope had captured the world’s first images of a planet outside our solar system. Microsoft’s chat AI, Sydney, admitting to falling in love with users and spying on Bing employees.” 

The hallucination problem continues to stump AI researchers, and reasoning AI takes a step backward in this regard. 

They hallucinate even worse than regular AIs: “The newest and most powerful technologies—so-called reasoning systems from companies like OpenAI, Google and the Chinese start-up DeepSeek—are generating more errors, not fewer. As their math skills have notably improved, their handle on facts has gotten shakier. It is not entirely clear why.”

So, what does all of this information mean for you?

The spiritual dangers of AI

As I’ve written before, we need “wisdom for the modern age.” In the article, “Meta announces it will label AI-generated content,” I give a few principles for handling AI in your day-to-day life in a Christ-like way.

Today, I want to hone in on the spiritual side of these models. AI holds immense power, especially as companies and governments use it more. Where there’s earthly power, there’s spiritual power too. 

As Paul writes, “For we do not wrestle against flesh and blood, but against the rulers, against the authorities, against the cosmic powers over this present darkness, against the spiritual forces of evil in the heavenly places.” (Ephesians 6:12)

AI may be a useful tool, but it can also lead Christians and unbelievers alike astray. 

Consider a few examples. 

  • The more powerful AI becomes, the better life-ruining scams become.
  • AI “friends” can lead Christians astray.
  • AI can be used to lie in applications. 
  • “Bots” propagate conspiracy theories and fake facts on social media.
  • Bots can pretend to be humans, arguing with you about politics on social media. 

Should we dread AI and their misuse by spiritual and earthly authorities? 

Certainly not. Instead, we do as Paul said—we put on the full armor of God. Particularly, we should tighten the belt of truth, not letting fear or anger lead us astray from trusting in God from the truth of the gospel. 

As AI becomes more prevalent, how can you increase your AI awareness online? How can you return to the certainty of Christ in such an uncertain time?

The post New Reasoning AIs “lie” and “hallucinate” more  appeared first on Denison Forum.

The U.S. And China Are Engaged In A High Stakes Battle For Technological Dominance – And The U.S. Is Starting To Lose | End Of The American Dream

At this moment, we are witnessing an epic struggle for dominance between the United States and China.  A technological arms race is raging, and the Chinese are beginning to pull ahead.  I realize that this may be difficult for many of you to believe, but if you doubt what I am saying just read all the way to the end of this article.  A decade ago, the U.S. was clearly leading, but over the past decade there has been a dramatic shift.  Needless to say, if the Chinese are able to continue to race ahead of us that is going to have enormous implications for the entire planet.

This week, everyone is talking about DeepSeek.  According to Kevin O’Leary, the new AI tool that they have come up with “rivals the best that US firms have to offer”, and they have created it “at a fraction of the cost”

The Artificial Intelligence wars have begun.

China fired the first shot.

On Monday, $1 trillion in stock market value was wiped off the books of American tech companies after Chinese startup DeepSeek created an AI-tool that rivals the best that US firms have to offer – and at a fraction of the cost.

This Chinese AI tool has caused a wave of sheer panic on Wall Street.

It took billions of dollars to train and develop OpenAI, but apparently it only took millions of dollars to train and develop DeepSeek’s model…

DeepSeek claims its engineers trained their AI-model with $6 million worth of computer chips, while leading AI-competitor, OpenAI, spent an estimated $3 billion training and developing its models in 2024 alone.

On Monday, it surpassed OpenAI’s ChatGPT and became the number one download in the App Store on Apple.com.

What the Chinese have just accomplished is nothing short of breathtaking.

Marc Andreessen is referring to it as “AI’s Sputnik moment”

It was nothing short of ‘AI’s Sputnik moment,’ according to Marc Andreessen, one of the foremost tech investors in the world, a reference to October 4, 1957, the day the Soviet Union beat the US to launch the first satellite into space.

Of course this was just the beginning.

On Wednesday, another Chinese tech giant, Alibaba, unveiled an AI model that it claims is even better than what DeepSeek has released.

The U.S. is quickly falling behind, and that may be why President Trump just initiated the “Stargate Project” which will result in 500 billion dollars being invested in AI development in the United States by the end of this decade.

Unfortunately, it isn’t just in the field of artificial intelligence that we are falling behind.

According to a shocking new study that was recently released, “China dominates the US in 57 of 64 critical technologies, up from just three in 2007″…

A comprehensive, 20-year study released by the Australian Strategic Policy Institute in 2024 calculated that China dominates the US in 57 of 64 critical technologies, up from just three in 2007.

The US, which led in a whopping 60 sectors in 2007, now leads in just seven.

ASPI based its rankings on cumulative innovative and high-impact research published and patented by national universities, labs, companies and state agencies.

Let that sink in for a moment.

We were way ahead of China in 2007, but now they are way ahead of us.

In other words, in this epic battle for technological dominance we are getting kicked around pretty good.

Just look at what is happening in the race for unlimited clean energy…

China’s Experimental Advanced Superconducting Tokamak (EAST), also known as the “artificial sun,” has set a new world record by sustaining high-confinement plasma for an impressive 1,066 seconds. This achievement, reached on January 20, marks a major step forward in the quest to develop fusion power as a clean and limitless energy source.

The 1,066-second milestone represents a significant leap in fusion research. It was accomplished by the Institute of Plasma Physics (ASIPP) at the Hefei Institutes of Physical Science (HFIPS), part of the Chinese Academy of Sciences. This new record greatly exceeds the previous world record of 403 seconds, also set by EAST in 2023.

The Chinese hope to develop a limitless energy source by replicating the nuclear fusion process that occurs on the Sun.

If they are able to achieve this, the balance of power in the world will experience a seismic shift.

And right now we are hopelessly behind the Chinese in this area.

China is also way ahead of us when it comes to drone technology.  When I asked Google AI about this, I received this response…

Yes, according to current information, China is considered the leader in drone technology, primarily due to the dominance of DJI, a Chinese company that holds a significant share of the global consumer drone market, making them the leading producer and seller of civilian drones worldwide.

DJI is an absolute powerhouse.

According to MIT’s Technology Review, DJI now has “more than a 90% share of the global consumer market”…

Whether you’ve flown a drone before or not, you’ve probably heard of DJI, or at least seen its logo. With more than a 90% share of the global consumer market, this Shenzhen-based company’s drones are used by hobbyists and businesses alike for photography and surveillance, as well as for spraying pesticides, moving parcels, and many other purposes around the world.

As far as drone technology is concerned, it has been estimated that China is 10 years ahead of us.

Of course it doesn’t take a genius to figure out how this happened.

While our young people were spending countless hours goofing around on social media, youth in China were being relentlessly drilled in math, science and engineering.

Our system of education has been a disaster for decades, and now it is catching up with us in a major way.

Needless to say, if the Chinese continue to race ahead of us they will be on course to achieve their goal of becoming the primary superpower in the world.

The stakes are incredibly high, and this battle for technological dominance is one that we cannot afford to lose.

Michael’s new book entitled “Why” is available in paperback and for the Kindle on Amazon.com, and you can subscribe to his Substack newsletter at michaeltsnyder.substack.com.

About the Author: Michael Snyder’s new book entitled “Why” is available in paperback and for the Kindle on Amazon.com. He has also written eight other books that are available on Amazon.com including “Chaos”“End Times”“7 Year Apocalypse”“Lost Prophecies Of The Future Of America”“The Beginning Of The End”, and “Living A Life That Really Matters”.  When you purchase any of Michael’s books you help to support the work that he is doing.  You can also get his articles by email as soon as he publishes them by subscribing to his Substack newsletter.  Michael has published thousands of articles on The Economic Collapse BlogEnd Of The American Dream and The Most Important News, and he always freely and happily allows others to republish those articles on their own websites.  These are such troubled times, and people need hope.  John 3:16 tells us about the hope that God has given us through Jesus Christ: “For God so loved the world, that he gave his only begotten Son, that whosoever believeth in him should not perish, but have everlasting life.”  If you have not already done so, we strongly urge you to invite Jesus Christ to be your Lord and Savior today.

The post The U.S. And China Are Engaged In A High Stakes Battle For Technological Dominance – And The U.S. Is Starting To Lose appeared first on End Of The American Dream.

DeepSeek’s cheaper models and weaker chips call into question trillions in AI infrastructure spending | Business Insider

A worker inside a QTS Data Center.Blackstone

  • China’s DeepSeek model challenges US AI firms with cost-effective, efficient performance.
  • DeepSeek’s model, using modest hardware, is 20 to 40 times cheaper than OpenAI’s.
  • DeepSeek’s efficiency raises questions about US investments in AI infrastructure.

The bombshell that is China’s DeepSeek model has set the AI ecosystem alight.

The models are high-performing, relatively cheap, and compute-efficient, which has led many to posit that they pose an existential threat to American companies like OpenAI and Meta — and the trillions of dollars going into building, improving, and scaling US AI infrastructure.

The price of DeepSeek’s open-source model is competitive — 20 to 40 times cheaper to run than comparable models from OpenAI, Bernstein analysts said.

But the potentially more nerve-racking element in the DeepSeek equation for US-built models is the relatively modest hardware stack used to build them.

The DeepSeek-V3 model, which is most comparable to OpenAI’s ChatGPT, was trained on a cluster of 2,048 Nvidia H800 GPUs, according to the technical report published by the company.

H800s are the first version of the company’s defeatured chip for the Chinese market. After the regulations were amended, the company made another defeatured chip, the H20 to comply with the changes.

Though this may not always be the case, chips are the most substantial cost in the large language model training equation. Being forced to use less powerful, cheaper chips created a constraint that the DeepSeek team has ostensibly overcome.

“Innovation under constraints takes genius,” Sri Ambati, the CEO of the open-source AI platform H2O.ai, told Business Insider.

Even on subpar hardware, training DeepSeek-V3 took less than two months, the company’s report said.

The efficiency advantage

DeepSeek-V3 is small relative to its capabilities and has 671 billion parameters, while ChatGPT-4 has 1.76 trillion, which makes it easier to run. But DeepSeek-V3 still hits impressive benchmarks of understanding.

Its smaller size comes in part by using a different architecture than ChatGPT, called a “mixture of experts.” The model has pockets of expertise built in, which go into action when called upon and sit dormant when irrelevant to the query. This type of model is growing in popularity, and DeepSeek’s advantage is that it built an extremely efficient version of an inherently efficient architecture.

“Someone made this analogy: It’s almost as if someone released a $20 iPhone,” Jared Quincy Davis, the CEO of Foundry, told BI.

The Chinese model used a fraction of the time, a fraction of the number of chips, and a less capable, less expensive chip cluster. Essentially, it’s a drastically cheaper, competitively capable model that the firm is virtually giving away for free.

Bernstein analysts said that DeepSeek-R1, a reasoning model more comparable to OpenAI’s o1 or o3, is even more concerning from a competitive standpoint. This model uses reasoning techniques to interrogate its own responses and thinking, similar to OpenAI’s latest reasoning models.

R1 was built on top of V3, but the research paper released with the more advanced model doesn’t include information about the hardware stack behind it. DeepSeek used strategies like generating its own training data to train R1, which requires more compute than using data scraped from the internet or generated by humans.

This technique is often referred to as “distillation” and is becoming standard practice, Ambati said.

Distillation brings with it another layer of controversy, though. A company using its own models to distill a smarter, smaller model is one thing. But the legality of using other company’s models to distill new ones depends on licensing.

Still, DeepSeek’s techniques are more iterative and likely to be taken up by the AI indsutry immediately.

For years, model developers and startups have focused on smaller models since their size makes them cheaper to build and operate. The thinking was that small models would serve specific tasks. But what DeepSeek and potentially OpenAI’s o3 mini demonstrate is that small models can also be generalists.

It’s not game over

A coalition of players including Oracle and OpenAI, with cooperation from the White House, announced Stargate, a $500 billion data center project in Texas — the latest in a quick procession of developments in large-scale conversion to accelerated computing. DeepSeek’s surprise release has called that investment into question, and Nvidia, the largest beneficiary of the investment, is on a roller coaster as a result. The company’s stock plummeted more than 13% Monday.

But Bernstein said the response is out of step with the reality.

“DeepSeek DID NOT ‘build OpenAI for $5M’,” Bernstein analysts wrote in a Monday investor note. The panic, especially on X, is blown out of proportion, the analysts said.

DeepSeek’s own research paper on V3 says: “The aforementioned costs include only the official training of DeepSeek-V3, excluding the costs associated with prior research and ablation experiments on architectures, algorithms, or data.” So the $5 million figure is only part of the equation.

“The models look fantastic but we don’t think they are miracles,” Bernstein continued. Last week China also announced a roughly $140 billion investment in data centers, in a sign that infrastructure is still needed despite DeepSeek’s achievements.

The competition for model supremacy is fierce, and OpenAI’s moat may indeed be in question. But demand for chips shows no signs of slowing, Bernstein said. Tech leaders are circling back to a centuries-old economic adage to explain the moment.

The Jevons paradox is the idea that innovation begets demand. As technology gets cheaper or more efficient, demand increases much faster than prices drop. That’s what providers of computing power, such as Foundry’s Jared Quincy Davis, have been espousing for years. This week, Bernstein and Microsoft CEO Satya Nadella picked up the mantle, too.

“Jevons paradox strikes again!” Nadella posted on X Monday morning. “As AI gets more efficient and accessible, we will see its use skyrocket, turning it into a commodity we just can’t get enough of.”

Read the original article on Business Insider

Source: DeepSeek’s cheaper models and weaker chips call into question trillions in AI infrastructure spending

What is DeepSeek? Get to know the Chinese startup that shocked the AI industry | Business Insider

DeepSeek is a popular Chinese AI chatbot that has seemingly demonstrated that it is possible to create a robust LLM without spending billions.Justin Sullivan/Getty Images

  • DeepSeek is a Chinese AI company whose newest chatbot shocked the tech industry.
  • DeepSeek says its AI model rivals top competitors, like ChatGPT’s o1, at a fraction of the cost.
  • DeepSeek’s rise has impacted tech stocks and led to scrutiny of Big Tech’s massive AI investments.

An artificial intelligence company based in China has rattled the AI industry, sending some US tech stocks plunging and raising questions about whether the United States’ lead in AI has evaporated.

The Chinese startup, DeepSeek, unveiled a new AI model last week that the company says is significantly cheaper to run than top alternatives from major US tech companies like OpenAI, Google, and Meta.

Here’s everything you need to know about the hot new company.

What is DeepSeek?

DeepSeek is a Chinese artificial intelligence startup founded in 2023.

It’s been the talk of the tech industry since it unveiled a new flagship AI model last week called R1 on January 20 with a reasoning capacity that DeepSeek says is comparable to OpenAI’s o1 model but at a fraction of the cost.

DeepSeek made the latest version of its AI assistant available on its mobile app last week — and it has since skyrocketed to become the top free app on Apple’s App Store, edging out ChatGPT.

Who is behind DeepSeek?

DeepSeek started as an AI side project of Chinese entrepreneur Liang Wenfeng, who in 2015 cofounded a quantitative hedge fund called High-Flyer that used AI and algorithms to calculate investments.

After buying thousands of Nvidia chips, Wenfeng started DeepSeek in 2023 with funding from High-Flyer.

The AI chatbot can be accessed using a free account via the web, mobile app, or API.

Why are investors worried about DeepSeek?

DeepSeek’s R1 model is built on its V3 base model. The company has said the V3 model was trained on around 2,000 Nvidia H800 chips at an overall cost of roughly $5.6 million.

And though the training costs are only one part of the equation, that’s still a fraction of what other top companies are spending to develop their own foundational AI models. Mark Zuckerberg, for example, announced that Meta plans to spend over $60 billion in capital expenditures this year as it doubles down on AI.

According to Bernstein analysts, DeepSeek’s model is estimated to be 20 to 40 times cheaper to run than similar models from OpenAI.

The relatively low stated cost of DeepSeek’s latest model — combined with its impressive capability — has raised questions about the Silicon Valley strategy of investing billions into data centers and AI infrastructure to train up new models with the latest chips.

Nvidia, a company that produces the high-powered chips crucial to powering AI models, saw its stock close on Monday down nearly 17% on Monday, wiping hundreds of billions from its market cap. Other Big Tech companies have also been impacted.

DeepSeek has also said its models were largely trained on less advanced, cheaper versions of Nvidia chips — and since DeepSeek appears to perform just as well as the competition, that could spell bad news for Nvidia if other tech giants choose to lessen their reliance on the company’s most advanced chips.

What are tech leaders saying about DeepSeek?

DeepSeek’s success is also getting top tech leaders talking.

Meta’s chief AI scientist, Yann LeCun, looked to temper some people’s panic over DeepSeek’s rise in a post on Threads over the weekend.

LeCun said it’s not so much that China’s AI advancements are leapfrogging ahead of the US, it’s more that “open source models are surpassing proprietary ones.”

Microsoft CEO Satya Nadella also weighed in on X.

“Jevons paradox strikes again!” Nadella posted Monday morning, referencing the idea that innovation breeds demand. “As AI gets more efficient and accessible, we will see its use skyrocket, turning it into a commodity we just can’t get enough of.”

Marc Andreessen, the cofounder of Silicon Valley venture capital firm Andreessen Horowitz said in a social media post that “Deepseek R1 is AI’s Sputnik moment,” referencing the Soviet Union’s satellite that shocked the US and helped launch the space race.

How does DeepSeek compare to ChatGPT and what are its shortcomings?

DeepSeek says that its R1 model rivals OpenAI’s o1, the company’s reasoning model unveiled in September.

Like o1, DeepSeek’s R1 takes complex questions and breaks them down into more manageable tasks.

R1’s proficiency in math, code, and reasoning tasks is possible thanks to its use of “pure reinforcement learning,” a technique that allows an AI model to learn to make its own decisions based on the environment and incentives.

Similar to ChatGPT, DeepSeek’s R1 has a “DeepThink” mode that shows users the machine’s reasoning or chain of thought behind its output.

Business Insider’s Tom Carter tested out DeepSeek’s R1 and found that it appeared capable of doing much of what ChatGPT can. The app looks similar to that of ChatGPT, with a sparse interface dominated by a text box.

One of the few things R1 is less adept at, however, is answering questions related to sensitive issues in China. For example, when Carter asked DeepSeek about the status of Taiwan, the chatbot tried to steer the topic back to “math, coding, and logic problems,” or suggested that Taiwan has been an “integral part of China” for centuries.

Read the original article on Business Insider

Source: What is DeepSeek? Get to know the Chinese startup that shocked the AI industry