Stargate will create jobs. But not for humans.

On Tuesday, I was planning to write a story about the Trump administration's repeal of Biden's executive order on AI. (The biggest implication: labs are no longer required to report dangerous capabilities to the government, though they may still do so voluntarily.) But then two bigger and more important AI stories dropped this week: one technical, and one economic.


Stargate is a jobs program – but probably not for humans

The economic story is Stargate. Along with companies like Oracle and SoftBank, OpenAI co-founder Sam Altman announced a surprise plan to invest $500 billion in "new AI infrastructure for OpenAI" – that is, in the data centers and power plants it will need.

People immediately had questions. First, Elon Musk publicly declared that "they don't actually have the money," to which Microsoft CEO Satya Nadella replied: "I'm good for my $80 billion." (Microsoft, remember, is a major stakeholder in OpenAI.)

Second, others challenged OpenAI's claim that the program will "create millions of US jobs."

Why? Well, the only plausible way investors get their money back on this project is if, as the company is betting, OpenAI soon develops AI systems that can do most of the work humans can do on a computer. Economists are fiercely debating exactly what economic impact that would have, if it came to pass, but "creating millions of jobs" doesn't seem like it, at least not over the long term.

Mass automation has happened before, at the start of the Industrial Revolution, and some people sincerely expect that in the long run it will turn out to be a good thing for society this time too. (My view: that really depends on whether we have a plan to maintain democratic accountability and adequate oversight, and to share the benefits of the alarming new sci-fi world. Right now, we have nothing of the kind, so I'm not cheering the prospect of being automated.)

But even if you're more excited about automation than I am, "we will replace all office work with AIs" – widely thought to be OpenAI's business model – is an absurd plan to spin as a jobs program. And yet, a $500 billion investment in eliminating countless jobs probably wouldn't have gotten President Donald Trump's blessing, as Stargate has.

DeepSeek may have figured out reinforcement learning from AI feedback

The other big story this week was DeepSeek R1, a new release from the Chinese AI startup DeepSeek, which the company advertises as a rival to OpenAI's o1. What makes R1 such a big deal is less obvious, and more technical.

To teach AI systems to give good answers, we rate the answers they give us and train them to home in on the ones we rate highly. This is "reinforcement learning from human feedback" (RLHF), and it has been the main approach to training modern LLMs since an OpenAI team got it working. (The process is described in this 2019 paper.)
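To make that concrete, here is a minimal sketch of the preference-learning step at the heart of RLHF, assuming toy random tensors in place of real model representations; the names here (RewardModel, preferred, rejected) are mine, invented for illustration, not taken from the paper.

```python
# A minimal, illustrative sketch of RLHF's reward-model step, with toy
# tensors standing in for real model embeddings of answers.
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    """Maps a response representation to a scalar 'how good is this?' score."""
    def __init__(self, dim: int = 64):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.score(x).squeeze(-1)

model = RewardModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Pretend these are embeddings of two answers to the same prompts,
# where human labelers preferred the first batch over the second.
preferred = torch.randn(32, 64)
rejected = torch.randn(32, 64)

for _ in range(100):
    # Pairwise loss: push the preferred answer's score above the rejected one's.
    loss = -torch.nn.functional.logsigmoid(
        model(preferred) - model(rejected)
    ).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# The trained reward model then grades new answers, and the LLM is
# fine-tuned (for example, with PPO) to produce answers it scores highly.
```

One reason for the pairwise setup: human labelers find "which of these two answers is better?" much easier to answer consistently than "score this answer from 1 to 10."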

But RLHF is not how we got the wildly superhuman AI game-playing program AlphaZero. That was trained with a different strategy, one based on self-play: the AI was able to invent new puzzles for itself, solve them, learn from the solutions, and improve from there.

This strategy is particularly useful for teaching a model how to do quickly anything it can already do expensively and slowly. AlphaZero could slowly and laboriously consider lots of different policies, figure out which one was best, and then learn from the best solution. It is this kind of self-play that made it possible for AlphaZero to vastly improve on previous game engines.
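To illustrate the pattern (and only the pattern; AlphaZero's real training uses Monte Carlo tree search and deep neural networks), here is a toy sketch in which an expensive exhaustive search finds the best move and a cheap lookup "policy" learns to reproduce it. The game and every name in it are invented.

```python
# A toy illustration of the search-then-imitate pattern, not AlphaZero itself.
# An expensive exhaustive search finds the best move; a cheap lookup "policy"
# memorizes what the search discovered.

def evaluate(state: int, move: int) -> float:
    # Stand-in for a costly rollout: pretend the best move is the one
    # closest in value to the current state.
    return -abs(state - move)

def slow_search(state: int) -> int:
    """Expensive step: consider every candidate move and keep the best."""
    return max(range(10), key=lambda move: evaluate(state, move))

# "Training": the fast policy learns from the slow search's answers,
# so at play time it responds instantly instead of searching.
fast_policy = {state: slow_search(state) for state in range(10)}

print(fast_policy[3])  # -> 3, produced without any search
```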

So, of course, labs have been trying to figure out something similar for large language models. The basic idea is simple: you let a model think for a long time, potentially using lots of expensive computation. Then you train it on the answer it eventually found, trying to produce a model that can get the same result more cheaply.
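Here is a minimal sketch of that loop, with the same caveats as above: the noisy "attempts," the arithmetic verifier, and the memorization standing in for fine-tuning are all illustrative inventions of mine, not any lab's actual method.

```python
# A minimal sketch of the expensive-thinking-then-cheap-training loop:
# sample many slow, error-prone attempts, keep the ones a verifier accepts,
# and "train" the model to reproduce those answers directly.
import random

def expensive_attempts(a: int, b: int, samples: int = 50) -> list[int]:
    """Slow mode: many compute-heavy, noisy tries at computing a + b."""
    return [a + b + random.choice([-1, 0, 0, 0, 1]) for _ in range(samples)]

def verified(a: int, b: int, answer: int) -> bool:
    """A checkable task (here, arithmetic) lets us keep only correct attempts."""
    return answer == a + b

# "Fine-tuning": collect verified (problem, answer) pairs so the model can
# reach the result in one cheap step instead of 50 expensive attempts.
learned = {}
for a, b in [(2, 3), (10, 7), (4, 4)]:
    good = [ans for ans in expensive_attempts(a, b) if verified(a, b, ans)]
    if good:
        learned[(a, b)] = good[0]

print(learned.get((10, 7)))  # fast answer, distilled from slow, verified work
```

The load-bearing ingredient is the verifier: this approach works best on tasks like math and code, where correctness can be checked automatically rather than judged by a human.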

But until now, "big labs did not appear to be having much success with this sort of self-improving RL," machine learning engineer Peter Schmidt-Nielsen wrote in an explanation of DeepSeek R1's technical significance. What has engineers so impressed (and so alarmed) is that the team appears to have made significant progress using that technique.

If it works, this would mean that AI systems could be taught to do quickly and cheaply anything they know how to do slowly and expensively, which would produce some of the rapid and shocking improvements in capability the world witnessed with AlphaZero, only in areas of the economy far more important than playing games.

One other notable fact here: these advances are coming from a Chinese AI company. Given that US AI companies are not shy about invoking the threat of Chinese AI dominance to advance their own interests, and given that there genuinely is a geopolitical race around this technology, that says a lot about how fast China may be moving.

I know that many people are sick of hearing about AI. They're sick of AI slop in their newsfeeds and of AI products that are worse than humans but dirt cheap, and they aren't exactly rooting for OpenAI (or anyone else) to become the world's first trillionaires by automating entire industries.

But I think that in 2025, AI really is going to matter – not because of whether these powerful systems get developed, which at this point looks well underway, but because of whether society is ready to stand up and insist that it's done responsibly.

When AI systems start acting independently and committing serious crimes (all of the major labs are working on "agents" that can act independently right now), will we hold their creators accountable? If OpenAI makes a laughably low offer to its nonprofit entity in its transition to fully for-profit status, will the government step in to enforce nonprofit law?

A lot of these decisions will be made in 2025, and the stakes are high. If AI makes you anxious, that's all the more reason to demand action, not a reason to tune out.

A version of this story was originally published in the Future Perfect Newsletter. Sign up here!
