Musk believes AI threatens humanity, saying there is only a 5-10% chance of using it safely.

Speaking on AI, Musk said that we have only a 5-10% chance of using artificial intelligence safely, that AI will threaten humanity in the future, that many dangers and problems remain, and that very few companies really control the AI industry.

Although Elon Musk is committed to advancing artificial intelligence (AI), he also believes that AI is very likely to pose a threat to humanity in the future. In an exclusive interview with Rolling Stone, the tech celebrity claimed that we have only a 5-10% chance of success in using AI safely.

The outlook is not very good

Elon Musk has thought a great deal about the harsh reality that humans may be unable to control AI. These thoughts convinced him that if we are to survive, we must merge with the machine, and to that end he even founded a startup devoted to brain-computer interface (BCI) technology. Yet although his own laboratory, OpenAI, has already implemented self-learning AI, Musk says that the safe use of AI has only a "5-10% chance of success."

Recent reporting from Rolling Stone indicates that Musk mentioned this slim chance to employees of the aforementioned BCI startup, Neuralink. Although Musk is actively involved in the development of artificial intelligence, he publicly acknowledges that the technology brings not only potential but also serious problems.

The challenge of using artificial intelligence safely is twofold.

First, one of the main goals of artificial intelligence (and one that OpenAI is already pursuing) is to build AI that is smarter than humans and that can learn on its own without any human programming or intervention. How to achieve this remains unknown.

Second, machines have no morality, remorse, or emotion. Future artificial intelligence may be able to distinguish "good" from "bad" behavior, but it obviously cannot feel as humans do.

In the article "Rolling Stones", Musk further elaborated on the current dangers and problems in the field of artificial intelligence, one of which is that only a few companies are controlling the AI ​​industry. He cited Google DeepMind as an example to illustrate.

"Between Facebook, Google and Amazon, oh, and Apple, they seem to be very concerned about privacy - they know you more than yourself," Musk said. “The concentration of power can lead to many risks. So if general artificial intelligence (AGI) represents the extreme level of power, should it be controlled by a few people in Google without supervision?”

Is it worth the risk?

Experts hold differing opinions on Musk's claim that AI is unsafe. Facebook founder Mark Zuckerberg has said he is optimistic about the future of artificial intelligence and called Musk's warnings "pretty irresponsible", while Stephen Hawking has publicly stated that AI systems pose enough of a risk to humanity that they could replace us entirely.

Sergey Nikolenko, a Russian computer scientist specializing in machine learning and network algorithms, recently shared his views on the matter. "I feel that we still lack the necessary fundamental understanding and methods to create strong AI, and that makes these and related questions hard to resolve," Nikolenko said.

As for today's artificial intelligence, he thinks there is nothing to worry about. "I can bet that modern neural networks won't suddenly wake up and decide to overthrow their human overlords," said Nikolenko.

Musk himself might agree with this view, but his comments seem to focus more on how the artificial intelligence of the future will be built on today's foundations.

We already have AI systems that can build other AI systems, some that communicate in languages of their own, and some that are inherently curious. Although the technological singularity and the rise of the robots remain strictly science fiction, advances in these areas of artificial intelligence offer a glimpse of what the future might really look like.

But these concerns have not been enough to stop us from moving forward. AI can also diagnose cancer, identify suicidal behavior, and help stop sex trafficking.

Artificial intelligence has the potential to save and improve lives on a global scale. We must therefore consider how to make it safe through future oversight, and Musk's warning is only one voice in that conversation.

Even he himself told Rolling Stone: "I don't have the answers to all the questions. I need to figure this out clearly. I am trying to work out the set of actions I can take to create a better future. If you have any suggestions in this regard, let me know."
