Vitalik Buterin, co-founder of Ethereum, recently put forward a bold proposal to slow the progress of superintelligent AI in order to safeguard humanity. He suggests temporarily restricting global access to powerful AI hardware for one to two years. The purpose of this pause would be to slow AI progress and give the world more time to prepare for the risks that come with highly advanced AI systems.
Superintelligent AI refers to artificial intelligence that would far surpass human intelligence in virtually every domain. AI has already advanced rapidly, but the prospect of systems far more capable than any human worries many experts. Such AI could pose severe risks and challenges unless it is developed and used with extreme care.
Buterin argues that superintelligent AI may be as little as five years away, and that its impact is far from guaranteed to be positive. To reduce the risk of harm, he supports a "soft pause" on AI hardware that would limit the computing power available for AI development. This would, in effect, temporarily slow the growth of AI and give the world more time to design safety measures and regulations.
To enforce such a pause, Buterin suggests that AI hardware be registered and monitored. He proposes equipping AI chips with a mechanism that lets them keep operating only with ongoing authorization from international bodies; if that permission were not granted, the chips would stop functioning. This would make the pause enforceable globally and difficult to evade.
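To make the authorization-gating idea concrete, here is a minimal, purely illustrative sketch, not Buterin's actual design: a chip refuses to run unless it holds a fresh, validly signed token. All names, the weekly renewal window, and the use of a shared-secret HMAC (standing in for the public-key signatures a real scheme would require) are assumptions for illustration.

```python
import hmac
import hashlib
import time

# Hypothetical parameters: authorization must be renewed weekly.
AUTH_WINDOW_SECONDS = 7 * 24 * 3600

def sign_authorization(secret_key: bytes, device_id: str, issued_at: int) -> str:
    """Simulates the signed token an authorizing body would issue for a device.
    A real design would use public-key signatures, not a shared secret."""
    message = f"{device_id}:{issued_at}".encode()
    return hmac.new(secret_key, message, hashlib.sha256).hexdigest()

def chip_may_run(secret_key: bytes, device_id: str, issued_at: int,
                 signature: str, now: int) -> bool:
    """The chip-side gate: the signature must verify AND the token must be fresh."""
    expected = sign_authorization(secret_key, device_id, issued_at)
    is_fresh = (now - issued_at) <= AUTH_WINDOW_SECONDS
    return hmac.compare_digest(expected, signature) and is_fresh

# Usage: issue a token for a device, then check it at two points in time.
key = b"demo-key-for-illustration-only"
issued = int(time.time())
token = sign_authorization(key, "gpu-cluster-001", issued)
print(chip_may_run(key, "gpu-cluster-001", issued, token, int(time.time())))  # True
print(chip_may_run(key, "gpu-cluster-001", issued, token,
                   issued + AUTH_WINDOW_SECONDS + 1))  # False: token expired
```

The key design point this sketch captures is that enforcement sits on the hardware side: the chip denies itself operation when authorization lapses, rather than relying on operators to comply voluntarily.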
In a nutshell, Buterin's proposal underscores how carefully AI development needs to be handled. Although temporarily restricting hardware may seem drastic, it reflects growing concern about the risks of superintelligent AI. His call for caution encourages a thoughtful approach, one in which safety and innovation can work together for everyone's benefit.