The hype surrounding applications based on artificial intelligence (AI) continues unabated. New “intelligent” tools with fascinating, innovative functions appear almost daily. At the same time, however, fear is growing of systems that act autonomously, beyond human control. Experts around the world are alarmed and are debating how to regulate intelligent systems. The developers of ChatGPT now openly admit that the technical prerequisites to keep an artificial superintelligence in check do not yet exist. OpenAI warns that superintelligent AI systems smarter than humans could emerge before the end of this decade.
Will AI lead to the extinction of humanity?
OpenAI got the ball rolling last year with the release of ChatGPT. Since then, numerous services based on artificial intelligence have come onto the market in the end-user segment alone. The technology is progressing rapidly, and that progress can hardly be stopped. But now Ilya Sutskever, co-founder of OpenAI, and Jan Leike, head of OpenAI’s alignment team, fear that “the vast power of superintelligence could lead to the disempowerment of humanity or even the extinction of humanity.” In a post on the company blog, they write: “Currently, we have no way to steer or control a potentially superintelligent AI and prevent it from going its own way.”
OpenAI is looking for ML experts
OpenAI has announced plans to devote 20 percent of its computing power over the next four years to solving the problem. The company is assembling a new research group, the Superalignment team, which it hopes will achieve “scientific and technical breakthroughs.” Experts in machine learning (ML) can apply: the ChatGPT developer is still looking for ML researchers and engineers for its ambitious project.