Editor’s Note: The following is a short letter from Ray Kurzweil, a director of engineering at Google and cofounder and board member of Singularity Group, Singularity Hub’s parent company, in response to the Future of Life Institute’s recent letter, “Pause Giant AI Experiments: An Open Letter.”
The FLI letter addresses the risks of accelerating progress in AI and the ensuing race to commercialize the technology, and it calls for a pause in the development of algorithms more powerful than OpenAI’s GPT-4, the large language model behind the company’s ChatGPT Plus and Microsoft’s Bing chatbot. The FLI letter has thousands of signatories, including deep learning pioneer Yoshua Bengio, University of California Berkeley professor of computer science Stuart Russell, Stability AI CEO Emad Mostaque, Elon Musk, and many others, and it has stirred vigorous debate in the AI community.
Regarding the open letter to “pause” research on AI “more powerful than GPT-4,” this criterion is too vague to be practical. And the proposal faces a serious coordination problem: those who agree to a pause may fall far behind corporations or nations that do not. There are enormous benefits to advancing AI in critical fields such as medicine and health, education, the pursuit of renewable energy sources to replace fossil fuels, and scores of other areas. I didn’t sign, because I believe we can address the signers’ safety concerns in a more tailored way that doesn’t compromise these vital lines of research.
I participated in the Asilomar AI Principles Conference in 2017 and was actively involved in the creation of guidelines for developing artificial intelligence ethically. So I know that safety is a critical issue. But more nuance is needed if we wish to unlock AI’s profound benefits to health and productivity while avoiding the real perils.
— Ray Kurzweil
Inventor, best-selling author, and futurist