OpenAI Co-Founder's Safe Superintelligence Raises $1 Billion Amid AI Funding Frenzy

The company will use the funds to develop safe artificial intelligence systems that surpass human capabilities

OpenAI co-founder Ilya Sutskever's new AI start-up Safe Superintelligence (SSI) has raised $1 billion, as per a Reuters report. The company will reportedly use the funds to develop safe artificial intelligence systems that surpass human capabilities.

While the valuation at which the funding was raised has not been disclosed, sources told Reuters that the company is valued at $5 billion. SSI will also use the funds to hire top talent and acquire computing power. Additionally, the AI start-up intends to build a trusted team of engineers and researchers based in Palo Alto, California, and Tel Aviv, Israel, adds the Reuters report.

Investors in the funding round include Andreessen Horowitz, Sequoia Capital, DST Global, and SV Angel. NFDG, an investment partnership, also participated in the round.

The Founders of Safe Superintelligence

AI start-up Safe Superintelligence was founded in June this year by Ilya Sutskever, Daniel Gross, and Daniel Levy. Levy is a former OpenAI researcher and now a co-founder and the optimisation lead of Safe Superintelligence, while Gross, an entrepreneur, serves as the company's technology strategist. Gross also co-founded the AI start-up Cue, which was later acquired by Apple.

Meanwhile, Sutskever, a co-founder of OpenAI, left the company in May this year to start his own AI venture. Soon after his resignation, he wrote on X, “After almost a decade, I have made the decision to leave OpenAI. The company’s trajectory has been nothing short of miraculous, and I’m confident that OpenAI will build AGI that is both safe and beneficial.”

Following this, OpenAI CEO Sam Altman wrote on X, “Ilya and OpenAI are going to part ways. This is very sad to me; Ilya is easily one of the greatest minds of our generation, a guiding light of our field, and a dear friend.”

OpenAI had earlier faced a leadership crisis when the company's board claimed that Sam Altman lacked transparency. Media reports at the time suggested that Sutskever was focused on AI safety, while Altman and others prioritised developing new technologies. After much turmoil, Altman, who had been abruptly fired, was reinstated as CEO days later and returned to the company's board in March.

Safe Superintelligence's Singular Focus on Advancing AI

As per the co-founders, the main aim of the company is to create ‘superintelligence’, a term for AI that is smarter than humans. In a company blog post, they wrote, “Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures.”

SSI's raise comes amid a broader surge in AI start-up funding. From April to June, investment in AI start-ups reportedly rose to $24 billion, more than double the figure for the previous quarter, as per data from Crunchbase. Additionally, as per Crunchbase, “Thanks to the huge windfall seen in Q2, the first half of this year saw $35.6 billion go to AI start-ups—a 24 per cent increase from the $28.7 billion in H1 last year.”
