Ex-OpenAI chief scientist Ilya Sutskever launches SSI to focus on AI safety
Co-founder and former chief scientist of OpenAI, Ilya Sutskever, and former OpenAI engineer Daniel Levy have joined forces with Daniel Gross, an investor and former partner in startup accelerator Y Combinator, to create Safe Superintelligence, Inc. (SSI). The new company’s goal and product are evident from its name.
SSI is a United States company with offices in Palo Alto and Tel Aviv. It will advance artificial intelligence (AI) by developing safety and capabilities in tandem, the trio of founders said in an online announcement on June 19. They added:
“Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures.”
Sutskever and Gross were already worried about AI safety
Sutskever left OpenAI on May 14. He was involved in the firing of CEO Sam Altman and, after Altman returned and Sutskever stepped down from the board, played an ambiguous role at the company. Daniel Levy was among the researchers who left OpenAI a few days after Sutskever.
Sutskever and Jan Leike were the leaders of OpenAI’s Superalignment team created in July 2023 to consider how to “steer and control AI systems much smarter than us.” Such systems are referred to as artificial general intelligence (AGI). OpenAI allotted 20% of its computing power to the Superalignment team at the time of its creation.
Leike also left OpenAI in May and is now the head of a team at Amazon-backed AI startup Anthropic. OpenAI defended its safety-related precautions in a long X post by company president Greg Brockman but dissolved the Superalignment team after the May departure of its researchers.
Other top tech figures worry too
The former OpenAI researchers are among many scientists concerned about the future direction of AI. Ethereum co-founder Vitalik Buterin called AGI “risky” in the midst of the staff turnover at OpenAI. He added, however, that “such models are also much lower in terms of doom risk than both corporate megalomania and militaries.”
Tesla CEO Elon Musk, once an OpenAI supporter, and Apple co-founder Steve Wozniak were among more than 2,600 tech leaders and researchers who urged that the training of AI systems be paused for six months while humanity pondered the “profound risk” they represented.
The SSI announcement noted that the company is hiring engineers and researchers.