OpenAI’s former chief scientist is starting a new AI company

Ilya Sutskever, OpenAI’s co-founder and former chief scientist, is starting a new AI company focused on safety. In a post on Wednesday, Sutskever revealed Safe Superintelligence Inc. (SSI), a startup with “one goal and one product”: creating a safe and powerful AI system.

The announcement describes SSI as a startup that “approaches safety and capabilities in tandem,” letting the company quickly advance its AI system while still prioritizing safety. It also calls out the external pressure AI teams at companies like OpenAI, Google, and Microsoft often face, saying the company’s “singular focus” allows it to avoid “distraction by management overhead or product cycles.”

“Our business model means safety, security, and progress are all insulated from short-term commercial pressures,” the announcement reads. “This way, we can scale in peace.” In addition to Sutskever, SSI is co-founded by Daniel Gross, a former AI lead at Apple, and Daniel Levy, who previously worked as a member of technical staff at OpenAI.

While OpenAI pushes forward with partnerships with companies like Apple and Microsoft, SSI is unlikely to follow suit anytime soon. In an interview with Bloomberg, Sutskever said SSI’s first product will be safe superintelligence and that the company “will not do anything else” until then.

https://www.theverge.com/2024/6/19/24181870/openai-former-chief-scientist-ilya-sutskever-ssi-safe-superintelligence