Ilya Sutskever, former OpenAI chief scientist, has launched Safe Superintelligence (SSI), a new company with a single focus. Co-founded with Daniel Gross and Daniel Levy, SSI aims to develop safe superintelligence insulated from commercial pressures. Sutskever previously co-led OpenAI’s Superalignment team and took part in the brief ousting of OpenAI CEO Sam Altman, a move he later said he regretted.