    OpenAI building new team to steer, control 'Superintelligent' AI systems

    The new team will be co-led by Ilya Sutskever (OpenAI's chief scientist and one of the company's co-founders) and Jan Leike (the research lab's head of alignment).

SAN FRANCISCO: Microsoft-backed OpenAI has said that it is assembling a team of top machine learning researchers and engineers to steer and control "superintelligent" artificial intelligence (AI) systems.

    The term superintelligence refers to a hypothetical AI model that excels across a wide range of skills, rather than in a single domain as some previous-generation models did.

    "Superintelligence will be the most impactful technology humanity has ever invented, and could help us solve many of the world's most important problems. But the vast power of superintelligence could also be very dangerous, and could lead to the disempowerment of humanity or even human extinction," OpenAI said in a blogpost on Wednesday.

    According to the company, such a model could arrive before the end of the decade.

    Additionally, the company said that it will dedicate 20 per cent of the compute it has secured to date, over the next four years, to solving the problem of superintelligence alignment.

    "While this is an incredibly ambitious goal and we're not guaranteed to succeed, we are optimistic that a focused, concerted effort can solve this problem. There are many ideas that have shown promise in preliminary experiments, we have increasingly useful metrics for progress, and we can use today's models to study many of these problems empirically," the researchers said.

    Moreover, the company said that the new team's work is in addition to existing work at OpenAI aimed at improving the safety of current models like ChatGPT, as well as understanding and mitigating other risks from AI such as misuse, economic disruption, disinformation, bias and discrimination, addiction and overreliance, and others.


    IANS