    MS working on AI model that takes images as cues

    The multi-modal large language model (MLLM) can help in an array of new tasks, including image captioning, visual question answering and more.

    NEW DELHI: As the war over artificial intelligence (AI) chatbots intensifies, Microsoft has unveiled Kosmos-1, a new AI model that can respond to images and other visual cues in addition to text prompts.

    Kosmos-1 can pave the way for the next stage beyond ChatGPT’s text prompts. “A big convergence of language, multimodal perception, action, and world modeling is a key step toward artificial general intelligence. In this work, we introduce Kosmos-1, a Multimodal Large Language Model (MLLM) that can perceive general modalities, learn in context and follow instructions,” said Microsoft’s AI researchers in a paper.

    The paper suggests multi-modal perception, or knowledge acquisition and “grounding” in the real world, is needed to move beyond ChatGPT-like capabilities to artificial general intelligence (AGI), reports ZDNet.

    “More importantly, unlocking multimodal input greatly widens the applications of language models to more high-value areas, such as multimodal machine learning, document intelligence, and robotics,” the paper read.

    The goal is to align perception with LLMs so that the models can both see and talk.
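
    To make that idea concrete, here is a minimal, hypothetical sketch; it is not Kosmos-1’s actual architecture or code, and every class name and dimension below is invented. It illustrates the general technique the paper describes: image features are projected into the same embedding space as text tokens, so a single Transformer can attend over one interleaved image-and-text sequence.

        # Toy sketch (assumed, not Kosmos-1's real code): a multimodal LM that
        # "sees and talks" by mapping image features into the text-token
        # embedding space, then running one model over the combined sequence.
        import torch
        import torch.nn as nn

        class ToyMultimodalLM(nn.Module):
            def __init__(self, vocab_size=1000, d_model=64, img_feat_dim=128):
                super().__init__()
                self.tok_embed = nn.Embedding(vocab_size, d_model)  # text tokens -> vectors
                self.img_proj = nn.Linear(img_feat_dim, d_model)    # image features -> same space
                layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
                self.backbone = nn.TransformerEncoder(layer, num_layers=2)
                self.lm_head = nn.Linear(d_model, vocab_size)       # predict next token

            def forward(self, text_ids, img_feats):
                txt = self.tok_embed(text_ids)       # (batch, text_len, d_model)
                img = self.img_proj(img_feats)       # (batch, img_len, d_model)
                seq = torch.cat([img, txt], dim=1)   # one interleaved multimodal sequence
                return self.lm_head(self.backbone(seq))

        # Usage: 3 "image patch" feature vectors followed by a 5-token text prompt.
        model = ToyMultimodalLM()
        logits = model(torch.randint(0, 1000, (1, 5)), torch.randn(1, 3, 128))
        print(logits.shape)  # torch.Size([1, 8, 1000])

    A real MLLM would pair a pretrained vision encoder with a causal decoder; the point of the sketch is only that, once projected, image features are just more tokens for the language model to attend over.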

    Experimental results showed that Kosmos-1 achieves impressive performance on language understanding and generation, including OCR-free settings where the model is fed document images directly.

    It also showed good results in perception-language tasks, including multimodal dialogue, image captioning and visual question answering, as well as vision tasks such as image recognition with descriptions, where the classification is specified via text instructions.

    “We also show MLLMs can benefit from cross-modal transfer, i.e., transfer knowledge from language to multimodal, and from multimodal to language. In addition, we introduce a dataset of Raven IQ test, which diagnoses the nonverbal reasoning capability of MLLMs,” said the team.

    DTNEXT Bureau