
    ChatGPT can help debunk myths on vaccine safety on social media: Study



LONDON: OpenAI’s ChatGPT may help to increase vaccine uptake by debunking myths about jab safety circulating on social media, according to a study.

Researchers from the Instituto de Investigacion Sanitaria (IDIS)-Hospital Clinico Universitario of Santiago de Compostela in Spain asked the artificial intelligence (AI) chatbot the 50 most frequently asked Covid-19 vaccine questions. These included queries based on myths and fake stories, such as the claim that the vaccine causes long Covid.

Results published in the journal Human Vaccines and Immunotherapeutics show that ChatGPT scored 9 out of 10 on average for accuracy; where it fell short of full marks, its answers were still correct but left some gaps in the information provided.

    Based on these findings, the researchers said that the AI tool is a "reliable source of non-technical information to the public," especially for people without specialist scientific knowledge.

However, the findings also highlight some concerns about the technology, such as ChatGPT changing its answers when the same question is asked repeatedly.

    "Overall, ChatGPT constructs a narrative in line with the available scientific evidence, debunking myths circulating on social media," said lead author Antonio Salas, Professor at the Faculty of Medicine at the University of Santiago de Compostela.

    "Thereby it potentially facilitates an increase in vaccine uptake. ChatGPT can detect counterfeit questions related to vaccines and vaccination. The language this AI uses is not too technical and therefore easily understandable to the public but without losing scientific rigour.

    "We acknowledge that the present-day version of ChatGPT cannot substitute an expert or scientific evidence. But the results suggest it could be a reliable source of information to the public," Salas said.

    In 2019, the World Health Organization (WHO) listed vaccine hesitancy among the top 10 threats to global health. During the recent Covid-19 pandemic, misinformation spread via social media contributed to public mistrust of vaccination.

The study set out to test ChatGPT's ability to get the facts right and provide accurate information on Covid vaccine safety, in line with current scientific evidence.

ChatGPT provided correct answers both to queries rooted in genuine vaccine myths and to questions about what clinical recommendation guidelines classify as true or false contraindications to vaccination.

    However, the research team does highlight ChatGPT's downsides in providing vaccine information.

Professor Salas, who specialises in human genetics, concludes: "ChatGPT provides different answers if the question is repeated with a few seconds of delay."

    "Another concern we have seen is that this AI tool, in its present version, could also be trained to provide answers not in line with scientific evidence.

    "One can 'torture' the system in such a way that it will provide the desired answer. This is also true for other contexts different to vaccines. For instance, it might be possible to make the chatbot align with absurd narratives like the flat-earth theory, deny climate change, or object to the theory of evolution, just to give a few examples.

    "However, it's important to note that these responses are not the default behaviour of ChatGPT. Thus, the results we have obtained regarding vaccine safety can be probably extrapolated to many other myths and pseudoscience."

    IANS