    AI tech like ChatGPT can be used to turbocharge fraud: FTC's Lina Khan

    SAN FRANCISCO: US Federal Trade Commission (FTC) Chair Lina Khan has warned that modern AI technologies like ChatGPT can be used to "turbocharge" fraud.

    In a Congressional hearing on protecting consumers from fraud and other deceptive practices, Khan and fellow commissioners warned House representatives of the risks posed by AI technologies.

    "AI presents a whole set of opportunities, but also presents a whole set of risks," Khan told the House representatives.

    "I think we've already seen ways in which it could be used to turbocharge fraud and scams. We've been putting market participants on notice that instances in which AI tools are effectively being designed to deceive people can place them on the hook for FTC action,a she stated.

    Khan also warned that AI's ability to turbocharge fraud should be considered a "serious concern," reports TechCrunch.

    The agency launched a new Office of Technology (OT) in February to support its law enforcement and policy work with in-house technical expertise.

    OpenAI's ChatGPT may aid scammers and create new mobile threats.

    The AI-driven ChatGPT, which gives human-like answers to questions, is also being used by cybercriminals to develop malicious tools that can steal your data.

    The first such instances of cybercriminals using ChatGPT to write malicious code were recently spotted by Check Point Research (CPR) researchers.

    In a bizarre incident, the AI chatbot ChatGPT recently and falsely included an innocent, highly respected US law professor on a list of legal scholars who had sexually harassed students, generated as part of a research study.

    Jonathan Turley, Shapiro Chair of Public Interest Law at George Washington University, was left shocked when he realised ChatGPT had named him in a research project on legal scholars who had sexually harassed students.

    "ChatGPT recently issued a false story accusing me of sexually assaulting students," Turley had posted in a tweet.

    Brian Hood, regional mayor of Hepburn Shire in Australia, also threatened to sue OpenAI if the Microsoft-backed company doesn't correct false information about him.

    ChatGPT reportedly named Hood as a convicted criminal in a real bribery scandal linked to the Reserve Bank of Australia (RBA), when in fact he was a whistleblower in the case.

    IANS