Google using language AI model to match stories with fact checks
Google is now leveraging BERT, one of its language AI models, in Full Coverage news stories to better match stories with fact checks and to better understand which results are most relevant to queries on Search.
Update: 2020-09-13 05:16 GMT
New Delhi
More advanced AI-based systems, such as those built on BERT's language capabilities, can understand more complex, natural-language queries.
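Google's production systems are not public, but the general technique of using a BERT-style encoder to judge how well a passage matches a natural-language query can be sketched with open-source tools. In the illustrative Python example below, the model name, pooling strategy and sample texts are assumptions, not Google's implementation:

```python
# Minimal sketch (not Google's system): score query-passage relevance with a
# BERT-style encoder by comparing mean-pooled sentence embeddings.
import torch
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "bert-base-uncased"  # illustrative choice; any BERT-style encoder works
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME)
model.eval()

def embed(texts):
    """Return one mean-pooled embedding per input text."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state      # (batch, tokens, dim)
    mask = batch["attention_mask"].unsqueeze(-1)       # ignore padding tokens
    return (hidden * mask).sum(1) / mask.sum(1)

query = "did the minister really say this about vaccines"
passages = [
    "Fact check: the quote attributed to the minister is fabricated.",
    "Weather update: light rain expected over the weekend.",
]
scores = torch.nn.functional.cosine_similarity(embed([query]), embed(passages))
print(scores)  # higher score = passage judged more relevant to the query
```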
However, when it comes to delivering high-quality, trustworthy information, even with these advanced language-understanding capabilities, Google does not understand content the way humans do.
Instead, search engines largely understand the quality of content through what are commonly called "signals."
"For example, the number of quality pages that link to a particular page is a signal that a page may be a trusted source of information on a topic," said Danny Sullivan, Public Liaison for Google Search.
Google has more than 10,000 search quality raters, people who collectively perform millions of sample searches and rate the quality of the results.
The company has also made it easy to spot fact checks in Search, News and in Google Images by displaying fact check labels.
"These labels come from publishers that use ClaimReview schema to mark up fact checks they have published," Sullivan said in a blog post.
Sullivan, however, admitted that Google's systems aren't always perfect.
"So if our systems fail to prevent policy-violating content from appearing, our enforcement team will take action in accordance with our policies," he said.
Google is also working closely with Wikipedia to detect and remove vandalism in content that may appear in its knowledge panels.
The search engine giant can now detect breaking news queries within a few minutes, compared with over 40 minutes earlier.