Microsoft won't sell facial recognition tech to police: President Brad Smith
Following IBM and Amazon, Microsoft President Brad Smith, long regarded as a torchbearer for ethical and responsible AI, said on Thursday that the company would not sell its facial recognition technology to police.
Update: 2020-06-11 19:27 GMT
San Francisco
In an interview with The Washington Post Live, Smith said Microsoft has already been taking a "principled stand" on the ethical use of this AI technology.
"If all of the responsible companies in this country cede this market to those that are not prepared to take a stand, we won't necessarily serve the national interest or the lives of the black and African people of this nation as well. We need Congress to act, not just tech companies alone. That's the only way we will guarantee that we will protect the lives of people," Smith was quoted as saying.
"As a result of the principles that we've put place, we do not sell facial recognition technology to police departments in the US today," Smith added.
Smith said the company will not sell facial recognition until there's a "national law in place grounded in human rights that will govern this technology".
On Wednesday, Amazon announced it was applying the brakes on police use of its facial recognition technology, called 'Rekognition', for one year, citing the potential for misuse of the technology by police as racial justice protests gained steam in the US following the death of African-American George Floyd.
"We're implementing a one-year moratorium on police use of Amazon's facial recognition technology. We will continue to allow organizations like Thorn, the International Center for Missing and Exploited Children, and Marinus Analytics to use Amazon Rekognition to help rescue human trafficking victims and reunite missing children with their families," Amazon said in a statement.
Technology giant IBM this week discontinued its general-purpose facial recognition and analysis software products.
IBM CEO Arvind Krishna said in a letter to the US Congress that users of Artificial Intelligence-based systems have a shared responsibility to ensure that AI is tested for bias, particularly when used in law enforcement.
Often touted as a tool that can help law enforcement agencies quickly track down criminals, facial recognition technology has courted controversy over its enormous potential for misuse and the lack of regulation governing it.
Facebook in January agreed to pay $550 million to settle a 2015 class-action privacy lawsuit over its use of facial recognition technology in the US state of Illinois.
New York-based Clearview AI recently said in a legal filing that it would not make its facial recognition app available to non-governmental customers anywhere.