Amazon began marketing a facial recognition system, called Rekognition, to law enforcement agencies as a means of identifying suspects shortly after the tool was introduced in 2016. The system — which analyzes images and video, and compares them with databases of photographs to pick out individuals — has been used by the Police Department in Orlando, Fla., and the Sheriff’s Department in Washington County, Ore.
But recently, Amazon came under criticism from the American Civil Liberties Union and a group of more than two dozen civil rights organizations for selling the technology to police authorities. The A.C.L.U.’s argument: The police could use such systems not just to track people committing crimes but also to identify citizens who are innocent, such as protesters.
Now, some Amazon shareholders are joining those appeals. In a letter addressed to the company’s chief executive, Jeff Bezos, a group of investors explained why they want a halt to Rekognition sales to the police:
Such government surveillance infrastructure technology may not only pose a privacy threat to customers and other stakeholders across the country, but may also raise substantial risks for our company, negatively impacting our company’s stock valuation and increasing financial risk for shareholders.
In addition to our concerns for U.S. consumers who may be put in harm’s way with law enforcement’s use of Rekognition, we are also concerned sales may be expanded to foreign governments, including authoritarian regimes.
Amazon had no immediate comment on the letter.
In a blog post published shortly after the initial call by the A.C.L.U. to ban the sale of Rekognition to the police, Matt Wood, general manager of artificial intelligence at Amazon Web Services, wrote:
We believe it is the wrong approach to impose a ban on promising new technologies because they might be used by bad actors for nefarious purposes in the future. The world would be a very different place if we had restricted people from buying computers because it was possible to use that computer to do harm. The same can be said of thousands of technologies upon which we all rely each day. Through responsible use, the benefits have far outweighed the risks.
Tech companies have recently come under scrutiny for their work with the government.
In April, Google employees protested the company’s work on a Pentagon project that used image recognition to improve military drone operations. They said the tech giant “should not be in the business of war.” In response, Google said it would not renew the contract for that particular piece of work, known as Project Maven, and has since created a set of principles to guide its artificial intelligence projects. The new guidelines prohibit work that could cause injury or violate human rights — but do not rule out all forms of defense work.
Microsoft has faced similar pressure over its involvement with United States Immigration and Customs Enforcement, the agency at the center of broad criticism this week over the government’s separation of families at the border. The software giant drew criticism for temporarily deleting a January blog post that described how the company was “proud to support” work with I.C.E. The agency was using Microsoft’s Azure cloud services to “utilize deep learning capabilities to accelerate facial recognition and identification.”