Wednesday, 11 March 2020

Members of the Black Community Critical of Police Use of AI Facial Technology


By Neil Armstrong

Photo contributed: Kingsley Gilliam of the Black Action Defense Committee (BADC)


Members of the Black community are critical of the use of Clearview AI technology by the Toronto Police Service and they welcome Chief Mark Saunders' order to halt the practice.

Clearview AI is a powerful and controversial facial recognition tool that matches faces against a database of billions of images scraped from the Internet.
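Systems of this kind typically reduce each scraped photo to a numerical "embedding" and then compare a probe image against the whole gallery by similarity. The sketch below is a conceptual illustration only, with a stand-in embedding function; it is not Clearview AI's actual code or model.

```python
# Conceptual sketch of embedding-based face matching (not Clearview AI's code).
# embed() is a stand-in: real systems use a trained neural network.
import numpy as np

def embed(image_id: str, dim: int = 128) -> np.ndarray:
    """Stand-in face-embedding model: deterministic unit-length vector per image."""
    rng = np.random.default_rng(abs(hash(image_id)) % (2**32))
    v = rng.normal(size=dim)
    return v / np.linalg.norm(v)

# The scraped "gallery": image identifiers mapped to embeddings.
gallery = {name: embed(name) for name in ("photo_a", "photo_b", "photo_c")}

def best_match(probe_id: str):
    """Return the gallery image whose embedding is most similar to the probe."""
    probe = embed(probe_id)
    scores = {name: float(probe @ vec) for name, vec in gallery.items()}
    return max(scores.items(), key=lambda kv: kv[1])

print(best_match("query_frame"))  # e.g. ('photo_c', 0.17)
```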

Kingsley Gilliam of the Black Action Defense Committee (BADC) says the over-surveillance of Black people has resulted in a significant overrepresentation in the court system, provincial jails and federal penitentiaries.

“The use of the AI technology would significantly increase this overrepresentation and increase the number of wrongfully convicted members of our community.”

BADC was founded in 1988 by a group of activists, including Dudley Laws, Sherona Hall, Charles Roach, Lennox Farrell and others, in response to a series of police shootings of Black men in Toronto.

The organization is urging the Toronto Police Services Board (TPSB) to reject any recommendation to authorize the use of this technology.

“This is the thin edge of the wedge in taking away personal privacy that citizens have fought for in a free and democratic society.”

Acknowledging that police use of facial recognition technology is on the rise, BADC said the practice is fraught with “serious moral, ethical, legal and privacy concerns that are guaranteed in a free and democratic society.”

According to Knia Singh, a barrister and solicitor, “The privacy concerns of the public are at risk with technology like this. More importantly, computers may not catch the nuances in facial recognition that humans do, therefore there is a possibility there will be errors due to computer algorithms and identification methods which may lead to convicting an innocent person who looks like the perpetrator.”
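A rough back-of-the-envelope calculation, using assumed figures rather than any published Clearview statistics, illustrates why even a small algorithmic error rate matters at this scale:

```python
# Illustrative arithmetic only: the figures below are assumptions, not Clearview data.
false_match_rate = 1e-6        # assumed chance a stranger's photo scores as a "match"
gallery_size = 3_000_000_000   # "billions of images", per the article

# Expected number of innocent look-alikes flagged for a single probe face.
print(false_match_rate * gallery_size)  # 3000.0
```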

Photo contributed: Lawyer and community advocate Knia Singh


Constable Victor Paul Kwong said some members of the service began using Clearview AI in October 2019 with the intent of informally testing this new and evolving technology. 

“The Chief directed that its use be halted immediately upon his awareness, and the order to cease using the product was given on February 5, 2020. We have requested that the Information and Privacy Commissioner and the Crown Attorneys’ Office work with us to review the technology and its appropriateness as an investigative tool for our purposes, given that it is also used by other law enforcement agencies in North America. Until a fulsome review of the product is completed, it will not be used by the Toronto Police Service,” he said.

In addition to this, the Toronto Police Service is undertaking a full review of its use of Clearview AI and is consulting with the Information and Privacy Commissioner’s Office and the Crown Attorneys’ Office to consider all aspects of this technology and its application to police investigations. 

The constable said Clearview AI was not being used at the time of the May 2019 Toronto Police Services Board discussion on facial recognition. 

“That work continues and the Service has not changed its position since that time. At no time was the Clearview AI technology used for livestreaming or real-time information gathering, and there were no costs associated with its use. Our current review includes a comprehensive analysis of each time the technology was accessed by an investigator. We appreciate the public interest in the matter, but this work will take some time. We will provide updates in the days and weeks ahead,” Kwong said.

Meanwhile, the TPSB says members of the public will be invited to make deputations on the use of Clearview AI technology at an upcoming meeting, to assist the board in its consideration of the matter.

At its meeting on February 25, Board Chair Jim Hart said the use of Clearview AI’s facial recognition service by some Toronto Police Service members has raised serious questions about the technology.

“Members of the community have a legitimate interest in this topic. The board appreciates the dialogue that has been generated, and understands the importance of closely examining and then discussing the various issues it raises, including the internal process for reviewing and approving new technology that the service wishes to use,” he said.

He said the use of this technology has generated many questions, and it is important that these be carefully considered and explored.

In the meantime, Brian Beamish, Information and Privacy Commissioner of Ontario, says the indiscriminate scraping of the Internet to collect images of people’s faces for law enforcement purposes has significant privacy implications for all Ontarians.

“We have made it clear in the past that my office should be consulted before this type of technology is used. We were not aware that the Toronto Police Service was using Clearview AI technology until contacted by them on February 5. We are relieved that its use has been halted,” he said.

[This story has been published in the North American Weekly Gleaner, March 5-11, 2020.]

1 comment:

  1. This technology is dangerous, unethical and immoral. It should not be used by law enforcement, nor by the state or state-authorized personnel. Just because it is possible does not mean it should be done. This crosses the threshold into Gestapo-type tactics.
