Last week, Boston became the second-largest U.S. city to ban the use of facial recognition software.
“Boston should not use racially discriminatory technology that threatens the privacy and basic rights of our residents,” said the bill’s co-sponsor and Boston City Councilor At-Large Michelle Wu, according to local media. “Community trust is the foundation for public safety and public health.”
Boston’s ordinance was passed as recent U.S. protests called for police reform in the wake of George Floyd’s death while he was being restrained by a police officer. Various cities in Massachusetts and California have previously prohibited their governments from leveraging facial recognition software, citing fears of inaccuracy and of surveillance that limits civil liberties.
But as public discourse heats up over police protocol and surveillance tech, O’Melveny & Myers special counsel Scott Pink says the private sector’s usage of the tech will be impacted by shifting public opinion. In an interview with Legaltech News, he also discusses why a new proposed California law may usher in a new legislative tactic for regulating facial recognition tech.
This interview has been edited for length and clarity.
Legaltech News: What are the ramifications of Boston’s recent banning of facial recognition tech usage by the city government?
Scott Pink: I think it’s showing a trend among at least local city governments to revisit the usage of facial recognition technology in law enforcement. I think that is probably being spurred by recent events and the protests across the country regarding policing. There have been earlier bans by other governments, but I think you’ll see increasing scrutiny of these technologies to make sure they are properly used and implemented.
What do these government-usage bans mean for the private sector that develops and uses these technologies?
They don’t have a direct impact. I think it depends on how they’re being used. There’s certainly a comfort level for certain kinds of security, such as access to a cellphone or those types of things, which a consumer may have control over. But these bans might impact other usage in the private sector, [such as] how it’s used to allow access to a facility, or things that might be akin to how law enforcement uses it.
What are you advising clients that may develop or leverage facial recognition software about how the recent bans impact them?
When you start getting into the biometric area, I generally tell clients that notice and consent is advisable. [It] may not be required, but it is often advisable so consumers are completely comfortable with how their data is being used.
However, I do think California Assembly Bill 2261 is an effort, at least by California, to put some parameters around the use of the technology, to incorporate the concept of consent except in certain circumstances, and to require demonstrated accuracy.
How could California’s Assembly Bill 2261, if passed, be a game-changer for facial recognition software regulations in the U.S.?
It’s certainly similar to Illinois’ [Biometric Information Privacy Act] in that it requires consent in many circumstances, except for certain security scenarios where, under the definitions in the law, you have to have probable cause that [the person has] committed a serious crime. So the use in security may be somewhat impacted, because you have to have a more robust screening procedure.
The second thing is, it sort of puts the onus on the providers of the tech to reach a certain level of accuracy that hasn’t been part of any statute.
On the flip side, from a company’s point of view, it gives them a somewhat clearer set of guidelines on when they can use it and the appropriate circumstances, which I think is helpful both for protecting individual rights and for letting the private sector know the rules of the road. [With this law] they aren’t trying to guess on a case-by-case basis what is permitted based on a lawsuit or claims made.
Some tech companies that have recently paused or ended sales of facial recognition technology to law enforcement say there needs to be a federal law governing the technology. What do you think that type of law would look like?
I think the California assembly bill reflects, to my understanding, what some in the industry are comfortable with. It’s a regulation that allows [the use of facial recognition software] but provides some rules of the road.
If I were looking at [a national facial recognition law] on behalf of the industry, I would say it has to be a law that recognizes that the technology is permissible and provides a set of standards that can be reasonably implemented commercially, so it doesn’t become so difficult to implement that it’s basically a ban. But [it also shouldn’t be] so lax that you get confusion about what it means. I think it has to be a well-thought-out standard that prescribes when notice and consent are required, allows for appropriate exceptions for security [and other matters] when consent may not be realistic or appropriate, and provides a standard that companies can reasonably reach to ensure accuracy.