When Algorithms Learn to Recognise You
If the use of biometrics like fingerprints and iris scans for identity raises concerns about personal privacy, facial recognition poses a far greater threat. Unlike other biometrics, faces are easily captured, and the lack of regulation and rapid advancement of technology expose individuals to the risk of active surveillance.
This article was first published in The Mint. You can read the original at this link.
One of the concerns around the use of biometrics for identity is the fear that, since they represent physical features unique to each of us, their misuse is a threat to personal privacy. As a result, we are instinctively reluctant to entrust something so personal to the government, for fear that if this information is compromised due to a failure of state capacity, it will cause us irremediable harm. If biometrics are going to be the key to our digital identities, there is an apprehension that criminal elements who figure out how to get their hands on them will be able to steal our identities.
At present, fingerprints and iris scans are the government’s biometrics of choice. They are, both individually and in combination, capable of uniquely identifying us from the rest of humanity. While there have been claims that all it takes to capture your fingerprints is an application of Fevicol to your fingers, it is hard to conceive of a situation in which someone could actually apply that glue to your hands without your knowledge. These biometrics therefore have an inherent, non-trivial barrier to collection that is an effective defence against ambient capture.
Even though the government may have chosen to limit itself to fingerprints and iris scans, these are by no means the only biometric markers being used to identify you. At least one other biometric currently in use is more pervasive and infinitely more worrying.
Last week Google announced the launch of Google Lens, a product that integrates the smartphone camera with Google’s core search functionality—allowing the company’s proven expertise in text and web search to extend to photos and videos. It works with Google Assistant so that when you look at something through your camera, the artificial intelligence infers what it is you might want to know about it.
For instance, if you are at a historical landmark, all you need to do is point your phone in its general direction and Lens will cross-reference your location with its analysis of the image, drawing on its data banks to provide you with detailed historical information about what you are looking at. Thanks to this ability to dynamically parse images in a way that has so far been reserved for text, Lens allows us to interact with the physical world far more intuitively than before.
It should not come as a surprise to anyone that Google has this sort of technology at its command.
We have known for a while that all the big tech giants have been focussed on improving their image recognition technology. We have witnessed the results of these endeavours ourselves as the hundreds of images we have stored in the cloud have, more recently, begun to arrange themselves by category and miraculously compose themselves into short videos around algorithmically curated topics. Image search algorithms, too, have become increasingly accurate, at times capable of tagging images even better than humans can.
In all of this, the primary focus has been to identify the people in these images. Social media companies have long understood the advantage of accurate facial recognition: the more skilful they get at identifying friends and acquaintances within images, the better their ability to understand the web of our social connections. By uploading a decade’s worth of personal images onto sharing websites, we have provided them ample data with which to train algorithms in facial identification. We have seen how these algorithms have improved over the years in the many connections that our social networks have generated for us.
In the context of personal privacy, all of this is cause for considerable concern. Unlike iris and fingerprint data, your face is always visible and easily captured, often without your knowing it. Given the ever-expanding network of cameras and mobile devices that surround us, facial recognition exposes us to a significant risk of active surveillance. If Google Lens is all it is cracked up to be, that risk may just have doubled: we now have in play accurate mobile technology capable of providing workable, real-time facial recognition in still and moving images.
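To get a sense of how low the barrier to this kind of capture has become, consider a minimal sketch using the freely available, open-source face_recognition Python library (not what Google Lens itself uses; the file names here are purely hypothetical). A handful of lines is enough to find every face in a photograph and check whether any of them matches a single reference picture, say one lifted from a social media profile.

```python
# Illustrative sketch, assuming the open-source face_recognition library is installed.
# File names are hypothetical stand-ins for a scraped profile photo and a camera frame.
import face_recognition

# A single reference photo of the person we want to find.
known_image = face_recognition.load_image_file("profile_photo.jpg")
known_encoding = face_recognition.face_encodings(known_image)[0]

# A candidate image, e.g. a frame grabbed from a phone or public camera.
candidate_image = face_recognition.load_image_file("street_scene.jpg")

# Encode every face found in the candidate frame and compare each with the reference.
for encoding in face_recognition.face_encodings(candidate_image):
    is_match = face_recognition.compare_faces([known_encoding], encoding, tolerance=0.6)[0]
    if is_match:
        print("Reference face found in the scene.")
```

That this can be done on commodity hardware with off-the-shelf software is precisely why a camera in every pocket changes the surveillance calculus.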
Facial recognition is, at best, superficially regulated. We have next to no information on how these technologies are being developed within technology companies, but we do get a glimpse of how rapidly they are advancing when we notice that the algorithms are now capable of identifying us from our baby pictures, behind facial hair, or under costume makeup.
If we worry that the use of biometrics exposes us to persistent surveillance, I would argue that facial recognition poses a far greater threat than iris or fingerprints ever could. It is imperative that we understand the full extent of this risk and take steps to protect ourselves adequately.