As I understand it the idea goes like this:
- Somehow, Riya identifies your face from a photo (my guess is one which is tagged with your name and only has one face in it).
- Riya then uses facial recognition technology to learn your face.
- Riya continues to spider the internet for photos, applying the ‘fingerprint’ it has built up for each face to see whether it can identify the people in each photo.
In addition to spidering websites for photos, I’m sure it will also search photo repositories such as Flickr.
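For illustration, the pipeline I'm imagining could be sketched roughly like this. Everything here is my own guess at how it works: the fingerprints are toy feature vectors, and `learn_fingerprint`, `identify` and the threshold are hypothetical stand-ins for whatever Riya actually does internally.

```python
# Toy sketch of the guessed Riya pipeline: learn a face 'fingerprint'
# from tagged photos, then try to match faces found while spidering.
# Real systems use facial-embedding models; these vectors are made up.
from math import dist

MATCH_THRESHOLD = 0.5  # assumed similarity cut-off


def learn_fingerprint(tagged_faces):
    """Average the face vectors from photos already tagged with a name."""
    n = len(tagged_faces)
    return tuple(sum(v[i] for v in tagged_faces) / n
                 for i in range(len(tagged_faces[0])))


def identify(face, known_people):
    """Return the nearest known name, if the match is close enough."""
    name, fp = min(known_people.items(), key=lambda kv: dist(face, kv[1]))
    return name if dist(face, fp) <= MATCH_THRESHOLD else None


# Step 1-2: build a fingerprint from photos tagged "Ben".
known = {"Ben": learn_fingerprint([(0.9, 0.1), (1.0, 0.2), (0.95, 0.15)])}

# Step 3: spider untagged photos and attribute faces where possible.
for face in [(0.92, 0.18), (0.1, 0.9)]:
    print(identify(face, known))  # prints: Ben, then None
```

The point of the sketch is just the third step: once the fingerprint exists, any untagged face found anywhere on the web becomes a candidate match.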
But I have some serious privacy concerns about this product.
If someone puts a photograph of me on the Internet then I’ve lost an element of privacy – and there’s not much I can do about that. However, if my name cannot be attributed to that photo, perhaps because they haven’t tagged my name with it, then I’m pretty ‘safe’. There’s little to connect me to that photo.
Riya removes that element of anonymity within photos – and the most concerning examples of this are where you might be in the background, where you are not the primary focus of the photo.
Specifically, my concern goes like this:
- Riya identifies “Ben Metcalfe” by running facial recognition technology across tagged photos of me. Once three photos produce a matching facial fingerprint, it confirms my identity in its database.
- Riya continues to spider the rest of Flickr (and, indeed the Internet as a whole), picking up photos of me that would not have otherwise been attributed to me, due to a lack of tagging/metadata.
- For argument’s sake, let’s say I decide to visit this weekend’s London Erotica 2005 event or attend last Monday’s Google Open House London ‘semi-recruitment’ event – both private pursuits a person wouldn’t want, say, their employer to be aware of.
- At both events someone takes a photograph in which I’m in the background, maybe dressed in my PVC gimp outfit (remember, this is an example, yeah?). Those photos are uploaded to Flickr and subsequently spidered by Riya.
- Someone running an arbitrary search for “Ben Metcalfe” is presented with these potentially damaging photos of me.
Many might argue that this is all “fair game”, but my point would be that this kind of technology was originally developed for use in regulated environments – such as local authority CCTV scanning.
Photos are funny things: the person who took the photo, not the person(s) in it, holds all the rights. This steps up the concerns to a whole new level – where abuse of the service can take place, and people’s privacy and anonymity are seriously eroded.
There are also some potential issues here. Matt Locke made the point that it would be difficult for the software to distinguish him from his identical twin brother – at best limiting the use of the software in this instance, and at worst falsely attributing him to incriminating photos (who knows what kind of debauchery Matt’s brother gets up to???!?).
The other issue concerns an opt-out feature (which I would like to see offered). The issue is simply that I could imagine many people opting out and thereby limiting the value of the proposition.
And don’t get me started on the potential consequences of the rumoured pre-launch buy-out by Google (the company which seems to be upping its potential to obliterate its original mantra of “do no harm”).
Who knows what will happen to Riya. I’ve signed up for the beta (although I am yet to receive an invitation). I understand it is due to launch into V1.0 very soon, so that will probably be the first opportunity to see whether my concerns are valid. I fear they might be…