
Riya – when the “open concept” goes too far (REPOST)

For those of you who don’t know, Riya is a new “stealthy start-up” which is pitching itself as a photo search engine. Riya’s USP is that it incorporates facial recognition technology.

As I understand it the idea goes like this:

  • Somehow, Riya identifies your face from a photo (my guess is one which is tagged with your name and only has one face in it).
  • Riya then uses the facial recognition technology to learn your face.
  • Riya continues to spider the internet for photos, applying the ‘fingerprint’ it has built up for each face to see whether it can identify the people in each photo.

In addition to spidering websites for photos, I’m sure it will also search photo repositories such as Flickr.
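Riya hasn’t published how its matching actually works, but the enrol-then-identify loop described above can be sketched in a few lines. The toy numeric vectors here stand in for facial ‘fingerprints’, and the similarity measure and threshold are my assumptions, not Riya’s:

```python
import math

def cosine_similarity(a, b):
    """Similarity between two face-embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def enroll(tagged_photos):
    """Average the embeddings from photos already tagged with a name
    to build a per-person 'fingerprint'."""
    n = len(tagged_photos)
    return [sum(vals) / n for vals in zip(*tagged_photos)]

def identify(embedding, fingerprints, threshold=0.95):
    """Return the best-matching name for an untagged face, or None if
    nothing clears the threshold."""
    best_name, best_score = None, threshold
    for name, fp in fingerprints.items():
        score = cosine_similarity(embedding, fp)
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# Toy embeddings standing in for real face descriptors.
fingerprints = {"ben": enroll([[1.0, 0.1, 0.0], [0.9, 0.2, 0.1]])}
print(identify([0.95, 0.15, 0.05], fingerprints))  # → ben (a close match)
print(identify([0.0, 1.0, 0.0], fingerprints))     # → None (a stranger)
```

The privacy problem lives in that last loop: once a fingerprint exists, every untagged photo the spider finds becomes a candidate match.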

A number of people seem to be really digging Riya – including Michael Arrington of TechCrunch, who is hosting Riya’s launch party in his home!

But I have some serious privacy concerns about this product.

If someone puts a photograph of me on the Internet then I’ve lost an element of privacy – and there’s not much I can do about that. However, if my name cannot be attributed to that photo, perhaps because they haven’t tagged it with my name, then I’m pretty ‘safe’. There’s little to connect me to that photo.

Riya removes that element of anonymity within photos – and the most concerning examples of this are where you might be in the background, where you are not the primary focus of the photo.

Specifically, my concern goes like this:

  • Riya identifies “Ben Metcalfe” by running facial recognition technology across tagged photos of me. With three photos and a consistent facial match, it confirms my identity in its database.
  • Riya continues to spider the rest of Flickr (and, indeed, the Internet as a whole), picking up photos of me that would not otherwise have been attributed to me, due to a lack of tagging/metadata.
  • For argument’s sake, let’s say I decide to visit this weekend’s London Erotica 2005 event or attend last Monday’s Google Open House London ‘semi-recruitment’ event – both private pursuits a person wouldn’t want, say, their employer to be aware of.
  • At both events someone takes a photograph in which I’m in the background, maybe dressed in my PVC gimp outfit (remember, this is an example, yeah?). Those photos are uploaded to Flickr and subsequently spidered by Riya.
  • Someone running an arbitrary search for “Ben Metcalfe” is presented with these potentially damaging photos of me.

Many might argue that this is all “fair game”, but my point would be that this kind of technology was originally developed for use in regulated environments – such as local authority CCTV scanning.

Photos are funny things: the person who took the photo, not the person(s) in it, holds all the rights. This steps up the concerns to a whole new level – abuse of the service can take place, and people’s privacy and anonymity are seriously eroded.

There are also some practical issues. Matt Locke made the point that it would be difficult for the software to distinguish him from his identical twin brother – at best limiting the use of the software in this instance, and at worst falsely attributing him to incriminating photos (who knows what kind of debauchery Matt’s brother gets up to???!?).
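False attribution isn’t just a twins problem – it’s a numbers problem. As a back-of-the-envelope illustration (all figures here are invented), even a very accurate matcher produces mostly wrong matches once it is scanning a web-scale pool of faces:

```python
# Hypothetical numbers: even a matcher with a 0.1% false-positive rate
# drowns in false matches once the candidate pool is web-scale.
false_positive_rate = 0.001
photos_of_you = 10          # genuine photos of you in the index
strangers = 10_000_000      # everyone else's faces the spider has seen

false_matches = strangers * false_positive_rate
print(false_matches)        # 10,000 strangers wrongly tagged as you

precision = photos_of_you / (photos_of_you + false_matches)
print(round(precision, 6))  # ~0.000999: almost every "match" is wrong
```

Which cuts both ways: it makes the service less useful, but it also means innocent people can end up attached to photos they were never in.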

The other issue is around an opt-out feature (which I would like there to be). Simply put, I could imagine many people opting out, thereby limiting the value of the proposition.

And don’t get me started on the potential consequences of the rumoured pre-launch buy-out by Google (a company which seems to be upping its potential to obliterate its original mantra of “do no harm”).

Who knows what will happen to Riya. I’ve signed up for the beta (although I’m yet to receive an invitation). I understand it is due to launch as v1.0 very soon, so that will probably be the first opportunity to see whether my concerns are valid. I fear they might be…

Published in News Thoughts and Rants


  1. Here’s betting that the software throws up so many false positives that it becomes a bit crap.

    Face recognition, like fingerprint recognition, only works when you have a relatively small sample size. When your sample size is the whole internet, then there will be all sorts of quirks.

    Humans are always going to be infinitely better at tagging than computers.

    That said, some of your privacy concerns apply equally to human tagging as they do to automated tagging. If I (or someone else) spots a photo of you on Flickr or the web, I could add your name to the metadata myself (on Flickr, if tags are left open), making your photo equally searchable…

  2. Ben

    That said, some of your privacy concerns apply equally to human tagging as they do to automated tagging. If I (or someone else) spots a photo of you on Flickr or the web, I could add your name to the metadata myself (on Flickr, if tags are left open), making your photo equally searchable…

    This is true

  3. It’s quite a scary thought, but then again it may be useful in some instances. It should be an opt-in service though, as there’s no conceivable way there can be an opt-out service. There’s no way, over the internet, of verifying that someone’s face actually belongs to the email address they claim.

  4. The face recognition isn’t nearly powerful enough at this time to pick you out in your ‘gimp suit’ (or other “fictional” wearables) in an anonymous crowd. In fact, in order to (as Frankie alluded to) keep the sample sizes fairly small, and, therefore, the training results more accurate, we use email address as a unique identifier.

    Only those with both your photo and your email address can create an official digital signature.

    Ah…as we live our lives more and more online, we come up against these heavy security questions. Any other security measures you can suggest would be great.

    BTW…nice meeting you at Les Blogs, Ben. 😉

Comments are closed.