(Disclosure: I am a member of the non-profit APML Workgroup, which facilitates the development of the APML specification)
Tom Morris has an interesting take on the space, primarily in response to a post by friend and former colleague (and current captain of the good ship backstage.bbc) Ian Forrester on his concept of an “APML Lite” (not currently connected with the APML Workgroup).
In his post, Tom raises a number of issues and concerns around the attention markup space – which I feel would be useful to address. However, just for the sake of those not completely across what APML is trying to do, let me define two key points:
- Attention is the term given to the entire scope of what you consume and ‘pay’ interest to – be it websites, books, songs, etc.
- Attention Profile is a metadata payload of that attention, in the form of keywords (or themes) and weightings, which help score how much attention you pay a given keyword. The idea is that a system tracking your attention could generate such a profile which could then be easily ported to another application and processed accordingly. (APML is a proposed XML format for this payload)
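To make the keyword-plus-weighting idea concrete, here is a minimal sketch of what such a payload might look like when generated programmatically. The element and attribute names below are purely illustrative assumptions, not taken from the actual APML specification:

```python
# Sketch of an attention-profile payload in the spirit of APML.
# Element names ("AttentionProfile", "Concepts", "Concept") and the
# key/value attributes are illustrative, NOT the real APML schema.
import xml.etree.ElementTree as ET

profile = ET.Element("AttentionProfile")
concepts = ET.SubElement(profile, "Concepts")

# Each concept is a keyword plus a weighting scoring how much
# attention the user pays to it.
for keyword, weight in [("football", 0.72), ("London", 0.55)]:
    ET.SubElement(concepts, "Concept", key=keyword, value=str(weight))

payload = ET.tostring(profile, encoding="unicode")
print(payload)
```

The point is the shape of the thing: a small, portable list of (keyword, weight) pairs rather than the full raw attention log.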
Ok, so Tom starts out with the fundamental question about the validity of Attention Profiles:
“The problem I see is that I am not sure what the point is of attention formats. I can see the point of attention, sure. That’s easy. But for me, attention is a set of algorithms which sit above the data layer. When building applications, you try hard to separate out the business process from the database.”
I’m going to assume Tom means to ask “what is the point of attention profile formats?”, as the purpose of complete attention formats is to distribute entire attention payloads across systems (which he advocates throughout his post and implies straight off the bat by mentioning the concept of separating business process from database).
“…attention is a set of algorithms which sit above the data layer.”
Well, as mentioned above, technically attention is not the algorithms that sit above the data – it is the data itself. And that data tends to be heavy (imagine a file listing every website you ever visited or every song you ever listened to, each time it was played).
The primary purpose of attention profile formats is to empower the end-user with something of value that they can easily move around the ecosystem. Something that isn’t unmanageably huge.
APML is a way to reflect the product of the very algorithms he mentions. For example, different attention keepers who you allow to track your attention could come to very different conclusions about your attention interests based on the same data. Attention tracker #1 could conclude that you like “football” and “London”, attention tracker #2 could conclude from the same dataset that you actually like “Arsenal” (a specific football team) and “Islington” (a specific region of London).
And don’t forget that the granularity in this regard extends not just to the keyword itself but to the weighting too.
Now, assuming that attention tracker #2 has produced a better and more accurate profile for you, APML gives you the opportunity to export that higher-value profile elsewhere. If you had to export the entire dataset to another system, you could end up with the new system using an inferior algorithm similar to attention tracker #1’s, and you would be stuck with a crappy profile and perhaps crappy recommendations.
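The tracker #1 vs tracker #2 scenario above can be sketched in a few lines. Everything here is a toy assumption (made-up URLs and hand-written URL-to-concept mappings standing in for each tracker's secret-sauce algorithm), but it shows how the same raw data yields two different keyword/weight profiles:

```python
# Toy illustration of two trackers distilling the SAME raw attention
# log into different keyword profiles. The mappings stand in for each
# tracker's proprietary inference algorithm (assumed, not real).
from collections import Counter

visits = [
    "arsenal.com", "arsenal.com", "arsenal.com",
    "bbc.co.uk/football", "islington-gazette.co.uk",
]

# Tracker #1 infers only coarse topics from each URL.
COARSE = {"arsenal.com": "football", "bbc.co.uk/football": "football",
          "islington-gazette.co.uk": "London"}
# Tracker #2 infers finer-grained concepts from the same URLs.
FINE = {"arsenal.com": "Arsenal", "bbc.co.uk/football": "Arsenal",
        "islington-gazette.co.uk": "Islington"}

def profile(mapping):
    counts = Counter(mapping[url] for url in visits)
    total = sum(counts.values())
    # Weight each keyword by its share of total attention.
    return {kw: round(n / total, 2) for kw, n in counts.items()}

print(profile(COARSE))  # {'football': 0.8, 'London': 0.2}
print(profile(FINE))    # {'Arsenal': 0.8, 'Islington': 0.2}
```

Exporting the second profile carries the value of the better algorithm with it; exporting the raw `visits` list would leave the receiving system to re-derive a profile with whatever algorithm it happens to have.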
Tom questions this concept:
“A different attention tracker is meant to trust this, even though the process that is used to calculate it may as well have been Mystic Meg’s bloody tarot cards.”
Well, making reference to the example above, in terms of generating the profile it’s up to the user to pick and choose which services they feel produce the best quality results for them – just like you have to decide whether Google or MSN Search give you better search results. However if a user has decided that a given exported profile is accurate then, yes, a recipient attention tracker is meant to trust this file – after all it’s been given the user’s seal of approval.
Obviously APML is just a proposed format, agnostic as to whether one provider is better than another, but it’s not unreasonable to assume that the user would know whether they’re going to be exporting a good profile or not – a service should be showing their profile in the primary interface and also making accurate recommendations. And if Mystic Meg ever produces an attention service and a user wants to export a profile from her, then why should they not be able to do so (no matter how poor it might be)? There’s the wider, more common issue here of the user’s right to data portability from silos.
“We can own our attention data all we like, but we need open attention algorithms too, if we want to do anything truly useful with it.”
I’m a proponent of open-source and open-data, and to a fair degree that extends to algorithms. But I’d have to disagree that attention data is only ‘truly useful’ if the algorithms that process that data are ‘open’. For a start, some of the most useful algorithms around – such as Google’s search algorithm – are anything but open.
But crucially, another key use of APML, as mentioned above, is to programmatically reflect the product of these algorithms – which gives you the benefit of them in an environment where the vendor maintains a proprietary secret-sauce algorithm. The philosophical debate as to whether vendors should maintain secret-sauce/proprietary anything is beyond the scope of this document, and frankly a notion we all have to work with regardless of whether we agree with it or not. So APML actually helps you when you are dealing with an ecosystem of proprietary algorithms.
Collaborative filtering vs keywords
All of this may, however, be missing Tom’s fundamental question – and that is the keyword approach.
“The problem with hitching data formats to specific use cases is that nobody knows what the use cases will be.”
He’s right that APML assumes the ingesting attention engine is going to be keyword based – but that’s because keywords are becoming a pretty common currency for attention profile data. I would beg to differ that we don’t know what the use cases are. Just thinking about the projects I am personally involved in: I am advising Orange on a personalized homepage and recommendation service which makes heavy use of keywords as part of its unique selling proposition, and I’ve been involved in, and aware of, a fair degree of keyword-orientated work at the BBC too.
“Ideally, an attention engine would be able to pull in data like who I’m talking to, what products I’ve bought on sites like Amazon, what music I’m listening to, who and when I add people to social networking services, and then make rules-based guesses as to how to direct my attention to further my goals.”
“… in RDF, we have a way to represent all the data in a format that could quite feasibly scale up. Through GRDDL, XSLT and microformats, we have a relatively straight-forward process to move data in. What we get for very little work is the potential of a relational database where all the relationships are URLs.”
From these two quotes I get the impression Tom is orientating his thoughts and aspirations towards a different attention recommendation model – perhaps something like collaborative filtering (“people who bought book x also bought book y and book z”, “people who visited link a also visited link b and link x”, etc.). To be fair, this is yet another, albeit different, use case – and so, if Tom won’t be drawn on any, I’m slightly at a loss as to how this one is any more valid than any other.
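For readers less familiar with the collaborative-filtering model being contrasted here, a minimal co-occurrence sketch of the “people who bought x also bought y” idea looks something like this (illustrative data, not any real service's algorithm):

```python
# Minimal "people who bought x also bought y" co-occurrence sketch.
# The purchase data is made up purely for illustration.
from collections import defaultdict

purchases = {
    "alice": {"book_x", "book_y"},
    "bob":   {"book_x", "book_y", "book_z"},
    "carol": {"book_x", "book_y"},
}

def also_bought(item):
    # Count how often other items co-occur with `item` across users,
    # then rank by co-occurrence count.
    counts = defaultdict(int)
    for basket in purchases.values():
        if item in basket:
            for other in basket - {item}:
                counts[other] += 1
    return sorted(counts, key=counts.get, reverse=True)

print(also_bought("book_x"))  # → ['book_y', 'book_z']
```

Note that this model operates on the raw item-level data itself rather than on a distilled keyword profile, which is exactly the trade-off at issue in this debate.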
However, I do have some thoughts on this.
Firstly, there is already a specification for exporting entire raw attention datasets of URLs – Technorati’s attention.xml. The possibility of doing a fair chunk of what Tom is advocating (his proposed ‘full-on data’ approach) has been around for some time, and it’s fair to say no one has really done anything with it. From talking to various people involved with the specification, I think it’s fair to say that Technorati have moved on from it.
(In fact, their consumer proposition these days is about keywords, funnily enough.)
One of the aspirations, I believe, of the APML Workgroup is to produce something that is ready to be implemented in the consumer space rather than build specifications and formats for the sake of computer science.
Keep everything, including the kitchen sink
In many ways what Tom suggests is the ‘keep everything-and-the-kitchen-sink model’, the lossless model where nothing is lost or left behind – and I think his primary beef is actually not with APML but with the notion that a ‘lossy’ keyword model is a good (or at least valid) model in the attention space.
Only time will tell which is more successful, but so far there are no successful consumer-orientated implementations of attention.xml or anything like what he is describing. And I question whether consumer-orientated services will need a user’s entire raw attention data to give them an accurate recommendation.
There are more complicated debates too, like traversing objects – deciding that I like “Arsenal” as an attention concept from my URLs and then recommending me books, or friends in a social network with similar interests. You can’t do that accurately with the kitchen-sink model (unless you convert to keywords, and then you have profiles and thus APML…)
It’s early days for APML, but already I can see many examples where such an approach has a far greater chance of adoption, and it is for that reason that I am supporting APML.