Category Archives: Web Services

Presdo API contest

Presdo logo

Presdo, the online scheduling start-up, is running a mashup competition around their API. Build something cool with it and you could win an iPhone 3G.

Sounds pretty good to me, so I was delighted to accept Presdo founder Eric Ly’s request to be a judge for the competition. Yes, that sadly means I can’t win one of those little boxes of Apple 3G goodness 🙁

If you have a cool idea for a calendaring-meets-your-favorite-web-app/API/social-site/whatever application, then you’ve got until July 18, 2008. Good luck!

[via VentureBeat]

Apture: elegantly adding context to your site

“Wow, that’s really really slick!”

That was my reaction when Tristan first showed me a demo of Apture (which just opened for signups, if you want to add it to your blog or website).

We’d met a few times previously and he’d been teasing me with hints about the product he was working on – but refused to show me anything, or even give me any detail about what he and his co-founders were really up to.

All I knew was that we shared a common interest in both grassroots and mainstream media, and in the importance of innovation given the nature of the content being communicated. We’d spent several meetings discussing all sorts of interesting stuff – from the way the media is often the last resort for keeping governments and business in check, and the need for an informed society, through to the power of building products with a platform-orientated architecture.

Very much a meeting of minds – and so when I finally got to experience Apture, I was delighted that it too was at the intersection of so many of my favorite topics. I’m also proud to say that I am a member of Apture’s advisory board.

Welcome to Apture

For me, Apture is about bringing light-touch context and background to topics within the page you are looking at. In essence, it provides a simple framework for attaching background context and ancillary content to subjects mentioned in your page – all without interrupting the flow of your reading and, crucially, without leaving the page you are looking at. In fact, you have already experienced Apture! (unless you are reading this in a feed reader, in which case check out the page on my blog)

When I saw the first demo of the product, what excited me the most was the implementation – which I think is slick and impressive. The thoughtful UI makes the product simple and intuitive to use, backed up by some pretty tight code that makes the seamless experience possible.

Elegantly handling off-site links and embeddable media

From my days working at the BBC News Website, I’ve seen first hand the importance of providing background information on the subjects discussed in a news story. Not everyone follows the news agenda as deeply as others, and providing a bit of context can really make the difference and ensure the reader is able to engage with the latest developments being written about.

I’d also seen examples of how the BBC had got some of its interface and style guidelines wrong – like not using hyperlinks inside body content, and completely missing the early emergence of embeddable media (arguably pioneered by YouTube). I have to hold my hands up to these as much as anyone else at the Beeb, as I was there at the time these things took off.

On both counts Apture solves these problems in an elegant way.

The concern around marking up body content with hyperlinks is about usability. When the user clicks on them she is taken to a new destination page mid-flow of her reading. Apture solves this concern by providing the essence of the page you want to link to in an easily manipulated floating window that the user can quickly digest and either get back to the copy or potentially elect to click through to a fuller page of content. The point is that the reader makes an informed decision whether to jump to a new page or continue reading. Apture also lets the reader position the window around the content so that they can interact with it later on when they are ready.

Another key part of this is the selection of the media you use to provide that background to your post. Apture helps you there too – by recommending relevant content from across numerous repositories on the internet – including Wikipedia, Flickr and IMDB. Finally, it reformats these pages so that the pertinent information is displayed clearly inside the Apture window that is associated with your subject.

Apture also provides a unique way to embed media, and can even handle certain types of media asset simply by noticing that you are linking out to a photograph or a video in your piece.

Open for business

Having been in closed beta for some months, this week Apture was released to the public. Getting Apture on your site is really simple (just a line of JavaScript, or the installation of a WordPress plugin) and of course it is totally free.

You can also take a tour of the product and see more demos of it in action.

Channel 4 launches tiny widget/mashup competition

Given that I helped establish (probably) the first developer mash-up competition run by a media organization, and given my recent foray into the world of widgets, I was particularly interested to read that UK broadcaster Channel 4 is dipping its toe into the water by running a similar competition around its Film4 service.

And to continue the ‘toe dipping’ analogy further, I’d have to suggest that it is only a little toe – as a quick review of the ‘Platform 4’ contest site demonstrates.

Create a widget or mashup from two RSS feeds; the winner gets £1500 and two runners-up get £250 each. That’s it.

Now don’t get me wrong, it’s a fantastic start and I want to congratulate them for it (I have a feeling my ex-BBC colleague and supporter Matt Lock could be behind this as he’s now at C4). However I really hope they build out Platform 4 into a complete developer resource to help bring innovation into Channel 4.

Finally, if the BBC, Channel 4 and even ITV can get together and build out an IPTV offering, it would be great to think that the Beeb and C4 could combine efforts, along with other developer networks, to help support each other.

[via TechCrunchUK]

Unifying the mobile platform: why the iPhone is really important

The reason why the iPhone is an important phone is not because of its shiny gadgetness or its touch interface. It’s not even important because it’s the first serious media player to be combined with a phone.

It’s important because of its web-based approach to application development. I believe this approach will spur other manufacturers to follow suit, and in turn we will find ourselves with a truly unified development platform not owned by any single vendor or manufacturer.

Right now, developing applications for mobile phones is a pain, with no single way of rolling out an application to every phone (or even the majority of phones) on the market. Sun’s J2ME was supposed to solve all this, but instead we still have a chaotic environment of different MIDP profiles, screen sizes and capabilities – and even carriers who prevent unsigned (read: non-rev-shared) Java applets from running on some of their phones.

This is kind of what the world of computing was like before the Internet – when Macs wouldn’t read files created on PCs and vice versa. The internet came along, and a common set of standards was created that allowed documents to be interchanged between any computers. Later on we managed to coerce those standards into lightweight applications that more often than not provide all the functionality we need.

I believe we are finally going to see this happen on the mobile phone. Apple is leading the way by promoting the iPhone’s Safari browser as the development environment for the iPhone – but there is no reason why this can’t be emulated on other phones too.

Apple is setting the bar for future high-end phones, and the way to achieve the kind of features they are offering on other platforms is to go the browser-orientated route too. That’s what will convince phone manufacturers like Nokia and Sony Ericsson to focus on the browser in their future phones, and in turn unify the platform for all of us.

The other ingredients are in place too. Opera is a great little browser on the phone, and until today’s iPhone release was by far the most advanced mobile browser for JavaScript and early-Ajax functionality. I’m sure they’ll be looking to partner with (or even sell to) a major manufacturer to continue its development. Microsoft is already investigating this area with DeepFish, although it’s not currently available for general use.

Google’s significant backing of Firefox development, and its interest in the mobile space, must also mean something is going on with Firefox. However, we in the community need to make sure that the Mozilla/Firefox engine doesn’t get 0wned by Google solely for their benefit in the Google phone.

But not going to get one today…
So I swung by the Apple store in downtown San Francisco to check out the scrum just as they opened their doors at 6pm to start selling the iPhone.

It was chaos, and there’s no way in the world I’d have wanted to spend more than a few hours in that environment – certainly not 24hrs+ in line.

Most of the people queuing up wanted it because it’s the latest cool shiny gadget – and that’s fine, but it doesn’t float my boat. It was interesting, though, to spot a few familiar faces in the line, such as Netvibes’ Tariq Krim, who were buying it “solely for the API”. Tariq doesn’t even live in the US, but can see the benefit of having one to build Netvibes out onto.

Personally, I’d love one for development but I have no interest in it as a consumer phone nor do I wish to be an AT&T Mobile customer.

Today it’s all about the “I have it first” crowd – and that’s not a head space I think is all that positive. I certainly don’t want to be part of it, but it’s one that Apple feeds off with great success. “A marketer’s wet dream” as my wife described it.

I look forward to reading the inevitable technical reviews of the phone and the official development documentation to grok when I need to build something for it. I also want to see what Blackberry, Nokia, Microsoft and Sony Ericsson have in the works in response.

(disclosure: Orange France Telecom is currently a significant client of mine, although I do not work in any mobile-related area for them. I do work on a project that is a competitor to Netvibes, mentioned in this article.)

At Google Developer Day 07, San Jose… Google Gears looks hawt

I’m at Google’s Developer Day in San Jose. Looks like it’s going to be a great event – we’ve just finished the keynote, which concluded with an appearance by Sergey.

Google Developer Day

Google are announcing a number of things today (well, they’ve announced them already, because we’re in the last timezone to hold a Dev Day). In order of most interesting, IMHO, they are:

  1. Google Gears – a local storage AJAX proxy for offline use
  2. Google Mashup Editor – looks like a rival play to Yahoo Pipes and MS Popfly
  3. Google Mapplets – widgets that you pull into Google Maps to display multiple data sets on a single map

Google Gears looks particularly exciting because it’s essentially a local storage system that picks up the slack to provide data to your AJAX apps when your computer is offline. It gets plus points for being open source (BSD license) and for having a simple JavaScript API.
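The core idea – a local store that answers for the network when you’re offline – can be sketched in a few lines. This is just an illustration of the pattern (in Python, with made-up names), not Gears’ actual JavaScript API:

```python
# Sketch of the pattern Gears implements for AJAX apps: a local store
# sits between the app and the network, serving cached data when the
# network goes away. All names here are illustrative.

class OfflineProxy:
    def __init__(self, fetch):
        self.fetch = fetch      # function that performs the network request
        self.cache = {}         # stand-in for Gears' local SQLite store
        self.online = True

    def get(self, url):
        if self.online:
            try:
                data = self.fetch(url)
                self.cache[url] = data   # write-through to the local store
                return data
            except IOError:
                self.online = False      # drop to offline mode on failure
        # offline: fall back to the last good copy, if we have one
        if url in self.cache:
            return self.cache[url]
        raise KeyError("no cached copy of %s" % url)

proxy = OfflineProxy(lambda url: "live:" + url)
print(proxy.get("/headlines"))   # fetched 'live'
proxy.online = False
print(proxy.get("/headlines"))   # same data, now served from the local cache
```

In Gears itself, as I understand it, the equivalent roles are played by its LocalServer and SQLite-backed Database modules, exposed to page JavaScript.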

I was all ready to start thinking about how it matches up against Adobe Apollo, but then Adobe came out and said they were working with Google to implement this functionality in Apollo too. That’s really significant, because the only other player I was aware of doing something in this area was Mozilla – and it turns out they’re now dialed into this too (Google funds a significant part of Firefox’s development).

If you build AJAX apps you need to grok Google Gears. My only fear is what Google might do with the data (if anything)… but it’s an open source app, so the code is there for inspection. More here.

Will report more.

Amazon S3 cost savings and the future of utility computing services

Amazon have been doing some amazing things in the utility computing sector with S3 (storage) and EC2 (virtual servers).

It’s touted as an economical platform to run your start-up/company/website/whatever from, as it makes use of the spare capacity Amazon owns from its e-commerce platform (so, er, what happens over Christmas when that platform is at its peak?).

I think it very much depends on what you intend to use EC2 and S3 for, but photo sharing site SmugMug has been a champion of S3 for some time – claiming it’s saving them over $500k in the first year. At this point in time they only use Amazon S3 for storage of the images.

Michael Arrington called them on their numbers during the Web 2.0 Summit, and so the SmugMuggers have come back with some real numbers on their blog. The conclusions are:

  • Total amount NOT spent over the last 7 months: $423,686 [by not buying IDE disks, RAID controllers and single CPU servers for their SAN]
  • Total amount spent on S3: $84,255.25
  • Total savings: $339,430.75
  • That works out to $48,490 / month, which is $581,881 per year. Remember, though, our rate of growth is high, so over the remaining 5 months, the monthly savings will be even greater.
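Their arithmetic holds up, as a quick sanity check shows:

```python
# Sanity-checking SmugMug's published figures above.
not_spent = 423686      # hardware not bought over 7 months ($)
s3_bill = 84255.25      # paid to Amazon S3 over the same 7 months ($)

savings = not_spent - s3_bill
per_month = savings / 7

assert savings == 339430.75    # matches their "total savings" figure
print(round(per_month))        # ~48,490 dollars per month
print(round(per_month * 12))   # ~581,881 dollars per year
```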

I was talking to an interesting start-up on Friday who were offering a product that could essentially be leveraged as a ‘commodity’/‘utility’ computing product. I can’t really say what their vertical is, but they were marketing it (somewhat understandably) with a specific implementation so that it could be sold as a turn-key product. Their business plan was based on taking a percentage of the revenue impact their product had.

I felt that by wrapping an interesting utility product into a single means of implementation they were shutting many doors to ways their product could be used in other areas. My overarching point to them was that what Amazon is really trailblazing here is the pricing model – and rather than charging based on the somewhat fuzzy impact their product had on revenue (which could easily have been zero or negative, not just positive – and potentially off-putting, IMHO, as it requires companies who might be private to disclose revenues, etc.), they could charge based on usage: $0.10 for 1000 calls to their service, and so on.
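A metered model like that is trivial to implement, too. Here is a toy billing calculation using the hypothetical $0.10-per-1000-calls rate from above (billing in integer cents to avoid float rounding):

```python
RATE_CENTS_PER_1000 = 10   # hypothetical rate: $0.10 per 1000 calls

def bill_cents(calls):
    """Charge per block of 1000 calls, rounding partial blocks up."""
    blocks = -(-calls // 1000)   # ceiling division
    return blocks * RATE_CENTS_PER_1000

print(bill_cents(2500000) / 100)   # 2.5M calls in a month -> 250.0 dollars
```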

EC2, Amazon’s flexible virtual server product, is in many ways even more fascinating. I’m not convinced it’s the cheapest way to run a server continually – not over a 12-month term at least. And its performance over the Christmas period is a bit of an unknown. But the ability to suddenly double or triple the number of instances of an application server, especially for a short time during a serving peak, has real, definite value. The emergence of ways to automate the ramping-up and ramping-down of servers over the course of a day/week/month cycle is particularly exciting.

It’s definitely an interesting area and I’m curious to see how the industry adopts it.

Your own utility computing service?

My final point on all this, I guess, is to look at whether you have a potential utility computing product in your inventory or as part of your start-up. Even if it’s not your company’s core product, it might be something that’s part of your platform. Amazon is still a book and CD retailer after all, and it’s just utilizing affordances in its serving platform.

Anyway, the point is not only that you can offer something, but that you can easily monetize it in perhaps the most scalable and lowest-risk way – pay as you use. So start thinking about whether you have something like gateways (e.g. API<->SMS), data processing (sorting, ranking, etc.) or database capacity.

I had a change of heart…

So look, it’s the morning after the night before and I’ve decided to take down the links to the JSON and RSS feeds.

In general I don’t feel it’s ‘wrong’ for someone to link to them directly; after all, they were all listed in a javascript file on the BBC News Website (or derived by changing easily guessable paths in the URL). That’s how I discovered them – it’s been years since I had access to the BBC News webservers, so there was no ‘insider knowledge’ of paths etc.

BUT yes, I do admit that, because I personally know the licensing position of this data, it wasn’t something I should have done.

And despite the fact that I warned in the blog post that the data wasn’t licensed for off-site use, I take the point that it was a little irresponsible to encourage people to do it anyway. Feel free to ridicule me on your blog posts, mash up photos of me with egg on my face, etc. etc.

I do, however, still maintain that if I hadn’t posted the urls, it’s highly likely that someone else would have, as they were put into the public ether by the BBC on their site.

But please, kids, don’t try those urls at home. They’re bad for you.

(BTW I decided to take this down after a night’s sleep on it, the BBC didn’t ask me to take it down)

BBC News adds live stats (+ XML)

The BBC News Website has added a number of “Live Stats” features to their site. And I’ve been able to derive the urls for all the XML files powering it – creating some amazing mash-up potential!

First off, the “official” features are:

BBC News Live Stats Puffbox

screengrab of the BBC News live stats puffbox

These appear on all stories (Puffbox is the term for anything on the right-hand side of a story or promotion pieces on index pages).

BBC News Live Stats Map

BBC News live stats map

It’s a nice enough consumer-orientated Flash map, I thought… But then I realised: “Arhh, Flash!”. Of course, that means the data will be driven by XML-over-HTTP! So here come my derived ‘unofficial’ features:

BBC News Live Stats: Most Popular by Region:

 Worldwide (ALL)
 North America
 South America

BBC News Live Stats – Most Popular by Email:

 By Email

BBC News Live Stats – Most Popular by the Hour:


In all cases, stories are listed purely by their ID in the BBC News CPS. However, urls can be easily derived as follows:

For text/’normal’ IDs:<id>.stm

For video IDs:<videoid>

(Please keep the reference to the namespace in there – BBC folk use those urls to monitor link popularity, and I believe it is important for the BBC to be able to discover just how important third-party use like this is in terms of driving traffic back to their site.)
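In code, the derivation is just string formatting. The base path below is a deliberate placeholder – the real BBC url templates are the (partly elided) ones given above:

```python
# BASE is a placeholder, NOT the real BBC path, which is elided above.
BASE = "http://news.example.invalid"

def story_url(story_id):
    """Text/'normal' story IDs map to an .stm page."""
    return "%s/%s.stm" % (BASE, story_id)

def video_url(video_id):
    """Video IDs use a separate path."""
    return "%s/video/%s" % (BASE, video_id)

print(story_url("1234567"))
```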

Also, the new BBC News Alert Ticker contains some interesting OPML files that include references to previously unannounced “Breaking News” RSS feeds:

BBC News RSS – Breaking News:

 Breaking News (UK Edition)
 Breaking News (World Edition)

(Like most BBC News RSS feeds, it is probably safe to assume that both the “World Edition” and the “UK Edition” feeds will carry the same breaking news. The generation of two feeds is a ‘bug’ arising from the way the BBC News website is run, with separate UK- and international-facing editions.)
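If you want to play with feeds like these, pulling headlines out of an RSS 2.0 payload needs nothing beyond the standard library. The feed XML below is a made-up stand-in, not real BBC output:

```python
import xml.etree.ElementTree as ET

# A minimal stand-in for an RSS 2.0 breaking-news feed (not real BBC data).
SAMPLE = """<?xml version="1.0"?>
<rss version="2.0"><channel>
  <title>Breaking News</title>
  <item><title>Example headline</title><link>http://example.invalid/1</link></item>
  <item><title>Another headline</title><link>http://example.invalid/2</link></item>
</channel></rss>"""

def headlines(rss_xml):
    """Return (title, link) pairs for every item in an RSS 2.0 document."""
    root = ET.fromstring(rss_xml)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]

for title, link in headlines(SAMPLE):
    print(title, "->", link)
```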


Well there you go. I may have left, and the rest of the BBC but I’m still keen to make sure all this good data gets out into mash-up space.

Finally, I would like to take the opportunity to confirm that all of the above urls were derived by sniffing the HTTP packets requested by my computer from the BBC servers by the Live Stats Map, or from the OPML files that came with the BBC News Alert Ticker. It’s all public data folks, and nothing under NDA.

Launch #1: BBC Programme Catalogue

I said to watch out for Wednesday [second half of post], and here it is…

Today we launched two new and exciting projects: and BBC Programme Catalogue.

I’ll come onto in the next post, but here’s the BBC Programme Catalogue:

The best way to describe the BBC Programme Catalogue is to call it “IMDB for BBC programmes”.

It’s a web-based front-end to a database containing more than 70 years of BBC TV and radio programming. That’s almost 950,000 programmes, with a delay of about 6 weeks behind what’s currently being played out. Infax, as the back-end database is known, is maintained by teams of hard-working BBC librarians who catalogue, label, and create metadata for practically everything that the BBC broadcasts.

It’s also another example of us trying to expose resources that were previously only available behind commercial deals or for industry b2b use.

The Infax database has been available for a while, but only via BBC Worldwide (our commercial division), aimed at people who need to search BBC programmes to buy (for their TV network).

Now it’s available to everyone, and soon via API too.

Mad props go out to Matt Biddulph, who worked on this along with Ben Hammersley. I read somewhere that someone dropped my name in too, but I really can’t take any credit for this – it’s all theirs. Well done chaps!