# Audiohack: Timed data binding for HTML5 media with memento.js

This American Life's Audio Hackathon was last weekend. It was a good time exploring different ways to enhance podcast creation, discovery, and consumption.

My team worked on a web app that synchronizes social and multimedia content – photos, ads, tweets, videos, maps, links, etc. – with an audio file. The basic idea was to make it easier for listeners to engage with external content referenced within a podcast episode. For example, if the host is talking about a specific celebrity tweet, this app could display said tweet at the specific time in the podcast when it's being referenced. It'd also be awesome for a show like Serial, where the producers would be able to annotate the show with actual documents and additional details about the case – synchronized show notes.

But the really valuable result of this hackathon project is the JavaScript tool that we (ahem, Vijith) wrote to handle time-based data binding for HTML5 media objects: memento.js.

From the docs:

> Memento.js binds data to regions of audio or video and allows you to quickly recall the results at any point during playback or scrubbing. Calling the .data() method on a memento object will retrieve the data corresponding to that point in time. Calling the .data() method repeatedly at different points in time will retrieve different data.
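
Here's a rough sketch of what that might look like in practice. The `.data()` call is straight from the docs above, but the constructor shape and the timed data format are assumptions made up for illustration, not memento's documented API:

```js
// Hypothetical setup: bind a timed data structure to an <audio> element.
// The memento(...) signature and the region format are assumed for illustration.
var audio = document.querySelector('audio');

var notes = memento(audio, [
  { start: 12.0, end: 45.5, payload: { type: 'tweet', id: '123456789' } },
  { start: 45.5, end: 90.0, payload: { type: 'map',   src: 'neighborhood.png' } }
]);

// The same call returns different data depending on the current playback time.
audio.currentTime = 20;    // inside the first region
console.log(notes.data()); // -> the tweet payload

audio.currentTime = 60;    // inside the second region
console.log(notes.data()); // -> the map payload
```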

Conceptually, memento can be thought of as a way to slave all JavaScript execution to the playback timeline of the bound media node. It doesn't meaningfully use JavaScript events, though. Instead, it wraps around a playable media node and a queryable data structure, then uses timing information from the former to deliver data payloads from the latter that automatically change as playback moves.
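
For instance, rather than listening for timeupdate events, a page could just re-ask the memento object what's current on every animation frame and render whatever comes back. Everything here beyond the `.data()` call itself is hypothetical glue code, building on the `notes` object from the sketch above:

```js
// Hypothetical consumer loop: no media events, just repeated recall.
// Whatever .data() returns at the current playback position gets rendered.
var container = document.getElementById('show-notes');
var lastPayload = null;

function render() {
  var payload = notes.data(); // the memento object from the earlier sketch
  if (payload !== lastPayload) {
    // Only touch the DOM when the bound region actually changes.
    container.textContent = JSON.stringify(payload);
    lastPayload = payload;
  }
  requestAnimationFrame(render);
}

requestAnimationFrame(render);
```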

Check it out.