Wednesday, 15 August 2012
by sven
filed under Code and Announcements
Comments: 3

Our latest offering of open source software from the headquarters is last.json, a JSON library for C++ that you can now find on GitHub. If you are coding in C++, need to work with JSON data and haven’t found a library that you like, do check it out.

We benefit a lot from open source software. Almost all our servers run Linux, the main database system runs PostgreSQL, and our big data framework for data analysis is based on Hadoop, just to name a few examples. Of course, not all the software we need is freely available, and we have had to write lots of code ourselves. When a building block is missing from the open source universe and we have to carve it ourselves, and we think our solution is good and general enough to be useful to other people, we like to contribute back to the community and release it as free and open source software.

JSON has become hugely popular as a format for data exchange in the past few years. The name JSON stands for “JavaScript Object Notation”, and it is really just the subset of the JavaScript programming language’s syntax that is needed to describe data. A valid bit of JSON is either a number (say, 12 or -5.3), a truth value (true or false), a string literal (“hello world!”), the special value null (a placeholder for missing or unassigned data), or one of the following two: a list of JSON values, or a mapping of property names to JSON values. These last two data types make it possible to express almost any data using JSON. A list could be [1,2,3] or [99, “bottles of beer”]. It is literally a list of data elements, which can be of identical type (like the all-numbers list in the first example) or of different types (like a number and some text in the second example). You can add structure to your data using mappings: { “object”: “bottle of beer”, “quantity”: 99 }. A mapping is basically a set of key-value pairs, where the key is a bit of text (“object” and “quantity” in the example) and the value can be any of the JSON data types.

Now you know all the rules of JSON data. The reason it is so versatile is that you can nest these data types: any element of a list or any value in a mapping can itself be a list, a mapping, or any of the other primitive data types. This is perfectly valid JSON:

{
  "artist": "White Denim",
  "similar artists": ["Field Music", "Unknown Mortal Orchestra", "Pond"],
  "toptracks": [
    { "title": "Street Joy", "scrobbles_last_week": 739 },
    { "title": "It's Him!", "scrobbles_last_week": 473 },
    { "title": "Darlene", "scrobbles_last_week": 386 }
  ]
}

You can imagine how this can be used to describe virtually any data structure. It is much simpler than XML and many other data formats. And the good thing is that not only can computers read JSON – humans can, too! As you can see in the example, not only can you read the data, you understand immediately what it is about. More often than not, JSON data is self-explanatory.

So, as I said before, JSON has become very popular for data exchange. It is a breeze to use in JavaScript (which is not surprising, because any JSON is also valid JavaScript) and many other programming languages like Python, Perl or Ruby. If you are familiar with any of these languages, you probably see that these languages have data types very similar to the JSON types, and it is therefore easy to represent and work with JSON data in those languages.

Unfortunately, this is less true in C++. C++ is statically typed, which means that you always declare a variable with a specific type. It can be a number or a text string if you want, but you have to decide which one it will be at the time you are writing your programme code. There are standard types for lists and mappings, too, but those require their data members to be of identical type. So you can have a list of numbers, or a list of strings, but not a list of items that could individually be a number or a string.

We use C++ for many of our backend data services, because it is fast and not resource hungry. If you have a good level of understanding, you can do great things in C++, and we love to use it for certain tasks. When we first wanted to use JSON for data exchange in our C++ programmes, we looked for a good library that makes it easy to juggle JSON data, but we couldn’t find one that really satisfied our needs. So we spent some time writing our own library. And because we think it’s not too bad, and other people might have the same needs, we have now open sourced it under the MIT license, which basically means that you can use it freely in your own projects, but we accept no liability for bugs or whatever else could go wrong with it.

So, how do you work with JSON using last.json? The library defines a datatype lastjson::value which can hold any JSON data. You can check at runtime what data type it actually holds, and then convert it (or parts of it) to standard C++ types. The best practice, however, is to use it much like you would in those scripting languages I mentioned earlier: you just access elements of lists or mappings as the data types you expect them to be. If the JSON data does not have the structure you assumed, the last.json library will throw an exception that you can catch. Imagine you have a variable std::string json_data that contains the JSON fragment from the example above (the one about White Denim):

lastjson::value data = lastjson::parse(json_data);

This parses the json string into the lastjson::value representation. And these are a few things you can do with the parsed JSON data:

try
{
    std::cout
        << "Artist name: "
        << data["artist"].get_string()
        << std::endl
        << "Second similar artist: "
        << data["similar artists"][1].get_string()
        << std::endl
        << "Top track last week: "
        << data["toptracks"][0]["title"].get_string()
        << std::endl
        << "... with "
        << data["toptracks"][0]["scrobbles_last_week"].get_int()
        << " scrobbles."
        << std::endl;
}
catch (lastjson::json_error const & e)
{
    std::cerr
        << "Error processing JSON data: "
        << e.what()
        << std::endl;
}

last.json tries to make working with the JSON data as easy as in scripting languages. This was just an example, and last.json has many more cool features. So if C++ is your language of choice, go and check it out now.

Design Changes

Friday, 3 August 2012
by Simon Moran
filed under Announcements and Design
Comments: 49

For the last few months, we’ve been working on some design improvements, and after a couple of weeks in beta, we’re ready for our first full release. We’re pretty excited, and we wanted to share some of the details of the new design with you.

What’s new?

On almost every page on the site, we’ve moved the secondary navigation menu from the left side of the page to the upper right. This gives you a wider page, with more space for what matters: the content. On pages where there are a lot of items in the navigation menu, we’ve grouped the less frequently-used items into a small dropdown menu on the right.

Old navigation:

New navigation:

We’ve also redesigned Artist, Album and Track pages from scratch, and rebuilt the page templates completely. Have a look:

An Artist page:
An Album page:
A Track page:

There are three main aspects to the changes:

Tidier, more rational layout

These pages are very rich in information, and as the site has developed we’ve added more and more content to them. Our user research indicated that it was time to step back and take a fresh look at how the pages were laid out.

The new design groups actions and information together logically so that it’s easier to locate things on the page, and it’s laid out hierarchically, with the things most people use most often nearer the top of the page. We’ve also removed some less-important things from the main page, though most content is still accessible through the menu at the top of the page.

Fresher visual design

We regularly go out and talk to people about the site, and ask how we can improve things. In response to user feedback, we’ve updated the visual design of the page with more emphasis on images, more legible text, and cleaner, simpler graphics.

New page templates

We’ve built brand new page templates, which are more flexible and dynamic, so that your pages load faster and you spend less time waiting for pages to refresh. We’ve only just started to explore the possibilities of the new templates, so expect more optimisations and speed improvements in coming weeks.

We’ve also taken the first steps towards “responsive design” – which means pages working just as well on your mobile and tablet as they do on a full-size web browser. There’s still more work to do before we can release this, so stay tuned!

What’s next?

We’re going to continue updating the site gradually, over the coming weeks and months. We’re also going to address the feedback we’ve already had, from the beta release, with further tweaks and improvements to get the pages just how you want them.

Thanks for reading! We’d love to hear what you think of the new designs, either in the comments here or in our forums.

Advanced Robotics

Tuesday, 24 July 2012
by christopher
filed under Announcements
Comments: 32

Do you have Robot Ears?

That’s the question we asked a few weeks ago. As we explained at the time, we’ve been training an army of music-listening robots (or “audio analysis algorithms” if you want to get technical!) to try to better understand the music you scrobble.

The idea is that by automatically analysing tracks we’ll be able to add helpful tags, improve recommendations, and provide novel new ways to explore your collection and discover new music.

We asked for your help to evaluate our robots. We thought they were doing a pretty good job in most cases, but there was definite room for improvement, and like any good scientists we were looking for some large-scale evidence (i.e. lots of feedback from real people) rather than just going on our own impressions. So we built the Robot Ears app, which asks humans to classify tracks and then compares their answers with what our “robots” said about the tracks.

Click to try the latest Robot Ears

Now, six weeks later, we’ve gathered over 30,000 judgements on 600 or so music tracks and we’re ready to share some initial results.

*Spoiler alert*
The robots did pretty well – but we’re not satisfied yet!

We’re kicking off another round of experiments, to learn even more about a wider variety of music tracks. The more people we can get to take part the better, so whether you’ve tried it already or not, please visit Robot Ears - and help the robots to keep improving!

Want to know what we (and the robots) have learned so far? Read on for the details…

The results so far

We were aiming to answer two different questions with this experiment:

  1. Are the labels we’re trying to apply to tracks meaningful?
  2. Do our robots reliably apply the right labels to a track?

The first question is the more fundamental – if we’re using labels that don’t mean anything to humans, it doesn’t much matter what our robots say! To answer this question we looked at the average agreement between humans for each track. If humans reliably agree with each other we can conclude the label has a clear meaning, and it’s worth trying to get our robots to replicate those judgements.

We were looking at 15 different audio “features”. Each feature describes a particular aspect of music, such as:

  • Speed
  • Rhythmic Regularity
  • Noisiness
  • “Punchiness”


The features have a number of categories, for example “Speed” can be fast, slow or midtempo. Each time a human used the Robot Ears app, they were asked to sort tracks into the appropriate categories for a particular feature. Meanwhile our robots were asked to do the same. At the end of a turn, we showed you how your answers matched up with the robot’s:

After we’d gathered about 16,000 human judgements we took a look at the results so far. There were a few interesting learning points about which features were doing better or worse. Based on this we adjusted some of the labels, threw some out completely, tweaked our robot algorithms and started a new experiment. Another 14,000 judgements later we reached the following results:

We can see that the levels of human agreement vary quite a lot across the features, with activity, percussiveness, smoothness and energy seeming to be the most reliable. By the end of the second round experiment there were just a handful of features (rhythmic regularity, sound creativity and harmonic creativity) we still weren’t convinced by. We aren’t giving up on these, but it seems like we don’t quite have the right words to describe them yet!

Speaking of which – we had a side question in each test: “would you change any of these labels?”

We got some interesting suggestions. Some were helpful. Some… less so! For example:

  • Noisiness: noisy → distorted
  • Energy: soft → calm
  • Energy: energetic → powerful, emotional, EXTREME HIGH
  • Danceability: dance → strong beat, rhythmic
  • Danceability: atmospheric → ambient, spacious
  • Harmonic Creativity: little harmonic interest → boring
  • Tempo: steady → great workout stuff
  • Punchiness: punchy → wide dynamic range
  • Sound Creativity: consistent → simple
  • Sound Creativity: varied → upfront texture
  • Smoothness: uneven → turbulent

One user also suggested renaming the Not Sure box “I’m an idiot”!

So what about the second question: “How did our robots do?” Well, again, there was quite a range of performance across the different types of feature:

As you can see there are a few features our robots are particularly good at, and a few where their ears definitely need to be cleaned out!

What’s next?

Doing these first two experiments allowed us to refine our terminology and the way our robots classify tracks. Naturally, being built in London, our robots are currently very excited about the Olympics. In that spirit we’re going to award them a bronze medal for progress so far:

We’ve already started to work on some new functionality based on the more reliable features. Here’s a sneak preview of what Mark and Graham came up with at a recent internal hack day:

There’s a lot more work still to do though, and so we’re kicking off a third round of experiments. The key difference is we’ll be using a lot more music tracks this time, and hopefully getting a lot more user feedback.

Whether you’ve taken part already or not, we’d love it if you’d come visit Robot Ears and help our robots go for the Gold!

Originals

Tuesday, 19 June 2012
by chrisp
filed under Announcements
Comments: 3

As Starship once sang, “We built this original programming and video content hub on rock and roll!” (Something like that, wasn’t it?) Well, today you’ll notice a new top-level navigation section dedicated to exactly that. Alongside Music, Radio, Events, Charts and Community, look for a fabulous new content feature we’re calling Originals, which aims to shine a light on the wealth of intimate artist experiences we’ve generated over the years, and continue to create every day.

For some years now we’ve strived to find new ways of bringing you closer to the music by means of video and editorial content – from the hundreds of sessions we’ve filmed in our New York studios, to our intimate Live in London shows and festivals around the world. Up to now, though, we haven’t made it easy for you to find that content on the site, and we felt the best way to fix that was to create a new section where all this stuff would live. And lo, Originals was born!

There you’ll find sessions, studio performances and interviews with established names such as Coldplay, Noel Gallagher, The Shins, The Temper Trap, Norah Jones, Snow Patrol and Wilco, as well as beautifully shot films from the hottest emerging talent around the world. We’re really proud of our sessions, so we’re launching Originals with two very special new recordings.

In New York, where the team are old hands at capturing all the best music on camera, we were joined for an exclusive and intimate session by Keane. Watch them performing songs old and new (as well as answering your questions) here. The London team celebrates the launch of Originals by showcasing phenomenal new UK indie rockers Bastille, who treated us to a sublime session aboard Lightship 95, a floating recording studio on the River Thames. Watch it here.

It’s not all videos and live performances though. On Originals we’ve also gathered together some of the editorial pieces you’re used to reading elsewhere, including the Hype Chart, Tag of the Week and New Releases blogs that were previously dotted across our various platforms. In short, Originals is the place to watch, read about and listen to the hottest – and biggest – new music happenings out there! Please take a look around – we really hope you like the latest addition to the stable!

Chris Price
Head of Music

Do you have robot ears?

Thursday, 14 June 2012
filed under Announcements
Comments: 11

UPDATE: Thanks for all the feedback so far! We’ve just launched a new version based on what the robots have learned: Robot Ears

Help! We need somebody.

Actually, we need a whole bunch of somebodies to help us evaluate some new ways to tag tracks.

The research team here have been investigating various interesting properties of music and trying to figure out how to get machines to recognise them. Last year we looked at tempo measurement and how the rhythm, timbre and harmony of a track change over time. Today we’re asking you to help with a third task, looking at some more unusual musical properties…

Some aspects of music, like tempo and key, are pretty well defined. Take a song with a strong beat and most people will tap their foot along with it in the same way. The key of a song is generally clear too, based on its melody and harmonies. There are usually ‘correct’ answers for “What’s the tempo of this song in beats per minute?” and “What key is this song in?”

Other musical properties are trickier. Does this song sound “punchy” and “energetic”? Would you say it was “percussive”? Or “smooth”? Is the beat “metronomic” or “irregular”, and in either case could you “dance” to it? Which tracks are “sad” and which are “happy”? How “aggressive” are one artist’s songs, and are another artist’s songs more “mellow”?

We think we’ve made some progress with answering these kinds of questions automatically, and now it’s time to get some real human listeners to weigh in and tell our robots what they’re getting right and wrong. Because there aren’t clear-cut answers to these kinds of question it becomes doubly important to compare machine answers against human judgement – and we need as many humans involved as possible!

We’ve built a Robot Ears app where you can help with this while hopefully having some fun testing out your own robot ears. You listen to a few tracks, and then give your judgement as to which of a number of categories they fit (if any). We’ll check that against what our robots said, and in the process find out where there’s room for improvement in their judgements.

We’re also interested to know whether you think these tags are meaningful to begin with. You can hit the icon next to a category name to suggest alternatives.

The more humans we can pit against our robots and the more rounds you complete, the better we’ll be able to automatically analyse tracks in future – which in turn will help us provide you with more flexible and interesting radio and music recommendations!

So to paraphrase a famous princess: Help us, users. You’re our only hope.

Come help teach our robots a thing or two

An update on Password Security

Friday, 8 June 2012
by Matthew
filed under About Us and Announcements
Comments: 63

Hello from HQ,

Earlier this week, we received an email letting us know that a text file containing cryptographic strings for passwords (known as “hashes”) that might be connected to our users had been posted to a password cracking forum. We immediately checked the file against our user database, and while this review continues, we felt it was important enough to act on.

We immediately implemented a number of key security changes around user data, and we chose to be cautious and alert users. We recommend that users change their password here and on any other sites where they use a similar password. All passwords updated since yesterday afternoon have been secured with a more rigorous storage method.

To reach as many users as quickly as possible, we are sending these alerts via social media, direct email and on the site itself.

We take the security of our users very seriously, and going forward we want you to know we’re redoubling our efforts to protect our users’ data.

Thanks for your support,

The Team

Friday, 2 March 2012
by Marcus
filed under Code and Announcements
Comments: 6

The open source tool balance is an essential part of our service infrastructure. Multiple instances of balance are running on each and every web server node, on the various production back end servers, and also on our development machines. So at any given time there are probably thousands of instances running simultaneously on our machines.

What does it do?

balance is a so-called load balancer. It is generally used as a proxy to distribute a large number of incoming requests to a group of servers. In other words it is responsible for balancing the load between all the servers in a group. Quite often, load balancers are dedicated hardware products. However, balance is a software load balancer, which means it can just run as an additional program on any server.

In addition to load balancing, balance also supports a scheme called failover. This means you can define a second group of servers, and balance will route requests to the second group if all servers in the first group fail. This failover scheme is used by most of our backend services. We usually have a main server and a backup server that kicks in once the main server fails.

End of story?

Certainly not! There are some subtleties in the use of balance that have given us headaches in the past. By far the biggest problem is that there are cases when failover just doesn’t work right in our environment. So here’s a real example…

One day we had to take down the main server for one of our backend services to replace a hard drive. The backup server was running fine and we relied on balance to take care of routing all requests through to the backup box. Unfortunately, shortly after the main server went down, we noticed that most requests to the service failed.

What had happened? balance has a configurable connect timeout, i.e. it tries to connect to a service and then waits for a certain amount of time until it figures out that it can’t connect. If the server machine is running, the connect will fail almost instantly if the service itself is unavailable. However, if the server is down, it’ll wait until the connect timeout has elapsed. So in our case, balance was trying to connect to the main server (which was down) and then waiting for 5 seconds before attempting to connect to the backup server. In the meantime, the client had already given up (it was using a much smaller timeout). balance would only notice that the client had given up by the time it had established the connection to the backup server. The next time the client tried to connect, the same thing would happen all over again.

But someone else would certainly have had the same problem before?

I’m quite sure of that. And I guess that’s what caused the autodisable feature to be added to balance. When this feature is being used, balance will automatically disable servers that it fails to connect to. The downside, though, is that there’s no way to automatically enable servers again. And manually enabling them isn’t really an option given the number of instances of balance we’re running and given that it could cause all servers to be permanently disabled in case of, for example, temporary network failure.

So what now?

We had to face the fact that in theory we had a really nice redundancy scheme, but it could fail quite miserably in practice. So I began to look around for alternatives to balance and found a couple of other open source load balancers. Sadly, all of them had either been abandoned by their authors, failed to build out of the box or just didn’t fulfill our requirements.

balance was actually just what we needed. The only thing it was missing was support for monitoring all back end connections and dynamically disabling and enabling them as they fail or pass the monitoring checks.

So eventually I started looking into adding exactly that functionality to balance.

Implementing monitoring for balance was relatively straightforward, even though it made me aware of how much I had gotten used to developing software in C++. With balance being written in pure C, I was really missing exception handling and the C++ standard library.

The amount of code changes was massive considering the rather small code base of balance. As of now, more than a thousand lines of code have changed and another thousand lines have been added. So we decided to fork the original project and rebrand it.

It took about a week to refactor the existing code and finally add the monitoring feature. Along the way of adding monitoring, quite a few bugs have been fixed as well (for details, just have a look at the commit log if you’re interested) and I hope these fixes make up for all the bugs that I’ve undoubtedly introduced by adding loads of new code.

The code has since been reviewed by the MIR team here, and is now available.

If you have an application for it, please give it a try and let us know what you think and what you like or dislike about it!

Building Best of 2011

Monday, 16 January 2012
by omar
filed under Announcements and Trends and Data
Comments: 6

Earlier this week we released our Best of 2011 charts. 2011 saw you spend over 71 thousand years listening to music and scrobble more than 11 billion tracks. We’ve been churning through all of this data to find out what truly defined 2011.

New for this year is the discoveries chart. We went back to the beginning of time (well, to 2003) and checked every one of your 61 billion scrobbles to work out which artists were first scrobbled in 2011.

We’ve also broken these charts down by country and tag. Whatever you’re interested in, from experimental music in Mexico, the latest innovations in Finnish pop, or just what’s Big in Japan, you now have a means to browse them.

Following on from last year we are providing you with a data download. Musicbrainz IDs are now included in this data (where we have them) as part of our continued collaboration with Musicbrainz.

Producing the ‘Best of’ Charts is a very different process to our usual weekly charts. What follows is an overview of the process. In particular I’ll explain how we determined the new albums and discoveries of 2011, and how we turned these into the charts you see on the site.

New Albums

Our top artists are calculated based on albums released in 2011. One issue with albums is that they are typically released many times in many locations. To get around this we used a new version of the Musicbrainz database to find track listings for albums that were first released in 2011.

Of course, that isn’t the end of the story. Our library doesn’t always match up with Musicbrainz. Such issues need to be handled when we align album information from Musicbrainz with our own scrobble data. It’s one of the reasons we’re improving our Musicbrainz ID coverage.

New Discoveries

We label an artist as a new discovery if they were first scrobbled in 2011. As I mentioned previously, this can only be decided by checking through all of the scrobbles we have ever received.

This task is complicated by misspelled artist names, collaborations, and remixes. A nice example is Britney Spears’ collaboration with Sabi. Britney is certainly not a new discovery, even though this incorrectly-titled artist was first scrobbled in 2011. We avoid this by mapping artist names to their correct versions, before sorting through their scrobbles.

Our Human Computer

Our final step was to send the charts to our secret weapon: the music team. They pored over thousands of the top artists of 2011, matching them against their own databases and removing/adding artists that were incorrect or missing.

Data Download

This year we have two data downloads: the first – like last year’s – contains the top artists and albums of 2011; the second contains only the top artists, because they do not all have associated albums. In the data you’ll find all of the artists and albums from Best of 2011, along with play and listener counts, top tags, and image links.

In both cases we have added Musicbrainz IDs to the data. You can use these on our own API, BBC Music, and The Guardian. Use the data as you please; we look forward to seeing what you come up with!

2011's New Discoveries

Friday, 13 January 2012
by matts
filed under Announcements and Trends and Data
Comments: 3

Every year when Best of rolls around, we look at the chart to see if our data could have predicted who’d make it big. While there are a few in there we saw coming (*cough* Adele *cough*), the reality is that every year things get harder and harder to foresee.

That’s one of the reasons we launched our New Discoveries chart; to show off just how diverse your year in music really is.

Sure, it’s full of credible indie acts; Purity Ring, Death Grips and Work Drugs all did fairly well, while Wugazi – an album of mash-ups between Wu Tang Clan and Fugazi – made it to 13th place after getting huge buzz over the summer.

Someone we might have expected big things from was former Oasis frontman Noel Gallagher. He made it to number three on the New Discoveries chart, but only to 69 on the overall UK chart. That’s not quite as high as we might have expected. Similarly, Gaslight Anthem side project The Horrible Crowes made it to number 12 on the New Discoveries chart, largely off the back of Gaslight Anthem fans trying it out.

Further down the list GLaDOS makes an appearance. The Aperture Science Psychoacoustics Laboratory made it to number 7 on the chart after Valve released several albums worth of material from Portal 2. Soundtracks often jump to the top of the Hype Chart after hardcore fans flock to new releases, and while none of the artists on Drive were eligible for the New Discoveries chart they all got a huge boost when that came out.

Up until the last minute it looked as if the New Discoveries chart would be topped by none other than Rebecca Black. The “Friday” singer was number one on the chart right up until December, but while her video has collected some 17.5 million views on YouTube, our music community only played the song 320,000 times between them.

Our first New Discoveries list is actually topped by Youth Lagoon, the project of Boise, ID native Trevor Powers. His dream-like album shot up the Hype Chart in autumn, and appeared to become a fixture throughout the winter for many listeners. He also creeps into the overall US top chart at number 100.

For a taster of what these artists have to offer, listen to our New Discoveries playlist on the recently launched Discover app.

In case you missed it yesterday then our design team played with an early cut of our New Discoveries chart to create this neat little poster as a bit of a bonus. Don’t forget that you can also filter the chart to find the New Discoveries that best reflect your tastes using the Country and Tag options.

Here’s to another unpredictable year in music!

Best of 2011 is here!

Thursday, 12 January 2012
by Sarah Ransome
filed under Announcements and Trends and Data
Comments: 10

Best of 2011 is a reflection of the year in music, highlighting the most popular and hottest new artists all based on the tracks you’ve been scrobbling.

This year’s ‘Top Artists’ chart was compiled by looking at scrobbles for albums released between 1st January and 31st December 2011. As in previous years, we aren’t counting live albums, greatest hits collections, EPs and singles. You might not be all that surprised when you see who’s sat at number one, but dig a little deeper using our lovely new Country and Tag filtering options to find the No. 1 which suits you!

Another new feature for 2011 we’re really excited about is our ‘Top New Discoveries’ chart. This was compiled by looking at the number of listeners for artists who had their first scrobble between 1st December 2010 and 31st December 2011. Discovering new music is core to the experience, so we wanted to highlight the artists who caught your attention this year and who you should keep an eye on during 2012. Again, use the filtering options to personalise your view.

Additionally, we took a look at the Year In Music to see what our data had to say about 2011. We hope you’re as fascinated as we were by the impact of music news on your scrobbles.

For developers, we have provided the chart data as TSV and XML files. Download and start hacking, we’d love to hear what you come up with.

Finally, as a little easter egg, we’ve created a commemorative poster of this year’s New Discoveries chart. The eagle-eyed amongst you will notice that it’s slightly different to what you see online; we made this before taking all of December’s data into account. You can download the poster here.