What's cooking in the Last.fm playlisting lab

Thursday, 27 September 2012
by Mark Levy
filed under About Us
Comments: 47

In the Music Information Retrieval team here at Last.fm we’re currently developing a new generation of smart playlisting engines, and we’d like to take the chance to give you a sneak preview of what they can do, as well as to explain a bit more about playlisting services in general.

You can think of playlisting engines as falling into two categories: one repeatedly chooses which track to stream next when you’re listening to an internet radio station like any of Last.fm’s radio stations; the other selects a single set of tracks from a collection all in one go, like iTunes Genius or Google Music’s Instant Mix. While in theory these do similar jobs, as every good scientist knows, the difference between theory and practice is greater in practice than it is in theory, and in practice the requirements for these two types of playlists can be very different. Our new-generation service is designed to provide instant playlists from collections of any size, and you can try a demo right now, or read on to find out more.

Last.fm instant playlisting

We’ll talk a bit more about radio playlisting in a separate post, but one of the main characteristics required from the other type of engine is the ability to choose from music collections of wildly varying sizes. Our existing engines have mostly been targeted at very large commercial catalogues containing millions of recordings – you can see them at work in the Last.fm Spotify app (start playing any track, go to the Now Playing tab in the app and click “Similar Tracks Playlist”).

The new generation of engines is designed to do just as good a job when choosing tracks from small personal collections. In practice that means we can’t rely on any single type of information to tell us which other tracks might be a good match for a particular playlist. Luckily, thanks to your scrobbles and tags, and a bit of audio analysis and machine learning magic on our side, we have three independent types of information linking artists and tracks. Another new feature is the ability to generate playlists based on mood and other musical properties. Finally, when playlisting from personal collections we’ve been able to experiment with ways of choosing the sequence of tracks that aren’t restricted by licensing rules.

But we know we still have a huge amount to learn before any machine can approach the skill of a human DJ, so we’ve built a simple demo to let you try out the services. Please let us know how you think we’re doing and we’ll incorporate your feedback into our final version of the new engines. Thanks for listening!

Genre Timelines and More Distinctive Lyrics

Thursday, 6 September 2012
by Janni Kovacs
filed under About Us
Comments: 10

For the past five months I have had the honour of being the latest data team intern at Last.fm, building software and trying to make sense of what people now call Big Data™. In particular, during my time here I looked at biographical data for artists, i.e. the place and the year a band was formed. This data is generated by Last.fm’s users and attached to artists’ wiki pages (see the factbox on the right of the page). It’s available for a good number of artists, so I wondered what kinds of analyses we could do with it.

When did this genre take off?

One thing I was looking for in the data was empirical evidence of when certain genres became popular. Since we have a massive amount of user tag data available, we can easily correlate tags and years and measure the “popularity” of a genre by counting the number of artists formed in a specific year. Even though this data is skewed a bit towards the more popular artists, you can definitely see spikes of popularity for certain genres where you’d expect them:

Click for a larger version

Props to our users getting punk and post-punk in the right order!

If you’re a fan of metal music maybe the following chart, showing the progression of metal subgenres from hard rock to death metal, will be of interest:

Click for a larger version
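The counting behind these timelines is straightforward to sketch. The Artist record and field names below are hypothetical (the real analysis runs over our full tag dataset rather than an in-memory list), but the idea is just a tally of formation years among artists carrying a given tag:

```cpp
#include <map>
#include <set>
#include <string>
#include <vector>

// Hypothetical artist record: the user tags applied to the artist plus the
// "year formed" field from the wiki factbox.
struct Artist {
    std::set<std::string> tags;
    int year_formed;
};

// Count how many artists tagged with `genre` were formed in each year.
// An illustrative sketch of the tag/year correlation described above.
std::map<int, int> formations_per_year(const std::vector<Artist>& artists,
                                       const std::string& genre) {
    std::map<int, int> counts;
    for (const auto& artist : artists)
        if (artist.tags.count(genre))
            ++counts[artist.year_formed];
    return counts;
}
```

Plotting those per-year counts for each genre tag gives a timeline like the charts above.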

Distinctive lyrics for cities

Andrew did a fantastic job a while ago generating distinctive lyrics for certain genres. I was wondering if we could generate distinctive lyrics for cities as well. By taking about 75,000 song lyrics, matching them to artists’ location metadata from our wikis and applying a simple term-frequency function to each word, we can generate a list of words that occur more often in some cities than in others. Please take these results with a grain of salt, as they are skewed by several factors, especially towards the more popular artists:

Click to open full images in a new window.

Warning: they contain lyrics you may find offensive. Not safe for work.

London

Atlanta

Los Angeles

New York

Seattle

I really like that “sorry” is in London’s top 10.
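For the curious, the word-ranking idea can be sketched in a few lines. The exact weighting used for the charts above isn’t reproduced here; this hypothetical version simply scores each word by the ratio of its frequency in one city’s lyrics to its frequency in the whole corpus:

```cpp
#include <algorithm>
#include <map>
#include <string>
#include <utility>
#include <vector>

// Score each word by how much more frequent it is in one city's lyrics than
// in the corpus overall, and sort the most distinctive words first. Assumes
// every city word also appears in the global counts (true by construction
// when the city's lyrics are a subset of the corpus).
std::vector<std::pair<std::string, double>> distinctive_words(
    const std::map<std::string, int>& city_counts, int city_total,
    const std::map<std::string, int>& global_counts, int global_total) {
    std::vector<std::pair<std::string, double>> scored;
    for (const auto& entry : city_counts) {
        double city_freq = double(entry.second) / city_total;
        double global_freq =
            double(global_counts.at(entry.first)) / global_total;
        scored.emplace_back(entry.first, city_freq / global_freq);
    }
    std::sort(scored.begin(), scored.end(),
              [](const auto& a, const auto& b) { return a.second > b.second; });
    return scored;
}
```

A real version would also smooth the counts and filter very rare words, otherwise one-off words from a single popular artist dominate the rankings.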

In internships you’ll often find that you’re given pointless work just to keep you occupied. That’s not the case at Last.fm: you’ll work on in-production code and get plenty of time to pursue whatever interests you. So even though the ball pit is no more (turns out they have to be cleaned once in a while), if you enjoy working on backend software and exploring immense data sets, this is the right place to do it.

How are you feeling today?

Thursday, 30 August 2012
by Mark Levy
filed under Announcements
Comments: 23

Just over a year ago the Music Information Retrieval team here at Last.fm embarked on a project to see how well we might be able to identify musical characteristics of songs by a process of automatic analysis. Our aim was to fill in some of the gaps left by our existing tagging system.

Last.fm tags make up an astonishing encyclopedia of descriptions, and are a testament to the generosity, knowledge and enthusiasm of our community of users. Together with scrobbles, tags help us power recommendations, radio, and many of the most interesting services that we offer. Although you can make up any tag you like, we noticed that in practice most people use tags that describe genre, or closely related things such as the era or nationality of an artist. On the other hand tags rarely describe the sound of songs in musical terms, and they talk about subjective things like mood less often than you might imagine, given the close connection that most of us experience between music and our feelings about life.

Last.fm mood report

The potential benefits of having a new and separate strand of information about music were obvious, but the big challenge for this project was that existing methods of automatic music tagging simply didn’t work very well. Nine blog posts, two published research papers, three public and numerous internal demos, several hack days, and a great many man hours later, we think we’re starting to get somewhere, and we’d like to show you some results.

As a first taster we’ve put together a visualization of your musical mood over the past 120 days, based on automatically computed machine tags for the tracks which you’ve scrobbled during that time. While individual tags are still far from perfectly accurate, we think that when taken together over all your listening week by week they still paint an interesting picture – one that stands a chance of reflecting real changes in your musical life. Enjoy, and please let us know if you find them interesting!
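The aggregation behind a chart like this can be sketched simply. The Scrobble record and the single per-track mood score in [0,1] are assumptions for illustration only (the real machine tags are richer than one number):

```cpp
#include <map>
#include <vector>

// Hypothetical scrobble: the day it happened (0..119 within the window) and
// a single machine-tag mood score in [0,1] for the scrobbled track.
struct Scrobble {
    int day;
    double mood_score;
};

// Average the mood score over each 7-day week of the listening window,
// keyed by week index. An illustrative aggregation, not the actual
// Last.fm computation.
std::map<int, double> weekly_mood(const std::vector<Scrobble>& scrobbles) {
    std::map<int, double> sum;
    std::map<int, int> count;
    for (const auto& s : scrobbles) {
        int week = s.day / 7;
        sum[week] += s.mood_score;
        ++count[week];
    }
    for (auto& entry : sum)
        entry.second /= count[entry.first];
    return sum;
}
```

Averaging over a whole week of scrobbles is what lets noisy per-track tags still paint a believable picture.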

last.json

Wednesday, 15 August 2012
by Sven Over
filed under Code and Announcements
Comments: 3

Our latest offering of open source software from the Last.fm headquarters is last.json, a JSON library for C++, that you can now find on GitHub. If you are coding in C++, need to work with JSON data and haven’t found a library that you like, do check it out.

We at Last.fm benefit a lot from open source software. Almost all our servers run Linux, our main database system is PostgreSQL, and our framework for big data analysis is based on Hadoop, to name just a few examples. Of course, not all of the software needed to run Last.fm is freely available, and we have had to write lots of code ourselves. When a building block is missing from the open source universe and we have to carve it ourselves, and we think our solution is good and general enough to be useful to other people, we like to give back to the community and release it as free and open source software.

JSON has become hugely popular as a format for data exchange in the past few years. The name stands for “JavaScript Object Notation”, and it is really just the subset of JavaScript’s syntax that is needed to describe data. A valid bit of JSON is either a number (say, 12 or -5.3), a truth value (true or false), a string literal ("hello world!"), the special value null (a placeholder for missing or unassigned data), or one of two compound types: a list of JSON values, or a mapping of property names to JSON values. These last two data types allow you to express almost any data using JSON. A list could be [1,2,3] or [99, "bottles of beer"]. It is literally a list of data elements, which can be of identical type (like the all-numbers list in the first example) or of different types (like a number and some text in the second example). You can add structure to your data using mappings: { "object": "bottle of beer", "quantity": 99 }. A mapping is basically a set of key-value pairs, where the key is a bit of text ("object" and "quantity" in the example) and the value can be any of the JSON data types.

Now you know all the rules of JSON. The reason it is so versatile is that you can nest these data types: any element of a list, or any value in a mapping, can itself be a list or a mapping – or any of the primitive data types. This is perfectly valid JSON:

{
  "artist": "White Denim",
  "similar artists": ["Field Music", "Unknown Mortal Orchestra", "Pond"],
  "toptracks": [
    { "title": "Street Joy", "scrobbles_last_week": 739 },
    { "title": "It's Him!", "scrobbles_last_week": 473 },
    { "title": "Darlene", "scrobbles_last_week": 386 }
  ]
}

You can imagine how this can be used to describe virtually any data structure. It is much simpler than XML and many other data formats. And the good thing is that it’s not only computers that can read JSON – humans can, too! As you can see in the example, not only can you read the data, you immediately understand what it is about. More often than not, JSON data is self-explanatory.

So, as I said before, JSON has become very popular for data exchange. It is a breeze to use in JavaScript (not surprising, because any JSON is also valid JavaScript) and in many other programming languages like Python, Perl or Ruby. If you are familiar with any of these languages, you can probably see that they have data types very similar to the JSON types, which makes it easy to represent and work with JSON data in them.

Unfortunately, it is less easy in C++. C++ is statically typed, which means that you always declare a variable with a specific type. It can be a number or a text string if you want, but you have to decide which at the time you are writing your programme code. There are standard types for lists and mappings too, but those require their data members to be of identical type. So you can have a list of numbers, or a list of strings, but not a list of items that could individually be a number or a string.
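To make the limitation concrete, here is a sketch using C++17’s std::variant (which postdates this post; last.json uses its own internal representation) of a single element type that is “a number or a string”, which is exactly what heterogeneous JSON lists need:

```cpp
#include <string>
#include <variant>
#include <vector>

// A plain std::vector must hold one element type, so the JSON list
// [99, "bottles of beer"] has no direct standard-container equivalent.
// std::variant lets a single element type hold either alternative; a full
// JSON value type would extend the idea to booleans, null, nested lists
// and mappings.
using JsonScalar = std::variant<double, std::string>;
using JsonList = std::vector<JsonScalar>;

// Report which JSON type an element holds, much like checking a
// lastjson::value's type at runtime before extracting it.
std::string json_type_name(const JsonScalar& value) {
    return std::holds_alternative<double>(value) ? "number" : "string";
}
```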

We use C++ for many of our backend data services because it is fast and not resource-hungry. If you have a good level of understanding, you can do great things in C++, and we love to use it for certain tasks. When we first wanted to use JSON for data exchange in our C++ programmes, we looked for a good library that would make it easy to juggle JSON data, but we couldn’t find one that really satisfied our needs. So we spent some time writing our own library. And because we think it’s not too bad, and other people might have the same needs, we have now open sourced it under the MIT license, which basically means that you can use it freely in your own projects, but we accept no liability for bugs or anything else that could go wrong with it.

So, how do you work with JSON using last.json? The library defines a datatype lastjson::value which can hold any JSON data. You can check at runtime what data type it actually holds, and then convert it (or parts of it) to standard C++ types. The best practice, however, is to use it much like you would in the scripting languages I mentioned earlier: you just access elements of lists or mappings as the data types you expect them to be. If the JSON data does not have the structure you assumed, the last.json library will throw an exception that you can catch. Imagine you have a variable std::string json_data containing the JSON fragment from the example above (the one about White Denim):

lastjson::value data = lastjson::parse(json_data);

This parses the JSON string into the lastjson::value representation. Here are a few things you can do with the parsed JSON data:

try
{
  std::cout
    << "Artist name: "
    << data["artist"].get_string()
    << std::endl
    << "Second similar artist: "
    << data["similar artists"][1].get_string()
    << std::endl
    << "Top track last week: "
    << data["toptracks"][0]["title"].get_string()
    << std::endl
    << "... with "
    << data["toptracks"][0]["scrobbles_last_week"].get_int()
    << " scrobbles."
    << std::endl;
}
catch (lastjson::json_error const & e)
{
  std::cerr
    << "Error processing JSON data: "
    << e.what()
    << std::endl;
}

last.json tries to make working with the JSON data as easy as in scripting languages. This was just an example, and last.json has many more cool features. So if C++ is your language of choice, go and check it out now.

Design Changes to Last.fm

Friday, 3 August 2012
by Simon Moran
filed under Announcements and Design
Comments: 49

For the last few months, we’ve been working on some design improvements, and after a couple of weeks in beta, we’re ready for our first full release. We’re pretty excited, and we wanted to share some of the details of the new design with you.

What’s new?

On almost every page on the site, we’ve moved the secondary navigation menu from the left side of the page to the upper right. This gives you a wider page, with more space for what matters: the content. On pages where there are a lot of items in the navigation menu, we’ve grouped the less frequently-used items into a small dropdown menu on the right.

Old navigation:

New navigation:

We’ve also redesigned Artist, Album and Track pages from scratch, and rebuilt the page templates completely. Have a look:

An Artist page: http://www.last.fm/music/The+Maccabees
An Album page: http://www.last.fm/music/Rihanna/Loud
A Track page: http://www.last.fm/music/Micachu/_/Golden+Phone

There are three main aspects to the changes:

Tidier, more rational layout

These pages are very rich in information, and as the site has developed we’ve added more and more content to them. Our user research indicated that it was time to step back and take a fresh look at how the pages were laid out.

The new design groups actions and information together logically so that it’s easier to locate things on the page, and it’s laid out hierarchically, with the things most people use most often nearer the top of the page. We’ve also removed some less-important things from the main page, though most content is still accessible through the menu at the top of the page.

Fresher visual design

We regularly go out and talk to people about Last.fm, and ask how we can improve things. In response to user feedback, we’ve updated the visual design of the page with more emphasis on images, more legible text, and cleaner, simpler graphics.

New page templates

We’ve built brand new page templates, which are more flexible and dynamic, so that your pages load faster and you spend less time waiting for pages to refresh. We’ve only just started to explore the possibilities of the new templates, so expect more optimisations and speed improvements in coming weeks.

We’ve also taken the first steps towards “responsive design” – which means pages working just as well on your mobile and tablet as they do on a full-size web browser. There’s still more work to do before we can release this, so stay tuned!

What’s next?

We’re going to continue updating the site gradually, over the coming weeks and months. We’re also going to address the feedback we’ve already had, from the beta release, with further tweaks and improvements to get the pages just how you want them.

Thanks for reading! We’d love to hear what you think of the new designs, either in the comments here or in our forums.

Advanced Robotics

Tuesday, 24 July 2012
by Christopher Sutton
filed under Announcements
Comments: 32

Do you have Robot Ears?

That’s the question we asked a few weeks ago. As we explained at the time, we’ve been training an army of music-listening robots (or “audio analysis algorithms” if you want to get technical!) to try to better understand the music you scrobble.

The idea is that by automatically analysing tracks we’ll be able to add helpful tags, improve recommendations, and provide novel ways to explore your collection and discover new music.

We asked for your help to evaluate our robots. We thought they were doing a pretty good job in most cases, but there was definite room for improvement, and like any good scientists we were looking for some large-scale evidence (i.e. lots of feedback from real people) rather than just going on our own impressions. So we built the Robot Ears app, which asks humans to classify tracks and then compares their answers with what our “robots” said about the tracks.


Click to try the latest Robot Ears

Now, six weeks later, we’ve gathered over 30,000 judgements on 600 or so music tracks and we’re ready to share some initial results.

*Spoiler alert*
The robots did pretty well – but we’re not satisfied yet!

We’re kicking off another round of experiments, to learn even more about a wider variety of music tracks. The more people we can get to take part the better, so whether you’ve tried it already or not, please visit Robot Ears - and help the robots to keep improving!

Want to know what we (and the robots) have learned so far? Read on for the details…


The results so far

We were aiming to answer two different questions with this experiment:

  1. Are the labels we’re trying to apply to tracks meaningful?
  2. Do our robots reliably apply the right labels to a track?

The first question is the more fundamental – if we’re using labels that don’t mean anything to humans, it doesn’t much matter what our robots say! To answer this question we looked at the average agreement between humans for each track. If humans reliably agree with each other we can conclude the label has a clear meaning, and it’s worth trying to get our robots to replicate those judgements.
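One simple way to quantify “humans reliably agree with each other” is the fraction of rater pairs that chose the same category for a track. The post doesn’t spell out the exact statistic behind our numbers, so treat this as an illustrative sketch:

```cpp
#include <string>
#include <vector>

// Fraction of pairs of human raters who put a track in the same category.
// 1.0 means everyone agreed; values near the chance level suggest the
// label doesn't have a clear meaning for that track.
double pairwise_agreement(const std::vector<std::string>& labels) {
    int agree = 0, total = 0;
    for (std::size_t i = 0; i < labels.size(); ++i)
        for (std::size_t j = i + 1; j < labels.size(); ++j) {
            ++total;
            if (labels[i] == labels[j])
                ++agree;
        }
    return total ? double(agree) / total : 0.0;
}
```

Averaging this per-track score over all tracks for a feature gives a single agreement number per feature, which is the kind of comparison shown in the charts below.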

We were looking at 15 different audio “features”. Each feature describes a particular aspect of music, such as:

  • Speed
  • Rhythmic Regularity
  • Noisiness
  • “Punchiness”

etc.

The features have a number of categories, for example “Speed” can be fast, slow or midtempo. Each time a human used the Robot Ears app, they were asked to sort tracks into the appropriate categories for a particular feature. Meanwhile our robots were asked to do the same. At the end of a turn, we showed you how your answers matched up with the robot’s:

After we’d gathered about 16,000 human judgements we took a look at the results so far. There were a few interesting learning points about which features were doing better or worse. Based on this we adjusted some of the labels, threw some out completely, tweaked our robot algorithms and started a new experiment. Another 14,000 judgements later we reached the following results:

We can see that the levels of human agreement vary quite a lot across the features, with activity, percussiveness, smoothness and energy seeming to be the most reliable. By the end of the second round experiment there were just a handful of features (rhythmic regularity, sound creativity and harmonic creativity) we still weren’t convinced by. We aren’t giving up on these, but it seems like we don’t quite have the right words to describe them yet!

Speaking of which – we had a side question in each test: “would you change any of these labels?”

We got some interesting suggestions. Some were helpful. Some… less so! For example:

  • Noisiness: noisy → distorted
  • Energy: soft → calm
  • Energy: energetic → powerful, emotional, EXTREME HIGH
  • Danceability: dance → strong beat, rhythmic
  • Danceability: atmospheric → ambient, spacious
  • Harmonic Creativity: little harmonic interest → boring
  • Tempo: steady → great workout stuff
  • Punchiness: punchy → wide dynamic range
  • Sound Creativity: consistent → simple
  • Sound Creativity: varied → upfront texture
  • Smoothness: uneven → turbulent

One user also suggested renaming the Not Sure box “I’m an idiot”!

So what about the second question: “How did our robots do?” Well, again, there was quite a range of performance across the different types of feature:

As you can see there are a few features our robots are particularly good at, and a few where their ears definitely need to be cleaned out!


What’s next?

Doing these first two experiments allowed us to refine our terminology and the way our robots classify tracks. Naturally, being built in London, our robots are currently very excited about the Olympics. In that spirit we’re going to award them a bronze medal for progress so far:

We’ve already started to work on some new functionality based on the more reliable features. Here’s a sneak preview of what Mark and Graham came up with at a recent internal hack day:

There’s a lot more work still to do though, and so we’re kicking off a third round of experiments. The key difference is we’ll be using a lot more music tracks this time, and hopefully getting a lot more user feedback.


Whether you’ve taken part already or not, we’d love it if you’d come visit Robot Ears and help our robots go for the Gold!

Last.fc win the World Cup (almost)!

Thursday, 21 June 2012
by Michael Coffey
filed under About Us and Lunch Table
Comments: 16

Last weekend we decided to take a break from counting your scrobbles and spend our time playing a bit of football instead. It was a chance to swap our football table for a football pitch and take on a few other music related entities at the 6th annual Big Scary Monsters 5-a-side tournament. I was suffering from a hurty knee, but went along to inspire the team Coach Taylor style. “CLEAR EYES, FULL HEARTS!”

Top L>R: Ben Spittle, Michael Horan, Michael Coffey, Sven Over, Paul Blunden

Bottom L>R: Dan Sleath, Matt Clark, Dom Amodio, Nick Calafato

There were 24 teams competing, first in a group then a knockout stage. Last year we went out in the group stage, but felt we’d had a tough group and were eager to prove we could do better this time. However, not even me shouting “man on”, “down the line”, or “well played, that’s liquid” could stop us losing our first game to a fantastic Abeano 7-0. Not a great start, but we followed it with a 2-2 draw against Fanzine about Rocking and then two wins against The Xcerts and Hassle, both 2-1, mainly thanks to the amazing Dan “The Cat” Sleath in goal. Our final group game was against our old friends Drowned in Sound. We’d lost to them at last year’s tournament and were outplayed in a friendly we’d arranged in the meantime, but neither team could break the deadlock and the game ended goalless.

Coach Coffey was an inspiration to his team.

This all meant we were through to the knockout stage, where in round 1 we were up against Punktastic, who we’d heard were “pretty handy”. Both teams were tired, but another two late goals from Last.fm meant we were through to play Tall Ships in the quarter-finals. It was another tough game, but a clean sheet and a last-minute goal amazingly put us through to the semi-finals – something none of us really believed was possible at the start of the day.

Last.fc warming up on Wembley’s doorstep.

We were on a high, dreaming of an open top bus tour of the “Shoreditch Triangle” upon our return, but next up were last year’s champions, Old Blue Last. It was clear straight away that these guys had played football before and not even our star goalkeeper could stop the onslaught. The dream was over, but we felt the 4-0 scoreline was respectable against a team of such quality.

Finally, Old Blue Last beat Abeano 2-0 in a rather exciting and closely fought final. We took solace in the fact these were the only teams we’d lost against, claimed third place having beaten the other semi-final loser, and went home looking forward to next year after a great time was had by all.

Last.fm Originals

Tuesday, 19 June 2012
by Chris Price
filed under Announcements
Comments: 3

As Starship once sang, “We built this original programming and video content hub on rock and roll!” (Something like that, wasn’t it?) Well, today you’ll notice a new top-level navigation on Last.fm dedicated to exactly that. Alongside Music, Radio, Events, Charts and Community, look for a fabulous new content feature we’re calling Originals, which aims to shine a light on the wealth of intimate artist experiences we’ve generated over the years here at Last.fm, and continue to create every day.

For some years now we’ve strived to find new ways of bringing you closer to the music by means of video and editorial content – from the hundreds of sessions we’ve filmed in our New York studios, to our intimate Live in London shows and festivals around the world. Up to now though, we haven’t made it easy for you to find that content on Last.fm, and we felt the best way to fix that was to create a new section of the site where all this stuff would live. And lo, Originals was born!

There you’ll find sessions, studio performances and interviews with established names such as Coldplay, Noel Gallagher, The Shins, The Temper Trap, Norah Jones, Snow Patrol and Wilco, as well as beautifully shot films from the hottest emerging talent around the world. We’re really proud of our sessions at Last.fm, so we’re launching Originals with two very special new recordings.

In New York, where the Last.fm team are old hands at capturing all the best music on camera, we were joined for an exclusive and intimate session by Keane. Watch them performing songs old and new (as well as answering your questions) here. The London team celebrates the launch of Originals by showcasing phenomenal new UK indie rockers Bastille, who treated us to a sublime session aboard Lightship 95, a floating recording studio on the River Thames. Watch it here.

It’s not all videos and live performances though. On Originals we’ve also gathered together some of the editorial pieces you’re used to reading elsewhere, including the Hype Chart, Tag of the Week and New Releases blogs that were previously dotted across our various platforms. In short, Last.fm Originals is the place to watch, read about and listen to the hottest – and biggest – new music happenings out there! Please take a look around – we really hope you like the latest addition to the Last.fm stable!

Chris Price
Head of Music, Last.fm

Do you have robot ears?

Thursday, 14 June 2012
by Christopher Sutton
filed under Announcements
Comments: 11


UPDATE: Thanks for all the feedback so far! We’ve just launched a new version based on what the robots have learned: Robot Ears


Help! We need somebody.

Actually, we need a whole bunch of somebodies to help us evaluate some new ways to tag tracks.

The research team here at Last.fm have been investigating various interesting properties of music and trying to figure out how to get machines to recognise them. Last year we looked at tempo measurement and how the rhythm, timbre and harmony of a track change over time. Today we’re asking you to help with a third task, looking at some more unusual musical properties…

Some aspects of music, like tempo and key, are pretty well defined. Take a song with a strong beat and most people will tap their foot along with it in the same way. The key of a song is generally clear too, based on its melody and harmonies. There are usually ‘correct’ answers for “What’s the tempo of this song in beats per minute?” and “What key is this song in?”

Other musical properties are trickier. Does this song sound “punchy” and “energetic”? Would you say it was “percussive”? Or “smooth”? Is the beat “metronomic” or “irregular”, and in either case could you “dance” to it? Which tracks are “sad” and which are “happy”? How “aggressive” are one artist’s songs, and are another artist’s songs more “mellow”?

We think we’ve made some progress with answering these kinds of questions automatically, and now it’s time to get some real human listeners to weigh in and tell our robots what they’re getting right and wrong. Because there aren’t clear-cut answers to these kinds of question it becomes doubly important to compare machine answers against human judgement – and we need as many humans involved as possible!

We’ve built a Robot Ears app where you can help with this while hopefully having some fun testing out your own robot ears. You listen to a few tracks, and then give your judgement as to which of a number of categories they fit (if any). We’ll check that against what our robots said, and in the process find out where there’s room for improvement in their judgements.

We’re also interested to know whether you think these tags are meaningful to begin with. You can hit the icon next to a category name to suggest alternatives.

The more humans we can pit against our robots and the more rounds you complete, the better we’ll be able to automatically analyse tracks in future – which in turn will help us provide you with more flexible and interesting radio and music recommendations!

So to paraphrase a famous princess: Help us, Last.fm users. You’re our only hope.

Come help teach our robots a thing or two

An update on Last.fm Password Security

Friday, 8 June 2012
by Matthew Hawn
filed under About Us and Announcements
Comments: 63

Hello from Last.fm HQ,

Earlier this week, Last.fm received an email letting us know that a text file containing cryptographic strings for passwords (known as “hashes”) that might be connected to Last.fm had been posted to a password cracking forum. We immediately checked the file against our user database; while that review is still ongoing, we felt the matter was important enough to act on straight away.

We immediately implemented a number of key security changes around user data, and we chose to be cautious and alert Last.fm users. We recommend that users change their password on Last.fm and on any other sites where they use a similar password. All passwords updated since yesterday afternoon have been secured with a more rigorous storage method.

To reach as many users as quickly as possible, we are sending these alerts via social media, direct email and on the Last.fm site itself.

We take the security of our users very seriously, and we want you to know that going forward we’re redoubling our efforts to protect their data.

Thanks for your support,

The Last.fm Team