
8 companies that are coupling AI with music


In 2016, scientists from Sony CSL Research Labs showcased the first song composed by artificial intelligence (AI), a Beatles-esque pop song titled ‘Daddy’s Car’. It made us think: if there is one thing AI does better than humans, it is crunching numbers and executing algorithms, self-contained sequences of actions performed by a computer. And what is music all about, if not rhythms and sequences?

For years now, machine learning and artificial intelligence have been creeping into creative fields. The latest examples come from Google, whose AI neural networks are helping people discover alternative creative outlets and creating trippy art pieces through projects such as Deep Dream and the AI Experiments program. Be it painting, literature or music, most of us have come across, or even used, AI tools directly or indirectly to explore creative opportunities. Take music: whether it is discovery and search or creation and marketing, AI is slowly becoming a hot topic in the music community, even though many artists remain unaware of it. Let’s take a look at some companies that are combining AI and music:

Flow Machines by Sony CSL

Funded by the European Research Council and coordinated by François Pachet (Sony CSL Paris – UPMC), Flow Machines aims to research and develop artificial intelligence systems that can generate music autonomously or in collaboration with human artists. The research lab states that it turns musical style, which can come from individual composers – from Mozart to ABBA – or from multiple sources, into a computational object that can be read, replicated and worked on by AI. This is the company that unveiled the first AI pop song, ‘Daddy’s Car’.

A Reuters report stated that the starting point of the song creation is a database of sheet music for more than 13,000 existing songs, from which the user can choose any number of titles with a sound or feel they would like the new song to incorporate. The algorithm analyses the songs’ characteristics and statistical properties related to rhythm, pitch and harmony. It learns, for instance, which notes go well with a given chord, which chord tends to follow a given chord, and which notes usually come after a given note. From the emerging patterns, the algorithm creates a score, or lead sheet, with similar characteristics, and based on these the AI composes the music.
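That description boils down to learning transition statistics from a corpus and sampling new material from them. As a minimal sketch of the idea (not Flow Machines’ actual system, whose corpus and models are far richer), a first-order Markov chain over chords might look like this in Python:

```python
import random
from collections import defaultdict

# Toy corpus standing in for the 13,000-song lead-sheet database (hypothetical data).
corpus = [
    ["C", "Am", "F", "G", "C"],
    ["C", "F", "G", "C"],
    ["Am", "F", "C", "G", "Am"],
]

# Count which chord tends to follow a given chord across the corpus.
transitions = defaultdict(list)
for song in corpus:
    for current, nxt in zip(song, song[1:]):
        transitions[current].append(nxt)

def generate_progression(start="C", length=8):
    """Sample a new chord progression with statistics similar to the corpus."""
    progression = [start]
    for _ in range(length - 1):
        current = progression[-1]
        progression.append(random.choice(transitions.get(current, [start])))
    return progression

print(generate_progression())  # e.g. ['C', 'Am', 'F', 'G', 'C', 'F', 'G', 'C']
```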

AI Duet – on Google AI Experiments

Last year, search engine giant Google launched A.I. Experiments, a portal that showcases Google’s artificial intelligence research through open-source web apps anyone can test out. The portal is meant to make machine learning technology more accessible to people who are interested in AI but do not have a technical background. Basically, the site lets people interact with AI projects that Google researchers have created, and also lets users build their own AI experiments.

One of the projects on the AI Experiments page is AI Duet, which showcases the application of machine learning to music. It lets the user play a duet with the computer: play some notes, and the computer responds to the melody. The application works with the on-screen keyboard, the computer’s keys, or even a plugged-in MIDI keyboard.

AI Duet takes the notes the user plays and runs them through a neural network trained on hundreds of different melodic examples. The neural network looks for melodic and rhythmic patterns it can identify, matches them against the user’s input, times its reply, and generates an organic melody that sounds like a response to the input phrase.
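The real model is a recurrent neural network built with TensorFlow and Magenta, but the call-and-response idea can be shown with a much simpler toy: learn which pitch intervals tend to follow which from a few example melodies, then continue the user’s phrase. The melodies and model below are illustrative stand-ins, not Google’s:

```python
import random
from collections import defaultdict

# Example melodies as MIDI note numbers (hypothetical training data).
melodies = [
    [60, 62, 64, 65, 67, 65, 64, 62, 60],
    [67, 65, 64, 62, 60, 62, 64, 67],
]

# Learn which interval tends to follow which (a crude stand-in for the RNN).
interval_model = defaultdict(list)
for melody in melodies:
    intervals = [b - a for a, b in zip(melody, melody[1:])]
    for current, nxt in zip(intervals, intervals[1:]):
        interval_model[current].append(nxt)

def respond(user_notes, length=8):
    """Continue the user's phrase (two or more notes) in a similar style."""
    notes = list(user_notes)
    last_interval = notes[-1] - notes[-2]
    for _ in range(length):
        last_interval = random.choice(interval_model.get(last_interval, [0]))
        notes.append(notes[-1] + last_interval)
    return notes[len(user_notes):]

print(respond([60, 62, 64]))  # the "duet" reply to the user's three notes
```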

AI Experiments states that Yotam Mann, known for ‘making music with code’, built the project with friends from the Magenta and Creative Lab teams at Google. It is built with TensorFlow, Tone.js, and open-source tools from the Magenta project, Google’s foray into art made with AI.

Jukedeck

Jukedeck is a startup that claims to bring artificial intelligence to music composition. Its system involves feeding hundreds of scores into its neural networks, which analyse the probabilities of successive notes and chord progressions. The deep neural network turns these probabilities and progressions into compositions, which an automated audio production program then renders as audio.
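That last step, turning a symbolic composition into sound, can be sketched with basic sine-wave synthesis. Jukedeck’s production engine is of course far more sophisticated; this only shows the note-to-audio plumbing, with hypothetical parameter choices:

```python
import math
import struct
import wave

def midi_to_hz(note):
    """Convert a MIDI note number to frequency in Hz (A4 = 69 = 440 Hz)."""
    return 440.0 * 2 ** ((note - 69) / 12)

def render(notes, path="melody.wav", rate=44100, note_seconds=0.4):
    """Render a list of MIDI notes as sine tones to a 16-bit mono WAV file."""
    samples = []
    for note in notes:
        freq = midi_to_hz(note)
        for i in range(int(rate * note_seconds)):
            samples.append(0.5 * math.sin(2 * math.pi * freq * i / rate))
    with wave.open(path, "w") as f:
        f.setnchannels(1)
        f.setsampwidth(2)  # 16-bit samples
        f.setframerate(rate)
        f.writeframes(b"".join(
            struct.pack("<h", int(s * 32767)) for s in samples))

render([60, 62, 64, 65, 67])  # writes a five-note melody to melody.wav
```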

Jukedeck originated at Cambridge University, and the team comprises composers, producers, engineers, academics and machine learning experts with a shared passion for music and technology. They are training Jukedeck’s deep neural networks to understand how to compose and adapt music, to give people the tools to personalise the music they need.

This machine learning technology can compose and adapt professional-quality music, giving individuals and companies personalised tracks dynamically shaped to their needs. The startup has been looking to sell tracks to consumers who need background music for their projects. According to a New York Times report, the company charges large businesses $21.99 for a royalty-free track, a fraction of what hiring a musician would cost.

Brain.FM

Scientists have long argued that music affects the human mind, and some say that listening to certain kinds of music can improve how the brain functions. Enter Brain.FM, which has built an AI that creates blends of tones intended to ease anxiety, help with sleep disorders and improve mental performance.

Neuroscientists working with the Chicago-based audio startup have created a system that takes the rules of neuroscience-informed music and generates compositions that sound as if humans had written them. According to the company, the AI creates music designed for the brain to enhance focus, relaxation, meditation, naps and sleep, taking effect within 10 to 15 minutes of use. It is all about unlocking music’s potential to influence cognitive states.
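Brain.FM does not publish its algorithm, so the following is a loudly hypothetical illustration of one common trick in focus and relaxation audio: layering two tones a few hertz apart so the mix pulses at the difference frequency:

```python
import numpy as np

rate = 44100
t = np.arange(rate * 5) / rate  # five seconds of audio

# Two tones 4 Hz apart: mixed together, they pulse ("beat") at 4 Hz.
# Illustrative stand-in only, not Brain.FM's actual method.
blend = 0.5 * np.sin(2 * np.pi * 200 * t) + 0.5 * np.sin(2 * np.pi * 204 * t)
```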

People can use this AI-generated music for their desired state, be it deep sleep, focused work or meditation. The tool lets a user listen free for the first seven sessions, after which it is paid. Premium accounts cost $7 a month, or $4 a month if one pays for a year up front ($48 in total); users can also opt for a lifetime subscription at $150.

Spotify

Almost every music lover will have come across this name while looking for music streaming applications and tools. Spotify is a digital music service that gives users access to millions of songs. What many people don’t know is that it uses machine learning behind the scenes.

Spotify uses a method called collaborative filtering, collating as much data as possible about a user’s listening behaviour. It then compares this with listening data collected from users across the globe, and uses the result to improve recommendations and suggest new music based on the user’s listening habits.
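In its simplest form, collaborative filtering scores the tracks a user has not heard by how heavily similar listeners play them. A minimal sketch on a toy play-count matrix (Spotify’s production system naturally works at a vastly larger scale):

```python
import numpy as np

# Rows are users, columns are tracks; values are play counts (toy data).
plays = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 1, 5, 4],
], dtype=float)

def cosine(a, b):
    """Cosine similarity between two listening vectors."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

def recommend(user, plays):
    """Score tracks the user hasn't played by similar users' listening."""
    sims = np.array([cosine(plays[user], other) for other in plays])
    scores = sims @ plays            # weight every user's plays by similarity
    scores[plays[user] > 0] = -1     # drop tracks the user already knows
    return int(np.argmax(scores))

print(recommend(1, plays))  # best new track for user 1
```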

The company also acquired a machine intelligence firm called The Echo Nest, which uses AI to gather data about new music posted to blogs, news websites and social media, enabling better music discovery.

People using Spotify will be familiar with its Discover Weekly feature, which suggests new music, and the recently launched Daily Mix. Both features use machine learning to mine a user’s listening data and create personalised playlists designed to deliver hours of music the user will love.

LANDR

One of the most interesting emerging technologies in music is LANDR, which claims to provide an instant audio mastering service by combining AI with music production. LANDR processes each new song by analysing its production style, and the software keeps learning as people upload tracks for mastering, sort of like an observant baby with a clean slate.

Every time a user uploads a track, LANDR creates a custom ‘digital fingerprint’ of it, which it then cross-references with its database to identify the track’s genre and production style. Then, based on the needs of the track, it applies a custom set of adaptive tools such as multi-band compression, EQ, stereo enhancement, limiting and aural excitation, making subtle, intelligent frame-by-frame adjustments based on the unique properties of the track.
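LANDR’s processing chain is proprietary, but the flavour of one adaptive step, loudness normalisation followed by soft limiting, can be sketched as follows (the targets and parameters are illustrative guesses, not LANDR’s):

```python
import numpy as np

def master(audio, target_rms=0.2, ceiling=0.95):
    """Toy mastering pass: normalise loudness, then soft-limit the peaks.
    `audio` is a float array of samples in [-1, 1]; parameters are illustrative."""
    # Adaptive gain: push the track towards a target loudness (RMS).
    rms = np.sqrt(np.mean(audio ** 2)) + 1e-9
    audio = audio * (target_rms / rms)
    # Soft limiting: tanh gently squashes peaks instead of hard clipping.
    return ceiling * np.tanh(audio / ceiling)

track = np.random.uniform(-0.1, 0.1, 44100)  # stand-in for an uploaded mix
print(master(track).max())
```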

What is interesting here is that mastering by human engineers happens in the final stage of a song’s production, when the track is conditioned so that it sounds clearer, more consistent, richer and true to its intent. With LANDR’s algorithm analysing large repositories of previously mastered songs, as well as songs in other genres, for similar patterns, artists can simply upload raw songs to LANDR’s cloud engine and get back a finished product.


Shazam

No, it’s not the DC Comics character; this is the mobile app that recognises the music and TV playing around a user, letting people discover, explore and share both. Shazam has been in action for over a decade and claims to connect more than 1 billion people on iOS and Android.

Suppose you are at a bar and like the music that is playing, but are too embarrassed to ask your friend or the barkeep what the song is. You can open the app and tap the Shazam button. A digital fingerprint of the audio is created and, within seconds, matched against a database of millions of tracks and TV shows. Using its algorithm, the app gives you the name of the track, the artist, and information such as lyrics, video, artist biography, concert tickets and recommended tracks. It also lets users purchase or listen to the song through its partner services.

The company has a repository of millions of tracks, each broken down into a numeric signature unique to that track, like a fingerprint. Its machine learning engine listens to the audio, converts it into the same kind of signature, and then tries to match it against the repository, much like checking an ID.
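The published Shazam approach (Wang, 2003) builds those signatures by picking peaks out of a spectrogram and hashing pairs of nearby peaks; matching then becomes a fast hash lookup. A condensed sketch of the idea (not Shazam’s production code):

```python
import numpy as np
from scipy.signal import spectrogram
from scipy.ndimage import maximum_filter

def fingerprint(samples, rate=44100):
    """Hash pairs of spectrogram peaks into (hash, time) landmarks."""
    freqs, times, spec = spectrogram(samples, fs=rate)
    # Keep only local maxima: the "constellation" of peaks that survives noise.
    peaks = (spec == maximum_filter(spec, size=15)) & (spec > spec.mean())
    f_idx, t_idx = np.nonzero(peaks)
    order = np.argsort(t_idx)
    f_idx, t_idx = f_idx[order], t_idx[order]
    hashes = []
    for i in range(len(t_idx)):
        for j in range(i + 1, min(i + 5, len(t_idx))):  # pair nearby peaks
            dt = t_idx[j] - t_idx[i]
            hashes.append((hash((f_idx[i], f_idx[j], dt)), t_idx[i]))
    return hashes

# Matching a clip is then a lookup: count how many of its hashes line up,
# with a consistent time offset, against each track's stored hashes.
```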

Cognitive music with IBM Watson

Grammy award-winning music producer Alex Da Kid paired up with Watson Music to see if they could create a song together. According to IBM, Watson’s ability to turn millions of unstructured data points into emotional insights would help create a new kind of music that, for the first time ever, listened to the audience.

The Watson AlchemyLanguage API helped by analysing five years of natural language texts. Once Watson had learned the most significant cultural themes, the Watson Tone Analyzer read news articles, blogs and tweets to find out how people felt about them. It also inspired the artist by analysing years’ worth of popular music.

The Tone Analyzer API then read the lyrics of over 26,000 ‘Billboard Hot 100’ songs, while the Cognitive Color Design Tool ingested their album art. Watson Beat then looked at the composition of those songs to find useful patterns across keys, chord progressions and genres, completing an emotional fingerprint of music by year.
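Watson’s Tone Analyzer is a proprietary IBM service, but the idea of scoring lyrics for emotion and aggregating by year can be approximated with an off-the-shelf sentiment analyser such as NLTK’s VADER. The songs below are hypothetical stand-ins for the Hot 100 corpus:

```python
# pip install nltk
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
analyzer = SentimentIntensityAnalyzer()

# Hypothetical (lyrics, year) pairs standing in for 26,000 Hot 100 songs.
songs = [
    ("I can't stop loving you, my heart is full of joy", 2015),
    ("Left alone again, tears falling in the cold rain", 2016),
]

by_year = {}
for lyrics, year in songs:
    score = analyzer.polarity_scores(lyrics)["compound"]  # -1 (sad) .. 1 (happy)
    by_year.setdefault(year, []).append(score)

# A crude "emotional fingerprint of music by year".
for year, scores in sorted(by_year.items()):
    print(year, sum(scores) / len(scores))
```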

Alex Da Kid used Watson’s ‘emotional insights’ to develop ‘heartbreak’ as the concept for his first song, ‘Not Easy’, and explored musical expressions of heartbreak by working with Watson Beat. Alex then collaborated with X Ambassadors to write the song’s foundation, and finally brought in genre-crossing artists Elle King and Wiz Khalifa to add their own personal touches to the track.

Abhinav Mohapatra
An author with a keen interest in the ‘off-beat’, he has covered and explored multiple facets of the marketing, advertising and technology sphere in his career. Lured by ‘cool’ technologies, he is an HTC snob, Hollywood movie buff and philosopher who likes to observe the world through his ‘Red Spectacles’.