Flow Machines by Sony CSL

Funded by the European Research Council and coordinated by François Pachet (Sony CSL Paris – UPMC), Flow Machines researches and develops artificial intelligence systems that can generate music autonomously or in collaboration with human artists. The lab states that it turns musical style – whether from an individual composer, from Mozart to ABBA, or from multiple sources – into a computational object that can be read, replicated and worked on by AI. This is the team that unveiled the first AI-composed pop song, ‘Daddy’s Car’. According to a Reuters report, song creation starts from a database of sheet music for more than 13,000 existing songs, from which the user can choose any number of titles with a sound or feel they would like the new song to incorporate. The algorithm analyses the songs’ characteristics and statistical properties related to rhythm, pitch and harmony. It learns, for instance, which notes go well with a given chord, which chord tends to follow a given chord, and which notes usually come after a given note. From the patterns that emerge, the algorithm creates a score, or lead sheet, with similar characteristics, and based on these characteristics the AI composes the music.
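The note-after-note statistics described above can be illustrated with a first-order Markov model. This is a minimal sketch, not Flow Machines' actual system: the three-melody corpus is made up, whereas Flow Machines learns from 13,000+ real lead sheets and models far more than note-to-note transitions.

```python
import random
from collections import defaultdict

# Toy corpus of melodies (note names). Flow Machines uses a database of
# 13,000+ real lead sheets; these three lines are purely illustrative.
melodies = [
    ["C", "E", "G", "E", "C"],
    ["C", "E", "G", "A", "G", "E"],
    ["G", "A", "G", "E", "C"],
]

# Count which note follows which note (first-order Markov statistics).
transitions = defaultdict(list)
for melody in melodies:
    for a, b in zip(melody, melody[1:]):
        transitions[a].append(b)

def generate(start, length, rng=random.Random(0)):
    """Sample a new melody that shares the corpus's note-to-note statistics."""
    note, out = start, [start]
    for _ in range(length - 1):
        note = rng.choice(transitions[note])
        out.append(note)
    return out

print(generate("C", 6))
```

Every adjacent pair in the generated melody is a pair that occurred somewhere in the corpus, which is why the output has a "similar feel" to the source material.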
AI Duet – on Google AI Experiments

Last year, Google launched A.I. Experiments, an open-source portal that showcases Google’s artificial intelligence research through web apps anyone can try. The portal was meant to make machine learning more accessible to people who are interested in AI but lack a technical background: the site lets people interact with AI projects Google researchers have created, and also lets users build their own AI experiments. One of the projects on the page is AI Duet, which applies machine learning to music. It lets the user play a duet with the computer: play a few notes, and the computer responds to the melody. The user can play with the on-screen keyboard, the computer keys, or a plugged-in MIDI keyboard. AI Duet takes the notes the user plays and runs them through a neural network trained on hundreds of melodic examples. The network looks for melodic and rhythmic patterns it can identify, matches them against the user’s input, then times and generates its own melody so that it sounds like a response to the user’s phrase. AI Experiments states that Yotam Mann, known for ‘making music with code’, built the project with friends from the Magenta and Creative Lab teams at Google. It’s built with TensorFlow, Tone.js, and open-source tools from the Magenta project, Google’s foray into art using AI.
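To make the call-and-response idea concrete, here is a toy, rule-based stand-in for that behaviour: it extracts the interval pattern from the user's notes (as MIDI numbers) and answers by continuing that pattern. This is not AI Duet's method – the real project uses a recurrent neural network from Magenta trained on melodic examples – but it shows the input-pattern-to-response shape of the interaction.

```python
def respond(user_notes, length=4):
    """Answer a user's phrase by continuing its melodic interval pattern.

    user_notes: list of MIDI note numbers the user played.
    Returns `length` new notes that extend the user's pattern.
    (Toy rule-based sketch; AI Duet itself uses a neural network.)
    """
    intervals = [b - a for a, b in zip(user_notes, user_notes[1:])]
    if not intervals:          # a single note gets echoed back
        intervals = [0]
    note, out = user_notes[-1], []
    for i in range(length):
        note += intervals[i % len(intervals)]
        out.append(note)
    return out

# User plays C-D-E (rising whole steps); the response keeps rising.
print(respond([60, 62, 64]))  # → [66, 68, 70, 72]
```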
Jukedeck

Jukedeck is a startup that claims to bring artificial intelligence to music composition. Its system feeds hundreds of scores into neural networks, which learn the probabilities of successive notes and of chord progressions. The deep neural network turns those probabilities and progressions into compositions, which an automated audio production program then renders as audio. With origins at Cambridge University, the team comprises composers, producers, engineers, academics and machine learning experts with a shared passion for music and technology. The team is training Jukedeck’s deep neural networks to compose and adapt music, giving people the tools to personalise the music they need: individuals and companies get professional-quality music that’s dynamically shaped to their needs. The startup has been looking to sell tracks to consumers who need background music for projects. According to a New York Times report, the company charges large businesses $21.99 for a royalty-free track – a fraction of what hiring a musician would cost.
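The chord-progression side of this can be sketched as sampling from learned transition probabilities. The probabilities below are invented for illustration – they are not Jukedeck's learned values – but sampling from a table like this is the basic idea behind "the progression of chords".

```python
import random

# Illustrative chord-transition probabilities of the kind a model might
# learn from scores (these numbers are made up, not Jukedeck's).
chord_probs = {
    "C":  {"F": 0.4, "G": 0.4, "Am": 0.2},
    "F":  {"G": 0.5, "C": 0.5},
    "G":  {"C": 0.7, "Am": 0.3},
    "Am": {"F": 0.6, "G": 0.4},
}

def progression(start, length, rng=random.Random(1)):
    """Sample a chord progression by walking the transition table."""
    chord, out = start, [start]
    for _ in range(length - 1):
        options = list(chord_probs[chord])
        weights = list(chord_probs[chord].values())
        chord = rng.choices(options, weights=weights)[0]
        out.append(chord)
    return out

print(progression("C", 8))
```

A composition system then needs a separate rendering stage – Jukedeck's "automated audio production program" – to turn symbolic output like this into audio.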
Brain.FM

Scientists have long argued that music affects the human mind, and some say that listening to certain kinds of music can improve how the brain functions. Enter Brain.FM, which has built an AI that creates blends of tones intended to decrease anxiety, ease sleep disorders and improve mental performance. Neuroscientists, together with the Chicago audio startup, have created a machine that can take the rules of neuroscience-informed music and create rhythms that sound as if humans had composed them. According to the company, the AI creates music designed for the brain to enhance focus, relaxation, meditation, naps and sleep, effective within 10–15 minutes of use. It’s all about unlocking music’s potential to influence cognitive states: people can use this AI-generated music for their desired state, be it deep sleep, focused work or meditation. Users can listen free for their first seven sessions, after which the service is paid. Premium accounts cost $7 a month, or $4 a month if one pays a year up front ($48); users can also opt for a lifetime subscription of $150.
Spotify

Almost every music lover will have come across this name while searching for music streaming applications. On the surface, Spotify is a digital music service that gives users access to millions of songs; what many people don’t know is that it uses machine learning behind the scenes. Spotify uses a method called collaborative filtering: it collates as much data as possible about a user’s listening behaviour, then compares it against the data it has collected from thousands of other users across the globe. That comparison is used to improve recommendations and suggest new music based on the user’s listening habits. The company also acquired Echo Nest, a machine learning firm that uses AI to gather data about new music posted to blogs, news websites and social media, for better music discovery. Spotify users will be familiar with the Discover Weekly feature, which suggests new music, and the recently launched Daily Mix. Both features use machine learning to mine a user’s listening data and create a personalised playlist designed to deliver hours of music the user will love.
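Collaborative filtering can be sketched in a few lines: compare a user's play counts with other users' counts, and recommend what similar listeners played that the target user hasn't. This is a minimal user-based sketch with invented play counts – Spotify's production system operates at a vastly larger scale and, via Echo Nest, also mixes in audio and text signals.

```python
import math

# Invented play counts per user (illustrative only).
plays = {
    "ana":  {"song_a": 5, "song_b": 3, "song_c": 0},
    "ben":  {"song_a": 4, "song_b": 4, "song_c": 2},
    "cara": {"song_a": 0, "song_b": 1, "song_c": 5},
}

def cosine(u, v):
    """Cosine similarity between two users' play-count vectors."""
    dot = sum(u[k] * v[k] for k in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv)

def recommend(user):
    """Score unheard songs by similarity-weighted plays of other users."""
    scores = {}
    for other, vec in plays.items():
        if other == user:
            continue
        sim = cosine(plays[user], vec)
        for song, count in vec.items():
            if plays[user][song] == 0:           # only recommend unheard songs
                scores[song] = scores.get(song, 0) + sim * count
    return max(scores, key=scores.get)

print(recommend("ana"))  # → song_c
```

The key property is that no audio analysis is involved: recommendations come purely from overlaps in listening behaviour across users.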
LANDR

One of the most interesting emerging technologies in music is LANDR, which claims to provide an instant audio mastering service by combining AI with music production. Mastering – traditionally done by human engineers in the final stage of a song’s production – conditions a track so that it sounds clearer, more consistent, richer, and true to its making. LANDR’s software analyses the production style of each new song, and it is constantly learning as people upload tracks for mastering. Every time a user uploads a track, LANDR creates a custom ‘digital fingerprint’ of it, which it cross-references with its database to identify the track’s genre and production style. Then, based on the needs of the track, it applies a custom set of adaptive tools – multi-band compression, EQ, stereo enhancement, limiting and aural excitation – and makes subtle, intelligent frame-by-frame adjustments based on the track’s unique properties. Because LANDR’s algorithm has analysed large repositories of previously mastered songs, as well as songs in other genres with similar patterns, artists can simply upload raw songs to LANDR’s cloud engine and get back a finished product.
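Two of the simplest steps in a mastering chain – gain normalization toward a target loudness, then a peak limiter – can be sketched as below. This is purely illustrative (samples as floats in [-1, 1], a single gain stage, a hard ceiling); LANDR's chain is adaptive, multi-band and far more sophisticated.

```python
def rms(samples):
    """Root-mean-square level: a rough measure of perceived loudness."""
    return (sum(s * s for s in samples) / len(samples)) ** 0.5

def master(samples, target_rms=0.3, ceiling=0.98):
    """Normalize overall loudness, then clamp peaks to a hard ceiling.

    Toy sketch of two mastering steps; real tools use multi-band
    compression, EQ and smarter limiting than a hard clamp.
    """
    gain = target_rms / rms(samples)             # bring track to target loudness
    return [max(-ceiling, min(ceiling, s * gain)) for s in samples]

track = [0.1, -0.05, 0.2, -0.15, 0.08]           # a quiet, made-up "track"
print(master(track))
```

After processing, the track sits at the target loudness and no sample exceeds the ceiling – the "consistent and clear" outcome mastering aims for.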