How Google’s deep learning AI is making music

15 October 2017

People often argue that artificial intelligence will eventually be capable of nearly everything humans can do. Importantly, this is not about dumb machines programmed to do very specific tasks; it is about AIs that learn and get better by watching us and parsing our data for patterns. AI will certainly replace a lot of jobs, but you also often hear that creative jobs will be spared, because AIs lack emotional intelligence and proactivity, which are essential human characteristics.

But can computers be creative? That can be a very philosophical question, but if you look at what Google is doing right now, you may be surprised. Last year, the company launched an open-source research project called Magenta, which aims to explore the limits of what artificial intelligence can do with machine learning in the arts. Instead of hand-coding rules the way classical AI systems do, Google builds intelligence by letting machines learn from data.
The Magenta team developed an algorithm that uses neural networks, complex mathematical systems that can learn tasks by analyzing large amounts of data. In recent years, this approach has proven very effective at recognizing objects and faces in photos and at identifying commands spoken into smartphones. Today, Google is using it to teach machines to synthesize new sounds, based on notes generated by different instruments. So far, the experiment has yielded classical piano compositions that are actually hard to distinguish from human-composed music.
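To give a feel for the learning part, here is a toy sketch of the core idea behind neural music generation: train a network to predict the next note from the ones before it, then sample from it to generate a melody. This is not Magenta's actual code; the model sizes, the use of PyTorch, and the random stand-in "training data" are my own assumptions for illustration.

```python
# Toy sketch: learn to predict the next MIDI note from the previous notes.
# NOT Magenta's code; sizes and the random "corpus" are illustrative only.
import torch
import torch.nn as nn

NUM_PITCHES = 128  # standard MIDI pitch range

class NextNoteModel(nn.Module):
    def __init__(self, hidden_size=64):
        super().__init__()
        self.embed = nn.Embedding(NUM_PITCHES, 32)       # note -> vector
        self.lstm = nn.LSTM(32, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, NUM_PITCHES)  # score per next note

    def forward(self, notes):             # notes: (batch, time) of pitch ids
        out, _ = self.lstm(self.embed(notes))
        return self.head(out)             # (batch, time, NUM_PITCHES)

model = NextNoteModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stand-in for a real corpus of melodies (sequences of MIDI pitches).
melodies = torch.randint(0, NUM_PITCHES, (8, 33))

for step in range(100):
    inputs, targets = melodies[:, :-1], melodies[:, 1:]
    logits = model(inputs)
    loss = loss_fn(logits.reshape(-1, NUM_PITCHES), targets.reshape(-1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Generate: start from a seed note and repeatedly sample the predicted next one.
seq = [60]  # middle C
with torch.no_grad():
    for _ in range(16):
        logits = model(torch.tensor([seq]))[0, -1]
        seq.append(torch.multinomial(torch.softmax(logits, dim=-1), 1).item())
print(seq)  # a generated 17-note "melody"
```

A real system differs mainly in scale and data: Magenta's models learn from large collections of actual recordings and performances rather than random sequences, which is what makes the piano output sound convincingly human.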

Magenta will not only produce music but will also give musicians a completely new range of tools for making music. One great example is the Infinite Drum Machine, which uses machine learning to organize thousands of everyday sounds so you can generate beats with them. These tools are part of a number of initiatives Google has launched, including Magenta and Creative Lab, to bring free-to-use AI tools to the broader public. Google hopes that everyday users will provide helpful insights into how these tools should be built and improved, and that the tools will help independent developers and musicians create new experiences. There is definitely more to come, and I'm very excited to see where these projects are heading. If you want to try out the Infinite Drum Machine, check out this website: https://experiments.withgoogle.com/ai/drum-machine
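Under the hood, the Infinite Drum Machine reportedly computes audio features for each sound and uses the t-SNE algorithm to lay similar sounds out near each other. Here is a minimal sketch of that general idea; the sounds/ folder, the MFCC features, and the t-SNE settings are my own assumptions, not Google's actual pipeline.

```python
# Sketch: organize a folder of sounds by similarity, in the spirit of the
# Infinite Drum Machine. Extract one feature vector per sound, then embed
# the vectors in 2-D with t-SNE so similar sounds land near each other.
# The "sounds/" folder and feature choices are illustrative assumptions.
import glob

import librosa
import numpy as np
from sklearn.manifold import TSNE

paths = sorted(glob.glob("sounds/*.wav"))  # hypothetical sample folder
features = []
for path in paths:
    audio, sr = librosa.load(path, sr=22050, mono=True)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=13)  # timbre features
    features.append(mfcc.mean(axis=1))     # one 13-dim vector per sound

# t-SNE squeezes the 13-dim vectors onto a 2-D map where perceptually
# similar sounds end up close together (perplexity must be < sound count).
coords = TSNE(n_components=2, perplexity=5, random_state=0).fit_transform(
    np.array(features)
)

for path, (x, y) in zip(paths, coords):
    print(f"{path}: ({x:.1f}, {y:.1f})")
```

In the actual experiment, a map like this is rendered in the browser and hooked up to a step sequencer, so clicking around the sound landscape is how you build a beat.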

Hutson, M. (2017, August 08). How Google is making music with artificial intelligence. Retrieved October 14, 2017, from http://www.sciencemag.org/news/2017/08/how-google-making-music-artificial-intelligence

Metz, C. (2015, April). Finally, Neural Networks That Actually Work. Retrieved October 14, 2017, from https://www.wired.com/2015/04/jeff-dean/


2 thoughts on “How Google’s deep learning AI is making music”

  1. Hi Yang! Interesting post! I did some reading and was surprised by how far AI has come in the field of creativity. Besides making music, there are already records of AI mimicking and even enhancing the styles of well-known painters, as well as making informed creative decisions in the filmmaking industry. However, as IBM’s Rob High argues, it is not the AI that is conducting the creative process. By specifying teaching parameters for creativity, artists have gone as far as using AI to design sculptures and create paintings that mimic great works of art. For example, using the style transfer technique, artists can “teach” AI algorithms by showing them pictures in a style of painting, like Impressionism, and then have them transpose photos and video into the same style. Thus, the AI learns from the input provided by humans and afterwards creates a picture or melody according to its parameter settings. Do you consider the AI creative, or is the individual controlling the parameters the creative one?

    Nevertheless, great post!

    https://www.ibm.com/watson/advantage-reports/future-of-artificial-intelligence/ai-creativity.html

  2. Hi Yang Tran,

    This is a very interesting article. It reminds me a lot of something I recently read about a robot called YuMi, developed by the Swiss-Swedish company ABB, which performed alongside Andrea Bocelli and the Lucca Philharmonic Orchestra. It used AI to quickly learn the movements a music conductor needs to make to lead an orchestra through a song, taking only two weeks to figure out how to listen to the music and adjust in real time. Having played in a band myself, I know how important it is to have a conductor who knows how to lead the band. It’s an amazing skill that robots are now able to perform, and I’m super excited to see what else is yet to come. I also wonder what a combination of YuMi and Google Magenta would look like, and what it could lead to.
