Making music with magenta.js

Magenta.js is a JavaScript library that helps you generate art and music on the web. In this tutorial, we'll talk about the music generation bits in @magenta/music -- how to make your browser sing, and in particular, how to make your browser sing like you!

As a library, @magenta/music can help you:

  1. make music in the browser through some neat abstractions over the Web Audio API.
  2. use Machine Learning models to generate music in the browser.

Table of contents

  1. Step 0: First things first!
  2. Step 1: Making sounds with your browser
  3. Step 2: Using Machine Learning to make music

Step 0: First things first!

If you're going to use Magenta, you need to add it to your page. Add this somewhere in your page's head element (it's also available as a module which you can install via npm):
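
A minimal sketch of what that looks like, assuming you load the library from the jsdelivr CDN (pin whatever version is current):

```html
<!-- Loads @magenta/music; the bundled build exposes the library as the global `mm` object -->
<script src="https://cdn.jsdelivr.net/npm/@magenta/music@^1.0.0"></script>
```

If you'd rather use a bundler, npm install @magenta/music and then import * as mm from '@magenta/music' gets you the same thing.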

Step 1: Making sounds with your browser

Everything in @magenta/music is centered around NoteSequences. A NoteSequence is an abstract representation of a series of notes, each with a pitch, instrument and strike velocity, much like MIDI.

For example, this is a NoteSequence that represents "Twinkle Twinkle Little Star". Try changing the pitches to see how the sound changes!

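A sketch of what that sequence can look like (pitches are MIDI note numbers, 60 being middle C, and times are in seconds):

```js
// The opening phrase of "Twinkle Twinkle Little Star" as an unquantized NoteSequence.
const TWINKLE_TWINKLE = {
  notes: [
    {pitch: 60, startTime: 0.0, endTime: 0.5},  // Twin-
    {pitch: 60, startTime: 0.5, endTime: 1.0},  // -kle
    {pitch: 67, startTime: 1.0, endTime: 1.5},  // twin-
    {pitch: 67, startTime: 1.5, endTime: 2.0},  // -kle
    {pitch: 69, startTime: 2.0, endTime: 2.5},  // lit-
    {pitch: 69, startTime: 2.5, endTime: 3.0},  // -tle
    {pitch: 67, startTime: 3.0, endTime: 4.0}   // star
  ],
  totalTime: 4
};
```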

A special feature of NoteSequences is how they keep time. Sequences can be either:

  1. unquantized, where every note has a startTime and an endTime, in seconds.
  2. quantized, where every note has a quantizedStartStep and a quantizedEndStep, and the sequence's quantizationInfo says how many steps fit in a quarter note (stepsPerQuarter).
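
To make the difference concrete, here's a sketch of the same opening phrase in quantized form, with a grid of 4 steps per quarter note:

```js
// The same phrase, but quantized: note positions are grid steps, not seconds.
const TWINKLE_TWINKLE_QUANTIZED = {
  quantizationInfo: {stepsPerQuarter: 4},
  notes: [
    {pitch: 60, quantizedStartStep: 0,  quantizedEndStep: 2},
    {pitch: 60, quantizedStartStep: 2,  quantizedEndStep: 4},
    {pitch: 67, quantizedStartStep: 4,  quantizedEndStep: 6},
    {pitch: 67, quantizedStartStep: 6,  quantizedEndStep: 8},
    {pitch: 69, quantizedStartStep: 8,  quantizedEndStep: 10},
    {pitch: 69, quantizedStartStep: 10, quantizedEndStep: 12},
    {pitch: 67, quantizedStartStep: 12, quantizedEndStep: 16}
  ],
  totalQuantizedSteps: 16
};
```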

Playing a NoteSequence

When you pressed the "Play" button above, it started or stopped a Player. There are several kinds of players in @magenta/music -- the default Player uses a built-in "synth" sound to produce the notes. A different kind of player is the SoundFontPlayer, which lets you use realistic sampled sounds for any of the notes played.

In the example below, try uncommenting the SoundFont player to see how it affects the sound of the NoteSequence. We're still using the "Twinkle Twinkle Little Star" sequence from above, and whatever changes you've made to it are persisting. If you accidentally broke the sequence, just refresh the page! 😅

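A sketch of both options (the SoundFont URL below points at the hosted sgm_plus instrument set, one choice among several):

```js
// Default player: a built-in synth sound.
const player = new mm.Player();

// SoundFontPlayer: real sampled instruments. Uncomment to compare.
// const player = new mm.SoundFontPlayer(
//     'https://storage.googleapis.com/magentadata/js/soundfonts/sgm_plus');

player.start(TWINKLE_TWINKLE);
```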

And you control the player with:
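
Something along these lines (the exact set of methods can vary a little between versions):

```js
player.start(TWINKLE_TWINKLE);  // starts playback; resolves when the sequence ends
player.stop();                  // stops playback entirely
player.pause();                 // pauses...
player.resume();                // ...and picks up where it left off
player.isPlaying();             // true while a sequence is playing
```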

Players can also call a callback method after every note that is played. This is extremely useful if you want to update a visualization as a NoteSequence plays, which we will see below.
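
The callback is an object you pass to the Player constructor; a sketch (the first constructor argument toggles a click track, which we leave off here):

```js
// A player that reports every note as it is played.
const callbackPlayer = new mm.Player(false /* no click track */, {
  run: (note) => console.log('playing pitch', note.pitch),
  stop: () => console.log('all done!')
});
```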

Visualizing a NoteSequence

Listening to NoteSequences is great, but sometimes it's useful to look at a piano roll representing the notes. @magenta/music has a built-in Visualizer for that, which paints the notes to a canvas, and updates them using a Player's callback:
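
A sketch of how the pieces fit together, assuming the page has a canvas element with the id "canvas" (double-check the class name against your version of the library):

```js
const canvas = document.getElementById('canvas');
const viz = new mm.Visualizer(TWINKLE_TWINKLE, canvas);

// Redraw the piano roll every time the player advances to a new note.
const vizPlayer = new mm.Player(false, {
  run: (note) => viz.redraw(note),
  stop: () => console.log('done')
});
vizPlayer.start(TWINKLE_TWINKLE);
```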

You can configure a visualizer's appearance, such as the size and colours of the notes. Try changing the values below and see how the piano roll updates!

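The configuration is just an object passed as the Visualizer's third argument; these are the kinds of knobs it exposes (double-check the exact names against the API docs):

```js
const config = {
  noteHeight: 6,                 // height of each note, in pixels
  pixelsPerTimeStep: 30,         // horizontal zoom
  noteSpacing: 1,                // gap between notes, in pixels
  noteRGB: '8, 41, 64',          // colour of the notes
  activeNoteRGB: '240, 84, 119'  // colour of the note currently being played
};
const viz = new mm.Visualizer(TWINKLE_TWINKLE, canvas, config);
```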

Useful helpers

There are a lot of other helper methods sprinkled around the @magenta/music codebase that you might need but not know where to find. Here are some of my favourites:
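
A few that I reach for often -- the names below are from memory, so double-check them against the API docs; the awaits assume you're inside an async function, and twinkle.mid is just a placeholder URL:

```js
// Convert a sequence with start/end times into a quantized one (4 steps per quarter)...
const quantized = mm.sequences.quantizeNoteSequence(TWINKLE_TWINKLE, 4);

// ...and back again.
const unquantized = mm.sequences.unquantizeSequence(quantized);

// Load a NoteSequence from a MIDI file somewhere on the web (placeholder URL).
const loaded = await mm.urlToNoteSequence('twinkle.mid');

// Turn a NoteSequence into MIDI bytes you can save to a file.
const midiBytes = mm.sequenceProtoToMidi(quantized);
```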

Step 2: Using Machine Learning to make music

@magenta/music has several Machine Learning models, each with different strengths. The two we'll use in this tutorial are:

  1. MusicRNN, which continues a sequence you give it.
  2. MusicVAE, which samples brand-new sequences and interpolates between existing ones.

The models are built with TensorFlow.js, so they run directly in the browser, accelerated with WebGL shaders (so that they won't be unbelievably slow).

Now that we know how to use NoteSequences and Players, adding some basic Machine Learning on top is a natural next step. The pattern for using any of these models is:

  1. Load @magenta/music (which we already know how to do!)
  2. Create a model from a checkpoint (i.e. the URL where the model's trained weights live); see the sketch after this list.
  3. Ask the model to do something.
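
In code, steps 2 and 3 look roughly like this; the checkpoint URL below is one of the hosted MusicRNN checkpoints, and every other model follows the same pattern:

```js
// 2. Create a model from a checkpoint and download its weights.
const model = new mm.MusicRNN(
    'https://storage.googleapis.com/magentadata/js/checkpoints/music_rnn/basic_rnn');
await model.initialize();

// 3. Ask the model to do something -- each model has its own methods for this,
//    which we'll look at next.
```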

MusicRNN

A MusicRNN is an LSTM-based language model for musical notes -- it is best at continuing a NoteSequence that you give it.

To use it, you need to give it a sequence to continue -- when it's ready, the model returns a Promise that resolves to the sequence that follows.

With MusicRNN, you can configure how many steps long the new sequence will be, as well as the "temperature" of the result -- the higher the temperature, the more random (and less like the input) your sequence will be. You can play around with these values and see how the resulting sequences differ:

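A sketch of the whole round trip, assuming the basic_rnn model from the previous step is already initialized; note that continueSequence wants a quantized input:

```js
const rnnSteps = 40;         // how many steps of music to generate
const rnnTemperature = 1.1;  // > 1.0 is more random, < 1.0 is more conservative

// continueSequence expects a quantized sequence, so quantize first.
const input = mm.sequences.quantizeNoteSequence(TWINKLE_TWINKLE, 4);
const continuation = await model.continueSequence(input, rnnSteps, rnnTemperature);
player.start(continuation);
```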

MusicRNN has other checkpoints you can use, which are trained on different melodies and instruments. For example, drum_kit_rnn makes new drum sequences.

MusicVAE

A MusicVAE is a variational autoencoder made up of an Encoder and Decoder -- you can think of the encoder as trying to summarize all the data you give it, and the decoder as trying to recreate the original data, based on this summarized version. As a generative model, you can think of a VAE as coming up with new sequences that could be a decoding of some summarized version.

The MusicVAE implementation in @magenta/music in particular does two things: it can create new sequences (which are reconstructions or variations of the input data), or it can interpolate between two sequences.

Creating new sequences

As before, changing the temperature changes the randomness of the result.

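A sketch, using one of the hosted melody checkpoints (mel_4bar_small_q2); sample takes the number of sequences you want and a temperature:

```js
const vae = new mm.MusicVAE(
    'https://storage.googleapis.com/magentadata/js/checkpoints/music_vae/mel_4bar_small_q2');
await vae.initialize();

const temperature = 1.5;
const samples = await vae.sample(1, temperature);  // an array of brand-new NoteSequences
player.start(samples[0]);
```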

Interpolating between two sequences
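
MusicVAE can also take two sequences and produce the sequences "in between" them. A sketch, assuming the vae model from above and two quantized melodies MELODY_A and MELODY_B (placeholder names) of the length the checkpoint expects:

```js
// Ask for 4 sequences that morph from MELODY_A to MELODY_B.
const numInterpolations = 4;
const interpolated = await vae.interpolate([MELODY_A, MELODY_B], numInterpolations);

// The first result is closest to MELODY_A, the last is closest to MELODY_B.
player.start(interpolated[2]);
```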


You're now ready to build your own amazing, Machine Learning-powered music instrument! If you want more information, check out the full @magenta/music API documentation and the demos on the Magenta site.

Have fun! 💕