How to do vocal removal – the programmatic way

These days, many of my classmates have been asking me to help them remove vocals from songs for my school's Singing Contest. I've been doing this for quite a long time and I'm already used to it.

That got me curious about how vocal removal actually works, so I searched the internet. I found there are many ways to do this, like using the FFT to attenuate the frequencies of the vocals, or using the FFTs of different signals to cancel the vocals out. Among all of them I found one that is surprisingly easy to implement – phase cancellation.

Phase cancellation assumes the vocals were recorded with a single microphone and mixed to the centre, so they appear equally in both channels. If we invert the phase of one channel and add it to the other, the vocals cancel out. This produces pretty good results as long as the track is stereo and follows that assumption. The algorithm works poorly, or not at all, when the audio is mono, when the vocals were recorded with more than one microphone, or when the backing music itself was recorded with only one microphone.

OK. Here’s the algorithm:

output[i] = (inputL[i] - inputR[i]) / 2;

OR

output[i] = (inputR[i] - inputL[i]) / 2;

That's why I said it's easy: it's only a piece of simple math.
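If you want to try it in code, here's a minimal sketch in JavaScript, assuming you already have the two channels of a stereo track as Float32Arrays (for example from AudioBuffer.getChannelData() in the Web Audio API):

//Minimal phase-cancellation sketch: subtract the right channel from the left.
//inputL and inputR are Float32Arrays holding the left and right channels
//(e.g. audioBuffer.getChannelData(0) and audioBuffer.getChannelData(1)).
function removeVocals(inputL, inputR) {
    var output = new Float32Array(inputL.length);
    for (var i = 0; i < inputL.length; i++) {
        //Anything panned dead centre (usually the vocals) cancels out here
        output[i] = (inputL[i] - inputR[i]) / 2;
    }
    return output;
}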

Do you want a working example running in your web browser? Here's one built with the Web Audio API and the HTML5 Drag & Drop API. Enjoy!

Draw fireworks in HTML5 canvas (with sounds!)

I always wondered whether we could make a cool fireworks animation procedurally (here that simply means without using image sprites or anything like that). I finally learned how from this tutorial, but I think it missed something: how can graphics this great go without sound? So I added it, and here it is for you. I hope you can learn something from it like I did.

Processing images on the web!

Want to add some effects to the images on your own web pages? Before the era of HTML5, this was only possible with browser-specific extensions (I'm talking about you, IE filters & -webkit-filters), which forced people to do the processing on the server side or in Flash.

Now, with the help of a new HTML5 API called the canvas API, we can draw things like images onto a canvas in raster format and process them. In this post, I'll talk about the basics of the canvas API and how to process images directly.

The <canvas> tag is supported in:

  1. Chrome
  2. Firefox
  3. IE 9+
  4. Safari
  5. Opera

First, we need to define the canvas that the image is going to be drawn on. This can be done with an HTML <canvas> element.

<canvas id="image" width="300" height="150">
    <p>Your browser doesn't support the canvas element.</p>
</canvas>

The content inside the <canvas> element is ignored by browsers that recognize the tag, so we can put a message there to tell users whose browsers don't support this feature.

Then, we need to get the canvas using the traditional DOM API.

var canvas = document.getElementById('image');

Here's the most important point: we're going to get the canvas context, which contains the methods that allow us to draw things on it.

var context = canvas.getContext('2d'); //Get the 2d canvas context

You may have a question: why '2d'? It's because the canvas API is designed not just for 2D raster drawing but also for 3D drawing, and we are not discussing that here.

Then, load the image with the new Image() constructor and draw it onto the canvas using the context.drawImage(image,x,y,w,h) method.

var image = new Image();
image.onload = function(){
    //Set the canvas size to match the image
    canvas.width = this.width;
    canvas.height = this.height;
    //Draw the image to the canvas
    context.drawImage(this, //the image object
                      0, //The x-position of the image, 0 means the left
                      0, //The y-position of the image, 0 means the top
                      this.width, //The width of the image
                      this.height //the height of the image
                     );
};
image.src = "<INSERT YOUR IMAGE'S PATH HERE>";

After that, you should see the image drawn onto the canvas. Now, I'm going to show you how to access the image data and process it. To access the image data, we use the context.getImageData(x,y,w,h) method.

var pixels = context.getImageData(0,0,canvas.width,canvas.height); //Get the image data
var data = pixels.data;
//data is a large one-dimensional array containing all the pixels' colour values.
//Every pixel contains 4 colour channels: Red, Green, Blue and Alpha values,
//they are contained in the order of [R,G,B,A,R,G,B,A,...]
for(var i = 0; i < data.length; i += 4){
    //Do the processing here
    //The colour values are stored in this order:
    var r = data[i]; //The red component
    var g = data[i+1]; //The green component
    var b = data[i+2]; //The blue component
    var a = data[i+3]; //The alpha component
}
context.putImageData(pixels,0,0); //Put the data back to the canvas
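Just to give you a taste before we go further, here is one simple effect you could drop into that loop: converting the image to grayscale. The weights below are the usual luminance approximation, nothing special.

var pixels = context.getImageData(0, 0, canvas.width, canvas.height);
var data = pixels.data;
for (var i = 0; i < data.length; i += 4) {
    //Weighted average that approximates how bright each colour looks to the eye
    var gray = 0.299 * data[i] + 0.587 * data[i + 1] + 0.114 * data[i + 2];
    data[i] = gray;     //Red
    data[i + 1] = gray; //Green
    data[i + 2] = gray; //Blue
    //data[i + 3] (alpha) is left untouched
}
context.putImageData(pixels, 0, 0); //Draw the grayscale version back to the canvas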

See next page to know how to handle the colour values properly and make some effects!

How to do vocal removal nicely with Adobe Audition

Today one of my friends asked me how to do vocal removal nicely. He told me that he had tried other software like Audacity and the results weren't good. I think many people may have this kind of problem too, so I'm sharing the steps here to help.

In this article, I'll use an audio editing program called Adobe Audition. (Beware: it's not freeware, but you can get a trial version from Adobe easily.)

P.S. I'm using Adobe Audition CS6, but the method mentioned here also works in previous versions.

First, open Audition and load the audio file in.

Audition startup

Audition user interface

Then, click [Favorites -> Remove Vocals].

Favorites -> Remove Vocals

Removing Vocals

After the processing, you're basically finished. However, if you want to suppress the vocals even further, you can follow the steps below:

Click [Effects -> Special -> Vocal Enhancer].

Click [Effects -> Special -> Vocal Enhancer]

Then click [Music -> Apply]. The music will be enhanced and the vocals will be further suppressed.

Vocal Enhancer

Finally, save the file and you’re finished!

GLSL fragment shaders in JavaScript!

Recently I saw some very nice visual effects done in GLSL fragment shaders here. I loved them very much and started learning GLSL fragment shaders afterwards.

They are great, but they must run on the GPU. Only some browsers provide an API to access the GPU (that API is called WebGL), so these visuals can't run in browsers that don't offer WebGL.

To overcome this, I tried porting GLSL fragment shaders to JavaScript and drawing the result with the HTML5 canvas. The canvas API has much broader browser support (and a Flash fallback is available).

The demos I created are unoptimized and may be a bit slow; after all, GLSL shaders are supposed to run on a GPU. However, it's still worth converting some GLSL shaders to achieve nice effects such as post-processing of photos.

Here’s the demo:

  1. http://jsfiddle.net/licson0729/eBjQ8/
  2. http://jsfiddle.net/licson0729/YJqB9/
  3. http://jsfiddle.net/licson0729/7Qe34/
  4. http://jsfiddle.net/licson0729/T3hb7/
  5. http://jsfiddle.net/licson0729/9QVxA/

The technique I used is to render the pixels one by one, with the render function being the JavaScript equivalent of the GLSL fragment shader. We also need to translate the GLSL vector types (vec2, vec3, etc.) into their JavaScript equivalents.
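For example, a vec3 can simply become a JavaScript array, and GLSL built-ins like mix() or length() become small helper functions. The helpers below are just my own illustration, not part of any API:

//Hypothetical helpers standing in for GLSL built-ins; vectors are plain arrays
function vec3(x, y, z) { return [x, y, z]; }

//GLSL mix(): linear interpolation, component by component
function mix(a, b, t) {
    return [
        a[0] + (b[0] - a[0]) * t,
        a[1] + (b[1] - a[1]) * t,
        a[2] + (b[2] - a[2]) * t
    ];
}

//GLSL length() for a vec2
function length2(v) {
    return Math.sqrt(v[0] * v[0] + v[1] * v[1]);
}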

Here’s the format of the code:

//The requestAnimFrame fallback for better and smoother animation
window.requestAnimFrame = (function () {
    return window.requestAnimationFrame || window.webkitRequestAnimationFrame || window.mozRequestAnimationFrame || window.oRequestAnimationFrame || window.msRequestAnimationFrame || function (callback) {
        window.setTimeout(callback, 1000 / 60);
    };
})();

//Prepare our canvas
var canvas = document.querySelector('#render');
var w = window.innerWidth;
var h = window.innerHeight;
canvas.width = w;
canvas.height = h;
var ctx = canvas.getContext('2d');

var time = Date.now();
var buffer = ctx.createImageData(w, h); //The back buffer we use to paint the result onto the canvas

//The main render function
function render(time, fragcoord) {
    /* put the GLSL fragment shader's JavaScript equivalent here. */
    return [0,0,0,0]; //the final colour value
};

function animate() {
    var delta = (Date.now() - time) / 1000;
    buffer = ctx.createImageData(w, h);
    ctx.clearRect(0, 0, w, h);
    for (var x = 0; x < w; x++) {
        for (var y = 0; y < h; y++) {
            var ret = render(delta, [x, y]);
            var i = (y * buffer.width + x) * 4;
            buffer.data[i] = ret[0] * 255;
            buffer.data[i + 1] = ret[1] * 255;
            buffer.data[i + 2] = ret[2] * 255;
            buffer.data[i + 3] = ret[3] * 255;
        }
    }
    ctx.putImageData(buffer, 0, 0);
    requestAnimFrame(animate);
};

window.onresize = function () {
    w = window.innerWidth;
    h = window.innerHeight;
    canvas.width = w;
    canvas.height = h;
};

animate();
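
To give you an idea of what a ported shader looks like, here is a small example render function you could drop in place of the stub above. It's only a simple animated colour gradient of my own, roughly what gl_FragColor = vec4(0.5 + 0.5 * sin(...), ...) would be in GLSL:

//Example only: a simple animated gradient dropped in place of the render stub above
function render(time, fragcoord) {
    var u = fragcoord[0] / w; //Normalised coordinates, like gl_FragCoord.xy / resolution
    var v = fragcoord[1] / h;
    return [
        0.5 + 0.5 * Math.sin(time + u * 10),           //Red
        0.5 + 0.5 * Math.sin(time + v * 10 + 2),       //Green
        0.5 + 0.5 * Math.sin(time + (u + v) * 10 + 4), //Blue
        1                                              //Fully opaque
    ];
}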

I hope you like it and that it encourages you to start learning about computer graphics.

My latest experiment – audio distortion with the Web Audio API

Recently I’ve created a brand new experiment – audio distortion using the Web Audio API. It works great.

Experiment Here

You may ask: why did I have the idea of doing audio distortion on the web?

It's because I think distorting audio gives the effect of old, weird cassette tape sounds and adds a bit of a vintage feel. (I love vintage things so much 🙂 )
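In case you're curious how distortion can be done, here's a minimal sketch using a WaveShaperNode. This is just one common approach, not the exact code of my experiment, and the curve formula is an arbitrary choice:

//Sketch: distort an <audio> element with a WaveShaperNode (assumes an <audio> tag exists on the page)
var AudioCtx = window.AudioContext || window.webkitAudioContext;
var audioCtx = new AudioCtx();

//Build a non-linear transfer curve; "amount" controls how harsh the distortion is
function makeDistortionCurve(amount) {
    var n = 44100, curve = new Float32Array(n);
    for (var i = 0; i < n; i++) {
        var x = (i * 2) / n - 1; //Map the sample index into the range [-1, 1]
        curve[i] = ((3 + amount) * x * 20 * Math.PI / 180) / (Math.PI + amount * Math.abs(x));
    }
    return curve;
}

var shaper = audioCtx.createWaveShaper();
shaper.curve = makeDistortionCurve(50);

//Route the audio element through the shaper and out to the speakers
var source = audioCtx.createMediaElementSource(document.querySelector('audio'));
source.connect(shaper);
shaper.connect(audioCtx.destination);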

Remember, the Web Audio API is currently only available in Chrome and Safari 6, so please use one of them to visit my experiment; otherwise, that's your problem.

Isn't this post too short? Yeah, it surely is. Remember to check out my other posts!