This is a pretty imperfect solution, especially since some of the noise is embedded in the same frequency range as the voice you hear in the file, but here goes nothing. What I was talking about with regards to the frequency spectrum is that if you listen to the sound, the background noise has a very low hum. This resides in the low frequency range of the spectrum, whereas the voice sits at higher frequencies. As such, we can apply a bandpass filter to get rid of the low-frequency noise, capture most of the voice, and cancel any noisy frequencies on the higher side as well.
Here are the steps I took:
1. Read in the audio file using `audioread`.
2. Played the original sound so I could hear what it sounds like. Do this by creating an `audioplayer` object.
3. Plotted both the left and right channels to take a look at the sound signal in the time domain... in case it gives any clues. Looking at the channels, they both seem to be the same, so it looks like a single microphone was mapped to both channels.
4. Took the Fourier Transform and looked at the frequency distribution.
5. Using (4), figured out a rough approximation of where I should cut off the frequencies.
6. Designed a bandpass filter that cuts off these frequencies.
7. Filtered the signal, then played it by constructing another `audioplayer` object.
Let's go then!
Step #1
%% Read in the file
clearvars;
close all;
[f,fs] = audioread('Hold.wav');
`audioread` will read in an audio file for you. Just specify the file you want within the single quotes. Also, make sure you set your working directory to be where this file is stored. `clearvars` and `close all` just do clean-up for us: they close all of our windows (if any are open) and clear all of the variables in the MATLAB workspace. `f` is the signal read into MATLAB, while `fs` is the sampling frequency of your signal. `f` here is a 2D matrix: the first column is the left channel and the second is the right channel. In general, the total number of channels in your audio file is given by the number of columns in the matrix read in through `audioread`.
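As a quick aside, if you want to confirm how many channels you have and how long the clip is, a minimal sketch (my own addition, not part of the original steps) would be:
[numSamples, numChannels] = size(f);  % rows are samples, columns are channels
durationSec = numSamples / fs;        % length of the clip in seconds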
Step #2
%% Play original file
pOrig = audioplayer(f,fs);
pOrig.play;
This step will allow you to create an `audioplayer` object that takes the signal you read in (`f`), with the sampling frequency `fs`, and outputs an object stored in `pOrig`. You then use `pOrig.play` to play the file in MATLAB so you can hear it.
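One thing to watch out for: if you run this inside a script, `play` returns immediately and the script keeps going. If that's a problem, `playblocking` is the blocking alternative; a minimal sketch:
pOrig.playblocking;   % blocks MATLAB until the whole clip has finished playing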
Step #3
%% Plot both audio channels
N = size(f,1); % Determine total number of samples in audio file
figure;
subplot(2,1,1);
stem(1:N, f(:,1));
title('Left Channel');
subplot(2,1,2);
stem(1:N, f(:,2));
title('Right Channel');
`stem` is a way to plot discrete points in MATLAB. Each point in time has a circle drawn at the point, with a vertical line drawn from the horizontal axis up to that point. `subplot` is a way to place multiple figures in the same window. I won't get into it here, but you can read about how `subplot` works in detail by referencing this StackOverflow post I wrote here. The above code produces the plot shown below:
The above code is quite straightforward. I'm just plotting each channel individually in each subplot.
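As a variation (my own sketch, not part of the original code), you can plot against time in seconds instead of the sample index, which makes the horizontal axis easier to interpret; `plot` also tends to render faster than `stem` for this many samples:
t = (0:N-1) / fs;   % time vector in seconds
figure;
subplot(2,1,1);
plot(t, f(:,1));
title('Left Channel');
xlabel('Time (s)');
subplot(2,1,2);
plot(t, f(:,2));
title('Right Channel');
xlabel('Time (s)');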
Step #4
%% Plot the spectrum
df = fs / N;
w = (-(N/2):(N/2)-1)*df;
y = fft(f(:,1), N) / N; % For normalizing, but not needed for our analysis
y2 = fftshift(y);
figure;
plot(w,abs(y2));
The code that will look the most frightening is the code above. If you recall from signals and systems, the maximum frequency that is represented in our signal is the sampling frequency divided by 2. This is called the Nyquist frequency. The sampling frequency of your audio file is 48000 Hz, which means that the maximum frequency represented in your audio file is 24000 Hz. `fft` stands for Fast Fourier Transform. Think of it as a very efficient way of computing the Fourier Transform. The traditional formula requires that you perform multiple summations for each element in your output. The FFT computes this efficiently, requiring far fewer operations while still giving you the same result.
We are using `fft` to take a look at the frequency spectrum of our signal. You call `fft` by specifying the input signal as the first parameter, followed by how many points you want to evaluate at as the second parameter. It is customary to set the number of points in your FFT to the length of the signal; I do this by checking how many rows we have in our sound matrix. When plotting the frequency spectrum, I just took one channel to keep things simple, as the other channel is the same. This serves as the first input into `fft`. Also, bear in mind that I divided by `N` as it is the proper way of normalizing the signal. Because we just want a snapshot of what the frequency domain looks like, you don't really need to do this here; however, if you're planning on using the result to compute something later, then you definitely do.
I wrote some additional code because the spectrum is uncentred by default. I used `fftshift` so that the centre maps to 0 Hz, the left half spans 0 to -24000 Hz, and the right half spans 0 to 24000 Hz. This is intuitively how I see the frequency spectrum. You can think of negative frequencies as frequencies that propagate in the opposite direction. Ideally, the distribution at a negative frequency should equal that at the corresponding positive frequency. When you plot the frequency spectrum, it tells you how much contribution each frequency has to the output. That is defined by the magnitude of the signal, which you find by taking the `abs` function. The output that you get is shown below.
If you look at the plot, there are a lot of spikes around the low frequency range. This corresponds to your humming, whereas the voice probably maps to the higher frequency range, and there isn't much of it because there isn't much of a voice to be heard.
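If you want to put a number on where that hum sits rather than just eyeballing the plot, here's a small sketch (my own addition, built on the `w` and `y2` variables above) that locates the strongest positive-frequency component:
posIdx = w > 0;                 % keep only the positive half of the spectrum
wPos = w(posIdx);
[~, k] = max(abs(y2(posIdx)));  % index of the largest magnitude
humFreq = wPos(k)               % frequency (in Hz) of the strongest component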
Step #5
By trial and error and looking at the spectrum from Step #4, I figured that everything from 700 Hz and below corresponds to the humming noise, while the higher-frequency noise contributions sit at 12000 Hz and above.
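If you want to do this trial and error a little more precisely (my own suggestion, not part of the original answer), you can zoom into the low end of the spectrum figure from Step #4 and read off where the spikes die down:
xlim([0 2000]);   % zoom the current spectrum figure into the low end; assumes that figure is still the active one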
Step #6
You can use the `butter` function from the Signal Processing Toolbox to help you design a bandpass filter. If you don't have this toolbox, refer to this StackOverflow post on a user-made function that achieves the same thing; however, the order of that filter is only 2. Assuming you have the `butter` function available, you need to figure out what order you want your filter to be. The higher the order, the more work it'll do. I chose n = 7 to start off. You also need to normalize your frequencies so that the Nyquist frequency maps to 1, while everything else maps to between 0 and 1. Once you do that, you can call `butter` like so:
[b,a] = butter(n, [beginFreq, endFreq], 'bandpass');
The `'bandpass'` flag means you want to design a bandpass filter, and `beginFreq` and `endFreq` map to the normalized beginning and ending frequencies you want for the bandpass filter. In our case, that's `beginFreq = 700 / Nyquist` and `endFreq = 12000 / Nyquist`. `b` and `a` are the coefficients of a filter that will perform this task for you. You'll need these for the next step.
%% Design a bandpass filter that keeps frequencies between 700 and 12000 Hz
n = 7;
beginFreq = 700 / (fs/2);
endFreq = 12000 / (fs/2);
[b,a] = butter(n, [beginFreq, endFreq], 'bandpass');
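Before applying the filter, it can be reassuring to look at its frequency response. `freqz` (also in the Signal Processing Toolbox) will plot the magnitude and phase response for you; this is just an optional sanity check, not part of the original pipeline:
freqz(b, a, 1024, fs);   % plot the magnitude/phase response with the frequency axis in Hz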
Step #7
%% Filter the signal
fOut = filter(b, a, f);
%% Construct audioplayer object and play
p = audioplayer(fOut, fs);
p.play;
You use `filter` to filter your signal with the coefficients you got from Step #6. `fOut` will be your filtered signal. If you want to hear it played, you can construct an `audioplayer` based on this output signal at the same sampling frequency as the input. You then use `p.play` to hear it in MATLAB.
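If you'd also like to keep the filtered result as a file rather than just listening to it, `audiowrite` will do that; the output filename here is just an example:
audiowrite('Hold_filtered.wav', fOut, fs);   % save the filtered signal as a new WAV file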
Give this all a try and see how it works. You'll probably need to play around the most with Steps #6 and #7. This isn't a perfect solution, but it's enough to get you started, I hope.
Good luck!