What is smoothing, in very basic terms, and how can I do it?
I have an array in MATLAB that is the magnitude spectrum of a speech signal (the magnitude of a 128-point FFT). How do I smooth this with a moving average? From what I understand, I should take a window of a certain number of elements, take their average, and that becomes the new 1st element. Then I shift the window to the right by one element and take the average, which becomes the 2nd element, and so on. Is that really how it works? I'm not sure, because if I do that, my final result will have fewer than 128 elements. So how does it work, and how does it help to smooth the data points? Or is there another way to smooth data?
Smoothing can be done in many ways, but in very basic and general terms it means that you even out a signal by mixing its elements with their neighbors. You smear/blur the signal a bit in order to get rid of noise. For example, a very simple smoothing technique would be to recalculate every signal element f(t) as 0.8 of the original value, plus 0.1 of each of its neighbors:
f'(t) = 0.1*f(t-1) + 0.8*f(t) + 0.1*f(t+1)
Note how the multiplication factors, or weights, add up to one. So if the signal is fairly constant, smoothing doesn't change it much. But if the signal contained a sudden jerky change, then the contribution from its neighbors will help to clear up that noise a bit.
The weights you use in this recalculation function can be called a kernel. A one-dimensional Gaussian function or any other basic kernel should do in your case.
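The three-point recalculation above can be sketched in a few lines (Python here rather than MATLAB, purely for illustration; leaving the endpoints untouched is one arbitrary edge-handling choice among several):

```python
def smooth_kernel(signal, kernel=(0.1, 0.8, 0.1)):
    """Replace each interior sample with a weighted average of itself
    and its immediate neighbors; the two endpoints are left as-is."""
    out = list(signal)
    for t in range(1, len(signal) - 1):
        out[t] = (kernel[0] * signal[t - 1]
                  + kernel[1] * signal[t]
                  + kernel[2] * signal[t + 1])
    return out
```

On a signal with a single noisy spike, e.g. `[0, 0, 10, 0, 0]`, the spike is spread out into `[0, 1, 8, 1, 0]`: the jump is damped while the total energy of the window stays roughly in place, since the weights sum to one.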
[Figure: a nice example of one particular kind of smoothing; unsmoothed signal (above) vs. smoothed signal (below)]
[Figure: examples of a few kernels]
In addition to Junuxx's nice answer, I would like to add a few notes.
Smoothing is related to filtering (the Wikipedia article is unfortunately quite vague); you should pick the smoother based on its properties.
One of my favorites is the median filter, an example of a non-linear filter. It has some interesting properties: it preserves "edges" and is quite robust under heavy noise.
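A minimal sliding-window median can be sketched like this (Python for illustration; truncating the window at the ends is one choice among several):

```python
from statistics import median

def median_filter(signal, width=3):
    """Sliding-window median; the window is truncated at the edges
    so the output has the same length as the input."""
    half = width // 2
    return [median(signal[max(0, i - half):i + half + 1])
            for i in range(len(signal))]
```

Note the two properties mentioned above: a lone spike like `[1, 1, 9, 1, 1]` is removed completely rather than smeared, and a clean step like `[0, 0, 0, 10, 10, 10]` passes through unchanged, which a linear moving average would blur.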
If you have a model of how your signal behaves, a Kalman filter is worth a look. Its smoothing is actually a Bayesian maximum-likelihood estimation of the signal based on the observations.
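A scalar Kalman filter for a nearly-constant signal can be sketched as follows (Python for illustration; the process-noise and measurement-noise variances `q` and `r` are made-up tuning values, not anything prescribed above):

```python
def kalman_smooth(observations, q=1e-4, r=0.5):
    """Scalar Kalman filter assuming a nearly-constant underlying signal.
    q: process-noise variance (how much the true signal may drift),
    r: measurement-noise variance (how noisy each observation is)."""
    x, p = observations[0], 1.0   # initial state estimate and its variance
    estimates = [x]
    for z in observations[1:]:
        p = p + q                 # predict: uncertainty grows slightly
        k = p / (p + r)           # Kalman gain: trust in the new observation
        x = x + k * (z - x)       # update estimate toward the observation
        p = (1 - k) * p           # uncertainty shrinks after the update
        estimates.append(x)
    return estimates
```

Unlike a fixed window, the filter weighs each new observation against everything seen so far, with the balance set by the assumed model (`q`, `r`).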
Smoothing implies using information from neighboring samples to change the relationship between neighboring samples. For finite vectors, at the ends, there is no neighboring information on one side. Your choices are: don't smooth/filter the ends; accept a shorter smoothed vector; make up data and smooth with that (how well this works depends on the accuracy/usefulness of any predictions beyond the ends); or use different, asymmetric smoothing kernels at the ends (which reduces the information content of the signal there anyway).
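The end-handling choices listed above can be sketched as modes of one moving-average routine (Python for illustration; the mode names `valid`, `shrink`, and `reflect` are invented labels for the options described, not standard terms from the answer):

```python
def moving_average(signal, width=3, mode="valid"):
    """Moving average with different edge policies (illustrative).
    'valid'   : only full windows -> shorter output
    'shrink'  : truncate the window at the ends -> same length
    'reflect' : pad by mirroring the end samples -> same length"""
    half = width // 2
    if mode == "valid":
        return [sum(signal[i:i + width]) / width
                for i in range(len(signal) - width + 1)]
    if mode == "reflect":
        padded = signal[half:0:-1] + list(signal) + signal[-2:-2 - half:-1]
        return [sum(padded[i:i + width]) / width
                for i in range(len(signal))]
    # 'shrink': average over whatever part of the window exists
    return [sum(signal[max(0, i - half):i + half + 1]) /
            (min(len(signal), i + half + 1) - max(0, i - half))
            for i in range(len(signal))]
```

This also answers the question above about ending up with fewer than 128 elements: that is exactly the `valid` policy, and the other modes are the standard ways to keep the output the same length.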
You can find complete MATLAB code for a moving-average smoothing filter with varying numbers of taps at www.gaussianwaves.com/2010/11/moving-average-filter-ma-filter-2/
Others have mentioned how you do smoothing, I'd like to mention why smoothing works.
If you properly oversample your signal, it will vary relatively little from one sample to the next (sample = timepoints, pixels, etc), and it is expected to have an overall smooth appearance. In other words, your signal contains few high frequencies, i.e. signal components that vary at a rate similar to your sampling rate.
Yet, measurements are often corrupted by noise. In a first approximation, we usually consider the noise to follow a Gaussian distribution with mean zero and a certain standard deviation that is simply added on top of the signal.
To reduce noise in our signal, we commonly make the following four assumptions: noise is random, is not correlated among samples, has a mean of zero, and the signal is sufficiently oversampled. With these assumptions, we can use a sliding average filter.
Consider, for example, three consecutive samples. Since the signal is highly oversampled, the underlying signal can be considered to change linearly, which means that the average of the signal across the three samples equals the true signal at the middle sample. In contrast, the noise has mean zero and is uncorrelated, so its average should tend to zero. Thus, we can apply a three-sample sliding-average filter, replacing each sample with the average of itself and its two adjacent neighbors.
Of course, the larger we make the window, the more the noise will average out to zero, but the less our assumption of linearity of the true signal holds. Thus, we have to make a trade-off. One way to attempt to get the best of both worlds is to use a weighted average, where we give farther away samples smaller weights, so that we average noise effects from larger ranges, while not weighting true signal too much where it deviates from our linearity assumption.
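A weighted window of that kind, for instance Gaussian weights normalized to sum to one, might be sketched like this (Python for illustration; the width and sigma values are arbitrary assumptions):

```python
import math

def gaussian_kernel(width, sigma):
    """Gaussian weights over an odd-width window, normalized to sum to 1,
    so a constant signal passes through unchanged."""
    half = width // 2
    w = [math.exp(-(i * i) / (2.0 * sigma * sigma))
         for i in range(-half, half + 1)]
    total = sum(w)
    return [x / total for x in w]

def smooth_weighted(signal, weights):
    """Weighted moving average over interior samples ('valid' output)."""
    half = len(weights) // 2
    return [sum(w * signal[i + j - half] for j, w in enumerate(weights))
            for i in range(half, len(signal) - half)]
```

The trade-off discussed above lives in `sigma`: a small sigma concentrates weight on the center sample (less noise suppression, less signal distortion), while a large sigma approaches a uniform average over the window.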
How you should put the weights depends on the noise, the signal, and computational efficiency, and, of course, the trade-off between getting rid of the noise and cutting into the signal.
Note that there has been a lot of work done in the last few years to allow us to relax some of the four assumptions, for example by designing smoothing schemes with variable filter windows (anisotropic diffusion), or schemes that don't really use windows at all (nonlocal means).