In our wavetable series, we discussed what size our wavetables needed to be in order to give us an appropriate number of harmonics. But since we interpolated between adjacent table entries, the table size also dictates the signal-to-noise ratio of playback: a bigger (and therefore more oversampled) table gives lower interpolation error, and less noise. We use "signal-to-noise ratio", SNR for short, as our metric for audio quality.

SNR has a precise definition—it’s the RMS value of the signal divided by the RMS value of the noise, and we usually express the ratio in dB. We’ll confine this article to sine tables, because they are useful and the ear is relatively sensitive to the purity of a sine wave.

We could derive the relationship of table size and SNR analytically, but this article is about measurement, and the measurement approach is easily extended to other types of waveforms and audio.

### Calculating SNR

To calculate SNR, we need to know which part of the sampled audio is signal and which part is noise. Since we're generating the audio, it's easy to know each. For example, if we interpolate our audio from a sine table, the signal is the best-precision sine calculation we can make, and the noise is that minus the wavetable-interpolated version. RMS is root mean square: square all the samples, average them, then take the square root of that average. We do that for both the signal and the noise, sample by sample, divide the two, and convert the ratio to dB. The greatest error will fall somewhere between adjacent table entries. Halfway is a good guess for a sine, but we can easily take more measurements per segment and see.

It ends up that the table size can be relatively small for an excellent SNR with linear interpolation. This shouldn't be surprising: a sine wave is smooth, so the error of drawing a line between two points shrinks quickly as the table grows. A 512-sample table is ample for direct use in audio. It yields a 97 dB SNR. While some might think that's fine for 16-bit audio but not so impressive for 24-bit, a closer look reveals just how good that SNR is.

Keep in mind, this is a ratio of signal to noise. While the noise floor is 97 dB below the signal, that's not the same as saying we have a noise floor of -97 dB. The noise floor is -97 dB only when the signal is 0 dB (actually, since this is RMS, a full-code sine wave is -3 dB and the noise is -100 dB). But people don't record and listen to sine waves at the loudest possible volume. When the signal is -30 dB, the noise floor is -127 dB. And when there's no signal, there's no noise at all.

However, if that’s still not good enough for you, every doubling of the table size yields a 12 dB improvement.
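For the record, the measured numbers agree with a quick error analysis. This is a sketch using the standard worst-case bound for linear interpolation of a twice-differentiable function over a step h:

```
|err| <= max|f''| * h^2 / 8
```

For a unit sine, max|f''| = 1 and h = 2π/N, so the peak error is (2π/N)²/8, about 1.9e-5 for N = 512, consistent with the 97 dB measurement. And since the error scales as 1/N², doubling the table size cuts the error by a factor of four: 20·log10(4), about 12 dB.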

### Code

Here’s a simple C++ example that calculates the SNR of a sine table. Set the tableSize variable to check different table sizes (typically a power of two, though that’s not enforced). The span variable is the number of measurements from one table entry to the next. You can copy, paste, and execute this code in an online compiler (search for “execute c++ online” for many options).

```
#include <iostream>
#include <cmath>

#if !defined M_PI
const double M_PI = 3.14159265358979323846;
#endif

using namespace std;

int main(void) {
    const long tableSize = 512;   // number of table entries
    const long span = 4;          // measurements per table segment
    const long len = tableSize * span;
    double sigPower = 0;
    double errPower = 0;
    double sin0 = 0, sin1 = 0;    // segment endpoints, set each time we hit a table entry
    for (long idx = 0; idx < len; idx++) {
        long idxMod = idx % span;
        // best-precision signal value
        double sig = sin((double)idx / len * 2 * M_PI);
        if (!idxMod) {
            // at a table entry: grab this endpoint and the next
            sin0 = sig;
            sin1 = sin((double)(idx + span) / len * 2 * M_PI);
        }
        // noise is the linearly interpolated value minus the true value
        double err = (sin1 - sin0) * idxMod / span + sin0 - sig;
        sigPower += sig * sig;
        errPower += err * err;
    }
    sigPower = sqrt(sigPower / len);  // RMS of signal
    errPower = sqrt(errPower / len);  // RMS of noise
    cout << "Table size: " << tableSize << endl;
    cout << "Signal: " << 20 * log10(sigPower) << " dB RMS" << endl;
    cout << "Noise: " << 20 * log10(errPower) << " dB RMS" << endl;
    cout << "SNR: " << 20 * log10(sigPower / errPower) << " dB" << endl;
}
```

### Quantifying the benefit of interpolation

This is a good opportunity to explore what linear interpolation buys us. Just change the error calculation line to “double err = sin0 - sig;”, and set span to a larger number, like 32, to get more readings between samples. Without linear interpolation, the SNR of a 512-sample table is about 43 dB, down from 97 dB, and we gain only 6 dB per table doubling.

You can extend this comparison to other interpolation methods, but it’s clear that linear interpolation is sufficient for a sine table.

### Extending to other waveforms

OK, how about other waveforms? A sawtooth wave is not as smooth as a sine, so as you might expect, it takes a larger table to yield a high SNR. Looking at it another way, the sawtooth is built from a sine fundamental plus harmonics. The second harmonic is at half the amplitude, which alone would contribute half the signal and half the noise, but it’s also at double the frequency, the equivalent of half the sine table size and therefore 12 dB worse than the fundamental taken alone. It’s a little more complicated than just summing the errors of the component sines, though, because positive and negative errors can cancel.

But the measurement technique is basically the same as with the sine example. The signal would be a high-resolution bandlimited sawtooth (not a naive sawtooth), and the noise would be the difference between that and the interpolated values from your bandlimited sawtooth table. It’s left to you as an exercise, but you may be surprised at the poor numbers of a 2048- or 4096-sample table in the low octaves (where there is no oversampling). But again, the noise occurs only when you have signal, particularly when you have a bright waveform, and it remains that far below the signal at any amplitude. It’s still hard to hear the noise through the signal!

Checking the SNR of a wavetable generated by our wavetable oscillator code is a straightforward extension of the sine table code. For a wavetable of size 2048 and a given number of harmonics, for instance, create a high-resolution reference table of size 2048 times span; that reference is our “signal”. Then subtract each entry of it from the corresponding interpolated value to get the “noise”. For instance, if tableSize is 2048 and span is 8, create a table of 16384 samples. For each signal sample *n*, from 0 to 16383, compare it with the linearly interpolated value between span points (compare samples 0-7 with the corresponding linear interpolation of samples 0 and 8, and so on, using modulo or counters).

It’s more code than I want to put up in this article, especially if I want to give a lot of options for waves and interpolations, but it’s easy. You might want to make a class that lets you specify a waveform, including the number of harmonics and wavetable size, and creates the waveform. Create a function to do the linear interpolation (“lerp”), and possibly others (make it a class in that case); input the wavetable and span, and output the computed signal and noise numbers. Then main simply makes one call to build the waveform and another to analyze it, and displays the results.

I would be very interested in hearing about some other interpolators. Also, at what point do you think it would not be worth the cost?

Unrelated question:

I have been using Reaktor for the last decade, and I was wondering about coding my Reaktor instruments outside of Reaktor. What would I do that in? (Would I want to look into learning JUCE, or something?) I don’t know what the next step forward is…

Anyway, thanks so much for doing what you do!

Tough question to answer; it depends on so many things. Ultimately, you have to decide what your requirements are (such as SNR), and achieve them by the most economical means, making tradeoffs if needed. For instance, bandlimited interpolation, perhaps by a Kaiser-windowed sinc function, is a great way to go if signal fidelity is most important, but it’s processor intensive and requires a substantial signal delay. You might try a site like KVR Audio (the DSP and Plugin Development forum) for specific recommendations.

For putting your ideas in code, yes, I’d use a plugin framework of some type, JUCE for example. It will save you from spending a lot of time learning and implementing the plugin format details (AU, VST), the audio and MIDI subsystems (Core Audio, ASIO), the OS file system, drawing, threads, etc. Most of your coding will then be the user interface and the audio processing code.