How many bits can you hear? video

A listening test in video form. On the plus side, it’s helpful to see the spectrum as it plays, and a peak meter showing how the sweeps compare to each other and to the announcing voice. On the minus side, YouTube drops my 24-bit uncompressed source down to 16-bit. Worse, it somehow maintains the minimum 16-bit level for all subsequent sweeps—if you can hear the 16-bit sweep, you’ll hear the rest too. So, for the quieter sweeps, please reference the audio from A listening test.

Tip: I’ve included the audio from A listening test in a player below the video. The audio is identical, but starts about two seconds later in the video. You can start the video, and as soon as the audio (or meter and spectrogram color) appears, start the audio player, and mute the video. As long as you’re within a second or so, it will match up fine with the video.


A listening test

Click to play in your browser, or right-click-download to play as you wish (24-bit .wav file):

multi-level sweeps.wav

I call this a listening test, because it involves your ears, equipment, and even environment. Any of these can influence your limits of hearing. There are practical limits to how quiet your environment is—a noisy street, nearby computer fans, air conditioning—we all strive to lower the background noise. Fortunately, electronic equipment is very low noise at affordable cost, but there is a minimum noise floor that can’t be eliminated at any cost, due to physical issues like Johnson-Nyquist noise and shot noise. Your ears have limitations of the minimum energy required to deflect your eardrums, as well as your own human noise floor, which can include age- and injury-related issues such as ringing, in addition to basics like breathing and blood flow.

The audio test file repeats the same sweep signal, dropping in volume by one-bit increments. There are many types of test tones I could have used—a sinusoidal sweep or a series of tones for each pass, dithered or not, or noise bursts.

I chose to again create a purely digital signal (see Perspective on dither). The advantages are that it’s very easy to hear; it has higher harmonic content than sine waves, and it won’t compete directly with the noise of your electronics for your attention at low levels, as a noise test signal will. If it’s audible to you, in your environment and on your equipment, you’ll know with confidence whether you are hearing it or not. The fact that it’s digital means there is no inherent distortion or aliasing—no quantization error, since all changes line up exactly with the sample periods—and no need to add dither noise.

The audio file announces each sweep by effective sample size. The signal amplitude is two steps, peak to peak—one lsb of the effective sample size positive, and one negative. That’s twice the minimum amplitude possible, but it’s a good choice because it matches things like the dither level for that sample size.

For instance, “5-bit” is announced, followed by a signal that is 2 steps in amplitude, peak to peak, out of 31 possible steps (there are 2^5 levels, or 2^5-1 steps). By the time it gets to “24-bit”, it’s 2 steps out of 16,777,215! A new sweep starts every five seconds. I used the Mac OS text-to-speech feature, unaltered from its default level. You may be annoyed that it keeps you from turning up the volume as the sweeps get quieter, but that would be cheating the test anyway, and wouldn’t give you a true idea of the relative levels. I could supply a file with only the sweeps, which is handy for finding where your electronics’ noise floor overwhelms the signal, but I fear someone would damage their hearing or equipment, should something unexpected happen. Trust me, all electronics has a noise floor far above the 24-bit sweep, but if you want that minimum sweep, you can get it from the Perspective on dither article.
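For a rough sense of the numbers (my arithmetic, not from the file): the sweep’s peak-to-peak amplitude relative to full scale is

$$20\log_{10}\!\left(\frac{2}{2^{N}-1}\right)\ \text{dB} \;\approx\; -23.8\ \text{dB at 5-bit},\quad -138.5\ \text{dB at 24-bit}$$

so the quietest sweep sits nearly 140 dB below the full-scale announcements.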

There is no quiz attached to this—it’s for you to explore your limits on your own. I hope this helps your perspective!


Biquad calculator v3

The latest version of the biquad calculator. It also takes on the functionality of the frequency response grapher:

[Interactive biquad calculator: sample rate (Hz), Fc (Hz), Q, gain (dB), a coefficients (zeros), b coefficients (poles)]

bigger: Yes.

more filters: I probably won’t go deep into allpass filters, but people ask about calculating their coefficients from time to time, so here it is. I’ve also added first-order filters for comparison.

phase plot: In earlier versions of the calculator, phase wasn’t important—typically we’re interested in the amplitude response and live with whatever phase response comes with it. But in adding the allpass filter type, phase is everything. It’s also good to know for other filter types, and for plotting arbitrary coefficients.

frequency response grapher: Edit or paste in coefficients and complete the edit with tab or a click outside the editing field to plot them. You can change the plot controls, but if you change a filter control, the calculator resumes acting as a biquad calculator.

For instance, clear the b coefficients, and place this sequence into the a coefficients: 1,0,0,0,0,1. Then click on the graph or anywhere outside the edit field to graph it. That’s the response of summing a signal with a copy of it delayed by five samples, a simple FIR filter—a comb filter.
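If you’d like to try the same filter in code, here’s a minimal sketch (the function name is mine, for illustration)—summing a signal with a copy of it delayed by five samples:

#include <vector>

// y[n] = x[n] + x[n - 5]: a feedforward comb filter
std::vector<float> comb5(const std::vector<float> &x) {
    std::vector<float> y(x.size());
    for (size_t n = 0; n < x.size(); ++n)
        y[n] = x[n] + (n >= 5 ? x[n - 5] : 0.0f);
    return y;
}

The nulls in the plot fall where the delayed copy arrives half a cycle out of phase.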

Because the calculator can also plot the response of arbitrary coefficients, the biquad calculator now displays the normalized b0 coefficient (1.0)—which you can ignore in a typical biquad implementation.

The coefficient fields accept values separated by almost anything—commas, spaces, new lines, for instance. They also ignore letters and values followed by “=”. You can use “a0 = 0.971, a1 = 0.215…”, for instance. Even scientific notation is accepted (“1.03e4”).


Time resolution in digital audio

I’ve been involved in several discussions on timing resolution in digital audio recently—honestly, I never knew it was a concern before. For example, in a video on MQA, the host explained that standard audio sample rates (44.1 and 48 kHz) were unable to match human perception of time resolution. He gave little detail, but the human limits for time resolution apparently came from a study of how finely an orchestra conductor could discern timing events. At least, that’s the impression I got, with little said and no references given. But the main point is that there was a number—7 µs (or a range of 5-10 µs)—for human timing resolution.

The argument against lower sample rates was that 44.1 kHz has a period (1/SR) of 22.68 µs, about three times larger than this supposed resolution of the ear. It was also noted that a sample rate of 192 kHz would match the ear’s ability.

However, this is a fundamental misunderstanding of how digital audio works. Audio events don’t start on sample boundaries. You can advance a sine wave—and by extension, any audio event—in time by much smaller increments.
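To see this, here’s a minimal sketch (plain C++, names mine): the same sine computed twice, the second with a 1 µs time offset—a small fraction of the 48 kHz sample period—which the samples represent with no difficulty.

#include <cmath>
#include <cstdio>

int main() {
    const double pi = 3.14159265358979323846;
    const double sampleRate = 48000.0, freq = 1000.0;
    const double offset = 1e-6;  // 1 µs shift—about 1/21 of the ~20.8 µs sample period
    for (int n = 0; n < 8; ++n) {
        double t = n / sampleRate;
        printf("%9.6f  %9.6f\n", sin(2 * pi * freq * t),
                                 sin(2 * pi * freq * (t + offset)));
    }
    return 0;
}

Both columns are valid sample sequences; the second reconstructs to the same sine, shifted by exactly 1 µs.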

But certainly, since digital audio uses finite values, there must be some limitation, no? In a sense, there is, but it’s based on bit depth. This should be intuitive—if a signal gets small enough, below one bit, it won’t register in the digital domain and we cannot know what time it happened. In his excellent article, Time resolution of digital audio, Mans Rullgard derives an equation for time resolution. The key finding is that it depends only on the number of bits in a sample and the audio frequencies involved, and has no connection to sample rate. And 16-bit audio yields far greater time resolution than we can resolve as humans. Please read the article for details.

Here’s where I tell you it doesn’t matter

Put plainly:

There is no time resolution limitation for dithered audio at any bit depth. Not due to sample rate, not due to bit depth.

The linked article shows why sample rate isn’t a factor, and that the time resolution is far better than the 7 µs claim. I’ll take that further and show you why even bit depth isn’t a factor—in dithered audio.

I wrote earlier that, intuitively, we can understand that if a signal is below one bit, it won’t register, and we could not know when it happened from the signal information. Does the topic of sub-bit-level error sound familiar? Isn’t that what we solved with dither? Yes, it was…

Proof by demonstration

We typically dither higher resolution audio when truncating to 16-bit. We do this to avoid audible quantization errors. This supposed timing resolution issue is a quantization issue—as Mans’ article points out. I assert that dither fixes this too, and it’s easy to demonstrate why.

You can do this experiment in your DAW, or an editor such as Audacity. It’s easiest to discuss the most common dither, TPDF dither, which simply adds a low, fixed level of white noise (specifically, triangular PDF, which is just random plus random—still sounds like white noise).
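For the curious, here’s a minimal sketch of that quantization step (my own helper, assuming samples normalized to ±1.0—not taken from any particular editor):

#include <cmath>
#include <cstdint>
#include <random>

// quantize a normalized (±1.0) sample to 16 bits with TPDF dither
int16_t ditherTo16(double sample, std::mt19937 &gen) {
    std::uniform_real_distribution<double> r(-0.5, 0.5);
    double tpdf = r(gen) + r(gen);            // random plus random: triangular PDF, ±1 lsb peak
    double scaled = sample * 32767.0 + tpdf;  // dither added at the new lsb size
    long q = std::lround(scaled);             // quantize to the nearest step
    if (q > 32767) q = 32767;                 // clamp to the 16-bit range
    if (q < -32768) q = -32768;
    return (int16_t)q;
}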

Take any 24-bit audio file, and dither it (TPDF) to 16-bit. Then subtract one from the other in your DAW or editor, with each file on a separate track. If you did this right and didn’t change boundaries or alignment, both will be lined up exactly. Keep both track faders at 0 dB. Invert one—you probably have a “trim” plugin or something that lets you invert the signal. Now play both together. You should hear white noise. Actually, you’ll probably hear nothing at all, unless you’re monitoring very loudly. Be careful, but if you turn up your sound system you’ll hear faint white noise for the duration of the audio.

This is expected, of course—the whole idea of dither is to add noise to decorrelate quantization error from the music.

Now, think about what this means. The difference between the dithered 16-bit track and the original is white noise. If you record this white noise onto a third track via bussing, and play just the 24-bit original and the noise track, it is indistinguishable from the dithered 16-bit track. It’s basic mathematics: A – B = n, therefore B = A – n. (For this test to be 100% mathematically correct, you’ll need to invert the resulting noise track. But practically speaking, the noise isn’t correlated, so it will sound the same either way—invert it if it worries you.)

Let’s run that by again. The 16-bit dithered audio is precisely a very low level (typically -93 dB full-scale rms) white noise floor summed with the original 24-bit audio. If you don’t hear timing errors from the 24-bit audio, you won’t hear them from the same audio with low-level white noise added—which means you won’t hear them from dithered 16-bit audio. Ever, under any circumstances.

Consider that the same is true even with 12-bit audio, or 8-bit audio—even 4-bit. If you prefer, though, you could say that there are terrible timing resolution errors in 4-bit audio, but all that dither noise is drowning them out!


Amplitude Modulation Principles and Interactive Widget video

This video demonstrates important uses of AM—and serves as a demonstration of using the AM widget.


Amplitude Modulation Deep and Fast video

Here’s Amplitude Modulation Deep and Fast, a not too lengthy video that gets to the roots of what amplitude modulation does mathematically. I’ve referred to AM’s role in sampling theory in past articles, and its relatively simple math makes understanding sampling much easier than traditional explanations. And this will serve as the primary math basis for an upcoming video on sampling theory.

Fun fact: AM’s ability to convert a multiplication to simpler addition/subtraction was exploited in global navigation a few hundred years ago, in the couple of decades before the invention of logarithms displaced it.


AM widget

[Interactive AM widget: frequency (Hz) and amplitude (%) controls for two sets of three sinusoids, A1–A3 and B1–B3]

Experiment with amplitude modulation with the AM widget. Here’s a diagram of how it works: two sets of three summed sinusoids simulate two signals, and the two are multiplied together for amplitude modulation:

Tremolo: First, simulate a musical signal with three harmonics by setting A1 to 100 Hz at 60%, A2 to 200 Hz at 50%, A3 to 300 Hz at 40%. With all B amplitudes at 0%, the result is zero. Reveal the signal by setting B1 to 0 Hz at 100%. Now set B2 to 10 Hz, and bring up its amplitude—you’ll see the sidebands, responsible for the richness of the tremolo effect, created around each signal frequency.

Ring modulation: Continue with the same settings as the tremolo simulation, but remove B1 by setting its amplitude to 0%. Slide the B2 frequency up to simulate “ring” modulation (balanced modulation) by a sine wave at various frequencies. As you move up in frequency, the lower sidebands move downwards, but as they cross zero they seem to change direction. That’s because there is no practical difference between negative and positive frequencies—sinusoids look the same going in either direction. Although similar to tremolo, the original signal doesn’t appear in the output, only the sidebands, since there is no DC (0 Hz) offset.
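The sidebands in these experiments follow directly from the product-to-sum identity—multiplying two sinusoids leaves only the sum and difference frequencies, at half amplitude:

$$\cos(2\pi f_1 t)\cos(2\pi f_2 t) = \tfrac{1}{2}\cos\!\big(2\pi(f_1 - f_2)t\big) + \tfrac{1}{2}\cos\!\big(2\pi(f_1 + f_2)t\big)$$

The DC term in the tremolo case simply adds a scaled copy of the original signal alongside those sidebands, which is why removing it here leaves only the sidebands.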

AM radio: For standard AM radio, we must keep the signal positive by adding a DC bias. So we’ll need to set one of the frequencies to 0 Hz—since the cosine of 0 is 1, that gives us our bias—and we can use the other two frequencies to simulate the signal, the radio program material. Start with A1 at 0 Hz, 90%. Set A2 to 30 Hz at 50%, and A3 to 50 Hz at 40%. Now set B1 to 200 Hz, 100%, to simulate a radio transmission at a station frequency of 200 Hz (yes, very low, but we need to stay on the chart!). Try a second transmission by setting B2 to 400 Hz, 60%, and a third with B3 at 700 Hz, 80%. These would be three radio stations broadcasting the same signal. You can change A2 and A3 to simulate changing broadcast material, and see the result in each “station” simultaneously. If you change the frequencies too much, you’ll see them spill into the areas of the other stations—this is why radio stations have band limits on broadcasts, so they don’t disrupt other stations.

Digital audio: We can simulate the spectrum of digital audio—this is great for understanding how aliasing behaves. We’ll simulate a signal with three frequencies, at a sample rate of 500 Hz, which means our highest allowable frequency is 250 Hz. First, construct the signal by setting A1 to 30 Hz at 100%, A2 to 60 Hz at 80%, and A3 to 90 Hz at 70%. Sampling modulates the signal with a pulse train (the “P” in PCM). It would take infinite cosines to simulate an ideal pulse train, but we need only the first few to get the basic idea. Set B1 to 0 Hz at 50%. This is the offset that keeps the pulse train positive (alternating between 0 and 1) in the time domain. In the frequency domain, it’s the component that includes the signal in the result. Next set B2 to 500 Hz at 100%. This is the fundamental (first harmonic) of the sampling frequency. Finally, set B3 to 1000 Hz—the second harmonic. A perfect pulse train would have all integer multiples of the sampling frequency, but we have enough to fill the display.

Consider that graph for a moment, and how conversion back to analog requires a lowpass filter to remove everything above 250 Hz. Now use the A3 frequency slider to move its component up through the spectrum. Notice that while it moves up, the corresponding sideband above moves down. If you raise A3 above 250 Hz, its “alias” is now moving downward in the audio band beneath 250 Hz. That’s why we need to avoid producing frequencies above half the sample rate—it results in aliasing.
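For components pushed into the range between half the sample rate and the sample rate, the alias lands at the mirror image around half the sample rate:

$$f_{\text{alias}} = f_s - f \qquad \text{e.g., } f = 300\ \text{Hz at } f_s = 500\ \text{Hz} \;\Rightarrow\; f_{\text{alias}} = 200\ \text{Hz}$$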


Further thoughts on wave table oscillators

I see regular questions about wave table oscillators in various forums. While the process is straightforward, I sympathize that it’s not so simple to figure out what’s important if you really want to understand how it works. For instance, the choice of table size and interpolation—how do these affect the oscillator quality? I’ll give an overview here, which I hope helps in thinking about wave table oscillators.

Generating a signal

With a goal to generate a tone, we have an idea that instead of computing a tone for an arbitrary amount of time, we can save a lot of setup time and memory by computing just one cycle of the tone at initialization time, storing it in a table, and playing it back repeatedly as needed.

This works perfectly and can play back any wave without aliasing. The problem is that the musical pitch will be related to the number of samples in the table and the sample rate. Even if we made a table for every possible pitch we intend to play, the tuning will be off for many intended notes, because we’re restricted to an integer number of samples.

Tuning the signal

We can solve the tuning problem, and avoid building a lot of tables, by resampling a single-cycle wave table—the equivalent of slowing down or speeding up the wave. The problem with slowing down the wave is that waveforms with infinite harmonics, such as sawtooth, will have an increasingly noticeable drop in high end the more we pitch them down. The easy solution is to start with a table that’s long enough to fit all the harmonics we can hear, for the lowest note we need, and only change pitch in the upward direction. To pitch up, we use an increment greater than 1 as we step through the table. To get all possible pitches, we step by a non-integer increment—2.0 pitches the tone up an octave, 1.5 pitches it up a fifth. Of course, this means we need to interpolate between samples in our wave table; how we do this is a major consideration in a wave table oscillator.
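A minimal sketch of that inner loop (my names, not the WaveTableOsc source): a phase accumulator steps through the table by a possibly non-integer increment, and linear interpolation fills in between table entries.

#include <vector>

// one output sample: step through a single-cycle table with linear interpolation
float nextSample(const std::vector<float> &table, double &phase, double increment) {
    int i0 = (int)phase;
    int i1 = i0 + 1;
    if (i1 >= (int)table.size())    // wrap the second interpolation point
        i1 = 0;
    float frac = (float)(phase - i0);
    float out = table[i0] + frac * (table[i1] - table[i0]);
    phase += increment;             // 2.0 = up an octave, 1.5 = up a fifth
    if (phase >= table.size())
        phase -= table.size();
    return out;
}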

Aliasing

The problem with pitching in the upward direction is aliasing, as harmonics are raised above half the sample rate. We could implement a nice lowpass filter to remove the offending harmonics before the interpolation. But a cheaper solution is to pre-compute filtered tables, and choose the appropriate table as needed, depending on the target pitch. This is why my wave table oscillator uses a collection of tables. Read the wave table oscillator series for the details; I won’t cover this aspect further here. Just understand that we need to remove harmonics before resampling, for precisely the same reason we use a lowpass filter before the analog-to-digital converter when sampling. The multi-table approach simply lets us do the filtering beforehand to make the oscillator more efficient.

Choosing table size

As noted previously, table size is determined primarily by a combination of the lowest pitch needed and how many harmonics we need. For instance, we’d like our sawtooth wave to have all harmonics up to as high as we can hear (let’s say 20 kHz). For a pitch of 20 Hz, that means 1000 harmonics (20, 40, 60, 80…20000). The sampling theorem tells us we need something over two samples for the highest frequency component, so clearly we need a table size of at least 2001. For convenience, let’s make that 2048, a power of two.

Further, if we want to save memory for our tables that serve higher frequencies, we can cut the table size in half for each octave we move up. A table serving 40 Hz could be 1024 samples; 80 Hz, 512 samples; and so on.

The choice of interpolation method

If we have a very good interpolator—ideally a “sinc” interpolator—our job is done. But such interpolators are computationally expensive, and we might want to run many oscillators at the same time. To reduce overhead per sample, we can implement a cheaper interpolator. Choices include no interpolation (just grabbing the nearest table value), linear interpolation (estimating the sample with a straight line between the two adjacent table values when the index falls between them), and more complex interpolation that requires three or more table points. Cutting to the chase, we’d like to use linear if we can. It produces significantly better results than no interpolation, with only a small amount of additional computation per sample.

But by giving up on the idea of using an ideal interpolator, we introduce error. Linear interpolation works extremely well for heavily oversampled signals. Recall that using a 512-sample sine table with linear interpolation, the results are as good as we have any chance of hearing—error is -97 dB relative to the signal. Sine waves are smooth, and as we have more samples per cycle, the curve between samples approaches a straight line, and linear interpolation approaches perfection.
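As a rough sanity check on that number (my arithmetic): the worst-case error of linear interpolation with step $h$ is bounded by $h^2 \max|f''|/8$, and for a unit sine in a 512-sample table,

$$e_{\max} = \frac{1}{8}\left(\frac{2\pi}{512}\right)^2 \approx 1.9\times10^{-5} \approx -94\ \text{dB}$$

which is consistent with the -97 dB figure, since the average error is lower than the worst case.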

The catch is that our 2048-sample table for 20 Hz doesn’t necessarily hold a sine wave. For a sawtooth, it holds harmonic 1000 with just over two samples per cycle. Linear interpolation fails miserably at producing that harmonic—the errors cause significant distortion and aliasing for it, and for a whole lot of harmonics below it, though things improve as we move toward the lower harmonics, where the results are excellent.

Our first thought might be that we need a table size 256 times as large, to get up above 512 samples per cycle for the highest harmonic. So, 524,288 samples, just for the lowest table?

No, we cheat

Here’s where psychoacoustics saves the day. Even though our distortion and noise floor figures for the higher harmonics look pretty grim, it’s extremely difficult to hear the problem over the sweetly rendered lower harmonics. And, fortunately, oscillators we like to listen to won’t likely have a weak fundamental and a strong extremely-high harmonic with nothing in between. Natural and pleasant sounding tones are heavily biased to the lower harmonics.

Also, if we choose to keep constant table sizes, then as we move up each octave, removing higher harmonics, the tables are progressively more oversampled as a result. Constant table sizes are convenient anyway, and don’t impact storage significantly. So, at the low end we’re saved by masking, and though we lose masking as we move up in frequency, at the same time the linear interpolation improves with increasingly oversampled tables. At the highest note we intend to play, near 20 kHz, error will be something like -140 dB relative to the signal.


WaveUtils updated

WaveUtils needed only a minor change for compatibility with the WaveTableOsc update—addWaveTable changes to AddWaveTable. But I added something new while I was at it.

The original wave table articles advocated minimizing the number of tables necessary—one per octave—by allowing a reasonable amount of aliasing. Aliasing that is not only difficult to hear, but is normally removed in typical synthesizer applications by the synth’s lowpass filter.

But that is simply a choice of the individual wave tables and how much of the spectrum we’re willing to let each cover. We could use more wave tables and allow no aliasing at all.

In addition to the fillTables function, which builds the octave wave tables, I’ve added fillTables2, which accepts a minimum top frequency and a maximum top frequency. For instance, we might want to support a minimum of 18 kHz, using a value of 18000 divided by the sample rate, so that harmonics are supported to at least that frequency. If we use 0.5 for the maximum, then no aliasing is allowed. Higher values allow aliasing. For instance, a value of 0.6 allows a top frequency of 0.6 times the sample rate, or 26460 Hz at 44.1 kHz sampling. That’s 4410 Hz above half the sample rate, so aliasing can extend down to 17640 Hz (44100 − 26460). Another way to look at it: subtract the value from 1 and multiply by the sample rate to get the alias limit, (1 − 0.6) × 44100 = 17640.
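As a usage sketch only—the argument names and order here are my guesses, so check the WaveUtils source for the actual signature—a call for the 18 kHz minimum with aliasing allowed above 0.6 of the sample rate might look like:

// hypothetical call—argument list is illustrative, not from the source
double minTop = 18000.0 / 44100.0;  // harmonics supported to at least 18 kHz
double maxTop = 0.6;                // aliasing allowed down to (1 - 0.6) * 44100 = 17640 Hz
fillTables2(&osc, freqWaveRe, freqWaveIm, tableLen, minTop, maxTop);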

Here are some examples. First, the original octave tables. To understand the spectrograms, time is left to right—a 20 second sweep from 20 Hz to 20 kHz of a sawtooth. You can see the aliasing as harmonic frequencies bend down at the top, although the algorithm minimizes the aliasing with advantageous choices of switching frequencies at the highest sweep frequencies, where there is the least masking. This uses ten wave tables to sweep the ten octaves of audio from 20 Hz to 20 kHz.

I think the aliasing is masked pretty well. But if the idea of aliasing bothers you, and you want at least 18 kHz coverage, 34 wave tables will get you this, at 44.1 kHz sample rate:

Now for an asymmetrical example. If you want 18 kHz, but are willing to accept aliasing above 20 kHz, 24 wave tables will do it:

WaveUtils source


WaveTableOsc optimized

The wave table oscillator developed here in 2012 is pretty lightweight, but I never took a close look at optimization at the time. An efficient design is the number one optimization, and it already had that. I was curious how much minor code tweaks would help.

Free wrap

For instance, to avoid added complexity in the original article, I didn’t employ a common trick of extending the wave table by an extra sample in order to avoid the need for a wrap-around test in the linear interpolation. A rewrite was a good opportunity to add it, and check on the improvement. The code change was trivial—add one to the allocation size for each table, and duplicate the first sample into the extra last slot when filling them, in AddWaveTable. Then remove the bounds check in the linear interpolation, saving a test and branch for every sample generated.

// old
float samp0 = waveTable->waveTable[intPart];
if (++intPart >= waveTable->waveTableLen)
    intPart = 0;
float samp1 = waveTable->waveTable[intPart];

// new
float samp0 = waveTable->waveTable[intPart];
float samp1 = waveTable->waveTable[intPart + 1];

Factor

And, I spotted another optimization opportunity. The oscillator needs to select the appropriate wavetable each time it calculates the next sample (GetOutput), but the choice of wavetable only changes when the frequency changes. So, the selection should be moved to SetFrequency. In general use, it should make no difference—an analog-like synth would usually have continuous modulation of the oscillator, for instance. But moving the calculation to SetFrequency would be a substantial win for steady tones.

I’ll go through the changes and their contributions, but a little explanation first. Unlike most of my DSP classes that provide a new sample on each Process call, the oscillator requires two calls—GetOutput and UpdatePhase—every time. That was just a design choice. In addition, SetFrequency might also be called every time, if the modulation is constant. For a steady tone, we’d call SetFrequency once, then the other two calls repeatedly for each sample. For a sweep, all three would be called for each sample. Of course, I could trivially add a Process call that couples the existing UpdatePhase and GetOutput.
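A minimal sketch of such a wrapper (hypothetical—it’s not in the released header):

// couples the two existing per-sample calls into one
float Process(void) {
    float out = GetOutput();
    UpdatePhase();
    return out;
}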

The wave table selection code iterates from lowest to highest, until it finds the right table, so it’s slower for higher frequencies. So, we’d expect an improvement in moving that code to SetFrequency for the steady tone case, and that the amount of improvement would be greater for high tone settings. We’d expect the case of a sweep to give no change in performance, since it still requires the wave table selection on each sample.

// old
inline void WaveTableOsc::setFrequency(double inc) {
  phaseInc = inc;
}

// new: the added code was simply repositioned from GetOutput
void SetFrequency(double inc) {
  mPhaseInc = inc;

  // update the current wave table selector
  int curWaveTable = 0;
  while ((mPhaseInc >= mWaveTables[curWaveTable].topFreq) && (curWaveTable < (mNumWaveTables - 1))) {
    ++curWaveTable;
  }
  mCurWaveTable = curWaveTable;
}

After moving the table selection from GetOutput, implementing the “one extra sample” optimization gave about a 20% improvement to the steady-tone case (SetFrequency not included) on its own. Unexpectedly, changing the variable temp from double to float gave a similar individual improvement. For steady tones, these changes together yielded a 30% improvement to the loop (SetFrequency not included), and 60% for high tones.

At first test, the sweep case did not see the improvement, as the biggest change simply moved the calculation between the two routines called on each sample. I was disappointed, because I expected at least a small improvement from the other changes, but instruction pipelines can be fickle. The minor tweak of using a temporary variable to calculate mCurWaveTable changed that—yielding a 24% improvement. This is in a loop that sweeps a sawtooth exponentially through the audible range (SetFrequency, GetOutput, UpdatePhase, calculate the next frequency with a multiply, loop overhead). Basically, all the same code is executed, but apparently it worked out better for the processor. So, a great improvement overall.

Another goal of the change was to put everything in a single header file—because I like that. GetOutput was formerly not inline, so there was a small improvement moving it. I also changed the case of the function names (setFrequency to SetFrequency) to align with the way I currently write code, so it’s not a drop-in replacement for the old version.

I tried one other improvement. The wave table selection “while” loop takes longer, of course, as frequency moves to higher tables. I tried to special-case octave tables by using clever manipulation of a floating point number, essentially getting a cheap log conversion. The advantage would be a single calculation for any frequency, instead of an iterative search. It ended up not being worth it for the case of octave tables—the iterative solution was more efficient than I expected, and more importantly, it’s far more general.

Using picobench to benchmark, the new code is about 25% faster for the case of a 20-20kHz sawtooth sweep. Not a huge change, because the code was already very efficient. But as part of single-header-file rewrite without increasing complexity, that’s a pretty good bonus.

WaveTableOsc.h
