My WaveTableOsc object uses a series of wavetables that are copies of single-cycle waveforms pre-rendered at a progressively smaller bandwidth. By “replicating wavetables” I mean taking a full-bandwidth single cycle waveform, and using it to make a complete set of progressively bandwidth-reduced wavetables. In this article, I’ll demonstrate principles and source code that will let you take any source waveform suitable for the lowest octave, and build the rest of the tables automatically.

The source waveform might be any time-domain single-cycle wave—something you drew, created with splines, recorded with a microphone, calculated—or it can be a frequency-domain description—harmonics and phases. Fundamentally, we’ll use a frequency-domain description—the array pair we use in an FFT—that holds amplitude and phase information for all possible harmonics. If we start with a time-domain wave instead, we simply convert it to the frequency domain as the first step. The source code builds the bandwidth-reduced copies, converts them to the time domain, and adds them to the oscillator until it contains all necessary tables to cover the audio range without noticeable aliasing.

We’ll build a new table for each octave higher, by removing the upper half of the previous table’s harmonics. If it’s not clear why, consider that all harmonics get doubled in frequency when shifting pitch. A 100 Hz sawtooth wave has harmonics spaced 100 Hz apart; shifting it up an octave to 200 Hz shifts the spacing to 200 Hz apart, so half of the harmonics that were below Nyquist move above.
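The halving also tells you how many tables a full-bandwidth source will need. A quick sketch (the `tableCount` helper is mine, purely illustrative, not part of WaveUtils):

```cpp
#include <cassert>

// Count the wavetables that octave-by-octave replication will build:
// one per halving of the harmonic count, until no harmonics remain.
int tableCount(int maxHarmonic) {
    int count = 0;
    while (maxHarmonic) {
        ++count;            // one table for this bandwidth
        maxHarmonic >>= 1;  // the next octave keeps only the lower half
    }
    return count;
}
```

For a 2048-sample table (up to 1023 usable harmonics once DC and Nyquist are zeroed), that works out to ten tables.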

### The code

In principle, the code is simple:

```
repeat while the wavetable has harmonic content:
    add wavetable to oscillator
    remove upper half of the harmonics
```

This is one of those places where I’d like to be more sophisticated, and use a faster “real” FFT and avoid the silliness of filling in a mirror-image array, but it’s more code and clutter—I’m sticking with the generic FFT from the WaveTableOsc source code. I’ve also reused the makeWaveTable function, so the new code here is minimal. The end product is WaveUtils.h and WaveUtils.cpp, plain C utility functions. I’d probably wrap the functions into a manager class that dealt with WaveTableOsc and wavetable classes, if I were focusing on a specific implementation and not an instructive article.

The two reused functions are used internally, and we need just one new function that we call directly—fillTables:

```cpp
void fillTables(WaveTableOsc *osc, double *freqWaveRe, double *freqWaveIm, int numSamples) {
    int idx;

    // zero DC offset and Nyquist
    freqWaveRe[0] = freqWaveIm[0] = 0.0;
    freqWaveRe[numSamples >> 1] = freqWaveIm[numSamples >> 1] = 0.0;

    // determine maxHarmonic, the highest non-zero harmonic in the wave
    int maxHarmonic = numSamples >> 1;
    const double minVal = 0.000001; // -120 dB
    while ((fabs(freqWaveRe[maxHarmonic]) + fabs(freqWaveIm[maxHarmonic]) < minVal) && maxHarmonic)
        --maxHarmonic;

    // calculate topFreq for the initial wavetable
    // maximum non-aliasing playback rate is 1 / (2 * maxHarmonic), but we allow
    // aliasing up to the point where the aliased harmonic would meet the next
    // octave table, which is an additional 1/3
    double topFreq = 2.0 / 3.0 / maxHarmonic;

    // for subsequent tables, double topFreq and remove upper half of harmonics
    double *ar = new double [numSamples];
    double *ai = new double [numSamples];
    double scale = 0.0;
    while (maxHarmonic) {
        // fill the table in with the needed harmonics
        for (idx = 0; idx < numSamples; idx++)
            ar[idx] = ai[idx] = 0.0;
        for (idx = 1; idx <= maxHarmonic; idx++) {
            ar[idx] = freqWaveRe[idx];
            ai[idx] = freqWaveIm[idx];
            ar[numSamples - idx] = freqWaveRe[numSamples - idx];
            ai[numSamples - idx] = freqWaveIm[numSamples - idx];
        }

        // make the wavetable
        scale = makeWaveTable(osc, numSamples, ar, ai, scale, topFreq);

        // prepare for next table
        topFreq *= 2;
        maxHarmonic >>= 1;
    }
    delete [] ar;
    delete [] ai;
}
```

### Defining an oscillator in the frequency domain

Here’s an example that creates and returns a sawtooth oscillator by specifying the frequency spectrum (note that we must include the mirror of the spectrum, since we’re using a complex FFT):

```cpp
WaveTableOsc *sawOsc(void) {
    int tableLen = 2048;    // to give full bandwidth from 20 Hz
    int idx;
    double *freqWaveRe = new double [tableLen];
    double *freqWaveIm = new double [tableLen];

    // make a sawtooth
    for (idx = 0; idx < tableLen; idx++) {
        freqWaveIm[idx] = 0.0;
    }
    freqWaveRe[0] = freqWaveRe[tableLen >> 1] = 0.0;
    for (idx = 1; idx < (tableLen >> 1); idx++) {
        freqWaveRe[idx] = 1.0 / idx;                    // sawtooth spectrum
        freqWaveRe[tableLen - idx] = -freqWaveRe[idx];  // mirror
    }

    // build a wavetable oscillator
    WaveTableOsc *osc = new WaveTableOsc();
    fillTables(osc, freqWaveRe, freqWaveIm, tableLen);
    delete [] freqWaveRe;
    delete [] freqWaveIm;
    return osc;
}
```

### Defining an oscillator in the time domain

If you have a cycle of a waveform in the time domain, you can create an oscillator this way—just do an FFT to convert to the frequency domain, and pass the result to fillTables to complete the oscillator:

```cpp
WaveTableOsc *waveOsc(double *waveSamples, int tableLen) {
    int idx;
    double *freqWaveRe = new double [tableLen];
    double *freqWaveIm = new double [tableLen];

    // convert to frequency domain
    for (idx = 0; idx < tableLen; idx++) {
        freqWaveIm[idx] = waveSamples[idx];
        freqWaveRe[idx] = 0.0;
    }
    fft(tableLen, freqWaveRe, freqWaveIm);

    // build a wavetable oscillator
    WaveTableOsc *osc = new WaveTableOsc();
    fillTables(osc, freqWaveRe, freqWaveIm, tableLen);
    delete [] freqWaveRe;
    delete [] freqWaveIm;
    return osc;
}
```

Here’s a zip file containing the source code, including the two examples:

### Caveat

As with our wavetable oscillator, this source code requires wavetable arrays—whether time domain or frequency domain—of a length that is a power of 2. As examined in the WaveTableOsc articles, 2048 is an excellent choice. The source code does no checking to ensure powers of 2—it’s up to you.

**Update:** Be sure to check out the improved WaveUtils, in WaveUtils updated.

Awesome thank you. I got very close to this where it sounded similar but your solution is better.

Great, glad to hear you put in the effort and got a workable solution!

Can you please explain how you take into consideration the time-domain reversal caused by doing the FFT twice? Do you just ignore this? I was looking for a complex conjugate somewhere to compensate, but then I thought “maybe it doesn’t matter”?

Actually, on this note, could you please explain the rationale behind using the imaginary ‘channel’ of the FFT to essentially determine the inverse FFT (convert from freq domain back into time-based wavetable)? I assume there’s some sort of nice DSP trick involved here, but I’m coming from a domain where I’ve always used the IFFT to get from freq to time, and I’m curious about how this works.

Additionally, can you explain how the scaling in makeWaveTable() works? I see that for the first table, it will be auto-scaled, then after that it will scale each table by the same amount. Does this need to take into consideration the loss of energy by removing harmonics?

Let me answer the second question while I can do it quickly, and I’ll reread the first question when I have more time to make sure I’m answering the right question…

I calculate the scaling factor for the “root” (first/lowest) table so that only the ratios are significant—just a matter of choice. For instance, without the scaling, if you make a sawtooth, it would have a peak-to-peak amplitude greater than that of a sine wave when both have a fundamental of 1.0. With auto-scale, the harmonic content doesn’t matter—you’ll always get the same peak-to-peak amplitude. Again, only the ratios matter—a fundamental value of 15.5 or 1.0 yields a ±1.0 sine wave when auto-scaled.

As for the remaining tables, no, you don’t want to make up for the energy loss. Note that in a non-bandlimited world, the harmonics would still be there, just moved above our threshold of hearing. So, we want to keep the remaining, audible harmonics at constant amplitudes across tables.
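The auto-scale just described amounts to peak-normalizing the first (lowest) table and reusing that factor for the rest. A minimal sketch of the idea (this `autoScale` helper is illustrative, not the article’s makeWaveTable):

```cpp
#include <cmath>

// Compute a scale factor that normalizes a table's peak to +/-1.0.
// The first (lowest) table is auto-scaled this way; the same factor is
// then applied to every subsequent table, so harmonics shared between
// tables keep constant amplitude across the set.
double autoScale(const double *table, int len) {
    double peak = 0.0;
    for (int i = 0; i < len; i++)
        peak = fmax(peak, fabs(table[i]));
    return peak > 0.0 ? 1.0 / peak : 1.0;  // fall back to 1.0 for silence
}
```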

For the first question…I’m not sure exactly what question you’re asking, but I’ll note that the FFT and the iFFT process is one and the same; in some implementations there is a difference in scaling (for example, 1/M for the FFT, M for iFFT), but in others, the algorithm is designed for unity scaling in either direction (Bracewell). That’s another reason I put in the auto-scale feature—that way you could convert from time domain to frequency domain, in order to do the band-limited replication, without concern for scaling or the original amplitude.

Hi Nigel,

I just want to clarify something about the use of the FFT in your code, as I’m starting to try to build on my understanding of FFTs rather than just taking them as a black box. I think I understand the usage of the mirror as a result of using a complex DFT and including negative frequencies etc. So the saw components are negated from N/2 onwards due to being an odd function?

David asked about the reason for the use of the imaginary array / components.

Am I correct in thinking that the sawtooth spectrum would normally be put into the imaginary/ai array, due to being built up entirely of sine frequency/imaginary components?

I know that it is possible to calculate an inverse FFT from a forward FFT implementation (not a specific inverse FFT like those available in FFTW) by swapping the real and imaginary arrays. I.e. putting the saw spectrum into the real array (cosine?) will result in the FFT filling the imaginary/ai array with the real-valued x[n] time-domain components, leaving the ar/real array as zero values afterwards?

(I’m getting this above idea after reading some of Lyons’ Understanding Digital Signal Processing book, but I may have misunderstood.)

Hence the reason the time-domain waveform that fills the wavetables is taken from the imaginary/ai array?

Would this then correspond to the reason for us filling the imaginary array with a time domain signal and zeroing the real array?

I.e. the imaginary array has an FFT performed on it, which returns the spectrum in the real array; we pass that into the FFT again as per above and generate our time-domain samples to fill the wavetables?

Apologies for the long question, but any light you could shed on your usage/implementation of the complex FFT here would be a major help.

Regards

(As always loving the site, would love to see an Ear Level Engineering take on Virtual Analogue and TPT Filters one day!)

Josh

Hi Josh,

Yes, you have it right—the saw is an odd function, hence the negative symmetry. And yes, the imaginary part corresponds to sine; if you want to specify a phase other than sine or cosine, you need a combination or value in real and imaginary.

You have the right idea about the saw waveform values ending up in the real array after running the iFFT on the harmonic values, but I want to stress that it doesn’t imply that FFT and iFFT are the same but require swapping real and imaginary parts. They are one and the same, except, possibly, for scaling (some implementations build in a symmetric scaling factor, some don’t). So if you load up sawtooth harmonic amplitudes in the imaginary part, do an (i)FFT, then repeat the FFT, you’ll get the same thing back (except scaled by a factor of the FFT size, in the case of the implementation I used here).

But, in general, you calculate the magnitude as the square root of the sum of the squares of the real and imaginary parts—it’s just that in this case the imaginary part would be zero so it doesn’t matter.

Really, there are more efficient ways to work with real-only data. Filling one side with zeros is a waste of operations. There are real FFTs (essentially rearranging the data to pack it into an FFT half the size). But in these examples, the complex FFT isn’t being run constantly, so performance is not an issue, and using a plain complex FFT kept the code simple. If you really wanted to do a lot of crunching, there are faster FFTs—some come with the OS, hardware assisted, or can be template based (allowing any size FFT to be optimized and unrolled at compile time), FFTW, etc. I just give the vanilla, yet complete on any platform, version. And again, not much benefit to be seen for a wavetable oscillator, since the wavetables are built in a flash at the beginning.

Nigel

Hi Nigel,

Thanks so much for taking the time to reply to that.

You’ve clarified a lot of things there for me.

So, just so I understand correctly: would I need to modify the code to collect the values from both the real and imaginary arrays to fill the wavetable, if I supplied the (i)FFT with non-zero spectrum values in both the real and imaginary arrays?

Or would I still just collect the values from the imaginary array to create my time-domain signal, even after applying a spectrum with both sine and cosine parts?

Should I be considering the signal I have created from mixed cosine and sine spectrum/phase harmonics to be Hermitian (cancelling the imaginary parts of the time domain)? And as a result, are the time-domain values complete (not missing any information) within the ai/imaginary array after performing the (i)FFT of my mixed-phase spectrum (due to the reversal technique you clarified earlier, swapping real and imaginary inputs)?

I’m playing around with various different spectrum values in my synth plugin and using an oscilloscope etc to view the results and would just like to understand the combination of sine and cosine / arbitrary phase harmonics.

Many Thanks again

Josh

Don’t overthink it (I know it’s easy!): If you specify a real signal in the frequency domain—and that means one that is symmetric (whether odd or even) between the positive and negative frequency halves—then an iFFT will result in a real signal. And that means that the imaginary part will be zero.
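That condition is easy to check numerically. The sketch below uses a naive O(N²) DFT of my own (purely for illustration—the article’s code uses the FFT from the WaveTableOsc source): feed it a conjugate-symmetric spectrum (here a single sine-phase harmonic, with its negative-frequency mirror) and the output is purely real.

```cpp
#include <cmath>

// Naive O(N^2) DFT, for illustration only; transforms (inRe, inIm)
// into (outRe, outIm) with the e^{-i 2 pi k j / n} kernel.
void dft(int n, const double *inRe, const double *inIm,
         double *outRe, double *outIm) {
    const double pi = 3.14159265358979323846;
    for (int k = 0; k < n; k++) {
        outRe[k] = outIm[k] = 0.0;
        for (int j = 0; j < n; j++) {
            double w = -2.0 * pi * k * j / n;
            outRe[k] += inRe[j] * std::cos(w) - inIm[j] * std::sin(w);
            outIm[k] += inRe[j] * std::sin(w) + inIm[j] * std::cos(w);
        }
    }
}

// A conjugate-symmetric spectrum (X[1] = -i, X[N-1] = +i: one
// sine-phase harmonic) transforms to a signal whose imaginary part
// is zero everywhere.
bool symmetricSpectrumGivesRealSignal() {
    const int n = 8;
    double re[n] = {0}, im[n] = {0}, outRe[n], outIm[n];
    im[1] = -1.0;
    im[n - 1] = 1.0;    // the negative-frequency mirror
    dft(n, re, im, outRe, outIm);
    for (int k = 0; k < n; k++)
        if (std::fabs(outIm[k]) > 1e-9)
            return false;
    return true;
}
```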

Is it possible to add an index array for wavetables for scanning synthesis, so that the wavetable can change over time,

like the PPG system?

Yes, absolutely. The oscillator has one dimension of wavetables, and you would add another, to implement multiple sets of wavetables per oscillator. If you want to avoid clicks as you shift through wavetables, you could crossfade (interpolate) between sets.

Nigel

Hi Nigel,

Thanks for some great tutorials! I have a little problem with my wavetable synth. When I do a frequency sweep, at the point where the wavetables switch and the number of harmonics diminishes, I get a (kind of expected) huge drop in the richness of the tone. I am aware of why and I am about to try some sort of wavetable interpolation but I am curious why yours doesn’t have the same problem? I have downloaded and run your code and it sounds great, without doing anything special at the point where the wavetables switch.

Thanks for your time

Hi Matt,

For any given oscillator frequency, you need to pick the most complete wavetable that doesn’t alias. It sounds like you are switching too soon, picking a table meant for a higher frequency range.

In order to minimize the number of tables needed, and without oversampling the oscillator to avoid aliasing, I’m taking advantage of the space between the highest frequency you can easily hear and half the sample rate, where aliasing begins. Each wavetable is tagged with the highest frequency that it’s suitable for (topFreq). Since I order the wavetables from the most complex to the least, you need to pick the first wavetable that can handle the current oscillator frequency (which is related to the phase increment).
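That selection can be sketched in a few lines (the `WaveTable` struct and `selectTable` helper below are stand-ins for illustration, not the actual WaveTableOsc internals):

```cpp
// Tables are ordered from most to least harmonically complex, each
// tagged with the highest normalized frequency it can serve.
struct WaveTable {
    double topFreq;
    // ...plus the sample data in the real oscillator
};

// Pick the first (most complex) table whose topFreq covers the current
// oscillator frequency; past the last table, just use the last one.
int selectTable(const WaveTable *tables, int numTables, double freq) {
    int idx = 0;
    while (idx < numTables - 1 && freq >= tables[idx].topFreq)
        ++idx;
    return idx;
}
```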

Nigel

Well, I tried using slightly different frequencies from yours, starting at 60 instead of 40. I calculated the number of harmonics in the same way as you: `maxPartials = round(SAMPLE_RATE / (3.0 * f) + 0.5);`

This gives me 246 for the range 60–120, which I then halve for each subsequent octave.

This gave me bad results, with the wavetable switch sounding really obvious. However, just changing the starting freq to 40, as you have it, fixes it! Not sure what I’m doing wrong there.

Using both wavetable and sample interpolation, and starting from 40Hz I have everything above 20Hz sounding acceptable.

`maxPartials = round(SAMPLE_RATE / (3.0 * f) + 0.5);`

There shouldn’t be a “round” there, since the +0.5 and truncate already does that. Still, I don’t think that’s the problem you’re seeing on its own, unless you’re seeing it mainly at the very top of the range.
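The intended calculation, as a sketch (the `maxPartials` helper here is my own naming, for illustration):

```cpp
// Highest partial kept below SR/3 for a table whose range starts at
// frequency f. Adding 0.5 and truncating to int already rounds to the
// nearest integer; an extra round() would double-round.
int maxPartials(double sampleRate, double f) {
    return (int)(sampleRate / (3.0 * f) + 0.5);
}
```

Note that at 44.1 kHz and f = 60, SAMPLE_RATE / (3·f) is exactly 245.0; the +0.5 and truncate gives 245, while wrapping that in round() lands on round(245.5) = 246—quite possibly the source of the 246 above.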

Hi Nigel, great stuff!

I’m new to DSP, so pardon the newbie question. I’m wondering if you can explain to me if there is any particular reason you are using arrays instead of some other container. Is there a performance or other reason for this? Or is it just that DSP engineers like working with C-style code for some reason? I’ve been trying out your example code in a project and have been getting stuck with memory leaks etc. I’m still learning, so no doubt this is my error, but I’m just curious about the way you’re using memory here and if you have any other thoughts about what to consider when building off of these examples. Thanks! -Nick

Hi Nick. Well, C is enormously popular, and is relatively close to the machine level—it was written to write operating systems, so it doesn’t protect you from memory, which can be good and bad. Arrays are pretty efficient. With C++, you can have arrays that have more protection (from overrunning their bounds, for instance), if that’s what you want. (And with C++ you can use template metaprogramming for efficiencies that are difficult to replicate in other languages, but that’s beyond the scope of this blog.)

Thoughts building off these examples? Yes, support for hard sync of the oscillator (maybe a future article), and that if I were to make it more general (to give strong support for an arbitrary number of oscillators, and user-creatable wavetables), I’d have a wavetable manager that supported sharing of wavetables between oscillators with reference counting…and maybe morphing between wavetables…

Nigel

Very interested in seeing your thoughts on hard sync. I’m currently using MinBLEP for this with the classic wave forms. Can MinBLEP (or a PolyBLEP) be applied to wavetables? I’d like to learn more about how to reduce the increasing CPU overhead as the frequency increases. I’ve read that a linear phase dc blocking BLEP is good but I don’t know how to implement this.

Band limited steps are a problem with arbitrary waveforms, but the general principle of mixing in a band limited transition is the right direction. Basically, if you took the difference between a perfect (or sufficiently oversampled) reset (hard sync) and that same transition that’s been lowpass filtered to a fraction of the current sample rate (as if preparing to down-sample it), you’ll be left with an oversampled transition. I don’t know what it’s called, but I’m sure I’m not the first to think of it, so it probably has a name somewhere. You can use that as a jumping-off point, essentially starting at the appropriate offset (depending on where between samples the hard-sync reset occurred), downsampling it (skipping samples), and adding those samples in with the wave.

The fundamental problem with hard sync is that while the resets occur at arbitrary places between samples, the naive implementation moves the transition to the next real sample. That timing error will move around (jitter), and that results in aliasing.

Hi Nigel,

Fantastic article.

I am currently using the wavetable code in a JUCE based synth and it sounds absolutely fantastic.

I’m quite new to DSP and am wondering whether you would be able to shed some more light on the FFT / IFFT process.

Specifically I am wondering about the arguments passed to the fft function for synthesizing waveforms and how these are calculated?

I understand the concepts of the harmonic content of the individual waveforms as far as Fourier theory and sinusoidal basis functions are concerned, but am at a bit of a loss as to how to calculate the spectrum/arguments to pass to the FFT routine.

Do you have any advice or articles you might point me towards to get a better understanding of this?

i.e how to work out the following

`freqWaveRe[idx] = 1.0 / idx;`

Is it simply the amplitude value/signal components strength that is calculated here?

Basically I’d like to start experimenting with other wave shapes, similar to modern soft synths like SuperSaws etc.

Many thanks again for the brilliant tutorials.

`freqWaveRe[idx] = 1.0 / idx;`

We’re just setting the amplitude of a given harmonic here. For a sawtooth wave, the harmonics drop off as the reciprocal of the harmonic number:

1st harmonic: 1.0 (1/1)

2nd harmonic: 0.5 (1/2)

3rd harmonic: 0.33333… (1/3)

4th harmonic: 0.25 (1/4)

…
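Other classic shapes follow the same pattern. For instance, a square wave has only odd harmonics, also falling off as 1/n; a sketch that fills a spectrum the way the sawOsc example does (Re holds the amplitudes, with the negative mirror for conjugate symmetry—the `squareSpectrum` helper is mine, for illustration):

```cpp
// Fill a complex spectrum with square-wave harmonics: odd harmonics
// only, at amplitude 1/n, mirrored (negated) into the upper half as
// in the sawOsc example. DC and Nyquist stay zero.
void squareSpectrum(double *freqWaveRe, double *freqWaveIm, int tableLen) {
    for (int idx = 0; idx < tableLen; idx++)
        freqWaveRe[idx] = freqWaveIm[idx] = 0.0;
    for (int idx = 1; idx < (tableLen >> 1); idx += 2) {  // odd harmonics only
        freqWaveRe[idx] = 1.0 / idx;
        freqWaveRe[tableLen - idx] = -freqWaveRe[idx];    // mirror
    }
}
```

Passing the result to fillTables, exactly as sawOsc does, would build the oscillator.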

Brilliant,

Thanks Nigel, I had thought that was the case but wanted to ensure I had the right idea.

I think maybe in that case I’ll try building up some other waveshapes in matlab or something similar so I can add some other shapes to your fft routine.

Thanks a lot for clarifying.

Josh

I have a question about the code in fillTables(). On the second line, you zero the sample at numSamples >> 1 in both arrays. But then in your code to determine the maxHarmonic, you decrement maxHarmonic (which is initialised to numSamples >> 1) while the values at that index sum to greater than minVal. But surely, since i[maxHarmonic] and r[maxHarmonic] have been zeroed, maxHarmonic will always equal numSamples >> 1?

In addition, to my ears this code:

```cpp
for (idx = 1; idx <= maxHarmonic; idx++) {
    ar[idx] = freqWaveRe[idx];
    ai[idx] = freqWaveIm[idx];
    ar[numSamples - idx] = freqWaveRe[numSamples - idx];
    ai[numSamples - idx] = freqWaveIm[numSamples - idx];
}
```

sounds better when the second two assignments are negated; is that how it’s supposed to be? I’m very much a beginner with DSP, so it might be that I’m doing something wrong elsewhere, but I get lots of aliasing unless I flip those signs.

On flipping the signs of those assignments: No, if the incoming arrays are right, you should not flip the signs. But look at the sawOsc function. You’ll see that the second half of the array is the negative mirror of the first. This is because real signals are conjugate symmetric. Note that the upper half corresponds to negative frequencies. Alternatively, you could just zero the upper half; the result of the iFFT would yield half the signal value, but the code autoscales the wavetables so it doesn’t matter.

Since we’re always working with real signals, I could have simplified things by building in the conjugate symmetry. But I chose to expose the actual values for the iFFT instead.

Arg—good catch, Tom. “> minVal” should be “< minVal” in the code posted. I’ll fix that…thanks!

Of course, it works fine for the worst-aliasing case of assuming that you have the maximum number of harmonics the table will fit; it just doesn’t optimize table memory for non-optimal input (for instance, a 2048-length array specifying only 4 harmonics needs fewer tables than one specifying ~1024 harmonics—but the current code does not make that optimization as intended). But I’ll have to check closer to make sure that changing only that comparison works as intended. (It doesn’t fail now; I don’t want to make it fail by “fixing” it without a thorough check.)

Nigel

PS—I checked—changing to “< minVal” optimizes the number of tables correctly, as intended.

Yes, I thought that might be what you meant! I fixed my other problem too – in rewriting your code I had accidentally swapped the real and imaginary arrays in my conversion from time to frequency domain. Works fantastically now!

Thanks so much for all your articles, I’ve learned more about practical DSP from your website than from any other, because of your wonderful step-by-step build-up of code based on real practical concerns. I find it hard to learn pure theory, but when you can actually learn how to build an oscillator/envelope etc. it’s much easier!

Awesome, Tom! And thanks again for catching that bug—I’ve updated the code. At some point I’d like to clean it all up with a wavetable manager that makes it simpler to work with wavetables and share them between oscillators. I’m tied up with a project and have a backlog on website activities, though. The big project being blocked is finishing a video on sampling theory and rate conversion that I think presents it in a way that hasn’t been seen before…

This method works great…thanks for your input. One thing I noticed is that when using my Phase Distortion, I’m getting aliasing on the top frequencies. Is that because you are interpolating an octave apart? Do I need to do something closer to minor thirds or half steps?

Basically the Phase Distortion offsets the mid-point of the cycle. For values below 0.5 it squashes the first half and stretches the second half; for values above 0.5 it’s the opposite: it stretches the first half, and squashes the second half.

But this causes some obvious aliasing….

The table spacing won’t matter much. Phase distortion introduces aliasing. Think of it this way: if you start with a sine wave, clearly the wavetable spacing does not matter, because you only need one for the audio range. Now manipulate around the midpoint, as you suggest, and stretch it into something close to a sawtooth. You will alias badly. You can use oversampling or another technique to reduce the problem.
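For concreteness, the midpoint-offset warp described above can be sketched like this (a hypothetical phaseshaper in the spirit of Casio CZ phase distortion, not code from the article):

```cpp
// Midpoint-offset phase distortion: phase and the midpoint d are both
// in [0, 1). With d = 0.5 the phase is untouched; smaller d squashes
// the first half-cycle and stretches the second, larger d the reverse.
double distortPhase(double phase, double d) {
    if (phase < d)
        return 0.5 * phase / d;                  // first segment -> [0, 0.5)
    return 0.5 + 0.5 * (phase - d) / (1.0 - d);  // second segment -> [0.5, 1)
}
```

Reading a sine table through this warped phase sharpens the spectrum as d moves away from 0.5; those newly created upper harmonics are what alias, regardless of table spacing.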

Yea, we were afraid you were going to say oversampling. We are trying to get away from that, because oversampling is a CPU hog. Your FFT method is one of the only ones we have found to eliminate aliasing, but then we still have the issue of our Phase Distortion.

Oversampling is the only surefire way of reducing aliasing with phase distortion. I’ve heard of another method called mipmapping, but there are absolutely no classes or information online about it…at least not detailed enough to prototype it.

I would love for you to tackle mipmapping.

Actually, the wavetable oscillator here does use “mipmapping”. The term is originally from computer graphics, where textures of multiple resolutions were pre-rendered in order to optimize memory and processing needs when zoomed in or out. In wavetables, sometimes people use the term to refer to keeping multiple tables pre-rendered with different harmonic complexity.

Thanks for all that you do. You’ve helped me so much. Do you by any chance know of another way, other than oversampling, that I can reduce the aliasing when using phase distortion?

I’d be starting about the same place as you, since I haven’t tackled phase distortion, and I’d be doing a web search on something like “phase distortion synthesis reduce aliasing”. There are ways to deal with discontinuities by windowing in bandlimited transitions—this is an effective way to reduce aliasing from hard sync’ing oscillators, for instance.

What are your thoughts on pre-rendered oversampling, then decimating later? Is it worth it? I’ve been reading, and a few guys talked about using oversampling to reduce aliasing, then decimating later.

Yes, that works, but at a high cost. This wavetable method has pre-rendered oversampling AND pre-rendered pre-decimation bandwidth reduction (the decimation being the wave scanning according to pitch). If you do oversampling then variable-rate (according to pitch) downsampling, which is what you’re suggesting, the oversampling can be pre-rendered, but the downsampling must be done on the fly, with tradeoffs of quality versus computation. With multiple pre-rendered wavetables, the quality can be perfect, essentially, and the computation trivial. The main cost is the number of wavetables, but memory is cheap.

Got it…I’m trying to use your FFT method while still using my PWM, without aliasing.

Since bad aliasing doesn’t really start until the high frequencies, I could possibly still use 4x oversampling of the FIR frequencies starting at C6 up to C8. Since I’m only using one wavetable per octave, I’m only using oversampling for one oscillator…in theory.

I may be able to get away with oversampling, if I’m only applying it to 2-3 wavetables.

You shouldn’t be getting audible aliasing, if the wavetables are built correctly. That is, the “one table per octave” design is allowed to alias above one-third of the sample rate, though that’s simply an economy of using octave-spaced tables—you could eliminate aliasing by using more tables, shifting them less. But I doubt you’re hearing that. If you’re using the dual-sawtooth method of generating PWM, that won’t alias.

Would there be a way to remove the bandwidth-reduced replication? The reason I ask is that, after working with wavetables, I noticed that I don’t start hearing aliasing with a full-bandwidth single-cycle wavetable until around C6. The full-bandwidth wavetable sounds so much warmer than the bandwidth-reduced one, so I would like to use the full-bandwidth versions until I actually need the bandwidth-reduced versions in the higher registers.

The fillTables function ensures optimal bandwidth reduction, but the oscillator doesn’t require it. The topFreq parameter in oscillator’s addWaveTable tells the oscillator to switch to a higher wavetable (only if there is one) when above a wave table’s topFreq. But you can set that to any value you want, if you want the wave table to remain in use through higher frequencies, then add more closely spaced tables at the high end.
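If you do take over the topFreq choices yourself, closer spacing at the high end is just a geometric series. A sketch (the `minorThirdTopFreqs` helper is my own, for illustration; it only computes the tags you would pass along with each table):

```cpp
#include <cmath>

// List normalized topFreq values for n tables spaced a minor third
// (2^(3/12)) apart, starting from a given base. Tables spaced closer
// than an octave leave less room for allowed aliasing per table.
void minorThirdTopFreqs(double baseTopFreq, int n, double *out) {
    const double step = pow(2.0, 3.0 / 12.0);  // one minor third
    for (int i = 0; i < n; i++) {
        out[i] = baseTopFreq;
        baseTopFreq *= step;
    }
}
```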

So essentially, I can increase the number of wavetables, say every Minor 3rd, and turn off the bandwidth reduction to get the full bandwidth without aliasing?

One thing you never mentioned in this process: how in the world do you create a power-of-2 (2048, 4096) length wavetable from a single cycle? I currently use Serum, but are there any other ways?

I’m not sure that I follow the first part. When a new sample is requested, the oscillator code looks at its current frequency setting, and finds the first table whose topFreq is equal to or above the oscillator frequency. It’s up to you, if you call addWaveTable directly, to determine how much aliasing you want to allow, on a table-by-table basis. If you use fillTables, it handles those decisions for you.

If you capture a single cycle from an audio recording, of a length that’s not a power of two, and want to use it as a wavetable for fillTables, you’ll need to use interpolation to stretch it to the next power of two, either via the time domain or frequency domain. I’ll have to cover that at another time.
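A time-domain sketch of that stretch, via linear interpolation (a hypothetical helper; for real use you would want higher-quality interpolation, such as windowed sinc, or the frequency-domain route):

```cpp
// Stretch one cycle of arbitrary length to a power-of-two length by
// linear interpolation, wrapping at the cycle boundary (it's periodic).
void resampleCycle(const double *in, int inLen, double *out, int outLen) {
    for (int i = 0; i < outLen; i++) {
        double pos = (double)i * inLen / outLen;  // position in the source
        int idx = (int)pos;
        double frac = pos - idx;
        int next = (idx + 1) % inLen;             // wrap: single cycle
        out[i] = in[idx] + frac * (in[next] - in[idx]);
    }
}
```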

Thanks so much for taking time to put out all this helpful info!

In “Defining an oscillator in the frequency domain,” doesn’t the “freqWaveRe” array contain the waveshape values from 1.0 to -1.0 when it is passed to fillTables?

If yes, then does it mean that we can simply store the different waveshapes there, and then we will need only an iFFT instead of a combination of FFT and iFFT?

By this I mean building a description of the signal in the frequency domain, then using that to generate the time domain wave tables. From that full spectrum (all the harmonics you want), you IFFT to the lowest wave table, trim the top harmonics off, IFFT to the next, etc. So you are correct—if you start by specifying harmonics in the frequency domain, then it’s just a series of IFFTs to build the tables. But be sure to skip ahead and see my more recent improvements to WaveTableOsc and WaveUtils.

Am I wrong, or is the code missing a “delete” for each “new” array created?

Yikes—thanks for catching that! I tend to simplify before uploading, trying for minimal code and includes to make it easy to understand, and went too far here. I usually avoid “new”; vectors are better… I’ll fix and re-upload this and the revised wave utils from the later article.

Thanks for the reply! 🙂

No worries, it’s great to have people like you, who help us in this complex digital world 🙂

One more question (since I have your attention) 🙂 I’ve requested it on KVR, but maybe this is the correct place: https://www.kvraudio.com/forum/viewtopic.php?f=33&t=562551

How can I change the phase of some harmonics from the frequency domain?

I would for example output 3 harmonics, each at different phase. But adding an offset to IM array output weird results.

Where am I wrong?

Thanks again

If I’d like to FM the signal generated by the wavetable (PM in reality, modulating the phase), does it make sense to oversample a bandlimited wavetable? Or can I keep within the topFreq algorithm and choose the correct table for each sample? (Not really sure that can be done, honestly; just thinking out loud :))

Well…the bottom line is that FM can easily produce higher frequencies than you generated, so you will get aliasing when it does. The brute-force solution is oversampling, where you get what you pay for, but there are other tricks you can do to ameliorate the issue. Even the venerable DX7 rolled off the gain of the modulation at higher frequencies to reduce noticeable aliasing.

Hi Nigel,

I’m a beginner; I mostly try to make sounds on Arduino. My question would be: does this code run on the Arduino IDE + Teensy audio DSP? And how can I load arbitrary wave data into this code? (waveform[2048]{ -893, -379, 678, 1213, 2306, 2870, 4004, 4593, 5766, 6381, 7585, 8215, 9439….etc) Sorry for my bad English…

Tom

Romania

It should run on anything that a full C++ compiler supports. You can load the waveform array as you said, compute it, or read it from other storage.