
How much fidelity do VST instruments really have?


RexRed

Recommended Posts

When Euripides famously advised us to "question everything", he was indulging in a bit of hyperbole. As you're discovering, literally questioning everything leads you down a rabbit hole of indecision, frustration and paralysis. Sometimes, you're better off simply accepting that "this works, but that doesn't" and carrying on.

That said, I completely understand one's desire to understand how stuff works, and acknowledge that such understanding often leads to better practices. Do you need to know how automobiles work in order to drive one? No. But is it helpful to understand their basic principles? Of course. The mechanically knowledgeable driver will get better fuel efficiency, longer-lasting brake pads, longer engine life and get into fewer accidents. Not because he can rebuild an engine in the back yard, but because his daily driving habits are informed by knowledge: which actions are harmful to the car, why preventative measures such as frequent oil changes extend the life of the engine, and why slamming the brakes wears pads faster and even wastes gas by turning already-paid-for kinetic energy into useless heat.

If you really want to explore this rabbit hole, I'd suggest starting at the bottom with some reading on how digital audio works. A good introduction that won't overwhelm the mathematically-challenged is Digital Audio Explained for the Audio Engineer, by Nika Aldrich. The author keeps it simple - some critics say too simple - but still covers the important bases. It won't directly answer all of the questions posed above, but it will give you a foundation for either answering them yourself or better understanding explanations you come across later.

Once those fundamentals are clear, you can then tackle the broader questions such as how a DAW deals with mixed sample rates. I would refer you to the documentation for an explanation as to how that's handled.

Effect plugins, for the most part, don't really care about sample rates. Multiplying two numbers gives the same result regardless of how many times per second you do it. When 2x oversampling is enabled within a plugin, it's interpolating a new sample between every existing pair before performing the operation. The upsampling itself is efficient, but now you're doing twice as many calculations in the same period of time, which reduces the availability of CPU cycles for other things. At some point you'll hit a wall where the CPU cannot keep up, and you have to increase buffer sizes or freeze tracks to carry on. However, up until that point there is no penalty for "making the CPU work hard". The CPU can either handle it, or it can't. It's not a draft animal.
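To make the cost concrete, here's a toy Python sketch of the idea. It's purely illustrative: real plugins use proper polyphase interpolation filters rather than the naive linear interpolation below, and the `saturate` function just stands in for any nonlinear stage. The point is that the processing math runs on twice as many samples, so its CPU cost roughly doubles, while the output comes back at the original rate.

```python
import math

def saturate(x):
    # A typical nonlinear stage; its added harmonics are what can alias.
    return math.tanh(2.0 * x)

def process_plain(samples):
    # Process at the project sample rate: one calculation per sample.
    return [saturate(s) for s in samples]

def process_2x_oversampled(samples):
    # Upsample 2x (linear interpolation stands in for the plugin's real
    # interpolation filter), process, then discard every other sample to
    # return to the original rate. The nonlinear stage now runs on twice
    # as many samples: same math, roughly double the CPU.
    up = []
    for a, b in zip(samples, samples[1:] + samples[-1:]):
        up.append(a)
        up.append((a + b) / 2.0)
    processed = [saturate(s) for s in up]
    return processed[::2]  # decimate back to the original rate

x = [0.5, -0.25, 0.75]
print(len(process_plain(x)), len(process_2x_oversampled(x)))  # 3 3
```

Either way the caller gets back the same number of samples; only the internal workload changes.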

Oversampling neither improves nor diminishes audio quality. It does one thing only: it mitigates potential aliasing caused by harmonic distortion that some processors can produce. So does it hurt anything to always enable oversampling? No. It just gives your CPU more work to do. The key is understanding where harmonic distortion comes from, which plugins might cause such distortion and which ones never will, and how much aliasing is acceptable. You can see aliasing with a spectrum analyzer, so start there. Keep in mind that the analyzer can show aliasing that isn't actually audible, so addressing it with 16x oversampling will only make you feel better, not improve your mix.
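Where an aliased tone lands is simple arithmetic: anything above Nyquist reflects back into the band below it. Here's a small illustrative Python function (the frequencies are made-up examples, not measurements of any particular plugin):

```python
def alias_frequency(f_hz, sample_rate):
    """Fold a frequency above Nyquist back into the representable band.

    Harmonics a distortion stage generates above sample_rate/2 don't
    disappear; they reflect ("fold") around Nyquist and show up as
    inharmonic tones. This computes where such a tone lands.
    """
    nyquist = sample_rate / 2
    f = f_hz % sample_rate      # wrap into one sampling period
    if f > nyquist:             # reflect around Nyquist
        f = sample_rate - f
    return f

# A saturator driving a 10 kHz tone produces a 3rd harmonic at 30 kHz.
# At 44.1 kHz that harmonic folds down to an unrelated, audible tone:
print(alias_frequency(30000, 44100))  # 14100
# At a 2x-oversampled internal rate it stays put above the audible
# band, where the decimation filter can remove it cleanly:
print(alias_frequency(30000, 88200))  # 30000
```

That 14.1 kHz tone is harmonically unrelated to the 10 kHz input, which is why aliasing sounds harsh rather than like ordinary distortion.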

Synthesizers have the ability to output different sample rates. However, the SR they use internally may be unrelated. For example, say you want to generate a nice clean sine wave. A "pure" sine wave has no harmonics, so generating it at a higher sample rate gets you closer to that ideal. But when the sine wave is rendered in the DAW, it has to be at the same SR as the rest of the project, which implies a sample rate conversion within the synth. There is anecdotal evidence that some synths sound better at certain sample rates, which speaks more to the internal coding of the synth than any broader principle.
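That output-stage conversion can be sketched in a few lines. This is a crude linear-interpolation resampler for illustration only; real synths use far better interpolation filters, and nothing here reflects any particular product's internals:

```python
def resample_linear(samples, src_rate, dst_rate):
    """Crude sample-rate conversion by linear interpolation.

    A synth rendering internally at src_rate must deliver audio at the
    project's dst_rate; something like this (with much better filtering
    in practice) happens at its output stage.
    """
    n_out = int(len(samples) * dst_rate / src_rate)
    out = []
    for i in range(n_out):
        pos = i * src_rate / dst_rate          # fractional read position
        j = int(pos)
        frac = pos - j
        a = samples[min(j, len(samples) - 1)]
        b = samples[min(j + 1, len(samples) - 1)]
        out.append(a + (b - a) * frac)         # interpolate between neighbors
    return out

# Halving the rate of a short ramp keeps every other value:
print(resample_linear([0, 1, 2, 3], 4, 2))  # [0.0, 2.0]
```

The quality of that interpolation step is one plausible reason two synths, or one synth at two rates, can sound subtly different.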

As for FM8 specifically, how it works under the hood is proprietary to NI. I would expect that measures are indeed in place to prevent aliasing, simply because frequency modulation famously creates gobs of high-frequency components that can potentially exceed Nyquist. I'm guessing they wouldn't need 96 kHz operators to handle this, but might well do the modulation at a high sample rate, which would allow aggressive filtering at the output. I suspect that FM8 doesn't care about your project sample rate except at the final output, but if I'm wrong about that then yes, FM8 could conceivably be a candidate for oversampling just because of its potential for creating "illegal" frequencies.

Sample libraries are a little different, in that there is neither need for nor benefit to oversampling the source files. Whatever flaws a sample has are baked into the original recording. Few sample libraries are sampled at 96 kHz, but many are 48 kHz and most are 44.1. Within Kontakt, you can mix 'n match libraries with different sample rates. Kontakt handles that internally. Exactly how it does that, I don't know. NI doesn't like to talk about things like that. But I do know that if you're loading a 48 kHz library alongside a 44.1 kHz library, you will not hear any difference between loading them into separate instances versus one instance of Kontakt and thus forcing SR reconciliation. Either way, the output is going to match the project sample rate.

"Is it worth it making a song at 192khz if the song will sound REALLY good when rendered down to 24 bit 96khz?"

Oh man, that's a rat's nest. Audio interfaces are designed for maximum fidelity at one specific sample rate. High-end pro units are likely to work best at 96 kHz, while most prosumer products work better at 44.1 or 48. None are optimized for 192 or 384. But that's irrelevant if your projects are 100% in-the-box and never include recorded audio. If your project consists entirely of sampled instruments and soft synths, a 96 kHz project sample rate makes no sense unless you'll be sending it out for mixing/mastering/distribution and the people you're sending it to have requested 96 kHz. Otherwise, all you'd be doing is forcing a lot of unnecessary sample rate conversions.

A lot of users assume that if higher sample rates are objectively "better", then there must be some benefit to converting to higher rates. But sadly, you cannot add fidelity back after it's gone; you can only degrade it further. However, it's important to remember that modern digital audio is already of far higher quality than our ears can distinguish. The trick isn't so much knowing how to make it better, but knowing how much it can be made worse before anyone notices.

It comes down to this: there is nothing wrong with taking steps that you suspect might improve your music. Experimentation is to be encouraged. But if you can't actually hear the difference, what have you really accomplished? Ah, but I hear you say: just because *I* can't hear the difference doesn't mean somebody else can't. Forget that. You are the benchmark. Nobody is ever going to be listening more closely than you do.


On 7/25/2022 at 9:29 PM, RexRed said:

What is the meaning of life?

If it has anything to do with actually finishing songs....well, I'm working on that. 😄

As for what sample rate you record at, 88.2 or 96 isn't going to hurt anything (unless you find that you run up against processing power and disk space), but accepted wisdom these days is that if there's any sonic difference, you'll perceive it on mic'd up acoustic performances with a lot of space and minimal processing.

For the pop and rock and electronic stuff that I think most of us do, probably not. You're a pro, though, and I'm a hobbyist. If I were earning money from this, I'd probably invest in a faster computer (I did recently spend $250 to build what would have been a screamer half a dozen years ago 😄).

As for the 64-bit double precision....not all plug-ins handle it well, IME. There's a very nice freeware compressor plug-in called Leveling Tool, modeled on the LA-2A, that has a huge volume drop when 64-bit double precision is engaged in Cakewalk.

As for upsampling individual plug-ins at render time, a thing to be careful with is if you have songs that depend on a virtual instrument's internal arpeggiator and FX like rhythmic delays. The timing of those can get thrown off by upsampling, so that if you're mixing with the upsampling disengaged and then flip it on for rendering, the song will sound different. Plug-ins don't all do their math in the same way based on the same timings.

As I said, I did some listening tests and experiments when I released "Sensation," and what I determined was that leaving plug-in upsampling off and rendering at 88 (or 96) yielded the best-sounding (and most faithful) results. You can listen to "Sensation" on my Bandcamp page and hear how much it depends on the arp and delay timing to be just right. My favorite sound design (and even compositional) techniques involve rhythmically-sync'd delays and modulations, so this is critical for me.

Not so critical when I'm using a bit of slapback and chorus to fatten up a vocal.

Quote

Euripides

Rhymes with "you rip a'dese."


12 hours ago, bitflipper said:

The mechanically knowledgeable driver will get better fuel efficiency, longer-lasting brake pads, longer engine life and get into fewer accidents.

This is a good analogy. Study up and learn what things really make a difference (my votes go for tires, brake pads and occasional throttle body cleaning 😁).

In the case of the DAW, and sample rates and whatnot, it's free to experiment.

Don't even trip on whether it's a "placebo effect." If it sounds better, it is better. Anyone who does this long enough will eventually have the experience of spending 15 minutes dialing in a compressor plug-in only to discover that it's bypassed (or actually on a different track). If you read about it, try it. See if it makes a difference. There's no authority judging us for subtleties in sound fidelity.

How many people who listen to our music will listen to it as critically as we do? Probably none, but since I make music primarily to please myself, it has to sound good under the audio microscope. Grainy reverb tails, harsh (in a bad way) synth notes, my ears pick those right out.


Kurzweil SP6-8 88-Key Stage Piano Bundle with Keyboard Stand, Bench, Pedal & Dust Cover

https://smile.amazon.com/Kurzweil-Fully-Weighted-Hammer-Action-Keyboard-Piano-Style/dp/B084JMJWJW/ref=sr_1_1_sspa

...and it comes with 2GB of onboard sample content!

Price: $1,549.99

I really doubt that 2GB of sample content is going to rival my PC-based Falcon, Vengeance Avenger or Nexus synths.

And my Kontakt synth with a terabyte of samples cost $1,600 (Collector's Edition).

This synth's Amazon listing does not even give the quality specs of the samples.

It's like how headphones no longer specify their frequency response.

Apple released earbuds with no frequency response listed, and now everyone does it.

You buy them because they're Apple and come in a white box, not because they're full range. lol


2 gigs of Kurzweil samples may sound better than 20 gigs of another library, just as a guitar part recorded in a bad room at 96 kHz can sound worse than one recorded in a good room at 44.1 kHz. Internal sample rates don't define how good a sample sounds as much as how carefully the sample is curated (the new hot word for sampling). An out-of-tune sample at 96 kHz won't work as well as an in-tune sample at 16 bits.

