
What sample rate are you recording at (and recommend)?


Christian Jones

Recommended Posts

48/24 and never noticed any VST artifacts. 48 is standard for video and resamples just fine to 44.1. VSTs are designed to work at multiple sample rates, so the artifact thing mentioned towards the top makes no sense to me.

Edited by Stxx

There will be different answers for different people. I suggest people use their ears. Since my computer is powerful enough and I do not use more than perhaps 30 tracks, I can record at 88.2 or 96 kHz, and it does sound better to me. I also appreciate the lower latency when recording. While tracking, I disable any tracks I do not need to listen to; this allows me to use a smaller buffer size and retain the latency benefit. Then on playback, where latency is not an issue, I can increase the buffer size. I hope this makes some sense.
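To put numbers on the latency point, here's a quick sketch (Python; the buffer sizes and rates are just illustrative, not anyone's actual settings). The same buffer costs roughly half the time at 96 kHz as at 44.1/48 kHz:

```python
# Rough one-way buffer latency: buffer_size / sample_rate.
for rate in (44_100, 48_000, 88_200, 96_000):
    for buf in (64, 128, 256):
        print(f"{rate:>6} Hz, {buf:>3}-sample buffer: {buf / rate * 1000:5.2f} ms")
```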

But that does not mean it should apply to everyone.

 


Most of the time 24-bit/44.1 or 16-bit/44.1.

 

Most people have a bottleneck in preamp quality and similar gear, such that high-res above this point is basically lost (even if the preamp, etc. indicate they support high-res).

While I appreciate high-res files and recording, the vast majority of the time clients can't hear the difference at all on their own playback systems, and by the time it is mixed down to a single stereo file (likely at 16-bit/44.1) it loses the things that made the high-res version sound different.

I record high-res also, but honestly it is usually just a waste of disk space. You will hear far more difference in the way audio is processed by various plugins and techniques than you will strictly from the bit depth and sample rate. Focus more on things that matter. How many people do you know who have a playback rig that even supports higher than 16-bit/44.1? The only people I know who do are also recording engineers. Most musicians don't even have that. Unless they are a hybrid musician and recording engineer, it seems pretty rare to see a rig that actually reveals the differences or limitations in the source.


11 minutes ago, Brian Walton said:

Most people have a bottleneck in preamp quality and similar gear, such that high-res above this point is basically lost (even if the preamp, etc. indicate they support high-res).

... How many people do you know who have a playback rig that even supports higher than 16-bit/44.1? The only people I know who do are also recording engineers. Most musicians don't even have that.

Regarding the importance of the preamp, I totally agree. My bro and I decided I don't have the equipment necessary to validate the specs on my RME UCX. I would need much better microphones than I own, and I own some pretty nice mics. I would also need an anechoic chamber.

Now on the 'who has 24-bit rigs anyway' thing, I'm not sure I can agree anymore. I used to have to make CDs for bands when I was done, but now I only post 24-bit WAV files and they go from there.

My phone handles 24/48, and that means so does my car. I don't think I'm special; I think a lot of folks have fairly good stereos in their cars. I'm not sure, but I think my kid's Bluetooth speaker sounds 24 bits of good. Not sure on that or how it works. The phone is surely 24-bit (again, my phone).

So it may be that lots of folks can handle 24-bit. These days I don't always make 16-bit versions of a bounce unless there's an old guy in the band who has to have CDs.

 


Modern audio interfaces oversample the audio anyway.

16-bit/44.1 gives about 96 dB of theoretical dynamic range, whereas 24-bit converters deliver roughly 110 dB in practice. The span between loud and inaudibly soft is already covered at 16-bit/44.1.

In reality we work with a narrower dynamic range, and a narrower bandwidth too. What mic actually records anything near 20 kHz, or down at 10 or 20 Hz? We generally shelve those regions off anyway.

The ultra-high frequencies a high sample rate can reproduce are most likely not needed and get discarded (it is a mathematical process, after all), especially for many genres. That is not the case for acoustic music with very fine mics; there, definitely record at 24-bit/96 kHz.
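For reference, those dynamic-range figures come from the standard quantization formula, SNR ≈ 6.02·N + 1.76 dB for an ideal N-bit converter; a quick check:

```python
# Theoretical SNR/dynamic range of an ideal N-bit converter (full-scale sine):
# SNR ≈ 6.02 * N + 1.76 dB. Real 24-bit converters land well below the
# theoretical figure because analog noise floors dominate (~110-120 dB).
for bits in (16, 24):
    print(f"{bits}-bit: {6.02 * bits + 1.76:.1f} dB theoretical")
```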

Edited by Bart Nettle

I do not think it is as simple as dynamic range and frequency response. As I mentioned before, there is the latency issue.

I will add one more issue: samples per second. If 32 samples per second provided great dynamic range and frequency response, it would still sound very unrealistic, as one could hear the "grains" of the samples.

So how many samples per second does the ear actually transmit to the brain? What is the sample rate of analog? Is there a limit? Just curious. Anyone have an idea?

 

Edited by AB3

Good points! You can hear low sampling frequencies, so the point at which you can no longer discern a difference would be that particular ear's resolution.

The thing about ears is that half a dB of level difference, especially between 2 and 4 kHz, is easily heard.

When you are in the jungle and tigers are everywhere, hearing is survival. LOL

 

Edited by Bart Nettle

10 hours ago, Gswitz said:

Regarding the importance of the preamp, I totally agree. My bro and I decided I don't have the equipment necessary to validate the specs on my RME UCX. I would need much better microphones than I own, and I own some pretty nice mics. I would also need an anechoic chamber.

Now on the 'who has 24-bit rigs anyway' thing, I'm not sure I can agree anymore. I used to have to make CDs for bands when I was done, but now I only post 24-bit WAV files and they go from there.

My phone handles 24/48, and that means so does my car. I don't think I'm special; I think a lot of folks have fairly good stereos in their cars. I'm not sure, but I think my kid's Bluetooth speaker sounds 24 bits of good. Not sure on that or how it works. The phone is surely 24-bit (again, my phone).

So it may be that lots of folks can handle 24-bit. These days I don't always make 16-bit versions of a bounce unless there's an old guy in the band who has to have CDs.

 

It isn't about the ability to technically play back 24-bit; it's whether someone has the equipment to render and reveal those differences. Your phone doesn't have a high-resolution output. Stock car speakers are nowhere close. And the list goes on.

You are correct that people could play back the files, but very few people have the playback system to reveal those differences. Plenty of studies show that most can't even tell a decent-bitrate MP3 from a WAV file, let alone the more subtle jump to high-res audio.


While a song is going "full-tilt" (especially if it's been peak-limited for high average level), you'll not notice the difference between 24-bit and 16-bit audio.

Listen to an isolated long reverb decay at 16-bit with no dither...

The end of the decay breaks up and sounds awful.

That same long reverb decay at 24-bit sounds smooth.
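A minimal NumPy sketch of that effect, using a fading sine as a stand-in for a reverb tail (the signal and ±1 LSB TPDF dither here are illustrative, not Jim's actual test):

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 44_100
t = np.arange(5 * fs) / fs
# A sine fading ~90 dB over 5 seconds, standing in for a long reverb decay.
x = np.sin(2 * np.pi * 440 * t) * 10 ** (-18 * t / 20)

lsb = 1 / 2 ** 15                                        # one 16-bit step
truncated = np.round(x / lsb) * lsb                      # requantize, no dither
tpdf = (rng.random(x.size) - rng.random(x.size)) * lsb   # +/-1 LSB TPDF dither
dithered = np.round((x + tpdf) / lsb) * lsb

tail = slice(4 * fs, 5 * fs)  # the quietest second, roughly -72 to -90 dBFS
for name, y in (("no dither", truncated), ("TPDF dither", dithered)):
    err = y[tail] - x[tail]
    corr = np.corrcoef(err, x[tail])[0, 1]
    print(f"{name}: error-vs-signal correlation {corr:+.2f}")
# Undithered error tracks the signal (gritty, level-dependent distortion as the
# tail falls below a few LSBs); dithered error is uncorrelated, noise-like hiss.
```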


I think it's sensible to use different sample-rates depending on circumstances.

  • If I'm working with video, I'm using 48kHz.
  • If I'm working with AmpSims, and especially if the project isn't too large, I'll work at 96kHz.
  • If we're cutting a VO in the studio for my wife (Morning show on the local classic-rock station), we'll record that at 44.1kHz.

 

IMO, there are many other aspects of a project that have a more profound effect on the final result than sample-rate.

  • Song
  • Arrangement
  • Performance
  • Mics/preamps/placement

I don't believe any record has ever been bought (or not) solely because of the sample-rate.

 

Craig mentioned that CbB has optional over-sampling.

Aliasing noise can result from processing distortion (AmpSims, etc.). Using a higher sample-rate (or over-sampling) pushes the aliasing products up above human hearing, where they can be filtered off.

When the HeadRush guitar processor was first released, it had audible aliasing noise. That issue has since been addressed (over several firmware updates).

Aliasing sounds unnatural/nasty (not as bad as digital clipping... but a close second). It sticks out like a sore thumb.
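A rough demo of the idea (NumPy/SciPy; the tanh "drive" is a generic stand-in for an amp-sim nonlinearity, not HeadRush's algorithm). Distort a 5 kHz sine at 44.1 kHz and its upper harmonics (25, 35, 45 kHz...) fold back to inharmonic 19.1, 9.1, 0.9 kHz; run the same distortion 8x oversampled and the folded products largely vanish:

```python
import numpy as np
from scipy.signal import resample_poly

fs = 44_100
t = np.arange(fs) / fs
x = 0.9 * np.sin(2 * np.pi * 5_000 * t)      # 5 kHz test tone

def drive(s):
    return np.tanh(4.0 * s)                  # stand-in amp-sim nonlinearity

naive = drive(x)                             # distort at the base rate
up = resample_poly(x, 8, 1)                  # 8x oversample
overs = resample_poly(drive(up), 1, 8)       # distort, then filter back down

def worst_alias_dbc(y):
    """Loudest spectral bin NOT near a 5 kHz harmonic, in dB vs the peak."""
    spec = np.abs(np.fft.rfft(y * np.hanning(y.size)))
    freqs = np.fft.rfftfreq(y.size, 1 / fs)
    near_harmonic = np.abs(freqs / 5_000 - np.round(freqs / 5_000)) < 0.02
    return 20 * np.log10(spec[~near_harmonic].max() / spec.max())

print(f"worst alias, no oversampling: {worst_alias_dbc(naive):6.1f} dBc")
print(f"worst alias, 8x oversampled:  {worst_alias_dbc(overs):6.1f} dBc")
```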


3 hours ago, Brian Walton said:

It isn't about the ability to technically play back 24-bit; it's whether someone has the equipment to render and reveal those differences. Your phone doesn't have a high-resolution output. Stock car speakers are nowhere close. And the list goes on.

One would think so, I know, I know. Go ahead and write me off as an audiophool for what I'm about to claim, or say that it's the placebo effect, BUT.

I was setting up VLC Player on this plastic RCA tablet I got at Wal-Mart for $40, which has one built-in speaker for playback. Fiddling around with the deep settings, I found one that said OpenSL ES, and I don't even know what that means. But I selected it and restarted playback, and the difference in what came out of this ridiculous recycled-cigarette-pack-wrapper 1.5" cone speaker, not to mention what has to be a pretty nasty DAC, wasn't even subtle: clearer, better transients, more intelligible, rounder and tighter bass....

Speaking of which, I've also been dismayed to notice differences in playback quality between different music players on the danged thing. This is using my cheap bedroom Sony headphones. The best so far is Black Player.

All I can figure is that it's just really good at getting those ones and zeroes to the DAC in an orderly fashion without doing anything to them on the way, and that the other players I tried are less so. I am a skeptical person, and believe me, I wish it were not this way. I would much prefer to have listened and concluded that all FLACs played on my $40 RCA tablet into my $20 Sony headphones sounded just the same through any music player, especially that VLC couldn't be topped.

Long way to get there, but my point is that the "bottleneck" idea doesn't apply to audio. A well-played, well-recorded, well-mixed, and well-mastered record is still going to sound better, even through the tin(n)iest little speaker. The Beatles took over the world 55 years ago making records that sounded great through really crappy reproduction equipment: Japanese transistor radios, from when Japan's manufacturing quality was, in general, worse than China's is today.

"Nights in White Satin" blew me away on my friend's clock radio in 1972, hearing those soaring strings and wailing backing chorus. It all came across on a single nasty 3" speaker in a plastic enclosure that also housed an electric clock mechanism. I know the fact that the Moodies' recording was so good to begin with made the difference, right down to the great tube mics used to capture the vocals.


2 hours ago, Jim Roseberry said:

Listen to an isolated long reverb decay at 16-bit with no dither...

The end of the decay breaks up and sounds awful.

That same long reverb decay at 24-bit sounds smooth.

True, but we shouldn't be truncating to a lower bit depth. If we dither the 16-bit version, the actual signal will sound the same as the 24-bit one, since the quantisation distortion will be gone. This can be tested by bouncing to 16-bit with dither, inverting its polarity, and playing it alongside the 24-bit file. The result will be silence.

Invert without dither and we'll hear the quantisation distortion.
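Here's that null test sketched in NumPy (a synthetic source and ±1 LSB TPDF dither, just as an illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 44_100
t = np.arange(fs) / fs
x = 0.5 * np.sin(2 * np.pi * 440 * t)            # the high-resolution source

lsb = 1 / 2 ** 15                                 # one 16-bit step
tpdf = (rng.random(x.size) - rng.random(x.size)) * lsb
undithered = np.round(x / lsb) * lsb              # straight 16-bit requantize
dithered = np.round((x + tpdf) / lsb) * lsb       # TPDF-dithered 16-bit bounce

for name, bounce in (("no dither", undithered), ("TPDF dither", dithered)):
    residual = x - bounce                         # invert the bounce and sum
    rms_db = 20 * np.log10(np.sqrt(np.mean(residual ** 2)))
    print(f"{name}: residual {rms_db:.1f} dBFS RMS")
# Both residuals sit near -100 dBFS: with dither it's featureless noise; without
# it, it's distortion correlated with the signal. So the null is very quiet
# but, as Jim notes below, not mathematically absolute silence.
```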

Edited by ien

39 minutes ago, ien said:

True, but we shouldn't be truncating to a lower bit depth. If we dither the 16-bit version, the actual signal will sound the same as the 24-bit one, since the quantisation distortion will be gone. This can be tested by bouncing to 16-bit with dither, inverting its polarity, and playing it alongside the 24-bit file. The result will be silence.

Invert without dither and we'll hear the quantisation distortion.

I'm aware of not truncating from 24-bit down to 16-bit.

The flip-polarity example above would produce a result that's close to silence, but I don't think it can be absolute silence (dither noise is added to the 16-bit file, which is not present in the 24-bit file).

I get the point.  😉

Edited by Jim Roseberry

I'm still on the fence about which sample rate and bit depth are the ultimate choice, but let me put my understanding, ideas, and observations here.

For recording, I might record at 96/32 or 96/24. I've never tried it, though. Recording guitar or natural sound from a microphone would have better accuracy with a higher sample rate, so I might pick 96 kHz.

For the general setting, especially in Cakewalk, I'd set the bit depth to 32. The reason is that inside a DAW every process adds extra bits, and VSTs use either 32-bit float or 64-bit double precision, so the internal processing is all done with 32- or 64-bit information. If you set a 24-bit depth, some information will be truncated whenever you freeze or record: even though the internal processing is 32-bit float or 64-bit double, the audio is truncated once it is written to a wave file. So the sound changes when you bounce or freeze, and processing multiple times may deteriorate it by repeatedly dropping the bit depth to 24-bit along the way, which I can avoid by setting the DAW's file recording to 32-bit. (For the engine itself I use the 64-bit double-precision option, even though some say 64-bit double doesn't make the sound better, as 32-bit float is more than enough.)

People may not notice the difference, but in theory, truncation happens.
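A toy illustration of that truncation (plain NumPy, not Cakewalk's actual engine): render a 64-bit mix to a 24-bit fixed-point file versus a 32-bit float file and compare the round-trip error.

```python
import numpy as np

rng = np.random.default_rng(2)
mix64 = rng.uniform(-1, 1, 100_000)                 # stand-in 64-bit internal bus

as24 = np.round(mix64 * 2 ** 23) / 2 ** 23          # write/freeze as 24-bit fixed
as32 = mix64.astype(np.float32).astype(np.float64)  # write as 32-bit float

for name, y in (("24-bit fixed", as24), ("32-bit float", as32)):
    err = np.sqrt(np.mean((y - mix64) ** 2))
    print(f"{name}: round-trip error {20 * np.log10(err):.0f} dBFS RMS")
# 24-bit fixed has a flat error floor near -150 dBFS; 32-bit float's error
# scales with each sample's magnitude (~24-bit mantissa), so quiet passages
# keep their precision through repeated freezes/bounces.
```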

 

For mastering, without a doubt, 96/32 or higher, maybe 192 if it's an option. For better clarity with less distortion, a higher sample rate is generally used by mastering engineers too; Bob Katz says upsampling for mastering improves the quality. I'll skip the theory of why upsampling is good for mastering, but I think it comes with a downside. Eventually the file needs downsampling to 44.1 kHz, and you have to use a filter to remove folding noise; that's how resampling works. The problem is that whenever you use a filter, the peak changes. For example, before resampling, the peak was set to -0.3 dB with a brickwall limiter, but once you downsample to 44.1 kHz it could exceed that peak, or even 0 dB! That can cause further distortion, so I need to take extra care with peaks during SRC. Reintroducing a limiter at the downsampling stage may change the sound quality, too.
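The peak-shift problem can be sketched like this (SciPy's resample_poly standing in for a mastering-grade SRC, and a hard clip as a crude stand-in for a brickwall limiter):

```python
import numpy as np
from scipy.signal import resample_poly

fs = 96_000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 1_000 * t) + 0.4 * np.sin(2 * np.pi * 12_000 * t)

ceiling = 10 ** (-0.3 / 20)                  # -0.3 dBFS
limited = np.clip(x, -ceiling, ceiling)      # crude brickwall "limiter"

down = resample_poly(limited, 147, 320)      # 96 kHz -> 44.1 kHz (147/320)
for name, y in (("before SRC", limited), ("after SRC ", down)):
    print(f"peak {name}: {20 * np.log10(np.abs(y).max()):+.2f} dBFS")
# Clipping creates content above the new Nyquist; the SRC's anti-alias filter
# rings around those flattened peaks, so the 44.1 kHz file can exceed -0.3 dBFS.
```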

 

For production, I think it's difficult to say which sample rate is best. Most sample libraries offer their samples at 44.1 kHz, so a different DAW sample rate causes those files to be resampled inside the DAW, and in-DAW resampling doesn't match the quality of a dedicated converter like iZotope's SRC.

As for synths, the sound changes depending on the sample rate. Most of the time (not always, though) a higher sample rate makes a synth sound better, so it may be good to use a higher rate when the synth doesn't offer internal oversampling options.

When you use distortion, like the tube distortion inside z3ta, the clarity of the sound differs with the sample rate.

And sometimes a higher sample rate produces more intermodulation distortion (IMD), while 44.1 kHz produces more potential folding distortion. Sometimes, for some reason, folding distortion makes the sound warm. For IMD, introducing a linear-phase filter that cuts frequencies above the audible range helps make the sound even clearer, but such a filter can cause pre-ringing. I don't think it's audible or significant, but it might change some transient information or affect lower frequencies as well, which in the end adds up across the entire mix to some extent. So it might introduce some odd time smearing.

So, as the samples come at 44.1 kHz, producing at 44.1 kHz might be a fine choice. But I don't know.

Maybe 48 kHz is better if you also use many effects.

 

And for mixing, where you probably introduce more plugins, 48 kHz might be the best, as you can reduce both IMD and folding noise.

If you set 88.2 kHz or higher, the sound is very clean with less distortion, but then I miss some low-end energy, making me wonder whether it was the better decision or not. Folding noise is not musical in theory, but the extra clarity doesn't mean the song is warm at low frequencies, or powerful.

Plugins work differently too. Many EQs benefit from a higher sample rate, and nonlinear processes like compressors and distortion-type plugins can also sound different.

For clarity, a higher sample rate would be better, but some plugins sound better at 44.1 or 48 kHz. That would make sense, though, since developers focus on 44.1 and 48 kHz as the common rates. Or maybe my ears are not good. Sometimes they even introduce oversampling options inside the plugin, so 44.1 or 48 is not a bad choice.

So maybe, maybe I will use 48 kHz for mixing, as it's good for both CPU and quality.

 

I think for recording from a mic, maybe 96 or 48 kHz; 48 kHz is realistic for both production and mixing; and 96 kHz or higher for mastering.

Well, but I don't know...

Edited by mikannohako

17 hours ago, Brian Walton said:

I record high-res also, but honestly it is usually just a waste of disk space. You will hear far more difference in the way audio is processed by various plugins and techniques than you will strictly from the bit depth and sample rate. Focus more on things that matter. How many people do you know who have a playback rig that even supports higher than 16-bit/44.1?

As the original question is about "recording", this is different from the intermediate format (which is better kept at 32-bit FP) and the final format (which can be 16-bit).

Each bit is ~6 dB (SNR, DNR, etc... just an approximation, but it works well in all the math). When you record without a hardware compressor/limiter and, let's say, set the gain so the average lands at -18 dB and record into 16-bit, your average resolution will be 13 bits, and a not-so-loud section can easily be recorded with just 10 bits. During mixing and mastering you will level this signal (with compressors, EQ, etc.). Try bit-crushing something to 10 bits: that is easy to notice even with $1 headphones on a $1 built-in Realtek interface. And if you record close to 0 dB, part of the signal is going to be digitally clipped, which you can also hear on low-end equipment. So 24-bit for recording is a good idea.
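That back-of-the-envelope math, spelled out (the levels are just the examples from the paragraph above):

```python
# ~6.02 dB per bit: recording headroom directly eats effective resolution.
for bit_depth in (16, 24):
    for level_db in (-18, -36):              # average level, quiet passage
        bits = bit_depth + level_db / 6.02
        print(f"{bit_depth}-bit file, signal at {level_db} dBFS: ~{bits:.0f} effective bits")
# 16-bit at -18 dBFS leaves ~13 bits, and ~10 at -36 dBFS; 24-bit keeps ~21/~18.
```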

15 hours ago, AB3 said:

I do not think it is as simple as dynamic range and frequency response. As I mentioned before, there is the latency issue.

I will add one more issue: samples per second. If 32 samples per second provided great dynamic range and frequency response, it would still sound very unrealistic, as one could hear the "grains" of the samples.

So how many samples per second does the ear actually transmit to the brain? What is the sample rate of analog? Is there a limit? Just curious. Anyone have an idea?

With 32 samples per second you can only represent frequencies up to 16 Hz, so the frequency response is not good by definition 😉
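That's just the Nyquist limit, f_max = fs / 2:

```python
# Nyquist: a sample rate fs can only represent frequencies below fs / 2.
for fs in (32, 44_100, 48_000, 96_000):
    print(f"{fs:>6} samples/s -> content up to {fs / 2:g} Hz")
```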

 

4 hours ago, Jim Roseberry said:

While a song is going "full-tilt" (especially if it's been peak-limited for high average level), you'll not notice the difference between 24-bit and 16-bit audio.

Listen to an isolated long reverb decay at 16-bit with no dither...

The end of the decay breaks up and sounds awful.

That same long reverb decay at 24-bit sounds smooth.

I believe dithering at 16 bits makes sense. I am sure there is better equipment than mine and far more advanced listeners; I can hear the difference only from 14 bits downward.

But with such examples I think it is important to mention whether unusual amplification was used. Were you playing an already-mastered track and could hear the difference in the reverb at the very end, or was it just the reverb sound with the signal amplified +12 dB or more? Because with sufficient amplification it is possible to hear the difference 24-bit dithering makes on a notebook speaker. I had to amplify more than 60 dB and max out all the other volume controls to achieve that, so it makes no practical sense, but it's possible 🙃


I've always worked doing audio for video, so 48k for me. In the old days, 48k synced to picture more accurately, with less drift. I wonder if that is still true.

I used to be able to hear the difference between 44.1 and 48, if I had heard the program a million times, hours on end (while working on the project intently), and on good monitors I was familiar with: I could detect a slight difference. But that was 20-25+ years ago when digital was getting going, and I wonder if it is still possible to hear the difference today. I doubt my ears work anywhere near as well, and I'm not willing to put in the effort to find out.

So, can anyone hear the difference between 44.1 and 48 these days?

  

