
What sample rate are you recording at (and recommend)?


Christian Jones


45 minutes ago, msmcleod said:

Whilst what you say is largely true, it's worth pointing out that at 96 kHz the analog signal can only produce a square wave at 48 kHz.

Even at 96 kHz, the approximation of a 12 kHz sine wave will only have 4 steps from zero to peak. That goes down to 2 steps at 48 kHz.

So the argument for using a higher sampling frequency is more to do with getting better accuracy of the audible high frequencies... i.e. ones that will look less like Tetris blocks.

According to your logic, 20 kHz cannot be reproduced with a sample rate of 44.1 kHz. We know it can. How do you explain that?


9 minutes ago, John said:

According to your logic, 20 kHz cannot be reproduced with a sample rate of 44.1 kHz. We know it can. How do you explain that?

It can, but it comes out as a square wave.

The speakers can't reproduce the extra harmonics of the square wave though (neither can our ears at that frequency), so the fact that it's a square wave probably doesn't matter for listening.

Try it yourself - record a 20 kHz sine wave at 44.1 kHz and take a look at the waveform.

For lower frequencies, however, the quantisation effect becomes more apparent. Again, the speakers will smooth a lot of this out, but it does matter more during processing.
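The "try it yourself" step doesn't even need a DAW. Here's a quick Python sketch (numpy assumed available; the variable names are mine) that generates a 20 kHz tone at a 44.1 kHz sample rate so you can inspect the raw sample values:

```python
import numpy as np

SAMPLE_RATE = 44_100  # CD sample rate, in Hz
FREQ = 20_000         # the 20 kHz test tone

# One millisecond of signal: at ~2.2 samples per cycle, the raw
# sample values jump between positive and negative almost every
# sample, which is why the plotted dots look so jagged.
t = np.arange(0, 0.001, 1 / SAMPLE_RATE)
samples = np.sin(2 * np.pi * FREQ * t)

print(samples[:6])
```

Whether those jagged dots survive to the speaker is exactly what the rest of this thread argues about - the question is what the DAC's reconstruction filter does with them.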


I use 24-bit/96 kHz for a few different reasons. First, I like the ability to deliver at different rates without having to upsample. Secondly, it's my music... why wouldn't I want to create at the highest resolution I can? Once you're done, there's no going back. A lot of people confuse bit depth and sample rate, so I use a simple analogy: sample rate is how many times you take a snapshot of the analog signal... the higher the rate, the more snapshots you take and the closer you come to the original signal. Bit depth is the amount of data you capture each time you take those snapshots... once again, the higher the depth, the closer to the true analog signal you come. There are other factors, but the basics hold for the most part 😎
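That snapshot analogy translates directly into a few lines of Python (numpy assumed available; the snapshot function is purely illustrative, not any DAW's API). Sample rate sets how often a snapshot is taken; bit depth sets how finely each snapshot's level is recorded:

```python
import numpy as np

def snapshot(signal_freq, sample_rate, bit_depth, duration=0.001):
    """Take 'snapshots' of a sine wave: sample_rate sets how often,
    bit_depth sets how finely each snapshot's level is recorded."""
    t = np.arange(0, duration, 1 / sample_rate)
    analog = np.sin(2 * np.pi * signal_freq * t)  # idealized source
    scale = 2 ** (bit_depth - 1) - 1              # e.g. 32767 for 16-bit
    digital = np.round(analog * scale) / scale    # round to nearest level
    return analog, digital

analog, cd = snapshot(1_000, 44_100, 16)
# worst-case rounding error at 16-bit is a tiny fraction of full scale
print(np.abs(analog - cd).max())
```

Raising either number shrinks the gap between digital and analog, but they work on different axes - time versus amplitude.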

Now, can you distinctly hear the difference? Some say yes, some say no... it's hard to say either way because you can only hear through your own ears. Equipment, specifically converters, will make a large difference in quality as well.

At the end of the day, choose what works best for you and your needs (and equipment limitations).

Bill


5 minutes ago, msmcleod said:

It can, but it comes out as a square wave.

The speakers can't reproduce the extra harmonics of the square wave though (neither can our ears at that frequency), so the fact that it's a square wave probably doesn't matter for listening.

Try it yourself - record a 20 kHz sine wave at 44.1 kHz and take a look at the waveform.

For lower frequencies, however, the quantisation effect becomes more apparent. Again, the speakers will smooth a lot of this out, but it does matter more during processing.

If that were true, it would sound like a square wave. Have you ever looked at a 20 kHz waveform on an oscilloscope after it was converted to digital? It is not a square wave.


15 minutes ago, John said:

If that were true, it would sound like a square wave. Have you ever looked at a 20 kHz waveform on an oscilloscope after it was converted to digital? It is not a square wave.

Most people over 25 struggle to hear 20 kHz, so I doubt they could tell whether it sounded like a square wave or a sine wave.

And yes, I have looked at a 20 kHz sine wave on an oscilloscope after it's been digitised. No, it's not an exact square wave, but it's not a sine either.

20 kHz is a bad example anyhow; 22.05 kHz would be a better example. Try looking at that.

Better still, show me exactly how a 22.05 kHz sine wave can be represented at a 44.1 kHz sample rate, when you've only got two samples between peak and trough.


2 minutes ago, msmcleod said:

Most people over 25 struggle to hear 20 kHz, so I doubt they could tell whether it sounded like a square wave or a sine wave.

And yes, I have looked at a 20 kHz sine wave on an oscilloscope after it's been digitised. No, it's not an exact square wave, but it's not a sine either.

20 kHz is a bad example anyhow; 22.05 kHz would be a better example. Try looking at that.

Better still, show me exactly how a 22.05 kHz sine wave can be represented at a 44.1 kHz sample rate, when you've only got two samples between peak and trough.

That is an absurd point. No one can hear 22.05 kHz, nor is it important to music. Why not ask for a 50 kHz wave, when 44.1 kHz is only meant to go to 20 kHz? It's just as pointless. Believe whatever you want.


I still use 24/44.1. With 44.1 kHz, you can stream more tracks, projects take up less space, and audio streams more efficiently. However, as I've pointed out many times, some sounds and processes created in the box sound better at 96 kHz. This is an obvious difference, not a wine-tasting kind of thing. However, CbB has upsampling, so even in a 44.1 kHz project you can render at 96 kHz and obtain the audio benefits of working at that rate.

I'm not too concerned about using a high sample rate to reproduce high frequencies better, because the digital stream goes through a smoothing filter during D/A conversion that gets rid of the stair-stepping.

I've NEVER met anyone who can reliably differentiate between a project recorded at 96 kHz that's played back at 96 kHz or 44.1 kHz. So if you can get the benefits of 96 kHz performance as needed at 44.1 kHz via oversampling, then AFAIC there's really no need to record at 96 kHz.

You could make an argument that the signal degrades when doing sample rate conversion, and that you're therefore best off recording at 96 kHz so you have at most one sample rate conversion (down to 44.1 kHz or whatever, if needed). While in theory this is true, sample rate conversion is way better than it was in the early days of digital audio. CbB has excellent-quality SRC, and I doubt anyone can hear a difference caused by oversampling.


3 hours ago, msmcleod said:

Whilst what you say is largely true, it's worth pointing out that at 96 kHz the analog signal can only produce a square wave at 48 kHz.

Even at 96 kHz, the approximation of a 12 kHz sine wave will only have 4 steps from zero to peak. That goes down to 2 steps at 48 kHz.

So the argument for using a higher sampling frequency is more to do with getting better accuracy of the audible high frequencies... i.e. ones that will look less like Tetris blocks.

Sorry man, but you're describing a common misinterpretation of sampling. Yes, at 96kHz you have just 2 reference points to build a 48kHz wave. But that is sufficient to reconstruct it (and all lower frequencies) perfectly.

Note that sampling (and the corresponding reverse conversion) relies on the fact that any audio is a combination of sine waves. There is no "square wave" in audio. Think about your speaker... to reproduce a perfect square wave, it would have to move instantly, and that is not possible (the speed of light limit...).

A "square wave" and other jumping forms (really, approximations of them) are used in subtractive synthesizers because, from the frequency-spectrum perspective, they contain "all frequencies".
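The reconstruction claim is easy to check numerically. Below is a rough Python sketch (numpy assumed available; names are mine) of Whittaker-Shannon sinc interpolation - the idealized form of the reconstruction filter inside every DAC - recovering a 12 kHz sine in between its 96 kHz samples:

```python
import numpy as np

FS = 96_000   # sample rate (Hz)
F = 12_000    # sine frequency, well below Nyquist (48 kHz)

n = np.arange(-1000, 1000)                # sample indices around t = 0
samples = np.sin(2 * np.pi * F * n / FS)  # the stored sample values

def reconstruct(t):
    """Whittaker-Shannon interpolation: each sample contributes one
    sinc pulse; their sum is the smooth band-limited original."""
    return float(np.sum(samples * np.sinc(FS * t - n)))

# Evaluate exactly BETWEEN two samples - no "step" survives.
t_mid = 0.5 / FS
error = abs(reconstruct(t_mid) - np.sin(2 * np.pi * F * t_mid))
print(error)  # tiny - a sine comes back, not a staircase
```

Real converters use practical approximations of this ideal filter, but the principle is the same: the "steps" exist only in the sample data, not in the reconstructed analog output.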

 


For amateurs and people with cheaper gear, using double rates like 96 can help you not suffer as badly if you have the gain set too low and you're too busy with your friends to notice. If you can set your levels properly, 44.1 is sufficient imho. And the more tracks you have, the less it matters, in part because they won't all have the gain too low.

Edited by Gswitz

It is not possible to claim, without reservation, that high sample rates result in lower latency.

If the driver buffer settings are the same, latency is lower at a higher sample rate because it takes less time to fill the buffers.

However, higher sample rates make the CPU work harder, so it may not be possible to run an interface at the same buffer setting as at a lower sample rate.

 


Anyone here ever checked this site out?

Sample rate conversion analysis

Our favorite DAW looks pretty good.

I dutifully ran all their tests and submitted the results from Mixcraft 7 some time ago but they never put them up.

I will say this about sample rates and conversions and all that: just as a rule, I believe that the fewer operations performed on the material the better.

This isn't yet one of those debates about "whether humans can hear the difference," but it is my belief that in the decades to come, research will reveal that there is a lot more information that we are able to take in via the sense of hearing than previously understood. Specifically that there are other kinds of perception than just frequency and amplitude.

I am a huge skeptic when it comes to audiophool stuff, yet I have been moved to tears by hearing the difference between two audio sources that "experts" would say I shouldn't be able to tell the difference between.

I also suspect that not everyone has the ability to hear at the same level. This would make evolutionary sense, as the ability to hear with greater acuity, especially directional and higher pitched sounds, and respond to them quickly, would be a trait that would make for a higher survival rate in environments with certain types of predators and prey.

When I listen to MP3s at lower bitrates, they sound like they have "the corners rounded off" and "smeared transients," and my hearing, at age 57, after playing in punk bands....

How Sir George Martin could still make good mixing decisions well into his 80s, even though his hearing had to have been pretty shot.... I can pick tiny little metallic "tings" out of a dense mix where a piece of drum hardware hit another piece, and go in and solo tracks until sure enough, there it is. I'm sure you all can, too.

There's phase distortion, and a thing that I never hear discussed, group delay. There's more going on than we're measuring for today. More than we're able to measure for. It will be exciting to see, if we learn about it in my lifetime.


3 hours ago, Starship Krupa said:

I will say this about sample rates and conversions and all that: just as a rule, I believe that the fewer operations performed on the material the better.

This isn't yet one of those debates about "whether humans can hear the difference," but it is my belief that in the decades to come, research will reveal that there is a lot more information that we are able to take in via the sense of hearing than previously understood. Specifically that there are other kinds of perception than just frequency and amplitude.

I agree with that. However digital audio is an interesting animal, because it's all about numbers. Early sample-rate conversion algorithms simply didn't have the resolution not to disturb the numbers due to roundoff errors and such. As the resolution increased, so did the quality. But the other issue is that because they are numbers, anything you do is an operation - even just changing the level, passing the signal through DAC, or clocking it.

In any A/B test, the playing field absolutely has to be level. Although the numbers in the digital data stream may be correct, that doesn't mean there aren't phase differences caused by phenomena like jitter after that data leaves the hard drive, including the second it hits the DAC. If a converter exhibits more jitter at 96 kHz than 44.1 kHz - which is technically possible - then the 96 kHz audio might sound inferior. Or, it might sound different and be interpreted as "better," the same way that people like the phase shift caused by filters in "character" equalizers.

Monitoring a source with jitter, even small amounts, will sound different compared to monitoring after the data stream has hit a playback medium that re-synchronizes the digital stream. Overall, I suspect the deeper people get into why "identical" audio sounds different, the more they'll find that phase differences introduced after the data stream leaves the hard drive are the issue, more than frequency response or amplitude. But I don't know... it may be that higher sample rates are needed not so much to capture frequencies, but to preserve proper phase relationships. Paul Reed Smith has done some research along those lines; maybe I'll see him at NAMM and find out what he's been up to lately.

And I still think DSD sounds better than PCM, but that it has more to do with the post-filtering than the data stream. Maybe. At least I think so :)


Craig, what are your thoughts on group delay?

I recently had an experience where I replaced my Alesis RA-100 with a vintage Crown D60 that a client had given me in trade as credit on an amp repair.

The Crown came out of the production studio at a radio station and wasn't working when I got it. The 2N2055's in the output, which you're supposed to buy in matched pairs from Crown, had been replaced heaven knows how many times with whatever was in Radio Shack's parts bins, and I put in a new volume pot that didn't seem to have the same taper as its mate on the other channel. I didn't have high hopes for it, but it looked really cool, and I figured, what difference is a workhorse solid-state power amp going to make anyway? The only issue might have been the lower rated wattage of the Crown.

I turned it on and played my favorite test song through it, Radiohead's "Everything in its Right Place," and was floored. I immediately hollered for my housemate, also an audio repair guy, and he listened, was also blown away, and we took a closer look at the service manual for the Crown.

It had a description of the design theory, and it included the fact that they had paid special attention to minimizing phase shifts and group delay issues.

I've not studied the schematic for the RA-100. For all I know it could have a couple of chip amps in it. Given Alesis' reputation and philosophy of delivering value for the money, it probably does.


On 1/20/2019 at 12:38 AM, AB3 said:

Thanks scook.  But can a higher sample rate theoretically result in lower latency?

Latency of a 64-sample ASIO buffer size at each sample rate:

  • 44.1kHz = 1.45ms
  • 48kHz = 1.33ms
  • 88.2kHz = 0.73ms
  • 96kHz = 0.67ms
  • 176.4kHz = 0.36ms
  • 192kHz = 0.33ms

 

Round-trip latency is the sum of the following

  • ASIO input buffer
  • ASIO output buffer
  • The driver's (often hidden) safety-buffer
  • Latency of the A/D and D/A converters

 

Thus, at higher sample-rates, latency can be lower.

However, the CPU has far less time to fill the buffer... so CPU use is significantly higher.

Note that some audio interfaces won't allow using smaller ASIO buffer sizes at sample rates higher than 48k.

  • e.g. the UA Apollo goes down to a 32-sample ASIO buffer size, but you can't select a buffer size lower than 64 samples if working above 48kHz.
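The per-buffer numbers above all come from one formula: one buffer's latency is just the buffer size divided by the sample rate. A minimal sketch (the function name is mine, for illustration):

```python
def buffer_latency_ms(buffer_samples, sample_rate_hz):
    """Time to fill (or drain) one ASIO buffer, in milliseconds."""
    return buffer_samples / sample_rate_hz * 1000

# A 64-sample buffer at each common rate
for rate in (44_100, 48_000, 88_200, 96_000, 176_400, 192_000):
    print(f"{rate} Hz: {buffer_latency_ms(64, rate):.2f} ms")
```

Remember this is only one of the four components of round-trip latency listed above.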

 

Edited by Jim Roseberry

Hello everyone

With PCM, each sample is captured at the full 16- or 24-bit depth at every tick of the sampling clock, creating a stepped resolution.

If you are recording acoustic instruments with quality mics, DSD remains superior in quality: it uses one bit sampled at very high rates (2.8 or 5.6 MHz), where each bit is recorded relative to the previous one's differential, and that imparts a more linear resolution, similar to the way tape is linear with its tiny magnetic particles.

There is still a filter, but it sits well above the audible range; ultimately there is also filtering when converting to PCM, which somewhat negates the advantage.

Exporting a commercial mix to DSD preserves the mix as it left the DAW. From DSD playback it can be mastered via an analogue chain to the various formats, into another machine or computer, without the truncation errors that are the problem with 24 bits... eventually, 24 bits has to become 16 bits. That is why some artists offer DSD purchases, not to mention the revival of vinyl and high-res downloads.

In theory, rates that 44.1 divides evenly into (88.2, 176.4, etc.) can be mixed down to 44.1 cleanly. Not so much for 96kHz.

I use 24 bits with 32-bit processing in the DAW, mostly at 88.2, and want an interface that'll do 176.4kHz for jobs destined for DSD export. Non-divisible sampling frequencies like 96kHz aren't an issue here; those are for broadcasting, DVD, etc.

The quandary is this: the higher the sampling frequency at 16-bit depth, the more linear the result with fewer quantization errors, versus 24 bits (or 32-bit in-DAW processing) at a lower sample rate.

I have found recording at higher sample rates in 16 bits to be slightly easier on the PC and DAW.

But 24-bit depth does give more initial headroom, making recording more relaxed.
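The headroom point can be put in numbers: each bit of linear PCM adds about 6 dB of theoretical dynamic range. A quick sketch (Python; the function name is mine):

```python
import math

def dynamic_range_db(bits):
    """Theoretical dynamic range of linear PCM: 20*log10(2^bits),
    i.e. roughly 6.02 dB per bit."""
    return 20 * math.log10(2 ** bits)

print(round(dynamic_range_db(16), 1))  # ~96.3 dB
print(round(dynamic_range_db(24), 1))  # ~144.5 dB
```

So 24-bit tracking leaves roughly 48 dB of extra room, which is why you can record conservatively and still stay far above the noise floor.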

With drives so cheap now, 24-bit at as high a sample rate as you can get would seem to be the direction to go, with computers getting more capable. And just because you can, why not!

Well, a 24-bit/96kHz recording converted to 16-bit/44.1 is indistinguishable. That is how good 16-bit is.

But do let me know, if you can, how 176.4kHz at 16 bits goes with plugins and track count!

Meanwhile, 24-bit at 48kHz isn't as much of a strain as 88.2 or 96kHz.

Edited by Bart Nettle
