
Michael Anderwald

Members
  • Posts: 3
  • Joined
  • Last visited
  • Reputation: 3 Neutral

About Michael Anderwald
  • Birthday: 11/26/1982
  1. When done right, there aren't any conversion artifacts that wouldn't be much worse if you worked at 44.1 kHz and 16 bits through the whole rendering pipeline. The hard part of sample rate conversion is finding the best-sounding low-pass filter for the material, which is solved very simply by using one of the popular sample rate converters such as SoX, Audacity, r8brain, or any other good one as listed on http://src.infinitewave.ca/.

     Converting from 24 bit to 16 bit has no artifacts either. You can simply truncate the numbers, which raises the noise floor (and in 16-bit resolution can result in audible distortion, e.g. of long reverb tails at high listening levels), or you can use dithering to exploit the statistical distribution of signals below the 16-bit noise floor. Dithering adds a tiny bit of noise, but that noise keeps signal content below the least significant bit audible, effectively lowering the noise floor even further. It's the same reason gradients are dithered in digital imaging: to avoid banding with hard edges.

     So no, the advantage isn't being dumped, and the conversion doesn't add garbage. Unless you consider dithering garbage, in which case you just leave it off without losing anything that would have been there in the first place. People used to compare this to saying "It doesn't make sense to shoot a movie on 35mm film stock when the movie is going to be released straight to VHS." You keep the production quality as high as you can, for as long as you can, from the beginning of the production process, in order to end up with the best possible end product. This way you capture the most information at data acquisition and lose the least of it during the steps further down the line. Best, Michael
  2. What's the scope of the project, and what's your budget? Also, what kind of mastering style are you into? I've had a bunch of material mastered by various people and they all did their job well, so based on your post I could recommend almost any mastering engineer who fits your budget. Also, where are you based, and does it matter to you where the ME is based?
  3. It doesn't matter. The stair-steps are one way to interpret a series of values on a graph, but the lines between the points are imaginary and don't exist in reality. We are talking about pulses of a certain amplitude, taken at a constant time interval: think of a series of lollipops spaced equally apart, where only the length of the stick varies.

     This has implications, because it means that as long as a signal contains no frequencies above, say, 20 kHz, it doesn't matter whether it's captured at a pulse rate of 44.1 kHz or 192 kHz; the reconstructed signal is identical. There are no stairs that get smaller at higher sample rates. And it has nothing to do with dither, which only concerns the lengths of the lollipop sticks, not the (imaginary) information between them.

     The zero-order-hold (stair-stepped) representation of sampled voltages is maybe one of the biggest reasons people think analog is somehow intrinsically more clean and continuous than digital audio, when in reality even the cheapest AD/DA chipsets beat practically any analog recording medium in noise and lack of distortion. This is also why, when you right-click a WAV or AIFF file in Windows, you're told that the file contains PCM data, where PCM stands for Pulse-Code Modulation.

     By the way, I voted 96 kHz because I ran a few tests and concluded that it substantially mitigates aliasing artifacts when I use heavy distortion. As the final delivery medium, I'm completely content with properly band-limited 44.1 kHz.
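The truncation-versus-dither trade-off described in the first post can be sketched in a few lines of Python. This is a minimal illustration, not production code: the function name is mine, and the choice of TPDF (triangular) dither is my assumption, though it is the common textbook choice for bit-depth reduction.

```python
import random

def quantize_16bit(sample, dither=True):
    """Reduce one float sample in [-1.0, 1.0) to a 16-bit integer.

    dither=False simply truncates: the rounding error is then
    correlated with the signal, which is what makes low-level
    material (e.g. long reverb tails) sound distorted in 16 bit.
    dither=True first adds TPDF (triangular) dither, decorrelating
    the error into a steady, benign noise floor.
    """
    scaled = sample * 32767.0
    if dither:
        # TPDF dither: sum of two independent uniform sources of +/-0.5 LSB.
        scaled += random.uniform(-0.5, 0.5) + random.uniform(-0.5, 0.5)
    # Truncate toward zero, then clamp to the 16-bit integer range.
    return max(-32768, min(32767, int(scaled)))
```

Without dither, the error tracks the signal, which is why quiet passages can sound grainy; with dither, that correlated error becomes a constant low-level hiss carrying the sub-LSB information.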
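The "lollipop" argument in the third post is the Whittaker-Shannon sampling theorem: a band-limited signal is reconstructed from its samples by sinc interpolation, with no stair-steps involved. A rough Python sketch (the sinc sum is truncated to a finite sample window here, so the match is close rather than mathematically exact):

```python
import math

def reconstruct(samples, fs, t):
    """Whittaker-Shannon interpolation: evaluate the band-limited
    signal at an arbitrary time t from its discrete samples."""
    total = 0.0
    for n, x in enumerate(samples):
        arg = fs * t - n
        # sinc kernel: 1 at the sample instant, sin(pi*arg)/(pi*arg) elsewhere.
        total += x * (1.0 if arg == 0 else math.sin(math.pi * arg) / (math.pi * arg))
    return total

fs = 44100.0
f = 1000.0  # a 1 kHz tone, far below the 22.05 kHz Nyquist limit
samples = [math.sin(2 * math.pi * f * n / fs) for n in range(2000)]

# Evaluate halfway between two sample instants: the reconstruction
# follows the original sine, not a stair-step held at the last sample.
t = 1000.5 / fs
error = abs(reconstruct(samples, fs, t) - math.sin(2 * math.pi * f * t))
```

The point is that the value between samples is fully determined by the samples themselves, which is why a higher sample rate adds nothing for content already below the Nyquist frequency.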