
Specific Pro Tools comparison question


Joe_Southern

Question

I've used Cakewalk/Sonar since the beginning of time. Instruments were made of dinosaur bones when I first started. It's been perfect for me through the years and still is, but I have been thinking about a question.

Suppose for simplicity, I have several stems - guitar, vocals, bass, drums, etc.

Now, suppose I bring those into Cakewalk and Pro Tools and, in theory, mix them the same way - same plug-ins, etc., at the same settings.

My question is: when those tracks are summed by each program's mixing engine, is there a noticeable difference in how the two programs process the files? Do the mixes end up about the same?

I see lots of things about the operation of the two programs, but never anything kind of breaking it down like in my example. Has anyone done this to give some insight?

I wonder because my vocals sound so crappy (:


Recommended Posts


This is, unfortunately, a question that only the developers of the respective programs could answer definitively and objectively.

From a user's perspective, all we can do is test it. Make a couple of small mixes using no plug-ins, and/or simple plug-ins, with identical fader and plug-in settings. Then render the projects and listen.

There are those end users who will tell you that "all DAWs sound the same, period," because they knew a guy who knew a guy who read on a forum that someone tried a null test of unknown parameters and source audio and it nulled perfectly. Presumably that fictional person ran the test on every DAW currently on the market.

I can say that I ran null tests a few years ago using sine waves as source material (not even a great test, because it leaves out transients), with Cakewalk and Mixcraft as the DAWs under test, and they didn't null. They weren't terribly far apart, but at least at the panning settings I used, with no FX, the renders weren't identical.
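
If anyone wants to try this kind of null test on their own renders, here's a rough Python sketch of the idea. It assumes the numpy and soundfile packages and two hypothetical files named render_daw_a.wav and render_daw_b.wav:

# Minimal null test: subtract two renders and report the residual peak.
# Assumes both files have the same sample rate and channel count.
import numpy as np
import soundfile as sf

a, sr_a = sf.read("render_daw_a.wav")   # hypothetical filenames
b, sr_b = sf.read("render_daw_b.wav")
assert sr_a == sr_b, "sample rates must match"

n = min(len(a), len(b))                 # guard against off-by-one lengths
residual = a[:n] - b[:n]

peak = np.max(np.abs(residual))
if peak == 0:
    print("Perfect null: the renders are bit-identical.")
else:
    print(f"Residual peak: {20 * np.log10(peak):.1f} dBFS")

If the residual is way down below -100 dBFS you're probably looking at rounding noise; anything higher is worth actually listening to.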

In the meantime, we have advertising material from different DAWs (including Samplitude/Music Creator and Mixcraft) touting how they improved the sound of their respective audio engines. You can see why I am personally skeptical of the "all DAWs sound exactly alike" statements. I might go as far as saying that for the task of recording whatever audio is coming in from the interface, you'll likely get the same results. However, as soon as you start creating mixes, a whole lot of proprietary algorithms for mixing, panning, and interfacing with plug-ins enter the picture.

You did say "noticeable." That's subjective. Who's doing the listening and noticing? Experienced trained listeners? What programs and hardware are they using to play it back and listen? Audio player software for computers varies widely in sound quality. There are so many variables that the only way to get satisfactory answers, IMO, is to set up some tests ourselves and listen.


35 minutes ago, Starship Krupa said:

I can say that I ran null tests a few years ago using sine waves as source material (not even a great test, because it leaves out transients), with Cakewalk and Mixcraft as the DAWs under test, and they didn't null. They weren't terribly far apart, but at least at the panning settings I used, with no FX, the renders weren't identical.

So, did you actually hear the difference or just see it?

Most importantly, can anyone listen to a finished product (or even a half-baked one for that matter) and be able to tell what DAW was used?

My money is on "no."

Even if there were a noticeable difference, my original comment still stands: vocal training and practice make far more difference than chasing after gear and software.



I have to agree with @Byron Dickens  

The OP is saying they don't like how their vocals sound. That's an extreme difference compared to, say, commercial recordings.
In the dozens of threads on this topic I've read in the past, it's stated that each DAW has a very, very tiny difference that only the best ears or measuring tools will reveal. That is not an extreme difference by any stretch.
If I remember correctly, it has something to do with pan laws, which can be a tiny bit different in each DAW (see the sketch below).
Bottom line: great-sounding vocals come first from a great performance. Those performances don't require much processing to finish nicely.
A weak or terrible vocal performance might be made listenable by turd polishing, but the tools for turd polishing don't really differ from DAW to DAW. So changing to a different DAW will make zero difference, other than that you might find better turd-polishing tips for certain DAWs than for others.
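
For what it's worth, here's a little Python sketch of that pan-law point - my own toy formulas, not any DAW's actual code. The same knob position comes out at different gains depending on which law the engine picked:

# Two common pan laws mapping the same knob position (-1 = hard left,
# +1 = hard right) to left/right channel gains.
import math

def constant_power(pan):          # -3 dB at center, sin/cos taper
    angle = (pan + 1) * math.pi / 4
    return math.cos(angle), math.sin(angle)

def linear(pan):                  # -6 dB at center, straight crossfade
    return (1 - pan) / 2, (1 + pan) / 2

for pan in (-0.5, 0.0, 0.5):
    l1, r1 = constant_power(pan)
    l2, r2 = linear(pan)
    print(f"pan {pan:+.1f}:  constant-power L/R = {l1:.3f}/{r1:.3f},  "
          f"linear L/R = {l2:.3f}/{r2:.3f}")

At center, constant-power leaves each channel at 0.707 while linear leaves it at 0.5 - same project, different balance, before a single plug-in loads.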

Edited by John Vere


I was kind of joking about the vocals - but they do need help ha ha. It's more of an overall concept question.

Thanks for the thoughtful comments.

I've done tons of programming in the past (and present) and can see how the actual processing in two DAWs would be almost impossible to make exactly the same - just as two photo programs would not convert the same .bmp file to a .jpg exactly the same way. Close? Probably. A noticeable difference? Probably not.

There isn't a standard for how to mix those signals together, as far as I know, so there would be some differences in approach. I guess I wondered if someone had tested that. It really could be dramatic across all the various DAWs on the market. But I was more focused on just the Pro Tools vs. Cakewalk comparison.

The sine wave tests Starship Krupa described - that's exactly what I was curious about. But then also, is there a subjective difference?
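
Coming at it from the programming side, here's a toy Python sketch of the kind of divergence I mean - two summing loops that are both "correct" but don't produce bit-identical output. Purely illustrative, not how any particular DAW actually sums:

# Sum 24 identical tracks two ways: a 32-bit running sum (engine A)
# and a 64-bit sum (engine B). Both are reasonable designs; they
# still differ at the last bits.
import numpy as np

rng = np.random.default_rng(42)
tracks = rng.uniform(-0.1, 0.1, size=(24, 48000)).astype(np.float32)

mix_a = np.zeros(48000, dtype=np.float32)
for t in tracks:                                  # engine A
    mix_a += t

mix_b = tracks.astype(np.float64).sum(axis=0)     # engine B

diff = np.max(np.abs(mix_a.astype(np.float64) - mix_b))
print(f"worst-case sample difference: {diff:.2e}")  # tiny, but not zero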

 


19 minutes ago, Joe_Southern said:

I've done tons of programming in the past (and present) and can see how the actual processing in two DAWs would be almost impossible to make exactly the same - just as two photo programs would not convert the same .bmp file to a .jpg exactly the same way. Close? Probably. A noticeable difference? Probably not.

There isn't a standard for how to mix those signals together, as far as I know, so there would be some differences in approach. I guess I wondered if someone had tested that. It really could be dramatic across all the various DAWs on the market.

Yes! How could 6 different development teams all make the same decisions as to how to mix audio files together? To render them?

I'm actually in the middle of setting up a listening test using a pair of virtual instruments playing arpeggio patches. I'm hoping that it will allow me (and others) to do some testing of subjective listening quality.

I was just messing about, keeping my chops up with my secondary DAW, Mixcraft, and noticed a difference between how the same two synths playing overlapping arps sounded there and how the same dataset sounded in Cakewalk.

The plan is to expand the testing to other DAWs I have access to, like Ableton Live Lite and Studio One Artist 5.

Apparently I was the only one who got the joke about your vocals sounding crappy. Practical hint on that, though: your voice is an instrument just like the others you play. What makes your playing get better? Practice. Same with singing. The more I practice, the better my pitch and overall confidence and performance. Record your vocal part as many times as you can stand to over a number of days. Record, listen to the take, repeat.



Because of physics and the Nyquist theorem (and I think my MSEE holds some sway on this subject), the "quality" of a final audio product is more likely a function of the innate quality of the D-to-A converters taking it into the analog realm than of any minor differences in a particular DAW's algorithms. Although dither and floating point can have a large effect, in the end a conversion of sample rate and bit depth is pretty hard to screw up.

It's just mathematical conversion.
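
To put a concrete face on that "mathematical conversion," here's a bare-bones Python sketch of one such step - a float mix reduced to 16 bits with TPDF dither. A generic textbook version, not any particular converter's code:

# Float -> 16-bit reduction with triangular (TPDF) dither, the kind
# of conversion where implementations differ only in tiny ways.
import numpy as np

def to_int16_tpdf(x, rng=np.random.default_rng(0)):
    lsb = 1.0 / 32768.0
    # Difference of two uniforms = triangular noise spanning +/- 1 LSB
    dither = (rng.random(x.shape) - rng.random(x.shape)) * lsb
    y = np.clip(x + dither, -1.0, 1.0 - lsb)
    return np.round(y * 32767).astype(np.int16)

tone = 0.5 * np.sin(2 * np.pi * 1000 * np.arange(48000) / 48000)
print(to_int16_tpdf(tone)[:8])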

My 2 cents worth. 

Edited by OutrageProductions


@Starship Krupa "using sine waves as source material (not even a great test, because it leaves out transients), with Cakewalk and Mixcraft as the DAWs under test, and they didn't null."

Any electronic engineer worth his salt will tell you that a square wave is the premier test signal; it immediately and discernibly reveals rise time (how fast a circuit can respond to an infinitely short impulse) and over/undershoot (known as "ringing"), which determine the quality of AD/DA conversion. AFAIK, most I/O interfaces are still using some version based on the backbone of the Motorola 56k family of chips. But the true tell is in the quality of the ancillary design and components, like capacitors and resistors, which shape both the objective measurements and the subjective impression.

That's why tubes and transformers always sound "colored" and are generally more pleasing (to some) than discrete circuits in analog audio. Luckily, the modern ability to map and recreate those characteristics in the digital realm has come a long way recently. Hence some excellent renditions of the LA-2A. 😁
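
On the square wave point, a quick Python sketch shows why it's such a revealing signal. Build one from a finite number of harmonics - which is all any band-limited system can carry - and the overshoot and ringing appear on their own:

# Band-limited square wave from odd harmonics. The ~9% overshoot
# (Gibbs ringing) never goes away no matter how many harmonics you
# add - exactly the artifact a square-wave test exposes.
import numpy as np

t = np.linspace(0, 1, 48000, endpoint=False)
square = np.zeros_like(t)
for k in range(1, 200, 2):                  # odd harmonics only
    square += (4 / (np.pi * k)) * np.sin(2 * np.pi * k * t)

print(f"peak = {square.max():.3f} (ideal flat top = 1.000)")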


3 hours ago, Starship Krupa said:

I can say that I ran null tests a few years ago using sine waves as source material (not even a great test, because it leaves out transients), with Cakewalk and Mixcraft as the DAWs under test, and they didn't null. They weren't terribly far apart, but at least at the panning settings I used, with no FX, the renders weren't identical.

Just curious - is the sine wave testing you mention done by importing the file to a track and then analyzing the exported result? I would be interested in seeing a setup that uses a virtual audio driver as the DAW input and another virtual audio driver capturing the output, run across different DAWs. Of course, that would only analyze the results of an identical signal set passing through each program's processing; it would not necessarily compare subjective quality.

I say that because for years I was concerned about the coloration and degradation of audio through recording, playback, and delivery. When digital became commonly available, I hoped it would address some of that, but so far it seems to have imposed only a different set of conditions on the process. Audio still affects each individual so differently that finding what "sounds best", or even "the same", is really up to each individual in each situation.

Yes, electrically and physically, we can measure the accuracy of a signal - which we think we understand to be part of what makes up "good" sound - and try to use whatever technically works best. But does it really work best? I still find things like analog emulation such a surprise in audio processing: we've decided to re-color the audio back toward known analog behaviors that were once considered unavoidable electrical loss and degradation (tape sims, etc.). It just shows me that, no matter the technique, getting "good" audio is still as variable as ever. We just keep coming up with (hopefully) better tools to manage it all. So far, I like my results in Cakewalk, bad vocal days or not.



Go on Gearspace and you'll find this discussion has been gone over for probably 6,000 pages. As I said above, I've read a lot of it, and the results are down in the 0.00005% difference zone. It's all been done before by lots of us audio nerds, as well as way-out-there hi-fi freaks. It's really a waste of time worrying about it; I'd rather worry about how fast my fridge can cool my beer. It all ends up on a cell phone speaker anyway 😬


On 4/28/2023 at 5:43 PM, Starship Krupa said:

Yes! How could 6 different development teams all make the same decisions as to how to mix audio files together? To render them?

Because they're not inventing a mixer. They're copying how it behaves. Anything different than that is not a mixer, but something else.


On 5/1/2023 at 11:22 AM, Bruno de Souza Lino said:

Because they're not inventing a mixer. They're copying how it behaves. Anything different than that is not a mixer, but something else.

Indeed they are. But not all (physical) mixers behave and sound alike, although there was a time when most people believed they did. And my questions have to do with what choices (and compromises) developers make when copying said mixer.

I was once a professional software QA engineer (at Macromedia/Adobe), and I know something about how programmers approach problems. Everyone wants to implement their own great ideas, and there's no magic hand that comes down out of the sky and slaps their hand when they decide not to adhere to accepted principles or practices.

Here's Harrison on their claims about how their DAW, Mixbus, supposedly sounds "better" than other DAWs:

"When the digital revolution came, we were asked to convert the analog "processor" into a digital processor, while leaving the control surface unchanged. Film mixers wanted the control surface to work and sound exactly like the analog mixer they were using for previous projects. This required us to develop a digital audio engine that operated and sounded exactly like the analog mixer they were using for previous projects. This transition was not undertaken by any other company, and it has provided us with techniques and proprietary technology that we have incorporated into all of our high-end mixers. Mixbus gives us an opportunity to share this technology with a much wider range of users."

Sure, it's a marketing blurb.

On 4/28/2023 at 5:12 PM, John Vere said:

Go on Gearspace and you'll find this discussion has been gone over for probably 6,000 pages

And so what? A pile of amateurs having a big debate. Gearspace is known for being a big weenie-waving fest. Yeah, I get that some people are tired to death of this discussion. "Gawd not this again!" So don't follow it!

The culture over here is blessedly different. I 100% agree that if one is going to worry about some aspect of the sound resulting from their recording and mixing process, there are many, many things to be concerned about before you get to "hmmm, I wonder if there is a difference between how different DAWs sound. I better get the 'best' one."

The answer to that (from me) is: if you're really concerned about it, just frickin' try out the DAWs you're considering. They all have either trial or freebie versions. If you hear a difference between Pro Tools First, Cakewalk, Ableton Live Lite, REAPER, et al., then go with the one that sounds best to you. Don't go by the "wisdom" of a bunch of guys (and they're always guys) trying to top each other in some discussion.

That's my approach: if you're curious, if it's important to you, then try it. I did, recently, and was brave enough to post about my progress through the process. One of the conclusions that I and the people sharing in the discussion have come to is that it's really hard to design tests that don't themselves introduce differences.

Let's say someone wants to test the summing engine. If they go with already-created audio material, that right there bypasses the DAW's own recording engine. Then most DAWs want to convert imported audio files; there's another confound, because at that point we're also testing their conversion algorithms. And so on. It's really hard to do, and I'll bet that far less than 1% of the people who debate this have ever actually tried to test it themselves.

People get hung up on making it 100% absolutely objective. So my approach (in that other thread) is to loosen up a bit on chasing an ideal and go with actual musical program material, generated by the DAW using plug-ins, entirely in the box. People have pointed out that my methodology is imperfect, but it imitates how actual people use music software. We don't record pure sine waves and square waves from test equipment. We record and mix complex program material.

I'm flailing around trying it right now, and as far as I'm concerned, the other people weighing in have been really cool about it. They're keepin' me honest. But I'm also emphasizing that what I'm coming up with is imperfect and not intended to prove anything other than maybe that it's really hard to design and implement a test for this. I posted the renders that came out of my exploration for anyone to listen to if they are curious.

So now I'm one of those mythical guys (and we're always guys😂) on a forum somewhere who tried testing it. And my "conclusion" is that objective tests are next to impossible. Which is kind of as it should be. When we're trying out other music tools, we're certainly not objective. We noodle around and listen, we claim that blue guitars with gold hardware are ugly, etc.

 


20 minutes ago, Starship Krupa said:

So now I'm one of those mythical guys (and we're always guys😂) on a forum somewhere who tried testing it. And my "conclusion" is that objective tests are next to impossible. Which is kind of as it should be. When we're trying out other music tools, we're certainly not objective. We noodle around and listen, we claim that blue guitars with gold hardware are ugly, etc.

When you focus on specific instances at a given time, you can listen over and over and notice different things each time. But that's often the problem. If I play you the same thing twice and you listen for two different things, you'll remember two different things. This isn't hallucination, and it isn't deception; it's how our brains work. If you expect a difference, you're going to listen for different things, and you'll hear different things even with the same stimulus. This is one of the really important points of doing any kind of auditory testing: if you listen for different features, you'll remember different things, because you extracted a little piece out of that sea of data. And if you have a reason to assume things are different, you're likely to listen differently, focus on different things, and remember different things. Or, if you've convinced yourself everything sounds the same, you'll steer yourself that way instead.

Whenever you test auditory stimuli, the test must have a falsifiable design, meaning you have to be able to tell whether you actually detected something or whether you consciously or subconsciously steered yourself into noticing it. What that basically means is you have to do a blind test because, if you know what the two things are, your brain is gonna use that information. Doesn't matter how smart you are, how trained you are, who you are... There hasn't been an example of someone who can avoid it. It's just life.
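
A falsifiable design doesn't have to be fancy. Here's a bare-bones Python sketch of an ABX trial loop with a check against guessing odds; the listener's answer is stubbed out with a coin flip, which is where real playback and a real response would go:

# ABX protocol: X is secretly A or B on each trial; afterwards we ask
# how likely the score would be from pure guessing.
import random
from math import comb

def p_by_guessing(correct, trials):
    # Chance of scoring at least this well by flipping a coin
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

trials, correct = 16, 0
for _ in range(trials):
    x_is_a = random.random() < 0.5        # hidden assignment of X
    # ...play A, B, and X through the playback chain here...
    answer_is_a = random.random() < 0.5   # stand-in for the listener's answer
    correct += (answer_is_a == x_is_a)

print(f"{correct}/{trials} correct, p = {p_by_guessing(correct, trials):.3f}")

If p comes out below 0.05 or so, the listener probably heard something real; otherwise you can't rule out guessing.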


7 minutes ago, Bruno de Souza Lino said:

What that basically means is you have to do a blind test because, if you know what the two things are, your brain is gonna use that information. Doesn't matter how smart you are, how trained you are, who you are...There hasn't been an example of someone who can avoid it. It's just life.

Absolutely agree. Which is why, when I put up my rendered files, I gave them arbitrary names. The remaining issue now is, ironically, that this doesn't do me any good, because I know what my naming system is. Maybe I'll look around, there must be some software that allows double blind testing of audio files to be done by the computer.

Maybe MCompare? I'll check and see.
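
Failing that, a few lines of Python can do the blinding for me: copy the renders to random names and stash the key somewhere I won't peek until after listening. (Folder and file names here are hypothetical.)

# Copy every render in ./renders to a randomized name in ./blind and
# save the name-to-name key for the reveal afterwards.
import json, random, shutil
from pathlib import Path

renders = sorted(Path("renders").glob("*.wav"))
labels = random.sample(range(1000), len(renders))

Path("blind").mkdir(exist_ok=True)
key = {}
for src, n in zip(renders, labels):
    blind_name = f"file_{n:03d}.wav"
    shutil.copy2(src, Path("blind") / blind_name)
    key[blind_name] = src.name

Path("blind_key.json").write_text(json.dumps(key, indent=2))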

Speaking of tests, Rick Beato famously set up a test to "prove" that even trained listeners like his studio assistants couldn't hear the difference between lossless audio and MP3's at various bit rates (I may not be remembering this exactly). Great idea. The conclusion that he came to is that they couldn't tell the difference.

But IMO, the test was useless. He was doing it through a web browser, from another site on the web. Why is that useless? Because web browsers have their own codecs for processing audio, and after that the audio goes through the OS mixer. This is similar to running your DAW using the MME driver. An OS mixer does all sorts of resampling and crap due to having to manage so many different audio streams. That's one of the reasons we use ASIO or WASAPI Exclusive: WASAPI Exclusive bypasses the Windows mixer.
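
For the curious, here's roughly what that looks like from code - a Python sketch using the sounddevice package to open a WASAPI exclusive-mode stream, so playback skips the Windows mixer. Windows only, and it will fail if the device doesn't support exclusive mode:

# Play a test tone through WASAPI exclusive mode, bypassing the
# Windows mixer and its resampling. Requires: pip install sounddevice numpy
import numpy as np
import sounddevice as sd

sr = 48000
tone = 0.2 * np.sin(2 * np.pi * 440 * np.arange(sr) / sr).astype(np.float32)

exclusive = sd.WasapiSettings(exclusive=True)
sd.play(tone, sr, extra_settings=exclusive)
sd.wait()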

Resampling algorithms are not created equal. There is objective proof of this on the SRC comparisons website that nobody ever seems to check out when I post the link. They tested a variety of audio programs' sample rate converters using sines and impulses. The results show some of them throwing off harmonics, ringing, all kinds of crap. I'm not going to bother posting the link, because when I do, nobody ever comments about having checked it out.
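
You don't even need their site to see the effect; here's a small Python sketch along the same lines, assuming scipy and numpy are installed. A 30 kHz tone in a 96 kHz file should simply vanish when converted to 48 kHz. A filtered SRC removes it; naive sample-dropping folds it down to an audible 18 kHz alias:

# Downsample a 30 kHz tone from 96 kHz to 48 kHz two ways and compare.
import numpy as np
from scipy.signal import resample_poly

sr = 96000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 30000 * t)

filtered = resample_poly(tone, 1, 2)   # polyphase SRC with anti-alias filter
naive = tone[::2]                      # just drop every other sample

for name, x in (("polyphase SRC", filtered), ("naive decimation", naive)):
    spec = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    peak_hz = np.argmax(spec) * 48000 / len(x)
    level_db = 20 * np.log10(spec.max() / (len(x) / 4) + 1e-12)
    print(f"{name}: strongest component near {peak_hz:.0f} Hz at {level_db:.1f} dB")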

I'd like to be clear, BTW, that I really, really don't want there to be any sonic difference between DAW's. So when I happened to hear a difference, it caught my attention.



Stepping back a sec: even if there is a difference, the point is actually moot. Music production isn't a cookie-cutter exercise, which is why folks can take wildly varied inputs and tools and create a good or great product. Those who want to do the work will do what's required to accomplish the task (which often involves learning the tools); those who are easily swayed will find fault or excuses in everything but themselves to rationalize why they cannot.

There has never been a requirement for a particular computer/DAW/FX to make music. Musicians for centuries didn't even have the luxury of any of them.


4 hours ago, Starship Krupa said:

Absolutely agree. Which is why, when I put up my rendered files, I gave them arbitrary names. The remaining issue now is, ironically, that this doesn't do me any good, because I know what my naming system is. Maybe I'll look around, there must be some software that allows double blind testing of audio files to be done by the computer.


https://www.pluginboutique.com/product/3-Studio-Tools/72-Utility/5360-4U-BlindTest

