
4 DAWs, 4 renders, 4 results



I just did an experiment using 4 different DAWs.

On each of them, I created 2 MIDI/instrument tracks using the same 2 VSTis, the same 2 complex arpeggio patches, and the same note (G2).

One bar of silence, 8 bars of the note (velocity 100), followed by a bar of silence.

Channel faders were pulled down to -2 dB, pan set to center. No FX. 120 BPM.

In each one I made a selection of the first 10 bars (the 8-bar clips with a leading and trailing bar), then rendered each project to FLAC.

The idea was to see what results would come from performing a set of steps, not necessarily to do something "scientific." If I were after objective, measurable results, I'd perhaps use test signals, impulses, whatever, as the source material and measure the results with the best test equipment I could get. But I don't record test signals and I don't listen using measurement equipment.

I will disclose that I am not one of those who believe that the mixing, panning, and summing engines of all the different DAWs on the market generate results that sound exactly the same given the same source material. That would require too wide a variety of people coming up with the same solutions to too wide a set of problems for me to believe it's even possible. As a consumer and user of audio recording and mixing programs, the question interests me.

The point was primarily to create audio files that I could listen to critically, but I did run some analyses using Sound Forge's "Generate Statistics" function and the results were interesting.

For instance, according to Sound Forge, the integrated loudness ranged from -17.12 to -20.02 LUFS, and the maximum true peak ranged from -0.76 to -3.90 dBTP. The length of the rendered audio files ranged from 18 to 25 seconds, due to the way each DAW handles silent lead-ins and lead-outs in a selection.
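For anyone who wants to pull the same numbers without Sound Forge, here's a minimal Python sketch, assuming the soundfile and pyloudnorm libraries; the file names are placeholders for the four renders, and the peak it reports is sample peak rather than true peak:

```python
import numpy as np
import soundfile as sf
import pyloudnorm as pyln

# Placeholder file names for the four renders.
for path in ["daw_a.flac", "daw_b.flac", "daw_c.flac", "daw_d.flac"]:
    data, rate = sf.read(path)
    meter = pyln.Meter(rate)                        # BS.1770 loudness meter
    lufs = meter.integrated_loudness(data)
    peak_db = 20 * np.log10(np.max(np.abs(data)))   # sample peak, not true peak
    length_s = len(data) / rate
    print(f"{path}: {lufs:6.2f} LUFS, {peak_db:6.2f} dBFS, {length_s:5.2f} s")
```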

I'll be subjecting the resulting files to close listening tests on a variety of speakers and headphones. The arps I used have a lot of good transient material and it will be interesting to see what the differences in imaging are.


Coolio. One thing I periodically do is run a 1 kHz signal through an effect to see (via an oscilloscope) what harmonics are added (or not) versus published hardware data on the same. Some high-end effects claiming to match hardware somehow miss the boat completely, and some low-end effects nail it.
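A software version of that check might look like the sketch below: NumPy only, with apply_effect as a stand-in for whatever plug-in is under test (here a tanh soft clipper, just so the script has harmonics to report):

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs                      # one second of signal
x = np.sin(2 * np.pi * 1000 * t)            # 1 kHz test tone

def apply_effect(signal):
    # Placeholder for the effect under test; tanh soft clipping
    # guarantees some harmonics to look at.
    return np.tanh(2.0 * signal)

y = apply_effect(x) * np.hanning(len(x))    # window to tame FFT leakage
spec = np.abs(np.fft.rfft(y))
spec /= spec[1000]                          # bins are 1 Hz wide; normalize to 1 kHz

for k in range(2, 6):                       # 2nd through 5th harmonics
    print(f"H{k} ({k} kHz): {20 * np.log10(spec[k * 1000]):7.1f} dBc")
```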

It will be interesting to see your results.


8 hours ago, Bruno de Souza Lino said:

Only one person came up with the solution and everyone uses that solution. That person was Joseph Fourier.

Can you elaborate? I only know enough to create practical tests.

Joseph Fourier passed away in 1830. What did he come up with that ensured that in the future, multiple teams of programmers would all independently design and implement algorithms that mix streams of digital stereo audio together so as to produce exactly the same results?

The companies that have claimed in their marketing literature to have improved the sound of their DAWs' audio engines (as MAGIX, Acoustica, and Ableton have all done in the past decade), were they lying?

Please educate me. I'd love to learn more.


9 hours ago, Bruno de Souza Lino said:

Only one person came up with the solution and everyone uses that solution. That person was Joseph Fourier.

Can you elaborate on this?

The programmers' work is based on his theories, but does it necessarily follow that everyone will implement them in the same way? I ask sincerely, seeking education rather than debate.

What DAWs do goes beyond just recording the stream coming from the audio interface. There's hosting of VSTis, FX plug-ins, mixing those streams, panning them, all sorts of things. Did Fourier's work account for all of that? I ask because I don't know.

In the past decade, multiple DAW manufacturers advertised that they improved the sound of their audio engines. What do you make of that?


1 hour ago, OutrageProductions said:

Not if he was triggering samples that randomize round robin.

Definitely a consideration. I chose arp patches that produce the same notes each time they're triggered. I may have got that wrong, though. It's all a big experiment.


@Starship Krupa

Without diving into the bowels of theory regarding Laplace and Fourier transforms and their integration, imagine if you will two signals (f and g) that are to be convolved into an input (or output) algorithm to become (f*g). Who is to know, in terms of clocking, whether the series of f or of g enters the function first, since computers have to access and process the data serially? The outcome, in theory, should be identical. But in practice, if you add jitter or other clocking anomalies, even minuscule ones, and/or more input data (h, i, j, k, et al.) [as in mixing multiple tracks], can one reasonably predict that every convolution operation will be identical every time?

Granted, this all happens in a DAW at the boundary limit of the Nyquist parameters in use at the time (sample rate and bit depth), and the 'audible' outcome of the convolution subjectively appears identical, but on the individual sample level it may not actually be. It can be shown mathematically that (f*g) = (g*f), and it can be examined scientifically, but the results may not be heard as identical by the average human.
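A tiny NumPy sketch of that order sensitivity (an illustration, not anyone's DAW engine): convolution commutes on paper, but the order in which floating-point samples are accumulated can still move the last bits:

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.standard_normal(4096).astype(np.float32)
g = rng.standard_normal(4096).astype(np.float32)

# Commutativity: any difference here is pure rounding, at most a few ULPs.
print(np.max(np.abs(np.convolve(f, g) - np.convolve(g, f))))

# "Mixing" 16 tracks in forward vs. reverse order: same math, but the
# accumulation order can round differently, so the bits may not match.
tracks = rng.standard_normal((16, 4096)).astype(np.float32)
fwd = tracks.sum(axis=0)
rev = tracks[::-1].sum(axis=0)
print(np.array_equal(fwd, rev), np.max(np.abs(fwd - rev)))
```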

And I, for one, prefer not to sweat the small stuff.

To a dog it may appear as a 'threshold shift', similar to what humans experience when air pressure differs between the ears.

Along those lines, for 25 years I have used a massive software suite called EASE to analyze and predict acoustic environment responses using convolution in a virtual space. I can tell you that, all other parameters being equal, no two convolutions will come out EXACTLY mathematically identical, even though the auditory stimulus is subjectively the same. Acoustics involve a multitude of parameters that are infinitely variable (temperature, humidity, air pressure, etc.) and can change instantaneously in time and space, so we learn to accept a certain margin of error.

I'm at risk of losing my "drone" license...


I don't know the VSTis involved, but Z3TA+2 had the interesting anomaly that when its oscillators were unsynced it would give you a different performance on each pass with the same input... nothing changed but the start position of the oscillators on each pass.


12 hours ago, Starship Krupa said:

I just did an experiment using 4 different DAWs.

...

For instance, according to Sound Forge, the integrated loudness ranged from -17.12 to -20.02 LUFS, and the maximum true peak ranged from -0.76 to -3.90 dBTP. The length of the rendered audio files ranged from 18 to 25 seconds, due to the way each DAW handles silent lead-ins and lead-outs in a selection.

...

If LUFS is a calculation over the whole file, the ones with silence will come out different from the ones without, even if the sound part were identical.
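For what it's worth, this is easy to check directly; a sketch assuming the soundfile and pyloudnorm libraries, with a placeholder file name (note that the BS.1770 integrated measurement gates out blocks far below the signal, which can limit how much pure silence moves the number):

```python
import numpy as np
import soundfile as sf
import pyloudnorm as pyln

data, rate = sf.read("render.flac")            # placeholder file name
meter = pyln.Meter(rate)                       # BS.1770 meter with gating

pad = np.zeros((rate * 5,) + data.shape[1:])   # five seconds of digital silence
padded = np.concatenate([data, pad])

print("as rendered:      ", meter.integrated_loudness(data))
print("plus 5 s silence: ", meter.integrated_loudness(padded))
```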


4 minutes ago, mettelus said:

I don't know the VSTis involved, but Z3TA+2 had the interesting anomaly that when its oscillators were unsynced it would give you a different performance on each pass with the same input... nothing changed but the start position of the oscillators on each pass.

I'm taking a closer look at whether the arp patches I'm using actually stay the same over 8 bars. Initially, I only listened to 1 bar to get an idea.

This stuff is why I haven't posted links to the actual renders yet. I know my "methodology" isn't ideal. This is all just an experiment to see what happens. I'm not setting out to prove any theory I have, I really just want to see what happens.

32 minutes ago, OutrageProductions said:

in practice, if you add jitter or other clocking anomalies, even minuscule ones...

Thank you for the extensive explanation. It'll take me some time to digest it, but this part I get right away. I first learned about the effects of jitter while looking into why the sound of prosumer-level audio interfaces has improved so much.

My first setup was a pair of PreSonus FirePods (I got a deal on them and thought I could make use of 16 channels). I accepted them as a product that had been well reviewed when it was introduced, and used them for years. Then I got a PreSonus Studio 2|4 to use with my laptop. I plugged it into my main DAW to give it a listen and was floored by how much better it sounded.

And when I say "better," it's in terms that some people disparage. The difference was in transient detail and the controversial "soundstage." If you follow the link in my sig about how I modded a crappy-sounding Alesis RA-100, you'll see an attack by an "if I can't see it on my test equipment it doesn't exist" type.

So I started researching how this budget interface could sound so much better than my FirePods, and found out that the FirePod was designed and manufactured in the early 2000s, just before JetPLL, a low-cost technology for dropping jitter levels by orders of magnitude, was introduced to prosumer DACs. When the FirePod's successor, the FireStudio, came out in 2007, it featured a DAC that used JetPLL. All subsequent interfaces from them use this technology, as do those of many other interface makers, such as Focusrite.

I immediately went shopping for a newer interface and got a great deal on a new-in-box Focusrite Saffire Pro 40 (I still prefer FireWire; I'll switch when Thunderbolt interfaces become available in my price range). The FirePods have been set aside to be sold on Craigslist.


1 hour ago, bvideo said:

If LUFS is a calculation over the whole file, the ones with silence will come out different from the ones without, even if the sound part were identical.

Indeed, and that sort of thing is part of the "black box" nature of my experiment. The idea is to subject these DAWs to as close to the same stimulus as possible and see (and hear) what happens.

The results may suggest that I should be mindful of just how differently DAWs handle silences. I was surprised at the differences in just the length of the rendered files, given that I had selected 10 bars at 120 BPM. 10 bars is 40 beats, which should come out to exactly 20 seconds, right?


11 hours ago, Starship Krupa said:

Can you elaborate? I only know enough to create practical tests.

Joseph Fourier passed away in 1830. What did he come up with that ensured that in the future, multiple teams of programmers would all independently design and implement algorithms that mix streams of digital stereo audio together so as to produce exactly the same results?

The companies that have claimed in their marketing literature to have improved the sound of their DAWs' audio engines (as MAGIX, Acoustica, and Ableton have all done in the past decade), were they lying?

Please educate me. I'd love to learn more.

What Fourier claimed in 1821 (that any function, whether continuous or discontinuous, can be expanded into a series of sines) is applied in the Nyquist-Shannon sampling theorem, which is the backbone of digital audio. Technology Connections has a video that explains the theorem very well.

But the point here is that there has been no new, revolutionary way to convert and treat digital sound for several decades, and it's likely your test has some variable you haven't controlled yet, which is what renders you different sounds from different DAWs. What manufacturers claim as "improved sound" could be several things, including marketing BS. While you might get slightly different results that could be measured depending on circumstance, those are irrelevant if they're outside the audible spectrum or too far below the noise floor. After all, music is a form of media made to be listened to, and if you can't hear the difference, fundamentally there's no difference at all.
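For the curious, the sampling theorem is easy to poke at numerically. A minimal NumPy sketch (arbitrary parameters, not anyone's DAW code) that reconstructs a band-limited tone from its samples by sinc interpolation:

```python
import numpy as np

fs = 100.0                            # sample rate (Hz), Nyquist = 50 Hz
f0 = 13.0                             # band-limited test tone
n = np.arange(200)                    # two seconds of samples
samples = np.sin(2 * np.pi * f0 * n / fs)

# Whittaker-Shannon reconstruction: x(t) = sum_k x[k] * sinc(fs*t - k)
t = np.linspace(0.5, 1.5, 501)        # evaluate away from the truncated edges
recon = np.array([np.dot(samples, np.sinc(fs * ti - n)) for ti in t])

print("max error:", np.max(np.abs(recon - np.sin(2 * np.pi * f0 * t))))
# Small, and it shrinks as more samples are included; exact recovery
# needs the infinite, ideally band-limited signal of the theorem.
```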

9 hours ago, OutrageProductions said:

I can tell you that, all other parameters being equal, no two convolutions will come out EXACTLY mathematically identical, even though the auditory stimulus is subjectively the same

This most likely happens because computers cannot represent most real numbers exactly in floating point and have to approximate the result.
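Two lines of Python show the effect:

```python
# Neither 0.1, 0.2, nor 0.3 is exact in binary floating point,
# and addition order changes the rounding, so neither test holds.
print(0.1 + 0.2 == 0.3)                          # False
print((0.1 + 0.2) + 0.3 == 0.1 + (0.2 + 0.3))    # False: not associative
```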


7 hours ago, Starship Krupa said:

The idea is to subject these DAWs to as close to the same stimulus as possible and see (and hear) what happens.

Therein lies the crux of your error: not that you didn't have the right idea, but that it was derailed by one assumption, namely that the sound source was immutable.

Sample library developers and soft synth programmers go to a lot of trouble to introduce unpredictability, in order to make the instrument sound more natural, e.g. round robins, randomized modulations and effects. It isn't easy, given that samples are by nature static recordings and software oscillators are algorithmic. Unfortunately, such unpredictability torpedoes attempts to make objective, repeatable measurements. 

That's why we use test signals such as sine waves for testing, despite their being far removed from anything musical. It's about consistency, removing variables that might cause unpredictable or unreproducible results. I'd like to hear about your observations using unprocessed audio files instead of samples. You could even export the same loops you used initially and bring them back in as audio for the tests. You might well observe that the same DAW yielded different results this time!
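One concrete way to run that comparison is a null test. A sketch assuming the soundfile library, with placeholder file names, equal sample rates, and equal channel counts:

```python
import numpy as np
import soundfile as sf

a, ra = sf.read("daw_one.flac")   # placeholder file names
b, rb = sf.read("daw_two.flac")
assert ra == rb, "resample first if the rates differ"

n = min(len(a), len(b))           # the DAWs padded lead-in/out differently
diff = a[:n] - b[:n]

for name, v in [("RMS ", np.sqrt(np.mean(diff**2))), ("peak", np.max(np.abs(diff)))]:
    print(f"residual {name}: {20 * np.log10(max(v, 1e-12)):7.1f} dBFS")
# A residual near -240 dBFS means effectively identical; anything
# approaching the noise floor is a real difference (align any
# lead-in offset first for the numbers to mean much).
```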

As old Joe F himself might have said: stay curious! Or the French equivalent, anyway.


1 hour ago, Bruno de Souza Lino said:

What manufacturers claim as "improved sound" could be several things, including marketing BS.

So are the people at the companies who actually make the software lying?

Related question: Is there something about Fourier, Nyquist, and Shannon's theorems that prevents programmers (and/or their programming tools) from making mistakes when creating software that uses them?

58 minutes ago, bitflipper said:

Therein lies the crux of your error: not that you didn't have the right idea, but that it was derailed by one assumption, namely that the sound source was immutable

What "error"? Where did I say the sound source was "immutable"? You must have missed the part where I said this:

22 hours ago, Starship Krupa said:

The idea was to see what results would come from performing a set of steps, not necessarily to do something "scientific." If I were after objective, measurable results, I'd perhaps use test signals, impulses, whatever, as the source material and measure the results with the best test equipment I could get.

I think my only "error" was in overestimating folks' reading comprehension.

What do you think I'm trying to accomplish by doing this? I don't have some theory or other that I'm trying to prove or disprove. I just thought of a repeatable set of steps that I suspected might yield interesting results.

The idea of this is to mess about in a deliberately non-rigorous, non-scientific way and see what happens. Then maybe I can draw a conclusion or two based on the results. Maybe not. Certainly not a scientific conclusion, but maybe a practical one. I'm always fascinated by posts where people find that (for instance) their rendered files don't sound like their music did during mixing.

I like to try things and observe the results. This can be puzzling to people who prefer to already know what the results will be, but I find it fun and often even educational.

Here, the objective is to see what happens if I take 4 DAWs, use the same soft synths playing the same arp patches on them with the same mixer settings, and render out the files. The operative words being "see what happens." I know the renders are going to come out different. I'm wondering how different, and in what way(s).


When I made the video about testing 50 free compressors, I said in the intro that it was only for my own curiosity and was not bench testing or scientific. I was just sharing my observations, and I even posted how I made the tests: using Melda's Oscillator, Span, and a few other basic analysis tools. Mostly I listened to different material and tried to use the exact same settings, etc. But I still ended up with dozens of comments about how my results were useless. I don't think so. I was certainly able to sort out which plug-ins would definitely not work for me. I found it very interesting.

We are nerds and nerds will poke around with things nobody else even bothers to think about. 

Your finding that the LUFS came out different might be at the heart of why some people claim their DAW sounds better. I need not state why because everyone involved in this thread knows the answer. 


1 minute ago, John Vere said:

But I still ended up with dozens of comments about how my results were useless.

How many people countered your "useless" results by setting up their own tests and then showing their results? Wait, let me guess: absolutely nobody.


Here's a page of comparisons of some measurable qualities of how various DAWs perform sample rate conversion (SRC). It's one piece of evidence that not all DAWs are alike. Of course, SRC is only one audio-processing technology, and depending on the project and other DAW optimizations, it may or may not play a part in outcomes.

SRC can be invoked when a project is rendered at a sample rate different from that of some of its source files. It is also used by some DAWs and plug-ins in an oversampling stage, to improve the results of audio processing that is sensitive to aliasing at high frequencies.
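The flavor of test behind graphs like those is straightforward to sketch: sweep a sine through a resampler and look for aliasing images. Here SciPy's polyphase resampler stands in for a DAW's SRC; this is an illustration, not that page's actual methodology:

```python
import numpy as np
from scipy.signal import chirp, resample_poly, spectrogram

fs_in = 96000
t = np.arange(10 * fs_in) / fs_in
sweep = chirp(t, f0=20, f1=45000, t1=10, method="logarithmic")

# 96 kHz -> 44.1 kHz; 147/320 is the ratio reduced by its gcd of 300.
converted = resample_poly(sweep, up=147, down=320)

# As the sweep rises past the new Nyquist (22.05 kHz), any energy that
# folds back below it is aliasing the SRC's filter let through.
f, times, sxx = spectrogram(converted, fs=44100, nperseg=4096)
print(sxx.shape)   # plot 10*log10(sxx) to see the images, if any
```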

Maybe not all the audio outcomes shown in the various graphs on that page are audible. It looks like some of them should be. The audio material used in the tests is not the least bit musical. But it is convincing to me that DAWs are different.

Just for fun, you can note that some older versions of Sonar can be compared against the X3 version.

