
4 DAW's, 4 renders, 4 results



1 minute ago, John Vere said:

But I still ended up with dozens of comments about how my results were useless.

How many people countered your "useless" results by setting up their own tests and then showing their results? Wait, let me guess: absolutely nobody.

10 minutes ago, John Vere said:

Your finding that the LUFS came out different might be at the heart of why some people claim their DAW sounds better.

Oh heck yeah. If one of the bits of software I'm playing with generates rendered files that are even just higher in level, that will be interesting to find out.


4 minutes ago, bvideo said:

Here's a page of comparisons of some measurable qualities of how various DAWs do sample rate conversion (SRC). It's one proof that all DAWs are not alike. Of course SRC is only one technology of audio processing, and depending on the project and other DAW optimizations, it may or may not play a part in outcomes.

It's an absolutely objectively proven fact that whenever someone posts this link to a forum, Joseph Fourier turns over in his grave. Even just the times I've done it would be enough for the other residents of the cemetery to call him "Whirlin' Joe."

Anyway, I'm not trying to prove that all DAW's do or don't sound the same bla bla bla. I was hoping that by saying so and by using deliberately imperfect, subjective testing methods that I would avoid that sort of thing, but whatever.


5 minutes ago, Starship Krupa said:

It's an absolutely objectively proven fact that whenever someone posts this link to a forum, Joseph Fourier turns over in his grave. Even just the times I've done it would be enough for the other residents of the cemetery to call him "Whirlin' Joe."

Anyway, I'm not trying to prove that all DAW's do or don't sound the same bla bla bla. I was hoping that by saying so and by using deliberately imperfect, subjective testing methods that I would avoid that sort of thing, but whatever.

My goodness, how far off topic this thread has gone!?


With roughly 3 dB of variance, I can’t help but wonder if the pan laws of each were set differently. Some -3 dB pan laws boost the sides by 3 dB, whereas some will reduce the center by 3 dB. Were these all stereo tracks, or were there mono ones involved?
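To make the distinction concrete, here's a minimal sketch of the two conventions (textbook constant-power curves, not any particular DAW's implementation):

```python
import math

def to_db(gain):
    return 20.0 * math.log10(gain) if gain > 0 else float("-inf")

def pan_center_cut(pan):
    """-3 dB 'center cut' law: hard-panned signal at unity (0 dB),
    centered signal at -3 dB per channel. pan runs -1..+1."""
    theta = (pan + 1.0) * math.pi / 4.0
    return math.cos(theta), math.sin(theta)

def pan_side_boost(pan):
    """Variant of the same curve: centered signal at unity,
    hard-panned signal boosted by +3 dB."""
    left, right = pan_center_cut(pan)
    return left * math.sqrt(2.0), right * math.sqrt(2.0)

for pan in (-1.0, 0.0):
    l_cut, _ = pan_center_cut(pan)
    l_boost, _ = pan_side_boost(pan)
    print(f"pan={pan:+.0f}: center-cut left={to_db(l_cut):+.1f} dB, "
          f"side-boost left={to_db(l_boost):+.1f} dB")
# pan=-1: center-cut +0.0 dB, side-boost +3.0 dB
# pan= 0: center-cut -3.0 dB, side-boost +0.0 dB
```

Same "-3 dB" label, but a mono signal at center comes out 3 dB apart between the two conventions, which is exactly the kind of offset that would show up in the renders.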

Very interesting experiment though!

Someone at some point had mentioned that Ray Charles used Sonar because he said it sounded better than the rest! That was good enough for me! Of course Ray would sound good on any DAW. Great pickers can even make toy instruments sound great!

 

 


2 hours ago, Starship Krupa said:

So are the people at the companies who actually make the software lying?

Related question: Is there something about Fourier, Nyquist, and Shannon's theorems that prevents programmers (and/or their programming tools) from making mistakes when creating software that uses them?

What "error?" Where did I say the sound source was "immutable?" You must have missed the part where I said this:

I think my only "error" was in overestimating folks' reading comprehension.

What do you think I'm trying to accomplish by doing this? I don't have some theory or other that I'm trying to prove or disprove. I just thought of a repeatable set of steps that I suspected might yield interesting results.

The idea of this is to mess about in a deliberately non-rigorous, non-scientific way and see what happens. Then maybe I can draw a conclusion or two based on the result. Maybe not. Certainly not a scientific conclusion, but maybe a practical one. I'm always fascinated by posts where people are finding that (for instance) their rendered files don't sound like their music does during mixing.

I like to try things and observe the results. This can be puzzling to people who prefer to already know what the results will be, but I find it fun and often even educational.

Here, the objective is to see what happens if I take 4 DAW's, use the same soft synths playing the same arp patches on them, with the same mixer settings, and render out the files. The operative words being "see what happens." I know the renders are going to come out different. I'm wondering how different and in what way(s).

Interesting experiment you came up with, now almost completely overthought (Fourier, Nyquist theorems, etc.). Agreed, it seems like some people need to take time to read and discern the point before jumping in. Carry on, it will be interesting and fun to hear your results!

Edited by Ross Smithe

4 hours ago, Bruno de Souza Lino said:

This most likely happens because computers are not capable of properly dealing with floating point numbers and have to approximate the result.

There is some truth in what you say, but I wouldn't claim that computers are not capable of dealing with floating point numbers!

Seriously, there are rounding problems when numbers are converted to floating point (small differences), because the number of digits is limited! It even depends on which programming language or system/database you use (e.g. different algorithms to convert simple numbers to floating point). So if you do a lot of calculations, the order of operations and the choice of numeric types always lead to different results! And music is stored internally as numbers, and most operations on it are mathematical. Things like pan law also depend on number conversion: even if the same pan law is used in 2 DAWs, the result may be different, because one applies pan internally as an integer while the other uses floats directly, and so on.
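The order-of-operations point is easy to demonstrate with plain Python floats (IEEE 754 doubles; nothing DAW-specific):

```python
# Floating-point addition is not associative: grouping changes the result.
a, b, c = 0.1, 0.2, 0.3
print((a + b) + c)  # 0.6000000000000001
print(a + (b + c))  # 0.6

# Summing the same values in a different order can also differ.
samples = [1.0, 1e16, -1e16]
print(sum(samples))            # 0.0 (the 1.0 is swallowed by the big value)
print(sum(reversed(samples)))  # 1.0 (the big values cancel first)
```

Two engines that sum the same channels in a different order can therefore produce bit-different (if inaudibly so) output.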


17 hours ago, Starship Krupa said:

The companies that have claimed in their marketing literature to have improved the sound of their DAWs' audio engine (as MAGIX, Acoustica, and Ableton have all done in the past decade), were they lying?

In ancient times, audio engines had 16 bits of resolution. Then came 24-bit, 32-bit floating point, 64-bit, sample-rate converters, etc. Technically, every time there was a jump like that, the company could claim an improvement in the audio engine's sound.


44 minutes ago, bitflipper said:

While floating-point approximations are a real thing, and might be an issue for astrophysicists, they do not affect audio quality.

I was not talking about quality impact!

You are right that a single stored floating point number may be off by only a very tiny amount, but I know that it can cause obvious differences after several operations/conversions! And there are so many of those in audio processing! I don't want to say that one result is better or worse; I just want to explain why there are (or may be) differences!
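To illustrate the "after several operations" part (a minimal example with plain Python floats; a real mixer performs vastly more operations than this):

```python
# One rounded operation vs. ten: the tiny per-operation errors compound.
total = 0.0
for _ in range(10):
    total += 0.1          # each addition is rounded to the nearest double

print(total)              # 0.9999999999999999
print(total == 0.1 * 10)  # False: a single rounded multiply lands on 1.0
```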


There have been many threads on many forums on this topic... DAWs have no "own sound". But there is no single "right" way to do almost any of the arithmetic operations a DAW applies to digital audio, so the results depend on the settings and design decisions.
Normally, "comparison" threads are about audio only, and even there things are not "standard". When it comes to VSTi and MIDI, there are even more variations.
Some differences between 2 particular DAWs can be found in the documentation of my ReaCWP: https://www.azslow.com/index.php/topic,406.0.html

But most important for any tests with many VSTi (and some FXes): as was already mentioned, they do not produce exactly the same result every time, even when called from the same DAW playing the same input track. From what I remember, that was the primary reason "Aux Tracks" were introduced in Cakewalk: to make it possible to record live-performed VSTi output, since playing back recorded MIDI sometimes produces different audio (so different that during recording there could be a "loud wide sound" and during playback "almost silence"). That is not a bug. That is expected.

 


Definitely don't take comments as personal affronts. I don't think that is anyone's intent here. When doing analysis, reducing the number of variables is important so that you can focus properly. Multi-variable analysis often yields differential equations as the "answer," which is not where you are trying to go.

@Colin Nicholls' comment is where I would definitely begin: same test, same DAW, different takes. It is what most folks have been saying here. Many engineering processes rely on repeatable input -> repeatable process -> repeatable results. If the input is not repeatable, the process can never be refined properly to yield the expected results.

Edited by mettelus

Still working on getting a more uniform dataset.

Some things I've learned so far:

1. The synths/patches I used in the initial experiment both use arpeggiators that yield different results with each pass (thanks to @OutrageProductions for reminding me of that possibility). The big reminder from this is that the "same" project can render differently from the same DAW twice in a row. I knew this already, and even account for it in my usual rendering workflow (I render a finished song only once, to a lossless file, then convert that render to various other formats as needed). But it's good to remember that stochastic elements can lurk in places I might not expect them. Also: cool that two of my favorite synths can do stochastic.

It's a challenge to get consistent results with synths' built-in arpeggiators. I would try a 3rd-party arpeggiator, but not all of these DAW's support MIDI FX, and even among the ones that do, I'll bet they don't route as simply as Cakewalk, with its straightforward FX bin on each MIDI strip.

2. Among the four DAW's I'm working with, getting them to render at a given length can be tricky (and this includes our darling, Cakewalk). One of them always wants to render everything out until it doesn't hear any more tail. That's fine, I suppose; one can always pull the render up in an audio editor and trim to heart's content, but still, I prefer having control over it.

3. Among the four DAW's, with a more consistent dataset, I'm seeing more consistent analyses as far as LUFS, but one of them is poking (way) out as far as LU (loudness range), and another is poking out in regard to Maximum True Peak (see the measurement sketch after this list).

4. There's always something new to learn, even things that are right in front of me. For instance, whilst puzzling all this out, I've been setting the MIDI velocity of the triggering note to 100. Of the different DAW's, Cakewalk is the only one that (as far as I can figure out) can do separate MIDI and synth tracks, which is the way I use virtual instruments. But Cakewalk also has a "volume" control on its MIDI strips, and I'm not even sure what those do. By default, they're set to 101. Is that "unity" in the MIDI volume world? Do all softsynths respond to that volume setting in the same way? Does that setting even do anything?
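Regarding the measurements in item 3: if anyone wants to run the same numbers on their own renders outside any DAW, here's a minimal sketch using the third-party soundfile and pyloudnorm packages. The filenames are made up, and this covers integrated LUFS and sample peak only; loudness range and true peak need a fuller BS.1770 meter.

```python
# pip install soundfile pyloudnorm
import numpy as np
import soundfile as sf
import pyloudnorm as pyln

# Hypothetical filenames standing in for the four DAWs' renders.
for path in ("render_daw1.wav", "render_daw2.wav",
             "render_daw3.wav", "render_daw4.wav"):
    data, rate = sf.read(path)              # float samples in -1..1
    meter = pyln.Meter(rate)                # ITU-R BS.1770 K-weighted meter
    lufs = meter.integrated_loudness(data)  # integrated loudness in LUFS
    peak_db = 20 * np.log10(np.max(np.abs(data)))
    print(f"{path}: {lufs:.2f} LUFS, sample peak {peak_db:.2f} dBFS")
```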


26 minutes ago, Starship Krupa said:

But Cakewalk also has a "volume" control on its MIDI strips, and I'm not even sure what those do. By default, they're set to 101. Is that "unity" in the MIDI volume world? Do all softsynths respond to that volume setting in the same way? Does that setting even do anything?

That was a stickler fer sure, but it controls the relative (I guess you could say 'unity') gain of the note velocity from the MIDI channel. Sort of increases the dynamic velocity curve, if you will.

For example; in NI Scarbee Jay Bass (and IIRC) Pre Bass, I have to bump the MIDI volume from 101 to 127 before recording so that I can get a reasonable (-6 to 0 dB) level on the instrument for input monitoring. (Yeah, yeah, I know there are ways to do it in the VSTi and save it, but it gets overwritten every update.)
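If that's the behavior, the scaling would look something like this (purely a hypothetical model of what I described, with 101 taken as unity because it's the default; not Cakewalk's documented math):

```python
def scaled_velocity(note_velocity, midi_volume, unity=101):
    """Hypothetical: treat the MIDI strip 'volume' as a velocity scaler,
    with the default setting of 101 acting as unity gain."""
    scaled = round(note_velocity * midi_volume / unity)
    return max(1, min(127, scaled))  # keep the result in MIDI's 1..127 range

print(scaled_velocity(100, 101))  # 100: the default passes velocity through
print(scaled_velocity(100, 127))  # 126: bumping to 127 pushes notes harder
```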

I found some really cool long evolving drones, risers, and soundscapes that I really like in things like NI Reaktor and Absynth, but they are extremely stochastic. Play it one time and I love it; next pass... not so much.

Edited by OutrageProductions

8 hours ago, bvideo said:

Here's a page of comparisons of some measurable qualities of how various DAWs do sample rate conversion (SRC). It's one proof that all DAWs are not alike.

BTW, I've been posting that link for years, and in all that time, nobody has ever commented on it.

It would seem to indicate that at least in the matter of sample rate conversion, their algorithms and implementations are not created equal.

The "proof" that people tend to cite is either theoretical work done decades before personal computer DAW's were a twinkle in the eye (Fourier, Nyquist, Fletcher-Munson) and/or words to the effect of "we talked about this on a bunch of forums a long time ago, and one guy tested it and we all agreed, therefore the topic must never be discussed again."

Is it really impossible that the actual practical implementation of the work of Messrs. Fourier et al. could be imperfect? Having toiled for years in the software QA biz, it seems... naive, I suppose, to assume that every group of programmers who attack the problem all get it right the first time. The SRC page seems to suggest that in the case of SONAR and others, there was room for improvement.

All of you who are so sure that there are no practical differences between DAW's, and have all these ideas about testing methodology, who among you has actually tried it? Anyone?

BTW, here's a picture of a square wave, generated by MOscillator and read by MOscilloscope, both in the same FX bin in Cakewalk. MOscillator is set (internally) to 4X oversampling:

[screenshot: the MOscilloscope trace of the square wave, with visible ringing at each transition]

Anyone care to enlighten me as to why I see what looks like ringing?


6 minutes ago, Starship Krupa said:

All of you who are so sure that there are no practical differences between DAW's, and have all these ideas about testing methodology, who among you has actually tried it? Anyone?

Seriously trying to ask about someone's favorite DAW (and why) is like herding cats in a rainstorm.
BTW, my favorite color is the smell of the number nine... but only in the vacuum of space.

Edited by OutrageProductions

6 minutes ago, OutrageProductions said:

That was a stickler fer sure, but it controls the relative (I guess you could say 'unity') gain of the note velocity from the MIDI channel. Sort of increases the dynamic velocity curve, if you will.

For example; in NI Scarbee Jay Bass (and IIRC) Pre Bass, I have to bump the MIDI volume from 101 to 127 before recording so that I can get a reasonable (-6 to 0 dB) level on the instrument for input monitoring. (Yeah, yeah, I know there are ways to do it in the VSTi and save it, but it gets overwritten every update.)

Hmm. It actually affects velocity, not volume. So... is 101 the equivalent of "unity," meaning that setting will pass whatever velocity my notes are set to, unchanged? I think this is one parameter I'm going to leave alone (for now). Another cool feature of Cakewalk that I haven't seen anywhere else, though.

In the case of your bass ROMplers, I'd think that cranking the MIDI "volume" would also have an effect on the timbre of the instrument. With higher velocity usually comes more spank and growl, no?


4 minutes ago, Starship Krupa said:

With higher velocity usually comes more spank and growl, no?

With the Scarbee basses, the samples from 101 to 127 velo range are all the same dynamic range, so it's an increase in volume only.

In my Indiginus instruments (and especially in ISW Shreddage instruments) it can get you in serious trouble with velo scaled articulations. 


1 hour ago, Starship Krupa said:

Anyone care to enlighten me as to why I see what looks like ringing?

I would bet that both the oscillator and the oscilloscope were MODELED on some existing piece of hardware and the resulting algorithm inherently includes the anomalies found in 'real-world' impulse responses.

If they were strictly developed on a purely mathematical basis, I imagine you would see a more geometrically square wave without artifacts. Which would probably phreak out anyone with a minimal electronics background. 

Sinusoidal responses, and their derivatives, are found to be generally symmetrical in the normal world.

Aside from cardiac infarction, obviously.

Edited by OutrageProductions

1 hour ago, Starship Krupa said:

BTW, here's a picture of a square wave, generated by MOscillator and read by MOscilloscope, both in the same FX bin in Cakewalk. MOscillator is set (internally) to 4X oversampling:


Anyone care to enlighten me as to why I see what looks like ringing?

Haven't thought about DSP theory in a while. However, this looks like the Gibbs phenomenon. In this case, it seems like an artifact of FIR filtering around transients in the square wave. I would guess there is a steep filter being applied to the waveform somewhere in the digital signal path. There are other possibilities too (e.g. windowing functions). Gibbs can also be seen when summing harmonics of the Fourier series, especially around transients.

BTW... FIR filters are based on Fourier transforms, convolution, IRs, etc. This is the subject of whole chapters in DSP books.
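The Fourier-series version is easy to reproduce with a small NumPy sketch, summing the odd harmonics of an ideal square wave:

```python
import numpy as np

t = np.linspace(0.0, 1.0, 2000, endpoint=False)  # one period
square = np.zeros_like(t)
for k in range(1, 50, 2):  # odd harmonics 1, 3, 5, ... 49
    square += (4.0 / np.pi) * np.sin(2.0 * np.pi * k * t) / k

# The partial sum overshoots to ~1.09 instead of 1.0 near each edge (Gibbs);
# adding more harmonics narrows the ringing but never removes the overshoot.
print(square.max())
```

A band-limited (e.g. oversampled and filtered) square wave is effectively such a truncated series, so ringing at the transitions is expected rather than a defect.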

Edit: Just saw the replies from Bruno de Souza Lino and OutrageProductions. Good info from both.

Edited by Tom B
