
4 DAW's, 4 renders, 4 results


Recommended Posts

1 hour ago, Starship Krupa said:

Anyone care to enlighten me as to why I see what looks like ringing?

That's how square waves come out in practice. It's impossible to create a perfectly sharp square wave, as it would require infinite bandwidth. Even if you could do it digitally, the ringing is how a speaker would reproduce it anyway, since it's physically impossible for a speaker to move instantly from one position to another.
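If anyone wants to see this for themselves, here's a minimal NumPy sketch (an illustration, not anything from a DAW's code) that builds a band-limited square wave by summing odd sine harmonics; truncating the series at the Nyquist limit produces exactly the overshoot and ripple being discussed:

```python
import numpy as np

# Band-limited square wave: odd harmonics at amplitude 1/n, per the
# Fourier series. Stopping below Nyquist leaves the Gibbs "ringing"
# visible on edges in a scope or waveform view.
rate = 48000                       # sample rate, Hz
f0 = 1000                          # fundamental, Hz
t = np.arange(rate) / rate         # one second of time points

square = np.zeros_like(t)
n = 1
while n * f0 < rate / 2:           # stay below Nyquist
    square += np.sin(2 * np.pi * n * f0 * t) / n
    n += 2                         # odd harmonics only
square *= 4 / np.pi                # Fourier-series scaling

# The flat top should be 1.0; the Gibbs overshoot settles around 9%.
print(f"highest harmonic: {n - 2}, peak: {square.max():.3f}")
```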

Edited by Bruno de Souza Lino

2 hours ago, Starship Krupa said:

Cakewalk also has a "volume" control on its MIDI strips, and I'm not even sure what those do. By default, they're set to 101. Is that "unity" in the MIDI volume world? Do all softsynths respond to that volume setting in the same way? Does that setting even do anything?

There's always a mystery to tug at your brain, which is one of the reasons ours is the best hobby. I commend you for answering your own questions through experimentation. Most folks just ask a question on some forum, accept whatever explanation is offered and incorporate it as an eternal fact from then on.

As for what the volume slider does, it sets the starting value for CC7 for the track. It does not have anything to do with velocities. You can observe the action of the slider by setting it to something other than "(101)". If your soft synth respects CC7, you can watch its volume control move as you move the volume slider in the track header. You can also see that if the volume slider is first set to, say, 112, and you then add a volume automation envelope to the track, its initial value will also be 112.

"(101)" isn't a real value. It just indicates that the DAW will not be forcing an initial value for CC7. Do all synths respect CC7 for volume? No. It's entirely up to the developer how or whether they want to implement any continuous controller. Do all synths adjust volume based on velocity? No. Again, it's at the developer's discretion. Because volume often does go up with velocity, I suspect that's why people might conflate volume and velocity.
 


30 minutes ago, Bruno de Souza Lino said:

That's how square waves come out in practice. It's impossible to create a perfectly sharp square wave, as it would require infinite bandwidth. Even if you could do it digitally, the ringing is how a speaker would reproduce it anyway, since it's physically impossible for a speaker to move instantly from one position to another.

Good answer!

The only place you'll ever see a square square wave is in the icon silkscreened onto your synthesizer next to the waveform selector.


3 hours ago, bitflipper said:

Good answer!

The only place you'll ever see a square square wave is in the icon silkscreened onto your synthesizer next to the waveform selector.

My Hewlett-Packard signal generator (analog, still has a calibration tag from Apple's R&D lab) and Tektronix 465 oscilloscope (the apex of analog 'scopes, IMO) seem to generate and display pretty clean ones. 🤷‍♂️

(if a unit under test on my bench were putting out a "square" wave that looked like that, I'd be looking for where the ringing was coming from)


6 hours ago, Starship Krupa said:

My Hewlett-Packard signal generator (analog, still has a calibration tag from Apple's R&D lab) and Tektronix 465 oscilloscope (the apex of analog 'scopes, IMO) seem to generate and display pretty clean ones. 🤷‍♂️

(if a unit under test on my bench were putting out a "square" wave that looked like that, I'd be looking for where the ringing was coming from)

Square waves on any oscilloscope are also made of sine waves, by the way. The 465 doesn't have enough bandwidth to display what you're seeing in MOscilloscope. Or the square wave the generator produces doesn't contain it.

 

Edited by Bruno de Souza Lino

As a note: it depends on where you're looking at the square waves. In the digital domain (and most of the time in the electrical domain), they're nearly instantaneous vertical voltage swings (limited by slew rate, i.e. slope), although path capacitance and other factors can distort them. But correct: a square wave reproduced audibly is a composition of sines (as all audio waves are).


The MIDI volume was sort of a half-solved mystery for me.
In my zillions of projects with MIDI instruments involved, I never touched the slider in the MIDI track.
I normally set the volume in the instrument GUI.
But occasionally, if I was using a downloaded MIDI file, that setting would not stick.
The infamous hidden-events issue.
For example, I always set Ample P Bass at its default of 1.0. The mystery event would change it to 2.0 on playback.
The easy solution was to set the MIDI track's volume to 64, and the Ample Bass would then stay at 1.0.

Reading this, I am now assuming that if you leave the MIDI volume at its unity setting, no CC7 event is created, but if you move it elsewhere, one is. That event will override the hidden events.
So back to our test: does every DAW also behave this way?

As I hinted at already (and I'm not sure people caught it): play 2 examples of a song, don't change the mix, just add +0.5 dB of gain to one (roughly +0.5 LU of loudness), and everyone will say it sounds better.
So I was thinking one test would be to use only audio, as that will have a known loudness and peak level.
Then, using analytics and ears, see if there is a difference.
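For the analytics half, the loudness can be checked before anyone listens. A minimal sketch using the soundfile and pyloudnorm libraries (filenames are hypothetical) reports integrated loudness per ITU-R BS.1770, where a 0.5 dB gain offset between otherwise identical renders shows up as roughly 0.5 LU:

```python
import soundfile as sf
import pyloudnorm as pyln

# Hypothetical render filenames; substitute the real exports.
for path in ("render_daw_a.wav", "render_daw_b.wav"):
    data, rate = sf.read(path)                          # samples, sample rate
    lufs = pyln.Meter(rate).integrated_loudness(data)   # ITU-R BS.1770
    print(f"{path}: {lufs:.2f} LUFS")
```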

Edited by John Vere

Okay, here's the moment we've been anticipating. I was finally able to get what I think are similar results from 4 different DAW's.

What prompted me to try this was that I was setting up some test datasets for the DAW's I use, and happened to notice that even though the test projects were both playing back the same arps on the same synths, I could hear a difference between the two DAW's. That got me thinking about why that might be, and how to figure it out, and that led to wondering how I might go about creating actual music files that would be most suitable for "listening tests." Subjective, of course, using music and ears rather than test tones and impulses. That's not how I'd proceed were I trying to prove something to someone else, but I'm not.

It is trying to solve a problem, but approaching it from the other side. I started with an observation: I heard a difference between two different DAW's that were playing back very similar material at what their controls said were very similar levels. As discussed earlier, although I don't believe it's "impossible" for the VSTi-hosting/mixing/playback subsystems of two different DAW's to sound perceptibly different (despite whatever proof was presented on a forum decades ago), I do think it would be unusual to hear as much of a difference as I did.

What happens if I render the projects out and listen: can I still hear the difference? Yes, as it turns out. Hmm. What if we try it on a couple more DAW's? One still sounds better than the other 3, which sound pretty similar. Hmm. Looks like we got us an outlier.

Various iterations of rendering and examining the peaks and valleys in Audacity revealed some big variances in the timing of peaks and valleys, due to either randomization in the arpeggiators or LFO's not synced to BPM. Or whatever.

The hardest part was most definitely finding arpeggiated synth patches that were complex enough not to drive me nuts having to listen to them, yet didn't have any non-tempo-synced motion, or at least as little as possible. The biggest chore after that was figuring out how to do things like create instrument tracks, open piano roll views, set note velocities, etc. in the two DAW's I was less familiar with.

Once I got all the dynamics sorted, the statistics generated by Sound Forge fell better in line.

But enough text for now; here are 4 renders from 4 DAW's, as close as I could make them without cheating any settings this way or that. Analyse them but, more importantly, listen to them and see what you think. I haven't subjected them to critical, close-up listening yet, so I don't know how that shook out. The only piece of information I'll withhold for now is which rendered file is from which DAW, so that people can try it "blind."

If you think you hear any differences, it doesn't mean anything. Any audible differences are 100% the result of errors in my methodology, and besides, audible differences can't be determined by people just listening to stuff anyway. The only time you should trust your ears is when you are mixing, and when you are doing that, you should trust only your ears.

The Thwarps files, beta 3.

P.S. One of the audio files is a couple of seconds shorter than the others because I never did figure out how to render a selected length in that particular DAW.

Edited by Starship Krupa

The first thing to fix in this test would be the patch you're using. Tracks 2 and 3 don't have the initial "stab" that 1 and 4 have, which means they're not the same sound for purposes of comparison. That will potentially bias some into thinking those two sound different, even if they don't. Even if the patch has some variance to it, all sequences have to start at the exact same point and end at the exact same point, otherwise you're not comparing the same thing.


11 minutes ago, Bruno de Souza Lino said:

Tracks 2 and 3 don't have the initial "stab" that 1 and 4 have, which means they're not the same sound for purposes of comparison.

Crap! Thank you, Bruno, that's what I get for not listening closely enough. What you actually caught is the complete LACK of one of the synth tracks firing off in the entire snippet.

Which is something I was going to mention later: good LORD did I ever learn that Cakewalk wasn't the only DAW where MIDI/virtual instrument tracks stop making sound for no apparent reason.

At one point or other, in each of the 4 DAW's I was playing with, I wound up having to do the dance of deleting an instrument track and its MIDI data and creating it afresh before the track would start making sound again.

Stand by for revision 2....


Wow.

Okay, I'm not going to be able to get back to this for several hours, but here's yet another thing I learned: MSoundFactory was changing the behavior of one of the arp patches I used in response to being told to "zero all controllers." Two of the DAW's were set to zero all controllers on stop.

So that synth track is there, it's just not doing the lively arp behavior it's supposed to. Thanks again to Bruno for noticing this before I did.

This now has me wondering if the famous "Suddenly Silent Synth Syndrome" is a condition that some VSTi's get into and don't recover from when you throw the "zero controllers" thing at them. Is "zero all controllers" a single message like "all notes off" or does it send individual zeroes to each controller?
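For what it's worth, in the MIDI 1.0 spec "Reset All Controllers" is a single channel-mode message (CC 121), just like "All Notes Off" (CC 123); the receiving synth decides exactly which controllers it resets, which may be why behavior varies so much. Whether a given DAW sends CC 121 or explicit zeroes per controller is up to that DAW. A quick mido sketch of the message itself:

```python
import mido

# "Reset All Controllers": one channel-mode message, CC 121 with value 0,
# here on MIDI channel 1 (mido channel 0). Three bytes on the wire.
reset = mido.Message('control_change', channel=0, control=121, value=0)
print(reset.bytes())   # [176, 121, 0], i.e. 0xB0 0x79 0x00
```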

This kind of thing is part of why I did this: I just never know what I might learn. It could be quirks of various synths I try, default or optional MIDI behaviors of the hosts, whatever.


1 hour ago, Colin Nicholls said:

If we take away one of the DAW's you haven't chosen, does it make sense to switch to a different one?

Monty Hall /  three doors (doahs) scenario?

Edited by bvideo

I'm out on location this week and only skimming this thread when I get the chance but I'm wondering if this is being approached from the wrong end?

I'm seeing a lot of points of failure that could cause wacky results, because too many things have the potential of not lining up (the most recent posts especially say as much). How do we know whether there's an engine difference if the problem is with a synth, or with how it responds to an arp or any live modulation baked into a patch?

That's not to say getting to that point is a waste of time, but starting there is asking for wailing and gnashing of teeth.

I'd personally start off with the same audio file, imported into each DAW at unity gain and then rendered out at various bit depths, and then null-compare those files. If the null-test residuals are negligible, then move on to different gain settings in each DAW and see if the math of that changes.

Then I'd try adding in another audio file on a different track and see if the summing math makes a difference, repeating the different gain settings, etc.

I'd start off with exports first, but also see if you could capture a live out from your audio interface to see if any live playback jitters are a thing.

It's unlikely this would be 100% identical, but would it be *audible*? That's the only thing that really matters. If you have a -100 dB difference, I'd say maybe concentrate on mixing better rather than on hearing flies fart on the other side of a pond.

I'd use all of that as your baseline before getting into any live generated synths or anything like that because you have a 100% repeatable source that isn't getting any synth or effect variance muddying the results.
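For the null-compare step, something like this minimal sketch (NumPy plus soundfile, hypothetical filenames) reports the residual peak in dBFS; the renders must be the same length and sample-aligned for the subtraction to mean anything:

```python
import numpy as np
import soundfile as sf

# Load two renders of the same material (names are made up).
a, rate_a = sf.read("daw_a_render.wav")
b, rate_b = sf.read("daw_b_render.wav")
assert rate_a == rate_b and a.shape == b.shape, "renders must match exactly"

# Subtract and report the loudest remaining sample.
peak = np.max(np.abs(a - b))
print("perfect null" if peak == 0 else f"residual peak: {20 * np.log10(peak):.1f} dBFS")
```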


27 minutes ago, Lord Tim said:

How do we know whether there's an engine difference if the problem is with a synth, or with how it responds to an arp or any live modulation baked into a patch?

I know you said you only skimmed the thread, but I've explained in detail that I'm not trying to prove or disprove a difference in engines. I fully admit and concede that it's not likely to result in anything other than an interesting exercise.

My process here is a "wrong end" approach. It's like the Air Force's UFO investigation project: I heard something, and I'm trying to eliminate/minimize as many variables as I can. It's also a method used by professional QA engineers, which I once was: observe a blip, then try to come up with some way to reproduce it.

I've also disclosed that I do think it's possible that there's something about how a DAW implements things that can result in a difference in perceived sonic "quality."

For a subject that so many contend was laid to rest such a long time ago, this does seem to be getting a lot of attention. And I do appreciate the lack of dismissiveness toward my flailing about.

As for using audio files, yes, I may do that at some point. However, when I heard the difference between the two DAW's, it was when using virtual instruments. That gave me the idea that there might be something about how the DAW's were implementing their hosting of virtual instruments, and from that point forward, well, can't divert a fool from his folly. 🎿


4 minutes ago, Bruno de Souza Lino said:

Essentially, you only want to change what's being tested, which are the DAWs. Everything else has to be exactly the same.

Yes, for it to produce objective results. I'm working backward, trying to figure out why I heard a difference. Part of that is eliminating as many variables as I can, then listening again, as well as submitting my renders for peer review, which, in the case of apparently the only person who snagged them before I took down the link, has already yielded valuable feedback.

I have no illusions that I'll be able to eliminate all variables when working with virtual instruments. In the end, stock patches always have something like reverb or chorus baked in.

I'll likely be submitting the anomalies you noticed to MeldaProduction. They should know that there are a couple of DAW's whose "zero all controllers" schtick messes up MSoundFactory's arpeggiator at render time.


11 minutes ago, Starship Krupa said:

I'm working backward, trying to figure out why I heard a difference.

But then you'll never know for sure whether the difference you hear actually exists or whether you simply never noticed it before changing one variable. Auditory memory is pretty fragile, and things like expectation bias and the placebo effect are pretty effective at misleading people. It's that famous thing we've all done at least once: taking a mix, tweaking, say, a snare to perfection, only to discover the EQ was bypassed the whole time, or that you were tweaking a different track while being sure you actually heard a change in the one you thought you were tweaking. There are even stories of a "producer channel strip," which is something set up by the mix engineer so the producer can adjust his levels of satisfaction, even though said channel strip is not connected to anything.

Edited by Bruno de Souza Lino

5 hours ago, Starship Krupa said:

This now has me wondering if the famous "Suddenly Silent Synth Syndrome" is a condition that some VSTi's get into and don't recover from when you throw the "zero controllers" thing at them. Is "zero all controllers" a single message like "all notes off" or does it send individual zeroes to each controller?

I know this just adds to your woes... but the zero-controllers setting (which I understand the reason for including as a CbB default for the casual user) is really tough to allow across the board, especially if you are using MIDI in your testing. Not only does that command work differently than expected on synths and such that don't fully follow the original MIDI standard it comes from, but I realized a while back that CbB sends it out across all my active MIDI ports, not just the one I may be concentrating on at the moment. It can cause really unexpected results on a project with a lot of MIDI implementation. Not great for testing, let alone some complex projects.

Hopefully, you are at least testing with a MIDI file you created, so that you know it is clean of anything other than basic data, because without understanding the layering of MIDI commands (like volume in @John Vere's post) and how each DAW handles them, that will always be a wildcard. I would highly suggest testing with audio files only, not MIDI and VSTi combos. FWIW.


1 hour ago, Starship Krupa said:

Observe a blip and then try to come up with some way to reproduce it.

This is kind of my point, though. Did you hear the blip or did you hear one of the many things that might be masquerading as a blip? Pretty much what Bruno was saying.

Like I said, I'm not dismissing what you're trying to do, as much as I'm a fairly firm believer that most DAWs will null in truly repeatable scenarios. I agree it's an interesting exercise, and you can certainly learn a lot about the differences in how VSTs are handled, if nothing else, but you need to eliminate the known variables first or you're pretty much just chasing shadows. No results can be really definitive if there's something that may lead to a wrong conclusion.

And yes, apologies if I've skimmed over something and I've missed a crucial point of the thread! :)

