Everything posted by Starship Krupa

  1. Okay, here's the moment we've been anticipating. I was finally able to get what I think are similar results from 4 different DAW's.

     What prompted me to want to try this was that I was setting up some test datasets for the DAW's I use, and happened to notice that even though the test projects were both playing back the same arps on the same synths, I could hear a difference between the two DAW's. That got me thinking about why that might be, how to figure it out, and how I might go about creating actual music files that would be most suitable for "listening tests." Subjective, of course, using music and ears rather than test tones and impulses. That's not how I'd proceed if I were trying to prove something to someone else, but I'm not. It is trying to solve a problem, but approaching it from the other side. I started with an observation: I heard a difference between two DAW's that were playing back very similar material at what their controls said were very similar levels. As discussed earlier, although I don't believe it's "impossible" for the VSTi-hosting/mixing/playback subsystems of two different DAW's to sound perceptibly different (despite whatever proof was presented on a forum decades ago), I do think it would be unusual to hear as much of a difference as I did. What happens if I render the projects out and listen; can I still hear the difference? Yes, as it turns out. Hmm. What if I try it on a couple more DAW's? One still sounds better than the other 3, which sound pretty similar. Hmm. Looks like we got us an outlier.

     Various iterations of rendering and examining the peaks and valleys in Audacity revealed some big variances in the timing of peaks and valleys, due to either randomization in the arpeggiators or LFO's not synched to BPM. Or whatever. The hardest part was most definitely finding arpeggiated synth patches that were complex enough not to drive me nuts having to listen to them, yet had as little non-tempo-sync'd motion as possible. The biggest chore after that was figuring out how to do things like create instrument tracks, open piano roll views, and set note velocities in the two DAW's I was less familiar with. Once I got all the dynamics sorted, the statistics generated by Sound Forge fell better in line.

     But enough text for now; here are 4 renders from 4 DAW's, as close as I could make them without cheating any settings this way or that. Analyze them and, more importantly, listen to them, and see what you think. I haven't subjected them to critical, close-up listening yet, so I don't know how that will shake out. The only piece of information I'll withhold for now is which rendered file is from which DAW, so that people can try it "blind." If you think you hear any differences, it doesn't mean anything. Any audible differences are 100% the result of errors in my methodology, and besides, audible differences can't be determined by people just listening to stuff anyway. The only time you should trust your ears is when you are mixing, and when you are doing that, you should trust only your ears. The Thwarps files, beta 3.

     P.S. One of the audio files is a couple of seconds shorter than the others because I never did figure out how to render a selected length in that particular DAW.
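     For anyone who wants to run the same quick numbers on the renders, here's a minimal sketch of the kind of comparison I've been doing, assuming Python with numpy and soundfile installed; the file names are placeholders, not the actual Thwarps files:

        # Quick sanity check of the four renders: length, sample peak, RMS.
        import numpy as np
        import soundfile as sf

        for path in ["daw_a.flac", "daw_b.flac", "daw_c.flac", "daw_d.flac"]:
            data, rate = sf.read(path)   # float array, shape (frames, channels)
            mono = data.mean(axis=1) if data.ndim > 1 else data
            peak_db = 20 * np.log10(np.abs(mono).max())
            rms_db = 20 * np.log10(np.sqrt(np.mean(mono ** 2)))
            print(f"{path}: {len(mono) / rate:.2f} s, "
                  f"peak {peak_db:+.2f} dBFS, RMS {rms_db:+.2f} dBFS")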
  2. You want to hear the track you've already recorded while monitoring your current playing. Right? Do you have the UM2 selected as your output device (presumably from your Master bus)?
  3. My Hewlett-Packard signal generator (analog, still has a calibration tag from Apple's R&D lab) and Tektronix 465 oscilloscope (the apex of analog 'scopes, IMO) seem to generate and display pretty clean ones. (If a unit under test on my bench were putting out a "square" wave that looked like that, I'd be looking for where the ringing was coming from.)
  4. Hmm. It actually affects velocity, not volume. So... is 101 the equivalent of "unity," meaning that setting will pass whatever velocity my notes are set to, unchanged? I think this is one parameter I'm going to leave alone (for now). Another cool feature of Cakewalk that I haven't seen anywhere else, though. In the case of your bass ROMplers, I'd think that cranking the MIDI "volume" would also have an effect on the timbre of the instrument. With higher velocity usually comes more spank and growl, no?
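     For reference: as I understand it, the General MIDI 2 / DLS specs recommend that a synth map CC7 ("channel volume") to gain as 40*log10(v/127) dB, which would make 127 unity and 101 roughly -4 dB. Whether a given softsynth actually follows that curve, or whether the widget sends CC7 at all rather than scaling velocity, is another matter, so treat this as a back-of-the-envelope sketch:

        # Sketch only: the DLS/GM2 *recommendation* maps MIDI volume (CC7)
        # to gain as 40*log10(v/127) dB; individual synths are free to use
        # other curves (or to ignore CC7 entirely).
        import math

        def cc7_gain_db(v: int) -> float:
            """Recommended gain in dB for a CC7 value v (1..127)."""
            return 40 * math.log10(v / 127)

        for v in (127, 101, 100, 64):
            print(f"CC7={v:3d}: {cc7_gain_db(v):+6.2f} dB")
        # CC7=127 comes out at 0 dB (unity); 101 is about -4 dB on this curve.

     So if that curve applies, 101 isn't unity; 127 is.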
  5. BTW, I've been posting that link for years, and in all that time, nobody has ever commented on it. It would seem to indicate that, at least in the matter of sample rate conversion, the algorithms and implementations in different DAW's are not created equal. The "proof" that people tend to cite is either theoretical work done decades before personal computer DAW's were a twinkle in the eye (Fourier, Nyquist, Fletcher-Munson) and/or words to the effect of "we talked about this on a bunch of forums a long time ago, and one guy tested it and we all agreed, therefore the topic must never be discussed again." Is it really impossible that the actual practical implementation of the work of Messrs. Fourier et al. could be imperfect? Having toiled for years in the software QA biz, it seems... naive, I suppose, to assume that every group of programmers who attack the problem gets it right the first time. The SRC page seems to suggest that in the case of SONAR and others, there was room for improvement. All of you who are so sure that there are no practical differences between DAW's, and have all these ideas about testing methodology: who among you has actually tried it? Anyone?

     BTW, here's a picture of a square wave, generated by MOscillator and read by MOscilloscope, both in the same FX bin in Cakewalk. MOscillator is set (internally) to 4X oversampling. Anyone care to enlighten me as to why I see what looks like ringing?
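     For what it's worth, band-limiting alone can explain the ringing: a band-limited "square" wave is a truncated Fourier series of odd harmonics, and the truncation produces the Gibbs phenomenon, an overshoot of roughly 9% of the jump at each edge that never goes away no matter how many harmonics are kept. A quick numpy sketch to illustrate (this is the textbook series, not MOscillator's actual algorithm):

        # Gibbs phenomenon: partial Fourier sums of a square wave overshoot
        # at each edge; the overshoot converges to ~8.9% of the jump instead
        # of vanishing as more harmonics are added.
        import numpy as np

        t = np.linspace(0, 1, 100_000, endpoint=False)
        f0 = 5  # fundamental: 5 cycles across the window

        def bandlimited_square(n_harmonics):
            """(4/pi) * sum over odd k of sin(2*pi*k*f0*t)/k."""
            x = np.zeros_like(t)
            for k in range(1, 2 * n_harmonics, 2):  # k = 1, 3, ..., 2n-1
                x += np.sin(2 * np.pi * k * f0 * t) / k
            return 4 / np.pi * x

        jump = 2.0  # the wave swings from -1 to +1
        for n in (10, 100, 1000):
            overshoot = bandlimited_square(n).max() - 1.0
            print(f"{n:5d} harmonics: overshoot = {overshoot / jump:.1%} of the jump")

     Which would square (sorry) with the clean traces on the analog bench gear: the HP generator isn't band-limited the way a 4X-oversampled digital oscillator has to be.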
  6. Still working on getting a more uniform dataset. Some things I've learned so far:

     1. The synths/patches I used in the initial experiment both use arpeggiators that yield different results with each pass (thanks to @OutrageProductions for reminding me of that possibility). The big reminder from this is that the "same" project can render differently from the same DAW twice in a row. I knew this already, and even account for it in my usual rendering workflow (I render a finished song only once, to a lossless file, then convert that render to various other formats as needed). But it's good to remember that stochastic elements can lurk in places I might not expect them. Also: cool that two of my favorite synths can do stochastic. It's a challenge to get consistent results with synths' built-in arpeggiators. I would try a 3rd-party arpeggiator, but not all of the DAW's support MIDI FX, and the ones that do, I'll bet, don't route as simply as Cakewalk, with its straightforward FX bin on each MIDI strip.

     2. Among the four DAW's I'm working with, getting them to render at a given length can be tricky (and this includes our darling, Cakewalk). One of them always wants to render everything out until it doesn't hear any more tails. That's fine, I suppose; one can always pull the render up in an audio editor and trim to heart's content, but still, I prefer having control over it.

     3. Among the four DAW's, with a more consistent dataset, I'm seeing more consistent analyses as far as LUFS, but one of them is poking (way) out in loudness range (LU), and another is poking out in Maximum True Peak.

     4. There's always something new to learn, even about things that are right in front of me. For instance, whilst puzzling all this out, I've been setting the MIDI velocity of the triggering note to 100. Of the different DAW's, Cakewalk is the only one that (as far as I can figure out) can do separate MIDI and synth tracks, which is the way I use virtual instruments. But Cakewalk also has a "volume" control on its MIDI strips, and I'm not even sure what it does. By default, it's set to 101. Is that "unity" in the MIDI volume world? Do all softsynths respond to that volume setting in the same way? Does that setting even do anything?
  7. It's an absolutely objectively proven fact that whenever someone posts this link to a forum, Joseph Fourier turns over in his grave. Even just the times I've done it would be enough for the other residents of the cemetery to call him "Whirlin' Joe." Anyway, I'm not trying to prove that all DAW's do or don't sound the same bla bla bla. I was hoping that by saying so and by using deliberately imperfect, subjective testing methods that I would avoid that sort of thing, but whatever.
  8. How many people countered your "useless" results by setting up their own tests and then showing their results? Wait, let me guess: absolutely nobody. Oh heck yeah. If one of the bits of software I'm playing with generates rendered files that are even just higher in level, that will be interesting to find out.
  9. How many people countered your "useless" results by setting up their own tests and then showing their results? Wait, let me guess: absolutely nobody.
  10. So are the people at the companies who actually make the software lying? Related question: is there something about Fourier, Nyquist, and Shannon's theorems that prevents programmers (and/or their programming tools) from making mistakes when creating software that uses them?

     What "error?" Where did I say the sound source was "immutable?" You must have missed the part where I said this: I think my only "error" was in overestimating folks' reading comprehension.

     What do you think I'm trying to accomplish by doing this? I don't have some theory or other that I'm trying to prove or disprove. I just thought of a repeatable set of steps that I suspected might yield interesting results. The idea is to mess about in a deliberately non-rigorous, non-scientific way and see what happens. Then maybe I can draw a conclusion or two based on the result. Maybe not. Certainly not a scientific conclusion, but maybe a practical one. I'm always fascinated by posts where people are finding that (for instance) their rendered files don't sound like their music did during mixing. I like to try things and observe the results. This can be puzzling to people who prefer to already know what the results will be, but I find it fun and often even educational. Here, the objective is to see what happens if I take 4 DAW's, use the same soft synths playing the same arp patches, with the same mixer settings, and render out the files. The operative words being "see what happens." I know the renders are going to come out different. I'm wondering how different, and in what way(s).
  11. Indeed, and that sort of thing is part of the "black box" nature of my experiment. The idea is to subject these DAW's to as close to the same stimulus as possible and see (and hear) what happens. The results may suggest that I should be mindful of just how differently DAW's handle silences. I was surprised at the differences in just the length of the rendered files, given that I had selected 10 bars at 120 BPM. 10 bars is 40 beats, right?
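     Checking that arithmetic: in 4/4, 10 bars is indeed 40 beats, and at 120 BPM a beat lasts 60/120 = 0.5 seconds, so the selection works out to 40 × 0.5 = 20 seconds. Renders coming back between 18 and 25 seconds means the DAW's are trimming or padding around the selection, not counting the beats differently.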
  12. I'm taking a closer look at whether the arp patches I'm using actually stay the same over 8 bars. Initially, I only listened to 1 bar to get an idea. This stuff is why I haven't posted links to the actual renders yet. I know my "methodology" isn't ideal. This is all just an experiment to see what happens. I'm not setting out to prove any theory I have; I really just want to see what happens.

     Thank you for the extensive explanation. It'll take me some time to digest it, but this part I get right away. I found out about the effects of jitter while looking into why the sound of prosumer-level audio interfaces has improved so much. My first setup was a pair of PreSonus Firepods (I got a deal on them and thought I could make use of 16 channels). I hand-waved them as a product that had been well reviewed when they were introduced, and used them for years. Then I got a PreSonus Studio 2|4 to use with my laptop. I plugged it into my main DAW to give it a listen and was floored by how much better it sounded. And when I say "better," it's in terms that some people disparage: the difference was in transient detail and the controversial "soundstage." If you follow the link in my sig about how I modded a crappy-sounding Alesis RA-100, you'll see an attack by an "if I can't see it on my test equipment, it doesn't exist" type.

     So I started researching how this budget interface could sound so much better than my Firepods, and found out that the Firepod was designed and manufactured in the early '00s, just before JetPLL, a low-cost technology for dropping jitter levels by orders of magnitude, was introduced to prosumer DAC's. When the Firepod's successor, the FireStudio, came out in 2007, it featured a DAC that used JetPLL. All subsequent interfaces from them use this technology, as do those from many other interface makers, such as Focusrite. I immediately went shopping for a newer interface and got a great deal on a new-in-box Focusrite Saffire Pro 40 (I still prefer FireWire; I'll switch when Thunderbolt interfaces become available in my price range). The Firepods have been set aside to be sold on Craigslist.
  13. Definitely a consideration. I chose arp patches that produced the same notes when they are triggered. I may have got that wrong, though. It's all a big experiment.
  14. Can you elaborate on this? The programmers' work is based on his theories, but does it necessarily follow that everyone will implement them in the same way? I ask sincerely, seeking education rather than debate. What DAW's do goes beyond just recording the stream coming from the audio interface: there's hosting VSTi's, running FX plug-ins, mixing those streams, panning them, all sorts of things. Did Fourier's work account for all of that? I ask because I don't know. In the past decade, multiple DAW manufacturers have advertised that they improved the sound of their audio engines. What do you make of that?
  15. Can you elaborate? I only know enough to create practical tests. Joseph Fourier passed away in 1830. What did he come up with that ensured that in the future, multiple teams of programmers would all independently design and implement algorithms that mix streams of digital stereo audio together so as to produce exactly the same results? The companies that have claimed in their marketing literature to have improved the sound of their DAWs' audio engine (as MAGIX, Acoustica, and Ableton have all done in the past decade), were they lying? Please educate me. I'd love to learn more.
  16. I just did an experiment using 4 different DAW's. In each of them, I created 2 MIDI/Instrument tracks with the same 2 VSTi's playing the same 2 complex arpeggio patches, triggered by the same note (G2): one bar of silence, 8 bars of the note (velocity 100), followed by a bar of silence. Channel faders were pulled down to -2dB, pan set to center, no FX, 120 BPM. In each one I made a selection of the first 10 bars (the 8-bar clips with a leading and trailing bar), then rendered each project to FLAC.

     The idea was to see what results would come from performing a set of steps, not necessarily to do something "scientific." If I were after objective, measurable results, I'd perhaps use test signals, impulses, whatever, as the source material and measure the results with the best test equipment I could get. But I don't record test signals, and I don't listen using measurement equipment. I will disclose that I am not one of those who believe that the mixing, panning, and summing engines of all the different DAW's on the market generate results that sound exactly the same given the same source material. That would require too wide a variety of people coming up with the same solutions to too wide a set of problems for me to believe it possible. As a consumer and user of audio recording and mixing programs, the question interests me.

     The point was primarily to create audio files that I could listen to critically, but I did run some analyses using Sound Forge's "Generate Statistics" function, and the results were interesting. According to Sound Forge, the integrated LUFS ranged from -17.12 to -20.02, for instance, and maximum true peak ranged from -0.76 to -3.90dB. The length of the rendered audio files ranged from 18 seconds to 25 seconds, due to the way each DAW handles silent lead-ins and lead-outs that are selected. I'll be subjecting the resulting files to close listening tests on a variety of speakers and headphones. The arps I used have a lot of good transient material, and it will be interesting to see what the differences in imaging are.
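     For anyone who wants to reproduce those statistics without Sound Forge, here's a minimal sketch using Python with soundfile, pyloudnorm, and scipy (all assumed installed; the file names are placeholders, and the 4x-oversampled peak is only an approximation of true peak):

        # Integrated loudness (LUFS) and an approximate true peak per render.
        import numpy as np
        import soundfile as sf
        import pyloudnorm as pyln
        from scipy.signal import resample_poly

        for path in ["daw_a.flac", "daw_b.flac", "daw_c.flac", "daw_d.flac"]:
            data, rate = sf.read(path)
            lufs = pyln.Meter(rate).integrated_loudness(data)
            over = resample_poly(data, up=4, down=1, axis=0)  # rough 4x oversampling
            tp_db = 20 * np.log10(np.max(np.abs(over)))
            print(f"{path}: {lufs:.2f} LUFS, true peak ~{tp_db:+.2f} dBTP")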
  17. Hello fellow Generation Joneser. B. 1961 myself, which, in keeping with the topic, makes me a "sexugenarian." I'm going through some major upheaval in my life right now and it's good to hear from someone coming out the other side of changes in an improved state. Merry Evenings Define Success!
  18. Yes! How could 6 different development teams all make the same decisions as to how to mix audio files together? Or render them? I'm actually in the middle of setting up a listening test using a pair of virtual instruments playing arpeggio patches. I'm hoping that it will allow me (and others) to do some testing of subjective listening quality. I was just messing about, keeping my chops up with my secondary DAW, Mixcraft, and noticed a difference between how the same two synths playing overlapping arps sounded in relation to how the same dataset sounded in Cakewalk. The plan is to expand the testing to other DAW's I have access to, like Ableton Live Lite and Studio One Artist 5.

     Apparently I was the only one who got the joke about your vocals sounding crappy. A practical hint on that, though: your voice is an instrument just like the others you play. What makes your playing get better? Practice. Same with singing. The more I practice, the better my pitch, overall confidence, and performance get. Record your vocal part as many times as you can stand to over a number of days. Record, listen to the take, repeat.
  19. Re: "App crash": Are you attempting to drag a plug-in from the Plug-In Browser and drop it on a track? If so, are you dropping it somewhere on a clip in the track, or onto the FX bin in the Track Header? Those operations will have different results.
  20. Based on what evidence? I know we all want this to be true due to the can of worms we'd be facing otherwise. If you have a credible source of information I would love to know what it is.
  21. This is, unfortunately, a question that only the developers of the respective programs could answer definitively and objectively. From a user's perspective, all we can do is test it: make a couple of small mixes with no plug-ins (or only simple ones), using the same fader and plug-in settings, then render the projects and listen.

     There are end users who will tell you that "all DAW's sound the same, period," because they knew a guy who knew a guy who read on a forum that someone tried a null test of unknown parameters and source audio and it nulled perfectly. Presumably that fictional person ran the test on every DAW currently on the market. I can say that I ran null tests a few years ago using sine waves as source material (not even a great test, because it leaves out transients), with Cakewalk and Mixcraft as the DAW's under test, and they didn't null. They weren't terribly far apart, but at least at the panning settings I used, no FX, the renders weren't identical. In the meantime, we have advertising material from different DAW makers (including Samplitude/Music Creator and Mixcraft) touting how they improved the sound of their respective audio engines. You see why I am personally skeptical of the "all DAW's sound exactly alike" statements.

     I might go as far as saying that at the task of recording whatever audio is coming in from the interface, you'll likely get the same results. However, as soon as you start creating mixes, a whole lot of proprietary algorithms for mixing, panning, and interfacing with plug-ins enter the picture. You did say "noticeable," and that's subjective. Who's doing the listening and noticing? Experienced, trained listeners? What programs and hardware are they using to play it back? Audio player software for computers varies widely in sound quality. There are so many variables that the only way to get satisfactory answers, IMO, is to set up some tests ourselves and listen.
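     For anyone who'd rather run the null test than take somebody's word for it, it only takes a few lines; a sketch assuming Python with numpy and soundfile, with hypothetical file names. Note that the two renders must be sample-aligned (even a one-sample offset will wreck the null), and differing sample rates or channel counts have to be reconciled first:

        # Null test: subtract one render from the other; a perfect null
        # (all zeros) means the files are bit-identical over the compared span.
        import numpy as np
        import soundfile as sf

        a, rate_a = sf.read("cakewalk_render.flac")
        b, rate_b = sf.read("mixcraft_render.flac")
        assert rate_a == rate_b, "resample first if the sample rates differ"

        n = min(len(a), len(b))   # trim to the shorter render
        residual = a[:n] - b[:n]

        rms = np.sqrt(np.mean(residual ** 2))
        print("perfect null" if rms == 0
              else f"residual RMS: {20 * np.log10(rms):.1f} dBFS")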
  22. Wow, right from the very start: "this software uses Windows 3.1 features which are ignored by Cakewalk's main rivals." And: "If you click the right mouse button on a bar, Cakewalk presents you with a choice of editors for altering the information within that section of the track." Cakewalk was using right-click context menus as early as their first release for Windows 3.1. Dang. I've been a fan of context menus since the first release of Windows 95. That's when they, uh, really started to click for me.
  23. Holy crap, Peter!! This sounds excellent! So much improvement in the mixing, instrument separation, vocal sound. If there's one thing I'd change, and it's pretty small in the huge scope of this piece, it would be to "dirty up" a couple of the instrument sounds, and maybe use a touch of delay to nestle them down in the rest of the mix. This would be the Rhodes (?) sound at the beginning and the electric guitar lead around the end of the first chorus. The rest of the song, the arrangement, the treatment of the vocals, man, you sound like you're on a TV special tribute to Jeff Lynne. And: Wasn't that what I said about "I Am The Walrus?" Ross agrees, you have a cool, distinctive voice. For perspective, I think my singing voice sounds like a middle school principal reading the lunch menu on a day when they're especially bored with the task.
  24. Of course, for MAddicts who would like to someday get any bundle that includes MVibratoMB, this is a good move financially. At upgrade time, Melda credits 50% of the list price of any plug-in purchased from a retailer (they assume you got a discount, and it's also an encouragement to buy direct). So in this case, since MVibratoMB lists for $43, your $9 eventually gets you about $21 in upgrade credit. I had a policy of buying any MeldaProduction product that went on sale for under $10, and took advantage of a few intro sales. These added up to big savings on the way to the bundles I got during the 50%-off-everything sales, including, of course, the Mother Of All Bundles. And BTW, when those bundles go on sale at 50% off, if you own any Melda licenses, always check your bundle upgrade prices. I was always surprised by how little the upgrade cost. Those freebies, $9 specials, and intro deals really add up.

     Yeah, there's also the basic original MRotary to confuse matters further. I think MVintageRotary may add saturation and cab simulation? It can be puzzling to figure out what is what when there's (for instance) MReverb, MReverbMB, MTurboReverb, and MTurboReverbMB. Here's how I understand it, after having been perplexed for a good long time: "Turbo" means "Deluxe" or "Advanced" and sometimes "Vintage modeled." In the case of these reverbs, the difference is that the "Turbo" version has the feature where you can design your own algorithms. I have no idea who the target market for that level of customization is. More power to them (so to speak). The underlying algorithms in the stock presets for MTurboReverb sound better (to my ears it's the equal of any reverb I've heard). There are so many different "devices" (more about that later) and variations within the devices that I will likely never have time to audition all of them, unless I end up doing prison time where I get to have my laptop. The Turbo ones have more "devices," which is where they've taken various controls, grouped them together, and put a more conventional GUI face on them. These grouped controls are what MeldaProduction calls "multiparameters." That confused the hell out of me for years until I finally figured it out: "multiparameters" are just controls that are tied together and may be operated with a single slider. The "devices" take those and use skeuomorphic knobs instead of the sliders, plus skeuomorphic buttons to select different algorithms and such. Nice for people who like the way the MeldaProduction stuff sounds but are put off by the complexity and look of the UI's.

     Wow, that's a new one (to me). I wonder what his problem is with that. Unlike Steinberg, MAGIX, Avid, Waves, Apple, Microsoft, etc., I've never heard anyone say that Image-Line does evil (not that I'm saying any of the aforementioned companies do evil). Of course, it seems like no matter what their policies, any company can raise the ire of any individual. I've seen people rail against MeldaProduction's upgrade policies. They can seem weird at times; in the last go-around, the cost for me to upgrade to one of their bundles was about $400, while upgrading to the MComplete bundle (everything they have made or ever will make) was $120. But who cares? It's not as if they were trying to cheat me somehow. They give credit for licenses you own that are part of the bundle, and since every product is part of MComplete, I got credit for every single thing I already had. But I've seen people just fume over that.