Everything posted by Starship Krupa
-
Scott has a niche there. He works in a genre that seems kind of underserved (not "undeserved!") on YouTube. There's....plenty....of pop r 'n' b stuff on there, which, to be fair, uses slick production techniques to the hilt, but it's not a genre that I much care for, at least not in its current state.
-
Even Windows 10's Explorer has this functionality built in. I've had to use it from time to time to round up stray files.
-
Indeed they are. But not all (physical) mixers behave and sound alike, although there was a time when most people believed they did. And my questions have to do with what choices (and compromises) developers make when copying said mixer. I was once a professional software QA engineer (at Macromedia/Adobe), and I know something about how programmers approach problems. Everyone wants to implement their own great ideas, and there's no magic hand that comes down out of the sky and slaps their wrist when they decide not to adhere to accepted principles or practices.

Here's Harrison on their claims about how their DAW, Mixbus, supposedly sounds "better" than other DAW's: "When the digital revolution came, we were asked to convert the analog "processor" into a digital processor, while leaving the control surface unchanged. Film mixers wanted the control surface to work and sound exactly like the analog mixer they were using for previous projects. This required us to develop a digital audio engine that operated and sounded exactly like the analog mixer they were using for previous projects. This transition was not undertaken by any other company, and it has provided us with techniques and proprietary technology that we have incorporated into all of our high-end mixers. Mixbus gives us an opportunity to share this technology with a much wider range of users." Sure, it's a marketing blurb. And so what?

A pile of amateurs having a big debate. Gearspace is known for being a big weenie-waving fest. Yeah, I get that some people are tired to death of this discussion. "Gawd, not this again!" So don't follow it! The culture over here is blessedly different. I 100% agree that if one is going to worry about some aspect of the sound resulting from their recording and mixing process, there are many, many things to be concerned about before you get to "hmmm, I wonder if there is a difference between how different DAW's sound. I better get the 'best' one." 
The answer to that (from me) is: if you're really concerned about it, just frickin' try out the DAW's you're considering. They all have either trial or freebie versions. If you hear a difference between Pro Tools First, Cakewalk, Ableton Live Lite, REAPER, et al., then go with the one that sounds best to you. Don't go by the "wisdom" of a bunch of guys (and they're always guys) trying to top each other in some discussion. That's my approach: if you're curious, if it's important to you, then try it. I did, recently, and was brave enough to post about my progress through the process.

One of the conclusions that I and the people sharing in the discussion have come to seems to be that it's really hard to design tests that don't themselves introduce differences. Let's say someone wants to test the summing engine. If they go with already-created audio material, well, that right there bypasses the DAW's own recording engine. Then most DAW's want to convert imported audio files. There's another point of failure: at that point we're also testing their conversion algorithms. And so on and so on. It's really hard to do, and I'll bet that way less than 1% of the people who debate this have ever actually tried to test it themselves. People get hung up on making it 100% absolutely objective.

So my approach (in that other thread) is to loosen up a bit on trying for an ideal, and go with actual musical program material, generated by the DAW using plug-ins, entirely in the box. People have pointed out that my methodology is imperfect, but my methodology imitates how actual people use music software. We don't record pure sine waves and square waves from test equipment. We record and mix complex program material. I'm flailing around trying it right now, and as far as I'm concerned, the other people weighing in have been really cool about it. They're keepin' me honest. 
But I'm also emphasizing that what I'm coming up with is imperfect and not intended to prove anything other than maybe that it's really hard to design and implement a test for this. I posted the renders that came out of my exploration for anyone to listen to if they are curious. So now I'm one of those mythical guys (and we're always guys?) on a forum somewhere who tried testing it. And my "conclusion" is that objective tests are next to impossible. Which is kind of as it should be. When we're trying out other music tools, we're certainly not objective. We noodle around and listen, we claim that blue guitars with gold hardware are ugly, etc.
-
4 DAW's, 4 renders, 4 results
Starship Krupa replied to Starship Krupa's topic in Cakewalk by BandLab
Revised the package of files again. I listened to them and Thwarp C.FLAC sounded as if it were lower in volume than the others, so I opened the DAW project to check. All of the track levels were set to what my instructions call for....except the fader for the Master bus, which was set to -3. No idea how it got that way. I never deliberately touched it. Thwarps Beta 3 -
4 DAW's, 4 renders, 4 results
Starship Krupa replied to Starship Krupa's topic in Cakewalk by BandLab
At least once?? Yes, I've tweaked compression settings to perfection using bypassed compressors. About once a month. I expect the frequency to increase as I get further past 60. Ah, but the thing about this scenario is that tweaking the EQ and compression on one track will affect the sound of another track. Right? Make some cuts or compress over here, and suddenly the track over there starts popping out in the mix. It can even happen with things like delay and reverb, where the track we're working on gets moved back in the mix. In the Jenga game of mixing, what affects one track usually affects the others. I stumbled across that one, and while I already "knew" it intellectually from studying masking and how to reduce collisions, the experience of tweaking the EQ on one track and hearing the results on another track was the revelation moment. Okay, here are the revised files. -
4 DAW's, 4 renders, 4 results
Starship Krupa replied to Starship Krupa's topic in Cakewalk by BandLab
Yes, for it to produce objective results. I'm working backward, trying to figure out why I heard a difference. Part of that is eliminating as many variables as I can and then listening again, as well as submitting my renders for peer review, which, in the case of apparently only one person snagging them before I took down the link, has already yielded valuable feedback. I have no illusions that I'll be able to eliminate all variables when working with virtual instruments. In the end, stock patches always have something like reverb or chorus baked in. I'll likely be submitting the anomalies you noticed to MeldaProduction. They should know that there are a couple of DAW's whose "zero all controllers" schtick messes up MSoundFactory's arpeggiator at render time. -
4 DAW's, 4 renders, 4 results
Starship Krupa replied to Starship Krupa's topic in Cakewalk by BandLab
I know you said you only skimmed the thread, but I've explained in detail that I'm not trying to prove or disprove a difference in engines. I fully admit and concede that it's not likely to result in anything other than an interesting exercise. My process here is a "wrong end" approach. It's like the Air Force's UFO investigation project. I heard something and I'm trying to eliminate/minimize as many variables as I can. It's also a method used by pro QA engineers, one of which I once was. Observe a blip and then try to come up with some way to reproduce it. I've also disclosed that I do think it's possible that there's something about how a DAW implements things that can result in a difference in perceived sonic "quality." For a subject that so many contend was laid to rest such a long time ago, this does seem to be getting a lot of attention. And I do appreciate the lack of dismissiveness toward my flailing about. As for using audio files, yes, I may do that at some point. However, when I heard the difference between the two DAW's, it was when using virtual instruments. That gave me the idea that there might be something about how the DAW's were implementing their hosting of virtual instruments, and from that point forward, well, can't divert a fool from his folly. -
4 DAW's, 4 renders, 4 results
Starship Krupa replied to Starship Krupa's topic in Cakewalk by BandLab
Wow. Okay, I'm not going to be able to get back to this for several hours, but here's yet another thing I learned: MSoundFactory was changing the behavior of one of the arp patches I used in response to being told to "zero all controllers." Two of the DAW's were set to zero all controllers on stop. So that synth track is there, it's just not doing the lively arp behavior it's supposed to. Thanks again to Bruno for noticing this before I did. This now has me wondering if the famous "Suddenly Silent Synth Syndrome" is a condition that some VSTi's get into and don't recover from when you throw the "zero controllers" thing at them. Is "zero all controllers" a single message like "all notes off" or does it send individual zeroes to each controller? This kind of thing is part of why I did this: I just never know what I might learn. It could be quirks of various synths I try, default or optional MIDI behaviors of the hosts, whatever. -
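For what it's worth, the MIDI 1.0 spec answers that last question: "Reset All Controllers" is a single three-byte Channel Mode message (CC 121), just like "All Notes Off" (CC 123); it does not send an individual zero to every controller, and it's up to the receiving synth to decide which of its internal controllers to reset, which may be exactly why some VSTi's react to it in surprising ways. A quick Python sketch of the difference between that one message and the brute-force alternative (the byte values are from the MIDI spec; the function names are just mine):

```python
# "Reset All Controllers" vs. zeroing each controller one by one.
# Status byte 0xBn = Control Change on channel n; controllers 120-127
# are reserved as Channel Mode messages, with 121 = Reset All Controllers.

CONTROL_CHANGE = 0xB0  # high nibble = Control Change, low nibble = channel

def reset_all_controllers(channel: int) -> bytes:
    """One three-byte message: CC 121, value 0."""
    return bytes([CONTROL_CHANGE | channel, 121, 0])

def zero_every_controller(channel: int) -> list[bytes]:
    """The brute-force alternative: one CC message per controller (0-119)."""
    return [bytes([CONTROL_CHANGE | channel, cc, 0]) for cc in range(120)]

print(reset_all_controllers(0).hex())  # b07900
print(len(zero_every_controller(0)))   # 120
```

So a DAW that sends CC 121 on stop is sending one message and trusting the synth to do something sensible with it; a synth whose arpeggiator treats "reset" as "kill my modulation state" would behave exactly as described above.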
4 DAW's, 4 renders, 4 results
Starship Krupa replied to Starship Krupa's topic in Cakewalk by BandLab
Crap! Thank you, Bruno, that's what I get for not listening closely enough. What you actually caught is the complete LACK of one of the synth tracks firing off in the entire snippet. Which is something I was going to mention later: good LORD did I ever learn that Cakewalk wasn't the only DAW where MIDI/virtual instrument tracks stop making sound for no apparent reason. At one point or other, in each of the 4 DAW's I was playing with, I wound up having to do the dance of deleting an instrument track and its MIDI data and creating it afresh before the track would start making sound again. Stand by for revision 2.... -
4 DAW's, 4 renders, 4 results
Starship Krupa replied to Starship Krupa's topic in Cakewalk by BandLab
Okay, here's the moment we've been anticipating. I finally was able to get what I think are similar results from 4 different DAW's. What prompted me to try this was that I was setting up some test datasets for the DAW's I use, and happened to notice that even though the test projects were both playing back the same arps on the same synths, I could hear a difference between the two DAW's. So it got me thinking about why that might be, and how to figure out why, and that led to wondering how I might go about creating actual music files that would be the most suitable for "listening tests." Subjective, of course, using music and ears rather than test tones and impulses. That's not how I'd proceed were I trying to prove something to someone else, but I'm not.

It is an attempt to solve a problem, but approached from the other side. I started with the observation: I heard a difference between two different DAW's that were playing back very similar material at what their controls said were very similar levels. As discussed earlier, although I don't believe that it's "impossible" for the VSTi-hosting/mixing/playback subsystems of two different DAW's to sound perceptibly different (despite whatever proof was presented on a forum decades ago), I do think it would be unusual to hear as much of a difference as I did. What happens if I render the projects out and listen, can I still hear the difference? Yes, as it turns out. Hmm. What if we try it on a couple more DAW's? One still sounds better than the other 3, which sound pretty similar. Hmm. Looks like we got us an outlier.

Various iterations of rendering and examining the peaks and valleys in Audacity revealed that there were some big variances in the timing of peaks and valleys, due to either randomization in the arpeggiators or LFO's not synced to BPM. Or whatever. 
The hardest part was most definitely finding arpeggiated synth patches that were complex enough not to drive me nuts having to listen to them, yet didn't have any non-tempo-synced motion, or at least as little as possible. The biggest chore after that was figuring out how to do things like create instrument tracks, open piano roll views, set note velocities, etc. in the two DAW's I was less familiar with. Once I got all the dynamics sorted, the statistics generated by Sound Forge fell better in line.

But enough text for now. Here are 4 renders from 4 DAW's, as close as I could make them without cheating any settings this way or that. Analyse them, and more importantly, listen to them, see what you think. I haven't subjected them to critical, close-up listening yet, so I don't know how that shook out. The only piece of information I'll withhold for now is which rendered file is from which DAW, so that people can try it "blind." If you think you hear any differences, it doesn't mean anything. Any audible differences are 100% the result of errors in my methodology, and besides, audible differences can't be determined by people just listening to stuff anyway. The only time you should trust your ears is when you are mixing, and when you are doing that, you should trust only your ears. The Thwarps files, beta 3.

P.S. One of the audio files is a couple of seconds shorter than the others because I never did figure out how to render a selected length in that particular DAW. -
You want to hear the track you've already recorded while monitoring your current playing. Right? Do you have the UM2 selected as your output device (presumably from your Master bus)?
-
4 DAW's, 4 renders, 4 results
Starship Krupa replied to Starship Krupa's topic in Cakewalk by BandLab
My Hewlett-Packard signal generator (analog, still has a calibration tag from Apple's R&D lab) and Tektronix 465 oscilloscope (the apex of analog 'scopes, IMO) seem to generate and display pretty clean ones. (If a unit under test on my bench were putting out a "square" wave that looked like that, I'd be looking for where the ringing was coming from.) -
4 DAW's, 4 renders, 4 results
Starship Krupa replied to Starship Krupa's topic in Cakewalk by BandLab
Hmm. It actually affects velocity, not volume. So....is 101 the equivalent of "unity?" Meaning that setting will pass whatever velocity my notes are set to unchanged? I think this is one parameter I'm going to leave alone (for now). Another cool feature of Cakewalk that I haven't seen anywhere else, though. In the case of your bass ROMplers, I'd think that cranking the MIDI "volume" would also have an effect on the timbre of the instrument. With higher velocity usually comes more spank and growl, no? -
4 DAW's, 4 renders, 4 results
Starship Krupa replied to Starship Krupa's topic in Cakewalk by BandLab
BTW, I've been posting that link for years, and in all that time, nobody has ever commented on it. It would seem to indicate that, at least in the matter of sample rate conversion, their algorithms and implementations are not created equal. The "proof" that people tend to cite is either theoretical work done decades before personal computer DAW's were a twinkle in the eye (Fourier, Nyquist, Fletcher-Munson) and/or words to the effect of "we talked about this on a bunch of forums a long time ago, and one guy tested it and we all agreed, therefore the topic must never be discussed again." Is it really impossible that the actual practical implementation of the work of Messrs. Fourier et al. could be imperfect? Having toiled for years in the software QA biz, it seems....naive, I suppose, to assume that every group of programmers who attack the problem gets it right the first time. The SRC page seems to suggest that in the case of SONAR and others, there was room for improvement.

All of you who are so sure that there are no practical differences between DAW's, and have all these ideas about testing methodology: who among you has actually tried it? Anyone?

BTW, here's a picture of a square wave, generated by MOscillator and read by MOscilloscope, both in the same FX bin in Cakewalk. MOscillator is set (internally) to 4X oversampling. Anyone care to enlighten me as to why I see what looks like ringing? -
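My best guess on the ringing, hedged because I haven't seen Melda's internals: it looks like the Gibbs phenomenon. A band-limited "square" wave (the only kind a sampled system can carry) is really a finite sum of odd harmonics, and a finite sum of odd harmonics always overshoots by roughly 9% at each edge, no matter how many harmonics you pile on. A quick pure-Python sketch:

```python
import math

def bandlimited_square(t: float, harmonics: int) -> float:
    """Partial Fourier sum of a square wave: (4/pi) * sum of sin(k*t)/k, odd k."""
    return (4 / math.pi) * sum(
        math.sin(k * t) / k for k in range(1, 2 * harmonics, 2)
    )

# Sample densely just after the rising edge at t = 0. Even with 200
# harmonics, the peak hovers around 1.18 instead of 1.0: the classic
# ~9% Gibbs overshoot (of the full -1 to +1 jump) on each side.
peak = max(bandlimited_square(t / 10000, 200) for t in range(1, 2000))
print(round(peak, 2))  # ~1.18
```

So the trace from MOscilloscope may simply be showing what a correctly band-limited square wave has to look like, rather than a defect in either plug-in.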
4 DAW's, 4 renders, 4 results
Starship Krupa replied to Starship Krupa's topic in Cakewalk by BandLab
Still working on getting a more uniform dataset. Some things I've learned so far:

1. The synths/patches I used in the initial experiment both use arpeggiators that yield different results with each pass (thanks to @OutrageProductions for reminding me of that possibility). The big reminder from this is that the "same" project can render differently from the same DAW twice in a row. I knew this already, and even account for it in my usual rendering workflow (I render a finished song only once, to a lossless file, then convert that render to various other formats as needed). But it's good to remember that stochastic elements can lurk in places I might not expect them. Also: cool that two of my favorite synths can do stochastic. It's a challenge to get consistent results with synths' built-in arpeggiators. I would try a 3rd-party arpeggiator, but not all of them support MIDI FX, and the ones that do, I'll bet they don't route as simply as Cakewalk, with its straightforward FX bin on each MIDI strip.

2. Among the four DAW's I'm working with, getting them to render at a given length can be tricky (and this includes our darling, Cakewalk). One of them always wants to render everything out until it doesn't hear any more tails. That's fine, I suppose, one can always pull the render up in an audio editor and trim to heart's content, but still, I prefer having control over it.

3. Among the four DAW's, with a more consistent dataset, I'm seeing more consistent analyses as far as LUFS, but one of them is poking (way) out as far as LU (loudness range), and another one is poking out in regard to Maximum True Peak.

4. There's always something new to learn, even things that are right in front of me. For instance, whilst puzzling all this out, I've been setting the MIDI velocity of the triggering note to 100. Of the different DAW's, Cakewalk is the only one that (as far as I can figure out) can do separate MIDI and synth tracks, which is the way I use virtual instruments. 
But Cakewalk also has a "volume" control on its MIDI strips, and I'm not even sure what it does. By default, it's set to 101. Is that "unity" in the MIDI volume world? Do all softsynths respond to that volume setting in the same way? Does that setting even do anything? -
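For what it's worth, if that control is transmitting MIDI Volume (CC7), which is an assumption I haven't verified, there's no true "unity" value for it. The GM/DLS specs recommend mapping CC7 to gain as dB = 40 * log10(value/127), so 127 is full level and everything below it attenuates; whether a given softsynth honors that curve (or the message at all) is entirely up to the synth. A sketch of the recommended curve:

```python
import math

def cc7_to_db(value: int) -> float:
    """Recommended GM/DLS response for MIDI Volume (CC7): dB = 40*log10(v/127)."""
    return 40 * math.log10(value / 127)

print(round(cc7_to_db(127), 2))  # 0.0 (full level; the closest thing to "unity")
print(round(cc7_to_db(101), 2))  # about -3.98 dB
print(round(cc7_to_db(64), 2))   # about -11.9 dB, not "half volume"
```

On that curve, a default of 101 would sit roughly 4 dB below full level, which is another reason it's worth zeroing out (or at least equalizing) these controls across DAW's before comparing renders.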
4 DAW's, 4 renders, 4 results
Starship Krupa replied to Starship Krupa's topic in Cakewalk by BandLab
It's an absolutely objectively proven fact that whenever someone posts this link to a forum, Joseph Fourier turns over in his grave. Even just the times I've done it would be enough for the other residents of the cemetery to call him "Whirlin' Joe." Anyway, I'm not trying to prove that all DAW's do or don't sound the same bla bla bla. I was hoping that by saying so and by using deliberately imperfect, subjective testing methods that I would avoid that sort of thing, but whatever. -
4 DAW's, 4 renders, 4 results
Starship Krupa replied to Starship Krupa's topic in Cakewalk by BandLab
How many people countered your "useless" results by setting up their own tests and then showing their results? Wait, let me guess: absolutely nobody. Oh heck yeah. If one of the bits of software I'm playing with generates rendered files that are even just higher in level, that will be interesting to find out. -
4 DAW's, 4 renders, 4 results
Starship Krupa replied to Starship Krupa's topic in Cakewalk by BandLab
So are the people at the companies who actually make the software lying? Related question: is there something about Fourier, Nyquist, and Shannon's theorems that prevents programmers (and/or their programming tools) from making mistakes when creating software that uses them? What "error"? Where did I say the sound source was "immutable"? You must have missed the part where I said this: I think my only "error" was in overestimating folks' reading comprehension.

What do you think I'm trying to accomplish by doing this? I don't have some theory or other that I'm trying to prove or disprove. I just thought of a repeatable set of steps that I suspected might yield interesting results. The idea of this is to mess about in a deliberately non-rigorous, non-scientific way and see what happens. Then maybe I can draw a conclusion or two based on the result. Maybe not. Certainly not a scientific conclusion, but maybe a practical one. I'm always fascinated by posts where people are finding that (for instance) their rendered files don't sound like their music does during mixing. I like to try things and observe the results. This can be puzzling to people who prefer to already know what the results will be, but I find it fun and often even educational. Here, the objective is to see what happens if I take 4 DAW's, use the same soft synths playing the same arp patches on them, with the same mixer settings, and render out the files. The operative words being "see what happens." I know the renders are going to come out different. I'm wondering how different and in what way(s). -
4 DAW's, 4 renders, 4 results
Starship Krupa replied to Starship Krupa's topic in Cakewalk by BandLab
Indeed, and that sort of thing is part of the "black box" nature of my experiment. The idea is to submit these DAW's to as close to the same stimulus as possible and see (and hear) what happens. The results may suggest that I should be mindful of just how differently DAW's handle silences. I was surprised at the differences in just the length of the rendered files, given that I had selected 10 bars at 120 BPM. 10 bars is 40 beats, right? -
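Sanity-checking my own arithmetic here (assuming 4/4 time, which is what these projects are in), the selection should come out to exactly 20 seconds, so any spread in the rendered lengths is entirely down to how each DAW pads or trims the selected silence:

```python
# A minimal sketch, assuming 4/4 time: bars -> beats -> seconds at 120 BPM.
bars, beats_per_bar, bpm = 10, 4, 120

beats = bars * beats_per_bar   # 10 bars * 4 beats/bar = 40 beats
seconds = beats * 60 / bpm     # 40 beats / (120 beats/min) = 20.0 s

print(beats, seconds)  # 40 20.0
```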
4 DAW's, 4 renders, 4 results
Starship Krupa replied to Starship Krupa's topic in Cakewalk by BandLab
I'm taking a closer look at whether the arp patches I'm using actually stay the same over 8 bars. Initially, I only listened to 1 bar to get an idea. This stuff is why I haven't posted links to the actual renders yet. I know my "methodology" isn't ideal. This is all just an experiment to see what happens. I'm not setting out to prove any theory I have, I really just want to see what happens.

Thank you for the extensive explanation. It'll take me some time to digest it, but this part I get right away. I found out that jitter was a reason the sound of prosumer-level audio interfaces has improved so much. My first setup was a pair of PreSonus Firepods (I got a deal on them and thought I could make use of 16 channels). I hand-waved them as a product that had been well-reviewed when they were introduced, and used them for years. Then I got a PreSonus Studio 2|4 to use with my laptop. I plugged it into my main DAW to give it a listen and was floored by how much better it sounded. And when I say "better," it's in terms that some people disparage. The difference was in transient detail and the controversial "soundstage." If you follow the link in my sig about how I modded a crappy-sounding Alesis RA-100, you'll see an attack by an "if I can't see it on my test equipment, it doesn't exist" type.

So I started researching how it could be that this budget interface could sound so much better than my Firepods, and found out that the Firepod was designed and manufactured in the early '00s, just before JetPLL, a low-cost technology for dropping jitter levels by orders of magnitude, had been introduced to prosumer DAC's. When the Firepod's successor, the FireStudio, came out in 2007, it featured a DAC that used JetPLL. All subsequent interfaces from them use this technology, as do those from many other interface makers, such as Focusrite. 
I immediately went shopping for a newer interface and got a great deal on a new-in-box Focusrite Saffire Pro 40 (I still prefer FireWire; I'll switch when Thunderbolt interfaces become available in my price range). The Firepods have been set aside to be sold on Craigslist. -
4 DAW's, 4 renders, 4 results
Starship Krupa replied to Starship Krupa's topic in Cakewalk by BandLab
Definitely a consideration. I chose arp patches that produced the same notes when they are triggered. I may have got that wrong, though. It's all a big experiment. -
4 DAW's, 4 renders, 4 results
Starship Krupa replied to Starship Krupa's topic in Cakewalk by BandLab
Can you elaborate on this? The programmers' work is based on his theories, but does it necessarily follow that everyone will implement them in the same way? I ask sincerely, not seeking debate but education. What DAW's do goes beyond just recording the stream coming from the audio interface. There's hosting of VSTi's, FX plug-ins, mixing those streams, panning them, all sorts of things. Did Fourier's work account for all of that? I ask because I don't know. In the past decade, multiple DAW manufacturers have advertised that they improved the sound of their audio engines. What do you make of that? -
4 DAW's, 4 renders, 4 results
Starship Krupa replied to Starship Krupa's topic in Cakewalk by BandLab
Can you elaborate? I only know enough to create practical tests. Joseph Fourier passed away in 1830. What did he come up with that ensured that in the future, multiple teams of programmers would all independently design and implement algorithms that mix streams of digital stereo audio together so as to produce exactly the same results? The companies that have claimed in their marketing literature to have improved the sound of their DAWs' audio engine (as MAGIX, Acoustica, and Ableton have all done in the past decade), were they lying? Please educate me. I'd love to learn more. -
I just did an experiment using 4 different DAW's. On each of them, I created 2 MIDI/Instrument tracks using the same 2 VSTi's, the same 2 complex arpeggio patches, and the same note (G2): one bar of silence, 8 bars of the note (velocity 100), followed by a bar of silence. Channel faders were pulled down to -2dB, pan set to center. No FX. 120 BPM. In each one I made a selection of the first 10 bars (the 8-bar clips with a leading and trailing bar), then rendered each project to FLAC.

The idea was to see what results would come from performing a set of steps, not necessarily to do something "scientific." If I were after objective, measurable results, I'd perhaps use test signals, impulses, whatever, as the source material and measure the results with the best test equipment I could get. But I don't record test signals and I don't listen using measurement equipment. I will disclose that I am not one of those who believe that the mixing and panning and summing engines of all the different DAW's on the market generate results that sound exactly the same given the same source material. That would require too wide a variety of people coming up with the same solutions to too wide a set of problems for me to believe it even possible. As a consumer and user of audio recording and mixing programs, the question interests me.

The point was primarily to create audio files that I could listen to critically, but I did run some analyses using Sound Forge's "Generate Statistics" function, and the results were interesting. For instance, according to Sound Forge, the integrated LUFS ranged from 17.12 to 20.02, and maximum true peak ranged from -0.76 to -3.90dB. The length of the rendered audio files ranged from 18 seconds to 25 seconds, due to the way each DAW handles silent lead-ins and lead-outs that are selected. I'll be subjecting the resulting files to close listening tests on a variety of speakers and headphones. 
The arps I used have a lot of good transient material and it will be interesting to see what the differences in imaging are.
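To put those Sound Forge numbers in perspective (loudness units are decibel-like, so ordinary dB math applies; the helper below is just my own back-of-the-envelope, not anything from Sound Forge): the roughly 2.9 LU spread between the quietest and loudest render corresponds to about a 1.4x amplitude ratio, which is a plainly audible level difference all by itself, and something to account for before blaming any "engine" for sounding different:

```python
def db_to_amplitude_ratio(db: float) -> float:
    """Convert a level difference in dB (or LU) to a linear amplitude ratio."""
    return 10 ** (db / 20)

spread_lu = 20.02 - 17.12  # integrated-loudness spread between the renders
print(round(spread_lu, 2))                         # 2.9
print(round(db_to_amplitude_ratio(spread_lu), 2))  # 1.4
```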