
bitflipper
Members
  • Posts: 3,346
  • Joined
  • Last visited
  • Days Won: 21

Everything posted by bitflipper

  1. I'd caution against generalizing based on your observations. All you've determined is that your computer can handle the minuscule overhead of upsampling a specific compressor, in your specific project with your specific plugins. This is far more about the plugin's efficiency than that of Cakewalk, since nearly all the extra processing that upsampling incurs happens within the plugin. It'll be running exactly the same instructions under Cakewalk as it would under Reaper or Studio One. Yes, being able to upsample individual plugins is indeed a great feature worthy of bragging about in the marketplace, but I doubt any knowledgeable DAW user is going to jump ship based on that feature alone. This discussion is meaningless without talking about aliasing. Aliasing is not only relevant, in the context of compression it's literally the only reason oversampling is ever prescribed. If you have no audible aliasing, you don't need oversampling to begin with. If you don't need 2x you likewise don't need 4x or 6x.
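     (Editor's aside: the folding described above is easy to demonstrate numerically. This is a standalone NumPy sketch, not anything from Cakewalk or a real plugin: hard-clipping a 5 kHz sine sampled at 48 kHz generates odd harmonics at 15, 25, 35 kHz..., and everything above the 24 kHz Nyquist limit folds back into the audio band, e.g. 25 kHz lands at 23 kHz.)

```python
import numpy as np

fs, f0, n = 48_000, 5_000, 4_800   # 500 exact cycles -> clean FFT bins
t = np.arange(n) / fs
tone = np.sin(2 * np.pi * f0 * t)

# A hard clipper generates odd harmonics: 15 kHz, 25 kHz, 35 kHz, ...
# Everything above Nyquist (24 kHz) folds back down as an alias:
# 25 kHz lands at 23 kHz, 35 kHz at 13 kHz, and so on.
clipped = np.clip(3.0 * tone, -1.0, 1.0)

spectrum = np.abs(np.fft.rfft(clipped))
freqs = np.fft.rfftfreq(n, 1 / fs)
fund = spectrum[np.argmin(np.abs(freqs - f0))]
alias = spectrum[np.argmin(np.abs(freqs - 23_000))]
alias_db = 20 * np.log10(alias / fund)
print(f"alias at 23 kHz: {alias_db:.1f} dB below the fundamental")
```

     With this much drive the alias sits only a dozen or so dB below the fundamental, i.e. plainly audible; a gentle compressor's nonlinearity would produce far weaker harmonics, which is why aliasing from compression is usually inaudible in the first place.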
  2. As I thought about this, a few dormant brain cells must have woken up because I suddenly recalled a vague memory of having the same problem long ago. Google brought up this thread (by me) from 2010: http://forum.cakewalk.com/Somewhat-Solved-Superior-Drummer-3db-louder-after-freezing-m2028203.aspx Although we were talking about SD2 at the time, the symptoms are identical to what you're experiencing. TL;DR the workaround I came up with back then: freeze the track normally but then set the track interleave to stereo.
  3. You should be able to select one side of the stereo output. Do you not see the same selections shown in my screenshot above?
  4. Well, I suppose there could be a problem within SD3 that I've just never noticed. Even if that's the case, you can still force mono within SD3 using the pan sliders. Note that SD3 gives you separate left and right sliders for every output channel. Put them both into the center and you've got mono out regardless of whether you use 1, 2 or 1 + 2.
  5. Makes sense that you'd instinctively protect your dominant hand with the other. Assuming a right-hander...when protecting your face, is it not your left arm you throw up? When taking a fighter's stance, is it not your left side you present to your opponent? Do you not flip off rude drivers with your left hand? Instinct designates it as the sacrificial limb. Of course, on very cold nights I have a very different strategy for hand placement.
  6. Make sure the audio track you're sending it to is set for mono interleave, then look at the frozen track and make sure it's still mono. If not, it may be because you've got a stereo plugin in its fx bin. Once you have it all set up and working properly, save it as a track template so next time you won't have to mess with it at all.
  7. Don't know if this will help, but here's how I normally set up SD3. Output channels are designated Kick, Snare, Toms, etc. Each of these goes to its own audio track. In the screenshot below, the kick is taken from the Left channel and routed to a mono audio track. Overheads and Room remain stereo.
  8. It won't matter which one you install first. If Essential is already installed along with Cakewalk, the Celemony installer will simply overwrite it with the Studio version. I prefer installing the DAW first and testing before any third-party add-ons are installed, so I've got a confident baseline to start with. (btw, I'd encourage you to consider investing in the Melodyne 5 upgrade. I was initially reluctant to do so, since version 4 already did everything I needed (so did versions 2 and 3, for that matter). But I am loving the new global leveling feature. It's a real timesaver if you have a lot of vocal tracks and/or multi-part harmonies.)
  9. Nothing's broken, it's just arithmetic. Combining any two tracks, which is what happens when you convert stereo to mono, requires them to be summed. IOW, the left and right channels get added together, sample-by-sample. Whenever they are in phase, even briefly, you'll get a sum that's larger than either channel. 2 + 2 = 4, and there's nothing we can do about that. Here's the easy solution. Most samplers, including SD3, let you choose between stereo and mono outputs. If you use a mono out for the kick, and route it to a mono track in Cakewalk, you've now got mono from start to finish and there will be no increase in peak levels when you freeze. Anything you want to have in stereo, such as overheads, goes to a stereo output to a stereo audio track. There will still be no rise in peaks as long as you're freezing a stereo track as stereo.
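     (Editor's aside: the summing arithmetic in the post above, sketched with NumPy on a synthetic signal. The uncompensated sum shown here is what the post describes; real DAWs often apply pan-law attenuation on top of it.)

```python
import numpy as np

fs = 48_000
t = np.arange(fs) / fs
# Fully in-phase left and right channels, each peaking at 0.8
left = 0.8 * np.sin(2 * np.pi * 440 * t)
right = left.copy()

# Stereo-to-mono conversion sums the channels sample-by-sample,
# so in-phase material doubles: 0.8 + 0.8 = 1.6 (a +6 dB peak rise)
summed = left + right
print(round(float(summed.max()), 2))   # -> 1.6
```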
  10. I'm in a six-piece band. Pay's about the same as for a duo. In 1975. Thinking of a revised distribution scheme...
      16.7% base pay
      less 1% for every song you screw up
      less 5% if you don't help set up the PA
      less 5% if you only pack out your own gear and then leave
      less 5% if we can't find you after break's over because you're chatting up the barmaid
  11. Well, that makes sense. Where else are you going to spend all those BJZ/FM/CHB royalties? btw, my favorite setup for final critical listening is a pair of HD650's, but they did not reveal their true potential until I bought a headphone amp for them. Hi-Z, ya know. They're a nice complement to my main speakers, so any time I'm happy with how both sound it raises confidence in the mix.
  12. EQ generally last because it's the one you're more likely to continue tweaking as the mix progresses. EQ changes levels, which assures you'll have to go back and readjust the comp threshold. Putting EQ last means you're less likely to screw up something else down the chain. This has long been the standard for hardware consoles that feature integrated channel compression, including my current Yamaha stage mixer. Of course, such compressors are primitive and don't offer anywhere near the flexibility of software compressors - such as sidechain filters. The exception to the EQ-last rule is if the compressor doesn't have a HPF on the sidechain input. In that case, EQ should precede compression, especially if you're cleaning up mud by rolling off lows on guitars and vocals.
  13. You have to explicitly tell Windows to not put USB ports to sleep. Open Device Manager and scroll down to the USB devices. There may be a bunch of them, but if you don't know which one connects your audio interface it's OK to do this to all of them. Un-check the "allow computer to turn off" option. (It's not unchecked in the screenshot below because I don't use a USB interface and therefore don't need to worry about it.)
  14. Try this: select a track, then choose Process -> Apply Effect -> Remove DC Offset, see if that changes the waveform. Or...apply a steep high-pass filter with a low cutoff frequency (< 50Hz) and see if that changes the waveform.
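     (Editor's aside: a minimal NumPy sketch of what "Remove DC Offset" amounts to — subtracting the signal's mean. The 0.3 offset and 440 Hz tone are made up for illustration; Cakewalk's actual implementation may differ in detail.)

```python
import numpy as np

fs = 48_000
t = np.arange(fs) / fs
# A 440 Hz tone riding on a constant 0.3 DC offset
signal = 0.5 * np.sin(2 * np.pi * 440 * t) + 0.3

# Removing DC offset amounts to subtracting the mean, which
# recenters the waveform around zero
centered = signal - signal.mean()
print(round(float(signal.mean()), 3), round(abs(float(centered.mean())), 6))
```

     A steep high-pass filter with a very low cutoff achieves the same thing on material whose offset drifts over time, which is why the post offers it as an alternative.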
  15. Aren't we all? Don't sweat console emulation. It's an attempt to replicate the flaws of old gear, and its primary function is to introduce a small amount of harmonic distortion. That's all. As noted by Erik above, the flaws in classic gear were unintentional. Those expensive boards were designed to deliver the highest fidelity available with the electronics of the day. Nowadays, we can easily achieve more linear frequency response, greater dynamic range, lower noise, finer control and greater consistency using inexpensive gear. As an old timer who battled analog signal chains and magnetic tape for decades, I fully embrace the crystal clarity of digital audio and feel no nostalgia for days gone by. So why does anybody bother with console emulation and tape sims? Because the flaws inherent in those things made mixing easier. A little harmonic distortion adds texture. Even white noise helps glue a mix together. Tape saturation does a lot of the blending for you. Personally, I do not use any of that stuff. Embrace the purity of digital, I say. If your mix sounds thin and disjointed, keep working on the mix.
  16. Thanks, guys. I appreciate you giving it a listen. Looks like it's gonna be another weekend for musical doodling in the garage, as our guitarist has come down with COVID, forcing us to cancel tomorrow's gig. Unfortunately, I got the news AFTER packing up all my gear and loading it into the van. I think it's just going to stay in there for the week. Kills my gas mileage dragging all that stuff around, but moving it kills my back. And next week's gig might not happen, either.
  17. Odysee.com is another up-and-coming YT alternative. I go there to listen to classical music. YT is a victim of its own success. The sheer quantity of uploads makes it impossible to actually review even a tiny percentage of submissions, leaving that chore to automated processes and user complaints. The problem with the latter is that anybody can take down a video, even if just to troll (example: the lady who ran her harp through a distortion stompbox and had it removed). Robot copyright trolls have had people's own original music taken down for copyright infringement. What I'm suggesting is that no human actually made the decision to block your video. The "team" that made that decision was a bot. At least YT acknowledges the unreliability of such automation and 99% of the time will reinstate a video on appeal. You shouldn't have to do that, of course. But remember, it's a free service that lets you reach millions of people - in years gone by you'd have had to bribe disc jockeys to do that.
  18. Reminded me of a line by Rich Hall... If you can play guitar and harmonica at the same time like Bob Dylan or Neil Young, you're a genius. Add a pair of cymbals between your knees and people will cross the street to avoid you.
  19. It's not going to be easy to verify that upsampling is working. The effect is normally going to be quite subtle (assuming it does anything at all) unless you have some scenario where obvious aliasing can be heard (or seen with a spectrum analyzer). And that's actually a rare condition, difficult to make happen even on purpose. I've never had a synth or effect that caused audible aliasing, and if I ever did it would get retired immediately. In terms of CPU usage, I doubt you'd see a discernible change, unless maybe you upsampled every plugin. Even then, CPU usage will normally bounce around more than that just due to Windows background processes. By contrast, bumping up your overall sample rate is far more impactful, as it affects everything, including filling output buffers for monitoring and input buffers for recording as well as plugin processing. Under most circumstances, upsampling individual plugins is not going to have a noticeable effect on overall latency. Of course, all this is moot if you can't hear (or even measure) the difference, in which case you didn't need upsampling to begin with.
  20. It's telling you that Windows can't see the interface. Try switching to WASAPI if you're currently using ASIO. Go to Preferences -> Playback and Recording and select WASAPI from the dropdown list at top.
  21. Fear not, Cactus, North America still dominates in stomp boxes, guitar strings and toilet seats.
  22. Shh, as a guitarist you're not even supposed to know such things exist. But no, even though I love Indiginus' slide guitars what you hear in this one are the Glorious Steel patch from Omnisphere and Indiginus Renegade Acoustic. Orchestral elements are Spitfire ONE Legendary Low Strings and Amadeus Symphonic Orchestra.
  23. My favorite brand is June's Blood Orange. It's a local product, and always on back order even here, so you may not be able to find it. It's a 1:1 CBD/THC. Very mellow. This is what combining June's chewie nuggets with a handful of newly-acquired virtual instruments produced. The project was just a platform for experimenting with some recently-acquired plugins: Boz's handclap, finger snaps and foot stomps, Spitfire's solo cello, Skaka and a $4 choir library called Singers2. A friend listened to it and was annoyed by its lack of thematic consistency. That's the downside of composing high: short attention span. https://soundclick.com/share.cfm?id=14244593
  24. Yeh, that's a pretty broad question, GXXX. Might help to narrow it down with a little more information, like if the audio interface is external or internal, what driver you're using, internal or external speakers, passive or active? Do you get sound from other sources, e.g. WMP or YouTube?
  25. Matching any two reverbs is going to be a challenge, because they'll have different controls and different implementations for each parameter. Even something as standard as reverb time won't necessarily mean the same thing, e.g. one reverb defines it as the time to decay to -70 dB and another to -60 dB. Most parameters won't have precise units that you can manually enter, e.g. modulation depth or the LFO frequency for modulation rate. One might have a 6dB/octave HPF and the other a 12dB/octave filter. Faced with your challenge, I would go project-by-project, toggling between the old and new reverbs and listening. It's about all you can do. I can't imagine any shortcuts to make it any easier. On second thought, if I was faced with your challenge, I'd be inclined to just leave the old reverb in place if it sounds good. As long as the plugin still works, there really isn't any great benefit to replacing a 32-bit plugin with a 64-bit plugin.
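     (Editor's aside: the reverb-time mismatch mentioned above is at least correctable on paper. Assuming an idealized exponential decay, the decay rate in dB/s is constant, so a "-70 dB" time converts to the conventional RT60 by a simple ratio. Real reverb tails aren't perfectly exponential, so treat this as a starting point, not a match.)

```python
def to_rt60(t_to_minus70_db):
    """Convert a time-to--70 dB spec to the usual time-to--60 dB (RT60),
    assuming exponential (constant dB/s) decay."""
    return t_to_minus70_db * 60.0 / 70.0

print(round(to_rt60(2.1), 2))   # a 2.1 s (-70 dB) tail -> 1.8 s RT60
```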