
Craig Anderton

Members
  • Posts

    872
  • Joined

  • Last visited

  • Days Won

    7

Posts posted by Craig Anderton

  1. On 9/1/2021 at 9:43 PM, Ron Caird said:

    Thanks.  I had not read that section and it has the best explanation of what is going on that I have read.  Interesting that the Reference Guide recommends placing the Console Emulator first in the chain while Craig A. recommends that it be last. (Thanks for the link to that video, Jackson White.)  I'm curious as to the reasoning behind each approach.

    I really think this article will be helpful to those following this thread. It describes how the Console Emulator works, and includes screenshots of a sine wave processed through the different console emulators. There's also a workflow tip about using them on page 77 of The Huge Book of Cakewalk by BandLab Tips.

    As to "before or after," it depends. Back in the day, due to track and hardware limitations, it was common to print effects to tape, and/or patch effects between the tape outs and mixer inputs. In either case, the console emulator would go after effects. If the effects were added to the mixer using insert jacks, then the effects bypassed the audio input transformers, and possibly some redundant input stages. In that case the audio was less affected by the console's signal path anyway. I really can't think of any use cases where the console went into an effects chain, with the exception of master bus effects (like limiting) prior to mixing down to a two-track. To emulate that, insert the console emulator into your master bus, before any effects you might be using. 

    Remember too that regardless of the design philosophy of wanting to make a transparent-sounding console, there is no such thing as a straight wire with gain, especially when input or output transformers are involved.

    Hope this helps...

    • Like 3
    • Thanks 1
  2. On 8/26/2021 at 10:02 PM, scook said:

    One way is to add a bus after the master bus. 

    As long as all audio resolves to the master bus and the master bus is routed to another bus (I call it "To Mains"), anything done to this bus is heard in the monitors (or wherever this bus is routed).

    When exporting always select the master bus. This bypasses any buses added after the master bus.

    I remember when you first mentioned this, it made life much easier with Sonarworks.  With products like Slate's VSX and Waves CLA NX, this kind of listen function is becoming more important.

    • Like 1
  3. It should always work if you use the native Windows drivers. It can work with some interfaces that are set to ASIO with Cakewalk, but can also play back via Windows drivers if specified in the (ever-changing and confusing) "sounds" control panel. If you describe what interface you're using, that could help come up with a solution.

    Frankly - and remember, I use Windows for all my music and video work - this is an area where the Mac's Core Audio is way ahead. But Windows is catching up. Hopefully before too terribly long, the Windows native drivers will equal what ASIO can do.

  4. On 8/19/2021 at 6:27 PM, javahut said:

    I think there is a benefit to it when it comes to effects processing, virtual instruments, and the actual process of mixing multiple tracks together, in general, in some cases. When I've tried mixing 96kHz tracks vs original 44.1k files, the entire mix seems to come together much easier. I personally think the mix process and blend of instruments sounds much smoother and less harsh.

    That harshness may be the result of the virtual instruments and effects processing not being oversampled. The simplest solution might be to leave what's 44.1 as 44.1, and upsample virtual instruments and effects, which is something Cakewalk does very well. This gives the performance advantage of higher sample rates in lower-sample-rate projects, because it's the rendering process at the higher sample rate that gives the cleaner sound. Once the rendered audio is in the audible range, the sound quality is preserved, even when downsampling back to 44.1 kHz.
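    To see why rendering at a higher sample rate helps, here's a quick arithmetic sketch. The frequencies are hypothetical, chosen for illustration:

```python
# Harmonics created by a nonlinearity (distortion, saturation) that land
# above Nyquist fold back ("alias") into the audio band.
fs = 44100          # project sample rate
nyquist = fs / 2    # 22050 Hz
f0 = 10000          # a 10 kHz tone hitting a saturator
h3 = 3 * f0         # odd harmonic at 30 kHz, above Nyquist

alias = fs - h3 if h3 > nyquist else h3
print(alias)  # 14100 -> an inharmonic 14.1 kHz tone lands in the mix

# At 4x oversampling, the same harmonic sits below the higher Nyquist
# frequency, so it can be filtered out before downsampling to 44.1 kHz.
fs_os = 4 * fs
print(h3 < fs_os / 2)  # True: 30 kHz is represented cleanly at 176.4 kHz
```

    The aliased component is inharmonic (14.1 kHz is unrelated to the 10 kHz input), which is one plausible source of the "harshness" described above.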

  5. There are several ways to do this. 

    8 hours ago, Michael Zagas said:

    I moved from reaper to cakewalk as I prefer the workflow.

    In Reaper there was 'Trim volume' which I used for adjusting volume of parts after doing volume automation.

    I assume your issue is that the automation moves are correct, but you want to raise the level of all the automation up or down. If that's the case, Cakewalk has several options.

    Offset mode is my favorite. In Offset mode, each fader essentially becomes a ‘master’ that offsets the automation (note that Offset mode works for all automation, not just volume). However, as soon as you’ve entered the appropriate offset amount, I recommend that you immediately exit Offset mode and return to standard automation. Offset mode should be something you get into, make your level tweak, and then exit quickly. Otherwise you may accidentally do offset moves instead of level moves.

    However, if you want to offset the actual automation envelope as it appears in track view (not just add a virtual offset in Offset mode), there’s a way to do that too:

    1. Select the Smart Tool and the track with the automation you want to edit.

    2. Set the track’s Edit Filter to Automation, then choose the automated parameter you want to offset (we’ll assume for now that automation lanes are hidden and you're working on the clip itself).

    3. Drag the Smart Tool across the section of automation you want to offset in the track itself (or, because the track should still be selected, you can also drag across the timeline to select the automation).

    4. Hover the Smart Tool over a clip handle, or over an empty space in the track on the same horizontal plane as the clip handle. The cursor turns into a line with up and down arrows, called the Trim Cursor.

    5. Click and drag up to offset the automation upward, or drag down to offset the automation downward. Note that when you release the mouse, the automation is deselected to make sure you don’t accidentally vary it any further, so if you want to do more editing you’ll need to re-select the automation.

    You can also offset multiple envelopes by the same amount: select any of the existing track automation in the Edit Filter, then unfold any automation lanes you want to offset and follow the same procedure as above. With multiple envelopes, you’ll probably find that dragging in the timeline will be the fastest way to select a region of automation. Note that if automation exists in a lane that is not unfolded, it won’t be edited.

    You can also edit automation in individual automation lanes. Select the automation in only that lane, hover the cursor just below the top of the lane until it appears as the offset cursor, then drag up or down as described previously.

    • Like 2
    • Thanks 1
  6. 3 hours ago, Noel Borthwick said:

    The effort that has gone in over the last 3 years has been more than ever in the history of Cakewalk, to focus on improving stability as well as building truly usable and useful features.

    As far as users cutting us slack because it's free, while some more casual users might, we have thousands of professionals who make a living with this software, where the fact that it's free makes no difference - they need to efficiently get their work done.

    I can certainly vouch for the first sentence. Noel & Co. have really moved Cakewalk up another notch (or two) since BandLab took over. 

    As to the second sentence, if you're a professional, you don't cut slack and you don't judge based on price. You judge something on whether it does the job you're paid to do. Whether it costs $0 or $2,000 doesn't matter, because the cost will (at least hopefully!) be a fraction of the income it allows you to earn.

    At my various workshops (even if I'm doing my demos with a different program), I always recommend that Windows users download Cakewalk, no matter what program they use. I wouldn't be able to make that recommendation if it weren't free. There are things you can do in Cakewalk (like creating Acidized files or extracting tempo) that are at least difficult (if not impossible) to do in other programs. 

    IMHO the "best" DAW is the one where you feel comfortable and inspired working with it.

    • Like 4
    • Great Idea 1
  7. 21 hours ago, msmcleod said:

    Maybe you're running a 32 bit version of Reason?  AFAIK for Rewire to work, both apps need to be the same bitwise. 

    Also, the ReWire .dll itself needs to match, bit-wise. You can't use a 32-bit ReWire .dll even if Reason and your host are 64-bit.

    Apple Silicon kind of sealed the fate for ReWire because there's no ReWire library for it, and there probably never will be. Such a shame. I use ReWire a lot.

  8. 19 hours ago, Marcello said:

    I compared the volume of this FLAC song with the one on their Spotify, it's exactly the same bloody volume, which is quite high (-6 LUFS integrated), so someone is taking a ***** me here, unless SPotify allows you to pay extra to have -6 LUFS instead of -14

    Spotify has an option to remove its loudness normalization process. I'm not sure, but I think it's available only to subscribers, and I don't know whether it's on by default.

  9. 18 hours ago, Marcello said:

    Ok thanks for the suggestions.

    So basically I think I have to work on the mix rather than the master, trying to use some tools to increase the volume of the drums that are causing the peaks without actually increasing the dBs of the track.

    I have used a limiter on my drums bus, but still, is there any other plugin I can use for this purpose? Like a saturator maybe?

    A saturator will make the drums seem subjectively louder, but will also reduce peaks. I use saturation quite a bit on bass, and sometimes on drums.
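    To illustrate how saturation makes material seem louder while taming peaks, here's a minimal soft-clip sketch. It's a plain tanh curve, not modeled on any particular plugin:

```python
import math

def saturate(x, drive=2.0):
    """Normalized tanh soft clipper: a full-scale peak stays at 1.0,
    but lower-level material is pushed up toward it."""
    return math.tanh(drive * x) / math.tanh(drive)

peak = saturate(1.0)   # stays at 1.0 -> peaks don't grow with drive
body = saturate(0.5)   # comes out above 0.5 -> the body of the sound comes up
print(peak, body)
```

    Because the quieter portions of the waveform are raised while the peaks are held in place, the average (RMS) level rises relative to the peak level, which is why saturated drums read as subjectively louder.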

  10. 2 hours ago, Marcello said:

     What would you suggest?

    Figuring out how to violate the laws of physics would be a good start 😀

    I already gave you suggestions on how to make the softer parts seem louder so that you can retain the dynamics you want. The other option is to have reduced dynamics. You can't retain dynamics while applying a process designed specifically to reduce dynamics.  

    • Great Idea 1
  11. On 7/21/2021 at 10:38 AM, Marcello said:

    should I make a different master version for each bloody online platform? Seriously? Since I'm paying for each master.

    I didn't see this addressed specifically in the thread, but when a streaming service says its target is -14 LUFS, that doesn't mean your master has to measure -14 LUFS. It can be whatever you want, and the streaming service will turn it down to meet their target LUFS level. Sometimes you want to master "hot" to get a certain character, and that character will be preserved when the song is turned down.

    One of the main reasons to meet a streaming service's specs is that they often transcode to compressed audio. Meeting the specs will usually ensure the least amount of distortion and other artifacts when the music is compressed. However, for the best transcoding performance, what's more important than meeting the LUFS spec is meeting the True Peak spec, which is typically -1 or -2 dB.

    For me, the best aspect of a streaming service's spec is that I can master something like an acoustic or jazz album to -14 LUFS, which allows a decent amount of dynamic range, and it won't sound super-soft compared to everything else. (BTW some streaming services will turn up music below -14 LUFS, but others don't. So when artistically possible, I make sure a master doesn't go below -14 LUFS.)
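    The arithmetic behind that turn-down is simple. This hypothetical helper just mirrors the behavior described above (real services measure integrated loudness per ITU-R BS.1770, not a single number you hand them):

```python
def normalization_gain_db(measured_lufs, target_lufs=-14.0, turn_up=False):
    """Gain (in dB) a streaming service would apply to hit its loudness target."""
    gain = target_lufs - measured_lufs
    if gain > 0 and not turn_up:
        return 0.0  # some services leave quieter-than-target masters alone
    return gain

print(normalization_gain_db(-6.0))                  # hot master: turned down 8 dB (-8.0)
print(normalization_gain_db(-16.0))                 # quiet master: 0.0 if the service won't turn up
print(normalization_gain_db(-16.0, turn_up=True))   # +2.0 on a service that does turn up
```

    Note the turn-down is a plain gain change, which is why a "hot" master's character survives normalization intact.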

    • Like 1
  12. 12 minutes ago, bdickens said:

    The lowest note on a bass is about 40Hz, Guitar about 80. (Standard tuning, of course). Anything below that can go. Vocals? Unless you're recording an operatic Bass, anything below about 100Hz can go. Often higher.

    Obviously, if your ears are telling you that things are thinning out, you'll want to move your cutoff back down but you'll be surprised at just how much low end you can cut without losing anything.

    The irony is that if you can cut the lows without losing anything, there probably wasn't anything down there to cut anyway. This has been a controversial subject, because most vinyl was cut with the lows rolled off in order to accommodate the medium, so people got used to hearing that sound. However, there are many instances where energy exists below an instrument's notes. Plosives on vocals are a good example, as are the wind blasts associated with "f" and other sounds. Often you want to reduce these, but you do not want to get rid of them entirely. 

    If you look at an instrument like guitar on a spectrum analyzer, you can see there's energy happening below the lowest notes. So then you have two issues:

    • Can you hear it?
    • Does it reduce headroom in your mix?

    Here's an experiment you can try. Do a mix with no high-pass filtering, and see how much you can turn up the master fader before the peaks hit 0. Then high-pass everything, and again see how much you can turn up the master fader before the peaks hit 0. Then you can answer those two questions based on data instead of conjecture. FWIW, I high-pass tracks rarely, and selectively. There definitely are cases where adding a high-pass filter tightens the sound, and other cases where high-passing below an instrument's notes takes something away.
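    That experiment can be sketched in code. This version substitutes a synthetic signal (a 30 Hz rumble plus a 1 kHz tone, both hypothetical) and a simple one-pole high-pass for a real mix:

```python
import math

fs = 44100
n = fs  # one second of audio
rumble = [0.6 * math.sin(2 * math.pi * 30 * t / fs) for t in range(n)]
tone = [0.5 * math.sin(2 * math.pi * 1000 * t / fs) for t in range(n)]
mix = [a + b for a, b in zip(rumble, tone)]

def highpass(x, fc, fs):
    """One-pole (6 dB/octave) high-pass filter."""
    rc = 1.0 / (2 * math.pi * fc)
    alpha = rc / (rc + 1.0 / fs)
    y = [x[0]]
    for i in range(1, len(x)):
        y.append(alpha * (y[-1] + x[i] - x[i - 1]))
    return y

filtered = highpass(mix, 80, fs)
peak_raw = max(abs(s) for s in mix)
peak_hp = max(abs(s) for s in filtered)

# extra master-fader gain (dB) the high-passed mix allows before clipping
headroom_gain_db = 20 * math.log10(peak_raw / peak_hp)
print(round(peak_raw, 2), round(peak_hp, 2), round(headroom_gain_db, 1))
```

    With this contrived signal the high-passed version clearly allows more gain before hitting 0; on a real mix, the difference you measure answers the "does it reduce headroom?" question directly.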

    • Like 1
  13. 4 hours ago, Marcello said:

    Do I really have to choose between having a loud enough master (same volume level as most of the songs) but having the snare sitting back in the mix, or having it slammed but the overall song volume being much less loud compared to the songs out there?

    As far as I'm concerned, dynamics are a good thing, not a problem to be solved :)  In any event, the explosive drum part will determine how loud your master can be, and as you've found, the only way to change that is to limit the explosive part, which you don't want to do because then the overall volume isn't loud enough compared to your other tracks.

    If you're planning to release with a streaming service, then if the other songs are above their target, they'll be turned down to have the same perceived volume as the song with the explosive drums. So it may not end up being an issue anyway.

    If not, then you have to resort to workarounds to make the sections with the non-explosive drums sound subjectively louder. One way to do this is to automate a slight EQ boost on the final mix, around 3.5 to 4 kHz (with a broad Q). Do this only in the parts that are softer, then "fade" it back to normal in the louder parts. The ear is more sensitive to this frequency range, so the music will seem louder. 
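    If you want to see what such a broad, gentle boost looks like in filter terms, here's a sketch using the standard RBJ "Audio EQ Cookbook" peaking-filter coefficients. The 2 dB boost at 3.8 kHz with a Q of 0.7 are just example values consistent with the advice above:

```python
import cmath, math

def peaking_eq(f0, gain_db, q, fs):
    """Biquad peaking-EQ coefficients (RBJ Audio EQ Cookbook)."""
    a_lin = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b = [1 + alpha * a_lin, -2 * math.cos(w0), 1 - alpha * a_lin]
    a = [1 + alpha / a_lin, -2 * math.cos(w0), 1 - alpha / a_lin]
    return b, a

def gain_at(f, b, a, fs):
    """Magnitude response of the biquad at frequency f, in dB."""
    z = cmath.exp(-2j * math.pi * f / fs)
    h = (b[0] + b[1] * z + b[2] * z * z) / (a[0] + a[1] * z + a[2] * z * z)
    return 20 * math.log10(abs(h))

b, a = peaking_eq(3800, 2.0, 0.7, 44100)
print(round(gain_at(3800, b, a, 44100), 2))  # 2.0 dB at the center frequency
```

    In a DAW you'd simply automate the gain of a wide bell filter at that frequency; the math just shows how mild the change is while still registering as "louder" to the ear.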

    You can also try using a transient shaper on the explosive drums to bring down the peaks slightly, without having to use limiting. This may allow raising the overall level by a few dB. 

    The real problem is that there will always be a tradeoff between dynamics and how "loud" you can make the music. In recent years, dynamics have been traded off for a louder perceived volume, because you can't have both.

    • Like 2
  14. On 3/9/2020 at 6:26 AM, Bill Phillips said:

    I wanted to look under the hood of the Acoustic Piezo FX Chain preset but couldn't extract the plugins. The extract plugins option (which works for other FX Chain presets) doesn't appear on the Acoustic Piezo FX Chain dropdown and the shift load option doesn't extract plugins. Any suggestions?

    Sorry for the delayed response! That was back when Cakewalk used a locking function for the FX Chains. I'm not sure if there's a way to extract it now, but I'll check next time I'm in the studio.

    Check for updates for your other programs, too. I like MODO Drum, but one day (perhaps coincidentally, perhaps not, after a Windows update) it decided to start producing different sounds when I recalled a song. If I dragged over a new instance and fed it with the MIDI tracks, it was fine until the next time I saved and re-opened.

    I went to the IK site and there indeed was an update. Downloaded, installed, all is well.

  16. 23 hours ago, John Vere said:

    Funny you say loyalty and support for the people who brought you Command Center. They are the ones who dropped off the map without warning. 

    I'm not sure I agree...Noel and many Cakewalkers, like Jesse Jost and Jon Sasor, are still involved. Only the company that owned it dropped off the map, into bankruptcy-land. 

    • Thanks 2
  17. I think pan laws are one of the main reasons why people conclude that DAWs sound different, because they exported the same file from two different DAWs, and they didn't null.

    I guess you could always just do LCR mixing and not have to think about it :)
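    For the curious, here's a sketch of an equal-power (-3 dB center) pan law. A DAW using a 0 dB center law instead would leave a center-panned signal about 3 dB hotter, which is exactly why the same project exported from two DAWs may not null:

```python
import math

def equal_power_pan(pos):
    """pos in [-1, 1]: -1 = hard left, 0 = center, +1 = hard right."""
    theta = (pos + 1) * math.pi / 4
    return math.cos(theta), math.sin(theta)

left, right = equal_power_pan(0.0)
center_db = 20 * math.log10(left)
print(round(left, 3), round(center_db, 2))  # 0.707 per channel, about -3 dB at center
# total power left**2 + right**2 stays constant across the whole pan sweep
```

    With LCR mixing everything sits at hard left, center, or hard right, so the intermediate pan curve (where the laws differ most) never comes into play.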

    • Like 1
    • Thanks 1
  18. I've been sandbagged a few times by media players for Windows that add "enhancements." Sometimes you need to dig deep into the sound settings, and go down the "properties" road until you find something like a check box for  "SuperDuper XYZ Surround 3D Sound" or whatever. The resulting processing can be truly horrific. Laptops seem particularly prone to this.  A lot of them boost bass, which may be why your file sounds muddy.
