
Craig Anderton
Members · Posts: 871 · Days Won: 7
Everything posted by Craig Anderton

  1. I really think this article will be helpful to those following this thread. It describes how the Console Emulator works, and includes screenshots of a sine wave processed through the different console emulators. There's also a workflow tip about using them on page 77 of The Huge Book of Cakewalk by BandLab Tips. As to "before or after," it depends. Back in the day, due to track and hardware limitations, it was common to print effects to tape, and/or patch effects between the tape outs and mixer inputs. In either case, the console emulator would go after the effects. If the effects were added to the mixer using insert jacks, they bypassed the audio input transformers, and possibly some redundant input stages, so the audio was less affected by the console's signal path anyway. I really can't think of any use case where the console went into an effects chain, with the exception of master bus effects (like limiting) prior to mixing down to two-track. To emulate that, insert the console emulator into your master bus, before any effects you might be using. Remember too that regardless of a design philosophy that aims for a transparent-sounding console, there is no such thing as a straight wire with gain, especially when input or output transformers are involved. Hope this helps...
  2. I remember when you first mentioned this, it made life much easier with Sonarworks. With products like Slate's VSX and Waves CLA NX, this kind of listen function is becoming more important.
  3. It should always work if you use the native Windows drivers. It can work with some interfaces that are set to ASIO with Cakewalk, but can also play back via Windows drivers if specified in the (ever-changing and confusing) "sounds" control panel. If you describe what interface you're using, that could help come up with a solution. Frankly - and remember, I use Windows for all my music and video work - this is an area where the Mac's Core Audio is way ahead. But Windows is catching up. Hopefully before too terribly long, the Windows native drivers will equal what ASIO can do.
  4. Might be time to check for driver, graphics card, and other system-oriented updates.
  5. That harshness may be the result of the virtual instruments and effects not being oversampled during processing. The simplest solution might be to leave what's 44.1 as 44.1, and upsample the virtual instruments and effects, which is something Cakewalk does very well. This gives the advantage of higher sample rates in lower-sample-rate projects, because it's rendering at the higher sample rate that gives the cleaner sound. Once the cleanly rendered result is in the audio band, the sound quality is preserved, even when downsampling back to 44.1 kHz.
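To make the oversampling point concrete, here's a rough sketch (assuming numpy and scipy, and standing in for any nonlinear instrument or effect with simple tanh saturation — this is not Cakewalk's actual upsampling code). Saturating a near-Nyquist tone directly at 44.1 kHz folds its harmonics back into the audible band; rendering the same nonlinearity at 4x and then downsampling keeps them out:

```python
import numpy as np
from scipy.signal import resample_poly

def saturate(x):
    # tanh soft clipping stands in for any nonlinear instrument or effect
    return np.tanh(3.0 * x)

sr = 44100
t = np.arange(sr) / sr
tone = 0.9 * np.sin(2 * np.pi * 15000 * t)   # 15 kHz sine, close to Nyquist

# Saturating directly at 44.1 kHz: the 3rd harmonic (45 kHz) can't exist
# at this rate, so it folds back down to |45000 - 44100| = 900 Hz.
direct = saturate(tone)

# Saturating at 4x (176.4 kHz): the 45 kHz harmonic is representable, and
# the downsampling filter removes it before returning to 44.1 kHz.
oversampled = resample_poly(saturate(resample_poly(tone, 4, 1)), 1, 4)

def magnitude_at(x, freq_hz):
    # FFT magnitude at a given frequency (1 Hz bins for a 1-second signal)
    return np.abs(np.fft.rfft(x * np.hanning(len(x))))[freq_hz]

alias_direct = magnitude_at(direct, 900)
alias_oversampled = magnitude_at(oversampled, 900)
# alias_direct is far larger: that folded-back energy is the "harshness"
```

The aliased 900 Hz component is harmonically unrelated to the 15 kHz source, which is why this kind of fold-back reads as harsh rather than musical.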
  6. There are several ways to do this. I assume your issue is that the automation moves are correct, but you want to raise or lower the level of all the automation. If that's the case, Cakewalk has several options. Offset mode is my favorite. In Offset mode, each fader essentially becomes a ‘master’ to control the automation (note that Offset mode works for all automation, not just volume). However, as soon as you’ve entered the appropriate Offset amount, I recommend that you immediately exit Offset mode and return to standard automation. Offset mode should be something you get into, make your level tweak, and then exit quickly. Otherwise you may accidentally do offset moves instead of level moves. However, if you want to offset the actual automation envelope as it appears in track view (not just add a virtual offset in Offset mode), there’s a way to do that too:

1. Select the Smart Tool and the track with the automation you want to edit.
2. Set the track’s Edit Filter to Automation, then choose the automated parameter you want to offset (we’ll assume for now that automation lanes are hidden and you're working on the clip itself).
3. Drag the Smart Tool across the section of automation you want to offset in the track itself (or, because the track should still be selected, you can also drag across the timeline to select the automation).
4. Hover the Smart Tool over a clip handle, or over an empty space in the track on the same horizontal plane as the clip handle. The cursor turns into a line with up and down arrows, called the Trim Cursor.
5. Click and drag up to offset the automation upward, or drag down to offset it downward. Note that when you release the mouse, the automation is deselected to make sure you don’t accidentally vary it any further, so if you want to do more editing you’ll need to re-select the automation.
You can also offset multiple envelopes by the same amount: select any of the existing track automation in the Edit Filter, then unfold any automation lanes you want to offset and follow the same procedure as above. With multiple envelopes, you’ll probably find that dragging in the timeline will be the fastest way to select a region of automation. Note that if automation exists in a lane that is not unfolded, it won’t be edited. You can also edit automation in individual automation lanes. Select the automation in only that lane, hover the cursor just below the top of the lane until it appears as the offset cursor, then drag up or down as described previously.
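Conceptually, offsetting an envelope is just adding a constant to every node and clamping to the parameter's legal range. A toy sketch (the data layout is hypothetical, not Cakewalk's internal representation):

```python
def offset_envelope(points, delta, lo=0.0, hi=1.0):
    # points: list of (time, value) envelope nodes; delta: amount to offset.
    # Values are clamped to the parameter's legal range [lo, hi], which is
    # why nodes already near the top "flatten out" if you offset too far.
    return [(t, min(hi, max(lo, v + delta))) for (t, v) in points]

volume = [(0.0, 0.5), (1.0, 0.8), (2.0, 0.25)]
raised = offset_envelope(volume, 0.25)   # every node up by 0.25, capped at 1.0
```

The clamping is also why it's worth checking the result visually: an offset big enough to hit the rail changes the *shape* of the automation, not just its level.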
  7. I can certainly vouch for the first sentence. Noel & Co. have really moved Cakewalk up another notch (or two) since BandLab took over. As to the second sentence, if you're a professional, you don't cut slack and you don't judge based on price. You judge something on whether it does the job you're paid to do. Whether it costs $0 or $2,000 doesn't matter, because the cost will (at least hopefully!) be a fraction of the income it allows you to earn. At my various workshops (even if I'm doing my demos with a different program), I always recommend that Windows users download Cakewalk, no matter what program they use. I wouldn't be able to make that recommendation if it wasn't free. There are functions you can do in Cakewalk (like create Acidized files or extract tempo) that are at least difficult (if not impossible) to do in other programs. IMHO the "best" DAW is the one where you feel comfortable and inspired working with it.
  8. Also, the ReWire .dll library needs to be the same bitness as the host. You can't use a 32-bit ReWire .dll if Reason and your host are 64-bit. Apple Silicon pretty much sealed ReWire's fate, because there's no ReWire library for it, and there probably never will be. Such a shame. I use ReWire a lot.
  9. My understanding (which could very well be wrong) is that Gibson sold certain intellectual property to BandLab. I wonder if that included older versions of Sonar. It probably did, but if not, maybe no one owns the "rights," in the sense of being able or willing to enforce those rights.
  10. Try a repair install using Waves Central. That has fixed a few mysterious issues for me that otherwise had no obvious answer.
  11. Spotify has an option to remove their loudness normalization process. I'm not sure, but I think it's available only to subscribers. Don't know if it's a default or not.
  12. A saturator will make the drums seem subjectively louder, but will also reduce peaks. I use saturation quite a bit on bass, and sometimes on drums.
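That peak-vs-loudness tradeoff is easy to demonstrate numerically. A sketch (assuming numpy) with a synthetic "drum hit" and tanh saturation: the peaks come down while the RMS level, a rough stand-in for perceived loudness, goes up:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "drum hit": a noise burst with a fast exponential decay,
# normalized so its peak sits exactly at 0 dBFS (1.0).
drum = rng.standard_normal(4410) * np.exp(-np.linspace(0.0, 8.0, 4410))
drum /= np.max(np.abs(drum))

def saturate(x, drive=2.0):
    # tanh soft clipping: low-level detail is boosted (slope ~2 near zero),
    # while the loudest peaks are squashed toward tanh(drive) ~ 0.96.
    return np.tanh(drive * x)

sat = saturate(drum)

peak_before, peak_after = np.max(np.abs(drum)), np.max(np.abs(sat))
rms_before, rms_after = np.sqrt(np.mean(drum ** 2)), np.sqrt(np.mean(sat ** 2))
# peaks go down, RMS goes up: subjectively louder with more headroom left
```

The drive value and the noise-burst "drum" are just illustration; the shape of the tradeoff is the same for any soft-clipping curve.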
  13. Figuring out how to violate the laws of physics would be a good start 😀 I already gave you suggestions on how to make the softer parts seem louder so that you can retain the dynamics you want. The other option is to have reduced dynamics. You can't retain dynamics while applying a process designed specifically to reduce dynamics.
  14. I didn't see this addressed specifically in the thread, but when a streaming service says its target is -14 LUFS, that doesn't mean your master has to measure -14 LUFS. It can be whatever you want, and the streaming service will turn it down to meet their target LUFS level. Sometimes you want to master "hot" to get a certain character, and that character will be preserved when the song is turned down. One of the main reasons to meet a streaming service's specs is that they often transcode to compressed audio. Meeting their specs will usually guarantee the least amount of distortion and other artifacts when the music is compressed. However, for the best transcoding performance, what's more important than meeting the LUFS spec is meeting the True Peak spec, which is typically -1 or -2 dB. The best aspect for me about a streaming service's spec is that it means I can master something like an acoustic or jazz album to -14 LUFS, which is a decent amount of dynamic range, and it won't sound super-soft compared to everything else. (BTW some streaming services will turn up music below -14 LUFS, but others don't. So when artistically possible, I make sure a master doesn't go below -14 LUFS.)
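In other words, the service applies a playback gain derived from your master's measured loudness. A hypothetical sketch of that logic (the function name and the `turns_up_quiet` flag are illustrative, not any service's actual API):

```python
def playback_gain_db(measured_lufs, target_lufs=-14.0, turns_up_quiet=False):
    # Gain (in dB) a loudness-normalizing service applies at playback.
    # Masters louder than the target are always turned down; quieter
    # masters are turned up only on services that normalize upward.
    gain = target_lufs - measured_lufs
    if gain > 0 and not turns_up_quiet:
        return 0.0   # quieter than target; this service leaves it alone
    return gain

playback_gain_db(-9.0)                        # hot master: -5.0 (turned down)
playback_gain_db(-18.0)                       # quiet master, left alone: 0.0
playback_gain_db(-18.0, turns_up_quiet=True)  # upward-normalizing service: +4.0
```

This is why a -18 LUFS master can end up sounding soft on some services but not others, and why mastering at or above the target is the safe bet when dynamics allow.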
  15. The irony is if you can cut the lows without losing anything, then there probably wasn't anything down there to cut anyway. This has been a controversial subject, because most vinyl cut the lows in order to accommodate bass, so people got used to hearing that sound. However there are many instances where energy exists below notes. Plosives on vocals are a good example, as are vocal wind blasts associated with "f" and other sounds. Often you want to reduce these, but you do not want to get rid of them entirely. If you look at an instrument like guitar on a spectrum analyzer, you can see there's energy happening below the lowest notes. So then you have two issues: Can you hear it? Does it reduce headroom in your mix? Here's an experiment you can try. Do a mix with no high-pass filtering, and see how much you can turn up the master fader before the peaks hit 0. Then, high pass everything, and see how much you can turn up the master fader before the peaks hit 0. Then you can answer those two questions based on data instead of conjecture. FWIW, I high pass tracks rarely, and selectively. There definitely are cases where adding a high-pass filter tightens the sound, and other cases where high passing below an instrument's notes takes something away.
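That headroom experiment can also be simulated. A sketch (assuming numpy and scipy, with made-up signal content): build a "mix" with rumble below the lowest note, high-pass it, and compare how far each version can be turned up before the peaks hit 0 dBFS:

```python
import numpy as np
from scipy.signal import butter, sosfilt

sr = 44100
t = np.arange(sr) / sr
# Toy "mix": a 220 Hz musical tone plus 25 Hz rumble below the lowest note.
mix = 0.5 * np.sin(2 * np.pi * 220 * t) + 0.3 * np.sin(2 * np.pi * 25 * t)

def headroom_db(x):
    # How far the master fader can come up before the peaks hit 0 dBFS.
    return -20 * np.log10(np.max(np.abs(x)))

# 4th-order Butterworth high-pass at 80 Hz removes the rumble.
sos = butter(4, 80, btype="highpass", fs=sr, output="sos")
filtered = sosfilt(sos, mix)

# headroom_db(filtered) comes out several dB higher than headroom_db(mix):
# data, instead of conjecture, about what the inaudible rumble was costing.
```

In a real mix the answer depends entirely on how much sub-note energy is actually there, which is exactly the point of running the experiment.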
  16. As far as I'm concerned, dynamics are a good thing, not a problem to be solved :) In any event, the explosive drum part will determine how loud your master can be, and as you've found, the only way to change that is to limit the explosive part, which you don't want to do because then the overall volume isn't loud enough compared to your other tracks. If you're planning to release with a streaming service, then if the other songs are above their target, they'll be turned down to have the same perceived volume as the song with the explosive drums. So it may not end up being an issue anyway. If it is still an issue, then you have to resort to workarounds to make the sections with the non-explosive drums sound subjectively louder. One way to do this is to automate EQ on the final mix by just a little bit, around 3.5 to 4 kHz (with a broad Q). Do this only in the parts that are softer, then "fade" it back to normal in the louder parts. The ear is more sensitive to this frequency range, so the music will seem louder. You can also try using a transient shaper on the explosive drums to bring down the peaks slightly, without having to use limiting. This may allow raising the overall level by a few dB. The real problem is that there will always be a tradeoff between dynamics and how "loud" you can make the music. In recent years, dynamics have been traded off for a louder perceived volume, because you can't have both.
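For the EQ trick, a gentle, broad peaking boost in that range might look like the following RBJ-cookbook biquad (a sketch assuming numpy/scipy; the 3.7 kHz / +1.5 dB / Q 0.7 values are examples to tune by ear, not a recommendation):

```python
import numpy as np
from scipy.signal import freqz

def peaking_eq(f0, gain_db, q, fs):
    # RBJ "Audio EQ Cookbook" peaking filter; returns biquad (b, a),
    # with exactly gain_db of boost at f0 and ~0 dB far from it.
    A = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2.0 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]

fs = 44100
b, a = peaking_eq(3700.0, 1.5, 0.7, fs)   # gentle, broad presence lift

# Check the response: ~0 dB at 100 Hz, +1.5 dB at the 3.7 kHz center.
w, h = freqz(b, a, worN=[100.0, 3700.0], fs=fs)
gain_low, gain_center = 20 * np.log10(np.abs(h))
```

A low Q (broad bell) matters here: the goal is a subtle overall brightness lift in the soft sections, not an audible EQ move.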
  17. Sorry for the delayed response! That was back when Cakewalk used a locking function for the FX Chains. I'm not sure if there's a way to extract it now, but I'll check next time I'm in the studio.
  18. You can also selectively install some things from pre-Platinum versions, like if you really, really want the TimeWorks compressor.
  19. Check for updates with other programs. I like MODO Drum, but one day (perhaps coincidentally, perhaps not, after a Windows update) it decided to start producing different sounds when I recalled a song. If I dragged over a new instance and fed it with MIDI tracks, then it was fine until the next time I saved and re-opened. I went to the IK site and there indeed was an update. Downloaded, installed, all is well.
  20. I'm not sure I agree...Noel and many Cakewalkers, like Jesse Jost and Jon Sasor, are still involved. Only the company that owned it dropped off the map, into bankruptcy-land.
  21. I think pan laws are one of the main reasons why people conclude that DAWs sound different: they exported the same file from two different DAWs, and the files didn't null. I guess you could always just do LCR mixing and not have to think about it.
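As a quick illustration (a sketch; the "two DAWs" and their pan laws are hypothetical): with a mono source panned dead center, a -3 dB pan law and a -6 dB pan law put different gains on each channel, so the two exports differ by a constant gain and can never null:

```python
import math

def center_gain(pan_law_db):
    # Per-channel linear gain for a mono source panned dead center under a
    # given pan law (3.0 for a -3 dB law, 6.0 for a -6 dB law, etc.).
    return 10.0 ** (-abs(pan_law_db) / 20.0)

g_daw_a = center_gain(3.0)   # -3 dB law: ~0.708 per channel
g_daw_b = center_gain(6.0)   # -6 dB law: ~0.501 per channel

# Identical projects exported from the two DAWs differ by a constant gain,
# so phase-inverting one against the other leaves an audible residual:
residual_db = 20.0 * math.log10(g_daw_a / g_daw_b)   # 3.0 dB
```

Neither DAW "sounds better" here; matching the pan laws (or panning hard L/C/R) makes the difference disappear.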
  22. Could Sound Centre be the problem? I never use it, so I don't remember if it has some constraint, like being 32-bit only or something like that. Hope this helps...
  23. Studio USB driver is correct. I use the 1824c with Cakewalk, and it works fine.
  24. I've been sandbagged a few times by media players for Windows that add "enhancements." Sometimes you need to dig deep into the sound settings, and go down the "properties" road until you find something like a check box for "SuperDuper XYZ Surround 3D Sound" or whatever. The resulting processing can be truly horrific. Laptops seem particularly prone to this. A lot of them boost bass, which may be why your file sounds muddy.
  25. It won't take you long to get comfortable with Cakewalk. It's like visiting a city you were familiar with, 10 years later. Most of the buildings you knew are still there and everybody still speaks the same language, but now there are some new restaurants, the walkway along the river has been spruced up, the movie theater improved its sound system, etc. There is a bit more traffic, but you'll quickly find some side roads that get you where you need to go. Besides, you can install the two side by side and work with Cakewalk on new projects until it becomes familiar.