Posts posted by sreams

  1. Just curious... in a digital environment, why would you prefer a VU meter over the meter next to the fader for monitoring input gain? During recording, I'd think you'd only be concerned about avoiding clipping.

    As for gain staging: with the 32/64-bit float audio engine, you pretty much can't clip a bus input, even if you're hitting it at +20 dB or more (although some plugins might not like that).
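
    To make that concrete, here is a minimal Python sketch (numpy stands in for the engine's math) of why float buses don't clip: samples above 1.0 survive intermediate processing intact and are only destroyed at a fixed-point boundary, such as a 16-bit export.

    ```python
    import numpy as np

    # A "hot" signal at roughly +20 dBFS: float samples far above 1.0.
    signal = np.sin(np.linspace(0, 2 * np.pi, 8)) * 10.0

    # A float bus just sums and scales floats, so the overshoot is stored
    # exactly and can be pulled back down later with no damage done.
    attenuated = signal * 0.05            # fader down ~26 dB

    # Clipping only happens at a fixed-point boundary, e.g. a 16-bit export
    # or a plugin that truncates internally.
    clipped = np.clip(signal, -1.0, 1.0)

    print(attenuated.max())   # ~0.49: waveform intact
    print(clipped.max())      # 1.0: the overshoot is gone for good
    ```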

  2. I perform live with Cakewalk. A project file will typically host several audio stems and software instruments. In some situations, I simply want to control two different plugins using faders/knobs on one of my MIDI controller keyboards. One project has a sweepable filter on each of two audio tracks. With ACT, there is no way I know of to control both of these at the same time.

    I know about the option of using "configure as synth," but this plugin (Waves OneKnob Filter) does not take that setting for some reason. I set it for both the VST and VST3 entries for the plugin in CW Plugin Manager and restart... and after all plugins are rescanned, it does not show up as a synth with MIDI connections. Going back into CW Plugin Manager, the "configure as synth" box is unchecked again. Even if it worked, this wouldn't be ideal, since it requires an additional MIDI track and extra effort to configure.

    It would be nice to have a control surface mode where, instead of allowing a "lock" to a particular instance of a plugin, all sliders/knobs could be locked individually, each to the parameter it was assigned, in whatever plugin it came from. This would be *much* more useful for live situations than the current ACT behavior, which is better suited to recording/mixing.
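
    Purely to illustrate the idea (none of these names are a real Cakewalk or ACT API; everything here is hypothetical), the per-control lock could amount to a table mapping each physical control straight to one parameter on one plugin instance:

    ```python
    # Hypothetical per-control lock table: every knob/fader on the controller
    # is bound to one parameter on one specific plugin instance, regardless
    # of which plugin currently has focus. All names are illustrative only.
    control_map = {
        # (MIDI channel, CC number): (plugin instance id, parameter name)
        (1, 21): ("OneKnobFilter_track1", "frequency"),
        (1, 22): ("OneKnobFilter_track2", "frequency"),
    }

    def set_plugin_param(plugin_id: str, param: str, normalized: float) -> None:
        # Stand-in for whatever the host would actually do with the value.
        print(f"{plugin_id}.{param} -> {normalized:.2f}")

    def on_cc(channel: int, cc: int, value: int) -> None:
        """Route an incoming CC straight to its locked parameter."""
        target = control_map.get((channel, cc))
        if target is None:
            return                                  # unmapped control: ignore
        plugin_id, param = target
        set_plugin_param(plugin_id, param, value / 127.0)

    on_cc(1, 21, 64)   # knob 21 at halfway -> track 1 filter to 0.50
    ```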

  3. On 8/26/2020 at 11:56 AM, micv said:

    I want to apply FX to a track in parallel, meaning one FX will not affect the other, and then sum them into a bus.

    Since the FX bin always feeds the sends, I'd have to create two sends to two separate aux tracks; each aux track would have its own set of FX, and the two aux tracks would then route to a bus.

    Is there another way to do this? It would be great if the track's FX bin could be configured to be "post send" so the raw track can have its FX as well.

    In a pinch, you could duplicate the track/events and then link the clips to allow for easy editing.
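
    For what it's worth, the routing being described is just two copies of the raw track, each through its own effect, summed afterwards. A tiny numpy sketch of that signal flow (the two effects are arbitrary stand-ins):

    ```python
    import numpy as np

    def fx_a(x: np.ndarray) -> np.ndarray:
        return np.tanh(3.0 * x)          # stand-in effect: soft saturation

    def fx_b(x: np.ndarray) -> np.ndarray:
        return np.roll(x, 100) * 0.5     # stand-in effect: crude delay tap

    dry = np.random.default_rng(0).standard_normal(48000).astype(np.float32)

    # Parallel routing: each path gets the *raw* track, so neither effect
    # hears the other; the two aux paths are summed into the bus afterwards.
    bus = fx_a(dry) + fx_b(dry)
    ```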

  4. 1 hour ago, Noel Borthwick said:

    Unlike normal export or bounce to track, Bounce to Clips can't be done in parallel - at least not easily.

    The bounce code essentially relies on the audio engine, which is optimized to render independent tracks for contiguous sections in parallel. With Bounce to Clips you are rendering chunks of data at different points on the timeline, so it's not a simple process to parallelize, since we'd need multiple instances of the engine to render the discontiguous sections of audio at once. Additionally, the Radius rendering isn't currently optimized for multiprocessing and could have bugs handling this.

    In short, this would be nice to have, but it would take a lot of work to achieve.

    Makes sense. I wonder if the B2C algorithm could analyze how the clips relate to each other in time before starting the process. Often, the clips I'm bouncing all have the exact same start and end times, so recognizing that and processing them together would make sense. For clips that don't start/end together (say one spans 3:01:000-10:01:000 and another spans 5:01:000-12:01:000), you could still run the bounces in parallel by drawing a virtual "box" around them and bouncing from the earliest start time to the latest end time, picking up each new clip as its start time is reached during the pass. (A toy sketch of that grouping step follows below.)

    Of course... I know very little about the complexity of coding such a thing.
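
    But just to make the grouping step concrete, here is a toy version of it, assuming clips are nothing more than (start, end) pairs. Clips whose spans touch get merged into one "box" that could be bounced in a single pass; separate boxes could in principle run in parallel:

    ```python
    def group_clips(clips):
        """Merge overlapping (start, end) spans into render "boxes".

        Clips sharing a box would be bounced together in one pass from the
        earliest start to the latest end. Toy illustration only.
        """
        boxes = []
        for start, end in sorted(clips):
            if boxes and start <= boxes[-1][1]:     # overlaps the current box
                boxes[-1] = (boxes[-1][0], max(boxes[-1][1], end))
            else:
                boxes.append((start, end))
        return boxes

    # The two example clips above, with times reduced to bar numbers:
    print(group_clips([(3, 10), (5, 12)]))          # [(3, 12)]
    ```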

  5. 1 hour ago, Jonathan Sasor said:

    If you can upload examples where the algorithm isn't working, we can pass this along to zplane.

    Too big to attach here... so here you go:

    http://www.mistercatstudios.com/mtest.zip

    1) Open the project. It contains one audio track of an overhead mic from the drum kit. The tempo map has already been adjusted to match the human performance. No timing changes have been made to the audio at all.

    2) Check playback from various points along the timeline. The metronome should match the drums perfectly.

    3) Select the clip and open the AudioSnap Palette. Change the Online Stretch Method to "Elastique Efficient" or "Elastique Pro".

    4) Check playback again. The further along the timeline you go, the further away the drums get from the metronome.

  6. 47 minutes ago, Gswitz said:

    There is a button to balance FX across threads better. It depends on what you are doing as to whether this helps.

    Yeah... this is a different issue. "Bounce to Clips" is always done one track at a time with the current Cakewalk engine.

    BTW... looking at your video, you can see much more clearly what is going on with CPU usage in Task Manager by right-clicking the CPU graph and choosing "Change graph to > Logical processors". It will then show all 8 threads and their usage on your Core i7.

  7. So here's what I have that shows this issue:

    1) Started with a 7-minute-long drum performance that was not played to any click
    2) Tapped out the tempo onto a MIDI track while listening, to match the drum performance
    3) Used "Fit to Improvisation" to match the tempo map perfectly to the human performance
    4) In AudioSnap, set the clip Follow Options to "Auto stretch"
    5) Checked the box for "Follow Proj Tempo"

    After that last box is checked (I have made no timing changes to the drum tracks yet), everything looks fine on the timeline. Playback is perfect with the clip set to use "Groove" for online render. As soon as I switch to "Elastique Efficient" or "Elastique Pro", playback no longer matches the visible waveform. It sounds kind of okay for the first few measures, but playback is a little fast and gets more and more ahead of the click as playback progresses. Again... the visible waveform looks fine and matches beats/measures perfectly. Switching back to "Groove" or "Percussion" completely fixes the issue. It makes the Elastique algorithms pretty much unusable.

    I can upload a trimmed-down project to show this behavior if that would help. It is 100% reproducible.

  8. I've been doing some drum timing adjustments using AudioSnap. I have 12 tracks of drums, all set to use Radius Mix for offline rendering. When I'm happy with my adjustments, I select all of the clips, right-click one, and choose "Bounce to Clips". It then takes a very long time to complete the task. Looking at Task Manager, I can see that only one thread of my 12-core/24-thread Ryzen 3900X is being utilized; overall CPU usage shows as just 6%.

    It seems that if each bounce could be processed on its own thread, processing time could be reduced massively. Any thoughts on this from the bakers?
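
    Just to show the shape of what I mean, here is a toy sketch of farming independent clip renders out to a worker pool. render_clip is a stand-in for the expensive offline stretch, not anything from the actual engine:

    ```python
    from concurrent.futures import ProcessPoolExecutor
    import math

    def render_clip(clip_id: int) -> str:
        """Stand-in for an expensive offline render (e.g. Radius Mix)."""
        acc = 0.0
        for i in range(2_000_000):       # simulate CPU-bound DSP work
            acc += math.sin(i)
        return f"clip {clip_id} rendered"

    if __name__ == "__main__":
        clips = range(12)                # e.g. 12 drum tracks
        # Independent clips share no state, so a pool can render them
        # concurrently instead of one at a time on a single core.
        with ProcessPoolExecutor() as pool:
            for result in pool.map(render_clip, clips):
                print(result)
    ```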
