Everything posted by sreams

  1. I perform live with Cakewalk. A project file will typically host several audio stems and software instruments. In some situations, I simply want to control two different plugins using faders/knobs on one of my MIDI controller keyboards. One project has a sweepable filter on each of two audio tracks. With ACT, there is no way I know of to control both of these at the same time. I know about the option of using "configure as synth," but this plugin (Waves OneKnob Filter) won't accept that setting for some reason. I set it for both the VST and VST3 entries for the plugin in CW Plugin Manager and restart... and after all plugins are rescanned, it does not show up as a synth with MIDI connections. Going back into CW Plugin Manager, the "configure as synth" box is unchecked again. Even if it worked, this wouldn't be ideal, since it requires an additional MIDI track and extra effort to configure. It would be nice to have a control surface mode where, instead of locking to a particular instance of a plugin, each slider/knob could be locked individually to whatever parameter it was assigned to, from whatever plugin (something like the sketch below). This would be *much* more useful for live situations than the current ACT behavior, which is better suited to recording/mixing.
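     Something like this (Python-ish pseudocode with made-up names, nothing to do with Cakewalk's actual ACT/control surface internals) is the kind of per-knob routing I have in mind:

        bindings = {
            # (MIDI channel, CC number) -> (plugin instance, parameter)
            (1, 74): ("OneKnob Filter - Track 3", "Frequency"),
            (1, 75): ("OneKnob Filter - Track 7", "Frequency"),
        }

        def on_cc(channel, cc, value, set_param):
            # Route an incoming CC to whatever it was locked to,
            # regardless of which plugin currently has focus.
            target = bindings.get((channel, cc))
            if target:
                plugin, param = target
                set_param(plugin, param, value / 127.0)  # normalize to 0..1

        # demo: a knob sending CC 74 on channel 1 sweeps the Track 3 filter
        on_cc(1, 74, 64, lambda plug, prm, v: print(plug, prm, round(v, 2)))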
  2. Not a bug. This image is created in order to show a project preview in the Start Screen.
  3. In a pinch, you could duplicate the track/events and then link the clips to allow for easy editing.
  4. So, you bounced your tracks to a whole new set of tracks? Did you delete or archive your old tracks? If not, it could simply be that you are pulling too much data from your hard drive.
  5. Makes sense. I wonder if the B2C algorithm could then analyze how the clips relate to each other in time before starting the process. Many times, the clips I'm bouncing all have the exact same start and end times, so being aware of that and processing them together would make sense. For clips that don't start/end together (let's say one spans 3:01:000-10:01:000 and another that spans 5:01:000-12:01:000), you could still run the bounces in parallel by drawing a virtual "box" around them and running the bounce from the earliest start time to the latest end time, including each new clip as its start time is reached during the pass (see the sketch below). Of course... I know very little about the complexity of coding such a thing.
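     Just to make the idea concrete, here's a rough sketch in plain Python (made-up names, and no claim about how B2C actually works) of grouping clips into those "boxes" by their time spans:

        def group_overlapping(clips):
            # Merge clips whose time spans overlap into one "box".
            groups = []
            for start, end in sorted(clips):
                if groups and start <= groups[-1][1]:        # overlaps the last box
                    groups[-1][1] = max(groups[-1][1], end)  # grow the box
                    groups[-1][2].append((start, end))
                else:
                    groups.append([start, end, [(start, end)]])
            return groups

        # Example: clips at 3:01-10:01 and 5:01-12:01 (just measure numbers here)
        # land in one box and get bounced in a single pass from 3:01 to 12:01.
        boxes = group_overlapping([(3, 10), (5, 12), (20, 25)])
        for box_start, box_end, members in boxes:
            print(f"bounce {box_start}-{box_end} covering {len(members)} clip(s)")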
  6. These are all valid points, but kind of off-topic. I'm referring to a very specific bug that occurs with Elastique algorithms only. The usefulness of AudioSnap is a separate conversation that deserves its own thread.
  7. Too big to attach here... so here you go: http://www.mistercatstudios.com/mtest.zip
     1) Open the project. It contains one audio track of an overhead mic from the drum kit. The tempo map has already been adjusted to match the human performance. No timing changes have been made to the audio at all.
     2) Check playback from various points along the timeline. The metronome should match the drums perfectly.
     3) Select the clip and open the AudioSnap Palette. Change the Online Stretch Method to "Elastique Efficient" or "Elastique Pro".
     4) Check playback again. The further along the timeline you go, the further away the drums get from the metronome.
  8. Radius Mix works and sounds great, but is not available for realtime processing. The issue I'm describing affects realtime playback. It means "Groove" and "Percussion" algorithms are usable, but both Elastique algorithms are not.
  9. Yeah... this is a different issue. "Bounce to Clips" is always done one track at a time with the current Cakewalk engine. BTW... looking at your video, you can see much more clearly what is going on with CPU usage in Task Manager by right-clicking on the CPU graph and choosing "Change graph to -> Logical Processors". It will then show all 8 threads and their usage on your Core i7.
  10. So here's what I have that shows this issue:
      • Started with a 7-minute drum performance that was not played to any click
      • Tapped out the tempo onto a MIDI track while listening, to match the drum performance
      • Used "Fit to Improvisation" to perfectly match the tempo map to the human performance
      • In AudioSnap, set the clip Follow Options to "Auto stretch"
      • Checked the box for "Follow Proj Tempo"
      After that last box is checked (I have made no timing changes to the drum tracks yet), everything looks fine on the timeline. Playback is perfect with the clip set to use "Groove" for online render. As soon as I switch to "Elastique Efficient" or "Elastique Pro", playback no longer matches the visible waveform. It sounds kind of okay for the first few measures, but playback is a little fast and gets more and more ahead of the click as playback progresses. Again... the visible waveform looks fine and matches beats/measures perfectly. Switching back to "Groove" or "Percussion" completely fixes the issue. It makes the Elastique algorithms pretty much unusable. I can upload a trimmed-down project to show this behavior if it is helpful. It is 100% reproducible.
  11. I've been doing some drum timing adjustments using AudioSnap. I have 12 tracks of drums. All are set to use Radius Mix for offline rendering. When I'm happy with my adjustments, I select all of the clips, right-click on one, and choose "Bounce to Clips". It then takes a very long time to complete the task. Looking at Task Manager, I can see that only one thread of my 12-core/24-thread Ryzen 3900x is being utilized. Overall CPU usage shows as just 6%. It seems that if each bounce could be processed on its own thread, processing time could be reduced by massive amounts (rough sketch below). Any thoughts on this from the bakers?
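      To illustrate what I mean (plain Python with made-up names, not a claim about how Cakewalk's engine is actually structured), independent per-clip renders could be handed out to a pool of workers instead of running one after another:

         from concurrent.futures import ProcessPoolExecutor

         def bounce_clip(clip_name):
             # Placeholder for the per-clip offline render (Radius Mix stretch, etc.)
             return f"rendered {clip_name}"

         if __name__ == "__main__":
             clips = [f"drum track {n}" for n in range(1, 13)]  # e.g. my 12 drum tracks
             # One worker per core: 12 independent renders finish in roughly the
             # time of the slowest one instead of the sum of all twelve.
             with ProcessPoolExecutor() as pool:
                 results = list(pool.map(bounce_clip, clips))
             print(results)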