Posts posted by sreams

  1. On 3/26/2021 at 11:24 AM, Ben Staton said:

    FWIW, we can't reproduce it here. I also checked the code and it doesn't seem to be doing anything unusual or CPU intensive at all when moving the mouse over track names. Indeed, moving the mouse over track names actually does significantly less than moving it over some of the other controls nearby.

    Are there any other clues you can give us?

    I am seeing the same issue when moving the mouse over some text fields in other applications while Cakewalk is playing, so it now looks unlikely to be a Cakewalk issue.

  2. I have two systems with Cakewalk installed. One is a Ryzen 3900X desktop system with an NVIDIA 2070 Super graphics card. The other is a Ryzen 5900HX laptop with dual graphics (integrated AMD Radeon graphics and a discrete NVIDIA 3070).

    This problem only appears on the laptop, but is very consistent. At low latencies (under 256 samples), there are distinct glitches during audio playback whenever the mouse cursor changes to the I-beam text cursor while moving the mouse over a track name field in the Track View. There is no glitch when the cursor is moved out of the field and reverts to the standard arrow cursor. This also happens in the Help Module when moving the cursor over the help text. There are never any glitches when the mouse cursor changes anywhere else in the GUI.

    I have tried two different audio interfaces with the same results... Behringer XR18 and RME Fireface UFX.

  3. Just curious... in a digital environment, why would you prefer a VU meter over the meter next to the fader for monitoring input gain? During recording, I'd think you'd only be concerned about avoiding clipping.

    As for gain staging, with the 32/64-bit floating-point audio engine, you pretty much can't clip a bus input, even if you are hitting it at +20 dB or more (although some plugins might not like this).
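    The float-headroom point above can be illustrated with a small sketch. This is plain Python with assumed numbers, not Cakewalk code: a floating-point signal path keeps values above 0 dBFS intact so they can be pulled back down later, while a 16-bit integer path would clamp them.

```python
# A minimal sketch (not Cakewalk's actual engine) of why a floating-point
# mix bus doesn't clip: values above 1.0 (0 dBFS) are stored intact and can
# be attenuated back down later, whereas a fixed-point bus must clamp them.

def db_to_gain(db):
    """Convert a decibel value to a linear gain factor."""
    return 10 ** (db / 20.0)

sample = 0.9                          # a hot sample near full scale
boosted = sample * db_to_gain(20.0)   # hit the "bus" at roughly +20 dB
restored = boosted * db_to_gain(-20.0)
print(abs(restored - sample) < 1e-9)  # float math recovers the original

# A 16-bit integer path would have clipped the overshoot permanently:
clipped = max(-32768, min(32767, int(boosted * 32767)))
print(clipped)  # pinned at full scale
```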

    • Like 1
  4. I perform live with Cakewalk. A project file will typically host several audio stems and software instruments. In some situations, I want to simply control two different plugins using faders/knobs on one of my MIDI controller keyboards. One project has a sweepable filter on each of two audio tracks. With ACT, there is no way I know of to control each of these at the same time.

    I know about the option of using "configure as synth," but this plugin (Waves OneKnob Filter) does not take that setting for some reason. I set it for both the VST and VST3 entries for the plugin in CW Plugin Manager and restart... and after all plugins are rescanned, it does not show up as a synth with MIDI connections. Going back into CW Plugin Manager, I find the "configure as synth" button unchecked. Even if it worked, this wouldn't be an ideal solution, since it requires an additional MIDI track and extra effort to configure.
     

    It would be nice to have a control surface mode where, instead of allowing a "lock" to a particular instance of a plugin, all sliders/knobs could simply be locked individually to whatever they are set to control from whatever plugin each was assigned to. This would be *much* more useful for live situations than the current ACT behavior, which is better suited for recording/mixing.

  5. 4 minutes ago, Canopus said:

    Articulation Maps is a great addition. But when starting to create articulation maps of my own, I notice that the space required to uniquely identify a map through its name is a bit scarce. Of course, I would have preferred that articulation maps could have been put in sub-folders under the default Articulation Maps folder. That would have made it possible to make the unique part of the name shorter, e.g. by having the sub-folders sorted by Manufacturer and then Library. But as that unfortunately doesn't seem to be an option, at least in this initial implementation, very long names are sometimes necessary. So what happens when you need to call a map something like "NI Symphony Series Brass Ensemble Trumpets"? Well, as can be seen in the picture below, the name of the articulation map will be truncated in the list box. And I can foresee much longer names being necessary in the future…

    67K9wyN.png

    So, if possible, allow for nested folder structures. And if that’s not possible, please make the full name somehow (at least more) accessible. Perhaps the icon can be removed and the articulation map name be left-aligned in the Articulation Map listbox?

    Maybe don't write out the entire name of the product (for now)? Instead of "NI Symphony Series Brass Ensemble Trumpets", why not "NISS Trumpets" (since trumpets are obviously part of a brass ensemble)?

  6. 51 minutes ago, Noel Borthwick said:

    @sreams glad you are seeing noticeable improvements. Curious in what specific area you saw this - is it larger projects or just something that previously had issues?

    I have some recent projects that have 10-12 tracks of drums with 5-6 takes (clips are linked). Navigation was always quite slow, especially after comping/editing. I checked one of these projects just before installing the update and then again after install. Significant improvement. I also have a client who sometimes brings me 60-80 takes of his lead vocal. This can really bog things down. This should help with that quite a bit, I'd assume.

  7. On 8/26/2020 at 11:56 AM, micv said:

    I want to apply fx to a track in parallel, meaning one fx will not affect the other, and then sum into a bus.

    Since the FX bin always feeds into the send, I'd have to create two sends to two separate aux tracks; each aux track would then have its own set of FX, and I'd route the two aux tracks to a bus.

    Is there another way to do this? It would be great if the track's FX bin could be configured to be "post send" so the raw track can have its FX as well.

    In a pinch, you could duplicate the track/events and then link the clips to allow for easy editing.

  8. 1 hour ago, Noel Borthwick said:

    Unlike normal export or bounce to track, Bounce to Clips can't be done in parallel - at least not easily.

    The bounce code essentially relies on the audio engine, which is optimized to render independent tracks for contiguous sections in parallel. With Bounce to Clips, you are rendering chunks of data at different points on the timeline, so it's not as simple to do this in parallel, since we'd need multiple instances of the engine to render the discontiguous sections of audio in parallel. Additionally, the Radius rendering isn't currently optimized for multiprocessing and could have bugs handling this.

    In short, this would be nice to have, but it would take a lot of work to achieve.

    Makes sense. I wonder if the B2C algorithm could then analyze how the clips relate to each other in time before starting the process. Many times, the clips I'm bouncing all have the exact same start and end times, so being aware of that and processing them together would make sense. For clips that don't start/end together (let's say you have one that spans 3:01:000-10:01:000 and another that spans 5:01:000-12:01:000), you could still run the bounces in parallel by drawing a virtual "box" around them and running the bounce from the earliest start time to the latest end time, including each new clip as its start time is reached during the pass.

    Of course... I know very little about the complexity of coding such a thing.
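    The grouping idea above can be sketched in a few lines. This is a hypothetical illustration, not Cakewalk code; `group_clips` is an invented helper and times are arbitrary units:

```python
# A hypothetical sketch of the "virtual box" idea: clips whose time ranges
# coincide or overlap are merged into one box spanning the earliest start to
# the latest end, so each box could in principle be rendered in a single
# parallel pass instead of one clip at a time.

def group_clips(clips):
    """Merge overlapping (start, end) ranges into bounding boxes."""
    boxes = []
    for start, end in sorted(clips):
        if boxes and start <= boxes[-1][1]:        # overlaps the last box
            boxes[-1][1] = max(boxes[-1][1], end)  # extend that box
        else:
            boxes.append([start, end])             # open a new box
    return [tuple(b) for b in boxes]

# The two overlapping clips from the example collapse into one render pass:
print(group_clips([(3, 10), (5, 12), (20, 25)]))  # [(3, 12), (20, 25)]
```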

  9. 1 hour ago, Jonathan Sasor said:

    If you can upload examples where the algorithm isn't working we can pass this along to zplane.

    Too big to attach here... so here you go:

    http://www.mistercatstudios.com/mtest.zip

    1) Open the project. It contains one audio track of an overhead mic from the drum kit. The tempo map has already been adjusted to match the human performance. No timing changes have been made to the audio at all.

    2) Check playback from various points along the timeline. The metronome should match the drums perfectly.

    3) Select the clip and open the AudioSnap Palette. Change the Online Stretch Method to "Elastique Efficient" or "Elastique Pro".

    4) Check playback again. The further along the timeline you go, the further away the drums get from the metronome.

  10. 47 minutes ago, Gswitz said:

    There is a button to balance FX across threads better. It depends on what you are doing as to whether this helps.

    Yeah... this is a different issue. "Bounce to Clips" is always done one track at a time with the current Cakewalk engine.

    BTW... looking at your video, you can see much more clearly what is going on with CPU usage in Task Manager by right-clicking on the CPU graph and choosing "Change graph to -> Logical Processors". It will then show all 8 threads and their usage on your Core i7.

    • Thanks 1
  11. So here's what I have that shows this issue:

    Started with a 7 minute long drum performance that was not played to any click
    Tapped out the tempo onto a MIDI track while listening to match the drum performance
    Used "Fit to Improvisation" to perfectly match the tempo map to the human performance
    In AudioSnap, set the clip Follow Options to "Auto stretch"
    Checked the box for "Follow Proj Tempo"

    After that last box is checked (I have made no timing changes to the drum tracks yet), everything looks fine on the timeline. Playback is perfect with the clip set to use "Groove" for online render. As soon as I switch to "Elastique Efficient" or "Elastique Pro", playback no longer matches the visible waveform. It sounds kind of okay for the first few measures, but playback is a little fast and gets more and more ahead of the click as playback progresses. Again... the visible waveform looks fine and matches beats/measures perfectly. Switching back to "Groove" or "Percussion" completely fixes the issue. It makes the Elastique algorithms pretty much unusable.

    I can upload a trimmed-down project to show this behavior if that would be helpful. It is 100% reproducible.

  12. I've been doing some drum timing adjustments using AudioSnap. I have 12 tracks of drums. All are set to use Radius Mix for offline rendering. When I'm happy with my adjustments, I select all of the clips, right-click on one, and choose "Bounce to Clips". It then takes a very long time to complete the task. Looking at Task Manager, I can see that only one thread of my 12-core/24-thread Ryzen 3900X is being utilized. Overall CPU usage shows as just 6%.

    It seems that if each bounce could be processed on its own thread, processing time could be reduced massively. Any thoughts on this from the bakers?

  13. 16 hours ago, enmamusic said:

    Friend, the picture makes it more than evident that Cakewalk has the smallest faders graphically. For users of other DAWs this is especially annoying, since they are used to viewing their fader with more precision because it is larger. Although the precision is the same, it is graphically uncomfortable. Most DAWs have that going for them, and you've seen this complaint on various Cakewalk forums. About the sidechain, I mean the ease of linking a plugin from its interface into sidechain mode. In Cakewalk it is more difficult because it is not in sight. I'm attaching a screenshot for you to look at.

    478435377_SequoiaFader.JPG.e19af049bf8cc533dbbabfb11e58b0bd.JPG

    Sequoia Side Chain.JPG

    Ah... so you do mean the fader throw length, not the size of the faders themselves. I'd be a fan of allowing for user adjustable fader throw. I just wouldn't change it for myself, and I'd hope that the more screen-space efficient current size would remain as an option.

    I'm also a fan of adding sidechain routing to the plugin window.

    • Like 2
  14. I ran into an issue a couple of times today. I was zoomed in on the timeline to find an audible glitch in a waveform. I then pressed play. Playback started, and of course, the fully zoomed-in timeline was scrolling by very quickly at this point. All as expected... but then Cakewalk stopped responding to all input. I could not stop playback by pressing the spacebar or the stop button on my control surface. I could not change the zoom level. Playback would just continue, and eventually Cakewalk would respond to key presses from 15-20 seconds prior. This is on a Ryzen 3900X system.

    • Like 1
  15. 4 hours ago, enmamusic said:

    I would love a change in the size of the console faders for the next version. We need bigger faders like other DAWs such as Cubase, Sequoia, FL Studio, etc. It would be a very positive development for the workflow and the visual interface. Another important improvement would be the ease of working with sidechains from the plugins inserted in the channel, as Sequoia does, for example. This is something that colleagues working in other DAWs have criticized. Thank you - Cakewalk has been my DAW for more than 10 years.

    I'm confused by what you mean when you say the console faders are too small. I've attached a screenshot showing the FL Studio, Cakewalk, and Cubase faders side by side. They all look about the same to me. Do you mean how long the throw is? If that's the case... I don't entirely agree. Fader accuracy when using the mouse is defined entirely by how far you have to move the mouse itself, not by the graphic on the screen. I can move even the tiny Track View faders with just as much accuracy as any 100mm fader (and with even more precision while holding shift). I'd rather not see faders taking up more precious screen space if there is no real-world benefit. That said... I'd be all for the fader throw being adjustable.

    Could you describe what Sequoia does differently with regard to sidechains?

    faders.gif
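    A toy calculation can back up the precision point. These are assumed numbers, not measurements from any DAW: resolution depends on how many pixels of mouse travel the software maps across the gain range, which need not equal the drawn fader length, and a fine-adjust modifier (like Shift) multiplies the effective travel.

```python
# Toy numbers (assumptions, not measurements): the resolution of a mouse-
# driven fader is the gain range divided by the pixels of mouse travel the
# software maps onto it - independent of how the fader is drawn on screen.

def db_per_pixel(range_db, travel_px):
    """dB of gain change per pixel of mouse movement."""
    return range_db / travel_px

print(db_per_pixel(96, 200))      # 0.48 dB per pixel of mouse travel
print(db_per_pixel(96, 200 * 4))  # 0.12 dB with a 4x fine-adjust modifier
```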

    • Like 2