David Baay


  1. I found it to have a pretty jarring transition to the higher overtones present in a harder strike; they come in all at once with a very small change in velocity. I've encountered this issue in a lot of sampled pianos (both acoustic and electric) that I've tried over the years. I haven't checked it out thoroughly to determine exactly how many velocity layers it has or where the changes occur, but if I didn't know better (and I don't), I would suspect it has only two or three. I generally prefer synths that use some form of modeling or actual 'synthesis' of sounds to help deliver a more naturally progressive timbral response to velocity change, and pure sampled pianos with no modeling component really expose shortcomings in the number of velocity layers. If you're even a moderately capable keyboard player, you're likely to be disappointed with the playability of this instrument, more so if you have experience with hardware digital pianos or some of the more advanced software pianos that model resonance, release, and continuous pedal response.
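For what it's worth, the abrupt transition is easy to picture in code. This is a minimal sketch (the layer boundary of 96 and the 'brightness' values are made up for illustration, not taken from any particular instrument) contrasting discrete velocity layers with a continuous modeled response:

```python
# Hypothetical illustration: two sampled velocity layers vs. a modeled
# response. The boundary (96) and brightness values are invented.
def layered_brightness(velocity, boundaries=(96,)):
    """Discrete 'brightness' per sampled layer - jumps at each boundary."""
    layer = sum(velocity >= b for b in boundaries)
    return [0.3, 0.9][layer]  # soft-strike layer vs. hard-strike layer

def modeled_brightness(velocity):
    """Continuous brightness from a simple strike-hardness model."""
    return 0.3 + 0.6 * (velocity / 127) ** 2

# A 1-unit velocity change across the boundary swaps the whole timbre...
print(layered_brightness(95), layered_brightness(96))  # 0.3 0.9
# ...while the modeled response barely moves:
print(round(modeled_brightness(95), 3), round(modeled_brightness(96), 3))
```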
  2. To me it's pretty well expected that a button that is intended to globally enable/disable a function will affect all instances and modes of that function. If it didn't affect one mode, it wouldn't be very useful, and the alternative would be to go through all plugins in the project and individually disable Upsample On Render, which would be a nightmare, and could justifiably be called a bug. Performing a null test of a track rendered both ways should give you the answer. Just make sure it's a plugin that renders repeatably in a given mode.
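The null test mentioned above boils down to subtracting one render from the other sample-by-sample and checking the residual: identical renders cancel to silence, while any difference in processing leaves something behind. A minimal pure-Python sketch with toy sample values (not a real render):

```python
# Sketch of a null test: if the global button truly disables upsampling
# everywhere, two renders of the same track should null to silence.
def null_residual(render_a, render_b):
    """Subtract one render from the other; return the peak residual."""
    assert len(render_a) == len(render_b), "renders must be equal length"
    return max(abs(a - b) for a, b in zip(render_a, render_b))

# Identical renders null completely:
take1 = [0.0, 0.25, -0.5, 0.125]
take2 = [0.0, 0.25, -0.5, 0.125]
print(null_residual(take1, take2))  # 0.0 -> processing was identical

# A render whose processing differed (e.g. upsampling still active)
# leaves a nonzero residual:
take3 = [0.0, 0.26, -0.5, 0.125]
print(null_residual(take1, take3))  # nonzero -> the renders differ
```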
  3. Good bet that's the issue. I don't use Workspaces (set to None), and I use the default D binding to toggle the multidock open and closed a lot without issue as I'm sure many other users do.
  4. @Maxim Vasilyev It would be good to see the sends from the source track to the patch points to make sure the first one didn't get doubled up. I would also be curious what is shown if you arm the Drums Aux to show the input level vs. the output level and/or change the Drums New Aux to use the same patch point as the Drums Aux for input. This will tell you if the patch point itself is raising the level or something is happening in the track between the input and output. If it's actually a bug, any of those actions might conceivably reset something, making it difficult to diagnose, but if the behavior remains consistently wrong, you should send a copy of the project to the Bakers for analysis - preferably stripped down to just the tracks that show the problem with all unnecessary plugins removed. Also, though it's probably not relevant in this case since both Aux tracks are getting input from the same source track, you should change the Input to your Kick Instrument track (and all other MIDI/Instrument tracks) from 'Omni' to a specific channel of your keyboard/controller to avoid receiving unexpected MIDI input from other physical/virtual ports or channels. Incidentally, regarding an earlier post about frozen tracks gaining 3dB, this is a long-standing issue with freezing tracks that have a mono input from a soft synth. I haven't checked it for a while, but I believe that problem still exists.
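On that 3dB freeze issue: I can't say for certain that's the mechanism, but an exact 3 dB step is the signature of a 1/sqrt(2) gain factor, e.g. a -3 dB-center pan law being applied to the live mono signal but not to the frozen stereo audio. The arithmetic, as a quick sketch (the `to_db` helper is just for illustration):

```python
import math

def to_db(gain):
    """Convert a linear gain factor to decibels."""
    return 20 * math.log10(gain)

# A -3 dB-center pan law scales a centered mono signal by 1/sqrt(2).
# If that scaling applies before freezing but not to the frozen stereo
# audio, the result reads ~3 dB hot - matching the reported symptom.
pan_law_gain = 1 / math.sqrt(2)
print(round(to_db(pan_law_gain), 2))      # -3.01 dB before freeze
print(round(to_db(1 / pan_law_gain), 2))  # +3.01 dB apparent gain after
```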
  5. Which key(s) for which function(s) in which context(s)? Setting up custom bindings can be quirky, but once established they should work consistently unless maybe there's some conflict of assignments in different contexts.
  6. In very broad terms, I would say that it's unusual to see the Audio Engine running over 100% and showing a lot of late buffers with the Audio Process load below 50-65%, and the Audio Engine reading typically averages closer to 1.5x to 2x the Audio Processing load than the 3x factor you're seeing. But it depends on the plugins being used, and it's a rare project of mine that needs more than a 256-sample buffer to run smoothly on a lowly 4-core i7-6700K processor. I just don't use that many heavy-weight plugins in any one project.
  7. Disable Non-destructive MIDI Editing in Preferences, and the clips will be trimmed automatically when you split them. With NDME enabled, splitting creates two slip-edited copies of the clip, each of which contains all the data of the original clip as indicated by the beveled corners at the split point (I think you were mistaken that these appeared after the first trim.). The first 'trim' discards the part of each clip that's hidden by the slip-edit as indicated by the now-squared corners, and the second one throws away the empty space. It could be argued that one 'trim' should do both jobs in one go, but that's how it works currently.
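As a mental model (hypothetical, not Cakewalk's actual internals), a slip-edited clip keeps all of the original data plus a visible window into it; splitting just makes two windows over the same data, and trimming is what actually discards the hidden part:

```python
# Toy model of non-destructive splitting vs. destructive trimming.
class Clip:
    def __init__(self, data, start, end):
        self.data = data                   # full underlying events
        self.start, self.end = start, end  # visible window

    def split(self, at):
        """Non-destructive split: two clips, each still holding all data."""
        return Clip(self.data, self.start, at), Clip(self.data, at, self.end)

    def trim(self):
        """Destructive trim: discard everything outside the visible window."""
        self.data = self.data[self.start:self.end]
        self.end -= self.start
        self.start = 0
        return self

notes = list(range(10))
left, right = Clip(notes, 0, 10).split(6)
print(len(left.data), len(right.data))  # 10 10 -> both keep all the data
print(len(left.trim().data))            # 6 -> hidden part now discarded
```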
  8. My reading of the documentation is that the button enables/disables whichever upsampling mode(s) you have enabled for a given plugin: http://www.cakewalk.com/Documentation?product=Cakewalk&language=3&help=Mixing.25.html
  9. Plugin Load Balancing is not always guaranteed to improve overall performance, especially when the distribution of load across cores is already good without it. I generally leave it off as I don't often use that many FX plugins in a single bin (or in a whole project for that matter), and most of my load comes from VST instruments which can't be load-balanced. The documentation provides guidelines for when it can be helpful and when it might not: https://www.cakewalk.com/Documentation?product=Cakewalk&language=3&help=AudioPerformance.14.html
  10. Maybe try renaming AUD.INI, and let CW build a new default one. Do you have any of the old SONAR demo projects and bundled plugins? I'd be curious to know what kind of performance you see with a known baseline. EDIT: A couple of other possibilities... any chance you have the Steinberg Generic Low Latency ASIO Driver installed, or have enabled the non-default 'aggressive' ThreadSchedulingModel=3 in AUD.INI (Preferences > Audio > Config File)? These suggestions come from a similar thread from last year. And a golden oldie problem generator: setting Windows processor scheduling for best performance of Background Services in Advanced System Settings.
  11. That DPC latency would be on the high side if you were trying to run a 32-64-sample buffer, but should not be a problem at 1024. In a well-optimized system, I would expect to see consistently under 300µs. Mine manages that even with WiFi enabled. Beyond that, I'd be looking at the project content. It would have to be a pretty heavy-duty project to run a 16-core processor that hard at that buffer size. Seems likely there are just some individual plugins driving the engine load that can't be load-balanced any further. I would suggest you try selectively disabling plugins, and see if anything is driving the engine load particularly hard. Then search for reports of issues with the efficiency of that particular plugin - it may just be the nature of the beast in some cases.
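To put those numbers in perspective, here's a quick sketch (assuming a 48 kHz sample rate; 44.1 kHz is similar) of how long one buffer lasts at various sizes, which is roughly the headroom a DPC spike has to fit inside:

```python
# How much time one ASIO buffer represents at a given sample rate.
# A DPC spike longer than the buffer duration all but guarantees a
# dropout; a 300 us (0.3 ms) spike is huge at 32 samples, trivial at 1024.
SAMPLE_RATE = 48_000  # Hz (assumption; adjust for your interface)

def buffer_ms(samples, rate=SAMPLE_RATE):
    """Duration of one audio buffer in milliseconds."""
    return samples / rate * 1000

for size in (32, 64, 256, 1024):
    print(f"{size:>5} samples = {buffer_ms(size):.2f} ms per buffer")
# 1024 samples is ~21.3 ms, so a sub-millisecond DPC spike is harmless there.
```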
  12. Care to share a screenshot of LatencyMon after letting it run for 10+ minutes? Also, if you hover your mouse over the Performance Meter in Cakewalk while you are hearing "peaks, clicks, etc." does it show the empty buffer count increasing?
  13. If the issue is clicks/pops/dropouts when streaming audio in real time, you probably have issues with Deferred Procedure Call (DPC) latency spiking. Check it with LatencyMon, and go after the usual suspects: WIFI and Bluetooth drivers: https://www.resplendence.com/latencymon Also be sure to disable CPU-throttling functions in BIOS like Speedstep and C-States as well as turning off all Windows power-saving measures.
  14. Oops, missed that second post. I did a quick test and confirmed the 140 samples of uncompensated delay using independent audio clips and Channel Tools with upsampling enabled for both playback and render. I also encountered some issues bouncing the master bus to a track with 64-bit DPE enabled. I reported it all to the Bakers for investigation. Incidentally, setting your three nudge values to something like 1, 12, and 24 samples makes measuring sync errors a lot easier; just count the nudges, and add/subtract as you go until the zoomed waveforms are aligned or the audio is nulling.
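The nudge-counting approach is essentially a manual search for the offset that makes the two clips null; the same idea in code, as a brute-force sketch with toy data (illustrative only, not how a DAW does it):

```python
# Find the sample offset at which two clips null - the programmatic
# equivalent of counting nudges until the waveforms line up.
def best_lag(reference, delayed, max_lag=200):
    """Return the lag (in samples) minimizing the summed residual."""
    best = (float("inf"), 0)
    for lag in range(max_lag + 1):
        n = min(len(reference), len(delayed) - lag)
        if n <= 0:
            break
        resid = sum(abs(reference[i] - delayed[i + lag]) for i in range(n))
        best = min(best, (resid, lag))
    return best[1]

clip = [0.0, 1.0, -1.0, 0.5, 0.25, -0.25, 0.0, 0.75]
shifted = [0.0] * 140 + clip  # simulate 140 samples of uncompensated delay
print(best_lag(clip, shifted))  # 140 -> recovers the delay exactly
```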
  15. I could be wrong, and would have to test myself, but I suspect there's a logical problem in using a send to an aux track instead of a separate signal source. What I'm thinking is that any plugin delay compensation applied to the source track to keep it in sync with the upsampling delay will delay the send signal as well, such that it's basically not possible to sync the two outputs unless the timing of the send is divorced from the delay of the track's output signal. I haven't completely thought through how this squares with the plugin's internal upsampling being properly compensated, but I'd be interested to see if the problem reproduces when both tracks have an independent-but-identical audio source.