Latency/Tracking Questions?


RexRed

Recommended Posts

Is there a way to assign priority to one track so I don't have to shut off my effects and everything else just to get low latency for MIDI recording, real-time guitar effects, or vocal recording on that track?

It would be nice if there were a button on a track in Cakewalk that I could click so that my MIDI or vocal recording and the effects on that track would receive priority, or rather so that everything else would not. But then that makes me think: what good would getting better latency on one track be if everything else had its latency thrown off?

I am just tossing this out there to see what others do to remedy this.

I have a 12-core Intel i9 CPU with an Nvidia 3090 graphics card. It would seem there should be a special tracking buffer optimized for latency with all of this horsepower available.

I usually click "bypass FX racks of this type" (this throws off all of my mastering volumes) and then open my audio interface settings and lower the latency setting.

This seems like a terrible way to have to do this every time I want to track another track.
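
For context, here is a rough back-of-the-envelope sketch (plain Python, purely illustrative numbers, nothing measured on my actual interface) of what lowering the buffer actually buys, which is why I end up doing this dance every time:

```python
# Rough sketch: how an ASIO buffer size maps to monitoring latency.
# Figures are illustrative; real numbers depend on driver and interface overhead.

def buffer_latency_ms(buffer_samples: int, sample_rate_hz: int = 44100) -> float:
    """One-way latency contributed by a single buffer, in milliseconds."""
    return buffer_samples / sample_rate_hz * 1000.0

for buf in (64, 128, 256, 512, 1024, 2048):
    one_way = buffer_latency_ms(buf)
    # Round trip is at least input buffer + output buffer.
    print(f"{buf:>5} samples  ~{one_way:6.2f} ms one way, ~{2 * one_way:7.2f} ms round trip")
```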

Any suggestions or comments? Thanks Cakewalk peeps!

Edited by RexRed

Thanks for your questions, BDickens.

I track, mix, and master nearly simultaneously; doesn't everyone who has my setup?

I usually track first, but there are many times when I get to the mastering phase and say, wow, a piano or other instrument would sound nice in this song.

That happens on nearly every song I make.

When I mix and master simultaneously I am only using about 1 to 6% of my CPU power.

The first Cakewalk I bought was Cakewalk for MS-DOS. What kinds of effects?

Guitar distortion/overdrive, delays, flanges, reverbs (mostly convolution), doubling, EQs, compressors, tube, console, sidechaining...

I use many effects on the tracks and busses (many of them in parallel).

I usually only use one effect on my mastering bus, a Fab Filter L2 brick-wall limiter, but sometimes I use Izotope Ozone 9 too.

I don't need to do much mastering when I mix and master simultaneously.

My audio interface is a Steinberg UR22C.

Your questions seem to miss my point...

It is about a separate latency buffer for tracking during the mastering phase.


1 hour ago, RexRed said:

It would be nice if there were a button on a track in Cakewalk that I could click so that my MIDI or vocal recording and the effects on that track would receive priority, or rather so that everything else would not. But then that makes me think: what good would getting better latency on one track be if everything else had its latency thrown off?

Exactly. You want all the tracks to be in sync with one another, and Cakewalk goes to great pains to ensure that's always the case. Consequently, total latency can never be shorter than that of the most time-consuming effect.

There is really only one reliable solution: track dry and add effects later. Even with a very fast computer, any FX that require significant internal buffering (e.g. reverb) are going to determine latency regardless of CPU speed or buffer sizes.
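
To put that in numbers, here is a rough sketch (hypothetical latencies, not measured from any real plugin) of why the slowest effect in the monitored path sets the floor no matter how small the driver buffer is:

```python
# Sketch: effective monitoring latency is the driver buffers plus the slowest
# plugin in the monitored path. All latency figures below are made up.

driver_buffer_ms = 1.5            # a small ASIO buffer, each direction
plugin_latency_ms = {
    "channel_eq": 0.0,
    "compressor": 1.0,
    "convolution_reverb": 30.0,   # internal buffering dominates
}

# Delay compensation lines every path up with the slowest plugin,
# so the reverb's 30 ms becomes the floor for the whole monitored chain.
slowest = max(plugin_latency_ms.values())
total = 2 * driver_buffer_ms + slowest
print(f"Slowest plugin: {slowest:.0f} ms; total monitoring latency ~{total:.0f} ms")
```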

Everybody's approach is going to be a little different. Personally, I am never concerned with latency at all. That's because I treat fx as part of the mixing and/or mastering phase, separate from tracking. My buffers are always at 2048. Direct monitoring through my interface lets me do that for audio inputs, which are always either recorded dry or with external fx. My solution for MIDI tracking might not suit you, though: I use an external synthesizer, recording only MIDI, and then substitute a soft synth afterward. It's a simple process and I don't have to jump through any hoops to keep everything in sync.

 


I think you need to rethink your workflow. When you say, “doesn’t everyone work this way?”

Well, no. Experience taught us to just bypass the effects while tracking. End of story. The only exception would be guitar sims; best to do those very early in the game.
I have not had any issues when tracking if all I have going on is ProChannel stuff.
Because I’m using direct monitoring, just like bitflipper, latency is not an issue for me.
Latency is more of an issue when I try to add new MIDI tracks later on, so I just hit the bypass button.


The types of effects that cause the most added latency are almost always mastering effects.
Nobody should be mastering while they track. Yeah, we all sort of start a raw mix as we roll along, but that's mixing, not mastering.
 

There is a toggle for “bypass effects of this type” that might sort of work for you. I’ve never tried it.

Edited by John Vere

3 hours ago, RexRed said:

I track, mix, and master nearly simultaneously; doesn't everyone who has my setup?

3 hours ago, RexRed said:

Your questions seem to miss my point...

My questions are exactly on point. I asked them to fill in essential but missing details.

If you are trying to use your onboard sound chip and ASIO4ALL, for example, you wouldn't have a hope in this world of achieving low latency. 90% of your issues would disappear with proper equipment. You didn't say, so I had to ask.

If you're tracking through resource-heavy effects like linear phase EQs,

3 hours ago, RexRed said:

reverbs (mostly convolution)

Or

3 hours ago, RexRed said:

Ozone

Well, those add a lot of latency. You didn't say, so I had to ask.

 

3 hours ago, RexRed said:

I track, mix, and master nearly simultaneously; doesn't everyone who has my setup?

No.

John already said it:

2 hours ago, John Vere said:

Well, no. Experience taught us to just bypass the effects while tracking. End of story. The only exception would be guitar sims; best to do those very early in the game.

 

 

3 hours ago, RexRed said:

there are many times when I get to the mastering phase and say, wow, a piano or other instrument would sound nice in this song.

You're certainly free to do whatever you want, but in my opinion you have a horribly inefficient workflow. Those are decisions best made no later than the tracking phase, preferably in pre-production. When you add instruments at such a late stage, it can throw off your balance and you end up having to redo all of your mixing & "mastering."


I understand most of your points now and your request for more background seems legit.

I spend less time "redoing all of my mixing and mastering" than I would spend constantly rendering mixes, going back to re-render multiple times, and then setting up an entirely new mastering layout. I can surgically insert things into the mix while mastering. That is efficient.

It is not really redoing my mixing and mastering; it is simply slipping in a new item, which is usually filling a spot anyway.

I choose to differ; I find my workflow efficient, and my decision-making process is not "horribly" hurt by a workflow that is dynamic rather than static. It is just my opinion.

I do not have to master out errors; I can mix them out, so my mastering is not about fixing subpar mixes.

I digress.

We need Windows to allow us multiple audio interfaces with timing sync in a project, and then the latency problems would be solved.

One audio interface could be used for mixing and one for tracking.

Voice Meeter Potato can use virtual sound sources; why can't Cakewalk do this too?

Scook, that is a great article; I will be experimenting with that feature, very helpful indeed!

Way back when everyone was calling ProTools the "industry standard", buying into the hype and ridiculously overpriced Macs, and recording on 8-track cassette, I was using Cakewalk.

I had to put up with a lot of sneers and insults for doing so. They referred to themselves as purists. lol

I do not have 8-track cassette demos; I have Cakewalk demos from all the way back to MS-DOS, when I synced Cakewalk to my Fostex reel-to-reel through SMPTE time code.

As soon as Cakewalk went MIDI, my synth samples went hard-drive based. As soon as Cakewalk went digital audio, I dropped the reel-to-reel. Everyone was buying DAT machines, 8-track cassettes, and mixers... I was buying hard drives and RAM. They sneered at me and the insults flew.

Eventually you will all be mixing and mastering simultaneously.

I don't need to use stems; I have lush projects that contain my tracking, mixing, and mastering all in one. I have total and absolute recall, and THAT is efficiency. 😁

Thanks for the help on this.

The technology for sound modules to sync is available; it should be used to solve the latency problem so we can monitor the recorded material and not the prerecorded material.

Thanks for the insight.

Because I mix and master simultaneously, my mastering in the end really only requires a brick-wall limiter and/or boost/leveling.

When I need to bring a vocal or instrument up out of the mix, I simply turn it up...

Again, that is efficient.

If you want to do great things set your face to the wind...

Edited by RexRed

14 hours ago, RexRed said:

Is there a way to assign priority to one track so I don't have to shut off my effects and everything else just to get low latency for MIDI recording, real-time guitar effects, or vocal recording on that track?

I have a couple of ways I handle this. One way is surgical but limited; the other is absolutely perfect but a bit more time-consuming to pull off.

First way - surgical but limited:

Shut down all effects using the global effects off button, then select just the effects I want to hear. When I do this, I can set the buffer size in preferences (ASIO for me) to minimal so there is almost no latency.

Now, there is a chance when you shut down the effects that things will sound "off", like vocals too low because compressors aren't engaged, etc. I deal with this in a couple of ways:

a. I almost always "Freeze" my tracks, which of course locks those track effects in and reduces CPU load. There are still bus effects to contend with, but the larger the number of tracks, the more benefit you get from freezing them. I freeze my tracks wherever possible.

b. Cakewalk allows 64-bit VSTs to "upsample". There are two settings for each VST: upsample on render and upsample on play. I personally always have upsample on render enabled and upsample on play off.

I also wanted to mention that the better you balance things before mixing, the less impact shutting off effects tends to have on the overall sound. Consider what happens if you use a compressor on vocals but crank up the gain on the compressor: when you shut off the effects, that compressor gain goes away. To counter that, I always manually raise the gain on recorded tracks so that an approximate gain is baked into the track itself. Then if I shut off the effects it has less of an impact.
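
Here is a rough sketch of that gain-baking idea (the 6 dB figure is just an example, and the math is the standard dB-to-linear conversion, nothing Cakewalk-specific):

```python
# Sketch: if a bypassed compressor was contributing, say, 6 dB of makeup gain,
# baking a similar gain into the recorded clip keeps the rough balance when
# effects are switched off for tracking. The dB value is purely illustrative.

def db_to_linear(db: float) -> float:
    """Convert a gain in dB to a linear amplitude multiplier."""
    return 10 ** (db / 20.0)

makeup_gain_db = 6.0                      # hypothetical compressor makeup gain
clip_gain = db_to_linear(makeup_gain_db)
print(f"{makeup_gain_db:.1f} dB baked in = multiply samples by ~{clip_gain:.2f}")
```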

Second way - absolutely perfect but more time-consuming to pull off:

Let's say I don't want to sacrifice sound quality during recording, and I still want very small latency. I will output the song, usually from time 0, with all effects on and the song in all its glory. Then I will create a new blank Cake project and import that song (a single audio file). Then I record any MIDI (or audio) stuff I want. I do this as well for other instruments and vocals if I really want the best of all worlds in terms of latency and sound. Since I am recording against the full version of the song, I get to have my "cake" and eat it too...

When I am done, I take the new work (e.g., MIDI) and bring it into the main project. Perfect!

Caveat: if you vary the tempo in your songs, the temporary new Cake project must include the same tempo data. If your tempo is fixed, then you are all set; just set the tempo to match the main project. (I copy the tempo by just making a copy of the original project, then stripping out all the tracks and effects. There may be a more graceful way to do it...)

Edited by Rickddd

8 hours ago, RexRed said:

I spend less time "redoing all of my mixing and mastering" than I would spend constantly rendering mixes, going back to re-render multiple times, and then setting up an entirely new mastering layout. I can surgically insert things into the mix while mastering. That is efficient.

I think you misunderstood what I was talking about. There is no "constantly rendering mixes." You lay down all your tracks. Then you mix them. Then you master.

 

8 hours ago, RexRed said:

We need Windows to allow us multiple audio interfaces with timing sync in a project

I'm pretty sure that if they could, they would.

 

8 hours ago, RexRed said:

and then the latency problems would be solved.

Or you could just not track through all those latency-inducing effects, because that's how so many pros do it. And they don't have to "master out errors" either.

 

But anyway, your workflow is why you're having the latency issues you're describing.

Edited by bdickens

it sounds like @RexRed has all his "tracks" feeding a set of "mix busses" that output to a "master" buss (for monitoring), from which ultimately a WAV file is rendered.

not a problem unless you're trying to use all those effects chained across the tracks and busses. at that point you're at the mercy of the slowest effect being used. for example, if all my effects have a 5ms latency, but i use a reverb with 30ms latency, then ALL routes will be delayed as necessary to accommodate the 30ms. so where you put that reverb will have more or less impact on synchronization of tracks. if it's on a single track - voila! all tracks will be delayed. if you put it on your master buss, then all tracks will sync to the lowest effect latency, but all will be delayed at the master buss. so turning the master buss reverb on or off (as an example) will change not only the sound (the reverb is presumably there because you want reverb) but also the overall latency of the system.
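
a little sketch of that compensation (latencies made up for illustration) - every route gets padded out to match the slowest one so the tracks stay lined up:

```python
# Sketch: delay compensation pads every route so all of them arrive as late
# as the slowest one. The per-route latencies below are invented examples.

route_latency_ms = {
    "vocal (5 ms chain)": 5.0,
    "guitar (5 ms chain)": 5.0,
    "drums + 30 ms reverb": 30.0,
}

slowest = max(route_latency_ms.values())
for route, latency in route_latency_ms.items():
    pad = slowest - latency
    print(f"{route:<22} padded by {pad:4.1f} ms so it lands at {slowest:.0f} ms with the rest")
```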

you could use the PDC button to turn off the latency compensation, but there are additional side effects from that. best bet - turn off all the stuff you absolutely don't need while tracking, including muting tracks that aren't needed.

i have something similar for workflow in my templates, except i turn off all my effects when tracking. if later in the mix i need to add or significantly change something, i disable the effects to (re)track it and then turn them back on to continue the mix. once far enough along on the mix, the master buss limiter is kicked on to simulate the master levels i'll do in Ozone later. this way the mix is balanced and will pretty much be how i expect it when i apply the final bits.

i use low latency settings for recording, and high latency for mixing. high-latency recording of soft synths once the performance is completed in MIDI. freezing to lock things in if i think i'll be doing more performances later. archive all unused synths and effects.

WASAPI is what i'm using with my built-in sound card, or the ASIO specific for my larger IO units.

 


Glenn, that was a very precise and well-thought-out explanation of exactly how it works and what I am doing.

I wish my workflow was such that my tracking was all done before I applied any effects.

I get bitten by the curiosity bug of "I wonder what this will all sound like under some gluey compression?" ...and then I am off to the races trying to catch up with the latency.

Glenn, you explained it better than I understood it myself. The part about certain effects having different latencies and shifting the entire project to a new timing was very informative.

These are things I experience but never really thought through.

I do use a lot of master busses, and when I turn on Fab Filter L2 oversampling, that is the most extreme case of all.

Turning everything off is the workaround that suits me best.

If there is still a tiny bit of latency after all of that, knowing the PDC (plug-in delay compensation, or "pretty damned quick") button exists is good info too!

I really wish a second audio interface would remedy this so I could just leave the effects on.

I am not lazy or anything and bypassing the effects is not that big of a deal. 

I have a certain lead guitar player who won't come by and record anymore.

He hates the latency, and I think it is a kind of phobia or pet quirk of his. He is not that tech-savvy, and I think he distrusts my abilities because of this. I think he assumes he would not have this issue with his Apple laptop. lol

At first it was something that threw me for a loop, and I always had to scramble to figure out how to bring the latency down.

Now I can usually get decent latency after tweaking a few things, but it is still a bit noticeable when I have a full project loaded.

I will see how the PDC button works in those situations.

I assume that if I have everything bypassed and then use the PDC button, I will not encounter any syncing issues.

Then I should be able to disable the PDC and then reenable my effects.

I will try this.

This has been a very good thread and a good experience for me on quite a complex and debated topic.

I think beginner Cakewalk users have a really hard time understanding latency and how to set it.

I do not envy them during this process of learning.

We need a magic bullet here, like enabling two sound cards in Cakewalk...

If two cards could sync and one could handle tracking, that would be so fine!

I don't think Windows will even recognize two audio interfaces at once. The last time I tried plugging in two USB audio interfaces, if I recall correctly, one did not appear in my Device Manager.

It will recognize two sound cards, but Cakewalk will only allow one to be selected in the audio preferences.

That is the part that I wish could be developed if it is even logistically possible... It seems the project would still be out of sync with the effects present in the mix. My brain is boggled by all of this. I wish I understood this more. 😉

Edited by RexRed
