
Beginner CbB and MIDI-user trying to get SSD5 sampler to play thru a TD-17 and failing.


Twub


Ok, that's gonna take some time, and it's probably a school night for you. 

Plus I never even got so far in SSD5 to tailor any kits, so I have to dive into that. 

This is GREAT!

But, ALL the kicks? ALL the snares? Wait, so I get whichever one I go with, recorded, right?

I still have to figure out why that mapper in SS wasn't working, by the way.

Maybe it was working the whole time and I was overriding the sounds.

There's all sorts of rims and clicks and stuff that have to be assigned. 

Right now T2 on the 17 is intermittently triggering a crash cymbal for some reason.

I think that's a sensitivity thing internally. All those sensitivity settings and such still apply to this 17, right?

Ok, but what this sounds like is what I was told days ago WASN'T POSSIBLE

- to set up a track in advance, in order to record the next strike of that track's dedicated trigger.  That's what I'd hoped to be able to do from the very start. Go look at my first or second post on that other thread.

OH BOY THIS IS GREAT!

 

 

 


Yeah, this is where mapping comes in - some stuff won't line up, but now that you've got to this point, it should be fairly straightforward to say, "OK, my T2 is coming in on note 38, but the synth plays a tom on note 43, so let's change the mapping for that note so it triggers the right sound."
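
If it helps to picture it, a note map is really just a lookup table from the note number coming in to the note the synth expects - something like this little Python sketch (purely illustrative, not anything SSD5 or Cakewalk actually runs; the 38-to-43 pair is just the example above):

# Hypothetical note map: incoming TD-17 note -> note the synth wants.
NOTE_MAP = {38: 43}   # T2 arrives on 38, the synth's tom lives on 43

def remap(note_number):
    # Anything not listed passes through unchanged
    return NOTE_MAP.get(note_number, note_number)

print(remap(38))  # -> 43, T2 now triggers the right tom
print(remap(36))  # -> 36, kick left alone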

Each element in SSD5 has multiple parts, so you might have Kick mic in, Kick mic out, Kick trigger, etc., and the same for snare - you might have Snare Top, Snare Bottom, Snare Ring, Snare Sample... Rather than having a separate track in Cakewalk for each one of those things (which you can do, mind you, but I wouldn't), I'd set all of the parts of those elements in the Mix page of SSD5 to logical outputs, grouping all of the stuff for Snare onto out 2, all of the stuff for Kick to out 1, etc. - you get the idea.
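
Purely to illustrate the grouping idea (the channel and output names here are made up - use whatever your SSD5 Mix page actually shows):

# Hypothetical grouping of SSD5 mixer channels onto stereo outputs.
OUTPUT_GROUPS = {
    "Out 1": ["Kick In", "Kick Out", "Kick Trig"],
    "Out 2": ["Snare Top", "Snare Bottom", "Snare Ring", "Snare Sample"],
    "Out 3": ["Tom 1", "Tom 2", "Tom 3"],
}
for output, channels in OUTPUT_GROUPS.items():
    print(output, "<-", ", ".join(channels))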

30 minutes ago, Twub said:

- to set up a track in advance, in order to record the next strike of that track's dedicated trigger.  That's what I'd hoped to be able to do from the very start. Go look at my first or second post on that other thread.

So this should work as you'd expect (if I'm understanding this right):

This is playing the live SSD5 synth and any sounds it makes will be coming out those individual track outputs we set up.  So if you hit a snare pad, SSD5 will play a snare note live. You have to make sure the Input Echo on the MIDI track is enabled and that track itself is selected (note that the track name has been clicked on and is highlighted):

[Screenshot: MIDI track in Cakewalk with Input Echo enabled and the track name highlighted]

The thing to remember is this is where audio latency comes into play.

I'm going to assume you've downloaded the correct Scarlett ASIO drivers and installed them, but if not that's super important. ASIO will give you the best possible performance with this interface. So how it works is you hit a pad, that sends MIDI into Cakewalk, Cakewalk sends that MIDI to SSD5, and then the sound of that goes out of your Scarlett outputs.

All of that takes processing time, and the lower you set your buffer size on your interface, the less delay there is between you hitting a pad and the sound coming out - that's called latency.
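
To put rough numbers on it (just back-of-the-envelope Python, ignoring the extra overhead that drivers and converters add on top), one-way latency is basically buffer size divided by sample rate:

# Approximate one-way latency for common ASIO buffer sizes at 44.1 kHz.
sample_rate = 44100
for buffer_samples in (64, 128, 256, 512, 1024):
    latency_ms = buffer_samples / sample_rate * 1000
    print(f"{buffer_samples:>5} samples ~ {latency_ms:.1f} ms")
# 64 ~ 1.5 ms, 128 ~ 2.9 ms, 256 ~ 5.8 ms, 512 ~ 11.6 ms, 1024 ~ 23.2 ms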

You'll want to go into Cakewalk > Preferences > Audio > Driver Settings and get this as low as you can before the audio futzes out. How low you get it will depend on how powerful your computer is for a start, and generally how good the ASIO driver is for your interface. This is mine:

[Screenshot: Cakewalk Preferences > Audio > Driver Settings]

 

This is workable for me playing keyboard drums (a *little* spongy feeling) but everyone feels this delay differently. Live guitars through effects need this to be dropped lower for it to feel OK for me, for example. If you're used to an immediate response from your drums, how you set this will be important.

After this is all happening and triggering sounds live, you can adjust the volume sliders on each one of those tracks to mix them how you like, or put effects in on each track like EQ or whatever... it's as if you have a live drummer at this point, and you're mixing the kit.


1 hour ago, Twub said:

But, ALL the kicks? ALL the snares? Wait, so I get whichever one I go with, recorded, right?

MIDI can be a bit daunting when first learning, but a big advantage to MIDI is that if you capture a MIDI recording, you can modify the sample sounds after the performance during the editing phase. With a VSTi it is as simple as changing/modifying the kit piece in SSD5 (swap snares, change the kick parameters, etc.). It is not until you record the audio output that you are "locked in" as it were. @Lord Tim is giving you a lot of good advice here, so I don't want to step on what he is doing. Just keep in mind that you are working with two distinct yet interrelated things: 1) recording your MIDI performance from the TD-17 and it being mapped correctly to fire SSD5 (you can edit the MIDI as you see fit after performing as well), and 2) the audio performance of SSD5, which is quite similar to what you have done to tailor your TD-17 kit.


^^ yep, this is super important to understand.

MIDI is just data that instructs a synth what to play, e.g. SSD5.  But that synth is live, so it's like telling a drummer "play this part exactly the same every time," and then you can decide to swap out the snare on his kit for a different one at any time (watch out for flying drum sticks).
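
For example, a single drum hit on the wire is only a few bytes of data (using the General MIDI convention of drums on channel 10 and note 38 for a snare here, just as an illustration - your TD-17/SSD5 mapping may differ):

# A MIDI Note On is three bytes: status, note number, velocity.
# 0x99 = Note On on channel 10 (the usual drum channel), 38 = GM snare, 100 = how hard it was hit.
note_on = bytes([0x99, 38, 100])
print(list(note_on))  # [153, 38, 100] - no audio in there at all, just an instruction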

If you keep this all in the "MIDI playing a softsynth" domain, that's all you need to know. But if you want to actually have this stuff bounced down as separate tracks with those sounds, that's a different additional step.


8 hours ago, Twub said:

Ok, but what this sounds like is what I was told days ago WASN'T POSSIBLE

- to set up a track in advance, in order to record the next strike of that track's dedicated trigger.  That's what I'd hoped to be able to do from the very start. Go look at my first or second post on that other thread.

When you get to the end of this you'll see that everything you were told you can do - you CAN do.  You just have no understanding of what you're doing... yet.  When Tim gets you THERE... you'll see how the puzzle fits together, and you'll be sad you didn't know all of this for all the years you've been recording audio off your drum module.


Yeah, it's hard to remember sometimes that not everyone has this inherent understanding of what MIDI does.

I've been doing MIDI sequencing since... wow, 1986? [turns into a skeleton and crumbles to dust] ... so the idea of it all being data, driving synths, choosing channels, note mapping, etc. is all second nature, and it's tricky to take a step back and not skip over important stuff that you just assume people would understand - but without all of that prior exposure, how could anyone know that stuff?

And if you've purely worked with audio only, understanding the difference between audio and MIDI, how that pertains to softsynths, and exporting that to discrete tracks, etc. is a mind-freak to say the least! Even some of the best video tutorials (like the excellent ones posted in the previous thread) rightly skip over some steps because it's fairly common knowledge if you've ever seen a DAW or MIDI sequencer before, but for a complete beginner it's a wall of confusion a lot of times. It's easy to get stuff mixed up and conflate terms or processes.

We'll get there :)


9 hours ago, mettelus said:

With a VSTi it is as simple as changing/modifying the kit piece in SSD5 (swap snares, change the kick parameters, etc.).

I remember a tutorial from Session Drummer (the original) on how to use SD as your lead guitarist.


10 hours ago, Lord Tim said:

The thing to remember is this is where audio latency comes into play.

I'm going to assume you've downloaded the correct Scarlett ASIO drivers and installed them, but if not that's super important. ASIO will give you the best possible performance with this interface. So how it works is you hit a pad, that sends MIDI into Cakewalk, Cakewalk sends that MIDI to SSD5, and then the sound of that goes out of your Scarlett outputs.

Here's a ton of miscellaneous stuff that needs to be said.

Yes, I've had the ASIO driver installed since I first hooked up the Scarlett Solo. I've heard about latency from the start, but I've yet to experience ANY latency. Initially I thought that having an interface with a direct monitor setting kept latency at bay.

Understand what brought me to Cakewalk in the first place: recording songs remotely with the rest of my band, and having the options to apply treatments to our tracks in order to end up with a REALLY polished sound in the final product.  We have about 15 projects going in CbB right now. (Just tracks - we haven't even delved into final mixing and mastering yet.)

And I was recording audio on my end. We all were, this band of mine.  The Scarlett Solo has two inputs: one XLR and one instrument. They have differing impedances, and I figured I'd fry something if I tried plugging one of my outputs into the XLR, so I'd been going into the Scarlett from my L Mono output. I took the track after I'd recorded it and converted it to mono.  Panning my kit left and right was a thing I never did, mostly because of gigs. I play clubs in smallish rooms where the audience is close. Even big club rooms really aren't that big.

I figured if I ever played Wembley Stadium, it might be cool to hear the kit pan from (my) right to left with some big descending fill across the entire kit (I'm left-handed), but in some smallish room, why would I want parts of my kit less audible than others to folks sitting on, say, the right side of the room?

But anyway, soon after I started putting out these drum tracks, the old problems surfaced again - how to mix the kit from two outputs. 

For me, most gigs were fraught with fear: some new soundguy I hadn't worked with yet was going to have to be told that I was the mix. That I was going to send him an EQ'd signal to ONE channel, and he HAD to understand that my channel's freqs were to be kept at zero on his board, and how he could tweak in tiny increments, but if he wanted more highs on my snare and twisted that knob indiscriminately, he was going to frag everything - my cymbals were going to clip high, the works.  And brother, don't EVER even TOUCH the MIDS! On and on and on. I can count on one hand the gigs I played where I thought I actually sounded good. That's a real shame.

I'd have people all the time telling me I sounded great, but I didn't. Most civilians are incapable of even knowing the differences in sound quality and would clap and dance to a jack-hammer. 

Anyway, latency: this is what this all comes down to now. It seems I no longer even need my audio outs from the TD-17. But with that said, now the Scarlett Solo with its Direct Monitoring, which used to be right at the front of the signal path, seems to be at the end of it. I don't know what to expect.

When those raw sounds from the SSD5 were FINALLY audible to me last night, I detected no latency whatsoever as I noodled about. 

I made that little recording where I first heard the SSD5 and went back and put EQ on the audio track. It sounded very promising. In fact I was (guardedly) thrilled. 

If treatments and effects and alterations and tweaks to the kits contribute to Latency, then I might be in good shape here. I'm a simple soul. I'm not looking right now to do much more than see if I can emulate, say, the fat, wet thuds of Bill Ward's super-distinctive drum-sound, or capture that wonderful organically produced ambience of Bonham's "When The Levee Breaks". By the way, "Nearly Lost You" by the Screaming Trees has just beautifully recorded drums from start to finish. That's all I'm after. 

One good kit, for me, will be sufficient for 70% of the material we cover. All I really expect to do is add or subtract reverb or ambience here and there. Maybe tune a snare up or down occasionally. 

Covering, say, Rush, or the Police? That's coming - that's going to be a different animal altogether. 

And this whole time, I'm hoping that things become EASIER for me with MIDI. Lord Tim's statement about saving the Template is music to my ears.  I really find that I enjoy the process of tweaking and refining my sound, but I want to spend as much time as possible playing these things and recording tracks, rather than endlessly fighting the "robbing Peter to pay Paul" battle with my overall equalization and sound.

After 30 years, I deserve it, I think. I paid my dues. 

You guys are just great, by the way. I'm so happy I found you.

Lord Tim - I can't say enough thanks.

So, now I have to go dive in - read and reread all this thread, and encounter the next obstacle, whatever that may be. 

I see there are two new hits on this thread while I've been babbling away here, which I haven't seen yet.

 


2 minutes ago, Twub said:

Anyway, latency: this is what this all comes down to now. It seems I no longer even need my audio outs from the TD-17. But with that said, now the Scarlett Solo with its Direct Monitoring, which used to be right at the front of the signal path, seems to be at the end of it. I don't know what to expect.

The Direct Monitoring stuff really is not used at all in this scenario - in fact, you should just turn the balance so you hear only the sound from Cakewalk. This is why latency is so important - every sound you'll hear will be going through Cakewalk and the synths and effects in it, so if your buffers are set too high, it'll make stuff entirely unplayable.

One thing I'll give the Scarlett interfaces - the drivers are great for the price. You can really push the latency pretty low before it freaks out on you!


11 minutes ago, Lord Tim said:

This is why latency is so important - every sound you'll hear will be going through Cakewalk and the synths and effects in it, so if your buffers are set too high, it'll make stuff entirely unplayable.

I don't know what to expect. This PC is very old and that's why it's relegated to the basement. If I encounter unplayable latency conditions, then all this is wasted effort and it's going to break my heart. I don't need a whole new nightmarish battle, with an insurmountable enemy. 


12 hours ago, Lord Tim said:

You'll want to go into Cakewalk > Preferences > Audio > Driver Settings and get this as low as you can before the audio futzes out. How low you get it will depend on how powerful your computer is for a start, and generally how good the ASIO driver is for your interface.

 

 

A quick look at that setting shows me that I'm here:

 

[Screenshot: Twub's current driver settings]


 

 

Ok, so the latency caused by these multiple tracks separating my kit components is hopefully directly related to the number of tracks involved?

If that's the case, I could combine, say the three toms as one track, cymbals in another and lessen my latency?

Find a workable happy medium that way?

Please?

Toms, for instance, in my experience, unless they are very small or very huge, respond nicely to essentially the same EQ settings, with cuts in the low mids and maybe a boost around 4k for some attack. I'd probably be quite satisfied to treat them all as one, if that could make a difference.

Would that help? I'm scared again. I can't play with latency, and I'm not about to try to just "get used" to an acceptable amount.

The only way this is going to work is: OK, technically there's latency, but I can't detect it. 

I just know this machine of mine is going to return a signal 14 minutes later, once I get these tracks all set up. 

 

 


With a 22ms round trip I can't imagine you're not experiencing that as latency already.

This is typically what I prefer, but I can usually live with anything less than a 10ms round trip with drums.  In my experience, adding tracks so that the VST can route drum sound to different channels won't add to a latency problem.

 

[Screenshot: preferred driver settings with a lower buffer]


And yes, you can combine your toms to one channel if you wish. Here's a snapshot of my drum breakout. I send all toms to one stereo channel. All overheads to the same stereo channel. And one for rooms.

 

[Screenshot: drum breakout output routing]


Nevertheless, none detected so far. 

I'm setting up these routing tracks now, and granted, I've only added the snare, the three toms, and the kick.

(Because I don't know how to use SSD5 yet and I can't find the cymbals in the SSD5 mix screen.)

But no delay. I went back and added EQ to each, then reverb, to see if that would up the latency. 

No latency yet. 

A couple of things, though: what does that crazy high sampling rate of yours (compared to my 44100) buy you?

And I can't get CbB to save my EQ presets, either - in a bank, not in a bank, nothing.

That's a pretty big deal and a show-stopper if I can't recall EQs for these instruments. 


3 hours ago, Twub said:

But anyway, soon after I started putting out these drum tracks, the old problems surfaced again - how to mix the kit from two outputs. 

This is the MIDI versus audio thing again, but you also mentioned tracking versus mixing, so I will hit those quickly.

For tracking (recording MIDI from the TD-17), you want buffers lower (may need to shut FX off globally with the "E" hotkey) so that latency is minimal. This is so you can capture your MIDI performance with kit pieces on separate tracks (preferred).

For mixing (post production), you are now working with editing MIDI performance in Cakewalk, and tailoring the audio from SSD5 to fill the stage appropriately. During this part, you will want buffers higher (you will get latency and should not be tracking/recording during this stage) and be focused on the mix. If you do need to track again during this, the global FX bypass can be helpful, but mixing is more the relaxing, drinking coffee and focusing on post production part.

Keep your tracking and post-production stages separate, and realize that the buffer size varies between them as well.


Yes, I see.  I'll certainly take note of "FX contributing to latency" and try to adjust accordingly when tracking. 

Like I said though, so far I don't seem to have any at this early stage of separating these instruments to tracks. 

When I said "problems with mixing the kit from two outputs, I probably should have said "EQing the kit from two outputs. " That's what I meant anyway. The same issues surfaced in this new studio environment that I had been seeing for years onstage. You just cant hope to EQ a kit that's only on one channel.


Just to clarify, the number of tracks really doesn't have much to do with latency (with this number of tracks anyway, monster sized projects might) - that's all a function of your buffer slider primarily. The lower that can go, the less latency you'll have, but the more CPU usage you'll have.

Most effects like the built-in ProChannel EQ or Sonitus EQ won't add anything noticeable. It's when you get into things like mastering compressors or special linear-phase EQs that you'll get the biggest additional latency, because they have to look ahead in the project to work. You shouldn't use those while tracking or playing live synths anyway, though.
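
Just for a sense of scale (the lookahead figure here is made up for illustration - plugins report their own delay to the DAW), a linear-phase EQ needing a few thousand samples of lookahead adds far more delay than any sensible ASIO buffer:

# Hypothetical plugin with 4096 samples of lookahead at 44.1 kHz.
sample_rate = 44100
lookahead_samples = 4096
extra_ms = lookahead_samples / sample_rate * 1000
print(f"{extra_ms:.0f} ms extra")  # ~93 ms - fine for mixing, hopeless for playing live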

You might need to show us how you're putting the EQ on these tracks so we can see why nothing is saving.

The high sample rates are probably not something you'll be able to pull off on an older computer. They might give you slightly better audio quality, but at the expense of a LOT of extra CPU grunt, which will bite you if you need that headroom for a low buffer size to keep your audio latency down.
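
Roughly speaking (same back-of-the-envelope math as before), the same buffer size does give you lower latency at a higher sample rate, but the computer has to fill that buffer far more often:

# Same 256-sample buffer at two sample rates: latency drops, CPU work roughly doubles.
buffer_samples = 256
for sample_rate in (44100, 96000):
    latency_ms = buffer_samples / sample_rate * 1000
    buffers_per_second = sample_rate / buffer_samples
    print(f"{sample_rate} Hz: {latency_ms:.1f} ms, {buffers_per_second:.0f} buffers to fill per second")
# 44100 Hz: 5.8 ms, 172/s; 96000 Hz: 2.7 ms, 375/s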

Now that you've gotten this far in, a couple of @JohnnyV's excellent video tutorials for adding in effects or exporting audio will likely help you in a huge way!

 


1 hour ago, Twub said:

Yes, I see.  I'll certainly take note of "FX contributing to latency" and try to adjust accordingly when tracking. 

Like I said though, so far I don't seem to have any at this early stage of separating these instruments to tracks. 

When I said "problems with mixing the kit from two outputs, I probably should have said "EQing the kit from two outputs. " That's what I meant anyway. The same issues surfaced in this new studio environment that I had been seeing for years onstage. You just cant hope to EQ a kit that's only on one channel.

I am not familiar with SSD5 specifically, but most of that (if not all) can be done from inside the VSTi. Each kit piece should have its own EQ, pan, fader, etc., as well as overhead and room channels, and often their own built-in FX racks. Many VSTis also offer multiple audio outputs (almost like their own mini-DAW), allowing you to record the audio to separate tracks or mix entirely within the VSTi, depending on your preference. When learning that aspect, you could focus on just learning SSD5 and use MIDI samples that either come with SSD5 or from the loop library (drag/drop into MIDI tracks) to learn the routing and functionality of SSD5 itself.

