
In Your Face BASS - What's your strategy?



13 hours ago, musikman1 said:

So what do you usually look for as far as seeing a freq range that can be eliminated without ruining the sound?  Just curious..

 

13 hours ago, musikman1 said:

I would think that some tracks don't interfere at all and don't need any freq's gutted. So how to tell which instruments need gutting?

These 2 questions go hand in hand, really.

What I would start with is a bit of common sense about what the sound is.

If it's something like a high string patch, cymbals, vocals, or anything that typically doesn't have a lot of "thud" in it, high-pass those. Again, don't be scared to be aggressive with how high you take the high-pass up; you'd be surprised at how little you miss in a mix. As a great example, in the mixes I linked to above, those guitars sound super thick with a lot of low end, but in reality, if you solo'd them, they're not particularly bass-heavy at all; your ears fool you into thinking they are because of the other instruments around them. If they *did* have extended low end, the mix would end up muddy.

So narrow down your tracks to ones where you know you'll have low frequencies to deal with: kick drums, bass, low synth hits, sub drops, drum loops with kick drums or low toms in them, etc.

Then solo each of those, look at the spectrum, particularly in the areas I circled in my post further back, and take note of where it's sitting. For a lot of those tracks, rolling off the sub-sonics is a good idea because that's not where the sound is hitting you. A lot of bass sounds, like Erik mentioned above about Paul McCartney's bass, aren't actually extending that far down, so you can safely roll off the super lows. Really, for anything other than sub drops or special effects, it's probably a good idea to get rid of the sub-sonics on everything unless you actually find you miss them.
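If you like to see what that kind of low cut actually does, here's a rough Python/SciPy sketch (a plain Butterworth high-pass on a made-up stand-in signal, not a model of any particular EQ plugin; the cutoff values are just the starting points discussed above):

```python
# A minimal sketch of a low cut / high-pass, assuming a mono track as a float
# NumPy array. Plain Butterworth filter; the signal below is just a stand-in.
import numpy as np
from scipy import signal

SR = 44100  # sample rate in Hz

# Stand-in for a real track: a 30 Hz rumble plus a 110 Hz bass note.
t = np.arange(SR * 2) / SR
track = 0.5 * np.sin(2 * np.pi * 30 * t) + 0.5 * np.sin(2 * np.pi * 110 * t)

def low_cut(audio, cutoff_hz, sr=SR, order=4):
    """High-pass the audio, rolling off everything below cutoff_hz."""
    sos = signal.butter(order, cutoff_hz, btype="highpass", fs=sr, output="sos")
    return signal.sosfilt(sos, audio)

cleaned = low_cut(track, 30.0)   # start around 30 Hz for sub-sonic junk...
braver = low_cut(track, 100.0)   # ...and push higher on non-bass tracks until you miss it

for name, y in [("original", track), ("30 Hz cut", cleaned), ("100 Hz cut", braver)]:
    print(f"{name:10s} RMS = {np.sqrt(np.mean(y ** 2)):.3f}")
```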

Have a good listen to evolving sounds - when the sound sweeps through its range (usually some kind of filter sweep), are the bass frequencies poking out in the spectrum? That might be something to address, either with EQ or compression.
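If squinting at a real-time analyzer is hard, here's a rough offline sketch of the same check (NumPy/SciPy, made-up pad signal; this isn't SPAN, just an STFT measurement of how much of each frame's energy sits below 200 Hz):

```python
# Rough sketch: track how much energy sits below ~200 Hz over the life of an
# evolving sound, to spot the moment a filter sweep pushes the lows out.
import numpy as np
from scipy import signal

SR = 44100
t = np.arange(SR * 4) / SR
# Stand-in for a pad with a slow sweep: a 60 Hz component that fades in and out.
pad = (np.sin(2 * np.pi * 60 * t) * (0.5 + 0.5 * np.sin(2 * np.pi * 0.25 * t))
       + 0.3 * np.sin(2 * np.pi * 800 * t))

f, times, Z = signal.stft(pad, fs=SR, nperseg=4096)
power = np.abs(Z) ** 2
low = power[f < 200].sum(axis=0)            # energy below 200 Hz, per frame
ratio_db = 10 * np.log10(low / (power.sum(axis=0) + 1e-12) + 1e-12)

worst = times[np.argmax(ratio_db)]
print(f"Low end is most dominant around {worst:.2f} s "
      f"(low-to-total energy ratio: {ratio_db.max():.1f} dB)")
```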

Almost everything will benefit from a little bit of a cut in the mud frequencies in general, but look for anything poking out there in particular.

Now that you've checked everything individually, bring each thing back into the mix one by one, starting with your drum loops. 

How does the spectrum look now? Is it sounding too bass heavy? If it's too much, it's definitely your drums; start there by EQing out the low end so it sits better.

Then bring in your bass. This is likely where what sounded like a "just right" amount of bass will combine with the kick drum of the drum loop and start to overpower things. This is where you need to choose whether to EQ the bass to drop those parts, or to do some more advanced stuff.

A lot of what we hear as how loud the bass is, is actually much higher up the frequency spectrum - maybe you need to drop the bass volume in general but then give it a boost at 900hz or thereabouts, rather than leaving the bass louder and dropping the bass frequencies of it. This is where that "adding hair with distortion" trick is great for increasing punch and presence but without making the low end too crazy.
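To make those two moves concrete, here's a rough sketch in Python: a textbook RBJ "cookbook" peaking EQ for the upper-mid boost, plus a dash of tanh saturation for the "hair." The 900 Hz centre, 4 dB boost, drive and mix amounts are just illustrative starting points, not anyone's preset:

```python
# Rough sketch of "turn the bass down, boost the upper mids, add some hair":
# an RBJ-cookbook peaking EQ boost near 900 Hz plus gentle tanh saturation.
import numpy as np
from scipy import signal

SR = 44100

def peaking_eq(audio, f0, gain_db, q=1.0, sr=SR):
    """Boost (or cut) gain_db at centre frequency f0 with the RBJ peaking EQ."""
    A = 10 ** (gain_db / 40.0)
    w0 = 2 * np.pi * f0 / sr
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return signal.lfilter(b / a[0], a / a[0], audio)

def add_hair(audio, drive=3.0, mix=0.3):
    """Blend in a little tanh saturation - harmonics add apparent punch."""
    return (1 - mix) * audio + mix * np.tanh(drive * audio)

t = np.arange(SR) / SR
bass = 0.6 * np.sin(2 * np.pi * 55 * t)          # stand-in DI bass note (A1)

processed = bass * 0.7                           # drop the overall level a bit
processed = peaking_eq(processed, 900.0, 4.0)    # ~4 dB boost around 900 Hz
processed = add_hair(processed)                  # distortion "hair" for presence
print("peak before:", np.abs(bass).max(), "after:", np.abs(processed).max())
```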

Then keep going with each one of your other bass heavy instruments and see what the additive results of those are in the mix.

Really, after a while this becomes second nature and you can cut a lot of this process in half, but if you're still kind of learning how to feel when something is too much (and it *is* hard to judge in the low end, especially if your monitors or room aren't set up for it) then being methodical is the key here.


10 hours ago, Lord Tim said:

A lot of what we hear as how loud the bass is, is actually much higher up the frequency spectrum - maybe you need to drop the bass volume in general but then give it a boost at 900hz or thereabouts, rather than leaving the bass louder and dropping the bass frequencies of it. This is where that "adding hair with distortion" trick is great for increasing punch and presence but without making the low end too crazy.

A 900 Hz bass boost kind of surprised me, since that is quite a bit higher in the spectrum, but you know it works, so I'm looking forward to trying that out!  I have a couple of distortion plugs; one is SaturatorX by IK Multimedia, though right now I don't know which setting I'd use, so I'll have to try a few and see how they sound.  SaturatorX: https://www.ikmultimedia.com/products/trsaturatorx/?pkey=t-racks-single-saturator-x

I'm beginning to better understand this whole concept for the most part; thank you so much for taking the time to explain this, very helpful!  I will have a couple of hours each afternoon beginning tomorrow, so I'm going to begin the process of checking each instrument, seeing what the SPAN looks like, and cutting where it seems needed. I figure as I go through each track I'll throw on a high-pass, cut the sub frequencies, then move along from there, listening with the high-pass EQ off and then with it on, going back and forth as I gradually roll off more low end. That way I can compare and make sure whatever instrument I'm working on doesn't start to lose its original fullness.  Then I can make adjustments after listening to it in the full mix later.

I think I watched a video a while back on getting clean tracks with distorted guitar and acoustic guitar.  I have both on the track I'm working on, so I already rolled off the low end on those, and it made a big difference as far as the guitars cutting through better, especially the acoustic guitar, which was originally getting lost in the mix.  It was surprising to me how, after high-passing them, the guitars sound so thin when soloing the tracks, but in the mix they sound so full!

Btw, what did you think of the SPAN screenshot I posted for my own mix as compared to the one I posted for the commercial CD track?  To me there were some differences, but the differences didn't seem drastic.  Checking mixes with SPAN is new to me, so I couldn't really tell where the differences were between my mix and the commercial CD mix.   I do know mine wasn't mastered, and the CD was, but just looking at the SPAN image they seemed relatively close in appearance to me.  Were you able to see anything going on there?  Just curious.


It's always hard to tell from a static image, without hearing the source material or knowing how loud it is, where the transient peaks are, what the magnification of the waveform is... all of that stuff, so I don't really have a lot that I can offer, but on a quick glance it generally looks fine. I'd personally have more subsonics in there, but if you're not sure how to treat those properly yet, losing them entirely is a FAR better thing than having them ruin your mix by not being treated correctly.

 


Wanted to mention: one of my favorite tools for getting rid of collisions is Trackspacer. The way it works is that if the sound of one instrument, we'll say piano, is obscuring another, we'll say guitar, you put Trackspacer on the piano track and send the guitar track to its sidechain input. Et voila, whenever the guitar is playing, its frequency range is reduced in the piano track.

So if we think about what Trackspacer is doing, we can do it manually by deciding which instrument we want to emphasize in each frequency range. Let the piano take the mids, let the guitar take the upper mids.
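For the curious, here's a very rough single-band sketch of that ducking idea with made-up signals. It's only an illustration of the concept (duck a shared band in one track while the other track is active), not how Trackspacer actually does it:

```python
# Very rough single-band sketch of sidechain frequency ducking: when the
# "guitar" has energy in a chosen band, dip that band in the "piano".
import numpy as np
from scipy import signal

SR = 44100
t = np.arange(SR * 2) / SR
piano = 0.5 * np.sin(2 * np.pi * 1000 * t)                  # stand-in piano
guitar = 0.5 * np.sin(2 * np.pi * 1100 * t) * (t > 1.0)     # guitar enters at 1 s

BAND = (800, 2000)   # the range we've decided the guitar should own

def bandpass(x, lo, hi, sr=SR):
    sos = signal.butter(4, [lo, hi], btype="bandpass", fs=sr, output="sos")
    return signal.sosfiltfilt(sos, x)    # zero-phase, so the subtraction below stays in phase

# Envelope of the guitar's energy in that band, smoothed and normalised 0..1.
env = np.abs(bandpass(guitar, *BAND))
env = signal.sosfilt(signal.butter(2, 10, fs=SR, output="sos"), env)   # ~10 Hz smoothing
env = np.clip(env / (env.max() + 1e-12), 0, 1)

DEPTH = 0.7                                        # how hard to duck (0..1)
piano_band = bandpass(piano, *BAND)
piano_ducked = piano - DEPTH * env * piano_band    # dip the shared band only while guitar plays

print(f"piano band level before the guitar enters: {np.abs(piano_ducked[:SR]).max():.2f}")
print(f"piano band level after the guitar enters:  {np.abs(piano_ducked[SR:]).max():.2f}")
```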

As Tim mentioned, our ears/brain fill in the "missing" information from the track that's had the cuts applied to it.

We haven't gone much into compression yet, but creative use of that (rather than just using it to smooth out poky transients) can really make a track pop, while at the same time helping to keep it out of the way of the other tracks. On bass, start with about 7ms attack, 50ms release, 4:1 ratio. Tune threshold until the gain reduction meter is peaking at around 5dB. Use 0% or "hard" knee. Then play around with the release until you hear the bass "swinging" with the rhythm. I suggest not using a "vintage emulation" type compressor for this as you're learning, because the quirks they add can obscure things.
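If it helps to see what those knobs actually do, here's a rough textbook sketch of a hard-knee feed-forward compressor using those starting values (NumPy only, stand-in bass note; real plugins add detector and knee refinements on top of this, so treat it as the arithmetic, not the sound):

```python
# Rough textbook sketch of a hard-knee feed-forward compressor using the
# suggested starting values: 7 ms attack, 50 ms release, 4:1 ratio.
import numpy as np

SR = 44100

def compress(audio, threshold_db=-20.0, ratio=4.0,
             attack_ms=7.0, release_ms=50.0, sr=SR):
    atk = np.exp(-1.0 / (sr * attack_ms / 1000.0))
    rel = np.exp(-1.0 / (sr * release_ms / 1000.0))

    level_db = 20 * np.log10(np.abs(audio) + 1e-9)
    # Hard-knee static curve: how many dB over threshold, reduced by the ratio.
    over = np.maximum(level_db - threshold_db, 0.0)
    target_gr = over * (1.0 - 1.0 / ratio)        # desired gain reduction in dB

    gr = np.zeros_like(audio)                     # smoothed gain reduction
    for n in range(1, len(audio)):
        coeff = atk if target_gr[n] > gr[n - 1] else rel
        gr[n] = coeff * gr[n - 1] + (1.0 - coeff) * target_gr[n]

    return audio * 10 ** (-gr / 20.0), gr

# Stand-in bass note with a loud pluck at the start.
t = np.arange(SR) / SR
bass = np.sin(2 * np.pi * 82 * t) * np.maximum(0.2, np.exp(-6 * t))

out, gr = compress(bass)
print(f"max gain reduction: {gr.max():.1f} dB (tune the threshold until this sits around 5 dB)")
```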

Many people have learned their compression chops using Meldaproduction's free MCompressor. Don't let the price fool you, it's one of the best workhorse compressors out there (the only thing it lacks is a mix control for easy parallel compression, although there are fairly simple solutions to this). There's no compressor I would recommend more when learning compression. The sonitus fx compressor that comes with CbB sounds good, but it has that tiny display.


11 hours ago, Lord Tim said:

I'd personally have more subsonics in there, but if you're not sure how to treat those properly yet, losing them entirely is a FAR better thing than having them ruin your mix by not having them treated correctly.

Thanks LT, we've been talking mostly about cutting out the sub lows, so you're saying there should be some subs left in and "treated".  I wouldn't know which subs to leave in and/or how to treat or process them, so until I do I'll just use the info that you guys have been sharing with me to try to first cut the mud and separate the frequency ranges of each instrument.  I would ask though, if our ears can't hear the sub lows, would the purpose of leaving some in there be just for when listening on a surround system that has a sub speaker?

Thanks Starship, I'll keep that TrackSpacer in mind next time I have a few dollars in my music budget. 

1 hour ago, Starship Krupa said:

We haven't gone much into compression yet, but creative use of that (rather than just using it to smooth out poky transients) can really make a track pop, while at the same time helping to keep it out of the way of the other tracks. On bass, start with about 7ms attack, 50ms release, 4:1 ratio. Tune threshold until the gain reduction meter is peaking at around 5dB. Use 0% or "hard" knee. Then play around with the release until you hear the bass "swinging" with the rhythm. I suggest not using a "vintage emulation" type compressor for this as you're learning, because the quirks they add can obscure things.

Many people have learned their compression chops using Meldaproduction's free MCompressor. Don't let the price fool you, it's one of the best workhorse compressors out there (the only thing it lacks is a mix control for easy parallel compression, although there are fairly simple solutions to this). There's no compressor I would recommend more when learning compression. The sonitus fx compressor that comes with CbB sounds good, but it has that tiny display.

Thanks for giving me a road map for some comp settings.  I typically try to find a compressor preset that works and tweak from there if needed. A lot of the presets are named for what they're best used on, which helps.  I will likely download and try that MCompressor, but I also have quite a few already.  I'm not sure if you're familiar with any of these, but if you recognize any and can make a recommendation on which to use, that would be great.  With the T-Racks 5 plugin suite I have the TR5 Bus Compressor, TR5 Classic Compressor, TR5 Opto Compressor, TR5 Precision Compressor/Limiter, and TR5 White Channel (which is a compressor & gate/expander & EQ combo). With the Waves bundle I have C1 Comp, H Comp, and V Comp.  Some seem a bit more complicated than the rest, so again, to save time I usually check out the presets as a starting point.  A couple of these have the word "Bass" in the preset name, which are in the screenshots, so you can see the settings they start you off with for bass compression.  It seems these Bass preset settings don't line up with the starting settings you mentioned above: the TR5 comp attack is 67.6 ms where you recommended 7 ms, and the release is 346.0 ms where you recommended 50 ms.  I'm sure it varies with each individual's preference, so the differences probably don't mean much, I just noticed the numbers were quite different.

 

T-Racks 5 Classic Comp.jpg

Waves V Compressor.jpg


I'd say that if you want to use something that's already in your locker, go with Waves C1. Turn off the EQ function, and pay attention to what the screen with the curves on it is showing you. It has a similar display to sonitus fx and MCompressor. I like the one in MCompressor because it's BIG.

I love the sound of the T-Racks processors I have, but the metering in the compressors is pretty slavish to the vintage vibe they're going for. Not the best for learning on.

Re: MCompressor, it's part of the Meldaproduction FreeFX Bundle, which includes 36 other top quality FX and utilities. I use MStereoscope on every project, and MNoiseGenerator and MOscillator are great for equipment setup. I mentioned MAnalyzer earlier. They're all part of the bundle, all free to use.

Okay, yes, the settings on those "bass" presets are way "slower" than the ones I suggested. They're set up for the thing that compressors were originally designed for, which is to control the dynamic range. Bass notes can leap out at you in a bad way, bass players put dynamics into what we play with slaps, picking harder or softer, etc. Sounds great "in the room," but not so much for a recording where you don't want a sound to dominate. I know, counterintuitive, if you're looking for the bass to "pop," why reduce the dynamic range?

In short, reducing the momentary level spikes allows you to set the bass track's overall level higher without it poking your ears every time a high-velocity note sounds.

My quest when I started recording vocals was "how the heck do these mix engineers get the lead vocals to sit out in front of the speakers like a hologram?" I played my engineer friend a track from Elliott Smith's XO to demonstrate. He had me do two things to my vocal track. First was smoothing compression; second was sweeping for the "honk," which has become a topic of controversy because people misuse it. Sweeping for honks is a way to find obnoxious frequency buildups caused by imperfections in the recording chain and the nature of instruments and voices. You listen to your track for something that's unpleasantly poking out (and I emphasize "unpleasantly," because part of the misuse is that some people think you have to do it for any frequency that sticks out, and that's not true; just the one(s) that are clashing and poking the ear), exaggerate it first to make sure you've found the right one, then drop it a bit with the EQ. My rule is no more than 2 honk notches on a given track.

It sounded great, a big step toward what I was looking for, but it seemed counterintuitive: why would I take things away from an instrument (my voice) that I wanted to stand out. Well, duh, if you knock down the spiky things, you can crank it higher in the mix without it sounding obnoxious.

Get the concept? We tone down the spiky things so that we can turn it up and have it not hurt.
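Going back to the honk sweep for a second, here's a rough sketch of the arithmetic behind that hunt: find the narrow peak that sticks out furthest above the track's spectral trend, then dip it a little. The stand-in "vocal" and the 1.2 kHz resonance below are made up; doing it by ear with a boosted, swept band is still the actual method.

```python
# Rough sketch of "sweeping for the honk" done analytically: find the narrow
# peak that sticks out furthest above the smoothed spectrum, then pull it down
# a bit (a gentle dip, not a surgical notch).
import numpy as np
from scipy import signal

SR = 44100
t = np.arange(SR * 2) / SR
rng = np.random.default_rng(0)
# Stand-in "vocal": broadband noise with an obnoxious resonance near 1.2 kHz.
voice = rng.normal(0, 0.1, t.size) + 0.3 * np.sin(2 * np.pi * 1200 * t)

freqs, psd = signal.welch(voice, fs=SR, nperseg=8192)
psd_db = 10 * np.log10(psd + 1e-18)
trend = np.convolve(psd_db, np.ones(64) / 64, mode="same")   # coarse spectral trend

band = (freqs > 200) & (freqs < 5000)                        # where "honk" tends to live
prominence = np.where(band, psd_db - trend, -np.inf)
honk_hz = freqs[np.argmax(prominence)]
print(f"most prominent narrow peak: ~{honk_hz:.0f} Hz, "
      f"{prominence.max():.1f} dB above the trend")

# Gentle cut: subtract a little of the (zero-phase) band-passed resonance.
sos = signal.butter(2, [honk_hz * 0.9, honk_hz * 1.1], btype="bandpass",
                    fs=SR, output="sos")
voice_fixed = voice - 0.3 * signal.sosfiltfilt(sos, voice)
```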

So what is the role of the "smoothing" settings vs. the "make it bounce" settings? With one, you're emphasizing the instrument's rhythm (and this was not, I think, the original intention of the compressor, engineers just figured it out), with the other, you're making sure that the level is reasonably smooth. We use two compressors. The "bouncy" (or "punchy") one goes on first, then the "smoothing" one. If you have ever refinished something, coarse sandpaper first, fine later. Same idea. All along the chain, dynamics processors are smoothing it out so that it can be louder.

What are we listening for as far as gain reduction? Until you tune your ears to hear the effect of compression, a good all around number on your meter is averaging 3-6dB. It's hard to do too much damage at that number. When you get better at it, the "bouncy" one can have more GR.

P.S., for the "smoothing" compression, I like something like the T-Racks 670 (it sounds so good it's like cheating) or whatever LA-2A they have.


1 hour ago, Starship Krupa said:

We tone down the spiky things so that we can turn it up and have it not hurt.

So what is the role of the "smoothing" settings vs. the "make it bounce" settings? With one, you're emphasizing the instrument's rhythm (and this was not, I think, the original intention of the compressor, engineers just figured it out), with the other, you're making sure that the level is reasonably smooth. We use two compressors. The "bouncy" (or "punchy") one goes on first, then the "smoothing" one. If you have ever refinished something, coarse sandpaper first, fine later. Same idea. All along the chain, dynamics processors are smoothing it out so that it can be louder.

Thank you for the reply, much appreciate the advice! Not much time at the moment, so I will address the rest of what you said tomorrow, but for now just a quick couple thoughts....I am getting the concept, and keep the analogies coming, it's making things easier to understand, thank you!

I did notice today that just the overall level of my bass track in my current project is lower than all the rest of the submixes, and I can't bring up the individual bass track without it peaking the meters. So tomorrow I think I'm gonna have to turn down the rest of the mix so I can get more headroom on the bass, cuz right now it has nowhere to go. Maybe the comp will also help once I've gotten that far.

The way I have most of my projects set up is all the instrument and vocal tracks are routed to submixes for each instrument.  So I'm getting the concept that Lord Tim and others have shown me here as far as using the EQ to carve frequencies to make room, etc......but after trying some of that today, I think I need to practice a bit more, it seems easier said than done.   That said, although I think it was Tim who mentioned it better to use the plugins on the regular track instead of the submixes, it's not CPU friendly for me to put an EQ and/or Compressor on every individual track when many of my projects will end up with for example, ...7 keyboard tracks, 4 elec gtr tracks, 4 acoustic gtr tracks, 6 vocal tracks, so my thought is just put one on the subs for each instrument, since they are all routed there anyway.  I probably can do the individual track with bass cuz there's usually only one or two bass tracks.   Sometimes when I have multiple tracks for one instrument, it may just be a duplicate or overdub to get a fatter sound, so I can't put plugins on one of the tracks without putting them on all the others too.

So what I'm asking is about the initial setup and placement..., according to what I've been learning, and what you've been saying about compressors, (punchy one and smooth one), where and in which order do I place these plugs? I like to keep things organized, and right now I have plugs all over the place, some on individual tracks, others on the submixes.  I'm assuming a typical scenario, going by what you guys are suggesting, would be to put the EQ first in line in the submix(es) for carving frequencies as mentioned previously, then the punchy comp 2nd  in  line (in the submix), then the smooth comp 3rd in the chain in the submix, then whatever other plugs I use, like delay or reverb, doubling, etc, after that,  depending on which instrument submix I'm working with.  Does that sound like the right approach as far as where to put these plugs, and in that order?  I'm only saying use the submixes for the instruments that have many tracks feeding them.  I would think I could use the same FX chain order in the individual bass track(s) since there usually is only one or two. 

I'm just looking to see if I'm in the ballpark here, just as far as setting things up, I don't want to start off on the wrong foot....set it up correctly first, then I can experiment with the plugin settings from there, using the guidelines you guys have given me.


It's not as CPU friendly, sure, but track level EQ and compression is what every pro uses to get the sound you're after, and then they typically even add more stuff at the bus level. It's the only way to achieve what you're after.

So as far as track level stuff goes, it's very common to chain a bunch of effects together on each track, sometimes several of the same type to solve a different problem, like Erik was getting at.

I would put an EQ on a track first. Look for problem frequencies first (too much bass? Too much honk? Way too bright? Sort this out first!) so that you're not making the rest of your chain work harder than it needs to on stuff you're simply going to throw away.

Then I would put a compressor on to shape the sound and adjust the dynamics of the attack. A slow attack and aggressive threshold+ratio will give the sound more of a pop at the start of each note, which can make things sound more percussive. A fast attack will do the opposite.

Then I would put another compressor on with a much slower attack if the sound evolves into something where the volume is causing issues in the mix over the lifespan of each hit. EG: the initial pluck of the bass might sound good, but as the note develops, it could seem to get louder because of the resonance of the sound. Control that here.

Then I would put a final shaping EQ on the end. Is it still getting lost? If it's now consistent, you may need to boost some more mids to make it stand out in a mix. You may need to do parallel processing with distortion. Whatever it takes.
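Laid out as a chain, that order looks something like the sketch below. Every stage here is a deliberately crude placeholder (the real EQ moves and compressor behaviour come from the plugins and your ears); the only point is the ordering: corrective EQ, punchy compressor, smoothing compressor, shaping EQ.

```python
# Rough sketch of the chain order described above. Each stage is a simplified
# stand-in, not a real plugin.
import numpy as np
from scipy import signal

SR = 44100

def corrective_eq(x):      # 1. cut the junk before anything else works on it
    sos = signal.butter(4, 30, btype="highpass", fs=SR, output="sos")
    return signal.sosfilt(sos, x)

def punchy_comp(x):        # 2. fast-ish dynamics shaping the note attacks
    return np.tanh(2.0 * x) / 2.0                 # crude stand-in for real compression

def smoothing_comp(x):     # 3. slower compressor evening out the note tails
    env = signal.sosfilt(signal.butter(1, 5, fs=SR, output="sos"), np.abs(x))
    return x / (1.0 + 2.0 * env)                  # crude slow gain riding

def shaping_eq(x):         # 4. final boost to help it sit in the mix
    sos = signal.butter(2, [700, 1200], btype="bandpass", fs=SR, output="sos")
    return x + 0.3 * signal.sosfiltfilt(sos, x)   # crude parallel mid boost (zero-phase)

chain = [corrective_eq, punchy_comp, smoothing_comp, shaping_eq]

t = np.arange(SR) / SR
bass = np.sin(2 * np.pi * 82 * t) * np.exp(-2 * t)
for stage in chain:        # apply the stages strictly in this order
    bass = stage(bass)
```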

If you're worried your machine is going to crack under this amount of stuff on each track, do what you can, get it sounding more like what you're after and then freeze the track, rather than trying to fix it at the end. Like I said earlier, fixing it at the end is a hammer approach to something that may need a screwdriver. You can always selectively unfreeze tracks to do further tweaks.

It's important to keep in mind that not everything will need this kind of processing! I'll typically EQ every track to get rid of junk frequencies but if it doesn't need compression, I leave that well alone. Just because we CAN strap 200 instances of a compressor in a project when you have a beefy machine doesn't mean you SHOULD ;)


Time based effects (delay, reverb, etc) are the big CPU hogs and I'd definitely recommend using an Aux send for those rather than putting them on at the track level, unless there's something very specific you have in mind for a particular track. The added bonus of this is you get a nice consistency with a lot of tracks sharing the same reverb rather than different ones not quite feeling like the sound is in the same space.


2 hours ago, musikman1 said:

right now I have plugs all over the place, some on individual tracks, others on the submixes.

That is par for the course.

There's another thing that a compressor can do, which is "glue" the individual instruments in a submix together. This helps them sound like they all exist in the same virtual space, it complements how our ears hear things. This would be your bus compressor, which is typically slow and more characterful. The T-Racks 670 sounds great on buses. Finally, the master bus compressor, another slowbie.

I don't know if this is considered a hard and fast rule, but if I want to do the punchy compressor thing, I put it on the individual track. Then, depending, I put the smoothing compressor next in line. Thing is, it's a judgment call on all of this stuff. Some synth sounds don't need any carving or compression, they come out of the box ready to go.

So far, I never use reverb as an insert effect, only as a send effect. I need to learn more about using multiple reverbs and especially delays, to help place instruments in a 3-D space. I want to create mixes that you can walk around in!

2 hours ago, musikman1 said:

it's not CPU friendly for me to put an EQ and/or Compressor on every individual track when many of my projects will end up with for example, ...7 keyboard tracks, 4 elec gtr tracks, 4 acoustic gtr tracks, 6 vocal tracks

Well, what are you saving your CPU power for? 😄I'd guess that your system has more capacity than mine, and 20 tracks with instances of compression on each one would be fine on my system, as long as the plug-ins were well-coded (and it sounds as if you're using pretty good stuff). As Tim mentioned, the cycle eaters tend to be the ones doing heavier lifting. A compression algo looks at the level of a signal and responds to it with level adjustments. Depending on how much "mojo modeling" is going on, that's not typically an "expensive" computation. Fortunately for this equation, the type of compressor I tend to use first in line is MCompressor or elysia mpressor, which don't do a lot of mojo modeling. The smoothing compressors are a better place to sprinkle that pixie dust.

Every channel of my DAW has one of the best para EQ's I've used built right into it, the ProChannel QuadCurve.

Matter of fact, the ProChannel has a few pretty good compressors in it. I would've recommended one of them if they had more extensive metering.

If you find your system bogging down under the weight of too many plug-in instances, then it's time to start bouncing synth tracks to audio. Virtual instruments are usually way bigger cycle eaters than FX. Then Cakewalk can just read them as audio files.

3 hours ago, musikman1 said:

I think I need to practice a bit more, it seems easier said than done.

As someone who has learned to play 4 instruments, I'll say that I've probably put as much time and work into getting better at mix engineering as I have into any one of the instruments I play. By the time I became a rock music fan, at around age 5, Revolver and Pet Sounds  had come out and firmly established the concept of recording-studio-as-instrument. The Moody Blues were already recording Days of Future Passed. So I've always thought of it as such, and I'm beyond psyched to get to live my dream of being able to produce music all by myself, using nothing but my own gear, in an environment I can access 24/7.

It's just like learning to play an instrument. It takes building basic skills, such as we've been discussing, and it also takes practice practice practice. I can tell someone verbally how to play an Em chord on the piano, and a C, whatever but for them to learn enough on piano or organ to create music is going to take a while. Watch YouTube videos, subscribe to Tape Op (it's free), read Sound on Sound at the library or electronically. It takes woodshedding.

It's taken me years to be able to speak with any true confidence about these topics, and I still consider myself a n00b. And what I'm telling you is merely the way I've learned to do it. 2 years from now, you and I, presented with the same basic tracks, could make all the "right" mix engineering decisions and still come up with different sounding finished songs.


Many good suggestions here so far.

I can relate to hearing a mix I really want to copy. Not sure what genre you are recording; my approach differs somewhat with genre.

I'm a keys player who plays some guitar and bass on the side among other things. I am often trying to make my sampled basses sound as good as a real bass player. Using MODO bass or similar I can usually get very close.

I'll try to quickly run through a few points that have generally worked well for me. I'll assume you are making something similar to electronic music, because you mentioned synth basses rather than real basses.

  • A decent, balanced mixing space
  • Alternatively, monitor or headphone correction
  • Lower volumes when mixing

I hear you saying you want the bass to be pronounced, to cut through the mix, so I would ask, what gives bass 'cut' or emphasis? What causes what mixing engineers call 'mud' in a mix? Doesn't necessarily need to be all in the bass since mud can come from almost anything.

All good bass is just well-mixed mud, because if you take the mud and the lows away, all that's left is a thin, mid-rangey tone. Mixing is more about perception than about what seems to be right; what seems to be right often isn't. Many people think that since the bass sounds powerful in a good mix, it should be boosted way up. This is not true. 

We can't hear transients below a certain frequency, and since transients are where we hear the bass attack, this detail should get attention in any bass unless we don't want to hear the attack. I will generally use FabFilter Pro-Q 3 with headphones on and engage the band solo/monitor at a frequency point. Then I will slide that point around at a medium Q to hear the sweet spot, the plink, the attack, call it what you will. If I can't readily hear it, I will raise the gain a few dB. You don't want to hear bass at this point; all you want is the attack.
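Here's a rough offline analogue of that sweep (NumPy/SciPy, stand-in bass note; it's not Pro-Q's band solo, just a band-pass at a range of centre frequencies with a crude "how sharp is the envelope" measurement):

```python
# Rough offline analogue of sweeping a soloed EQ band to find where the bass
# "plink"/attack lives: band-pass at several centre frequencies and measure how
# sharply the envelope jumps in each band. Doing it by ear is the real method.
import numpy as np
from scipy import signal

SR = 44100
t = np.arange(SR) / SR
# Stand-in bass note: 110 Hz fundamental plus a short click-like attack.
bass = np.sin(2 * np.pi * 110 * t) * np.exp(-3 * t)
bass[:200] += np.hanning(200) * np.sin(2 * np.pi * 2500 * t[:200])

def attack_sharpness(x, centre, sr=SR):
    sos = signal.butter(2, [centre / 1.4, centre * 1.4], btype="bandpass",
                        fs=sr, output="sos")
    env = np.abs(signal.sosfilt(sos, x))
    return np.max(np.diff(env))        # crude: biggest jump in the band's envelope

for centre in [100, 200, 400, 800, 1600, 3200]:
    print(f"{centre:5d} Hz  attack sharpness: {attack_sharpness(bass, centre):.4f}")
```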

Once you have this narrowed down, you can EQ it based on the surrounding mix. Since mixes don't stay the same through any song, these variables may change depending on the material. You might be fortunate and get a fairly consistent mix using one setting. If not, then either multiband compression or EQ can be used to make space between other tracks that might be working against you. Generally we want the emphasis to be forward, at least enough to identify the bass. With maybe the exception of hip hop, I would roll off bass below 60 Hz, and I will often use a steeper roll-off: most DAWs default to a 12 dB/octave slope, but I might set mine to 24 dB/octave. I will do this on non-bass tracks too if needed. A lot of this is going to be individual taste, so what I like you might not like.
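For reference, in a simple Butterworth design the slope maps straight onto filter order (roughly 6 dB per octave per order), so 12 vs 24 dB/octave is just order 2 vs order 4. A rough sketch comparing the two at 60 Hz:

```python
# Rough sketch of the difference between a 12 dB/oct and a 24 dB/oct low cut
# at 60 Hz, shown as the filter's response at a few frequencies.
import numpy as np
from scipy import signal

SR = 44100
CUTOFF = 60.0

for order, label in [(2, "12 dB/oct"), (4, "24 dB/oct")]:
    sos = signal.butter(order, CUTOFF, btype="highpass", fs=SR, output="sos")
    w, h = signal.sosfreqz(sos, worN=[15, 30, 60, 120], fs=SR)
    levels = 20 * np.log10(np.abs(h) + 1e-12)
    print(label, " ".join(f"{f:g} Hz: {db:6.1f} dB" for f, db in zip(w, levels)))
```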

Another thing that blurs crisp attacks is small micro-attacks not hitting on the beat. If I have a rhythm guitar playing a few milliseconds before the main beat, or a drum kit not lining up exactly on the beat, it can contribute to the feeling that things are out of kilter. We can use track alignment tools or sample-level alignment. Also check that there are no phase reversals or comb filtering.
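One rough way to check that kind of micro-timing outside the DAW is a cross-correlation between two tracks. The arrays below are made-up stand-ins and real alignment tools are much smarter than this, but the idea is the same:

```python
# Rough sketch of checking micro-timing with cross-correlation: find the lag at
# which the bass best lines up with the kick, then nudge it back onto the beat.
import numpy as np
from scipy import signal

SR = 44100

# Stand-in transients: a kick "hit" and a bass attack that lands 3 ms late.
kick = np.zeros(SR // 2)
kick[1000:1200] = np.hanning(200)
bass = np.zeros(SR // 2)
late = 1000 + int(0.003 * SR)
bass[late:late + 200] = np.hanning(200)

corr = signal.correlate(bass, kick, mode="full")
lag = np.argmax(corr) - (len(kick) - 1)          # positive lag = bass is late
print(f"bass is {lag} samples ({1000 * lag / SR:.1f} ms) behind the kick")

aligned_bass = np.roll(bass, -lag)               # crude nudge back into place
```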

I try to keep in mind that in mixing everything is cumulative: reverb can build up mud, so I roll the bass off the reverb and almost NEVER put any bass into a reverb. Reverb is often the enemy of clarity in a mix. Reverb can be set up to either work with or detract from a mix; sometimes a small adjustment to the diffusion or delay can clean up a blurry mix.

I'm getting a little long winded here so I'll stop for now :)

 


On 5/24/2022 at 11:03 PM, Starship Krupa said:

I'd say that if you want to use something that's already in your locker, go with Waves C1. Turn off the EQ function, and pay attention to what the screen with the curves on it is showing you. It has a similar display to sonitus fx and MCompressor. I like the one in MCompressor because it's BIG.

I love the sound of the T-Racks processors I have, but the metering in the compressors is pretty slavish to the vintage vibe they're going for. Not the best for learning on.

Re: MCompressor, it's part of the Meldaproduction FreeFX Bundle, which includes 36 other top quality FX and utilities. I use MStereoscope on every project, and MNoiseGenerator and MOscillator are great for equipment setup. I mentioned MAnalyzer earlier. They're all part of the bundle, all free to use.

Btw, I checked the IK MM site and searched TRacks 670, and it came up with this "Fairchild Vintage Compressor"

https://www.ikmultimedia.com/products/trvintubcomplim/?pkey=t-racks-single-vintage-compressor-model-670

I had a brief look at the Waves C1, but I didn't see an EQ shutoff switch; maybe you could point it out.  The one I'm looking at is the Comp/Gate.  As for the "screen with the curves," both screens on the C1 sort of have curves; the top one is for the comp, so I think that would be what you're referring to.

 https://www.waves.com/plugins/c1-compressor

I did check out the website for MCompressor and noticed that it's part of a bundle.  Nice deal for free. Do you happen to have two different C1s set up, one for punchy and the second for smoothing? A screenshot showing the exact settings would be helpful. If not, no biggie, I'm sure I can manage with the numbers you posted earlier.

20 hours ago, Starship Krupa said:

If you find your system bogging down under the weight of too many plug-in instances, then it's time to start bouncing synth tracks to audio. Virtual instruments are usually way bigger cycle eaters than FX. Then Cakewalk can just read them as audio files.

I have been bouncing the synth tracks. In fact, before I upgraded my PC 6 months ago, I had to do that pretty regularly. This PC can handle quite a bit; it's just that I'm so used to my old setup, which could only handle a few CPU-heavy plugs or VSTs.  I'm just used to thinking efficiently when it comes to that, I guess.

23 hours ago, Lord Tim said:

It's not as CPU friendly, sure, but track level EQ and compression is what every pro uses to get the sound you're after, and then they typically even add more stuff at the bus level. It's the only way to achieve what you're after.

If that is the way to go, then I'm down.  I have at least 2-3 plugs on every instrument, whether they are in the individual tracks or the busses, and my CPU is handling it, so I think I could add a few more without any issues.

23 hours ago, Lord Tim said:

It's important to keep in mind that not everything will need this kind of processing! I'll typically EQ every track to get rid of junk frequencies but if it doesn't need compression, I leave that well alone. Just because we CAN strap 200 instances of a compressor in a project when you have a beefy machine doesn't mean you SHOULD ;)

Gotcha. When you said earlier about cutting all the sub freq (on whichever instrument) to get rid of junk frequencies, the specific range for that was to cut everything below 20HZ right?....or was it 30HZ?  I know you said I could cut even higher if it still sounds ok, but the frequencies our ears can't even hear was below 20 or 30, correct?

23 hours ago, Lord Tim said:

I would put an EQ on a track first. Look for problem frequencies first (too much bass? Too much honk? Way too bright? Sort this out first!) so that you're not making the rest of your chain work harder than it needs to on stuff you're simply going to throw away.

Then I would put a compressor on to shape the sound, and adjust the dynamics of the attack. A slow attack and aggressive threshold+ratio will give the sound more of a pop at the start of each note, that can make things sound more percussive. A fast attack will do the opposite.

Then I would put another compressor on with a much slower attack if the sound evolves into something where the volume is causing issues in the mix over the lifespan of each hit. EG: the initial pluck of the bass might sound good, but as the note develops, it could seem to get louder because of the resonance of the sound. Control that here.

Then I would put a final shaping EQ on the end. Is it still getting lost? If it's now consistent, you may need to boost some more mids to make it stand out in a mix. You may need to do parallel processing with distortion. Whatever it takes.

Thank you for the road map LT, that helps a lot.  I'm a person who tends to learn and remember technical stuff better by watching someone do it, rather than tons of reading, so any step by step instructions or screenshots make it much easier to sort it all out faster.  Learning the details of how these EQ and Comp plugins work is going to help.  I've always had a basic understanding of what they contribute to the sound, but there are so many, and so when I just want to get on with a project I tend to just find a preset that I know will be in the ball park for what I want to do with a plug, then tweak it a little.  It's faster that way, but I know it's going to be much better and worth my time and effort to learn the details going forward.  Plus how to use different ones strategically in a chain is something that I had not explored enough in the past.  After reading all the posts here, there are obvious advantages to knowing how to make them all work together, rather than just knowing what they do individually.

14 hours ago, Tim Smith said:

Many good suggestions here so far.

I can relate to hearing a mix I really want to copy. Not sure what genre you are recording? My approach differs some with genre.

I'm a keys player who plays some guitar and bass on the side among other things. I am often trying to make my sampled basses sound as good as a real bass player. Using MODO bass or similar I can usually get very close.

I agree, priceless info here so far!   Welcome to the college of mixing!  🙂  Thank you for adding your strategy Tim, I appreciate it.   I hear you; my approach in general differs sometimes depending on genre, and my songwriting covers quite a range.   I'm the same as you, a keyboardist my whole life, plus I've been learning guitar for about 8 years now (and loving it!). My current project is basic rock, but I've written and recorded blues, country, pop, electronic instrumental, and background music for film or animation (in which I like to mix orchestral sounds with synth sounds).  As far as songwriting, I haven't really settled into one lane yet, and I'm not sure I ever will.  I have many definite influences that go as far back as the 60s, 70s, and 80s (I'm old lol), and I don't really like to try to fit into one style, at least for now.  I like my sound to be unique. Music keeps me young, and it's always first priority.

And like Starship said, I too am thrilled to be able to make original music in a space I have access to 24/7.   I started small when I was very young, recording music with a portable cassette recorder and doing overdubs using a double cassette recorder, and it was a big deal when I graduated to a Tascam 4-track cassette recorder.   So what I'm doing, what we're all doing here, is something I could only dream of back then.  This is a lot of fun!  😉

I haven't tried MODO, but I recently purchased Trilian by Spectrasonics.  I have to say it provides any kind of bass I could possibly need, whether synth or acoustic. Many sounds are modeled directly on flagship keyboards and classic bass guitars. It's got a built-in, very sophisticated and versatile arpeggiator that is loads of fun to use.  All the sounds are high-quality samples too.  So far I haven't really scratched the surface of the library; there are just plenty of sounds to choose from.   These videos are older, but you can see what it can do. 

Trilian Acoustic Demo: https://www.youtube.com/watch?v=dsj26oEoBfo
Trilian Electric Demo: https://www.youtube.com/watch?v=ZYU93OtvzjA

Yesterday I thought that before I begin adding plugins, I might try giving my bass sound some tweaks right in the Trilian controls.  It has an "up front" control panel which is super convenient, and most of the controls there directly tie in with its on board FX rack.  I had mentioned that in this particular project the bass is getting kinda washed out a bit, and I noticed on this particular sound that the compression control is not activated, and that is the default setting.  So I activated it and that seemed to help, plus there's a basic EQ section there which helped when I increased the mids a little.  So I figure if I can improve on the sound source before adding plugins, it may make things easier going forward. 

Here are a couple screenshots of the Controls, FX Rack, Arp, and Patch browser.   One kool thing is the Control panel changes depending on which bass patch you're using, so it's uniquely tailored controls for each patch. 

 

 

Trilian Arpeggiator.jpg

Trilian Control Panel.jpg

Trilian FX Rack 1.jpg

Trilian Patch Browser.jpg


5 hours ago, musikman1 said:

When you said earlier about cutting all the sub freq (on whichever instrument) to get rid of junk frequencies, the specific range for that was to cut everything below 20HZ right?....or was it 30HZ?  I know you said I could cut even higher if it still sounds ok, but the frequencies our ears can't even hear was below 20 or 30, correct?

Subs are kind of contentious as far as where people would say they start, but I'd say 30 Hz and down is the sub area for me.

But I would definitely say this is your starting point - push this as high as you can go for any track that you definitely know doesn't need any frequencies down there.


Trilian is a nice piece of kit there @musikman1 

We all have our favorites. I don't think the software is necessarily as important as the technique. I like all the new stuff same as everyone else does, and being a tech lover I could get all caught up in just the tools we have access to. I believe this applies not only to instruments but to most plugins as well. Lots of great sampled basses out there, Trilian being one.

I have a bunch of the Waves plugins. I see them as both relevant and old school; my perceptions are largely shaped by the fact that back in the day it was Pro Tools and Waves in all the big studios. Both of those are still as good as ever, and Waves has made many improvements to their plugins. I also use T-Racks and many others. I can make any decent compressor handle bass input. If you're familiar with the Waves line you might have heard of Bass Rider; it works like a compressor, is made for bass, and has a bunch of presets. Any good compressor will do, though, so long as we can set it up. I admit, most of what I learned about compression was simply through using it. Some people use it like a ketchup lover uses ketchup: they slather it on everything. I use compression judiciously or not at all. In the natural world compression does not exist. We hear distant sounds and we hear close sounds. If Pedro is singing a bit too loud in the choir, we can ask him to lower his voice to balance with the rest. In the analog world, sounds don't often splat with sudden bursts of energy out of context with the surrounding noise; a firecracker or gunshot might be an exception. If we were mixing a gunshot we would probably be compressing it with a very fast attack. Recorded gunshots don't sound anything like real gunshots, and what comparison do we have in the real world to "mud" in a mix?

The 300-500 Hz range seems to be the area where we can make or break bass. We tend to feel sub bass, but we hear the mids. Carving well in this area is usually where I can gain an advantage.

Many have said mono for bass is a good idea, and I would agree, though there are occasions where a properly mixed stereo synth bass adds something to a mix. Either with plugins or by using EQ across multiple tracks, the higher elements of the bass can be separated from the lower elements. That way only the higher elements are stereo, while the lower elements stay mono if preferred. Bass tends to overpower both mixes and sound systems, which are often designed with a bass-heavy emphasis. It can be a challenge to find a middle ground that cuts and defines itself, doesn't overpower, and still sounds great on multiple systems, and this defines the difference between a good mixer and a person with mixing software (not saying I've arrived).
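Here's a rough sketch of that low/high split on a made-up stereo array (a simple zero-phase Butterworth crossover, not any particular plugin): everything below the crossover gets summed to mono, the top end stays stereo.

```python
# Rough sketch of "mono the lows, keep the highs stereo": split a stereo bass
# at a crossover frequency, sum the low band to mono, leave the high band alone.
# Zero-phase filtering keeps the two bands summing back together cleanly.
import numpy as np
from scipy import signal

SR = 44100
CROSSOVER = 120.0     # Hz - taste and genre dependent

rng = np.random.default_rng(1)
stereo_bass = rng.normal(0, 0.1, (2, SR))        # stand-in stereo synth bass (L, R)

lp = signal.butter(2, CROSSOVER, btype="lowpass", fs=SR, output="sos")
hp = signal.butter(2, CROSSOVER, btype="highpass", fs=SR, output="sos")

lows = signal.sosfiltfilt(lp, stereo_bass, axis=1)
highs = signal.sosfiltfilt(hp, stereo_bass, axis=1)

mono_lows = lows.mean(axis=0)                    # collapse the low band to mono
out = highs + mono_lows                          # mono lows go to both channels

# Quick check: how much stereo difference (side signal) is left below the crossover?
side = (out[0] - out[1]) / 2
side_low = signal.sosfiltfilt(lp, side)
print(f"residual side energy below the crossover: {np.sqrt(np.mean(side_low ** 2)):.4f}")
```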

 

 


Lately (because my studio is packed up in anticipation of moving) I've been playing bass lines on my Stratocaster, cleaning them up via Melodyne, converting to MIDI, doing MIDI things to them, then playing them back using the professor VST or even the CW Studio Instrument bass. But I keep one line at the recorded pitch (guitar) and the main line down an octave (bass), then blend, so these "overtones" are the same line (like an octaver). Then audio processing: compression, EQ, distortion/harmonics, Waves MaxxBass or Renaissance Bass to add some "middle" harmonics (something around 120 Hz etc.), and also a ducking compressor side-chained from the kick. Depending on the material, maybe really crushed on the compressor, or very little. Finally some EQ (I like the iZotope masking feature) to fit everything as best as possible. The high-pass usually (for me) starts at 90 Hz on the low end and 300 Hz on the high end, and sometimes a pre-send to another track which allows some of the deep bits through, or simply a third MIDI track which is an octave lower than the bass at minimal level. Still learning, but so far using 2-3 MIDI tracks and various mixes of similar basses on one or more VSTs, or synth bass, seems to be getting pretty close to what I'm looking for.
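For anyone curious what the kick-keyed ducking part of that chain looks like under the hood, here's a rough sketch (a simple envelope follower on the kick driving gain on the bass, with made-up signals; real sidechain compressors add proper attack/release and ratio controls on top of this):

```python
# Rough sketch of a ducking compressor side-chained from the kick: follow the
# kick's envelope and use it to pull the bass down while the kick is sounding.
import numpy as np
from scipy import signal

SR = 44100
t = np.arange(SR) / SR
bass = 0.5 * np.sin(2 * np.pi * 55 * t)                     # stand-in bass
kick = np.zeros_like(bass)
for hit in (0, SR // 2):                                    # two kick hits
    kick[hit:hit + 2000] = np.hanning(2000) * np.sin(2 * np.pi * 60 * t[:2000])

# Envelope follower on the kick: rectify, then smooth with a ~20 Hz low-pass.
env = signal.sosfilt(signal.butter(2, 20, fs=SR, output="sos"), np.abs(kick))
env = np.clip(env / (env.max() + 1e-12), 0, 1)

DUCK_DB = 6.0                                               # how far to push the bass down
gain = 10 ** (-DUCK_DB * env / 20.0)
bass_ducked = bass * gain

print(f"lowest gain during a kick hit: {gain[:3000].min():.2f}")
print(f"gain between kick hits:        {gain[SR // 4:SR // 4 + 2000].min():.2f}")
```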


5 hours ago, Tim Smith said:

If you are familiar with the Waves line you might have heard of Bass Rider. It works like a compressor and is made for bass. Has a bunch of presets. Any good compressor will do though so long as we can set it up. I admit, most of the things I learned about compression were simply through use of it.

Yes, Trilian is nice for a lot of things; the arpeggiator has so many options, and they keep adding more.  I haven't checked out Bass Rider, but I have heard of it. I just picked up Vocal Rider, which I assume is a similar concept, keeping a track's level within a set range.  Also, Glenn mentioned MaxxBass, which I do have as part of my bundle. The last time I tried it was quite some time ago, so I'll have to revisit that one.  Any recommendation for a good MaxxBass preset or setting that would be a good starting point?

2 hours ago, Glenn Stanton said:

i've been playing bass lines on my stratocaster, cleaning via melodyne, converting to midi, doing midi things to it, then playing it using the professor vst or even the CW studio instrument bass. but - i keep one line at the recorded pitch (guitar) and the main line down an octave (bass). then blend so these "overtones" are the same line (like an octaver)... then audio process - compression, eq, distortion/harmonics

Glenn, a couple of questions: what do you mean when you say you are "using Melodyne to clean," and how are you converting an audio track to MIDI?  I usually don't run into a scenario where I need to, but I can see where it would be beneficial if I want to have the same line played by different VST instruments.   Can CW convert audio to MIDI?   That octave idea using the Strat and a VST bass is pretty kool, I'll have to try that sometime.  I've used an octave pedal for guitar before, and it really widens the sound, but I've only used it for a guitar line or riff, haven't tried it for bass yet.  I think I have an octave pedal in the Waves stompbox.


1 hour ago, musikman1 said:

if I want to have the same line played by different vst instruments

Best way to do that is duplicate the track and then replace the synth. I do it all the time. I'm working on something that has a bass sound and a flute doubling each other, which, of course, seems like it might be ridiculous, but it actually sounds really cool because so few people do it.

I came up with the flute part first, and figured I could duplicate it and change it up a bit for the bass part, but when I hit Play it sounded pretty great.


5 hours ago, Starship Krupa said:

Best way to do that is duplicate the track and then replace the synth

I have done that many times when the track is originally recorded as MIDI using a VST synth; it's a good way to audition sounds, just keep dropping in a new synth.  However, what I was asking about was what Glenn said.  If I'm reading it correctly, he records a guitar line (as audio) on an audio track with his Strat, so it's audio to begin with, no MIDI.  Then he said he's "cleaning with Melodyne, and converting it to MIDI, then playing it using the professor VST or the CW Studio Instrument bass."  So my question is, how is the audio being converted to MIDI? What tool in CW is used to do that? I assumed it might be possible by now, but I've never really looked into whether it could be done within CW or Melodyne.  So what is the procedure for converting audio to MIDI? (Is Melodyne doing the conversion to MIDI?)


6 hours ago, musikman1 said:

how is the audio being converted to midi?? What tool in CW is used to do that?

With a single-note (monophonic) performance, try dragging an audio clip and dropping it onto an empty MIDI track. No special dance required.

Yes, it is Melodyne doing the MIDI conversion; the Essentials version that comes with Cakewalk can handle monophonic pitch-to-MIDI. I think the next level up of Melodyne can do polyphonic.

Of course, the cleaner the signal you give it, the more accurate results you'll get, and don't get fancy with pitch bends and whatnot.

I stumbled upon this feature by accident, I buttermoused an audio clip onto a MIDI track and saw it happen and said WTactualF was that? 😲

The DAW I used before Cakewalk had no such feature, and a popular n00b question was "how do I convert an audio track to MIDI?" This was always answered with "with special software," which now seems odd given that it's Melodyne doing the conversion and that other program comes with Essentials as well.

Okay, so now, for the staggering price of free, I have a DAW where I can hum into it and get back a MIDI track....

Another bit of software that can supposedly do pitch-to-MIDI, although I've never tried it, is Meldaproduction MTuner, comes in that same FreeFX bundle as MCompressor. I think someone else on the forum tried it and had good luck.
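Purely for illustration (this is not how Melodyne or MTuner work internally, just the basic idea behind monophonic pitch-to-MIDI), here's a rough autocorrelation sketch with a synthetic two-note "performance":

```python
# Rough illustration of monophonic pitch-to-MIDI: estimate the pitch of short
# frames by autocorrelation and convert each to the nearest MIDI note number.
import numpy as np

SR = 44100

def frame_pitch(frame, sr=SR, fmin=40.0, fmax=1000.0):
    """Estimate the fundamental frequency of one frame via autocorrelation."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)          # plausible period range
    period = lo + np.argmax(ac[lo:hi])
    return sr / period

def hz_to_midi(hz):
    return int(round(69 + 12 * np.log2(hz / 440.0)))

# Stand-in performance: 0.5 s of A2 (110 Hz) then 0.5 s of D3 (~146.8 Hz).
t = np.arange(int(SR * 0.5)) / SR
audio = np.concatenate([np.sin(2 * np.pi * 110 * t), np.sin(2 * np.pi * 146.83 * t)])

hop = 4096
notes = [hz_to_midi(frame_pitch(audio[i:i + hop]))
         for i in range(0, len(audio) - hop, hop)]
print("detected MIDI notes per frame:", notes)   # expect mostly 45s then 50s
```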


I use Melodyne to clean up noises which might confuse the MIDI notes, perhaps fix some notes I may have hit incorrectly 😉, and maybe make some amplitude adjustments, and then the Melodyne region has the MIDI you can copy and paste. I have the Editor version, so polyphonic is an option for me. Generally, if there is any quantizing, it is minimal and done on the MIDI notes before making copies and transposing.
