In Your Face BASS - What's your strategy?


2 hours ago, musikman1 said:

right now I have plugs all over the place, some on individual tracks, others on the submixes.

That is par for the course.

There's another thing that a compressor can do, which is "glue" the individual instruments in a submix together. This helps them sound like they all exist in the same virtual space, it complements how our ears hear things. This would be your bus compressor, which is typically slow and more characterful. The T-Racks 670 sounds great on buses. Finally, the master bus compressor, another slowbie.

I don't know if this is considered a hard and fast rule, but if I want to do the punchy compressor thing, I put it on the individual track. Then, depending on the material, I put the smoothing compressor next in line. Thing is, it's a judgment call on all of this stuff. Some synth sounds don't need any carving or compression, they come out of the box ready to go.

So far, I never use reverb as an insert effect, only as a send effect. I need to learn more about using multiple reverbs and especially delays, to help place instruments in a 3-D space. I want to create mixes that you can walk around in!

2 hours ago, musikman1 said:

it's not CPU friendly for me to put an EQ and/or Compressor on every individual track when many of my projects will end up with for example, ...7 keyboard tracks, 4 elec gtr tracks, 4 acoustic gtr tracks, 6 vocal tracks

Well, what are you saving your CPU power for? I'd guess that your system has more capacity than mine, and 20 tracks with instances of compression on each one would be fine on my system, as long as the plug-ins were well-coded (and it sounds as if you're using pretty good stuff). As Tim mentioned, the cycle eaters tend to be the ones doing heavier lifting. A compression algo looks at the level of a signal and responds to it with level adjustments. Depending on how much "mojo modeling" is going on, that's not typically an "expensive" computation. Fortunately for this equation, the type of compressor I tend to use first in line is MCompressor or elysia mpressor, which don't do a lot of mojo modeling. The smoothing compressors are a better place to sprinkle that pixie dust.
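To make "not expensive" concrete, here's a rough sketch in Python of the core of a feed-forward compressor: an envelope follower plus a static gain curve, a handful of arithmetic operations per sample. This is my own illustration with made-up example settings, not how MCompressor or any other plug-in is actually coded.

import numpy as np

def simple_compressor(x, sr, threshold_db=-18.0, ratio=4.0, attack_ms=5.0, release_ms=100.0):
    # Feed-forward compressor sketch: envelope follower + static gain curve.
    # Illustrative only; real plug-ins add makeup gain, soft knee, look-ahead, "mojo."
    atk = np.exp(-1.0 / (sr * attack_ms / 1000.0))   # smoothing coefficient while level rises
    rel = np.exp(-1.0 / (sr * release_ms / 1000.0))  # smoothing coefficient while level falls
    env = 0.0
    out = np.zeros_like(x)
    for n, sample in enumerate(x):
        level = abs(sample)
        coeff = atk if level > env else rel           # attack when rising, release when falling
        env = coeff * env + (1.0 - coeff) * level     # envelope follower
        level_db = 20.0 * np.log10(max(env, 1e-9))
        over = max(level_db - threshold_db, 0.0)      # how far above threshold we are
        gain_db = -over * (1.0 - 1.0 / ratio)         # pull the overshoot back by the ratio
        out[n] = sample * 10.0 ** (gain_db / 20.0)
    return out

# Quick check on a decaying 80 Hz "bass" burst at 44.1 kHz:
sr = 44100
t = np.arange(sr) / sr
bass = 0.9 * np.sin(2 * np.pi * 80 * t) * np.exp(-3 * t)
squashed = simple_compressor(bass, sr)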

Every channel of my DAW has one of the best para EQ's I've used built right into it, the ProChannel QuadCurve.

Matter of fact, the ProChannel has a few pretty good compressors in it. I would've recommended one of them if they had more extensive metering.

If you find your system bogging down under the weight of too many plug-in instances, then it's time to start bouncing synth tracks to audio. Virtual instruments are usually way bigger cycle eaters than FX. Then Cakewalk can just read them as audio files.

3 hours ago, musikman1 said:

I think I need to practice a bit more, it seems easier said than done.

As someone who has learned to play 4 instruments, I'll say that I've probably put as much time and work into getting better at mix engineering as I have into any one of the instruments I play. By the time I became a rock music fan, at around age 5, Revolver and Pet Sounds  had come out and firmly established the concept of recording-studio-as-instrument. The Moody Blues were already recording Days of Future Passed. So I've always thought of it as such, and I'm beyond psyched to get to live my dream of being able to produce music all by myself, using nothing but my own gear, in an environment I can access 24/7.

It's just like learning to play an instrument. It takes building basic skills, such as we've been discussing, and it also takes practice, practice, practice. I can tell someone verbally how to play an Em chord on the piano, and a C, whatever, but for them to learn enough on piano or organ to create music is going to take a while. Watch YouTube videos, subscribe to Tape Op (it's free), read Sound on Sound at the library or electronically. It takes woodshedding.

It's taken me years to be able to speak with any true confidence about these topics, and I still consider myself a n00b. And what I'm telling you is merely the way I've learned to do it. 2 years from now, you and I, presented with the same basic tracks, could make all the "right" mix engineering decisions and still come up with different sounding finished songs.


Many good suggestions here so far.

I can relate to hearing a mix I really want to copy. Not sure what genre you are recording? My approach differs some with genre.

I'm a keys player who plays some guitar and bass on the side among other things. I am often trying to make my sampled basses sound as good as a real bass player. Using MODO bass or similar I can usually get very close.

I'll try to cap off a few points that generally have worked well for me. I'll assume you are making something similar to electronic music because you mentioned synth basses over real basses.

  • A decent, balanced mixing space
  • Alternatively, monitor or headphone correction
  • Lower volumes when mixing

I hear you saying you want the bass to be pronounced, to cut through the mix, so I would ask, what gives bass 'cut' or emphasis? What causes what mixing engineers call 'mud' in a mix? Doesn't necessarily need to be all in the bass since mud can come from almost anything.

All good bass is just well-mixed mud: take the lows away and all that's left is a midrangey tone. Mixing is more about perception than about what seems right; what seems to be right often isn't. Many people think that since the bass sounds powerful in a good mix it should be boosted way up. This is not true.

We can't hear transients below a certain frequency, and since transients are where we hear the bass attack, this detail should get attention in any bass unless we don't want to hear the attack. I will generally use FabFilter Pro-Q 3, put headphones on, and solo (monitor) a single frequency point. Then I will slide that point around at a medium Q to hear the sweet spot, the plink, the attack, call it what you will. If I can't readily hear it I will raise the gain a few dB. You don't want to hear bass at this point; all you want is the attack.

Once you have this narrowed down you can EQ it based on the surrounding mix. Since mixes don't stay the same through a song, these variables may change depending on the material. You might be fortunate and get a fairly consistent mix using one setting. If not, then either multiband compression or EQ can be used to make space between other tracks that might be working against you. Generally we want the emphasis to be forward, at least enough to identify the bass. With maybe the exception of hip hop, I would roll off bass below 60 Hz, and I will often use a steeper roll-off: most DAWs default to a 12 dB/octave slope, but I might set mine to 24 dB. I will do this on non-bass tracks too if needed. A lot of this is going to be individual taste, so what I like you might not like.
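For anyone who wants the slope in numbers: a 12 dB/octave roll-off is a 2nd-order filter and 24 dB/octave is 4th-order (6 dB per filter order). Here's a quick Python/scipy sketch of high-passing a track at 60 Hz with either slope; the corner frequency and the test tones are just example values, not a recommendation.

import numpy as np
from scipy.signal import butter, sosfilt

sr = 44100
hpf_12 = butter(2, 60, btype='highpass', fs=sr, output='sos')  # 2nd order = 12 dB/oct
hpf_24 = butter(4, 60, btype='highpass', fs=sr, output='sos')  # 4th order = 24 dB/oct

# Example: a mono track with a 40 Hz rumble component and a 120 Hz bass fundamental.
t = np.arange(sr) / sr
track = 0.5 * np.sin(2 * np.pi * 40 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

cleaned = sosfilt(hpf_24, track)  # 40 Hz strongly attenuated, 120 Hz passes nearly untouched
gentler = sosfilt(hpf_12, track)  # same cut frequency, shallower slope: more 40 Hz left over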

Another thing that blurs crisp attacks is small micro-attacks not hitting on the beat. If I have a rhythm guitar playing a few milliseconds before the main beat, or a drum kit not lining up on the beat exactly, it can contribute to the feeling that things are out of kilter. We can use track-alignment tools or do sample-level alignment by hand, as sketched below. Also check that there are no phase reversals or comb filtering.
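Here is roughly what that sample-level alignment boils down to, as a numpy sketch: cross-correlate one track against a reference over a window of plausible offsets and nudge the clip by the lag that scores highest. The synthetic kick/bass arrays are just placeholders for real clips at the same sample rate.

import numpy as np

def lag_in_samples(reference, track, max_lag=2000):
    # How many samples does 'track' lag 'reference'? (negative means it is early)
    # Brute-force cross-correlation over a small window of plausible offsets.
    lags = np.arange(-max_lag, max_lag + 1)
    ref = reference[max_lag:-max_lag]
    scores = [np.dot(ref, track[max_lag + lag:len(track) - max_lag + lag]) for lag in lags]
    return int(lags[int(np.argmax(scores))])

# Synthetic example: the "bass" is the same material as the "kick," 120 samples late.
sr = 44100
rng = np.random.default_rng(0)
kick = rng.standard_normal(sr)
bass = np.roll(kick, 120)
print(lag_in_samples(kick, bass))  # ~120 samples, about 2.7 ms at 44.1 kHz
# In the DAW you'd then slide the late clip earlier by that many samples.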

I try to keep in mind that in mixing everything is cumulative, reverb can build up mud, so I roll the bass off the reverb and almost NEVER put any bass in a reverb. Reverb is often the enemy of clarity in a mix. Reverb can be set to either work with or detract from a mix. Sometimes a small adjustment on the diffusion or delay can clean a blurry mix up.

I'm getting a little long winded here so I'll stop for now :)

 


On 5/24/2022 at 11:03 PM, Starship Krupa said:

I'd say that if you want to use something that's already in your locker, go with Waves C1. Turn off the EQ function, and pay attention to what the screen with the curves on it is showing you. It has a similar display to sonitus fx and MCompressor. I like the one in MCompressor because it's BIG.

I love the sound of the T-Racks processors I have, but the metering in the compressors is pretty slavish to the vintage vibe they're going for. Not the best for learning on.

Re: MCompressor, it's part of the Meldaproduction FreeFX Bundle, which includes 36 other top quality FX and utilities. I use MStereoscope on every project, and MNoiseGenerator and MOscillator are great for equipment setup. I mentioned MAnalyzer earlier. They're all part of the bundle, all free to use.

Btw, I checked the IK MM site and searched TRacks 670, and it came up with this "Fairchild Vintage Compressor"

https://www.ikmultimedia.com/products/trvintubcomplim/?pkey=t-racks-single-vintage-compressor-model-670

I had a brief look at the Waves C1, but I didn't see an EQ shutoff switch; maybe you could point it out. The one I'm looking at is the Comp/Gate. As for the "screen with the curves", both screens on the C1 sorta have curves; the top one is for the comp, so I think that would be what you're referring to.

 https://www.waves.com/plugins/c1-compressor

I did check out the website for MCompressor and noticed that it is part of a bundle. Nice deal for free. Do you happen to have the two different C1's set up, one for punchy and the second for smoothing? Maybe a screenshot showing the exact settings would be helpful. If not, no biggie, I'm sure I can manage by the numbers you posted earlier.

20 hours ago, Starship Krupa said:

If you find your system bogging down under the weight of too many plug-in instances, then it's time to start bouncing synth tracks to audio. Virtual instruments are usually way bigger cycle eaters than FX. Then Cakewalk can just read them as audio files.

I have been bouncing the synth tracks. In fact, before I upgraded my PC 6 months ago, I had to do that pretty regularly. This PC can handle quite a bit; it's just that I'm so used to my old setup, which could only handle a few CPU-heavy plugs or VSTs. I'm just used to thinking efficiently when it comes to that, I guess.

23 hours ago, Lord Tim said:

It's not as CPU friendly, sure, but track level EQ and compression is what every pro uses to get the sound you're after, and then they typically even add more stuff at the bus level. It's the only way to achieve what you're after.

If that is the way to go, then I'm down.  I have at least 2-3 plugs on every instrument, whether they are in the individual tracks or the busses, and my CPU is handling it, so I think I could add a few more without any issues.

23 hours ago, Lord Tim said:

It's important to keep in mind that not everything will need this kind of processing! I'll typically EQ every track to get rid of junk frequencies but if it doesn't need compression, I leave that well alone. Just because we CAN strap 200 instances of a compressor in a project when you have a beefy machine doesn't mean you SHOULD ;)

Gotcha. When you said earlier about cutting all the sub frequencies (on whichever instrument) to get rid of junk frequencies, the specific range for that was to cut everything below 20 Hz, right? Or was it 30 Hz? I know you said I could cut even higher if it still sounds OK, but the frequencies our ears can't even hear are below 20 or 30, correct?

23 hours ago, Lord Tim said:

I would put an EQ on a track first. Look for problem frequencies first (too much bass? Too much honk? Way too bright? Sort this out first!) so that you're not making the rest of your chain work harder than it needs to on stuff you're simply going to throw away.

Then I would put a compressor on to shape the sound, and adjust the dynamics of the attack. A slow attack and aggressive threshold+ratio will give the sound more of a pop at the start of each note, that can make things sound more percussive. A fast attack will do the opposite.

Then I would put another compressor on with a much slower attack if the sound evolves into something where the volume is causing issues in the mix over the lifespan of each hit. EG: the initial pluck of the bass might sound good, but as the note develops, it could seem to get louder because of the resonance of the sound. Control that here.

Then I would put a final shaping EQ on the end. Is it still getting lost? If it's now consistent, you may need to boost some more mids to make it stand out in a mix. You may need to do parallel processing with distortion. Whatever it takes.

Thank you for the road map LT, that helps a lot.  I'm a person who tends to learn and remember technical stuff better by watching someone do it, rather than tons of reading, so any step by step instructions or screenshots make it much easier to sort it all out faster.  Learning the details of how these EQ and Comp plugins work is going to help.  I've always had a basic understanding of what they contribute to the sound, but there are so many, and so when I just want to get on with a project I tend to just find a preset that I know will be in the ball park for what I want to do with a plug, then tweak it a little.  It's faster that way, but I know it's going to be much better and worth my time and effort to learn the details going forward.  Plus how to use different ones strategically in a chain is something that I had not explored enough in the past.  After reading all the posts here, there are obvious advantages to knowing how to make them all work together, rather than just knowing what they do individually.

14 hours ago, Tim Smith said:

Many good suggestions here so far.

I can relate to hearing a mix I really want to copy. Not sure what genre you are recording? My approach differs some with genre.

I'm a keys player who plays some guitar and bass on the side among other things. I am often trying to make my sampled basses sound as good as a real bass player. Using MODO bass or similar I can usually get very close.

I agree, priceless info here so far! Welcome to the college of mixing! Thank you for adding your strategy Tim, I appreciate it. I hear you, my approach in general is different sometimes depending on genre, and my songwriting covers quite a range. I'm the same as you, keyboardist my whole life, plus I've been learning guitar for about 8 yrs now (and loving it!). My current project is basic rock, but I've written and recorded blues, country, pop, electronic instrumental, and background music for film or animation (in which I like to mix orchestra sounds with synth sounds). As far as songwriting, I haven't really settled into one lane yet, not sure if I ever will. I do have many definite influences that go as far back as the 60s, 70s, 80s (I'm old lol) and I don't really like to try to fit into one style, at least for now. I like my sound to be unique. Music keeps me young, and it's always first priority.

And like Starship said, I too am thrilled to be able to make original music in a space I have access to 24/7. I started small when I was very young, recording music with a portable cassette recorder and doing overdubs using a double cassette recorder, and it was a big deal when I graduated to a Tascam 4-track cassette recorder. So what I'm doing, what we're all doing here, is something that I could only dream of back then. This is a lot of fun!

I haven't tried MODO, but I recently purchased Trilian by Spectrasonics. I have to say it provides any kind of bass I could possibly need, whether it be synth or acoustic. Many sounds are modeled directly from flagship keyboards and classic bass guitars. It's got a built-in, very sophisticated, and versatile arpeggiator that is loads of fun to use. All the sounds are high quality samples too. So far I haven't really even scratched the surface of the library; there are just plenty of sounds to choose from. These videos are older, but you can see what it can do.

Trilian Acoustic Demo: https://www.youtube.com/watch?v=dsj26oEoBfo
Trilian Electric Demo: https://www.youtube.com/watch?v=ZYU93OtvzjA

Yesterday I thought that before I begin adding plugins, I might try giving my bass sound some tweaks right in the Trilian controls.  It has an "up front" control panel which is super convenient, and most of the controls there directly tie in with its on board FX rack.  I had mentioned that in this particular project the bass is getting kinda washed out a bit, and I noticed on this particular sound that the compression control is not activated, and that is the default setting.  So I activated it and that seemed to help, plus there's a basic EQ section there which helped when I increased the mids a little.  So I figure if I can improve on the sound source before adding plugins, it may make things easier going forward. 

Here are a couple screenshots of the Controls, FX Rack, Arp, and Patch browser. One kool thing is that the Control panel changes depending on which bass patch you're using, so the controls are uniquely tailored to each patch.

 

 

Trilian Arpeggiator.jpg

Trilian Control Panel.jpg

Trilian FX Rack 1.jpg

Trilian Patch Browser.jpg


Trilian is a nice piece of kit there @musikman1

We all have our favorites. I don't think the software is necessarily as important as the technique. I like all the new stuff same as everyone else does, and being a tech lover I could get all caught up in just the tools we have access to. I believe this applies not only to instruments but to most plugins as well. Lots of great sampled basses out there, Trilian being one.

I have a bunch of the Waves plugins. I see them as both relevant and old school. My perceptions are largely shaped by the fact that back in the day it was PT and Waves in all the big studios. Both of those are still as good as ever, and Waves has made many improvements in their plugins. I also use T-Racks and many others. I can make any decent compressor handle bass input. If you are familiar with the Waves line you might have heard of Bass Rider. It works like a compressor and is made for bass. Has a bunch of presets. Any good compressor will do though so long as we can set it up.

I admit, most of the things I learned about compression were simply through use of it. Some people use it like a ketchup lover uses ketchup: they slather it on everything. I use compression judiciously or not at all. In the natural world compression does not exist. We hear distant sounds and we hear close sounds. If Pedro is singing a bit too loud in the choir we can ask him to lower his voice to balance with the rest. Living in an analog world, sounds don't often splat with sudden bursts of energy relative to the rest of the surrounding noise. A firecracker or gunshot might be an exception; if we were mixing a gunshot we would probably be compressing it with a very fast attack. Recorded gunshots don't sound anything like real gunshots, and what comparison do we have in the real world to 'mud' in a mix?

The 300-500 Hz range seems to be the area where we can make or break bass. We tend to feel sub-bass, but we hear the mids. Carving well in this area is usually where I can gain an advantage.

Many have said mono for bass is a good idea and I would agree. There are occasions where a properly mixed synth bass in stereo adds something to a mix. Either with plugins or by using EQ across multiple tracks, the higher elements of the bass can be separated from the lower elements. That way only the higher elements are stereo while the lower elements stay mono if preferred (see the sketch below). Bass tends to overpower both mixes and sound systems, many of which are designed with a bass-heavy emphasis. It can be a challenge to find a middle ground that cuts and defines itself, doesn't overpower, and yet still sounds great on multiple systems, and that is the difference between a good mixer and a person with mixing software (not saying I've arrived).
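A rough picture of that low-mono/high-stereo split in Python/scipy, in case it helps: a crossover (120 Hz here, purely an example value), sum everything below it to mono, leave everything above it in stereo. Zero-phase filtering keeps the two bands lining up when they're recombined.

import numpy as np
from scipy.signal import butter, sosfiltfilt

def mono_below(stereo, sr, crossover_hz=120):
    # stereo: float array shaped (samples, 2). Returns the same shape with the band
    # below the crossover collapsed to mono and the band above left in stereo.
    lp = butter(4, crossover_hz, btype='lowpass', fs=sr, output='sos')
    hp = butter(4, crossover_hz, btype='highpass', fs=sr, output='sos')
    lows = sosfiltfilt(lp, stereo, axis=0)
    highs = sosfiltfilt(hp, stereo, axis=0)
    mono_lows = lows.mean(axis=1, keepdims=True)    # collapse the low band to mono
    return np.repeat(mono_lows, 2, axis=1) + highs  # recombine with the stereo highs

# Hypothetical stereo synth-bass clip: a 60 Hz fundamental plus some stereo top end.
sr = 44100
t = np.arange(sr) / sr
left = np.sin(2 * np.pi * 60 * t) + 0.3 * np.sin(2 * np.pi * 800 * t)
right = np.sin(2 * np.pi * 60 * t) + 0.3 * np.sin(2 * np.pi * 800 * t + 2.0)
result = mono_below(np.stack([left, right], axis=1), sr)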

 

 


lately (because my studio is packed up in anticipation of moving) i've been playing bass lines on my stratocaster, cleaning via melodyne, converting to midi, doing midi things to it, then playing it using the professor vst or even the CW studio instrument bass. but - i keep one line at the recorded pitch (guitar) and the main line down an octave (bass). then blend so these "overtones" are the same line (like an octaver)... then audio process - compression, eq, distortion/harmonics, Waves maxxbass or renaissance bass to add some "middle" harmonics (something 120hz etc), and also a ducking compressor side chained from the kick (rough sketch below). depending on the material - maybe really crushed on the compressor, or very little. finally some eq (i like the iZotope masking feature) to fit everything as best as possible. the high pass usually (for me) starts at 90hz on the low end, and 300hz on the high end. and sometimes a pre-send to another track which allows some of the deep bits through, or simply a third midi track which is an octave lower than the bass, with minimal levels on that. still learning, but so far using 2-3 midi tracks and various mixes of similar basses on one or more vsts, or synth bass, seems to be getting pretty close to what i'm looking for.
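the ducking part, reduced to a rough offline python sketch (example numbers only, not my actual settings): follow the kick's envelope and pull the bass down by a few dB whenever the kick speaks.

import numpy as np

def duck_bass(bass, kick, sr, depth_db=4.0, attack_ms=2.0, release_ms=120.0):
    # sidechain duck: attenuate 'bass' by up to depth_db while the kick envelope is hot
    # both inputs are mono float arrays at the same sample rate
    atk = np.exp(-1.0 / (sr * attack_ms / 1000.0))
    rel = np.exp(-1.0 / (sr * release_ms / 1000.0))
    peak = max(float(np.max(np.abs(kick))), 1e-9)
    env = 0.0
    out = np.zeros_like(bass)
    for n in range(len(bass)):
        level = abs(kick[n]) / peak                 # normalized kick level, 0..1
        coeff = atk if level > env else rel
        env = coeff * env + (1.0 - coeff) * level   # envelope follower on the sidechain
        out[n] = bass[n] * 10.0 ** (-depth_db * env / 20.0)  # more kick = more bass cut
    return out

# usage: ducked = duck_bass(bass_track, kick_track, 44100) with two mono numpy arrays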


5 hours ago, Tim Smith said:

If you are familiar with the Waves line you might have heard of Bass Rider. It works like a compressor and is made for bass. Has a bunch of presets. Any good compressor will do though so long as we can set it up. I admit, most of the things I learned about compression were simply through use of it.

Yes, Trilian is nice for a lot of things; the arpeggiator has so many options, and they keep adding more. I haven't checked out Bass Rider, but I have heard of it. I just picked up Vocal Rider, which I assume is a similar concept, keeping a track's level within a set range. Also, Glenn mentioned MaxxBass, which I do have as part of my bundle. The last time I tried it out was quite some time ago, so I'll have to revisit that one. Any recommendation for a good MaxxBass preset, or a setting that would be a good starting point?

2 hours ago, Glenn Stanton said:

i've been playing bass lines on my stratocaster, cleaning via melodyne, converting to midi, doing midi things to it, then playing it using the professor vst or even the CW studio instrument bass. but - i keep one line at the recorded pitch (guitar) and the main line down an octave (bass). then blend so these "overtones" are the same line (like an octaver)... then audio process - compression, eq, distortion/harmonics

Glenn, couple questions....what do you mean when you say you are "using melodyne to clean", and how are you converting an audio track to midi?  I usually don't run into a scenario where I need to, but I can see where it would be beneficial if I want to have the same line played by different vst instruments.   Can CW convert audio to midi?   That octave idea using the Strat and a vst bass is pretty kool, I'll have to try that sometime.  I've used an octave pedal for guitar before, and it really widens the sound, but I've only used it for a guitar line or riff, haven't tried it for bass yet.  I think I have an octave pedal in the Waves stompbox.


1 hour ago, musikman1 said:

if I want to have the same line played by different vst instruments

Best way to do that is duplicate the track and then replace the synth. I do it all the time. I'm working on something that has a bass sound and a flute doubling each other, which, of course, seems like it might be ridiculous, but it actually sounds really cool because so few people do it.

I came up with the flute part first, and figured I could duplicate it and change it up a bit for the bass part, but when I hit Play it sounded pretty great.


5 hours ago, Starship Krupa said:

Best way to do that is duplicate the track and then replace the synth

I have done that many times when the track is originally recorded as midi using a vst synth; it's a good way to audition sounds, just keep dropping in a new synth. However, what I was asking about was what Glenn said. If I'm reading it correctly, he records a guitar line (as audio) on an audio track with his Strat, so it's audio to begin with, no midi. Then he said he's "cleaning with Melodyne, and converting it to midi, then playing it using the professor vst or the CW studio instrument bass"... So my question is, how is the audio being converted to midi? What tool in CW is used to do that? I assumed it might be possible by now, but I've never really looked into whether or not it could be done within CW or Melodyne. So what is the procedure for converting audio to midi? (Is Melodyne doing the conversion to midi?)


6 hours ago, musikman1 said:

how is the audio being converted to midi?? What tool in CW is used to do that?

With a single-note (monophonic) performance, try dragging an audio clip and dropping it onto an empty MIDI track. No special dance required.

Yes, it is Melodyne doing the MIDI conversion; the Essentials version that comes with Cakewalk can handle monophonic pitch-to-MIDI. I think the next level up of Melodyne can do polyphonic.

Of course, the cleaner the signal you give it, the more accurate results you'll get, and don't get fancy with pitch bends and whatnot.

I stumbled upon this feature by accident: I buttermoused an audio clip onto a MIDI track, saw it happen, and said WTactualF was that?

The DAW I used before Cakewalk had no such feature, and a popular n00b question was "how do I convert an audio track to MIDI?" This was always answered with "with special software," which now seems odd given that it's Melodyne doing the conversion and that other program comes with Essentials as well.

Okay, so now, for the staggering price of free, I have a DAW where I can hum into it and get back a MIDI track....

Another bit of software that can supposedly do pitch-to-MIDI, although I've never tried it, is Meldaproduction MTuner, which comes in that same FreeFX bundle as MCompressor. I think someone else on the forum tried it and had good luck.


i use melodyne to clean up - noises which may confuse the midi notes, perhaps some notes i may have hit incorrectly, maybe some amplitude adjustments - and the melodyne region has the midi you can copy and paste. i have the editor version so polyphonic is an option for me. generally if there is any quantizing, it is minimal and done on the midi notes before making copies and transposing.


16 hours ago, Starship Krupa said:

With a single-note (monophonic) performance, try dragging an audio clip and dropping it onto an empty MIDI track. No special dance required.

This did not work for me! At least not completely.... I just tried drag copying an audio clip that is a short single acoustic guitar line.  I dropped the copy into the midi track, and I got midi notes that played the same rhythm as the played a/c guitar line, but all the notes converted at the same pitch all the way across.  I'm thinking maybe there's a setting in this dropdown menu that I'm supposed to use instead of "default"?  I do have Melodyne Assistant, so maybe I can do poly too, once I get this working properly.

Must be some setting that's not right, cuz the acoustic guitar line is a pretty clean one. I didn't need to open Melodyne, right?  I didn't open MD, just drag copied the audio to a midi track.

EDIT:  Ok, I just tried using the "Melodic" setting and I got the note pitches right, that worked, kool!    It's not perfect though, so I guess there's no tweaking the conversion settings.

EDIT 2: I just found this thread that has a good simple video on the process, although the "Select Algorithm" dialog didn't come up, so it may not have been added to the CbB version at the time of that video.

 

 

CW Audio-Midi drop menu.jpg

CW Audio-Midi Piano roll.jpg


On 5/26/2022 at 9:11 AM, Tim Smith said:

In the natural world compression does not exist. We hear distant sounds and we hear close sounds. If Pedro is singing a bit too loud in the choir we can ask him to lower his voice to balance with the rest. Living in an analog world, sounds don't often splat with sudden bursts of energy relative to the rest of the surrounding noise. A firecracker or gunshot might be an exception; if we were mixing a gunshot we would probably be compressing it with a very fast attack. Recorded gunshots don't sound anything like real gunshots, and what comparison do we have in the real world to 'mud' in a mix?

While I absolutely respect your opinions and (greater) experience, my perceptions in these matters differ. Part of the magic, as I understand it, is that a compressor at quick release settings can make us perceive sounds as "loud" when they're not actually higher in level than the surrounding information. That was one of my "lightbulb" moments that took compression from a utility to a creative effect.

Compression does exist in the natural world. Our hearing perception reacts to loud, sharp sounds (and continuous loud stimulus) by closing down somewhat. A compressor can do the psychoacoustic trick of mimicking that closing down. Our brain thinks "oh, there's a sharp sound that sharply attenuated a moment later, it must be LOUD." Think of how sensitive your ears sound if you walk around with industrial earplugs in for 15 minutes or so. Take them out, and for a short period of time it's like you can hear ants walking around, but this quickly attenuates as soon as you're exposed to the sounds of the human environment. Our hearing perception adjusts all the time.

As for what real world situations are similar to collisions and masking in a mix, one of the symptoms I have when I have been working my ears too hard is that I have a hard time hearing conversation in crowded restaurants. This is due to the other human speech occurring around me at the same frequencies as the ones required to hear what my dining companions are saying. Sounds overpower and obscure other sounds all the time in daily life. Listening to music or trying to converse in the car while the windows are rolled down? Someone talking while you're trying to hear what's coming out of your television?

Right now, I'm watching Free Practice at the Monaco Grand Prix and noticing how the trackside commentators raise their voices when cars come around. They rise not only in volume, but in pitch and timbre. Isn't that a natural way of adjusting pitch to avoid collisions (no pun intended here) with the sound of the engines?

This is just my understanding and thinking; please let me know if you think I have it wrong (or have misunderstood what you said). Trying to describe these things verbally is always a challenge. I'm here to learn (and to help others, if possible).


On 5/27/2022 at 8:33 PM, Starship Krupa said:

While I absolutely respect your opinions and (greater) experience, my perceptions in these matters differ. Part of the magic, as I understand it, is that a compressor at quick release settings can make us perceive sounds as "loud" when they're not actually higher in level than the surrounding information. That was one of my "lightbulb" moments that took compression from a utility to a creative effect.

Compression does exist in the natural world. Our hearing perception reacts to loud, sharp sounds (and continuous loud stimulus) by closing down somewhat. A compressor can do the psychoacoustic trick of mimicking that closing down. Our brain thinks "oh, there's a sharp sound that sharply attenuated a moment later, it must be LOUD." Think of how sensitive your ears sound if you walk around with industrial earplugs in for 15 minutes or so. Take them out, and for a short period of time it's like you can hear ants walking around, but this quickly attenuates as soon as you're exposed to the sounds of the human environment. Our hearing perception adjusts all the time.

As for what real world situations are similar to collisions and masking in a mix, one of the symptoms I have when I have been working my ears too hard is that I have a hard time hearing conversation in crowded restaurants. This is due to the other human speech occurring around me at the same frequencies as the ones required to hear what my dining companions are saying. Sounds overpower and obscure other sounds all the time in daily life. Listening to music or trying to converse in the car while the windows are rolled down? Someone talking while you're trying to hear what's coming out of your television?

Right now, I'm watching Free Practice at the Monaco Grand Prix and noticing how the trackside commentators raise their voices when cars come around. They rise not only in volume, but in pitch and timbre. Isn't that a natural way of adjusting pitch to avoid collisions (no pun intended here) with the sound of the engines?

This is just my understanding and thinking; please let me know if you think I have it wrong (or have misunderstood what you said). Trying to describe these things verbally is always a challenge. I'm here to learn (and to help others, if possible).

I would agree our ears have an internal "compressor-like" function. They try to protect themselves from sudden overly loud situations or from ongoing louder situations. Too loud and I guess that's what we have hands (to cover our ears) and legs for :)

The way the brain selects sounds by order of importance is associated with the process the brain uses to home in on those sounds. It's an interesting science to study. If a very loud sound happens suddenly, our ears can't protect against it to the degree a compressor can respond. I'll never forget kneeling beside a riding mower while servicing it. I had just turned it off, and my left ear was right next to the muffler when the engine backfired. I have tinnitus from it to this day. The only way to protect against that would have been to have the foresight to know it was going to backfire and move away from it. A look-ahead compressor might be able to catch and harness something like that in a mix.

If I hear a loud waterfall, people talking, and a woman screaming in the distance, my brain is going to home in on the woman screaming even if I can barely hear her. Using both of my ears I am going to try to locate the direction of the scream. If the sound is louder from the right I'm going to move my head in that direction to further discern where it's coming from.

In the real world I think more about ear fatigue and the Fletcher-Munson effect as ways our ears 'shut down' or adjust. Tinny, annoying sounds at high levels tend to wear us down. I say "us," but I don't have a pair of your ears, so possibly you like more or less of that than I do. As a general rule we tend to like some of it in a mix. This is another area we have more control over in a mix than we do in the real world, so we can hype those frequencies. We get to accentuate the things we like, while in the real world we don't.

TBH I don't use compression on bass nearly as often as I use it on other kinds of tracks, or maybe on the master. For me, bass in relation to other tracks is probably easier to overdo than reverb :)

If I have a good bass sound properly EQ'd, hopefully I won't need compression. I won't even side-chain to the bass drum if I don't think I need it. If I want to knock the peak down to make room for other instruments I will do that. Sometimes the peak works with the material and other times it needs some attention. Back in the days of AM everything was compressed in the carrier wave, and it's really an odd way to hear music compared to hearing it live. FM stereo changed all of that, but we still had limits. Today most people are using Bluetooth earbuds, and that is yet another approach, the trick being to make a small speaker in each ear sound like open space in the real world, unless you're making a genre that doesn't require it.


7 hours ago, Tim Smith said:

Today most people are using Bluetooth earbuds

That raises some questions I've had but have never researched. One of them is what effect the codecs used for Bluetooth audio have on sound transmitted over the link. What frequency bandwidth are they capable of? What dynamic range? Does it mess with the transients like MP3 does?

I've kind of been scared to find out.


16 hours ago, Starship Krupa said:

That raises some questions I've had but have never researched. One of them is what effect the codecs used for Bluetooth audio have on sound transmitted over the link. What frequency bandwidth are they capable of? What dynamic range? Does it mess with the transients like MP3 does?

I've kind of been scared to find out.

I have several pairs of bluetooth ear buds I occasionally use. As would probably be expected, the cheaper ones don't sound as good as the better ones do. While I wouldn't use mine for critical listening, they do a pretty decent job of covering the bases.

I haven't researched it either. One thing I notice with any headphones is that reverb sounds different in my mixes on headphones compared to my studio monitors, even when using headphone calibration. Having no air gap to speak of between the ears and the drivers makes a huge difference. It's probably a width thing too. The brain localizes sound naturally on monitors; on headphones we have to fake the brain into thinking there is space.

I have begun to use a lot more mono, or the existing stereo image baked into the track, with less panning, as even slight panning kills a lot of emphasis in my mixes on cheaper systems and smartphones. It sounds great on a nice pair of monitors or a stereo system, but greatly diminished on those other systems. Even a nice Bose system like the one I have in my car is crap for balance. If I pan at all it isn't much, maybe 20%.

