Johnbee58

Something I Didn't Know


I've been watching the Creative Sauce tutorials for CbB and he's done a great job on them.  He's teaching me things I never knew, not only about CbB but about audio engineering in general.  The new fact that stands out so far is how an audio signal makes its way down the channel, through the various effect modules, until it gets to the fader and then out to the Master buss (if that's where you send it).

What I found interesting is that the order in which you have your effect plug-ins stacked affects the overall sound, and I've been thinking that this may be the problem with some of my mixes.  Until now, I've just been throwing them in randomly because I never knew which order was best.  So, help me here.  Going down through the channel, is it best to EQ first before compression, or vice versa?  And where is it best to place the effects?  I use them (reverb, delay) mostly on Aux/Send busses.  Give me some of your thoughts.

Thanks

🙂John B.


IT DEPENDS.

Seriously, there is no "right way". I have used compressors before and after EQ, and before and after reverbs... it's all about the end result you are trying to get.

Here are a few critical takeaways for FX chain building:

The most important thing to "get" is that compressors color the output differently depending on how hot their input is. So gain staging - setting the audio level at each step in the chain - becomes important.

Try putting an EQ before a compressor and then compare with after - you'll get different results (let alone if you don't pay attention to gain staging). What is best for your particular application will depend on what you're trying to do.
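To see why the order matters at all, here is a minimal pure-Python sketch. The hard-knee compressor and the flat gain boost (standing in for an EQ band) are hypothetical toy functions, not real DSP, but they show that the two orders give genuinely different outputs:

```python
# Toy illustration (not a real DSP implementation): a hard-knee
# compressor and a flat gain boost standing in for an EQ band.

def compress(x, threshold=0.5, ratio=4.0):
    """Reduce the part of the sample above the threshold by the ratio."""
    mag = abs(x)
    if mag <= threshold:
        return x
    compressed = threshold + (mag - threshold) / ratio
    return compressed if x >= 0 else -compressed

def eq_boost(x, gain=2.0):
    """Crude stand-in for an EQ boost: flat gain."""
    return x * gain

sample = 0.8
eq_then_comp = compress(eq_boost(sample))  # EQ drives the compressor harder
comp_then_eq = eq_boost(compress(sample))  # boost is applied after compression

print(eq_then_comp)  # 0.775
print(comp_then_eq)  # 1.15
```

Same two processors, same settings, same input sample; only the order changed, and the results differ by a wide margin. That is exactly the gain-staging point: the boost changes how hot the compressor's input is.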


I agree with Colin. However, I tend to have EQ at the top and compression as the last item.  If you notice, Ozone has had the ability to put its modules in any order from early on.


I don't know which order is better; I always experiment. For example, my vocal reverb buss looks like this:

vocal-reverb.png

(I'm mixing at the buss level; no FX in the Tracks view, apart from synths.)

Edited by martsave martin s

As stated, there's no correct way in this. It's worth experimenting, time permitting, with what the order does to the signal; the sky is the limit in a DAW environment. As far as EQ and compression go, though, I tend to 'compress the corrected' signal, especially if the signal needs heavy EQing (like a high-pass filter), so the compressor doesn't react to frequencies which will be cut off anyway or cause unwanted or weird gain reduction.
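The "compress the corrected signal" idea can be sketched in pure Python. Everything here is a hypothetical toy (a one-pole high-pass filter and a peak-based gain-reduction estimate with arbitrary values), but it shows the compressor reacting to rumble that was going to be cut anyway unless the filter comes first:

```python
import math

fs = 1000.0  # sample rate in Hz (arbitrary toy value)
n = 1000
# Test signal: 20 Hz "rumble" that will be cut anyway, plus a 200 Hz tone.
sig = [0.8 * math.sin(2 * math.pi * 20 * i / fs)
       + 0.3 * math.sin(2 * math.pi * 200 * i / fs) for i in range(n)]

def highpass(x, fc=100.0):
    """One-pole high-pass filter with cutoff fc."""
    rc = 1.0 / (2 * math.pi * fc)
    alpha = rc / (rc + 1.0 / fs)
    out, y_prev, x_prev = [], 0.0, 0.0
    for s in x:
        y_prev = alpha * (y_prev + s - x_prev)
        x_prev = s
        out.append(y_prev)
    return out

def gain_reduction(x, threshold=0.5, ratio=4.0):
    """How much peak level a simple compressor would shave off."""
    peak = max(abs(s) for s in x)
    if peak <= threshold:
        return 0.0
    return (peak - threshold) * (1 - 1 / ratio)

gr_raw = gain_reduction(sig)                  # compressor slams the rumble
gr_filtered = gain_reduction(highpass(sig))   # rumble removed first
print(gr_raw, gr_filtered)
```

With the filter first, the low-frequency energy never reaches the detector, so the compressor barely works at all; without it, most of the gain reduction is triggered by content that was getting cut anyway.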

Edited by lmu2002

Sometimes screwing up (doing it out of order) has pleasant outcomes 😀


Thanks for all the info.  The Creative Sauce guy seems pretty adamant about the order.  Guess I'll experiment to see how much of a difference it makes in my world.

❤️JB


Like everyone said, there is no right way, but my default method is:

1 - A clean/transparent EQ or HP/LP filters to tame the dominant frequencies that could affect the compressor

2 - Compressor

3 - Other FX

Last - A character/color EQ to add mojo.
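A default chain like the one above is really just left-to-right function composition, which makes it easy to sketch. The processors below are hypothetical toy stand-ins (not real DSP, positive samples only), just to show the chain as an ordered list:

```python
from functools import reduce

# Toy stand-ins (not real DSP) for the stages described above.
def clean_eq(x):  return x * 0.9    # tame a dominant frequency
def comp(x):      return min(x, 0.5) + max(x - 0.5, 0.0) / 4.0  # hard knee
def other_fx(x):  return x          # e.g. saturation, chorus...
def color_eq(x):  return x * 1.1    # add some "mojo"

# Order matters: the chain is applied strictly left to right.
chain = [clean_eq, comp, other_fx, color_eq]

def process(sample, fx_chain):
    """Run one sample through each processor in order."""
    return reduce(lambda s, fx: fx(s), fx_chain, sample)

out = process(0.8, chain)
print(out)  # 0.6105
```

Reordering the `chain` list is the code equivalent of dragging modules around in Ozone or the FX Rack: same processors, different result.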

5 hours ago, lmu2002 said:

I tend to 'compress the corrected' signal,

This advice applies to ALL FX; i.e., be very conscious at all times of what is passing through the signal chain (and showing up in the final mix). If you compress the unwanted portion of a signal, it takes more EQ to remove it, which can also adversely affect more of the desired signal.

Bottom line: be careful about what you are passing, whether it will fit the mix (if it won't, don't pass it), and what the next FX is going to do. Time-based FX can do the most damage to an unwanted component, because they effectively "smear" those frequencies down the track, so always be judicious with those (which is why they almost always come last).
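The "smearing" is easy to see with a tiny feedback delay sketch (delay time and feedback amount are arbitrary toy values): a single unwanted click at the start becomes a series of repeats down the track, which no amount of later EQ at a single point can fully undo.

```python
def feedback_delay(x, delay=3, feedback=0.5):
    """Mix each sample with a delayed, attenuated copy of the output."""
    out = []
    for i, s in enumerate(x):
        echo = feedback * out[i - delay] if i >= delay else 0.0
        out.append(s + echo)
    return out

# One unwanted click at the start...
signal = [1.0] + [0.0] * 9
print(feedback_delay(signal))
# ...is now repeated every `delay` samples at decaying level:
# [1.0, 0.0, 0.0, 0.5, 0.0, 0.0, 0.25, 0.0, 0.0, 0.125]
```

This is why cleaning the signal *before* time-based FX matters: anything you let through gets copied down the timeline.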

11 hours ago, mettelus said:

This advice applies to ALL FX; i.e., be very conscious at all times of what is passing through the signal chain (and showing up in the final mix). If you compress the unwanted portion of a signal, it takes more EQ to remove it, which can also adversely affect more of the desired signal.

Bottom line: be careful about what you are passing, whether it will fit the mix (if it won't, don't pass it), and what the next FX is going to do. Time-based FX can do the most damage to an unwanted component, because they effectively "smear" those frequencies down the track, so always be judicious with those (which is why they almost always come last).

I appreciate this.  But it is a bit vague to me, especially the second paragraph.

😁JB

4 hours ago, Johnbee58 said:

I appreciate this.  But it is a bit vague to me, especially the second paragraph.

😁JB

I habitually type on my phone, so replies can be a bit curt at times. The first paragraph is more an extension of what lmu2002 said: the signal you pass on should be what you want from the start, rather than fighting with mirror EQ (complementary boosts/cuts to prevent collisions). HPFs are commonly used to strip the low end off most tracks (it adds mud to the mix, as well as energy), but LPFs can serve the same purpose (if everything is bright, the tracks combine to detract from what is supposed to stand out). Choosing instruments that do not collide at the composition stage makes mixing them easier. If you google "EQ Cheat Sheet", these two come up with nice overviews of the components of common instruments:

http://blog.sonicbids.com/the-ultimate-eq-cheat-sheet-for-every-common-instrument

https://producelikeapro.com/blog/eq-cheat-sheet/

There is no limit on the EQs you can use, or compressors for that matter (slightly compressing vocals twice is often more effective than one big jump), so EQ (for content scoping) -> Compressor -> EQ (for mixing) is common, but never absolute.

As to the second paragraph: reverbs (and delays) can cause grief because, if inserted too early, the follow-on processing will treat the reflections the same as the main signal (which they are not). Most "presets," especially on synths, are often loud, cover much of the frequency spectrum, and are swimming in reverb... all three of those need to be addressed before even considering mixing them with something else. Maybe an extreme example is another way to explain this: reverb replicates the "environment," and common DAW practice is to strip that out up front to facilitate mixing, then put it back in at the end. If a signal with heavy reverb is passed into an amp sim, it will make a mess of things. The reverb is more about how the final signal/mix will sound in the "created environment."

Again, there are definitely no hard/fast rules to anything, but a lot of the above is to make mixing easier, and the result cleaner.

Lastly, if you have never seen this video before it is worth watching. Dan Worrall does an incredible job packing a lot of information into 10 minutes, and explains the reasons why as he goes. It is a video for FabFilter Pro-Q, but what he is doing applies regardless of the FX used.

 

