figuly

Internal VST Instrument effects


Hi, I'm looking for some advice regarding VSTs. Somebody out there has probably already thought about or answered this question. I use SampleTank 4, EZkeys, EZdrummer, Kontakt, etc. All of these VST instruments have built-in effects (EQ, reverb, compression and so on), particularly SampleTank. Would it be better to switch these off (where possible) and just use insert effects in the Cakewalk console view, with say a single reverb on a master bus, rather than have different reverbs generated by each individual VST? I wonder if it would produce a better workflow and end result. I suppose I could do my own test, but I wanted to find out the opinions and experiences of others first. Thanks in advance for any help or advice. PS: I have no idea how to master tracks but am learning... listening to my MIDI compositions on my AKG headphones, some instruments sound unfocused.


That depends ;) It is common to have a bus with reverb; I do this if I want to create a sense of space, or if the track is atmospheric, and so on. But reverb is also an effect in its own right: a spring reverb on a guitar or electric piano will sit on an individual instrument. Compression is similar. You may want to compress individual instruments, to level out the bass or rhythm guitar, or because it gives you a sound; but you may also put compression on a bus that groups some instruments together, or on the whole mix.


For me, it's about 50-50. Like rsinger says, it depends.

Reasons for using internal effects:

  • Convenience, as they're often integrated into presets, and some are customized for a specific synth.
  • Allows separate FX for each voice in a multi-timbral synth while still using a single stereo pair as its output and not needing an extra bus.
  • Some effects are synced to or modulated by an internal synth parameter.
  • Efficiency. Many built-in FX are simpler and more CPU-efficient.
  • Saves having to own every effect, e.g. a separate flanger.
  • Simplicity. Most internal effects have limited controls; great if you don't need every parameter.

Reasons for using plugins instead:

  • Third-party plugins are often superior to a synth's built-in effects.
  • You can freeze synths independent of their effects, leaving more options for the final mix.
  • Reverb, in particular, is best applied to many instruments in a common bus if you're after a natural sound, as if those virtual instruments were actually in the same room. Plus it conserves CPU.
  • Routing, e.g. sidechaining.
  • Lots more possibilities for making your mix more dynamic through automation.
  • More options, finer control and a larger UI.
  • Ability to upgrade independent of the instrument. 
  • Fewer FX overall means you can learn them better and develop a deeper understanding of how they work.
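The shared-reverb-bus point above can be sketched in code. This is a hypothetical toy illustration, not real DSP: each track keeps its dry signal, sends a portion of it to a single bus, and one (deliberately crude comb-filter) "reverb" processes the summed sends. One reverb instance serves every track, which is why this approach saves CPU and places everything in the same virtual space.

```python
# Toy sketch of a shared reverb send bus (illustration only; the
# comb filter stands in for a real reverb plugin).

def comb_reverb(signal, delay=4, feedback=0.5):
    """Minimal 'reverb': a single feedback comb filter."""
    out = list(signal)
    for i in range(delay, len(out)):
        out[i] += feedback * out[i - delay]
    return out

def mix_with_shared_reverb(tracks, sends):
    """Sum dry tracks, then run ONE reverb on the summed sends."""
    n = max(len(t) for t in tracks)
    pad = lambda t, i: t[i] if i < len(t) else 0.0
    dry = [sum(pad(t, i) for t in tracks) for i in range(n)]
    # Each track contributes (send level * sample) to the one bus.
    bus = [sum(s * pad(t, i) for t, s in zip(tracks, sends))
           for i in range(n)]
    wet = comb_reverb(bus)
    return [d + w for d, w in zip(dry, wet)]

# Two short mono "tracks", each an impulse, sharing one reverb bus.
tracks = [[1.0, 0.0, 0.0, 0.0, 0.0, 0.0],
          [0.0, 1.0, 0.0, 0.0, 0.0, 0.0]]
mix = mix_with_shared_reverb(tracks, sends=[0.3, 0.3])
# → [1.3, 1.3, 0.0, 0.0, 0.15, 0.15]
```

The per-instrument alternative would call `comb_reverb` once per track instead of once per bus, multiplying the processing cost and giving each instrument its own, separate-sounding space.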


I agree with the advice offered above. Most people probably use a combination of the two (internal + external). It's another example of "it's not either/or, this or that, or one size fits all." It's whatever works best, and that will differ from project to project, even among those by the same composer/musician. Keep experimenting, and keep an open mind.


You can think of those internal effects and routing as a way to submix things. Many old recordings (like the Beatles') had things like drums recorded multichannel, then mixed and bounced to a single stereo or mono track that would be added to the rest of the mix. If you can route external signals into those plugins (you can in Reaktor and Kontakt), that gives you even more sound options.


I guess you could look at it as it might have been in a major studio back in the day.

For example, a guitar or keyboard player might have had stomp boxes or an FX rack located with their gear, which they used to shape their sound, just like live on stage. I imagine that time-based effects (reverb and delay) and modulation would have played a major role here. The effects that make up a significant part of an instrument's sound character would likely have been applied at the source, which compares with the "onboard" FX in a synth, or the FX bin in an instrument track, these days.

Then those "live" sounds could be recorded with microphones (wet) at the amplifier, and/or mixed with a dry DI (direct in) signal at the mixing console.

In that scenario, those signals probably went into channel strips at the mixing console, where EQ & compression were applied, and limiting, saturation, or any combination of dynamic effects could be used to achieve the desired sound in the mix. At that point the mixer could use sends to FX busses to blend things into submixes.

There are no absolutes, but that's how I generally visualize the role of onboard effects. The bottom line is that you have so many tools today that you just need to decide what works best for you.

Edited by abacab

37 minutes ago, abacab said:

There are no absolutes, but that's how I generally visualize the role of onboard effects. The bottom line is that you have so many tools today that you just need to decide what works best for you.

👍


I usually switch them off if they're just room emulations, because I have a room reverb plug-in that I like much better than the ones that come with software instruments, and because I like to use the technique of sending most instruments/subs to a single instance. It helps me to create that aural sculpture where the ear hears everything as being in one space (I won't say "room," because most of the time I don't imagine an actual room, just a space where all the elements could be).

In this case, integrated room reverb can pile up and sound confusing and soupy.

Sometimes, though, character reverbs are part of the patch; this is often the case when using A|A|S sound packs in their Player. Those huge ambient sounds use long, dense reverb as an integral part of the preset.

So: mostly the first way, but sometimes the other, depending on the sound. If the instrument is a sampled or emulated piano, organ, strings, wind, brass, drums, or other physical instrument, I turn it off. If it's a "sound not heard in the real world," then it depends.

As always, the final answer is "whatever best helps you get the sound you're trying to get." Listen with your eyes closed. Get a mental picture.

