Michael McBroom Posted August 4, 2020 I'm working on a piece that has 15 tracks, all MIDI, using four different instances of TTS-1 for the MIDI voices. I like TTS-1 as a multi-timbral synth because it's cheap and it sounds decent. I don't know how much of a factor using TTS-1 for the voices is in this phenomenon, and I can't honestly say whether I'd have the same problem with audio tracks, because I've never recorded a piece with more than 6 audio tracks. So anyway, what's happening is that as I've added instruments -- or tracks -- to the piece, the tracks get buried in the mix. If I solo a track, it jumps out at me in volume and clarity, but as soon as I un-solo it, it disappears into the mix again, noticeably quieter. With this particular piece, this has become annoying with respect to the drum track. I really want it to stand out and have a lot of punch -- which is the way it sounds when soloed -- but it's getting buried in the mix. I can increase its volume, but even though it gets somewhat louder, its sound is still buried, and it begins to peg the meter, so I'm limited in how much I can do. This doesn't just happen with drums. It happens with guitar -- especially acoustic guitar -- and piano tracks. Oh, and strings. They get so buried I can't even tell they're there, but they definitely affect the meter if I turn them up. And I'm careful to balance the mix as well, trying hard not to stack instruments up in any area of the pan range. This has actually helped my mixes a lot, but it hasn't cured the burial issue. So what do smarter-than-me studio gurus do to keep their tracks separated from each other? Record in audio on million-dollar boards? Well, given that's way out of my range of affordability (I'm having to exist on a fixed income at the moment), is there anything we poor composers/engineers can do to improve our mixes?
DeeringAmps Posted August 4, 2020 4 instances of TTS-1 implies you have 16 stereo pairs available. Can I assume each of the 15 tracks has been assigned its own stereo pair? How are these tracks then routed? (I work with a combo of audio and MIDI tracks as a matter of course; separation either way shouldn't be an issue.) I would suggest you break up your instrumentation into several busses (for me it's VOCALS, GUITARS, DRUMS, KEYS/STRINGS) that are then routed to the MASTER. (I also use a SUB-MIX buss {all instruments} so I can quickly raise or lower the vocals.) When you solo the drums, where is the track peaking? If the TTS drums are sent to the DRUMS buss, pull it down so you have some headroom on the MASTER (-18 dB is a good place to start). Now slowly bring up the rest of your tracks. Level and pan each and get them "sitting" in the mix; hopefully you'll start to get everything working together. I find controlling "levels" in the mix easier at the buss level than with everything going straight to the MASTER; YMMV... Sounds like you have some "masking" going on, with several instruments competing for the same frequency space. That's a more complex subject -- let's get the basic mix working, then move on to fine-tuning it. HTH, tom
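The gain-staging arithmetic behind "pull the buss down to -18 dB" can be sketched numerically. This is not Cakewalk's internal signal flow, just the underlying math; the track signals are made-up noise for illustration.

```python
# Hedged sketch: why summing several hot tracks straight to the master pegs the
# meter, while routing them through a buss pulled down to -18 dB leaves headroom.
import numpy as np

def db_to_gain(db):
    """Convert a fader value in dB to a linear amplitude multiplier."""
    return 10.0 ** (db / 20.0)

# Three hypothetical tracks, each peaking near full scale (0 dBFS).
rng = np.random.default_rng(0)
tracks = [rng.uniform(-1.0, 1.0, 48000) for _ in range(3)]

# Summed at unity gain straight to the master, the combined peaks exceed 0 dBFS:
hot_mix = sum(tracks)
print(np.max(np.abs(hot_mix)) > 1.0)   # True: the meter "pegs"

# The same material through a buss pulled down to -18 dB keeps plenty of headroom:
bus_gain = db_to_gain(-18.0)
safe_mix = sum(t * bus_gain for t in tracks)
print(np.max(np.abs(safe_mix)) < 1.0)  # True: room left on the MASTER
```

Note that -18 dB is roughly an eighth of full-scale amplitude (10^(-18/20) ≈ 0.126), which is why even three full-scale tracks summed at that level cannot clip.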
Bristol_Jonesey Posted August 4, 2020 Read up on complementary EQ techniques.
Chaps Posted August 4, 2020 One tool that you might find useful is a spectrum analyzer. It lets you see where the frequencies are 'thick' and 'thin' and can help you see which frequencies can be cut or boosted most effectively. I've used Voxengo's SPAN (free) and I really like its interface and the ability to click and drag the cursor horizontally through the window so you hear only the frequencies your cursor is over. https://www.voxengo.com/product/span/
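What an analyzer like SPAN displays is essentially the FFT magnitude of the mix. A minimal sketch of the same idea (the test signal and frequencies are invented for illustration):

```python
# Sketch of the "where is the mix thick?" question: take the FFT of a signal and
# find the frequency bin with the most energy. Not SPAN itself, just the concept.
import numpy as np

sr = 48000
t = np.arange(sr) / sr
# A toy "mix" with energy piled up around 100 Hz and a weaker component at 3 kHz:
mix = np.sin(2 * np.pi * 100 * t) + 0.5 * np.sin(2 * np.pi * 3000 * t)

spectrum = np.abs(np.fft.rfft(mix))
freqs = np.fft.rfftfreq(len(mix), 1 / sr)

# The loudest bin sits right at the 100 Hz build-up:
print(freqs[np.argmax(spectrum)])   # → 100.0
```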
tonemangler Posted August 4, 2020 3 hours ago, Michael McBroom said: So what do smarter-than-me studio gurus do to keep their tracks separated from each other? Hi Michael, I'm by no means a studio guru, but this is what I've learned over the years. What you describe is masking. It happens when two or more sound sources live in similar frequency ranges, so when combined, those frequencies accumulate and blur or bury the individual sources. The first step to overcome this is to choose your sources wisely. Don't choose sounds because they sound good in solo; choose them in the context of the mix. When possible, try to program your parts in different frequency ranges so they complement each other. For example, if you find your string section is getting lost, try transposing up or down to see if it sits better. Second, make frequency cuts or boosts to individual sound sources that occupy similar frequency ranges. Solo the drums and then add the bass; if the kick becomes slightly indistinct, that means you have to carve out frequencies so they will both sound good together. Check the frequency spectrum of the kick drum with the Pro Channel EQ and see where the prominent build-up is (usually between 50 Hz and 80 Hz), then apply a gentle cut to the bass track in that region. You should be listening to both tracks while doing this, sweeping the bass cut until the kick sounds the clearest. You may then want to apply a gentle cut to the kick if it is conflicting with the bass fundamental. Keep adding different tracks to see if more EQ moves are needed. Acoustic guitars can have a lot of energy in the low frequencies that can cause havoc, so a good rule of thumb is to high-pass until the bass and kick retain their desired clarity. However, in a sparse mix where the acoustic is a featured instrument, you may want to keep its fullness. This process applies to mid and high frequencies as well: electric guitars, strings, voice, etc.
You will probably have to convert all your MIDI performances to audio to do all this. Third is panning, which I believe you mentioned you already do. Remember that in a mix it doesn't matter what things sound like in solo -- nobody but you will ever hear that. After certain EQ moves, individual parts may not sound great soloed, but what matters is how they sound combined with all the other parts. One thing I find when using multi-timbral synths is that the patches are often programmed to sound big and expansive as a selling point; however, when you try to combine a lot of big, expansive stereo sources, everything soon sounds like mush. I don't have experience with TTS-1, but if you can, reduce reverbs and convert some sources to mono. Remember that not every part in a mix can sound up-front and punchy; good mixes have depth, where some sources are up front and others sit farther back. Lastly, from experience in composition: sometimes having too many parts results in a smaller-sounding track where the individual sources become indistinct. Sometimes the best fix is the mute button (don't mute the whole thing, just try muting certain parts!). Good luck with your song!
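The "high-pass the acoustic guitar until the bass and kick stay clear" advice above can be sketched with a generic second-order high-pass filter. This is not Cakewalk's Pro Channel filter -- it's the widely published RBJ audio-EQ-cookbook high-pass, and the 100 Hz cutoff is just an illustration value.

```python
# Minimal RBJ-cookbook 2nd-order high-pass, applied as a direct-form I loop.
# Low-frequency "rumble" that fights the kick/bass is removed; the guitar's
# body tone above the cutoff passes essentially untouched.
import math

def rbj_highpass(x, f0, fs, q=0.7071):
    """2nd-order high-pass (RBJ audio EQ cookbook coefficients)."""
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    cosw = math.cos(w0)
    b0, b1, b2 = (1 + cosw) / 2, -(1 + cosw), (1 + cosw) / 2
    a0, a1, a2 = 1 + alpha, -2 * cosw, 1 - alpha
    b0, b1, b2, a1, a2 = b0 / a0, b1 / a0, b2 / a0, a1 / a0, a2 / a0
    y, x1, x2, y1, y2 = [], 0.0, 0.0, 0.0, 0.0
    for xn in x:
        yn = b0 * xn + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2
        y.append(yn)
        x2, x1, y2, y1 = x1, xn, y1, yn
    return y

fs = 48000
rms = lambda x: math.sqrt(sum(v * v for v in x) / len(x))
rumble = [math.sin(2 * math.pi * 40 * n / fs) for n in range(fs)]    # fights the kick
body   = [math.sin(2 * math.pi * 1000 * n / fs) for n in range(fs)]  # the tone you keep

print(rms(rbj_highpass(rumble, 100, fs)))  # heavily attenuated (rms ~0.11 vs 0.707)
print(rms(rbj_highpass(body, 100, fs)))    # ≈ 0.707: passed through untouched
```

In practice you would sweep the cutoff by ear, as the post describes, rather than fix it at 100 Hz.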
Michael McBroom Posted August 4, 2020 Author 3 hours ago, DeeringAmps said: 4 instances of TTS-1 implies you have 16 stereo pairs available. Can I assume each of the 15 tracks has been assigned its own stereo pair? How are these tracks then routed? [...] I started with Pro Audio 8, which came with for-real paper manuals, and I studied them. Then Pro Audio 9 came along and I made the transition pretty easily. I wasn't a power user by any means back then, but I knew my way around the software pretty well, I guess. I didn't get into many of its intricacies, though, mostly because I never felt I needed to. Then I had a long break from composition -- I finally got back into it with Sonar Platinum shortly before CW was shut down. I've been fairly active with it ever since. But I'm still not a power user.
Since I started using Sonar, I've been pretty much self-taught, because honestly I don't remember very much from back in the old Pro Audio days. When I get stuck on something I try to find an answer online, and if I don't have any luck, I ask here, because this forum is such a tremendous resource. As fortune would have it -- or maybe not -- after starting this thread, I was browsing through others here and came across "Help needed/clarification on TTS-1 usage," which piqued my interest. Well, I didn't get past the first response, by Jim Fogle, where he mentioned a couple of articles written by Craig Anderton on TTS-1 for SOS Magazine. I found these articles to be full of information, a lot of which I was either only peripherally aware of, or not aware of at all. So I read both articles. I now have a little better understanding of TTS-1, at least, such that I think I know how to address your first couple of statements. It would not be correct to assume that 4 instances of TTS-1 implies 16 available stereo pairs. First of all, whenever I've loaded TTS-1, I've never checked "All Synth Audio Outputs: Stereo" the way Anderton recommends; I've always checked "First Synth Audio Output." However, when it comes time to record my MIDI tracks to audio (using the bounce-to-tracks feature), they always record in stereo, so is it really necessary to check that box? Secondly, often when I select multiple instances of TTS-1, it's so I can access its audio capabilities on a MIDI track -- things like EQ and all sorts of other effects, which aren't available for the MIDI instruments. I also have much greater control over a channel's volume if I use TTS-1's audio controls, and it is for this latter capability that I will most often set up a new TTS-1 track. In the above example's case, I have three instances of TTS-1 using two tracks each, and the other 9 tracks are routed to the last instance. I'm doing this mostly for EQ and volume control.
It's worth noting, perhaps, that the drum track that's getting buried is in one of the two-track instances of TTS-1. I don't know if each MIDI track is assigned its own stereo pair. I spent some time looking for any instance of stereo anything associated with the MIDI tracks and couldn't find anything, except for the "All Synth Audio . . ." stuff when preparing to insert an instance of TTS-1. But that has to do with audio, so I don't see how it applies to MIDI anyway. I have been routing all of my tracks, both MIDI and audio, through a single Master bus, which I'm beginning to think, after reading your comments, might not be the wisest course of action. The drums are peaking with the snare strikes. I'll try to follow your advice regarding the drums and see if I can tame their output -- or at least that of the snare. I'm not familiar with the term "masking," but yes, I'd have to say there are instruments playing in the same frequency space. I guess figuring out how to achieve separation in this case may end up providing me with the best results. I don't know if it's a goal I can ever achieve, but I would like to be able to put together mixes that sound like a live event. I'm thinking about how, when listening to a symphony orchestra, for example, the human ear is able to easily separate the strings from the woodwinds and the brass. And I ask myself: why does it have to be so dadblamed difficult to get a recording to sound just as good?
Lynn Wilson Posted August 4, 2020 Here's a quick solution for me when tracks get buried or lost in the mix: pull everything down by 10 to 20 dB, then push up the tracks that are buried to where they are plenty loud. Finally, bring everything else up one by one until you get what you want. Once everything is balanced, you can bring the entire mix up in proportion to where you want it. There's so much headroom in digital recording that you won't be anywhere near the bottom end of the signal-to-noise ratio by lowering the volume of your mix. In the days of tape recorders, 60 dB of S/N ratio was the standard for excellence, and some of today's converters have almost twice that much.
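The reason "pull everything down, then rebalance" is safe is that fader moves are just multiplications: lowering every track by the same number of dB leaves their relative balance untouched, and digital headroom is enormous. A small arithmetic sketch (the track levels are hypothetical):

```python
# Fader math: a uniform dB cut preserves the balance between tracks, and 24-bit
# audio has far more theoretical dynamic range than tape's 60 dB S/N benchmark.
import math

def db_to_gain(db):
    return 10 ** (db / 20)

def gain_to_db(g):
    return 20 * math.log10(g)

drums, strings = 1.0, 0.25          # hypothetical peak levels, ~12 dB apart
g = db_to_gain(-15)                 # pull both down 15 dB, as Lynn suggests

# The drums/strings balance in dB is exactly what it was before the cut:
print(gain_to_db((drums * g) / (strings * g)))   # → ~12.04 dB, unchanged

# Theoretical dynamic range of 24-bit audio, ~6.02 dB per bit:
print(24 * 20 * math.log10(2))                   # ≈ 144.5 dB
```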
msmcleod Posted August 5, 2020 Having each instrument come out on its own audio output allows you to EQ each instrument individually. Even just applying a high-pass filter on everything apart from the bass can drastically reduce the muddiness of a mix. Frequency clashes between instruments can be a problem in even the simplest mixes, so being able to identify the frequencies that matter for a particular instrument, and lower the frequencies that don't, can really help. Be wary about using the solo button, though -- you may find that after EQ'ing an instrument it sounds terrible in solo but fits into the mix perfectly (and vice versa). This is normal... so it's best to mix as much as possible with everything playing. Another technique I've been using lately (thanks to @Craig Anderton for this one!) is to put a pink noise generator on an empty track with the noise up pretty high (way more than you'd hear on an old cassette, and more like the wind pelting your walkman headphones on a really windy day). I use Melda Production's free noise generator for this. Having the noise there really helps your ears avoid focusing on a particular instrument, allowing you to hear the mix as a whole. Just don't forget to mute the noise generator on mixdown!
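For the curious, pink noise like the reference track described above has a 1/f power spectrum (equal energy per octave). One common way to generate it -- a sketch, not Melda's implementation -- is to shape white noise in the frequency domain:

```python
# Generate pink noise by scaling a white-noise spectrum by 1/sqrt(f), which
# gives 1/f power (-3 dB per octave), then normalizing to full scale.
import numpy as np

def pink_noise(n, sr=48000, seed=0):
    rng = np.random.default_rng(seed)
    white = rng.standard_normal(n)
    spectrum = np.fft.rfft(white)
    freqs = np.fft.rfftfreq(n, 1 / sr)
    freqs[0] = freqs[1]                  # avoid dividing by zero at DC
    spectrum /= np.sqrt(freqs)           # power ∝ 1/f: pink
    pink = np.fft.irfft(spectrum, n)
    return pink / np.max(np.abs(pink))   # normalize to full scale

noise = pink_noise(48000)   # one second of reference noise; mute it on mixdown
```

"Equal energy per octave" means the 100-200 Hz band and the 200-400 Hz band carry roughly the same total energy, which is why pink noise sounds spectrally "flat" to the ear.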
Bill Ruys Posted August 5, 2020 11 hours ago, Lynn said: Here's a quick solution for me when tracks get buried or lost in the mix: pull everything down by 10 to 20 dB, then push up the tracks that are buried to where they are plenty loud. [...] That was going to be my suggestion too. Lots of good advice in this thread so far. Making room spectrally for each instrument (EQ, etc.) and also making room in the stereo image with panning can help. All the instruments in TTS-1 are stereo, which can lead people to leave each instrument panned dead centre, but don't be afraid to move them around. You generally want bass, kick, and vocals panned centre, but separate other instruments in the stereo image. Just as Lynn says, I really started making strides in my mixes when I started leaving plenty of headroom. Aim for a much quieter mix so that you can sit the instruments you want to highlight above the rest of the mix. Leave the overall volume until last -- that's more of a mastering task. Also, don't be afraid to automate the mix to highlight the instruments you want to showcase in different parts of the song. Mixing audio is a little like mixing paint: if you try to use all the colours at once, you just end up with muddy brown.
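The panning advice above usually relies on an equal-power pan law, which moves a source across the stereo image without the loudness dip that naive linear panning causes at centre. A minimal sketch (not any particular DAW's pan law -- Cakewalk offers several):

```python
# Equal-power (sin/cos) pan law: total power L² + R² stays constant at every
# pan position, with each side at -3 dB (≈0.707) when the source is centred.
import math

def equal_power_pan(sample, pos):
    """pos: -1.0 = hard left, 0.0 = centre, +1.0 = hard right."""
    angle = (pos + 1) * math.pi / 4     # maps pos to 0 .. pi/2
    return sample * math.cos(angle), sample * math.sin(angle)

l, r = equal_power_pan(1.0, 0.0)
print(l, r)              # both ≈ 0.707: centre is -3 dB per side
print(l * l + r * r)     # → 1.0: constant total power
```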
Byron Dickens Posted August 5, 2020 Start with your orchestration and arrangement. If you have too many parts, the ear can't make out what's going on. If you have instruments occupying the same frequency ranges, they mask each other. Even in a massive late-Romantic symphony with a 100+ piece orchestra, you usually have at most four parts -- or voices -- going (think 4-part harmony). Parts are doubled with different instruments and across octaves. Sometimes a line is harmonized, but there are still really only four distinct, unique parts. Also, instruments come in and out; no one plays constantly except the strings. After that, you need to use what's called complementary EQ. The gist of it is that if you have two instruments occupying the same frequency, you use EQ to carve out a space for each one. Kick drum and bass are a common example: you might boost 100 Hz on the kick and then cut it on the bass, while boosting 80 Hz on the bass and cutting that frequency on the kick. That's just a rough example to give you the idea.
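The complementary-EQ move described above (boost a band on one instrument, cut the same band on the other) is typically done with a peaking filter. Here is a hedged sketch using the standard RBJ-cookbook peaking EQ -- not Cakewalk's Pro Channel EQ, and the 100 Hz / ±4 dB values are just illustrative:

```python
# RBJ-cookbook peaking EQ, direct-form I. The same 100 Hz band is boosted on the
# "kick" and cut on the "bass", which is the essence of complementary EQ.
import math

def peaking_eq(x, f0, gain_db, fs, q=1.0):
    """2nd-order peaking EQ (RBJ audio EQ cookbook coefficients)."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    cosw = math.cos(w0)
    b0, b1, b2 = 1 + alpha * A, -2 * cosw, 1 - alpha * A
    a0, a1, a2 = 1 + alpha / A, -2 * cosw, 1 - alpha / A
    b0, b1, b2, a1, a2 = b0 / a0, b1 / a0, b2 / a0, a1 / a0, a2 / a0
    y, x1, x2, y1, y2 = [], 0.0, 0.0, 0.0, 0.0
    for xn in x:
        yn = b0 * xn + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2
        y.append(yn)
        x2, x1, y2, y1 = x1, xn, y1, yn
    return y

fs = 48000
rms = lambda x: math.sqrt(sum(v * v for v in x) / len(x))
tone_100 = [math.sin(2 * math.pi * 100 * n / fs) for n in range(fs)]

kick_boosted = peaking_eq(tone_100, 100, +4.0, fs)   # carve space *into* the kick
bass_cut     = peaking_eq(tone_100, 100, -4.0, fs)   # carve it *out of* the bass
print(rms(kick_boosted) / rms(tone_100))   # ≈ 1.58 (+4 dB at the centre frequency)
print(rms(bass_cut) / rms(tone_100))       # ≈ 0.63 (-4 dB at the centre frequency)
```

The symmetric boost/cut pair is what keeps the combined kick-plus-bass level roughly steady while still separating the two instruments.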
Michael McBroom Posted August 7, 2020 Author Thanks for the additional info, guys. This gives me a lot of room for thought. Headroom is something I try to keep in mind at all times. I try not to let my audio channels' volumes go above 0 dB. The fine-tune control at the top of the strip is where I adjust things, and I try to start there at -18 dB. To increase a signal's volume, I usually use the Pro Channel's EQ, but I will boost the fine-tune control some too if needed. Using the EQ for a volume boost has a two-fold advantage, I've found: not only does it allow for volume adjustment, but it allows the volume of specific frequencies to be adjusted. And speaking of EQ, I also sometimes use one of the Nomad BT EQ utilities for specific, narrow instances -- most often drums. Using EQ to carve out spaces for instruments that occupy the same frequency range is still something I need to work on. I suspect the Pro Channel EQ has more capabilities than I'm currently exploiting -- and I'm exploiting all that I know, including the high and low boost and cut, bandwidth adjustment, and use of one of the four frequency controls as a cut throughout the instrument's frequency range. I'm not as concerned about a MIDI instrument's volume -- I'm concerned more about the volume from its audio channel (such as TTS-1's audio channel). Also, that fine-tune volume control at the top of the MIDI strip, I've found, is more than a volume control. It also determines -- for lack of a better term -- the "attack" of an instrument's sound. Using a nylon-string guitar as an example: if the volume is kept at 12 o'clock or less, the guitar has a mellow sound, but if the control is turned up much past 12 o'clock, there's an increased brilliance to the guitar's sound that is determined to an extent by the velocity of any given note. So I have direct control over an instrument's "attack," or "brilliance," via this single control. Quite useful.
It has been one way I've been able to keep MIDI instruments from falling into a mix's mud. I think I managed to correct a large portion of the blurry, muddy mixes I've been experiencing yesterday. I've been using an old stereo receiver to power my monitors, and one of its channels has been failing. It still works; it's just that its output has become somewhat distorted. I've compensated by using my headphones more, but recently they've developed a problem of their own -- lows causing a faint buzz, just loud enough to be annoying. Anyway, given that my playback was becoming more and more compromised, I decided it was time to replace that receiver with a better one. Still a stereo receiver, but this new (well, used) one works WAY better: both channels are crystal clear at all volumes. At any rate, as I listen to the same tunes that seemed to have muddy mixes, everything is much clearer now, and I'm thinking I might not need to do as much tweaking as I'd originally thought. Conversely, I've also discovered instances where instruments were buried in the mix and I was able to successfully extricate them and boost their clarity substantially. So this new receiver has been very helpful in that respect as well. Anyway, onward and upward. I still have a stack of tunes to remix, hopefully with improved results now, thanks to the perspective I've gained from this forum.
scook Posted August 7, 2020 If you need help figuring out where the audio overlaps, and what the relative track levels are, a tool like Melda Production's MMultiAnalyzer can help. Like all Melda plug-ins, it goes on sale for 50% off at some point during the year.