Everything posted by Glenn Stanton

  1. are you using a mixer to record the vocal and listen to the recorded material? if so, you're likely mixing them there and that mix is being fed into the vocal track. if you have an IO unit with a separate monitoring output - maybe your monitoring is too loud and that is getting picked up by the mic. another option is that your audio card's software mixer is combining them. and lastly, you may have something routed into your vocal track from the instrument track that is perhaps your monitoring source - i.e. you want to hear both vocals and instruments and have routed them together (like a patch point or aux track, for example).
  2. pretty sure sidechain compression has been around for at least 40 years (probably longer) to duck things like: bass from kick drums, instruments from vocals and solos, reverbs/delays from vocals and solos, etc. in combination with delays on reverbs and echoes for either clarity or "slap" purposes. and one suspects this was common across genres...
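    if it helps to see the ducking idea outside of any particular plugin, here's a rough python/numpy sketch (my own simplification, assuming two mono arrays at the same sample rate and length - not how any specific compressor is implemented):

      import numpy as np

      def duck(bass, kick, sr=44100, depth_db=6.0, release_ms=120.0):
          # crude envelope of the key (kick) signal
          env = np.abs(kick)
          # one-pole "release" smoothing so the gain recovers gradually
          rel = np.exp(-1.0 / (sr * release_ms / 1000.0))
          smoothed = np.zeros_like(env)
          for i in range(1, len(env)):
              smoothed[i] = max(env[i], smoothed[i - 1] * rel)
          smoothed /= max(smoothed.max(), 1e-9)   # normalize 0..1
          # louder kick -> more gain reduction on the bass, up to depth_db
          gain_db = -depth_db * smoothed
          return bass * 10.0 ** (gain_db / 20.0)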
  3. make sure everything being exported is output (directly or via a buss) to whatever you're exporting from - the "master buss" OR the speakers (hardware out). if any of your tracks are going only to the speakers and the export is happening only on the master buss, then you will be missing parts.
  4. +1 on Dell - until just this past year i only used Dell computers (since like 1990). my previous laptop was a hand-me-down Dell which, once cleaned up with new SSD drives (replaced the DVD drive with an SSD OS disk and used the higher-performing internal connection for the project drive) and a fresh OS (W10), worked perfectly for several years as a DAW machine - even with no option to expand memory beyond the 8GB it had. the Dell is still alive and sits under my HP at the moment. i only really updated to the HP because a) it was on sale, and b) i needed it more for my studio design work (vector CAD and rendering - a CPU-intensive CAD program and a GPU-based rendering product), which brought my processing down to seconds and minutes vs an hour or more of rendering.
  5. No audio in playback

    so try this: ALWAYS connect your bluetooth headset BEFORE opening this particular project 🙂 or create a new clean project, as it seems like this one is corrupted for some reason (based on your comment that ALL OTHER projects WORK). i doubt it's a "bug" since it's only the one project, and only when you don't connect your headset before opening it. to me, that says something is wrong in the project.
  6. for composing and arranging outside of the DAW, i use Hookpad, Musescore, Scaler 2, and depending on granularity, the NI Session instrument patterns, the EZ products (generate and grid edit), and the Ample "riff" mechanisms to get instrument-specific "subsections". then some additional bits like arpeggiators, Chordz, and other note-to-chord type translators to add elements. for the EZ Keys, Bass, and Drums, i'll re-create the arrangement chords in EZK, get it the way i like it, then take it over to EZB and EZD (if i think the drums could be more interesting), and get those to where i think they'll play well. usually, if the arrangements in MIDI are good, then importing and applying those tracks in my recording template is fairly fast - some selection of instruments and some edits. but if there are significant arrangement changes i want to make, i simply go back to the MIDI source(s), regenerate, and re-import into the project with the already selected instrumentation. this keeps things sync'd consistently for only moderate extra work. most of the rendering is then done in the DAW, and i can export the tracks (as well as intermediate WAV and MP3) for import into the mix template (which has all the routing, typical effects and settings, etc). at that point it's effectively the same as getting client files to mix.
  7. i only really use the MIDI take lanes when i'm recording someone who wants to do multiple takes over a given loop, but most times it's much simpler - do the take, ok? try again, ok? repeat... 🙂 because in those situations i'm recording audio of the instrument (keys, guitar, MIDI horn, etc) and sometimes their vocals as well. for myself, in general, i do all the MIDI composition outside of the DAW.
  8. i should have mentioned the unmasking EQ stuff is usually very small (~1dB) moves across several instruments (if needed), plus some clip gain adjustments to elevate slightly where needed as well (like that errant tambourine hit - i might reduce that one or two hits by 1dB or so to get it lower, and maybe boost 1dB on the instrument being masked). so 99% of the time it's small moves - for most songs i depend heavily on arrangement to get a good mix...
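    for a sense of how small those moves really are, here's the arithmetic (decibels to a linear amplitude factor, 10^(dB/20)) as a quick python check:

      def db_to_gain(db):
          # decibel change -> linear amplitude multiplier
          return 10.0 ** (db / 20.0)

      print(db_to_gain(1.0))   # ~1.12 -> a 1dB boost is only about 12% more amplitude
      print(db_to_gain(-1.0))  # ~0.89 -> a 1dB cut is about 11% less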
  9. usually just #1, but if i have a dense mix i'll use my iZotope tools (Neutron/Nectar) to find and unmask. ultimately though, i also check with MP3 listens (either via the Ozone codec preview or an exported MP3), since the MP3 algorithm (even at its highest setting of 320K) will make choices on masking, and even small mix changes can have significant results in the MP3 output (one example comes to mind - a tambourine hit which is just barely there in the mix suddenly becomes the next loudest thing in an MP3 - why? - the algorithm decided it was the strongest signal at that particular frequency and suppressed the others). so, i would say checking your target output file type is critical. and if you're streaming, recheck there as well - some streaming services (in an effort to make things "sound better") seem to do some EQ'ing and loudness adjustments (perhaps they call it "mastering") even if you don't want it. for example, Reverbnation seems to do some slight bandwidth reduction (stuff seems slightly crunchy compared to my source file), causing a very slight darkening of the material; otoh, Broadjam seems to add some additional HF and loudness, making things brighter and louder (regardless of the LUFS i submit - whether -12 or -14). Reverbnation targets artists who want to sell their stuff, gigs, etc; Broadjam is about getting paid to help find deals for your music, so it tries to make things "sound better" to "assist". maybe i'm incorrect but those are my observations using those two services.
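    if you want to sanity-check the LUFS of whatever you're about to submit (rather than trusting the service), here's a small python sketch using the soundfile and pyloudnorm packages (assumed installed; "my_mix.wav" is a placeholder filename):

      import soundfile as sf
      import pyloudnorm as pyln

      data, rate = sf.read("my_mix.wav")      # decoded audio + sample rate
      meter = pyln.Meter(rate)                # ITU-R BS.1770 loudness meter
      lufs = meter.integrated_loudness(data)  # integrated loudness of the whole file
      print(round(lufs, 1), "LUFS integrated")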
  10. as always - check your buffer settings - low latency with plugin hogs can yield unsatisfactory behaviour 🙂 as a general rule, set the buffers as high as you are comfortable with. for me that is 2048. and most times, even with Nectar (major hog) and Abbey Road Chambers, i'm seldom over 10% CPU.
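    the tradeoff is easy to put numbers on: one-way buffer latency in milliseconds is just buffer size divided by sample rate. a quick python check (44.1kHz assumed):

      def buffer_latency_ms(buffer_samples, sample_rate=44100):
          # time to fill one audio buffer, in milliseconds
          return 1000.0 * buffer_samples / sample_rate

      print(buffer_latency_ms(2048))  # ~46 ms per buffer - fine for mixing/playback
      print(buffer_latency_ms(128))   # ~2.9 ms - nicer for tracking, harder on heavy plugins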
  11. yeah, finding a properly weighted 88-key which is fairly portable will be expensive no matter what. maybe he just needs one for his room for private practice and can use the real one in class when needed. you could probably find a reasonably priced one for his room, perhaps even used (which may just need some TLC).
  12. yeah, it's often listed as "clean" but there is definitely some character there, so i've been using it on vocals when i don't want too much character but do want some coloration and yet a clean, smooth response. i've had it for a while but never really tried it out until some backing vocals sounded "saturated" with my usual suspects. i popped it on and suddenly, voila! it was right on the money. put it into my record and mix templates and moved along. PWM is definitely a unique technique for a compressor - normally something we equate with digital circuits, but here implemented in analog circuits (to be fair, it's all transistors etc no matter how small)...
  13. yeah just don't lose a piece or break one... it would be interesting to see how much a replacement part costs... might be best to get a high quality 49 key unit and a backpack for it...
  14. that may have been why the OP's name and email are completely different... it's a setup to get someone spammed... not that this is necessarily the source, but in the US every time you do something with the motor vehicles department, the state sells your information. some states even include your photo in the package (mostly the states with the strictest privacy rules, oddly enough). 🥸
  15. one of the nice features of Melodyne - you can adjust levels across the selected nodes - bring up lower volume ones, and turn down louder ones as a means of balancing the overall volume. as well as doing some de-essing and note corrections. then use compression (or limiting etc) to get that final finish. lately, i've been using the Kramer PIE compressor w/ ~2-3db compression to glue backing vocals. it seems "clean" but it does have "something" which lately just seems to be working well for me. https://www.waves.com/plugins/kramer-pie-compressor
  16. pretty sure bagpipes were the original punk rock 🙂 i think there are some soundfonts out there and some Kontakt libraries which may be worth looking for. SF https://www.musical-artifacts.com/artifacts/2221 VST https://www.universal-piper.com/en/the-virtual-bagpipe-lab/ https://plugins4free.com/plugin/3296/ https://integraudio.com/7-best-bagpipes-pipe-plugin/
  17. the staging of the hardware versions helps even out the peaks and can add some volume to lower sounds. the idea is partly to prevent clipping (in the video, the dbx 166) and partly to provide a steady enough signal to clear any circuit noise (which with digital tends to be low, although room noise and cheap mics etc can all contribute). for in-the-box, you have to prevent clipping before you capture the signal (unless you have a compressor on the mic side or a nice preamp which is more graceful when presented with louder signals), and then you can use the internal compressors to "level" the track - reduce peaks and increase lower volume (makeup gain) - with the caveat that you can use just about any compressor you want, really, as long as you can control the essentials: input level, output level, and preferably attack and release. you could use two 1176s - one set to soft and one set to hard - and you can flip the order to see which has the best effect on your track (or buss).
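    for what those "essentials" actually do to the numbers, here's a bare-bones static gain computer in python (my own textbook-style sketch - threshold, ratio, and makeup only, no attack/release smoothing, and not modeled on the 1176 or dbx 166):

      def compress_db(level_db, threshold_db=-18.0, ratio=4.0, makeup_db=3.0):
          # below threshold: signal passes untouched (plus makeup gain)
          if level_db <= threshold_db:
              return level_db + makeup_db
          # above threshold: the overshoot is reduced by the ratio
          over = level_db - threshold_db
          return threshold_db + over / ratio + makeup_db

      for peak in (-24, -18, -12, -6, 0):
          print(peak, "->", round(compress_db(peak), 1), "dB out")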
  18. yeah, content gets large quickly. i was considering Superior Drummer 3 until i saw it was 230GB of samples... then decided a) i didn't need a 4th drive just for a drum machine, and b) whilst the quality is no doubt the finest, it is beyond my skill set to need the nuances and pristine sound of a 192kHz sample of a stick hitting something... 🙂 so, definitely find a way to add a "content" (and/or "project") drive to get your C drive as clean as possible. it's ok to leave your VSTs etc on C (since some hard-code C even if you have a junction) and just move the content. you'll be glad you did.
  19. Fire

    long ago, i bought a Fostex 4-track recorder (like their first model...) and of course after getting it all connected, and doing some day drinking, i figured i'd give it a workout - recording like 10 tracks w/ bouncing etc. so to say that the original version of this was "grunge" (before grunge was a popular thing) would be, erm, a disservice to grunge... but my brother found it and liked it, and managed (for live purposes) to turn it into something decent because he is an extraordinary keyboardist (although he liked my big piano slides and 8th-note bopping - not sure why, maybe it's just rock and roll). so now fast forward 40 years... trying to keep the arrangement somewhat similar, basically just a fun bit of innuendo and steady beats...

    BPM 128, Key A

    c'mon and take me dancing darlin'
    underneath this pale moonlight
    the way you hold me and squeeze me
    i can feel the temperature rise us, baby
    i can see the fire in your eyes
    can you feel this heat between us?
    oh, this must be the fire
    yeah, this must be the fire

    c'mon and take my lovin' baby
    slowly dance me through the night
    the way you touch me and kiss me
    i can feel these flames ignite us, baby
    i can see the fire in your eyes
    can you feel this heat between us?
    oh, this must be the fire
    yeah, this must be the fire

    [solo]

    c'mon and take me dancing darlin'
    underneath this pale moonlight
    the way you touch me and kiss me
    i can feel these flames ignite us, baby
    i see the fire in your eyes
    can you feel this heat between us?
    this must be the fire
    oh, this must be the fire
    yeah! this must be the fire
    oh, let's take this higher
    c'mon and light my fire
    c'mon and feed the fire
    yeah, do you feel the fire
    fire! yeah!
    this must be the fire

    -----------------------------------

    besides which, how many times can you yell fire in a dance club? LOL 🙂

    instruments: drums, bass, acoustic guitar, 2x electric guitars, solo guitar, piano, e. piano, organ, synth (that sounds like an excess reverb or thin organ, for a pseudo pad on breaks), horns (2x trumpet, sax, trombone), lead vocal, and 2x backing vocals. the horns are my new toy "session horns" and i haven't played much with them yet, but i found an interesting riff which i felt good about and used that during intros and on the end... the "burning with desire" thingie during the drums & bass break is something the backup vocals came up with... not part of the original... comments welcome.
  20. there are a few possible explanations: 1) your monitoring - depending on the levels and software you're using on playback, you could get a different EQ / room response which changes your perception. good idea to make sure the calibration on levels is consistent, as well as any EQ / effects / etc on playback. listening on a mono-like sound source (e.g. a portable USB speaker) can alter the mix you hear vs your stereo mixing. 2) MP3 (if that's being exported and listened to) is a lossy algorithm which discards masked frequencies in order to reduce file size, and at less than 320K it gets exponentially worse... as a general note: i routinely check my monitor levels to be consistent across all apps and devices, ensure OS and other updates haven't switched on some effects or mystery EQ and enhancement settings, etc. then i listen to the mix on several of them before exporting. fwiw - one of the reasons people still use NS-10s or other bandwidth-restricted speakers is to get a separation from the LF and HF components of a mix to see if the midrange is OK, then use full range to tweak the LF and HF.
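    if you don't have a mono-like speaker handy, you can fold the export down to mono and give that a listen - a small python sketch using the soundfile package (assumed installed; "mix.wav" is a placeholder filename):

      import soundfile as sf

      stereo, rate = sf.read("mix.wav")            # shape (samples, 2) for a stereo file
      mono = stereo.mean(axis=1)                   # L+R fold-down; phase cancellations show up here
      sf.write("mix_mono_check.wav", mono, rate)   # export the mono version for listening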
  21. someone mentioned a PCI-e x16 riser card where you could install dual m.2 drives, so if looking to create RAID or simply expand storage maybe look for that as well...
  22. yes, i have a couple of projects coming up which need steel drums, so i bought them. they sound so much better than the SF2 i have been using over the years, and the hand-tapping articulation is excellent. so, for $28... the Soundpaint version (also 8Dio) seems to use the same sample base (12.3GB) but the options and articulations are fewer. $15.
  23. it looks like you're using sends in lieu of directly routing things: 1) vocals being sent to master (track output) + vocal buss (send) -- maybe set the vocal output to the vocal buss, then route the vocal buss to the master buss. eliminate the vocal send. 2) same for any other tracks going to busses - drop the send, and use the direct output routing. 3) if using a buss for shared usage (like reverbs, delays, parallel compression etc) then a send from the track (or buss) is ok. 4) all outputs eventually make it to the master either via a buss or via a direct output. sends are generally not your main routing to the master (from my master i have sends to my monitoring and print busses but all things to the master are direct outputs). it's one of the fun things about having nearly infinite routing options 🙂