
Glenn Stanton

Everything posted by Glenn Stanton

  1. it sounds like one of the analytical plugin type functions to give you stats on the project. i use Expose on rendered files, but with plugins like YouLean, Insight, etc. you can get LUFS-type info. as far as L-R + phase (stereo/mono compatibility) balance, EQ balance per genre, etc. you can use some products / plugins like iZotope RX, Ozone, Acoustica, Sound Forge etc. to measure those things.
  2. so if you start a blank project and set this up, it still doesn't work?
  3. try something like this to check everything is working as expected, then adjust levels:
  4. same here. i only have the Slate subscription because there are some plugins and synths i use a lot. otherwise i'm avoiding subscriptions and trying to wean off the Slate.
  5. if you use a high quality usb key (don't skimp on this) to host your Waves licenses, you can readily switch between machines, just not run both at the same time. my second secret trick is to have a bunch of free plugins (or paid ones without restriction) which i can substitute - i have the Cakewalk ones from Sonar 7 onwards, and a bunch of free ones which provide a lot of capabilities (and this is how i have several friends set up their systems). so, probably 80/20 - 80% of the time i'm using Waves, Slate, etc. and other single-machine-at-a-time plugins, and 20% of the time using freebies or builtins.
  6. presumably you are trying: track -> master buss (or equiv), plus track -> send to reverb buss or aux track -> reverb 100% wet / 0% dry -> output to master buss? a few things to check: 1) is the reverb getting a signal from the track? 2) is the reverb output set to the same output as the tracks? 3) is the mix (wet/dry) setting on the reverb set to 100% wet and 0% dry? (otherwise you're just getting more dry signal) 4) is the reverb powered on / enabled / not controlled by an envelope which could be turning off the sends or the buss itself?
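     the reason check #3 matters can be seen in the wet/dry math itself. a minimal sketch in plain NumPy (not any plugin's actual code - the function name and setup are just for illustration):

```python
import numpy as np

def mix_wet_dry(dry: np.ndarray, wet: np.ndarray, mix: float) -> np.ndarray:
    """Blend an effect's wet output with the dry input signal.
    mix=1.0 -> fully wet (what you want on a dedicated reverb buss);
    mix=0.0 -> fully dry (you'd hear no reverb at all, just more dry)."""
    return (1.0 - mix) * dry + mix * wet

# on a send/return setup the dry path already reaches the master buss,
# so the reverb buss itself should output only the wet signal:
dry = np.ones(4)        # stand-in for the dry vocal
wet = np.zeros(4)       # stand-in for the reverb tail
reverb_buss_out = mix_wet_dry(dry, wet, 1.0)   # wet only, as intended
```

     with mix at 0%, the send buss just doubles the dry signal into the master, which is exactly the "no reverb but it got louder" symptom.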
  7. i keep the licenses on my ilok and waves thumb drives, then use the installers to install the plugins (as expected) but use FreeFileSync to keep the plugin folders, settings / presets, and content sync'd to the cloud, and then sync from cloud to devices (laptop and desktop machines). whichever machine is the "current one" where i may have installed updates or new presets overwrites the cloud set, and then the switchover is simply: move the USB keys and hit the sync, and after a few minutes (most times) it's all done. application updates occur via the vendor plugin managers as normal. the nice thing about the keys and cloud - if i'm doing something at my friend's setup, i can quickly load the plugins, load the presets and content, activate, and get to work. although most times i'll just bring my laptop and drives. πŸ™‚
  8. i'm finding i do a lot of composition in Hookpad and Musescore. or both. a rough draft in Hookpad, export MIDI and bring into Musescore to then fine tune, merge, expand, create the melodies etc. export to MIDI for CbB and any final tweaks and edits. maybe re-export from the DAW if needed to ensure my sheet music is aligned. it's a weird combination of "blocks" and music notation. i'm not really a fan of tabs (for me it's harder than plain old sheet music, no idea why). πŸ™‚
  9. not true, for all spambots to truly get it all, they have to buy the list from the DMV or voter rolls πŸ™‚ lol
  10. thanks, but in reality, the ink system is embedded in the typing system - so the "ink workspace" can be turned off, but the internals of the ink system are embedded in the typing system - so it's never really turned off from a collection of keystrokes and mouse movements. my guess -- people were using drawing to communicate on chats and meeting product whiteboards to avoid typing and therefore a means of collecting the mouse (or other drawing means) was needed to collect it. not sure if they track cursor key movements though. so maybe the old etch-a-sketch means of signaling could be used? (like square wave binary code or even morse code lol) πŸ™‚
  11. another thought (and what i do) is you can use the arranger as in #1 - then export a 2-track (or mono) WAV file, then import that into a new project (e.g. projectname VOX.cwp) and do all your vocals in that project: lead, comps, backing, ad libs, etc. then export those finished vocals and import into the main project. the arranger removes the solo (and any other sections you don't need), and the 2-track WAV file lets you do all the vocal work without the weight of all the tracks in the main project (which lightens the CPU load and lets you work with lower latency).
  12. make sure in the device manager they're listed and enabled in "software devices" and "sound, video, and game controllers".
  13. "artificial" or "automatic" "double tracking" - ADT - was invented by an Abbey Road engineer (Ken Townsend) (https://en.wikipedia.org/wiki/Automatic_double_tracking) because John Lennon was always complaining about having to do double takes on vocals. lol. Waves has a plugin (which i use often on background vocals, not so much lead vocals) which simulates that effect. https://assets.wavescdn.com/pdf/plugins/reel-adt.pdf the double tracking technique is more than just shifting the timing - it's doing small (or large) variations whilst it is running to create differences which make it appear more lifelike. so in addition to shifting the time, some modulation on one or both tracks can add more realistic double tracking (or cool effects). before i got the Waves plugin i would do as mettelus suggested, as well as use the Sonitus modulation plugin on each track. or, on a separate bus, use the Sonitus (or other split delay effect) with the modulation to create a stereo split using the delay timing + modulations and then center the lead vocal. the delay + modulation was mainly for background vocals.
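     to make the delay + modulation idea concrete, here's a rough NumPy sketch of a modulated-delay "double" - the parameter values (30 ms base delay, a slow shallow sine LFO) are illustrative guesses, not the Waves or Sonitus settings:

```python
import numpy as np

def adt(signal: np.ndarray, sr: int,
        base_delay_ms: float = 30.0, depth_ms: float = 3.0,
        rate_hz: float = 0.5) -> np.ndarray:
    """Crude ADT: a copy of the vocal delayed by ~30 ms, with the delay
    time slowly wobbled by an LFO so the 'double' drifts in and out of
    time like a real second take would."""
    n = np.arange(len(signal))
    # per-sample delay in samples, modulated by a slow sine LFO
    delay = (base_delay_ms + depth_ms * np.sin(2 * np.pi * rate_hz * n / sr)) * sr / 1000.0
    src = n - delay                       # fractional read position
    i0 = np.clip(np.floor(src).astype(int), 0, len(signal) - 1)
    i1 = np.clip(i0 + 1, 0, len(signal) - 1)
    frac = src - np.floor(src)
    doubled = (1.0 - frac) * signal[i0] + frac * signal[i1]  # linear interp
    doubled[src < 0] = 0.0                # nothing to read before the take starts
    return doubled
```

     mixing this doubled copy under (or panned against) the original vocal gives the drifting second-take feel; a static delay alone just reads as slapback.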
  14. not really - i like to use a no-latency approach to recording live instruments and vocals - so the monitoring is not echoed from the DAW, but from the mixer and monitoring gear. that also means i can specify the latency on the recording. say 100ms one-way. then after i record, i know the tracks are shifted 100ms and simply move them to the left 100ms. done. no possibility of crackles, pops, or problems with performances impacted by latency from the DAW and IO echo. some folks are ok with 3, 5, 10, or even 20ms latency (roundtrip) with the DAW recording-echo approach. i think it's just simpler to accept it and move the tracks to align them. takes mere seconds to set and be ready for the next round of recording. actually, if you watch the Paul McCartney @ Abbey Road special (2011? https://youtu.be/9elQeVfrLOo?si=qm8jLR_xLK-e2T_Z) where he is demonstrating (all solo performances except audience sing & clap along) the old recording processes he (and the Beatles) used with tape, and then his tech using pro tools, you can see the tech doing the manual latency shifts in near real time after each take and before Paul begins a new track (granted much older technology but same principle). much less complex than say trying to time align or phase align drums, etc. so no issues for me. note that my UMC1820 can get roundtrip down to 2.3ms on my desktop, and 5.2ms on my laptop, but really, why bother with the potential for combing effects from even minor latency when direct monitoring basically solves that. and my mixer's (Behringer EURORACK UB1202FX) built-in effects like reverb and delay provide singers with some of that without using up any of my CPU processing as well. so my path = mic / DI -> preamp -> IO -> IO USB to/from PC (record in + play out) + direct out -> mixer into headphone monitoring w/ any effects desired. so all clean mic and DI signals in via IO, and the direct monitoring and recorded play out via the mixer/monitor.
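     the shift-after-recording arithmetic is trivial, which is part of the appeal. a sketch (a hypothetical helper, not a CbB feature):

```python
def latency_shift_samples(latency_ms: float, sample_rate: int) -> int:
    """How many samples to slide a recorded clip left to line it up,
    given a known one-way monitoring/recording latency."""
    return round(latency_ms / 1000.0 * sample_rate)

# e.g. the 100 ms one-way case above, at 44.1 kHz:
offset = latency_shift_samples(100.0, 44100)   # 4410 samples earlier
```
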
  15. thanks! maybe another song for more cowbell πŸ™‚ the xylophone with the Bb (which i think if louder would be too dragging) is used to make the swing between G & C slightly discordant and resolving alternately... cheers!
  16. thanks! these are Synth V voices - Kevin is the lead male on the verses and singalong, and Natalie is the female voice on the bridge. both are also the backup vocals, along with two more harmony voices - Anri and Solaria. https://dreamtonics.com/synthesizerv/ https://synthv.fandom.com/wiki/SynthV_Wiki i did use a xylophone to provide a constant Bb chord throughout the verse (when i play it live on guitar, it's the Bb note on the C and G chords) which somewhat acts like a cowbell πŸ™‚ i did tweak the mix a bit yesterday to bring in the instruments a tiny bit (1 dB or so) more and reduce some of the bass level (as i hadn't really done a proper check across listening devices like i should have...)
  17. i also noticed since they stopped using the SETI apps on your PC to help, erm, decode space signals, that the newer auto-wreck features on text messaging seems to find and attempts to use words which a) you never knew existed, and b) would never have used in any conversation you ever had but somehow the AI determined it was just the word you needed. so either a) the aliens are already here and running things and use this to keep the masses busy correcting it and leveling curses at it, and/or b) they dropped something into the water/vaccines/etc which make the population so stoopid that we'll never be a threat to the rest of the galaxy...
  18. couple of thoughts -- if you're stuck using the Realtek drivers for your audio out, ok, but if you have an IO device which supports better audio, i'd disable the Realtek ones. and use the WASAPI shared or exclusive modes (or if the IO device has proper ASIO drivers, use those). -- you do not need a MIDI device to play MIDI via virtual instruments (such as TTS-1, SI- instruments like drums, piano, strings, bass, etc., assuming you install the SI instruments that come with CbB). the MIDI on a given track is sent to the VI and then out via your audio output. -- in 99% of cases, people are looking to "disable" the MS GM synth, as it has had a lot of issues and none have been given attention... not sure why you "lost it" from an update, but there are numerous OS updates happening and it's possible one of those recent ones disabled it (e.g. i've had 6 updates in 2 weeks now, and Edge updates are notorious for screwing things up lately)
  19. agreed. realistically, generally the object model is project -> tracks + envelopes -> clips + envelopes -> busses + envelopes -- each with settings, effects w/ settings, plus all the written notes, and then the associated audio and/or MIDI files in formats that are generally well known by now. one inhibitor would be the envelope data, which if not smoothed could be significantly sized. otherwise, most of the metadata would be fairly minimal (say <10 MB uncompressed?)
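     that object model could be sketched as plain data classes - the names here are illustrative, not any DAW's actual project format:

```python
from dataclasses import dataclass, field

@dataclass
class Envelope:
    # (time, value) automation points; unsmoothed curves can get large
    points: list[tuple[float, float]] = field(default_factory=list)

@dataclass
class Clip:
    media_file: str                       # path to the audio/MIDI file
    start: float = 0.0                    # position in the timeline
    envelopes: list[Envelope] = field(default_factory=list)

@dataclass
class Track:
    name: str
    clips: list[Clip] = field(default_factory=list)
    envelopes: list[Envelope] = field(default_factory=list)
    effects: list[dict] = field(default_factory=list)   # plugin + settings

@dataclass
class Bus:
    name: str
    envelopes: list[Envelope] = field(default_factory=list)
    effects: list[dict] = field(default_factory=list)

@dataclass
class Project:
    tracks: list[Track] = field(default_factory=list)
    busses: list[Bus] = field(default_factory=list)
    notes: str = ""                       # the written project notes
```

     everything except the referenced media files is small, text-like metadata, which is why the whole thing could serialize compactly.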
  20. and apparently everything you type, every mouse movement (the ink services you cannot disable oddly enough) and any related audio/video/electromagnetic signals (like bluetooth and wifi, NFC, credit cards w/ chips, etc) are also all scanned and shared to someone on the internet to "help" make everything better. not trying to be paranoid but the timing of these "features" in windows and apple OSes seemed to coincide with the completion of the massive NSA facility in Utah a number of years ago (when the fort meade facility couldn't handle the load and the old TRW crypto facility with the quantum processor farms (which is why encryption became mostly all allowed back in 1995) needed more storage).
  21. yes, you could have a bunch of instruments all responding on the same or different MIDI channels as well as outputs - separate or combined. even multiples of a single instrument.
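     the routing idea can be sketched as a simple channel map - instrument names here are just placeholders:

```python
# several instruments listening on the same or different MIDI channels;
# a channel can feed one instrument, many, or multiples of the same one
routes = {
    1: ["piano", "strings"],    # two instruments layered on channel 1
    2: ["bass"],
    10: ["drums"],
}

def instruments_for(channel: int) -> list[str]:
    """Which instruments respond to a note arriving on this channel."""
    return routes.get(channel, [])
```
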
  22. part of a series of songs written during my time doing open mics at the Backstage Hotel which was all about creating and performing originals. this one is somewhat like a Romeo & Juliet story, erm, perhaps in reverse? lol. anyways - this is early in the mix process and i have some lead vocal phrasing i want to tweak... feel free to leave some comments. ----------------------------------------------------------------- she thinks about how her life has been insane and contemplates the mysteries of eternal pain she sings her favorite song but then she hums the refrain she dances in the rain her boyfriend's gone away and she wonders who to blame on this date in the calendar she cries out his name to the hole in the ceiling where it echoes through the frame she dances in the rain he crawls up to the surface like a zombie from its bed screams this ain't no way to live to those voices in his head but it's to that distant melody, a secret place he's led he's walking with the dead he rushes down that shrouded lane and he finds a clearing there he sees a lonely figure dancing 'round a candle on a chair it's a losing proposition you get burned if you win the game she dances in the rain he moves into her motion, she swirls to embrace those raindrops sparkle on her beautiful face the music seems fantastic but the song it's got no name she dances in the rain [solo] two worlds collide they're not the same he walks with the dead she dances in the rain now miles apart in the same dark room they pull apart the curtains to release the gloom they fall to their knees to that higher power above and they begin to pray they say love shine on me love
  23. honestly, i'm only here for the sneaker ads anyways...
  24. whoa! wait! you have friends? πŸ˜Άβ€πŸŒ«οΈ
  25. note: Will's technique of recording the input via the AUX track is how you print the effects on the WAV clip(s). otherwise, just having plugins on the input track does NOT print those effects. if you turn off the effects, you're simply left with the raw input. and i recommend recording BOTH - so if you find the effect(s) printed on the aux track need to be adjusted, you have the source material to work with... much like re-amping - record your guitars both DI and post-amp / post-effects so you can redo the guitar amping and effects later.