
Glenn Stanton

Everything posted by Glenn Stanton

  1. my laptop came with W11 Home and it seems to run all my software ok. when installing an upgraded OS, like Home to Pro or W10 to W11, i recommend a clean install followed by re-installing all the software etc. PITA, but much more reliable that way.
  2. look under the audio devices. if the device has its own ASIO driver, then you will need to use that exclusively for the IO, since only one ASIO driver can be active at a time. whereas if you can use WASAPI Shared mode, then you'll have more options for input vs output.
  3. this ^^^ always record the MIDI along with the audio (on any MIDI source really); then, as John suggests, you can replace the piano (with another piano, organ, guitar, etc.) if you're not happy with the keyboard audio, or if you simply want to do another part in the arrangement with something else. Hammersmith Free is decent, EZ Keys2 has a nice grand, and most times i'm using Hammersmith Pro. or, oddly enough, a very old Steinway D soundfont (sf2) or an old 1890 Bösendorfer piano soundfont - they both sit in a mix nicely... played via the free sforzando VST.
  4. yeah, but what happens when someone else is the singer? and the tracks are done and dusted and now need to be mixed? so vocal training is nice, vocal caching is also nice, and having only / nearly perfect vocals arrive from parts unknown is also nice. 🙂 but -- if i may -- i tend to do the comping and then apply the pitch fixes, as it's faster for me to get the emotional content and timing first, then adjust pitch if needed. i guess it depends on the workload and time pressures - it's not always feasible to go back and fix a take's pitch rather than get it all in one shot as much as possible.
  5. the Melodyne "separations" are the small blob audio clips it uses to store the audio + the edit metadata.
  6. all this just in time for engineers and artists to be replaced by AI 🙂 you know, those AI / Machine Learning plugins we have nothing to worry about 🙂
  7. maybe the mix recall function is applied? i have not seen the case where the envelopes changed, but i have had a few projects over the years where i was mixing, closed the project, did some other work, re-opened it, and suddenly the overall levels were way too hot. sometimes i find i tweaked the HW IO volume for something and forgot, but sometimes, when i checked my clip gain it was what i set it to, yet the volume was suddenly higher than when i had it open previously - i'd say 2-3 times over the last 2 years. i don't use mix recall (often), but i did notice several of my new projects since the last update or two now create a mix recall folder.
  8. the Voice products are very nice - one thing i've noted from different people using them: if you do not have good pitch, turn off the harmonizer 🙂 but i've some friends using these who do have great pitch and, man oh man, the harmonies are amazing. and as John noted - blend in the pitch corrections. one addition i forgot to mention wrt levels - the EXPOSE 2 app is very nice (not sure if it's still free or not); you can drag a bunch of tracks onto it and get a rough estimate of the levels, eq, and LR balance etc., compared to a "standard" for streaming, CD, etc., plus text notes about the track, and you can export the full set of results into a text file. EXPOSE 2 | Audio Quality Control Application (masteringthemix.com) https://www.masteringthemix.com/products/expose-2
  9. yeah, as a general rule - low latency during live recording with monitoring via the DAW (typically not necessary if using virtual instruments, or if you're using direct monitoring from the IO), and high latency when mixing/mastering to allow maximum resources for the processing. see the latency math sketch after this list for rough numbers.
  10. not necessarily an "external" drive but typically a separate one to avoid OS and other activities on the drive causing issues with the audio streaming on/off the disk. seems to be less of an issue with SSD (imho). my content disk, OS disk, and recording project disks are three separate SSD drives with only the OS being internal and the other two attached to my USB3 ports.
  11. no, the other 20 just like it... where RTFM means "real time fun machine" and "i don't need to read directions" or "don't tell me what to do"....
  12. it looks like you have stereo tracks and a stereo master buss, and the waveforms look to be stereo - so maybe your conversion to "mono" didn't actually make the tracks mono, but left them as just one side of the stereo or as split mono? (see the mono fold-down sketch after this list.)
  13. it would be interesting to get a list of the words that are offensive in Washington state...
  14. the making of his So album - where the producer/engineer got him to use cymbals, resulting in another massive album... one thing i found cool - even in his new studio - using an SM57 on a flex neck for recording, and a boombox for monitoring his vocals... scary good skills there.
  15. ASIO4ALL is a wrapper for the WDM drivers - so it might work, as you can adjust the ASIO4ALL buffer size. EXOVERB, per its user guide, is restricted to: supported sample rates by installation: 44.1 kHz, 48 kHz and 96 kHz; downloadable reverb files for additional support: 88.2 kHz, 176.4 kHz and 192 kHz; supported buffer sizes: 32, 64, 128, 256, 512 and 1024. i use ASIO4ALL when an older softsynth needs even-boundary buffers (which EXOVERB requires; see the buffer-size sketch after this list). shared WASAPI is itself limited to the "best" buffer size in a shared usage, whereas ASIO, exclusive WASAPI, etc., being full control, have the option to give the active app control to adjust the buffer. so not a "bug" in either product, just an unfortunate intersection of restrictions due to your IO HW.
  16. one method is to set up a volume leveler (like concrete limiter etc.) to set max levels and even out the dynamic range (reasonably), and then a measuring tool to set the overall levels. for example: a limiter (or maximizer) (i use Ozone) plus a metering solution which can report on the overall loudness as well as peaks and dynamic range (i use Insight, but there are YouLean and other metering products) to set the overall level - in this case the "integrated loudness" (measured over the whole track), short term (longer than peaks, maybe several seconds' worth), momentary loudness (peaks, generally), and the dynamic range (loudest vs softest sounds). so pick a level - for backing tracks in live situations, say -11 LUFS (loudness units relative to full scale, roughly dB) and a dynamic range of say 3-5 dB would (imho) give you enough loudness for those tracks and not have them disappear during lower levels (your volume control). your material will dictate the dynamic range, but for live popular forms of music you probably want it flatter rather than wider (whereas in a recording you might want it more dramatic). see the rough loudness sketch after this list. one other aspect - if the source material is WAV, maybe export as WAV and play as WAV. MP3 (and other lossy formats) can change the mix somewhat because of the masking algorithms used to compress the file, so an MP3 version could have unexpected changes due to that effect, whereas the WAV file tends to preserve the mix "more correctly".
  17. maybe it's something that WASAPI doesn't like about the virtualized IO using JACK? https://jackaudio.org/ i've seen a number of products which require an "even" boundary on the samples in order to operate (e.g. a 441-sample buffer does not work for a specific VST, but 512 does).
  18. it's important to always ask ourselves: what kind of a world are we going to leave for Keith Richards?
  19. dear pay-for-software-company admin, please reset my password and send to my new email: joe.hakker@darkweb.net as i'm desperate to order new software - here's my licence # to prove i'm authentic. this way i can get to my credit card and other personal information like shipping address in order to buy lots of your great products for my, erm, friends. best regards and love, joe
  20. divide and conquer. always try a new blank project when troubleshooting problems, to determine whether the issue is system related or project related....
  21. just thinking out loud - the region fx state of takes embedded in sequential comp takes may be handled differently than at the composite level. so one short-term fix would be to apply the region fx at the composite level rather than at the take level. understood: separate takes = separate physical files, and comp takes are embedded as appended audio.
  22. unless i'm mistaken, like a database, new takes for a given clip are appended physically to a given track's wav files, and the "clips"/"takes" are the logical representations - so pointers are needed to create the composite view. (see the take-pointer sketch after this list.)
  23. is the volume control on the IO unit up? sometimes when i switch, i turn down the master volume on the IO i'm not using, and then of course sometimes forget to turn it back up. so i suggest getting into "pilot" mode of checking settings: IO volume up - check, sound id profile correct for the headphone type - check, master fader volume up - check, H/W fader (usually hidden) up - check. and after several years of this habit i seldom ever hit a point where no audio happens due to system settings. still happens when i forget to unmute a track here and there though... lol.
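
For item 9, here is a minimal sketch of the buffer-size vs latency arithmetic. The buffer sizes and sample rate are illustrative assumptions only; real interfaces add driver and converter overhead on top of this.

```python
# Rough latency contributed by the audio buffer alone (driver/converter
# overhead varies by interface and is ignored here).
def buffer_latency_ms(buffer_frames: int, sample_rate: int) -> float:
    """One-way latency of a single buffer, in milliseconds."""
    return buffer_frames / sample_rate * 1000.0

for frames in (64, 256, 1024):
    one_way = buffer_latency_ms(frames, 48000)
    print(f"{frames:>5} frames @ 48 kHz ~ {one_way:5.2f} ms one-way, "
          f"~{2 * one_way:5.2f} ms round trip")

# Small buffers (e.g. 64 frames) keep live monitoring comfortable;
# large buffers (e.g. 1024 frames) free up CPU headroom for mixing.
```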
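For item 12, a rough sketch of a true stereo-to-mono fold-down (averaging both channels), as opposed to keeping only one side of a stereo file. The numpy/soundfile packages and the file names are assumptions for illustration, not anything Cakewalk does internally.

```python
import numpy as np
import soundfile as sf  # assumed available; any WAV reader would do

# read a stereo file: data has shape (frames, channels)
data, rate = sf.read("stereo_mix.wav")  # hypothetical file name

if data.ndim == 2 and data.shape[1] == 2:
    # true mono fold-down: average left and right
    mono = data.mean(axis=1)
else:
    # already single-channel (or a "split mono" file) - nothing to fold down
    mono = data

sf.write("mono_mix.wav", mono, rate)
```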
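For items 15 and 17, a tiny sketch of the "even boundary" idea: some plugins only accept power-of-two buffer sizes, so a 441-frame buffer fails where 512 works. The accepted sizes are the ones quoted from the EXOVERB user guide in item 15; the function names are mine.

```python
# buffer sizes documented in the EXOVERB user guide (quoted in item 15)
SUPPORTED_BUFFERS = {32, 64, 128, 256, 512, 1024}

def is_power_of_two(n: int) -> bool:
    return n > 0 and (n & (n - 1)) == 0

for frames in (441, 512, 1024):
    ok = is_power_of_two(frames) and frames in SUPPORTED_BUFFERS
    print(f"buffer of {frames} frames: {'accepted' if ok else 'rejected'}")

# 441 is rejected (not a power of two); 512 and 1024 pass,
# matching the behavior described in item 17.
```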
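For item 16, a hedged sketch of measuring integrated loudness and nudging a track toward the suggested -11 LUFS target. It assumes the pyloudnorm and soundfile packages and a hypothetical file name; a real limiter/maximizer (Ozone, etc.) is still needed to control peaks and dynamic range, since this only applies a static gain.

```python
import soundfile as sf          # assumed WAV reader
import pyloudnorm as pyln       # BS.1770 loudness meter, assumed installed

TARGET_LUFS = -11.0             # backing-track level suggested in item 16

data, rate = sf.read("backing_track.wav")    # hypothetical file

meter = pyln.Meter(rate)                     # BS.1770 meter
loudness = meter.integrated_loudness(data)   # integrated loudness in LUFS
print(f"measured: {loudness:.1f} LUFS, gain needed: {TARGET_LUFS - loudness:+.1f} dB")

# apply the static gain toward the target; a limiter should catch any new peaks
leveled = pyln.normalize.loudness(data, loudness, TARGET_LUFS)
sf.write("backing_track_leveled.wav", leveled, rate)
```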
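For items 21 and 22, a purely speculative sketch of the "pointers into an appended wav file" idea: comp takes stored as offsets into one growing audio blob, with the composite view assembled by following those pointers. This does not reflect Cakewalk's actual file format; it just illustrates the database-like layout described above.

```python
from dataclasses import dataclass, field

@dataclass
class Take:
    # a take is a pointer into the track's audio data, not a copy of the audio
    start_frame: int          # where this take begins in the appended data
    length: int               # number of frames
    region_fx: bool = False   # whether region fx is applied at the take level

@dataclass
class TrackAudio:
    frames: list = field(default_factory=list)  # stands in for the track's wav data
    takes: list = field(default_factory=list)   # logical takes, in recording order

    def record_take(self, new_audio):
        # new takes are appended physically; only a pointer (Take) is stored
        start = len(self.frames)
        self.frames.extend(new_audio)
        self.takes.append(Take(start, len(new_audio)))

    def composite(self, selected):
        # the comp view follows the pointers instead of copying audio around
        out = []
        for i in selected:
            t = self.takes[i]
            out.extend(self.frames[t.start_frame:t.start_frame + t.length])
        return out
```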