Everything posted by mettelus

  1. Quick note embedded in the bottom of the KVR voting email from yesterday: "We'd also like you to be the first to know that, barring any problems in testing, we're releasing Scaler 3.2 on Wednesday November 12th. It's a major upgrade with lots of new features, functions and improvements that takes Scaler 3 to another level - we can't wait to share!"
  2. I am curious along with the OP here. As mentioned above, Aux Tracks/Patch Points were specifically introduced for this purpose, but signals are traditionally recorded dry so that they can be adjusted dynamically in post-production. However, certain features are more static in nature, and those are embedded in some interfaces. PreSonus interfaces (via Universal Control) include their FAT Channel (HPF, gate/expander, compressor, EQ, limiter, and reverb), which is more universal and sometimes desirable to bake into the recorded track (an HPF is often used on anything that is not bass or kick). The reason dry is the default is that it keeps options open for composition/post-production... even when recording guitar from hardware it is very common to also record the dry DI output so it can be re-amped or processed separately as things flesh themselves out. Also, some FX do not take kindly at all to baked-in time-based effects (delays, reverbs, etc.), so if you bake those in you could easily buy yourself into re-tracking those parts... FX (and things like Melodyne) do not see those tails as delays/chorus/reverbs, but as signal, so they will blindly process them as well (rarely desired).
  3. I bold-faced terms below so you can Google and read up on them. Okay, now I have a clearer picture. What you are seeking is a way to visually spot frequency collisions, and then apply mirror EQ techniques to accommodate them. While SPAN will identify frequency collisions, it is a little clunky, so I use MeldaProduction's MMultiAnalyzer for this (it goes on sale for 50% off regularly, especially Black Friday, and you can get $10 more off your first purchase if you sign up for Melda's newsletter). However... having said that, I would instead recommend a process called ducking audio for what you are trying to achieve. It is the same process radio and documentary TV have used for years. Basically, you put a compressor that will accept a side-chain input (Sonitus is one of them) on your main track, and feed the vocal track into the side-chain (via a send, post fader). That makes the compressor's threshold trigger from the send (not the track it is sitting on), so when the vocal is speaking you can compress (aka "duck") the backing track 2-3dB so that the vocal is clear to the listener. A psycho-acoustic phenomenon called frequency masking is your friend here... that is where sounds at the same frequency are drowned out by the louder one (and it only needs to be 2-3dB, nothing egregious). This video is more an FYI, but covers a lot of ground on mixing techniques in 10 minutes, especially methods to identify collisions with a parametric EQ ("frequency microscope") and common mirror EQ techniques (which you can do with almost any parametric EQ). Again, in your specific case I would recommend ducking instead (it is more universal, and you have one track you want heard without worrying about "making it fit," as it were).
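The ducking idea above can be sketched in code. This is a toy Python/NumPy version, not the Sonitus plugin's actual algorithm: an envelope follower tracks the side-chain (vocal) level, and the backing track is attenuated by a few dB whenever the vocal is present. All parameter names and defaults here are illustrative.

```python
import numpy as np

def duck(backing, vocal, sr, threshold_db=-40.0, depth_db=3.0,
         attack_ms=10.0, release_ms=200.0):
    """Duck `backing` by up to `depth_db` whenever `vocal` exceeds the
    threshold -- a toy sketch of side-chain ducking, not any plugin's code."""
    # One-pole envelope follower on the side-chain (vocal) signal.
    atk = np.exp(-1.0 / (sr * attack_ms / 1000.0))
    rel = np.exp(-1.0 / (sr * release_ms / 1000.0))
    env = np.zeros_like(vocal)
    level = 0.0
    for i, x in enumerate(np.abs(vocal)):
        coeff = atk if x > level else rel
        level = coeff * level + (1.0 - coeff) * x
        env[i] = level
    # Full gain normally; reduced by depth_db while the vocal is present.
    threshold = 10 ** (threshold_db / 20.0)
    duck_gain = 10 ** (-depth_db / 20.0)  # -3 dB is roughly a 0.71x gain
    gain = np.where(env > threshold, duck_gain, 1.0)
    return backing * gain
```

A real compressor applies the gain change smoothly rather than as the hard switch used here, but the 2-3dB figure from the post is visible directly in `duck_gain`.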
  4. ^^^^ Check and see if the "Allow Only One Open Project at a Time" is checked in your preferences from the image above.
  5. What FX do you have active in the project? Some FX don't take kindly to small buffers or can cause issues if they have look-ahead functionality, unbounced Melodyne Region FX being one of them, with mastering plugins or convolution reverbs being others (depending on settings). When tracking/recording, low buffers (32-128) with most FX bypassed will yield good enough latency to record. During mixing/post-production, buffers may need to be increased (512, 1024, 2048) based on what FX are in play at that point. If you are trying to record into a project that has already seen some mixing work, the global FX bypass will "usually" fit the bill to allow the lower buffers needed, but again, unbounced Region FX are something to be aware of as well. Because Melodyne separation data is saved in the CWP file, a good indicator that you have unbounced Melodyne edits is a CWP file larger than typical (in the MB+ range). If Melodyne is active, higher buffers are needed to accommodate the ARA functionality.
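The trade-off behind those buffer numbers is simple arithmetic: each buffer of audio contributes `buffer_size / sample_rate` of delay, which is why low buffers are wanted for tracking and high buffers are fine for mixing. A quick sketch (the function name is mine, not from any DAW):

```python
def buffer_latency_ms(buffer_size, sample_rate):
    """One-way latency contributed by a single audio buffer, in ms."""
    return 1000.0 * buffer_size / sample_rate

# Tracking-friendly vs. mixing-friendly settings at 44.1 kHz:
for size in (64, 128, 512, 1024, 2048):
    print(f"{size:5d} samples -> {buffer_latency_ms(size, 44100):.1f} ms")
```

At 44.1kHz, 64 samples is about 1.5ms per buffer while 2048 samples is about 46ms; the total round-trip through an interface is larger (input buffer, output buffer, driver and converter overhead), which is why 2048 feels unplayable for tracking but gives the CPU breathing room for heavy FX.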
  6. I am not familiar with the new layout, but I assume the depth is the threshold (no idea, hopefully someone can chime in with the real answer)? That is simply what causes the gate to open and allow the signal to pass, so it will vary depending on the level of the signal coming in. I am not sure what "pops and crackling" entails, as that seems more like a buffering issue (not sure), but what does stand out is the release time being so low for toms. A tom needs to ring out, or it will get chopped off (is that what you are hearing?). This is a nice "cheat sheet" I just found for gating drums, and without numbers on that "Release" knob I am assuming it is fairly low in value, which will make it chop off the tom unless the input gain is jacked up so the gate is basically always open.
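To see why a short release chops a tom, here is a toy gate in Python/NumPy (parameter names are illustrative, not the plugin's actual labels): it opens instantly when the signal exceeds the threshold and fades out over the release time once the signal drops below it. A decaying tom hit crosses below the threshold while it is still audibly ringing, so a short release kills the tail.

```python
import numpy as np

def gate(signal, sr, threshold=0.1, release_ms=100.0):
    """Toy downward gate: passes audio while above the threshold,
    then fades to silence over `release_ms` once it drops below."""
    release_samples = max(1, int(sr * release_ms / 1000.0))
    step = 1.0 / release_samples
    out = np.zeros_like(signal)
    gain = 0.0
    for i, x in enumerate(signal):
        if abs(x) > threshold:
            gain = 1.0                    # open instantly on loud hits
        else:
            gain = max(0.0, gain - step)  # close over the release time
        out[i] = x * gain
    return out
```

Feeding this an exponentially decaying "tom" shows the effect: with a short release the output goes to zero while the tail is still decaying, while a long release lets it ring out.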
  7. I just checked my attachments and it includes PMs... I deleted 3 attachments that were in PMs so that space was recovered (same thread), but then went into the thread and they are still there.
  8. Minimum tech specs (SDXL-Turbo):
     Operating System: Windows 10 or Windows 11 (64-bit, latest updates)
     Processor: Multicore Intel Core (11th Gen or newer) or AMD Ryzen (2 GHz or faster, 64-bit support)
     RAM: 16 GB
     GPU: Integrated or discrete graphics (Intel Iris Xe or newer)
     Hard Drive Space: 15 GB available (models downloaded separately)
     Monitor Resolution: 1280 x 800
     Side note: It seems Intel is actually partnered with Distinct AI... I came across an article published by Intel on them while searching.
  9. In replying on my cell the bottom of this box says 4.88MB max (total and file). Do you have other attachments taking up the missing space? (Reports of shrinking appendages are going to scare off forum members (for fear of their members)!)
  10. Yeah, this has taken up the place of folks accidentally hitting O and enabling offset in the past... so many posts on that one until the keybinding was removed. A red halo effect around the entire GUI like when you are near death in a video game would be obvious too!
  11. Interesting. Render FX doesn't even show on Distinct AI's product page (??), but it looks like a beefed-up version of Vision FX from the teaser video. Their AI generation runs on the local system, so it can get intense on your GPU. Between the last offering and this one, they also released GPU models to make generation faster and more efficient. The upside is no credits or storing ideas on a cloud server, but the downside is that it can be significantly slower depending on the detail involved. Still scratching my head on this Render FX (it is definitely new), as it includes aspect ratio and more features that Vision FX does not (says post-processing, etc. in the teaser)... it is just odd that it is not on their product page. Also of note is the lack of Painter, PaintShop Pro, and VideoStudio... both PaintShop and VideoStudio were suspended ("put on hiatus") in 2023 to focus on Painter, but Painter has not gotten an update since 2023 either. All three of those apps are now 3 years old.
  12. Don't get wrapped around the wheel on this, just focus on describing the issue as much as you can (especially the OP title... you can also edit the OP to change the title as needed). As long as what is occurring is clear, folks will often respond with the terminology you need and even bold face that as @tparker24 did above. Once you have that, it helps you to search things on your own. It is all part of the learning process. What wraps us around the wheel is when OP has so little detail it could be a dozen things... and then the OP never returns!
  13. Even Melodyne is working with the file written to disc during recording. If this is someone you work with fairly frequently, the goal of improving that recorded file (to minimize post-production) would be another focus to consider. If you Google "Apps to Train Voice Pitch," there are several out there, with some more like games to make them more fun. Some of those apps also focus on the unique range of the singer, which is another thing that should be taken into consideration with song choice/composition. As alluded to above, if post-production becomes tedious (with any instrument), a large portion of that falls back onto the performance, and the performer should also be improving their art rather than relying on you for post-production.
  14. I have rarely seen this be effective unless the correction is minuscule. The "real-time" versions tend to add CPU overhead and miss the mark with corrections fairly readily. If you need significant changes, this is better done in post-production with Melodyne, either the whole track in one go with its macros, or surgically for specific issues. Also bear in mind that if this is in an FX rack, it doesn't alter what was saved to disc (it is simply processing it on playback). You would need to bounce that FX to get the file corrected, unless you plan to run the FX forever (not recommended either).
  15. I would recommend doing a cleanup of that project, especially if it is in its final stages. Before you start, do a File->Save As... with a new name (you can put it in the existing project folder). Once that is done, I would do the following:
     • Delete all of the archived tracks. If you are not using them, get rid of them.
     • Trim off the right end of the project (unless your export end really is 321 bars?). Some of the tracks appear to have no data but still go on (Tracks 5, 6, and 9 from the OP). If it is specific to only certain tracks, you can split those and delete everything to the right, especially if they are truly empty... is that automation data in those?
     • Do a deep dive into what FX and automation you are using. Some FX are not CPU friendly at all, but unfortunately Cakewalk doesn't give you a list of FX running in the project and the load/lag associated with them. As this is happening in the same place, I am highly inclined to assume it is an FX choking.
     • When tracking, you want buffers low, but when mixing you want to bump those up in preferences (to 512, 1024, or sometimes 2048). This gives the CPU more time to fill buffers and is required if you have unbounced Melodyne Region FX in the project.
     • Check the event list where the crash is occurring just to see if there is something obvious in there (this is unlikely, but good to check anyway).
     If that will offload at some point, I would also consider creating another project at 44.1/24 and drag/dropping the new CWP from your browser into it (that will populate the new project with the data from the cleaned-up one), then doing another Save to that second new one so it is in 44.1/24 format. Just bear in mind that even with a capable computer, the right recipe will drive it to its knees. Rather than saving/keeping everything (this is better done through saving versions as you go), try to keep things simpler in the working version.
  16. When something is that repeatable, it is often something coming online (FX or instrument) at that point in time. Are there any other tracks that start where the glitch is occurring?
  17. If you go into Preferences->Audio->Sync and Caching (be sure "Advanced" is checked at the bottom of the preferences window), do you have read and write caching enabled? For larger files, enabling those and setting either 512 (I think this is the default) or 1024 may help with disc reads/writes. Is the CPU meter in Sonar showing high usage during that export? Intense FX, especially ones with lookahead buffers, can cause issues in larger projects. A notable example is iZotope's Ozone, which is a mastering plugin and is not intended to be used on individual tracks.
  18. I wanted to touch base on this quickly at a high level for you. You cannot "add detail" to something that wasn't recorded, so larger sample rates will just add a CPU hit, and larger bit depths/word lengths will just add file size, but neither will give you more detail than you had in the project. By default, projects save at 24-bit, but the DAW processes at 32-bit "under the hood" to allow more precision in calculations and prevent clipping inside the DAW. While you can increase detail with upsampling, it is actually the VSTi or FX used that are adding the detail, since they have more data points to play with. Regardless, the sample rate needs to be at least double the highest frequency humans can hear (essentially 20kHz) to prevent audible distortion. 44.1kHz was chosen for CDs, but with video production becoming the mainstay, 48kHz is a better "default" in case you want to use your work in videos at some point (it is easier to downsample than upsample). The debate on higher sample rates has been perpetual, but the reality is most young adults cannot hear above 16kHz, and it just goes downhill from there.
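The "at least double" rule above is the Nyquist criterion, and it is easy to demonstrate numerically: any frequency above half the sample rate folds back (aliases) into the audible band. A short Python/NumPy sketch, sampling a 30kHz tone at 44.1kHz:

```python
import numpy as np

# A 30 kHz tone is above the Nyquist limit of 44.1 kHz audio (22.05 kHz),
# so it folds back to |44100 - 30000| = 14100 Hz after sampling.
sr = 44100
t = np.arange(sr) / sr                 # exactly one second of samples
tone = np.sin(2 * np.pi * 30000 * t)   # 30 kHz -- above Nyquist
spectrum = np.abs(np.fft.rfft(tone))
alias_hz = int(np.argmax(spectrum))    # bin index == Hz for a 1 s signal
print(alias_hz)                        # 14100, not 30000
```

This is exactly the "audible distortion" mentioned above: content the converter cannot represent does not disappear, it lands somewhere else in the spectrum, which is why converters filter above Nyquist before sampling.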
  19. That was the other question I was going to ask earlier: do you have any VSTis kicking in where the glitch occurs? (Your track count scrolled off the screen.) This can happen when new samples are loaded during playback with some VSTis; other VSTis load the sample set into RAM when a bank is selected, so there is no read delay. A couple of things you can try in your situation: Play the song through before the export so samples are already in RAM (this is not always 100%, and adds time to the process). Bounce/freeze just the offending tracks so that they are "just audio" rather than a VSTi processing samples (this should work and saves the time of having to play the track through).
  20. Have you tried it with Global FX bypass enabled to see if that works? With such a high sample rate and bit depth going, your CPU may be struggling as well. If the project is set to 44.1K, why are you exporting to 88.2K? You could also try opening the Windows task manager during an export to check (will probably crash it for sure, but at least you would see what the computer is doing as it goes).
  21. +1, the whole handshaking has gotten out of hand. I cannot even fathom using software in a gig or studio these days if internet connection wasn't 100% reliable (but even then service can go offline with zero notice). The "solution" suggested in one thread to do an offline authorization with a computer that has internet (when there is no internet in the first place) and clients are sitting there relying on you is a real loss of the big picture.
  22. This is really the crux of it all, since the input side is where things get tricky. RealTek has always been more focused on the output side, so my comment was a bit sarcastic, but it is a never-say-never scenario. PreSonus' Revelator mics are actually audio interfaces (with FAT Channel XT included in the mixer), which is why they are so big. What totally shocked me at one point while testing them was that I got some hefty lag (from one ASIO input)... other than that it didn't miss a beat. Turns out I had opened a Cakewalk project at a different sampling rate than everything else, but the mic just kept on chugging along. Once I realized that, I was thinking "that shouldn't work," but I was rather impressed that it did. Melodyne requires massive buffers to run, so the same trick buried Melodyne (it crashed out to desktop)... the workaround was to play that back through the RealTek chip (via speakers) and record it into the microphone. Again, RealTek is geared toward output consistency, but this may open the door on mixing opportunities in the future. As Mark said, don't hold your breath. Caveat to this... I did not do any testing with OBS; I was using PreSonus' Universal Control to do the grunt work and sending one ASIO output to Camtasia.
  23. That is rather interesting, as it is focused on OBS. PreSonus actually wrote proprietary OBS ASIO drivers for their hardware (and has recommended OBS but never officially sponsored them), so it is nice to see that functionality will be open to everyone now. This may mean that RealTek's ASIO drivers will actually work in the future too!
  24. The positive flip side to this is singers accused of lip syncing and then packing it up the accuser's pooper. I tend to find those moments far more memorable than people caught cheating... a bit more rare, but definitely more satisfying.
  25. I was checking through what Treesha had posted above and it seems the AI model cannot be appended there either (the scripting looks pretty identical to what Resolve has, so it could be the same code). A way to bypass this (I would recommend what she posted above to build the AI model), if you have multiple recordings of yourself, is to dovetail those vocal tracks end-to-end so you have a single (and possibly HUGE) wav file... then send that into the modeler (then walk away or take a nap while it processes). The more phonemes you feed it, the better the model will be, and if it won't let you append to a model, feed it everything you have in one go. Bear in mind this is also a replacement tool, so you can sing a track with your current voice and then apply the AI model to it. Again, Melodyne can assist greatly, both for the current voice track (in my case, hitting high notes isn't what it used to be) before replacement, and in post-production after replacement (in tests I ran, the key was embedded into the AI model, so that may need polishing after replacement).
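If you want to script the dovetailing step rather than do it in the DAW, Python's standard-library `wave` module can splice WAV takes end-to-end. This is a generic sketch (file names are hypothetical); all takes must share the same sample rate, bit depth, and channel count, and it assumes plain PCM WAV files.

```python
import wave

def concat_wavs(paths, out_path):
    """Append several PCM WAV takes end-to-end into one file,
    e.g. for feeding a voice modeler a single long recording."""
    with wave.open(out_path, "wb") as out:
        params = None
        for path in paths:
            with wave.open(path, "rb") as src:
                if params is None:
                    # Adopt the first take's format for the output file.
                    params = src.getparams()
                    out.setparams(params)
                elif src.getparams()[:3] != params[:3]:
                    # Channels, sample width, and rate must all match.
                    raise ValueError(f"{path} format differs from first take")
                out.writeframes(src.readframes(src.getnframes()))

# Hypothetical usage:
# concat_wavs(["take1.wav", "take2.wav", "take3.wav"], "all_takes.wav")
```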