Everything posted by mettelus
-
Open source licensing for ASIO at last!
mettelus replied to Starship Krupa's topic in Computer Systems
That is rather interesting, as it is focused on OBS. PreSonus actually wrote proprietary OBS ASIO drivers for their hardware (and has recommended OBS, though never officially sponsored them), so it is nice to see that functionality will be open to everyone now. This may even mean that Realtek's ASIO drivers will actually work in the future!
-
The positive flip side to this is singers who are accused of lip syncing and then pack it up the accuser's pooper. I tend to find those moments far more memorable than people caught cheating... a bit rarer, but definitely more satisfying.
-
I was checking through what Treesha had posted above, and it seems the AI model cannot be appended there either (the scripting looks nearly identical to what Resolve has, so it could be the same code). A way to bypass this (I would still recommend what she posted above for building the AI model) is, if you have multiple recordings of yourself, to dovetail those vocal tracks end-to-end so you have a single (and possibly HUGE) wav file... then send that into the modeler (then walk away or take a nap while it processes); see the sketch below. The more phonemes you feed it, the better the model will be, and if it won't let you append to a model, feed it everything you have in one go. Bear in mind this is also a replacement tool, so you can sing a track with your current voice and then apply the AI model to it. Again, Melodyne can assist greatly both with the current voice track before replacement (in my case, hitting high notes isn't what it used to be) and with post-production after replacement (in the tests I ran, the key was embedded into the AI model, so results may need polishing after replacement).
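Since appending to an existing model is off the table, dovetailing the takes is easy to script. Here is a minimal sketch using only the Python standard library, assuming every take shares the same sample rate, bit depth, and channel count (the folder and file names are hypothetical):

```python
import glob
import wave

# Gather every take; all are assumed to share one format (rate/depth/channels).
takes = sorted(glob.glob("vocal_takes/*.wav"))  # hypothetical folder

with wave.open(takes[0], "rb") as first:
    params = first.getparams()

with wave.open("combined_model_source.wav", "wb") as out:
    out.setparams(params)
    for path in takes:
        with wave.open(path, "rb") as w:
            # The modeler needs one consistent format, so bail out on mismatches.
            if (w.getframerate(), w.getnchannels(), w.getsampwidth()) != (
                    params.framerate, params.nchannels, params.sampwidth):
                raise ValueError(f"{path} does not match the first take's format")
            out.writeframes(w.readframes(w.getnframes()))
```

If the takes have different sample rates, convert them to a common rate first; blindly concatenating mismatched files will shift pitch and speed.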
-
way to find clips that need the "bounce to clip" function?
mettelus replied to LNovik@aol.com's question in Q&A
After you select them with this option, did you then try Bounce to Clip(s)? Even if not visible, they should be selected (even in hidden tracks, though I am not sure about archived ones, as mentioned above). Just be sure not to select anything else between those two operations. Another thing: Region FX cannot be saved in a bundle file. Did you use Melodyne anywhere in the project?
-
This is probably coming soon, as most voice AI is based on actual/adjusted sound models. The closest thing I know of that exists now is the AI vocal modeling/replacement available in DaVinci Resolve Studio 20, but that is designed for dialogue only (at the moment), and there are a few important pieces to it:
- The model audio needs to be as dry as possible. Noise and FX (particularly reverb) should be removed to get the best result.
- The model needs to enunciate properly. For dialogue this is not often an issue, but some singers have a real challenge with this. Similarly, the audio to be replaced needs to be enunciated as well to replace properly.
- The AI modeler is slow, taking roughly 40 minutes to model 2 minutes of vocal.
- Because it is modeling a vocal, the pitch/key of that model is embedded into the model itself, so this needs to be taken into account for the replacement. Although there "are" tools in Resolve for this, Melodyne is a better choice for its precision capabilities.
I posted a little more detail in this post testing it with singing, but the bottom line is that it is targeted at dialogue replacement (not generation from lyrics), and in order for it to accurately map "everything," the model must be exposed to everything (meaning all phonemes, diphthongs, consonants, vowels, etc.... VERY similar to how Realivox Blue was made); a rough coverage check is sketched below. Right now there is no way to "add to" an AI voice model (which would be super helpful), so if your sample only contains a subset of phonemes, getting it to replace sounds it has never sampled may miss the mark. I am assuming the OP is intending to use a "younger voice" (rather than the current one) to generate more/different lyrics here. Now that there are AI modelers available, the ability to parse sung vocals is not that far away, BUT... there are very significant and serious copyright implications, since the modeler has no clue who the subject is... anyone whose singing voice has been recorded (even if long since dead) could be modeled. That aspect alone can be a big stumbling block for release, since there is no way to differentiate someone who wants to model their own voice from someone who wants to model someone else's without permission. A way to ensure this for someone capturing their voice "right now" would be to have them interactively train the AI modeler, very similar to the training Dragon NaturallySpeaking does for its voice-to-text functionality.
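On the "must be exposed to everything" point, a rough way to gauge coverage is to run your training lyrics against a pronouncing dictionary and see which ARPAbet phonemes never appear. A minimal sketch, assuming the NLTK package with its CMU Pronouncing Dictionary downloaded (the transcript file name is hypothetical):

```python
import re
import nltk

nltk.download("cmudict", quiet=True)  # one-time fetch of the CMU dictionary
cmu = nltk.corpus.cmudict.dict()

# The 39 ARPAbet phonemes used by the CMU dictionary (stress markers dropped).
INVENTORY = set(
    "AA AE AH AO AW AY B CH D DH EH ER EY F G HH IH IY JH K L M N NG "
    "OW OY P R S SH T TH UH UW V W Y Z ZH".split()
)

text = open("training_lyrics.txt").read()  # hypothetical transcript
words = re.findall(r"[a-z']+", text.lower())

covered = set()
for w in words:
    for pron in cmu.get(w, []):
        covered.update(p.rstrip("012") for p in pron)  # strip stress digits

missing = INVENTORY - covered
print(f"Covered {len(covered)}/{len(INVENTORY)} phonemes; missing: {sorted(missing)}")
```

If anything shows up as missing, recording a few extra lines containing those sounds should close the gap before you feed the modeler.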
-
How to change whole project tempo and stretch audio tracks to fit?
mettelus replied to saxon1066's topic in Cakewalk Sonar
Melodyne can assist with this, as tempo adjustments can be mirrored into the project's tempo track via ARA2. The hurdle lies in adjusting clips that span tempo changes (even Groove clips and REX clips); there is no way to assign audio clips to "fit to project tempo" and have the DAW do ALL of the grunt work, a feature that exists seamlessly elsewhere. Sonar uses the Melodyne tempo algorithm to create tempo tracks, but the ability to have "everything and anything" adapt to tempo track adjustments would be a good feature request. The arithmetic behind the stretch itself is sketched below.
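For reference, the stretch factor is just the ratio of the tempos. A minimal sketch (the BPM values and clip length are hypothetical):

```python
def stretch_ratio(old_bpm: float, new_bpm: float) -> float:
    # A clip must be stretched by old/new so it spans the same number of
    # beats at the new tempo (slowing down means a ratio greater than 1).
    return old_bpm / new_bpm

clip_seconds = 30.0  # hypothetical clip length at the original tempo
print(clip_seconds * stretch_ratio(120.0, 100.0))  # -> 36.0 seconds at 100 BPM
```

The DAW-side pain is that every clip spanning a tempo change needs its own piecewise version of this calculation, which is exactly the grunt work mentioned above.
-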
How to change whole project tempo and stretch audio tracks to fit?
mettelus replied to saxon1066's topic in Cakewalk Sonar
Depending on the number of tracks (and clips) involved, this can become a fairly convoluted and often frustrating process. If you are waist-deep in a project, another alternative is to finish the mix and do tempo refinement on a mixdown track (you can pull that into another project), so you are only working with one combined audio track, which is far simpler and more straightforward. If you suddenly decide to tweak tempos again, working with one audio track (even a mixdown used as a demo) avoids going through the convoluted process repeatedly.
-
Thanks Alexey. I was still curious about this, so I started digging into the old forums (all the posts with detail are 10-15 years old). This post ([UPDATED] Configuring Your A-Pro to Work With/Without ACT) seems to have the most detail of the ones I saw, with an interesting comment near the top on setting up a control map that relates to the comment above. The comment he made in that post was "Maps 1 thru 18 are fair game. Sonar/ACT apparently uses map 19, so it might be best to avoid that one." He also made a comment (possibly in another post I cannot seem to find) that enabling ACT reset the control map to zero(??). The other two threads he commented in are "Getting a Controller map for A-500" and "Anyone Use an A300/500/500 Pro as Sonar Control Surface?" in case the OP is interested, but my takeaway from those was to avoid using Controller Map 19, as that one seems to be reserved.
-
Can Drum Replacer work with MIDI notes? *SOLVED*
mettelus replied to Gary Lehmann's topic in Instruments & Effects
This is what you want to do. MIDI has no audio (it is simply note data), so there is nothing for Drum Replacer to process. In essence, Drum Replacer lets you fine-tune the audio you want to replace, converts it to MIDI hits, and replaces (or blends) that audio with another sound set (with a limited selection to choose from). Since you already have the MIDI, just point it at another drum virtual instrument (VSTi) of your choice and select/edit the kit pieces as needed.
-
Bass tracks do not have as clean a sound as other instrument tracks.
mettelus replied to Gerry 1943's topic in Cakewalk Sonar
This came to mind for me as well. Does this also happen in an export played on another system? As mentioned above, if you could post a short snippet (8-10 seconds) of the situation you are experiencing, it would make things far simpler on our end and help us help you.
-
That is odd. I typically just use Control Map 1, but I do not get carried away with anything inside the DAW. Do you have control surfaces assigned in Preferences->MIDI->Control Surfaces? The only other thing that comes to mind is File->Initialization File (since this happens on every Sonar launch). The A-Pro Editor actually syncs to the keyboard via MIDI, and while changing the control map does not appear to be assignable (I thought it was hardwired only), it may very well be included in any ACT data if you are using that as well. Both @azslow3 and @msmcleod may have better insight on this, as they both understand Mackie Control and ACT far better than I do.
-
Quick question: are you using the A-PRO Editor to set up the 500? It is more convenient to use that for mapping than working inside a DAW (for most functions), and the default map (which the keyboard will use) is toggled either on the keyboard itself or with the A-PRO Editor. Another aspect is that these keyboards have two MIDI outputs: the first sends generic MIDI controller data (keys, pitch bend), while the other sends all of the control functions (knobs, sliders, buttons) from the keyboard to the DAW. Do you have both of the MIDI inputs enabled in Sonar preferences? A quick way to check what the OS sees is sketched below.
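As a sanity check outside the DAW, you can list what the OS is actually exposing; if only one A-PRO port shows up, Sonar will never see the control data. A minimal sketch, assuming the third-party mido package (with a backend such as python-rtmidi) is installed:

```python
import mido

# Both A-PRO ports should appear here: one for the keys/pitch bend and one
# for the control functions (the exact port names vary by OS and driver).
for name in mido.get_input_names():
    print(name)
```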
-
Jim makes a good point. In addition, some nuts are slotted for a preferred string gauge, so a bigger string can get pinched if the slots are too deep for the gauge you are using. TUSQ XL nuts are one of the best replacements out there (they have Teflon in them) and IIRC are designed to accept 9-11 string sets. StewMac sells them a little cheaper, but I am not sure how the shipping compares. If you get outside their intended gauge range (the TUSQ site has a lot of info), a luthier may be needed to open up the slots.
-
You need to drill into the "Imported Audio Files" link from the one referenced above, but another thing I missed was "Copies are always made if the imported audio does not match the current project's sampling rate" (in addition to edits). This is to reduce CPU load, so the program can simply run from a reference file rather than performing sample rate conversion (SRC) continuously; the rule boils down to the check sketched below.
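Here is a minimal sketch of that quoted rule using only the Python standard library (the project rate and file name are hypothetical):

```python
import wave

PROJECT_RATE = 44100  # hypothetical project sample rate

def needs_copy(path: str) -> bool:
    # Mirrors the quoted rule: a file whose rate differs from the project's
    # gets copied/converted once rather than sample-rate converted on the fly.
    with wave.open(path, "rb") as w:
        return w.getframerate() != PROJECT_RATE

print(needs_copy("imported_take.wav"))  # hypothetical file
```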
-
Also see this thread. Cakewalk defaults to per-project audio folders, and there is no way to disable that anymore. If you do any destructive editing (bounces, etc.), audio that shows up in the project audio folder may actually be an edit and not an import, so be careful about blindly deleting content from that folder. Pointers used to be a nightmare back when the global audio folder was used. A more common practice is to work on a project, then remove audio that is no longer used by doing a Save As... to a new folder with "Copy audio with project" checked (only audio actually in use by that cwp will copy to the new folder). Alternatively, you can simply move your projects folder (in Preferences->File Locations) to where you are importing from. That keeps data off the C drive completely, and it is common to have projects located on another drive.
-
This is probably coming swifter than most expect. With advances in holography and AI-generated "artists"/CGI, it won't be long before those become more prevalent. As with the Milli Vanilli example, the day will come when there may be no artists involved in an act at all, and everything in a performance and behind the scenes can be created with AI (just the business machine running the show). Those artists will never age and their voices will never change, and they can even "perform" at multiple venues at once (almost like a movie release). People will get tired of seeing the same artist ad infinitum, but behind the scenes it will be a simple change to make somebody new.

If you have not already read about "Tilly Norwood" (an AI-generated actor), this is already causing an uproar and she has not even been used in a film yet, but the concept is not very far beyond the same technology that brought "Thanos" to life; it is just that no actor is needed at all. Even the Sphere is doing a 3D rendition of the Wizard of Oz now, where AI interpolates motion from the original picture to expand the action beyond the original frame into a 3D format.

Positive flip side though... a woman in front of me the other day was buying "The Game of Life," so I said, "Wow, I have not seen one of those in years." The version I have tucked away is from 1960, so she joked it is pretty much the same, just the jobs have changed. Then she mentioned she likes to have real interaction with her son, so I told her to stay that course... social media is no substitute for human interaction, and it saddens me to see people fixated on their phones to avoid it. People will literally pull out their phone to avoid saying hello to someone, and we wonder why social skills are lacking.
-
+1. So many concerts are based more on the visual, so if there is much exertion involved at all (especially with vocals), it will affect the performance. Go jog a mile and sing a song while doing so... see how long the performance sounds good. Examples of this abound. The funniest one I ever saw was Justin Bieber yakking his guts out on the stage while the song kept going with his voice intact. It only seems to get attention when someone makes an issue of it or a technical glitch happens (the technical glitches make them obvious), but many don't seem to care.
-
Welcome to the forum. The most obvious question: is the track you are trying to record armed? When you hit "Record" in the transport, only tracks that are armed (via the record button on the track) will actually record; the rest simply play.
-
Google "3D spectrogram." iZotope's Insight (included with Ozone bundles) is the one I use, and I believe T-RackS has a 3D spectrogram as well, but I am not sure. There are probably others too. Although 3D can look cool, precision editing is better achieved in the 2D format (the 3D vertical height becomes color intensity in 2D), with the utmost importance on the resolution of the spectrogram... as long as the resolution is high enough, harmonics stand out readily in 2D and are easy to select/edit (see the sketch below for how FFT size drives that resolution). The other advantage of SpectraLayers and Ozone is that you can separate stems first (not perfect, but workable), which allows you to isolate harmonics to the instrument you want to focus on (if the wave file is already mixed). You can also do this as a two-step process (there are free stem separators out there)... separate the stems, edit via spectrogram, then re-assemble in the DAW. A caution with stem separation though: it often leaves residual frequencies in one stem that belong to another, so isolating a stem to be "pristine" may require a lot more effort than you bargained for.
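To make the resolution point concrete, here is a minimal sketch of rendering a 2D spectrogram, assuming numpy, scipy, and matplotlib are installed (the file name is hypothetical). The FFT size (nperseg) sets the frequency resolution: roughly sample_rate / nperseg Hz per bin, which is what makes harmonics resolve into crisp horizontal lines.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import signal
from scipy.io import wavfile

rate, data = wavfile.read("mix.wav")  # hypothetical file
if data.ndim > 1:
    data = data.mean(axis=1)  # fold to mono for analysis

# 4096-point FFT at 44.1 kHz -> ~10.8 Hz per bin; plenty to separate harmonics.
f, t, Sxx = signal.spectrogram(data, fs=rate, nperseg=4096, noverlap=3072)
plt.pcolormesh(t, f, 10 * np.log10(Sxx + 1e-12), shading="gouraud")
plt.xlabel("Time [s]")
plt.ylabel("Frequency [Hz]")
plt.show()
```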
-
Bass tracks do not have as clean a sound as other instrument tracks.
mettelus replied to Gerry 1943's topic in Cakewalk Sonar
+1 to the above, we definitely need more background to understand this better. Also, are you referring to the bass track when soloed or when mixing? An audio snippet of whichever case would help even more than trying to describe it in text.
-
Roger that. The term for that one is "spectrogram," which gives a visual representation of frequency intensity over time and creates a "track view." Spectrum analyzers (like SPAN) are essentially what you get if you turn the "now time" into a knife and view the spectrogram's cross-section as the play head moves (only the now-time slice is displayed). They each have their purpose in one's tool kit.

Regarding spectrograms in particular, they are often used in post-production for things like noise removal, surgical tweaks, and other editing tasks. What differentiates them are the tools included, but most have selection tools that work with the spectrogram as if it were a picture: lasso/selection, erasers, and "heal" tools that meld content uniformly (essentially the "blur" brush in a picture app). Depending on what you want to do with a spectrogram app, you may need to get a higher (i.e., not free) version, with Steinberg's SpectraLayers and iZotope's RX being the most comprehensive IMO. For noise removal in particular, the most effective apps are those that can capture a "noise print" from an area that is supposed to be silent (often the lead-in/fade-out portion of an audio track) and then remove that noise print from the rest of the track; a bare-bones version of that idea is sketched below.
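For the curious, the noise-print trick is essentially spectral subtraction. A very rough sketch of the concept, assuming numpy/scipy, a mono file, and a silent 0.5 s lead-in (file names are hypothetical; real tools do far more smoothing to avoid the "musical noise" artifacts this naive version produces):

```python
import numpy as np
from scipy import signal
from scipy.io import wavfile

rate, x = wavfile.read("vocal.wav")  # hypothetical file, assumed mono
x = x.astype(np.float64)

f, t, X = signal.stft(x, fs=rate, nperseg=2048)
mag, phase = np.abs(X), np.angle(X)

# Noise print: average magnitude spectrum over the (assumed silent) lead-in.
noise_print = mag[:, t < 0.5].mean(axis=1, keepdims=True)

# Subtract the print everywhere and clamp so no bin goes negative.
clean = np.maximum(mag - noise_print, 0.0) * np.exp(1j * phase)
_, y = signal.istft(clean, fs=rate, nperseg=2048)
wavfile.write("vocal_denoised.wav", rate, y.astype(np.int16))
```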
-
The unwound strings in particular (the ones you say are de-tuning) have less friction against each other in the post wraps, so I have always wrapped at least 4 turns on those strings. The purpose of the kink in the "luthier's knot" is to provide additional friction against the string unwinding (from slippage) on the post. Even when changing strings, I loosen them just enough so I can "unwrap" them fully by pulling vertically at the post; then a slight tug will unbend that kink and pull it out with ease. The more wraps, the more friction... BB King was notorious for wrapping the entire string around the posts.

It is the same principle with ropes, especially in things like rappelling... without a wrap of rope behind you to add friction, there is no way on earth you can control your body weight with one hand for five minutes with any degree of control. Even when ascending ropes, a wrap is commonly used on one leg so you can pinch the rope on top of that foot with the free foot with very little force and rest your arms (even people with little upper body strength can manage this method). People zooming up a rope with just upper body strength like the Man in Black at the Cliffs of Insanity is not common at all, but no one messes with the Man in Black (it's inconceivable)!!!

Then again (back to the OP), locking tuners are a means to keep the string from slipping on the post... the "King Kong" version of the luthier's-knot kink, so fewer wraps are needed and string changes are quicker. If you change strings often and don't have a string winder, locking tuners will certainly pay for themselves in time saved.
-
Yeah, if you have a Hotmail account (free or not), you have OneDrive... due to the timing of this, a pop-up sprang up automatically last week asking me if I wanted to sign up for ESU for free. Simple things like using Bing as your default search engine will rack up reward points (the other free method)... having your MS account linked to your computer makes any Bing searches rack up points automatically just by being logged into Windows.
-
Shame on you! In all seriousness, there are a lot of "hidden" features built into the free SPAN that not many use or even know about. I cheat even more to an extent by Googling "Can [this app] do [this detailed task]" quite often. Even simple things like overlapping tracks to visually see frequency collisions, the free version can do just fine. I posted a gif analyzing a phase switch I installed in my guitar on the old forums here (hard to believe that was over 10 years ago already). I inserted the SPAN gif from that post below.
-
Real Time Support Needs to be available for dynamic problem solving
mettelus replied to JakeJordan's topic in Cakewalk Sonar
Remember when Roland used to do their 2-hour interactive webinars? I actually just tried Googling to see if they are still around, but cannot find any. I forget now whether they were ever posted to YT or stayed resident on Roland's site.
