Everything posted by mettelus

  1. Like @bluzdog mentioned above, I also spot-checked some files for the two links posted AFTER the OP, and they appear to be duplicates of what is posted in the OP. The advantage to both of those is that you can instantly download all the .gpx files, though only letter by letter (one archive per letter). Since they have to be extracted anyway, you can easily "assemble" the entire database on your own from either of those sites as well. Given this, the only "advantage" to the OP (if the other two sites are true duplicates) is that the assembly part is already done for you. I had half-considered assembling one of those other sites and running a duplicate file finder against it over the weekend; but for $9 it wasn't worth the effort, and I have wasted way more than $9 on things I never use anyway.
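[Editor's note, not part of the original post: since a "duplicate file finder" came up, here is a minimal sketch of that idea in Python, assuming the per-letter archives have already been extracted somewhere. It groups .gpx files by content hash rather than trusting file names; the "extracted_tabs" folder name is a placeholder.]

```python
import hashlib
from collections import defaultdict
from pathlib import Path

def find_duplicates(root: str):
    """Group .gpx files under `root` by content hash; return groups with more than one file."""
    by_hash = defaultdict(list)
    for path in Path(root).rglob("*.gpx"):
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        by_hash[digest].append(path)
    return {h: paths for h, paths in by_hash.items() if len(paths) > 1}

if __name__ == "__main__":
    # "extracted_tabs" is a placeholder for wherever the archives were unpacked
    for digest, paths in find_duplicates("extracted_tabs").items():
        print(f"{len(paths)} copies: {[str(p) for p in paths]}")
```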
  2. If you are also using the M-Track as a general playback device for Windows, the first application to use it may be locking the sample rate and bit depth on you. Check in "Sound Settings" (type it in the Windows search bar) -> Sound Control Panel (upper right of that next screen). Highlight the M-Track, then right-click -> Properties. On the "Advanced" tab, make sure the "Default Format" matches your project, and uncheck both options in the "Exclusive Mode" box (exclusive mode is what locks the device to the first app that touches it). While in there, on the "Enhancements" tab check "Disable all sound effects," and on the "Spatial sound" tab select "Off" in the drop-down... both of these tap into Microsoft's internal mixer and you do not want either for DAW use. Even with the above done, if two apps try to use the device at different settings, the first one will lock it (e.g., your browser). When using a DAW, make sure all other apps are closed first so your DAW can match the M-Track to the project you have open. Once that is done you "can" open your browser... unlike DAWs, browsers default to whatever output format is available, but a DAW must be able to sync the interface to the project (which it cannot do if it is locked out).
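[Editor's note, not part of the original post: as a quick sanity check outside the DAW, Python's sounddevice module can show what sample rate and channel counts Windows is currently reporting for the interface. Matching on the substring "M-Track" is an assumption about how the device enumerates.]

```python
# pip install sounddevice
import sounddevice as sd

# Print every device whose name contains "M-Track" (assumed substring), along with
# the sample rate and channel counts Windows is currently reporting for it.
for index, dev in enumerate(sd.query_devices()):
    if "M-Track" in dev["name"]:
        print(index, dev["name"],
              f'{dev["default_samplerate"]:.0f} Hz',
              f'{dev["max_input_channels"]} in / {dev["max_output_channels"]} out')
```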
  3. Hopefully someone who uses these can chime in on which is preferred, but there are MIDI Mapper VSTs (search for both "MIDI Mapper" and "MIDIMapper," since some only turn up without the space) that can be inserted between a controller/MIDI track and the virtual instrument you are driving to reconfigure the data. Some are very specific about what they can modify, but a handful specifically support CC mapping.
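[Editor's note, not part of the original post: to illustrate what such a mapper does, here is a minimal sketch using Python's mido library that forwards MIDI from one port to another while remapping one CC number to another. The port names and the CC 1 -> CC 11 mapping are assumptions for the example, not a recommendation of a specific VST.]

```python
# pip install mido python-rtmidi
import mido

# Placeholder port names; list the real ones with mido.get_input_names() / get_output_names().
IN_PORT = "My Controller"
OUT_PORT = "My Virtual Port"
CC_MAP = {1: 11}  # e.g., remap mod wheel (CC 1) to expression (CC 11)

with mido.open_input(IN_PORT) as inp, mido.open_output(OUT_PORT) as out:
    for msg in inp:  # blocks and yields incoming messages
        if msg.type == "control_change" and msg.control in CC_MAP:
            msg = msg.copy(control=CC_MAP[msg.control])
        out.send(msg)
```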
  4. Can you post the specs of the computer as it is now? That will help us a lot. If that computer has all spinners in it, you will see a major improvement just from swapping out the O/S drive. Samsung's Magician software will run clones easily, and for an O/S drive 500GB is more than sufficient. The reason I say this is for imaging purposes (the smaller they are, the faster they image/restore). Swapping data drives to SSDs will give an improvement, but not as noticeable as with the O/S drive unless you have massive sample libraries that you use in projects (in which case you will notice it). I have carried a 3 TB HDD forward through 3 computers now, and it is the only HDD in my machine. I primarily use it for internal backup and download storage; spinner drives tend not to fail catastrophically if they do go, and the data is essentially permanent (barring sticking a massive magnet on the disc and/or opening the enclosure). SSDs have gotten far better in this regard, but write cycles specifically degrade longevity (as an O/S drive they will see this... keep that in mind). An SSD's primary advantage is read speed over an HDD (and the O/S drive uses this the most). The only two SSDs I have had to replace (due to noticeable degradation) were both O/S drives.
  5. I cannot even find a manual for that guy. The manufacturer's site circles back to Amazon! Something to check, because I cannot find the manual and you "should" have a hard copy... The USB capabilities of that unit are not explained well at all... it seems you can either record directly to a USB drive or connect it to a PC/Mac. It does not specify the number of channels (I assume this is the stereo mix only); but if there are unit-specific drivers for it, you can install those, connect it to your PC, and see what Sonar tells you is available from it. It is highly likely that it is only a stereo mix output, but you would need to test that to see for yourself. As mentioned above, even higher-level mixers often have a limited number of stereo mix outputs (and often no A/D conversion), so getting that information into a computer is going to need an audio interface capable of the task.
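[Editor's note, not part of the original post: along the same lines, a quick way to see how many input channels the unit actually exposes to Windows, without opening a DAW. This assumes the mixer enumerates like any other audio device under the installed driver.]

```python
# pip install sounddevice
import sounddevice as sd

# List every capture-capable device and how many input channels it reports.
# A mixer that only sends its stereo mix over USB will typically show up as 2 in.
for dev in sd.query_devices():
    if dev["max_input_channels"] > 0:
        print(f'{dev["name"]}: {dev["max_input_channels"]} input channel(s)')
```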
  6. I drilled straight into the Def Leppard folder when I opened it up, so the three referenced above were Animal, Armageddon It, and Bringing on the Heartbreak. That format comment won't be across the board, of course, since it is 100% dependent on who made (or last edited) the file.
  7. One interesting perk to this database came to light immediately. There are quite a few songs that have both gp3 and gp4 versions (and when downloading them for free there is no indication of which format you are getting). Of the three sets I just checked out, the gp3 version is more accurate (by far in one case for the lead). It changes the playing field a bit when you can just double-click files and "boom," they are available to use immediately.
  8. Sorry for the late post on this, but I just noticed it! Guitar Lesson has had a free database in place for years; the only catch is that you need to wait 15 seconds before each file downloads. This is nice if you want to hunt and peck and only want a few. It has also had the option to pay them to download the entire database (link at the top of that page), which is normally $18 but is on sale for $9 through April (ends today). I finally got sick of the hunt-and-peck aspect, so I just got it. The download is 125MB (it unpacks to nearly 1.3GB) and contains over 52K gpx files. Bear in mind these are files for Guitar Pro (but will also open in some other VSTs, like Ample Sound). As an aside, they also have a standing 10% discount on the Guitar Pro app for anyone interested, which I believe can also be used during a Guitar Pro sale event. Again, sorry for the late post on this, but the regular price is $18 anyway, so not a "massive" deal, BUT this site is a nice alternative to Guitar Pro's "My Songbook" subscription (which as far as I can tell is using this site's gpx files anyway, but I am not sure on that one).
  9. Many VSTs have fixed this behavior (it is really on them, not Sonar, if this is what is occurring), but there are still ones out there that can intercept the SPACE bar if they are in focus.
  10. Do a quick check on ALL of the inputs for the Universal Control mix you are using. The reason I say this is that I was doing video work over a year ago and every other input was at 48 kHz, but CbB was running at 44.1 kHz. UC refused to crash (using a Revelator, no less); it trudged along and did its thing, but with a massive amount of latency/overhead. I didn't think anything like this was possible (and would still function), but it did. It is almost like the "handshake" between CbB and UC was broken by the order in which things were added to the mixer. See if you have multiple inputs active in your UC mixer, and make sure that all of the sample rates match each other there.
  11. Sorry about the loss of the original files. Unless you are focused specifically on remixing only, having the final mix is something you could take to the Songs sub-forum and ask about. There are a lot of folks who do collaborations there, so it is quite possible to find someone to recreate the guitar riffs for you if you ask. Stem extractors can be frustrating, even with high-end wav editors to supplement that work; so if you hit a wall with that, consider poking around in the Songs sub-forum.
  12. Also bear in mind that there are a handful of 32:9 (5120x1440) ultrawide monitors out there as well (roughly 45-49" wide). Mine came with a screen-split app to automatically chop the display into sections as needed (each acting basically like a sub-monitor), which is convenient for "full screening" windows and then putting them where you want them without fiddling with resizing (the app tells the window how big that sub-monitor is). As far as monitors themselves, refresh rate and response time tend to be the biggest differentiators these days... "true black" capability has become more standard but is also something to keep in mind (it can dramatically affect contrast capability).
  13. In addition to this, the audio files were saved in a global audio folder... Cakewalk Projects/Audio Data (or just Audio) was the default, IIRC. If they are from far enough back, they were also in 8.3 file-name format, so some names are gobbledygook. I forget now whether right-clicking the clip and selecting "Associated Audio Files..." will allow you to force a search for them, but it will give you the exact name of the audio file so you can manually search for it with Windows Explorer. Do you still have the global Audio folder from when you created those .wrk files?
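[Editor's note, not part of the original post: if Explorer search gets tedious, a quick scan like this can list candidate files. The folder path and name fragment below are placeholders; substitute whatever "Associated Audio Files..." reports.]

```python
from pathlib import Path

# Placeholder path and search fragment for the missing clip's audio file.
AUDIO_ROOT = r"C:\Cakewalk Projects\Audio Data"
NAME_FRAGMENT = "fragment"

for wav in Path(AUDIO_ROOT).rglob("*.wav"):
    if NAME_FRAGMENT.lower() in wav.name.lower():
        print(wav, wav.stat().st_size, "bytes")
```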
  14. Barring obvious ones like Bohemian Rhapsody, this one doesn't fall into "groundbreaking" as much as it does "memorable." I was in a discussion a few weeks ago and commented, "'Simple' doesn't necessarily imply 'simplistic'," and the first example that came to mind was a song I haven't heard in over 45 years!! Not sure "why" that came to mind, but it did. I had no clue who even sang it (I was 8 at the time), but it was easy to find and I remembered it verbatim... Red Sovine's "Teddy Bear." Definitely not traditional in format, and terrible background "music," but will make you stop and listen to the words, even if you only hear it one time in your life. Reminded me that years ago @garybrun had commented about retaking the vocal from "One More Day" because his voice showed the emotion of it... I think "Teddy Bear" was the reason I had strongly agreed with leaving that in. This is not the recorded version, but I like the performance aspect and it is "close enough."
  15. If you want to isolate Melodyne specifically (rather than all plugins), do this in the Melodyne app itself (best in the stand-alone version): click Help -> Check for Updates... (this opens the same window you reach by drilling in manually via File -> Preferences -> Check for Updates, a drop-down menu at the top, but that route is a bit clunky), then change Check for Updates to "Manually." Checking for updates manually uses the same route, but you can do it when you see fit.
  16. One thing to check is a deep dive on Services (you can type that into the Windows search bar). So many apps enable services when installing (or worse, every time you manually run them), yet a good number of them can have their Properties set to Manual (for when you yourself actively open the associated app), or outright Disabled (for things like "Adobe Acrobat Update Service"). Unfortunately there is no good guide for that task, but the descriptions are often fairly detailed about what they do, and services related to a specific app are the ones to call into question for automatic starts. The Task Manager and/or utility apps like LatencyMon, Moo0 System Monitor, or similar can also be helpful. Many background services are set to high priority, so they can be disruptive if you have never taken a look at what is running behind the scenes on your machine. You can also open Services just to see what is "Running" and actively Start/Stop services (right-click, same as for "Properties") without actually changing their Properties... this is often safer for testing, since if you shut everything off you will eventually hit one that forces the machine to shut down.

Side note on Moo0 System Monitor specifically: the "Portable Version" is highly preferred, since it installs nothing... you just unpack it someplace and run it as needed. It is convenient to unpack it to a thumb drive so you can check any computer when the need arises. The top two fields in it, [Bottleneck] and [Burdened By], get populated when the machine starts to struggle and are a nice indicator of where to look first, to save you time.
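[Editor's note, not part of the original post: to get a quick inventory before clicking through the Services console, Python's psutil can dump the same list. This is Windows-only and only reads state; it changes nothing.]

```python
# pip install psutil  (service enumeration is Windows-only)
import psutil

# List services set to start automatically, with their current status,
# so you can decide which ones to investigate in the Services console.
for svc in psutil.win_service_iter():
    try:
        if svc.start_type() == "automatic":
            print(f"{svc.name():40s} {svc.status():10s} {svc.display_name()}")
    except psutil.Error:
        pass  # some services deny access to non-admin queries
```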
  17. This seems to have shifted quite a bit from the Beta version as far as the GUI/functionality is concerned. When analyzing a reference file it seemed to keep shifting back to the input analysis section on me. Just to be sure I haven't lost all my marbles (yet), I pulled up MAutoEqualizer and the functionality worked there (specifically the bottom of that GUI) as I "remember" from the MMatcher Beta. Did anyone else notice this?
  18. +1 to what Amberwolf said above (if you leave all inputs "active"). Ironically, Cakewalk got its start as MIDI-only, yet it has no ability for the user to easily track MIDI data internal to the DAW... I call it "loose data," and a lot can also depend on your workflow. When using only one MIDI controller at a time (often the case), the input-echo-active track (the default) will often suffice (even using Omni), but manually overriding Input Echo on any track can throw a wrench into the works. Working with channels is the real solution; but again, there is no "MIDI mapper" internal to Cakewalk to visualize MIDI routing for complex setups (project notes are helpful here, but not quite the same if you are coming back to a project after a hiatus).
  19. I still need to check this out. It looks like it could be another "Dialogue Match" tool, but I am not sure of its functionality yet. Out of curiosity I just opened the page for iZotope's Dialogue Match, and it has a 2.03 rating from 32 reviews (ouch).
  20. Does hitting Shift-F to fit the project to the screen show a track longer than you are expecting? If not, Jonesey's solution above is the simplest route to go with things.
  21. +1 to the above. Increasing resolution to 4K affects both dimensions, so take that into consideration as it may strain your eyes. I use a 34" Ultrawide (21:9) as my main monitor, which just gives the added width without changing resolution. Similar to seeing more of the console, I prefer to see more of the timeline. The TV option is often cheaper (most are also 21:9) and there are more sizes to choose from if your work space can accommodate them.
  22. I seem to recall that saving a project as a bun file, then unpacking it will strip out all Region FX as well, but it has been a while.
  23. Boy, I really like the strings view on the Piano Roll and being able to edit imported tabs in Riffer. Please say that is underway for the guitars too
  24. The voice is an incredibly complex instrument. If you delve into the weeds with a voice, there are multiple harmonics and resonators that can be working at once. While those can be copied/analyzed, it is the "from scratch" part that creates the challenges beyond the basics. There is a quick snippet in the "Never Surrender: A Galaxy Quest Documentary" where Enrico Colantoni (who played the alien leader) talks about the tonal practice he was taught for voice getting him the part (he was initially turned down, but did that as he was leaving the audition and they changed their minds). If you analyze a voice that is "full," there is often a range of that voice where distinct harmonics exist ("chest" voice, "middle" voice, and "head" voice all in play at the same time).

A really big challenge with generation is diphthongs (vowels that are really two, like the long "I," which is really "ah-ee")... I tested Synthesizer V, and two of the three free voices had problems with "why," while the other had an issue with "you"... ironically, the first three words of the chorus I was testing are "Why don't you..." so that stopped my test fairly quickly. I have not tried it a second time with the newer release.

Replacement is simpler (in some ways), but requires a voice model that is accurate/well sampled. DaVinci Resolve just released the 20 Beta, which has both a voice modeler and a replacement algorithm that I tested with singing; it does fairly well, but the obvious downside is that you MUST have permission to use a voice to use it. This doesn't fall under generation, and in some ways has more challenges, in that the harmonics of the model are going to be based on the piece it was sampled from (again, the complexity of the voice itself). It probably won't be too long before AI voice generation gets these intricacies ironed out.

Quick rewind. I forgot to comment on this earlier, then forgot which thread it was in! Corel "put a pause" on development for Painter, PaintShop Pro, and VideoStudio last year (there are no "2024" versions), so when the Humble Bundle hit, I didn't even comment about it. However, what they did release that was "new" was the Distinct Plugins they incorporate as extras. Of those, Vision FX 2.0 is the most notable, and it does have the ability to build on existing images. Quick synopsis of that guy:

(Both a pro and a con) It is local, not cloud-based, so all processing is done on your machine, which can put a strain on it for high-detail pics (it can take a few minutes per pic to generate, and defaults to doing 5... after all 5 are done, you can change the text/settings for #5 only if you want to do just one, but you have to let them all complete to do that... OR you can cancel it when it shifts to #2 and restart the whole process... a pita either way).

(Pro) It works from an image, but only two sliders are available: basically "how much can it alter the original image" and "how much must it follow the text description." For generation you can use a blank image and say "modify this 100%"; with existing images you knock that down. I went through a phase of testing medusa images (a nice challenge for AI), so a lot of that is the text description... as mentioned above, there is no definitive list of what it understands, so there is a lot of hit-or-miss here. Since it is local, it is best to test-run at low resolutions/passes to see if the returns are right, then jack up those settings. I also tested with screen caps of AI images posted online and it improved them (dramatically in some cases, as I wanted to add photo-realistic qualities). It does seem to follow some camera/lighting specifics, but this can be hit-or-miss as well.

(Con) It makes 5 images per pass, but only one is save-able. You can screen cap the others to save them (more of a pita, really).

(Con) There is no good documentation, so the keywords and artists recognized are a Hail Mary. Oddly enough, it seems "youthful portrait" is baked into it (there is a pic on their website "somewhere" of an elderly woman rendered as a young girl... well, I'll be... I actually found it, lol).

(Con) The "seed" is not selectable; it just cycles randomly when you want to change it.

(Pro) The samples in the gallery include the text prompts used (similar to other online sites).

Overall it has its uses (specifically if tweaking an existing image), and being a local plugin, it is a perpetual license. My apologies for not posting about that while it was in effect (that bundle was $30, but the only thing new in it was the Distinct Plugins stuff and CorelDRAW basic, IIRC); the rest was "2023" stuff and brushes (both previously offered). Quick edit: If you scroll down on that "youthful" portrait above (or the one below), you can see "related" images... which have a slider for the before/after image. Another example of a good tweak is this one.
  25. I wanted to drop quick feedback on this for folks interested. Two new features I have not touched yet, but that are on my to-do list:

AI Dialogue Matcher: matches clip tone, level, and reverberance.
AI Audio Assistant: automatically creates a finished mix.

Both of these are already available from iZotope, but their Dialogue Match is PT-only. After they tried to sell Dialogue Match to me for a couple of years, when I finally got it with an MPS upgrade I realized I couldn't use it anyway. Someone had specifically asked about this in another thread a while ago, so I want to check that out specifically at some point.

**** What I did take the time to test ****

As more and more audio tools come to bear in Resolve, I really wanted to stress test the AI Vocal Modeling/Replacement on a vocal track (singing). As I delved into it, a few things came to light:

The model audio needs to be as dry as possible. Noise and FX (particularly reverb) should be removed to get the best result.
The model needs to enunciate properly. For dialogue this is not as often an issue, but some singers have a real challenge with this. Similarly, the audio to be replaced needs to be enunciated as well to replace properly.
The AI Modeler is slow, taking roughly 40 minutes to model 2 minutes of vocal.
Because it is modeling a vocal, the pitch/key of that model is embedded into the model itself, so this needs to be taken into account for the replacement. Although there "are" tools in Resolve for this, Melodyne is a better choice for its precision.

That said, I was surprised by the results. Applying a model is far quicker (15 seconds or so), and though not good enough quality for a lead, for harmony/backing tracks the results are usable. After seeing this in action, I think it is working under the same premise as Realivox Blue; i.e., the AI script is churning through the audio to match all the consonants/vowels/diphthongs it can, and without a full map it makes its best guesstimate during replacement (and may butcher some). The built-in AI models seem to have a full articulation map available, but again the tone/pitch/inflection in those is dialogue, not singing. Of course I went way beyond its intended purpose for my test, but I wanted to set the bar fairly high to get a feel for its capabilities. The tweaking required for dialogue replacement is far less.

Side note: I buried Resolve while it was in the middle of doing an AI model by multitasking and had to force the program to close. To my surprise, the voice modeler picked up where it left off when I re-opened Resolve (a good surprise for once). Lesson learned... when creating a voice model, better to check the status timer and walk away from the computer. Let it do its thing, then come back.