Everything posted by mettelus

  1. One interesting perk to this database came to light immediately. Quite a few files are available in both gp3 and gp4 formats (and when downloading them for free there is no distinction between which is which). Of the three sets I just checked, the gp3 version is more accurate (by far in one case for the lead). It changes the playing field a bit when you can just double-click files and "boom," they are available to use immediately.
  2. Sorry for the late post on this, but I just noticed it! Guitar Lesson has had a free database in place for years; the only catch is that you need to wait 15 seconds for each file before it downloads. This is nice if you want to hunt and peck and only want a few. They have also had an option to pay to download the entire database (link at the top of that page), which is normally $18 but is on sale for $9 through April (ends today). I finally got sick of the hunt-and-peck aspect, so I just got it. The download is 125MB (it unpacks to nearly 1.3GB) and contains over 52K gpx files. Bear in mind these are files for Guitar Pro (but they will also open in some other VSTs, like Ample Sound). As an aside, they also have a standing 10% discount on the Guitar Pro app for anyone interested, which I believe can also be used during a Guitar Pro sale event. Again, sorry for the late post, but the regular price is $18 anyway, so not a "massive" deal; BUT this site is a nice alternative to Guitar Pro's "My Songbook" subscription (which, as far as I can tell, is using this site's gpx files anyway, but I am not sure on that one).
  3. Many VSTs have fixed this behavior (it is really on them, not Sonar, if this is what is occurring), but there are still ones out there that can intercept the SPACE bar if they are in focus.
  4. Do a quick check on ALL of the inputs for the Universal Control mix you are using. The reason I say this is that I was doing video work over a year ago and every other input was at 48 kHz, but CbB was running at 44.1 kHz. UC refused to crash (using a Revelator, no less); it trudged along and did its thing, but with a massive amount of latency/overhead. I didn't think anything like this was possible (and still functional), but it was. It is almost as if the "handshake" between CbB and UC was broken by the order in which things were added to the mixer. See if you have multiple inputs active in your UC Mixer and make sure that all of the sample rates there match each other.
  5. Sorry about the loss of the original files. Unless you are focused specifically on re-mixing only, having the final mix is something you could take to the Songs sub-forum and ask about. There are a lot of folks who do collaborations there, so it is quite possible to find someone to recreate the guitar riffs for you if you ask. Stem extractors can be frustrating, even with high-end wav editors to supplement that work; so if you hit a wall with that, consider poking around in the Songs sub-forum.
  6. Also bear in mind that there are a handful of 32:9 (5120x1440) ultrawide monitors out there as well (roughly 45-49" wide). Mine came with a screen-split app to automatically chop the display into sections as needed (each acting basically like a sub-monitor), which makes it convenient to "full screen" windows and then put them where you want them without fiddling with resizing (the app tells the window how big that sub-monitor is). As far as monitors themselves, refresh rate and response time tend to be the biggest differentiators these days... "true black" capability has become more standard but is also something to keep in mind (it can dramatically affect contrast).
  7. In addition to this, the audio files were saved in a global audio folder... Cakewalk Projects/Audio Data (or just Audio) was the default, IIRC. If the projects are old enough, the files were also in 8.3 file-name format, so some were gobbledygook. I forget now whether right-clicking the clip and selecting "Associated Audio Files..." will let you force a search for them, but it will give you the exact name of the audio file so you can manually search for it with Windows Explorer. Do you still have the global audio folder from when you created those .wrk files?
  8. Barring obvious ones like Bohemian Rhapsody, this one doesn't fall into "groundbreaking" as much as it does "memorable." I was in a discussion a few weeks ago and commented, "'Simple' doesn't necessarily imply 'simplistic'," and the first example that came to mind was a song I haven't heard in over 45 years!! Not sure "why" that came to mind, but it did. I had no clue who even sang it (I was 8 at the time), but it was easy to find and I remembered it verbatim... Red Sovine's "Teddy Bear." Definitely not traditional in format, and terrible background "music," but will make you stop and listen to the words, even if you only hear it one time in your life. Reminded me that years ago @garybrun had commented about retaking the vocal from "One More Day" because his voice showed the emotion of it... I think "Teddy Bear" was the reason I had strongly agreed with leaving that in. This is not the recorded version, but I like the performance aspect and it is "close enough."
  9. If you want to isolate Melodyne specifically (rather than all plugins), open the Melodyne app itself (best done in the stand-alone version) and click Help->Check for Updates... (this opens the same window you can reach by drilling in manually via File->Preferences and the Check for Updates entry in the drop-down menu at the top, but that route is clunkier). There, change Check for Updates to "Manually." Checking for updates manually is then the same route, done whenever you see fit.
  10. One thing to check is a deep dive on Services (you can type that in the Windows Search bar). So many apps enable services when installing (or worse, every time you manually run them), yet a good number of them can have Properties set to Manual (for when you yourself actively open the associated app), or outright Disabled (for things like "Adobe Acrobat Updater Service"). Unfortunately, there is no good guide for that task, but the descriptions are often fairly detailed, and services tied to a specific app are the ones to call into question for automatic starts. The Task Manager and/or utility apps like LatencyMon, Moo0 System Monitor, or similar can also be helpful. Many background services are set to high priority, so they can be disruptive if you have never taken a look at what is running behind the scenes on your machine. You can also open Services just to see what is "Running" and actively Start/Stop services (right-click, same as for "Properties") without actually changing their Properties... this is often safer for testing, since if you shut everything off, you will eventually hit one that forces the machine to shut down.
Side note on Moo0 System Monitor specifically: the "Portable Version" is highly preferred, since it installs nothing... you just unpack it someplace and run it as needed. It is convenient to unpack it to a thumb drive so you can check any computer when the need arises. The top two fields in it, [Bottleneck] and [Burdened By], get populated when the machine starts to struggle and are a nice indicator of where to look first, which saves you time.
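If you prefer a scriptable view of the same information, running `sc query state= all` from a command prompt dumps every service and its state as plain text. As a rough sketch (my own, not part of any Windows tooling), the function below boils that dump down to name/state pairs; it assumes the English-locale field labels, so other locales will differ.

```python
def parse_sc_query(output: str) -> list[tuple[str, str]]:
    """Reduce the text emitted by Windows' `sc query state= all`
    to (service_name, state) pairs. The field labels matched here
    are from the English-locale output; other locales will differ."""
    services = []
    name = None
    for line in output.splitlines():
        line = line.strip()
        if line.startswith("SERVICE_NAME:"):
            name = line.split(":", 1)[1].strip()
        elif line.startswith("STATE") and name is not None:
            # e.g. "STATE              : 4  RUNNING"
            services.append((name, line.split(":", 1)[1].split()[-1]))
            name = None
    return services
```

Pipe the command's output into a file (or `subprocess.run`) and you can diff "Running" lists before and after a boot to spot what an installer quietly added.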
  11. This seems to have shifted quite a bit from the Beta version as far as the GUI/functionality is concerned. When analyzing a reference file it seemed to keep shifting back to the input analysis section on me. Just to be sure I haven't lost all my marbles (yet), I pulled up MAutoEqualizer and the functionality worked there (specifically the bottom of that GUI) as I "remember" from the MMatcher Beta. Did anyone else notice this?
  12. +1 to what Amberwolf said above (if you leave all inputs "active"). Ironically, Cakewalk got its start as MIDI-only, yet has no ability for the user to easily track MIDI data internal to the DAW... I call it "loose data," and a lot can also depend on your workflow. When using only one MIDI controller at a time (often the case), the input-echo-active track (the default) will often suffice (even using Omni), but manually overriding Input Echo on any track can throw a wrench into the works. Working with channels is the real solution; but again, there is no "MIDI mapper" internal to Cakewalk to visualize MIDI routing for complex setups (project notes are helpful here, but not quite the same if you are coming back to a project after a hiatus).
  13. I need to check this out still. It looks like it could be another "Dialogue Match" tool, but I am not sure of its functionality yet. Out of curiosity I just opened the page for iZotope's Dialogue Match and it has a 2.03 rating from 32 reviews (ouch).
  14. Does hitting Shift-F to fit the project to the screen show a track longer than you are expecting? If not, Jonesey's solution above is the simplest route to go with things.
  15. +1 to the above. Increasing resolution to 4K affects both dimensions, so take that into consideration as it may strain your eyes. I use a 34" Ultrawide (21:9) as my main monitor, which just gives the added width without changing resolution. Similar to seeing more of the console, I prefer to see more of the timeline. The TV option is often cheaper (most are also 21:9) and there are more sizes to choose from if your work space can accommodate them.
  16. I seem to recall that saving a project as a bun file, then unpacking it will strip out all Region FX as well, but it has been a while.
  17. Boy, I really like the strings view in the Piano Roll and being able to edit imported tabs in Riffer. Please say that is underway for the guitars too.
  18. The voice is an incredibly complex instrument. If you delve into the weeds with a voice, there are multiple harmonics and resonators that can be working at once. While those can be copied/analyzed, it is the "from scratch" part that creates the challenges beyond the basics. There is a quick snippet in "Never Surrender: A Galaxy Quest Documentary" where Enrico Colantoni (who played the alien leader) talks about the tonal practice he was taught for voice getting him the part (he was initially turned down, but did that as he was leaving the audition and they changed their minds). If you analyze a voice that is "full," there is often a range of that voice where distinct harmonics exist ("chest" voice, "middle" voice, and "head" voice all in play at the same time). A really big challenge with generation is diphthongs (vowels that are really two, like the long "I," which is really "ah-ee")... I tested Synthesizer V and two of the three free voices had problems with "why," while the other had an issue with "you"... ironically, the first three words of the chorus I was testing are "Why don't you..." so that stopped my test fairly quickly. I have not tried it a second time with the newer release. Replacement is simpler (in some ways), but requires a voice model that is accurate/well sampled. DaVinci Resolve just released the 20 Beta, which has both a voice modeler and a replacement algorithm that I tested with singing; it does fairly well, but the obvious downside is that you MUST have permission to use a voice. This doesn't fall under generation, and in some ways has more challenges, in that the harmonics of the model are going to be based on the piece it was sampled from (again, the complexity of voice itself). It probably won't be too long before AI voice generation gets these intricacies ironed out. Quick rewind. I forgot to comment on this earlier, then forgot which thread it was in! 
Corel "put a pause" on development for Painter, PaintShop Pro, and VideoStudio last year (there are no "2024" versions), so when the Humble Bundle hit, I didn't even comment on it. However, what they did release that was "new" was the Distinct Plugins they incorporate as extras. Of those, Vision FX 2.0 is the most notable, and it does have the ability to build on existing images. Quick synopsis of that guy: (Both a pro and con) It is local, not cloud-based, so all processing is done on your machine, which can put a strain on it for high-detail pics (it can take a few minutes per pic to generate, and defaults to doing 5... after all 5 are done, you can change the text/settings for #5 only if you want to do just one, but you have to have them all completed to do that... OR... you can cancel it when it shifts to #2 and restart the whole process... a pita either way). (Pro) It works from an image, but only two sliders are available: basically "how much can it alter the original image" and "how much must it follow the text description." For generation, you can use a blank image and say "modify this 100%"; with existing images you knock that down. I went through a phase of testing medusa images (a nice challenge for AI), so a lot of that is the text description... as mentioned above, there is no "definitive list" of what it understands, so there is a lot of hit-or-miss here. Since it is local, it is best to test run at low resolutions/passes to see if the returns are right, then jack up those settings. I also tested with screen caps of AI images posted online and it improved them (dramatically in some cases, as I wanted to add photo-realistic qualities). It does seem to follow some camera/lighting specifics, but this can be hit-or-miss as well. (Con) It makes 5 images per pass, but only one is save-able. You can screen cap the others to save them (more a pita, really). (Con) There is no good documentation, so the keywords and artists recognized are a Hail Mary. 
Oddly enough, it seems "youthful portrait" is baked into it (there is a pic on their website "somewhere" of an elderly woman rendered as a young girl... well, I'll be... I actually found it, lol). (Con) The "seed" is not select-able; it just cycles randomly when you want to change it. (Pro) The samples in the gallery include the text prompts used (similar to other online sites). Overall it has its uses (specifically for tweaking an existing image), and being a local plugin, it is a perpetual license. My apologies for not posting about that when it was in effect (that bundle was $30, but the only thing new in it was the Distinct Plugins stuff and Corel DRAW basic, IIRC). The rest was "2023" stuff and brushes (both previously offered). Quick edit: If you scroll down on that "youthful" portrait above (or the one below), you can see "related" images... which have a slider for the before/after image. Another example of a good tweak is this one.
  19. I wanted to drop quick feedback on this for folks interested. Two new features I have not touched yet, but which are on my to-do list, are AI Dialogue Matcher (matches clip tone, level, and reverberance) and AI Audio Assistant (automatically creates a finished mix). Both of these are already available from iZotope, but their Dialogue Match is PT-only. After they tried to sell Dialogue Match to me for a couple of years, when I finally got it with an MPS upgrade I realized I couldn't use it anyway. Someone had specifically asked about this in another thread a while ago, so I want to check that out at some point. **** What I did take the time to test **** As more and more audio tools come to bear in Resolve, I really wanted to stress test the AI Vocal Modeling/Replacement on a vocal track (singing). As I delved into it, a few things came to light:
  • The model audio needs to be as dry as possible. Noise and FX (particularly reverb) should be removed to get the best result.
  • The model needs to enunciate properly. For dialogue this is not often an issue, but some singers have a real challenge with it. Similarly, the audio to be replaced needs to be well enunciated to replace properly.
  • The AI Modeler is slow, taking roughly 40 minutes to model 2 minutes of vocal.
  • Because it is modeling a vocal, the pitch/key of that model is embedded into the model itself, so this needs to be taken into account for the replacement. Although there "are" tools in Resolve for this, Melodyne is a better choice for its precision capabilities.
That said, I was surprised by the results. Applying a model is far quicker (15 seconds or so), and though not good enough quality for a lead, for harmony/backing tracks the results are usable. 
After seeing this in action, I think it is working under the same premise as Realivox Blue; i.e., the AI script is churning through the audio to match all the consonants/vowels/diphthongs it can, and without a full map it makes its best guesstimate during replacement (and may butcher some). The built-in AI models seem to have a full articulation map available, but again, the tone/pitch/inflection in those is dialogue, not singing. Of course I went way beyond its intended purpose for my test, but I wanted to set the bar fairly high to get a feel for its capabilities. The tweaking required for dialogue replacement is far less. Side note: I buried Resolve while it was in the middle of doing an AI model by multitasking and had to force the program to close. To my surprise, the voice modeler picked up where it left off when I re-opened Resolve (a good surprise for once). Lesson learned... when creating a voice model, it is better to check the status timer and walk away from the computer. Let it do its thing, then come back.
  20. +1 to this; you want to fully understand what they do and what they expect to receive from you. For mixing purposes, stems are usually dry (they cannot "unbake" FX you applied), and broadcast wav files are often used so that stems import at the proper timestamp. Again, talking to them to understand their exact requirements will give you a better start to things.
  21. (Re: Power conditioner) 20 hours is a good operational test. That Neve only draws 35W, so it is pretty small. I checked the Samson quickly and there were a few articles about grounding issues; most seemed to be either mode-related (half-normal in particular) or from the use of unbalanced cables for any of the connections (one user was getting grief from plugging his guitar directly into the bay). I didn't dig into the Samson too much, but it sounds like a better plan anyway.
  22. (Re: Power conditioner) +1. Which Neve model are you using, Geoff? I did a quick read on several rack units yesterday; they do not have a significant current draw to speak of, but the manuals do reference the ground as being critical. They even go so far as to mention not using power extensions. Seeing how low the current draw is on them, the issue may very well be the grounding Mark mentioned. As you have interacted with Neve already, a quicker route may be to provide them with a detailed layout of your setup and see what they say.
  23. There are also a handful of VSTs that can take focus away from the DAW, so the spacebar (or any keyboard input) goes to the plugin rather than the DAW. Additionally, there is an option to send keyboard inputs to the VST directly, which may be enabled (upper right of the VST window). Does this behavior still occur if no VST windows are open at all?
  24. This stands out like a big red flag for me. Is it possible you have those project/audio files set as read-only? Sometimes when simply copying/moving files from another (external) drive, that happens. It made me wonder if that could also be the cause of the crashes, but Cakewalk should throw an access violation (although might not if it is a crash to desktop).
  25. This was my reaction as well. If the performance wasn't great, song analyzers won't recognize them anyway. With 6,000 files, that is a lot of time to invest in something. You can still download MIDI files if you putz around on the internet (sites like BitMidi, Midis101, and the like), so it may be a better route to seek out something you want to work with. Over the years, MIDI renditions have gotten far better than they were 30 years ago. Also, MIDI files that were well done often have blank tracks in them with the song title, artist, and performers as track names. Those you can easily identify just by opening them in Cakewalk if TTS-1 is enabled and no MIDI output is selected. I have a slew of MIDI files from that era too, many of which I listened to only once; I know what they are, and I have never even considered using them.
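If you have thousands of files to triage, you can also pull those track names out in bulk without opening anything in Cakewalk. The sketch below is my own quick hack, not any official tool: it scans a Standard MIDI File's raw bytes for Track Name meta events (FF 03, a length byte, then text). It assumes single-byte lengths (names under 128 characters) and can in theory match stray byte patterns, so treat the output as a hint, not gospel.

```python
def midi_track_names(data: bytes) -> list[str]:
    """Heuristically pull Track Name meta events (FF 03 <len> <text>)
    out of raw Standard MIDI File bytes. A rough scan, not a full SMF
    parser: it assumes a single-byte length and may match stray bytes."""
    names = []
    i = 0
    while True:
        i = data.find(b"\xff\x03", i)
        if i == -1 or i + 3 > len(data):
            break
        length = data[i + 2]
        text = data[i + 3 : i + 3 + length]
        # Latin-1 decodes any byte sequence, which suits old 90s files
        names.append(text.decode("latin-1").strip())
        i += 2
    return [n for n in names if n]
```

Loop it over a folder with pathlib (`Path("midis").glob("*.mid")`, reading each file with `read_bytes()`) and you can skim the title/artist tracks of a 6,000-file pile in seconds.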