mettelus
  1. +1 to what Jonesey said above. Inserting Melodyne into the FX bin bypasses its ARA functionality, which is what lets the plugin communicate back and forth with the DAW in real time for the selected region; only as a Region FX do you have all of those tools at your disposal. A simpler way of putting it: most FX only see what is coming in real time (or a brief window beyond that), but a Region FX passes all of the clip data to the plugin so it can work with the whole thing at once (and send changes back to the DAW as you go). While you "can" leave Region FX in place, it is better practice to keep them small, do your work, then bounce the result (the original audio will remain in your audio folder). Not only do Region FX have limitations (listed at the bottom of that page, with splitting a clip being the most common), but leaving them active will make your project file larger and slower to load (i.e., Melodyne will launch and redo its analysis each time you load a project file with Melodyne Region FX left enabled).
  2. This just stands out for me. The OS drive is the only one that requires imaging; the others are really "data drives" to the system. I use xcopy/robocopy scripts to update the data drives (and copy newer data from C onto D). Setting these to only copy newer/modified files is very quick (after the first pass, which copies everything): copying newer data from C to D takes roughly 10 seconds, and the data drives to external can take several minutes. I keep my OS drive small on purpose (under 200GB for imaging), so the round trip (image <-> restore) takes 12-16 minutes. When you said "10 hours," it made me wonder whether you have ever done a restore, and how you protect files changed since the last image taken prior to that restoration. Data files that have never changed since installation do not require constant backups; they are more of a "once and done" deal, and running drives through their paces constantly can also lower their lifespan. I did a few write-ups on this in both this and the old forum with more details on scripts and usage, but the "10 hours" really stands out for me... I have actually had several instances with removed programs or bad installs where doing the 15-minute restore was by far the easier way to clean the registry.
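The copy-only-newer logic those xcopy/robocopy scripts rely on (robocopy's /E /XO, or xcopy's /D switch) can be sketched in Python; this is a minimal illustration of the idea, not a replacement for those tools, and the folder/file names are placeholders:

```python
import shutil
from pathlib import Path

def copy_newer(src: Path, dst: Path) -> int:
    """Copy files from src into dst only when missing or newer there,
    mimicking robocopy /E /XO (or xcopy /D) incremental behavior."""
    copied = 0
    for src_file in src.rglob("*"):
        if src_file.is_dir():
            continue
        dst_file = dst / src_file.relative_to(src)
        # Skip files whose destination copy is already up to date.
        if dst_file.exists() and src_file.stat().st_mtime <= dst_file.stat().st_mtime:
            continue
        dst_file.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src_file, dst_file)  # copy2 preserves timestamps
        copied += 1
    return copied
```

After the first full pass, subsequent runs touch only what changed, which is why the C-to-D update finishes in seconds.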
  3. I agree with this, and it may be linked to their steadfast launch date. I haven't delved into their forums much yet, but caught several things out of the chute that were odd or unintuitive. I haven't looked at the manual yet either, so will dig deeper this weekend; after the uber-long VST scan I have only spent about 30 minutes with it so far. The top three that caught my eye were: (1) Using the VST host feature when Scaler 3 is inserted into a DAW causes the DAW to take focus when Scaler 3 is touched, shifting the VST behind the DAW and requiring a second monitor to actually use it (unless you love Alt-Tab). (2) Regardless of the voicing used, it seems all the notes of the chord are activated on the fretboard view, so the VSTs I tried only register the highest notes (i.e., no folk chords); I did see this asked on their forum, but still need to follow up with the manual and more use. (3) Chords are set to a straight bar count, and although you can hover over the edges, I didn't find a way to change duration in the Arrange View short of the x.5, x2, etc. options that were used in Scaler 2; I was hoping to be able to just adjust their length with the mouse. Ironically, I was specifically intending to check something else, but those three sidetracked me in short order.
  4. This was the same model that SONAR Platinum had. The downside is that people can wait for the release of a feature they really want before buying a copy; not only do they get everything they skipped in the version they buy into, but also the next 12 months of updates (whether they care about them or not). In a market that has already matured, it is a liability to employ such a model. To put this into an extreme example... how many people get excited (or care) about new word processing features? I can do everything I ever needed in Word 2007 as easily as in the "current" version and not miss a beat. The demographics of the market also need to be considered. The younger generation is very much "right now" focused, and the "AI" freight train is coming up fast... It won't be long before one can hum/sing a melody into a cell phone and have it snapped to key/tempo and built upon without any music theory knowledge whatsoever. The listening market is already saturated, but the future will have an even greater disparity between "live performers" and "software musicians."
  5. Thanks for that. Something similar happened with RiffStation when they released it to everyone: the activation servers went offline, so the purchased version couldn't be activated, only the free (unlocked) one. Luckily I had downloaded two copies of the free version "just in case," but didn't realize I needed them until I loaded up this machine... it is a difficult app to find now, and the paid version can no longer be activated.
  6. I have never swapped sample rates mid-project, but this got me wondering. Since you can drag/drop a .cwp from the Sonar browser to populate a new project, if that new project is set to a different sample rate, does that desync things in Sonar? I was under the assumption that the embedded SRC would kick in for that situation (so for the OP, the 96K content would all get converted to 48K), but I never thought to try it.
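For anyone curious what that embedded SRC is conceptually doing when 96K content lands in a 48K project, here is a deliberately naive linear-interpolation sketch (illustrative only; real converters add proper anti-aliasing filtering, and the function name is made up):

```python
def resample_linear(samples, src_rate, dst_rate):
    """Naive sample-rate conversion by linear interpolation
    (e.g., 96 kHz -> 48 kHz). Shows only the timing math; a real
    SRC also low-pass filters to prevent aliasing."""
    ratio = src_rate / dst_rate          # input samples per output sample
    out = []
    for i in range(int(len(samples) / ratio)):
        pos = i * ratio                  # fractional position in the source
        j = int(pos)
        frac = pos - j
        nxt = samples[j + 1] if j + 1 < len(samples) else samples[j]
        out.append(samples[j] + (nxt - samples[j]) * frac)
    return out
```

The key point is that the output has half as many samples but represents the same duration, so nothing desyncs as long as the conversion actually happens.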
  7. Something to check would be Disk Management for each OS after it loads (ironically, typing "Create and format hard disk partitions" in the search bar is how to call it up). Check the bottom portion of that window and see whether the drive assignments jibe between the OS drives when swapped. That is always something to check when swapping/installing new drives, especially if you have junctions in use. Windows gets rather weird and decides drive assignments on its own sometimes if you don't manually tell it how to function in that window.
  8. Forgot to mention, attachments are limited to roughly 4MB, so if you have something bigger, it needs to be hosted on a server (YouTube or similar) and the link can be posted so others can access it.
  9. I came from an older version of Premiere Pro (CS 5.5), so I searched Google for a more relevant version to be sure the menus weren't drastically different. Below is a quick video, bookmarked at the spot showing how to display the waveform. Be patient with the learning curve, especially with video editors; Adobe has a massive submarket of tutorials/classes, so it is often simpler to do a very detailed Google search for exactly what you want to do and eat the elephant a bite at a time. As with Vegas/Sound Forge, one of Adobe's strongest assets is that data can be shared almost willy-nilly between Adobe apps, specifically Premiere and Audition (their wave editor). Another "trick" with video editors is their snap function: if you razor/ripple edit a video first, you can export the audio from that to import into Cakewalk... if you prefer to line up transients in Cakewalk, you can instead use the camera audio as your guide for alignment, split the audio clip in Cakewalk, do your work, then export that, import it into Premiere, and snap it to align with the original clip. I would recommend focusing on transients first (whether in Premiere or Cakewalk) rather than the camera aspect, and finding out which workflow you meld with best. In many ways it is akin to creating a tempo map from something recorded without a click... you are going to face that issue at some point, there is more than one way to achieve it, and one of those ways is going to become your preferred method. Especially when learning video, keep tasks small and specific, and there is a good chance someone has posted about that online for you to read.
  10. Roger that. With multiple cameras, some sort of visual "clapboard" helps if those feeds are separated, depending on the view. Are you recording those cameras standalone, or fed into a common host? I mention that because there are video plugins that enable multi-camera capture into editors, but depending on how intense that is, it could require a laptop dedicated to video only (so it won't interfere with your gigging audio at all). More stuff to lug around and worry about can be stressful, though. That setup could also accept a monitor feed from the mixer to sync everything during recording. Of course, as soon as I mention that, OBS Studio comes to mind as well... I have used it all of two times, but need to delve into it deeper to see what its capabilities are (especially since it is free, and more and more hardware manufacturers are making drivers specific to OBS).
  11. I am not familiar with Vegas, but this seems a little odd to me. I just checked a quick video, and it seems the U and G shortcuts handle this (Ungroup and Group?). He didn't specifically delete the audio, but scooted it well after the video; maybe someone can chime in, but I assume you can delete it as well. I have shifted over to DaVinci Resolve Studio, but I also work with video above 4K, for which Studio is required. I am always hesitant to mention other software, since you are already familiar with Vegas (which has been around a long time); once programs get enough years under their belts, they become highly capable, and the user manual's page count tends to reflect that. The integration with Sound Forge is another perk. As long as you can stack audio and zoom in, you can place the DAW audio into position before breaking the link to the original video audio. In a sample-rate-mismatch scenario, some video editors will also allow you to fit audio between two markers (say, a transient at the beginning and end of an audio track), so there are various ways to tackle that challenge as well. Sorry for the distraction there. My real point above was that most work should be done in the video editor (when possible), and anything done in a DAW needs to be done with the ability to relink that DAW work back to the video (i.e., same length).
  12. Are you physically swapping these drives or are they permanently connected? And how are you choosing which OS/drive to run as C? If they are two physical drives connected, I would start with checking your UEFI/BIOS, since that is where the C drive is designated. Two physical drives both with an OS on them can confuse the crap out of that, so it may be trying to determine which drive actually has the "OS" on it. A little more insight into your situation would help us understand better.
  13. Not necessarily "required," but this does help. An alternative to try: (many) cameras also record audio, so be sure to capture that (you probably won't use any of it, but see below). In a video editor (not a DAW), you can essentially replace the camera's audio track with the DAW output (be sure the sample rates coincide). The clapboard technique of aligning transients should be simple prior to removing the camera audio: split the video from the audio in the video editor, align the DAW track (don't remove sections from it in the DAW, to make sure they mate end to end), then re-link the video to the DAW audio in the video editor. From there, again working in the video editor, the video/audio will remain mated (important point), so you can ripple edit out sections as needed and they will stay aligned. When working with multiple cameras, this also allows bebopping back and forth between views. Big picture: a video editor allows for seamless ripple/razor editing, so keep what the camera saw in mind when working in the DAW, and be sure to only remove content in the video editor (after the video and DAW audio are mated). Depending on the video editor, some are also VST hosts, so you can minimize the time spent in the DAW before working on the video aspect.
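The transient-alignment step above can even be automated with a simple cross-correlation: slide the DAW audio against the camera audio and pick the offset where the clapboard spikes agree best. A toy pure-Python sketch of the idea (function and variable names are illustrative; real audio work would use numpy on actual sample arrays):

```python
def best_offset(camera, daw, max_shift):
    """Return the shift (in samples) of the DAW track relative to the
    camera track that maximizes their correlation -- i.e., where shared
    transients such as a clapboard hit line up."""
    best, best_score = 0, float("-inf")
    for shift in range(-max_shift, max_shift + 1):
        score = 0.0
        for i, d in enumerate(daw):
            j = i + shift
            if 0 <= j < len(camera):
                score += camera[j] * d  # large when spikes overlap
        if score > best_score:
            best, best_score = shift, score
    return best
```

This is exactly what "aligning to the clapboard by eye" does, just expressed as arithmetic; either way, once aligned you relink in the video editor and discard the camera audio.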
  14. If a source is mono (e.g., a microphone), there is no "stereo" to it, so recording it as mono is the proper route (and leaving it as such). For an individual track, the (stereo) interleave, (stereo) FX used on that mono track, panning, etc., and sending to a stereo bus (the default) are where the "stereo" comes from (and how it gets mixed into the entire piece). @bitflipper made one of the best posts ever on this over 11 years ago on the old forums, so it is definitely worth a read if interested. Bottom line: the raw audio is mono (from a single-point recording source) and best left that way; how it is incorporated into a stereo mix is what matters and should be understood.
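To make the "panning is where the stereo comes from" point concrete, here is a sketch of an equal-power pan law, one common way a DAW places a mono signal in a stereo field (DAWs offer several pan laws; this particular formula is just an illustrative choice, not Sonar's documented behavior):

```python
import math

def pan_mono(sample, pan):
    """Place one mono sample in the stereo field using an equal-power
    (constant-power) pan law.
    pan: -1.0 = hard left, 0.0 = center, +1.0 = hard right.
    Returns a (left, right) pair whose combined power stays constant."""
    angle = (pan + 1.0) * math.pi / 4.0   # maps pan to 0..pi/2
    return sample * math.cos(angle), sample * math.sin(angle)
```

At center, both channels get about 0.707 of the signal rather than 0.5, so the perceived loudness doesn't dip as you sweep across the field.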