-
I seem to recall that saving a project as a bundle (.cwb) file, then unpacking it, will strip out all Region FX as well, but it has been a while.
-
FREE - Ample Bass V4 upgrade and Special Bass Promotion 25% off and more.
mettelus replied to Jason Morin's topic in Deals
Boy, I really like the strings view on the Piano Roll and being able to edit imported tabs in Riffer. Please say that is underway for the guitars too.
-
The voice is an incredibly complex instrument. If you delve into the weeds with a voice, there are multiple harmonics and resonators that can be working at once. While those can be copied/analyzed, it is the "from scratch" part that creates the challenges beyond the basics. There is a quick snippet in "Never Surrender: A Galaxy Quest Documentary" where Enrico Colantoni (who played the alien leader) talks about the tonal practice he was taught for voice getting him the part (he was initially turned down, but did that as he was leaving the audition and they changed their minds). If you analyze a voice that is "full," there is often a range of that voice where distinct harmonics exist ("chest" voice, "middle" voice, and "head" voice all in play at the same time).

A really big challenge with generation is diphthongs (vowels that are really two, like the long "I," which is really "ah-ee")... I tested Synthesizer V and two of the three free voices had problems with "why," while the other had an issue with "you"... ironically, the first three words of the chorus I was testing are "Why don't you..." so that stopped my test fairly quickly. I have not tried it a second time with the newer release.

Replacement is simpler (in some ways), but requires a voice model that is accurate/well sampled. DaVinci Resolve just released the 20 Beta, which has both a voice modeler and a replacement algorithm. I tested it with singing and it does fairly well, but the obvious downside is that you MUST have permission to use a voice in order to use it. This doesn't fall under generation, and in some ways it has more challenges, in that the harmonics of the model are going to be based on the piece it was sampled from (again, the complexity of the voice itself). It probably won't be too long before AI voice generation gets these intricacies ironed out.

Quick rewind. I forgot to comment on this earlier, then forgot which thread it was in! Corel "put a pause" on development for Painter, PaintShop Pro, and VideoStudio last year (there are no "2024" versions), so when the Humble Bundle hit, I didn't even comment about it. However, what they did release that was "new" was the Distinct Plugins they incorporate as extras. Of those, Vision FX 2.0 is the most notable, and it does have the ability to build on existing images. Quick synopsis of that guy:

(Both a pro and con) It is local, not cloud-based, so all processing is done on your machine, and it can put a strain on it for high-detail pics (it can take a few minutes per pic to generate, and it defaults to doing 5... after all 5 are done, you can change the text/settings for #5 only if you want to do just one, but you have to have them all completed to do that... OR you can cancel it when it shifts to #2 and restart the whole process... pita either way).

(Pro) It works from an image, but only two sliders are available: basically "how much can it alter the original image" and "how much must it follow the text description." For generation, you can use a blank image and say "modify this 100%"; with existing images you knock that down. I went through a phase of testing medusa images (a nice challenge for AI), so a lot of that is the text description... as mentioned above, there is no "definitive list" of what it understands, so there is a lot of hit-or-miss here. Since it is local, it is best to test run on low resolutions/passes to see if the returns are right, then jack up those settings.

I also tested with screen caps of AI images posted online and it improved them (dramatically in some cases, as I wanted to add photo-realistic qualities). It does seem to follow some camera/lighting specifics, but this can be hit-or-miss as well.

(Con) It makes 5 images per pass, but only one is saveable. You can screen cap the others to save them (more of a pita really).

(Con) No good documentation, so the keywords and artists recognized are a Hail Mary. Oddly enough, it seems "youthful portrait" is baked into it (there is a pic on their website "somewhere" of an elderly woman rendered as a young girl... well, I'll be... I actually found it, lol).

(Con) The "seed" is not selectable; it just cycles randomly when you want to change it.

(Pro) The samples in the gallery include the text prompts used (similar to other online sites).

Overall it has its uses (specifically if tweaking an existing image), and being a local plugin, it is a perpetual license. My apologies for not posting about that bundle when it was in effect (it was $30, but the only thing new in it was the Distinct Plugins stuff and CorelDRAW basic, IIRC). The rest was "2023" stuff and brushes (both previously offered).

Quick edit: If you scroll down on that "youthful" portrait above (or the one below), you can see "related" images... which have a slider for the before/after image. Another example of a good tweak is this one.
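For anyone curious how those two sliders relate to the wider AI image world: they behave a lot like the "strength" and "guidance scale" knobs on open-source image-to-image pipelines. Below is a minimal sketch using the Hugging Face diffusers library purely as an analogue; this is NOT Vision FX's API (which is undocumented), and the model ID, file names, and values are placeholders.

```python
# Illustrative analogue only; Vision FX itself is a closed, local Corel plugin.
# Assumes a CUDA GPU and that the placeholder model/file names are swapped for real ones.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # placeholder model ID
    torch_dtype=torch.float16,
).to("cuda")

init_image = Image.open("medusa_start.png").convert("RGB")  # hypothetical starting image

result = pipe(
    prompt="photo-realistic medusa portrait, dramatic lighting",
    image=init_image,
    strength=0.6,        # roughly "how much can it alter the original image" (1.0 = near-total)
    guidance_scale=7.5,  # roughly "how much must it follow the text description"
    generator=torch.Generator("cuda").manual_seed(42),  # a fixed seed, unlike Vision FX's random cycling
)
result.images[0].save("medusa_out.png")
```

The same trade-off applies there: low strength stays close to the source image, while high strength plus a strong prompt is effectively generation from scratch.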
-
I wanted to drop quick feedback on this for folks interested. Two new features I have not touched yet, but which are on my to-do list:

- AI Dialogue Matcher: matches clip tone, level, and reverberance.
- AI Audio Assistant: automatically creates a finished mix.

Both of these are already available from iZotope, but their Dialogue Match is PT only. After they tried to sell Dialogue Match to me for a couple of years, when I finally got it with an MPS upgrade I realized I couldn't use it anyway. Someone had specifically asked about this in another thread a while ago, so I want to check that out specifically at some point.

**** What I did take the time to test ****

As more and more audio tools are coming to bear in Resolve, I really wanted to stress test the AI Vocal Modeling/Replacement on a vocal track (singing). As I delved into it, a few things came to light:

- The model audio needs to be as dry as possible. Noise and FX (particularly reverb) should be removed to get the best result.
- The model needs to enunciate properly. For dialogue this is not as often an issue, but some singers have a real challenge with this. Similarly, the audio to be replaced needs to be enunciated well to replace properly.
- The AI Modeler is slow, taking roughly 40 minutes to model 2 minutes of vocal.
- Because it is modeling a vocal, the pitch/key of that model is embedded into the model itself, so this needs to be taken into account for the replacement. Although there "are" tools in Resolve for this, Melodyne is a better choice for its precision capabilities.

That said, I was surprised by the results. Applying a model is far quicker (15 seconds or so), and though not good enough quality for a lead, for harmony/backing tracks the results are usable. After seeing this in action, I think it is working under the same premise as Realivox Blue; i.e., the AI script is churning through the audio to match all the consonants/vowels/diphthongs it can, and without a full map it makes its best guesstimate during replacement (and may butcher some). The built-in AI models seem to have a full articulation map available, but again the tone/pitch/inflection in those is dialogue, not singing. Of course I went way beyond its intended purpose for my test, but I wanted to set the bar fairly high to get a feel for its capabilities. The tweaking required for dialogue replacement is far less.

Side Note: I buried Resolve while it was in the middle of doing an AI model by multitasking and had to force the program to close. To my surprise, the voice modeler picked up where it left off when I re-opened Resolve (a good surprise for once). Lesson learned... when creating a voice model, better to check the status timer and walk away from the computer. Let it do its thing, then come back.
-
Export projects from Cakewalk for import into ProTools?
mettelus replied to misterindie's topic in Cakewalk by BandLab
+1 to this; you want to fully understand what they do and what they expect to receive from you. For mixing purposes, stems are usually dry (they cannot "unbake" FX you applied), and broadcast wav files are often used so that stems import at the proper timestamp. Again, talking to them to understand the exact requirements they have will give you a better start to things.
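A quick aside on why "broadcast wav" does that: a BWF carries a "bext" chunk whose TimeReference field is an absolute sample offset (samples since midnight) that DAWs use to spot the file on the timeline. Here is a rough sketch of reading that value, just to show where the timestamp lives; the file name is hypothetical, and your DAW's export/import dialogs handle all of this for you.

```python
# Rough sketch: pull the TimeReference (64-bit sample offset) out of a
# Broadcast Wave's "bext" chunk so you can see where the stem should sit.
import struct

def bwf_time_reference(path):
    with open(path, "rb") as f:
        riff, _size, wave = struct.unpack("<4sI4s", f.read(12))
        if riff != b"RIFF" or wave != b"WAVE":
            raise ValueError("not a RIFF/WAVE file")
        while True:
            header = f.read(8)
            if len(header) < 8:
                return None  # no bext chunk (plain wav, not broadcast wav)
            chunk_id, chunk_size = struct.unpack("<4sI", header)
            if chunk_id == b"bext":
                data = f.read(chunk_size)
                # TimeReference follows Description(256) + Originator(32) +
                # OriginatorReference(32) + OriginationDate(10) + OriginationTime(8) = 338 bytes
                low, high = struct.unpack_from("<II", data, 338)
                return (high << 32) | low
            f.seek(chunk_size + (chunk_size & 1), 1)  # chunks are word-aligned

offset = bwf_time_reference("stem_bass.wav")  # hypothetical stem file
if offset is not None:
    print(f"Stem starts {offset} samples from the reference point; divide by the sample rate for seconds.")
```
-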
20 hours is a good operational test. That Neve only draws 35W, so it is pretty small. I checked the Samson quickly and there were a few articles about grounding issues; most seemed to be either mode related (half-normal in particular) or from the use of unbalanced cables for any of the connections (one user was getting grief from plugging his guitar directly into the bay). I didn't dig into the Samson too much, but it sounds like a better plan anyway.
-
+1. Which Neve model are you using, Geoff? I did a quick read on several rack units yesterday, and they do not have a significant current draw to speak of, but the manuals do reference the ground as being critical. They even go so far as to mention not using power extensions. Given how low the current draw is on them, the issue may very well be the grounding Mark mentioned. As you have interacted with Neve already, a quicker route may be to provide them with a detailed layout of your setup and see what they say.
-
There are also a handful of VSTs that can take focus away from the DAW, so the spacebar (or any keyboard input) is not sent to it. Additionally, the option to give keyboard inputs directly to the VST may be enabled (upper right of the VST window). Does this behavior still occur if no VST windows are open at all?
-
This stands out like a big red flag for me. Is it possible you have those project/audio files set as read-only? Sometimes that happens when simply copying/moving files from another (external) drive. It made me wonder if that could also be the cause of the crashes, but Cakewalk should throw an access violation (although it might not if it is a crash to desktop).
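If it helps, here is a quick way to spot (and clear) read-only flags across a whole project folder; just a rough sketch, and the folder path is made up for the example. The same thing can be done from the folder's Properties dialog in Windows.

```python
# Rough sketch: report files that are not writable under a project folder and
# clear the read-only attribute (common after copying from an external drive).
import os
import stat
from pathlib import Path

project_dir = Path(r"D:\Cakewalk Projects\MySong")  # hypothetical project folder

for path in project_dir.rglob("*"):
    if path.is_file() and not os.access(path, os.W_OK):
        print(f"read-only: {path}")
        path.chmod(path.stat().st_mode | stat.S_IWRITE)  # clears the read-only flag
```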
-
This was my reaction as well. If the performance wasn't great, song analyzers won't recognize them anyway, and with 6,000 of them, that is a lot of time to invest in something. You can still download MIDI files if you putz around on the internet (sites like BitMidi, Midis101, and the like), so it may be a better route to seek out something you want to work with. Over the years, the MIDI renditions have gotten far better than they were 30 years ago. Also, MIDI files that were well done often have blank tracks in them with the song title, artist, and performers as the track names. Those you can easily identify just by opening them with Cakewalk if TTS-1 is enabled and no MIDI output is selected. I have a slew of MIDI files from that era too, many of which I listened to only once; I know what they are, and I have never even considered using them.
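As an aside, if anyone wants to triage a big folder of old MIDI files without opening each one in the DAW, those embedded track names are easy to dump with a script. A quick sketch using the mido library (the folder name is just an example):

```python
# Quick sketch: print the track names stored in each MIDI file, since well-made
# files often carry the title, artist, and performers there.
from pathlib import Path
import mido

midi_dir = Path("old_midi_files")  # hypothetical folder of .mid files

for midi_path in sorted(midi_dir.glob("*.mid")):
    names = []
    try:
        for track in mido.MidiFile(str(midi_path)).tracks:
            names += [msg.name for msg in track if msg.type == "track_name"]
    except Exception as exc:  # 30-year-old files can be malformed
        names = [f"unreadable ({exc})"]
    print(midi_path.name, "->", "; ".join(names) or "(no track names)")
```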
-
Is an inductive motor kicking on/running when this occurs? AC units, dryers, and refrigerators are typical heavy-draw items (another indicator is if lights flicker when the AC kicks on, or when a vacuum is started). Again, a conditioner is like a giant capacitor in a way; it also needs to draw power to work and will only last for so long before it loses capacity as well (it might last long enough to cover the gaps you are seeing, and hopefully it will).

Have you ever gone down and checked the amp ratings on the breakers for the house? The main is the biggie, since that controls the entire box, but the sum of (and loading on) each room can affect the entire house (internal to the box, breakers are simply attached to bus bars, so the power distribution is common to them all... which is why I asked about starting/running motors). I would check that, or have an electrician look at it, just to get a feel for what your situation is. I am the second owner of this home and the breaker box had essentially no mapping to it when I moved in, so I took the couple of hours to map out the distribution in the house. I have actually tripped the breaker (selective tripping) to my studio running all the equipment on higher power (a whole lotta amplifiers), but that was just screwing around and at a level that would deafen me with any prolonged exposure.

To that end, also check the power draw on the equipment you have connected to your studio specifically (especially amplifiers of any type)... wattage divided by voltage (assume 120) will give you the current draw, as shown in the sketch below. If those combined exceed that room's breaker rating (I am assuming it is a slow-blow/time-delay breaker, which is why it will endure the sag without tripping), the circuit may already be strained from the powered equipment on it. Again, that goes back to the breaker box/electrician. If the main can handle it (important point), you can beef up the current to a location, BUT the rating of the Romex installed must be known, since that current cannot be exceeded (it can cause a fire). To pull those kinds of current, definitely involve an electrician. When I put in my pole barn I ran 80 amps off the main, but the cabling for that is not typical Romex... that stuff is so thick (100-amp underground distribution cable) that I kept the extras just in case I needed to replace the starter wire on a vehicle at some point.

Hopefully the Furman will get you where you need to be! I wanted to throw the above out just in case.
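To put numbers on the wattage/voltage point above, the arithmetic is just amps = watts / volts, summed per circuit; a little sketch with made-up gear and a made-up 15 A breaker rating (substitute your own equipment and panel values):

```python
# Back-of-the-envelope circuit check: amps = watts / volts, summed for the gear
# on one circuit, then compared against that breaker's rating. Example numbers only.
VOLTS = 120.0
BREAKER_AMPS = 15.0            # hypothetical rating of the studio circuit

gear_watts = {                 # hypothetical equipment on that circuit
    "power amp": 750,
    "powered monitors (pair)": 300,
    "PC + interface": 450,
    "rack preamp": 35,
}

total_amps = sum(gear_watts.values()) / VOLTS
print(f"Total draw: {total_amps:.1f} A on a {BREAKER_AMPS:.0f} A breaker")
if total_amps > 0.8 * BREAKER_AMPS:  # 80% is the usual continuous-load guideline
    print("That circuit is already working hard; time to involve an electrician.")
```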
-
Another quick comment on the above: you can also pre-coat strings with something to protect them better, both before and after use. Fast Fret is one option, which I believe is liquid paraffin (a.k.a. "white mineral oil"). A friend of mine swore by that stuff, and a can will probably last for years. It is mainly thought of for making strings slick to remove string squeaks, but it is also a temporary protectant, just like wax on a car.
-
That is a good question. If you are not in a rush, sites like CyberpowerPC (linked to all their NVIDIA laptops) offer rolling sales on various products, so a little patience can save up to $400. Unlike their desktops, it seems their laptops are all pre-built, so there are no options to customize components (not even the RAM capacity, it seems), and the default boards appear to be Intel Raptor Lake HM770. If you have a specific component/build type you are seeking, a Google search on those will often cue you in to the vendors making them.
-
+1, I haven't had the time to kick the tires on 20 yet, but will soon. One of the nice adaptations across the board with video editors is that they are realizing (and catering to) their "hobbyist" markets more. With the prevalence of short-form/portrait video apps, there has been a definite streamlining of being able to work with and deliver those with a lot less pain than years ago. The "vertical workspace" in Resolve 20 actually stuck out to me for this reason... there are a lot of people who want to "just edit and finish" something rather simple without the need to "deal with" the format or all of the other features they may not care about. The rule of thumb used to be "shoot landscape even if delivery is portrait" (losing resolution, but avoiding the pain of a portrait-to-landscape scenario), but it is nice to see that falling by the wayside as the tools have evolved.
-
There are quite a few comparisons between the two apps, but they are often biased one way or the other (you can tell from the embedded advertisements). I finally remembered where I saw a nice overview comparison, and it was a sidebar in a video about editing 3D drone footage... of course I cannot readily find it. The bottom line there was that in addition to the Premiere Pro subscription, you also need a Mocha Pro subscription, which ends up being $600+/year combined (it can be over $1000). Resolve Studio is a one-time purchase ($295) with free upgrades for life (it has gone on sale for less, most recently $235, but that is rare), and it includes most of the FX you would get Mocha Pro for. In one video I just saw (bookmarked for the price part), the guy joked that price is the "low hanging fruit" (everyone knows this), and his bottom-line reasoning was that using Resolve is like playing a fluid video game (which can get resource heavy), versus Premiere, which can stutter and glitch as it works out how to repaint itself when shifting workflows/tasks.

[Sidebar of my own with a little more background] That said, the reason for the price difference is also that Blackmagic Design makes money from (sometimes VERY expensive) cameras/hardware. Resolve Studio is actually free if you choose to go that route, for obvious reasons. Similarly, the 3D camera I got is from Insta360 (the ONE RS Twin Edition), which also has a free Insta360 Studio app (both for phone and computer), but you can only use it if you have a camera linked/registered. That app actually taps into the camera memory, so you can do a lot of rough editing/camera control from your phone in the field (remote control), and the desktop app has advanced leaps and bounds in the past 5 years, allowing for very quick initial editing before pulling footage into a heavier hitter like Resolve or Premiere.

My 3D interest initially stemmed from the ability to track a moving target without needing a camera operator (the object "tracking" is done in post-production), and the fact that the lenses each see 220 degrees, so they auto-zip that overlap to remove the selfie stick, stand, or drone carrying it (but you can see the shadow of it on the ground in the link above). I have never had the guts to submerge it, but it is also waterproof to 16 ft. In just the past 5 years, a lot of their smaller cameras have gotten more capable (higher frame rates and resolutions), but the ONE RS is still the same model I got. Some really nice features with those cameras are the "FlowState Stabilization" (for bebopping around while doing things) and "Horizon Lock" (to keep the view level)... these are shown in the link above. Their cameras actually record gyro response in the video footage, so you can get movie-quality results with the ability to field edit before sitting in front of a computer for the final post-production.

Quick edit: Their Insta360 Studio app also allows for exporting a "flat print" of a 3D video (with the focal point and focal-point tracking of your choosing), so you can pull that rough edit into a video editor that cannot handle 3D footage.