Everything posted by mettelus

  1. I had to take a step back and think about that video again. There is a bit more to it depending on your guitar (string length from eyelet to peg, i.e., where the string is physically bound and cannot move) and the string gauge chosen.

Start with extremes as an example: 200# of spring tension on the claw (a typical 6-string set runs 100-120# of total tension) will hard stop the tremolo at the maximum tension allowed (it cannot physically go any higher), but would still allow it to bomb... some Strats are set like this, at least ones I have played... they only go down. No springs at all would hard stop the tremolo at the least tension allowed (it cannot physically go any lower), but would allow the tremolo to raise string tension. So, as the video referenced, the "sweet spot" for the claw position and springs used is at the center point of the tremolo's motion, allowing for equal amounts of tension and release.

As for the claw angle, this comes down to a couple of factors... the gauge/tension per string and the string length from eyelet to peg (assuming the nut has minimal resistance). What was stated in the video is accurate for "typical" string sets and a Strat-style string length (6 tuners on a side, with the high E the longest), so variations in those will affect each string differently for the same tremolo motion. Definitely research the tension of your strings (most manufacturers print it right on the package). In my case, I run balanced-tension strings on 3-pegs/side guitars... the high E is the lowest-tension unwound string and the low E is the lowest-tension wound string, with both being the shortest string length. Because of this, the claw being parallel to the mount is the best case (for mine) - equal tension applied across all strings gives the most even pitch change. I really can only adjust the "sweet spot" to get the normal tension to the center of the tremolo motion, nothing more.

Without delving too deeply into the physics, the tremolo applies equal changes to string length (eyelet to peg), but the sonic portion of the string (nut/fret to saddle) will be slightly different for each string depending on how it sits "steady state" with the tremolo untouched. As with anything, it is more about adapting to what those values are for a particular setup/guitar configuration.

Quick Edit: Forgot to mention locking nuts (and even Steinberger's headless guitar design)... both of those concepts try to keep string length the same for all strings, which also makes tremolo motion more uniform across them.
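As a back-of-the-envelope illustration of that balance point (the per-string numbers below are made up for the example, not from any specific string set; the only point is that the springs have to cancel the total string pull for the bridge to float at the center of its travel):

```python
# Rough sketch of the "sweet spot" balance described above.
# Per-string tensions are illustrative placeholders, not measurements from a real set.
string_tensions_lb = [17, 16, 17, 18, 19, 18]   # example per-string tensions at pitch
total_string_pull = sum(string_tensions_lb)      # ~105 lb, in the 100-120 lb range noted

num_springs = 3
pull_per_spring_lb = 35.0                        # effectively set by the claw screws
total_spring_pull = num_springs * pull_per_spring_lb

print(f"strings pull {total_string_pull} lb, springs pull {total_spring_pull} lb")
if total_spring_pull > total_string_pull:
    print("bridge sits against the body (dive-only setup)")
elif total_spring_pull < total_string_pull:
    print("bridge tilts forward (strings win)")
else:
    print("bridge floats at the center of its travel")
```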
  2. Which digital piano model are you running? CASIO's website is a bit convoluted so things can be hard to find, but I would start with the manuals section for your model. One of the manuals that may pop up at the bottom is "MIDI Implementation," which will detail the nuances of your particular model. Some models note that they are "compatible" but not "fully compliant," so keep that in mind while searching.
  3. Nice video. I have to delve into that a bit when I get a chance. I have a guitar with a Floyd knock-off but the guitar itself never thrilled me much so I never looked into tweaking anything on it (yet)... just a wall ornament for now. My main has a Kahler clone in it (no claw) and I rarely even put the bar in it; that one floats but I axle greased the hinge pin decades ago now and never had issues with it. But... the IYV I just got with the Wilkinson seems to be fine but I definitely like him targeting steps with that claw angle so that is something to fiddle with for sure in the near future. Side note: I ran the crap out of the IYV this past weekend and the PUPs are perfectly fine (and different enough that the switch shifts tones). I need to update that thread at some point, but when I do finally do a string change I am going to Brasso the logo off those PUPs and keep them. The first song that came to mind for me to stress test that tremolo was the intro to Giant's I'm A Believer... even though that song has gotten air time, the intro (first 1:15 on it) is rarely included, and is one of my favorite intros.
  4. The "from scratch" aspect might be a big pill to swallow, very similar to programming a synth, or Melda's Multiparameters, from scratch. It might be easier to work with a preset gesture that is close, then disable all but one FX at a time so you can deep dive into each separately at first (extreme highs/lows on parameters often help here)... then start layering them back up as you wrap your head around them. Be sure to save gestures as you go once you tweak them to your liking, but dissecting existing presets might be a better approach for learning. Also work with a simple sample at first, so you know the base sound well and what is going on is obvious (especially how the buffer is grabbing the material it is working with).
  5. Do you happen to have Stutter Edit 1? I need to re-open SE2 to look at it again, but the GUI took a step backward IMO. I just took a look at the SE2 documentation and realized (again) why I do not like it. In SE1, pretty much everything is on one page and has identical controls for each parameter you enable, including a visualization of your buffer position per effect (which is HUGE for me to be able to use it). They added more to SE2, but being able to actually see how each FX is working with that buffer disappeared from it. That is one VST where it would be really nice to be able to buy an older copy, but iZotope has been religious about purging the market of older software at release time.
  6. I grew up with piano and didn't take up guitar till I was 18, but learning guitar taught me more about piano/music theory than anything I did with piano ever did (pretty much like studying a foreign language teaches you more about English than you learn in English class). I had borrowed an acoustic with a pretty badly bowed neck from my Little Brother's aunt, then a friend sold me his old electric setup pretty cheap. I always liked the acoustic intro to Tesla's Love Song, so I started with that, then when I got the electric I sat down to figure out the solo. One weird thing that came out of that is the intro has a Bm in it, but most of it is intervals, so I played that Bm with my pinky across the F# and B, and I still do that about half the time to this day. My left pinky has a noticeable hook in it because of that... one of the perils of teaching yourself. I know I have mentioned this before, but the coolest learning experience starting out was the night the power went out. I learned more from playing in the darkness for a few hours than I had in the 6 months prior... at least I got the dependency on playing with my eyes out of the way early on, so I am thankful for that.

Quick edit: I had to chuckle... I didn't think this song was ever posted online. My roommate first year had let me borrow a copy of a band called Babylon A.D., so when I got the electric the first lick I taught myself was the lead-in on "Bang Go the Bells" (and I just kept playing it for probably 20 minutes). Greg was next door then, poked his head in and said, "I USED to like that song." LOL... funny how when the rest of the song is missing, parts are only interesting to the person playing them!
  7. Stutter Edit is rather complex when you delve into the guts of it. Basically, it is fed a buffer and can then manipulate it at will, but that is where the complexity starts. It can be set to take in no more audio (so it holds the sample being processed) if desired, but the slicing and manipulations are all done automatically based on the presets, which can get intricate to say the least (some presets are simpler than others to start with, depending). One thing that does throw folks at times is that you can shut the "Stutter" off, so the FX can be used in a more traditional fashion. While the stutter is fun to play with, I have yet to use it myself in anything... maybe if I do a cover of Rock Me Amadeus at some point. Although, to your point, if you want pinpoint accuracy on what gets processed, that is sometimes simplest to achieve by copying that portion of audio to another track so the input buffer is constrained to only that copy.
  8. Stutter Edit 2 will do that more easily (I still prefer the GUI from Stutter Edit 1 for some odd reason). That snippet from the OP would require running multiple tracks (stems, or even individual elements for some of the effects) through multiple Stutter Edit 2 instances (locked to the DAW tempo), since the sampled elements being buffered are discrete. Stutter Edit needs to see them discretely to achieve that, so it couldn't be done on a single, combined track. Side note: I haven't looked at Stutter Edit in a while; the other reason I still prefer 1 over 2 is the visualization of the effect when making edits.
  9. Have you tried changing the order of NOVA and the Gate in your FX chain? Traditionally a gate is almost always first in a chain, since it simply controls what signal is allowed to pass. This would also satisfy the "another plugin after the Gate" point mentioned above, but NOVA also has compression functionality, so that interaction may be contributing as well.
  10. +1 Google Translate will translate images (via their website) if the resolution is high enough. It will also interactively translate text with your phone's camera (via the phone app), which is handy if you are traveling and need to read something in a language you do not read well (it actually replaces the text on the live camera image with the translation, which is rather cool). It looks like simplified Chinese, but I do not recognize it. Are you running any new VSTs, or have you used any other apps that accessed the device? Depending on the programmer, even if an app is scripted to be in English, some of the warnings may remain in the native language. An example is Insta360, a Chinese brand: even though their app installs and runs in "English," some dialog boxes spit out Chinese as well.
  11. Quick note embedded in the bottom of the KVR voting email from yesterday: "We'd also like you to be the first to know that, barring any problems in testing, we're releasing Scaler 3.2 on Wednesday November 12th. It's a major upgrade with lots of new features, functions and improvements that takes Scaler 3 to another level - we can't wait to share!"
  12. I am curious along with the OP as well. As mentioned above, Aux Tracks/Patch Points were specifically introduced for this purpose, but signals are traditionally recorded dry so that they can be adjusted dynamically in post-production. However, certain features are more static in nature, and those are embedded in some interfaces. PreSonus interfaces (via Universal Control) include their Fat Channel (HPF, gate/expander, compressor, EQ, limiter, and reverb), which is more universal and sometimes desirable to bake into the recorded track (an HPF is often used on anything that is not bass or kick). The reason dry is the default is that it keeps options open for composition/post-production... even when recording guitar from hardware it is very common to also record the dry DI output so it can be re-amped or processed separately as things flesh themselves out. Also, some FX do not take kindly at all to baked-in time-based effects (delays, reverbs, etc.), so if you baked those in you could easily buy yourself into re-tracking those parts... FX (and things like Melodyne) do not see those tails as delays/chorus/reverbs, but as signal, so they will blindly process them as well (rarely what you want).
  13. I bold-faced terms below so you can Google and read up on them. Okay, now I have a clearer picture. What you are seeking is something to visually identify frequency collisions, and then apply mirror EQ techniques to deal with them. While SPAN will identify frequency collisions, it is a little clunky, so I use MeldaProduction's MMultiAnalyzer for this (it goes on sale for 50% off regularly, especially Black Friday, and you can get $10 more off your first purchase if you sign up for Melda's newsletter).

However... having said that, I would instead recommend a process called ducking for what you are trying to achieve. It is the same process radio and documentary TV have used for years. Basically you put a compressor that will accept a side-chain input (Sonitus is one of them) on your main track, and feed the vocal track into the side-chain (via a send, post fader). That makes the compressor's threshold trigger from the send (not the track it is sitting on), so when the vocal is speaking you can compress (aka "duck") the backing track 2-3dB, so that the vocal is clear to the listener. A psycho-acoustic phenomenon called frequency masking is your friend here... that is where sounds at the same frequency are drowned out by the louder one (and it only needs to be 2-3dB, nothing egregious).

This video is more of an FYI, but covers a lot of ground on mixing techniques in 10 minutes, especially methods to identify collisions with a parametric EQ (a "frequency microscope") and common mirror EQ techniques (which you can do with almost any parametric EQ). Again, in your specific case I would recommend ducking instead (it is more universal, and you have one track you want heard without worrying about "making it fit," as it were).
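For anyone who wants the mechanics spelled out, here is a rough, offline sketch of that ducking behavior (the function and parameter names are made up for illustration; a real side-chain compressor does this continuously with proper attack/release smoothing):

```python
import numpy as np

def duck(backing, vocal, sr, threshold_db=-30.0, reduction_db=3.0, release_s=0.2):
    """Pull the backing track down while the vocal (the side-chain signal) is active."""
    # Crude envelope follower on the vocal: peak level with an exponential release
    env = np.abs(vocal).astype(float)
    alpha = np.exp(-1.0 / (release_s * sr))
    for i in range(1, len(env)):
        env[i] = max(env[i], alpha * env[i - 1])
    env_db = 20.0 * np.log10(np.maximum(env, 1e-9))

    # Wherever the vocal is above the threshold, attenuate the backing by 2-3 dB
    gain_db = np.where(env_db > threshold_db, -reduction_db, 0.0)
    return backing * (10.0 ** (gain_db / 20.0))
```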
  14. ^^^^ Check and see if the "Allow Only One Open Project at a Time" is checked in your preferences from the image above.
  15. What FX do you have active in the project? Some FX don't take kindly to small buffers, or can cause issues if they have look-ahead functionality. Unbounced Melodyne FX are one example, with mastering plugins and convolution reverbs being others (depending on settings). When tracking/recording, low buffers (32-128) and most FX bypassed will yield good enough latency to record. During mixing/post-production, buffers may need to be increased (512, 1024, 2048) based on what FX are in play at that point. If you are trying to record into a project that has already seen some mixing work, the global FX bypass will "usually" fit the bill to allow the lower buffers needed, but again, unbounced Region FX are something to be aware of as well. Because Melodyne separation data is saved in the CWP file, a good indicator of unbounced Melodyne edits is a CWP that is larger than typical (in the MB+ range). If Melodyne is active, higher buffers are needed to accommodate the ARA functionality.
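For a rough sense of why those buffer numbers matter, the one-way latency is basically buffer size divided by sample rate (a quick back-of-the-envelope sketch assuming 44.1 kHz and ignoring driver/converter overhead):

```python
# One-way buffer latency in milliseconds = buffer_size / sample_rate * 1000
# (real round-trip latency is higher once driver and converter overhead are added)
sample_rate = 44100
for buffer_size in (32, 128, 512, 1024, 2048):
    latency_ms = buffer_size / sample_rate * 1000
    print(f"{buffer_size:5d} samples -> {latency_ms:5.1f} ms")
# 32-128 samples stays around 0.7-2.9 ms, comfortable for tracking;
# 1024-2048 samples is roughly 23-46 ms, fine for mixing but sluggish to play against.
```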
  16. I am not familiar with the new layout, but I assume the depth is the threshold (no idea, hopefully someone can chime in with the real answer)? That is simply what causes the gate to open and allow the signal to pass, so it will vary depending on the level of the signal coming in. I am not sure what "pops and crackling" entails, as that seems more like a buffering issue, but what does stand out is the release time being so low for toms. A tom needs to ring out, or it will get chopped off (is that what you are hearing?). This is a nice "cheat sheet" I just found for gating drums, and without numbers on that "Release" knob I am assuming it is fairly low in value, which will chop off the tom unless the input gain is jacked up so the gate is basically always open.
  17. I just checked my attachments and it includes PMs... I deleted 3 attachments that were in PMs so that space was recovered (same thread), but then went into the thread and they are still there.
  18. Minimum tech specs (SDXL-Turbo):
      Operating System: Windows 10 or Windows 11 (64-bit, latest updates)
      Processor: Multicore Intel Core (11th Gen or newer) or AMD Ryzen (2 GHz or faster, 64-bit support)
      RAM: 16 GB
      GPU: Integrated or discrete graphics (Intel Iris Xe or newer)
      Hard Drive Space: 15 GB available (models downloaded separately)
      Monitor Resolution: 1280 x 800

Side note: It seems Intel is actually partnered with Distinct AI... I came across an article published by Intel on them while searching.
  19. Replying on my cell, the bottom of this box says 4.88MB max (total and file). Do you have other attachments taking up the missing space? (Reports of shrinking appendages are going to scare off forum members (for fear of their members)!)
  20. Yeah, this has taken the place of folks accidentally hitting O and enabling offset mode in the past... so many posts on that one until the keybinding was removed. A red halo effect around the entire GUI, like when you are near death in a video game, would be obvious too!
  21. Interesting. Render FX doesn't even show on Distinct AI's product page (??), but from the teaser video it looks like a beefed-up version of Vision FX. Their AI generation runs on the local system, so it can get intense on your GPU. Between the last offering and this one, they also released GPU models to make generation faster and more efficient. The upside is no credits or storing ideas on a cloud server, but the downside is that it can be significantly slower depending on the detail involved. Still scratching my head on this Render FX (it is definitely new), as it includes aspect ratio and other features that Vision FX does not (the teaser mentions post-processing, etc.)... it is just odd that it is not on their product page. Also of note is the lack of Painter, PaintShop Pro, and VideoStudio... both PaintShop and VideoStudio were suspended ("put on hiatus") in 2023 to focus on Painter, but Painter has not gotten an update since 2023 either. All three of those apps are now 3 years old.
  22. Don't get wrapped around the wheel on this, just focus on describing the issue as much as you can (especially the OP title... you can also edit the OP to change the title as needed). As long as what is occurring is clear, folks will often respond with the terminology you need and even bold face that as @tparker24 did above. Once you have that, it helps you to search things on your own. It is all part of the learning process. What wraps us around the wheel is when OP has so little detail it could be a dozen things... and then the OP never returns!
  23. Even Melodyne is working with the file written to disc during recording. If this is someone you work with fairly frequently, the goal of improving that recorded file (to minimize post-production) would be another focus to consider. If you Google "Apps to Train Voice Pitch," there are several out there, with some more like games to make them more fun. Some of those apps also focus on the unique range of the singer, which is another thing that should be taken into consideration with song choice/composition. As alluded to above, if post-production becomes tedious (with any instrument), a large portion of that falls back onto the performance, and the performer should also be improving their art rather than relying on you for post-production.
  24. I have rarely seen this be effective unless the correction is minuscule. The "real-time" versions tend to add CPU overhead and miss the mark on corrections fairly readily. If you need to make significant changes, it is better to do them in post-production with Melodyne, either the whole track in one go with its macros, or surgically for specific issues. Also bear in mind that if this is in an FX rack, it doesn't alter what was saved to disc (it is simply processing it). You would need to bounce that FX to get the file corrected, unless you plan to run the FX forever (not recommended either).
  25. I would recommend doing a cleanup of that project, especially if it is in its final stages. Before you start, do a File->Save As... with a new name (you can put it in the existing project folder). Once that is done, I would do the following:

Delete all of the Archived tracks. If you are not using them, get rid of them.

Trim off the right end of the project (unless your export end really is 321 bars?). Some of the tracks appear to have no data but still go on (Tracks 5, 6, and 9 from the OP). If this is specific to only certain tracks, you can split those and delete everything to the right, especially if they are truly empty... is that automation data in those?

Do a deep dive into what FX and automation you are using. Some FX are not CPU friendly at all, but unfortunately Cakewalk doesn't give you a list of the FX running in the project and the load/lag associated with them. Since this is happening in the same place, I am highly inclined to assume an FX is choking.

When tracking, you want buffers low, but when mixing you want to bump those up in preferences (to 512, 1024, or sometimes 2048). This gives the CPU more time to fill buffers and is required if you have unbounced Melodyne Region FX in the project.

Check the event list for where the crash is occurring, just to see if there is something obvious in there (this is unlikely, but good to check anyway).

If that will offload at some point, I would also consider creating another project at 44.1/24 and drag-and-dropping the new cwp from your browser into it (that will populate the new project with the data from the cleaned-up one), then do another Save of that second new one so it is in 44.1/24 format.

Just bear in mind that even with a capable computer, the right recipe will drive it to its knees. Rather than saving/keeping everything (that is better done by saving versions as you go), try to keep things simpler in the working version.