Everything posted by bitflipper

  1. What's the other program? I ask because I'm wondering if you couldn't do everything within Cakewalk, i.e. syncing the VO with video.

    Voiceover processing uses all the same tools as any vocal track, except that the emphasis is on clarity and intelligibility rather than esthetics. That means accentuating the upper midrange frequencies, noise reduction and heavy compression.

    Spitfish works well (at least, I think it still does; it's a 32-bit plugin, and some 32-bit plugins didn't make the transition to 64 bits smoothly). I can't think of any other freebie that does a better job (now, if you want to spend $$$, FabFilter's Pro-DS is arguably the best one around). There are also techniques that predate dedicated de-essers, and those cost nothing.

    VOs are almost always mono, but there are "stereoizers" that manufacture left-right differences. For example, MeldaProduction has a couple of plugins for that, and IIRC some simplified versions are included with their free bundle. Grab that free bundle, as there are GOBS of goodies in there to play with.

    I'll also second Glenn's recommendation for the Sonitus suite, which has pretty much everything you'll need. Gates are often used, but a standalone gate plugin usually isn't needed as long as you're recording in a quiet, non-reverberant space. There is one included in the Sonitus suite anyway. Compression is important, because you want levels to be very, um, level. The Sonitus compressor is good, and if you like visual aids it has one of the best. However, there may be times when you want a FET-style compressor, but Cakewalk's got you covered there, too, in the ProChannel.

    And yes, reverb might occasionally be called for. Usually, it's to mimic a physical space to match the sound in a video. Convolution reverbs are specifically made for that kind of thing. Want to make your voice sound like it's inside a garbage can or a concrete culvert? They can do that.
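    Since a de-esser is conceptually simple, here's a toy split-band sketch in Python/NumPy to show the idea: isolate the sibilant band, follow its level, and duck it whenever it gets loud. This is purely illustrative (it is not how Spitfish or Pro-DS work internally), and the band edges, threshold and reduction amount are made-up starting points.

    ```python
    # Toy split-band de-esser: duck the 5-9 kHz band when it exceeds a threshold.
    # Illustrative only; real de-essers use better filters and gain smoothing.
    import numpy as np
    from scipy.signal import butter, sosfilt

    def deess(x, sr, lo=5000.0, hi=9000.0, thresh_db=-30.0, reduce_db=-8.0):
        sos = butter(4, [lo, hi], btype="bandpass", fs=sr, output="sos")
        sib = sosfilt(sos, x)            # the sibilant band
        rest = x - sib                   # crude complementary split
        env = np.abs(sib)                # envelope follower with ~5 ms decay
        alpha = np.exp(-1.0 / (0.005 * sr))
        for i in range(1, len(env)):
            env[i] = max(env[i], alpha * env[i - 1])
        env_db = 20 * np.log10(env + 1e-12)
        gain = np.where(env_db > thresh_db, 10 ** (reduce_db / 20), 1.0)
        return rest + sib * gain
    ```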
  2. Those special keys are called "keyswitches", and they're the standard way to switch between articulations within nearly all sampled instruments. Unfortunately, they can be a PIA to work with because a) you have to commit to memory which keyswitches do what, and b) every instrument implements its own keyswitches; there is no standard. Do yourself a big favor and take some time to learn about Articulation Maps. They alleviate the need to keep looking up keyswitches as you compose, since you're letting Cakewalk remember them for you.
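    For the curious, here's what a keyswitch looks like at the MIDI level, and what an articulation map is doing for you behind the scenes: a short note outside the playable range, placed just before the phrase. The keyswitch note number below is invented (every library defines its own), and the sketch assumes the mido package is installed.

    ```python
    # Write a MIDI file that sends a (hypothetical) staccato keyswitch before a phrase.
    import mido

    STACCATO_KS = 24  # C0 -- made-up value; check your library's manual

    mid = mido.MidiFile(ticks_per_beat=480)
    track = mido.MidiTrack()
    mid.tracks.append(track)

    # Keyswitch first: short, quiet note well below the instrument's range...
    track.append(mido.Message("note_on", note=STACCATO_KS, velocity=1, time=0))
    track.append(mido.Message("note_off", note=STACCATO_KS, velocity=0, time=10))
    # ...then the actual phrase plays with the new articulation.
    track.append(mido.Message("note_on", note=60, velocity=100, time=0))
    track.append(mido.Message("note_off", note=60, velocity=0, time=480))

    mid.save("phrase_with_keyswitch.mid")
    ```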
  3. You're not doing anything wrong. I'd guess it's a quirk of the instrument you're using. One of my absolute favorite libraries is Amadeus Symphonic Orchestra because it sounds so good, but the way it deals with CC1 is frustratingly inconsistent.
  4. As noted above, there are a number of ways to do this. You could, for example, split the vocal clip to isolate that one phrase and apply the delay as a clip effect. Or move the affected phrases into a new track and apply the delay there. However, I'd suggest automating the delay's wet/dry mix instead. That will give you greater flexibility as you tweak and experiment with your mix, e.g. accentuating the delay more on some phrases than others or perhaps deciding to apply the delay to other parts as well. Taking the time to learn how to do automation will open up a world of possibilities.
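    If it helps to see what the automation is actually doing, here's a toy NumPy sketch: the wet/dry envelope is just a time-varying gain on the wet signal, which is essentially what the DAW computes when it reads your automation lane. All the numbers here are arbitrary.

    ```python
    # Automating a delay's wet mix by hand: ramp the echo in on one phrase, then out.
    import numpy as np

    sr = 44100
    dry = np.random.randn(sr * 4) * 0.1            # stand-in for a 4-second vocal
    d = int(0.3 * sr)                              # 300 ms slap delay
    wet = np.zeros_like(dry)
    wet[d:] = dry[:-d] * 0.5

    env = np.zeros_like(dry)                       # wet-mix automation: off...
    env[sr:2 * sr] = np.linspace(0.0, 0.6, sr)     # ...ramp up on the phrase...
    env[2 * sr:3 * sr] = np.linspace(0.6, 0.0, sr) # ...and back down after it

    out = dry + wet * env                          # only that phrase gets the echo
    ```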
  5. It's Ctrl+Y here. I seem to recall that it did change once, but that was years ago. I remember that because I spent a day putting a bunch of them back to where they used to be.
  6. "Mastering Audio", IMO, should be required reading for anyone approaching this stuff as a serious discipline. However, for those too cheap to buy the book, this video pretty much lays out its central premise for free.
  7. Warp 9

    Vintage Modern! Or is it modern vintage? Either way, this would have fit right in with my record collection circa 1978, alongside Larry Fast and Vangelis. Although I guess you'd need to crank the reverb to 11 to compete with the latter. All that Oberheim silkiness makes me smile.
  8. 2 dB is a pretty small difference. Even a small discrepancy between FX on the two tracks will result in different levels. We often forget that EQs change levels, as do most effects. More than once I've wondered "how'd my vocal suddenly get louder?" only to remember "oh, yeh, I just added a send to the reverb bus".
  9. Imagine if the group and reverb busses were separate physical boxes with cables connecting a common sound source to each of them. You'd need two cables, and unplugging either of them results in the box it's routed to going silent. That's what soloing does: it essentially disconnects every other connection except the one being soloed. If you want to also hear reverb, which is a separate bus, you have to "solo" that bus as well.
  10. The clue is in "when I play them back together". When you combine any two (or more) tracks, the result is the algebraic sum of them. If they are identical, i.e. one is cloned from the other, then the level will be doubled. Not just the peaks, although it'll be peak values that are most noticeable. If the tracks are not identical, they'll add together in some places and subtract from one another in other places. That's how you can end up with a mix on the master bus that goes into the red even though none of your individual tracks do.
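    You can verify the arithmetic in a few lines of Python: summing a clone doubles the amplitude (about +6 dB), while two different, uncorrelated tracks at the same level add only about +3 dB RMS.

    ```python
    # Summing identical vs. uncorrelated signals: +6 dB vs. ~+3 dB.
    import numpy as np

    rng = np.random.default_rng(0)
    a = rng.standard_normal(100_000) * 0.1
    b = rng.standard_normal(100_000) * 0.1

    rms_db = lambda x: 20 * np.log10(np.sqrt(np.mean(x ** 2)))

    print(rms_db(a + a) - rms_db(a))  # identical (cloned) tracks: +6.02 dB
    print(rms_db(a + b) - rms_db(a))  # uncorrelated tracks: ~ +3 dB
    ```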
  11. Wouldn't it be nice if the message included the name of the file it couldn't open? Might want to submit that as an enhancement request. Unhelpful though it may be, the message is more or less self-explanatory. A file referenced by the project isn't there anymore, or it has been renamed or moved. You just don't know which one. A project will often reference many files that you can't see, some of which you may not even need any longer. That's why your project still plays OK. Often, these are temporary files or archived copies to support non-destructive editing, allowing undos. Cakewalk generally doesn't like to throw anything away, including files that you'll never need again. You can clean that stuff up by saving your project under a new name/folder using Save As. You can usually avoid these issues in the first place by rendering any Region FX before your final save. For example, I never quit out of a project while I've got a Melodyne window open.
  12. Well, I can confirm that Noel is in fact not a robot, that he eats and sleeps like the rest of us. Still, you have to be impressed by the productivity of this small team. That's what happens when you take it out of the hands of marketers and let engineers run the show.
  13. I doubt it has anything to do with the recent update (I exported a FLAC just this morning); more likely the problem is within that particular project. Could be a clip is corrupt. It's rare but it does happen, usually due to a disk issue. More likely, it could be that there's some event way out on the timeline (e.g. a stray MIDI note or automation node). Press Ctrl+End and see where it takes you.
  14. Yeh, I was thinking of Al Schmitt as I watched that interview. IIRC he accrued more Grammys than anyone, and famously avoided all fx. Sadly, he passed away just recently.
  15. Donald Fagen's The Nightfly has long been on my list of reference recordings. I never got too excited about the music, but the recording itself is a prime example of how to do it right. The lesson is: get the sound you want before it hits the microphone, and you won't need to do much after that.
  16. Thanks, Geoff. Sounds like just the thing for a summer read. I like to recommend the original treatise on the subject, written by Hermann von Helmholtz in 1863. Since he was the first guy to try and explain harmonics to the world, the explanations are written for an audience for whom the entire concept was new. So it's really a beginner's book, but everyone should read it because it's available for free online as a PDF.
  17. To answer your other question, yes, you are getting old. Nothing you or I can do about that, other than to never stop learning stuff. The lesson from this is to not use instrument tracks. They were added as a feature not because they improve anything, but solely because other DAWs did it that way and users demanded it.
  18. That's doubtful. I would be very curious to see and hear a bit of the OP's recording, though. Something's clearly amiss. I have edited some pretty awful, er, edgy and extreme vocals, and Melodyne's almost never had a problem ferreting out the pitch component. On the rare occasions when it has had a problem it's always been because the recording wasn't clean and dry, e.g. it was distorted, noisy, or had fx added before the edit.
  19. Exactly. Interfaces deliver 24 bits, and DAWs subsequently turn that into 32 bits for one reason: it allows us more freedom to mangle and twist audio while burying our folly safely below the noise floor. As you say, we are never entirely freed from the limitations of the analog world. Not if we use microphones, anyway. We are constrained by the practical dynamic range of transducers. Too hot and they clip or pick up unwanted ambience, too quiet and they're noisy. We are also constrained by the acoustics of the physical spaces we record in and the inevitable intrusions from the world around us. And, of course, there's that biggest constraint of all, the limits of our own skills and talents. That's why I don't worry too much about the digital process itself. Of all the villains in our play, it's a minor bit player.
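    For anyone who wants the rule-of-thumb arithmetic behind this: each bit buys roughly 6.02 dB of dynamic range, so the converter's noise floor is almost never the weak link in that chain of analog constraints.

    ```python
    # Dynamic range per bit depth, using the ~6.02 dB/bit rule of thumb.
    for bits in (16, 20, 24):
        print(f"{bits}-bit: ~{6.02 * bits:.0f} dB dynamic range")
    # 16-bit: ~96 dB, 20-bit: ~120 dB, 24-bit: ~144 dB.
    # A very good mic-and-preamp chain manages maybe 120 dB, so the transducers
    # and the room run out of range long before the converter does.
    ```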
  20. Switch to WASAPI. If the integrated interface still doesn't show up, go into Windows Device Manager and make sure it hasn't been disabled. Byron, nobody's going to disagree with you about the superiority of Focusrite over RealTek. However, the OP explicitly said he wants a simplified setup for practicing. Not recording, not mixing or mastering. For that, the built-in interface should do just fine. I do the same thing when I go on holiday and only want to be burdened by my laptop and nothing else.
  21. I would never yell at you, Geoff. But you have just CONFUSED THE SH*T OUT OF POOR HARLEY FOR NO GOOD REASON. Of course, I'm talking about your enigmatic "Happy New Year" signoff. Many of us are still trying to get back in sync with the calendar as we crawl out from under the pandemic. You're not helping. Harley, with apologies to my esteemed colleague, my vote goes to FLAC. Unless you bought the infinite RAM option with your Soundcraft, what you don't want is to run out of memory while recording. FLAC lets you use half as much. If Geoff's right and WAV does give you one bit's worth of additional resolution, it still doesn't matter. Even a high-end ADC is really only accurate to 20 bits.
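    The disk-space math is easy to check. The WAV figure below is exact arithmetic; the 50% FLAC ratio is a typical result for music, not a guarantee.

    ```python
    # Back-of-envelope storage for a long recording: WAV vs. FLAC at ~50%.
    def wav_bytes(minutes, sr=48000, bits=24, channels=2):
        return minutes * 60 * sr * (bits // 8) * channels

    one_hour = wav_bytes(60)
    print(f"WAV, 24-bit/48k stereo, 1 hour: {one_hour / 1e9:.2f} GB")  # ~1.04 GB
    print(f"FLAC at a ~50% ratio:           {one_hour * 0.5 / 1e9:.2f} GB")
    ```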
  22. This seems to be a common issue, judging by how often topics like this come up in the forum.

    We need to remember that tracks can routinely switch from stereo to mono and back as they wind their way through the signal chain. Some plugins are inherently stereo and will produce a stereo output regardless of what's going into them (e.g. delays and reverbs), while others are inherently mono and always produce a mono output (e.g. most amp sims). Some plugin vendors offer mono-specific versions of their plugins, while the trend in recent years has been toward smart plugins that are able to preserve mono or stereo paths internally.

    To further complicate matters, there's the choice of track interleave setting. You may have to set the track interleave to stereo even if it's a mono recording if you're putting, say, a chorus effect on the track. Flipping the track interleave can cause unexpected behavior in some plugins. And don't forget that no matter what processing happens in the track, it's all going to wind up stereo in the end if it's routed to a stereo bus.

    It's a minefield. Cakewalk does a great job of figuring out when to switch between mono and stereo internally, but ultimately it's up to us users to keep track of what's going on under the hood, even if we can't always see what's happening there. Guitar tracks (well, most tracks in general) should usually be recorded in mono (verified visually by the waveform), but it's a good idea to assume they'll end up stereo somewhere along the way. This, btw, isn't necessarily a problem.

    "I'm still not sure what's the difference between a bass in mono or in stereo panned center, to me it sounds exactly the same, so I don't understand when people say the bass and kick should be mono, but isn't stereo panned center the freaking same?"

    Yes, Marcello, you are correct (see the quick numerical check below). However, there are potential complications that can be avoided if you stick with mono tracks most of the time, especially for those tracks that aren't panned center. Plus the miser in me can't help but see stereo bass, kick, guitar or vocal tracks as a needless waste of disk space.
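    Here's that quick numerical check: a stereo track panned dead center has identical left and right channels, so its side (stereo) component is exactly zero and it carries no information a mono track wouldn't.

    ```python
    # Mid/side decomposition of a center-panned "stereo" track.
    import numpy as np

    mono = np.random.default_rng(1).standard_normal(48000) * 0.1
    stereo = np.stack([mono, mono])         # center-panned: L and R identical
    mid = (stereo[0] + stereo[1]) / 2       # the mono (mid) component
    side = (stereo[0] - stereo[1]) / 2      # the stereo (side) component
    print(np.allclose(mid, mono))           # True: all the content is in the mid
    print(np.allclose(side, 0.0))           # True: zero stereo information
    ```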
  23. I totally agree with you; software should never pretend to be smarter than you. Even if sometimes it is, it's best not to assume that by default. However, in this case Melodyne has actually taken a logical path. Melodyne's main purpose is to accurately detect pitch. It's really, really good at that. But some bits are atonal and have no pitch. The sounds of hard consonants, "S"s and "SH"s, vocal fries, breath noises and so on. Prior to version 5, these components were simply ignored. You can, of course, still ignore them if you like. But now Melodyne identifies them so you can adjust their levels. That's a good thing. If Melodyne didn't offer that, then what could it possibly let you do with non-pitched components? Nothing. Now you can do something with them. That's not arrogance, it's perfectly reasonable. Melodyne is not a de-esser, it's an editor. You shouldn't allow it to make sibilance corrections on its own, any more than you'd allow it to make pitch corrections on its own - even though that's what it does best!
  24. AFAIK sibilance detection is automatic. You can redefine the sibilant region within any given blob, but that's it. There are no adjustable parameters that I can see. That's because Melodyne treats all unpitched material as "sibilance", including breath noises, fricatives and such. Basically, if it can't find the pitch then it's classified as "sibilance". This is actually a good thing, because if you tried to edit the pitch on such bits it just wouldn't work; you'd probably get chipmunks. Make sure you're not using any effects on the vocal track you're editing, as they can confuse the algorithms. Always edit completely dry and add fx later.
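    To see why "no detectable pitch means treat it as sibilance" is a sensible rule, here's a toy pitch detector. This is emphatically not Melodyne's algorithm, just a naive autocorrelation check that succeeds on a sustained vowel-like tone and fails on an "s"-like noise burst.

    ```python
    # Naive pitched/unpitched test: peak of the normalized autocorrelation.
    import numpy as np

    def periodicity(x):
        x = x - np.mean(x)
        ac = np.correlate(x, x, mode="full")[len(x) - 1:]  # non-negative lags
        ac /= ac[0]
        return np.max(ac[20:])  # skip implausibly short lags

    sr = 44100
    t = np.arange(sr // 10) / sr                            # 100 ms window
    vowel = np.sin(2 * np.pi * 220 * t)                     # pitched material
    sss = np.random.default_rng(2).standard_normal(len(t))  # unpitched "s" sound

    print(periodicity(vowel) > 0.9)  # True: clear pitch, safe to edit
    print(periodicity(sss) > 0.9)    # False: no pitch -- classify as "sibilance"
    ```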
  25. Download the demo for Timeless 3 and run through some of its presets. Yeh, it's pricey and it's complicated, but version 3 is a big improvement over previous iterations in terms of ease of use. If you want to take a deep dive into what delays can do, no other plugin is as versatile as Timeless 3. OTOH, Ricochet is way less expensive, fairly intuitive, and a true multi-tap delay with up to 16 steps.
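    The core of a multi-tap delay is simple enough to sketch in a few lines: several delayed, gain-scaled copies summed with the dry signal. The tap times and gains below are arbitrary, and real plugins like Ricochet add feedback, filtering, modulation and per-tap panning on top of this.

    ```python
    # Bare-bones multi-tap delay: sum gain-scaled, delayed copies of the input.
    import numpy as np

    def multitap(x, sr, taps):
        """taps: list of (delay_seconds, gain) pairs."""
        out = x.copy()
        for secs, gain in taps:
            d = int(secs * sr)
            delayed = np.zeros_like(x)
            delayed[d:] = x[:-d] * gain
            out += delayed
        return out

    sr = 44100
    click = np.zeros(sr * 2)
    click[0] = 1.0  # an impulse makes the individual taps easy to see
    echoes = multitap(click, sr, [(0.25, 0.7), (0.5, 0.5), (0.75, 0.35), (1.0, 0.25)])
    ```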