bitflipper

Members · Posts: 3,211 · Days Won: 19
Everything posted by bitflipper

  1. I'm a big advocate for using what you've got and not blowing money unnecessarily. However, kludging SPAN is a PIA and not worth the bother, IMO. Not when SPAN+ is only fifty bucks. Or (better) watch MeldaProduction's weekly half-off sales and pick up MMultiAnalyzer for about $35. Either product is going to be way easier to use, both in the setup and the interpretation. That said, I have to be honest: even though I have both of these as well as Blue Cat's impressive FreqAnalyst Pro, I don't actually use any of them. They're just not all that helpful.
  2. Or you can use symbolic links (via the mklink command) and put your libraries anywhere you like. Last year I added a second SSD (1TB) for sample libraries. I put only my most-used libraries there and keep the rest on a conventional drive. Using symbolic links allowed me to move my favorite libraries over to the new drive without impacting any existing projects. Kontakt, Superior Drummer and Spectrasonics instruments think their data is still on the E: drive, but it's actually on F:. I plan to eventually replace my two remaining 1TB mechanical drives with SSDs when I can afford to, but it's more important to use them for things where the speed improvement is most noticeable, e.g. sample libraries. For now, those older drives do the job just fine as long-term bulk storage. My system drive is a 500 GB SSD, but because I limit it to just Windows and applications, it has never been in danger of filling completely. However, if I were building a new system today I'd use a 1TB SSD just so I wouldn't have to plan out my disk usage so strictly. A 1TB SSD today costs half what the 500 GB drive did when I bought it 5 years ago. In a few years 500GB drives will be considered as obsolete as my dusty drawer-full of 250 and 500MB drives.
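The move-then-link trick above can be sketched in a few lines. On Windows the actual command is `mklink /D <link> <target>` from an administrator prompt; the Python below uses `os.symlink` to demonstrate the same pattern, with made-up stand-in paths instead of real drives:

```python
import os
import shutil
import tempfile

# Stand-in folders playing the role of the E: and F: drives (made up for the demo).
root = tempfile.mkdtemp()
old = os.path.join(root, "E_drive", "Samples", "Strings")
new = os.path.join(root, "F_drive", "Strings")
os.makedirs(old)
os.makedirs(os.path.dirname(new))
with open(os.path.join(old, "violin.wav"), "w") as f:
    f.write("fake sample data")

# Move the library to the "new drive", then leave a symbolic link at the old path.
shutil.move(old, new)
os.symlink(new, old, target_is_directory=True)

# Anything still looking at the old path finds the data transparently.
print(open(os.path.join(old, "violin.wav")).read())  # fake sample data
```

The Windows equivalent, after moving a library folder to the new drive, would be something like `mklink /D "E:\Samples\Strings" "F:\Strings"` (link first, target second); the samplers keep reading the old path without noticing anything moved.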
  3. "Is it dependent on where the Now marker is, or is the key preventative playing the start of the track before freezing?" AFAIK, the Now marker plays no part in freezing, bouncing, exporting or otherwise rendering a track. At least, I've never had to care where it was positioned when freezing. I know this because I make a habit of playing the entire song at least once before rendering anything, leaving the Now marker at the end when I do any rendering/bouncing/freezing/exporting. The reason I've made it a habit to play the project through first is that I've experienced dropped notes in exported projects that contain many Kontakt instruments. I suspect it's because until a majority of a track has been played once, the entire library will not have been fully loaded yet, due to Kontakt's memory optimization. You'd think it could just load on demand during the render, so I have no theory as to why it happens. But all I have to do is play the project through once and those missing notes come back.
  4. Sadly, Microsoft has a long history of screwing up audio, usually as a side-effect of a well-intentioned security "fix". I can recall at least two recent occasions where they messed with my setup, e.g. turning on my disabled integrated audio and making it the default. They seem to be carrying on the tradition with the latest update.
  5. It's obviously file corruption. I don't know why m4a files are particularly prone to corruption, only that every software player that supports them offers a utility for repairing corrupt m4a files, suggesting that it might be a common occurrence. I can imagine that if a few of the data blocks somehow went MIA in transit the resulting gaps might be brief enough to not actually be audible, but the data would be out of sync. If those gaps were scattered throughout the file, the desynchronization would gradually get worse until the cumulative error became great enough to hear. Were these files created by someone with a DAW, or were they originally iTunes downloads? If the former, you could ask the sender to use a different file format. If the latter, then the files were likely corrupted during their initial download. However, I believe iTunes does offer a repair tool, as well as the ability to convert them to MP3. Another thought...the files could have been corrupted when the sender sent them to you. Were they sent as MIME-encoded email attachments by any chance?
  6. ^^^ This would be the best for you. It's usually better to do the translation at the source rather than recording the "wrong" CC and translating it on the fly during playback. There are also programmable MIDI controllers that provide general-purpose knobs and sliders that can be assigned to any MIDI CC or note you want. Since you like compact devices, check out Korg's nanoKontrol. It's cheap and small enough to fit into a laptop case.
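For that translate-at-the-source approach, here's a minimal sketch of what such a programmable controller (or a MIDI filter in software) does internally, assuming raw 3-byte Control Change messages; the mapping values are hypothetical:

```python
def remap_cc(msg, mapping):
    """Remap the controller number of a MIDI Control Change message.

    msg is a (status, controller, value) byte triple; status bytes
    0xB0-0xBF are Control Change on channels 1-16. Non-CC messages
    and unmapped controllers pass through untouched.
    """
    status, cc, value = msg
    if 0xB0 <= status <= 0xBF and cc in mapping:
        return (status, mapping[cc], value)
    return msg

# Hypothetical example: turn the mod wheel (CC1) into expression (CC11).
print(remap_cc((0xB0, 1, 64), {1: 11}))    # (176, 11, 64)
print(remap_cc((0x90, 60, 100), {1: 11}))  # a note-on passes through unchanged
```

Doing this before recording means the "right" CC lands in the track, so nothing has to be translated on the fly during playback.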
  7. Yep, that's it. You can go a lot deeper, but a journey of a thousand miles begins with a single step, and that video's a pretty good first step.
  8. What's the other program? I ask because I'm wondering if you couldn't do everything within Cakewalk, i.e. syncing the VO with video.

    Voiceover processing uses all the same tools as any vocal track, except that the emphasis is on clarity and intelligibility rather than esthetics. That means accentuating the upper midrange frequencies, noise reduction and heavy compression.

    Spitfish works well (at least, I think it still does; it's a 32-bit plugin and some 32-bit plugins didn't make the transition to 64 bits smoothly). I can't think of any other freebie that does a better job (now, if you want to spend $$$, FabFilter's Pro-DS is arguably the best one around). There are also techniques that predate dedicated de-essers you can use that cost nothing.

    VOs are almost always mono, but there are "stereoizers" that manufacture left-right differences. For example, MeldaProduction has a couple of plugins for that, and IIRC some simplified versions are included with their free bundle. Grab that free bundle, as there are GOBS of goodies in there to play with.

    I'll also second Glenn's recommendation for the Sonitus suite, which has pretty much everything you'll need. Gates are often used, but a standalone gate plugin usually isn't needed as long as you're recording in a quiet, non-reverberant space. But there is one included in the Sonitus suite.

    Compression is important, because you want levels to be very, um, level. The Sonitus compressor is good, and if you like visual aids it has one of the best. However, there may be times when you want a FET-style compressor, but Cakewalk's got you covered there, too, in the ProChannel.

    And yes, reverb might occasionally be called for. Usually, it's to mimic a physical space to match the sound in a video. Convolution reverbs are specifically made for that kind of thing. Want to make your voice sound like it's inside a garbage can or a concrete culvert? They can do that.
  9. Those special keys are called "keyswitches", and they're the standard way to switch between articulations within nearly all sampled instruments. Unfortunately, they can be a PIA to work with because a) you have to commit to memory which keyswitches do what, and b) every instrument implements its own keyswitches; there is no standard. Do yourself a big favor and take some time to learn about Articulation Maps. They eliminate the need to keep looking up keyswitches as you compose, since you're letting Cakewalk remember them for you.
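Under the hood, an articulation map is automating something like the sketch below: it remembers which keyswitch note each articulation name maps to and drops that note in just before the phrase. The note numbers here are invented; as noted above, every library defines its own.

```python
# Hypothetical keyswitch assignments for one instrument; real libraries
# each use their own notes (there is no standard).
ARTICULATIONS = {"legato": 24, "staccato": 25, "pizzicato": 26}

def insert_keyswitch(events, time, articulation, lead_ticks=10):
    """Insert a short keyswitch note just before `time` so the sampled
    instrument switches articulation before the phrase starts.
    Events are (tick, type, note) tuples."""
    ks = ARTICULATIONS[articulation]
    events.append((time - lead_ticks, "note_on", ks))
    events.append((time, "note_off", ks))
    events.sort(key=lambda e: e[0])
    return events

# A phrase starting at tick 100 gets a staccato keyswitch at tick 90.
phrase = [(100, "note_on", 60), (200, "note_off", 60)]
events = insert_keyswitch(phrase, 100, "staccato")
print(events[0])  # (90, 'note_on', 25)
```

The win is that the lookup table lives in one place (the map) instead of in your memory.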
  10. You're not doing anything wrong. I'd guess it's a quirk of the instrument you're using. One of my absolute favorite libraries is Amadeus Symphonic Orchestra because it sounds so good, but the way it deals with CC1 is frustratingly inconsistent.
  11. As noted above, there are a number of ways to do this. You could, for example, split the vocal clip to isolate that one phrase and apply the delay as a clip effect. Or move the affected phrases into a new track and apply the delay there. However, I'd suggest automating the delay's wet/dry mix instead. That will give you greater flexibility as you tweak and experiment with your mix, e.g. accentuating the delay more on some phrases than others or perhaps deciding to apply the delay to other parts as well. Taking the time to learn how to do automation will open up a world of possibilities.
  12. It's Ctrl+Y here. I seem to recall that it did change once, but that was years ago. I remember that because I spent a day putting a bunch of them back to where they used to be.
  13. "Mastering Audio", IMO, should be required reading for anyone approaching this stuff as a serious discipline. However, for those too cheap to buy the book, this video pretty much lays out its central premise for free.
  14. Warp 9

    Vintage Modern! Or is it modern vintage? Either way, this would have fit right in with my record collection circa 1978, alongside Larry Fast and Vangelis. Although I guess you'd need to crank the reverb to 11 to compete with the latter. All that Oberheim silkiness makes me smile.
  15. 2 dB is a pretty small difference. Even a small discrepancy between FX on the two tracks will result in different levels. We often forget that EQs change levels, as do most effects. More than once I've wondered "how'd my vocal suddenly get louder?" only to remember "oh, yeh, I just added a send to the reverb bus".
  16. Imagine if the group and reverb busses were separate physical boxes with cables connecting a common sound source to each of them. You'd need two cables, and unplugging either of them results in the box it's routed to going silent. That's what soloing does: it essentially disconnects every other connection except the one being soloed. If you want to also hear reverb, which is a separate bus, you have to "solo" that bus as well.
  17. The clue is in "when I play them back together". When you combine any two (or more) tracks, the result is the algebraic sum of them. If they are identical, i.e. one is cloned from the other, then the level will be doubled. Not just the peaks, although it'll be peak values that are most noticeable. If the tracks are not identical, they'll add together in some places and subtract from one another in other places. That's how you can end up with a mix on the master bus that goes into the red even though none of your individual tracks do.
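The arithmetic is easy to check: summing a clip with an identical clone doubles every sample (a +6 dB jump), while two different signals reinforce in some places and cancel in others. A small sketch with made-up sample values:

```python
import math

def db(ratio):
    """Convert an amplitude ratio to decibels."""
    return 20 * math.log10(ratio)

# A cloned track sums with itself: every sample doubles.
track = [0.5, -0.3, 0.25]
cloned_mix = [a + a for a in track]
print(cloned_mix)       # [1.0, -0.6, 0.5]
print(round(db(2), 2))  # 6.02  (doubling the amplitude = +6 dB)

# Two different tracks add in some places and subtract in others,
# so the mix can peak higher than either track alone.
a = [0.5, -0.3, 0.25]
b = [0.5, 0.3, -0.25]
mix = [x + y for x, y in zip(a, b)]
print(mix)              # [1.0, 0.0, 0.0]
```

That +6 dB on the summed signal, even when neither source clips on its own, is exactly how the master bus goes into the red while every individual track looks fine.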
  18. Wouldn't it be nice if the message included the name of the file it couldn't open? Might want to submit that as an enhancement request. Unhelpful though it may be, the message is more or less self-explanatory. A file referenced by the project isn't there anymore, or it has been renamed or moved. You just don't know which one. A project will often reference many files that you can't see, some of which you may not even need any longer. That's why your project still plays OK. Often, these are temporary files or archived copies kept to support non-destructive editing and undo. Cakewalk generally doesn't like to throw anything away, including files that you'll never need again. You can clean that stuff up by saving your project under a new name/folder using Save As. You can usually avoid these issues in the first place by rendering any Region FX before your final save. For example, I never quit out of a project while I've got a Melodyne window open.
  19. Well, I can confirm that Noel is in fact not a robot, that he eats and sleeps like the rest of us. Still, you have to be impressed by the productivity of this small team. That's what happens when you take it out of the hands of marketers and let engineers run the show.
  20. I doubt it has anything to do with the recent update (I exported a FLAC just this morning); more likely the problem is within that particular project. Could be a clip is corrupt. It's rare, but it does happen, usually due to a disk issue. More likely, there's some event way out on the timeline (e.g. a stray MIDI note or automation node). Press Ctrl+End and see where it takes you.
  21. Yeh, I was thinking of Al Schmitt as I watched that interview. IIRC he accrued more Grammys than anyone, and famously avoided all fx. Sadly, he passed away just recently.
  22. Donald Fagen's The Nightfly has long been on my list of reference recordings. I never got too excited about the music, but the recording itself is a prime example of how to do it right. The lesson is: get the sound you want before it hits the microphone and you won't need to do much after that.
  23. Thanks, Geoff. Sounds like just the thing for a summer read. I like to recommend the original treatise on the subject, written by Hermann von Helmholtz in 1863. Since he was the first guy to try and explain harmonics to the world, the explanations are written for an audience for whom the entire concept was new. So it's really a beginner's book, but everyone should read it because it's available for free online as a PDF.
  24. To answer your other question, yes, you are getting old. Nothing you or I can do about that, other than to never stop learning stuff. The lesson from this is to not use instrument tracks. They were added as a feature not because they improve anything, but solely because other DAWs did it that way and users demanded it.
  25. That's doubtful. I would be very curious to see and hear a bit of the OP's recording, though. Something's clearly amiss. I have edited some pretty awful, er, edgy and extreme vocals, and Melodyne's almost never had a problem ferreting out the pitch component. On the rare occasions when it has had a problem it's always been because the recording wasn't clean and dry, e.g. it was distorted, noisy, or had fx added before the edit.