Everything posted by bitflipper

  1. This discussion makes me feel like such a dinosaur. If I want something repeated 16 times, I physically play it 16 times using my own actual fingers. Sure, each repetition might not be identical, but I see that as a positive. That said, on the rare occasions when I do want to exactly duplicate something, like maybe a tambourine hit, I have no problem using the exact same technique as I'd use with any other type of editor: Ctrl-C / Ctrl-V. Yeh, I have loop libraries that allow for a single MIDI note to be stretched to any length, but rarely use them because despite their undeniable convenience, in the end they're just excruciatingly booooring.
  2. For a free Oberheim emu, check out OB-Xd. Redoptor is a distortion plugin, but with a distinct character that lends itself to drum enhancement. For a free substitute, I'd experiment with amp sims, which could probably do the trick just fine. Note that in either case, the plugin would go on a separate bus, parallel to the main drum bus, and be mixed in under the main drum mix. Here's one guy's list of the best free amp sims.
  3. OK, I get it. Didn't know that style had a name. Slow it down a few bpm and front the band with some teenage girls, and you've got a genre that's still alive and well today. In Japan, anyway. I like the energy, but it's a bit frantic for my laid-back paradigm. If I were to set out to replicate that sound, it would absolutely have Redoptor by D16 Group on the drum bus. And one of my favorite '80s synths, the inexpensive but faithful Oberheim 8-voice emulation from Cherry Audio.
  4. Maybe I'm just exposing myself as the out-of-touch old fart that I am, but I have no idea what "Eurobeat" is. I'll bet I could pull off some authentic Euro-beating, though, if I had an example. Got a link or two?
  5. A common practice is to set levels/balances in mono first, before any panning. That's not done to make panning more effective, but rather to help prevent the phenomenon where the mix balance sounds fine in stereo but only when listening in the sweet spot between the speakers. That's because panning creates clarity through separation; lose the separation and you lose the clarity. For example, you might be listening to a mix from the next room and noting that it suddenly sounds inexplicably muddy. That doesn't directly address your question, though. A solution is to modify your monitoring balance to compensate for the hearing imbalance. After you've done that, you can be confident that others will hear your pan decisions the same way you do. I compensate for unbalanced sensitivity between my ears by adding about 2 dB of extra gain on my right speaker. I play some white noise, sit smack dab in the middle between the speakers, close my eyes and listen for the "phantom center". If it sounds centered, I know the right speaker is now correctly compensating for the lower sensitivity in my right ear. I could accomplish the same end using the balance control on the master bus, but prefer to adjust the speaker because it's a set-and-forget solution. (If you'd rather generate your own test noise than hunt down a file, see the sketch below.)
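     Here's a minimal Python sketch for generating the test file (numpy assumed; the file name and 10-second length are arbitrary, and the 2 dB figure is my personal correction, so set GAIN_DB to 0.0 if you'd rather write a flat file and trim the speaker instead):

        import wave
        import numpy as np

        RATE = 44100
        SECONDS = 10
        GAIN_DB = 2.0  # extra right-channel gain; use your own value, or 0.0

        rng = np.random.default_rng()
        noise = rng.uniform(-0.5, 0.5, RATE * SECONDS)  # mono white noise
        right = noise * 10 ** (GAIN_DB / 20)            # +2 dB is roughly a 1.26x amplitude boost

        stereo = np.column_stack((noise, right))        # left, right
        pcm = (np.clip(stereo, -1, 1) * 32767).astype(np.int16)

        with wave.open("noise_test.wav", "wb") as f:
            f.setnchannels(2)
            f.setsampwidth(2)
            f.setframerate(RATE)
            f.writeframes(pcm.tobytes())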
  6. Yeh, it actually does make sense. In fact, it's all working exactly as designed. Really. In order to support multiple MIDI-controlled instruments, a mechanism exists to route each MIDI track to its assigned instrument. To do that, each track and instrument is assigned a MIDI channel, a number between 1 and 16. In your example, you'd probably switch the violin over to channel 2 and leave the piano on channel 1. Moving your violin track to channel 2 is simple: there is a MIDI channel dropdown list in the track header. Choose "2" from the list. Now the violin will only listen to MIDI channel 2. Playing the keyboard will still activate the piano because it's still on channel 1, but the violin on channel 2 will ignore those notes. Your MIDI controller (piano) likely provides a way to designate which MIDI channel it's talking on. It probably defaults to channel 1. You could change this at the keyboard, but I like to keep it simple: record the first MIDI track on channel 1, then switch it (in the DAW) to a different channel. Don't forget to also inform the soft synth which MIDI channel it's supposed to respond to. Then I'll record the next track (still on channel 1), and after recording switch it over to a third channel. Repeat for each subsequent MIDI track, making sure that every track and instrument has its own unique channel number. When you've reached the limit and have used up all 16 MIDI channels, come back and we'll talk about MIDI ports.
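     If seeing the channel as literal data helps, here's a sketch using the third-party mido Python library (nothing Cakewalk itself uses; purely illustrative). One gotcha: mido numbers channels 0-15, so "channel 1" on screen is channel=0 in code.

        import mido

        # Two note-on messages, identical except for the channel field.
        piano_note  = mido.Message('note_on', channel=0, note=60, velocity=100)  # "channel 1": piano
        violin_note = mido.Message('note_on', channel=1, note=60, velocity=100)  # "channel 2": violin

        # A synth set to respond only to channel 2 simply ignores any message
        # whose channel field says otherwise.
        for msg in (piano_note, violin_note):
            if msg.channel == 1:
                print("violin plays note", msg.note)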
  7. Not quite. Vernon's happy with the mix when it's streamed from Bandcamp, and that's a reasonable reference because aside from downsampling for streaming, Bandcamp is pretty neutral. He's talking about a specific streamer that seems to be overreacting to low frequencies. It's unlikely you'd want to remix a song because one site has technical issues. It would be like remixing so a song sounded better in your car - I've tried that, and it failed miserably, sounding worse everywhere else. Where you could be correct, though, is if the mix has excessive subsonic content. That'll throw any compressor off. Fortunately there's no downside to filtering it out because a) it's inaudible on 99.9% of playback systems, and b) unless you're recording a pipe organ there probably isn't anything musical going on down there.
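     For the curious, filtering out subsonics is just a steep high-pass filter. A rough scipy sketch (the 30 Hz corner and 4th-order slope are my arbitrary picks, not magic numbers):

        import numpy as np
        from scipy.signal import butter, sosfiltfilt

        def strip_subsonics(audio, rate, corner_hz=30.0):
            """High-pass the signal to remove content below corner_hz."""
            sos = butter(4, corner_hz, btype='highpass', fs=rate, output='sos')
            return sosfiltfilt(sos, audio)

        # Demo: a 10 Hz rumble hiding under a 440 Hz tone.
        rate = 44100
        t = np.arange(rate) / rate
        mix = 0.5 * np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 440 * t)
        clean = strip_subsonics(mix, rate)
        print(f"peak before: {np.max(np.abs(mix)):.2f}, after: {np.max(np.abs(clean)):.2f}")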
  8. I'm a big advocate for using what you've got and not blowing money unnecessarily. However, kludging SPAN is a PIA and not worth the bother, IMO. Not when SPAN Plus is only fifty bucks. Or (better) watch MeldaProduction's weekly half-off sales for the inclusion of MMultiAnalyzer at about $35. Either product is going to be way easier to use, both in the setup and the interpretation. That said, I have to be honest - even though I have both of these as well as Blue Cat's impressive FreqAnalyst Pro, I don't actually use any of them. They're just not all that helpful.
  9. Or you can use symbolic links (via the mklink command; example below) and put your libraries anywhere you like. Last year I added a second SSD (1TB) for sample libraries. I put only my most-used libraries there and keep the rest on a conventional drive. Using symbolic links allowed me to move my favorite libraries over to the new drive without impacting any existing projects. Kontakt, Superior Drummer and Spectrasonics instruments think their data is still on the E: drive, but it's actually on F:. I plan to eventually replace my two remaining 1TB mechanical drives with SSDs when I can afford to, but it's more important to use SSDs where the speed improvement is most noticeable, e.g. sample libraries. For now, those older drives do the job just fine as long-term bulk storage. My system drive is a 500GB SSD, but because I limit it to just Windows and applications, it has never been in danger of filling completely. However, if I was building a new system today I'd use a 1TB SSD just so I wouldn't have to plan out my disk usage so strictly. 1TB drives are half the price today that the 500GB drive was when I bought it 5 years ago. In a few years 500GB drives will be considered as obsolete as my dusty drawer-full of 250 and 500MB drives.
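     For anyone who hasn't done it, the pattern looks like this from an elevated Command Prompt (the paths are made up; substitute your own):

        rem Move the library to the new drive, then leave a directory
        rem symlink at the old location so nothing notices the move.
        robocopy "E:\Samples\Kontakt" "F:\Samples\Kontakt" /E /MOVE
        mklink /D "E:\Samples\Kontakt" "F:\Samples\Kontakt"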
  10. Is it dependent on where the Now marker is, or is the key preventative playing the start of the track before freezing? AFAIK, the Now marker plays no part in freezing, bouncing, exporting or otherwise rendering a track. At least, I've never had to care where it was positioned when freezing. I know this because I make a habit of playing the entire song at least once before rendering anything, leaving the Now marker at the end when I do any rendering/bouncing/freezing/exporting. The reason I've made it a habit to play the project through first is that I've experienced dropped notes in exported projects that contain many Kontakt instruments. I suspect it's because until a majority of a track has been played once, the entire library will not have been fully loaded yet due to Kontakt's memory optimization. You'd think it could just load on demand during the render, so I have no theory as to why it happens. But all I have to do is play the project through once and those missing notes come back.
  11. Sadly, Microsoft has a long history of screwing up audio, usually as a side-effect of a well-intentioned security "fix". I can recall at least two recent occasions where they messed with my setup, e.g. turning on my disabled integrated audio and making it the default. They seem to be carrying on the tradition with the latest update.
  12. It's obviously file corruption. I don't know why m4a files are particularly prone to corruption, only that every software player that supports them offers a utility for repairing corrupt m4a files, suggesting that it might be a common occurrence. I can imagine that if a few of the data blocks somehow went MIA in transit the resulting gaps might be brief enough to not actually be audible, but the data would be out of sync. If those gaps were scattered throughout the file, the desynchronization would gradually get worse until the cumulative error became great enough to hear. Were these files created by someone with a DAW, or were they originally iTunes downloads? If the former, you could ask the sender to use a different file format. If the latter, then the files were likely corrupted during their initial download. However, I believe iTunes does offer a repair tool, as well as the ability to convert them to MP3. Another thought...the files could have been corrupted when the sender sent them to you. Were they sent as MIME-encoded email attachments by any chance?
  13. ^^^ This would be the best for you. It's usually better to do the translation at the source rather than recording the "wrong" CC and translating it on the fly during playback (the sketch below shows the basic idea). There are also programmable MIDI controllers that provide general-purpose knobs and sliders that can be assigned to any MIDI CC or note you want. Since you like compact devices, check out Korg's nanoKONTROL. It's cheap and small enough to fit into a laptop case.
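     Just to make "translating at the source" concrete, here's a rough Python sketch using the mido library (requires a backend such as python-rtmidi; the CC numbers are examples, and port selection is left to mido's defaults):

        import mido

        SRC_CC, DST_CC = 11, 1  # e.g. rewrite incoming Expression as Mod Wheel

        def translate(msg):
            """Rewrite one CC number; pass everything else through untouched."""
            if msg.type == 'control_change' and msg.control == SRC_CC:
                return msg.copy(control=DST_CC)
            return msg

        # Port names vary by system; list yours with mido.get_input_names().
        with mido.open_input() as inport, mido.open_output() as outport:
            for msg in inport:
                outport.send(translate(msg))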
  14. Yep, that's it. You can go a lot deeper, but a journey of a thousand miles begins with a single step, and that video's a pretty good first step.
  15. What's the other program? I ask because I'm wondering if you couldn't do everything within Cakewalk, i.e. syncing the VO with video. Voiceover processing uses all the same tools as any vocal track, except that the emphasis is on clarity and intelligibility rather than esthetics. That means accentuating the upper midrange frequencies, noise reduction and heavy compression. Spitfish (a free de-esser) works well (at least, I think it still does; it's a 32-bit plugin and some 32-bit plugins didn't make the transition to 64 bits smoothly). I can't think of any other freebie that does a better job (now, if you want to spend $$$, FabFilter's Pro-DS is arguably the best one around). There are also techniques that predate dedicated de-essers you can use that cost nothing. VOs are almost always mono, but there are "stereoizers" that manufacture left-right differences (the sketch below shows the simplest trick). For example, MeldaProduction has a couple of plugins for that, and IIRC some simplified versions are included with their free bundle. Grab that free bundle, as there are GOBS of goodies in there to play with. I'll also second Glenn's recommendation for the Sonitus suite, which has pretty much everything you'll need. Gates are often used, but a standalone gate plugin usually isn't needed as long as you're recording in a quiet, non-reverberant space. But there is one included in the Sonitus suite. Compression is important, because you want levels to be very, um, level. The Sonitus compressor is good, and if you like visual aids it has one of the best. However, there may be times when you want a FET-style compressor, but Cakewalk's got you covered there, too, in the ProChannel. And yes, reverb might occasionally be called for. Usually, it's to mimic a physical space to match the sound in a video. Convolution reverbs are specifically made for that kind of thing. Want to make your voice sound like it's inside a garbage can or a concrete culvert? They can do that.
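     As a taste of what those stereoizers do, the simplest trick is a short Haas-style delay on one channel. A numpy sketch (15 ms is just a common starting point; I'm not claiming this is what MeldaProduction's plugins actually do):

        import numpy as np

        def haas_stereoize(mono, rate, delay_ms=15.0):
            """Fake stereo from mono by delaying the right channel slightly."""
            d = int(rate * delay_ms / 1000)
            left  = np.concatenate([mono, np.zeros(d)])
            right = np.concatenate([np.zeros(d), mono])  # same signal, d samples late
            return np.column_stack((left, right))

        rate = 44100
        vo = np.random.default_rng().uniform(-0.3, 0.3, rate)  # stand-in for a VO clip
        stereo = haas_stereoize(vo, rate)
        print(stereo.shape)  # (44761, 2): two channels, right one 15 ms behind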
  16. Those special keys are called "keyswitches", and they're the standard way to switch between articulations within nearly all sampled instruments. Unfortunately, they can be a PIA to work with because a) you have to commit to memory which keyswitches do what, and b) every instrument implements its own keyswitches; there is no standard. Do yourself a big favor and take some time to learn about Articulation Maps. They eliminate the need to keep looking up keyswitches as you compose, since you're letting Cakewalk remember them for you. Under the hood, a keyswitch is just an ordinary note, as the sketch below shows.
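     Here's a mido Python sketch that writes a tiny MIDI file demonstrating the mechanism (note 24 as a "staccato" switch is hypothetical; every library maps its own):

        import mido

        KEYSWITCH = 24  # hypothetical articulation switch, parked below the playable range

        mid = mido.MidiFile()
        track = mido.MidiTrack()
        mid.tracks.append(track)

        # Tap the keyswitch first, then play the actual note.
        track.append(mido.Message('note_on',  note=KEYSWITCH, velocity=1,   time=0))
        track.append(mido.Message('note_off', note=KEYSWITCH, velocity=0,   time=10))
        track.append(mido.Message('note_on',  note=60,        velocity=100, time=0))
        track.append(mido.Message('note_off', note=60,        velocity=0,   time=480))
        mid.save('keyswitch_demo.mid')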
  17. You're not doing anything wrong. I'd guess it's a quirk of the instrument you're using. One of my absolute favorite libraries is Amadeus Symphonic Orchestra because it sounds so good, but the way it deals with CC1 is frustratingly inconsistent.
  18. As noted above, there are a number of ways to do this. You could, for example, split the vocal clip to isolate that one phrase and apply the delay as a clip effect. Or move the affected phrases into a new track and apply the delay there. However, I'd suggest automating the delay's wet/dry mix instead. That will give you greater flexibility as you tweak and experiment with your mix, e.g. accentuating the delay more on some phrases than others or perhaps deciding to apply the delay to other parts as well. Taking the time to learn how to do automation will open up a world of possibilities.
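     Under the hood, wet/dry automation is nothing more than a time-varying crossfade between the processed and unprocessed signals. A toy numpy sketch of the idea (not how Cakewalk implements it; np.roll is a crude stand-in for a real delay):

        import numpy as np

        def apply_wet_dry(dry, wet, mix):
            """mix is the automation curve, 0..1, one value per sample."""
            return (1.0 - mix) * dry + mix * wet

        n = 44100
        dry = np.random.default_rng().uniform(-0.3, 0.3, n)  # stand-in vocal
        wet = np.roll(dry, 8000) * 0.5                       # crude "delay" of the same signal
        mix = np.zeros(n)
        mix[20000:24000] = 0.6  # push the delay up for just one phrase
        out = apply_wet_dry(dry, wet, mix)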
  19. It's Ctrl-Y here. I seem to recall that it did change once, but that was years ago. I remember that because I spent a day putting a bunch of them back to where they used to be.
  20. "Mastering Audio", IMO, should be required reading for anyone approaching this stuff as a serious discipline. However, for those too cheap to buy the book, this video pretty much lays out its central premise for free.
  21. Warp 9: Vintage Modern! Or is it modern vintage? Either way, this would have fit right in with my record collection circa 1978 alongside Larry Fast and Vangelis. Although I guess you'd need to crank the reverb to 11 to compete with the latter. All that Oberheim silkiness makes me smile.
  22. 2 dB is a pretty small difference. Even a small discrepancy between FX on the two tracks will result in different levels. We often forget that EQs change levels, as do most effects. More than once I've wondered "how'd my vocal suddenly get louder?" only to remember "oh, yeh, I just added a send to the reverb bus".
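     For scale, the arithmetic (a one-liner in Python): 2 dB works out to only about a 26% change in amplitude, well within what a single EQ band or an extra send can introduce.

        ratio = 10 ** (2 / 20)                     # convert dB to an amplitude ratio
        print(f"2 dB is a factor of {ratio:.3f}")  # prints: 2 dB is a factor of 1.259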
  23. Imagine if the group and reverb busses were separate physical boxes with cables connecting a common sound source to each of them. You'd need two cables, and unplugging either of them results in the box it's routed to going silent. That's what soloing does: it essentially disconnects every other connection except the one being soloed. If you want to also hear reverb, which is a separate bus, you have to "solo" that bus as well.
  24. The clue is in "when I play them back together". When you combine any two (or more) tracks, the result is the algebraic sum of them. If they are identical, i.e. one is cloned from the other, then the level will be doubled. Not just the peaks, although it'll be peak values that are most noticeable. If the tracks are not identical, they'll add together in some places and subtract from one another in other places. That's how you can end up with a mix on the master bus that goes into the red even though none of your individual tracks do.
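     A quick numpy illustration of why the master bus can clip even when no individual track does (identical clones are the worst case, a clean 6 dB jump):

        import numpy as np

        rng = np.random.default_rng(0)
        track = rng.uniform(-0.7, 0.7, 44100)  # peaks safely below full scale
        clone = track.copy()

        mix = track + clone  # sample-by-sample algebraic sum
        print(f"track peak: {np.max(np.abs(track)):.2f}")  # about 0.70
        print(f"mix peak:   {np.max(np.abs(mix)):.2f}")    # about 1.40, over full scale
        boost = 20 * np.log10(np.max(np.abs(mix)) / np.max(np.abs(track)))
        print(f"increase:   {boost:.1f} dB")               # 6.0 dB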
  25. Wouldn't it be nice if the message included the name of the file it couldn't open? Might want to submit that as an enhancement request. Unhelpful though it may be, the message is more or less self-explanatory: a file referenced by the project isn't there anymore, or it has been renamed or moved. You just don't know which one. A project will often reference many files that you can't see, some of which you may not even need any longer. That's why your project still plays OK. Often, these are temporary files or archived copies that support non-destructive editing, allowing undos. Cakewalk generally doesn't like to throw anything away, including files that you'll never need again. You can clean that stuff up by saving your project under a new name/folder using Save As. You can usually avoid these issues in the first place by rendering any Region FX before your final save. For example, I never quit out of a project while I've got a Melodyne window open.