
bitflipper


Everything posted by bitflipper

  1. You are correct. I misspoke when I called it "global". It isn't; a better word would be "persistent". Once you set it, that does become the default for subsequent projects. That's how you can inadvertently end up using a different pan law than you thought was in play.
  2. It's a free synth; ergo, 99% on-topic. No need to move it unless you have a better place in mind. If you do, drop me a PM.
  3. Anybody else see that pathname and think "what the heck is that?"? Maybe it's a sample file that doesn't have a .wav extension, for purposes of obfuscation. Wouldn't surprise me; it's Waves. Audio files aren't going to contain any reliable malware signatures, but without an audio-related extension the antivirus software wouldn't know that's what it is.
  4. I'd suggest a more generalized solution: implement the ability to pull up any designated plugin's UI via a keypress. That would address your specific need, and also allow other handy shortcuts, such as quickly bringing up your mastering limiter, spectrum analyzer or a main multi-timbral synth. In the meantime, a possible solution would be to use screensets.
  5. Question for the sax players: does this apply to altos specifically, or to all saxophones? I've long marveled at the way my band's sax player can quickly transpose in his head. I'm too lazy to even try. Fortunately, I play keyboards and there's a button for that. But if he's using different rules when switching between, say, baritone and tenor (which he sometimes does mid-song), that would be an even more impressive skill.
  6. In order to accomplish this, you will need an audio interface with multiple outputs. Unfortunately, that also means investing in a higher-end (read: more expensive) interface. There are other reasons for acquiring such a device, though, such as the ability to have multiple headphone mixes and multiple speaker setups. So even if this scheme doesn't work out you won't be sorry you bought a full-featured audio interface. For simplicity, though, I'd echo Noel's suggestion and just run those backing tracks through a full-range system like a PA, or even headphones. Save the complexity for your music. That said, in a live performance situation I think separate amplification could be very cool. I once heard a solo guitar performance wherein each string on the guitar had its own output and its own amplifier, and the effect was awesome.
  7. Bottom line first: pick a pan law and stick with it. It almost doesn't matter which one you pick, because you'll quickly become accustomed to it and hopefully never think about it again. Stick with just that one, though, because the pan law is global, and changing it will mess with any previous projects you revisit. And of course, never change it mid-project unless you want to restart the mix from square one. As to which one is "best", most intuitive or most practical, that's been debated since the feature was first introduced in the '60s. Some prefer -3dB, some -6dB, with reasonable technical arguments for both. SSL introduced -4.5dB, not because it's better than either but merely as a compromise between them (conventional hardware consoles' pan laws are not selectable like in a DAW). The fact that it's not offered in CbB shouldn't trouble anyone in the slightest. Pan laws differ not only in whether they use 3, 4.5 or 6 decibels, but also in whether they achieve that compensation by lowering the center or by raising the sides. IOW, you can keep the volume steady by adding 3dB at the extreme left and right positions, or by lowering the center by 3dB. The latter helps avoid the effect described in the OP, wherein a track unexpectedly clips just from being panned. Unfortunately, Cakewalk's default option, "0dB, sin/cos taper, constant power", suffers from this potential problem because it raises the sides rather than lowering the center.
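The arithmetic behind those decibel figures is easy to check. Here's a small Python sketch — not Cakewalk's actual code, just an illustration of a sin/cos taper — showing how a standard constant-power law puts the center at about -3 dB, while a "0 dB center" variant keeps the center at unity by raising the extremes roughly +3 dB instead:

```python
import math

def pan_gains(pan, zero_db_center=False):
    """Left/right gains for a sin/cos (constant-power) pan law.

    pan: -1.0 (hard left) .. 0.0 (center) .. +1.0 (hard right).
    With zero_db_center=True the curve is normalized so the center
    sits at 0 dB, which pushes the extremes up to about +3 dB --
    the side-raising behavior described above.
    """
    theta = (pan + 1.0) * math.pi / 4.0        # maps pan to 0 .. pi/2
    left, right = math.cos(theta), math.sin(theta)
    if zero_db_center:
        scale = 1.0 / math.cos(math.pi / 4.0)  # lift everything ~3 dB
        left, right = left * scale, right * scale
    return left, right

def to_db(gain):
    return 20.0 * math.log10(gain)

# Standard constant power: 0 dB at the sides, -3 dB in the center.
l, r = pan_gains(0.0)
print(round(to_db(l), 2))   # about -3.01

# 0 dB-center variant: unity in the middle, ~ +3 dB at hard right.
l, r = pan_gains(1.0, zero_db_center=True)
print(round(to_db(r), 2))   # about +3.01
```

Note how the second variant can push a hot track past full scale just by panning it — which is exactly the clipping effect the OP ran into.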
  8. The problem, Eric, is that those of us who've used SONAR 6 are all old farts who have trouble remembering what we were doing last month, let alone in 2005. Trust me, the new DAW will be a huge step up from S6 (or even 8.5, which I agree with Mark was SONAR's pinnacle). Since you're getting back into it after a long absence, you're going to have to face a learning curve anyway. Might as well put that unavoidable effort into CbB. We'll be here for ya.
  9. ^^^Your point is well-taken: when there is a discrepancy, trust the DAW over the playback device/software. Rather than asking "what's wrong with my DAW?" instead ask "what's wrong with my player?". But to their credit neither the original poster nor the user who revived the thread reflexively blamed the DAW. In fact, Thierry's (correct) instinct was to verify the file's integrity. He just went about it wrong, making a reference mix within the project rather than exporting it and then importing it back into the DAW, as suggested by Noel. Craig's suggestion is probably the most likely explanation. Note to other folks experiencing this problem - if you think this is bad, wait until you play your masterpiece back in your car. Or on earbuds, or on your friend's hi-fi, or over a PA system. It'll sound different every stinkin' time.
  10. In terms of audio quality, there is no practical difference between DX and VST. I am perfectly happy with DX plugins. And no, DX is not going away. At least, not as long as the XBox lives on.
  11. USB ports can become unresponsive after the computer wakes from sleep, or just due to your power scheme. You can go into Device Manager and exclude them from whatever power-saving scheme you've specified. In Device Manager, locate your USB ports (each will be called "USB Root Hub", and there will likely be more than one, so do this for all of them). Right-click on each one, select Properties and go to the Power Management tab. If the box labeled "Allow the computer to turn off this device to save power" is checked, uncheck it.
  12. ^^^ This is the answer. Assuming you're not going to send your mix to a third party for mastering, the last active component in your master bus fx bin should always be a limiter, followed only by metering plugins. You'll probably want a LUFS meter at the very end, which will do a pretty good job of telling you whether the master is going to be too quiet, too hot, or somewhere comfortably within the Goldilocks Zone. You'll also want to import some music from your favorite commercial recordings, anything that sounds particularly good in the car, and use that as your LUFS target. Setting levels for your car is a tricky business, as the car's player probably has built-in compression and EQ that the owner's manual doesn't mention. Plus the acoustics inside a car are pretty awful. So don't be surprised if your carefully mastered songs sound good only in the car, and nowhere else.
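For a sense of what a level meter is computing under the hood, here's a deliberately crude Python sketch. It measures plain RMS in dBFS, which is NOT true LUFS — the BS.1770 loudness standard adds K-weighting filters and gating on top of this — but it's the same basic idea of averaging signal power:

```python
import math

def rms_dbfs(samples):
    """Crude level check: RMS of float samples (-1.0..1.0) in dBFS.
    A real LUFS meter (per ITU-R BS.1770) applies K-weighting and
    gating before averaging; this is only the bare-bones version."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20.0 * math.log10(rms) if rms > 0 else float("-inf")

# A full-scale sine has an RMS of 1/sqrt(2), i.e. about -3 dBFS.
sine = [math.sin(2 * math.pi * 440 * n / 48000) for n in range(48000)]
print(round(rms_dbfs(sine), 1))   # about -3.0
```

This is why you trust a dedicated LUFS meter plugin rather than peak readings alone: two masters with identical peaks can differ wildly in average loudness.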
  13. Check to make sure it installed correctly and didn't fail the scan. Go to Preferences -> VST Settings -> Scan Options. Check the box labeled "Generate Scan Log", and then click the button labeled "Reset". Run the scan by clicking "Scan". This alone may do the trick, but if Synthmaster still isn't showing up, check the log. It'll be in %appdata%\Cakewalk\Logs. Open it in Notepad and search for SynthMaster.
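If you'd rather not eyeball the log in Notepad, a few lines of Python can do the case-insensitive search for you. This is just a generic text search — the log filename in the commented usage is a placeholder, not a documented name:

```python
def find_in_scan_log(log_path, plugin_name):
    """Return (line_number, text) pairs from a scan log that mention
    the plugin, matched case-insensitively. On Windows the log lives
    under %appdata%\\Cakewalk\\Logs, as noted above."""
    hits = []
    with open(log_path, encoding="utf-8", errors="replace") as f:
        for lineno, line in enumerate(f, 1):
            if plugin_name.lower() in line.lower():
                hits.append((lineno, line.rstrip()))
    return hits

# Hypothetical usage -- the exact log filename is an assumption:
# for n, text in find_in_scan_log(r"C:\...\Cakewalk\Logs\scan.log",
#                                 "SynthMaster"):
#     print(n, text)
```

A hit that mentions a scan failure or error code for SynthMaster is what you're looking for; no hit at all means the scanner never saw the plugin's DLL path.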
  14. "Soloing and muting verifies it is this track." Are you saying that the empty portion produces an output even when it is soloed? If so, is this a separate instrument or one of several tracks routed to a common multi-timbral instrument, e.g. Kontakt with more than one instrument loaded? Can you correlate the phantom notes to any notes in any other track? It would help if we had more information about the project, what instruments are being used, and how the routing is set up. Whenever I've confronted such mysteries, it's always turned out to be a routing problem. That can include how MIDI tracks are routed to synths, how audio is routed internally within a multi-timbral synth, or inconsistent MIDI channel assignments. Symptoms can be varied and weird, e.g. the wrong voice sounding, a silent instrument, keyswitches or CCs being ignored.
  15. Exactly what went through my mind as I listened to it. What is a piano supposed to sound like, when no two sound alike to begin with? I have a real piano, a nice one. But I don't record it. It simply doesn't sound as good as some of my sampled pianos.
  16. Could you describe the symptoms of the "output silence/vst scrambling" issue? I just had a bizarre thing happen in this Omnisphere-heavy project: an instance of Kontakt went silent after updating Omnisphere. Weird. I hesitate to bring it up here, as it might be off-topic. The problem turned out to be a routing issue - the Kontakt audio track's input source had been switched from Kontakt to another synth (Zebra2). I can't imagine a scenario in which I could have accidentally done that myself, and the cross-routing had to have occurred within the last hour.
  17. I missed the 2.7 announcement and wasn't even aware of the update (running ver. 2.6 here), so thanks for cluing me in. Yeh, I know, it clearly says "updates available" every time you start it up. Situational blindness, I guess. I have current projects that use Omnisphere, Trilian and Keyscape. I'm going to update them this weekend and see if there are new problems, then post back my results. [EDIT] I couldn't wait for the weekend. Omnisphere is too important to me to not know if it has a problem. Don't know if this is good or bad news, but I just played back a project with 16 patches in a single Omnisphere instance and there were no discernible problems. How could that be bad news? It is if you're trying to replicate a problem. Sorry, I could not.
  18. Orchestral and choir, and an appreciation for video games...sounds like you should try your hand at video game music. Over the past decade the artform has grown in both sophistication and popularity, yielding some truly memorable pieces that have broken out of the game context and are interpreted live in concerts. And brought unexpected fame to journeymen composers such as Jeremy Soule, who created the soundtrack for my all-time favorite game:
  19. They are all monsters - every one of them. All have a daunting learning curve. They will all create equally great-sounding recordings. Whatever you choose, you can be sure it will require a significant investment in time and effort to obtain fluency. The main differentiators are not what you can do, but a) how well-supported they are by both the vendor and the user community, and b) how intuitive you find the workflow. On the first measure, vendor and community support, Reaper scores very highly. So does Cakewalk. As for the second measure, only you can determine how comfortable the software is to use. For me, Reaper and I didn't click, even though I have great respect for it on a technical level. But hey, Reaper's cheap and Cakewalk's free, so grab them both and dive in. Just try to avoid getting discouraged at the outset by reminding yourself that it takes time to get rolling. You may not play video games anymore, but I still find that an occasional zombie-murdering intermission is helpful for concentration and alleviating stress.
  20. It's been a bit of a dirty secret that some editing techniques that are standard practice with pop music are also used with "pure" genres such as classical, jazz and folk. Sure, engineers working in those genres will usually strive for transparency, but then pitch correction in pop music was once meant to be unnoticeable too. They got over that self-imposed restriction pretty fast. At present, classical music production sticks to a light touch and subtle digital manipulation, but they've only recently become comfortable with admitting they do it at all. It makes sense to apply some amount of dynamic range reduction, given that people are far more likely to listen in the car while sitting in traffic, or on earbuds on a plane or train. Noise reduction seems reasonable, too. But I have to wonder if there haven't also been some discreet enhancements using EQ and reverb. This video mostly addresses editing, as opposed to processing (we can still draw a distinction between those things, for now). Mostly they talk about comping, but also mention the ability to do polyphonic pitch and timing corrections using Melodyne. There is a segment in the middle that CW users might find interesting, where the hosts attempt to discern between real instruments and virtual instruments. Spoiler: the drummer guessed wrong on the drums and the pianist guessed wrong on the piano.
  21. Better still, it would be nice to see a Sonitus revamp down the road. I understand that even though CW did not write these, they do own the source code.
  22. Well done, as always. I enjoyed it, even though I don't use either feature. First time I've heard "if you don't like the video, click Dislike twice".
  23. One of the reasons ASIO is so efficient is that it doesn't support and therefore doesn't have to deal with multiple data streams from multiple programs. The trade-off is that you can only have one ASIO device active at a time. That's why Glenn suggested WASAPI Shared. Although not quite as fast as ASIO (but close enough), it can deal with multiple sources concurrently. That's how Skype can "ring" even though you're listening to music or watching a YouTube video.
  24. What happens with CC7 is entirely up to the instrument. Even though CC7 (and CC1) are the most widely-implemented controllers, they're not universal. A synth may allow runtime conversion of CC7 to some other value, use it for a nonstandard purpose, or ignore it completely. However, there are other possible reasons for CCs in general to be ignored. Every CC command carries a MIDI channel number; if it's different from the instrument's assigned channel, the synth will ignore it. A mismatch like that can happen when you hand-enter a CC7 via the Event List. If that's how you're adding the CC7 event, try using an automation envelope instead, which should always carry the correct MIDI channel.
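The channel check described above is visible right in the bytes of the message. A short Python sketch (illustrative only — not tied to any particular synth or DAW) decoding a Control Change and comparing its channel to an instrument's listening channel:

```python
def parse_cc(status, data1, data2):
    """Decode a MIDI Control Change message.
    Status bytes 0xB0-0xBF are Control Change; the low nibble is the
    channel (0-15, usually displayed as 1-16). data1 is the controller
    number (7 = channel volume) and data2 is the controller value."""
    if status & 0xF0 != 0xB0:
        raise ValueError("not a Control Change message")
    return {
        "channel": (status & 0x0F) + 1,  # display as 1-16
        "controller": data1,
        "value": data2,
    }

# A CC7 (volume) event on channel 3 at value 100:
msg = parse_cc(0xB2, 7, 100)
print(msg["channel"], msg["controller"], msg["value"])   # 3 7 100

# A synth assigned to channel 1 will simply ignore this message:
instrument_channel = 1
print(msg["channel"] == instrument_channel)   # False
```

That silent mismatch — a perfectly valid CC7 on the wrong channel — is exactly the failure mode a hand-entered Event List item can produce.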