Everything posted by bitflipper

  1. For me, it's about 50-50. Like rsinger says, it depends.
     Reasons for using internal effects:
     • Convenience, as they're often integrated into presets, and some are customized for a specific synth.
     • Allows separate FX for each voice in a multi-timbral synth while still using a single stereo pair as its output, without needing an extra bus.
     • Some effects are synced to or modulated by an internal synth parameter.
     • Efficiency. Many built-in FX are simpler and more CPU-efficient.
     • Saves having to own every effect, e.g. a separate flanger.
     • Simplicity. Most internal effects have limited controls; great if you don't need every parameter.
     Reasons for using plugins instead:
     • Third-party plugins are often superior to a synth's built-in effects.
     • You can freeze synths independent of their effects, leaving more options for the final mix.
     • Reverb, in particular, is best applied to many instruments on a common bus if you're after a natural sound, as if those virtual instruments were actually in the same room. Plus it conserves CPU.
     • Routing, e.g. sidechaining.
     • Lots more possibilities for making your mix more dynamic through automation.
     • More options, finer control and a larger UI.
     • Ability to upgrade independent of the instrument.
     • Fewer FX overall means you can learn them better and have a deeper understanding of how they work.
  2. There is certainly plenty to talk about regarding the technical side of mastering. You probably don't want to get me started. However, the direction this thread has taken is appropriate, given that at the end of the day the goal of mastering is making sure your record sounds as good as possible wherever and however it's heard.
     You just know somebody's going to listen to it on Apple earbuds or laptop speakers, and it's going to sound like cr... er... sh... er, garbage. Somebody's going to listen to it on a train or in the car. The one thing you know for sure is that hardly anybody is ever going to hear it on your speakers in your studio. Mastering tries to make it sound the best it can, regardless of the circumstances.
     The single best way to assure that is to have somebody else do the mastering, preferably someone who's using a combination of technical standards, full-range neutral speakers in a neutral acoustical environment, trained ears and experience. Unfortunately, those people charge for their services. If paying for such a service is not an option, it's up to you to get as close as possible. It's actually doable.
     A couple of years ago, a fellow came onto the mastering forum at Gearslutz and made waves there by declaring that he could master his own stuff just as well as a professional ME. Needless to say, his comments were met with pushback ranging from skepticism to derision. It is, after all, a forum frequented by some of the best MEs in the business. Being a fair-minded person, I decided to listen to his material and see for myself. I did not know who he was, but a Google search informed me that he'd had a successful band in the past, was now a solo artist, and that I could get his latest record from Amazon. So I did.
     And I was absolutely gobsmacked. Turned out, he was right. The record was brilliantly mixed and mastered, with an impressive dynamic range and clarity that's rare nowadays. And he does it all himself, from composing to tracking to mixing to mastering. Maybe he even has a shrinkwrap machine in his house, I don't know. But the final product isn't just as good as anything out there, it puts many contemporary releases to shame. So good, in fact, that he's been hired to remix and remaster many classic albums such as In the Court of the Crimson King, Aqualung, and Tales from Topographic Oceans. He's done, or is working on, the entire catalogs of Yes, King Crimson, Jethro Tull and Gentle Giant. If you want some technical details, Sound on Sound did a writeup that should satisfy your curiosity.
  3. MAutoVolume from MeldaProduction is one I've used. Works well for lead instruments, too. HoRNet has AutoGain Pro, which is similar but IMO not as easy to use as the Melda one. But it's cheaper. There's also a blatant clone called VocRider. I haven't tried it, so you'll have to google it, but IIRC it's a freebie. What these have in common, including Vocal Rider, is that they'll adjust a track's volume up and down to keep the ratio constant against a reference. Sometimes it works well, and sometimes this is definitely NOT what you want to accomplish.
     It's certainly not the only way to do it. Assuming your vocal track has been properly leveled already, it's usually going to work better if you leave the vocal levels alone and instead lower whatever is competing with it. This can be done with any compressor that has a filtered sidechain input (meaning most of the better compressors out there). I sometimes use FabFilter Pro-C 2 this way.
     Even better is to carve out a space in the spectrum for the vocal. Wavesfactory Trackspacer works like this. I use MSpectralDynamics from MeldaProduction this way if it's a super-busy mix, but usually that level of precision isn't necessary, so I'll use a dynamic equalizer with a sidechain input. FabFilter Pro-MB can do this, as well as MDynamicEq from Melda.
     Another technique uses volume automation instead of a volume-adjusting plugin. Something like Blue Cat Audio's DP Meter Pro, for example, can create automation envelopes from any track that can then be used to automate another track or feed a compressor sidechain.
     There are a bunch more, I just can't remember them all. I recall one that predated Vocal Rider and was so similar that when VR came out I remember thinking "Hey, Waves ripped those guys off!" Can't think of what it was called now. Maybe somebody else can remember what it was. Sadly, it was one of the things I lost when my computer got stolen.
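The mechanism all of these riders share can be sketched in a few lines. This is a toy illustration, not any vendor's algorithm: it measures short-window RMS of both tracks and scales the ridden track, window by window, to follow the reference. The function and parameter names are made up for the sketch:

```python
import numpy as np

def ride_gain(track, reference, sr, target_ratio_db=0.0, window_ms=50.0):
    """Toy gain rider: adjust `track` level, window by window, so its
    RMS tracks the reference's RMS plus a fixed offset (target_ratio_db).
    Real plugins add smoothing, range limits and look-ahead."""
    win = max(1, int(sr * window_ms / 1000))
    out = np.copy(track)
    eps = 1e-12  # avoid division by zero in silent windows
    for start in range(0, len(track), win):
        t = track[start:start + win]
        r = reference[start:start + win]
        t_rms = np.sqrt(np.mean(t ** 2)) + eps
        r_rms = np.sqrt(np.mean(r ** 2)) + eps
        # linear gain needed to hit reference RMS plus the dB offset
        gain = (r_rms / t_rms) * 10 ** (target_ratio_db / 20)
        out[start:start + win] = t * gain
    return out
```

Note the failure mode the post describes: when the reference gets quiet or loud for musical reasons, this blindly follows it, which is sometimes exactly what you don't want.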
  4. Not really. In MS Word the clipboard contains the last thing you cut or copied. As soon as you cut or copy anything else, the original data is discarded and replaced by the new selection, whether it's text, formatting, or an image. It makes sense that Cakewalk would work the same way. The procedure under the hood would logically remain the same regardless of how you copied the data. The ability to copy a clip by holding down the CTRL key while dragging is a great convenience, but it's just a shortcut to the standard copy-and-paste mechanic. Which is not to say Cakewalk couldn't offer separate clipboards for different types of data (which they may do, internally). However, the design goal is to maintain consistency so that all types of data (MIDI, audio, automation, markers, clip effects, ARA regions) are manipulated in the same way from the user perspective. So yeh, it's just something to get used to. But worth it in the end.
  5. I should have said "I've no doubt scook will come along to answer your question".
  6. Yeh, Stevie Wonder seems to do alright. That said, personally I prefer an analytical approach. Keeping your tracks below -8 dB is a good practice, keep doing that. But as noted above, relative levels are more important than absolute values. Back in tape days, we had to work hard to keep levels high in order to maintain a decent signal-to-noise ratio. But digital audio has (mostly) freed us from that concern. Tracks peaking at -20 or even -30 dB aren't necessarily a problem. Concentrate on getting the relative levels in balance first, then worry about bringing everything up in volume. Another salient point is that volume perception isn't about peaks, but about average levels. You can easily have a track that peaks right up at 0 dB and still sounds too quiet. Or you can have a track that peaks at -18 dB and is still overwhelming other tracks that have higher peaks. A compressor (or limiter, a type of compressor) is used to raise average levels without raising peak levels. Compression is almost always applied to vocals, often quite aggressively in most popular genres. So once you've gotten your tracks reasonably balanced, add compressors before proceeding to the final mix. Don't be surprised if you find that you end up turning those compressed vocals down!
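The peak-versus-average point is easy to demonstrate numerically. Here's a short illustration with two hypothetical signals: a sparse click track that peaks at 0 dBFS yet measures very quiet on average, versus a steady sine peaking at -18 dBFS with a much higher RMS:

```python
import numpy as np

def peak_db(x):
    """Peak level in dBFS (0 dB = full scale)."""
    return 20 * np.log10(np.max(np.abs(x)))

def rms_db(x):
    """Average (RMS) level in dB."""
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)))

sr = 48000
n = sr  # one second of audio

# Sparse clicks: peaks hit 0 dBFS, but the average energy is tiny.
clicks = np.zeros(n)
clicks[::4800] = 1.0

# Steady 440 Hz sine peaking at -18 dBFS: lower peak, far higher RMS.
t = np.arange(n) / sr
sine = 10 ** (-18 / 20) * np.sin(2 * np.pi * 440 * t)

print(f"clicks: peak {peak_db(clicks):6.1f} dB, RMS {rms_db(clicks):6.1f} dB")
print(f"sine:   peak {peak_db(sine):6.1f} dB, RMS {rms_db(sine):6.1f} dB")
```

The clicks peak a full 18 dB higher than the sine, yet their RMS comes out around 16 dB lower, which is why the sine sounds much louder. Compression exploits exactly this gap: it narrows the distance between peak and average.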
  7. Unfortunately, I no longer have either Rapture or Dim Pro installed, so I can't check. But both are quite popular, so I've no doubt somebody will come along to answer your question. You might have a problem, given that those are sample players rather than synthesizers. As a general rule, samplers don't handle pitch as well as true synths because they're reading back prerecorded content rather than generating tones on the fly. Consequently, shifting a sample's pitch more than a couple semitones tends to sound funny.
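To illustrate why samplers struggle with large shifts: the classic sampler approach simply changes playback rate, which shifts pitch, duration and formants together. This is a naive sketch of that idea, not how Rapture or Dimension Pro actually work internally:

```python
import numpy as np

def repitch(sample, semitones):
    """Naive sampler-style repitch via resampling. Shifting by n
    semitones changes playback speed by 2**(n/12), which also
    shrinks/stretches duration and shifts formants -- the artifact
    that makes large shifts on prerecorded audio sound funny."""
    ratio = 2 ** (semitones / 12)
    idx = np.arange(0, len(sample) - 1, ratio)
    # linear interpolation between original sample points
    return np.interp(idx, np.arange(len(sample)), sample)
```

Shifting up an octave halves the sample's length (and doubles its speed), while a true synthesizer would just generate a new waveform at the new pitch with no such side effects.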
  8. Let's not forget that so many of our favorite records from decades past were EQ'd solely on a console channel strip with just bass, mid and treble knobs. So yeh, knowing what you're doing is way more important than what tools you use to do it.
  9. To answer the first part of the original question, the ProChannel EQ is as good as or better than any free equalizer. It is also as good as or better than many paid EQs. Even if later on you decide to spend money on a high-end EQ, having first learned the process using what you already have, you'll be a better-informed consumer and might avoid future buyer's remorse. You're taking the right attitude by learning about EQ first. It's a far deeper subject than simply discussing EQ features or comparing different products. I'd suggest some of Dan Worrall's FabFilter tutorials. Most of what he explains is applicable to any EQ.
  10. Pitch bend data is just a number, like any other MIDI data. The only difference is the resolution: pitch bend is 14-bit data and can therefore take on 16,384 different values rather than the usual 128. But it's entirely up to the instrument as to what pitch wheel data actually does. As noted above, most (almost all) instruments present the maximum pitch bend as a user-selectable parameter. Some let you automate it, so the amount of bend can change during a song. If you tell us specifically which synth you're using, I'm sure someone here has used it and can tell you how to configure it the way you want.
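For the curious, a pitch-bend message carries two 7-bit data bytes that combine into one 14-bit value centered on 8192. Decoding it into semitones might look like this, where bend_range stands in for whatever the instrument's user-selectable maximum bend is set to:

```python
def bend_to_semitones(lsb, msb, bend_range=2.0):
    """Decode a MIDI pitch-bend message into semitones.
    lsb/msb are the two 7-bit data bytes; 8192 is the 14-bit
    center value meaning 'no bend'. bend_range is the instrument's
    configured maximum bend in semitones (commonly +/-2)."""
    value = (msb << 7) | lsb            # 0..16383
    return (value - 8192) / 8192 * bend_range
```

So a wheel pushed all the way up sends (127, 127) and, with the default +/-2 range, bends just shy of a whole tone; change bend_range on the instrument and the same data bends further.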
  11. That's boring. Wait a minute, lemme light this bong and warm up the lava lamp... Dude! That sh*t's f*ckin' amazing.
  12. Did you install the VST2 or VST3 version, and did you run the VST scanner utility afterward? Any time you install a new plugin, the DAW has to then discover it before you can use it. That's what the scanner does. It looks everywhere that a VST might be (you have to tell it what folders you keep VSTs in) and adds any new ones to the list. I usually invoke the scanner from the Plugin Manager (Utilities -> Cakewalk Plugin Manager). Alternatively, you can run it from the Preferences dialog, which offers more options but essentially does the same thing.
      If you've already done that (you may have Cakewalk configured to do it automatically every time you start it up) and the plugin(s) still don't show up, it may be that the scanner isn't looking in the right places. That's why I asked about VST2 vs. VST3: the latter always stores files in the same place, whereas with the former they can go anywhere. So if you installed VST2, and the installer put it into a folder that's not included in the list of places the scanner looks, you'll have to add that location to the list and re-scan.
      Sounds complicated, I know. But it'll quickly become second nature once you've caught the notorious G.A.S. bug and start installing new plugins every other day! (G.A.S. = Gear Acquisition Syndrome, the obsessive compulsion to keep buying more and more plugins, or guitars, or whatever.)
  13. My band actually covers "Country Roads" - with a sax solo. And this is exactly how we keep him motivated.
  14. Someone once told me I had great piano chops. At the time, I took it as a compliment. Now I'm not sure. I still have confidence in my pork chops, though. Barbequed over mesquite, mmm.
  15. So I went to the website to try and find out what the product actually is, and right at the top there's a button labeled "What is an Orchestration Recipe?". Wow, like he was reading my mind. But if you click there you get the same video Mathew posted above. Fortunately, there is more explanation to be had there. Each recipe is a video + text tutorial with MIDI files and notation for how to achieve some common orchestral technique. Great idea, actually. (Could have saved me thousands of hours (no, more like 69 years) of closely listening to orchestral music! Nobody got time for that, right? J/K!) The tutorials are as generic as possible, as he doesn't talk about specific libraries or DAWs. Which is good. You could in fact do these lessons using nothing more than Cakewalk and the TTS-1. (OK, the choirs would sound pretty lame, but you could do it.) I watched the first video, and I gotta say it again: it's a great idea. Brilliant, in fact, and I'm surprised no one has thought of this before. I think if you're new to orchestration it would be a great way to fast-track the learning curve. Even if you're an old hand at this stuff, you'll likely still come away inspired to add some of these techniques to your arsenal. I know I did.
  16. I'm still not sure what the product is, but golly, what a great marketing video! Now I really want that "Finely Chopped Piano".
  17. CC11 (Expression) and CC7 are both defined as volume controls. The MIDI spec does spell it out clearly, but instrument developers often stray from the official spec in order to suit the dynamics of a particular instrument. Some want CC11 to have a dramatic effect, some want it to be subtle. Others ignore CC11 completely. I've seen these controllers implemented in three different ways:
      • CC7 is a full volume control where 0 is silence and 127 is maximum volume, and it's the only volume controller.
      • CC11 replaces CC7 and works exactly as above (most common).
      • CC7 and CC11 are additive. Sometimes CC11 will be scaled, serving as a fine volume control with less effect on volume than CC7. This is common with orchestral libraries, where CC11 is intended to be used for swells and decrescendos.
      It's also useful to note that usually CC11's effect is to lower volume, i.e. CC11 cannot make the instrument louder than CC7 says it should be, only quieter. That means CC11 at zero might result in silence. However, I have instruments where CC11 only makes small adjustments to volume and cannot silence the instrument completely, in which case forgetting to set CC11 merely results in a quieter - not silent - instrument.
      I was unaware that TTS-1 used Expression this way (ya learn something new every day if you're lucky!), because I usually automate volume on the audio track, only automating CC11 on certain Kontakt string and orchestral libraries.
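One plausible way to model the additive/scaled scheme, purely as an assumption for illustration (real instruments vary, and the function name and depth parameter are made up):

```python
def channel_gain(cc7, cc11=127, cc11_depth=1.0):
    """Hypothetical model of the additive CC7/CC11 scheme: CC7 sets the
    ceiling and CC11 can only attenuate below it, never boost.
    cc11_depth=1.0 means CC11 at zero silences the channel entirely;
    cc11_depth < 1.0 models instruments where CC11 only makes small
    adjustments and can't fully silence the sound."""
    base = cc7 / 127
    expr = cc11 / 127
    # blend between 'CC11 has no effect' (1.0) and full expression scaling
    return base * ((1 - cc11_depth) + cc11_depth * expr)
```

With cc11_depth=1.0, forgetting to send CC11 (if the instrument defaults it to 0) yields silence, which matches the behavior described above for TTS-1-style implementations.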
  18. 50 bucks is about right. Like most of their products, it's built around an elaborate step sequencer. I haven't used this particular product, but I'm familiar with others such as Cinematic Guitars Motion and can say that they are definitely fun to play around with. Atmospheres appears to be more pad-oriented than the other CG products.
  19. To reverse the phase on a ribbon mic, you just have to turn it around.
  20. Volume automation will always work on the audio track, but there are legitimate reasons why someone might want to do the automation in MIDI instead. How I'm reading the OP is that there are two instruments that aren't responding to CC7. I can think of a couple scenarios where that might happen. Most likely is that the instrument is designed to use CC11 rather than CC7 for volume. I have a several Kontakt instruments that work that way. Some of them won't even make any sound at all until you put in at least one CC11 event. A less-likely scenario is that CC7 has been reassigned, either by design or by accident. And of course there's always the possibility that automation has been inadvertently disabled on those tracks (make sure the Automation Read button is lit).
  21. In theory, MP3 encoding (or, more specifically, the steep low-pass filters it employs) can cause peaks to increase by up to +3 dB. Therefore, you'd have to limit to -3 dB true peak to be perfectly safe. Even then, streaming platforms may still raise the overall level if your integrated loudness is below their recommendation. So if YouTube says -14 LUFS and yours comes in at -20 LUFS, they'll raise it up by 6 dB or thereabouts - including your carefully limited peaks, which will then have to be limited again by their algorithm. The end result may not be what you'd have preferred. That said, I limit to -1 dB. It doesn't guarantee absolute protection from overs, but it's conservative enough that any clipping will probably be short enough in duration to slip by unnoticed.
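The arithmetic is simple enough to write down. A minimal sketch of upward loudness normalization, assuming a platform that raises quiet material to its target (some platforms only turn loud material down):

```python
def normalization_gain_db(platform_target_lufs, your_lufs):
    """Gain (dB) a streaming platform would apply to move your master
    to its loudness target. Simplified: real platforms differ in
    whether they normalize upward at all."""
    return platform_target_lufs - your_lufs

# Master measured at -20 LUFS with peaks limited to -3 dBTP:
gain = normalization_gain_db(-14, -20)  # platform applies +6 dB
new_peak = -3 + gain                    # peaks land at +3 dBTP -> re-limited
```

That re-limiting by the platform's own algorithm, rather than your carefully chosen limiter, is the part you can't control, which is the argument for not leaving excessive headroom in the first place.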
  22. Had to throw in my favorite Filipino band, Fuseboxx. Props for playing keyboards, Chapman stick and singing at the same time!
  23. And of course, my favorite Asian country hasn't been slacking, either. Sorry, it's in Taglish and no subtitles, but I think you'll get the gist.
  24. Love this stuff. When I was 4 years old, my father spent time in Japan and brought me back some interesting gifts. My favorite was a stack of 78 RPM Japanese pop records. Something about how they blended traditional techniques with European classical sensibilities really resonated with my toddler brain. Good to see they're still doing that.