Everything posted by bitflipper

  1. I looked at the project. Before I fix the problem, tell me where the observed project end time is and what time you'd like it to be. I show the track labeled "Split Note F#3" to be the longest, ending at 5:26. Is that what you see?
  2. Well, you've piqued my curiosity so I had to go over to Bandcamp and give a listen. There's still plenty of bass in there, particularly the kick drum. It actually sounds pretty good, though. Maybe the answer lies in compression rather than more filtering.
  3. On the rare occasions when I use synthetic drums, the only vanilla processing I apply is EQ, often to reduce that "unnaturally hyped high end" you describe. More often, the fx will be things that make them sound even more unnatural, such as delays, reverb, distortion and modulators. Acoustic instruments are far more tonally complex and dynamic, which makes them well suited to treatments that either highlight or hide the many overtones in there, and/or their dynamic characteristics. Electronic percussion just doesn't have that depth. So acoustic fx tend to be subtractive in nature, while electronic drum fx tend to be additive. Also consider combining electronic and acoustic drums. You can end up with an instrument that sounds like it might exist in the real world, but with an interesting twist. The classic example is mixing an 808-style gated sine "kick" under a real kick drum sample, for a deeper low-frequency component. But you can do the same thing with snares and toms. There's no rule that says electronic music must be 100% electronic.
  4. treesha's lovely jam is indeed a testament to the precept that free instruments can be creative catalysts. It reminded me that I recently dug out an old freebie that I'd never used, the Janggu. It's a traditional Korean hourglass-shaped drum, part of a collection offered by Seoul National University. I threw it in as an experiment, just because I wanted some non-standard percussion on a song, and it surprisingly changed the direction of the composition. I had originally discovered that instrument thanks to a thread similar to this one on the old forum.
  5. Yeh, there's that; no point in sharing an articulation map for a customized instrument. Or, for that matter, an instrument that isn't widely used. I've begun work on an articulation map for an older library, Kirk Hunter Concert Strings 2. I used to love this library but haven't used it in a while, mostly because I rarely need its level of detail. I'm more likely to reach for Amadeus Symphonic Orchestra, which isn't nearly as deep but sounds just as good. My thinking is that if I had articulation maps for CS2, I might start using it again.
  6. It's a logical detour in any discussion of freeware. You can't talk about freeware without acknowledging the reason commercial developers offer it in the first place: to encourage interest in their paid products. Granted, there are some great freebies out there that were created by dedicated hobbyists and altruistically shared (e.g. Thomas Mundt's LoudMax limiter). But you have to sift through a lot of clunkers to find them, which is why this kind of knowledge crowd-sourcing remains such a longstanding staple of recording forums. And you cannot confidently pronounce a freebie as useful without comparing it to its commercial alternatives. The basic premise of the whole thread, as stated by Starship Krupa, is the belief that "a person can put together an excellent system entirely with freeware". He can make such a proclamation only because he has extensive experience with both free and non-free software. So yeh, talking about commercial software in a freeware context is legit.
  7. It's true that articulation maps are most helpful in orchestration. So useful that they actually make composing and arranging more fun and less tedious, and thus inspire greater experimentation. But that's not the only use-case. Any virtual instrument based on strings is a candidate for AM joy, especially faux guitars. Another application would be the more sophisticated voice and choir libraries that offer articulations beyond basic oohs and aahs. Speaking of being scared off by excessive complication, sjoen's mention of pedal steel reminded me of the time I decided against buying a pedal steel VI for that very reason. The demos sounded great, very expressive. But making that happen required some deep articulation switching that didn't look fun at all. I might have to revisit that decision.
  8. Don't normalize. That should be printed onto a t-shirt or wall chart for easy reference. The exception is when, as in Aidan's case, you get a bunch of files that are so quiet that they need extra gain just to get into the ballpark of the rest of the mix. Technically it's no different from turning up the gain knob, just faster and more permanent (see the sketch below).
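      For anyone curious what "normalize" actually does under the hood, here's a minimal sketch of peak normalization as a single static gain calculation. It assumes numpy and a floating-point audio buffer; the function name and the -1 dBFS target are illustrative, not any particular DAW's implementation.

```python
import numpy as np

def normalize_peak(audio: np.ndarray, target_dbfs: float = -1.0) -> np.ndarray:
    """Scale a float audio buffer so its peak lands at target_dbfs.
    Mathematically this is just one static gain move."""
    peak = np.max(np.abs(audio))
    if peak == 0.0:
        return audio  # silence: nothing to scale
    target_linear = 10 ** (target_dbfs / 20)  # -1 dBFS -> ~0.891
    return audio * (target_linear / peak)
```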
  9. Cakewalk doesn't actually have to "recognize" the RealiWhistle library, just Kontakt itself. Any library you can load into Kontakt will be fine with Cakewalk. The only time you'll have problems is if the library requires a newer version of Kontakt than what you're running, or, rarely, a library that needs more memory than your computer has.
  10. Some of the Melda plugins haven't turned out to be as useful as I'd anticipated. Spectral Delay is one of them, along with MMultibandChorus and that multiband autopanner, whose name escapes me. And speaking of compressors, MModernCompressor was frustratingly unintuitive, and even after I'd figured out the UI it turned out to be just another compressor. However, the good ones more than make up for the few disappointments. MDynamicEQ and MSpectralDynamics are surgical lifesavers.
  11. Just curious. Articulation maps are the coolest addition to Cakewalk since, um, maybe drum maps twenty years ago? Possible reasons:
      • They're so simple to use that no discussion is needed
      • They look too complicated to get started with
      • Building art maps is so time-consuming that they're treated as trade secrets by composers
      • Users have spent years memorizing keyswitches and are too proud of that accomplishment to switch
      • Everybody's into death metal or EDM and don't need no frickin' pansy-***** articulations
      • "What's an articulation?"
  12. Neither naïve nor irrational, just prudent. Most of the time, kick and snare tracks arriving in stereo is an oversight, and most of the time the left and right channels are actually identical, so there'll be no tonal difference after conversion. Yes, you do get a volume boost, but that's why I do the mono conversion first, before starting to balance the kit pieces.

      In the rare instances when a track sounds thin after conversion, that's an indication that it really was recorded in stereo with two or more microphones, and that the engineer did not take phasing into account. Leaving such tracks in stereo because collapsing to mono makes them sound worse just pushes the problem back, as the overall mix will likely have mono compatibility issues. In this scenario, rather than using the "convert to mono" shortcut, split the stereo into two separate mono tracks and either delete one of them or phase-align it.

      Though such a scenario is fairly rare on drums, it is common with synthesizers that insist on stereo output whether the patch calls for it or not. Like you, I fear degradation and will often leave the track stereo for that reason. However, before making that decision it's helpful to check whether the track is really stereo to begin with (a quick scripted check is sketched below). Quite often, such a check reveals that the patch isn't stereo at all, but just has a widening effect (chorus, reverb or delay) added to make it sound stereophonic. They design patches that way so they'll sound good in isolation, without regard for how they'll fit into a mix. If, however, the track is truly stereo (e.g. a Leslie, auto-panned pad or acoustic piano) and needs to stay that way, then a true stereo panner (such as Boz's Pan Knob) can save the day.
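      Here's a rough sketch of that "is it really stereo?" check, assuming Python with numpy and the soundfile library; the filename is hypothetical. If the side (L minus R) signal carries essentially no energy, the track is dual mono and safe to collapse; a strongly negative L/R correlation warns of the phase trouble described above.

```python
import numpy as np
import soundfile as sf  # assumes the soundfile package is installed

data, rate = sf.read("kick.wav")  # hypothetical file; shape is (samples, channels)
if data.ndim == 2 and data.shape[1] == 2:
    left, right = data[:, 0], data[:, 1]
    mid, side = left + right, left - right
    # Near-zero side energy means identical channels, i.e. dual mono.
    ratio = np.sum(side ** 2) / max(np.sum(mid ** 2), 1e-12)
    # Correlation near +1.0 collapses cleanly; negative values warn of phasing.
    corr = np.corrcoef(left, right)[0, 1]
    print(f"side/mid energy: {ratio:.6f}, L/R correlation: {corr:.3f}")
else:
    print("already mono")
```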
  13. An added plus: all the free fx are feature-limited versions of their paid counterparts, so it's a great way to experiment and see which ones you might want to upgrade to the full version. I've bought a few Melda products as a direct result of using their free versions and liking them. (OK, I admit I've purchased more than just a few Melda products, ~40 of them at last count. It's quality stuff, and very nicely priced when they go on sale, with a random selection of 4 of them at half price every week.)
  14. I've been lobbying for years for start-of-song and end-of-song markers. These would work like regular markers, but have an additional flag and checkbox in the marker dialog, and maybe be shown in a different color. Beyond that, they would behave like any other markers: show in the Marker List, be moveable, name-able and delete-able. In the absence of such a feature, what I do is create my own start-of-song and end-of-song markers and label them "Start" and "End". Then, when I export, I place the cursor at the Start marker and press F9 to make that the start of the timeline selection, then move to the End marker and press F10. That way, I can precisely control where the export begins and ends and it'll be consistent every time. If I listen back to the exported file and notice it's truncated (e.g. reverb tails or ambience being cut off abruptly), I can easily nudge the End marker a little to the right.
  15. Here's my drum-track organization practice. Bear in mind I've only been doing this for 50 years, and only 20 of those on a computer, so the process is still being refined.

      I like to place drum tracks in the order that I initially mix them: kick, snare, toms, overheads, hats, room. When I get tracks from someone, they are often all in stereo, so the very first step is to convert kick and snare to mono. These all go into a "Drums" folder, along with any other percussion instruments, and all are routed to a Drums bus. (If I'm using MIDI drums, all the MIDI tracks go into their own "Drums MIDI" folder so I can easily hide them when I'm done fiddling with the MIDI.) Depending on the genre, there will also be one or two additional busses for distortion and reverb. I like to send a little kick, snare and toms to the distortion bus. Unless I'm using reverb as a special effect, it usually works better to apply reverb to the entire kit via a separate bus.

      Each drum track will have its own EQ, sometimes a limiter and/or compressor on just the kick, sometimes a compressor on the snare, and usually compression on room mics. Dynamics here are generally just for shaping hits and trimming excessive peaks. All this is done very early in the mix process. Yes, it's common wisdom that you don't put fx on individual tracks before mixing, but for me drums are the exception to that rule - get them sounding good on their own first, then tailor the rest of the mix around them.

      I prefer to do the main compression and limiting on the entire kit at the bus. Any volume automation will also be done at the bus. This helps to maintain cohesion between the individual drums. Once they've been balanced between themselves I generally don't touch them, preferring to treat them from then on as a single instrument. It also makes it easier to balance the drums with the rest of the mix later on, or to export them as a stem.
  16. ^^^ Well said, Bob. Input/output buffering isn't the only source of latency. Some plugins necessarily introduce large latencies due to the way they work, and should be applied only after tracking is complete. The Global Bypass feature is there mainly for when you need to add another track late in the mix process. Btw, the reference to "oven" comes from interface clock circuits literally being placed in a warm box so the oscillator doesn't drift with temperature changes. This makes the clock frequency very stable, and is the reason internal clocks outperform even very sophisticated external clocks.
  17. I might just give it a go. Once I get over my current phase, which is 70's style synth anthems. I blame it on Wookie. Every time he posts a song, I get all nostalgic for vintage Larry Fast and Vangelis. That'll pass, though. I was thinking about 60's surf next, although that's challenging for a piano player.
  18. Just did my first gig in a year-and-a-half. It wasn't my fingers that hurt afterward (they've been working out), but my back (which has not).
  19. This discussion makes me feel like such a dinosaur. If I want something repeated 16 times, I physically play it 16 times using my own actual fingers. Sure, each repetition might not be identical, but I see that as a positive. That said, on the rare occasions when I do want to exactly duplicate something, like maybe a tambourine hit, I have no problem using the exact same technique I'd use in any other type of editor: Ctrl-C / Ctrl-V. Yeh, I have loop libraries that allow a single MIDI note to be stretched to any length, but I rarely use them because despite their undeniable convenience, in the end they're just excruciatingly booooring.
  20. For a free Oberheim emu, check out OB-Xd. Redoptor is a distortion plugin, but with a distinct character that lends itself to drum enhancement. For a free substitute, I'd experiment with amp sims, which could probably do the trick just fine. Note that in either case, the plugin would go on a separate bus from the main drum bus and be mixed in under the main drum mix. Here's one guy's list of the best free amp sims.
  21. OK, I get it. Didn't know that style had a name. Slow it down a few bpm and front the band with some teenage girls, and you've got a genre that's still alive and well today. In Japan, anyway. I like the energy, but it's a bit frantic for my laid-back paradigm. If I were to set out to replicate that sound, it would absolutely have Redoptor by D16 Group on the drum bus. And one of my favorite 80's synths, the inexpensive but faithful Oberheim 8-voice emulation from Cherry Audio.
  22. Maybe I'm just exposing myself as the out-of-touch old fart that I am, but I have no idea what "Eurobeat" is. I'll bet I could pull off some authentic Euro-beating, though, if I had an example. Got a link or two?
  23. A common practice is to set levels/balances in mono first, before any panning. That's not done to make panning more effective, but rather to help prevent the phenomenon where the mix balance sounds fine in stereo, but only when listening in the sweet spot between the speakers. That's because panning creates clarity through separation; lose the separation and you lose the clarity. For example, you might be listening to a mix from the next room and notice that it suddenly sounds inexplicably muddy. That doesn't directly address your question, though. A solution is to modify your monitoring balance to compensate for the hearing imbalance. After you've done that, you can be confident that others will hear your pan decisions the same way you do. I compensate for unbalanced sensitivity between my ears by adding about 2 dB of extra gain on my right speaker (see below for what that works out to in linear terms). I play some white noise, sit smack dab in the middle between the speakers, close my eyes and listen for the "phantom center". If it sounds centered, I know the right speaker is now correctly compensating for the lower sensitivity in my right ear. I could accomplish the same end using the balance control on the master bus, but prefer to adjust the speaker because it's a set-and-forget solution.
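      For the curious, here's the dB-to-linear arithmetic behind that 2 dB trim, as a tiny Python sketch (the function name is just illustrative):

```python
def db_to_linear(db: float) -> float:
    """Convert a decibel amount to a linear amplitude factor."""
    return 10 ** (db / 20)

print(db_to_linear(2.0))  # ~1.26: +2 dB is roughly a 26% amplitude boost
```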
  24. Yeh, it actually does make sense. In fact, it's all working exactly as designed. Really. In order to support multiple MIDI-controlled instruments, a mechanism exists to route each MIDI track to its assigned instrument. To do that, each track and instrument is assigned a MIDI channel, a number between 1 and 16. In your example, you'd probably switch the violin over to channel 2 and leave the piano on channel 1. Moving your violin track to channel 2 is simple: there is a MIDI channel dropdown list in the track header. Choose "2" from the list. Now the violin will only listen to MIDI channel 2. Playing the keyboard will still activate the piano because it's still on channel 1, but the violin on channel 2 will ignore those notes. Your MIDI controller (piano) likely provides a way to designate which MIDI channel it transmits on. It probably defaults to channel 1. You could change this at the keyboard, but I like to keep it simple: record the first MIDI track on channel 1, then switch it (in the DAW) to a different channel. Don't forget to also inform the soft synth which MIDI channel it's supposed to respond to. Then I'll record the next track (still on channel 1), and after recording switch it over to a third channel. Repeat for each subsequent MIDI track, making sure that every track and instrument has its own unique channel number. (The sketch below shows what this routing looks like under the hood.) When you've reached the limit and have used up all 16 MIDI channels, come back and we'll talk about MIDI ports.
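      If it helps to see the routing concretely, here's a minimal sketch using the Python mido library (an assumption on my part; any MIDI library would do). One wrinkle worth knowing: on the wire, channels are numbered 0-15, even though DAWs and hardware label them 1-16, so "channel 1" in Cakewalk is channel 0 here.

```python
import mido  # assumes the mido package is installed

# Two instruments, each assigned its own channel (DAW channels 1 and 2):
PIANO_CH, VIOLIN_CH = 0, 1  # 0-based on the wire

def route(msg: mido.Message) -> None:
    """Mimic the DAW: deliver a note only to the instrument
    whose assigned channel matches the message's channel."""
    if msg.type in ("note_on", "note_off"):
        if msg.channel == PIANO_CH:
            print("piano plays", msg.note)
        elif msg.channel == VIOLIN_CH:
            print("violin plays", msg.note)
        # any other channel is simply ignored, like an unassigned track

route(mido.Message("note_on", note=60, velocity=100, channel=0))  # piano sounds
route(mido.Message("note_on", note=60, velocity=100, channel=1))  # violin sounds
```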
  25. Not quite. Vernon's happy with the mix when it's streamed from Bandcamp, and that's a reasonable reference because aside from the lossy encoding for streaming, Bandcamp is pretty neutral. He's talking about a specific streamer that seems to be overreacting to low frequencies. It's unlikely you'd want to remix a song because one site has technical issues. It would be like remixing so a song sounded better in your car - I've tried that, and it failed miserably, sounding worse everywhere else. Where you could be correct, though, is if the mix has excessive subsonic content. That'll throw any compressor off. Fortunately, there's no downside to filtering it out, because a) it's inaudible on 99.9% of playback systems, and b) unless you're recording a pipe organ there probably isn't anything musical going on down there. (A minimal filtering sketch follows below.)
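      If you'd rather see the subsonic filter spelled out, here's a minimal sketch using scipy (an assumption, as is the 30 Hz cutoff; adjust to taste):

```python
import numpy as np
from scipy.signal import butter, sosfilt  # assumes scipy is installed

def remove_subsonics(audio: np.ndarray, rate: int,
                     cutoff_hz: float = 30.0) -> np.ndarray:
    """4th-order Butterworth high-pass to strip inaudible sub-bass
    that can make a streamer's compressor overreact."""
    sos = butter(4, cutoff_hz, btype="highpass", fs=rate, output="sos")
    return sosfilt(sos, audio, axis=0)  # axis=0 handles (samples, channels) arrays
```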