
Duncan Stitt

Members
  • Posts: 28
  • Joined
  • Last visited

Reputation: 28 Excellent


  1. Expanding on my earlier comment, mics come in three basic flavors: flat response, mid-scooped response, and mid-forward response. This is an oversimplification, but there's a technical explanation. Condenser mics have either a center-terminated capsule or an edge-terminated capsule. A center-terminated capsule pushes the mids; an edge-terminated capsule scoops them. Neumanns are center-terminated; 414s are edge-terminated. You can see the difference if you compare frequency response graphs, which you can find in the mic database at Recordinghacks.com.

     To generalize (again), a mid-scooped voice like Rod Stewart's (extreme example) would benefit from a mid-forward mic like a Neumann or Shure SM7B. A honky or nasal singer like the guy from Queen might benefit from a mid-scooped mic like a 414. That's why reputable recording studios have a collection of mics to choose from. They'll do a mic shootout at the beginning of the recording process to determine which mic sounds best on the singer.

     The thing is, the actual frequency response of the same mic model can vary from one unit to the next, with peaks and valleys that might be a couple of dB off the published response graph. If one of those little peaks or valleys coincides with a peak or valley in the singer's voice, it can sound magic - or terrible. That's why you hear people complaining they got a bad copy of a U87, for example.

     Getting into the minutiae, the high frequency peak can vary from mic model to mic model. If the sibilance range of your singer is around 7 kHz, a mic with a peak at 7 kHz is going to magnify sibilance. An example from a few years ago: a lady singer was getting too much sibilance on my Gefell MT71s (same capsule as the Gefell 930, but different electronics and housing). She said that in another studio she'd used a Shure KSM44 with no sibilance issues. I looked up response graphs for both mics and saw a peak at 7 kHz on the Gefell and a dip at 7 kHz on the Shure. I sold the Gefell, even though it sounded awesome on acoustic guitars.

     Musical context is important, too. Some singers might use a mid-scooped mic on one song and a mid-forward mic on another, depending on how the voice interacts with the musical accompaniment.

     For recording instruments, a bass roll-off switch on the mic can be crucial. The other day, I wanted a sort of warm sound for a sloppily strummed acoustic guitar, where I wanted to increase the sound of the body and decrease the string sound. I put up an AT4047, which is known for its big low mids and reduced high end. No matter where I put it, I was getting too much low end. I changed to a 414 ULS (the flatter-response model of 414s) with a foam pop filter to reduce the highs and the two-position bass roll-off switch set to 150 Hz, and got the sound I was looking for.

     In an ideal world, we EQ our tracks by choosing the right mic, since the EQ curve of the mic is baked into the recording. Of course we can fiddle with EQ to change the sound in the mix, but we can't replicate the sound of a different mic with EQ alone. On the other hand, some companies are promoting mic modeling systems where you buy a certain mic and then run it through their software to emulate the mic of your choice. I have no experience in that realm - I have a bunch of mics instead.
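     If you want to eyeball that kind of comparison yourself, here's a minimal Python sketch of what I did with the two response graphs. The dB values below are made up for illustration - pull the real numbers off the graphs at Recordinghacks.com:

        # Compare two mics' published response curves in a singer's
        # sibilance range. All dB values here are hypothetical.
        import numpy as np

        freqs = np.array([1000, 2000, 4000, 5000, 7000, 10000, 15000])  # Hz
        mic_a = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 2.0, -1.0])  # peak near 7 kHz
        mic_b = np.array([0.0, 0.0, 1.0, 0.5, -2.0, 1.5, 0.0])  # dip near 7 kHz

        grid = np.linspace(5000, 9000, 200)  # sibilance range to scan
        a = np.interp(grid, freqs, mic_a)
        b = np.interp(grid, freqs, mic_b)

        print(f"Mic A max in sibilance range: {a.max():+.1f} dB")
        print(f"Mic B max in sibilance range: {b.max():+.1f} dB")
        worst = (a - b).argmax()
        print(f"Worst-case difference: {(a - b)[worst]:.1f} dB at {grid[worst]:.0f} Hz")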
  2. 414s come in two basic flavors: the ULS models are flatter; the XLII/XLS models are brighter. All 414s have a bit of a dip in the 1k-2k range, which would contrast nicely with your other mics - they're all midrange-heavy, to different degrees. One big advantage of the 414s is their four-position multi-pattern switch, which can be helpful if you need more isolation. I often use one 414 on vocals and another on guitar for a live guitar/vocal recording, with both set to hypercardioid. If you aim the vocal mic up and the guitar mic down, you can get almost complete isolation on the vocal and good isolation on the guitar. You might want to get a foam pop filter, so if the XLII is too bright, you can take the edge off.
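     The isolation trick falls out of the polar-pattern math. A rough Python sketch using the standard first-order pattern formula - the angles are just illustrative, and a hypercardioid's null sits near 110 degrees off-axis:

        # First-order polar pattern: g(theta) = A + (1 - A) * cos(theta),
        # where A = 0.5 for cardioid and A = 0.25 for hypercardioid.
        import math

        def pattern_gain_db(a, theta_deg):
            g = a + (1 - a) * math.cos(math.radians(theta_deg))
            return 20 * math.log10(max(abs(g), 1e-6))  # clamp avoids log(0)

        for name, a in {"cardioid": 0.5, "hypercardioid": 0.25}.items():
            for angle in (0, 90, 110, 180):
                print(f"{name:14s} {angle:3d} deg: {pattern_gain_db(a, angle):+7.1f} dB")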
  3. T Boog - my fave piano + acoustic guitar track in a band setting is 'Time Flies Either Way' by DAWES, from the 'Passwords' album. My understanding is that they hired a new keyboard player for that album. He's very tasty, plus he puts pads under virtually every song, which sort of makes their music float on a pillow of sound. And the mix is very neutral, not all bright and tinny. Notice that in this mix they put one acoustic guitar in the center and let the piano take the stereo spread. Works great on this song, but that doesn't mean it would work on yours. My go-to piano for a busy mix is the Session Piano in the Roland JV1010. It's very blah by itself, but it fits into a mix better than any of my "better" piano VIs. I dread the day when that box finally dies.
  4. Yeah, I get triggered by "free music," as if Napster just happened. My fault. Ha ha. After resisting streaming services for years, I finally signed up for Apple Music because they pay the creators more per listen than the other streaming services. Even then, if I find an artist I like, I buy their CD or download, ideally through their website or Bandcamp page, so the grifters who run the music business get less of a cut. Also, I want to see the credits. It amazes me that songwriters are so often left anonymous on streaming sites.

     Curious what the instrumentation is on your cello/piano track. The Branden & James duo might have something similar, although I'm not a fan of the mix of the song I linked. I figured, if their music was similar to yours, you could delve into their catalogue to find something better. Lyle Lovett has used cello in his productions, including with his 'Large Band' that has a piano player. They've also done some duo tracks on YouTube with guitar/vocal/cello.

     I just finished a folky project with guitars, vocals, cello and bass, although on a couple of songs we turned off the bass to let the cello take it. It all depends on the arrangement. If the cello is playing distinct lines and the bass is more of a mushy pulse under the track, they can coexist. It also depends on how you mic'd the cello. If it's really thick, with a lot of low end, of course the bass is going to fight with it. If it's bright and airy a la the B&J track, it's easier to include a bass track.

     Mixing is all about compromises. I'll solo the bass and piano tracks together and then sweep a parametric EQ in boost mode to find a frequency range that "blooms". I'll then cut that range in either the piano or the bass so they're not competing. I'll also alter the piano part to leave room for another instrument, but I generally record MIDI piano, so it's easy to edit parts while mixing. With MIDI piano, you can reduce the velocity of problem notes so they're softer but still there, or you can delete them entirely (see the sketch below). With a real piano, you can automate volume, but ducking individual notes in a chord is not going to work. Many mix engineers will tell you the key is volume automation - they use it everywhere.

     Keep in mind, mixing is highly subjective. John Mayer went through three A-list mixing engineers before he found the sound he wanted for the Continuum album.
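     Here's roughly what that velocity ducking looks like in code, sketched with the mido library - the file names, pitch range, and scale factor are all placeholders:

        # Soften "problem" notes in a pitch range instead of EQing.
        import mido

        LOW, HIGH = 36, 48   # MIDI notes to duck (C2-C3 here; adjust to taste)
        SCALE = 0.6          # reduce velocity to 60%

        mid = mido.MidiFile("piano_take.mid")  # hypothetical input file
        for track in mid.tracks:
            for i, msg in enumerate(track):
                if msg.type == "note_on" and msg.velocity > 0 and LOW <= msg.note <= HIGH:
                    track[i] = msg.copy(velocity=max(1, int(msg.velocity * SCALE)))
        mid.save("piano_take_ducked.mid")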
  5. "Time to lighten up a bit Duncan. The material he is looking for can be found for free anyway. This is not the 'piracy' that is causing the losses. Better to concentrate on other major offenses." Free music on the internet is exactly why the music business collapsed. Prior to Napster, musicians and songwriters could make a good living from mechanical royalties. Today, they have to drive for Uber. I agree that streaming services are only adding to the problem, but the problem originated with free music on the internet, and as long as stealing music remains acceptable in the public eye, the creators will continue to suffer. As creators ourselves, it's in our best interests to discourage music piracy, not condone it. For the OP: here's a national act based on piano, cello and voice: https://www.brandenjames.com/music?pgid=ktq4ifr3-c08ea239-dd22-4d4a-aa72-1b89d2bda23a
  6. To clarify, you want to use someone else's work as a learning tool to make your own work sound better, but you don't want to pay for it? This is the attitude that's killing the music business - the very same business we aspire to be a part of. A couple of other music-related forums I've frequented over the years would have flagged this thread for advocating for music piracy, but I'm old and, obviously, times have changed.
  7. When we're children, everything is free. Then we grow up.
  8. Some say avoiding food from the nightshade family (potatoes, tomatoes, peppers) helps to minimize inflammation. Top-tier athletes like Tom Brady avoid that food group entirely - as do I. Unfortunately, I didn't become aware of the link between nightshades and arthritis until after I turned 65, but I can still work my hands just fine, unlike my Mom, who could no longer play piano by the time she was my age due to arthritis.
  9. Not a solution, but a possible workaround: create a new CbB project with the MIDI file for the piano, set to the tempo he used. Import the piano audio track and then mess around with the tempo until the audio track lines up with the MIDI track. (I've found with MIDI that if one person's DAW was set at 120, sometimes I have to set mine at 119.96 or 120.03 to get everything aligned.) After you've got your audio and MIDI aligned, import your rough mix audio track, adjust your new piano sound to fit into that mix, then render a new piano audio track and import it into your original project.
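     The arithmetic behind the lining-up step, as a quick sketch: count the beats between two clear downbeats in the audio and measure how long they take. The numbers here are hypothetical:

        # Project tempo that makes the MIDI line up with the audio.
        # Tiny mismatches like 119.96 vs 120 come from the original
        # DAW's rounding.
        beats = 64               # beats between two clear downbeats
        audio_seconds = 32.01    # measured length of that span

        tempo_bpm = 60.0 * beats / audio_seconds
        print(f"Set the project tempo to {tempo_bpm:.2f} BPM")  # ~119.96 here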
  10. More on proofing the mix - I try to set the balance between the vocal and the drums/bass first, and then bring in the other instruments. At the end of the process, when I think I'm finished, I turn off everything except the bass, drums, and vocal to make sure that balance hasn't gotten skewed. Virtually every time I get to that stage, I have to bring the vocal down a touch, along with the instruments competing with it, so I don't lose the power of the rhythm section. Another trick is to listen to the mix from a different room. This can reveal volume relationships that are disguised when you're right in front of the speakers, where the highs give everything more presence.
  11. Pro mixers work at a very low volume on nearfield monitors most of the time, turning it up now and then to check the bass. They do this to keep their ears fresh. At higher volumes, it doesn't take long for our hearing to shift, so that by the end of the day we're adding way too much high end to everything. Pro mixers can't afford to do that. I would use the headphones to check for details - occasionally - but not to mix on for extended periods. Just because they're headphones doesn't mean you're going to avoid hearing damage by using them. Quite the contrary: I don't recall the exact numbers, but they say a significant percentage of young people these days already have permanent hearing damage from listening to loud music on headphones or AirPods. To get used to working at a lower volume, try comparing your mix to a mix from a commercial CD you like the sound of. Import the CD mix into your project so you can do an A/B comparison while you're working (see the level-matching sketch below). It's just a matter of retraining yourself so you can work at lower volumes. Higher volumes cause hearing damage, and hearing damage is permanent.
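     One caveat with A/B'ing: the louder track always sounds "better", so match levels first. A minimal Python sketch using plain RMS matching - it assumes the soundfile library, and the file names are placeholders:

        # Level-match a reference track to your mix before A/B'ing.
        import numpy as np
        import soundfile as sf

        def rms_db(path):
            data, _ = sf.read(path)  # float samples, any channel count
            return 20 * np.log10(np.sqrt(np.mean(np.square(data))))

        mix_db = rms_db("my_mix.wav")
        ref_db = rms_db("reference.wav")
        print(f"Trim the reference by {mix_db - ref_db:+.1f} dB to match your mix")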
  12. I have Synthfont working, but I have to assign each track to it, rather than opening a file and having all MIDI channels go to Synthfont. How do I change the default MIDI player from TTS-1 to the Synthfont VST? In Preferences, the MIDI Devices window is empty except for 'friendly name'. (I'm not inputting MIDI, just playing MIDI files.) In the Instruments window, I see a list on the right, including "SynthFont device", but the output box on the left is blank, with no way to assign channels to the SynthFont device on the right.
  13. Thanks for the heads up. Sounds promising, or at least it did till your update at the end. Anytime a piece of software seems too good to be true, it's highly likely it is too good to be true. Ha ha. So... I downloaded VST Synthfont and chose the default installation folder - 'VST instruments,' I believe. Cakewalk can't find it, even after a restart and then going into Utilities > Cakewalk Plug-in Manager and rescanning for VSTs. Did the default installer put it in the wrong place? I'm not a Windows guy, and I'm old, which makes this a headache-inducing experience. I'm on Windows 11. I think I have the latest version of Cakewalk, which I had to download last week in order for the program to run. (I hadn't turned it on in a month or so.) TTS-1 is still there, but I prefer the snare sound of the VST Synthfont instruments in your YT video. ETA: putting Synthfont in the right folder worked.
  14. I've got three in my studio. They're comfy and sound flat-ish, as in no hyped lows and highs.
  15. I was imagining a scenario similar to my old system, where Cakewalk was sending MIDI to an external sound module. I even bought a Sonority V3 hardware MIDI sound module for that purpose, but discovered it doesn't respond to GM patch changes. I was thinking that avoiding a VST instrument loading with every song might increase stability, but if Synthology Standalone doesn't work that way, I'll give the VST a try. I'll definitely be rendering all my songs to audio at some point, but there's going to be a transition period where I'll be tweaking my sequences to accommodate the correct tempos for dancers and the correct keys to match the voices of the singers. Thanks for your expertise. Your videos are great.
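     For anyone wondering what "responds to GM patch changes" means at the wire level: the DAW sends a program change message per channel, and a GM module maps the program numbers to the standard GM patch list. A sketch with the mido library - the port name is a placeholder for whatever interface feeds the module:

        # Send a General MIDI patch change to an external sound module.
        import mido

        with mido.open_output("USB MIDI Interface") as port:  # hypothetical port name
            # GM program 0 = Acoustic Grand Piano, sent on channel 0
            port.send(mido.Message("program_change", channel=0, program=0))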