Everything posted by Bill Ruys

  1. Mr bitflipper is 100% correct on this. A de-esser is essentially a high-Q (narrow-band) filter feeding a compressor: it looks for peaks in level within that narrow band and compresses them. The point is that the (typically high-frequency) band is not attenuated unless it crosses a threshold, because you don't want to roll off that band all the time, only when a sibilant jumps out. So, as the flip master says, it's totally the wrong tool for the job. If you have a section at the beginning or end of the recording with silence that exhibits only the hiss, a learning noise reduction plugin would be best (as others have said); a rough sketch of that idea follows below. One free plugin that does this is Blue Lab Audio Denoiser.
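Just to illustrate the "learn the noise, then remove it" idea - this is a minimal spectral-subtraction sketch in Python, not the actual algorithm inside the BlueLab plugin or any other specific product. It assumes a mono WAV with roughly half a second of hiss-only silence at the start; the file names and numbers are made up for illustration.

    import numpy as np
    from scipy.io import wavfile
    from scipy.signal import stft, istft

    rate, audio = wavfile.read("vocal_take.wav")      # hypothetical mono file
    audio = audio.astype(np.float64)

    f, t, spec = stft(audio, fs=rate, nperseg=2048)

    # "Learn" the noise profile: average magnitude over the first 0.5 s of silence.
    noise_frames = t < 0.5
    noise_profile = np.mean(np.abs(spec[:, noise_frames]), axis=1, keepdims=True)

    # Subtract that profile from every frame's magnitude, keeping the original phase.
    mag = np.abs(spec)
    clean_mag = np.maximum(mag - 1.5 * noise_profile, 0.05 * mag)   # over-subtract, with a floor
    clean_spec = clean_mag * np.exp(1j * np.angle(spec))

    _, clean = istft(clean_spec, fs=rate, nperseg=2048)
    wavfile.write("vocal_take_denoised.wav", rate, clean.astype(np.int16))

Real denoisers do a lot more (smoothing, artifact control, and so on), but this profile-based approach is why they handle constant hiss so much better than a threshold-gated de-esser.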
  2. A link to this review popped up in one of my news feeds, so I took a look. I had to shake my head a few times reading this article. It feels like the reviewer has never actually used a microphone before and has gathered all his or her information from manufacturers' spec sheets and a quick read about the proximity effect on Wikipedia. I guess that's what you get from a microphone review on a headphone review site. You might get a chuckle out of this: https://headphonesproreview.com/best-microphone-for-recording-vocals/
  3. I don't know if you'll find this useful or not, but I have a pair of powered monitors that have the power switch at the back, and reaching around each monitor to turn them on/off was a pain in the neck. I put them on a smart plug, so when I walk into my studio, I now just say "Hey Google, turn on studio monitors" and they are powered up before my butt hits the chair. I don't know if you are a fan of Google Home (or Alexa) devices, but I find it very easy. When walking out of the studio, I just tell Google to turn them off again. And if, for any reason, I forget to turn them off, I can do that from my phone, regardless of whether I'm home or out. I would imagine there are ways to turn a smart plug on/off from a batch file on a PC, too. So, going back to the batch file suggestion posted earlier, there may be an option to have the batch file power up the audio interface first, wait for a bit and then launch Cakewalk. There will be a solution - it just depends on how hard you want to look. EDIT: Sure enough, a quick Google search shows there is a way to do this from a batch file. It even relates to the same TP-Link smart plugs I'm already using. Link here: Control a tp-link Smart Plug from Windows Without Extra Programs. There ya go - a solution in waiting... (A rough sketch of the same idea follows below.)
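For anyone who would rather script it than click through the app, here is a rough Python sketch of the same idea as the linked article. It relies on the older local TP-Link/Kasa protocol (a length-prefixed, XOR-obscured JSON command on TCP port 9999); newer plug firmware may only accept cloud control, and the IP address here is just a placeholder.

    import json
    import socket
    import struct

    def xor_encrypt(payload: str) -> bytes:
        # TP-Link's "autokey" obfuscation: each byte is XORed with the previous cipher byte.
        key = 171
        out = bytearray()
        for ch in payload.encode():
            key ^= ch
            out.append(key)
        return bytes(out)

    def set_plug(ip: str, on: bool) -> None:
        cmd = json.dumps({"system": {"set_relay_state": {"state": 1 if on else 0}}})
        packet = struct.pack(">I", len(cmd)) + xor_encrypt(cmd)   # 4-byte big-endian length prefix
        with socket.create_connection((ip, 9999), timeout=5) as sock:
            sock.sendall(packet)

    set_plug("192.168.1.50", True)   # placeholder IP - power up the monitors/interface

A batch file could then call the script, wait a few seconds for the interface to enumerate, and launch Cakewalk.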
  4. It's the dilithium crystals required to fuel it that's the expensive part. And don't get me started on interstellar shipping rates - they're completely off the planet!
  5. He did use a green screen, but it wasn't that big. Most of the outside shots were on location in Wellington, New Zealand.
  6. Thanks for your kind words, guys. I can honestly say that it was Cakewalk that got me into making music when I was still in my 20s, back in the Windows 3.1 days. I still remember when Ron Kuper came along and added audio recording to what was a MIDI-only product at the time, and how super excited I was. The legacy has now passed on to the next generation, and I can honestly say, hand on heart, that it might not have happened if not for the influence of Cakewalk on our family. Home became a place where we could make and record music.
  7. Hey folks, so my son spent his youth in my home studio, perfecting his songs in Sonar/Cakewalk, and cut his teeth on our beloved software. He's just released his first music video and I'm beyond proud of what he's come up with. He, his bass player and a few friends created the video at zero cost, and I think what they've been able to produce, funded on nothing but passion, is pretty spectacular. Very proud dad! https://youtu.be/wYePI12wEn8
  8. Count me in with the adoring crowd. As a 25-plus-year veteran CW user, I'm loving what you guys are doing and the amazing gift that you have bestowed on us. I don't take it for granted. Thank you so much for continuing to develop this amazing tool.
  9. ...but gosh darn, don't they look good! This is the sort of thing that really makes the UI feel polished to me. Nice that they are optional for the purists who scoff at such things. I'm easily bought by a bit of eye candy. Yup, call me shallow, but I'm digging it.
  10. One thing I always do on any DAW is disable Windows sounds - why? Because Windows usually tries to play all its sounds at 48 kHz. I have had situations over the years where I am working at 44.1 kHz in my DAW, a Windows system sound plays and, in doing so, forces the sound card to 48 kHz. On-board sound cards seem to handle this situation, but some recording interfaces don't, and stay stuck at 48 kHz. This has specifically affected MOTU cards for me in the past.
  11. As others have mentioned, the new features in Melodyne should help. But the other tool I have used successfully for troublesome sibilants is a multi-band compressor. You set the upper and lower edges of just one band so that it covers only the frequencies where the sibilants live, and set your threshold on that band so that it starts kicking in only when a sibilant is present. Set the ratio so that it cuts just the right amount to bring it under control (a worked example of the threshold/ratio arithmetic follows below). It's like a de-essing tool on steroids, as you have far more granular control over the parameters. I would also add that this is a case where microphone selection is pretty important. Some bright condenser mics can really emphasise sibilants with certain vocalists, but if you've already laid down all your vocals and can't redo them, I guess you'll have to fix it in the mix.
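To put some (made-up) numbers on the threshold/ratio part, here is the basic hard-knee compressor arithmetic in Python - just an illustration of how much cut a given band setting produces, not any particular plugin's curve.

    def gain_reduction_db(level_db: float, threshold_db: float, ratio: float) -> float:
        """How many dB a hard-knee compressor removes for a signal at level_db."""
        if level_db <= threshold_db:
            return 0.0              # below threshold: the band is left untouched
        over = level_db - threshold_db
        return over - over / ratio  # output only rises 1/ratio dB per dB over threshold

    # A sibilant peaking at -12 dBFS in a 5-8 kHz band, threshold -24 dBFS, ratio 4:1:
    print(gain_reduction_db(-12.0, -24.0, 4.0))   # 9.0 dB of cut, applied only to that band

Below the threshold the band passes untouched, which is why the rest of the vocal keeps its top end.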
  12. Nothing at all wrong with CJ's comments, but having worked professionally as a VO artist, I have some differing opinions... The design principle behind the Electro-Voice RE20 was to eliminate the proximity effect as much as possible, so if someone was going to learn how to use it to their advantage, the RE20 would not be the mic to learn it with. It's a great mic, but it probably has less proximity effect than nearly any other cardioid microphone on the market. Strangely, one of the most popular microphones in the VO studios I've worked in is the Sennheiser MKH 416 - and it's a shotgun mic! As someone who has recorded music for over 30 years, this struck me as very odd, but the VO world is not the same as the music recording world - they play by their own rules. The professional VO producers I have worked with have actually discouraged me from using proximity to bass-boost my voice for the commercial jobs I have done over the years - they prefer a more neutral recording they can EQ themselves. Of course, if you're doing your own thing just for fun or recording your own podcast, go ahead and beef it up! The same goes for compression - heavy compression will get you that larger-than-life, in-your-face sound, but if you are working with a producer, they won't much like you doing that - so the question to ask is: is this solely for you, doing your own production, or are you trying to get professional work? Because the way you attack this will be very different depending on the answer. I would also add that standing waves are a problem, yes, particularly for the listening environment, but not so much for the VO recording environment. That's why many of the world's leading VO artists can get away with recording in vocal booths. If you wanted to find the worst possible environment for room nodes, modes and standing waves, you couldn't do much worse than a vocal booth. In truth, early reflections are a far worse problem for VO recording, because they cue your brain to the fact that you are in a small space, so absorption is the best place to start in dealing with this. Standing waves cannot be corrected with typical foam panels or even diffusors, as they occur at frequencies too low for most acoustic panels to absorb. Standing waves are mostly to do with room dimensions and geometry. Starting out trying to fix standing waves is the wrong place to start for a hobby VO studio. Deal with early reflections first, then isolation. Then look at standing waves, but realise you can't fix them with foam - you will need to analyse the room and fix them with room geometry and bass traps. My tips would be:
  • Set up a reasonable recording space with good absorbent treatment. If you can use a larger room, resonances due to standing waves will be low enough that you can EQ them out - but don't get too obsessed with this.
  • If your room acoustics are less than stellar, go with a large-diaphragm dynamic microphone (like the aforementioned RE20) or a Shure SM7B. These are going to hear less of the room than a very sensitive condenser microphone.
  • Learn how to use a gate, subtly.
  • Learn how to use a compressor by experimenting, and you will soon understand why you needed to learn how to use a gate first.
  • Practice, practice, practice. Your skill and style are more important than anything else.
  • Get people to critique your recordings. When I first started in VO work, my producer pointed out the little things I was doing wrong, like not sounding the last consonant in a word I was over-familiar with. I couldn't even hear this until he pointed it out to me.
  • In Cakewalk, learn how to slip edit and apply the FX you need (as per above). Then learn how to save your FX chain so you can add it back to new projects quickly.
The reality is that, for VO work, you don't need to become a Cakewalk guru right away. I have over 25 years' experience in Cakewalk, but with VO work you are not usually going to need to deep-dive into its feature set.
  13. My 3900X would start to choke if I set my MOTU 896 Mk3 Hybrid to 64 samples and loaded up a large number of FX and virtual instruments. I had two choices - bump up the buffers and deal with higher latency, or disable or freeze a bunch of FX/instruments. Making absolutely no other changes and dropping in the 5900X, this issue was completely resolved and I can push the same system much, much harder than I ever could with the 3900X, with absolutely zero problems. The 3900X would always start crackling and popping well before the CPU was really loaded very much. With the 5900X I can literally load up hundreds of FX, get the CPU utilisation high, and not get even a hint of a crackle or pop, all at the lowest latency supported by the interface. It's night and day. I finally have the performance I have always hoped for. EDIT: Let me make it clear - I wasn't ever running into issues with not enough CPU horsepower; it's that my audio would start to crackle long before I got to the upper limits of CPU utilisation. The difference now is that I can keep on adding FX, pushing the CPU utilisation higher and higher, and I don't get the crackle at very low latency anymore. It always seemed weird to me that I was getting crackle with the CPU at only 20%. Now I'm actually getting all the performance I paid for - same motherboard, same RAM, same graphics card, same Windows installation - I only swapped out the CPU.
  14. I guess my answer to this is that AMD always had a disadvantage at low latency, meaning that against an Intel chip of otherwise much lower specification, the AMD would not perform as well. That's why I say that if you have a motherboard that will support it, the 5000 series is a seriously good upgrade. On the other hand, if you don't care about low-latency performance - and for sure, not everyone does - then I agree, you can probably stick with what you have. For me, I can now monitor at extremely low latency without ever having to bump up the latency on any project I am ever likely to create, regardless of track count or plug-in count - this is a huge advantage over my last AMD CPU and actually makes this a fantastic value proposition for me. I don't need outboard gear or the likes of a UAD accelerator card to get near-zero-latency performance with all the FX on all the time, as I can now do it 100% native with no downside at all. It really is a giant leap in performance over my last CPU. It's the best bang for buck I have had in many years. Anyone, like me, who already had a supporting motherboard, RAM, etc. only has to upgrade one component to get a serious uplift in DAW performance. That's why I added that qualifier. If you already have a good Intel-based system, there's not as big a driver to change anything.
  15. I don't think there are many X570/B550 boards out there with Thunderbolt built in. My MSI motherboard has the Thunderbolt header, which is even named in the manual. I erroneously assumed that this meant it would support the MSI Thunderbolt add-on card, which includes the cable that connects to this header. Unfortunately, it doesn't work, as it's not supported in the BIOS. I can't remember the models, but when I first looked, only a couple of the top-end AMD boards supported Thunderbolt. That said, there is so little Thunderbolt in the AMD ecosystem, good luck getting support if something doesn't work. If I was really after a Thunderbolt solution, I would still go Intel at this stage.
  16. Yes, I had run the LatencyMon tool on the 3900X. I never saw any issues with high DPC latency. I will run it on my 5900X, but to be honest, I don't expect to see much of a difference. I don't think the issue with the 3900X was due to deferred procedure calls, it was internal CPU latency which doesn't necessarily show up as DPC latency.
  17. V12 Plugins also working fine for me in the latest version of CbB. What problems are you having? Perhaps we can help. Also, I have actually found Waves tech support to be surprisingly responsive and helpful.
  18. RAM is 3200, CL16.
Motherboard is an MSI MAG X570 Tomahawk WiFi.
I have the MOTU 896 Mk3 Hybrid, so I can use either USB or FireWire - currently using FireWire via a Texas Instruments-based PCIe card.
Windows 10 release 20H2.
  19. I just replaced my Ryzen 3900X (12 core) with the new 5900X (12 core) and I simply can't believe the results in CbB. I have a MOTU 896 Mk3 and I always thought that this audio interface was the weak link in my chain when it came to running at ultra-low latency. Having swapped in the 5900X, I am running at 64 samples in the ASIO panel and I have a test project with about 28 tracks. I have loaded up hundreds (not kidding) of heavy plugins, including a truckload of reverbs, and I just can't get a single crackle or glitch. I never realised just how key the CPU was with regard to low-latency performance. My 3900X would be crackling, and/or the audio engine would have stalled out long ago, but this 5900X just won't quit. I finally had to give up adding plug-ins as my mouse finger was literally cramping up. Content creators may be getting 20% more out of the 5000 series compared to the 3000 series, but I am getting way, way more than that. Audio would break up on the 3000 series well before the load on the CPU was very high. The latency changes they made, which have apparently improved game performance (I wouldn't know, I'm not a gamer), seem to have really paid off for low-latency audio too. Jim Roseberry mentioned that AMD had finally solved the ultra-low-latency issue. I would take it a step further and say that they have smashed the problem out of the ballpark, and this current generation of 5000 series chips is now outperforming comparable Intel chips for DAW use. I read the thread "Potential CPU otimisation for Ryzen CPUs", but what I'm seeing is just so solid and the performance is so good, it's like, what's left to optimise? This is the best DAW upgrade I have made in years. I am so thrilled! If you have a previous-generation Ryzen set-up and a motherboard that will support the 5000 series, do yourself a favour and upgrade - the difference is just incredible. Bill.
  20. Honestly, it sounds like you have a bad HDD that is about to die. If I were you, I would urgently back up everything. In over 20 years of using Sonar/Cakewalk, I've never once had corrupted audio files. Correct me if I'm wrong, but you have not posted on this forum before and only joined it a few hours ago, so who are you referring to when you say "no one seems to be able to fix the issue"? I hope I'm not feeding the troll...
  21. Ha ha ha. Of course. Twelve Tone -> Roland -> Gibson -> BandLab. Senility must finally be setting in. Thanks for the correction.
  22. Making it open source gives years and years of innovation away for anyone to use. BandLab didn't get this for free - they paid Roland for it. And now it's a product that brings people into the BandLab ecosystem and helps spread the brand. If they open-sourced it, they would lose all that value. Next thing you'd have various forks of the product, some well maintained, some not. I support BandLab keeping it closed source. That said, if one day they ever shut up shop and couldn't find a buyer who would continue to develop it, I would hope they would consider making it open source at that point. As a user of this software for well over 20 years, I was happy to pay for it and would be happy to pay for it again, but I am enjoying having it for free and seeing it developed and maintained.
  23. I remember that several years ago, GPUs were going to be the next big thing in audio FX processing. There was a startup called BionicFX that was developing a software package, AVEX, that would render audio plugins on your GPU. This was back in the early-to-mid 2000s and, if I recall, AGP-based video cards were the new tech that would enable this. Anyway, it never did amount to anything and the company disappeared. As far as I know, nobody has attempted it since.
  24. Hey Ken, Thanks for patronising us. You are obviously way smarter than we are and can see where we have all gone wrong in our ignorance. I suggest you go find another DAW that you trust. Best of luck.
  25. That was going to be my suggestion too. Lots of good advice in this thread so far. Making room spectrally for each instrument (EQ, etc.) and also making room in the stereo image with panning can help. All the instruments in TTS-1 are stereo, which can lead people to leave each instrument panned dead centre, but don't be afraid to move them around. You generally want bass, kick and vocals panned centre, but separate other instruments in the stereo image. Just like Lynn says, I really started making strides in my mixes when I started leaving plenty of headroom. Aim for a much quieter mix so that you can sit the instruments you want to highlight above the rest of the mix. Leave getting the overall volume up to last - that's more of a mastering task. Also, don't be afraid to automate the mix to highlight the instruments you want to showcase in different parts of the song. Mixing audio is a little bit like mixing paint. If you try to use all the colours at once, you just end up with muddy brown.