Everything posted by bitflipper

  1. Exactly. Interfaces deliver 24 bits, and DAWs subsequently turn that into 32 bits for one reason: it allows us more freedom to mangle and twist audio while burying our folly safely below the noise floor. As you say, we are never entirely freed from the limitations of the analog world. Not if we use microphones, anyway. We are constrained by the practical dynamic range of transducers. Too hot and they clip or pick up unwanted ambience, too quiet and they're noisy. We are also constrained by the acoustics of the physical spaces we record in and the inevitable intrusions from the world around us. And, of course, there's that biggest constraint of all, the limits of our own skills and talents. That's why I don't worry too much about the digital process itself. Of all the villains in our play, it's a minor bit player.
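     If it helps put numbers on that, here's a back-of-the-envelope sketch (assuming the standard 6.02 x N + 1.76 dB figure for an ideal N-bit fixed-point converter; real converters fall well short of their nominal bit depth):

         # Theoretical dynamic range per bit depth; each bit buys about 6.02 dB.
         for bits in (16, 20, 24):
             print(f"{bits}-bit: ~{6.02 * bits + 1.76:.0f} dB")
         # 32-bit float is a different animal: a 24-bit mantissa plus an exponent,
         # so the extra bits buy internal processing headroom, not finer resolution.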
  2. Switch to WASAPI. If the integrated interface still doesn't show up, go into Windows Device Manager and make sure it hasn't been disabled. Byron, nobody's going to disagree with you about the superiority of Focusrite over RealTek. However, the OP explicitly said he wants a simplified setup for practicing. Not recording, not mixing or mastering. For that, the built-in interface should do just fine. I do the same thing when I go on holiday and only want to be burdened by my laptop and nothing else.
  3. I would never yell at you, Geoff. But you have just CONFUSED THE SH*T OUT OF POOR HARLEY FOR NO GOOD REASON. Of course, I'm talking about your enigmatic "Happy New Year" signoff. Many of us are still trying to get back in sync with the calendar as we crawl out from under the pandemic. You're not helping. Harley, with apologies to my esteemed colleague my vote goes to FLAC. Unless you bought the infinite RAM option with your Soundcraft, what you don't want is to run out of memory while recording. FLAC lets you use half as much. If Geoff's right and WAV does give you one bit's worth of additional resolution, it still doesn't matter. Even a high-end ADC is really only accurate to 20 bits.
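     For a rough sense of the storage involved (assuming 24-bit/48 kHz stereo; adjust the numbers to your actual settings):

         # WAV is uncompressed PCM: sample rate x 3 bytes per sample x 2 channels.
         bytes_per_second = 48_000 * 3 * 2
         print(bytes_per_second * 3600 / 1e9)   # ~1.0 GB per hour of recording
         # FLAC is lossless but typically lands around 50-60% of that size,
         # which is where the "half as much" figure comes from.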
  4. This seems to be a common issue, judging by how often topics like this come up in the forum. We need to remember that tracks can routinely switch from stereo to mono and back as they wind their way through the signal chain. Some plugins are inherently stereo and will produce a stereo output regardless of what's going into them (e.g. delays and reverbs), while others are inherently mono and always produce a mono output (e.g. most amp sims). Some plugin vendors offer mono-specific versions of their plugins, while the trend in recent years has been toward smart plugins that are able to preserve mono or stereo paths internally. To further complicate matters, there's the choice of track interleave setting. You may have to set the track interleave to stereo even if it's a mono recording, if you're putting, say, a chorus effect on the track. Flipping the track interleave can cause unexpected behavior in some plugins. And don't forget that no matter what processing happens in the track it's all going to wind up stereo in the end if it's going to be routed to a stereo bus. It's a minefield. Cakewalk does a great job of figuring out when to switch between mono and stereo internally, but ultimately it's up to us users to keep track of what's going on under the hood, even if we can't always see what's happening there. Guitar tracks (well, most tracks in general) should usually be recorded in mono (verified visually by the waveform) but it's a good idea to assume they'll end up stereo somewhere along the way. This, btw, isn't necessarily a problem. "I'm still not sure what's the difference between a bass in mono or in stereo panned center, to me it sounds exactly the same, so I don't understand when people say the bass and kick should be mono, but isn't stereo panned center the freaking same?" Yes, Marcello, you are correct. However, there are potential complications that can be avoided if you stick with mono tracks most of the time, especially for those tracks that aren't panned center. Plus the miser in me can't help but see stereo bass, kick, guitar or vocal tracks as a needless waste of disk space.
  5. I totally agree with you; software should never pretend to be smarter than you. Even if sometimes it is, it's best not to assume that by default. However, in this case Melodyne has actually taken a logical path. Melodyne's main purpose is to accurately detect pitch. It's really, really good at that. But some bits are atonal and have no pitch. The sounds of hard consonants, "S"s and "SH"s, vocal fries, breath noises and so on. Prior to version 5, these components were simply ignored. You can, of course, still ignore them if you like. But now Melodyne identifies them so you can adjust their levels. That's a good thing. If Melodyne didn't offer that, then what could it possibly let you do with non-pitched components? Nothing. Now you can do something with them. That's not arrogance, it's perfectly reasonable. Melodyne is not a de-esser, it's an editor. You shouldn't allow it to make sibilance corrections on its own, any more than you'd allow it to make pitch corrections on its own - even though that's what it does best!
  6. AFAIK sibilance detection is automatic. You can redefine the sibilant region within any given blob, but that's it. There are no adjustable parameters that I can see. That's because Melodyne treats all unpitched material as "sibilance", including breath noises, fricatives and such. Basically, if it can't find the pitch then it's classified as "sibilance". This is actually a good thing, because if you tried to edit the pitch on such bits it just wouldn't work and you'd probably just get chipmunks. Make sure you're not using any effects on the vocal track you're editing, as they can confuse the algorithms. Always edit completely dry and add fx later.
  7. Download the demo for Timeless3 and run through some of its presets. Yeh, it's pricey and it's complicated, but version 3 is a big improvement over previous iterations in terms of ease of use. If you want to take a deep dive into what delays can do, no other plugin is as versatile as Timeless3. OTOH, Ricochet is way less expensive, fairly intuitive and a true multi-tap delay with up to 16 steps.
  8. There are many. Google "VST Multi-tap Delay". Most that explicitly call themselves "multi-tap" delays will have a similar user interface that lets you pan, filter and effect each tap independently. I assume that's what you like about Relayer. Plug & Mix has one, but I've not used it. Tekturon from D16 is similar. A popular one that's been around a long time (modeled after a classic hardware unit) is the 608 from PSP. Ricochet by Audio Damage has a similar UI to Relayer. Those recommendations are based on the assumption that what you like about Relayer is the per-tap controls. If, however, you use Relayer as a rhythm generator, the king of that hill is Timeless3 from FabFilter, though that one is all about modulation rather than individual tap processing.
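     In case the concept is new to anyone following along, here's a minimal conceptual sketch of what a multi-tap delay does: each tap gets its own delay time, level and pan. This is not any particular plugin's algorithm, and the function name and tap values are made up for illustration:

         import numpy as np

         def multitap_delay(mono, sr, taps):
             """mono: 1-D float array; taps: list of (delay_seconds > 0, gain, pan 0..1)."""
             out = np.zeros((len(mono), 2))
             out[:, 0] = out[:, 1] = mono * 0.5              # dry signal, centered
             for delay_s, gain, pan in taps:
                 n = int(delay_s * sr)
                 delayed = np.zeros_like(mono)
                 delayed[n:] = mono[: len(mono) - n] * gain  # shifted, attenuated copy
                 out[:, 0] += delayed * (1.0 - pan)          # left
                 out[:, 1] += delayed * pan                  # right
             return out

         # e.g. four taps fanned across the stereo field:
         # wet = multitap_delay(dry, 44100, [(0.125, 0.8, 0.2), (0.250, 0.6, 0.8),
         #                                   (0.375, 0.45, 0.35), (0.500, 0.3, 0.65)])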
  9. There are several folks who post to the Songs forum who know what they're doing. One that comes to mind is our own Lord Tim, an expert mixer of modern rock. I wouldn't hesitate to use his mixes as a reference. Another is Bob Oister. Another is batsbrew (band name Bat's Brew), who's all over the Songs forum. So check out the Songs forum and see what goodies are out there today. Some of my mainstream favorites for records that are always expertly mixed and mastered: Dream Theater, Devin Townsend, Steven Wilson. Steven Wilson's Hand.Cannot.Erase is a masterpiece. Even if it's not your preferred genre you can always use it to calibrate your speakers, it's that technically adept.
  10. A lot of folks will tell you to dump the SoundBlaster. In an ideal world, we're all working with RME, Lynx or Antelope interfaces. But in the real world, we have to make do with what we've got. And in truth you really can record decent vocals with a SoundBlaster as long as you're careful not to drive its microphone preamp into distortion. But I'll second the other advice to freeze/render your VIs first. That'll take a load off your CPU and allow you to use smaller buffers and thus reduce latency. Latency need not be an issue anyway. I record vocals with my buffers set to 2048. That's enormous latency. Doesn't impact my vocals or their alignment in the mix, though. The only catch is that I cannot echo the mic channel's input, e.g. to make use of a reverb plugin. But that's not a good idea anyway.
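     For anyone wondering how big "enormous" actually is, the math is simple (assuming a 44.1 kHz project; scale accordingly for other sample rates):

         buffer_samples, sample_rate = 2048, 44_100
         print(f"{1000 * buffer_samples / sample_rate:.1f} ms")   # ~46 ms one way; round trip is roughly double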
  11. I don't think anyone hears identically in both ears. I certainly don't. My left ear is more sensitive to upper mids and high frequencies. I blame it on many years standing stage left with bands blasting into my right ear. Now, when I talk on the phone it's always held to my left ear. To make sure that's not coloring my mixes, I'll flip my headphones around and see if I still like the mix. (I don't mix on headphones, but headphones in a dark room are always my final QA test.) If the frequency response is similar in both ears, the easiest solution is the pan control on the master bus (or headphone mix bus if you use one). That control is a simple balance control like the one on your hi-fi labeled "Balance". It just controls the relative left and right volumes. But if your "bad" ear doesn't register high frequencies as well as the other one, an equalizer can be set up based on the results of your hearing test. Hopefully, the test resulted in a detailed frequency response graph. If not, get a better hearing test from an audiologist who does hearing aids. They have to know the frequency response when prescribing hearing aids, which have built-in filters just for such compensation. The results of such a test should allow you to use an EQ plugin the same way an audiologist adjusts those filters. I'd suggest setting up a headphone mix bus if you don't already have one, assuming your audio interface has extra outputs or can route a specific output pair to the headphones. This is a bus that doesn't go out to the main speakers and isn't involved during exports - it's just for monitoring via headphones. Having a separate bus for your headphone mix not only means you can compensate for hearing imbalances, but also for the frequency imbalances that are built into the headphones themselves. Of course, you could also just insert the compensation on the master bus, but then you'd need to remember to bypass those plugins whenever you export.
  12. I haven't seen that dialog before. What is it? Looks like a parameter list. Is it specific to one Omnisphere patch, or do you see it regardless of which patch is loaded? Before re-installing Omnisphere, try running it inside a different VST host, such as the free SAVIHost (that's what Spectrasonics used to recommend before they offered their own standalone executable) to see if the problem is specific to Mainstage (it probably isn't).
  13. In practice, it doesn't really matter. Really. I use Firewire here, have done so for many years and it works great. However, if I had to buy a new interface today I'd go with USB-3, just because support for it is built into Windows and every computer comes with at least one USB-3 port. If you ever want to run your DAW on a laptop as a portable rig for onsite live recording, every laptop has a USB port.
  14. I've not tried it myself, but have been in the audience for a live demo of realtime drum replacement. In that demo, a drummer was sitting on a folding chair slapping different parts of his body and tapping his feet. No triggers were used, just microphones. The software being touted was Drumagog, but I'd assume any drum replacement software would work as long as the latency was low enough, including Cakewalk's own drum replacer. To get the latency down it might require a dedicated laptop; if there are enough separate mics on the kit, you could take a direct out from the board to drive the software, thus avoiding the need for a special setup. The drummer in my band has had triggers installed on his acoustic kit. They drive a dedicated sample module. Works great. It wasn't a cheap mod, though.
  15. Have you tried freezing your VI tracks before exporting the whole project? Have you tried playing it back on a different player, e.g. WMP, VLC, Foobar2000, or WinAmp? Or on a different device altogether, e.g. your phone or portable music player? "...mix down is not giving me any audio once it is mixdown." Does this mean when you do the mix in SONAR there is no output? Sheesh, you've got quite the laundry list of bizarre symptoms. I can't think of any one explanation for all of them.
  16. Are the artifacts still present if you export using a slow bounce? Is the project all audio or are there virtual instruments? If VIs are being used, which ones? Are there any unrendered Melodyne clips in the project? What file format are you exporting to, e.g. MP3, WAV, FLAC? Does it happen with alternative formats? Are you sure the artifacts are in the exported file, and not happening during playback? How about posting the file so we can check if we hear them too. Sorry for so many questions, but this is an unusual set of symptoms I've never encountered before.
  17. And that's the strategy in a nutshell: buy high-quality, full-featured plugins and take the time to learn them inside and out. There have been very few truly new features added over the past ten years, with most new product development focused on making them easier and/or faster to use, and often cheaper. But not necessarily better. Same goes for hardware (e.g. the most desirable microphones were either built more than 50 years ago or are based on designs from more than 50 years ago).
  18. I haven't had a scan problem in a very long time, but they are almost inevitable if you are a plugin collector. That's why I no longer see those problems: I am no longer a collector.
  19. I'd be inclined to suspect the fault lies with a plugin. At least, that's been the case every time I've ever solved a bounce/export problem by switching to/from slow bounce. For me it's always turned out to be a virtual instrument, although I can imagine scenarios wherein any processor could end up with corrupt buffers (e.g. things with big buffers such as reverbs and linear-phase equalizers).
  20. That's exactly why Noel added the "sandbox" option. When the scanner opens a plugin, it's running code within the plugin that Cakewalk didn't write and that the scanner has no control over. If something goes wrong and the plugin hangs, the scanner hangs too, as well as the main Cakewalk application. The sandbox option spawns a new process for each plugin test, ensuring that the whole scan won't blow up if a single plugin hangs. Make sure Cakewalk is up to date. In an effort to make the software more robust, they made it pickier about what errors to report. Too picky, in fact, resulting in plugins failing that hadn't failed before. To combat that, they've dialed back the sensitivity to make it a little less nit-picky. Lots of problems went away for users with the recent update because of that. If you're up to date and still having issues, follow scook's advice above. Rename the offending plugin (Z3ta+.dll) with a temporary new suffix, e.g. Z3ta+.xxx. That'll prevent the scanner from opening it. Let the scan run to completion, then change the name back and try again. Sometimes that works. If it fails, enable the scan log. This will create a text file that reports what the scanner saw happening. Sometimes, a clue can be found there. The log file will be %appdata%\cakewalk\logs\vstscan.log and you can just open it with Notepad. Note that each time you do this, the information is appended to the log, so you may need to scroll down to see the relevant entries. Or delete any existing vstscan.log before starting. There can be a lot of reasons for a VST scan to fail. As rbh noted above, sometimes it's not hung at all but just waiting on a dialog box you can't see. Whenever I see a scan hang, I press Alt-Tab to see if that's the case. Other times it's due to a missing dependency, in other words some file that Z3ta+.dll needs to reference but isn't there. Sometimes it's a registry key that's missing or inaccessible due to Windows permissions. These, however, are usually associated with new installs only.
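     If you'd rather skim the log programmatically than scroll through Notepad, something like this works too (purely a convenience sketch; the path is the same one given above):

         import os
         log = os.path.expandvars(r"%APPDATA%\Cakewalk\Logs\vstscan.log")
         with open(log, errors="ignore") as f:
             lines = f.readlines()
         print("".join(lines[-40:]))   # entries are appended, so the newest are at the end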
  21. Serious suggestion: take one of bat's tunes that is more or less complete but with some space left in it, send it over to Wookie and let him use his own imagination and synth knowhow to add some synth parts. See what happens. It'd be an interesting departure for both parties, opening some creative doors and maybe, just maybe, creating something truly remarkable. Or not. Doesn't matter, I'd still like to hear it.
  22. First time I saw an Oberheim was c. 1984, at a studio where I'd been invited to lay down a string part. The engineer suggested I do it on the Oberheim 4-voice instead of my Elka String Synthesizer. I was blown away, and literally dreamed about that synth for many months after. But it cost more than a new car back then, so I never realized my dream of owning one. I did buy a single expander module (which was still ~$700) and slaved it to my Micromoog. I'd never have dreamt that someday regular folks like me and Wookie could own a digital equivalent for $29. Gotta say, you do it justice, my friend.
  23. Bat, you could just hit Rec and start noodling and I'd listen to it. I've been a Superior Drummer user since forever, even before that's what it was called. Started with the Drumkit from Hell, remember that one? Yeh, it sucked, but those folks have been steadily perfecting the tech over the years. To this day SD3 never ceases to impress me with its versatility, whether using a brush kit on smooth jazz or smackin' it for some Batsbrew-style hard rock.
  24. None of these responses have addressed the OP's question. Sure, there are better-sounding sample-based synths out there, but the TTS-1 remains viable. I use it regularly, mostly for percussion but sometimes as a stand-in for a piano or string part that'll later be assigned to a high-end sample library. I do this because the TTS-1 is so efficient that I can stack as many instances as I want while composing/arranging. As to why the synth can't be heard while recording, that's probably just the Echo button. Click it and see.
  25. The .big file is likely an archive that contains multiple files, similar to a .zip file. The plugin's installer should have extracted the files from it. Maybe it did, but put them in an unexpected place. Or maybe the installer failed due to some problem, e.g. Windows permission issue or missing dependency. I'd start with a global search for the dll to make sure it's not been just dropped somewhere unexpected.
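     Here's one way to do that global search, if the built-in Windows search doesn't cooperate (purely illustrative; "myplugin" is a placeholder for the actual plugin name, and the search roots are just common install locations):

         import os
         roots = [r"C:\Program Files", r"C:\Program Files (x86)", os.path.expandvars(r"%LOCALAPPDATA%")]
         for root in roots:
             for dirpath, _, files in os.walk(root):
                 for name in files:
                     if "myplugin" in name.lower() and name.lower().endswith(".dll"):
                         print(os.path.join(dirpath, name))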