Everything posted by bitflipper
-
There are many. Google "VST Multi-tap Delay". Most that explicitly call themselves "multi-tap" delays have a similar user interface that lets you pan, filter and effect each delay tap independently. I assume that's what you like about Relayer. Plug & Mix has one, but I've not used it. Tekturon from D16 is similar. A popular one that's been around a long time (modeled after a classic hardware unit) is the 608 from PSP. Ricochet by Audio Damage has a similar UI to Relayer. Those recommendations are based on the assumption that what you like about Relayer is the per-tap controls. But if you use Relayer as a rhythm generator, the king of that hill is Timeless 3 from FabFilter, though it's all about modulation rather than individual tap processing.
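For anyone curious what "per-tap controls" actually means under the hood: each tap is just a delayed copy of the signal with its own gain and pan (and usually its own filter). Here's a minimal Python sketch of the idea - not any particular plugin's algorithm, and the tap parameters are made up for illustration:

```python
import numpy as np

def multitap_delay(x, sr, taps):
    """Mix delayed copies of a mono signal x into a stereo output.
    taps: list of (delay_seconds, gain, pan) tuples, with pan in [-1, 1]."""
    max_delay = max(t[0] for t in taps)
    n = len(x) + int(sr * max_delay)
    out = np.zeros((n, 2))
    out[:len(x), 0] += x  # dry signal, left channel
    out[:len(x), 1] += x  # dry signal, right channel
    for delay, gain, pan in taps:
        d = int(sr * delay)
        left = gain * (1 - pan) / 2   # simple linear pan law
        right = gain * (1 + pan) / 2
        out[d:d + len(x), 0] += left * x
        out[d:d + len(x), 1] += right * x
    return out

# A single impulse through three taps panned left, center, right:
sr = 1000
x = np.zeros(10); x[0] = 1.0
y = multitap_delay(x, sr, [(0.002, 0.5, -1.0), (0.004, 0.5, 0.0), (0.006, 0.5, 1.0)])
```

Each tap shows up at its own sample offset and stereo position; a real plugin would add per-tap filtering and feedback on top of this.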
-
There are several folks who post to the Songs forum who know what they're doing. One that comes to mind is our own Lord Tim, an expert mixer of modern rock. I wouldn't hesitate to use his mixes as a reference. Another is Bob Oister. Another is batsbrew (band name Bat's Brew), who's all over the Songs forum. So check out the Songs forum and see what goodies are out there today. Some mainstream favorites whose records are always expertly mixed and mastered: Dream Theater, Devin Townsend, Steven Wilson. Steven Wilson's Hand.Cannot.Erase is a masterpiece. Even if it's not your preferred genre, you can always use it to calibrate your speakers; it's that technically adept.
-
A lot of folks will tell you to dump the SoundBlaster. In an ideal world, we're all working with RME, Lynx or Antelope interfaces. But in the real world, we have to make do with what we've got. And in truth you really can record decent vocals with a SoundBlaster as long as you're careful not to drive its microphone preamp into distortion. But I'll second the other advice to freeze/render your VIs first. That'll take a load off your CPU and allow you to use smaller buffers and thus reduce latency. Latency need not be an issue anyway. I record vocals with my buffers set to 2048. That's enormous latency. Doesn't impact my vocals or their alignment in the mix, though. The only catch is that I cannot echo the mic channel's input, e.g. to make use of a reverb plugin. But that's not a good idea anyway.
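For reference, buffer latency is simple arithmetic: buffer size divided by sample rate. That's why a 2048-sample buffer is "enormous" - at 44.1 KHz it's about 46 ms each way - and why freezing VIs to allow smaller buffers matters if you do want input monitoring:

```python
def buffer_latency_ms(buffer_samples, sample_rate):
    """One-way latency contributed by an audio buffer, in milliseconds."""
    return 1000.0 * buffer_samples / sample_rate

print(round(buffer_latency_ms(2048, 44100), 1))  # ~46.4 ms: fine when you aren't echoing the input
print(round(buffer_latency_ms(128, 44100), 1))   # ~2.9 ms: what you'd want for live input monitoring
```

Total round-trip latency is higher (input buffer plus output buffer plus converter delays), but this is the dominant term you control from the DAW.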
-
Mix Compensation for Hearing Impairment?
bitflipper replied to rontarrant's topic in Production Techniques
I don't think anyone hears identically in both ears. I certainly don't. My left ear is more sensitive to upper mids and high frequencies. I blame it on many years standing stage left with bands blasting into my right ear. Now, when I talk on the phone it's always held to my left ear. To make sure that's not coloring my mixes, I'll flip my headphones around and see if I still like the mix. (I don't mix on headphones, but headphones in a dark room are always my final QA test.)

If the frequency response is similar in both ears, the easiest solution is the pan control on the master bus (or headphone mix bus if you use one). That control is a simple balance control, like the one on your hi-fi labeled "Balance". It just controls the relative left and right volumes.

But if your "bad" ear doesn't register high frequencies as well as the other one, an equalizer can be set up based on the results of your hearing test. Hopefully, the test resulted in a detailed frequency response graph. If not, get a better hearing test from an audiologist who does hearing aids. They have to know the frequency response when prescribing hearing aids, which have built-in filters just for such compensation. The results of such a test should allow you to use an EQ plugin the same way an audiologist adjusts those filters.

I'd suggest setting up a headphone mix bus if you don't already have one and your audio interface features extra outputs or can route specified inputs to the headphones. This is a bus that doesn't go out to the main speakers and isn't involved during exports - it's just for monitoring via headphones. Having a separate bus for your headphone mix not only means you can compensate for hearing imbalances, but also for the frequency imbalances that are built in to the headphones themselves. Of course, you could also just insert the compensation on the master bus, but then you'd need to remember to bypass those plugins whenever you export.
-
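To make the EQ-compensation idea from the hearing-impairment post above concrete, here's a hedged sketch: take the relative dB losses from a hearing test and turn them into capped boosts for a headphone-bus EQ. The audiogram numbers below are invented for illustration, and this is the shape of the calculation, not an audiological prescription:

```python
# Hypothetical audiogram: frequency (Hz) -> extra hearing loss (dB) in the weaker ear,
# relative to the stronger ear. Real numbers should come from an audiologist's test.
loss_db = {250: 0, 500: 2, 1000: 3, 2000: 6, 4000: 12, 8000: 18}

def eq_compensation(loss_db, max_boost_db=9.0):
    """Map relative losses to per-band EQ boosts, capped so big losses
    aren't fully boosted (which would risk harsh, fatiguing monitoring)."""
    return {freq: min(db, max_boost_db) for freq, db in loss_db.items()}

print(eq_compensation(loss_db))  # the 12 and 18 dB losses come back capped at 9.0 dB
```

The cap is the important design choice: fully "correcting" a large loss tends to sound worse than splitting the difference.
-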
I haven't seen that dialog before. What is it? Looks like a parameter list. Is it specific to one Omnisphere patch, or do you see it regardless of which patch is loaded? Before re-installing Omnisphere, try running it inside a different VST host, such as the free SAVIHost (that's what Spectrasonics used to recommend before they offered their own standalone executable) to see if the problem is specific to Mainstage (it probably isn't).
-
In practice, it doesn't really matter. Really. I use Firewire here, have done so for many years and it works great. However, if I had to buy a new interface today I'd go with USB-3, just because support for it is built into Windows and every computer comes with at least one USB-3 port. If you ever want to run your DAW on a laptop as a portable rig for onsite live recording, every laptop has a USB port.
-
I've not tried it myself, but have been in the audience for a live demo of realtime drum replacement. In that demo, a drummer was sitting on a folding chair slapping different parts of his body and tapping his feet. No triggers were used, just microphones. The software being touted was Drumagog, but I'd assume any drum replacement software would work as long as the latency was low enough, including Cakewalk's own drum replacer. To get the latency down it might require a dedicated laptop; if there are enough separate mics on the kit, you could take a direct out from the board to drive the software, thus avoiding the need for a special setup. The drummer in my band has had triggers installed on his acoustic kit. They drive a dedicated sample module. Works great. It wasn't a cheap mod, though.
-
Have you tried freezing your VI tracks before exporting the whole project? Have you tried playing it back on a different player, e.g. WMP, VLC, Foobar2000, or WinAmp? Or on a different device altogether, e.g. your phone or portable music player? "...mix down is not giving me any audio once it is mixdown." Does this mean when you do the mix in SONAR there is no output? Sheesh, you've got quite the laundry list of bizarre symptoms. I can't think of any one explanation for all of them.
-
Are the artifacts still present if you export using a slow bounce? Is the project all audio or are there virtual instruments? If VIs are being used, which ones? Are there any unrendered Melodyne clips in the project? What file format are you exporting to, e.g. MP3, WAV, FLAC? Does it happen with alternative formats? Are you sure the artifacts are in the exported file, and not happening during playback? How about posting the file so we can check if we hear them too. Sorry for so many questions, but this is an unusual set of symptoms I've never encountered before.
-
And that's the strategy in a nutshell: buy high-quality, full-featured plugins and take the time to learn them inside and out. There have been very few truly new features added over the past ten years, with most new product development focused on making them easier and/or faster to use, and often cheaper. But not necessarily better. Same goes for hardware (e.g. the most desirable microphones were either built more than 50 years ago or are based on designs from more than 50 years ago).
-
I haven't had a scan problem in a very long time, but they are almost inevitable if you are a plugin collector. That's why I no longer see those problems: I am no longer a collector.
-
Unable to export project to desktop...digital hash!!
bitflipper replied to Steve Patrick's topic in Cakewalk by BandLab
I'd be inclined to suspect the fault lies with a plugin. At least, that's been the case every time I've ever solved a bounce/export problem by switching to/from slow bounce. For me it's always turned out to be a virtual instrument, although I can imagine scenarios wherein any processor could end up with corrupt buffers (e.g. things with big buffers, such as reverbs and linear-phase equalizers).
-
That's exactly why Noel added the "sandbox" option. When the scanner opens a plugin, it's running code that Cakewalk didn't write and that the scanner has no control over. If something goes wrong and the plugin hangs, the scanner hangs too, along with the main Cakewalk application. The sandbox option spawns a new process for each plugin test, ensuring the whole scan won't blow up if a single plugin hangs.

Make sure Cakewalk is up to date. In an effort to make the software more robust, they made it pickier about what errors to report - too picky, in fact, resulting in plugins failing that hadn't failed before. To combat that, they've since dialed back the sensitivity to make it a little less nit-picky, and lots of problems went away for users with the recent update because of it.

If you're up to date and still having issues, follow scook's advice above. Rename the offending plugin (Z3ta+.dll) with a temporary new suffix, e.g. Z3ta+.xxx. That'll prevent the scanner from opening it. Let the scan run to completion, then change the name back and try again. Sometimes that works.

If it fails, enable the scan log. This will create a text file that reports what the scanner saw happening, and sometimes a clue can be found there. The log file will be %appdata%\cakewalk\logs\vstscan.log and you can just open it with Notepad. Note that each time you scan, the information is appended to the log, so you may need to scroll down to see the relevant entries - or delete any existing vstscan.log before starting.

There can be a lot of reasons for a VST scan to fail. As rbh noted above, sometimes it's not hung at all but just waiting on a dialog box you can't see. Whenever I see a scan hang, I press Alt-Tab to see if that's the case. Other times it's due to a missing dependency, in other words some file that Z3ta+.dll needs to reference but isn't there. Sometimes it's a registry key that's missing or inaccessible due to Windows permissions. These last two, however, are usually associated with new installs only.
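If Notepad feels clumsy for a log that keeps growing, the tail of the scan log can be printed with a few lines of Python. This is just a convenience sketch using the path from the post (so Windows only, though the helper itself is generic):

```python
import os

# Path given in the post: %appdata%\cakewalk\logs\vstscan.log (Windows only)
log_path = os.path.join(os.environ.get("APPDATA", ""), "Cakewalk", "Logs", "vstscan.log")

def tail(path, n=20):
    """Return the last n lines of a log file - each new scan is appended at the end,
    so the most recent scan's entries are at the bottom."""
    with open(path, encoding="utf-8", errors="replace") as f:
        return [line.rstrip("\n") for line in f][-n:]

if os.path.exists(log_path):
    print("\n".join(tail(log_path)))
else:
    print("No scan log found - enable the scan log in Cakewalk first.")
```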
-
Serious suggestion: take one of bat's tunes that is more or less complete but with some space left in it, send it over to Wookie and let him use his own imagination and synth knowhow to add some synth parts. See what happens. It'd be an interesting departure for both parties, opening some creative doors and maybe, just maybe, create something truly remarkable. Or not. Doesn't matter, I'd still like to hear it.
-
First time I saw an Oberheim was c. 1984, at a studio where I'd been invited to lay down a string part. The engineer suggested I do it on the Oberheim 4-voice instead of my Elka String Synthesizer. I was blown away, and literally dreamed about that synth for many months after. But it cost more than a new car back then, so I never realized my dream of owning one. I did buy a single expander module (which was still ~$700) and slaved it to my Micromoog. I'd never have dreamt that someday regular folks like me and Wookie could own a digital equivalent for $29. Gotta say, you do it justice, my friend.
-
Bat, you could just hit Rec and start noodling and I'd listen to it. I've been a Superior Drummer user since forever, even before that's what it was called. Started with the Drumkit from Hell, remember that one? Yeh, it sucked, but those folks have been steadily perfecting the tech over the years. To this day SD3 never ceases to impress me with its versatility, whether using a brush kit on smooth jazz or smackin' it for some Batsbrew-style hard rock.
-
None of these responses have addressed the OP's question. Sure, there are better-sounding sample-based synths out there, but the TTS-1 remains viable. I use it regularly, mostly for percussion but sometimes as a stand-in for a piano or string part that'll later be assigned to a high-end sample library. I do this because the TTS-1 is so efficient that I can stack as many instances as I want while composing/arranging. As to why the synth can't be heard while recording, that's probably just the Echo button. Click it and see.
-
The .big file is likely an archive that contains multiple files, similar to a .zip file. The plugin's installer should have extracted the files from it. Maybe it did, but put them in an unexpected place. Or maybe the installer failed due to some problem, e.g. Windows permission issue or missing dependency. I'd start with a global search for the dll to make sure it's not been just dropped somewhere unexpected.
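If Explorer's search is slow, that global search can be scripted. A sketch - the dll name in the example is a placeholder, substitute the actual plugin's filename:

```python
import os

def find_files(root, name_suffix):
    """Walk a directory tree and collect files whose names end with the given
    suffix (case-insensitive). Slow on a whole drive, but thorough."""
    hits = []
    for dirpath, _dirs, filenames in os.walk(root):
        for filename in filenames:
            if filename.lower().endswith(name_suffix.lower()):
                hits.append(os.path.join(dirpath, filename))
    return hits

# e.g. find_files("C:\\Program Files", "someplugin.dll") - "someplugin" is a placeholder
```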
-
ol' pal, have you verified that it's just the one project that won't export? Maybe pull up an old one of similar size and complexity and see if it exhibits the same problem.
-
Sounds like CPU shortage, but it’s not.
bitflipper replied to Mark Bastable's topic in Cakewalk by BandLab
It's the never-ending questions that make this a never-ending joy of a hobby. With many pastimes you reach a point where you feel like you pretty much know everything you need to know. That won't happen in this space. At least, that's been my experience. After recording music for half a century I still have questions. I love that.
-
OK, good, you didn't ask for the short version. There isn't one. But I'll do my best to give you the medium version.

When a piece of hardware such as a disk controller or network adapter wants the CPU's attention, it's usually critical that it gets heard right away, before the data it wants to share is gone. To deal with such unscheduled demands, a mechanism called an "interrupt" forces the CPU to stop what it's doing and take note. In other words, the device interrupts the normal chain of events like a crying child tugging at Mom's pant leg.

Because interrupts can be, um, disruptive to the overall performance of a computer, the system needs to quickly acknowledge the request and get back to business, deferring whatever the controller wants to do until an appropriate time. That's the "D" part of DPC: "deferred". The interrupt handler is a short piece of software that does only the absolute minimum it needs to do so that the CPU can get back to its regularly scheduled program. But the controller is now happy because it knows it's been acknowledged and will be serviced in short order. (Some device drivers are notorious for not adhering to this rule, in particular gaming video adapters, which is why a dedicated DAW build probably won't have a fancy high-end gaming card.)

So when the CPU finally gets around to handling the controller's request, it runs a more involved piece of code called a Deferred Procedure Call. You can look at DPC activity using a tool such as LatencyMon, and you'll be shocked at how much time the CPU spends dealing with them. If the DPCs are well-written and efficient, they won't interfere with your most important task, which of course is audio. Problem is, systems make assumptions about what's important to you, and they may not always be right. A good example is network traffic, especially wireless networks. Your network card assumes nothing is more important than network stuff. But of course, if you're trying to record or process audio, that's an incorrect assumption.

If a network driver hogs the CPU for too long, your audio interface may not get its own interrupts serviced fast enough. When that happens, the interface's data buffers get starved, it cannot guarantee continuous audio, and you get dropouts. Brief dropouts sound like clicks or pops; longer dropouts can make Cakewalk give up altogether and stop the audio engine - leaving you fuming at Cakewalk even though none of this is your DAW's fault.

Since you and I have almost no control over how interrupts and deferred procedure calls are handled, the best we can do is avoid or disable the devices that are the most egregious perpetrators of CPU cycle theft. Top of that list is wi-fi adapters, but they're not the only offenders. Download the tool I linked above and give it a go. It will give you way more information than any musician could reasonably be expected to process, but there is good advice on the Resplendence website to help you sort it all out. Bottom line: identify which process is monopolizing the CPU with inefficient DPCs and kill it.
-
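The dropout arithmetic behind that DPC explanation is worth seeing once: an audio buffer must be refilled before it drains, so the deadline is just buffer size over sample rate, and any driver stall longer than that leaves the interface running dry. A back-of-envelope sketch - the 5 ms stall figure is illustrative, not measured:

```python
def dropout_risk(buffer_samples, sample_rate, worst_stall_ms):
    """True if a driver stall of worst_stall_ms can outlast the buffer's
    refill deadline, i.e. the interface runs out of audio data."""
    deadline_ms = 1000.0 * buffer_samples / sample_rate
    return worst_stall_ms >= deadline_ms

# A low-latency 128-sample buffer at 48 KHz must be refilled every ~2.7 ms...
print(dropout_risk(128, 48000, worst_stall_ms=5.0))   # True: a 5 ms DPC stall causes a dropout
print(dropout_risk(1024, 48000, worst_stall_ms=5.0))  # False: a ~21 ms deadline rides out the stall
```

Which is also why raising the buffer size is the standard workaround when you can't eliminate the offending driver.
-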
An excellent choice for the money. Better - and better supported - than your Alesis, and it has a native ASIO driver. That said, your interface probably isn't the problem. There are many reasons for glitchy audio. On laptops, the #1 cause is the high DPC overhead of wi-fi adapters, so make sure you disable wi-fi during Cakewalk sessions.
-
Make sure you are exporting from PT in RIFF WAV format, which is understood by all DAWs.
-
I'm wondering about the validity of performing such a test with a YouTube video, given that we know YT never presents audio without modification. Using an oscillator plugin such as MOscillator (one of the freebies in the Melda free pack) might be more trustworthy.

I think we obsess over hearing range because a) it's easily measurable, and b) it's something we all fear losing with age. The question that never gets asked: how much high end do you need to hear in order to create a nice-sounding mix? If your kneejerk response is "as close to 20 KHz as possible", that doesn't reflect reality. If you listen to MP3s, the upper end has been lopped off (IIRC, at 18 KHz). Same for other lossy compression algorithms. If you listen to FM radio, it's limited to 15 KHz. Your guitar amp likely tops out at around 12 KHz. Hammond organs often occupy the upper end of a mix, but a classic Leslie horn only goes up to about 10 KHz. Truth is, we listen to band-limited audio all the time and rarely notice.

The real question is where musical frequencies live. Quick, what's the fundamental frequency of the highest note on a piano? How about a piccolo? Think either one goes above 12 KHz? Think again. Sure, there are overtones - multiples of the fundamental - that can run up into the hearing range of bats, but humans can't hear them. And can you tell when they're not there? Try this experiment: play the highest note on the highest-pitched virtual instrument you have on hand. A pipe organ, for example; nothing goes higher than a pipe organ, AFAIK. Insert SPAN and note where the fundamental frequency is and where its harmonics lie. Now add a low-pass filter and start cutting those harmonics until you can distinguish a tonal difference. Then find some six-year-olds and repeat the experiment with them.

When mixing, by far the most important frequencies are the ones everyone can hear with ease, between ~1 KHz and ~5 KHz. This is why band-limited speakers have long been used by mix engineers: if it sounds good on speakers that don't go much above 8-10 KHz or below 100 Hz, it'll sound good on a full-range system.
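The "where musical frequencies live" claim is easy to check with the standard equal-temperament formula. The piano's highest note, C8, is MIDI note 108:

```python
def note_freq(midi_note, a4=440.0):
    """Equal-tempered pitch: f = 440 * 2**((n - 69) / 12), with A4 = MIDI note 69."""
    return a4 * 2 ** ((midi_note - 69) / 12)

print(round(note_freq(108)))      # C8, the piano's top note: ~4186 Hz
print(round(note_freq(108) * 3))  # even its 3rd harmonic only reaches ~12.6 KHz
```

So the fundamental of the highest piano note sits barely above 4 KHz - squarely in the range almost everyone can still hear.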
-
Exactly. "Professional" does not directly imply that you are necessarily good at something, just because someone was willing to pay you to do it. I have been a professional musician, a professional hardware engineer, a professional software engineer, and a professional teacher. In each of those roles, I knew plenty of associates - also professionals - who were flat-out sh*t at their jobs. There were others that I knelt at their feet beseeching them to bestow upon me even a little of their vast knowledge and experience. On the other hand, there's such a thing as the passionate amateur, someone compelled to learn everything they can about the subject of their passion. Such people aren't constrained by the narrow needs of a specific job assignment, but are free to branch out into any related field that piques their interest. I am happy to call myself an amateur mixer, and would be considerably less happy if I had to do it as a (shudder) job.