Everything posted by bitflipper

  1. Yes, it is. They take up less disk space, use less memory, load waaay faster, and sometimes are capable of things their sampled equivalents are not. The problem is that modelled instruments just haven't achieved the same level of realism yet. While lots of users say they can't distinguish between Pianoteq and a sampled piano, anyone who plays piano as their primary instrument can easily tell the difference - but only when soloed or way up front in a mix. So for most people and most applications, the modelled alternative is workable. Personally, I'll stick with Keyscape and endure its excruciating load times because it sounds incredible. It's slow because they didn't use any of the usual tricks for minimizing memory usage, e.g. every note is sampled, so no stretching. I have the full suite of modelled instruments from Audio Modeling. They load in the blink of an eye, take up a tiny fraction of the memory and disk space that a Kontakt library would, are expressive and can sound quite good in the right context. But naked, they often sound a bit "synthy". I still like them because they can do things a sampled instrument cannot, such as programmable glissando and vibrato speeds. But as Jim notes above, the speed of the drive itself is consistent, regardless of the library. Perceived slowness is a function of how much data is being loaded into memory on startup. That's why I suggested that phrase libraries might be inherently slower to load because the size of their individual samples is larger, or maybe it always loads the complete sample set. I cannot test this hypothesis myself, as I have no phrase libraries here to look at.
  2. Have you done a Windows restart? Check Task Manager and see if there is an instance of cakewalk.exe running. If so, use End Task to kill it.
  3. Just guessing, but it could be that phrase libraries load bigger chunks into memory than single-note samples. By default, Kontakt preloads only the start of each sample so it's ready to go when called upon. If the note played is short, no further disk access will be required. If you have lots of RAM, you can increase the preload cache, resulting in better performance during playback at the cost of longer load times. With phrase libraries, you're less likely to only play the first 100 milliseconds of a sample and more likely to play the whole phrase. [AFTERTHOUGHT] Take a look at the preload buffer size for those phrase libraries and compare them to conventional libraries. The library author sets the default preload cache size, which can then be overridden by the user. I'd be curious to know if the default cache is larger for phrase libraries (I don't have any here to look at).
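To put rough numbers on the preload idea above, here's a back-of-the-envelope sketch. The sample counts and preload sizes are purely hypothetical illustrations, not Kontakt's actual defaults:

```python
def preload_memory_mb(num_samples, preload_kb):
    """Rough preload cost: the first chunk of every sample stays resident in RAM."""
    return num_samples * preload_kb / 1024

# Hypothetical single-note library: thousands of small samples, small preload chunk.
single_note = preload_memory_mb(num_samples=8000, preload_kb=60)

# Hypothetical phrase library: fewer samples, but a much larger chunk per phrase.
phrase = preload_memory_mb(num_samples=1200, preload_kb=1024)

print(f"single-note: ~{single_note:.0f} MB, phrase: ~{phrase:.0f} MB")
```

Even with far fewer samples, the phrase library's per-sample preload dominates, which would explain longer load times.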
  4. When Euripides famously advised us to "question everything", he was indulging in a bit of hyperbole. As you're discovering, literally questioning everything leads you down a rabbit hole of indecision, frustration and paralysis. Sometimes, you're better off simply accepting that "this works, but that doesn't" and carrying on. That said, I completely understand one's desire to understand how stuff works, and acknowledge that such understanding often leads to better practices. Do you need to know how an automobile works in order to drive one? No. But is it helpful to understand its basic principles? Of course. The mechanically knowledgeable driver will get better fuel efficiency, longer-lasting brake pads and longer engine life, and will get into fewer accidents. Not because he can rebuild an engine in the back yard, but because his daily driving habits are informed by knowledge of what actions are harmful to the car, why preventative measures such as frequent oil changes extend the life of the engine, and why slamming the brakes wears pads faster and even wastes gas by turning already-paid-for kinetic energy into useless heat. If you really want to explore this rabbit hole, I'd suggest starting at the bottom with some reading on how digital audio works. A good introduction that won't overwhelm the mathematically challenged is Digital Audio Explained for the Audio Engineer, by Nika Aldrich. The author keeps it simple - some critics say too simple - but still covers the important bases. It won't directly answer all of the questions posed above, but it will give you a foundation for either answering them yourself or better understanding explanations you come across later. Once those fundamentals are clear, you can tackle the broader questions, such as how a DAW deals with mixed sample rates. I would refer you to the documentation for an explanation of how that's handled. Effect plugins, for the most part, don't really care about sample rates.
Multiplying two numbers gives the same result regardless of how many times per second you do it. When 2x oversampling is enabled within a plugin, it's internally resampling the audio to twice the rate - conceptually, inserting an interpolated sample between every pair - before performing the operation. The upsampling itself is efficient, but now you're doing twice as many calculations in the same period of time, which reduces the availability of CPU cycles for other things. At some point you'll hit a wall where the CPU cannot keep up, and you have to increase buffer sizes or freeze tracks to carry on. However, up until that point there is no penalty for "making the CPU work hard". The CPU can either handle it, or it can't. It's not a draft animal. Oversampling neither improves nor diminishes audio quality. It does one thing only: it mitigates potential aliasing caused by harmonic distortion that some processors can produce. So does it hurt anything to always enable oversampling? No. It just gives your CPU more work to do. The key is understanding where harmonic distortion comes from, which plugins might cause such distortion and which ones never will, and how much aliasing is acceptable. You can see aliasing with a spectrum analyzer, so start there. Keep in mind that the analyzer can show aliasing that isn't actually audible, so addressing it with 16x oversampling will only make you feel better, not improve your mix. Synthesizers have the ability to output different sample rates. However, the SR they use internally may be unrelated. For example, say you want to generate a nice clean sine wave. A "pure" sine wave has no harmonics, so generating it at a higher sample rate gets you closer to that ideal. But when the sine wave is rendered in the DAW, it has to be at the same SR as the rest of the project, which implies a sample rate conversion within the synth. There is anecdotal evidence that some synths sound better at certain sample rates, which speaks more to the internal coding of the synth than any broader principle.
As for FM8 specifically, how it works under the hood is proprietary to NI. I would expect that measures are indeed in place to prevent aliasing, simply because frequency modulation famously creates gobs of high-frequency components that can potentially exceed Nyquist. I'm guessing they wouldn't need 96KHz operators to handle this, but might well do the modulation at a high sample rate, which would allow aggressive filtering at the output. I suspect that FM8 doesn't care about your project sample rate except at the final output, but if I'm wrong about that then yes, FM8 could conceivably be a candidate for oversampling just because of its potential for creating "illegal" frequencies. Sample libraries are a little different, in that there is neither need for nor benefit to oversampling the source files. Whatever flaws a sample has are baked into the original recording. Few sample libraries are sampled at 96KHz, but many are 48KHz and most are 44.1. Within Kontakt, you can mix 'n match libraries with different sample rates. Kontakt handles that internally. Exactly how it does that, I don't know. NI doesn't like to talk about things like that. But I do know that if you're loading a 48 KHz library alongside a 44.1KHz library, you will not hear any difference between loading them into separate instances versus one instance of Kontakt and thus forcing SR reconciliation. Either way, the output is going to match the project sample rate. "Is it worth it making a song at 192khz if the song will sound REALLY good when rendered down to 24 bit 96khz?" Oh man, that's a rat's nest. Audio interfaces are designed for maximum fidelity at one specific sample rate. High-end pro units are likely to work best at 96 KHz, while most prosumer products work better at 44.1 or 48. None are optimized for 192 or 384. But that's irrelevant if your projects are 100% in-the-box and never include recorded audio. 
If your project consists entirely of sampled instruments and soft synths, a 96KHz project sample rate makes no sense unless you'll be sending it out for mixing/mastering/distribution and the people you're sending it to have requested 96KHz. Otherwise, all you'd be doing is forcing a lot of unnecessary sample rate conversions. A lot of users assume that if higher sample rates are objectively "better", then there must be some benefit to converting to higher rates. But sadly, you cannot add fidelity back in after it's gone. The reality is that you can only degrade fidelity. However, it's important to remember that digital audio is of such high quality that its flaws fall well below what our ears can distinguish. The trick isn't so much knowing how to make it better, but knowing how much it can be made worse before anyone notices. It comes down to this: there is nothing wrong with taking steps that you suspect might improve your music. Experimentation is to be encouraged. But if you can't actually hear the difference, what have you really accomplished? Ah, but I hear you say, just because *I* can't hear the difference doesn't mean somebody else can't. Forget that. You are the benchmark. Nobody is ever going to be listening more closely than you do.
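The aliasing behavior described above - out-of-band harmonics folding back around Nyquist - can be sketched in a few lines. This is a simplified model that ignores the filtering a real resampler performs:

```python
def alias_frequency(f_hz, sample_rate):
    """Fold a frequency that exceeds Nyquist back into the representable band."""
    f_hz = f_hz % sample_rate            # the spectrum repeats every sample_rate Hz
    nyquist = sample_rate / 2
    return f_hz if f_hz <= nyquist else sample_rate - f_hz

# Say a saturation plugin generates the 5th harmonic of a 6 kHz tone: 30 kHz.
# At 44.1 kHz that can't be represented and folds back to an inharmonic 14.1 kHz:
print(alias_frequency(30_000, 44_100))   # 14100
# With 2x oversampling (88.2 kHz), 30 kHz sits below Nyquist, so it can simply
# be filtered off before downsampling back to the project rate - no aliasing:
print(alias_frequency(30_000, 88_200))   # 30000
```

Note that the folded 14.1 kHz component is not harmonically related to the 6 kHz source, which is exactly why aliasing sounds unpleasant when it's loud enough to hear.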
  5. There are many ways to adjust a track's volume, with different considerations for each when deciding which one to use. The most obvious, of course, is the Volume fader. The important thing about this method is that it's applied after any effects you have on the track. That makes it the safest way, because you can adjust it at any time and it won't mess up, say, compressor thresholds. However, it's mostly used to turn things down, because you can only raise the level by, iirc, +6dB with the volume fader. If you need more than a few dB, it's probably better to use one of the other methods first and come back to Volume to fine-tune the track balances. Then you have the Gain slider. It does the same thing as the Volume slider/fader, but it's applied at the very start of the signal chain, before all your effects. That means you probably don't want to tweak it after you're deep into the mixing process, because boosting the level going into a plugin can alter how the plugin works. Gain is best used early on, to get the tracks into the same ballpark for your rough mix. Both of the above methods are non-destructive, meaning the underlying audio data is not modified and you can therefore change your mind later. Whenever you need a more drastic change, you can use a destructive method that permanently alters the underlying data (well, technically it's only permanent after you've saved the project and exited Cakewalk; before then, CTRL-Z still works to undo it). This feature is under the Process menu. If you've recorded something that's way too quiet, this is how you raise it up to be comparable to your other tracks. You will probably still use the Volume fader or automate volume afterward. Most plugins also have output level controls, and some also have input level controls. I will often use the output knob on an EQ or compressor to tweak a track's volume if I already have volume automation and don't want to mess it up. Most effect plugins alter volume.
A band on an EQ can be thought of as a volume control that only affects specific frequencies. Sometimes we forget that, and can find ourselves wondering why there's an imbalance or a too-loud track when all we did was adjust the EQ. Sometimes, with EQ you can make a part more noticeable without turning it up at all. Balancing tracks can get complicated with FX. Now, bear in mind that all of these methods address "levels", not necessarily "loudness". The two are related, but different things. Perceived loudness can be increased through compression, for example. Compressors (and their more aggressive siblings, limiters) can raise average levels, and it's the average levels that determine the perception of loudness.
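Since every one of those level controls deals in decibels, it's worth remembering how they combine: gain stages in series simply add in dB, which corresponds to multiplying linear amplitude. A minimal sketch (the specific stage values are just an example):

```python
def db_to_linear(db):
    """Convert a dB gain change to a linear amplitude multiplier."""
    return 10 ** (db / 20)

def combine_db(*stages_db):
    """Gain stages in series add in dB (i.e., multiply in linear amplitude)."""
    return sum(stages_db)

# e.g. +4 dB at the Gain knob, -2 dB at a compressor's output, +1 dB on the fader:
total = combine_db(4, -2, 1)
print(total, db_to_linear(total))   # net 3 dB, roughly a 1.41x amplitude boost
```

This is also why "a few dB here, a few dB there" sneaks up on you: three small boosts of 2 dB each are a 6 dB (2x amplitude) change in total.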
  6. At one time I started a similar project. I had just acquired the Oberheim 8-Voice from Cherry Audio and thought "wouldn't it be cool to do a 70's-style epic synth opus"? No samplers, no acoustic instruments, no vox, just lots of layers of subtractive synths. Well, that project sits unfinished along with a couple hundred other half-baked ideas. Lately, I've been thinking about taking a bunch of those unfinished ideas and stringing them together. Last year I gave the project another shot. I let go of my strict no-samples requirement and allowed myself to use Superior Drummer for drums. The rest is all true to the original concept, consisting entirely of subtractive synths. This is where I left off on that one... https://soundclick.com/r/s86sea No current projects, sadly, as my DAW is being rebuilt after it fried on a hot day.
  7. You can't just show a pic without any backstory! Who was this Woody guy, and how, exactly, does he know your Mom? When I was 4 I found myself on a local kid's TV show in Tucson. I had no idea what was going on, as we didn't even own a television at the time. My dad had taken me to the station because he was buying advertising time there. I was only interested in the Teletype machine clacking out news bulletins. Somebody at the station asked if I wanted to be on this show, basically to sit on some bleachers with a bunch of kids and watch cartoons. The entire concept struck me as bizarre, but cartoons are cool so sure. It was a surreal experience, but when Dad asked if I wanted to do it again, I declined. Too many rocks waiting to be turned over in the back yard looking for creepy crawly things.
  8. Well, I understand Yakety Sax is now a hit again, being the new unofficial theme song of the British Parliament. First debuted by Mr. Johnson, it then became the soundtrack of every sidewalk announcement by whatever politician was resigning that week. P.S. Before anybody starts whining that this is a political post, it's not. Watch this clip at 1.5x speed and see if it doesn't bring back fond memories of Benny Hill.
  9. This little unassuming plugin gets used here on nearly every stereo track that isn't panned center, or that moves around via automation. I wouldn't pay $49 for it, but $19 is about right, I think.
  10. Somebody must have had their heart broken by a brown-eyed girl. Can't imagine any other reason anyone would hate that song. Except, I suppose, that it's often badly butchered.
  11. Didn't know about that one. The one I was thinking of is this one.
  12. I had a Juno-106 back in the day. I just used it as a string synthesizer, slaved via MIDI to my main instrument, a Jupiter-6. The two were a good pairing, as both used the same sound generators. The Juno, though, had a vastly simpler UI that you could learn in minutes. Plus it was smaller and lighter. When I sold the Jupiter, I threw in the Juno, a Yamaha TG-33 and a Roland drum machine to sweeten the deal, just because I thought I was "done" with music and wanted everything gone before it lost all of its value. Synths had no place in my world, a world of suits and ties and 12-hour workdays and Serious Business. Nowadays, I hate people like that. Aside from its brilliant simplicity, the Juno had just one feature that made it likeable: a pleasant chorus effect that made even the most basic patches sound good. Today, you can get that chorus effect as a free plugin.
  13. Hey, that's pretty cool. I didn't know that was in there. One of many optional columns I've never used. I should note that no application with a user interface can actually be oblivious to screen resolution. Every application I've ever written queries Windows for the current screen resolution and DPI, then creates the UI accordingly. Knowing this, the application is then able to avoid, say, rendering a button that the user can't see. Or to center a dialog, or create a toolbar. But as far as Microsoft is concerned, such applications are "unaware". What they actually mean is that the application doesn't check again after it's started up. A "DPI Aware" program is always checking screen resolution, in case it changes. You can imagine the impact that could have on a program like Cakewalk that needs to be heavily optimized. Imagine if it had to check screen resolution every time it scrolled the track view during playback. I think it's a reasonable assumption that display parameters won't change while Cakewalk is running. The mystery is what kind of magic Ableton is performing to suppress scaling.
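The resolution-and-DPI bookkeeping described above boils down to simple scaling arithmetic: Windows treats 96 DPI as 100% scaling. A toy illustration - a real application would query these values via Win32 calls such as GetDpiForWindow rather than hard-coding them:

```python
def scale_for_dpi(logical_px, dpi):
    """Map logical (96-DPI) pixels to physical pixels at the given display DPI."""
    return round(logical_px * dpi / 96)

# A button laid out at 120 logical pixels, rendered at common scale factors:
print(scale_for_dpi(120, 96))    # 120 physical px at 100% scaling
print(scale_for_dpi(120, 144))   # 180 physical px at 150% scaling
print(scale_for_dpi(120, 192))   # 240 physical px at 200% scaling
```

A "DPI unaware" app does this math once at startup; a "DPI aware" one redoes it whenever the DPI changes, e.g. when a window moves to a different monitor.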
  14. Such a down-to-earth guy. And for a guitarist, very attuned to things like instrument balance and separation. He approaches a live mix like he's in the studio. Great quote I'll no doubt use: "you always have that 6th gear but you don't live there" (talking about playing quieter to combat overly-reverberant rooms).
  15. ASIO, by design (for low latency), does not support multiple concurrent applications. People have created wrappers that can do it, and apparently some interface manufacturers have included them. Most don't. You could try something like Blue Cat's Connector, which purportedly makes ASIO multi-client. It's $49 but you can demo it to see if it solves your problem. A much easier solution would be to simply switch to WASAPI shared mode. There could be a slight latency hit, but you probably won't even notice it. [EDIT] Just found out that Steinberg itself has made a multi-client wrapper. I know nothing else about it, but I think it's a free download. Here's a link (ftp server).
  16. Cakewalk is DPI-aware, afaik. At least it has always seamlessly adapted to whatever display I'm using, and there've been a few over the past 36 years. Adapting to different monitors implies that the application "knows" the current resolution (as reported by Windows), which is all DPI awareness means. Yes, there is a more advanced mode that lets programs adapt on the fly to changing resolutions. But that only comes into play when, say, you drag the track view to another monitor that's at a different resolution than your main display. If Cakewalk is distorted or blurry, it isn't because it doesn't know the DPI. In fact, the monitor is probably at its default setting of 96 dpi. The distortions are caused by the monitor attempting to fake a lower resolution. I always use a display's native resolution, because that's always going to yield the best results. At one point I had a 20" monitor that was too high-res for its size, making the text difficult to read. That issue went away when I replaced it with a pair of 24" displays. Nowadays I use a pair of 34" widescreen displays, which work much better with my old eyes. Both are running at the same resolution. As for what the "best" monitor size is, I'd say the biggest one that will fit between your speakers without impinging on line of sight to them. For me that worked out to 34", which was as large as I could go and still have an equilateral triangle between ears and speakers without the display occluding them. My two displays are not side-by-side, obviously, but stacked. I actually prefer that setup because it's not as far when I drag something to the other display.
  17. Good on ya, Frank. Too often an interesting problem is posted but then never subsequently updated, depriving us all of the opportunity to learn something. Once you've figured it out - and you will - be sure to also edit your initial post to preface the thread title with "[RESOLVED]".
  18. I'm surprised no one has yet thought to include a Zombies track in a zombie movie or TV series. It would be such a great gag, having the hero frantically running from a zombie horde accompanied by "Tell Her No". No, no, no, don't hurt me now...
  19. AFAIK, audio interfaces don't tell the driver what buffer sizes to use. That has to come from an application, or from Windows itself (which I think only initializes settings on bootup). So Blindeddie is on the right track, I think. Something's overriding the instructions that Cakewalk sent to the driver when you first opened the project. You didn't specify which driver you're using, so I'll assume it's ASIO. If that's the case, give WASAPI a try. WASAPI supports multiple audio streams, each with its own sample rate. I'm just spitballing here, but it seems that if each data stream can have its own sample rate and wordlength, that implies that each one has its own buffers, making it immune to having two applications fight over buffer settings. If you are already using WASAPI, try switching to ASIO, which normally can only be used by one application at a time. I would think that would preclude interference from other processes. [EDIT] I checked out the WASAPI documentation to be sure I wasn't blowin' smoke. WASAPI does indeed support multiple independent audio streams, each with its own buffers. But that only applies to shared mode. In exclusive mode, WASAPI works more like ASIO, in that only one application at a time can have control over the driver. Exclusive mode is also more efficient, obviously, since it doesn't have to juggle multiple data streams. I use WASAPI here, and I have no problem running other programs while Cakewalk is open, e.g. watching a YouTube video. ASIO and WASAPI can be used at the same time, as they are separate worlds. You should be able to use ASIO for Cakewalk and WASAPI for Windows, and there won't be a conflict.
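Whatever ends up setting the buffer size, the trade-off it controls is the same: bigger buffers mean more latency but fewer dropouts. The relation is just frames divided by sample rate:

```python
def buffer_latency_ms(frames, sample_rate):
    """Duration of one audio buffer, expressed in milliseconds."""
    return frames / sample_rate * 1000

# Common buffer sizes at a 44.1 kHz project rate:
for frames in (64, 256, 1024):
    print(f"{frames:4d} frames -> {buffer_latency_ms(frames, 44100):.1f} ms")
```

So a jump from 256 to 1024 frames takes one buffer from roughly 6 ms to roughly 23 ms, which is why an unexpected buffer-size change is immediately noticeable when tracking live.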
  20. Yeh, I've always liked that version, too. I very much liked Carlos Santana's version, too - back before he started phoning in his performances. But it was the Zombies' vocal harmonies that drew me in initially. Imagine how awesome Santana's cover could have been if they'd had more than one vocalist. Earlier I described the guitarist, bassist and drummer as "hired guns", but here's a video from 2013 that shows nearly the same lineup I saw last night, with only the bass player changed. So those guys have been Zombies for quite a while. Rod still plays the same keyboards. The main difference is that all of them are considerably greyer now. So am I.
  21. Yesterday afternoon my bass player called me up and said he had an extra ticket to a concert and did I want to go. I said "sure", even before I knew who the act was. It was at a favorite local venue, a 121-year-old 600-seat theater that's run mostly by community volunteers. I'd played there a couple times and knew that it was a good room, acoustically speaking. Then he told me it was The Zombies. Right, I thought. Aren't those guys like 80 years old? And I was under the impression that the band's principal creative force, Rod Argent, had retired. But OK, I'd be up even for just a decent tribute band. On the way to the theater, I told my friend the story of the fake Zombies back in the day. It's a great story, google it. It's part of ZZ Top's origin story. Short version: two fake Zombies bands toured the US to exploit the real band's hits. The real Zombies had never toured America, having broken up before they had those hits, so nobody knew what they looked like. The promoter was such a hack that he didn't even bother having keyboards in the fake Zombies. So I was pleasantly surprised when they came onstage and there's Rod Argent and Colin Blunstone. The other three guys were clearly hired guns, but awesome players, especially the bassist. The five of them kicked *****, with extended jams on Hold Your Head Up and She's Not There. It was a great performance, and I had the luxury of enjoying it from 4th row center. Sadly, the FOH guy must have been one of those volunteers, because the sound was atrocious. I mean, really, really bad. I'd have given them a piece of my mind, had I actually paid for the ticket. The best part of the show was when Rod told extended anecdotes of their experiences at Abbey Road with Geoff Emerick as their engineer, using the Mellotron that the Beatles had left behind from recording Sgt. Pepper, and making friends with a new intern named Alan Parsons.
  22. Warren has definitely drunk some kind of koolaid and gone full shill. At least he doesn't call it "musical" here. Just "amazing" and "fantastic". Not to say this isn't a great EQ, as it appears to be quite capable and full-featured. But an EQ has a straightforward job, and as long as it does that job, which EQ you use almost doesn't matter. Warren knows this.
  23. I thought the piece was pretty good. I'd have liked him to go into more detail and give specific examples, but as I was reading it I had no trouble making my own mental list of products that he could have been talking about. Are we just now entering the age of B.S. processors? Nah, we've been in it from the start. But the bad:great ratio has been getting worse as old ideas keep getting recycled, dressed up but not refined. However, I don't believe every useful plugin has already been invented. I've long felt that audio analysis could be approached the same way as any other practical numerical analyses. iZotope has been edging in that direction in recent years. Fabfilter has quietly built in advanced smart features with hardly anyone noticing. It's an area that still has a long way to go before we can call it fully played out.
  24. You mean those $5,000 speaker cables? Magic Stones? Acoustical light switches? Yeh, those guys are the worst. But then, their audience isn't the technically-minded and the critical thinkers. Mr. Huart, otoh, isn't speaking to gullible rubes. At least, I don't think so. Most of his advice is pretty solid. That's why I hold him to a higher standard. Now, if Dan Worrall ever posts a video titled "world's most musical EQ?" I will have no choice but to give up on humanity altogether.
  25. Ugh. I expect more from Warren. While nothing in that video is a lie, he is clearly struggling to put a spin on it, resorting to literally equating "musical" with "quality". As in "Acustica stuff always sounds super musical; I've never heard anybody complain about the quality of what they do". By that logic, every plugin that does what it's designed to do qualifies as "musical". Voxengo SPAN, the musical spectrum analyzer! MNoiseGenerator, the musical white noise generator. Heck, I've got some speaker stands that are pretty high-quality - but I've never thought of them as "musical".