Everything posted by bitflipper

  1. That's encouraging. I've seen side-by-side comparisons of the Alpha 65s and the EVOs, and the consensus seems to be that the EVOs are an improvement. And cheaper to boot. I wouldn't mind getting the dual-woofer version, but at 21" wide I just don't have space for them. So you've never felt the need for a sub? Or maybe would like one but just couldn't justify the expense (the Focal sub is a grand)? Or maybe your room isn't big enough to support a subwoofer? For the past 7 years I've enjoyed the luxury of really good bass extension on my current monitors (Emotiva Stealth8), which reach down to a window-rattling 30 Hz at -3dB. Consequently, I never replaced my stolen subwoofer because I didn't need it anymore. Prior to that, I used the ADAM P11-A, the predecessor to the A7 but with a slightly larger woofer. I liked them a lot, their only real downside being the narrow sweet spot from those laser-like folded ribbon tweeters. These Focals are famous for having a wide sweet spot, and they go down to 38 Hz so no sub needed. I'm also concerned that the EVO might not be loud enough. For everyday mixing, sure, you don't need 104 dB. I mix at around 70 dB. But my band also rehearses in this space and we often learn songs from YouTube videos so there are times when I do need them loud enough for a roomful of people to hear. The Stealths featured 200W of amplification. My other candidate is the Presonus Sceptre 8, at 180W. The EVOs are a wimpy 115W. It sounds like that's never been a problem for you. As I was typing this out I got a call from my guy at Sweetwater, whom I'd asked for advice on this purchase. He didn't waffle for a second, saying he's a total Focal fanboy. Of course, the ones he's running are the $8k model - he must get a helluva employee discount - but he's convinced me to place the order for the Focals.
  2. Hard to find nowadays in the original Wintergreen livery.
  3. That's how they're advertised. They're even on sale at the moment, $494 ea vs $550. But that could just mean they're not selling well. And that could mean they're not very good. And that could mean I'll lose my motivation to make music. And that could mean an early death, because if I can't make music, then what's the point? Why not just become a heroin addict? How do I know it's not some devious trick by those sneaky French buggers? I'll have to consult with Zo, the only Frenchman I know. Then again, he could be in on it. Then again, I could just be overthinking it.
  4. I'm really tempted to get the new Focal Alpha 80 Evo. On paper it specs out great, and its older 6.5" sibling has great reviews. But I haven't actually heard the 8" version yet, as it's only been out for a few months. I just have a hard time believing that a pair of great monitors can be had for $1000. That's way cheaper than previous speakers I've owned.
  5. Yeh, but those people can't tell the difference between a FLAC, an MP3, a WAV and an 8-bit Donkey Kong soundtrack.
  6. The reason your MP3s sound different online is that the hosting services re-encode and stream them at a lower bitrate than the original files. In truth, an MP3 at 256 kb/s or higher should be indistinguishable from an uncompressed file, assuming no other processing has been applied (which happens a lot). That said, I deal exclusively with FLAC files nowadays, whether uploading to a hosting service or sharing parts for an online collaboration. For a long time, SoundClick had a file size limit that kept me from uploading FLAC, but they've done away with that. SoundCloud, I think, has always accepted WAV and FLAC. Of course, such services won't actually stream your music at that rate, to conserve bandwidth, but if you download the files they should not be degraded. That will remain true for as long as somebody has to pay for the bandwidth. Jeff Bezos needs another yacht, and his servers host the vast majority of, well, everything. MP3 will also remain the standard for as long as people continue to get away with charging big premiums for lossless formats.
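If you want to run your own blind test of that 256 kbps claim, here's a rough sketch of how you might generate both formats from the same source. It assumes ffmpeg is installed and on your PATH; the filenames are just placeholders.

```python
# Minimal sketch: make a 256 kbps MP3 and a lossless FLAC from the same WAV for comparison.
# Assumes ffmpeg is installed and on PATH; filenames are hypothetical.
import subprocess

SRC = "mix_final.wav"  # placeholder source file

# MP3 at 256 kbps - at this rate most listeners can't pick it from the original
subprocess.run(["ffmpeg", "-y", "-i", SRC, "-codec:a", "libmp3lame",
                "-b:a", "256k", "mix_final_256.mp3"], check=True)

# FLAC - lossless, so the decoded audio is bit-identical, it just takes less disk space
subprocess.run(["ffmpeg", "-y", "-i", SRC, "mix_final.flac"], check=True)
```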
  7. And it's not like us geezers don't have enough to deal with already, what with routine failures of random body parts. At least my gall bladder worked for 70 years before clogging up with stones. It should be reasonable to expect comparable longevity from solid-state electronics.
  8. Stuff breaking down. First, a floor monitor died. It was still under warranty, so I took it to an authorized repair shop - 2 months ago. No word on its status. Some online posts suggest that it may be - no kidding - a software problem. In a frickin' speaker. Might require a firmware upgrade. Then my computer burned up. New motherboard, RAM and CPU, $675. At least it's a slight upgrade. After firing up the new and improved computilator, I noticed some distortion in one of my speakers. After an hour of use, it began crackling loudly. A bad capacitor, probably. Wrote to the manufacturer, who replied saying that model was no longer supported and I was on my own. At least they were nice enough to attach a PDF of the amp schematics. But it's unusable at the moment, so I have a computer again but all nearly-completed projects are on hold because I don't want to do a final mix and mastering with just headphones. So now I'm unexpectedly in the market for some new speakers. These things never happen when I have spare cash on hand, so I'm probably looking at a significant downgrade. Anything I can afford will have less power, less bass extension, lower mass, less volume than what I've enjoyed for the past 7 years. Here's a model that seems to be universally well-liked (Presonus Sceptre S8). But I can't get over the fact that it looks like a Minion. If I get this I will have to name it Kevin. Maybe get some kid's size overalls and hang them from the speaker stand.
  9. I reviewed the SWAM instruments a couple years ago for SoundBytes. That was the angle I went with - small size and fast load times. I've since become very fond of the sax and the other reed instruments, but prefer samples for most string instruments, often layering sampled strings with the SWAM violin to bring out a melody. Convincing physical modeling for guitars is still a ways off, being a far more complex challenge than piano.
  10. Yes, it is. They take up less disk space, use less memory, load waaay faster, and sometimes are capable of things their sampled equivalents are not. The problem is that modelled instruments just haven't achieved the same level of realism yet. While lots of users say they can't distinguish between Pianoteq and a sampled piano, anyone who plays piano as their primary instrument can easily tell the difference - but only when soloed or way up front in a mix. So for most people and most applications, the modelled alternative is workable. Personally, I'll stick with Keyscape and endure its excruciating load times because it sounds incredible. It's slow because they didn't use any of the usual tricks for minimizing memory usage, e.g. every note is sampled, so no stretching. I have the full suite of modelled instruments from Audio Modeling. They load in the blink of an eye, take up a tiny fraction of the memory and disk space that a Kontakt library would, are expressive and can sound quite good in the right context. But naked, they often sound a bit "synthy". I still like them because they can do things a sampled instrument cannot, such as programmable glissando and vibrato speeds. But as Jim notes above, the speed of the drive itself is consistent, regardless of the library. Perceived slowness is a function of how much data is being loaded into memory on startup. That's why I suggested that phrase libraries might be inherently slower to load because the size of their individual samples is larger, or maybe it always loads the complete sample set. I cannot test this hypothesis myself, as I have no phrase libraries here to look at.
  11. Have you done a Windows restart? Check Task Manager and see if there is an instance of cakewalk.exe running. If so, use End Task to kill it.
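For what it's worth, that same cleanup can be scripted. A minimal Windows-only sketch, using taskkill (the command-line counterpart of Task Manager's End Task):

```python
# Minimal sketch: force-kill a stuck Cakewalk process (Windows only).
import subprocess

# /IM selects the process by image name, /F forces termination of a hung process
subprocess.run(["taskkill", "/IM", "cakewalk.exe", "/F"], check=False)
```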
  12. Just guessing, but it could be that phrase libraries load bigger chunks into memory than single-note samples. By default, Kontakt preloads only the start of each sample so it's ready to go when called upon. If the note played is short, no further disk access will be required. If you have lots of RAM, you can increase the preload cache, resulting in better performance during playback at the cost of longer load times. With phrase libraries, you're less likely to only play the first 100 milliseconds of a sample and more likely to play the whole phrase. [AFTERTHOUGHT] Take a look at the preload buffer size for those phrase libraries and compare them to conventional libraries. The library author sets the default preload cache size, which can then be overridden by the user. I'd be curious to know if the default cache is larger for phrase libraries (I don't have any here to look at).
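A toy back-of-the-envelope model of that hypothesis, with made-up numbers (Kontakt's real defaults and any per-library overrides will differ):

```python
# Toy model: preload RAM grabbed at load time if only the head of each sample is cached.
# All figures are illustrative, not Kontakt's actual values.

def preload_ram_mb(num_samples, preload_kb_per_sample):
    return num_samples * preload_kb_per_sample / 1024

# A typical multisampled instrument: thousands of short samples, a tiny preload chunk each
single_note_lib = preload_ram_mb(num_samples=8000, preload_kb_per_sample=60)

# A phrase library: far fewer samples, but each preload chunk (or whole phrase) is huge
phrase_lib = preload_ram_mb(num_samples=600, preload_kb_per_sample=4000)

print(f"single-note library preload: ~{single_note_lib:.0f} MB")
print(f"phrase library preload:      ~{phrase_lib:.0f} MB")
```

If the second number really is an order of magnitude bigger, that alone would explain the longer load times.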
  13. When Euripides famously advised us to "question everything", he was indulging in a bit of hyperbole. As you're discovering, literally questioning everything leads you down a rabbit hole of indecision, frustration and paralysis. Sometimes, you're better off simply accepting that "this works, but that doesn't" and carrying on. That said, I completely understand one's desire to understand how stuff works, and acknowledge that such understanding often leads to better practices. Do you need to know how automobiles work in order to drive one? No. But is it helpful to understand the basic principles? Of course. The mechanically knowledgeable driver will get better fuel efficiency, longer-lasting brake pads, longer engine life and get into fewer accidents. Not because he can rebuild an engine in the back yard, but because his daily driving habits are informed by knowledge of what actions are harmful to the car, why preventative measures such as frequent oil changes extend the life of the engine, why slamming the brakes wears pads faster and even wastes gas by turning already-paid-for kinetic energy into useless heat energy.
If you're really wanting to explore this rabbit hole, I'd suggest starting at the bottom with some reading on how digital audio works. A good introduction that won't overwhelm the mathematically-challenged is Digital Audio Explained for the Audio Engineer, by Nika Aldrich. The author keeps it simple - some critics say too simple - but still covers the important bases. It won't directly answer all of the questions posed above, but it will give you a foundation for either answering them yourself or better understanding explanations you come across later. Once those fundamentals are clear, you can then tackle the broader questions such as how a DAW deals with mixed sample rates. I would refer you to the documentation for an explanation as to how that's handled.
Effect plugins, for the most part, don't really care about sample rates. Multiplying two numbers gives the same result regardless of how many times per second you do it. When 2x oversampling is enabled within a plugin, it's just inserting interpolated samples before performing the operation. The upsampling itself is efficient, but now you're doing twice as many calculations in the same period of time, which reduces the availability of CPU cycles for other things. At some point you'll hit a wall where the CPU cannot keep up, and you have to increase buffer sizes or freeze tracks to carry on. However, up until that point there is no penalty for "making the CPU work hard". The CPU can either handle it, or it can't. It's not a draft animal.
Oversampling neither improves nor diminishes audio quality. It does one thing only: it mitigates potential aliasing caused by harmonic distortion that some processors can produce. So does it hurt anything to always enable oversampling? No. It just gives your CPU more work to do. The key is understanding where harmonic distortion comes from, which plugins might cause such distortion and which ones never will, and how much aliasing is acceptable. You can see aliasing with a spectrum analyzer, so start there. Keep in mind that the analyzer can show aliasing that isn't actually audible, so addressing it with 16x oversampling will only make you feel better, not improve your mix.
Synthesizers have the ability to output different sample rates. However, the SR they use internally is unrelated to your project's sample rate.
For example, say you want to generate a nice clean sine wave. A "pure" sine wave has no harmonics, so generating it at a higher sample rate gets you closer to that ideal. But when the sine wave is rendered in the DAW, it has to be at the same SR as the rest of the project, which implies a sample rate conversion within the synth. There is anecdotal evidence that some synths sound better at certain sample rates, which speaks more to the internal coding of the synth than any broader principle.
As for FM8 specifically, how it works under the hood is proprietary to NI. I would expect that measures are indeed in place to prevent aliasing, simply because frequency modulation famously creates gobs of high-frequency components that can potentially exceed Nyquist. I'm guessing they wouldn't need 96 kHz operators to handle this, but might well do the modulation at a high sample rate, which would allow aggressive filtering at the output. I suspect that FM8 doesn't care about your project sample rate except at the final output, but if I'm wrong about that then yes, FM8 could conceivably be a candidate for oversampling just because of its potential for creating "illegal" frequencies.
Sample libraries are a little different, in that there is neither need for nor benefit to oversampling the source files. Whatever flaws a sample has are baked into the original recording. Few sample libraries are sampled at 96 kHz, but many are 48 kHz and most are 44.1. Within Kontakt, you can mix 'n match libraries with different sample rates. Kontakt handles that internally. Exactly how it does that, I don't know. NI doesn't like to talk about things like that. But I do know that if you're loading a 48 kHz library alongside a 44.1 kHz library, you will not hear any difference between loading them into separate instances versus one instance of Kontakt and thus forcing SR reconciliation. Either way, the output is going to match the project sample rate.
"Is it worth it making a song at 192khz if the song will sound REALLY good when rendered down to 24 bit 96khz?" Oh man, that's a rat's nest. Audio interfaces are designed for maximum fidelity at one specific sample rate. High-end pro units are likely to work best at 96 kHz, while most prosumer products work better at 44.1 or 48. None are optimized for 192 or 384. But that's irrelevant if your projects are 100% in-the-box and never include recorded audio. If your project consists entirely of sampled instruments and soft synths, a 96 kHz project sample rate makes no sense unless you'll be sending it out for mixing/mastering/distribution and the people you're sending it to have requested 96 kHz. Otherwise, all you'd be doing is forcing a lot of unnecessary sample rate conversions.
A lot of users assume that if higher sample rates are objectively "better", then there must be some benefit to converting to higher rates. But sadly, you cannot add fidelity back in after it's gone. The reality is that you can only degrade fidelity. However, it's important to remember that digital audio is of such high quality that it's already far better than our ears can distinguish. The trick isn't so much knowing how to make it better, but how much it can be made worse before anyone notices.
It comes down to this... there is nothing wrong with taking steps that you suspect might improve your music. Experimentation is to be encouraged. But if you can't actually hear the difference, what have you really accomplished?
Ah, but I hear you say, just because *I* can't hear the difference doesn't mean somebody else can't hear it. Forget that. You are the benchmark. Nobody is ever going to be listening more closely than you do.
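If you'd rather see the aliasing-vs-oversampling trade-off than take my word for it, here's a small numpy/scipy experiment. The 15 kHz tone, the tanh waveshaper and the 2x factor are arbitrary stand-ins for a real source and a real distortion plugin.

```python
# Toy demonstration of aliasing from harmonic distortion, and how oversampling mitigates it.
# All numbers are arbitrary; tanh() stands in for any saturation/distortion plugin.
import numpy as np
from scipy.signal import resample_poly

fs = 44100
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 15000 * t)       # 15 kHz sine; its harmonics exceed Nyquist (22.05 kHz)

def distort(x):
    return np.tanh(4 * x)                  # nonlinearity -> harmonics at 45 kHz, 75 kHz, ...

# 1) Distort at the project rate: out-of-band harmonics fold back as in-band aliases
plain = distort(tone)

# 2) "2x oversampling": upsample, distort, then filter and downsample back to 44.1 kHz
oversampled = resample_poly(distort(resample_poly(tone, 2, 1)), 1, 2)

# Measure junk in a band (100 Hz - 5 kHz) where no genuine signal content exists
for name, y in (("no oversampling", plain), ("2x oversampling", oversampled)):
    spectrum = np.abs(np.fft.rfft(y * np.hanning(len(y))))
    freqs = np.fft.rfftfreq(len(y), 1 / fs)
    junk = spectrum[(freqs > 100) & (freqs < 5000)].max() / spectrum.max()
    print(f"{name}: worst low-frequency alias is {20 * np.log10(junk):.0f} dB relative to the peak")
```

The exact numbers don't matter; the point is that the junk shows up on a spectrum analyzer long before it's audible.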
  14. There are many ways to adjust a track's volume, each with its own considerations when deciding which one to use.
The most obvious, of course, is the Volume fader. The important thing about this method is that it's applied after any effects you have on the track. That makes it the safest way, because you can adjust it at any time and it won't mess up, say, compressor thresholds. However, it's mostly used to turn things down, because you can only raise the level by, iirc, +6dB with the volume fader. If you need more than a few dB, it's probably better to use one of the other methods first and come back to Volume to fine-tune the track balances.
Then you have the Gain slider. It does the same thing as the Volume slider/fader, but it's applied at the very start of the signal chain, before all your effects. That means you probably don't want to tweak it after you're deep into the mixing process, because boosting the level going into a plugin can alter how the plugin works. Gain is best used early on, to get the tracks into the same ballpark for your rough mix.
Both of the above methods are non-destructive, meaning the underlying audio data is not modified and you can therefore change your mind later. Whenever you need a more drastic change, you can use a destructive method that permanently alters the underlying data (well, technically it's only permanent after you've saved the project and exited Cakewalk; before then, Ctrl+Z still works to undo it). This feature is under the Process menu. If you've recorded something that's way too quiet, this is how you raise it up to be comparable to your other tracks. You will probably still use the Volume fader or automate volume afterward.
Most plugins also have output level controls, and some also have input level controls. I will often use the output knob on an EQ or compressor to tweak a track's volume if I already have volume automation and don't want to mess it up. Most effect plugins alter volume. A band on an EQ can be thought of as a volume control that only affects specific frequencies. Sometimes we forget that, and can find ourselves wondering why there's an imbalance or a too-loud track when all we did was adjust the EQ. Sometimes, with EQ you can make a part more noticeable without turning it up at all. Balancing tracks can get complicated with FX.
Now, bear in mind that all of these methods address "levels", not necessarily "loudness". The two are related, but different things. Perceived loudness can be increased through compression, for example. Compressors (and their more aggressive siblings, limiters) can raise average levels, and it's the average levels that determine the perception of loudness.
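A minimal sketch of the pre-FX vs. post-FX distinction above, using a toy compressor and made-up numbers (not Cakewalk's actual internals):

```python
# Toy illustration: why Gain (pre-FX) and Volume (post-FX) behave differently around a compressor.
import numpy as np

def compressor(x, threshold=0.5, ratio=4.0):
    """Toy compressor: the part of the level above the threshold is divided by the ratio."""
    over = np.maximum(np.abs(x) - threshold, 0.0)
    return np.sign(x) * (np.minimum(np.abs(x), threshold) + over / ratio)

audio = np.array([0.2, 0.6, 0.9, 0.4])    # made-up track samples

# Gain sits BEFORE the FX bin: a boost pushes more signal over the threshold,
# so the compressor clamps harder and the balance you dialed in changes.
gain_then_fx = compressor(audio * 2.0)

# Volume sits AFTER the FX bin: the compressor sees exactly what it saw before,
# so its behavior is unchanged and the whole track simply gets louder.
fx_then_volume = compressor(audio) * 2.0

print("gain x2, then compressor:  ", np.round(gain_then_fx, 3))
print("compressor, then volume x2:", np.round(fx_then_volume, 3))
```

Boosting before the compressor changes what the compressor does; boosting after just makes the same result louder, which is why Gain belongs to the rough-mix stage and Volume to the final balance.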
  15. At one time I started a similar project. I had just acquired the Oberheim 8-Voice from Cherry Audio and thought, "Wouldn't it be cool to do a '70s-style epic synth opus?" No samplers, no acoustic instruments, no vox, just lots of layers of subtractive synths. Well, that project sits unfinished along with a couple hundred other half-baked ideas. Lately, I've been thinking about taking a bunch of those unfinished ideas and stringing them together. Last year I gave the project another shot. I let go of my strict no-samples requirement and allowed myself to use Superior Drummer for drums. The rest is all true to the original concept, consisting entirely of subtractive synths. This is where I left off on that one... https://soundclick.com/r/s86sea No current projects, sadly, as my DAW is being rebuilt after it fried on a hot day.
  16. You can't just show a pic without any backstory! Who was this Woody guy, and how, exactly, does he know your Mom? When I was 4 I found myself on a local kids' TV show in Tucson. I had no idea what was going on, as we didn't even own a television at the time. My dad had taken me to the station because he was buying advertising time there. I was only interested in the Teletype machine clacking out news bulletins. Somebody at the station asked if I wanted to be on this show, basically to sit on some bleachers with a bunch of kids and watch cartoons. The entire concept struck me as bizarre, but cartoons are cool so sure. It was a surreal experience, but when Dad asked if I wanted to do it again, I declined. Too many rocks waiting to be turned over in the back yard looking for creepy crawly things.
  17. Well, I understand Yakety Sax is now a hit again, being the new unofficial theme song of the British Parliament. First debuted by Mr. Johnson, it then became the soundtrack of every sidewalk announcement by whatever politician was resigning that week. P.S. Before anybody starts whining that this is a political post, it's not. Watch this clip at 1.5x speed and see if it doesn't bring back fond memories of Benny Hill.
  18. This little unassuming plugin gets used here on nearly every stereo track that isn't panned center, or that moves around via automation. I wouldn't pay $49 for it, but $19 is about right, I think.
  19. Somebody must have had their heart broken by a brown-eyed girl. Can't imagine any other reason anyone would hate that song. Except, I suppose, that it's often badly butchered.
  20. Didn't know about that one. The one I was thinking of is this one:
  21. I had a Juno-106 back in the day. I just used it as a string synthesizer, slaved via MIDI to my main instrument, a Jupiter-6. The two were a good pairing, as both used the same sound generators. The Juno, though, had a vastly simpler UI that you could learn in minutes. Plus it was smaller and lighter. When I sold the Jupiter, I threw in the Juno, a Yamaha TG-33 and a Roland drum machine to sweeten the deal, just because I thought I was "done" with music and wanted everything gone before it lost all of its value. Synths had no place in my world, a world of suits and ties and 12-hour workdays and Serious Business. Nowadays, I hate people like that. Aside from its brilliant simplicity, the Juno had just one feature that made it likeable, a pleasant chorus effect that made even the most basic patches sound good. Today, you can get that chorus effect as a free plugin.
  22. Hey, that's pretty cool. I didn't know that was in there. One of many optional columns I've never used. I should note that no application with a user interface can actually be oblivious to screen resolution. Every application I've ever written queries Windows for the current screen resolution and DPI, then creates the UI accordingly. Knowing this, the application is then able to avoid, say, rendering a button that the user can't see. Or to center a dialog, or create a toolbar. But as far as Microsoft is concerned, such applications are "unaware". What they actually mean is that the application doesn't check again after it's started up. A "DPI Aware" program is always checking screen resolution, in case it changes. You can imagine the impact that could have on a program like Cakewalk that needs to be heavily optimized. Imagine if it had to check screen resolution every time it scrolled the track view during playback. I think it's a reasonable assumption that display parameters won't change while Cakewalk is running. The mystery is what kind of magic Ableton is performing to suppress scaling.
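For the curious, here's roughly what that one-time startup query looks like, sketched in Python via ctypes rather than the Win32 C API a real application would call (Windows only; GetDpiForSystem needs Windows 10 or later):

```python
# Sketch of a one-time screen/DPI query at startup (Windows only, via ctypes).
import ctypes

user32 = ctypes.windll.user32

# Screen size in pixels (SM_CXSCREEN = 0, SM_CYSCREEN = 1)
width, height = user32.GetSystemMetrics(0), user32.GetSystemMetrics(1)

# System DPI; 96 corresponds to 100% scaling
try:
    dpi = user32.GetDpiForSystem()
except AttributeError:
    dpi = 96  # fall back to the unscaled default if the call isn't available

print(f"{width}x{height} at {dpi} DPI ({dpi / 96:.0%} scaling)")
# A "DPI unaware" app does this once at startup; a "DPI aware" one re-checks
# (e.g. by handling WM_DPICHANGED) whenever the display configuration changes.
```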
  23. Such a down-to-earth guy. And for a guitarist, very attuned to things like instrument balance and separation. He approaches a live mix like he's in the studio. Great quote I'll no doubt use: "you always have that 6th gear but you don't live there" (talking about playing quieter to combat overly-reverberant rooms).
  24. ASIO, by design (for low latency), does not support multiple concurrent applications. People have created wrappers that can do it, and apparently some interface manufacturers have included them. Most don't. You could try something like Blue Cat's Connector, which purportedly makes ASIO multi-client. It's $49 but you can demo it to see if it solves your problem. A much easier solution would be to simply switch to WASAPI shared mode. There could be a slight latency hit, but you probably won't even notice it. [EDIT] Just found out that Steinberg itself has made a multi-client wrapper. I know nothing else about it, but I think it's a free download. Here's a link (ftp server).