Everything posted by bitflipper
-
When I was in my teens and twenties, I took pride in my John Kay impersonation. When I sang "I like to dream..." all the shiny was stripped from my throat after the "I". Now my voice sounds like that all the time. Some say it has "character", when they're trying to be charitable. Wish I'd been kinder to my vocal cords in the early days.
-
The overarching theme here is that Melda could be your one-stop solution because they offer GOBS of plugins - but not all of them are necessarily best-in-class. Anyone looking for a single vendor to switch from Waves to will be frustrated. Everybody knows my opinion of Waves the company, but I have to admit that nobody else offers the same breadth of products with consistently high quality. Some have the quality but not the breadth (e.g. FabFilter), some have breadth but not the quality (e.g. Plug & Mix). You really have to take the buffet approach when building your toolkit. Semi-related rant...a lot of users moan about Melda's UIs. Voxengo often gets called "ugly". Many also complain about ValhallaDSP's utilitarian look. To those whiners I say man up and invest the time to learn what these products can do. Nobody's gonna critique your mix by saying "I can hear that you used some ugly plugins". /rant
-
That was a good demo, Simeon. Unfortunately, I was not moved to make the purchase. Those variable slides are great and something my current nylon libraries don't do. But I couldn't justify $69 for just that one feature. There also doesn't seem to be polyphonic legato. I also don't like that fret noises, legato and sustains are all separate nki's rather than being keyswitched articulations. The instrument does sound really good and the vibrato sounds natural. But I'll be sticking with Renaxxance for now.
-
How many of these have you used? I was surprised at how many of them I've owned or used (Kustom, Shure, Electrovoice, Echolette, Selmer, WEM, Bose). I just kept thinking that somebody's garage had a lot more room in it for a while.
-
I'm partial to nonstandard percussion, so I actually use these a lot. Stomps make an interesting substitute (or layer) for conventional bass drums. I like to layer finger snaps and handclaps and run them through a delay.
-
That stuff makes my throat hurt. Right after the Waves debacle, there were a whole cluster of videos on the theme of "how to dump Waves". This one was pretty extensive.
-
Just yesterday I was thinking "sure these 6 trombones sound pretty epic, but how much better would it be if I added 60 more?"
-
That's exactly the kind of thing I wanted to know. Thanks Simeon. I'll be checking out your video. Slides and legato transitions are key to all string libraries' believability. Most of the time, I resort to fiddly scroll wheel tweaks to implement them. If a sampled guitar includes slides, they are usually pre-recorded samples with a fixed duration and span, which can sound quite authentic but are only useful at specific intervals and note durations. That's why my go-to solo violin is Audio Modeling's modeled SWAM Violin: being synthesized rather than sampled, it allows you to define your own transitions. For Kontakt libraries, this requires clever scripting and doesn't always sound natural. OTS' Slide Acoustic and Slide Lap Steel do a pretty good job of this, but at the cost of being fiddly and time-consuming to program. I'm waiting for a true modeled guitar to come along. In the meantime, my own secret trick is that many of the "guitars" in my recordings are actually Zebra2.
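For the curious, the "scroll wheel tweaks" amount to drawing a pitch-bend ramp under the note. Here's a rough Python sketch using mido (the bend range, note and timings are all made up for illustration) that writes one programmatically instead:
```python
# Sketch: write a short MIDI clip with a pitch-bend ramp approximating a
# slide into a held note. Values are illustrative, not from any library.
import mido

TICKS_PER_BEAT = 480
mid = mido.MidiFile(ticks_per_beat=TICKS_PER_BEAT)
track = mido.MidiTrack()
mid.tracks.append(track)

# Assume the synth's bend range is +/- 2 semitones, so starting two semitones
# low maps to the minimum pitchwheel value (-8192).
track.append(mido.Message('note_on', note=64, velocity=90, time=0))
track.append(mido.Message('pitchwheel', pitch=-8192, time=0))

# Ramp the bend back to 0 over half a beat in 16 steps -> the audible slide.
steps = 16
step_ticks = (TICKS_PER_BEAT // 2) // steps
for i in range(1, steps + 1):
    bend = int(-8192 * (1 - i / steps))
    track.append(mido.Message('pitchwheel', pitch=bend, time=step_ticks))

# Hold the target pitch for the rest of the bar, then release.
track.append(mido.Message('note_off', note=64, velocity=0,
                          time=TICKS_PER_BEAT * 4 - TICKS_PER_BEAT // 2))
mid.save('slide_sketch.mid')
```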
-
And you are correct. However, the master bus isn't the last place it goes. It's second-to-last. From there the audio is handed off to the audio interface via its driver, and is labeled in the dropdown list of routing destinations with the name that Windows gave it. On my system, for example, it's called "Speakers (Saffire Audio)" because that's what Windows calls my Focusrite Saffire Pro interface. Like Steve says above, it wouldn't matter whether you sent the reference track to the master or to the outside world directly -- if, and only if, there is no processing being done on the master bus. Usually, there is. And you want to bypass that so that the DAW has zero effect on the sound of your reference.
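To make the "zero effect" point concrete, here's a toy Python/numpy sketch (the "master chain" is just a stand-in gain and ceiling, not anyone's real plugin chain):
```python
# Sketch: a reference routed through the master bus picks up whatever
# processing lives there; routed straight to the hardware output it doesn't.
import numpy as np

fs = 44100
t = np.arange(fs) / fs
reference = 0.9 * np.sin(2 * np.pi * 440 * t)   # stand-in for the reference track

def master_bus(x):
    """Stand-in for a typical master chain: a little gain plus a hard ceiling."""
    return np.clip(x * 1.5, -1.0, 1.0)

through_master = master_bus(reference)   # what you'd hear via the master bus
direct_out = reference                   # what you'd hear routed straight to the interface

print("max difference:", np.max(np.abs(through_master - direct_out)))
# Any non-zero difference means the DAW is no longer neutral toward the reference.
```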
-
This one might be pretty good, but it's hard to tell as there are no detailed walkthroughs. They've tried to make the interface simple by not using keyswitches and limiting the number of articulations, but I'd need to see a real demo to know if they've succeeded or just dumbed it down too much. Either way, it'd be hard to separate me from my longtime go-to nylon guitar, Renaxxance from Indiginus. It doesn't have as many articulations and options as the OTS Nylon, but for the price of the OTS product you could buy Renaxxance plus two more instruments.
-
Anyone know if the "New" Iron Pack 7 is an update to the old "Iron Pack 7"? I'll gladly give them another 3 bucks if they've added content.
-
Ah, the true Golden Era. Young folks complain that they'll never be able to own a home, jealous that their grandparents bought theirs for less than the price of a pickup truck today. Little do they know, that wasn't the best perk us boomers enjoyed.
-
It seems I was mistaken about UM2 not supporting direct monitoring. Sorry about that. Odd that the manufacturer didn't think to mention it, but made a big deal of it when advertising its more expensive siblings. AFAIK, Cakewalk has no built-in loopback test. It's something you'd only ever do once, after installing a new interface or switching drivers. However, the DAW does query the interface's driver so that it knows how much compensation to apply. If you're using ASIO, you can see the reported roundtrip latency in the Driver Settings panel of Preferences, under "ASIO Reported Latencies". btw, the only way to guarantee an accurate round-trip latency measurement is via a loopback test, where you send audio out from your computer, route it back through the interface and record it.
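If you want to try that loopback test outside the DAW, here's a rough Python sketch (assuming the sounddevice and numpy packages, the default audio device, and an output physically cabled back to an input; the click length is arbitrary):
```python
# Sketch: measure round-trip latency by playing a click out of the interface,
# recording it back in through a loopback cable, and finding the delay.
import numpy as np
import sounddevice as sd

fs = 48000
click = np.zeros(fs, dtype=np.float32)   # one second of silence...
click[0] = 1.0                           # ...with a single-sample impulse at the start

# playrec() plays and records simultaneously through the default device.
recorded = sd.playrec(click, samplerate=fs, channels=1)
sd.wait()

# The position of the loudest recorded sample is the round-trip delay.
delay_samples = int(np.argmax(np.abs(recorded[:, 0])))
print(f"round-trip latency: {delay_samples} samples "
      f"({1000 * delay_samples / fs:.2f} ms)")
```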
-
Becanful when buying the Sonah ... Crashes a lot!
bitflipper replied to Sheens's topic in The Coffee House
Sure. They just don't venture down into the basement often.
-
Now he may actually be a ghost from a wishing well.
-
Becanful when buying the Sonah ... Crashes a lot!
bitflipper replied to Sheens's topic in The Coffee House
Since we're strolling down memory lane, anybody remember Sickvision? At first I defended the fellow, assuming that he was not a native English speaker. I was wrong. He's from Boston. (btw, lest you think I made up that quote for the sake of humor, the original thread is here.)
-
Becanful when buying the Sonah ... Crashes a lot!
bitflipper replied to Sheens's topic in The Coffee House
You guys do realize that newer forumites will not get any of these references, right?
-
Good answer! The only place you'll ever see a square square wave is in the icon silkscreened onto your synthesizer next to the waveform selector.
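For anyone curious why, here's a little Python sketch (numpy only; the fundamental and length are arbitrary): build a "square" from its odd harmonics and the corners still overshoot and ring, no matter how many harmonics you stack below Nyquist.
```python
# Sketch: a band-limited "square" wave is a sum of odd harmonics, and it never
# reaches the perfect corners printed next to the waveform selector.
import numpy as np

fs = 48000
f0 = 100.0
t = np.arange(fs // 10) / fs   # 100 ms of signal

square = np.zeros_like(t)
n = 1
while n * f0 < fs / 2:                       # keep every odd harmonic below Nyquist
    square += np.sin(2 * np.pi * n * f0 * t) / n
    n += 2
square *= 4 / np.pi                          # Fourier series scaling

# The overshoot at the edges (Gibbs ringing) never goes away, no matter how
# many harmonics fit under Nyquist.
print("peak value:", square.max())           # ~1.09 rather than exactly 1.0
```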
-
There's always a mystery to tug at your brain, which is one of the reasons ours is the best hobby. I commend you for answering your own questions through experimentation. Most folks just ask a question on some forum, accept whatever explanation is offered and incorporate it as an eternal fact from then on. As for what the volume slider does, it sets the starting value for CC7 for the track. It does not have anything to do with velocities. You can observe the action of the slider by setting it to something other than "(101)". If your soft synth respects CC7, you can watch its volume control move as you move the volume slider in the track header. You can also see that if the volume slider is first set to, say, 112, and you then add a volume automation envelope to the track, its initial value will also be 112. "(101)" isn't a real value. It just indicates that the DAW will not be forcing an initial value for CC7. Do all synths respect CC7 for volume? No. It's entirely up to the developer how or whether they want to implement any continuous controller. Do all synths adjust volume based on velocity? No. Again, it's at the developer's discretion. Because volume often does go up with velocity, I suspect that's why people might conflate volume and velocity.
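If you want to see the distinction in raw MIDI terms, here's a tiny Python sketch using mido (the port name is a placeholder): CC7 is a channel message the slider sends once, while velocity is baked into each individual note.
```python
# Sketch: CC7 (channel volume) and note velocity travel in different messages.
# A synth is free to respond to either, both, or neither.
import mido

out = mido.open_output('My Synth Port 1')        # placeholder port name

# Channel-wide volume: one CC7 message scales everything on the channel.
# This is what the track's volume slider sends as its initial value.
out.send(mido.Message('control_change', control=7, value=112))

# Per-note loudness: velocity rides inside each note_on and affects only
# that note (and only if the patch maps velocity to loudness at all).
out.send(mido.Message('note_on', note=60, velocity=64))
out.send(mido.Message('note_off', note=60, velocity=0))
```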
-
While floating-point approximations are a real thing, and might be an issue for astrophysicists, they do not affect audio quality.
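For anyone who wants rough numbers, a quick Python/numpy back-of-the-envelope comparison:
```python
# Sketch: compare float rounding error against what a 24-bit converter resolves.
import numpy as np

step_24bit = 1.0 / (2 ** 23)                  # smallest 24-bit step, full scale = 1.0
f32_err_full = np.finfo(np.float32).eps / 2   # worst-case float32 error near 1.0
f32_err_quiet = f32_err_full * 0.001          # float error shrinks with the signal level
f64_err_full = np.finfo(np.float64).eps / 2   # what a 64-bit mix engine works with

print(f"24-bit quantization step:    {step_24bit:.2e}")
print(f"float32 error at full scale: {f32_err_full:.2e}")
print(f"float32 error at -60 dBFS:   {f32_err_quiet:.2e}")
print(f"float64 error at full scale: {f64_err_full:.2e}")
# The converter's own quantization dwarfs anything the math contributes.
```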
-
I've never used that interface, but from what I've read it would seem that the UM2 has a built-in latency that you can't circumvent. Most audio interfaces support something called "zero-latency monitoring", aka "direct monitoring", where you're basically listening to the sound coming into the interface directly rather than running it through the computer first. The tracks you record should still be perfectly in time with previously recorded tracks, because the DAW knows what the actual latency is and automatically adjusts the start time of your recorded clip accordingly. Unfortunately, I've found no information to suggest that the UM2 has that feature. You might want to think about upgrading your interface. The UM2's bigger brother, the UMC202HD, has direct monitoring and is only $100. M-Audio has a similar product that supports direct monitoring and is even cheaper, at $70.
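For what it's worth, that automatic adjustment is just simple arithmetic. A rough Python sketch (the reported latency figure is made up):
```python
# Sketch: what "the DAW adjusts the start time" amounts to - slide the
# recorded clip earlier by the latency the driver reports.
import numpy as np

fs = 44100
reported_latency_ms = 9.5                        # illustrative value from the driver
offset = round(reported_latency_ms * fs / 1000)

recorded_clip = np.zeros(fs, dtype=np.float32)   # stand-in for a freshly recorded take

# Drop the first `offset` samples so the take lines up with the tracks
# that were playing back while you recorded it.
aligned_clip = recorded_clip[offset:]
print(f"compensated by {offset} samples ({reported_latency_ms} ms)")
```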
-
Therein lies the crux of your error - not that you didn't have the right idea but that it was derailed by one assumption: that the sound source was immutable. Sample library developers and soft synth programmers go to a lot of trouble to introduce unpredictability, in order to make the instrument sound more natural, e.g. round robins, randomized modulations and effects. It isn't easy, given that samples are by nature static recordings and software oscillators are algorithmic. Unfortunately, such unpredictability torpedoes attempts to make objective, repeatable measurements. That's why we use test signals such as sine waves for testing, despite their being far removed from anything musical. It's about consistency, removing variables that might cause unpredictable or unreproducible results. I'd like to hear about your observations using unprocessed audio files instead of samples. You could even export the same loops you used initially and bring them back in as audio for the tests. You might well observe that the same DAW yielded different results this time! As old Joe F himself might have said: Stay Curious! Or the French equivalent, anyway.
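If you do try the audio-file version, a null test is the quickest way to compare the renders. Here's a rough Python sketch (assuming the soundfile package; the file names are placeholders):
```python
# Sketch: a null test - export the same audio clip from two renders (or two
# DAWs), subtract them, and see whether anything is left.
import numpy as np
import soundfile as sf

a, fs_a = sf.read('render_daw_a.wav')    # placeholder file names
b, fs_b = sf.read('render_daw_b.wav')
assert fs_a == fs_b, "sample rates must match"

n = min(len(a), len(b))                  # guard against slightly different lengths
residual = a[:n] - b[:n]

peak = np.max(np.abs(residual))
print("peak residual:", peak)
if peak > 0:
    print(f"that's {20 * np.log10(peak):.1f} dBFS")
else:
    print("perfect null - the renders are identical")
```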
-
Holy crap, Rain, this is getting scary. That black van is now parked in my frickin' garage! They now have access to not only my frozen zombie apocalypse stores but also to many potentially lethal gardening tools. Oh hell, now there's a light blinking on the router...I'm stubbing out this joint right now, turning out the lights and locking the doors.
-
I'm still on version 7, so mine is named "iZotope Ozone 7 Imager.dll". Starting with version 8, they stopped putting the version number in the file names, so I'd guess you're looking for the same name but without the "7". The vst3 version will be in \program files\common files\iZotope. The vst2 versions can be anywhere. Mine are under program files\cakewalk\vstplugins. Note that there are two similarly-named files for each effect in Ozone Advanced. Curiously, it's the larger of the two that is used by the main Ozone container, with much smaller files for the standalone plugins (my Imager dlls are 25MB and 3MB, respectively). The larger file (iZOzone7Imager.dll on my system) is not a VST. It gets picked up by the scanner and is included in the plugin inventory in the registry, but does not show up in the Apply Audio FX context menu because the scanner recognizes that it's not a VST. Anyway, that's about all I know concerning Ozone file names and locations. Maybe it will help turn on the proverbial light bulb.
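If it helps, here's a quick Python sketch that lists what's sitting in those two folders (the paths are just the defaults mentioned above; point the second one at wherever your VST2s actually live):
```python
# Sketch: list iZotope plugin binaries in the usual spots so you can see
# exactly which dll/vst3 names the scanner is picking up, and their sizes.
from pathlib import Path

search_dirs = [
    Path(r"C:\Program Files\Common Files\iZotope"),   # where my vst3 versions live
    Path(r"C:\Program Files\Cakewalk\VstPlugins"),    # one possible vst2 folder
]

for folder in search_dirs:
    if not folder.exists():
        continue
    for f in sorted(folder.rglob("*")):
        if f.suffix.lower() in (".dll", ".vst3") and "ozone" in f.name.lower():
            size_mb = f.stat().st_size / 1_048_576
            print(f"{f}  ({size_mb:.1f} MB)")
```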
-
Is this what the error message said? Unrecognized format? Were you using the standalone imager, or a full instance of Ozone? If you've updated Ozone since X3 days, you might just need to rescan plugins in X3. I don't remember if old Sonar had the Reset option for the scanner, but if it does I'd start with that.