Starship Krupa
Members · 7,486 posts · 22 days won

Everything posted by Starship Krupa

  1. @LittleStudios, the odd thing about it, which @Teegarden points out, is that the Cakewalk documentation specifically says that plug-in oversampling on playback is supposed to be less of a processor hit than running the project at 88.2 or 96. The results of my testing (albeit on older systems) indicate otherwise.

     I'm not so concerned about the size of the audio files; 500 GB SSDs go for about $50, and I have a 3 TB backup drive. Of course, reading them is potentially another performance hit, but I have an SSD, so I'll just have to see.

     My current strategy is to enable 2X on most of my plug-ins (except the Meldaproduction ones and others that have internal oversampling) during render only. At this point, with the trailing-edge hardware I have, it's more important to me to have stutter-free playback while mixing than it is to have the quality bump. At render time, it doesn't matter; I can crank my buffers up to 200 and let it churn. Then I can be pleasantly surprised after rendering at hearing more detail in the finished mix.
  2. @Teegarden, you were curious as to whether the performance hit results I saw were specific to my vintage laptop. I repeated my playback experiments comparing running the project at 88.2 with upsampling all the plug-ins, and got the same thing: switching the rate to 88.2 incurred less of a performance hit than 2X'ing all the plug-ins. And when I played the project at 88.2 with all the plug-ins 2X'd, I got the same "bum note" in my bass arp track as I did when rendering that way.

     Did you read my post where I tried various forms of higher sample rates (plug-in and project)? I must be in the top 0.1%, because I can hear a difference, and to me it's not even that subtle. Yes, I agree that after going through sub-256 kbps lossy conversion, the differences probably wouldn't be audible, but I buy FLACs on Bandcamp and purchase uncompressed or FLAC albums and songs from sites like HDTracks (OMG, Radiohead's A Moon Shaped Pool at 48/24). This is becoming more of a trend. If you look at the Cakewalk documentation, one of the benefits of higher sampling rates is said to be "Phase shift is drastically reduced." p. 976 of the Ref Guide.

     Now, I've speculated about how it can be that two power amps can sound different, how it can be that there's so much of a difference in detail and image. In my studio, I have two power amps for powering passive monitors. One is a Crown D60 from the '70s, designed for radio station studio monitoring and other pro audio uses. It's "only" 30 watts per channel. The other is an Alesis RA-100, originally designed and sold as a mate to the Alesis Monitor Ones that make up one of my monitoring systems. It's rated at 70 watts per channel. I tried an experiment where I switched my Monitor Ones and my Boston A70's back and forth between the two, and the difference was stunning. I had another musician friend in the studio at the time and he was also blown away by it. He's by no means any kind of audio freak, but he described the speakers being driven by the RA-100 as "squeezed," "choked," or just "smaller." Even in mono, the Crown had a more vivid image. I suspect that the Crown was designed with a lot of attention to phase distortion and maybe group delay.

     I think something similar goes on with lossy compression and with plug-ins that may sound optimal at higher rates. So why would lower-rate lossy compression sound "2D" rather than "3D"? Theory says that since humans can't hear anything above the audible frequency range, it's not possible to hear a difference. My (admittedly self-educated) theory is that there are other factors, like phase and group delay, that get messed up. It's not just about frequency response. @bitflipper suggested that I read Fletcher and Munson to get a better understanding of it, but I haven't yet.

     Another factor is what genre we're talking about. I'm in awe of artists like Tipper and Telefon Tel Aviv, who create detailed, immersive sonic spaces. A higher percentage of their fans know that their stuff will sound better in lossless and with bit-perfect playback, on a system with less phase distortion, etc. I'd like to impress those people too. Even for people who don't listen "critically," I think the extra detail can come through subliminally.
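     To make that Ref Guide claim about phase shift a little more concrete, here's a rough sketch of the kind of thing I mean. It just designs the same sort of low-pass that has to sit below Nyquist at each rate (the 4th-order Butterworth at 0.45 x the sample rate is my own stand-in, not anything Cakewalk or any converter actually uses) and prints the phase it imposes on frequencies we can hear:

        # Rough sketch: compare the phase a band-limiting low-pass imposes on
        # audible frequencies at 44.1 kHz vs 88.2 kHz. The 4th-order Butterworth
        # at 0.45 * fs is an assumption standing in for whatever filter has to
        # sit just below Nyquist at each rate.
        import numpy as np
        from scipy import signal

        for fs in (44100, 88200):
            b, a = signal.butter(4, 0.45 * fs, btype="low", fs=fs)
            test_freqs = np.array([5000.0, 10000.0, 15000.0])   # Hz, all audible
            _, h = signal.freqz(b, a, worN=test_freqs, fs=fs)
            phase_deg = np.degrees(np.unwrap(np.angle(h)))
            print(f"fs = {fs} Hz")
            for f, p in zip(test_freqs, phase_deg):
                print(f"  phase shift at {f:7.0f} Hz: {p:8.1f} degrees")

     At 44.1 the filter corner sits right on top of the audible band, so 10-15 kHz picks up a lot more phase shift than it does at 88.2, where the corner has moved up and out of the way.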
  3. @LittleStudios, can you try your experiment, but turn off upsampling at render and instead set your export rate to 88.2 or 96? I tried my experiment with playback in my MIDI-only project, and exporting at 88.2 gave pretty much the same psychoacoustic results as upsampling all the plug-ins at 2X. I know that you're trying to avoid the larger rendered file sizes, so just as an experiment. A possible benefit of rendering at a higher sample rate is that any ProChannel plug-ins will benefit too, and those include an 1176 clone, a console emulator, and a saturation effect, all possibly able to generate frequencies beyond the Nyquist frequency. And Chris, thanks again for opening this topic up. As you know, I was skeptical, but I can hear a difference.
  4. Neither Silverlight nor "sliverlight" (sic) has anything to do with SONAR or Cakewalk and its user interface. Silverlight was an audio/video streaming technology developed by Microsoft; Netflix and Amazon Prime both used it. Microsoft announced 8 years ago that they were going to end-of-life it in favor of HTML5.

     Skylight, on the other hand, is the name that Cakewalk, Inc. gave to the concept of the dockable, rearrangeable views: the main one being the Track view, plus the others, which may be opened, closed, or floated as needs and screen real estate dictate, with the Multidock serving as the usual home for the larger views like the PRV, Staff view, and Console. The overall concept is called the Skylight Interface, and the name has nothing to do with 3rd-party technology.

     It's one of Cakewalk's strong suits, IMO. Once I got good at it, I could really fly through the various views as I needed to focus on them, much more efficiently than with something like Ableton Live, which tries to have everything up on the screen at once. I guess the idea there is to have as much as possible instantly available (not surprising given that program's origins). The reality is that there are people like me still using a 14" screen notebook, and a DAW user is usually going to be focused on one task or another and doesn't need to see the Piano Roll and the Console at the same time.

     I have Vegas Movie Studio open on my other monitor while it takes over an hour to render out a 4-minute video. I don't mind the layout, although it's kind of homely, but it forces me to keep open views and panels that I don't care about, and none of them can be floated outside the main window. It's poop for dual-monitor work. Its big brother, Vegas Pro, is a little more dual-monitor friendly because I can float my preview window over to another monitor.
  5. You can already drag and drop from the Browser directly to Sitala. You can also drag a clip from an audio track to Sitala. The latter is nice if I decide to switch over to triggering Sitala for one or all of my samples instead of having them in audio tracks. This also works with Speedrum Lite.
  6. If I have a plug-in set to upsample on render, does turning the 2X button off affect rendering? Or does it only affect things when you have playback upsampling enabled?
  7. When I moved the directory, I didn't know ahead of time that Cakewalk lacks front-facing configuration for some of the ancillary file locations, and that it wouldn't respond to my just moving everything back. It turns out that if it doesn't find the files where it thinks they are, it just goes ahead and resets things to Cakewalk Core in your AppData folder. The last time I tried messing with mklink, I had a hard time getting my head around the syntax, but I'll give it another shot.
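     For my own reference when I give mklink another shot: the cmd built-in syntax is "mklink /J <link> <target>", which makes a directory junction so the moved folder still answers at its old path. Here's a minimal sketch wrapped in Python; the drive letters and paths are just my assumption about where things end up, and the old C:\ path must not already exist when you run it:

        # Minimal sketch: create a directory junction so the moved Cakewalk Content
        # folder still answers at its old C:\ path. mklink is a cmd.exe built-in,
        # so it has to go through "cmd /c". Paths are assumptions for illustration.
        import subprocess

        old_path = r"C:\Cakewalk Content"   # where Cakewalk expects the files
        new_path = r"D:\Cakewalk Content"   # where the files actually live now

        subprocess.run(["cmd", "/c", "mklink", "/J", old_path, new_path], check=True)

     Unlike a directory symbolic link (/D), a junction (/J) doesn't need administrator rights, which is why I'd try that form first.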
  8. A classic zebra in the annals of punkadelia.
  9. Webster's says "marked by the ability or power to create : given to creating." In my older age, I have a tighter definition than I used to: in order to be "creative," a person must actually create things. Otherwise, all we're doing is creating thoughts inside our heads, and anyone can do that. The fact that my thoughts are about music or visual art instead of, say, car stereos, doesn't make me creative any more than thinking about boats would make me a sailor.

     Part of this attitude comes from dating "art girls" who came equipped with portfolios of projects they did in art school, and who looked and acted the part, but after I got to know them, I noticed that they never actually created anything. Well, except for confusion and bitterness in the minds of the people they were involved with.

     It's just a label, anyway. Or a conceit. Bottom line: create something. The process of actually doing so tends to stimulate more ideas and more creations. Even if they're for nobody other than me, I'm fine with that. I would ideally like to share my creations with others, as I have enjoyed the creations of others, but that's not mandatory. Making music makes me happy in and of itself, and that's enough. Anything else is gravy. YMM certainly V!
  10. I just added a second SSD to my laptop and moved C:\Cakewalk Content to it. I also changed the location of my Documents folder to the new drive. I lost most of my folder locations once I did this, but I sorted it out in Preferences. The only thing that's still missing, even after I copied Cakewalk Content back to C:\, is the stock arpeggiator patterns. How can I fix this? I looked in the registry and in the Cakewalk INI files, and could find nowhere that the location is specified.
  11. Oh dear, I was afraid of this. Curses on you @LittleStudios! I'll write more later, but preliminary results indicated an audible benefit to either enabling 2X oversampling of the plug-ins or rendering the project at 88.2. It's similar to the difference I can perceive between sub-256 kbps MP3s and FLACs: individual elements sound more discrete and there's more apparent depth. Switching the 64-bit engine on while rendering made no change that I could notice.

     I did renders of the same piece with and without upsampling at 44.1, then with and without the 64-bit engine. All renders were to 24-bit WAVs. I repeated the renders again at 88.2. So the potential for highest quality would have been 88.2 with 2X upsampling using the 64-bit engine. More about how that turned out later.* I enabled upsampling on both the synths and the fx; I imagine that I could narrow down which plug-ins make the biggest difference via trial and error, and I would start with A|A|S Player and Phoenix Stereo Reverb. The audible difference seems to be the same for both states: the version with the 2X upsampled plugs sounds pretty much the same as the one rendered at 88.2, at least in the wee hours of the morning through my speakers. I'll do more listening tests tomorrow.

     *Weirdly, I did have one synth plug-in severely and repeatably misbehave when I both enabled 2X upsampling and rendered at 88.2. The bass arp track, which is A|A|S Player, starts emitting bad notes. The notes still sound, but one of them is off key when it cycles through the arp. Another synth also possibly sounded off rhythm with those settings, but it might have been that the bass arp was the culprit and was throwing me off. They sounded fine with all the other permutations.

     Conclusions so far: try rendering the same project at 88.2 or 96 and compare it to itself rendered at 44.1 or 48, because I hear a difference, and I was carrying a healthy skepticism going in. I'm actually biased against hearing a difference, because I would rather not have to concern myself with more settings. I have the suspicion that the improvement comes at a certain threshold and that 4X upsampling or rendering at 192 wouldn't make a further difference. It might, though; I haven't tried it yet. I can't draw any conclusions from the 2X/88.2 render because it's out of tune, except that it does suggest that my concerns about possible negative effects were well-founded. Now I'm really interested to know what @Noel Borthwick makes of my impressions.
  12. Sure, it's possible. It's a decade-old hand-me-down Dell Latitude E6410. About a year ago, I upgraded it by replacing its original i5 with an i7. I'm using its onboard (not Realtek) audio with WASAPI. I figured that even if the clock speed were slower, twice the cores and more cache would help with DAW and NLE use.

     The A|A|S Player engine sounds amazing, probably because its algorithms are constantly crunching a ton of analog modeling. Likely similar with Phoenix Stereo Reverb. This makes sense: the more modeling of "real" objects and spaces, the greater the load. I usually kick resource-hungry plug-ins to the curb, but both of these sound better to me than anything else of their kind.

     I'll repeat the test on my main system and report back. I sort of recall that one of Noel's systems is (or was) an i7 3770 like mine, so it may yield results closer to the documentation. For me, this stuff is interesting to mess with, but I think the true test is to throw on my best set of cans, upsample every plug-in for playback, and A-B test playback with the 2X button. Or upsample them all for rendering and make two exports, one with and one without. If they sound different, even if it's a placebo effect, then the can of worms is open.
  13. I just tried all of those operations and they worked. Given the oddities, are you sure the clips you were testing with were linked? It was hard for me to know until I figured out the options, as shown above. The documentation is misleading. The dotted outlines are hard for me to see. To get what we normally think of as a string of linked clips that link to each other and the original, you have to tick both boxes.
  14. Indeed, for which I thank you.
  15. Maybe my posts were too verbose, but if you scroll up a couple, I quoted a couple of companies' takes on which fx may benefit. It seems that the more a plug-in might generate harmonics above 22 kHz, the greater the chance that this information, while inaudible to humans, will get aliased back down into the audible range. If Omni Channel generates saturation or other distortion, then it might potentially benefit. Vojtech Meluzin (aka Meldaproduction) states the opinion that running the whole engine at 88.2 or higher is the best practice, but he also says that listening to the results is the way to know. @Noel Borthwick mentions that oversampling (and presumably also running at a higher rate) may help with phase shift, which very much interests me. He also mentions reverbs as a type that may benefit. (There's a quick worked example of the fold-back arithmetic at the end of this post.)

     If you take a look at the results of the tests I ran, I came to the conclusion that if you want to experiment with the possible benefits of oversampling, running at 88.2 incurred a significantly lower performance hit than enabling 2X on all the plug-ins. But the plug-in that caused the biggest hit (no pun intended) was a synth, not a processor. My Plugin Alliance elysia mpressor, alpha compressor, and Millennia NSEQ caused much smaller performance hits, and AIR Hybrid much less of a hit than A|A|S Player.

     A thing to remember is that all of this is only important during mixing and rendering, so if you crank up the latency in your driver settings, any performance hit will be less of an issue. Latency is usually only an issue during overdubbing and recording soft synths, so during that process you can toggle the 2X button off or just bypass your performance-hungry FX. I deliberately dropped my usual mixing latency on the laptop to make the results more obvious.

     One of the things that led to confusion when I first started using Cakewalk was its use of the term "global." I've always understood that to mean "in the entire program," but that's often not what it means in Cakewalk. If you turn your ProChannel off on one track, the button to do that is labeled "Global," but it only means that all ProChannel modules on that track will be bypassed, not in the entire project. With plug-in oversampling, "global" means that if you enable it in one specific plug-in, let's say elysia mpressor, all instances of elysia mpressor will be oversampled. The "2X" button better fits my usual idea of "global."
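     Here's the fold-back arithmetic in a nutshell, as a sketch (the 5 kHz note and its 7th harmonic are just illustration values I picked): anything a plug-in generates between half the sample rate and the sample rate gets mirrored down to the sample rate minus that frequency.

        # Worked numbers for the mirroring described above. A component at f,
        # with fs/2 < f < fs, aliases to fs - f. The 5 kHz fundamental and its
        # 7th harmonic are made-up illustration values.
        def alias_of(f_hz, fs_hz):
            """Where a tone at f_hz lands after sampling at fs_hz (first image only)."""
            nyquist = fs_hz / 2
            return f_hz if f_hz <= nyquist else fs_hz - f_hz

        harmonic = 7 * 5000                # 7th harmonic of a 5 kHz note = 35 kHz
        print(alias_of(harmonic, 44100))   # 9100  -> lands in the audible range
        print(alias_of(harmonic, 88200))   # 35000 -> stays put, above audibility

     9100 Hz isn't harmonically related to the 5 kHz note, which is part of why aliasing tends to read as harshness rather than as extra "harmonics."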
  16. I'm working on my 10-year-old laptop and, inspired by this thread, decided to use it to see what performance hit I might see from enabling upsampling for the plug-ins in this project. It uses 4 synths and 6 FX, and the laptop can play it back fine with no upsampling engaged. Onboard audio CODEC, 736-sample buffer, 8 GB RAM, i7 760 processor. Kinda old by current standards, but I keep it optimized, and so far it has yet to fail at whatever tasks I've thrown at it, including DAW and video NLE work. It has a discrete nVidia GPU.

     Results: with no upsampling, engine load hovers around 45%, with a spike up to 62%. With all 10 plug-ins set to upsample 2X, it hovers around 80%, with multiple late buffers, stutters, and usage spikes up to 146%. Clearly, there is some expense involved in upsampling. One of the synths, however, is A|A|S Player running a String Studio patch that arpeggiates. A|A|S's engines, while they sound amazing, are also the most demanding I run on my system; I limit them to 8 simultaneous voices so that they won't bring the show down. Turning upsampling off for just that synth restored gapless playback with spikes up to 82%. Also turning it off for iZotope Exponential Phoenix Stereo Reverb brought the spike down to 72%. I was just guessing at which would be the most expensive plug-ins.

     Interestingly, switching the sampling rate in Preferences to 88.2 kHz resulted in smooth playback, with usage spiking up to the high 90's. Barely viable, but less expensive than per-plug-in upsampling. This suggests that the tradeoff will be as Chris says: the amount of room that recorded and rendered audio will take up on the hard drive. If space on your SSD is dear, but you have a faster, modern processor with many cores, per-plug-in upsampling may be the answer. Since for me it's the other way around (I just swapped my DVD+R drive for a second SSD in the laptop, and I don't need to archive projects on it anyway), if I wanted the potential benefits, I'd run at 88.2. Rendering at that rate would mean an extra sample rate conversion before distribution, though. (Some back-of-the-envelope disk-space numbers are at the end of this post.)

     I'll try the same project on the main DAW and see what it says. It may just be that the upsample button on the laptop will have to stay off; I always do final renders on the main system anyway. The jury's still out on whether I hear any differences, but I will do further tests, including with headphones. Thank you, @LittleStudios, for bringing this up. I've long wondered what the performance hit might be.
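     For anyone weighing the disk-space side of the tradeoff, here's a quick back-of-the-envelope sketch. The 24-track, 5-minute project is a made-up example, and it ignores WAV header overhead:

        # Uncompressed PCM disk usage at 44.1 kHz vs 88.2 kHz. The track count
        # and song length below are illustration values, not a real project.
        def wav_megabytes(minutes, sample_rate, bit_depth=24, channels=1):
            """Size of uncompressed PCM in MB (1 MB = 1e6 bytes), header ignored."""
            bytes_per_second = sample_rate * (bit_depth // 8) * channels
            return bytes_per_second * minutes * 60 / 1e6

        tracks, minutes = 24, 5            # hypothetical: 24 mono tracks, 5 minutes
        for fs in (44100, 88200):
            per_track = wav_megabytes(minutes, fs)
            print(f"{fs} Hz: {per_track:5.1f} MB per track, "
                  f"{per_track * tracks / 1000:5.2f} GB for {tracks} tracks")

     So doubling the rate doubles the audio footprint: roughly a gigabyte or two per project at these sizes, which is why the "SSD space vs. CPU headroom" framing makes sense to me.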
  17. No, not at all, Chris. And I certainly apologize if I gave the impression that I thought you were "wrong" about anything. You've definitely experimented more with plug-in oversampling than I have. I come here for discussions, and I usually assume that asking "why are you seeking that feature?" when I don't understand something is fair game for discussion. I've learned some things from people that way. I understand that it can come off as "why the heck would anyone want to do that??"

     You said in your first post that "Upsampling plugins instead of a high sample rate on the project keeps audio file sizes wayyyyyyy down, keep the audio quality as good as digital can get it," which sounded simplistic to me. There are many things that influence keeping "audio quality as good as digital can get it," and from what I've read elsewhere, plug-in oversampling is a pretty minor one. YMMV. I don't really know much about this topic except what I've read in articles, all of which take pains to warn against seeing plug-in oversampling as a guaranteed positive thing, and what I've heard switching it off and on myself. Which was nothing, but then I don't use a lot of guitar amp sims. Supposedly one of the best tests of whether a distortion plug-in is aliasing is to play a double stop and bend one of the notes. Never tried it. That's why I quoted articles and asked questions. I wasn't coming from a position of knowing it all, rather the opposite. Also, my ears turned 60 this year, and I played in rock bands for about 1/3 of that time, so if the aliasing is up above 15 kHz, I may be missing it entirely.

     However, I'm one of those people who must set his computer's music playback up to have as close to bit-perfect reproduction as possible. I can hear the difference between CD audio played back via ASIO or WASAPI and via DirectSound, and it's a big one. The former sounds "3-D" and has "depth" to me, while the latter sounds "blurred" and "flat" by comparison. The night that I first set my system up that way, I stayed up until dawn listening to my favorite albums because I was hearing so much stuff in them I never had before. This effect is as apparent to me on the onboard sound in my laptop as it is on the interfaces in my studio, so I'm always watching out for this "blurring" being introduced by software I use and want to know about any tips for preventing it. I only wish recording and mixing at 88 or 96 made a difference I could hear; I'd do it every time. As it is, CD quality is plenty as long as it's reproduced correctly, without further conversion.

     I have witnessed the effects of aliasing via a spectrum analyzer while QA testing DAW software. I ran some tests of sine sweeps through a DAW's (not Cakewalk) sample rate conversions and found that a couple of the permutations resulted in visible artifacts (in the audible range, in the 10K area). I was inspired to do so by this page: https://src.infinitewave.ca/ If you're inclined, you may find it as fascinating as I did. And notice that SONAR, as it was when they tested it, gave some of the most crystal clear results. (A rough sketch of that kind of sweep test is at the end of this post.)

     I have had one effect, ADHD Leveling Tool, a very nice LA-3A-ish compressor, mess up badly when Cakewalk's 2X and 64-bit engine were both engaged: its output level dropped way down. This suggests to me that there may be other, possibly negative, side effects to oversampling certain plug-ins, and I don't know which plug-ins or side effects they would be, so I stay cautious.
I turn on the plug-ins' internal 2X oversampling for my Meldaproduction fx during rendering only, because what the heck, it's free, and I trust Vojtech to code the function correctly. I listen for unwanted effects and don't hear any, so it's all good. I have to confess that I don't hear any positive effect either, though. If anyone else wants to let it rip with 16X oversampling all their plug-ins, it ain't up to me to tell them not to, but I myself approach it with some caution and wanted to say so. The possibility exists, at least in my mind, that feeding audio into these things at the rate they're expecting might be best practice, at least for some of them. I'm not sure though. I doubt the Cakewalk engineers would create such a feature and make it that easily accessible if there were too many possible drawbacks. By all means, carry on, I'm glad that you had the experience, as I have many times, of finding out that the feature you were looking for already exists in Cakewalk. And if you can share more specific experiences with plug-in oversampling, I'd love to hear about them.
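     Here's roughly what I mean by a sweep test, as a sketch rather than anything rigorous: sweep a sine up past the 44.1 kHz Nyquist at 96 kHz, convert down two ways, and see whether the part of the sweep that should simply disappear comes back as in-band junk. The "naive" converter here (plain linear interpolation with no filtering) is a deliberate straw man of my own, not any particular DAW's resampler.

        # Sweep-test sketch: a 20 Hz - 40 kHz sweep at 96 kHz, converted to
        # 44.1 kHz with a proper polyphase resampler vs. unfiltered linear
        # interpolation. Energy showing up below 20 kHz while the sweep is
        # above the new Nyquist is aliasing.
        import numpy as np
        from scipy import signal

        fs_src, fs_dst = 96000, 44100
        t = np.arange(0, 4.0, 1 / fs_src)
        sweep = signal.chirp(t, f0=20, t1=4.0, f1=40000, method="linear")

        # Proper conversion: polyphase resampling with built-in anti-alias filter.
        good = signal.resample_poly(sweep, 147, 320)     # 96000 * 147/320 = 44100

        # Naive conversion: resample by linear interpolation, no filtering at all.
        t_dst = np.arange(0, 4.0, 1 / fs_dst)
        naive = np.interp(t_dst, t, sweep)

        def inband_peak_db(x, fs, t_start, t_stop):
            """Loudest spectral component below 20 kHz in the given time window."""
            seg = x[int(t_start * fs):int(t_stop * fs)]
            seg = seg * np.hanning(len(seg))
            spec = np.abs(np.fft.rfft(seg)) / (len(seg) / 4)
            freqs = np.fft.rfftfreq(len(seg), 1 / fs)
            return 20 * np.log10(spec[freqs < 20000].max() + 1e-12)

        # From ~2.5 s on, the sweep sits above 22.05 kHz, i.e. above the 44.1 kHz
        # Nyquist, so a clean converter should leave almost nothing audible there.
        print("resample_poly :", round(inband_peak_db(good, fs_dst, 2.5, 4.0), 1), "dB")
        print("linear interp :", round(inband_peak_db(naive, fs_dst, 2.5, 4.0), 1), "dB")

     On the infinitewave plots, that fold-back shows up as the mirror-image "ghost" sweep crossing back down through the audible band.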
  18. Why would you enable oversampling for a gate? A gate is a utility that is either passing audio or not, and otherwise operates in the most linear way possible. Which is one of the reasons oversampling is optional: it sometimes makes plug-ins do weird things, and that can even include weird things that you can hear and see on an analyzer. You seem to be quite concerned with this; how much have you read up on it (and I don't mean amateur opinions on web forums)?

     From Meldaproduction's documentation: "Oversampling can potentially improve sound quality by processing at a higher sample rate. Processors such as compressors, saturators, distortions etc., which employ nonlinear processing generate higher harmonics of the existing frequencies. If these frequencies exceed the Nyquist rate, which equals half of the sampling rate, they get mirrored back under the Nyquist rate." Got that? If you're using a processor that happens to generate frequencies above 22 kHz (assuming you're recording and mixing at 44.1K), then those harmonics can potentially get mirrored back down below 22 kHz.

     Also: "Finally, and most importantly, oversampling creates some artifacts of its own and for some algorithms processing at higher sampling rates can actually lower the audio quality, or at least change the sound character. Your ears should always be the final judge. As always, use this feature ONLY if you can actually hear the difference. It is a common misconception that oversampling is a miraculous cure all that makes your audio sound better. That is absolutely not the case. Ideally, you should work in a higher sampling rate (96kHz is almost always enough), while limiting the use of oversampling to some heavily distorting processors." Got that? Mr. Meldaproduction says that the ideal practice, if you really want to eliminate aliasing, is to record and mix your whole project at a higher rate. Most importantly, he suggests only using oversampling if you can hear the difference and prefer the sound with the oversampling.

     From Sonarworks' site: "Oversampling benefits the kinds of plugins that change the shape of the original waveform or create new frequency content....Plugins that benefit from oversampling include compressors, limiter, clippers, amp simulators, saturators, and exciters, but not usually equalizers or time-based processors, unless they also provide some kind of saturation."

     Moreover, the people who code plug-ins tend to know their stuff, and they're not helpless to prevent aliasing in the plug-ins' internal algorithms. They can put in filters that keep the overtones within a safe range. When they are coding their processors, they expect that they're going to be used normally, which means at the same sampling rate as the rest of the project. If we use them outside their design parameters, the results are not 100% predictable. Ideally, external oversampling makes it easier to stay away from the Nyquist frequency and therefore avoid the possibility of aliasing artifacts, but that's not guaranteed. There could be some filter or other process in the code that behaves differently, even in an undesirable way, when the plug-in is presented with oversampling. A frequency response curve might change; level might change (I had this happen with a compressor plug-in). Speaking for myself, I think I'd do better to put the effort into working on my mic placement than trying to figure out oversampling my plug-ins.
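     That said, here's a toy demo of the fold-back the Melda quote is describing, as a sketch: drive a 5 kHz sine into a cheap stand-in "saturator" (a tanh waveshaper I made up, not any real plug-in) straight at 44.1 kHz, then again with 4x oversampling wrapped around the same nonlinearity, and compare how much energy lands at frequencies that aren't harmonics of the note.

        # Toy aliasing demo: a nonlinearity at 44.1 kHz vs. the same nonlinearity
        # run 4x oversampled (upsample -> saturate -> filter -> decimate). The
        # tanh "saturator" and the 5 kHz test tone are illustration choices.
        import numpy as np
        from scipy import signal

        fs, f0, n = 44100, 5000, 1 << 16
        t = np.arange(n) / fs
        x = np.sin(2 * np.pi * f0 * t)

        def saturate(s):
            return np.tanh(4.0 * s)        # generates odd harmonics: 15, 25, 35 kHz...

        y_naive = saturate(x)                                   # straight at 44.1 kHz
        up = signal.resample_poly(x, 4, 1)                      # 4x oversample
        y_os = signal.resample_poly(saturate(up), 1, 4)         # filter + decimate back

        def loudest_junk_db(y):
            """Loudest component below 20 kHz that is NOT near a harmonic of f0."""
            w = np.hanning(len(y))
            spec = 20 * np.log10(np.abs(np.fft.rfft(y * w)) / (len(y) / 4) + 1e-12)
            freqs = np.fft.rfftfreq(len(y), 1 / fs)
            near_harmonic = np.any([np.abs(freqs - k * f0) < 200 for k in range(1, 5)], axis=0)
            return spec[(freqs < 20000) & ~near_harmonic].max()

        # Without oversampling, the 25/35 kHz harmonics fold to 19.1/9.1 kHz; with
        # 4x oversampling they get filtered out before the decimation back to 44.1.
        print("no oversampling:", round(loudest_junk_db(y_naive), 1), "dB")
        print("4x oversampling:", round(loudest_junk_db(y_os), 1), "dB")

     If you run something like this, the "no oversampling" number should come out far louder, and none of that junk is harmonically related to the note, which is the "nasty" part people complain about. Whether it matters through a real mix at real levels is exactly the "use your ears" question Vojtech raises.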
  19. Huh. I thought the referral discount was only good for your first purchase. I had no idea you could bank it for later use.
  20. It did get airplay, just not on the radio station I was listening to at the time. MTV was still years away. The single came out in 1979, the subsequent album in 1980. The whims of program directors. It was what used to be called an "album rock" station, so they may have passed on a single-only release and then gone with the follow-up single when the album shipped. By the late '70s, I had stopped listening to top 40 radio, something that has persisted. The downside is that I've missed out on some good things that the radio stations I was listening to deemed too "mainstream."
  21. All too true. I'm usually really good with search terms in, say, Google, but in this forum, I have a hard time finding the information I'm looking for. It doesn't seem to obey "exact phrase" quotes; rather, it searches on the individual words. What I do wish people would do is scroll through a few pages of topics before posting their question(s). Again, not something that comes naturally to someone not familiar with the ways of forums. As for newbie shoving and shoving the newbie shovers, in the words of Rodney King, "Can we all get along?" Everyone was a newbie at some point, and everyone gets grumpy sometimes. There are ways to express this stuff with a positive spin. I hope my "starter kit" helps other people in the future.
  22. I decided to just go ahead and do it the hard way: try all 3 permutations and see what I got. Here's what happened.

     First, I made 3 repetitions and chose only "Link To Original Clip(s)." This resulted in 4 clips (the original plus the 3 repeated ones); I'll call them 1-4. Clips 1 & 2 had the dotted line around the outside to indicate they are part of a linked group, and only clips 1 & 2 (the original and the first copy) showed any of the linked behavior I expect. If I moved notes around in 1, they moved in 2, and vice versa. Clips 3 & 4 showed no linked behavior; each of them seems to be independent of the others. If I moved notes in clip 4, that move didn't propagate to any other clip. Also, when I selected clip 3, no other clip got selected; if I moved it to another lane, no other clip moved; same with muting and unmuting. That's consistent with the definition I understand of "linked" vs. "grouped." From this, it looks like selecting this option gets you one clip that is linked with the original, and the rest are not.

     Second, I tried the same thing, but with only "Linked Repetitions" selected. Clips 2-4 appeared with dotted outlines. This time, clips 2-4 behaved as if they were linked: event changes in any of them were reflected in the other two, while moves and mutes/unmutes had no effect on fellow linked clips. Clip 1 was not affected.

     Third, I tried checking both boxes. Again, the result was a row of 4 clips with dotted outlines. This time, however, all event changes in the clips propagated to the other clips in the group. Change the pitch or length of a note in clip 1 and it changes in 4, and vice versa. Still no propagation of moves or mute state.

     Conclusions: although the use case seems thin, "Link To Original Clip(s)" by itself gets you only the original and the first copy linked; the rest are not linked, to the original or to each other. "Linked Repetitions" is, intuitively, what you want if you want only the copies to be linked with each other; the original clip will not be part of the linked group. Selecting both gets you what I initially wanted, which is that all copies are linked to the original and to each other. In none of the cases were mute and unmute linked, nor were moves or slip edits, so maybe you were mixing up the two terms? The behavior you described is the difference between Grouped clips and Linked clips. There seems to be only one kind of Linked clip; the only difference among these options is which clips end up being linked when you create them. I guess I'll submit my findings to the writer of manuals so that he can use them as he sees fit, if he thinks, as I do, that the documentation on this is vague.
  23. Cakewalk's handling of MIDI port changes is way better than it used to be, at least. It used to be that if I unplugged my nanoKONTROL II, Cakewalk would automatically map the next MIDI port to control surface duty, whether it was already in use or connected to an actual control surface or not. Which in my case meant that the MIDI input in my main interface would stop paying attention to my keyboard. This caused me much wasted time and frustration trying to figure out why all of a sudden I couldn't get MIDI data into Cakewalk from the keyboard controller no matter what I tried. It was because Cakewalk had stopped listening for notes from it and started listening for control surface commands, which it was never going to get. Now it doesn't remap, and it at least tries to correctly map reconnected USB gear. Looks like it's not always getting it right.
  24. There seems to be a sudden rash of people clamoring for Cakewalk to help with some angst they are experiencing around the possibility that unless the DAW allows for upsampling plug-ins, their productions will be plagued with aliasing. As far as I know, I have never experienced "nasty sounds coming out of the speakers" due to a plug-in not being oversampled. Intersample clipping, yes, indeed, but not this aliasing they speak of. Wouldn't it result in certain plug-ins being less popular? Why wouldn't plug-in manufacturers build it into their own products?
  25. Oh, now you've gone and gotten me started. Here in the US the entire record, save "Video Killed the Radio Star," seems practically unknown. I think this is in contrast to how influential it was and is in Europe (without this record, Air's Moon Safari never arrives, nor does Daft Punk's Discovery; "Digital Love" is practically a direct tribute to "VKRS"). I got lucky: when it came out, apparently one of the programmers or DJs at the album rock station in Little Rock, AR fell in love with "Clean Clean," so I got treated regularly to that grimy bit of power chord-infused new wave. It was actually played more than "Video." I bought the single copy they had at the more adventurous record shop on the main street of Hot Springs, brought it home, listened to it once, hated it, then threw it on again, and then again, and it took up residence on my turntable and in my head for days.

     Gary Numan's Replicas put the next stake in my rock 'n' roll dirtbag status shortly thereafter. "Down in the Park" smacked me on the head; wherever that park was and whatever "death by numbers" was, they sounded way more interesting than the parade of Trans Ams and airhead girls with feathered hair that was the big weekend entertainment in Arkansas. "Down in the Park where the chant is 'death death death' 'til the Sun cries 'morning'" was way more cool/evil than whatever Ted Nugent was going on about. Both records conjured up visions of a dystopian sci-fi future in different measures. The Age of Plastic was a wistful world of missed opportunities and regret about choices taken, from the viewpoint of an upper middle class man looking back, and Replicas sounded like what you'd get if one of the demi-humans on Diamond Dogs had gone dumpster diving behind Kraftwerk's studio. Both of them had electric guitar power chords that sounded like they were being dispensed from a soft-serve ice cream machine in neat blobs.

     "Elstree" and "Clean Clean" are my favorites on the album (I love them all, though). Both contain themes of desensitization to images and acts of war ("all the bullets just went over my head"). "Clean Clean" was my favorite at age 19, but the emotional landscape described in "Elstree" is more familiar at 60: the idea that whatever we have now, while it may be fine, still isn't what we had before we became aware of our limitations, when the future was wide open. Sadly, both the Essoldo and the Giocondo were gone before I learned what they were.

     The remaster just came this afternoon, and yeah, it was quite worth the $12 I paid for it on Amazon. The mix and levels don't sound much different, but they nuked a ton of tape noise, so everything just sounds clearer. It's a light-handed remaster like the Police box set; they didn't go crazy with the limiter, EQ, or exciters. You really hear the horse gallop at the end of "Elstree" and the dog barking at the end of "Johnny on the Monorail." And not incidentally, what a motherbleeper of a bass player Horn was on this record. His playing on "Astroboy" reminds me of one of those Who songs where they solo the Entwistle track.

     (Ha, I Googled "Essoldo" right after this, and in the results Google said "People also search for: Giocondo." Awesome.)