bvideo

Members · Content Count: 50 · Community Reputation: 15 Good

  1. There are some confusing issues with MIDI ports in various versions of Windows. Try googling "windows limit of midi ports". This multiport interface works for me.
  2. There's a thing called "snap offset" that can be set as a special property of a clip. Normally a clip snaps at its left edge to whatever landmark you're using in the track, such as a beat. The snap offset is a point inside the clip that is used as the snap point instead of the left edge. It can be handy for aligning a clip that has a pickup, strum, or some other feature before the first beat of the clip. (See the help for "snap offset".) I don't know if it works for loops, though. Also, since loops are a multiple of beats long, the end of the clip would fall short of the end of a beat by the same amount as its snap offset (if snap offset works on loops at all). That would be OK when repeating the loop. (There's a small arithmetic sketch of this after the list.)
  3. Most likely the two software implementations differ in how they define %. Rather than a standard acoustical definition, it represents a reproducible setting for that particular control. That said, your use of reverb on a bus, with sends from different instruments at different levels, is a reasonable approach for saving CPU with a single reverb. You may want to verify the sends are post-fader (the default) if you want to preserve the wet/dry balance as you adjust an instrument fader.
  4. If you are using long MIDI notes and expecting to start playback in the middle of any of those notes, that is almost guaranteed not to align with the rest of your score. The reason is that the DAW has only two choices when starting playback in the middle of a note: (1) start the note from its beginning, or (2) don't play the note at all. In case 1, some kinds of synth voices can't possibly align when started at just any place in the score, because they are being started at the "wrong" time. If you were to freeze those notes into their audio equivalent, you'd then be able to start playback anywhere. If you're not talking about long MIDI notes, then never mind; it must be something else.
  5. In real life, true stereo is represented by relative signal delay between left and right and also by reflections from the environment, maybe more than by volume differences. One can also argue for differences in frequency response. The relative signal delay depends on the distance between microphones or ears. Reflections and frequency response also depend on microphone orientation and ear physics. Microphone orientation can certainly affect relative volume as well as frequency response. You could try this on a signal recorded in mono: put the straight signal on the left and the same signal delayed by a fraction of a millisecond on the right. With the same amplitude on both sides, the signal moves to the left. Experimenting with the delay amount can be fun. (There's a short sketch of this experiment after the list.) When considering how to reposition instruments (even ones recorded in stereo), why not consider relative delay? One reason is that folding stereo down to mono when listening (or mixing) can wreck the whole sound if only delay is used to represent stereo. A "true stereo" panner would have a lot of work to do.
  6. Any tempo changes near 9 of the first one? Tempo-based effects?
  7. One way: any controller can be modified using the Event Inspector in the Control Bar. After selecting a range, that inspector can change values by n% or by an increment of +n or -n. Changing by % flattens or exaggerates the dynamics, while + or - changes the overall level while keeping the same dynamics. (A small numeric example follows the list.)
  8. What a pain. The Nord should have just shifted the notes before sending them. As it is, the track you record from the Nord would probably play back correctly on the Nord, but not on other synths. Also, the octave-shift CC will most likely not be sent when switching to a program preset that has a programmed octave shift. Again, not friendly to other synths. It's possible a CAL script could run through a track and replace all the CC 29s with the appropriate octave shifting (a rough sketch of the idea, in Python rather than CAL, follows the list). Hopefully the CC event includes a value indicating which direction to shift. And beware sustain (CC 64) messages and also notes held across a CC 29. But the shift in a patch change probably won't be seen and may confuse your entire strategy. Also, what happens when you send MIDI notes to the Nord outside of the configured range? Would a track restructured this way still play back correctly? CAL is a custom scripting language built into Cakewalk. If you have a programming background, you could do it. There are also a few people on this forum who are skilled enough that they might volunteer to do it for you.
  9. Occasionally I notice late buffers showing up when the song has just ended, not while it is playing. No artifacts are heard... Also, my guess is the hidden "safety buffer" in some interfaces covers up artifacts from slightly late buffers.
  10. Useful info here! But just to clear up my understanding about scheduling VSTi's: normally there's just one VSTi serving any audio (synth) track. That is to say, the synth rack entry for that track has only one item, not a stack. This is clearly different from the case where an FX rack with multiple audio effects serves a single track (or bus). My understanding of plugin load balancing is that each audio effect in such a rack can get its own thread simultaneously. That per-track concept of load servicing doesn't even apply to synths. Your last paragraph implies a slightly different concept for synths: it doesn't sound like there is anything preventing simultaneous threads running on individual synths in the synth rack (which is global). In that sense, doesn't it make sense to say there is load balancing among the synths in the synth rack (and there probably always has been)? In this view, for a synth instance that has multiple parts, a single thread has to serve all the parts, even if the parts are directed to separate audio tracks (i.e., for the majority of VSTi's, which don't support multithreading). Are there situations where Cakewalk would do multithreading to a VSTi that supports it? That would clear up most of my question (post 5?) about the "overhead" of stacking synths. RAM is another part of the equation. For a VSTi that supports multiple ins and outs, how much more RAM does it take to run multiple instances, each with a single part, versus a single instance with multiple parts? The code portion should be shared. For a sampler, if samples can be shared between instances, I imagine there is relatively little difference. But how about for a synth like TTS-1? Does it wind up using a lot more memory for multiple instances?
  11. I'm interested to know how mildly or how substantially overhead might be increased by stacking a mono-timbral synth. Also, I wonder whether a multi-timbral synth can run multi-threaded.
  12. I had a problem where my Korg M1 (64-bit VST2) opened its properties page all blank/white in an existing project. I tried the Korg Wavestation (also 64-bit VST2), and the same thing happened. I use the onboard Intel 630 graphics, no MSI, no gaming software, only one monitor. The project is fine in the latest Sonar Platinum. I worried! The project was last saved by Sonar X2. I resaved it from Platinum, and then those VSTi's worked OK in Cakewalk. Lucky me to still have Platinum around.
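
For item 2, a minimal sketch of the snap-offset arithmetic (Python; the numbers are made up for illustration, not taken from the post):

    # Hypothetical values, in beats. With no offset the clip's left edge lands
    # on the snap target; with an offset, the point 'offset' into the clip lands there.
    def clip_start_for_snap(snap_target, snap_offset):
        # Pull the clip start earlier so that start + offset == snap target.
        return snap_target - snap_offset

    clip_length = 4.0    # a 4-beat loop
    snap_offset = 0.25   # a quarter-beat pickup before the first downbeat
    snap_target = 8.0    # snap the clip to beat 8

    start = clip_start_for_snap(snap_target, snap_offset)  # 7.75
    end = start + clip_length                               # 11.75: short of beat 12 by 0.25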
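
For item 5, a minimal sketch of the left/right delay experiment, assuming the numpy and soundfile libraries are installed (the file names and the 0.5 ms delay are placeholders):

    import numpy as np
    import soundfile as sf

    mono, sr = sf.read("mono_take.wav")      # placeholder file name
    if mono.ndim > 1:
        mono = mono.mean(axis=1)             # fold to mono if the file is stereo

    delay_ms = 0.5                           # a fraction of a millisecond
    delay_samples = int(round(sr * delay_ms / 1000.0))

    left = mono
    right = np.concatenate([np.zeros(delay_samples), mono])[:len(mono)]

    # Equal amplitude on both sides; the delayed right channel pulls the image left.
    stereo = np.stack([left, right], axis=1)
    sf.write("delay_panned.wav", stereo, sr)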
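
For item 7, a small numeric example of percent scaling versus a fixed offset on controller values (plain Python; the CC values are made up):

    cc_values = [40, 64, 90, 110]            # hypothetical CC 7 (volume) events

    # Scaling by a percentage stretches or flattens the dynamics:
    scaled = [min(127, round(v * 0.80)) for v in cc_values]   # 80% -> [32, 51, 72, 88]

    # A fixed +/- increment moves the overall level but keeps the same spread:
    shifted = [max(0, min(127, v + 10)) for v in cc_values]   # +10 -> [50, 74, 100, 120]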
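
For item 8, a rough sketch of the rewriting idea from that post, written in Python with the mido library rather than CAL, and assuming a made-up convention for how CC 29 encodes the shift direction (check what the Nord actually sends):

    import mido

    # Hypothetical mapping from CC 29 value to octave shift; replace with the real one.
    SHIFT_FROM_CC = {0: 0, 1: +1, 2: +2, 126: -2, 127: -1}

    mid = mido.MidiFile("nord_take.mid")     # placeholder file name
    for track in mid.tracks:
        shift = 0
        for i, msg in enumerate(track):
            if msg.type == "control_change" and msg.control == 29:
                shift = 12 * SHIFT_FROM_CC.get(msg.value, 0)
            elif msg.type in ("note_on", "note_off"):
                # Bake the current shift into the note number itself.
                track[i] = msg.copy(note=max(0, min(127, msg.note + shift)))
    mid.save("nord_take_shifted.mid")

    # The caveats in the post still apply: notes held across a CC 29, sustain (CC 64),
    # and shifts hidden inside program presets are not handled here, and the CC 29
    # events themselves are left in place.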