Everything posted by bvideo

  1. Most likely the two software implementations simply define % differently. Rather than following a standard acoustical definition, the number is just a reproducible setting for that particular control. That said, your use of reverb on a bus, with sends from different instruments at different levels, is a reasonable approach for saving CPU with a single reverb. You may want to verify the sends are post-fader (it's the default) if you want the wet/dry balance to hold as you adjust an instrument's fader; a small sketch of the pre/post-fader math follows.
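   A minimal sketch, in Python with made-up gain values (an illustration, not Cakewalk's internals), of why a post-fader send preserves the wet/dry balance:
   ```python
   # Illustrative sketch: why a post-fader send keeps the reverb-to-dry
   # ratio constant when you move an instrument's fader.

   def dry_and_send(signal, fader, send_level, post_fader=True):
       """Return (dry_out, send_out) for one track, using linear gains."""
       dry = signal * fader
       send = (dry if post_fader else signal) * send_level
       return dry, send

   # With a post-fader send, halving the fader halves both dry and send,
   # so the ratio of reverb to dry signal is unchanged:
   for fader in (1.0, 0.5):
       dry, send = dry_and_send(1.0, fader, 0.3, post_fader=True)
       print(f"fader={fader}: dry={dry:.2f} send={send:.2f} ratio={send/dry:.2f}")
   ```
   With a pre-fader send, the send amount would stay fixed while the dry level moved, so the mix balance would drift as you rode the fader.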
  2. If you are using long MIDI notes and expecting to start playback in the middle of any of them, that is almost guaranteed not to align with the rest of your score. The reason is that the DAW has only two choices when playback starts in the middle of a note: 1. trigger the note from its beginning at the playback point, or 2. skip the note entirely. In case 1, some kinds of synth voices can't possibly align when started at just any place in the score, because they are being started at the "wrong" time (see the sketch below). If you were to freeze those notes into their audio equivalent, you'd then be able to start playing anywhere. If you're not talking about long MIDI notes, then never mind; it must be something else.
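   A sketch of those two choices, using a hypothetical tick-based event list (nothing here is the actual Cakewalk engine):
   ```python
   # The two options a DAW has for a note that spans the playback start.
   notes = [  # (start_tick, end_tick, pitch)
       (0,    1920, 60),   # long note that would be sounding at tick 960
       (960,  1200, 64),   # note starting exactly at the playback point
       (1400, 1600, 67),
   ]

   PLAY_FROM = 960
   RETRIGGER_SPANNING_NOTES = True  # option 1 if True, option 2 if False

   for start, end, pitch in notes:
       if start >= PLAY_FROM:
           print(f"pitch {pitch}: play normally at tick {start}")
       elif end > PLAY_FROM and RETRIGGER_SPANNING_NOTES:
           # Option 1: the attack happens at the "wrong" time, so an
           # evolving synth voice won't line up with the rest of the score.
           print(f"pitch {pitch}: retrigger from its attack at tick {PLAY_FROM}")
       else:
           # Option 2: the note is never heard at all.
           print(f"pitch {pitch}: skip entirely")
   ```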
  3. In real life, true stereo is represented by relative signal delay between left and right, and also by reflections from the environment, perhaps more than by volume differences. One can also argue for differences in frequency response. The relative signal delay depends on the distance between microphones or ears; reflections and frequency response also depend on microphone orientation and ear physics. Microphone orientation can certainly affect relative volume as well as frequency response. You could try this on a signal recorded in mono: put the straight signal on the left and the same signal delayed by a fraction of a millisecond on the right. With the same amplitude on both sides, the image moves to the left (a sketch of the experiment is below). Experimenting with the delay amount can be fun. When considering how to reposition instruments (even ones recorded in stereo), why not consider relative delay? One reason: listening to (or mixing) stereo in mono can wreck the whole sound if delay alone is used to represent stereo. A "true stereo" panner would have a lot of work to do.
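   A minimal sketch of that experiment, assuming NumPy and SciPy are available (the tone and file name are arbitrary):
   ```python
   # Same mono signal on both channels, with the right channel delayed by
   # a fraction of a millisecond. Equal amplitudes, yet the image shifts left.
   import numpy as np
   from scipy.io import wavfile

   RATE = 44100
   DELAY_MS = 0.5                       # try values between 0.1 and 1.0 ms
   delay = int(RATE * DELAY_MS / 1000)  # delay in whole samples (~22 here)

   t = np.arange(RATE) / RATE                   # one second of time axis
   mono = 0.5 * np.sin(2 * np.pi * 440 * t)     # plain 440 Hz tone

   left = mono
   right = np.concatenate([np.zeros(delay), mono[:-delay]])  # delayed copy

   stereo = np.stack([left, right], axis=1).astype(np.float32)
   wavfile.write("delay_pan.wav", RATE, stereo)
   ```
   Summing the resulting file to mono also demonstrates the problem mentioned at the end: the delayed copy partially cancels the direct one (with a 0.5 ms delay, notches land at 1 kHz and its odd multiples).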
  4. Any tempo changes near 9 of the first one? Tempo-based effects?
  5. One way: any controller can be modified using the event inspector in the control area. After selecting a range, the inspector can change values by n% or by an increment of +n or -n. By %, it flattens or exaggerates the dynamics, while + or - changes the overall level while keeping the same dynamics; the sketch below shows the difference.
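   A quick illustration of the difference (plain Python, hypothetical CC values, not Cakewalk's code):
   ```python
   # Scaling by a percentage changes the dynamic range; adding an offset
   # shifts the level and keeps the dynamics intact.
   cc_values = [40, 64, 90, 120]

   scaled = [min(127, round(v * 0.80)) for v in cc_values]  # "80%"
   offset = [min(127, max(0, v - 13)) for v in cc_values]   # "-13"

   print(scaled)  # [32, 51, 72, 96]  spread shrank from 80 to 64
   print(offset)  # [27, 51, 77, 107] spread is still 80
   ```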
  6. What a pain. The Nord should have just shifted the notes before sending them. As it is, the track you record from the Nord would probably play back correctly on the Nord, but not on other synths. Also, the octave-shift CC will most likely not be sent when switching to a program preset that has a programmed octave shift, which again is not friendly to other synths. It's possible a CAL script could run through a track and replace all the CC 29s with the appropriate octave shifting; hopefully the CC event includes a value to indicate which direction to shift. Beware sustain (CC 64) messages, and also notes held across a CC 29. But the shift in a patch change probably won't be seen and may confuse your entire strategy. Also, what happens when you send MIDI notes to the Nord outside of the configured range? Would a track restructured this way still play back correctly? CAL is a custom scripting language built into Cakewalk. If you have a programming background you could do it (the sketch below shows the idea). There are also a few people on this forum who are skilled enough that they might volunteer to do it for you.
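   A rough sketch of what such a script would do, written in Python rather than CAL for readability; the event format and the CC 29 value convention are assumptions to check against what the Nord actually sends:
   ```python
   # Bake octave-shift CCs into literal note transposition.

   def bake_octave_shifts(events):
       """events: list of dicts with 'type', 'tick', and type-specific fields.
       Replaces CC 29 messages with transposed notes."""
       shift = 0  # current shift in semitones
       out = []
       for ev in events:
           if ev["type"] == "cc" and ev["number"] == 29:
               # Assumed convention: value > 64 = shift up one octave,
               # otherwise down. Verify against the Nord's actual output.
               shift += 12 if ev["value"] > 64 else -12
               continue  # drop the CC itself
           if ev["type"] == "note":
               ev = dict(ev, pitch=ev["pitch"] + shift)
               # Caveats from the post are not handled here: a note still
               # sounding when CC 29 arrives keeps the shift it started
               # with, and sustain (CC 64) pedal state is ignored.
           out.append(ev)
       return out
   ```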
  7. Occasionally I notice late buffers show up when the song is just over, not while it is playing. No artifacts are heard... Also, my guess is the hidden "safety buffer" in some interfaces overcomes artifacts for slightly late buffers.
  8. Useful info here! But just to clear up my understanding about scheduling VSTi's: normally there's just one VSTi serving any audio (synth) track; that is, the "synth rack" for that track has only one item in it, not a stack. This is clearly different from the case where an FX rack with multiple audio effects serves a single track (or bus). My understanding of plugin load balancing is that each audio effect in such a rack can get a thread simultaneously. That per-track concept of load servicing does not even apply to synths. Your last paragraph implies a slightly different concept for synths: it doesn't sound like there is anything preventing simultaneous threads running on individual synths in the synth rack (it's global). In this sense, doesn't it make sense to say there is load balancing among the synths in the synth rack (and there probably always has been)? In this view, for a synth instance that has multiple parts, a single thread has to serve all the parts, even if the parts are directed to separate audio tracks (i.e., for the majority of VSTi's, which don't support multithreading). Are there situations where Cakewalk would use multiple threads on a VSTi that supports it? That would clear up most of my question (post 5?) about the "overhead" of stacking synths. RAM is another part of the equation. For a VSTi that supports multiple ins & outs, how much difference in RAM is there between multiple instances each with a single voice and a single instance with multiple voices? The code part should be shared. For a sampler, if samples can be shared between instances, I imagine there is relatively little difference. But how about for a synth like TTS-1? Does it wind up using a lot more memory for multiple instances?
  9. I'm interested to know how mildly, or how substantially, overhead might be increased by stacking a mono-timbral synth. Also, I wonder whether a multi-timbral synth can run multi-threaded.
  10. I had the problem of my Korg M1 (64-bit VST2) opening its properties page all blank/white in an existing project. I tried the Korg Wavestation (also 64-bit VST2), and the same thing happened. I use the onboard Intel 630 graphics, no MSI, no gaming software, only one monitor. The project is fine in the latest Sonar Platinum. I worried! The project was last saved by Sonar X2. I resaved it from Platinum, and then those VSTi's worked OK in Cakewalk. Lucky me to still have Platinum around.
  11. I guess my notions of mute and solo come from mixing equipment, where they apply to any signal on a channel. The "soloing tracks" article you posted doesn't say whether it applies to MIDI or audio, and it doesn't refer to the input echo control or make a distinction between recorded and live signal. The MIDI echo article doesn't refer to mute or solo. So the behavior really isn't spelled out, and it could be anything. It's potentially confusing in its description, and maybe in its implementation as well, since recorded and live signals are merged somewhere along the signal chain, but for MIDI it's not spelled out. I did notice, though, that the behavior for audio tracks is different from MIDI. In particular, solo on one audio track shuts off the incoming live signal on a separate, echo-enabled track. (Simple test: make a project with two empty audio tracks, both echo enabled; listen to audio input through one of them, then solo the other; the audio stops.) That is how I expected MIDI to work, too. Here's Cakewalk's audio signal flow. The little tan M and green S operators come well after recorded material merges with live input, so it's pretty clear how audio should work, and it does seem to work that way (a toy model is sketched below). I haven't seen the MIDI equivalent signal-flow diagram. Sorry I was so late to respond. By the way, I don't use "Always Echo Current MIDI Track". But that possibility brings up the notion that the echoed "current MIDI track" with a mute, or some other track with solo, would cause the same behavior as I observed for a conventionally input-echoed track. Bill B.
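   A toy model of my reading of that diagram (my interpretation, not Cakewalk's code): the mute/solo gate sits after the merge of recorded and echoed signal, so muting or soloing away should silence both:
   ```python
   # Merge happens first; the M/S operators gate the merged signal.

   def track_output(recorded, live_in, echo, mute, soloed_elsewhere):
       merged = recorded + (live_in if echo else 0.0)
       if mute or soloed_elsewhere:
           return 0.0
       return merged

   # Audio behaves this way: soloing another track silences echoed input.
   print(track_output(0.0, 1.0, echo=True, mute=False, soloed_elsewhere=True))  # 0.0
   # The surprise in MIDI land: muting doesn't stop the echoed input there.
   ```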
  12. You could write for Steven Wright. Actually, here is a Steven Wright line: "Last night I played a blank tape at full blast. The mime next door went nuts." Your line is funnier.
  13. support@cakewalk.com, for reporting, but there's no tracking that I know of. It sounds like a bug. Can it be isolated to the playback of just one track, or does there need to be a stream from multiple tracks? Generally, people find that those initializing MIDI messages need to be spaced out in time ahead of all the notes; otherwise the first notes are skewed depending on how long the synths take to engage program changes, not just sysex messages. The same might be said for controls and program changes on the same track, namely leaving time after a patch change before the next control. (I come from a world of dino-synths, so maybe today's synths process patch changes and sysex instantly.) Also, MIDI is pretty slow: on the order of just over 3 bytes per ms. That's about 1.6 bytes per Sonar tick at 120 bpm (the arithmetic is below), so putting a sysex in there is guaranteed to delay some of the other messages "on the same tick". Still, order is certainly important.
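   Checking that arithmetic (standard MIDI 1.0 serial numbers; Sonar's default 960 ticks per quarter note is an assumption):
   ```python
   BAUD = 31250            # MIDI 1.0 serial rate, bits/sec
   BITS_PER_BYTE = 10      # 8 data bits + start and stop bits
   PPQ = 960               # Sonar's default timebase (assumed)
   BPM = 120

   bytes_per_ms = BAUD / BITS_PER_BYTE / 1000   # 3.125 bytes/ms
   tick_ms = 60000 / BPM / PPQ                  # ~0.52 ms per tick
   print(bytes_per_ms, bytes_per_ms * tick_ms)  # ~3.1 and ~1.6
   ```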
  14. Oh no. Now I'm getting the impression that solo combined with input echo is goofy too. I echo-enabled two MIDI tracks and soloed one. Data from the keyboard still gets through the non-soloed one. What's more, after I played around with mute, solo, and echo enable, there came a point where all solos and mutes were off, input echoes on, but one of the tracks would not register input on its meter. A few clicks later and it did register. Sorry, no recipe just yet. I sent the original report to Cakewalk support.
  15. Apparently smart mute makes the synth track follow mute operations on the MIDI track, but not vice versa. This is not clearly described in the manual.
  16. Gswitz, The icon in the plugin window I referred to is meant to set or unset upsampling globally (for all projects) for that particular plugin. It does indeed modify AUD.ini (the list) automatically. From Sonar's Help: "To globally enable/disable upsampling for a plug-in, click the FX icon in the upper left corner of a plug-in window, and select Upsample on Render or Upsample on Playback on the drop-down menu. These options globally persist for all instances of the plug-in in all projects, so it only needs to be set once per plug-in."
  17. Dave, Some of those MIDI activity flashes seem weird! Too bad MidiOx wouldn't work there. There is an explanation of instrument track muting in the Sonar help file, with a reference to "smart mute". "Smart Mute" is a setting that can be accessed in the Synth Settings menu. Maybe you and I have different settings there; my "Enable Smart Mute" is checked. For quick reference:
      --------- from Help: -----------
      Smart Mute for Split Instrument Tracks
      When a virtual instrument MIDI track is soloed or muted, SONAR automatically manages muting or soloing the related set of audio/MIDI tracks in order to properly play back the soloed/muted tracks. In order to facilitate audio recording on Split Instrument Tracks, Smart Mute is no longer the default behavior for Split Instrument Tracks. By default, you can now individually mute split MIDI/audio tracks for soft synths. If you want to enable Smart Mute for Split Instrument Tracks, open the Synth Rack view, click the Synth Settings menu and select Enable Smart Mute on the drop-down menu.
      Bill B.
  18. Sorry, User 905133, I did not answer your question at first; I edited my answer a few minutes later. I used the right-click menu on the track for step 6. I have not tried a different step 6. However, I have replaced the synth in step 11 two different ways. I get what you're saying about trying different lines while muting the already-recorded part. I would never have thought of doing it that way because of my preconceived idea of what mute should do. And I probably would have wanted to record my trials, with the ability to alternately listen back to takes, etc. Before the advent of take lanes/tracks, I would have set up a clone track with no data on it for each new take. I don't really remember back that far any more.
  19. True. If you have a synth or FX that needs upsampling but doesn't do it internally, Cakewalk offers a pre-upsample / post-downsample wrapper (Upsample on Render and/or Upsample on Playback, under the icon in the upper left of the plugin window). So from the synth's perspective, having your projects at 44.1 kHz is OK.
  20. Thank you for your interest, User 905133. I used the track right-click menu "insert instrument". Later, I used "Replace Synth" from both the synth rack menu and the track right-click menu. And I found that the choice of soft synth did make a difference. I tried 4 different ones: TTS-1 did not show the change of behavior caused by step 24, while three other VSTi synths did, including the Korg M1, Rapture Session, and Lounge Lizard. I know the M1 is not from Cakewalk; I don't know about the other two, they probably came with Sonar.
  21. I went step by step as described below. The sequence is very strict. Also, my scenario did not work the same way using the TTS-1 as compared with the M1 and Rapture Session VSTi's. I started this with Sonar Platinum. No data was ever recorded or entered onto any track.
      1. Start with a new blank project.
      2. Create a MIDI track (track 1).
      3. Set input from the keyboard and output to a synth module or external whatever (both channel 1).
      4. Enable input echo and make sure your keyboard triggers the synth (just make sure your MIDI is going through Sonar, not direct).
      5. Hit Mute and verify the oddity that notes from the keyboard still play on the external synth, but the meter doesn't move.
      6. Create an instrument track (I used the M1 VSTi; Rapture Session will also reproduce this, but not TTS-1).
      7. Choose input from your keyboard, channel 1. Don't play anything now.
      8. Create an audio track, with input from some legitimate audio channel. (It won't actually send any audio, though.)
      9. Save this test project & exit Sonar.
      10. Start Sonar and load the project. (I started from this point when I repeated my scenarios.)
      11. (Optionally, you can "replace synth" in the instrument track menu to try this on different VSTi's.)
      12. Split the instrument track.
      13. Repeat the MIDI track test on track 1, i.e.:
      14. Enable track 1 input echo and make sure your keyboard triggers the external synth.
      15. Hit Mute and verify the oddity that keyboard notes still play.
      16. You can disable track 1 input echo now.
      17. Test the split instrument track pair (tracks 2 & 3):
      18. Enable input echo on the MIDI track (2).
      19. Demonstrate that echo from keyboard to VSTi works.
      20. Observe the meter operates on both MIDI & synth tracks (2 & 3).
      21. Mute the MIDI track of the pair. Observe the synth track mutes too.
      22. Play notes and observe no echo & no meter movement.
      23. Unmute the synth track -- still no echo & no meter movement.
      24. Enable input echo on the audio track (track 4).
      25. Now play notes on the instrument track pair just as you did before.
      26. Now the synth meter moves (track 3) but not the MIDI meter (track 2), and sound plays!
      I had a little trouble reproducing this at first. Yesterday I did this with the M1 VSTi, but I wanted to use something included with BandLab Cakewalk, so I tried it with TTS-1 and it did not reproduce step 26. But it did reproduce with Rapture Session & Lounge Lizard. (Probably steps 13-15 aren't needed.) It happens with Sonar and Cakewalk. Steps 22 and 23 are how I would expect mute and input echo to interact properly; steps 5 and 26 are not. Starting from step 10, it is possible to rerun the experiment with different VSTi's. I suspect step 24 sets something that cannot be unset, so starting over from step 10 is necessary to reproduce.
  22. Dave, The weird thing that happened with the split instrument track was done without any prerecorded notes. I made a test project with a pure MIDI track, a split instrument track, and a pure audio track, with no data on any of them. My aim was only to test the behavior of mute in connection with input echo for those three cases. After playing with the pure MIDI track, and seeing what I reported in the OP, I played with the split instrument track next. It seemed to behave the way I would expect: I echo-enabled the MIDI track and muted it, unmuted the synth track, then played notes matching the MIDI track. There was no sound. Then I played with the audio track, muting and echo-enabling it, and maybe un-echoing it (I don't remember the exact steps here). It behaved as I would expect: no input echo sounded when muted. Then when I went back to the split instrument track and played notes, the synth track sounded the notes. I had not touched any controls on the split instrument track; it just started behaving differently. I realize this is not a very precise set of steps to reproduce, and I may not get a chance to nail it down until tomorrow evening.
  23. Thanks, Dave, for verifying this back to X2. I'm a long-time user and I have subconsciously come to expect it to work the other way; I just obviously never depended on it. I've heard there might be some 8.5 users here. Any takers?