Everything posted by bvideo

  1. Is this MIDI going to a hardware synth? If so, what about your interface? Also, does your transport stop moving, or does it continue playing the following measures?
  2. You were able to work your keyboard without the M50 editor. That suggests the editor is not passing its input through to the MIDI output. We know the editor can send to the output; it just isn't forwarding its input. That's either a major bug, or there is some config item in that editor that suppresses that routing. In Sonar there is a taskbar widget (maybe optional?) that shows two little red dots: one shows input received, the other shows output sent. What do you see there?
  3. The M50 may not be producing sound from playing on its own keyboard because its MIDI "local control" is off. Sending "local control off" to its MIDI ports at startup, when it addresses the MIDI connections it has, is a preferred and perhaps default behavior of Sonar. That's a setting that can be changed. Generally, "local control off" means Sonar intends that you set up the track's routing with "echo on", so that all performance goes into a Sonar track and then back out to the keyboard. Suitable track settings for input and output are required. And whatever MIDI effects are on the track ought to pass output for any input.
  4. The single "C1 note on" does seem weird. Some of your "note on" events could have velocity 0, which means note off (see the sketch after this list). "MIDI/USB lead": if it's a typical MIDI/USB adapter, one MIDI connector is input and the other is output. Also, you could mention the make and model of your adapter, in case it's one that is known to be bad. "Timing clock" may be configurable in the synth, or maybe MIDI-OX can filter it out of your display. Active sensing too.
  5. It's possible to zoom all the way in to the sample resolution. See anything then?
  6. IRQ conflicts used to be a thing. Now, not so much. IRQs used to top out at 15 on early PC hardware. Now, if you look at your "System Information" under [-] Hardware Resources -> IRQs, you see numbers up to 511 at least, and no conflicts. This was taken care of by hardware improvements in PC architecture, including the "Advanced Programmable Interrupt Controller" and "Message Signaled Interrupts". Software improvements include the "Deferred Procedure Call" (DPC), which offloads traditional (old-fashioned) interrupt processing to lower priorities and also allows all the cores to be used for device servicing. Nowadays, when looking for hitches and sputters, the culprit is frequently some DPC, and Resplendence LatencyMon is likely to point out which ones cause the problems. Some older network interface drivers used to be somewhat of a problem, especially if your cable was unplugged, because they would run high-priority loops to detect the cable status.
  7. RAM makes a difference when you use a sample-playing VSTi. It might be helpful to say a lot more about the projects you have: audio? MIDI? softsynths? number of tracks? sample rate?
  8. How about "envelope segment shape"? You can change the straight lines between nodes to curves. Perhaps your envelope nodes happen at inconvenient points in the waveform where a sudden change in gain causes clicks. Perhaps using a curve makes the change not so clicky (see the sketch after this list).
  9. To eliminate certain artifact-producing settings, you could see if it happens with 64-bit audio set/not set and plug-in load balancing set/not set.
  10. What counts more than anything is the input setting on every MIDI track. (User 905133's initial assumption).
  11. What does your multidock look like? (That's where you expect the piano roll to open, right?)
  12. Red-lighting the ProChannel EQ is definitely worth avoiding when expecting it to show useful levels. But just for fun, I overloaded the master bus with an audio track and the master bus input gain, then reduced the master bus fader until the master bus meter was never red. Then I enabled the ProChannel EQ and set a mild curve. The ProChannel overload light was full on, but I did not hear any artifacts while the audio played. So the red lights didn't seem to correspond with audible artifacts. Of course, some other VST or ProChannel effects could react badly when processing "out of range" data, even when processing in floating point, so there might be several reasons for not fixing master bus overload using the master fader.
  13. In the analog mixing world, this would not be good. But in the digital, floating-point world, it would be fine. I believe the master bus output through its output fader is still floating point. Floating-point numbers can represent much larger quantities than the fixed-point 24-bit numbers that audio drivers work with. I assume that after the master bus fader, floating point gets converted to 24-bit fixed point on the way to the driver. That's where clipping or terrible digital artifacts would first happen if the meter reads "too hot". Of course I don't know the code, and the signal path diagram in the manual is not necessarily proof of this assumption, but it might be fun to "overload" a signal sent to the master bus and see how it sounds when compensated with the master fader (a sketch of the arithmetic appears after this list).
  14. Chasing for articulations needs a different implementation from regular tracks. In your case, there is no "note" outstanding in the articulation track, so by conventional chasing, no note needs to be triggered. But obviously articulation chasing would need to trigger the last articulation note (or maybe even the last several notes), even though the articulation note lengths have expired (see the sketch after this list). A workaround for your posted case might be to extend all note lengths up to the beginning of the next note, but there is an obvious conflict with that approach too. Chasing of articulated notes needs to run in parallel with chasing of their articulations. I wonder if they have tried to implement it that way?
  15. I doubt this mixer can be configured to split the signal the way you want. Typically, a smallish mixer has a separate send/receive path (e.g. "aux") that is nominally for patching in effects but can also be used to send to and receive from your computer while the mixer mixes all the audio into your sound system. This mixer doesn't have that. On the other hand, your Realtek onboard should have some way to take input from your mixer into the computer while also mixing it with the computer's audio output, with the Realtek feeding your sound system. That would provide the separation you need.
  16. When you see "very high", what does your disk usage look like? Is your sample player thrashing samples?
  17. Someone I know killed his computer with compressed air. Static electricity? Overspeeding fans?
  18. This is exactly the symptom of Cakewalk and Sonar frequently misjudging the number of beats when promoting an audio clip to a loop, as rsinger said in the first reply.
  19. My detailed reply is waiting to be approved. Bottom line: the Korg M1 and Triton VSTis are multitimbral. If the Extreme is different, that's a shame. Word just in: the Korg Collection TRITON Extreme owner's manual has the same wording about setting the MIDI channel for each timbre.
  20. The Korg M1 VSTi is definitely multitimbral, as is the original synth. In combi mode, the puke-green MIDI widget (between "performance" and "master fx", just right of the vertical divider) is the selector for the MIDI channel for each slot. I don't have the Korg Triton VSTi, but the "Korg Collection TRITON Music Workstation Owner's Manual" for the VSTi clearly states there is a MIDI channel setting for each slot (timbre); see the MIDI section of the "Combi" chapter. The MIDI button is between the Setting and Zone buttons, and the first column of that page selects the MIDI channel per zone, as shown in the manual. Most likely all the combi presets are monotimbral, so you need to roll your own. Here's the M1:
  21. Another way to approach the Korg is to just go ahead and set up a template with 16 simple instrument tracks, one for each MIDI channel, each track with its own instance of the Korg. Then your project has the same layout as you would have with the TTS-1, namely one track per channel, with an individually assignable program per track. You can select any program from any bank into any track, so you can assign the GM voices as you please. The workflow might not be much different from what you might do with the TTS-1. Note: having multiple instances of a VST does not use significantly more memory than a single multitimbral instance, and the CPU multithreading might be better. The TTS-1 does have a kind of special status in Cakewalk, though, in that opening a standard MIDI file in an empty project automatically deploys the TTS-1. Hard to beat in terms of simple workflow.
  22. Korg typically provides combi mode as the multitimbral form, letting one instance perform up to 8 separate instruments. So you load programs, not banks, into combi slots, and it's typical to be able to load any program into any of the 8 slots. So you could create an empty combi and select programs from a General MIDI bank into the slots, giving you a multitimbral GM synth. You could look at it this way: the TTS-1 supplies exactly one "combi"; it has slots for up to 16 programs, selected from the on-board GM programs. A Korg combi supplies only 8 slots; you can load up to 8 programs, all of them GM if that's what you want to hear. (I leave out all the other possibilities offered by the Korg.) There are differences, obviously, in the way MIDI channels and audio outputs are assigned, and these could be important to you.
  23. Raw sysex can certainly be on a track (any track), and you can see it in that track's event list. But another scheme is that sysex banks can be stored in the Sysex view and called by a sysex bank event on a track, which can also be seen in the event list for that track. Yet another scheme is that a sysex block can be stored in the Sysex view and marked to be sent when the project is opened; then it won't appear in any track's event list. Sysex events (raw or bank references) on a track will be played and replayed whenever the play head crosses them (as can be seen when watching an event list). Sysex banks sent on project open don't get replayed. When Cakewalk opens a MIDI file, events like bank/patch, volume, and pan that are seen at the beginning of tracks can get stored in the widgets of track headers, so they won't be seen in event lists. I don't know about sysex events; could be interesting. Of course, when Cakewalk writes a MIDI file from such a project, all events need to be written out to the tracks they belong to. Sysex banks that are marked to be sent on project open need to be put somewhere in the file. Does an extra track get created? I don't know (one way to check is sketched after this list).
  24. Depending on the version of Sonar you have, there may be a zooming widget between track headers and track data. Check the manual, and see if your zoom is way down low.
  25. Perhaps you are thinking of processing the plugins in parallel while streaming audio data through the chain. Maybe a good name for it would be "plug-in load balancing". That name is already in use in Cakewalk, however: it's the name of a feature that processes the plug-ins in parallel to distribute the load across multiple cores when possible.
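
A minimal sketch for item 4, in Python, of how a MIDI "note on" with velocity 0 is conventionally read as a note off. The function name and output formatting are my own, not from any particular monitoring tool:

    def classify(status: int, data1: int, data2: int) -> str:
        """Classify a 3-byte MIDI channel voice message."""
        kind = status & 0xF0
        channel = (status & 0x0F) + 1  # MIDI channels usually shown as 1-16
        if kind == 0x90 and data2 > 0:
            return f"note on  ch{channel} note={data1} vel={data2}"
        if kind == 0x90 or kind == 0x80:
            # Note On with velocity 0 is treated the same as Note Off.
            return f"note off ch{channel} note={data1}"
        return f"other    ch{channel} status={status:#04x}"

    print(classify(0x90, 60, 100))  # note on  ch1 note=60 vel=100
    print(classify(0x90, 60, 0))    # note off ch1 note=60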
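A sketch for item 8 of why a curved segment can be less clicky than a straight line: it eases in and out of the nodes instead of changing gain abruptly. The raised-cosine shape here is an illustration only; I don't know what curve Sonar actually uses:

    import math

    def linear(t: float) -> float:          # t in [0, 1]
        return t

    def raised_cosine(t: float) -> float:   # t in [0, 1], zero slope at both ends
        return 0.5 - 0.5 * math.cos(math.pi * t)

    # Compare the gain step just after the node (t = 0): the curve eases in.
    dt = 0.01
    print(linear(dt) - linear(0))                # 0.01    (jumps straight in)
    print(raised_cosine(dt) - raised_cosine(0))  # ~0.00025 (eases in)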
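A sketch for item 13 of the arithmetic, assuming (as argued above, not confirmed from the code) that the bus math is floating point and the conversion to 24-bit fixed point happens after the master fader:

    FULL_SCALE_24 = 2**23 - 1  # max positive value of a 24-bit sample

    def to_24bit(x: float) -> int:
        """Convert a float sample (1.0 = full scale) to 24-bit, clipping."""
        x = max(-1.0, min(1.0, x))  # this is where the damage happens
        return int(round(x * FULL_SCALE_24))

    hot_sample = 2.5          # about +8 dB over full scale on the float bus
    master_fader_gain = 0.3   # pull the master fader down to compensate

    print(to_24bit(hot_sample))                      # 8388607: clipped flat
    print(to_24bit(hot_sample * master_fader_gain))  # 6291455: intact, 0.75 FS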
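A sketch for item 14 of the two chasing policies side by side. The event fields and function names are hypothetical, not Cakewalk's internal representation:

    from __future__ import annotations
    from dataclasses import dataclass

    @dataclass
    class ArtEvent:
        time: float    # start time in beats
        length: float  # note length in beats
        name: str      # articulation keyswitch name

    def chase_note(events: list[ArtEvent], playhead: float) -> ArtEvent | None:
        """Conventional chase: only an event still 'held' at the playhead."""
        for e in reversed(events):
            if e.time <= playhead < e.time + e.length:
                return e
        return None

    def chase_articulation(events: list[ArtEvent], playhead: float) -> ArtEvent | None:
        """Articulation chase: the last event at or before the playhead,
        even though its length has long since expired. Assumes events
        are sorted by time."""
        past = [e for e in events if e.time <= playhead]
        return past[-1] if past else None

    arts = [ArtEvent(0.0, 0.25, "legato"), ArtEvent(8.0, 0.25, "staccato")]
    print(chase_note(arts, 16.0))          # None: nothing outstanding
    print(chase_articulation(arts, 16.0))  # the staccato event: what should be sent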
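A sketch for item 23: one way to see where sysex lands when Cakewalk writes a MIDI file is to inspect the file itself. This uses the third-party mido library (pip install mido); the file name is a placeholder:

    import mido

    mid = mido.MidiFile("exported_project.mid")
    for i, track in enumerate(mid.tracks):
        tick = 0
        for msg in track:
            tick += msg.time  # delta times accumulate to absolute ticks
            if msg.type == "sysex":
                print(f"track {i} @ tick {tick}: sysex, {len(msg.data)} bytes")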