
bvideo

Members
  • Posts

    186
  • Joined

  • Last visited

Reputation

48 Excellent


  1. Red-lighting the ProChannel EQ is definitely worth avoiding if you expect it to show useful levels. But just for fun, I overloaded the master bus with an audio track and the master bus input gain, then reduced the master bus fader until the master bus meter was never red. Then I enabled the ProChannel EQ and set a mild curve. The ProChannel overload light was fully on, but I did not hear any artifacts while the audio played, so the red light didn't seem to correspond to audible artifacts. Of course, some other VST or ProChannel effects could react badly when processing "out of range" data, even when processing in floating point, so there might be several reasons not to fix master bus overload using the master fader.
  2. In the analog mixing world, this would not be good. But in the digital, floating-point world, it would be fine. I believe the master bus output through its output fader is still floating point. Floating-point numbers can represent much larger quantities than the fixed-point 24-bit numbers that audio drivers work with. I assume that after the master bus fader, floating point gets converted to 24-bit fixed point on the way to the driver; that's where clipping or terrible digital artifacts would first happen if the meter reads "too hot". Of course I don't know the code, and the signal path diagram in the manual is not necessarily proof of this assumption, but it might be fun to "overload" a signal sent to the master bus and see how it sounds when compensated with the master fader. (There is a small numeric sketch of this idea after this list.)
  3. Chasing for articulations needs a different implementation from chasing for regular tracks. In your case, there is no note outstanding in the articulation track, so by conventional chasing no note needs to be triggered. But articulation chasing obviously needs to trigger the last articulation note (or maybe even the last several notes), even though the articulation note lengths have expired. A workaround for your posted case might be to extend all note lengths up to the beginning of the next note, but there is an obvious conflict with that approach as well. Chasing of articulated notes needs to run in parallel with chasing of their articulations. I wonder if they have tried to implement it that way? (A small sketch of the difference follows this list.)
  4. I doubt this mixer can be configured to split the signal the way you want. Typically, a smallish mixer has a separate send/receive path (e.g. "aux") that is nominally for patching in effects, but can also be used to send & receive from your computer while the mixer is used to mix all the audio into your sound system. This mixer doesn't have that. On the other hand, your Realtek onboard should have some way to take input from your mixer into the computer while also mixing it with computer audio out the Realtek into your sound system. That would provide the separation you need.
  5. When you see "very high", what does your disk usage look like? Is your sample player thrashing samples?
  6. Someone I know killed his computer with compressed air. Static electricity? Overspeeding fans?
  7. This is exactly the symptom you get when Cakewalk and Sonar misjudge the number of beats while promoting an audio clip to a loop, as rsinger said in the first reply. (A small numeric illustration of this follows the list.)
  8. My detailed reply is waiting to be approved. Bottom line: the Korg M1 and Triton VSTis are multitimbral. If the Extreme is different, that's a shame. This just in: the Korg Collection TRITON Extreme owner's manual has the same words about setting the MIDI channel for each timbre.
  9. The Korg M1 VSTi is definitely multitimbral, as is the original synth. In combi mode, click on the puke-green MIDI widget (between "performance" and "master fx", just right of the vertical divider); that is the selector for the MIDI channel for each slot. I don't have the Korg Triton VSTi, but the "Korg Collection Triton Music Workstation Owner's Manual" for the VSTi clearly states there is a setting for MIDI channel for each slot (timbre). See the MIDI section of the "Combi" chapter. The MIDI button is between the Setting and Zone buttons. The first column of that page selects the MIDI channel per zone, as shown in the manual. Most likely all the Combi presets are monotimbral, so you need to roll your own. Here's the M1:
  10. Another way to approach the Korg is just go ahead and set up a template with 16 simple instrument tracks, one each for every MIDI channel, each track with its own instance of the Korg. Then your project has the same layout as you would with the TTS1, namely one track for each channel, with an individually assignable program per track. You can select any program from any bank into any track, so you can assign the GM voices as you please. The workflow might not be much different from what you might do with the TTS1. Note: having multiple instances of a VST does not use significantly more memory than a multitimbral single instance, and the CPU multithreading might be better. The TTS1 does have a kind of special status in Cakewalk, though, in that opening a standard MIDI file on an empty project automatically deploys TTS1. Hard to beat in terms of simple workflow.
  11. Korg typically provides the combi mode as a multitimbral form for using one instance to perform up to 8 separate instruments. So you load programs not banks into combi slots. And it's typical to be able to load any program into any of the 8 slots. So you could create an empty combi and select programs from a general midi bank into the slots, giving you a multitimbral GM synth. You could look at it this way: the TTS1 supplies exactly one "combi"; it has slots for up to 16 programs, selected from the on-board GM programs. A Korg combi supplies only 8 slots; you can load up to 8 programs, all of them GM, if that's what you want to hear. (I leave out all the other possibilities offered by the Korg.) There are differences, obviously, in the way MIDI channels and audio outputs are assigned and these could be important to you.
  12. Raw sysex can certainly be on a track (any track), and you can see it in that track's event list. Another scheme is that sysex banks can be stored in the Sysex view and called by a sysex bank event on a track, which can also be seen in the event list for that track. Yet another scheme is that a sysex block can be stored in the Sysex view and marked to be sent when the project is opened; then it won't appear in any track's event list. Sysex events (raw or bank references) on a track are played and replayed whenever the play head crosses them (as can be seen when watching an event list). Sysex banks sent on project open don't get replayed. When Cakewalk opens a MIDI file, events like bank/patch, volume, and pan that appear at the beginning of tracks can get stored in the widgets of track headers, so they won't be seen in event lists. I don't know about sysex events; could be interesting. Of course, when Cakewalk writes a MIDI file from such a project, all events need to be written out to the tracks they belong to. Sysex banks that are marked to be sent on project open need to be put somewhere in the file. Does an extra track get created? I don't know. (A sketch of the three schemes follows this list.)
  13. Depending on the version of Sonar you have, there may be a zooming widget between track headers and track data. Check the manual, and see if your zoom is way down low.
  14. Perhaps you are thinking of processing the plugins in parallel while streaming audio data through the chain. Maybe a good name for it would be "plug-in load balancing". That name is already in use, however, in Cakewalk. It's the name of a feature that processes the plug-ins in parallel to distribute the load across multiple cores when possible.😄
  15. There are differing degrees of multiprocessing. One special case is "plug-in load balancing", where the chain of plugins on a single track is handled by multiple processors, one per plugin; I was wondering if you had tried disabling that. I'm sure one of my projects had a problem with it, but only with certain plugins and certain settings thereof. I asked you about buffer size because plug-in load balancing is disabled when the buffer is smaller than 256 (by default). So I am still curious whether you have tried those things. Aside from plugins and audio drivers, the core of Cakewalk & Sonar is shared by dozens (hundreds? thousands?) of users, not many of whom report playback or rendering corruption. That's why I'm asking you these questions about a corner case I believe is there, and has been there since plug-in load balancing was introduced. (My early report.) (A toy pipeline sketch of the idea follows this list.)
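
Regarding item 2: here is a minimal numeric sketch, in Python, of the point about floating-point headroom. It is not Cakewalk's code; the sample values, gain amounts, and the to_24bit helper are made up for illustration. It only shows that a signal pushed well past full scale survives as long as it stays in floating point and is pulled back down before the conversion to 24-bit fixed point, whereas converting while still "too hot" clips.

    # Illustration only; not Cakewalk's actual signal path code.
    def to_24bit(sample):
        """Convert a float sample (nominal -1.0..1.0) to 24-bit fixed point, clipping."""
        FULL_SCALE = 2**23 - 1
        clipped = max(-1.0, min(1.0, sample))   # this is where the damage would happen
        return round(clipped * FULL_SCALE)

    source = 0.9                  # a healthy sample
    hot = source * 8.0            # +18 dB of input gain: 7.2, far "over" in float
    fader_gain = 1.0 / 8.0        # master fader pulled down to compensate

    # Converting while hot clips to full scale; converting after the fader does not.
    print(to_24bit(hot))                # 8388607 -> clipped, information lost
    print(to_24bit(hot * fader_gain))   # 7549746 -> 0.9 of full scale, intact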
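
Regarding item 3: a minimal sketch of the difference between conventional note chasing and what articulation chasing would have to do. The Note class and both chase functions are hypothetical, not Cakewalk internals; the point is only that a conventional chase finds nothing once the articulation note's length has expired, while an articulation chase would re-send the most recent one anyway.

    # Hypothetical event model; names are made up for illustration.
    from dataclasses import dataclass

    @dataclass
    class Note:
        start: float      # position in beats
        length: float
        pitch: int

    def chase_regular(notes, playhead):
        """Conventional chasing: only notes still sounding at the playhead are (re)triggered."""
        return [n for n in notes if n.start <= playhead < n.start + n.length]

    def chase_articulations(artic_notes, playhead):
        """Articulation chasing: the most recent articulation before the playhead
        still applies, even though its note length has long expired."""
        previous = [n for n in artic_notes if n.start <= playhead]
        return [max(previous, key=lambda n: n.start)] if previous else []

    # One short keyswitch at bar 1, playback started much later:
    artics = [Note(start=0.0, length=0.25, pitch=24)]
    print(chase_regular(artics, playhead=32.0))        # [] -> nothing re-sent, wrong sound
    print(chase_articulations(artics, playhead=32.0))  # [Note(...)] -> keyswitch re-sent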
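
Regarding item 7: a back-of-the-envelope illustration of why a wrong beat count when promoting a clip to a loop produces exactly that symptom. The clip length and tempo are made-up numbers; the arithmetic just shows how an innocent-looking rounding of the beat count becomes an audible stretch error.

    # Hypothetical numbers, for illustration only.
    clip_seconds = 8.1
    project_bpm = 120.0

    true_beats = clip_seconds * project_bpm / 60.0   # 16.2 beats of real material
    guessed_beats = round(true_beats)                # the loop converter guesses 16

    # Forcing the clip to occupy "16 beats" at the project tempo stretches it by:
    stretch = guessed_beats / true_beats             # ~0.988, so the loop drifts
    print(true_beats, guessed_beats, round(stretch, 3))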
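
Regarding item 12: a sketch of the three sysex schemes described there, using a made-up data model (these class and function names are not Cakewalk's). Scheme 1 is raw sysex as a track event, scheme 2 is a track event that references a bank stored in the Sysex view, and scheme 3 is a bank marked to be sent on project open, which never appears in any track's event list.

    # Made-up data model for illustration; not Cakewalk's file format or API.
    from dataclasses import dataclass

    @dataclass
    class SysxBank:
        data: bytes
        send_on_open: bool = False      # scheme 3: transmit when the project opens

    @dataclass
    class TrackEvent:
        time: float                     # position of the event on the track
        payload: object                 # raw sysex bytes (scheme 1) or a bank index (scheme 2)

    def transmit(data):
        print("sending", len(data), "bytes of sysex")

    def on_project_open(banks):
        for bank in banks:
            if bank.send_on_open:
                transmit(bank.data)     # never shows up in any track's event list

    def on_playhead_cross(event, banks):
        data = event.payload if isinstance(event.payload, bytes) else banks[event.payload].data
        transmit(data)                  # replayed every time playback crosses the event

    banks = [SysxBank(b'\xf0\x41\x10\xf7', send_on_open=True), SysxBank(b'\xf0\x43\x00\xf7')]
    on_project_open(banks)                                       # scheme 3
    on_playhead_cross(TrackEvent(0.0, b'\xf0\x7e\xf7'), banks)   # scheme 1
    on_playhead_cross(TrackEvent(4.0, 1), banks)                 # scheme 2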
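
Regarding items 14 and 15: a toy pipeline, emphatically not Cakewalk's implementation, showing one plausible way a serial FX chain could keep more than one core busy: each plugin gets its own worker thread, so while the second plugin works on buffer N the first can already start on buffer N+1. The two stand-in "plugins" here are just a gain stage and a hard clipper.

    # Toy pipeline sketch; the real plug-in load balancing scheme may differ.
    import threading, queue

    def make_stage(process, inbox, outbox):
        def run():
            while True:
                buf = inbox.get()
                if buf is None:           # shutdown sentinel
                    outbox.put(None)
                    return
                outbox.put(process(buf))
        t = threading.Thread(target=run, daemon=True)
        t.start()
        return t

    # Two stand-in "plugins": a gain stage and a hard clipper.
    gain = lambda buf: [s * 0.5 for s in buf]
    clip = lambda buf: [max(-1.0, min(1.0, s)) for s in buf]

    q_in, q_mid, q_out = queue.Queue(), queue.Queue(), queue.Queue()
    make_stage(gain, q_in, q_mid)
    make_stage(clip, q_mid, q_out)

    for n in range(4):                    # feed a few audio buffers through the chain
        q_in.put([0.3 * n] * 8)
    q_in.put(None)

    while (buf := q_out.get()) is not None:
        print(buf[0])                     # buffers come out in order: 0.0, 0.15, 0.3, 0.45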