Everything posted by azslow3

  1. Are you sure that is the case when the change is made via the API? Sure, it can be my bug, but in my observation the flag is set only when the change is done by mouse.
  2. I think MIDI jitter and audio jitter are from somewhat different "domains":

     Audio is a continuous stream. Audio is sampled using a clock, and when this clock is not accurate, samples are taken at the "wrong" time. Audio jitter is inaccuracy in the timing of samples. If there is more than one clock and they are not synchronized, e.g. when two interfaces are used in parallel, there are "drift" and "jitter" between the audio streams. If the interfaces are synchronized, there is no drift, but there is still some jitter between the streams (samples taken at the same wall-clock time are not put at the same time position in the audio streams). Note that the audio transfer path/speed/latency does not influence that jitter.

     MIDI events are not sampled as a continuous stream. Jitter there is a deviation in latency: how late an event is delivered compared with the usual delivery time. Unlike audio, there is no predefined "sample rate". Obviously there is some rate of reading hardware sensors and converting them to events, but it is unknown and device specific. The only known clock/delay is the MIDI hardware transfer clock (~31kHz), so it takes ~1ms to transfer one note. Hardware MIDI transfer uses a uni-directional continuous stream, so a note can be delivered as soon as it is prepared. In other words, ~1ms is the full (and constant) delay between the note being ready and being delivered (important for the comparison with USB).

     USB-MIDI has a much higher transfer speed than MIDI. Even USB 1.1 is at least 1.5MHz (up to 12MHz), so transferring one (or even several) notes is way faster over any USB (one note in less than 0.02ms). But USB uses host-driven packet delivery. And here comes the problem: in the "standard" mode used by computer keyboards, mice and "cheap" USB-MIDI, delivery from the device happens every 5-12ms (the "polling interval", a device+mode specific fixed number, easy to see in Linux; I have not tried to look under Windows). So a single note, in case of 10ms quantization, will be delivered between 0.02ms and 10.02ms after it is ready for delivery, and so there will be "jitter" of up to 10ms. USB-MIDI devices with their own drivers support (and use) shorter polling intervals. With a 1kHz polling rate the maximum delivery jitter will be ~1ms, for any number of simultaneous notes (USB 2+ can go higher, but I have not checked whether that is used in USB-MIDI). A small simulation below makes the polling arithmetic concrete.
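
     A minimal Python sketch of the worst-case delivery jitter described above (illustrative only; it assumes a fixed polling interval and ~0.02ms wire time per note):

        import random

        def delivery_latencies(event_times_ms, poll_interval_ms, transfer_ms=0.02):
            # An event waits for the next host poll, then takes transfer_ms
            # on the wire; latency = delivery time - event time.
            latencies = []
            for t in event_times_ms:
                next_poll = (int(t // poll_interval_ms) + 1) * poll_interval_ms
                latencies.append(next_poll + transfer_ms - t)
            return latencies

        random.seed(1)
        events = [random.uniform(0, 1000) for _ in range(10000)]
        for poll in (10.0, 1.0):  # "standard" interrupt mode vs 1kHz polling
            lat = delivery_latencies(events, poll)
            print("poll %4.1f ms: latency %.2f..%.2f ms, jitter %.2f ms"
                  % (poll, min(lat), max(lat), max(lat) - min(lat)))
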
  3. I have not done the test, but one "theoretical" note. Let's say you have a 100Hz sine wave. That means its period is 10ms. If the interface input-to-output latency is 5ms, recording input and output simultaneously should produce a 180° phase shift. I mean the visual shift between the waveforms depends on the frequency and the interface RTL; a small calculation below works this out. PS. I assume you have checked that the latency reported by the interface is accurate, using loop-back recording. Good interfaces report it correctly, but if an external mic pre-amp with digital output is used, it is not (and can't be) automatically accounted for in the interface RTL. Also, while RTL is easy to measure with a loop-back (in a DAW or with a special utility), its division into input and output parts is way trickier to deduce.
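
     A tiny sketch of that phase arithmetic (in Python), assuming a fixed round-trip latency:

        def phase_shift_deg(freq_hz, latency_ms):
            # Latency expressed as a fraction of the waveform period,
            # times 360 degrees, wrapped to one cycle.
            period_ms = 1000.0 / freq_hz
            return (latency_ms / period_ms) * 360.0 % 360.0

        print(phase_shift_deg(100.0, 5.0))  # 100 Hz, 5 ms RTL -> 180.0
        print(phase_shift_deg(200.0, 5.0))  # full cycle at 200 Hz -> 0.0
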
  4. Drum/keyboard module processing time (from physical impact until MIDI is generated) can be a millisecond or so. And there is no guarantee the MIDI (USB or own) output is sent at the same time as/before/after the module sound generator gets the event. So comparing a mic on the pad with audio from the module output just measures the module's contribution to sound latency, which is not bound to MIDI latency.

     The audio buffer size contributes to the "jitter" of the MIDI-to-VST-output latency. If the same interface is used for MIDI and audio, I have a feeling the jitter is smaller, since the interface knows the time relation between both inputs.

     "Real hardware MIDI 1" transfer speed (throughput) is ~1ms per note. For drums that is less significant than for a keyboard (we have more than one finger per hand). USB quantization contributes the most to transfer latency (there is almost no difference between transferring 1 or 100 notes; the delay until the "packet" is sent dominates). In that respect USB 1 is way worse than 2 (the throughput of USB 1 is sufficient). So in practice a hardware MIDI connection + an audio interface with MIDI may have lower latency than the device's own USB connection. For interfaces there is a MIDI loop-back test utility (like RTL for audio); I remember with RME I had something around 2-3ms, while a cheap "MIDI to USB adapter" had more than 10ms. My Kawai connected through RME via MIDI had lower latency than via its own USB (I don't remember the results of the test with my Roland drums).

     For me MIDI latency starts to be annoying only when it goes "crazy". That has happened several times: for some reason some (not all!) MIDI devices start to get latency over 30-40ms. Not DAW/audio interface/audio buffer dependent. Disappears with a Windows restart... I still have no idea where that comes from.

     Note that most MIDI devices normally "imitate" instruments with "natural" acoustic latency (unlike e.g. singing, guitar, flute, etc., I mean something with a rather short or even fixed distance from the "sound generator" to our "sound sensors"). Just using headphones compensates 3-5ms of latency; the distance arithmetic is sketched below.
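
     The headphones claim is just the speed of sound (a minimal sketch, in Python):

        SPEED_OF_SOUND_M_PER_S = 343.0  # in air, ~20 degrees C

        def acoustic_delay_ms(distance_m):
            # Time for sound to travel from a speaker/drum head to the ear.
            return distance_m / SPEED_OF_SOUND_M_PER_S * 1000.0

        # Monitors 1-1.7 m away add roughly the 3-5 ms that headphones remove:
        for d in (0.34, 1.0, 1.7):
            print("%.2f m -> %.1f ms" % (d, acoustic_delay_ms(d)))
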
  5. You can... That was never possible if the output goes to a side-chain, and I remember there were problems with AUXes (since the query for the current output returns an error, at least for side-chains), but changing a bus to another bus works fine. Well, if there are several sends, changing the output can auto-reorder the sends, which for me is a bug, but that is not surface specific. A surface can't see the topology graph; it just gets a "topology changed" flag as a refresh parameter. And the flag is set neither when the track output is changed, nor when the send output is changed by the surface. At least in X2.
  6. Thanks! In my current dev version of AZ Controller I try to correctly cache names when possible (the previous behavior was not optimal and buggy...). That is how the bug was spotted. Since I check names slowly, once a change is noticed I re-check the output/send names (if they are used for feedback); the idea is sketched below. Maybe it is time for me to check whether CbB: (a) triggers the topology change flag for refresh when the output or send output is changed (the latter by the surface); (b) makes outputs/sends to side-chains and AUXes controllable. I am still developing under X2, maybe there are some improvements 😏
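
     A minimal sketch (in Python, not AZ Controller's actual code) of that caching idea; read_name stands for whatever host query supplies the string (e.g. a GetMixParamValueText-style call):

        class NameCache:
            def __init__(self, read_name):
                self.read_name = read_name  # callback querying the host
                self.cache = {}

            def get(self, key):
                if key not in self.cache:
                    self.cache[key] = self.read_name(key)
                return self.cache[key]

            def on_change_noticed(self, keys_used_for_feedback):
                # Re-check only the names the surface actually displays.
                for key in keys_used_for_feedback:
                    fresh = self.read_name(key)
                    if fresh != self.cache.get(key):
                        self.cache[key] = fresh
                        # ...push the update to the controller display...
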
  7. When the destination (bus) is renamed, the old name is still reported by GetMixParamValueText(... MIX_PARAM_OUTPUT ...). I can't tell since when, but I remember a discussion about some "caching for names" not long ago. In Sonar X2 it works correctly. CbB version: 2022.11 (build 021, 64bit)
  8. AKAI MPK261 Config

     "Shift" is a logical definition in "ACT MIDI"; you can "Shift Learn..." it on the Options tab of "ACT MIDI". Which button of the MPK261 you define as "Shift" is up to you. It just should be able to send MIDI to the DAW (momentary, so it sends one message when you press and another when you release; see the AKAI documentation for what the buttons do). "Shift B1" and "B1" use the same incoming MIDI for B1; only the reaction is different when B1 is pressed while "Shift" is already held. So you can't "MIDI Learn" "Shift B1" separately.

     "ACT MIDI" is fixed to support up to 8+8+8+1 controls (separate MIDI messages). The MPK261 has more physical controls (8+8+8 strip controls, 5 transport buttons, the "DAW Control" section). And there are "Banks", which switch the MIDI messages sent by the same controls. So it is not possible to assign all available controls with one instance of "ACT MIDI". You may switch to the "Generic Surface"; it supports more controls. Note that both "stock" Cakewalk surface plug-ins have limitations on what and how you can assign; if you need more freedom you can use "AZ Controller". A sketch of the "Shift" logic follows below.
  9. I think it is unclear what you want... Do you want to play only one instrument using the whole keyboard at any particular time? Then the already given answer should help. If you want to switch instruments from the keyboard, write what you want to use for that (e.g. pedals, knobs/buttons on the keyboard, etc.). If you want to play all instruments at the same time, i.e. split your keyboard into several regions, then you can use a Drum Map, or several tracks with active echo and the same keyboard as input (or just Omni), with a forced channel and an MFX event filter (MIDI FX plug-in) tuned to filter out unwanted regions (sketched below).
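
     A minimal sketch (in Python, with hypothetical ranges) of the keyboard-split idea: each instrument track keeps only the notes inside its region, like a per-track event filter would:

        REGIONS = {
            "bass":  range(0, 48),    # up to B2
            "piano": range(48, 72),   # C3..B4
            "lead":  range(72, 128),  # C5 and above
        }

        def track_accepts(region_name, note_number):
            # True if this track's filter lets the note through.
            return note_number in REGIONS[region_name]

        print(track_accepts("bass", 36))   # True  -> bass track plays it
        print(track_accepts("piano", 36))  # False -> filtered out
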
  10. If that is possible (only M-Audio knows for sure...), AZ Controller can use it (well, it has to be defined in the preset...). Also note that with AZ Controller any button or pad (which sends messages) can be used as a "Shift" ("Ctrl", "Alt", "CapsLock", etc.) to change what any other control(s) do. But the button has to send something. And any knob (in finite CC mode) can be used as an "N position" switch (sketched below). I mean, if "Shift+<<" does not send a separate MIDI message and you want an extra command, you can define e.g. "Back+<<" for that (while still keeping "Back" doing "Undo" when "Back" is pressed alone).
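
     A minimal sketch (in Python) of a knob in finite/absolute CC mode acting as an N-position switch: the 0..127 CC range is divided into N zones:

        def knob_position(cc_value, n_positions):
            # Map a 7-bit CC value to one of n_positions zones (0-based).
            return min(cc_value * n_positions // 128, n_positions - 1)

        for v in (0, 42, 64, 127):
            print(v, "->", knob_position(v, 4))  # a 4-position switch
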
  11. If the strip can send pitch bend only and that is not editable, in Cakewalk you can use an MFX which converts pitch bend to CC: https://tencrazy.com/gadgets/mfx/ (PW-Map). What you see in the PRV is still pitch bend, until you "render" that effect. The conversion itself is sketched below.
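
     A minimal sketch (in Python) of what such a pitch-bend-to-CC mapper conceptually does (the target CC number is an assumption here): scale the 14-bit pitch bend value down to a 7-bit CC:

        CC_NUMBER = 11  # assumed target CC (Expression)

        def pitchbend_to_cc(status, lsb, msb):
            # Convert a Pitch Bend message (0xEn lsb msb) to a Control Change.
            if (status & 0xF0) != 0xE0:
                return (status, lsb, msb)  # pass anything else through
            channel = status & 0x0F
            value14 = (msb << 7) | lsb     # 0..16383
            return (0xB0 | channel, CC_NUMBER, value14 >> 7)  # 0..127

        print(pitchbend_to_cc(0xE0, 0x00, 0x40))  # center -> (176, 11, 64)
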
  12. I was targeting the "can any sound be driven by MPE?" question. But you are right, my post is probably "too academic" for musicians... So I'd better list just the practical points:

     MPE should be activated on the keyboard and inside the VSTi to work correctly. Knowing how that is done (and what can deactivate it) may avoid confusion.

     Original MIDI Polyphonic Aftertouch is not a part of MPE, and some other messages are used differently. In other words, an MPE keyboard can be used with an MPE-unaware VSTi, but it should not be in MPE mode; the "sound" can be wrong otherwise.

     The "sound" produced by an MPE-compatible VSTi in MPE mode from a preset not designed for MPE may not be the same as in conventional mode. That is VSTi (and preset) dependent.

     Also, editing an MPE recording can be more difficult than a conventional recording (DAW dependent). In other words, it is probably better to use (record) MPE mode only when MPE is really used.
  13. Unfortunately I have not found HammerPro88 protocol documentation. The User Guide mentions 2 things: "Output port for LEDs in DAW mode" and "most DAWs automatically configure the device". In BITWIG, do any LEDs follow the DAW? If some do (e.g. transport) but others don't (fader buttons), M-Audio has not foreseen feedback for those particular buttons (at least not in an "easy" way). If no LEDs have feedback... it may be worth checking that the output port is set correctly. But the fact that "they are always ON" points to the first case (with an incorrect port they should be "always OFF"). "Most DAWs automatically..." is probably not true; in the installation instructions for particular DAWs they suggest selecting the DAW on the keyboard and using Mackie mode in the DAW. Theoretically they could support feedback for buttons and pads (so a DAW could control LEDs, pad colors, etc.) even where Mackie mode does not support that, I mean in "native" mode. But without protocol documentation from M-Audio it is very hard (up to impossible) to deduce.
  14. If I have understood the specification correctly... there are no "required" ingredients at all to use (or imitate) MPE 🤪 "MPE compatible host" just means the DAW can save (record, edit) Note and CC MIDI events with different channels in one track and deliver them to the VSTi "as is" (without modifying the MIDI channel in the events). I think (almost?) all DAWs can do that. For editing, a DAW should support convenient editing of events with one particular MIDI channel (without that, editing an MPE recording will be a nightmare). A controller is not required; the corresponding MIDI can be created in a MIDI editor. But converting a recording from a non-MPE keyboard to "MPE compatible" can be tedious (unless the DAW supports corresponding scripting... theoretically possible with CAL...). If the VSTi is not MPE aware, it will take 15 instances of the same VSTi with the same preset to implement MPE with it. Also, the MIDI has to be specially processed before feeding each instance. Note that such a structure is rather hard to build in Cakewalk. PS. Depending on the VSTi and DAW, special care may be required for switching MPE on (RPN 6 messages, sketched below) and preventing it from being switched off (on stop).

     -------------------------

     Finally, I think everyone who wants to use MPE should read the MPE specification instead of MPE advertisements... MPE is a rather simple "trick" to allow changes of several parameters per note. Original MIDI 1.0 has foreseen just one (Polyphonic Aftertouch). Current MPE defines 3. All that just to support keyboards with extra sensors per key.
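
     A minimal sketch (in Python) of the MPE Configuration Message (RPN 6) mentioned above: sent on the zone master channel, it sets how many member channels the zone uses (0 disables the zone):

        def mpe_config_message(master_channel, member_channels):
            # The three CCs forming the MCM, as (status, data1, data2) tuples.
            cc = 0xB0 | (master_channel & 0x0F)
            return [
                (cc, 101, 0),              # RPN MSB = 0
                (cc, 100, 6),              # RPN LSB = 6 -> MPE Configuration
                (cc, 6, member_channels),  # Data Entry MSB = member count
            ]

        # Lower zone: master on channel 1 (index 0), member channels 2..16.
        for msg in mpe_config_message(0, 15):
            print([hex(b) for b in msg])
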
  15. For the 3rd option to work, the slider should send CC11. It will be assigned to AZ Controller logic, so initially CC11 will not come through but will modify the WAI volume (or something else, according to the preset logic). If the control is put into group "A" and that group is toggled, it will no longer block CC11, and so it should be usable in plug-ins. And in that case it will not be processed by AZ Controller, so it will not move the WAI volume. But switching presets on the keyboard, so the control sends something not assigned in AZ Controller, is probably the simpler approach to achieve the same goal. "Groups" in AZ Controller are primarily for controllers which can't switch hardware presets easily, or when "advanced" logic is used (e.g. automatically moving some controls to/from pure MIDI mode when a particular VSTi is in focus... some AZ Controller users experiment with funny configurations, maybe just because that is possible 🤪).
  16. It depends on the version of Session. The "SONAR" edition was bound to Sonar. General Strum Session 2 has a serial number (as other AAS Session plug-ins; Platinum users could get those, in which case they are listed in the AAS account).
  17. I have yet to check what exactly is going on in this concrete preset for AZ Controller, and so how easy/hard it is going to be to add "modes" for controls. But AZ Controller supports all kinds of plug-in control in Cakewalk, including:

     Dynamic Plug-in Mapping (sometimes called just "ACT", since it is used in the "ACT MIDI" plug-in);

     Direct Plug-in Control (used in the Mackie Control plug-in);

     dynamically including/excluding controls from surface plug-in operations (not available in other surface plug-ins).

     The last option allows "de-assigning" at run time, let's say, sliders 7-8 from AZ Controller logic, which effectively allows using them in VSTi plug-ins as MIDI input. For that, put the controls in question into some Group (on the Hardware tab) and assign some button to toggle the group.
  18. I don't think people here joke about soundstage or audio imaging. And there were no claims that audiophile equipment produces the same sound as studio monitors. But there is a significant difference in opinions about where all that comes from and what it really is... In other words, are DSD/768kHz/etc. really the "keys" to that "great audiophile sound"? Listening to some recordings on studio monitors is way less fun than listening to them on a $100 HiFi player. My 20-year-old €40 Yamaha computer speakers with the "virtual surround" button pressed produce a crazy soundstage, even when they stand near each other. But I don't think that based on this anyone would consider using a HiFi player for mixing their own songs, or computer speakers to check soundstage 😏
  19. If you are blind, there is a related solution: https://www.azslow.com/index.php/topic,346.0.html But since LEDs make no sense in that case, the device stays dark all the time (and there is no on-screen display). So the solution is far from perfect for the majority of users. Well, you may find it interesting to work with audio feedback alone: you can stay away from the computer monitor, never look at the XTC, and still have all the important information...
  20. I think that is a wish to use 8x oversampling just for "safety", or it is really required by some crazy plug-in to work properly... In the second case it is better to re-think and abandon the plug-in in question; it is definitively not written well if it requires such oversampling (and does not provide it internally). In the first case: that is "too much" safety. Do not forget that all frequencies below half the sampling rate are perfectly reproduced. Unlike 8x AA in graphics, an extra-high sampling rate does not "improve" anything. 96kHz by itself already reproduces frequencies more than two times higher than any human (or audio equipment) can perceive; the arithmetic is sketched below.
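
     The Nyquist arithmetic behind that claim (a tiny Python illustration): content below sample_rate / 2 is fully representable, and oversampling only raises that ceiling:

        HUMAN_HEARING_HZ = 20000

        for sample_rate in (44100, 96000, 96000 * 8):
            nyquist = sample_rate / 2
            print("%7d Hz -> Nyquist %8.0f Hz (%.1fx hearing limit)"
                  % (sample_rate, nyquist, nyquist / HUMAN_HEARING_HZ))
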
  21. With me you can talk on this forum, my forum, WhatsApp/Skype/phone. On this forum you can "talk" to Mark (current supporter of Mackie Control in Cakewalk), Noel (long-time Cakewalk developer) and other Cakewalk staff people. That is not the same as "support" from Novation/Focusrite/M-Audio, where you normally can communicate just with (or through) the "first level" support (people trained to answer common questions). People here not only know how all that works, they are also able to modify the related parts of the code in case that is required.
  22. The keyboard sends MIDI from each particular control to one port only. Which one depends on the keyboard settings, i.e. the currently loaded preset and the activated mode. That is not something Cakewalk or a surface plug-in can influence (unless the plug-in knows how to switch the mode/preset...). The "bug" I mentioned was rather confusing: in your example, even though you could see, let's say, "MIDIIN2" in the preferences, the DAW was actually using "LKMK3" instead. Whether that can happen in the current Cakewalk version I am not sure; I have not observed the effect for a while (but I try to be careful with connections...).
  23. My first guess is a bug in Cakewalk; it likes messing with MIDI assignments of Control Surfaces (this started in some version, then was changed/improved/modified, but I am not sure it is eliminated completely...). "AZ Controller" will not help if that is the issue, since it also uses Cakewalk MIDI (for that reason I have thought several times about working with MIDI devices directly, but the mentioned "one port for everything" case would then be impossible). General rule with Cakewalk: always start it with the same MIDI devices connected to the same USB slots. Unfortunately that is not always practical, or even possible (I have several MIDI devices and I don't want to switch all of them on every time I start a DAW; I also use different audio devices with different settings, and that is a real nightmare in all DAWs I use. Why even the "smartest" DAW developers can't at least remember one set of settings per interface, better several presets, is a good question...).
  24. Here is the explanation: some DAWs are able to route one port (a MIDI device from the software perspective) both as "normal" MIDI (e.g. as input to VST instruments) and for controlling the DAW. Obviously the messages which control the DAW should be filtered out: e.g. Mackie sliders send "Pitch Bend" MIDI messages, and you don't want to change the pitch in your synth when you change the volume of your track (the filtering is sketched after this post). DAWs in that category are Ableton and Cakewalk (though Cakewalk has no way to block the output to the device in that mode, but those are details...). But most DAWs need a separate port (device) for DAW controlling. And if some device pretends it can emulate "standard" controllers, like Mackie, it can't send keys to the same port. That is not technically possible, even in Cakewalk (with the Mackie surface plug-in). Many controllers, including the Launchkey, are Ableton oriented, and so they have (or had) just one port. But if they want to support "other DAWs", they have to implement at least 2 pairs of ports. And that has happened with the MK3; they just want to attract more users.

     --------------

     Which ports to use depends on what the device is sending; in most controllers (Launchkey included) that is freely configurable. In factory presets other than the one for Ableton, "DAW controlling" messages are sent through the "DAW controlling port". And so this port has to be used to catch those messages.

     --------------

     "ACT MIDI" not only has a strict limit on the number of controls, it also does not show the messages it receives. So you can't easily check which port you want. Another limitation is that switching what particular controls operate is only "banks" based. In practice that is not flexible, especially with a limited number of available controls. How a control modifies the target parameter is also fixed; just "jump" and "catch" are possible. That is not always practical, at least not for all cases.

     --------------

     "AZ Controller" does not have these limitations. It supports an unlimited number of possible controls, modifiers and combinations. It shows the last received MIDI message. It has several fancy modes for finite hardware controls. It also supports several instances, so several ports can be configured in one preset (for the "Master" instance) or separately. And when the device has controllable LEDs, it can use them as well.
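
     A minimal sketch (in Python) of the shared-port routing idea: surface traffic (Mackie-style Pitch Bend from faders) must be consumed before the rest reaches instruments:

        def route(status, data1, data2):
            # Classify one incoming message from a shared controller port.
            if (status & 0xF0) == 0xE0:  # Pitch Bend: a fader move
                return "surface"         # consume: drive track volume, etc.
            return "instrument"          # notes etc. pass through to synths

        print(route(0xE0, 0x00, 0x40))  # fader move -> surface
        print(route(0x90, 60, 100))     # key press  -> instrument
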
  25. Bold parts can't work at the same time. Once MIDIIN3 is assigned to Mackie, all messages from it are blocked. And the device is not sending keys there in any case (unless in a custom setup). In general, you mention 3 different methods to use a MIDI controller in Cakewalk:

     1. Through a Control Surface plug-in (AKA ACT, even though only one such plug-in has "ACT MIDI" in the name). That is what you activate with Cubase mode on the device and the Mackie plug-in in Cubase mode in Cakewalk. You don't mention/use that in the "Testing" part.

     2. "Remote Control" of particular Cakewalk elements. In "Testing" you recommend moving controls "all the way" during learning. From my knowledge, Cakewalk is not sufficiently "smart" to learn the control type/range; it just takes the first message and uses the corresponding default value range.

     3. MIDI learn inside the VSTi.

     For 2 and 3 it is important that the control sends a "simple" type of message. In general, on devices without touch functions or encoders that is the default (unless in Mackie/HUI imitation mode and the mode is activated), so I guess any "not DAW" preset for the Mini can be used for both. For 3 it is important that echo is enabled on the track, so the keys work ("produce sound"). In Cakewalk that is the default for the focused track, but some users "manage" to switch it off.

     Note that "ACT MIDI" supports an alternative way to control plug-ins, "Plug-in dynamic mapping" (also in many places/texts called just "ACT"...). Mackie does not support that way (it uses yet another way, "Plug-in direct control", which is not supported by "ACT MIDI"...). There are several differences between MIDI learn inside a plug-in and plug-in controlling through surface plug-ins:

     - MIDI learn only works for plug-ins with MIDI input, and only when that input is active ("echo on"). That means most FXes are not controllable that way (they have no MIDI input). Surface plug-ins can control any plug-in which exposes automation parameters, independent of MIDI inputs.

     - MIDI-learned changes are recorded into the MIDI clip, as "CCs" (assuming the control sends CC). Surface plug-ins modify automation, so changes are recorded as automation.

     - MIDI-learned assignments are remembered by the plug-in, in that particular plug-in's way, so they can be preset dependent. Surface plug-in mappings are saved separately by Cakewalk; they are activated any time the corresponding plug-in is loaded, even with another preset or in another project.

     - Not really a difference, but worth mentioning: for Mackie, plug-in mappings have to be written manually by the user (also I doubt the Mini in Cubase mode can select which FX to control). For "Plug-in dynamic mapping" all assignments are saved into 2 system-wide XML files, and sometimes Cakewalk can corrupt them, effectively losing all created mappings (unless good versions are backed up by the user).

     Finally, the only "perfect" way to integrate the Mini into Cakewalk is AZ Controller. The Mini has more controls than "ACT MIDI" supports, and its hardware controls don't match Mackie, so they are ineffective in that mode. But no one has created a corresponding preset so far. People normally just stop at "working acceptably", with any controller 😏