Posts posted by azslow3


  1. You already hit the first reason Control Surfaces are difficult... Names!🙂

    ----

    ProTools supports the HUI protocol only. So if you select the "ProTools" target, the DM will speak HUI.

    The HUI (device) was produced by Mackie (the company) and used the special HUI protocol; later they produced the Logic and then the MCU (Pro) devices, commonly known as "Mackie" (device). The MCU can also be switched into "ProTools" mode (the HUI protocol).

    So switch the DM to "ProTools" mode and use its MIDI devices to configure the "Mackie Control" surface plug-in in Cakewalk. In that plug-in, select the "HUI (Beta)" protocol. Start with the MIDI In/Out which you think controls channels 1-8; it should also send transport and other buttons(*). When that works, add a "Mackie Control XT" with the In/Out for the next group of 8 channels, and so on.

    * That is why you see some "CC" messages from the transport on USB-1. Note they are a bit more complicated than plain "CC"s...
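    To illustrate that footnote: a HUI button is not a single CC but a zone-select/state pair. Here is a minimal decoding sketch, with byte values taken from publicly available HUI reverse-engineering notes (not official documentation):

```cpp
#include <cstdint>
#include <cstdio>

// HUI sends a button as TWO CC messages on MIDI channel 1:
//   B0 0F zz  - select "zone" zz (a group of up to 8 switches)
//   B0 2F 4p  - port p of that zone pressed  (0x40 bit set)
//   B0 2F 0p  - port p of that zone released (0x40 bit clear)
static uint8_t g_zone = 0;  // last selected zone

void onControlChange(uint8_t cc, uint8_t value)
{
    if (cc == 0x0F) {                   // zone select
        g_zone = value;
    } else if (cc == 0x2F) {            // port + press/release state
        bool    pressed = (value & 0x40) != 0;
        uint8_t port    = value & 0x0F;
        std::printf("HUI switch zone=%u port=%u %s\n",
                    g_zone, port, pressed ? "pressed" : "released");
    }
}
```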

    msmcleod is the author of "HUI (Beta)", so you will be supported in case something goes wrong on the software side (BTW "Mackie Control" is open source, in case you have some experience with C++ and want to have a look at how all that works...)

    ----

    If you want to observe which messages are sent and how they are interpreted, you can initially use AZ Controller with the "HUIv2" preset I mentioned loaded on USB-1 (instead of "Mackie Control"; do not try to use them in parallel). Unlike other plug-ins, it displays what it receives and how it interprets it (for not yet assigned complex messages you will need to play with the interpretation "Options", see my previous post for an explanation). But there is no "XT", so you will be limited to 8 channels + transport until you extend the preset (far from easy without experience) or switch back to "Mackie Control" + XT(s).

    ----

     

    MMC is a (fixed) set of SysEx messages, originally intended to sync hardware MIDI devices. Cakewalk CAN send them for that purpose, but you do NOT want to use MMC as "MMC" in any case, just as SysEx messages assignable to some actions in the Generic Surface (if you are not in ProTools mode...). For the Generic Surface, read the Cakewalk documentation 3-5 times (not a joke...). I repeat, at least 3 times. I stopped after reading it 2 times, could not understand it, and started to write AZ Controller 🙄
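    For reference, MMC commands are short universal real-time SysEx messages, so in the Generic Surface you would bind the raw bytes, e.g.:

```cpp
#include <cstdint>

// MMC transport commands: F0 7F <device-id> 06 <cmd> F7
// (device-id 0x7F addresses "all devices"). Command bytes per the
// MIDI Machine Control spec:
//   0x01 Stop, 0x02 Play, 0x03 Deferred Play,
//   0x04 Fast Forward, 0x05 Rewind, 0x06 Record Strobe (punch in)
const uint8_t MMC_STOP[] = { 0xF0, 0x7F, 0x7F, 0x06, 0x01, 0xF7 };
const uint8_t MMC_PLAY[] = { 0xF0, 0x7F, 0x7F, 0x06, 0x02, 0xF7 };
const uint8_t MMC_REC [] = { 0xF0, 0x7F, 0x7F, 0x06, 0x06, 0xF7 };
```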


  2. 1 hour ago, msmcleod said:


    @azslow3 - does AZController support HUI type messages ?

    AZ Controller supports all MIDI messages ;)

    Not a joke; in fact that is more difficult than one can guess...

    • HUI has "triggered" support for different CC modes in one preset
    • 14-bit CC sequences (including (N)RPNs) are supported in standard as well as arbitrary order, including mapping a part of the value to separate controls (A&H style); see the sketch after this list
    • complex SysEx mappings (Roland and Yamaha digital mixers in "native" mode)
    • Mackie handshake and Roland checksum, Mackie-style ring feedback (other feedback I have seen so far could be organized in presets without special code in the plug-in).
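    As a sketch of the standard 14-bit pairing mentioned in the list (CC 0-31 carries the MSB, CC 32-63 the LSB of the same parameter; the "arbitrary order" and split-value cases AZ Controller handles are more involved than this):

```cpp
#include <cstdint>
#include <optional>

// Standard 14-bit CC pairing: controllers 0-31 carry the MSB,
// controllers 32-63 carry the LSB of the same parameter.
struct CC14Assembler {
    uint8_t msb[32]     = {0};
    bool    haveMsb[32] = {false};

    // Returns the combined 14-bit value once both halves are known.
    std::optional<uint16_t> onCC(uint8_t cc, uint8_t value)
    {
        if (cc < 32) {                     // MSB arrived
            msb[cc]     = value;
            haveMsb[cc] = true;
            return std::nullopt;           // wait for the LSB
        }
        if (cc < 64 && haveMsb[cc - 32])   // matching LSB
            return (uint16_t(msb[cc - 32]) << 7) | value;
        return std::nullopt;
    }
};
```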

    But I can't claim the same for OSC; e.g. Behringer DMs' OSC is not supported, and I guess NI MK2 OSC also needs some additions in the code. Well, NI MK2 MIDI (native mode) will be hard to support too, but not in the MIDI parsing part... 

    My HUI example was tested with a Nocturn only; one user has tested it with a d8b: http://www.azslow.com/index.php/topic,223.msg1386.html#msg1386

    BTW I have a complex preset for the Yamaha 01V in MIDI mode; it can happen that something in it works for other Yamaha mixers (if the MIDI implementations overlap). But that is not recommended, since controlling the DAW using the DM's native MIDI means the DM can no longer be used as a mixer (the Cakewalk project will influence sound processing in the mixer).


  3. 13 hours ago, mamero said:

    I want to use my Yamaha DM2000 to send MMC (MIDI Machine Control) to control Sonar's transport and location. I also want to use the DM2000 as a control surface for Sonar ideally with two-way communication. I have been trying to get this configured for days now without luck. It really shouldn't be this difficult to set up should it?

    It depends...

    Configure the DM's DAW control mode (preferably "Mackie" in case it has it, but HUI should also work somehow), and then use the Mackie surface module in Cakewalk.

    For anything else (in case you are not happy with the simple approach), you will need a deep understanding of how all that stuff works. E.g. you probably don't want the MMC function as originally intended (to control hardware devices) even in case you have corresponding hardware, but you can use MMC-sending buttons as generic Control Surface buttons, to control the Cakewalk transport (or something else in Cakewalk...). You can even use the DM's own MIDI signals, originally intended to sync DMs and/or save parameters, to control something in the DAW. But that is relatively difficult. 


  4. I also can't remember faders ever following CC7 events. Logically, CC7 envelopes were always separate from CC7 events in the clip; the fader works with envelopes (and there is no mode switch for it), so how could it follow two different values at the same time? 


  5. 3 hours ago, Bruno de Souza Lino said:

    As better as ASIO drivers are nowadays, USB still is a serial bus. If you have a device that's slower than your interface on the same bus, the controller will run everything at the speed of the slowest device in the bus and there's nothing you can do about it except for making sure your interface has its own bus and nothing else uses it.

    I guess in case that was true, my old 8x8 USB2 interface couldn't work at its lowest settings when connected through a 10m USB hub in parallel with several USB1 devices. But it works. The USB specification deals with different standard/speed devices much better than by making everything slow. 🙂

    Also, under-1ms RTL is never "comfortable": the computer should be highly optimized and plug-ins carefully selected. Yes, there are no USB interfaces with such a feature. But 3.3ms is really usable, with USB2 and a moderate buffer size. In practice, the difference can rarely be perceived (taking into account that moving your head 30cm in any direction changes the latency by about 1ms...).
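    (That head-movement figure is just the speed of sound in air, about 343 m/s:)

```latex
\Delta t = \frac{\Delta d}{c_{\mathrm{air}}} = \frac{0.30\,\mathrm{m}}{343\,\mathrm{m/s}} \approx 0.87\,\mathrm{ms}
```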


  6. 16 hours ago, Victor Payne said:

    1) The MIDI device I have is the RPx400 by DigiTech; it has an additional plug-in footswitch, the RPXFC, with 3 buttons on it.

    Unfortunately, the DigiTech documentation says nothing about which MIDI these devices send (it just describes how they work with the in-house software).

    Please check the following:

    • remove ACT MIDI and start recording the "RPx400 MIDI" device into a MIDI track. Press each button for 2-3 seconds, then stop recording.
    • open "Views/Event List" in Cakewalk. It should list what the buttons send to Cakewalk. Write us what you see in the "Kind" and "Data" columns.

    Alternatively, you can use any MIDI monitoring tool you know.
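    Or, if you prefer, a minimal console MIDI monitor for Windows using the WinMM API (a generic sketch; device index 0 is an assumption, pick the "RPx400 MIDI" port on your system):

```cpp
// Minimal Windows MIDI monitor (WinMM). Link with winmm.lib.
#include <windows.h>
#include <mmsystem.h>
#include <cstdio>

void CALLBACK MidiInProc(HMIDIIN, UINT wMsg, DWORD_PTR,
                         DWORD_PTR dwParam1, DWORD_PTR)
{
    if (wMsg == MIM_DATA) {
        // dwParam1 packs: status | data1 << 8 | data2 << 16
        std::printf("status=%02X data1=%02X data2=%02X\n",
                    (unsigned)(dwParam1 & 0xFF),
                    (unsigned)((dwParam1 >> 8) & 0xFF),
                    (unsigned)((dwParam1 >> 16) & 0xFF));
    }
}

int main()
{
    UINT n = midiInGetNumDevs();
    std::printf("%u MIDI input device(s) found\n", n);
    if (n == 0) return 1;

    HMIDIIN hIn = nullptr;  // open device 0 -- adjust as needed
    if (midiInOpen(&hIn, 0, (DWORD_PTR)MidiInProc, 0,
                   CALLBACK_FUNCTION) != MMSYSERR_NOERROR) return 1;
    midiInStart(hIn);
    std::printf("Press the pedal buttons, then Enter to quit...\n");
    std::getchar();
    midiInStop(hIn);
    midiInClose(hIn);
    return 0;
}
```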


  7. 11 hours ago, Victor Payne said:

    Hi, I have some old Kit for my guitar setup , A digitech RPX400 and control foot pedal device with three selections to allow hands free 1)Stop/rewind, 2)Record, 3)Play/Pause 

    I have tried using the ACT learn function and assigning predefined options to each but the surface does not seem to save the presets to the action of the pedal 

    i.e I select the surface then the button 3 which I have given the play/pause function through the options setup, I then switch on the ACT learn , select the button 3 , the screen says Learn Midi, I press button 3 on the Physical foot pedal and the surface stops the learn.  I switch off ACT learn, save the surface .

    When I try button 3 on the  physical foot pedal it starts play on the DAW  and then when I press it again it does a STOP/Rewind to beginning of the track 

    Nothing I do will actually assign the Play/Pause function !!

    This is a simple 3 button footswitch and the DAW does respond but I cannot get the ACT learn function to assign the right function to the button 

    surely if there is a simple device it's a 3 button foot control pedal, but I'm not able to make it assign the correct function. Should I give up now or does someone have some advice that could help me make this function?

    Thank you in advance if you have some advice to offer.

     

    If some device can send MIDI messages, you can configure it to stop/record/play (or to do many other things).

    • you are mixing many terms, so please write exactly:
      1. which device you are trying to use as a MIDI controller: the RPX400 or the "control foot pedal device"? If the latter, which one (name/model)?
      2. which Cakewalk Surface plug-in you are trying to configure: "ACT MIDI Controller" or "Cakewalk generic surface"?
    • the "ACT Learn" cell/button you can see in plug-ins (including the mentioned Surface plug-ins and VSTs) has nothing to do with what you are trying to achieve. You need to do "MIDI Learn" (only) inside the Surface plug-in (sometimes called an "ACT plug-in"). Yes, the word "ACT" is used for many different things.
    • some devices/controls don't send simple MIDI messages; in that case you can't "learn" them, you need to enter the message manually. Others can send more than one message from the same control. So my question (1) is important; we can have a look into the concrete documentation to find the right way for you.

  8. 21 hours ago, Blades said:

      Notably, setting the USB no-suspend thing in the power options (even with Ultimate option, this wasn't set)...

    You are right, not there by default... Now I have to check what else I have manually changed, so I can do that again when needed... 🤔


  9. 17 hours ago, rsinger said:

    What I've read is that usb 3.0/3.1 provides more bandwidth (you can have more channels), but doesn't improve latency. That's why we haven't seen a lot of new usb 3/3.1 interfaces. If you have other devices on your usb hub besides the interface it may help due to increased bandwidth/throughput.

    USB 1.1 has sufficient bandwidth for 2x2. What makes the difference for audio interfaces between USB 1/2/3, FireWire and Thunderbolt is the communication organization. E.g. USB is a bus with a predefined minimum for communication "cycles", and that minimum is relatively high for USB1 and USB2. That is the reason you can't find USB1/2 interfaces with latency (RTL) lower than some value (for USB1 it was quite big, with a significant improvement in USB2). USB3/FW/TB/PCI(e) open the possibility to make it lower, which some interfaces use (down to 1ms).
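    To put rough numbers on those "cycles": full-speed USB1 schedules transfers in 1 ms frames and high-speed USB2 in 125 µs microframes (from the USB specifications; the 64-sample example below is my own illustration). So at 48 kHz with 64-sample buffers:

```latex
\mathrm{RTL} \;\gtrsim\; \underbrace{2 \cdot \tfrac{64}{48000\,\mathrm{Hz}}}_{\text{input + output buffer}} + \;\text{bus scheduling}\; \approx\; 2.7\,\mathrm{ms} + \text{overhead}
```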

    So USB3/TB can improve latency, when used properly in hardware and drivers. But since USB2 can go down to ~3ms, and lower latency requires very special system settings to work stably, the market is limited and so is the number of such devices.

     

    12 hours ago, Billy86 said:

    Thanks for weighing in everyone. I’ve got a decent i9/9900 system and maxed out RAM @ 64 Gb. I also run ParkControl, which prevents Windows from parking cores (which I understand it does by default), spreading out the CPU workload across all cores; in my case 8 cores/16 threads. This is what multi-core processing for plugins option does in CbB. ParkControl has always worked well for me, so I’ve stuck with it, and don’t run CbB load balancing 

    Disabling core parking (in general, disabling C-state changes) is the way to bring occasional system latency down from ~250µs to ~50µs. Unfortunately that dissipates significant heat (e.g. my i9 will constantly draw >=90W). It is, however, the only way to work with sub-64-sample buffers and/or to bring the possible CPU load closer to the theoretical max without introducing audio glitches. But the price (in terms of noise, or a super-silent cooling system) is too high for an average user...

    Plug-in multi-core processing in Cakewalk (as I understand it) is based on parallelizing processing after splitting the audio buffer (that is why there is a lowest buffer size with which it can be enabled); that effect can't be achieved with external tools.


    • the tip with the power plan is good. To be on the "safe side", there is also the "Ultimate" power plan. Note that by default Windows does NOT show all available (and so many relevant) options in the power plan editor, so it is e.g. not possible to manually edit one power plan into another (there is a registry tweak on GitHub to show all options). Simply switching to a properly constructed power plan covers all recommended settings (e.g. those in the mentioned MOTU pdf). Also, I have found the on-the-fly switcher https://www.microsoft.com/en-us/p/powerplanswitcher/9nblggh556l3 useful, since I use one computer for everything (there is no reason to keep it "Ultimate" all the time).
    • disabling WiFi, NVIDIA audio and in fact all other devices which are not in use is also a good idea in general.
    • but switching priority to background processes is not a good idea in general... Properly written drivers + a properly written DAW RT part should take care of priorities. Sometimes priority to background helps; sometimes running the DAW as a background process also changes the result. All of that is a dirty workaround. Obviously we want the DAW to get resources as quickly as possible, except for audio driver activity. E.g. we don't want Windows' own scheduled tasks to have priority over the DAW. It seems like the problem is with some audio drivers, which for some reason run something as a background process. So once the DAW (a heavy plug-in) uses resources, the driver can't get the required time slot in time. For me, that is the only reasonable explanation why shifting the general priority can have a positive influence. Note there are some "tools" which allow manually setting the priority of particular processes/threads, and some people report that works.

    I want to add one point, which I noticed by accident recently and which doesn't seem to be mentioned often:

    • sharing the driver between applications can drastically affect the stable buffer size. I have checked with my M-Audio and Phonic; both allow ASIO in parallel with other modes. Once the same device is opened by another application (e.g. a web browser is running), even in case that other application is not producing any sound, small buffers start to glitch in the DAW.

  10. 17 hours ago, George D said:

    Is it common the phenomenon the engine to load 100% and the audio processing to be lower or they usually go to the same level?

    Just downloaded reaper and insert the project of cakewalk there. I also insert the same plugins and the project did not drop out. I've insert twice the time the plugins of cakewalk and the cpu reached 65-70% and started to have some clicks and pops.

    Cakewalk always works in "real time"; REAPER by default uses an anticipative engine. The latter has obvious advantages, but there are some disadvantages as well (e.g. try to play live with several tiny "look ahead" plug-ins...).

    To really compare, record-arm all tracks in REAPER (or switch off anticipative processing). I must admit that in most cases I still get better performance in REAPER, and I was really surprised when I hit the opposite the first time, but under some conditions Cakewalk can deal with the same project better in real time.

    Some people have switched to REAPER for this (and other) reasons, but others stay with Cakewalk or use both (or even more DAWs), also for good reasons...

    In any case, I always recommend having REAPER+ReaCWP installed alongside Cakewalk. In case of questions and/or troubles (what is the plug-in CPU load on each track? which plug-ins have "look ahead"? which plug-in is crashing the DAW? etc.), just open the project in REAPER and check the performance meter / use plug-in isolation / other "debugging" tools. Sure, most real projects will not sound the same (no Cakewalk-specific plug-ins except the ProChannel EQ will work after conversion, along with other differences), but for debugging that should be more than sufficient.


  11. 13 hours ago, Misha said:

    I feel like completely disconnected. 

    When recording, sure, I press little "fx" button and can get away with 256  buffers... 

    But when mixing, having a bunch of dynamic, lookahead vsts (8 or more) plus  some Kontakt/Halion stuff, I can not getaway with anything lower that 2048.  

     i7-8850H/32ram/nvme m.2/ usb 3 + thunderbolt.. Tried both usb controllers. Oh well. Thanks everyone!

    Have you checked your computer for "audio processing compatibility"? I mean the Ultimate power plan, a latency monitor, CPU throttling, etc. Something has to make your system (unexpectedly) busy for more than 20ms to force a 2048 buffer size.
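    That 20ms estimate follows directly from the buffer length (assuming a 48 kHz sample rate): a buffer only glitches when something stalls the system for longer than one buffer period, so

```latex
\frac{1024}{48000\,\mathrm{Hz}} \approx 21\,\mathrm{ms}, \qquad \frac{2048}{48000\,\mathrm{Hz}} \approx 43\,\mathrm{ms}
```

    I.e. being forced up to 2048 means something blocks audio processing for more than ~21ms at a time.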

    Another quick check: open the project in REAPER (with ReaCWP; that should load some if not all plug-ins with the project-specified settings). Check the performance monitor to detect what is going on (it will display CPU load per track, RT load, etc.).

    Even on an old notebook with Realtek and ASIO4ALL I was never forced to set more than 192 for recording, if the project could work at all (if the CPU is insufficient, the buffer size doesn't help). I think 256 is a "safe maximum" for mixing on modern systems; it tolerates unoptimized systems and other glitches. Your system should be able to record at 128 with many FX/VSTi, I mean with any interface (if everything is optimized and the interface is reasonable, 64 or even lower should work without glitches).

    PS. Lookahead in plug-ins increases RTL but has no direct influence on the buffer size nor CPU use. Lookahead is just an algorithm-forced approach; by itself it doesn't indicate the plug-in is CPU heavy.


  12. I don't think buffer processing overhead plays a significant role at such buffer sizes, so the need to go over 1024 comes from some severe jitter in processing. It can be a seriously underpowered, or a not optimized for audio processing, system. Do simple projects (e.g. audio + a non-sample-based synth + FXes) run fine with a low buffer (64; in the worst case of a 10-year-old Celeron, 128)? If yes, does the same project still run fine with 1-2 Kontakt instruments? If both are fine, I guess the system is underpowered for the current project. If a "CPU only" project runs fine but a sample-based one has troubles, a closer look at the disk system (disks, controller, settings, fragmentation) should help to understand where it comes from. If a CPU-only project doesn't run with 128, something in the system introduces (unexpected) latency, so the system settings are not optimal.

    I guess MOTU think that modern computers don't need huge buffers; also, in some DAWs the buffer size has little impact on the possible mixing project size (mixing doesn't work in real time).


  13. 12 hours ago, msmcleod said:

    hmm - 1000 parameters is a lot.

    I don't think the CS API was ever designed for querying such a large amount of parameters. If that had been envisaged, it would probably have had a call to return all the common parameters in a single struct as the result of a single call.

    Maybe I have misunderstood the intention with EQ/Comp; it will be way less than that.

    By itself, 1000 parameters under the default timing is not a problem; there are some presets which use that amount. At the beginning I had worries, so my monitors have a "speed" parameter so as not to ask every cycle. In practice, I have not hit significant CPU use nor audio glitches when monitoring every cycle. All my presets have all monitors in that mode.

    But the CPU time spent on the requests is absolute: let's say 1ms per loop, which is just 1/75 of the non-RT time under the default (75ms) refresh cycle. With a 10ms cycle that is 1/10 and can start to influence something.


  14. 5 hours ago, msmcleod said:

    I don't think finding the change should take too long - assuming that you're not querying too many parameters, and the COM overhead isn't too high (it shouldn't be as by that point, you've already got the interface, and it's not inter-process).  Cakewalk already knows the state, and all you're doing is a parameter look up on that state. There's no messages involved, just function calls and a big switch statement.

    24 tracks × (track, send, EQ and Comp parameters), so O(1000). It takes at least 2-3 calls per parameter to get the information. Not sure when it starts to take significant time, but there should be some limit even on modern computers 😉
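    For the curious, the "routine scanning" looks roughly like this (a minimal sketch assuming the ISonarMixer::GetMixParamValue() call from the published Cakewalk CS SDK; enum/method names are from memory, verify against the SDK headers; SendFeedbackToHardware() is a hypothetical helper):

```cpp
#include <vector>
// SONAR_MIXER_STRIP / SONAR_MIXER_PARAM / ISonarMixer and FAILED()
// come from the Cakewalk Control Surface SDK / Windows headers.

struct WatchedParam {
    SONAR_MIXER_STRIP strip;    // e.g. MIX_STRIP_TRACK
    DWORD             stripNum; // 0..23 for 24 tracks
    SONAR_MIXER_PARAM param;    // e.g. MIX_PARAM_VOL
    DWORD             paramNum;
    float             lastValue = -1.f;
};

// Called once per surface refresh cycle (default ~75 ms):
void ScanForChanges(ISonarMixer* pMixer, std::vector<WatchedParam>& watched)
{
    for (WatchedParam& w : watched) {
        float v = 0.f;
        if (FAILED(pMixer->GetMixParamValue(w.strip, w.stripNum,
                                            w.param, w.paramNum, &v)))
            continue;                     // strip may no longer exist
        if (v != w.lastValue) {           // change found -> send feedback
            w.lastValue = v;
            SendFeedbackToHardware(w, v); // hypothetical helper
        }
    }
}
```

    With 24 tracks × some 40 parameters each, that loop alone is already ~1000 calls per cycle, which is where the O(1000) above comes from.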


  15. Simple controls can be used in more modes than just "Jump" and "Soft takeoff". I have played with that in AZ Controller and found 2 additional modes useful:

    •  "Endless imitation" mode . Knob/fader always change parameter from current value, but it does that in one half of its range for particular direction. That allows "fine tuning", which is not possible otherwise.
    • "Instant" mode. Knob/fader always change parameter from current value and does that from any position. But "resolution" is initially lower when position and value mismatch. That allows instant coarse changes without jump nor soft takeoff.

    In addition, especially in "Instant" mode, non-linear curves can work better for particular parameters.
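    To make the "Instant" mode idea concrete, here is my own minimal illustration of such a mapping (not AZ Controller's actual code): the parameter always reacts, and the scaling makes the knob and value converge at the range ends, so the resolution is coarser while they mismatch:

```cpp
#include <algorithm>

struct InstantMode {
    double value   = 0.0;  // parameter value, normalized 0..1
    double prevPos = 0.0;  // previous knob position, normalized 0..1

    double onKnobMove(double pos)  // pos: new knob position, 0..1
    {
        const double delta = pos - prevPos;
        if (delta > 0.0) {
            // headroom left in the value vs. travel left on the knob
            const double scale = (1.0 - value) / std::max(1.0 - prevPos, 1e-9);
            value += delta * scale;
        } else if (delta < 0.0) {
            const double scale = value / std::max(prevPos, 1e-9);
            value += delta * scale;
        }
        prevPos = pos;
        value = std::clamp(value, 0.0, 1.0);
        return value;  // send to the DAW parameter
    }
};
```

    When the knob position and value already match, the scale is 1 and the control tracks exactly; when they mismatch, the parameter still moves immediately, just coarser, until both meet at an end of the range.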

    For myself, I have found that tuning something is convenient with encoders (e.g. the X-Touch MINI can provide almost the same functionality as MCU-like devices), while automation or live changes during playing are better with normal controls. I mean, making smooth-in-time changes with Behringer encoders is almost impossible for me. I briefly tested NI encoders for that and the feeling was better, but still not the same as turning/moving the "real" thing 🙄


  16. I don't think a MIDI I/O speedup is going to bring any improvement for the topic in this particular case. The MIDI messages in question are reactions to parameter changes in Cakewalk. So something has to change within 10ms and AZ Controller has to find that change. In the Cakewalk CS API that is always done by routine scanning, which by itself is going to take some time (primarily in Cakewalk). Also, the current buffer size (I guess) is going to add its length as jitter to that process, since the CS loop is not synced with RT audio.

    BTW I have already proposed on my forum that smooth parameter changes should be inter-/extrapolated on the hardware side (unlike in VST3, no ramp info is possible in the CS API, so I understand it is not easy to decide how to do this).


  17. A bit late, but ACT supports jog wheels. E.g. the MCU uses one.

    ACT is a C++ API, so it "supports" any kind of device, as long as someone implements support for a particular device using it. There are no restrictions there; the API is open and has the MIT License (open source, public and usable for free by everyone, unlike related APIs in most other DAWs).

    BTW, for someone who doesn't like to write C++, there is AZ Controller. A "Jog" Action with all related parameters, ready to use with any encoder (including touchscreen apps through OSC), takes about 1 minute to configure with it.

    There are many things ACT does not support, e.g. content editing and the matrix. But the jog wheel and almost all other mixing-related operations are already supported.


  18. 3 hours ago, Heinz Hupfer said:

    If you send me the Midi Controls and what you want to have on these Knobs or Buttons I can write one...

    If you want to try your luck at "deep integration" without having the surface, I can give you the required information... The real problem (from experience) is finding someone with the device who is willing to test the result 😉


  19. 17 hours ago, Richie_01 said:

     I get a little annoyed by misleading statements like "it can't be done" or "VST2 is obsolete".  From the technical point incorrect. Technical issues are not a matter of opinion but ask for mathematical accuracy more often.

    If you are annoyed by something, that is not proof it is incorrect. Let me quote you: "technical issues are not a matter of opinion".

    VST2 is obsolete: https://forums.steinberg.net/t/permission-to-develop-vst2-host-that-will-never-be-distributed/202042
    In other words, it is illegal to develop VST2 plug-ins for anyone who has not signed the related license before 2018, i.e. before it was finally declared obsolete (the first time it was declared obsolete was 2013). Note that Steinberg holds full rights to VST.

    Note that VST has a plug-in and a host side. So no new DAW is allowed to officially support VST2, unless it is developed by someone with an existing license.

    * I am aware that in the EU (and probably some other countries) binary API (ABI) re-implementation is allowed (by precedent) without permission from the rights holder. And such an implementation exists. But that is a "gray" area in general.

    To understand why Program Change MIDI messages are tricky to use in VST3, you will need to read (and understand) the official VST3 SDK documentation and the source code.

     

    But sure, in case "your world" is not the same as "my world" (BTW I am from planet Earth), the VST2 license (gravity, etc.) can be different for you. At least, from your last message, you also have holidays now, so we have something in common 🙂


  20. From what I know, no one has implemented deep integration with Cakewalk for NI keyboards (I don't have the keyboard, but I try to track reports of deep integration of any surface with Cakewalk). So there is no "as in some other DAWs" integration.

    Originally, the MK2 (unlike the MK1) had MIDI control for DAWs disabled (DAWs had to use OSC for the integration). Later NI enabled MIDI for all S MK2 and A keyboards. Details of the used protocol are not public, but interested developers can get them. What I mean:

    1.  not only Cakewalk and NI, but anyone with sufficient skills can provide a deep solution (e.g. such solutions exist for REAPER)
    2.  partial functionality (e.g. transport) is relatively easy to implement, since there are several "generic" plug-ins with MIDI support. So someone is probably using that already (though I have not seen success reports).