Everything posted by azslow3

  1. If a device can send MIDI messages, you can configure it to stop/record/play (or to do many other things). You are mixing many terms, so please write exactly: (1) which device you are trying to use as a MIDI controller, the RPX400 or the "control foot pedal device" (if the latter, which one, name/model)? (2) which Cakewalk Surface plug-in you are trying to configure, "ACT MIDI Controller" or "Cakewalk generic surface"? The "ACT Learn" cell/button you can see in plug-ins (including the mentioned Surface plug-ins and VSTs) has nothing to do with what you are trying to achieve. You need to do "MIDI Learn" only inside the Surface plug-in (sometimes called the "ACT plug-in"). Yes, the word "ACT" is used for many different things. Some devices/controls don't send simple MIDI messages; in that case you can't "learn" them and you need to enter the message manually. Others can send more than one message from the same control. So my question (1) is important: with the concrete documentation we can find the right way for you.
  2. You are right, it is not there by default... Now I have to check what else I have manually changed, so I can do it again when needed...
  3. USB 1.1 has sufficient bandwidth for 2x2. What makes the difference for audio interfaces between USB 1/2/3, FireWire and Thunderbolt is how the communication is organized. F.e. USB is a bus with a predefined minimum for communication "cycles", and that minimum is relatively high for USB1 and USB2. That is the reason you can't find USB1/USB2 interfaces with latency (RTL) lower than some value (for USB1 it was quite big, with a significant improvement in USB2). USB3/FW/TB/PCI(e) open the possibility to make it lower, which some interfaces use (down to 1ms). So USB3/TB can improve latency when used properly in hardware and in drivers. But since USB2 can go down to ~3ms (see the rough calculation below), and lower latency requires very special system settings to work stably, the market is limited and so is the number of such devices. Disabling core parking (in general, disabling C-state changes) is the way to bring occasional system latency down from ~250µs to ~50µs. Unfortunately that dissipates significant heat (f.e. my i9 will constantly dissipate >=90W). Unfortunately that is the only way to work with sub-64-sample buffers and/or to bring possible CPU load closer to the theoretical maximum without introducing audio glitches. But the price (in terms of noise or a super-silent cooling system) is too high for an average user... Plug-in multi-core processing in Cakewalk (as I understand it) is based on parallelizing processing after splitting the audio buffer (that is why there is a lowest buffer size with which it can be enabled); that effect can't be achieved with external tools.
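To make the buffer/latency relation in the post above concrete, here is a minimal sketch (my own illustration, not taken from any driver SDK). It uses the usual approximation of one input plus one output buffer and a fixed driver/transport overhead; the overhead value is an assumption for scale only.

```cpp
#include <cstdio>

int main() {
    const double sampleRate = 48000.0;      // Hz
    const double overheadMs = 1.0;          // assumed fixed driver/USB transfer overhead, illustration only
    const int bufferSizes[] = {32, 64, 128, 256};

    for (int frames : bufferSizes) {
        double oneWayMs = 1000.0 * frames / sampleRate;   // time covered by one buffer
        double rtlMs = 2.0 * oneWayMs + overheadMs;       // input buffer + output buffer + overhead
        std::printf("%4d samples -> ~%.2f ms one way, ~%.2f ms RTL\n", frames, oneWayMs, rtlMs);
    }
    return 0;
}
```

With these assumptions a 64-sample buffer already lands near the ~3ms RTL mentioned above, which is why going much lower needs both a faster transport and a very well behaved system.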
  4. The tip with the power plan is good. To be on the "safe side", there is also the "Ultimate" power plan. Note that by default Windows does NOT show all available (and so many relevant) options in the power plan editor, so it is f.e. not possible to manually edit one power plan into another (there is a registry tweak on GitHub to show all options). Simply switching to a properly constructed power plan covers all recommended settings (f.e. from the mentioned MOTU pdf). Also I have found the on-the-fly switcher https://www.microsoft.com/en-us/p/powerplanswitcher/9nblggh556l3 useful, since I use one computer for everything (no reason to keep it on "Ultimate" all the time). Disabling WiFi, NVIDIA audio and in fact all other devices which are not in use is also a good idea in general. But switching priority to background processes is not a good idea in general... Properly written drivers + a properly written DAW RT part should take care of priorities. Sometimes priority to background helps, sometimes running the DAW as a background process also changes the result. All of that is a dirty workaround. Obviously we want the DAW to get resources as quickly as possible, except for audio driver activity. F.e. we don't want Windows' own scheduled tasks to have priority over the DAW. It seems the problem is with some audio drivers, which for some reason run something as a background process. So once the DAW (a heavy plug-in) uses resources, the driver can't get the required time slot in time. For me, that is the only reasonable explanation why shifting the general priority can have a positive influence. Note there are some "tools" which allow manually setting the priority of particular processes/threads, and some people report that works. I want to add one point which I noticed by accident recently and which seems not to be mentioned often: sharing the driver between applications can drastically affect the stable buffer size. I have checked with my M-Audio and Phonic. Both allow ASIO in parallel with other modes. Once the same device is opened by another application (f.e. a web browser is running), even when that other application is not producing any sound, small buffers start to glitch in the DAW.
  5. Cakewalk always works in "real time", REAPER by default uses an anticipative engine. The latter has obvious advantages, but there are some disadvantages as well (f.e. try to play live with several tiny "look ahead" plug-ins...). To really compare, record-arm all tracks in REAPER (or switch off anticipative processing). I must admit that in most cases I still get better performance in REAPER, and I was really surprised when I hit the opposite the first time, but under some conditions Cakewalk can deal with the same project in real time better. Some people have switched to REAPER for this (and other) reasons, but others stay with Cakewalk or use both (or even more DAWs), also for good reasons... In any case, I always recommend having REAPER+ReaCWP installed next to Cakewalk. In case of questions and/or troubles (what is the plug-in CPU load on each track? which plug-ins have "look ahead"? which plug-in is crashing the DAW? etc.), just open the project in REAPER and check the performance meter, use plug-in isolation and other "debugging" tools. Sure, most real projects will not sound the same (no Cakewalk-specific plug-ins except the ProChannel EQ will work after conversion, along with other differences), but for debugging that should be more than sufficient.
  6. Have you checked your computer for "audio processing compatibility"? I mean Ultimate power plan, latency monitor, CPU throttling, etc. Something has to make your system (unexpectedly) busy for more than 20ms to force a 2048 buffer size. Another quick check: open the project in REAPER (with ReaCWP, which should load some if not all plug-ins with the project-specified settings). Check the performance monitor to see what is going on (it will display CPU load per track, RT load, etc.). Even on an old notebook with Realtek and ASIO4All I was never forced to set more than 192 for recording, if the project could work at all (if the CPU is insufficient, the buffer size doesn't help). I think 256 is a "safe maximum" for mixing on modern systems; it tolerates non-optimized systems and other glitches. Your system should be able to record at 128 with many FX/VSTi. I mean with any interface (if everything is optimized and the interface is reasonable, 64 or even lower should work without glitches). PS. Lookahead in plug-ins increases RTL but has no direct influence on the buffer size nor on CPU use. Lookahead is just an approach forced by the algorithm; by itself it doesn't indicate the plug-in is CPU heavy.
  7. I don't think the buffer processing overhead plays a significant role at such a buffer size, so the need to go over 1024 comes from some severe jitter in processing. It can be a seriously underpowered system or one not optimized for audio processing. Do simple projects (f.e. audio + a non-sample-based synth + FXes) run fine with a low buffer (64, or in the worst case of a 10-year-old Celeron, 128)? If yes, does the same project still run fine with 1-2 Kontakt instruments? If both are fine, I guess the system is underpowered for the current project. If a "CPU only" project runs fine but a sample-based one has troubles, a closer look at the disk system (disks, controller, settings, fragmentation) should help to understand where it comes from. If a CPU-only project doesn't run with 128, something in the system introduces (unexpected) latency, so the system settings are not optimal. I guess MOTU think that modern computers don't need huge buffers; also in some DAWs the buffer size has little impact on the possible mixing project size (mixing doesn't work in real time).
  8. Maybe I have misunderstood the intention with EQ/Comp, then it will be way less than that. By itself, 1000 parameters under the default timing is not a problem; there are some presets which use that amount. At the beginning I had worries, so my monitors have a "speed" parameter to not ask every cycle. In practice, I have not hit significant CPU use nor audio glitches by monitoring every cycle. All my presets have all monitors in that mode. But when the requested CPU time is absolute, let's say 1ms per loop, that is just 1/75 of non-RT processing with the default refresh period. With a 10ms cycle that is 1/10 and can start to influence something.
  9. 24 tracks x (track, sends, EQ and Comp parameters). So O(1000). It takes at least 2-3 times more calls than parameters to get the information. I'm not sure when it starts to take significant time, but there should be some limit even on modern computers (a rough estimate is sketched below).
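To put rough numbers on the two posts above, here is a small back-of-the-envelope sketch. It is my own illustration: the per-call cost and the 75ms default refresh period are assumptions, the latter inferred from the 1/75 figure mentioned above, not measured Cakewalk values.

```cpp
#include <cstdio>

int main() {
    const int strips = 24;
    const int paramsPerStrip = 40;        // assumed: strip + sends + EQ + Comp parameters
    const int callsPerParam = 3;          // "at least 2-3 calls per parameter"
    const double usPerCall = 0.35;        // assumed per-call cost, chosen so the total is ~1 ms per loop

    int params = strips * paramsPerStrip; // O(1000) parameters
    double scanMs = params * callsPerParam * usPerCall / 1000.0;

    std::printf("~%d params, one scan ~%.2f ms\n", params, scanMs);
    std::printf("75 ms refresh: %.1f%% of the loop\n", 100.0 * scanMs / 75.0);
    std::printf("10 ms refresh: %.1f%% of the loop\n", 100.0 * scanMs / 10.0);
    return 0;
}
```

Under these assumptions one full scan costs about 1ms, which is negligible at a 75ms refresh but already a tenth of the budget at 10ms.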
  10. Simple controls can be used in more than just the "Jump" and "Soft takeover" modes. I have played with that in AZ Controller and I have found 2 additional modes useful. "Endless imitation" mode: the knob/fader always changes the parameter from its current value, but it does that in one half of its range for a particular direction. That allows "fine tuning", which is not possible otherwise. "Instant" mode: the knob/fader always changes the parameter from its current value and does that from any position, but the "resolution" is initially lower when position and value mismatch. That allows instant coarse changes without a jump or soft takeover. In addition, especially in "Instant" mode, non-linear curves can work better for particular parameters. (A toy sketch of the "Instant" mode idea follows below.) For myself, I have found that tuning something is convenient with encoders (f.e. the X-Touch MINI can provide almost the same functionality as MCU-like devices), while automating or making live changes during playing is better with normal controls. I mean, making smooth-in-time changes with Behringer encoders is almost impossible for me. I have briefly tested NI encoders for that and the feeling was better, but still not the same as turning/moving a "real" thing.
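Here is one plausible way such an "Instant"-like mode could be implemented. This is my own interpretation of the description above, not AZ Controller's actual code: the parameter always moves from its current value, and the step size is scaled so the parameter reaches its min/max exactly when the knob does, which makes the resolution coarser while position and value disagree.

```cpp
#include <algorithm>
#include <cstdio>

// Sketch of an "Instant"-style mapping (illustrative only, values normalized 0..1).
double instantMode(double param, double knobOld, double knobNew) {
    double delta = knobNew - knobOld;
    if (delta == 0.0) return param;
    // Remaining ranges in the direction of movement.
    double knobRange  = (delta > 0.0) ? (1.0 - knobOld) : knobOld;
    double paramRange = (delta > 0.0) ? (1.0 - param)   : param;
    if (knobRange <= 0.0) return param;
    double scale = paramRange / knobRange;   // coarser resolution while value and position mismatch
    return std::clamp(param + delta * scale, 0.0, 1.0);
}

int main() {
    // Parameter at 0.2 while the knob sits at 0.8: moving the knob up still
    // raises the parameter immediately, just with coarse resolution at first.
    double p = instantMode(0.2, 0.8, 0.9);
    std::printf("param: %.3f\n", p);   // 0.2 + 0.1 * (0.8 / 0.2) = 0.6
    return 0;
}
```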
  11. I don't think a MIDI IO speedup is going to bring any improvement for the topic in this particular case. The MIDI messages in question are reactions to parameter changes in Cakewalk. So something has to change during 10ms and AZ Controller has to find that change. In the Cakewalk CS API that is always done by routine scanning, which by itself is going to take some time (primarily in Cakewalk). Also the current buffer size (I guess) is going to add its length as jitter to that process, since the CS loop is not synced with RT audio. (A generic polling sketch is shown below.) BTW, I have already proposed on my forum that smooth changes in parameters should be inter/extrapolated on the hardware side (unlike VST3, the CS API has no possibility for ramp info, so I understand it is not easy to decide how to do this).
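For readers unfamiliar with how such a scan works, here is a generic polling sketch. The function names are hypothetical placeholders standing in for the host accessors, not the actual Cakewalk Control Surface API.

```cpp
#include <cstdio>
#include <vector>

// Hypothetical host side, standing in for the real surface API.
static std::vector<double> g_hostParams = {0.5, 0.5, 0.5};                   // pretend host parameter values
double getParamValue(int index) { return g_hostParams[index]; }              // assumed accessor
void   sendFeedback(int index, double v) { std::printf("param %d -> %.2f\n", index, v); }

// Called every refresh cycle (e.g. every 10-75 ms), not synced with audio:
// scan all watched parameters and emit feedback only for those that changed.
void onRefresh(std::vector<double>& cache) {
    for (std::size_t i = 0; i < cache.size(); ++i) {
        double v = getParamValue((int)i);
        if (v != cache[i]) {                 // change detected since the previous scan
            cache[i] = v;
            sendFeedback((int)i, v);
        }
    }
}

int main() {
    std::vector<double> cache = g_hostParams;  // last values sent to the hardware
    g_hostParams[1] = 0.8;                     // a parameter changes somewhere in the host
    onRefresh(cache);                          // only parameter 1 produces feedback
    return 0;
}
```

The point of the post above is that the change is only noticed on the next scan, so the refresh period (plus audio-buffer jitter) bounds how quickly the hardware can react, no matter how fast the MIDI output itself is.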
  12. Do I understand that right, the proposal is to support a $30k device? Roland has tried with a $5k device... that was an epic fail. But Cakewalk was supporting some EUCON, so it can happen.
  13. A bit late, but ACT supports jog wheels. F.e. MCU uses it. ACT is a C++ API, so it "supports" any kind of device, as long as someone implements support for a particular device using it. No restrictions there: the API is open and has an MIT license (open source, public and free for everyone to use, unlike the related APIs in most other DAWs). BTW, for someone who doesn't like to write in C++, there is AZ Controller. A "Jog" Action with all related parameters, ready to use with any encoder (including touchscreen apps through OSC), takes about 1 minute to configure with it. There are many things ACT does not support, f.e. content editing and the matrix. But the jog wheel and almost all other mixing-related operations are supported already.
  14. If you want to try your luck with "deep integration" without having the surface, I can give you the required information... The real problem (from experience) is finding someone with the device who is willing to test the result.
  15. If you are annoyed by something, that is not a proof it is incorrect. Let me cite you: "technical issues are not a matter of opinion". VST2 is obsolete: https://forums.steinberg.net/t/permission-to-develop-vst2-host-that-will-never-be-distributed/202042 In other words, it is illegal to develop VST2 plug-ins for anyone who had not signed the related license before 2018, so before it was finally declared obsolete (the first time it was declared obsolete was 2013). Note that Steinberg has full rights to VST. Note that VST has a plug-in side and a host side. So no new DAW is allowed to officially support VST2, unless it is developed by someone with an existing license. * I am aware that in the EU (and probably some other countries) binary API (ABI) re-implementation is allowed (by precedents) without permission from the rights holder. And such an implementation exists. But that is a "gray" area in general. To understand why Program Change MIDI messages are tricky to use in VST3, you will need to read (and understand) the official VST3 SDK documentation and the source code. But sure, in case "your world" is not the same as "my world" (BTW I am from the planet Earth), the VST2 license (gravity, etc.) can be different for you. At least from your last message you also have holidays now, so we have something in common.
  16. From what I know, no one has implemented deep integration into Cakewalk for NI keyboards (I don't have the keyboard, but I try to trace reports of deep integration of any surface into Cakewalk). So no "as in some other DAWs" integration. Originally, the MK2 (unlike the MK1) had MIDI steering for DAWs disabled (DAWs had to use OSC for the integration). Later NI enabled MIDI for all S MK2 and A keyboards. Details of the protocol used are not public, but interested developers can get them. What I mean: not only Cakewalk and NI but also anyone with sufficient skills can provide a deep solution (f.e. such solutions exist for REAPER). Partial functionality (f.e. transport) is relatively easy to implement, since there are several "generic" plug-ins with MIDI support. So someone is probably using that already (while I have not seen success reports).
  17. "Like it should" is relative... they support "Logic Pro X, Ableton Live, Cubase, Nuendo und GarageBand". Cakewalk is not in the list. So, from NI perspective it will work "as it should" in Cakewalk, without all that features ? You pay NI $500+ for the controller, I think it is better ask NI to support Cakewalk then reversed, since you don't pay Cakewalk.
  18. As I have written in another thread, VST2 is obsolete and VST3 does not support "program changes" directly; it is forced to be a "preset change" possibly (auto-)bound to incoming PC MIDI messages by the host (the messages are not delivered to plug-ins). So that idea is closed: in the case of VST (which is VST3 only since 2018) PCs make no sense. Even before, while some soft synths could switch presets in "real time", most can't. They have to access the disk to load the preset and other related info (f.e. samples). In real time with "modern" latency that has to happen within 1-2ms. That doesn't work even with an i9 + M2 + a 50MB file, while a usual drum synth has to load 1-5GB. So, instant switching in general can work stably with "preloaded" presets only, and that in turn can be done with parallel tracks only. (A rough timing calculation follows below.)
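As a rough illustration of the numbers above, here is a small sketch; the disk throughput is an assumed figure for a fast NVMe drive, used for scale only.

```cpp
#include <cstdio>

int main() {
    const double sampleRate = 48000.0;
    const int    bufferFrames = 64;                        // "modern" low-latency buffer
    const double callbackBudgetMs = 1000.0 * bufferFrames / sampleRate;

    const double diskMBperSec = 3000.0;                    // assumed fast NVMe sequential read
    const double presetSizesMB[] = {50.0, 1000.0, 5000.0}; // small preset vs. sampled drum kits

    std::printf("callback budget: ~%.2f ms\n", callbackBudgetMs);
    for (double mb : presetSizesMB) {
        double loadMs = 1000.0 * mb / diskMBperSec;        // pure read time, ignoring parsing etc.
        std::printf("%6.0f MB preset -> ~%.0f ms to read (vs. %.2f ms budget)\n",
                    mb, loadMs, callbackBudgetMs);
    }
    return 0;
}
```

Even a 50MB preset takes an order of magnitude longer to read than one low-latency audio callback, which is why instant switching only works with presets that are already in memory.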
  19. Don't hope, by then there will be VST4. Steinberg has "fixed" the license for VST3: plug-in developers HAVE TO switch to whatever new version of VST Steinberg decides to release, within a fixed period of time (VST2 is still used many years after it was declared obsolete, and by now it is 2 years since no new developers could sign the VST2 license... Steinberg does not like that...).
  20. I forgot to mention... in CW there is "Translate program/bank changes" in the VST3 menu, which for me is off by default. I guess it should be turned on to give PC messages a chance to be delivered to VST3 plug-ins.
  21. For me, there is a good technical reason why MIDI is strange in VST3. I was digging into that when creating a GM VST3, obviously with the need to process program changes on all 16 channels (and other MIDI messages). Steinberg has KILLED MIDI in VST3: there is simply no MIDI input stream there as it is known in any other plug-in format (including VST2). Notes/PB are still there. CC is reasonably "workarounded" (a simplified sketch of that mechanism follows below). But PC is a disaster. Apart from questionable decisions, what they write in the documentation does not match what they have written in the code (SDK). So plug-in and host developers can only guess how that can/should work. BTW, my VST3 doesn't work in Cakewalk (https://github.com/AZSlow3/FluidSynthVST). It was written for REAPER (Linux and Windows) and it works there, but not in CW. While that is kind of "my fault", that by itself tells a lot about "compatibility" in the VST3 world. No surprise REAPER shows PCs (as parameters, in addition to duplicating them in "presets" the prescribed way), while Cakewalk stops showing parameters starting from the first PC. PS. I hope everything from Steinberg is declared obsolete one day. VST2 and ASIO were good, but both were made ill by the license and the "force obsolete" decision for VST2. VST3 is still in a DIY/prototype state, from the ideas, documentation and code point of view (not to mention it is C++ ABI dependent, something clearly destined to produce troubles coming from different compilers' C++ ABI incompatibility). The "quality" is best demonstrated in VST3 Linux; those parts were written by someone who had never programmed on Linux before and had no time to read the documentation... Even after some problem is identified, explained and a possible fix is provided, it takes them ages to fix it: just a week ago I got a notification that the problem/solution I reported more than a year ago is "fixed in the development version".
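For context on the CC "workaround" mentioned above: in VST3 the host asks the plug-in (through the IMidiMapping interface) which parameter a given controller should drive, and then delivers CC values as normalized parameter changes rather than MIDI bytes. The sketch below is a standalone mock of that idea with made-up types, not the real SDK classes.

```cpp
#include <cstdio>
#include <map>

// Standalone mock of the VST3 "CC as parameter" idea (not the real SDK types).
struct MockController {
    std::map<int, int> ccToParam = { {1, 100}, {7, 101} };   // mod wheel -> param 100, volume -> param 101

    // Host asks once: which parameter handles this controller on this channel?
    bool getMidiControllerAssignment(int channel, int cc, int& paramId) {
        (void)channel;                         // single-channel mock
        auto it = ccToParam.find(cc);
        if (it == ccToParam.end()) return false;
        paramId = it->second;
        return true;
    }

    void setParamNormalized(int paramId, double value) {
        std::printf("param %d <- %.3f\n", paramId, value);
    }
};

int main() {
    MockController plug;
    // Host side: an incoming CC#1 value 64 on channel 0 becomes a parameter change.
    int paramId = 0;
    if (plug.getMidiControllerAssignment(0, 1, paramId))
        plug.setParamNormalized(paramId, 64 / 127.0);
    // Program Change has no equivalent per-message query; it has to be modelled as a
    // "program list" parameter, which is the part the post above calls a disaster.
    return 0;
}
```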
  22. Ah sorry... I am Russian, but for a long time I haven't noticed the difference between Russian, German and English on YouTube... In short: when released, the M2/M4 had no real loopback (while it was declared). But that feature was added later. There are 2 loopbacks: one with just the output, one mixing the output with the hardware input (f.e. to record Windows sounds + mic). But there is just one "output", so if it is already used for loopback, you can't use it for something else. In other words, you can hear what you are recording, but you can't hear something you don't want to record (f.e. other tracks from Cakewalk, otherwise they are also recorded). MAY BE IMPORTANT: I have not found any proof that it all works with 2 ASIO applications. Loopback does not automatically mean 2x ASIO; it can be WDM+ASIO/WDM+WDM only.
  23. And that in fact can be problematic. I don't have a MOTU, but the M2 is a 2x2 interface. So in other words you most probably (I simply don't see any theoretical possibility) can't monitor the loopback recording together with Cakewalk, so Cakewalk output should be muted when you are recording (or there should be no other tracks in Cakewalk and the recorded track should have echo off). Logically the M4 should be able to do that, creating the loopback on one pair of outputs and Cakewalk output on another pair (since it is 4x4).
  24. All RME interfaces allow arbitrary loopback. By arbitrary I mean you are free to compose several mixes (in your case f.e. from one software's output) and use those mixes as inputs in other software. Those "loopbacks" can be used for other mixes, f.e. for headphones (mixed with any other signals, f.e. Cakewalk output). Note that the number of mixes is equal to the number of the interface's IO channels. F.e. the Babyface Pro has 12. The related physical channels are irrelevant, so you can use ADAT channels for loopbacks and still have all analog channels for something else. They are multi-ASIO capable, so both (or more) programs can use ASIO in parallel (or any driver mode). PS. I have an RME, so I have checked that all of this really works. PSPS. Without a "special" interface, you can try Voicemeeter Banana to do the trick. I have not checked it for multi-ASIO, but Skype->DAW with fancy routing works.
  25. As you can guess, when I was writing ReaCWP I was also checking the result. So on my own I could always null (down to a reasonable level) whatever I thought had to null. Your question can be solved the scientific way: upload 2 projects, CWP and RPP, with the same example audio and a free compressor which sounds/behaves differently. And let people explain the difference. If you can (have the time/internet/space), also render and upload the results which are different. It can be something apart from the projects producing that (but don't be surprised if the rendered output is different from what you hear; in that case please try to "render" the playback using audio loopback). Just to make it clear: I am convinced the plug-ins are working differently for you, it is not imagination nor "0.1dB". I simply try to pin down where it comes from, in case you are interested. But there are so many variables that guessing in the thread is not productive.