Everything posted by azslow3

  1. 24 tracks × (track, sends, EQ and Comp parameters), so O(1000) parameters. It takes at least 2-3 times more calls than there are parameters to get the information. I am not sure when that starts to take significant time, but there should be some limit even on modern computers 😉
  2. Simple controls can be used in more modes than just "Jump" and "Soft takeoff". I have played with that in AZ Controller, and I have found 2 additional modes useful:
     • "Endless imitation" mode: the knob/fader always changes the parameter from its current value, but it does so within one half of its range for a particular direction. That allows "fine tuning", which is not possible otherwise.
     • "Instant" mode: the knob/fader always changes the parameter from its current value and does so from any position, but the "resolution" is initially lower when position and value mismatch. That allows instant coarse changes without a jump and without soft takeoff.
     In addition, especially in "Instant" mode, non-linear curves can work better for particular parameters. For myself, I have found that tuning something is convenient with encoders (e.g. the X-Touch MINI can provide almost the same functionality as MCU-like devices), while automating or making live changes during playing works better with normal controls. I mean, making smooth changes over time with Behringer encoders is almost impossible for me. I have briefly tested NI encoders for that, and the feeling was better, but still not the same as turning/moving a "real" thing 🙄
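A minimal sketch of how an "Instant" mode catch-up curve could work, assuming all values are normalized to 0..1. The function and its exact scaling curve are my own illustration, not necessarily what AZ Controller implements:

```python
def instant_mode_step(value, pos, new_pos):
    """One hypothetical 'Instant' mode update.

    value:   current parameter value (0..1)
    pos:     previous physical knob/fader position (0..1)
    new_pos: new physical position (0..1)

    The parameter always moves from its current value, but the step
    is scaled so that value and position reach the range end together:
    coarse while they disagree, 1:1 once they are aligned.
    """
    delta = new_pos - pos
    if delta > 0:
        # moving up: map remaining value range onto remaining knob range
        scale = (1.0 - value) / (1.0 - pos) if pos < 1.0 else 1.0
    else:
        # moving down: map value range below onto knob range below
        scale = value / pos if pos > 0.0 else 1.0
    return min(1.0, max(0.0, value + delta * scale))
```

With value 0.1 and the knob at 0.9, moving the knob to the top moves the value all the way to 1.0, so there is no jump and no dead zone.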
  3. I don't think a MIDI IO speedup is going to bring any improvement for the topic in this particular case. The MIDI messages in question are reactions to parameter changes in Cakewalk. So something has to change within 10 ms and AZ Controller has to find that change. In the Cakewalk CS API that is always done by routine scanning, which by itself takes some time (primarily inside Cakewalk). Also the current buffer size (I guess) adds its length as jitter to that process, since the CS loop is not synced with RT audio. BTW, I have already proposed on my forum that smooth parameter changes should be inter/extrapolated on the hardware side (unlike VST3, the CS API cannot carry ramp info, so I understand it is not easy to decide how to do this).
  4. Do I understand that right, the proposal is to support a $30k device? Roland has tried with a $5k device... that was an epic fail 🙄 But Cakewalk was supporting some EUCON, so it can happen.
  5. A bit late, but ACT supports a jog wheel. E.g. MCU uses it. ACT is a C++ API, so it "supports" any kind of device, as long as someone implements support for a particular device using it. There are no restrictions: the API is open and has the MIT License (open source, public and free for everyone to use, unlike the related APIs in most other DAWs). BTW, for someone who doesn't like to write in C++, there is AZ Controller. A "Jog" Action with all related parameters, ready to use with any encoder (including touchscreen apps through OSC), takes about 1 minute to configure with it. There are many things ACT does not support, e.g. content editing and the Matrix. But the jog wheel and almost all other mixing-related operations are supported already.
  6. If you want to try your luck at "deep integration" without having the surface, I can give you the required information... The real problem (from experience) is finding someone with the device who is willing to test the result 😉
  7. If you are annoyed by something, that is not a proof it is incorrect. Let me cite you: "technical issues are not a matter of opinion". VST2 is obsolete: https://forums.steinberg.net/t/permission-to-develop-vst2-host-that-will-never-be-distributed/202042 In other words, it is illegal to develop VST2 plug-ins for anyone who has not signed the related license before 2018, so before it was finally declared obsolete (the first time it was declared obsolete was 2013). Note that Steinberg has full rights to VST. Note that VST has a plug-in side and a host side. So no new DAW is allowed to officially support VST2, unless it is developed by someone with an existing license. * I am aware that in the EU (and probably some other countries) binary interface (ABI) re-implementation is allowed (by precedent) without permission from the rights holder. And such an implementation exists. But that is a "gray" area in general. To understand why Program Change MIDI messages are tricky to use in VST3, you will need to read (and understand) the official VST3 SDK documentation and the source code. But sure, in case "your world" is not the same as "my world" (BTW I am from the planet Earth), the VST2 license (gravity, etc.) can be different for you. At least from your last message you also have holidays now, so we have something in common 🙂
  8. From what I know, no one has implemented deep integration into Cakewalk for NI keyboards (I don't have the keyboard, but I try to trace reports of deep integration of any surface into Cakewalk). So there is no "as in some other DAWs" integration. Originally, the MK2 (unlike the MK1) had MIDI steering for DAWs disabled (DAWs had to use OSC for the integration). Later NI enabled MIDI for all S MK2 and A keyboards. Details of the protocol used are not public, but interested developers can get them. What I mean: not only Cakewalk and NI, but anyone with sufficient skills can provide a deep solution (e.g. such solutions exist for REAPER). Partial functionality (e.g. transport) is relatively easy to implement, since there are several "generic" plug-ins with MIDI support. So someone is probably using that already (while I have not seen success reports).
  9. "Like it should" is relative... they support "Logic Pro X, Ableton Live, Cubase, Nuendo and GarageBand". Cakewalk is not in the list. So, from NI's perspective it will work "as it should" in Cakewalk, without all those features 😀 You pay NI $500+ for the controller; I think it is better to ask NI to support Cakewalk than the reverse, since you don't pay Cakewalk.
  10. As I have written in another thread, VST2 is obsolete and VST3 does not support "program changes" directly; they are forced to be "preset changes", possibly (auto) bound by the host to incoming PC MIDI messages (the messages themselves are not delivered to plug-ins). So that idea is closed: in the case of VST (which is VST3 only since 2018), PCs make no sense. Even before, while some soft synths could switch presets in "real time", most can't. They have to access the disk to load the preset and other related info (e.g. samples). In real time with "modern" latency that has to happen within 1-2 ms. That doesn't work even with an i9 + M2 + a 50 MB file, while a usual drum synth has to load 1-5 GB. So, instant switching in general can work stably with "preloaded" presets only, and that in turn can be done with parallel tracks only.
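A quick back-of-the-envelope check of why a disk load cannot fit into a real-time deadline (the ~3 GB/s NVMe throughput is an assumed figure for illustration, not from the post):

```python
def load_time_ms(size_bytes, throughput_bytes_per_s):
    """How long reading size_bytes from disk takes, in milliseconds."""
    return 1000.0 * size_bytes / throughput_bytes_per_s

# Even a fast NVMe drive (assumed ~3 GB/s) needs roughly 333 ms for 1 GB,
# far beyond a 1-2 ms real-time budget.
one_gb_ms = load_time_ms(1_000_000_000, 3_000_000_000)
```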
  11. Don't hope, by then there will be VST4 😉 Steinberg has "fixed" the license for VST3: plug-in developers HAVE TO switch to whatever new version of VST Steinberg decides to release, within a fixed period of time (VST2 is still used many years after it was declared obsolete, and it is by now 2 years since new developers could no longer sign the VST2 license... Steinberg does not like that...)
  12. I forgot to mention... in CW there is "Translate program/bank changes" in the VST3 menu, which for me is off by default. I guess it should be turned on to give PC messages a chance to be delivered to VST3 plug-ins.
  13. For me, there is a good technical reason why MIDI is strange in VST3. I was digging into that when creating a GM VST3, obviously with the need to process program changes on all 16 channels (and other MIDI messages). Steinberg has KILLED MIDI in VST3: there is simply no MIDI input stream there, as it is known in every other plug-in format (including VST2). Notes/PB are still there. CC is reasonably "workarounded". But PC is a disaster. Apart from the questionable decisions, what they write in the documentation does not match what they have written in the code (SDK). So plug-in and host developers can only guess how it can/should work. BTW, my VST3 doesn't work in Cakewalk (https://github.com/AZSlow3/FluidSynthVST). It was written for REAPER (Linux and Windows) and it works there, but not in CW. While that is in a way "my fault", it by itself tells a lot about "compatibility" in the VST3 world. No surprise that REAPER shows PCs (as parameters, in addition to the duplication in "presets" the prescribed way), while Cakewalk stops showing parameters starting from the first PC. PS. I hope everything from Steinberg is declared obsolete one day. VST2 and ASIO were good, but both were made ill by the license and the "force obsolete" decision for VST2. VST3 is still at a DIY/prototype level, from the ideas, documentation and code point of view (not to mention it is C++ ABI dependent, something clearly destined to produce troubles coming from different compilers' C++ ABI incompatibility). The "quality" is best demonstrated by VST3 on Linux; those parts were written by someone who had never programmed on Linux before and had no time to read the documentation... Even after a problem is identified, explained and a possible fix is provided, it takes them ages to fix it: just a week ago I got a notification that a problem/solution I reported more than a year ago is "fixed in the development version".
  14. Ah sorry... I am Russian, but for a long time I haven't noticed the difference between Russian, German and English on YouTube... 🙂 In short: when released, the M2/M4 had no real loopback (while it was declared). But that feature was added later. There are 2 loopbacks: one with just the output, and one mixing the output with the hardware input (e.g. to record Windows sounds + mic). But there is just one "output", so if it is already used for loopback, you can't use it for something else. In other words, you can hear what you are recording, but you can't hear something you don't want to record (e.g. other tracks from Cakewalk, otherwise they are also recorded). MAY BE IMPORTANT: I have not found any proof that everything works with 2 ASIO applications. Loopback does not automatically mean 2x ASIO; it can be WDM+ASIO/WDM+WDM only.
  15. And that in fact can be problematic. I don't have a MOTU, but the M2 is a 2x2 interface. In other words, you most probably (I simply do not see any theoretical possibility) can't monitor the loopback recording together with Cakewalk, so Cakewalk's output should be muted while you are recording (or there should be no other tracks in Cakewalk, and the recorded track should have echo off). Logically the M4 should be able to do that, creating the loopback on one pair of outputs and Cakewalk's output on the other pair (since it is 4x4).
  16. All RME interfaces allow arbitrary loopback. By arbitrary I mean you are free to compose several mixes (e.g. in your case from one software output) and use those mixes as inputs in other software. Those "loopbacks" can in turn be used in other mixes, e.g. for headphones (mixed with any other signals, e.g. Cakewalk output). Note that the number of mixes is equal to the number of the interface's IO channels; e.g. the Babyface Pro has 12. The related physical channels are irrelevant, so you can use ADAT channels for loopbacks and still have all analog channels for something else. They are multi-ASIO capable, so both (or more) programs can use ASIO in parallel (or any driver mode). PS. I have an RME, so I have checked that all of this really works. PSPS. Without a "special" interface, you can try Voicemeeter Banana to do the trick. I have not checked it for multi-ASIO, but Skype->DAW with fancy routing works.
  17. As you can guess, when I was writing ReaCWP I was also checking the result. So on my own I could always null (down to a reasonable level) whatever I thought had to null. Your question can be solved the scientific way: upload 2 projects, CWP and RPP, with the same example audio and a free compressor which sounds/behaves differently, and let people explain the difference. If you can (have the time/internet/space), also render and upload the results which are different. It can be something apart from the projects producing that (but don't be surprised if the rendered output is different from what you hear; in that case please try to "render" the playback using audio loopback). Just to make it clear: I am convinced the plug-ins are working differently for you; it is not imagination nor "0.1 dB". I am simply trying to pin down where it comes from, in case you are interested. But there are so many variables that guessing in the thread is not productive.
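The "null" comparison mentioned above can be sketched as subtracting the two renders sample by sample and reporting the residual peak in dBFS. A minimal illustration (real tests also need sample-accurate alignment of the two files, which the sketch assumes):

```python
import math

def null_test_db(a, b):
    """Peak of the difference between two renders, in dBFS.

    a, b: equal-length sequences of float samples in -1..1.
    Returns -inf for a perfect (bit-identical) null.
    """
    peak = max(abs(x - y) for x, y in zip(a, b))
    return 20.0 * math.log10(peak) if peak > 0.0 else float("-inf")
```

A residual well below roughly -100 dBFS is usually "nulled down to a reasonable level"; anything near the signal level means the two chains really differ.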
  18. You need to match the input first. As I have mentioned already, that is not easy:
     • Pan law and exact level matching have to be achieved. Note that the corresponding settings in Cakewalk and REAPER are different; the same result does not always come from almost the same words in the preferences, so it has to be checked explicitly.
     • Audio sample size used for processing. Check that both DAWs are in 64-bit processing mode (sample size as floating point, not program code).
     • Audio sample size and sample rate used for the input. E.g. if your recording is 96 kHz, Cakewalk applies a pre-conversion in case the project settings are different (and the project settings are always in sync with your audio interface). In REAPER the project sample rate and the interface sample rate can be different. Also note the conversion algorithms (e.g. different ones for "online" and "offline"). Don't forget Cakewalk can upsample/split buffers if the corresponding settings are activated.
     • The output chain, including rates and levels. Also don't forget to check that you have nothing in the REAPER Monitoring chain (since it is a kind of "outside of the project", it is easy to forget).
     • Plug-in settings, not only the current "preset". DAWs (can) indicate offline/online rendering mode to the plug-in, and the plug-in can use that. It is safer to compare "rendered" results and, in case they are the same, find the reason "real-time" is different (there are several related settings).
     And the list is far from complete... But one program (plug-in), with the same input and the same settings, produces the same result (down to a random component, which in most plug-ins is neither timeline nor host dependent). Computer programs are not "creative", and I have not heard of any plug-in explicitly made to sound different in different DAWs (while that is technically possible; if caught, the (commercial) plug-in developer would lose their reputation... and you know, in the audio world "belief" is almost everything).
  19. DAWs sound the same, down to a (theoretically) perceptible level. But the comparison should be done right; e.g. the default settings in REAPER differ from the default settings in Cakewalk. No, the output is not digitally equivalent. E.g. in Cakewalk audio is ALWAYS aligned to project samples; in REAPER it CAN BE aligned, but it is not by default, and the procedure is rather tricky (taking into account that the project, the clip and the audio interface can have different sample rates, alignment doesn't make much sense there). A good start for a comparison is just opening the Cakewalk project in REAPER with ReaCWP. Many settings which can be converted are converted, so the sound should match the original more closely than with a simple "copy paste" of audio/MIDI. BTW, in the ReaCWP documentation I explain many technical/internal differences between both DAWs (only from the Cakewalk->REAPER perspective, not touching the reverse direction).
-------- For the thread: I have recognized many advantages of Cakewalk after switching to REAPER; they were so "obvious" that I had not noticed them before... My colleague asks me really often: "I again can't find how to..." and my answer is "wait a moment... I have to remember...", realizing that in Cakewalk the same thing is simple and doesn't need "memorizing". REAPER is way more "flexible", but that has its price. Some highlights (the list is really long, so just a few):
     • For simple MIDI-based work, especially when (multi-track) MIDI files are the origin and/or the outcome, REAPER is in the "hard to use" up to "that doesn't work..." range. The step sequencer, Matrix, etc. are good in Cakewalk. The channel and other "must have" controls (e.g. "where can I select my MIDI keyboard for the track?") are well placed in Cakewalk.
     • For "safety" - REAPER all the way. Multi-platform, all versions (including the first one...) are available for download and take no time (13 MB) to download and install. Simultaneous versions are completely independent (portable install), offline authorization, unrestricted demo. So you know you can start the DAW anywhere, and you will be able to start it anywhere in the far future, independent of the company's or the license's existence.
     • REAPER was made to do anything without stopping the transport (Cakewalk is better than Sonar in that respect, but still some operations have to be done in silence).
     • Deep troubleshooting is built into REAPER, with all timings, CPU consumption and delays (per plug-in). Plus 2-way plug-in isolation.
     • REAPER's anticipative engine is way more forgiving when it comes to "play along"/mixing/mastering. It also voids the Plug-in Delay Compensation problem during recording. Note that "all in real time" somehow works better in Cakewalk, especially when some plug-ins in the monitoring chain have tiny delays (REAPER makes them longer...) or the system power is at its limit (in rather special conditions I could make a chain of plug-ins work without glitches in Cakewalk, while that couldn't be achieved in REAPER without raising the buffer).
     • For experiments: fancy routing (including MIDI) and modulations - REAPER. Control surfaces - Cakewalk (AZ Controller... not many understand that, but some do...).
     In reality, it is good to have both 😉
  20. Do you mean the vMPC2000XL freeware? ACT will not help there; maybe a Drum Map and some MIDI FXes...
  21. It is better to set the same sample rate for the interface in all applications than to convince the interface to switch into that rate (up to a reboot, in case it refuses switching by software). So, check the Windows sound settings and set the targeted sample rate (check all inputs and outputs visible in Windows individually), then check Cakewalk and the interface's control panel. That is especially important with one interface in the system; it will be the "default" for Windows and so most probably in use outside the DAW. Different interfaces react differently to switching requests; in the best case, Windows applications are silenced when the DAW/own control panel switches to something else. Assuming Windows and Cakewalk are using the same sample rate, most interfaces can output ASIO and Windows sounds simultaneously. Some, but not all. That information is somehow hard to find for a particular interface. Some people claim that playing with flags (disabling them) in the "Exclusive" section of the Windows settings can help with some interfaces. The impact on performance/latency/stability is a different question. There are no reports that any interface can work with different sample rates at the same time; all have just one hardware clock. But in some driver frameworks Windows can re-sample on the fly, so the OP can try MME and WASAPI Shared to get some sound even in case the interface is not working with the project clock (the Cakewalk project sample rate is locked and the interface/driver has to be able to work with it, unlike in some other DAWs...).
  22. ACT is no more convoluted than Ableton scripting; the difference is that Akai supports Ableton and doesn't support Cakewalk... Instead of videos, I recommend reading the documentation: https://bandlab.github.io/cakewalk/docs/Cakewalk Reference Guide.pdf page 1272+. You can use 8 (+shift) buttons and 8 faders of your controller with the "ACT MIDI Controller" plug-in. You can use more buttons with the "Generic Surface" plug-in. The functionality is more or less limited to strip and (VST) plug-in controls. If you want to use the APC with the Matrix view (something in the Ableton direction), that is assigned in the Matrix view itself (ACT does not support it). The APC is a controller specially designed for Ableton, which in turn has a rather special workflow. Cakewalk likes "mixer" controllers (Mackie or clones). You can try to use the APC with the Matrix / assign some commands to buttons / use the faders for VST control (with ACT or MIDI learn). In practice, that is not a controller which steers Cakewalk well; I would say the "X-Touch Mini" is the minimal (and cheapest) step in the right direction.
  23. Probably https://wiki.fractalaudio.com/gen1/index.php?title=MIDI_Bank_Select So, the bank is selected by CC#0 (0, 1, 2) and then a PC is used to select the patch in the bank. For correct visualization an INS file can help, but you can work just with the numbers.
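Following the linked page, the two raw MIDI messages could be built like this (the helper name is mine; CC#0 is the Bank Select MSB, followed by a Program Change on the same channel):

```python
def bank_patch_messages(bank, patch, channel=0):
    """Raw MIDI bytes selecting a patch inside a bank:
    a CC#0 (Bank Select MSB) message followed by a Program Change."""
    assert 0 <= bank <= 127 and 0 <= patch <= 127 and 0 <= channel <= 15
    cc_bank = [0xB0 | channel, 0x00, bank]  # Control Change, controller 0
    prog = [0xC0 | channel, patch]          # Program Change
    return cc_bank + prog
```

E.g. bank 1, patch 5 on channel 1 gives the five bytes B0 00 01 C0 05.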
  24. My TD-11 shows ~12 ms latency. I guess the TD-25 is in the same category. But before thinking about reducing it with an audio interface, there are several things to check:
     • If you can't set all settings to the lowest values (in the Roland ASIO panel) without dropouts/choppiness, you need to optimize your computer until you can. Another interface will not help solve that problem.
     • It can be that 12 ms is OK for you. A simple check: put speakers ~4 m away from your head. Is it still comfortable to play e-drums? If yes, you should be fine playing through EZDrummer using headphones (10 ms ~ 3.4 m at the speed of sound).
     • Check that the latency through EZDrummer really matches what is expected from the currently reported latency. I repeat: if you see 12 ms, it should really be close to the same in headphones with EZDrummer as with the internal TD sound and speakers 4 m away.
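The speaker-distance analogy above is just the latency multiplied by the speed of sound (~343 m/s in air at room temperature):

```python
SPEED_OF_SOUND = 343.0  # m/s in air at ~20 °C

def latency_to_distance_m(latency_ms):
    """Equivalent listening distance for a given round-trip latency."""
    return SPEED_OF_SOUND * latency_ms / 1000.0
```

So 10 ms corresponds to about 3.4 m, and the TD-11's ~12 ms to roughly 4.1 m.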
  25. I can confirm the corresponding command does not work. I do not think there is any difference between the known (official) Logic Control documentation and the MCU (Pro) in the same mode. The Cakewalk plug-in strictly follows the layout. While I do not have any Mackie device, I have checked remotely, and the X-Touch (the big one) sends exactly the same messages. http://www.azslow.com/files/mcu.png I am not sure what you mean by "standard dB log 10"... Cakewalk's value-to-dB mapping is significantly non-linear. I personally approximate it with 4 linear pieces (that gives precision sufficient for me across all possible values). REAPER uses exp(dB*ln(10)/20) for the saved value. But it converts it to "fader" (so MIDI) values with a quite complicated (not public) formula. They have mentioned somewhere (if I remember correctly) that it is a kind of Mackie (Logic) approach; probably it is described in some documentation (I have not rechecked the Logic documentation, maybe it is there). In any case, the Cakewalk curve for normalized (0 to 1) values works great for all controls, so for faders, knobs and encoders. I realized that after trying to control REAPER with an encoder (I tried the direct value and the "fader" curve, with different step sizes; nothing was convenient).
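For reference, the formula mentioned above is the standard dB-to-linear-amplitude relation: exp(dB·ln(10)/20) is the same as 10^(dB/20). A sketch of both directions:

```python
import math

def db_to_amplitude(db):
    """dB to linear amplitude: 10**(dB/20), i.e. exp(dB * ln(10) / 20)."""
    return math.exp(db * math.log(10.0) / 20.0)

def amplitude_to_db(amp):
    """Linear amplitude (> 0) back to dB."""
    return 20.0 * math.log10(amp)
```

So 0 dB maps to amplitude 1.0 and -20 dB to 0.1; the fader/MIDI curve on top of this is the non-public part the post talks about.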