Everything posted by azslow3

  1. Since there is a link to my post in the OP... In fact Cakewalk has improved the authorization scheme since that time. There is no need to re-authorize through the Assistant (which rarely manages to update itself and refuses to work until updated), and there are "warnings" when the authorization is about to expire. I think Cakewalk went as far as they could (assuming they want time-limited authorization). I personally don't like authorizations, but they are almost impossible to avoid these days. Windows software is known for long-term compatibility, but at some point Microsoft may decide to break it. Will software X authorize and work correctly in Y years? No one knows. For myself I prefer to have an option for the "disaster" case. It doesn't have to be "perfect", but I like to know it exists. For Cakewalk there are options: X2/X3 with offline authorization (WINE compatible, so available forever since the x86 platform can be emulated) and a converter to another DAW (which requires no authorization and works on any system). I mean nothing can completely "brick" Cakewalk projects, not even an instant shutdown. In the modern world that is "sufficiently safe" for me.
  2. If a DAW crashes after using some plug-in, it makes sense to contact the plug-in developers first...
  3. @Kevin Perry Thanks for the tip! I still could not reproduce the issue. I swapped ~1/~2 for 2 VSTs, so the 8.3 names were swapped (while the original long names were kept). Cakewalk could still find everything... VST scanning reported 1 "new" plug-in and 0 removed (I don't think I made any other changes, so I expected 2/2). So I can imagine how strange things can happen. UUIDs for synths in fact include 8 characters from the 8.3 name + the VST2 ID (not that I don't trust Noel, I just wanted to check that I also "see" them). I link automations by other UUIDs, which do not include the name/VST ID; that has worked fine so far. And so the problem doesn't affect my code (that was my worry). But I am not Cakewalk...
  4. I have tried to reproduce it, but I have failed... I put VST2s on a separate disk, played with 8.3 enable/disable, renamed long/short names, etc. Every time Cakewalk was able to find the plug-in and its automations. According to the (Windows) documentation, the 8.3-related call just returns the 8.3 name in case it exists and returns the original (possibly longer) name when it doesn't (a small sketch of that behavior follows below). I can only imagine there is no check when it returns more than 8.3 characters and that produces an overflow (garbling something).
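     To illustrate the kind of check I mean, here is a minimal C++ sketch of the documented GetShortPathNameW behavior (my own illustration, not Cakewalk code; the plug-in path is a made-up example):

        // Sketch only: shows the documented GetShortPathNameW contract, not Cakewalk internals.
        #include <windows.h>
        #include <string>
        #include <cstdio>

        // Returns the 8.3 name if one exists; otherwise the API returns the original
        // (possibly longer) name, which must not be copied into a fixed 8.3-sized buffer.
        std::wstring ShortNameOf(const std::wstring& longPath)
        {
            wchar_t buf[MAX_PATH] = {};
            DWORD len = GetShortPathNameW(longPath.c_str(), buf, MAX_PATH);
            if (len == 0 || len >= MAX_PATH)   // failure, or buffer too small
                return longPath;               // fall back to the long name
            return std::wstring(buf, len);
        }

        int main()
        {
            // Hypothetical plug-in path, for illustration only.
            std::wstring name = ShortNameOf(L"C:\\VstPlugins\\SomePluginWithALongName.dll");
            // With 8.3 names disabled on the volume this prints the long name unchanged,
            // so any code assuming "at most 8+3 characters" needs an explicit length check.
            wprintf(L"%s\n", name.c_str());
        }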
  5. @foldaway From my experience, feeding plug-ins with incompatible data can produce far stranger effects than just a crash... A VST2 is identified by its ID, not by file name. This ID is supposed to be unique, registered with Steinberg. But who follows all the rules... Unlike a UUID, a VST2 ID is short (4 characters), so clashes are likely, forcing DAWs to use some (unspecified) method to distinguish physically different plug-ins with the same ID (see the sketch below). As I have mentioned, it seems like Cakewalk in general matches the plug-in in the project with the currently installed one. I mean it seems like the 8.3 name issue is more a glitch/bug than regular behavior (also, from the OP and the discussion, it happens for automation matching, not for plug-in matching). In case Cakewalk is unable to match some plug-in at all, most probably the plug-in developer has done that on purpose, and attempts to find a "back door" are not a good idea...
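     As an illustration of why clashes are easy (a sketch, the ID letters are made up): a VST2 ID is just four ASCII characters packed into a 32-bit integer, in the spirit of the CCONST macro from the old VST2 SDK.

        #include <cstdint>
        #include <cstdio>

        // A VST2 unique ID is a 32-bit value, conventionally built from 4 ASCII characters.
        constexpr uint32_t VstId(char a, char b, char c, char d)
        {
            return (uint32_t(uint8_t(a)) << 24) | (uint32_t(uint8_t(b)) << 16) |
                   (uint32_t(uint8_t(c)) << 8)  |  uint32_t(uint8_t(d));
        }

        int main()
        {
            // Two unrelated vendors picking the same "memorable" 4 letters collide,
            // so a DAW needs extra (unspecified) information to tell the DLLs apart.
            uint32_t idA = VstId('D', 'l', 'y', '1');   // hypothetical plug-in A
            uint32_t idB = VstId('D', 'l', 'y', '1');   // hypothetical plug-in B, same ID
            printf("A=0x%08X B=0x%08X clash=%d\n", idA, idB, int(idA == idB));
        }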
  6. I thought plug-in replacement is the only case when the (instance) UUID + parameter (that is how I match automations when parsing projects) can be disrupted... I remembered that changing the dll name did not affect project loading. And I have just checked again: after renaming TruePianos.dll to True.dll and re-scanning in Cakewalk, the "new" dll was matched in the project and the related automation was not orphaned.
  7. Automation data is linked to plug-in parameters. I don't think the plug-in installation path is ever stored in projects. I suggest you check that you are using the same plug-in format (VST2 or VST3) on both computers. Cakewalk can automatically "replace" VST2 with VST3 in case VST3 is installed (and compatible with the procedure, at least from Cakewalk's perspective). If such a replacement happens, the parameter list can be different and so the existing automation data can't be matched.
  8. Projects in Cakewalk have a fixed sample rate; it can't be changed (Cakewalk is sample accurate and audio is positioned using exact samples). Bit depth for files is set in Preferences / File / Audio Data. The driver works with the settings in Preferences / Audio / Driver Settings. It makes sense to record with the same bit depth the audio driver uses. Any processing happens in 32- or 64-bit floating point (depending on the 64-bit engine setting), independent of the bit depth of input/recorded files (a tiny sketch of what that means follows below).
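     A tiny C++ sketch of what "independent of file bit depth" means (my illustration, not Cakewalk's code): whatever bit depth is recorded, samples become floating point numbers inside the engine.

        #include <cstdint>
        #include <cstdio>

        // Convert a signed 24-bit PCM sample (stored in the low 3 bytes of raw24)
        // to a float in [-1, 1). All further processing happens on floats/doubles,
        // regardless of the bit depth of the recorded file.
        float Int24ToFloat(uint32_t raw24)
        {
            int32_t s = int32_t(raw24 << 8) >> 8;   // sign-extend the 24-bit value
            return float(s) / 8388608.0f;           // 8388608 = 2^23
        }

        int main()
        {
            printf("%f\n", Int24ToFloat(0x400000));   // +0.5
            printf("%f\n", Int24ToFloat(0xFFFFFF));   // just below 0 (-1/2^23)
        }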
  9. I remember one discussion about the X32. Some of its built-in effects have delay (just like some software plug-ins), but the device does not compensate for it (so it works like Cakewalk with PDC off). Such delays are not reported to the DAW and so can't be compensated by Cakewalk's PDC. You can correct the X32's real latency manually in Cakewalk (in the "Sync and Caching" section). There are many posts (for any DAW) on how to do this. E.g. with the real settings on the X32 (all desired effects enabled) and headphones, record a beat while listening to the backing track, then adjust the offset until the tracks are in sync on playback (you need to re-record after every adjustment). There are more accurate methods as well (a rough conversion sketch follows below). Note that in case the band listens to the backing track through speakers, every 1 m from speaker to listener adds ~3 ms of acoustic delay. Probably not significant in your case, so just keep in mind that these tiny (but visible in the DAW) delays exist. But if you have 2 live tracks and put a plug-in with delay on one of them, the live sound should stay in sync (both tracks are delayed). In case you have some plug-in which fails at that (easy to check one by one), I guess it is better to just avoid that plug-in.
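     A rough sketch of the arithmetic (my own helper, not a Cakewalk setting; check which unit your offset field actually expects):

        #include <cstdio>

        // Convert a measured misalignment (plus optional speaker-to-listener distance)
        // into samples. Sound travels ~343 m/s, so each metre adds ~2.9 ms.
        int OffsetSamples(double measuredMs, double speakerDistanceM, int sampleRate)
        {
            double acousticMs = speakerDistanceM / 343.0 * 1000.0;
            return int((measuredMs + acousticMs) * sampleRate / 1000.0 + 0.5);
        }

        int main()
        {
            // Example: 12 ms misalignment measured in the DAW, band 2 m from the speakers, 48 kHz.
            printf("%d samples\n", OffsetSamples(12.0, 2.0, 48000));   // ~856 samples
        }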
  10. 1. To avoid misunderstanding about the "PDC" button, please check: https://discuss.cakewalk.com/index.php?/topic/34909-what-is-the-default-state-of-pdc/ In short, the "PDC" button overrides the compensation once activated; in your case you don't want that (you want PDC working).
      2. I don't think plug-ins (can) distinguish between playback and recording. So if everything is in sync during playback, the plug-ins report their delays correctly. Note that you need to test with a re-loaded project without changing anything, or at least stop and start the transport (play/stop). Cakewalk is still buggy when routing/delays are changed "on the fly". If you plan to mix live, avoid related changes. E.g. don't switch between presets which change a particular plug-in's delay, don't turn monitoring on/off, etc.
      3. Make sure you don't mix the output from the DAW with live signals, e.g. you don't have "direct monitoring" mixed with the DAW output. Even when PDC works and everything from the DAW is in sync, it is always out of sync with the original signal (plug-ins with delays just make the difference more prominent).
      4. External signal looping can't be accounted for correctly live. There is a difference between live and playback. E.g. if you have a backing track and loop it externally (with an "output to input" cable), the monitored input will be out of sync (by the interface latency plus the delays in plug-ins). But if you record the looped signal, on playback it will be in sync with the backing track (assuming the audio interface latency is reported correctly). In other words, the DAW assumes you are recording while listening to the backing track and shifts the result "back in time".
      PS. CbB is good for "offline" work, I mean recording and mixing. It is also reasonable for live performance at home. But personally I agree with bdickens, I don't trust CbB when more than 2-3 people are listening to the output live. At the same time, I know people who have successfully used Sonar for live performances.
      PSPS. Before someone claims my personal opinion has no basis: I normally try to quickly check that what I write matches reality. So I created a 3-track project while writing this post, one "loop back" track and 2 "live" tracks which monitor that loop back. On the loop back track I put Ozone with a mastering preset. During the simplest checks (switching monitoring on/off, recording several seconds and duplicating the result to a 4th track), CbB glitched once with the compensation and once "silenced" the monitored tracks. Play/stop helped both times. 🤨
  11. REAPER claims it supports some RADAR project files, i.e. the file where the whole project is defined. That should load the related tracks and position the audio files correctly. Look into the "Project Info Files" and "RADAR System Files". There can be more than one file with the expected extension and/or several project files (I have never had a RADAR), but I guess REAPER parses one file (not a directory) which you select in the Open dialog; you have to find the right one. Cakewalk has no special support for RADAR projects. Once you get the project into REAPER, you will have to use the usual procedure to transfer files between DAWs: render each track into a separate WAV and then import them into CbB.
  12. I only have Ozone Elements, and it consumes under 1% of one CPU core (on my system)... So, I still recommend first finding where the problem is and then deciding what to do about it. That is easy in REAPER: create a new project, add a track, add Ozone with the preset you want, record-arm the track, open the Performance Meter, and look at "Total CPU", "RT CPU" and "RT longest-block" (right click and enable the corresponding options if you don't see them). Compare the numbers with the GUI open and closed. You can post a Performance Meter screenshot here if you have difficulties interpreting the numbers.
  13. ExtraPluginBufs is related to some Cakewalk internals. What exactly it does and how it can affect plug-in stability has not been explained by Cakewalk (at least I have not seen any explanation). But it does not magically switch on anticipative processing; Cakewalk always processes audio in real time.
  14. To my knowledge, ReaCWP (Sonar to REAPER) is still the only attempt to transfer complete projects from one DAW to another. For transferring audio tracks (only, no FX nor synths) between DAWs there is AATranslator. The Cakewalk format is not supported, so the project has to be exported as OMF or converted with ReaCWP first. It is relatively expensive software, so for just a few projects manual export of audio files is the way to go. PS. I know, many people have chosen Studio One. But my choice was REAPER. And so there is ReaCWP but no converter into SO 🙄
  15. When comparing such issues in REAPER with other DAWs, make sure you disable "Anticipative FX processing". Some other DAWs have a similar feature, but not Cakewalk. That feature makes the "buffer size" relevant for recording/monitoring only; playback is processed "semi-offline" with the specified buffer size (default 200 ms, so roughly 8800 samples at 44.1 kHz). Check if something is different when the plug-in GUI is open/closed; it can be a graphics (driver) related issue. Check DPC system latency when you observe crackles/pops; it can give a hint about the origin of the problem (e.g. a plug-in intensively accesses the SSD and the related operations block the system, which can happen with any nominal disk transfer rate). With Anticipative FX processing switched off, you can use the Performance Meter (enable all RT options there) to see what is going on. A system thoroughly optimized for audio can utilize close to the full processor power without audio problems; on a "standard" system, problems can start appearing with an almost idle CPU, especially without a proper Windows power plan.
  16. Maybe a complete miss... but I remembered REAPER has RADAR project import. Maybe worth a try.
  17. "The most popular" rumor about audio interfaces and latency: Latency importance is oversized (each 30cm from an audio source adds 1ms latency in any case). In general, latency limits are: vocal monitoring throw software - <3ms. Not used in practice, since partial direct monitoring (with zero latency) solve the problem. You normally want just reverb added, and it can have 50ms+ latency. e-guitar soft sim monitoring. Preferably <5ms. e-drums with soft synth monitoring. Preferably <7ms. MIDI keyboard with soft synth monitoring. 10-15ms is tolerable is most situations. Under 20-25ms is playable. for anything else latency is not important CPU power has little to do with lowest possible latency. The difference is like a truck vs sport-car, you can drive with 10t but that does not mean you can drive fast. Sure, in case you need that 10t (in audio case many heavy soft-synths and effects) you need a car which can do that. CPU characteristics are declared as "power", not as "speed", even so CPU frequency is naturally perceived like a speed. The fact is, any 10MHz DSP easily beats in latency (many times) most powerful 5HGz desktops. The key to success with latency is strict audio optimization in BIOS and OS. And there can be brick walls (in hardware and drivers). "The most popular audio interface" is Realtek. 2i2 is a good entry level music audio interface with pre-amps, it is not in top league in any category (latency, drivers, sound quality) but it is reasonable for many use cases. Stable usable latency is around 8ms, and so it can be inconvenient with e-guitar soft sims only. For comparison, under the same relaxed settings on the same system top (in latency) interfaces have under 5ms. The lowest allowed by driver latency is rarely usable in practice even on optimized top system. When looking into "latency charts", buffer sizes under 32 are not meaningful. Check what particular interface does with buffers 64/128 on 48kHz. "Accordingly" in the context means the highest setting of latency you are still convenient or the lowest your system and the project allows without pops and clicks, in case you get problems in your convenient range. That is human, project, system and tasks dependent.
  18. msmcleod will probably fix the sends in the HUI part. Theoretically, everyone is free to do this (the source code is on GitHub).
      HUI is even more archaic than Automap and Windows 7. And it was not invented for cross-DAW compatibility nor to support different controllers. It just happened that one particular DAW was (is) the "standard" in "professional studios". This DAW almost refused to work with anything not "specific for me", hardware manufacturers were proud to claim "market leader compatibility", and other DAWs (with far better controller support) could gain some users with "you can use pro hardware". BTW the mentioned DAW has a different owner and its own controllers with a new, own protocol now (Cakewalk partially supports these controllers).
      The Novation Impulse does not come even close to the capabilities of the original "HUI device". It simply has far fewer physical controls (and those which are there are of lower quality...). So it is "usable" with HUI software, but functionality and usability are not at the level of solutions built specifically for such a controller. "InControl" on "newer devices" is just... yet another MIDI button. The rest is up to the particular DAW plug-in for the particular device. I guess they were tired of supporting Automap and decided to sell "MIDI controllers" instead (Automap devices were not standard MIDI class devices for the OS).
      The HUI protocol uses 2 MIDI messages for most controls and is not symmetric (control and feedback messages are slightly different). In contrast, the Logic protocol uses 1 MIDI message per control and is symmetric where possible (so except for the encoder rings); see the sketch below. There is a good reason to use 2-4 MIDI messages per control when the number of such controls is huge (e.g. to map all parameters of a digital mixer into the MIDI world), but for dedicated DAW controllers it makes no sense.
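      For a feel of the difference, a sketch of how a single button press is commonly described for the two protocols (HUI zone/port CC pairs versus a single Note message for Mackie/Logic Control); the exact zone/port/note values here are hypothetical:

         #include <cstdint>
         #include <cstdio>

         // HUI (as commonly described): a button press from the surface is two CC messages,
         // "select zone" on CC 0x0F followed by "port + state" on CC 0x2F (bit 0x40 = pressed).
         // DAW-to-device feedback uses the slightly different pair CC 0x0C / CC 0x2C,
         // which is why the protocol is not symmetric.
         void HuiButton(uint8_t zone, uint8_t port, bool pressed, uint8_t out[6])
         {
             out[0] = 0xB0; out[1] = 0x0F; out[2] = zone;
             out[3] = 0xB0; out[4] = 0x2F; out[5] = uint8_t(port | (pressed ? 0x40 : 0x00));
         }

         // Mackie Control / Logic Control: the same press is a single Note On,
         // velocity 0x7F for pressed and 0x00 for released, identical in both directions.
         void McuButton(uint8_t note, bool pressed, uint8_t out[3])
         {
             out[0] = 0x90; out[1] = note; out[2] = pressed ? 0x7F : 0x00;
         }

         int main()
         {
             uint8_t hui[6], mcu[3];
             HuiButton(0x0B, 0x02, true, hui);   // hypothetical zone/port
             McuButton(0x29, true, mcu);         // hypothetical note number
             printf("HUI press: %02X %02X %02X | %02X %02X %02X\n",
                    hui[0], hui[1], hui[2], hui[3], hui[4], hui[5]);
             printf("MCU press: %02X %02X %02X\n", mcu[0], mcu[1], mcu[2]);
         }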
  19. Well, that confirms the device uses Zone 0xB for Assigns 🙂
  20. You have realized that... the knobs always send the same messages. And since you can control Pan, they are working fine in Cakewalk. So record the messages sent when you SWITCH to Send A...D, i.e. the messages from the buttons.
  21. @chakko Well... if motivated... you can install AZ Controller (https://www.azslow.com/index.php/topic,6.0.html), load the HUIv2 preset (https://www.azslow.com/index.php/topic,223.msg1386.html#msg1386) and with a bit of luck it will switch the device into HUI mode. Then press the keys Novation specifies for switching into Sends mode and for switching sends, and write down what you see in the "Last MIDI event" while doing so (in the Options tab check that CC interpretation is in 14-bit mode; HUI sends 2 CCs per button, but in the end any of them will give us a hint about what should be defined).
  22. @msmcleod It is hard to find a high resolution picture of the original HUI, but the knob assignment section is in Zone 0xB ("Send a" ... "Send e"). In your code I see you translate assignments from buttons in Zone 0x17 (Auto enable).
  23. I recommend finding out and posting which MIDI message the button responsible for switching into Sends mode sends to the DAW (with MIDI-OX, or simply by recording it as a MIDI event into a track with the Control Surface module removed; a minimal dump sketch follows below). Probably msmcleod will be able to help then. BTW I suggest trying the MCU mapping on the device. The MCU protocol (more precisely the LC protocol) documentation is still freely available, while HUI has always been a closed, proprietary protocol. The Mackie plug-in implements all features of MCU, while HUI, as you can see, is "Beta".
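      If MIDI-OX is not at hand, even a minimal console dump of incoming MIDI does the job. A sketch using the standard Windows multimedia API (input device 0 is assumed here; pick the Impulse's port on a real system):

         #include <windows.h>
         #include <mmsystem.h>
         #include <cstdio>
         #pragma comment(lib, "winmm.lib")

         // Print every short MIDI message (status byte + 2 data bytes) arriving on input device 0.
         void CALLBACK OnMidiIn(HMIDIIN, UINT msg, DWORD_PTR, DWORD_PTR p1, DWORD_PTR)
         {
             if (msg != MIM_DATA)
                 return;
             printf("%02X %02X %02X\n", unsigned(p1 & 0xFF),
                    unsigned((p1 >> 8) & 0xFF), unsigned((p1 >> 16) & 0xFF));
         }

         int main()
         {
             if (midiInGetNumDevs() == 0) { printf("No MIDI inputs found\n"); return 1; }
             HMIDIIN in = nullptr;
             midiInOpen(&in, 0, (DWORD_PTR)OnMidiIn, 0, CALLBACK_FUNCTION);   // device 0 assumed
             midiInStart(in);
             printf("Press the Sends button on the controller, then Enter to quit...\n");
             getchar();
             midiInStop(in);
             midiInClose(in);
         }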
  24. Yes, I started writing it when I got stuck trying to set up existing plug-ins the way I want 🙂 https://www.youtube.com/watch?v=Baoo0-CysSI&t=87s is a good starting point in your case. Switching between strip and sends control... that can be challenging for you at the beginning. In case you have further questions about AZ Controller, please use my forum or PM.
  25. With "Cakewalk ACT MIDI" you can't configure 8 buttons + extra 4 arrows (at most 1 of them as a "Shift" for 8 buttons). WAI based buttons (f.e. Mute) is also not an option, if I remember correctly. But with 2 Banks and corresponding switch buttons you can make vol/pan <-> sends working. With "Cakewalk Generic Controller" you can configure 2x8 + 8 + 4. But switching banks, like vol/pan to sends, is not possible. To avoid any limitations you can use AZ Controller (www.azslow.com). But it is harder to configure. The world is not perfect ๐Ÿ˜