

Posts posted by azslow3

  1. Try to use ASIO mode with native drivers. That always provides the best possible latency for (any) interface. If you can't set the buffer to 128 or lower in ASIO, check your computer's optimization for audio. The rest depends on your PC and the plug-ins you use.

    Note that without native drivers, Windows and the interface add huge latency which is simply not shown. If you don't want to optimize the computer and are ready to accept the consequences, WASAPI can sometimes produce reasonable results (I guess ASIO drivers assume you are serious about audio, and so some of them are not forgiving of computer problems).

     

  2. I guess everything over ~20kHz (and as you can see, there is quite a lot...) lands somewhere under 20kHz (at full "weight"). The synth can also use the resulting frequencies later in its own chain. Just imagine a 30-40kHz band being used to modulate a sub-100Hz band. What I mean is that the end result is not limited to just digitally aliased frequencies; any kind of distortion can be triggered.

    BTW such a spectrum is definitive confirmation that the plug-in/preset creator has done something wrong. Audio plug-ins should not produce ultrasonic components. That is no longer "audio", since there is no equipment which can reproduce it (and even if such equipment existed, no one would be able to perceive it, except through the resulting analog-world distortions and aliasing).

  3. What do you mean by "right"? Maybe the person who created the settings was using 44.1kHz, and so that sound is intended. Maybe he/she was using 96k. And it may well have been 192...

    There are way more frequencies in the 44.1 version. I am not an expert, but that can be aliasing from the math: everything over 22kHz calculated at a 44.1kHz rate produces something inside 22kHz (there is a small sketch of this after this post). Your "96kHz" mp3 is 48kHz; only you can see (on the original 96kHz track) what is inside the ultra-high range.

    Plug-in/preset developers who care check that generated (and intermediate) frequencies are always within the range reproducible by the sample rate, and oversample + LPF when they are not. Others don't care. Fortunately Cakewalk can do that too.
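
    A minimal sketch (Python with NumPy, nothing to do with Cakewalk itself) of the folding described above: a 30kHz partial rendered at 44.1kHz shows up as a 14.1kHz component, because everything above Nyquist reflects back into the audible band.

    ```python
    import numpy as np

    fs = 44100                       # project sample rate
    f = 30000                        # "ultrasonic" partial a preset might generate internally
    n = np.arange(fs)                # one second of samples

    x = np.sin(2 * np.pi * f * n / fs)          # 30 kHz sine rendered at 44.1 kHz
    spectrum = np.abs(np.fft.rfft(x))
    peak_hz = np.argmax(spectrum) * fs / len(x)

    print(peak_hz)                   # ~14100 Hz = 44100 - 30000, folded below Nyquist (22050 Hz)
    ```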

  4. That is a well-known effect. It comes from bad math inside plug-ins, but it is used as an argument that 96kHz sounds "better".

    In Cakewalk you don't have to do everything in 96kHz (that is a one-time decision, since you can't change the sample rate in existing projects); you can use "local up-sampling" where it makes sense: http://www.noelborthwick.com/cakewalk/2015/10/24/improving-your-synth-sounds-with-real-time-upsampling/

    Some people just use 96kHz for everything, so they don't have to think about possible troubles with plug-ins. If you do that, check that what you send to your monitors doesn't contain high frequencies (apply an LPF at the limit of your hardware).
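
    A minimal sketch (Python with SciPy; the tanh stage is just a stand-in for a distorting plug-in, not Cakewalk's actual implementation) of what "oversample + LPF" means in practice:

    ```python
    import numpy as np
    from scipy.signal import resample_poly

    fs = 44100
    x = np.random.randn(fs)                  # stand-in for one second of synth audio

    # Process at 4x the project rate so the non-linear stage generates its
    # harmonics below the (now much higher) Nyquist instead of folding back.
    x_up = resample_poly(x, 4, 1)            # 44.1 kHz -> 176.4 kHz
    y_up = np.tanh(2.0 * x_up)               # stand-in for a distorting plug-in stage

    # resample_poly applies the anti-alias low-pass while downsampling, so
    # nothing above 22.05 kHz survives in the returned signal.
    y = resample_poly(y_up, 1, 4)            # back to 44.1 kHz
    print(len(x), len(y))                    # same length, now band-limited
    ```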

  5. To get Cakewalk working again, disconnecting the keyboard and starting Cakewalk will probably work (clean up MIDI and Control Surfaces in Preferences, exit, connect the keyboard again and start Cakewalk again).

    If not, delete TTSSEQ.INI and ctrlsurface.dat in the %APPDATA%\Cakewalk\Cakewalk Core folder.

    Before configuring the surface in Cakewalk, check that you have chosen Mackie mode on the keyboard. From the documentation, I guess you can also try selecting Studio One or REAPER mode (see the keyboard documentation for how to select the target DAW).

    You need to enable the correct MIDI ports first. Do NOT enable all of them! Try enabling just the first one and play some soft synth from the keys. That will probably work (different sources on the Internet mention different numbering); if not, enable the ports one after another (disabling the previous one) till you find the "keyboard" MIDI port (the small script after this post shows one way to peek at the ports outside Cakewalk).

    Enable the DAW ports (Input and Output); all sources suggest they should be the 3rd. Add the Mackie Control surface and assign these ports (most probably the 3rd) to it. Try some DAW operations. If it doesn't work, open the Mackie Control interface (from the Utilities menu in Cakewalk) and set "Disable handshake".

    If (and only if) everything works so far...

    If you have some hardware connected to the physical MIDI (5-pin) ports of the keyboard, you can enable port 2.

    There is one pair of ports for the M-Audio Editor; enabling them in Cakewalk is asking for trouble (by default the keyboard sends some information to all enabled ports, and who knows how the EDITOR port interprets that).

    If you want to add ACT, assign to it the same port you use for the keys (NOT the port you use for Mackie).

    I hope that helps. Note that I don't have that controller; all suggestions come from the device documentation (and personal experience with Cakewalk).
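
    If it helps, here is a minimal sketch (Python with the mido library and a backend such as python-rtmidi installed; port order and names will differ on your system) of peeking at the ports outside Cakewalk:

    ```python
    import mido

    # List the MIDI inputs the OS exposes, so you can see which index is
    # likely the "keys" port and which belongs to the DAW/Mackie or editor ports.
    names = mido.get_input_names()
    for i, name in enumerate(names, start=1):
        print(i, name)

    # Open one candidate port and press a few keys: if note_on messages
    # arrive, this is the "keyboard" port to enable for soft synths.
    with mido.open_input(names[0]) as port:
        msg = next(iter(port))
        print(msg)                   # e.g. note_on channel=0 note=60 velocity=100 time=0
    ```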

  6. 5 hours ago, Marc Harris said:

    Not at all. I've 'rolled my own' for two decades. The parts are a mobo, memory, storage and video card. That's not rocket science.

    Mobo, memory, storage, video card and audio interface should be selected carefully for audio (real-time) applications.  All related BIOS and Windows settings should be tuned for that exact hardware combination. The result can be 'pro'.

    Apple does that for you with Apple computers. There are people/companies which do that for you with Windows computers.

    You have 'rolled your own' and then blamed Microsoft/Steinberg for the result...

  7. To have the best experience with audio under Windows, the computer should not just be "put together" but constructed specifically for audio by people who know how to do this. That is the major difference between "PC" and "Apple": the first can be constructed by anyone... and the result is unpredictable.

    On Windows there are RME audio interfaces and "other" audio interfaces. Again, if a great result (in flexibility, latency, etc.) is expected out of the box and the user has "no time" for tweaking, especially on a computer not specially built for audio, the interface has to be RME. Apple lovers should understand that situation.

  8. Since there is a link to my post in the OP...

    In fact Cakewalk has improved the authorization scheme since that time. There is no need to re-authorize through the Assistant (which rarely manages to update itself and refuses to work till updated), and there are "warnings" when the authorization is about to expire. I think Cakewalk went as far as they could (assuming they want time-limited authorization).

    I personally don't like authorization. But it is almost impossible to avoid these days. Windows software is known for long-term compatibility, but at some point Microsoft may decide to break it. Will software X authorize and work correctly in Y years? No one knows. For myself, I prefer to have an option for the "disaster" case. It may not be "perfect", but I like it when it exists.
    For Cakewalk there are options: X2-X3 with offline authorization (WINE compatible, so available forever since the x86 platform can be emulated) and a converter to another DAW (which requires no authorization and works on any system). I mean nothing can completely "brick" Cakewalk projects, not even an instant shutdown. In the modern world that is "sufficiently safe" for me.

     

  9. @Kevin Perry Thanks for the tip!

    I still could not reproduce the issue. I swapped ~1/~2 for 2 VSTs, so the 8.3 names were swapped (while the original long names were kept). Cakewalk could still find everything...

    VST scanning reported 1 "new" plug-in and 0 removed (I don't think I made any other changes, so I expected 2/2). And so I can imagine how strange things can happen. UUIDs for synths in fact include 8 characters from the 8.3 name + the VST2 ID (not that I don't trust Noel, I just wanted to check that I also "see" them ;) ).

    I link automations by other UUIDs, which do not include the name/VST ID; that has worked fine so far. So the problem doesn't affect my code (that was my worry). But I am not Cakewalk...

  10. I have tried to reproduce it, but I have failed... I put a VST2 on a separate disk, played with 8.3 enable/disable, renamed long/short names, etc. Every time Cakewalk was able to find the plug-in and its automation.

    According to the (Windows) documentation, the 8.3-related call simply returns the 8.3 name in case it exists and returns the original (possibly longer) name when it does not. I can only imagine there is no check when it returns more than 8.3 characters, and that produces an overflow (garbling something).
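
    A minimal sketch (Python/ctypes, Windows only; the example path is just an illustration) of that behaviour, assuming the call in question is GetShortPathNameW:

    ```python
    import ctypes
    from ctypes import wintypes

    def short_path(path: str) -> str:
        """Return the 8.3 alias if the volume keeps one, otherwise the name unchanged."""
        GetShortPathNameW = ctypes.windll.kernel32.GetShortPathNameW
        GetShortPathNameW.argtypes = [wintypes.LPCWSTR, wintypes.LPWSTR, wintypes.DWORD]
        GetShortPathNameW.restype = wintypes.DWORD

        buf = ctypes.create_unicode_buffer(260)
        needed = GetShortPathNameW(path, buf, len(buf))
        if needed == 0:
            raise ctypes.WinError()              # path does not exist, access denied, ...
        if needed > len(buf):                    # result longer than the buffer: retry with required size
            buf = ctypes.create_unicode_buffer(needed)
            GetShortPathNameW(path, buf, len(buf))
        return buf.value

    # With 8.3 generation enabled this prints something like C:\PROGRA~2,
    # with it disabled it simply echoes the long path back.
    print(short_path(r"C:\Program Files (x86)"))
    ```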

  11. @foldaway from my experience, feeding plug-ins incompatible data can produce far stranger effects than just a crash... A VST2 is identified by its ID, not by file name. This ID is supposed to be unique, registered with Steinberg. But who follows all the rules... Unlike a UUID, a VST2 ID is short (4 characters), and so clashes are likely, forcing DAWs to use some (unspecified) method to distinguish physically different plug-ins with the same ID.

    As I have mentioned, it seems like Cakewalk in general matches the plug-in in the project with the currently installed one. I mean the 8.3 name issue looks more like a glitch/bug than regular behavior (also, from the OP and the discussion, it happens for automation matching, not for plug-in matching). In case Cakewalk is unable to match some plug-in at all, most probably the plug-in developer has done that on purpose, and attempting to find a "back door" is not a good idea...
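
    A minimal sketch (plain Python; "Tpno" is a made-up ID, not a registered one) of why a 4-character VST2 ID gives so much less room than a UUID:

    ```python
    import uuid

    def vst2_id(four_chars: str) -> int:
        """Pack a 4-character VST2 ID into the 32-bit integer hosts actually store."""
        assert len(four_chars) == 4
        value = 0
        for ch in four_chars:
            value = (value << 8) | ord(ch)
        return value

    print(hex(vst2_id("Tpno")))   # made-up ID -> one of only ~4 billion possible values
    print(uuid.uuid4())           # a 128-bit UUID, where accidental clashes are practically impossible
    ```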

  12. 15 hours ago, Noel Borthwick said:

    This would be rare. We replace VST2 with VST3 based on rules provided by the VST3 spec. If a plugin vendor marks the plugin as being convertible then it's unlikely they will change parameters because that would be silly.

    I had thought plug-in replacement was the only case when (instance) UUID+parameter (that is how I match automation when parsing projects) can be disrupted...

    10 hours ago, Noel Borthwick said:

    Yes this is the unfortunate effect of 8.3 names with VST2. We have code to handle loading plugins irrespective of mismatches and we synthesize a better UUID that doesn't rely on short names for that. However for automation it still uses the old format for compatibility reasons.
    I'll think about it some more and see if there is a way to auto detect this and prevent envelopes getting orphaned in such scenarios but it's not going to be easy.

    I remembered that changing the dll name did not affect project loading. And I have just checked again: after renaming TruePianos.dll to True.dll and re-scanning in Cakewalk, the "new" dll was matched in the project and the related automation was not orphaned.

  13. Automation data is linked to plug-in parameters. I don't think the plug-in installation path is ever stored in projects.

    I suggest you check that you are using the same plug-in format (VST2 or VST3) on both computers. Cakewalk can automatically "replace" VST2 with VST3 in case VST3 is installed (and compatible with the procedure, at least from Cakewalk's perspective). If such a replacement happens, the parameter list can be different, and so existing automation data can't be matched.

  14. I remember one discussion about the X32. Some of its built-in effects have delay (just like some software plug-ins), but the device does not compensate for it (so it works like Cakewalk with PDC off). Such delays are not reported to the DAW and so can't be compensated by Cakewalk's PDC.

    You can correct the X32's real latency manually in Cakewalk (in the "Sync and Caching" section). There are many posts (for any DAW) on how to do this. F.e. with the real settings on the X32 (all desired effects) and headphones, record a beat while listening to a backing track. Adjust the offset till the tracks are in sync on playback (you need to re-record after every adjustment). There are more accurate methods as well (the sketch after this post shows the basic arithmetic).

    Note that in case the band listens to the backing track through speakers, every 1m from speaker to listener adds ~3ms of acoustic delay. Probably not significant in your case; just keep in mind that these tiny (but visible in the DAW) delays exist.

    But if you have 2 live tracks and put a plug-in with delay on one of them, the live sound should be in sync (both tracks delayed). In case you have some plug-in which fails at that (easy to check one by one), I guess it is better to just avoid that plug-in.
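
    A minimal sketch (plain Python; the measured numbers are made up) of the arithmetic behind the acoustic delay and the manual offset mentioned above:

    ```python
    SPEED_OF_SOUND_M_S = 343              # approximate, at room temperature

    def acoustic_delay_ms(distance_m: float) -> float:
        """Delay added purely by the distance from speaker to listener."""
        return distance_m / SPEED_OF_SOUND_M_S * 1000.0

    def offset_samples(measured_delay_ms: float, sample_rate: int = 48000) -> int:
        """Convert a measured extra latency into a sample offset to dial in."""
        return round(measured_delay_ms / 1000.0 * sample_rate)

    print(acoustic_delay_ms(1.0))         # ~2.9 ms per meter, i.e. the "~3ms" above
    print(offset_samples(7.5))            # e.g. 7.5 ms measured -> 360 samples at 48 kHz
    ```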

  15. 5 hours ago, Wong Jian Ming said:

    Here's the problem: by inserting different plugins on the different drum mics, some offending plugins are causing the live monitored signal to not be compensated correctly, thus arriving later than expected, resulting in audible flam between drums.

    This only happens on the live monitoring, when playing back the same recorded tracks, it plays back with all the drum tracks in sync. 

    So my hypothesis is that certain plugins are not reporting the correct delay for PDC to correct.

    1. To avoid misunderstanding about the "PDC" button, please check:

    https://discuss.cakewalk.com/index.php?/topic/34909-what-is-the-default-state-of-pdc/

    In short, the "PDC" button overrides the compensation once activated; in your case you don't want that (you want PDC working).

    2. I don't think plug-ins (can) distinguish between playback and recording. So if everything is in sync during playback, the plug-ins report their delays correctly.

    Note that you need to test with a re-loaded project without changing anything. At least stop and start the transport (play/stop). Cakewalk is still buggy when routing/delays are changed "on the fly".

    If you plan to mix live, avoid related changes. F.e. don't switch between presets which change a particular plug-in's delay, don't turn monitoring on/off, etc.

    3. Make sure you don't mix the output from the DAW with live signals, f.e. that you don't have "direct monitoring" mixed with the DAW output. Even when PDC works and everything from the DAW is in sync, it is always out of sync with the original signal (plug-ins with delays just make the difference more prominent).

    4. External signal looping can't be accounted for correctly live. There is a difference between live and playback. F.e. if you have a backing track and loop it externally (with an "output to input" cable), the monitored input will be out of sync (by the interface latency plus the delays in plug-ins). But if you record the looped signal, on playback it will be in sync with the backing track (assuming the audio interface latency is reported correctly). In other words, the DAW assumes you are recording while listening to the backing track and shifts the result "back in time".

     

    PS. CbB is good for "offline" work, I mean recording and mixing. It is also reasonable for live performance at home. But personally I agree with bdickens: I don't trust CbB when more than 2-3 people are listening to the output live. At the same time, I know people who have successfully used Sonar for live performances.

    PSPS. Before someone claims my personal opinion has no basis: I normally try to quickly check that what I am writing matches reality. So while writing this post I created a 3-track project: one "loop back" track and 2 "live" tracks which monitor that loop back. On one of the loop back tracks I put Ozone with a mastering preset. During the simplest checks (switching monitoring on/off, recording several seconds and duplicating the result to a 4th track), CbB glitched with the compensation once and "silenced" the monitored tracks once. Play/stop helped both times. 🤨

  16. REAPER claims it supports some RADAR project files, i.e. the file where the whole project is defined. That should load the related tracks and position the audio files correctly.

    Look at the "Project Info Files" and "RADAR System Files". There can be more than one file with the expected extension and/or several project files (I never had a RADAR), but I guess REAPER parses the one file (not a directory) which you select in the Open dialog. You have to find the right one.

    Cakewalk has no special support for RADAR projects. Once you get the project into REAPER, you will have to use the usual procedure to transfer files between DAWs: render each track into a separate WAV and then import them into CbB.

     

  17. I only have Ozone Elements, and it consumes under 1% of one of (my) CPU cores...

    So, I still recommend first finding where the problem is and then deciding what to do about it. That is easy with REAPER: create a new project, add a track, add Ozone with the preset you want, record-arm the track, open the Performance Meter, and look at "Total CPU", "RT CPU" and "RT longest-block" (right click and enable the corresponding options if you don't see them). Compare the numbers with the GUI open and closed. You can post a Performance Meter screenshot here if you have difficulties interpreting the numbers.

  18. 7 hours ago, Misha said:

    azslow3, I thought that "extra plugin buffer size" (or similar wording) in config file in Cakewalk does exactly what you described? Similar to Anticipative FX processing. But did not solve Izotope issues

    ExtraPluginBufs is related to some Cakewalk internals. What exactly it does and how it can affect plug-in stability has not been explained by Cakewalk (at least I have not seen any explanation). But it does not magically switch on anticipative processing; Cakewalk always processes audio in real time.

  19. To my knowledge, ReaCWP (Sonar to REAPER) is still the only attempt to transfer complete projects from one DAW to another.

    For transferring audio tracks (only; no FXes or synths) between DAWs there is AATranslator. The Cakewalk format is not supported; the project has to be exported as OMF or converted with ReaCWP first. It is relatively expensive software, so for just a few projects, manual export of audio files is the way to go.

    PS. I know, many people have chosen Studio One. But my choice was REAPER, and so there is ReaCWP but no converter to SO 🙄

  20. When comparing such issues in REAPER with other DAWs, make sure you disable "Anticipative FX processing". Some other DAWs have a similar feature, but Cakewalk does not. That feature makes the "buffer size" relevant for recording/monitoring only; playback is processed "semi-offline" with the specified buffer size (default 200ms, so roughly 8800 samples at 44.1kHz).

    Check if anything is different when the plug-in GUI is open vs. closed. It can be a graphics (driver) related issue.

    Check the DPC system latency when you observe crackles/pops. It can give a hint about the origin of the problem (f.e. a plug-in intensively accessing the SSD and the related operations blocking the system, which can happen regardless of nominal disk transfer rates).

    With Anticipative FX processing switched off, you can use the Performance Meter (enable all RT options there) to see what is going on.

    A system thoroughly optimized for audio can utilize close to full processor power without audio problems; on a "standard" system, problems can start appearing with an almost idle CPU, especially without a proper Windows power plan.

  21. "The most popular" rumor about audio interfaces and latency:

    1. The importance of latency is overstated (each 30cm from an audio source adds ~1ms of latency in any case). In general, the latency limits are:
      • vocal monitoring through software: <3ms. Not used in practice, since partial direct monitoring (with zero latency) solves the problem. You normally want just reverb added, and that can have 50ms+ latency.
      • e-guitar soft sim monitoring. Preferably <5ms.
      • e-drums with soft synth monitoring. Preferably <7ms.
      • MIDI keyboard with soft synth monitoring. 10-15ms is tolerable in most situations. Under 20-25ms is playable.
      • for anything else latency is not important
    2. CPU power has little to do with the lowest possible latency. The difference is like a truck vs. a sports car: you can haul 10t, but that does not mean you can drive fast. Sure, in case you need those 10t (in the audio case, many heavy soft synths and effects) you need a vehicle which can do that. CPU characteristics are declared as "power", not as "speed", even though CPU frequency is naturally perceived as speed. The fact is, any 10MHz DSP easily beats (many times over) the most powerful 5GHz desktops in latency. The key to success with latency is strict audio optimization in the BIOS and OS. And there can be brick walls (in hardware and drivers).
    3. "The most popular audio interface" is Realtek. The 2i2 is a good entry-level music audio interface with pre-amps; it is not in the top league in any category (latency, drivers, sound quality), but it is reasonable for many use cases. Its stable usable latency is around 8ms, and so it can be inconvenient with e-guitar soft sims only. For comparison, under the same relaxed settings on the same system, the top (in latency) interfaces are under 5ms.
    4. The lowest latency the driver allows is rarely usable in practice, even on an optimized top system. When looking into "latency charts", buffer sizes under 32 are not meaningful. Check what the particular interface does with buffers of 64/128 at 48kHz (see the sketch after this post).

    "Accordingly" in the context means the highest setting of latency you are still convenient or the lowest your system and the project allows without pops and clicks, in case you get problems in your convenient range. That is human, project, system and tasks dependent.

     
