Everything posted by azslow3

  1. Do not confuse "MIDI learn" with "ACT learn". Both are for steering plug-ins, not Cakewalk itself, and they work in completely different ways. "MIDI learn" inside "ACT MIDI" is yet another type of "learn": that one is for learning the controller. Confusing, I know... For "Undo" in "ACT MIDI": in the Options tab (of ACT MIDI) select "Bank1" "B1" (or another one) in the Buttons row and pick "Edit|Undo" from the list. If you want a dedicated undo button in the CbB GUI, enable the "Custom" module in the Control Bar and assign Edit/Undo to one of its buttons. BTW, in AZ Controller you can construct context-sensitive custom actions, e.g. an Undo button that acts as a normal undo when the transport is stopped but performs "Stop->Undo->Record again" when you press it during recording. That is not straightforward to configure, but such examples (and more) exist on my site.
  2. For transport and undo, ACT MIDI will do the trick. You will need to configure it a bit; there are many instructions/videos. If you want more, you can try this: http://www.azslow.com/index.php/topic,247.0.html Depending on the version of your keyboard, you may need to re-learn the controls.
  3. (a) You try to use the X-Touch Mini from 2 (or more) applications in parallel. That is not supported by the standard Windows MIDI drivers. It is neither Cakewalk specific nor X-Touch specific. The message is misleading, but it comes from Windows (it has many misleading device-related diagnostics). Check that you do not have MIDI-OX, other DAWs, the Behringer editor, etc. running when you start Cakewalk. Also check that you are not using some plug-in which tries to work with MIDI devices directly (rare, but they exist, e.g. Ctrlr). (b) You have started Cakewalk with the X-Touch or another MIDI device disconnected / connected to a different port, etc. Under some conditions Cakewalk gets confused and theoretically can try to open the device twice (not that I have seen that). Delete the corresponding ini files for Cakewalk; it will regenerate them. (c) Your device/cable/connection is unstable. That can produce all sorts of errors. That is not unusual with the Mini: mine periodically fails to initialize or crashes after closing a DAW when it is connected through a 10m USB hub and I forget to switch on its extra power. Try another port and cable. (d) Set UWP mode for MIDI in Cakewalk. UWP supports device "sharing" (unlike MME).
  4. It should work with ACT MIDI (there is no dedicated module). Or you can use this: http://www.azslow.com/index.php/topic,377.0.html
  5. I am not sure you mean that, but there is one bug with "presets": there is just one ACT mapping per plug-in. If you have 2 ACT MIDI (or Generic, or any other) instances, they share the same ACT plug-in mapping table ("ACT Learn"). They both show the same "preset name". But you can use them with different presets; just be careful when saving, and never save a preset while both GUIs are open.
  6. Overclocking by itself should not produce any issues in the DAW. But overclocking recent CPUs hits throttling rather quickly. Check with some utility (e.g. ThrottleStop) that you do not have such a problem. Note that throttling can be triggered by several parameters: temperature, short-term power consumption, long-term power consumption, and current.
  7. I do not have an NI keyboard, but I have seen several such questions, without a good solution. Just some background:
     * the first NI keyboards supported Mackie Transport. Early Komplete Kontrol had in-DAW instance switching locked. So transport was working, and Komplete Kontrol steering required manual instance switching. Nothing else was foreseen and nothing could be implemented.
     * in the second generation of NI keyboards they removed Mackie Transport (and MIDI control) support. Komplete Kontrol was still the same, so transport functionality could be done with OSC only (but no one, to my knowledge, has done that for Cakewalk).
     * recent Komplete Kontrol has brought MIDI steering back, using a documented (though not public) protocol. It covers not only transport but other DAW parameters as well. They have also unlocked instance switching.
     So it is THEORETICALLY possible to make recent S and A keyboards work in Cakewalk with full integration. But that is not done yet...
  8. I see that as the only, and artificial, problem. Back in the day, there was high granularity in monitor/TV native resolutions: there were 800x600, 1024x768, 1440x900, 1680x1050, 1920x1080 etc. monitors. Smart people could choose the resolution "optimal" for their own perception and the desired size, so apps looked good without doing anything. E.g. I had a 19" 4:3 monitor at 1024x768, a 19" 16:9 at 1440x900, and a 24" at 1680x1050. At some point, apps moved toward a "native" FHD wide-screen design. That works well, from a 14" notebook up to a 27" desktop, without any scaling (at least for most people). They have now "jumped" 2x in resolution (fixed), but not so in size. 4K on 14" is overkill, and a 55" you are not going to use from 30cm distance. It can all be good for video: a 60" TV looks a tick better in 4K than in FHD from several meters (but the effect of frame interpolation and HDR is way more visible). But for monitors it is almost pointless. After the disaster with 3D, they decided to go the "safe way" in marketing: 4K, big size, curved. Just to sell something "new", not because it makes sense. In such discussions I always remember my good friend. We bought Panasonic plasma TVs at the same time; he took FHD and I took HD Ready (the price difference was almost 50%). 5 years later we bought our first Blu-rays, so he could finally use the advantage... But for the price difference I could have gotten a new projector with a 3m screen. Way more enjoyable with movies.
  9. Hi, 150% zoom means 1.5 physical pixels per logical one, so such scaling will be fuzzy by definition. 100%, 200%, 400% are fine. HiDPI-aware apps calculate everything themselves, obviously avoiding generic bitmap scaling and instead rendering fonts and other vector graphics at native DPI and the desired size. Another approach is to declare the interface HiDPI-aware but scale the picture at a constant 2x. Such apps are not really HiDPI, but they kind of trick the OS and so avoid the fuzzy 150% scaling. @bassman: I have seen your BCR2000 preset, but I have not checked it yet. Sorry.
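The arithmetic behind the point above can be sketched in a few lines (my own illustration; the function name is made up, not any OS API):

```python
# Sketch: why 150% scaling is fuzzy while 100/200/400% are not.
# A logical pixel maps to (scale/100) physical pixels; only integer
# factors keep every logical pixel edge on a physical pixel boundary.

def is_crisp(scale_percent: int) -> bool:
    """Integer scale factors map each logical pixel onto whole physical pixels."""
    return scale_percent % 100 == 0

for s in (100, 150, 200, 400):
    factor = s / 100
    label = "crisp" if is_crisp(s) else "fractional (fuzzy)"
    print(f"{s}%: 1 logical pixel -> {factor:g} physical pixels, {label}")
```

At 150% a one-pixel line must cover 1.5 physical pixels, so half of it is blended with the background; that is the "fuzziness by definition" the post refers to.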
  10. lol... why do people still fall into the same trap? Next will be 16K... Are you still going to buy it for 10" tablets? I understand there can be some advantages for photo processing (but I guess HDR brings more here). Whoever professionally works with 4K video obviously also needs a 4K screen. But why have pixels which are impossible to use 1:1? Yes, everything can be a tick smoother, at 4x the processing power or 4x the picture size. I am still happy with an HD Ready TV for TV (42") and an FHD 24" monitor. I use 100% on an FHD 15" notebook monitor as well, so I guess I would use 100% on a 32" 4K in case I buy one. And I wish myself a new 4K projector, since I have a 3m screen. But when someone needs scaling to work normally, it simply makes no sense.
  11. What do you mean by "/24"? Normally "<frequency>/24" means the ASIO buffer size is 24 samples, but I am not sure in your case (24 is a very small buffer, supported by some relatively expensive interfaces on top computers). All current interfaces can work in 24-bit mode, and 32-bit mode is hardware-wise not possible. DAWs process in 64-bit FP (at least 32-bit FP). So a 24-bit sample size is normally assumed as the only option and is omitted. 48/<n samples> has less latency than 44.1/<n samples>. Less latency, in addition to more samples per second, is more demanding on the system. If the system is at its limit at 44.1, it cannot handle 48, at least not with the same buffer setting. The sonic difference is small, and CbB supports upsampling for plug-ins. That is audible for many plug-ins; it makes them run at double frequency. So (IMHO): to be on the "safe side" in all audiophile discussions you need 96kHz; on a powerful system with a top interface it can be possible to get less latency at 48 than at 44.1 (on a weak system with a low/mid interface it makes sense to upgrade the interface first, in case latency matters); otherwise it makes no difference.
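The latency figures discussed above follow directly from buffer size and sample rate; a minimal sketch (the helper name is mine, and this counts only the one-way buffer, ignoring driver and converter overhead):

```python
# Sketch: one-way ASIO buffer latency = buffer_samples / sample_rate.
# Real round-trip latency adds a second buffer plus driver/converter overhead.

def buffer_latency_ms(samples: int, rate_hz: int) -> float:
    """Time one audio buffer spans, in milliseconds."""
    return samples / rate_hz * 1000

for rate in (44100, 48000, 96000):
    for buf in (24, 64, 128):
        print(f"{rate/1000:g} kHz / {buf:3d} samples -> "
              f"{buffer_latency_ms(buf, rate):5.2f} ms per buffer")
```

This is why 48/<n> always beats 44.1/<n> (the same number of samples spans less time), and why 24 samples at 48 kHz (0.5 ms per buffer) leaves the system almost no headroom.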
  12. When almost unused, they normally produce zero noise. Mine is in silent mode most of the time (including light 3D editing and old games). It depends on how much power a particular job needs, whether the card is in "gaming mode" (over/underclocking and voltage settings), and how good the cooling system is when passive. So that can be a good point for the OP: low profile can be problematic from the noise point of view. If the case allows, normal profile is safer.
  13. I have opted for 2x16GB. I know my needs at home (at work, decisions are in a different size and price range...). A GPU can outperform a CPU in some tasks, but that is task and GPU dependent. My 1050 Ti cannot outperform an i9 in Blender rendering.
  14. +1. I mean, asking someone to do his usual job for free is a bit unfair, no? So, comments from me (I have recently upgraded to an almost identical system, except I put in an MSI MB/GPU and populated 2 RAM slots only):
     * no overkill in your configuration
     * "overclocked i9-9900K" is not the term it used to be. Without any "overclocking", this CPU in "standard" turbo mode can (will) throttle under some conditions. With a CPU-intensive load on all threads (and MB auto-overclocking settings) it consumes 230W+. I have opted for the (Noctua NH-U)12A because of Noctua's recommendation. It can handle a 180W limit, probably a bit more (and probably with less aggressive voltage tweaking the CPU can do more work from the same power). The 12S performs less well. The common option for this CPU is not even the 14; it is the dual-tower 15. The case must be chosen accordingly, and access to the RAM slots will be limited, but that is the safer way to go.
     * to my knowledge there is no benefit from populating all 4 RAM slots, and the first two slots can be especially tricky to access with a big cooler.
     * you can set "all cores 5GHz" to get under 10% more theoretical computation power. And hope the load is not too high (so you do not use the whole computational power, not even close). A bit counterintuitive, but it has some theoretical benefits.
     * desktop CPUs have a limited number of PCI-E lanes (not related to the number of PCI-E slots). Your MB has 2 M.2 slots and only one is "primary". Also note the second M.2 disables a part of the SATA connections. So "3 M.2" and "5-6 HDD" is not realistic. Put in a bigger M.2 if you need more space (e.g. 1-2TB M.2 drives are not significantly more expensive per GB).
     * the system will be quiet, in case you do not put in any HDD. But under low load only. As you can guess, once something is consuming 300-400W in total, all of that is converted into heat, and this heat needs to be moved out. Noctua and others have many differently sized case fans. Modern PSUs and MBs have special connectors and steering for these extra fans, so they will stay quiet until they are required. "Stress loaded" air-cooled CPU+GPU are helicopter-like, no workarounds...
     * if you plan any GPU-intensive tasks, opt for a better GPU. If you have no GPU tasks at all, the CPU's built-in GPU can do the job. I have a 1050 Ti because I also play relatively old games on an FHD monitor; that is sufficient, smooth and quiet. Also 3D editing is smoother and quieter. But 3D rendering is way faster on the CPU than on this GPU.
     * I have 650W, no problem in that configuration. But if you plan a more powerful GPU (alone or SLI), that can be problematic. Note that the PSU total power is just a "label"; look at the max currents of the concrete rails to get an idea of where the problem comes from, even when the total consumption of all components is under the total limit.
  15. I do not know how the Presonus drivers work, but your first graph is definitely not from hardware sampling at 96kHz. Can it be that it shows 96kHz while in fact doing that in software? I am not sure that is possible with Presonus (it is possible with my M-Audio); does it work with ASIO and WDM at the same time? I mean, check that Windows is not configured to use it for something in 44.1/48kHz mode (the only way I know to convince Windows is to have another interface for that... when there is only one, Windows tries to grab it). My interfaces report correctly: when the interface is used by Windows / another app, its ASIO frequency cannot be changed. But I do not have a Presonus; they could e.g. "trick" by allowing the frequency change in software while still locking the hardware. Another possibility is checking the Windows settings: in the Control Panel (the old one) / Hardware and Sound / Sound, in Playback and Recording, right-click each device, then Properties / Advanced. Check that everything is at 96kHz, at least for all I/O of the FirePod.
  16. Another term is "slip stretching" (e.g. Ctrl+Shift dragging a clip border). That allows making the clip length what it should be, e.g. aligning clips recorded on different hardware clocks. Note that this is handled by a complex algorithm; you can set it in Preferences, separately for "preview" and "rendering", and the quality can be significantly different.
  17. This graph shows that your interface, at the moment of the test, physically works in 48kHz mode. Check its own control panel (if it has one). At least check that RMAA is in ASIO mode. In MME it WILL NOT switch the interface, so the interface continues to work in the last mode it was asked to work in. Where the garbage comes from I have no idea; on my interfaces the part up to 24kHz has the same general shape, but the upper part is zero (I mean when the hardware is at 48kHz and RMAA runs in MME at 96kHz). Can it be some "Windows audio enhancement" or some other software "effect"? (I do not have a FirePod, but when I had a SoundBlaster it tried to "improve" my sound internally.) I ask because even the part up to 20kHz is horrible. Sure, I do not expect it to be as flat as for top current interfaces, but even my M-Audio FireWire (without pre-amps) looks way better (it falls off after 20kHz in 48kHz mode). If that garbage has found its way into your recordings, there is nothing you can improve there. But at least it should be possible to switch your interface into real 96kHz for future recordings.
  18. Sorry, English is not my primary language. Is your question about a Dell XPS? All XPS (independent of version) and most Latitudes have latency problems. DELL has messed something up in hardware and/or BIOS; they periodically release "BIOS updates" which should improve latency, but all tests on the Internet confirm the opposite. The last BIOS update prevented Linux from running normally, at least in my case... Linux developers write that the Dell BIOS exports buggy information. That can be the reason why the MS ACPI driver periodically spikes up to 3-5ms. Some people disable devices assigned to ACPI; some of them claim improvements... Can the audio interface influence the result? Kind of. A Roland VS-20 just freezes dead on my XPS (re-connecting does not help to get the sound back). An RME BF Pro has no problems at 64 samples / 48kHz, at least under light load. That does not mean there are no glitches at all, but the resulting sound is acceptable for me. It is important to have absolutely no background activity: all MS tasks should be finished, network disconnected, no extra USB devices. A completely different point: how do you connect your keyboard? E.g. I periodically get strange latency from my DP (Kawai) and e-drums (Roland), both connected via USB. I know it is strange because 2 other MIDI keyboards connected in parallel have no such key-to-sound delay. I still do not understand under which conditions that happens. Try connecting via MIDI to the audio interface (or via USB, in case you always use MIDI) to check whether there is a difference. 5-7ms of soft-synth output latency (128-sample buffer) should not be significantly inconvenient if the rest is OK (transferring a 10-finger chord through a MIDI cable takes 6-10ms, and I have seen people playing MIDI keyboards on stage 😉)
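The 6-10 ms figure for a chord over a DIN MIDI cable can be checked with basic arithmetic (a sketch; the function name is mine, the constants come from the MIDI 1.0 wire format):

```python
# DIN MIDI runs at 31250 baud with 10 bits per byte on the wire
# (1 start bit + 8 data bits + 1 stop bit), i.e. 0.32 ms per byte.
BAUD = 31250
BITS_PER_BYTE = 10
MS_PER_BYTE = BITS_PER_BYTE / BAUD * 1000  # 0.32 ms

def chord_transfer_ms(notes: int, running_status: bool) -> float:
    """Wire time for a chord of Note On messages (3 bytes each; with
    running status the status byte is sent once, then 2 bytes per note)."""
    nbytes = 1 + 2 * notes if running_status else 3 * notes
    return nbytes * MS_PER_BYTE

print(f"10 notes, full messages:  {chord_transfer_ms(10, False):.1f} ms")
print(f"10 notes, running status: {chord_transfer_ms(10, True):.2f} ms")
```

Ten full Note On messages take 30 bytes ≈ 9.6 ms; with running status it drops to 21 bytes ≈ 6.7 ms, matching the 6-10 ms range quoted above.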
  19. I will not repeat my post in the recent thread; you can find the full story if you want. Two major points: for some people it takes much longer (I cannot update even just the Assistant in 5 minutes), and for some people the Assistant NEVER works as expected, I guess also a consequence of a slow connection. Recently I upgraded my computer, so Windows 10 was installed from scratch and updated. I downloaded the new Assistant. "Installing..." (NO PROGRESS INDICATOR). After an hour, I decided to check... the file it was downloading was not growing. To check that, I had to know where the files are, no? Unlike on my old computer, the second attempt to install CbB with the Assistant was successful: it managed to start the installation after downloading. Still, all that is a pain.
  20. Got it. So the opposite idea from "early" processing: "late" processing for the content of the live track. But if CbB developers can manage to calculate the correct delay difference between live and other tracks, why not do "early" processing and remove the PDC button (since there would be no need for it)? 😉
  21. Well, this option automatically makes one MIDI track live. In my tests, always, without clicking on anything. If you see no difference between this option on and off, something is really broken. If you try the test I described before, with 3 tracks, do you get the expected result? If yes (so no problem), there is probably some fancy routing in your project.
  22. +1. Do NOT buy a DELL XPS or other DELL "slim" toys. Not only does my XPS have bad latency; after some BIOS update the fan on the left side is always spinning, and with the latest BIOS update Linux cannot run at all. A complete BIOS downgrade is not possible. Noisy as hell (fans, some electrical crackling inside the notebook and power supply), a piece of crap...
  23. I do not know how delaying MIDI can help with PDC. MIDI is bound to input audio. "Pre-calculating" non-live tracks with delays is, to my knowledge, the only known way to deal with PDC better (it eliminates the problem almost completely, except for the open question of what to do when a live track has a send to a delayed bus). No possibility to permanently set MIDI input to "None" and no possibility to permanently disable VST MIDI output (or at least not include it in the "All" stream) are 2 of many long-standing bugs... "Allow MIDI Recording without an Armed Track" (record arm by echo) sounds as logical as "Record arm but don't record" (monitor without recording) in another DAW 😀
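The "pre-calculating non-live tracks" idea above can be sketched as follows (purely my illustration of the concept; Cakewalk's actual implementation is not public, and the function name is made up):

```python
# Sketch of PDC via "early" processing: each non-live track is read
# ahead by its own plug-in chain latency, so its output lines up with
# a live track that cannot be read ahead (its input arrives in real time).

def read_ahead_offsets(chain_latency: dict, live: set) -> dict:
    """chain_latency: {track: plug-in chain latency in samples}.
    live: tracks fed by a real-time input (cannot be pre-read).
    Returns how many samples early each track must be fetched."""
    return {t: (0 if t in live else lat) for t, lat in chain_latency.items()}

offsets = read_ahead_offsets(
    {"vocals_live": 0, "drums": 512, "synth": 2048},
    live={"vocals_live"},
)
print(offsets)
```

The open problem mentioned in the post shows up here too: a send from the live track into a pre-read (delayed) bus cannot be fetched early, because the live audio does not exist yet.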
  24. "Always echo" is kind of useful when there are several instruments, for fast switching between them using the only MIDI keyboard. Changing the instrument is then just a matter of selecting another track, and that can be easily automated. The usually proposed way is avoiding plug-ins with delays until the final phase of mixing, or even until mastering, so auto echo is normally not an issue. Some plug-ins have really huge delays, up to 1 second. Mixing in such an environment is almost impossible: any parameter change in the chain before such a plug-in takes 1 second before it is audible. I was so used to this option that I missed it in another DAW, especially since achieving the same effect there requires setting 3 different track options for each track in question.
  25. I have my own checklist, and recently updated it: http://www.azslow.com/index.php/topic,395.0.html It includes all the ideas I could find on the Internet about the topic, including exotic and questionable "rumors". Most stuff is not so important, really. I have to put somewhere that the "Use LatencyMon to check system latency and problems with devices/drivers" and "Use REAPER's built-in Performance Monitor to check audio/plug-in timing" sections are where people should start. The RME control panel can show severe hardware problems, ThrottleStop shows problems with cooling and fancy CPU settings, the REAPER RT monitor can show where the bottleneck is, and LatencyMon identifies the driver (sometimes the MS performance tool is needed to find the real driver). All that within 5 minutes, without thinking much... It can happen that some PCI-e card shows nothing bad in its own driver but causes problems in other devices/drivers, but that is unusual. No checklist can make badly designed hardware better; e.g. nothing can help a DELL XPS run with low latency. But at least the utilities can point to where the problem is (in the case of DELL that is system-related hardware, accessed by ACPI, and severe system locks probably coming from the same source). Also, the mentioned tools give exact numerical values showing which compromise is worth the effort. E.g. in many cases disabling C-states and SpeedStep and locking the CPU frequency just allows running at a 32/48/64 buffer size instead of 48/64/96. And I guess some people will prefer to have 8W CPU power consumption, and so an idle fan and no noise, with a 96-sample buffer, instead of reducing it to 64 by spending 40W on an idle system.