Everything posted by azslow3
-
MCU support for Komplete kontrol seems to be on its way
azslow3 replied to Anders Madsen's topic in Cakewalk by BandLab
You need "extra configuration": you have to tell Cakewalk explicitly that you have an MCU? This feature is not "legacy": effects with MIDI input and synths are effectively the same thing, so I do not think it is ever going to be obsolete. I try to deduce the related synth in a somewhat hacky way, from the number returned for the track's MIDI output. The track should be MIDI (or "Simple synth") and then it works. But I remember there are situations when that fails (Cakewalk has a rather fancy way of referencing I/O channels, which I guess is the reason for occasional problems with in/out assignments inside a project once the hardware configuration is changed). So an explicit API should be added to Cakewalk to do the same thing in a "production quality" way. -
Can be combined with:
* Cakewalk traditionally "likes" the interface to be set to the project sample rate. But that rate is fixed only after you have some audio in the project.
* Cakewalk traditionally is not the best at enforcing its own requirements on the interface.
* (Any) interface is locked to one particular frequency at any given time; when several applications running in parallel want different rates, there are no common rules for what should happen. Other software can be flexible in that domain, silently recoding audio "on the fly" before sending it to the interface.
So that is not an indication something is wrong with Cakewalk. Tip: when manipulating the sample rate or other interface parameters outside Cakewalk, stop the Audio Engine in Cakewalk (opening Preferences does that automatically) and do not forget to match Preferences to the changed settings before re-enabling the engine.
-
MCU support for Komplete kontrol seems to be on its way
azslow3 replied to Anders Madsen's topic in Cakewalk by BandLab
I guess that is homogeneous (S1/2 and A) MIDI-based bi-directional communication. MKI was MIDI based, MKII was/is OSC based, and I guess they have understood that is a mess. Not that they make all this a big secret (they have allowed open-source projects which use all these technologies), but for some reason they do not publish the protocols publicly. Note that these protocols are more advanced than MCU, which is obvious since the "DAW integration" for supported DAWs is above MCU level. From the CbB side, a reliable way to find which synth is related to which track (available at the surface API level) still does not exist. On the NI side, the track-switching functionality has been unlocked for a while (it was previously locked to "supported" DAWs). From the practical perspective of full integration with CbB, I still do not have any of these keyboards. And while I have found that my X-Touch Mini preset (developed without the device) is reasonably good with the Mini (which I bought recently), I have found, for example, my accessible RME app (also developed without the device) far from perfect, since I had misunderstood some concepts of TotalMix and so my mental model of how things work was not bound to reality... (I have fixed that, since I have an RME now.) So there will be no (my) preset till I have the device. -
WOW - what a pleasant hardware surprise!
azslow3 replied to Robert Bone's topic in Cakewalk by BandLab
Note that NVMe is only as good as the drive behind it. And cheap drives are cheap for a reason... Examples from my own notebooks:
* The original drive installed in my Dell XPS was not bad; it could really deliver above-SATA speeds. But even that drive was not comparable with the EVO Pro 970 I put into it later.
* My Lenovo IdeaPad also says "NVMe". But practical benchmarks showed under-SATA speeds, all the way down to 90 MB/second.
When looking at published performance results, keep in mind that SSD drives know when some space is not allocated. "Reading" an empty drive indicates up to full NVMe speed; the drive is not really accessing the storage in this case, it just generates zeros. -
Mackie d8b used for a controller in Cakerwalk
azslow3 replied to chuckebaby's topic in Cakewalk by BandLab
From what I have seen, the built-in MIDI supports just one HUI, so 8 channels only. -
That means it can not be loaded. Try this one: https://github.com/AZSlow3/Cakewalk-Control-Surface-SDK/raw/dyneq/Bin/x64/MackieControl.dll Put it into the same place where you see the installed file. It is built with VS2017, has some bug fixes, and is adjusted to support CbB. Put https://raw.githubusercontent.com/AZSlow3/Cakewalk-Control-Surface-SDK/dyneq/Surfaces/MackieControl/MackieControl.ini into the same directory for the ProChannel EQ and compressor mappings (edit it for FX plug-ins if you want, but other ProChannel modules can not be selected). If you had 32-bit Sonar installed before, it can be that there is no 64-bit MackieControl on your computer, or that it is not registered (CbB sees the 32-bit registration but can not work with it). Open a console as Administrator (press the Start button, type "cmd", right-click on the console entry and select "Run as administrator"), "cd" to the directory with the DLL and run "regsvr32 MackieControl.dll".
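The registration steps above can be sketched as a short command sequence; note the directory below is only a placeholder, since the actual install location depends on where you put the DLL:

```shell
:: Run in a console opened with "Run as administrator".
:: The path is an example -- cd to wherever MackieControl.dll actually lives.
cd /d "C:\path\to\surface\dll"
regsvr32 MackieControl.dll
```

If registration succeeds, the surface should then appear in the Control Surfaces list in Cakewalk's Preferences.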
-
Yet another thread which reminds me that my (home) "primary desktop" is a Celeron, with 4GB RAM and a 512GB HDD... Not everything I have can be loaded (sure, just one track... the others should be rendered), but I am still almost happy (I have invested ~€400 to make it silent). For an "Alienware 17" (and other "big" notebooks), be aware that you will not be allowed to take it on board most (?) European low-cost airlines for free. My 15'' XPS is the biggest that fits into the "free box" while in a (properly sized) bag. For the Dell XPS: these morons do not include the MB->SATA cable in case the notebook originally ships without the second (2.5'') drive. NVMe is fine for the case when something should be continuously streamed and that something needs the speed. It can save some seconds on computer/program start as well, but that matters only under serious time constraints. For most cases a SATA SSD is sufficient. It is still ridiculously expensive to build large arrays with SSDs, so HDDs will survive for a while (I mean for 50+ TB systems; not sure DAWs will ever need that). In practice, I have not experienced any noticeable change after replacing a "cheap" NVMe with the latest Samsung in my notebook. But, for example, a busy SQL server can be boosted that way (optimizing the end applications can bring much more, but that costs way more money).
-
Cakewalk not recording midi from some keys
azslow3 replied to Peter Lowe's topic in Cakewalk by BandLab
Do you use Control Surfaces? They can block individual keys, thinking they are for controlling purposes. Also look at the MIDI icon in the system tray: does it "blink" when you press these keys? If not, the MOX is not sending them or they are blocked outside (unlikely, since Sonar is working for you). -
It is more about the logic inside the UPS: how long it waits until the power is fine again and which instructions it produces. My best real-life example is 20 years old. "Smart" colleagues had connected a (relatively) big particle accelerator to the office power network. Normally it was powered from a separate major line (many megawatts of power consumption). As a result, there were "tiny" spikes of around ~500V every several seconds. Typical measurement devices were still showing a constant 220V; only oscilloscopes could observe the real picture. That was a good test for many devices. Interestingly, most computers, switches and other electronics continued working normally. In the end we "lost" just several devices (out of thousands). But UPS (and similar) equipment went crazy. The best UPSes indicated a continuous failure, after a while gracefully switched off the powered equipment, and went offline. Many UPSes kept switching on/off; some of them eventually were completely discharged, and a part of them did not even trigger a graceful shutdown. The best observation was with a big power system which had diesel generators as the second-level backup: the electronics kept starting and stopping the engines.
-
I just want to describe my experience with situations (and the partially found reasons) involving something I call "e-noise" (similar to what you describe):
* All built-in interfaces (Realtek, SB) reacted to the (wired) mouse, HDD activity and CPU load. I could (can) not eliminate that. The level is low, but annoying. In monitors and headphones.
* Kawai DP (2-wire power cable). Problematic when there is any connection to/from it (any combination of USB/MIDI/Audio). MIDI was cured by cutting the ground at the receiving end. By the MIDI standard it should not be connected, but most MIDI cables are symmetric (so it is soldered in both connectors). It seems my TC voice processor is badly designed, so with DP->TC connected by MIDI, the TC XLR outputs start producing the noise. The audio connection is way better with the HD 400. No solution for USB yet.
* Small Behringer USB mixer (2-wire power cable). Generates "e-noise" as soon as more than just one input and output are in use. E.g. USB + (balanced) output = no noise. (One unbalanced/balanced) input + output = no noise. But USB + input + output = noise. Several inputs + outputs = low/no noise, depending on the connected equipment. The HD 400 helps. But I ended up putting in a second-hand audio interface (8x8, Phonic) as a mixer.
* Roland TD (unbalanced outputs, 2-wire power cable). Was prone to generating noise with the Behringer mixer; the HD 400 helps.
* Monitors connected with unbalanced (TS) AND balanced (XLR) cables at the same time. Activity at the other cable end (connected/disconnected/connected to different equipment) makes no difference. When just one cable is connected, there is no noise.
And now some crazy stuff I had the "pleasure" to observe during my life...
* Mixed "ground" and "zero" in a house power net, so at some place(s) they were connected (!). Sometimes indirectly, when "ground" was wired as "zero". That is really dangerous...
* Some (non-music) devices create distortions on some or all power wires.
* UPSes/"power conditioners" can smoke and even explode (one exploded in my hands... I was really lucky, there were no consequences for me). Thinking about it, such devices have caused fires/trouble more often than any other equipment. They also go crazy when something is wrong with the power line (start switching to battery and back continuously, etc.).
-
Bad latency, but worse with ASIO than with MME32
azslow3 replied to Matt R's topic in Cakewalk by BandLab
Modern Realtek chips do have an ASIO driver. And the latency can be quite low with it (under 7ms, so not worse than most dedicated interfaces). Some notebook manufacturers ship branded drivers with a "forgotten" ASIO part (or just the control-panel part of it). I had that issue with a Dell XPS. Older chips can be used with WASAPI or ASIO4ALL. All such chips I have tested had between 10ms and 16ms of usable latency. For MIDI tasks that is sufficient. Note that unlike with native ASIO drivers, the latency reported by other drivers (including ASIO4ALL) is wrong in most cases. So do not trust "I have 3ms latency with Realtek!" posts. But you should be able to get under 20ms even in the worst scenario. -
Solved (thanks!!) Can i buy an ad free forum experience?
azslow3 replied to Gswitz's topic in Cakewalk by BandLab
Tons of ads periodically appear on top of posts, Firefox/Linux. Not consistent; re-opening the thread removed them. PS. What do people expect from a "free" forum for a "free" product? Have you never played F2P games? -
Yes, the CbB/Sonar engine is what it is. It does not support "ahead" audio processing like some other DAWs (Studio One, REAPER, etc.). So either turn off PDC compensation (there is a special button) and live with the consequences, or do not use the corresponding plug-ins during recording. No other workarounds.
-
Turn Your Apple Watch Into A Wearable Wireless MIDI Controller
azslow3 replied to TheSteven's topic in The Coffee House
I have tried to use a mouse for that (a dedicated mouse as a MIDI controller). Costs $5, does not require a watch and is simpler to use than any touch device (especially one as small as a watch). Back with a normal controller now: big buttons and normal knobs. Most people do not try to use MiniGuitars and NanoViolins for a reason, even though theoretically that is possible. -
A bit on-topic... Melodyne in REAPER has full freedom. It can be used as a Track FX. Note that the content of the track/item is not "frozen": it is possible to move/edit clips on a track with Melodyne. And in the REAPER release (and after a Melodyne update) that is really working pretty well (== without crashing, including adding Melodyne without stopping the transport). I have just 3 observations:
* When used on an item (so more like the Sonar way, to correct a small piece instead of the whole track), audio preview is not working (by current REAPER design). So this option is effectively almost useless.
* People put Melodyne on every track in a 20-30 track project and then wonder why the computer is instantly 100% loaded during editing (Melodyne continuously analyzes the updated source in the background).
* Without the "save minimal undo" compatibility option set (unset by default for all plug-ins), any edit operation in Melodyne is painful on a slow computer.
So if Cockos manages to audition Melodyne as a take FX, the REAPER ARA integration can be called perfect...
-
Windows 10 can influence DAW performance and some non-ASIO aspects of audio. But as long as the device and the driver are the same, you can expect the same latency. Also, as I have mentioned, the first post in that thread does not contain everything; people were measuring interfaces on Windows 10 and not only at 44.1kHz. For the Scarlett, one of the posts mentions that the "64" setting is probably 128 samples per buffer internally. All interfaces have some "extra" latency settings; some of them expose a part of these settings in some form to the user. The latency is a sum of many delays: AD + transfer to the computer + driver + transfer from the computer + DA. The buffer size is just the chunk size in which audio is processed in the DAW. That directly influences the latency: e.g. if a DAW works at 48kHz/128, the "buffer length" in time is ~2.7ms. Since the DAW receives the whole buffer, output theoretically can not happen earlier than 2.7ms after the first sample is digitized. But all other processes are not instant either; e.g. the DAW needs time to process the buffer. The difference between the measured latency and the buffer-size latency is what the interface + driver take to do the rest. E.g. 7.3ms - 2.7ms = 4.6ms. The smaller the buffer's own length (e.g. 96kHz/64 is ~0.7ms), the smaller the total latency can be with the same "overhead" (4.6ms + 0.7ms = 5.3ms). In practice, not all components of the overhead are constant.
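The buffer arithmetic above can be sketched in a few lines of Python (the 7.3ms figure is the measured round-trip latency quoted in this thread):

```python
# Length of one audio buffer in milliseconds, and the interface/driver
# "overhead" left after subtracting it from a measured round-trip latency.

def buffer_ms(samples: int, rate_hz: int) -> float:
    """One buffer's duration in milliseconds."""
    return samples / rate_hz * 1000.0

measured_rtl = 7.3                 # measured round-trip latency, ms (from the thread)
b = buffer_ms(128, 48000)          # one 128-sample buffer at 48 kHz, ~2.67 ms
overhead = measured_rtl - b        # everything the interface + driver add, ~4.63 ms

print(f"buffer: {b:.2f} ms, overhead: {overhead:.2f} ms")
# With a smaller buffer, assuming the overhead stays roughly constant:
print(f"96 kHz / 64 estimate: {overhead + buffer_ms(64, 96000):.2f} ms")
```

The assumption of a constant overhead is the simplification the post itself flags: in practice, not all components of the overhead stay fixed as the buffer shrinks.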
-
https://www.gearslutz.com/board/music-computers/618474-audio-interface-low-latency-performance-data-base.html Note that many interfaces/conditions are not in the first post; search the thread for RTL tables for almost all interfaces. Note that not all posts there have equal "quality". And the "traps" are not only numbers taken from "some DAW", but also RTL screenshots where the interface has some built-in route, so the "loopback" was performed without DA-AD conversion. Also, these numbers should be interpreted as "the best you can get". So even if you are able to use some mode (like 96kHz/32), you will get at best the same numbers. And it can happen that a particular mode with a particular interface/driver is not usable (on a particular computer, DAW, project, etc.). It took me a while to understand that many (most?) people are not interested in low latency. They do not use in-DAW monitoring, except maybe for MIDI, for which latency is less important. So even some "high end" devices have big latency.
-
* https://www.gearslutz.com/board/showpost.php?p=12352524&postcount=1163 So according to that post, the settings for the 6i6 can be wrongly labeled somehow: "64" is more like "128". So the next thing to check is why you can not go lower. That is computer related. It can be that nothing can be done (as e.g. with my 8-year-old Celeron-class desktop), but with a relatively powerful computer, even a 6-year-old one, it should be possible to reduce the buffer size after tweaking.
* Your original 7.3ms is good. In fact too good for that Presonus; all reports indicate around 10ms for the same settings. Note that this interface can report wrong numbers to the DAW. Make a loopback check, manually or with RTL, to get the real latency.
------ UAC: according to all tests it has very good latency. It is a bit more expensive than others and definitely brings better latency for that money. But it can not do 7.3ms at 48kHz/128, so I could not resist "trolling" a bit. UAC owners could have prevented that with "wait... 7.3ms? even my good-latency UAC can not do that with such settings". But instead it was "7.3ms? that's too high... my UAC is better".
-
Do you mean "not high enough"? All published results I could find show that the UAC-2 has 7.7ms under the same conditions. I can not judge the interface because I do not have it, but all the "ultra super under 2ms" RTL numbers for the UAC spammed across the Internet are about "96kHz / 24 samples per buffer", normally commented with "with 50 tracks full of heavy effects". I guess they borrowed a computer from aliens (or they did the test with REAPER in playback mode with anticipative processing on, in other words not running anything in realtime).
-
I hope your problem is solved by the new audio interface. 7.3ms is not bad; the best you can get at 48kHz/128 is 6.6ms. That is not a great improvement for 6x the price. And so the question is why you could not use 64. Can be the interface, but can be something else. You will know soon.
-
No hosts? And on the topic... Come on, the only big country which tried to calculate the real price of everything, and attempted to use such prices, has not existed for the last 25 years. Does someone still think that producing a product and selling it IS the way to make BIG money in THIS world? LOL.
-
Assign Transport controls and pads of a Axiom25 v2
azslow3 replied to Jymbopalyse Armageddon's question in Q&A
Assign the pads to a different MIDI channel from the keys (on the controller). Use the same input for both tracks, but select the particular channel for each track. Use the Generic Surface to learn the transport buttons. -
My list does not include multi-CPU checks. I have racks of servers, but they are not used for audio.
-
While many things can be tweaked so that Sonar/CbB runs fine, expecting it to work like the most performance-optimized DAW to date (I mean REAPER) is hopeless. The feature responsible for fluent operation with small buffers, anticipative processing, simply does not exist in the Cakewalk engine. But it should be possible to get CbB working. With a bigger buffer size and fewer plug-ins, but still. When I think I have troubles, I normally start with my personal list: http://www.azslow.com/index.php?topic=395.0
-
What sample rate are you recording at (and recommend)?
azslow3 replied to Christian Jones's topic in Cakewalk by BandLab
As the original question is about "recording", it is different from the intermediate format (which had better stay 32-bit FP) and the final format (which can be 16-bit). Each bit is ~6dB (SNR, DNR, etc... just an approximation, but it works well in all the math). When you record without a hardware compressor/limiter and, let's say, set the gain so the result averages -18dB and record into 16 bits, your average resolution will be 13 bits. And a not-so-loud section can easily be recorded with just 10 bits. During mixing and mastering, you will level this signal (with compressors, EQ, etc.). Try bitcrushing something to 10 bits; that is easy to notice even with $1 headphones on a $1 built-in Realtek interface. And if you record close to 0dB, a part of the signal is going to be digitally clipped. That you can also hear on low-end equipment. So 24 bits for recording is a good idea. With 32 samples per second you can get at most a 16Hz frequency, so the frequency response is not good by definition. I believe that 16-bit dithering makes sense. I am sure there is better equipment than mine and far more advanced listeners; I can hear the difference only from 14 bits downwards. But in such examples I think it is important to mention whether there was unusual amplification or not. Were you playing an already mastered track and could hear the difference in the reverb at the very end, or was it just the reverb sound with the signal amplified +12dB or more? Because with sufficient amplification it is possible to hear the difference of 24-bit dithering on a notebook speaker. I had to amplify more than 60dB + max all other volumes to achieve that, so it makes no sense, but it is possible.
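A minimal sketch of the bits-to-dB arithmetic above; the ~6dB-per-bit figure is 20·log10(2), and the -18dB example matches the post:

```python
import math

# Each bit of resolution adds about 20*log10(2) ~= 6.02 dB of dynamic range,
# so recording with 18 dB of headroom into 16 bits leaves ~13 effective bits.
DB_PER_BIT = 20 * math.log10(2)

def effective_bits(bit_depth: int, headroom_db: float) -> float:
    """Effective resolution after leaving the given headroom below 0 dBFS."""
    return bit_depth - headroom_db / DB_PER_BIT

print(round(effective_bits(16, 18), 1))   # -> 13.0  (the post's 16-bit example)
print(round(effective_bits(24, 18), 1))   # -> 21.0  (why 24-bit recording is safer)
```

The same relation is why a quieter passage sitting another ~18dB down loses roughly three more bits, landing near the 10-bit figure the post mentions.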