Everything posted by azslow3
-
I guess the confusion comes from the existence of "revived" plug-ins: info about VX-64 in Sonar 2024/2025. Revived plug-ins are Sonar/subscription-locked VST3 (the originals are unlocked). VX-64 is not in the list (at least not yet). The original VX-64 plug-ins are from pre-Platinum times. They can still be installed offline (I have not checked recently, but there is no reason they would no longer work), provided the user has a corresponding license and the installation files (8.5 - X3).
-
You need to check that Focusrite Loopback is enabled. In Windows 11:
1) Right-click the speaker icon and select "Sound settings" ("Soundeinstellungen").
2) In the panel that appears, select "More sound settings" ("Weitere Soundeinstellungen").
3) In the next panel select the "Recording" tab ("Aufnahme"). Try to find something like "Focusrite Loopback". If it is not there, right-click on any device and select "Show disabled devices" ("Deaktivierte Geräte anzeigen"). Then right-click on the Focusrite loopback device (I guess it should be there) and select "Enable" ("Aktivieren"), obviously only in case it is disabled.
4) You should now be able to select it as an input in Audacity and other programs. And hopefully it can record Sonar then.
PS. You could record in Sonar because it is in the set of ASIO input channels, unrelated to the Windows settings.
-
Have you followed all the steps in my last link? I mean "Expose / Hide Windows Channels". The "Lautsprecher (loopback)" ("Speakers (loopback)") input in Audacity is not the Focusrite loop-back input. It is a WASAPI output, the feature I have mentioned before: it can loop-back record the system plus apps which use WASAPI, but not ASIO. As a check, you can use the steps described later, i.e. set the input of some program in the Windows audio settings to "Loopback".
-
It can be that Will is right and you can; at least there is one report of success. But he doesn't mention if he was using ASIO in Cakewalk. I suggest you try. If that doesn't work, try disabling the Loopback input in Cakewalk and try again. For sure that is not going to work with two ASIO applications; they can't run in parallel with Focusrite.
-
WASAPI does not require anything to record any WASAPI output (i.e. loop-back, e.g. system sounds), except that the recording application should be able to work that way (to my knowledge Sonar doesn't, Audacity does). But that doesn't work with ASIO outputs (at least in my tests). A real interface loopback logically should support all directions, but it was primarily intended to record something (not ASIO) into a DAW (ASIO), not the other way around. At the moment I don't have a Focusrite to check.
-
These apps are not (only) for on-board sound, and they are not in the category of "generic drivers"; they allow flexible audio routing in case (any) hardware interface and the OS's own features are insufficient. Generic drivers can disturb dedicated ASIO drivers, so it is recommended to remove them in case of problems, as a part of troubleshooting. And keep them removed in case they are not needed, to avoid these problems reappearing. That is good advice for everything not used (not only for audio processing). But when you explicitly use something for a good reason, and that can be a generic driver, it is fine. If it works. In this thread there was no discussion about any "problems with audio". It is a "how to" thread.
-
From the second OP post, I have concluded he has an "ASIO audio interface with loopback feature" and knows about that feature... That is "the answer" in case one loopback is sufficient; from what I remember Focusrite has exactly one. But in many situations you need more than one. Even in the case of, let's say, Zoom, you need (a) a mix sent to Zoom and (b) an input from Zoom. If locally you have just one mic and don't want to send anything else, a single loopback can work. In all other cases you have a problem, e.g. if you have two local mics and want to record them separately, or want to send the sound from your computer. If you have several programs which want ASIO, you need a multi-client ASIO capable interface. Most are not. If you want to record audio from one program into another, but at the same time record the whole process, you need a recorder which can do that (e.g. OBS) or multiple loopbacks. A real multi-client ASIO capable interface with multi-loopback support is "the answer" in almost all cases, but they are not cheap 😏 A virtual multi-client ASIO capable interface with multi-loopback is also "the answer" in almost all cases. And that is what the Voicemeeter programs are. The only problem is they are not stable...
-
Technically MOTU AVB and other audio interfaces with internal mixing capabilities are "digital mixers". Dedicated digital mixers also convert everything into digital and do the rest using a built-in computer. And since the audio is already in digital form, it is "natural" to send it to a computer when required, so they function as an audio interface. Some mixers are stage oriented and have no audio interface, but most do (or have such an option). Pre-amps in some (cheaper) stage-oriented mixers have lower quality. Digital mixers used as audio interfaces in general have higher latency than usual audio interfaces; they are not designed for "through-DAW" monitoring since they have "sufficient" effects on board. If live control without a computer and built-in effects are not required, there is no reason to switch to a DM.

DAW control is simpler and better with a dedicated control surface. Sonar doesn't have a stock surface plug-in with OSC. Also, none of the stock plug-ins are flexible. (Any) VST can be controlled by a real MCU and clones; DM emulations may have insufficient buttons to switch into that mode (and/or to select the plug-in to control). AZ Controller can be configured to do anything possible in Sonar (from a Control Surface); the problem is that someone has to configure it (and that can be challenging). If you just want to control Volume/Pan/Mute and switch to controlling plug-ins (e.g. EQ) using the same controls, that is relatively simple with AZ Controller (also possible with Cakewalk Generic Surface or Cakewalk ACT MIDI, but for a limited number of controls and without feedback).

BTW you can start with TouchDAW / TouchOSC on a tablet. Both work with stock Cakewalk plug-ins and AZ Controller and cost almost nothing. The functionality of TouchDAW is exactly the same as with a real MCU. TouchOSC can be configured to work as any surface. The only difference from real devices: no hardware controls.
-
Only analog mixers work "without any latency" (really still with some, but it is super tiny); any digital mixer has latency, and any software mixer has significant latency. But that is not my point...

Voicemeeter are software mixers. You need Voicemeeter/Banana/Potato depending on the number of separate buses you want. You need a separate bus or separate channels for each application (Windows also counts as "an application"). With Voicemeeter you can "split by channels"; if you prefer separate "buses", use Banana/Potato. Then sequentially configure what goes where, starting from the inputs (if you record from Sonar, that is "an input" for Voicemeeter).

But sometimes it doesn't work... E.g. mixing different sampling rates is asking for trouble, keeping "devices" in use while re-configuring them is asking for trouble, changing the audio hardware interface settings while Voicemeeter is working is asking for trouble, using software which directly accesses the audio interface is asking for trouble, etc. And periodically it just stops working normally without any reason (including distorting the audio... rather annoying if you were recording an interview with someone and what you could hear live was fine). Also it can just get "stuck", not (always) showing errors; it simply stops working as configured till restarted.

This approach is OK if you want the setup one time for a hobby activity. For anything more serious or for everyday use, consider a small external mixer with a second audio interface (the built-in Realtek counts as such) as a cheap solution, or a (half-)hardware digital mixer/interface for a comfortable way (with RME you also get really small latency and good pre-amps; even the Babyface has 12 mono / 6 stereo mixes you can record, composed from 4 hardware inputs, 6 stereo outputs from (different) software and other mixes; of them, the first two stereo mixes are for monitors and headphones. Well, it may cost more than a small analog mixer...).
Podcast devices are somewhere in between in price, have convenient hardware controls, and some have a built-in recorder (for "just in case..."), but they are less flexible for "inside computer" mixing.
-
Yes, "a little tricky, but not impossible". Not really "a little", depending on the mixer and what you want.

Some DMs can send MIDI, or something comparable to it (OSC), or convertible to it (proprietary daemons), when you operate their physical controls (or change the corresponding parameters by other means, e.g. using a tablet app to control the mixer). The purpose of that capability can be:
1) explicit DAW (or some MIDI device) control. In this case the control/parameter on the mixer is not changing anything on the mixer itself.
2) syncing one mixer to another (e.g. old Yamaha) or controlling the mixer from outside (A&H, Behringer, etc.). In this case the corresponding mixer parameters are changed.

When a mixer supports "DAW control", that is of the first kind, as a special control layout on the mixer. It is normally limited, just strip volumes/mutes plus transport. And even that can be limited to 8 channels only (even on devices with 16-24 strips).

When you use the second approach, you "sync" parameters between the DAW and the mixer, e.g. the HP Freq of the internal mixer EQ and the HP Freq in an EQ plug-in. Obviously you can't use that mixer channel "normally", its EQ will be DAW EQ dependent (but that can be used in the opposite direction: you control the hardware EQ using parameters from the software EQ).

If you read in the documentation for the mixer "Mackie compatible control", it is the first kind. You can use the standard Mackie surface plug-in in Sonar. In all other cases, including the second approach, you need a special solution (Studiomix works through its own special solution). For the Qu-16 (note the solution is old; I never had the device and so I can't check that it is still working): https://www.azslow.com/index.php/topic,178.0.html
Behringer (Midas) expose all parameters through OSC, so the second approach is theoretically possible with any mixer parameter. "Theoretically" because that was never tested with AZ Controller and probably needs modifications to support it (for X32/M32 I know which, for Wing and others I have not checked). Or you can use some other OSC to MIDI converter. In general, check the documentation of the particular DM to find what (if anything) it supports in terms of DAW controlling / MIDI / OSC. There are no standards. Each model is different, even from the same company.
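To illustrate the OSC side: a raw OSC message is just a null-padded address string, a type-tag string, and big-endian arguments. This is a minimal sketch of the encoding only; the `/ch/01/mix/fader` address follows the X32-style scheme mentioned above, and other mixers use different address spaces, so treat the address itself as an assumption.

```python
import struct

def osc_message(address, value):
    """Encode a single-float OSC message.

    Layout: address (null-terminated, padded to a multiple of 4 bytes),
    then the type-tag string ",f" (same padding rule),
    then the argument as a big-endian 32-bit float.
    """
    def pad(b):
        # OSC strings are null-terminated, then padded to 4-byte alignment
        return b + b"\x00" * (4 - len(b) % 4)
    return pad(address.encode("ascii")) + pad(b",f") + struct.pack(">f", value)

# Hypothetical example: set channel 1 fader on an X32-style mixer to 75%
packet = osc_message("/ch/01/mix/fader", 0.75)
```

Sent over UDP to the mixer's OSC port, a packet like this is all that "the second approach" exchanges; an OSC-to-MIDI converter just translates between such packets and MIDI messages.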
-
From that video, you are definitely trying to (re-)implement ACT using MIDI signalling, in a DAW which is not flexible with MIDI signalling. So, using some "advanced MIDI scripting" in particular plug-ins.
-
It seems like you are trying to implement ACT functionality without using ACT. I think you are working in the direction of one 3rd-party solution in another DAW. I will not risk publishing the link, but you can easily find what I mean by searching for helgoboss. He has several creations; the one in question has the word "Learn" in its name. I mean that is not the approach Cakewalk had in mind when ACT was developed, but using ACT for the purpose is simpler than using the chain of "MIDI tricks" you have applied.
-
BTW the last time I checked Touch Portal, it was not on par with TouchOSC in MIDI/OSC activity. And quickly checking now, it is still not...
-
I am not against nice panels 😏 But how have you achieved "Everything happens completely synchronously on the synth"? If you load another preset in Z3TA (or just modify something with the mouse), do all these controls reflect the related changes?

ACT / NKS / Automap (obsolete) / VIP (seems obsolete) do not work the "MIDI way", so they are bi-directional (by design). "MIDI learn" in a synth requires that the synth itself supports "send learned MIDI on changes" to be bi-directional. Long ago, when I naively thought "that should be a standard feature", I found that the synths I had tried do not support it. I have just rechecked with Z3TA+2: it does not do that by default and I have found no related options. Z3TA+2 doesn't send the MIDI of a learned control when I operate that control with the mouse (or load a new preset). But then synchronization is not possible. Even simple "catch" behavior is not supported there (some synths have that option). That is the reason I prefer ACT even for finite knobs: there I can tune how and when a physical control modifies the parameter (several approaches are possible).
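A minimal sketch of what "catch" behavior means (my own illustration, not any synth's actual code): the hardware knob only takes over the parameter once its position reaches or crosses the parameter's current value, so an out-of-sync knob cannot make the parameter jump.

```python
class CatchControl:
    """Finite knob with "catch" (takeover) behavior.

    All values are assumed to be on the same scale, e.g. MIDI 0-127.
    """

    def __init__(self, param_value):
        self.param = param_value   # current parameter value in the synth/DAW
        self.caught = False        # has the knob caught up with the parameter?
        self.last_hw = None        # previous hardware knob position

    def move(self, hw_value):
        """Feed a new knob position; return the (possibly unchanged) parameter."""
        if not self.caught:
            # Catch when the knob lands on, or sweeps across, the parameter value.
            if hw_value == self.param or (
                self.last_hw is not None
                and min(self.last_hw, hw_value) <= self.param <= max(self.last_hw, hw_value)
            ):
                self.caught = True
        self.last_hw = hw_value
        if self.caught:
            self.param = hw_value  # once caught, the knob drives the parameter
        return self.param

# Knob starts far from the parameter (64): first moves are ignored,
# then sweeping past 64 "catches" it and the knob takes control.
knob = CatchControl(64)
```

The same idea generalizes to the "several approaches" mentioned above (e.g. scaled takeover instead of hard catch), which is the kind of per-control tuning a surface plug-in can offer.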
-
If you have such a real device: And see that on the display (Mix mode): and then switch to (ACT mode)... and maybe to (PC mode)... For me that looks more usable than "Remote control". I will never remember which knob controls what there, and the positions are initially not in sync. I guess you would have no problem finding which knob you should turn on the BCR to change the CA-2A gain, even though you have never worked with it, right? And the value is always "in sync". I mean for a long session with Z3TA, "Remote control" (when connected to a real device) has advantages: finite knobs are more "playable" than encoders (at least for me). But for controlling the DAW in general, including mix/effects: no way...
-
By ACT I mean the approach, not the "ACT MIDI Controller" plug-in. That plug-in is in fact limiting, and its limitations triggered me to write my own ACT plug-in, I mean AZ Controller (https://www.azslow.com/). Standard Mackie (and all the others) are also ACT plug-ins. Creating your own virtual surface to make a project one single "instrument" under its control, so "Remote control" the same way as "MIDI Learn" in soft-synths, is a valid approach. But that is different from generic DAW control, I mean using one device with arbitrary projects. At the beginning I was creating project-specific presets; the major problem was remembering what the controls do in a particular project... With one preset which always works the same way there is no such problem (I guess that is the reason for the popularity of the MCU and alike). What also helps are actual labels for the controls (another MCU/C4 and alike advantage). MIDI does not allow the latter; you need OSC (e.g. https://www.azslow.com/index.php/topic,295.0.html) or at least a window with such labels (which is what the "ACT MIDI" GUI does... well, not perfectly...).
-
"Remote control" was superseded by another approach a long time ago; in Cakewalk it is called "ACT". (N)RPN is a protocol which allows way more parameters and better accuracy than standard MIDI CCs. Yes, technically it is using the CCs you have mentioned. But logically it works as "a knob on my controller sends NRPN 20". So you specify "20" in the configuration (and hope the controller and Sonar agree on how to send/receive NRPNs; they usually do). In the case of motorized faders, LEDs, and in most cases encoders (the CME VX-5+ has motorized faders and encoders), there is no "generic" definition of how that should work. MIDI by itself was oriented toward finite knobs/sliders without any feedback. So the solution has to be "complicated". That doesn't mean it should be difficult to use; you can normally switch the device into Mackie emulation mode and use the corresponding plug-in. But when you switch to "custom solutions", in case you don't like what the Mackie emulation does, the setup is more complicated. It still can be done in 1-2 days. Only when you want to DIY everything yourself (write your own controlling logic in a language supported by the DAW, in the case of Cakewalk that is C++), it can take a while... But as you probably know, I have proved even that is possible in less than 15 years
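The "logically one NRPN, technically several CCs" point can be sketched like this (my own illustration, not Sonar's or any particular controller's code): a single 14-bit NRPN update is conventionally sent as four ordinary Control Change messages, where CC 99/98 select the parameter number and CC 6/38 carry the value.

```python
def nrpn_messages(channel, parameter, value):
    """Return the four 3-byte MIDI CC messages for one NRPN update.

    channel: 0-15; parameter and value: 0-16383 (14-bit, split into two 7-bit halves).
    """
    status = 0xB0 | channel               # Control Change status byte for this channel
    return [
        (status, 99, parameter >> 7),     # CC 99: NRPN MSB (parameter high 7 bits)
        (status, 98, parameter & 0x7F),   # CC 98: NRPN LSB (parameter low 7 bits)
        (status, 6,  value >> 7),         # CC 6:  Data Entry MSB (value high 7 bits)
        (status, 38, value & 0x7F),       # CC 38: Data Entry LSB (value low 7 bits)
    ]

# "A knob on my controller sends NRPN 20": one knob move at value 10000
msgs = nrpn_messages(0, 20, 10000)
```

So on the wire four CC messages go out, but the controller configuration and the DAW both treat them as one parameter ("20") with one 14-bit value, which is why you only enter "20" in the setup.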
-
I am glad I could help. When something is working fine, after a while I also forget what was required to make it work
-
You need to load the Faderport 2 preset into AZ Controller. In the top left corner, after "Presets:", you should see "AZ Faderport V2 0.5" (or with 0.0 at the end, depending on which preset you want). In the screenshot that field is blank, and there is also some "Detached 0" control; there is no such control in the mentioned preset. In case you have forgotten how to get it: download it from https://www.azslow.com/index.php/topic,444.0.html (attachment to the second post for 0.5), import it from Utilities/Cakewalk Plug-in Manager, and then you can select it. Select it while some project is loaded. Save the project. Close Sonar. It seems that this way the selected preset will be preserved (there is some misunderstanding between Sonar and AZ Controller about when a preset is modified and should be saved; AZ Controller was written at the time when Sonar was always saving the preset...).
-
First find what is not working. Is your latest screenshot from where it doesn't work? The screenshot looks OK to me, FP2 is found and assigned to AZ Controller.
- If AZ Controller does not "appear" as a surface, even though Sonar shows the MIDI ports and the assignment: that is a Sonar or hardware (including software loopers) issue; Sonar "thinks" the device is not there (can't open it, etc.).
- If AZ Controller appears as a surface, open it and check "Last MIDI event". When you press/move something on the device, do you see new messages coming?
- If yes, select the FP2 preset. If it still doesn't work, press the stop (square) button and let me know what you see in "Last MIDI event".
- If no (no messages are coming), recheck the "Record" settings (your previous screenshot was fine, but something could have changed in between). If everything is fine there, Sonar has probably messed up your MIDI devices. Delete its TTSSEQ.ini (the correct one; if you have CbB / Splat / etc. installed, each has its own). In Windows Device Manager, show hidden devices and "clean up" the duplicates (ghosts).
- If you can't get what you show on the latest screenshot: delete TTSSEQ.INI and clean up ghost MIDI devices in Windows Device Manager. Recheck that you use the MME driver.
Unlike other surface plug-ins, AZ Controller shows what it is receiving, so it is easy to check what is going on. But Sonar has to instantiate it first and assign the proper MIDI ports. Apart from locked ports (the "no memory" error; Microsoft has assigned the same error code to a real "no memory" and to "MIDI device is in use"...), there is a probability Sonar can still mess with MIDI devices (till Mark has managed to get it under control). As I have mentioned, every product has its own settings file; it can happen that CbB has managed to create a "stable working" configuration for FP2.
-
Everything is right, except that the word "VST" should be removed; that is true for all types of plug-ins. In particular, all Sonitus effects are DX, not VST.

Using the top-bar preset management is the better option as long as you stay with Cakewalk. You can review / export / import these presets centrally from the Cakewalk Plug-in Manager (Utilities menu). To be on the safe side, back up from there (I mean don't try to find and back up the related existing files).

If you need to save a preset which should work in another DAW, it is better to use the plug-in's own preset management. That is plug-in specific (there is no standard), but presets saved by Cakewalk are not usable in other DAWs.

For VST plug-ins (so, not for Sonitus) there is yet another option for saving presets, also available from the top bar. You can save presets in the standard formats defined by the VST developer (Steinberg). They are (supposed to be) usable in any DAW. In practice, there is no guarantee that will work.

Also note that even in case you save a preset using one of these 3 methods, it can happen that it will not work on a different computer or after a full re-installation. Better to have a full system backup. And when a computer change is planned, check that all important projects play there as desired, before abandoning (giving away/reformatting/etc.) the old one.

PS. There is still no universal, user-friendly and standard preset system. Major players have tried to introduce such systems, but somehow they work inside their "own world" only. And so in most cases there are 3 formats: plug-in type specific (VST2, VST3, etc.), host specific and plug-in specific. A host can be a DAW (e.g. Sonar) or a plug-in wrapper (e.g. NI NKS). So, depending on the method you use to instantiate plug-ins, you can have presets in more than 3 formats (even when you are working in one DAW).
-
Presonus has a little bit "abused" the label: long ago there was the "FaderPort", known as such, a single channel controller. At some point they made a new "FaderPort", another single channel controller. It is incompatible with the first one. But instead of using a distinguishable name for the new device, they call the old one "FaderPort Classic" and the new one "FaderPort". Shops don't like such confusion, so they normally call it the "FaderPort V2". And then they made the "FaderPort 8/16", completely different multi-channel controllers. So in all these names "FaderPort" is a label for all Presonus controllers, not a name for a particular device (as most companies use names). PS. Behringer does the same with the "X-Touch" label. The "X-Touch Mini", "X-Touch Compact", "X-Touch" and "X-Touch One" are 4 different controllers, with different controls and functions (not "newer versions" nor "extended/limited" ones, as people may think based on the common name). PS.PS. For AZ Controller there is no separate preset for the FaderPort 8. But the "Mackie" preset should work. Even so, I don't recommend it unless the original Cakewalk Mackie limits you and you want better matching to the device functionality (unlike the "X-Touch", the "FaderPort 8" doesn't have an exact Mackie layout). For the FaderPort 16 I don't have a ready-to-use preset at all (the Mackie preset has to be extended using a "slave" instance connected to the second pair of ports, but that is not an easy task...).
-
From what I remember, the "Mac version" came later... But while googling I have found a tutorial for installing CbB under CrossOver, so maybe I will give it another try with Wine (I still support my utilities and I was always developing under Linux, and I was stuck with X2 all that time):
-
The last version which works without problems with Wine under Linux is Sonar X2. Sonar X3 can run with some tuning. I have not managed to run SPlat and I have never tried CbB.
-
That is the reason bus-powered interfaces have a limited number of I/O channels, especially pre-amps. But these people know what they do (or at least they should) and they obey the USB standards. I mean, a good device's specification on paper matches reality. For the rest: the power on a USB socket of a particular PC/notebook can be unstable/low/etc. Then you have a "bad power supply". But an external/built-in power supply can also be "bad". So listing possible consequences is like concluding "all electrically powered devices are bad since the power supply can be bad/broken, I refuse to play e-guitar to avoid such troubles" 😏 I mean there are several good reasons to prefer separately powered interfaces, but "all bus-powered interfaces are bad" is not one of them.