
azslow3


  1. You need to check that Focusrite Loopback is enabled. In Windows 11: 1) right-click the speaker icon and select "Sound settings" ("Soundeinstellungen"); 2) in the panel that appears, select "More sound settings" ("Weitere Soundeinstellungen"); 3) in the next panel open the "Recording" tab ("Aufnahme") and look for something like "Focusrite feedback". If it is not there, right-click any device and select "Show disabled devices" ("Deaktivierte Geräte anzeigen"). Then right-click the Focusrite feedback device (I guess it should be there) and select "Enable" ("Aktivieren"), obviously only if it is disabled. 4) You should now be able to select it as input in Audacity and other programs, and hopefully it can record Sonar then (a quick device listing to confirm it is visible is sketched below). PS. You could already record it in Sonar because it is in the set of ASIO input channels, which is unrelated to the Windows settings.
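If it helps to double-check step 4, here is a minimal sketch, assuming Python with the third-party sounddevice package installed, that lists every recording-capable device Windows exposes; once the loopback is enabled, a "Focusrite feedback"-like entry should show up (the exact device name varies per driver version and is an assumption here).

    # List recording-capable devices; the Focusrite loopback should appear here
    # once it is enabled in the Windows "Recording" tab.
    import sounddevice as sd

    for idx, dev in enumerate(sd.query_devices()):
        if dev["max_input_channels"] > 0:
            api = sd.query_hostapis(dev["hostapi"])["name"]
            print(f"{idx}: {dev['name']} [{api}] - "
                  f"{dev['max_input_channels']} in @ {dev['default_samplerate']:.0f} Hz")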
  2. Have you followed all the steps in my last link? I mean "Expose / Hide Windows Channels". The "Lautsprecher (loopback)" input in Audacity is not the Focusrite loopback input; it is a WASAPI output, the feature I mentioned before: it can loopback-record the system and apps which use WASAPI, but not ASIO. As a check, you can use the steps described later in that link and set the input of some program in the Windows audio settings to "Loopback".
  3. It could be that Will is right and you can, at least there is one report of success: But he doesn't mention whether he was using ASIO in Cakewalk. I suggest you try. If that doesn't work, try disabling the Loopback input in Cakewalk and try again. For sure that is not going to work with two ASIO applications, they can't run in parallel with Focusrite.
  4. WASAPI does not require anything extra to record any WASAPI output (i.e. loopback, e.g. system sounds), except that the recording application has to be able to work that way (to my knowledge Sonar can't, Audacity can). But that does not work with ASIO outputs (at least in my tests). A real interface loopback logically should support all directions, but it was primarily intended to record something (not ASIO) into a DAW (ASIO), not the other way around. At the moment I don't have a Focusrite to check.
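As an illustration of the WASAPI loopback idea (the software one, not the hardware loopback), here is a rough sketch using the third-party Python packages soundcard and soundfile; it records whatever WASAPI applications are playing on the default output. Treat the device lookup and sample rate as assumptions that may need adjusting, and remember that ASIO output will not be captured this way.

    # Capture 5 seconds of the default Windows output via WASAPI loopback.
    import soundcard as sc
    import soundfile as sf

    speaker = sc.default_speaker()
    loopback = sc.get_microphone(speaker.name, include_loopback=True)

    with loopback.recorder(samplerate=48000) as rec:
        data = rec.record(numframes=48000 * 5)   # float32 frames

    sf.write("wasapi_loopback.wav", data, 48000)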
  5. These apps are not (only) for on-board sound and they are not in the category of "generic drivers"; they allow flexible audio routing in case the hardware interface and the OS's own features are insufficient. Generic drivers can disturb dedicated ASIO drivers, so it is recommended to remove them as part of troubleshooting when there is a problem, and to keep them removed when they are not needed, so these problems don't reappear. That is good advice for anything unused (not only for audio processing). But when you explicitly use something for a good reason, and that can be a generic driver, it is fine. If it works. In this thread there was no discussion about any "problems with audio"; it is a "how to" thread.
  6. From the second OP post, I concluded he has an "ASIO audio interface with loopback feature" and knows about that feature... That is "the answer" in case one loopback is sufficient; from what I remember, the Focusrite has exactly one. But in many situations you need more than one. Even in the case of, say, Zoom, you need (a) a mix sent to Zoom and (b) an input from Zoom. If locally you have just one mic and don't want to send anything else, a single loopback can work. In all other cases you have a problem, e.g. if you have two local mics and want to record them separately, or want to also send the sound from your computer. If you have several programs which want ASIO, you need a multi-client ASIO capable interface; most are not. If you want to record audio from one program into another, but at the same time record the whole process, you need a recorder which can do that (e.g. OBS) or multiple loopbacks. A real multi-client ASIO interface with multi-loopback support is "the answer" in almost all cases, but they are not cheap 😏 A virtual multi-client ASIO interface with multi-loopback is also "the answer" in almost all cases. And that is what the Voicemeeter programs are. The only problem is that they are not stable...
  7. Technically MOTU AVB and other audio interfaces with internal mixing capabilities are "digital mixers". Dedicated digital mixers also convert everything to digital and do the rest in a built-in computer. And since the audio is already in digital form, it is "natural" to send it to the computer when required, so they also function as an audio interface. Some mixers are stage oriented and have no audio interface, but most do (or offer it as an option). The preamps in some (cheaper) stage-oriented mixers have lower quality. Digital mixers used as audio interfaces generally have higher latency than usual audio interfaces; they are not designed for "through DAW" monitoring since they have "sufficient" effects on board. If live control without a computer and built-in effects are not required, there is no reason to switch to a DM. DAW control is simpler and better with a dedicated control surface. Sonar doesn't have a stock surface plug-in with OSC, and the stock plug-ins are not flexible in general. (Any) VST can be controlled by a real MCU and its clones, but DM emulations may have insufficient buttons to switch into that mode (and/or to select the plug-in to control). AZ Controller can be configured to do anything possible in Sonar (from a Control Surface); the problem is that someone has to configure it (and that can be challenging). If you just want to control Volume/Pan/Mute and switch to controlling plug-ins (e.g. an EQ) with the same controls, that is relatively simple with AZ Controller (also possible with the Cakewalk Generic Surface or Cakewalk ACT MIDI, but for a limited number of controls and without feedback). BTW you can start with TouchDAW / TouchOSC on a tablet. Both work with the stock Cakewalk plug-ins and AZ Controller and cost almost nothing. The functionality of TouchDAW is exactly the same as with a real MCU (a sketch of what MCU messages look like is below). TouchOSC can be configured to work as any surface. The only difference from real devices is that there are no hardware controls.
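For the curious, this is roughly what a Mackie-Control-style fader move looks like on the wire, sketched with the Python mido package; the port name is an assumption, any real or virtual MIDI port would do. TouchDAW sends the same kind of messages as a real MCU.

    # MCU faders are 14-bit values transported as Pitch Bend,
    # one MIDI channel per channel strip.
    import mido

    with mido.open_output("loopMIDI Port") as out:          # assumed port name
        fader_14bit = 12000                                  # 0..16383
        out.send(mido.Message("pitchwheel",
                              channel=0,                     # strip 1
                              pitch=fader_14bit - 8192))     # mido range -8192..8191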
  8. Only analog mixers work "without any latency" (really still with some, but it is tiny); any digital mixer has latency, and any software mixer has significant latency. But that is not my point... The Voicemeeter programs are software mixers. You need Voicemeeter/Banana/Potato depending on the number of separate buses you want. You need a separate bus or separate channels for each application (Windows also counts as "an application"). With plain Voicemeeter you can "split by channels"; if you prefer separate buses, use Banana/Potato. Then sequentially configure what goes where, starting from the inputs (if you record from Sonar, that is "an input" for Voicemeeter). But sometimes it doesn't work... E.g. different sampling rates are asking for trouble (a quick check is sketched below), keeping devices in use while re-configuring them is asking for trouble, changing the audio hardware interface settings while Voicemeeter is running is asking for trouble, using software which directly accesses the audio interface is asking for trouble, etc. And periodically it just stops working normally for no reason (including distorting the audio... rather annoying if you were recording an interview and everything you could hear live was fine). It can also simply get "stuck", not (always) showing errors; it just stops working as configured until restarted. This approach is OK if you do the setup once for a hobby activity. For anything more serious or for everyday use, consider a small external mixer with a second audio interface (the built-in Realtek counts as such) as the cheap solution, or a (half-)hardware digital mixer/interface as the comfortable one (with RME you also get really small latency and good preamps; even the Babyface has 12 mono / 6 stereo mixes you can record, composed from 4 hardware inputs, 6 stereo outputs from (different) software and other mixes, with the first two stereo mixes going to monitors and headphones. Well, it costs way more than a small analog mixer...). Podcast devices are somewhere in between in price, have convenient hardware controls and some have a built-in recorder (for "just in case..."), but they are less flexible for "inside the computer" mixing.
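One quick sanity check for the sample-rate pitfall mentioned above: list the default rates of all devices before wiring them into Voicemeeter. A small sketch assuming Python with the sounddevice package; the Windows sound control panel shows the same information per device.

    # Group devices by their default sample rate to spot mismatches.
    import sounddevice as sd

    rates = {}
    for dev in sd.query_devices():
        rates.setdefault(int(dev["default_samplerate"]), []).append(dev["name"])

    for rate, names in sorted(rates.items()):
        print(f"{rate} Hz: {', '.join(names)}")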
  9. Yes, "a little tricky, but not impossible". Not really "a little", depending on the mixer and what you want. Some DMs can send MIDI, or something comparable to it (OSC), or convertible to it (proprietary daemons) when you operate their physical controls (or change the corresponding parameters by other means, e.g. using a tablet app to control the mixer). The purpose of that capability can be: 1) explicit DAW (or other MIDI device) control; in this case the control/parameter on the mixer does not change anything on the mixer itself; 2) syncing one mixer to another (e.g. old Yamaha) or controlling the mixer from outside (A&H, Behringer, etc.); in this case the corresponding mixer parameters are changed. When a mixer advertises "DAW control", that is of the first kind, as a special control layout on the mixer. Normally it is limited, just strip volumes/mutes plus transport, and even that can be limited to 8 channels only (even on devices with 16-24 strips). With the second approach you "sync" parameters between the DAW and the mixer, e.g. the HP Freq of the internal mixer EQ and the HP Freq of an EQ plug-in. Obviously you can't use that mixer channel "normally" then, since its EQ will depend on the DAW EQ (but that can be used in the opposite direction: you control the hardware EQ using parameters from the software EQ). If the documentation for the mixer says "Mackie compatible control", it is the first kind and you can use the standard Mackie surface plug-in in Sonar. In all other cases, including the second approach, you need a special solution (Studiomix works through its own special solution). For the Qu-16 (note the solution is old, I never had the device, so I can't check it still works): https://www.azslow.com/index.php/topic,178.0.html Behringer (Midas) expose all parameters through OSC, so the second approach is theoretically possible with any mixer parameter. "Theoretically" because that was never tested with AZ Controller and probably needs modifications to support it (for X32/M32 I know which, for Wing and others I have not checked). Or you can use some other OSC to MIDI converter (a minimal sketch of one is below). In general, check the documentation of the particular DM to find what, if anything, it supports in terms of DAW control / MIDI / OSC. There are no standards; each model is different, even from the same company.
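To give an idea what such an OSC to MIDI converter boils down to, here is a minimal sketch using the Python packages python-osc and mido. The /ch/01/mix/fader address follows the X32 convention; the MIDI port name, listening port and CC number are assumptions for a local test, and a real X32 also has to be asked (via /xremote) to keep sending parameter updates.

    # Forward an X32-style fader message (float 0.0..1.0) as MIDI CC 7.
    import mido
    from pythonosc.dispatcher import Dispatcher
    from pythonosc.osc_server import BlockingOSCUDPServer

    midi_out = mido.open_output("loopMIDI Port")             # assumed virtual port

    def fader_to_cc(address, value):
        midi_out.send(mido.Message("control_change", channel=0,
                                   control=7, value=int(value * 127)))

    dispatcher = Dispatcher()
    dispatcher.map("/ch/01/mix/fader", fader_to_cc)

    BlockingOSCUDPServer(("0.0.0.0", 9001), dispatcher).serve_forever()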
  10. From that video, you are definitely trying to (re-)implement ACT using MIDI signalling, in a DAW which is not flexible with MIDI signalling, so using some "advanced MIDI scripting" in particular plug-ins.
  11. It seems like you are trying to implement ACT functionality without using ACT. I think you are working in the direction of one third-party solution in another DAW. I will not risk publishing the link, but you can easily find what I mean by searching for helgoboss. He has several creations; the one in question has the word "Learn" in its name. I mean that is not the approach Cakewalk had in mind when ACT was developed, but using ACT for this purpose is simpler than the chain of "MIDI tricks" you have applied.
  12. BTW the last time I checked Touch Portal, it was not on par with TouchOSC in MIDI/OSC functionality. And quickly checking now, it still is not...
  13. I am not against nice panels 😏 But how have you achieved "Everything happens completely synchronously on the synth"? If you load another preset in Z3TA (or just modify something with the mouse), do all these controls reflect the related changes? ACT / NKS / Automap (obsolete) / VIP (seems to be obsolete) do not work the "MIDI way", so they are bi-directional (by design). "MIDI learn" in a synth requires that the synth itself supports "send learned MIDI on changes" to be bi-directional. Long ago, when I naively thought "that should be a standard feature", I found that the synths I tried did not support it. I have just rechecked with Z3TA+2: it does not do that by default and I have found no related options. A MIDI-learned control in Z3TA+2 does not send that MIDI when I operate the control with the mouse (or load a new preset). So synchronization is not possible, and even simple "catch" behavior is not supported there (some synths have that option; a small sketch of the idea is below). That is the reason I prefer ACT even for finite knobs: there I can tune how and when a physical control modifies the parameter (several approaches are possible).
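For reference, the "catch" (soft takeover) behavior mentioned above is easy to describe in code: the hardware knob has no effect until its movement crosses the parameter's current value, which avoids sudden jumps. A small self-contained sketch, all names hypothetical:

    class CatchControl:
        """Hardware knob only takes over once it crosses the parameter value."""
        def __init__(self, param_value):
            self.param_value = param_value   # current plug-in parameter, 0.0..1.0
            self.caught = False
            self.last_hw = None

        def on_hardware_move(self, hw_value):
            if not self.caught and self.last_hw is not None:
                lo, hi = sorted((self.last_hw, hw_value))
                if lo <= self.param_value <= hi:   # knob crossed the parameter
                    self.caught = True
            self.last_hw = hw_value
            if self.caught:
                self.param_value = hw_value
            return self.param_value

    knob = CatchControl(0.8)
    for v in (0.1, 0.4, 0.7, 0.85, 0.9):   # prints 0.8 0.8 0.8 0.85 0.9
        print(knob.on_hardware_move(v))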
  14. If you have such a real device: And see that on its display (mix mode): and then switch to (ACT mode)... and maybe to (PC mode)... For me that looks more usable than "Remote control". There I will never remember which knob controls what, and the positions are initially not in sync. I guess you have no problem finding which knob to turn on the BCR to change the CA-2A gain, even though you have never worked with it, right? And the value is always "in sync". I mean for a long session with Z3TA, "Remote control" (when connected to a real device) has advantages - finite knobs are more "playable" than encoders (at least for me). But for controlling the DAW in general, including mix/effects - no way...
  15. By ACT I mean the approach, not the "ACT MIDI Controller" plug-in. That plug-in is in fact limiting, and its limitations triggered me to write my own ACT plug-in, AZ Controller (https://www.azslow.com/). The standard Mackie plug-in (and all the others) is also an ACT plug-in. Creating your own virtual surface to make a project one single "instrument" under its control, so that "Remote control" works the same way as "MIDI Learn" in soft-synths, is a valid approach. But that is different from generic DAW control, I mean using one device with arbitrary projects. At the beginning I was creating project-specific presets; the major problem was remembering what the controls do in a particular project... With one preset which always works the same way there is no such problem (I guess that is the reason for the popularity of the MCU and the like). What also helps are actual labels for the controls (another advantage of the MCU/C4 and the like). MIDI does not allow the latter; you need OSC (e.g. https://www.azslow.com/index.php/topic,295.0.html; a tiny example of sending a label over OSC is below) or at least a window with such labels (which is what the "ACT MIDI" GUI does... well, not perfectly...).
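A tiny illustration of the OSC label point: unlike MIDI CCs, OSC messages can carry strings, so a surface (e.g. a TouchOSC layout) can be told what a control is currently mapped to. A sketch with the Python python-osc package; the IP, port and element addresses are assumptions for a hypothetical layout.

    from pythonosc.udp_client import SimpleUDPClient

    client = SimpleUDPClient("192.168.1.20", 9000)    # assumed tablet IP / port
    client.send_message("/label1", "CA-2A Gain")      # text for a label element
    client.send_message("/fader1", 0.62)              # current value of the paired fader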