
Posts posted by Emanu3le85

  1. 17 minutes ago, Wookiee said:

    You could try IK Multimedia ARC Studio. I was sceptical, but I'm quite impressed with the subtle but useful change; it's probably cheaper than doing room treatment properly. My room treatment cost much more than the 299.99 Euro price of ARC Studio.

    I'm a bit skeptical too. I've heard a lot of talk about DSP acoustic treatment; who knows, maybe in the future... Thanks for your advice.

  2. 20 minutes ago, Wookiee said:

    All the frequencies are present and balanced. I am listening on my Adam A7Xs, in a semi-treated room, through IK Multimedia ARC Studio room correction software and hardware box.

    You hear it well perhaps because we are using almost the same speakers; I use the Adam A7V 😝 I'm joking.

    It's not bad, but I feel I can do better, perhaps by treating the room better acoustically.

  3. 17 minutes ago, Wookiee said:

    I can hear this being used by DJ's in a set, seems the sort of thing they like.

    Mix sounds reasonably well balanced.

    Hello Wookiee, I had the stem mastering done in a studio, but personally I don't find it translatable to all devices: played on a large system it sounds reasonably good, but on small or mobile speakers the high part of the spectrum sounds rather distant and closed. Thanks for listening.

    (Google translate)

  4. 1 hour ago, David Baay said:

    I don't get latency with your project, but I'm missing some plugins. The first suspect I see is:

    LoudMax - Per the website: "LoudMax is a Look-Ahead Brickwall Loudness Maximizer". It's in an FX Chain in the Prochannel on the DnB bus.

    Other suspects would be one of the several reverbs that might use convolution.

    And, as mentioned previously, I don't have any of the Kilohearts stuff.

    In general, I would not recommend using such an intensely "pre-populated" template with such convoluted routing and so many plugins.

     

    I didn't know about the LoudMax issue, thanks. The template is complex because it is the skeleton of my songs; I already have everything there. I will try to eliminate that limiter. The parallel reverbs are very useful for me, so I don't waste time building everything every time. I will try to find the culprit.
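As background on why a look-ahead limiter like the one quoted above adds latency: the host must delay every parallel path by the plugin's look-ahead window (Plugin Delay Compensation). A minimal sketch of that arithmetic, where the 10 ms look-ahead figure is a hypothetical example rather than LoudMax's actual value:

```python
def pdc_delay_samples(lookahead_ms: float, sample_rate: int = 48000) -> int:
    """Samples of delay the host must add elsewhere to compensate
    for a plugin that looks ahead by `lookahead_ms` milliseconds."""
    return round(lookahead_ms * sample_rate / 1000)

# Hypothetical 10 ms look-ahead at 48 kHz:
print(pdc_delay_samples(10))  # 480 samples on every compensated path
```

This is why bypassing or removing such a plugin is the usual first test when hunting PDC-induced lag.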

  5. 3 hours ago, David Baay said:

    Processing load alone will not increase latency. Latency will remain the same as you start getting more and more crackles from late buffers until eventually the audio engine drops out. I posted early on about Plugin Delay Compensation and that is still the most likely cause, but it doesn't seem you've thoroughly investigated that by removing/bypassing plugins or archiving tracks.

    I've been trying for at least a month to understand what that latency is: eliminating all the plugins, bypassing the FX, turning off the Input Echo, changing the buffer settings. But with the template, the latency of the recorded automations doesn't go away; the only solution seems to be opening an empty template. I don't know what I've done; as soon as I can, I'll attach the screenshots of my MIDI/instrument template.

  6. 1 hour ago, 57Gregy said:

    If there are no audio tracks or software synthesizers in a project, you can use the MIDI metronome. As soon as you add audio, it will begin to use the audio metronome to try to keep things in sync.
    In Preferences, you can select to use the audio metronome all the time if you want.

     

    metprefs.png

    I have more than 30 tracks of just virtual synths and 3 drum machines. I don't work much with audio, other than some tweaks to my recorded takes, but sometimes I use freeze. What interests me is having a responsive MIDI/instrument project. However, I had never seen the audio metronome before; now it is set to "metronome" by itself. Is that okay?

  7. 1 hour ago, Xoo said:

    Audio metronome enabled?

    I think I still haven't understood what the audio metronome is.

    59 minutes ago, msmcleod said:

    Sounds like you've got some lookahead plugins.  Mastering/Linear Phase plugins will cause this.  They're designed to be used at mixing/mastering time and should be avoided at tracking time.

     

    Do you think it's some plugin? Maybe it could be, but I still think it's the fact that the project has to calculate more than 30 instrument channels connected to the aux with Input Echo activated each time; if the project is empty I have no latency in recording.

  8. 53 minutes ago, msmcleod said:

    FYI - regarding latency with automation...  for VST2 plugins, the automation resolution is restricted by the current buffer size.   So for a buffer size of 1024, you may experience up to a 1 second lag in picking up automation (unless your automation happens to land exactly on the buffer boundary).  Reducing your buffer size will help to mitigate this.

    VST3 doesn't have this limitation.

    Unfortunately, the latency of the automations does not go into sync even when decreasing the buffer, and it is more than 2-3 seconds. But I think it is a problem with my new MIDI/Audio template; each instrument channel is connected to an aux with an audio return. In fact, if I open an empty project I have no latency when recording automations.
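For context on the buffer-boundary point quoted above: if VST2 automation is only picked up once per buffer, the worst-case lag is one buffer period. A quick sketch of that relationship, assuming the 48 kHz sample rate mentioned elsewhere in the thread:

```python
def buffer_period_ms(buffer_size: int, sample_rate: int = 48000) -> float:
    """Duration of one audio buffer in milliseconds: the worst-case
    quantization lag for per-buffer (VST2) automation."""
    return buffer_size / sample_rate * 1000

for size in (1024, 512, 128):
    print(f"{size:4d} samples -> {buffer_period_ms(size):.1f} ms")
```

Note that even a 1024-sample buffer is only on the order of 20 ms, so an automation lag measured in seconds points to something other than buffer quantization, consistent with suspecting the template's routing.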

  9. 15 hours ago, msmcleod said:

    MIDI latency was an issue that gave me grief for years, until I worked out what was going on.

    In Preferences->Project->Clock, you can choose your clock source.  It defaults to Audio, which is by far the most accurate and reliable.

    The only issue is, you need the audio engine to be running for this to work.  If you've a MIDI only project (like I used when I used racks of MIDI synth modules), then this isn't going to work unless you have audio or a soft-synth in your project.  If you don't, then the timer essentially isn't running, and MIDI timing is all over the place.

    If you're recording only MIDI, you could set the clock to "Internal" - this will work for MIDI only projects, but its accuracy can then depend on whether the high-resolution timer is enabled both on your motherboard and in Windows, and how well they play together.  Sometimes enabling the high-resolution timer can make things better, sometimes worse, and in many cases can cause a whole bunch of other system instabilities/blue screens if it's set wrong.... best not to play with it. Really.   Feel free to try the "Internal" clock setting - if it works right away, great - but don't mess around with high-res-timer stuff unless you really know what you're doing and know how to revert it.



    By far the easiest solution is to leave it set to Audio, then insert a soft synth or an audio track with a silent clip, and do this before you start recording any MIDI. 

     

    Thanks for the advice. I have tried many times to set the clock to Internal, but it automatically goes back to Audio. I have significantly improved the situation by enabling the read cache in "File System" during both playback and recording, and I lowered the buffer size to 128 samples; the computer seems to hold up.

    With the old PC I would only get glitches with these settings.

  10. On 4/25/2024 at 3:57 PM, John Vere said:

    My conclusion is that MOTU is a Mac-oriented company and doesn't put time into developing its audio drivers and software for Windows.

    On my older computer, which was W10, I had to use a 512 buffer because of glitches. On my new W11 machine there is a slight performance improvement and I can use 256. My projects are ridiculously simple. I don't go near any CPU-hungry plug-ins.

    My most complicated project will be on the edge of glitching using the MOTU as the interface @ 256, but if I switch to my Zoom L8 I can lower my buffers to 128 and even 64. Same computer, same projects.

    I took some screenshots so you can compare the settings. Pay attention to the Sync and Caching box and make sure the MOTU shows up there. Sometimes invasive drivers show up there and you have to delete them in RegEdit.

    Also make sure you have the latest ASIO driver from the MOTU website.

     

    Schermata (701).png

    Schermata (699).png

    Schermata (700).png

    It seems that I solved a lot by activating the read cache in "File System" during playback and recording, and also by lowering the buffer to 128 samples. Now when I record, the audio clip is perfectly in sync, but there is still a lot of latency in the MIDI automations during recording. Maybe when I created my new template I exaggerated with the aux channels; in fact, the problem doesn't exist if I start a new empty file.

  11. 12 minutes ago, David Baay said:

    Most people would find the latency with that buffer size troubling. I generally run a 64-sample buffer while recording, and never more than 128, especially when input-monitoring  hardware synths. But you mentioned sync problems on the order of "a few seconds" which is a lot even for a PDC problem. The only thing I know of that has ever caused that much of a problem is some interface drivers not playing nice with Metronome Count-in enabled. Try recording without it. If it makes a difference and you have not had this issue before, a reboot might cure it. Also make sure to zero out any Timing Offset you might have tried in Sync and Caching (not to be confused with the Manual Offset for audio record latency compensation).

     

    I've been testing for about a year with two laptops, a consumer Asus and an MSI Katana gaming laptop, but nothing has changed. Still using the MOTU M4, I have always seen the same latencies on every PC on every occasion.

    At this point, if you tell me that a 512 buffer is too much for most people, I believe the sound card has problems.

    I'll stop with the posts about latency, but I haven't understood the solution. Thank you all the same.

  12. 22 minutes ago, David Baay said:

    Since you have several posts that don't mention this problem, I assume it's a new and/or project-specific issue...? Are you still using the MOTU M4 for both audio (presumably in ASIO mode) and MIDI input, or are you using a USB MIDI keyboard? What buffer size and what input/output/round-trip latency is reported in CbB? What plugins have been added to the project recently, and where? If it's due to a PDC-inducing plug-in, enabling the PDC Override button in the Mix module will only work if the offending plug-in is not on the track you're monitoring or on a bus in its path to the output. Bypassing all FX via the button in the Mix module should eliminate PDC, but you may need to toggle playback to reset it.

    Yes, I have several posts on latency, but each one is specific to its own problem. I always use the MOTU M4, and my MIDI keyboard is a Behringer U-Control.

    Asio

    sample rate 48000 Hz, 24-bit, buffer 512

    (file system) enable read/write cache "disabled"

    I/O buffer 256 playback/256 rec

    recording latency reported in CbB = 1286 samples (even moving the offset manually does not change anything in the recording)

    I have the "remove DC offset during recording" checkbox enabled

    I don't have any particularly CPU-heavy plugins
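For reference, the reported latency can be converted to time (a sketch using the sample rate stated above; reading 1286 samples as roughly two 512-sample buffers plus driver safety offsets is an assumption, not a documented MOTU figure):

```python
def samples_to_ms(samples: int, sample_rate: int = 48000) -> float:
    """Convert a latency reported in samples to milliseconds."""
    return samples / sample_rate * 1000

# Recording latency reported in CbB, per the settings above:
print(f"{samples_to_ms(1286):.1f} ms")  # about 26.8 ms
```

A figure in this range is normal for a 512-sample buffer, which again suggests the multi-second automation lag comes from somewhere else.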

  13. I have different latencies:

    when I record, all the automations start very late, which does not happen in playback;

    when I record audio from an internal VST or an external synth, the recording starts a few seconds early.

    LatencyMon does not detect any latency problems. I tried deactivating all the FX, activating DC compensation, and changing the buffer size, but nothing seems to change.

    What else?

  14. 12 hours ago, sjoens said:

    I found this out when comparing an older mix done thru a Behringer interface and a new mix done thru a Mackie Onyx.  The Behringer sounded better to me and there was no way to make the Onyx sound the same.  At least in the same price group it's all subjective I guess.

    I also tried passing the signal from the sound card through my Mackie VLZ3 (made in America), but I didn't notice any difference compared to the internal processing of the signal; even raising the gain of the mixer channel didn't change anything. I think that to obtain a certain type of sound you need certain preamps, like the Neve 5059 or the 8816 summing mixer, but they are expensive.

  15. 3 hours ago, John Vere said:

     

     

     

    OK, but if I use VSTs I would have to leave the sound card to process analogically and then come back in, and this would add a lot of latency during songwriting. However, I understood what you mean, and it is in fact the only real solution for analogue warmth.

  16. 11 minutes ago, msmcleod said:

    I like this idea - in fact, I did this for a while.  The only reason I stopped doing it was because it slowed me down keeping track of aux tracks/tracks.

    So now I only do this if I need to.  Most things end up going to a bus in any case, which is where most of my volume automation goes.

    I correct myself: when recording the audio of the instrument channel, the ProChannel will still be recorded if it is turned on in the aux channel, so it will have to be turned off. But in any case, having the two independent sessions helps me a lot in calibrating the sound, because I don't start the song with the ProChannels turned on; I turn them on halfway through the song, because the console emulations change the response of the sounds a bit and I like to start dry. Then, after recording, I do a bit of trim, saturation, and EQ on the ProChannel.

  17. 17 minutes ago, msmcleod said:

    Back when I was recording to tape, I used to get the sound right for each instrument and record everything as it sounded.  For example, I'd get the guitar sound I liked, then recorded that straight to tape.  If corrective EQ was needed to get it to fit in the mix, then that was a mixing task.  The key was to get the best sound possible to tape.

    So nowadays in the DAW world, I use the FX Bin for sound design, and the Pro Channel for mixing (obviously it means making Pro Channel PostFx).

    For example, if I record the guitar dry, I'll use TH-U or Guitar Rig in the FX bin for my sound.  The Pro Channel is then free for mixing duties.

    In fact, I also use the ProChannel as a console channel, placing it post-FX, and in the FX section I insert the recording chain of the instrument. But now I have a new method: I use the instrument channels only for the FX and the MIDI data, then I route to a group (aux) and insert the ProChannel as if it were the audio channel of the console. This way I have separated the recording sessions from the mix session: if I record the instrument, the ProChannel will not also be recorded, and then I insert the recorded audio track into the aux group (ProChannel) for the audio mix.

    • Like 1
  18. 4 hours ago, John Vere said:

    I hope you understand that there’s no pre amp involved in a DAW. The pre amp is your audio interface which controls the input level. 
    You could turn the tracks Gain right off and your input level of the audio you are recording will not change. 
    Once the audio is recorded or in the case of the output of a VST instrument then the gain can be used to set a desired level of the input to the track or bus channel strip. 

    Digital channel strips will usually fall short of the emulation of analog gear. Often making things worse not better. 
    You want great analog sound then purchase a real studio mixer like a Midas or ? 

    The pro channel is just as David said another place to put effects. It differs in that it comes with modules you can use that are easy on your CPU. You can drag them up and down to try different things. You can save pre sets etc Its a handy tool that keeps things tidy. 
    The inclusion of the Concrete Limiter in Sonar has made my day.  

    So why do all these analogue console emulations exist? I only use VSTs or hardware synths; I don't preamplify from the sound card. But after recording the sound at -12 dBFS, if I increase the console emulation trim on the ProChannel, the sound acquires more character, and if I do it on all the channels everything sounds more beautiful. Then, by putting the console emulation at the beginning, I can see the input level in dBFS or RMS when the faders are at post-fader levels, and this helps me a lot in gain staging, in addition to the VU meters. In my opinion, using the ProChannel emulation is better than not using it.
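A small sketch of the gain-staging arithmetic described above; the -12 dBFS target comes from the post, while the helper names are mine:

```python
def dbfs_to_linear(dbfs: float) -> float:
    """Convert a dBFS level to linear amplitude (0 dBFS = 1.0)."""
    return 10 ** (dbfs / 20)

def trim_needed_db(peak_dbfs: float, target_dbfs: float = -12.0) -> float:
    """Trim, in dB, that brings a measured peak to the target level."""
    return target_dbfs - peak_dbfs

print(round(dbfs_to_linear(-12.0), 3))  # -12 dBFS is about 0.251 linear
print(trim_needed_db(-6.0))             # a -6 dBFS peak needs -6.0 dB of trim
```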

    36 minutes ago, pwal³ said:

    check the signal flow diagram here (i would've embedded it but the forum doesn't allow non-https image links, even though it's their own 🙄)

    https://legacy.cakewalk.com/Documentation?product=SONAR&language=3&help=Mixing.07.html

    Thanks 

  19.  

    1 hour ago, David Baay said:

    In DAW world there is no meaningful difference between a Strip FX and an Insert FX: they are both "in line", such that everything at the track input is forced to go through them (notwithstanding that some module might have a parallel path built into it).  The ProChannel is effectively just a second FX bin that can be placed before or after the track FX bin, just as a 3rd-party channel strip plugin can be placed before or after other plugins in the FX bin.  And then there's the Clip FX rack, which gives you a place to put a plugin ahead of everything once the live input's been recorded.

    In some sense it's a moot question because by the time any plugin anywhere in the DAW sees the signal it's already been amplified to line level before it was digitized whether by the mic pre built in to the interface or by an external mic pre into the interface's line in or by the amp in a hardware instrument that outputs line level.

    My intention is to emulate an analogue console without necessarily having to resort to third-party plugins that overload the CPU. I tried listening to my songs with and without the console emulation in the ProChannel, and the difference is clearly audible; in my opinion those modules sound good. Thanks.

  20. I use the ProChannel a lot as a console channel strip, putting the pre at the beginning, then the compressor and equalization, while the FX Chain represents the insert, and I put everything post-FX. But sometimes I wonder if the ProChannel is really a channel strip, or if it is an insert. The fact that the modules are outboard-style and not channel-strip-style makes me think so: in a channel strip I don't have an LA-2A in outboard format, or an 1176, or a reverb, and the pre should be at the beginning, not at the end. How do you use the ProChannel? Would I be better off inserting a third-party emulation as a channel strip and using the ProChannel as the track insert?

    Thank you

    (Google translate)

  21. I noticed that if I add a MIDI controller to Cakewalk and then open some VSTi synthesizers, the MIDI controller will by default have one or more parameters connected, without my having activated "MIDI Learn" or assigned any NRPN to the automation lane. I wanted to know how I can disable this automatic control, because if I wanted to do a live automation performance with, for example, the CC37 knob of the MIDI controller, it would write the automation of the control I set, but at the same time the default assigned parameter would also change, creating a mess (the MIDI controller CC knob changes depending on the plugin). Is there a way to see the default connections between the CC parameters and the MIDI controller? Thank you

  22. 10 hours ago, rsinger said:

    I have an msi crosshair that's a couple years old and it's fine. For the most part I followed Sweetwater's optimization guide.

    https://www.sweetwater.com/sweetcare/articles/pc-optimization-guide-for-windows-10/

    Check the bios and set it for performance - although the crosshair is a gaming machine the bios was set for balanced. For audio work you'll want to boot up and not run any other apps and put the machine in airplane mode. If you just booted, wait a couple minutes before running latency mon. HTH.

    Fantastic, now I have tons of tutorials to watch. Can the OS settings be saved in case of a hypothetical format, or will I have to redo everything if I reset?
