Everything posted by azslow3
-
lol... why do people still fall into the same trap? Next will be 16K... Are you still going to buy it for 10" tablets? I understand there can be some advantages for photo processing (but I guess HDR brings more here). Anyone who professionally works with 4K video obviously also needs a 4K screen. But why have pixels which are impossible to use 1:1? Yes, everything can be a tick smoother, with 4x the processing power or 4x the picture size. I am still happy with an HD Ready TV as a TV (42") and an FHD 24" monitor. I use 100% scaling on an FHD 15" notebook monitor as well, so I guess I would use 100% on a 32" 4K in case I buy it. And I would like a new 4K projector, since I have a 3m screen. But when someone needs scaling to work normally, it simply makes no sense.
-
48/24 vs. 44.1/24 sampling - performance vs. quality
azslow3 replied to Sven's topic in Cakewalk by BandLab
What do you mean by "/24"? Normally "<frequency>/24" means the ASIO buffer size is 24 samples. But I am not sure in your case (24 is a very small buffer, supported by some relatively expensive interfaces on top computers). All current interfaces can work in 24-bit mode, and 32-bit mode is not possible hardware-wise. DAWs process in 64-bit FP (at least 32-bit FP). So the "sample size" of 24 bit is normally assumed as the only option and therefore omitted. 48/<n samples> has less latency than 44.1/<n samples>. Less latency in addition to more samples per second is more demanding on the system. In case the system is at its limit with 44.1, it can not handle 48, at least not with the same buffer setting. The sonic difference is small, and CbB supports upsampling for plug-ins; that is audible for many plug-ins, since it makes them run at double frequency. So (IMHO):
* to be on the "safe side" in all audiophile discussions you need 96kHz
* on a powerful system with a top interface it can be possible to have less latency with 48 than with 44.1 (on a weak system with a low/mid interface it makes sense to upgrade the interface first, in case latency matters)
* otherwise there is no difference.
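To put numbers on the latency difference, a minimal sketch (the buffer sizes are just illustrative values, nothing specific to any particular interface):

```python
# Nominal one-way buffer latency: buffer size divided by sample rate.
def buffer_latency_ms(samples, rate_hz):
    return samples / rate_hz * 1000.0

for rate in (44100, 48000, 96000):
    for buf in (64, 128, 256):
        print(f"{rate / 1000:g} kHz, {buf:3d} samples -> "
              f"{buffer_latency_ms(buf, rate):5.2f} ms")

# The same buffer at 48 kHz spans ~8% less time than at 44.1 kHz, which is
# why 48/<n> gives lower latency, but the CPU also gets less time per buffer.
```
-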
When almost unused they normally produce zero noise. Mine is in silent mode most of the time (including small 3D editing and old games). It depends on how much power a particular job needs, whether the card is in "gaming mode" (over/under-clocking/voltage settings) and how good the cooling system is when passive. So that can be a good point for the OP: low profile can be problematic from the noise point of view. If the case allows it, normal profile is safer.
-
I have opted for 2x16GB. I know my needs at home (at work, decisions are in a different size and price range...). A GPU can outperform a CPU in some tasks. But that is task and GPU dependent. My 1050 Ti can not outperform the i9 in Blender rendering.
-
I mean, asking someone to do his usual job for free is a bit unfair, no? So, comments from me (I have recently upgraded to an almost identical system, except I put in an MSI MB/GPU and populated 2 RAM slots only):
* no overkill in your configuration
* "overclocked i9-9900K" is not the term it was before. Without any "overclocking", this CPU in "standard" turbo mode can (will) throttle under some conditions. With a CPU-intensive load on all threads (and MB auto-overclocking settings) it consumes 230W+. I have opted for the 12A because of Noctua's recommendation. It can handle a 180W limit, probably a bit more (and probably with less aggressive voltage tweaking the CPU can do more work from the same power). The 12S performs less well. The common option is not even the 14; for this CPU it is the dual-tower 15. The case should be chosen accordingly and access to the RAM slots will be limited, but that is the safer way to go.
* from my knowledge there is no benefit from populating all 4 RAM slots, but the first two slots can be especially tricky to access with a big cooler.
* you can set "all cores 5GHz" to get under 10% more theoretical computation power (see the sketch below). And hope the load is not too high (so you do not use the whole computational power, not even close). A bit counterintuitive, but it has some theoretical benefits.
* desktop CPUs have a limited number of PCI-E lanes (not related to the number of PCI-E slots). Your MB has 2 M.2 slots and only one is "primary". Also note the second M.2 disables a part of the SATA connections. So "3 M.2" and "5-6 HDD" is not realistic. Put in a bigger M.2 if you need more space (f.e. 1-2TB M.2 are not significantly more expensive when calculated per GB).
* the system will be quiet, in case you do not put in any HDD. But under low load only. As you can guess, once something is consuming 300-400W in total, all of that is converted into heat. And this heat needs to be moved out. Noctua and others have many differently sized case fans. Modern PSUs and MBs have special connections and steering for these extra fans, so they stay quiet till they are required. A "stress loaded" air-cooled CPU+GPU is helicopter-like, no workarounds...
* if you plan any GPU-intensive tasks, opt for a better GPU. If you do not have GPU tasks at all, the CPU's built-in GPU can do the job. I have a 1050 Ti because I also play relatively old games on an FHD monitor; for that it is sufficient, smooth and quiet. Also 3D editing is smoother and quieter. But 3D rendering is way faster on the CPU than on this GPU.
* I have 650W, no problem in that configuration. But if you plan a more powerful GPU (alone or SLI), that can be problematic. Note that the PSU total power is just a "label"; look at the max currents of the concrete rails to get an idea where the problem comes from, even in case the total consumption from all components is under the total limit.
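On the "all cores 5GHz" point, the gain is easy to estimate. A minimal sketch, assuming a stock all-core turbo of about 4.7 GHz (the real stock behavior depends on the board's defaults and power limits):

```python
# Rough upper bound on the gain from locking all cores at 5.0 GHz.
# ASSUMPTION: stock all-core turbo around 4.7 GHz (board dependent).
stock_all_core_ghz = 4.7
locked_all_core_ghz = 5.0

gain = locked_all_core_ghz / stock_all_core_ghz - 1.0
print(f"Theoretical all-core gain: {gain:.1%}")  # ~6.4%, i.e. "under 10%"

# In practice the extra heat/power can force throttling or louder fans,
# so the benefit only shows up when the load rarely saturates all cores.
```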
-
I do not know how Presonus drivers work, but your first graph is definitively not from 96kHz sampling hardware. Can it be that it shows 96kHz, while in fact that is done in software? I am not sure that is possible with Presonus (it is possible with my M-Audio) - does it work with ASIO and WDM at the same time? I mean, check that Windows is not configured to use it for something in 44.1/48kHz mode (the only way I know to convince Windows is to have another interface for that... when there is only one, Windows tries to grab it). My interfaces report correctly - when the interface is used by Windows / another app, its ASIO frequency can not be changed. But I do not have a Presonus; they f.e. could "trick" by letting the frequency change in software while still locking the hardware. Another possible way is checking the Windows settings: in the Control Panel (the old one) / Hardware and Sound / Sound, in Playback and Recording, right-click on each device, Properties / Advanced. Check that everything is set to 96kHz, at least for all IO of the FirePod.
-
Another term is "slip stretching" (f.e. Ctrl+Shift dragging a clip border). That allows making the clip length what it should be, f.e. to align clips recorded with different hardware clocks. Note that is the subject of a complex algorithm; you can set it in Preferences, separately for "preview" and "rendering", and the quality can be significantly different.
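To show why clips recorded with different hardware clocks drift apart at all, a small sketch with made-up but realistic numbers:

```python
# Two interfaces both claim 44100 Hz, but real crystals differ slightly.
# ASSUMPTION: 50 ppm clock error, a plausible but illustrative figure.
nominal_hz = 44100.0
actual_hz = nominal_hz * (1 + 50e-6)   # one interface runs 50 ppm fast

recording_s = 300                      # a 5 minute take
drift_samples = (actual_hz - nominal_hz) * recording_s
print(f"Drift after {recording_s}s: {drift_samples:.0f} samples "
      f"(~{drift_samples / nominal_hz * 1000:.1f} ms)")

# Slip-stretching the clip by this ratio brings it back in line:
print(f"Required stretch ratio: {nominal_hz / actual_hz:.6f}")
```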
-
This graph shows that your interface, at the moment of the test, physically works in 48kHz mode. Check its own control panel (if it has one). At least check that RMAA is in ASIO mode. In MME it WILL NOT switch the interface, so the interface continues to work in the last mode it was asked to work in. From where the garbage comes I have no idea; on my interfaces the part up to 24kHz has the same general shape, but the upper part is zero (I mean when the hardware is in 48kHz and RMAA runs in MME at 96kHz). Can it be some "Windows audio improvement" or some other software "effect"? (I do not have a FirePod, but when I had a SoundBlaster it tried to "improve" my sound internally.) I ask because even the part up to 20kHz is horrible. Sure, I do not expect it to be as flat as for top current interfaces. But even my M-Audio FireWire (without pre-amps) looks way better (it falls off after 20kHz in 48kHz mode). If that garbage has found its way into your recordings, there is nothing you can improve there. But at least it should be possible to switch your interface into real 96kHz for future recordings.
-
Sorry, English is not my primary language. Is your question about the Dell XPS? All XPS (independent of version) and most Latitudes have latency problems. DELL has messed something up in hardware and/or BIOS; they periodically release "BIOS updates" which should improve latency, but all tests on the Internet confirm the opposite. The last BIOS update prevents Linux from running normally, at least in my case... Linux developers write that the Dell BIOS exports buggy information. That can be the reason why the MS ACPI driver periodically spikes up to 3-5ms. Some people disable devices assigned to ACPI, and some of them claim improvements...

Can the audio interface influence the result? Kind of. The Roland VS-20 just freezes dead on my XPS (re-connecting does not help to get the sound back). The RME BF-Pro has no problems at 64 samples / 48kHz, at least under light load. That does not mean there are no glitches at all, but the resulting sound is acceptable for me. It is important to have absolutely no background activity. So all MS tasks should be finished, network disconnected, no extra USB devices.

A completely different point: how do you connect your keyboard? F.e. I periodically get strange latency from my DP (Kawai) and e-drums (Roland), both connected via USB. I know it is strange because 2 other MIDI keyboards connected in parallel have no such key-to-sound delay. I still do not understand under which conditions that happens. Try connecting by MIDI to the audio interface (or USB in case you always use MIDI) to check whether there is a difference. 5-7ms soft synth output latency (128 samples buffer) should not be significantly inconvenient if the rest is ok (transferring a 10-finger chord through a MIDI cable takes 6-10ms, and I have seen people playing MIDI keyboards on stage).
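The 6-10ms figure follows directly from the MIDI wire speed; a minimal sketch of that arithmetic (the note count is just the 10-finger example from above):

```python
# Classic DIN MIDI runs at 31250 baud, 10 bits on the wire per byte
# (start + 8 data + stop), i.e. roughly 0.32 ms per byte.
BAUD = 31250
byte_ms = 10 / BAUD * 1000

notes = 10                              # a 10-finger chord
plain_bytes = notes * 3                 # status + note + velocity per note
running_status_bytes = 1 + notes * 2    # status byte sent only once

print(f"Without running status: {plain_bytes * byte_ms:.1f} ms")            # ~9.6 ms
print(f"With running status:    {running_status_bytes * byte_ms:.1f} ms")   # ~6.7 ms
```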
-
I will not repeat my post in the recent thread, you can find the full story if you want. Two major points:
* for some people it takes much longer - I can not update even just the Assistant in 5 minutes
* for some people the Assistant NEVER works as expected - I guess also a consequence of a slow connection
Recently I upgraded my computer. So, Windows 10 installed from scratch and updated. Downloaded the new Assistant. "Installing..." (NO PROGRESS INDICATOR). After an hour, I decided to check... the file it was downloading was not growing. To check that, I had to know where the files are, no? Unlike on my old computer, the second attempt to install CbB with the Assistant was successful: it managed to start the installation after downloading. Still, all of that is a pain.
-
Got it. So the opposite idea from "early" processing, with "late" processing for the content in the live track. But if CbB developers manage to calculate the correct delay difference between live and other tracks, why not do "early" processing and remove the PDC button (since there would be no need for it)?
-
Well, this option automatically makes one MIDI track live. In my tests, always. Without clicking on anything. If you see no difference between this option on and off, something is really broken. If you try the test I described before, with 3 tracks, do you get the expected result? If yes (so no problem), there is probably some fancy routing in your project.
-
Laptop recommendations for use with Cakewalk
azslow3 replied to fallenturtle's topic in Cakewalk by BandLab
Do NOT buy a DELL XPS or other DELL "slim" toys. Not only does my XPS have bad latency, after some BIOS update the fan on the left side is always spinning. With the latest BIOS update Linux can not run at all. A complete BIOS downgrade is not possible. Noisy as hell (fans, some electrical crackling inside the notebook and power supply) piece of crap...
-
I do not know how delaying MIDI can help with PDC. MIDI is bound to the input audio. "Pre-calculating" not-live tracks with delays is, from my knowledge, the only known way to deal with PDC better (it eliminates the problem almost completely, except for the open question what to do when a live track has a send to a delayed bus). No possibility to permanently set MIDI input to "None" and no possibility to permanently disable VST MIDI output (or at least not include it in the "All" stream) are 2 of many long-standing bugs... "Allow MIDI Recording without an Armed Track" (record arm by echo) sounds as logical as "Record arm but don't record" (monitor without recording) in another DAW.
-
"Always echo" is a kind of useful when there are several instruments, for fast switching between them using the only MIDI keyboard. Changing the instrument is a matter of selecting another track then and that can be easily automated. Usually proposed way is avoiding plug-ins with delays till the end phase of mixing or even till mastering, so auto echo is normally not an issue. Some plug-ins have really huge delays, up to 1 second. Mixing in such environment is almost impossible, any parameter in the chain before such plug-in takes 1 second before it is audible. I was so used to this option that I was missing it in other DAW. Especially when achieving the same effect requires 3 different track options set, for each track in question.
-
I have my own checklist, and I recently updated it: http://www.azslow.com/index.php/topic,395.0.html It has all the ideas I could find on the Internet about the topic, including exotic and questionable "rumors". Most stuff is not so important; really, I should put somewhere that the "Use LatencyMon to check system latency and problems with devices/drivers" and "Use REAPER's built-in Performance Meter to check audio/plug-in timing" sections are where people should start.

The RME control panel can show severe hardware problems, ThrottleStop shows problems with cooling and fancy CPU settings, the REAPER RT monitor can show where the bottleneck is, and LatencyMon identifies the driver (sometimes the MS performance tool is needed to find the real driver). All that within 5 minutes, without thinking much... It can be that some PCI-e card does not show bad things in its related driver but causes problems for other devices/drivers, but that is unusual.

No checklist can make badly designed hardware better. F.e. nothing can help a DELL XPS run with low latency. But at least the utilities can point to where the problem is (in the case of DELL that is system-related hardware, accessed by ACPI, and severe system locks probably coming from the same source). Also, the mentioned tools give exact numerical values, so you can judge whether the compromise is worth the effort. F.e. in many cases disabling C-states and SpeedStep and locking the CPU frequency just allows running at a 32/48/64 buffer size instead of 48/64/96. And I guess some people will prefer to have 8W CPU power consumption, and so an idle fan and no noise, with a 96 samples buffer size, instead of reducing it to 64 by spending 40W on an idle system.
-
Sure, one track is one track, in any DAW. So a synth/FX is either included in PDC or not included in PDC. To make existing content play with PDC and live playing happen without PDC, the track has to be duplicated, with the existing content removed from the copy and recording going to the copy. DAWs only differ in which PDC is applied to a particular track. In CbB, there is one global PDC for all parallel chains. There are DAWs which have one PDC for all live chains, but per-chain PDC for not-live chains, with automatic aligning. In the last case disabling PDC is not required at all; live chains never get delayed relative to not-live chains. -- What the OP describes exactly matches the behavior from my example. So I prefer to ask one more time before agreeing there is yet another bug in CbB.
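For the alignment part, a small conceptual sketch; this is only the generic idea of compensating parallel chains by padding them up to the slowest one, not how CbB (or any particular DAW) really implements it:

```python
# Generic delay compensation for parallel chains: every chain is padded up
# to the slowest one, so all of them arrive at the master aligned.
# ASSUMPTION: the per-chain latencies below are purely illustrative.
chain_delay = {          # summed plug-in latency per parallel chain, in samples
    "drums":  384,
    "vocals": 1024,
    "guitar": 0,         # this is the live chain
}

max_delay = max(chain_delay.values())
for name, delay in chain_delay.items():
    print(f"{name:7s} own delay {delay:5d} -> added PDC pad {max_delay - delay:5d}")

# When PDC is disabled for live tracks, the live chain ("guitar") simply
# skips its pad: it can be monitored without delay, but it is then early
# relative to the compensated chains, which is what this thread is about.
```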
-
It can be reproduced with 3 tracks: an audio track with 2x ReaFIR, and 2 instrument tracks with identical MIDI clips, originally in sync with the audio. As long as PDC for live tracks is not disabled, everything is in sync (after PDC is calculated correctly following the addition of ReaFIR, which may need a project reload). Disable PDC (with the transport stopped, then rewind). With "Always Echo..." set, one of the instrument tracks will go out of sync (no matter what you click/select).
-
As long as the option "Always Echo Current MIDI Track" is set, you always have one MIDI track live. So no, you do not need to click/select/etc. the track for that. Unset that option in Preferences, check that echo is then off on all MIDI/instrument tracks and try again. Notes:
* If you change something during playback, that does not recalculate PDC. So you need to stop the transport, make the change, then start the transport (Sonar/CbB was never good at processing modifications on the fly).
* Sometimes the PDC button is buggy. Try to rewind / reset the audio engine (the same helps for many strange things).
* Check the magic "prepare MIDI" buffer size (given in ms, default 50, try 500-1000). It should be long when you have plug-ins with delays (that option has been buggy for at least 5 years); synths can "skip" notes and show other strange behavior otherwise.
PS: using plug-ins with delays and recording in parallel without thinking about PDC and fancy buffers is possible, but not in CbB...
-
Can it be that the track in question has auto echo on? I remember that is the default option, but it can be turned off. That would explain why the synth excluded from PDC is different (it follows the focus): even without record arming, in case echo is (auto-)enabled (the synth can be played with a MIDI keyboard), that is a "live input".
-
Just to clarify... To not hear a delay on live inputs, you turn OFF PDC (PDC is on by default for everything). Normally, turning it off only affects live inputs. So, live inputs are out of sync with the rest but can be monitored without delay. So, you observe that some synth(s), with a MIDI track on input that is neither armed nor monitored, also ignore PDC and are therefore in sync with your live recording, right?
-
One FW M-Audio without pre-amps but with encoders, as the standard audio interface for the computer. Another M-Audio with 2 pre-amps which can be chained with the first one; it was placed closer to the gear. A Phonic Firefly, which plays the role of a rack mixer with the possibility to work as an interface. A VS-20, too unstable under Windows and with high latency, but it works under Linux (unlike all mentioned before). A Babyface Pro as the portable and the only quality product I have... Not counting the interfaces in the e-drums, the vocal processor and a small analog mixer. Why? I had no one wise nearby to plan my home gear.
-
Plug-in delay and the related PDC in many cases can not be solved by faster computers. Like you can not reduce the size of your shadow by running faster... We do not suggest you change your workflow. Your original observation was "vocal clips, when recorded, are not synced correctly". That is a bug. In the DAW, in some plug-in, or just in the way you use the DAW or some particular plug-in. So our recommendation is to attempt to find it. Note that your proposal to just automatically disable the FX engine on recording, while it sounds ultimate, is not really ultimate. If there is a bug with PDC, it can affect not only recording but also playback. Depending on plug-ins and routing, an audio clip may have to be at the "wrong place" to sound in sync, f.e. when some plug-in is in parallel with the vocal chain. BTW, that you can test: when you get a "not synced" recording, does it play correctly with all FXes off?
-
An FX should report its latency correctly. The DAW should take it into account. If an FX is used in the monitoring chain, there is nothing the DAW can do, except provide "bypass all" functionality. DAWs distinguish between keeping an FX "hot" (ready to be included in the processing chain at any time), "cold" (the plug-in is still there but does not influence the project) and "off" (the plug-in is unloaded). Well, theory is good, but what can you do practically? It is good to identify which plug-ins introduce which latency and what the complete routing is. Unfortunately, CbB does not show you which PDC is applied. You can try (that is quick) the following: install REAPER (portable will work, the demo is fully functional) and the ReaCWP extension for it. Point it to your custom VST folders in the preferences. Open your CbB project (better a copy... if ReaCWP is installed correctly, you will be able to do this, and with some luck it will be loaded correctly) and open View/Performance Meter or the Track Manager. There you will see PDC per track. Note that values are rounded to the buffer size, for each FX individually, so in CbB you probably have less PDC than shown here. Opening the FX chain of an individual track and selecting a concrete FX will show the actual PDC for this plug-in in the bottom left corner.
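On the "rounded to the buffer size" remark, a minimal sketch of that rounding (the per-FX latencies are made-up values, just to show how summing rounded numbers inflates the displayed figure):

```python
import math

# Per-FX latency rounded up to whole audio buffers, then summed per track,
# the way the post describes the REAPER per-track display behaving.
# ASSUMPTION: the reported latencies below are illustrative example values.
buffer_samples = 128
reported = [150, 30, 700]   # samples of latency each FX claims

rounded = [math.ceil(lat / buffer_samples) * buffer_samples for lat in reported]
print(f"Exact sum:   {sum(reported)} samples")   # 880
print(f"Rounded sum: {sum(rounded)} samples")    # 256 + 128 + 768 = 1152

# The displayed (rounded) figure can noticeably exceed the exact total,
# which is why the actual PDC in CbB may be smaller than what is shown there.
```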
-
Disclaimer: the following is IMHO, not 100% sure. PDC on/off just switches PDC (delay compensation) for the tracks you are recording. So it changes what you hear during live monitoring, but it should not influence the time where the recorded material is placed. As Scook has mentioned, the most common source of incorrectly placed recorded clips is your audio interface drivers. By default, CbB assumes the latency reported by the driver is correct and "reverses" the effect (note that is not Plug-in Delay Compensation (PDC) but interface delay compensation, which has no common abbreviation). So first check you have everything right without plug-ins: record a dry vocal and try to record another dry vocal while listening to the first recording; in case the result is not what you expect, you have problems with the interface delay settings. PDC works by trusting the information provided by plug-ins. There can be "bad" plug-ins which report no latency while introducing it (that can be helpful in some cases). In this case a DAW has no chance to compensate correctly; try to find the plug-in in question and look for something like a "live mode / zero latency" option in that plug-in. Good plug-ins with such options do that right, they really switch to different algorithms to avoid latency. But "bad" plug-ins can simply set the Delay parameter to zero, without doing anything else.
