Everything posted by azslow3

  1. It is more about the logic inside the UPS: how long it waits until the power is fine again and which actions it takes. My best real-life example is 20 years old. "Smart" colleagues connected a (relatively) big particle accelerator to the office power network. Normally it was powered from a separate major line (many megawatts of power consumption). As a result, there were "tiny" spikes of roughly ~500V every few seconds. Typical measuring devices still showed a constant 220V; only oscilloscopes could show the real picture. That was a good test for many devices. Interestingly, most computers, switches and other electronics continued working normally. In the end we "lost" just several devices (out of thousands). But UPS (and similar) equipment went crazy. The best UPSes indicated a continuous failure, after a while shut down the powered equipment gracefully and went offline. Many UPSes kept switching on/off, some of them eventually discharged completely, and a part of them did not even trigger a graceful shutdown. The best observation was a big power system which had diesel generators as the second-level backup: the electronics kept starting and stopping the engines 🤣
  2. I just want to describe my experience with situations (and the partially found reasons) involving something I call "e-noise" (similar to what you describe):
     - All built-in interfaces (Realtek, SB) reacted to the (wired) mouse, HDD and load. I could (and can) not eliminate that. The level is low, but annoying, in monitors and headphones.
     - Kawai DP (2-wire power cable). Problematic when there is any connection to/from it (any combination of USB/MIDI/Audio). MIDI was cured by cutting the ground at the receiving end. By the MIDI standard it should not be connected, but most MIDI cables are symmetric (so it is soldered in both connectors). It seems my TC voice processor is badly designed, so with DP->TC connected by MIDI, the TC XLR outputs start producing the noise. The audio connection is way better with the HD 400. No solution for USB yet.
     - Behringer small USB mixer (2-wire power cable). Generates "e-noise" as soon as more than just one input and one output are in use. E.g. USB + (balanced) output = no noise. One (unbalanced or balanced) input + output = no noise. But USB + input + output = noise. Several inputs + outputs = low/no noise, depending on the connected equipment. The HD 400 helps, but I ended up using a second-hand audio interface (8x8, Phonic) as a mixer.
     - Roland TD (unbalanced outputs, 2-wire power cable). Was prone to generate noise with the Behringer mixer; the HD 400 helps.
     - Monitors connected with unbalanced (TS) AND balanced (XLR) cables at the same time. Activity on the other cable end (connected/disconnected/connected to different equipment) makes no difference. When just one cable is connected, there is no noise.
     And now some crazy stuff I have had the "pleasure" to observe during my life:
     - Mixed "ground" and "neutral" in a house power net, so at some place(s) they were connected (!). Sometimes indirectly, when "ground" was wired as "neutral". That is really dangerous.
     - Some (not music) devices create distortions on some or all power wires.
     - UPSes/"power conditioners" can smoke and even explode (one exploded in my hands... I was really lucky, there were no consequences for me). Thinking about it, such devices have caused fires/trouble more often than any other equipment. They also go crazy when something is wrong with the power line (start switching to battery and back continuously, etc.).
  3. Modern Realtek chips do have an ASIO driver. And the latency can be quite low with it (under 7ms, so not worse than most dedicated interfaces). Some notebook manufacturers ship branded drivers with the ASIO part (or just its control panel) "forgotten". I had that issue with a Dell XPS. Older chips can be used with WASAPI or ASIO4ALL. All such chips I have tested had between 10ms and 16ms of usable latency. For MIDI tasks that is sufficient. Note that unlike with native ASIO drivers, the latency reported by other drivers (including ASIO4ALL) is wrong in most cases. So do not trust "I have 3ms latency with Realtek!" posts. But you should be able to get under 20ms even in the worst scenario.
  4. Tons of ads periodically appear on top of posts, Firefox/Linux. Not consistent; re-opening the thread removed them. PS: what do people expect from a "free" forum for a "free" product? Have you never played F2P games? 😎
  5. Yes, the CbB/Sonar engine is what it is. It does not support "ahead" audio processing like some other DAWs (Studio One, REAPER, etc.). So either turn off PDC compensation (there is a special button) and live with the consequences, or do not use the corresponding plug-ins during recording. There are no other workarounds.
  6. I have tried to use a mouse for that (a dedicated mouse as a MIDI controller). It cost $5, does not require a watch and is simpler to use than any touch device (especially one as small as a watch). I am back with a normal controller now, with big buttons and normal knobs. Most people do not try to use MiniGuitars and NanoViolins for a reason, even though it is theoretically possible 😀
  7. A bit on-topic... Melodyne in REAPER has full freedom. It can be used as a Track FX. Note that the content of the track/item is not "frozen"; it is possible to move/edit clips on a track that has Melodyne. And in the REAPER release (and after a Melodyne update) that really works pretty well (== without crashing, including adding Melodyne without stopping the transport). I have just 3 observations:
     - When used on an item (so more like the Sonar way, to correct a small piece instead of the whole track), audio preview does not work (by current REAPER design). So this option is effectively almost useless.
     - People put Melodyne on every track in a 20-30 track project and then wonder why the computer is instantly 100% loaded during editing (Melodyne continuously re-analyzes the updated source in the background).
     - Without the "save minimal undo" compatibility option set (unset by default for all plug-ins), any edit operation in Melodyne is painful on a slow computer.
     So if Cockos manages to get Melodyne auditioning as a take FX, the REAPER ARA integration can be called perfect...
  8. Windows 10 can influence DAW performance and some non-ASIO aspects of audio. But as long as the device and the driver are the same, you can expect the same latency. Also, as I have mentioned, the first post in that thread does not contain everything; people measured interfaces on Windows 10 and not only at 44.1kHz. For the Scarlett, one of the posts mentions that the "64" setting is probably 128 samples per buffer internally. All interfaces have some "extra" latency settings; some of them expose a part of these settings in some form to the user. The latency is a sum of many delays: AD + transfer to the computer + driver + transfer from the computer + DA. The buffer size is just the chunk size in which audio is processed by the DAW. That directly influences the latency: e.g. if a DAW works at 48kHz/128, the "buffer length" in time is ~2.7ms. Since the DAW receives the whole buffer at once, the output theoretically cannot happen earlier than 2.7ms after the first sample is digitized. But all other processes are not instant either; e.g. the DAW needs time to process the buffer. The difference between the measured latency and the buffer-size latency is what the interface+driver need for everything else, e.g. 7.3ms - 2.7ms = 4.6ms. The smaller the buffer's own length (e.g. 96kHz/64 is ~0.7ms), the smaller the total latency can be with the same "overhead" (4.6ms + 0.7ms = 5.3ms). In practice, not all components of the overhead are constant (see the small sketch below).
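     To make the arithmetic above concrete, here is a minimal Python sketch (the 7.3ms figure is the measured value quoted in this thread; treating "measured latency minus buffer length" as a constant overhead is the simplification used in the post, not a property of any specific interface):

         def buffer_ms(sample_rate_hz, buffer_samples):
             # Duration of one audio buffer in milliseconds.
             return buffer_samples / sample_rate_hz * 1000.0

         measured_ms = 7.3                      # round-trip value quoted above
         buf_48_128 = buffer_ms(48000, 128)     # ~2.67 ms
         overhead = measured_ms - buf_48_128    # ~4.6 ms for AD/DA, driver, transfers
         print(f"48kHz/128: buffer {buf_48_128:.2f} ms, overhead {overhead:.2f} ms")

         # Assuming the same overhead with a shorter buffer (96kHz/64):
         print(f"96kHz/64 estimate: {overhead + buffer_ms(96000, 64):.2f} ms")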
  9. https://www.gearslutz.com/board/music-computers/618474-audio-interface-low-latency-performance-data-base.html Note that many interfaces/conditions are not in the first post; search the thread for RTL tables for almost all interfaces. Note that not all posts there are of equal "quality". And the "traps" are not only numbers taken from "some DAW", but also RTL screenshots where the interface has some built-in route, so the "loopback" was performed without DA-AD conversion. Also, these numbers should be interpreted as "the best you can get". So, if you are able to use some mode (like 96kHz/32), you will get the same numbers. But it can happen that a particular mode with a particular interface/driver is not usable (on a particular computer, DAW, project, etc.). It took me a while to understand that many (most?) people are not interested in low latency. They do not use in-DAW monitoring, except maybe for MIDI, for which latency is less important. So even some "high end" devices have big latency.
  10. * https://www.gearslutz.com/board/showpost.php?p=12352524&postcount=1163 According to that post, the settings for the 6i6 can be somehow wrongly labeled, so "64" is more like "128". So the next thing to check is why you cannot go lower. That is computer related. It can be that nothing can be done (as e.g. with my 8-year-old Celeron-class desktop), but with a relatively powerful computer, even a 6-year-old one, it should be possible to reduce the buffer size after tweaking.
     * Your original 7.3ms is good. In fact too good for that Presonus; all reports indicate around 10ms for the same settings. Note that this interface can report wrong numbers to the DAW. Make a loopback check, manually or with RTL, to get the real latency.
     ------
     UAC: according to all tests it has very good latency. It is a bit more expensive than the others and definitely brings better latency for that money. But it cannot do 7.3ms at 48kHz/128, so I could not resist "trolling" a bit. UAC owners could have prevented that with "wait... 7.3ms? Even my good-latency UAC cannot do that with such settings." Instead it was "7.3ms? That's too high... my UAC is better". 😉
  11. Do you mean "not high enough"? All published results I could find show that the UAC-2 has 7.7ms under the same conditions. I cannot judge the interface because I do not have it, but all the "ultra super under 2ms" RTL numbers for the UAC spammed across the Internet are about 96kHz/32 samples per buffer or smaller, normally commented with "with 50 tracks full of heavy effects". I guess they borrowed a computer from aliens (or they did the test with REAPER in playback mode with anticipative processing on, in other words not running anything in real time).
  12. I hope your problem is solved by the new audio interface. 7.3ms is not bad; the best you can get at 48kHz/128 is 6.6ms. That is not a great improvement for 6x the price. And so the question is why you could not use 64. It can be the interface, but it can be something else. You will know soon 😉
  13. No hosts? 😂 And for the topic... Come on, the only big country which tried to calculate the real price of everything and attempted to use such prices has not existed for the last 25 years. Does someone still think that producing a product and selling it IS the way to make BIG money in THIS world? LOL.
  14. Assign the pads to a different MIDI channel than the keys (on the controller). Use the same input for both tracks, but select the particular channel for each track. Use the Generic Surface to learn the transport buttons.
  15. My list does not include multi-cpu checks. I have racks of servers, but they are not used for audio 😉
  16. While many things can be tweaked so that Sonar/CbB runs fine, expecting it to work like the most performance-optimized DAW to date (I mean REAPER) is hopeless. The feature responsible for fluent operation with small buffers, anticipative processing, simply does not exist in the Cakewalk engine. But it should be possible to make CbB work. With a bigger buffer size and fewer plug-ins, but still. When I think I have troubles, I normally start with my personal list: http://www.azslow.com/index.php?topic=395.0
  17. As the original question is about "recording", it is different from the intermediate format (which is better kept as 32-bit FP) and the final format (which can be 16-bit). Each bit is ~6dB (SNR, DNR, etc... just an approximation, but it works well in all the math). When you record without a hardware compressor/limiter and, let's say, set the gain so the result averages -18dB and record into 16 bits, your average resolution will be about 13 bits (see the small calculation below). And a not so loud section can easily end up recorded with just 10 bits. During mixing and mastering you will level this signal (with compressors, EQ, etc.). Try to bitcrush something to 10 bits; that is easy to notice even with $1 headphones on a $1 Realtek built-in interface. And if you record close to 0dB, a part of the signal is going to be digitally clipped. That you can also hear on low-end equipment. So 24 bits for recording is a good idea. With 32 samples per second you can get frequencies up to 16Hz, so the frequency response is not good by definition 😉 I believe that 16-bit dithering makes sense. I am sure there is better equipment than mine and there are way more advanced listeners; I can hear the difference only from 14 bits downwards. But in such examples I think it is important to mention whether unusual amplification was used or not. Were you playing an already mastered track and could hear the difference in the reverb at the very end, or was it just the reverb sound with the signal amplified by +12dB or more? Because with sufficient amplification it is possible to hear the difference of 24-bit dithering on a notebook speaker. I had to amplify by more than 60dB and max out all other volumes to achieve that, so it makes no sense, but it is possible 🙃
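     A quick sketch of the "~6 dB per bit" arithmetic used above (the -18dB level and the 16/24-bit containers are the examples from the post):

         DB_PER_BIT = 6.02   # roughly 6 dB of dynamic range per bit

         def effective_bits(container_bits, headroom_db):
             # Bits actually used when the signal sits headroom_db below full scale.
             return container_bits - headroom_db / DB_PER_BIT

         print(f"16-bit recording at -18 dB: ~{effective_bits(16, 18):.0f} bits")  # ~13
         print(f"24-bit recording at -18 dB: ~{effective_bits(24, 18):.0f} bits")  # ~21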
  18. I do not think disabling updates can be perceived as a hack. When I installed Windows 8, 8.1 and then 10, I had to agree that they are going to collect information and push updates by default. And I did not switch anything off. With 8, 8.1 and at the beginning with 10 there were no problems. But at some point my system started to reboot exactly when I was about to work, and not some quick reboot, since my computer is old. So in the middle of the day I had to wait an hour till it finished something. Also, telemetry takes longer and longer to collect, and it does not wait till the system is idle; it runs whenever it wants, with 100% system consumption. During the last year I had to connect to many friends remotely to fix "automagically installed" problems. While I have agreed that MS does something on my system and what they do is "legit", in practice that means I can no longer use my own computer when I want. Someone can say "your own fault". But I was born in the USSR: if someone fools you, fool him back. I am sure most people here do not understand that, but the whole capitalist world is based on fooling each other. The USSR was an attempt to change that, but that attempt was unsuccessful 😀
  19. If I remember correctly, I paid around $40 (to MS directly, so absolutely legit) for the XP to Win8 upgrade. XP was Pro, so I have Win 10 Pro. That is about the "inflation" in the Windows world 😉 Whatever I tried with the editor, I could not prevent Windows from contacting MS. And I am not alone. Yes, I could disable the stupid "Windows was restarted" thing, as well as the crashing of a working system during driver updates. But every time I see some unexpected disk activity, that is MS. That can be partially disabled in the services and the scheduler, but many things are reverted after updates (or prevent updates from running smoothly). And about "go legit": why does someone need to pay money to get LESS service? That makes no big sense to me. For the topic: well, I am prepared for whatever happens with Sonar Platinum+/Cakelab, 32-bit, etc. I have a backup DAW which runs on all currently available platforms, has no online authorization nor dongles and can load CW projects with most plug-ins (including DX). The only worry is about DimPro, which has CCC authorization and is not "Bandlab". The rest can go to hell without trouble for me. But as long as it is working, why not enjoy life 😁 PS. For the mentioned thread about authorization... My conclusion is that Bandlab should work fine for 6 months after an update, for any "offline" work. Using it without Internet nearby, especially live, is a "no go". For that I have X3 and "another DAW".
  20. Sorry man, but you describe a common mistake in sample rate interpretation. Yes, at 96kHz you have just 2 reference points to build a 48kHz wave. But that is sufficient to reconstruct it (and all lower frequencies) perfectly (see the numeric sketch below). Note that sampling (and the corresponding reverse conversion) relies on the fact that any audio is a combination of waves. There is no "square wave" in audio. Think about your speaker... to reproduce a perfect square wave, it would have to move instantly. That is not possible (the speed of light limit...). A "square wave" and other jumping forms (really, approximations of them) are used in subtractive synthesizers because from the frequency spectrum perspective they have "all frequencies".
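     If you want to convince yourself numerically, here is a small numpy sketch (my own illustration, not something from the original discussion): a 20kHz tone sampled at 96kHz, reconstructed between the samples with sinc interpolation. The tiny residual error comes from using a finite window, not from "too few points per cycle":

         import numpy as np

         fs = 96_000.0                    # sample rate, Hz
         f = 20_000.0                     # test tone, well below Nyquist (48 kHz)
         n = np.arange(2048)              # sample indices
         x = np.sin(2 * np.pi * f * n / fs)

         # Whittaker-Shannon (sinc) reconstruction at points between the samples,
         # evaluated away from the window edges to limit truncation error.
         t_eval = np.arange(900.25, 1100.25)     # fractional sample positions
         recon = np.array([np.sum(x * np.sinc(t - n)) for t in t_eval])
         exact = np.sin(2 * np.pi * f * t_eval / fs)

         print("max reconstruction error:", np.max(np.abs(recon - exact)))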
  21. I forgot to mention a trap with 88.2/96kHz. When an audio interface DAC is capable of using at least part of the extra frequencies, the analog output signal can contain high frequencies (up to 48kHz). The problem is with the monitors: what are they going to do with those frequencies? Theoretically they should cut everything they cannot reproduce, but practically they can output some distortion in lower frequencies. A more detailed explanation can be googled (and that is one of the most plausible explanations of what audiophiles "hear" in 96kHz recordings... they hear a difference, and taking the price into account, assume it is always "better").
  22. "Hi-end" devices and related sites are more about belief than about technology. And sure, they do everything possible so that an average person has no chance to find any usable information to prove something is science fiction... But in some cases a rough estimation is possible. Examples from your link:
     (number one) "DAC Chip: Xilinx Artix 7". "Extremely informative" (meaning no information at all). While Xilinx publishes very detailed specifications for all its chips (including ready-to-use DACs), "Artix 7" is an FPGA series which in general has no DAC(s). So no characteristics of what they really use can be found.
     (number two) "DAC Chip: ESS 9028PRO". Better. At least the ESS 9028PRO has some DAC inside, with some characteristics. But sure, it is an... audiophile chip. That means the public specification is "a whole 2 pages" long, with more words than numbers. Fortunately there is a hint: it "features 129 dB DNR". What would the DNR of a real 24-bit DAC be? 144dB (see the quick calculation below). You can guess what it should be for real 32 bits... And since they have powerful "processing" in front, that can be "perceived" DNR (which can be 120dB with 16 bits). Also note that FP is not mentioned. I have seen that in advertisements for some LG mobile phones, though surely not on the LG site 😀 And in case they mean 32-bit precision, a 64-bit FP format should be used with such ADCs/DACs (32-bit FP has at most 24 bits of precision).
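     For reference, the 144dB figure follows from the same ~6 dB-per-bit rule (a minimal sketch; the 129dB number is simply the one quoted from the product page above):

         def ideal_dnr_db(bits):
             # Idealized dynamic range of an N-bit converter (~6.02 dB per bit);
             # a full-scale sine adds ~1.76 dB on top of this in SNR terms.
             return 6.02 * bits

         for bits in (16, 24, 32):
             print(f"{bits}-bit ideal DNR: ~{ideal_dnr_db(bits):.0f} dB")
         # 16 -> ~96 dB, 24 -> ~144 dB, 32 -> ~193 dB; the quoted chip spec is 129 dB.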
  23. Google the articles from Craig about sample rate and plug-ins. By itself, 44.1kHz captures all the details a human and the equipment can deliver, but some processing needs 96kHz to sound right, and you can avoid continuous up/down sampling in that case if you stay at 96 all the time. Also, some interfaces have lower latency at higher rates, which is useful when you need through-the-DAW monitoring (if you have such an interface and your system can handle the related extra load). Final down-sampling is not an absolutely precise science since it requires an LPF, and all known algorithms have some tiny pitfalls. But if you need a high rate for processing, it is better to do the down-sampling once at the end, where you can control it and redo it easily, instead of hoping that all implicit or explicit oversampling up/down conversions are good. Also note that not all DAWs support automatic oversampling (relevant only in case you use several and transfer content between them). Until there is hardware processing inside the same interface (and that processing is done in FP), or I have missed a revolution in ADC technologies (around 20 bits of meaningful precision without any dynamic gain following the signal in real time), I do not see a reason to do so. Except saving CPU cycles on the 24->32 conversion at the price of recording garbage into 1/4 of the used space (with the consequence of increased IO load).
  24. I suggest you spend some time learning TotalMix. RME interfaces are digital mixers, with "software" inputs and outputs in addition to the hardware I/O. The latter is what confuses you: e.g. "Outputs 1/2" as seen by Cakewalk (or any other software) are NOT hardware outputs. They are just software output channels which can be mixed to any hardware outputs.
  25. Demo mode. Why?
     Authorizing with BandLab Assistant was not my question. Your product, your rules. I just want to understand these rules. So:
     - "a period of time" means 6 months
     - if the period is over, Cakewalk instantly reverts into demo mode without a warning
     - these dates are not shown anywhere
     Am I right so far? And just starting Assistant is not sufficient to reset the period? I am almost sure I have run Assistant during the last 6 months, but without updating anything. So what exactly is required to avoid "demo" the next day?