MediaGary

Members
  • Content Count

    64
Community Reputation

46 Excellent


  1. This is decimal arithmetic, so the full precision of binary values isn't here, but it will get you in the right ballpark. 48000 x 3 = 144kByte/s for each track. 32 x 144k = 4608kB/s for 32 tracks. 4,000,000k is 4GB, and 4,000,000k / 4608k ≈ 868 seconds ≈ 14.5 minutes per 4GB segment. EDIT: Feeling ambitious... Binary arithmetic: 4194304k is 4GB, yielding ~15.17 minutes per 32-track segment. Therefore you'd need about 4 segments per hour x 3 hours = 12 segments (48GB). The last time I used this X-Live recording process, I think any of the SD cards over 32GB had to be externally formatted, but perhaps exFAT is supported within the X-Live now... dunno. I only mention it because (obviously) you'd have to go for relay-recording between two 32GB SD cards to meet your 3-hour requirement. HTH
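The arithmetic above can be sketched in a few lines; this is a back-of-envelope check assuming 48 kHz, 24-bit PCM, and 32 tracks. The decimal-vs-binary meaning of "4GB" is exactly what moves the answer between roughly 14.5 and 15.5 minutes per segment.

```python
# Per-segment recording time for multitrack capture (assumed:
# 48 kHz sample rate, 24-bit PCM, 32 tracks, 4 GB file segments).
SAMPLE_RATE = 48_000      # samples per second, per track
BYTES_PER_SAMPLE = 3      # 24-bit PCM
TRACKS = 32

rate = SAMPLE_RATE * BYTES_PER_SAMPLE * TRACKS   # 4,608,000 bytes/second

for label, seg_bytes in (("4 GB (decimal)", 4 * 10**9),
                         ("4 GiB (binary)", 4 * 2**30)):
    secs = seg_bytes / rate
    print(f"{label}: {secs:.0f} s ≈ {secs / 60:.1f} min per 32-track segment")
```

Either way, a 3-hour session works out to roughly 180 / 15 ≈ 12 segments, which is where the 48GB total comes from.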
  2. Yes, the import and breakout/explosion works fine. As expected, you have to carefully position/concatenate the 4GB segments to avoid clicks/dropouts at the edges of the files. But yes, when I was doing remote recordings, it was my standard procedure and Cakewalk always worked with 8, 16, and 32 tracks.
  3. You've certainly piqued my curiosity: I have used a Midas M32 (full-size) in my studio since 2016, or perhaps a little earlier. For most of that time it was on the normal X-USB interface/expansion, and I have since migrated to an X-ADAT expansion driven by RME HDSP 9652's. When using USB, I was always on USB2-only ports and set to 24-bit recording in Cakewalk; never 32-bit. [That's for the hardware capture. Track processing at 32-bit is left as-is.] Most of the time I ran X-USB at a 256-sample buffer. I now run most of the time at a 64-sample buffer with the RME. For a while, concert events were recorded via a Behringer X32 Core using an X-Live card with a 'safety' concurrent recording into a Lenovo laptop. Again, the drill was ASIO, USB2-only and 24-bit. There has never been a problem with either the outside events or the studio M32. It should at least be encouraging that your setup isn't unusual and should work just fine. Checking the LatencyMon statistics is important to establish how well-behaved your system is. Also, checking for a mismatched sample rate with the native motherboard sound device is good advice. I generally leave Windows set to 'No Sounds'. That's about all the in-the-blind advice I can offer. Stay encouraged, and let's hope we can quickly get this sorted.
  4. Just to be clear, are you looking at a VU meter or the Peak/RMS meters of the track? I ask because the "standard" calibration of a VU meter is that 0VU equals -18dBFS RMS. For recording, it's routine to target -18dBFS average with -12dBFS or -6dBFS peaks, but you're in for a world of weak signals if you target -18dB on the VU meter. Let us know for sure.
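A quick illustration of that calibration. Note that 0 VU = -18 dBFS is a common alignment convention rather than a universal rule, and the helper function name here is my own, not any DAW's API:

```python
import math

# With a 0 VU = -18 dBFS RMS alignment, a signal reading 0 VU still has
# 18 dB of headroom before digital full scale.
def rms_to_dbfs(rms, full_scale=1.0):
    """Convert a linear RMS level to dBFS (full scale = 0 dBFS)."""
    return 20 * math.log10(rms / full_scale)

# A full-scale sine wave has RMS = 1/sqrt(2), i.e. about -3 dBFS RMS:
print(round(rms_to_dbfs(1 / math.sqrt(2)), 2))   # -3.01
# The -18 dBFS RMS target corresponds to a linear RMS of about 0.126:
print(round(10 ** (-18 / 20), 3))                # 0.126
```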
  5. Truly, it's wonderful that you got to the bottom of this issue. I wish I had seen this thread earlier, as I could have saved you some heartache. Back in 2015, I had a plan to add two more SSHD's (a reasonable term for a hybrid HDD) to the two that I already had in order to build a very interesting RAID array. Things went badly *very* quickly, and that flavor of the project was abandoned. It's documented in the 2nd article of a 5-article series on my website. [ https://www.tedlandstudio.com/torpedo-at-the-dock ] The additional SSD cache in the HDD of a hybrid drive isn't especially effective, because its small size makes it inherently slow, and it's not especially well-matched to the media-based workloads that audio and video create. Since then, I've made quite a few combinations of hybrid hard drives, and finally, at this time, I use a limited-horizon approach to this by using PrimoCache. You can read my summary description of PrimoCache, and the way that I use it, in two posts that I did on another forum: [ https://www.gearslutz.com/board/showpost.php?p=15353007&postcount=13648 ] [ https://www.gearslutz.com/board/showpost.php?p=15354350&postcount=13655 ] Thanks OP for closing the loop and updating us with the solution!
  6. About the curve... I had the 55-inch (diagonal) LG OLED C6 curved screen on my DAWs for about two years. The curve for that one peaks at about 1.5 inches deep across its 48-inch width. If it were two flat panels in a 'V' arrangement, that calculates out to about a 3.4-degree angle. One advantage of curved screens is that the curve reduces the color/contrast falloff that's typical of IPS and VA technology panels when viewed at oblique angles. OLED panels don't have that falloff issue, so curves are wasted in that respect. I specifically chose the Samsung Q80T QLED for its contrast consistency at oblique viewing angles. I still would rather have kept the flat LG CX were it not for its too-aggressive brightness limiter. I have since learned that there is a way to overcome the CX ABL by delving into the service menus. Oh well... Once I went back from curved to flat, my brain had to re-adjust! My eyes/brain had developed a habit of making the curved top edges of app windows look 'normal' to me. When I went flat, it looked *wrong* for two days until I re-adjusted. In terms of sound, there was no difference to me in the interaction between the speakers and the screen. That's not surprising, considering how mild the curvature is.
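For the curious, the flat-'V' approximation can be checked in a couple of lines, using the 48-inch width and 1.5-inch depth figures above. This simple model lands at roughly 3.6 degrees, in the same ballpark as the quoted figure (the exact number depends on how you model the curve):

```python
import math

# Flat-'V' approximation of the screen curve: treat each half-panel as a
# flat plane tilted so the center of the screen sits 1.5 in behind the edges.
width = 48.0   # inches, visible width
depth = 1.5    # inches, depth of the curve at center

# Tilt of each half-panel relative to a flat screen:
angle_deg = math.degrees(math.atan(depth / (width / 2)))
print(round(angle_deg, 1))   # 3.6 degrees
```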
  7. Watch out for people who get lazy with terminology in the world of computers! It's deadly. Intel Hyper-Threading and AMD Simultaneous Multithreading (SMT) are both technologies that provide two threads per physical CPU core; hence the 6/12 or 8/16, etc. terminology.
  8. This has come up before on this forum. I'm providing two links (below) to my posts on the topic. I used a curved 55-inch LG OLED display for about two years, and have since gone back to a flat LED display of the same size, even after testing the new LG CX OLED. I found that the auto-dimming, ABL (Automatic Brightness Limiter) and self-protection algorithms were bugging me. https://discuss.cakewalk.com/index.php?/topic/8458-new-34-monitor-age-old-questionproblem/&do=findComment&comment=77226 https://discuss.cakewalk.com/index.php?/topic/24049-how-is-cakewalk-on-a-4k-monitor/&do=findComment&comment=198593
  9. I got a mental cramp as I tried to translate your description into a topology diagram. If you'd draw one for me, I can be more helpful. Nevertheless, it seems to me that CbB can't see the reported latency of two "layers" of drivers, so its 'Recording Offset' setting needs to be changed to achieve alignment. Get an out-to-in jumper cable on the out-of-time recording input, and run a pre-recorded click track to it. Record the click track coming through the loopback and adjust the Recording Offset value to make it all happy. That's the best I can offer without a clearer understanding of what's connected to what.
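A sketch of the measurement idea: find where the known click actually landed in the loopback recording, and the surplus delay in samples is what you dial into the Recording Offset. The `find_lag` helper and the toy signals below are purely illustrative, not part of any CbB or driver API:

```python
def find_lag(reference, recorded):
    """Best-aligning lag, via brute-force cross-correlation (pure Python)."""
    best_lag, best_score = 0, float("-inf")
    for lag in range(len(recorded) - len(reference) + 1):
        score = sum(r * x for r, x in zip(reference, recorded[lag:]))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

SAMPLE_RATE = 48_000
click = [0.0, 1.0, -1.0, 0.0]                   # idealized click transient
recorded = [0.0] * 412 + click + [0.0] * 100    # loopback capture, 412 samples late

offset = find_lag(click, recorded)
print(offset, f"samples = {offset / SAMPLE_RATE * 1000:.1f} ms at {SAMPLE_RATE} Hz")
```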
  10. No, I actually didn't think it through to check whether the LP EQ was standard or not. I just remember it being a 'heavy' plugin. My intent is for the test project to be based on an unadorned/vanilla version of CbB so that everyone can participate and we can have valid comparisons between machine configurations.
  11. Can you offer a standardized CbB project that we can all use for comparison? Perhaps a beastly combination of linear phase EQ's, synth/patterns, and other heavy stuff to stress the performance aspects that you'd like to explore.
  12. Seems to me your best first move is to install an AMD Radeon video card. Nothing exotic is required; it's just a way to get DPC latency-friendly behavior from the drivers, since the Nvidia card you have now is a major hindrance to *any* good audio experience. Don't get me wrong, I run an Nvidia card in my AMD-based rig right now, and everything runs great. However, something older/cheaper might be easier to find, and the AMD Radeon world is a good place to start.
  13. Latency of USB ports is *practically* unrelated to their speed in Gbit/sec. You will find that USB 2.0 and 3.1/3.2 in all their flavors have latency figures that are all clustered around what the driver suite is able to accomplish. Each vendor has its own device driver implementation, and that is a strong predictor of what you'll experience/measure. Also, as you know, a USB 3.x port will 'downshift' to run at USB 2.0 speeds when presented with a USB 2.0 device. Also keep in mind that PCI is less than 1.1Gbit/sec, and that PCIe interfaces tend to be just one PCIe lane. Usually that's a PCIe 1.1 lane, so 250MByte/sec, or a net of 2Gbit/sec (payload after decode), is a common performance metric. However, both PCIe and PCI have a vastly different and more efficient driver implementation, and are therefore able to achieve lower latency than USB. I've attached a chart I made a few weeks ago that summarizes the Round-Trip Latency of all of the attachment methods that I've used with my Midas M32 and Behringer X32 mixers, both in Win10 and macOS. To help with the nomenclature of the chart: DVS is Dante Virtual Soundcard, AES16e-50 is a Lynx PCIe card, the DN9630 is a USB 2.0-to-AES50 adapter, the MADI interface is an RME PCIe ExpressCard, and the LoopBk was a direct ADAT-out-to-ADAT-in of an RME HDSP 9652 connected through an external box to a PCIe slot. Lastly, X-ADAT is the way that the RME 9652 card is connected through my Midas M32 that serves as the center of my studio.
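The bandwidth figures quoted above can be reproduced with some quick arithmetic; this assumes classic 32-bit/33 MHz PCI, and PCIe 1.x signaling at 2.5 GT/s with 8b/10b line coding, which is where the 250 MByte/s-per-lane payload figure comes from:

```python
# Back-of-envelope bus bandwidth, decimal units throughout.
pci_bytes_per_s = 33_000_000 * 4                 # 32-bit @ 33 MHz: 132 MByte/s
pci_gbit = pci_bytes_per_s * 8 / 1e9             # ~1.06 Gbit/s (shared bus)

pcie1_lane_bytes_per_s = 2.5e9 / 10              # 8b/10b: 250 MByte/s per lane
pcie1_gbit = pcie1_lane_bytes_per_s * 8 / 1e9    # 2.0 Gbit/s payload per lane

print(f"PCI: {pci_gbit:.2f} Gbit/s; PCIe 1.x x1: {pcie1_gbit:.1f} Gbit/s")
```

Which is why "less than 1.1Gbit/sec" for PCI and "a net of 2Gbit/sec" for a single PCIe 1.1 lane are the right round numbers, even though latency is decided by the driver stack rather than by these figures.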
  14. Hey @GreenLight, I have a 4k screen, and there's a screenshot capture of how I have it organized when I'm running Cakewalk. I run CbB at 100%, so nothing is magnified. That requires a pretty large screen that allows all of the little numbers/icons in the Control Bar to be represented. Use a website called www.isthisretina.com to compare and calculate reasonable viewing distances. If your 4:3 screen is 1280x960, then it's about 84 pixels per inch with a "retina distance" of 41 inches. To maintain the same 84 PPI, a 52-inch diagonal screen would be required at 4k. To simply have a reasonable 35-inch viewing distance, my calculations show that a 45-inch diagonal screen would be required to run at 4k/100%. Pick the size that works for you.
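The PPI arithmetic above is easy to reproduce. One assumption on my part: a ~19-inch diagonal for the 1280x960 example, since that's the size that yields the quoted ~84 PPI. The 3438 constant is the one-arcminute-per-pixel "retina" rule of thumb (1 / tan(1/60°) ≈ 3438):

```python
import math

def ppi(width_px, height_px, diagonal_in):
    """Pixels per inch of a display, from resolution and diagonal size."""
    return math.hypot(width_px, height_px) / diagonal_in

base = ppi(1280, 960, 19)                      # assumed 19-inch 4:3 display
print(round(base, 1))                          # 84.2 PPI
print(round(3438 / base))                      # 41-inch "retina distance"
print(round(math.hypot(3840, 2160) / base, 1)) # ~52-inch 4k diagonal at same PPI
```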
  15. Just adding my testimony here: I have/use Nectar 2 Suite; always as an insert. It's super handy to have around because it's a very convenient way to keep a profile of a "plug-in chain" for individuals that send me vocal tracks. The EQ, compression, de-essing, saturation, etc. all in a single preset saved me a bunch of time yesterday when a church called me for an emergency edit/mix for some Christmas presentation content they needed. I find the manual pitch correction function to be clumsy, but the De-Breath works well. I only have used the Harmony Generator function twice in the past 6 years (also have Nectar 1) so I can't say much about those functions.