Posts posted by MediaGary


  1. 1 hour ago, Bruno de Souza Lino said:

    You mean, using all the stuff that doesn't come with CbB but comes with SONAR? How would people that don't have access to these even test it on their systems?

    No, I actually didn't think it through to check whether the LP EQ was standard or not. I just remember it being a 'heavy' plugin.

    My intent is for the test project to be based on an unadorned/vanilla version of CbB so that everyone can participate and we can have valid comparisons between machine configurations. 


  2. On 1/28/2021 at 9:02 AM, Noel Borthwick said:

    Could any of you folks running high core count PC’s please try the latest hot fix? ESP those running AMD Ryzen systems.
    There are some optimizations and fixes for MMCSS that might improve performance. If nothing I just want to make sure that it doesn’t cause any issues.

    Can you offer a standardized CbB project that we can all use for comparison? Perhaps a beastly combination of linear-phase EQs, synths/patterns, and other heavy stuff to stress the performance aspects that you'd like to explore.


  3. Seems to me your best first move is to install an AMD Radeon video card. Nothing exotic is required; it's just a way to get DPC latency-friendly behavior from the drivers, since the Nvidia card you have now is a major hindrance to *any* good audio experience.

    Don't get me wrong, I run an Nvidia card in my AMD-based rig right now, and everything runs great.  However, something older/cheaper might be easier to find, and the AMD Radeon world is a good place to start.


  4. Latency of USB ports is *practically* unrelated to their speed in Gbit/sec.

    You will find that USB 2.0, 3.1, and 3.2 in all their flavors have latency figures that are all clustered around what the driver suite is able to accomplish. Each vendor has its own device driver implementation, and that is a strong predictor of what you'll experience/measure. Also, as you know, a USB 3.x port will 'downshift' to run at USB 2.0 speeds when presented with a USB 2.0 device.

    Also keep in mind that PCI is less than 1.1 Gbit/sec, and that PCIe interfaces tend to use just one PCIe lane. Usually that's a PCIe 1.1 lane, so 250 MByte/sec, or a net of 2 Gbit/sec (payload after decode), is a common performance figure. However, both PCIe and PCI have a vastly different and more efficient driver implementation, and are therefore able to achieve lower latency than USB.
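    In case anyone wants to sanity-check those bus figures, here's a minimal sketch of the arithmetic (my own back-of-the-envelope numbers, assuming the standard 2.5 GT/s signalling and 8b/10b encoding of a PCIe 1.1 lane; nothing here is a measurement):

    ```python
    # Rough payload throughput of the buses mentioned above.

    PCI_CLASSIC_MBYTE = 133                 # 32-bit/33 MHz PCI, ~133 MByte/s (~1.06 Gbit/s)

    # A single PCIe 1.1 lane signals at 2.5 GT/s with 8b/10b encoding,
    # so only 8 of every 10 bits on the wire are payload.
    pcie11_gbit = 2.5 * 8 / 10              # 2.0 Gbit/s of payload per lane
    pcie11_mbyte = pcie11_gbit * 1000 / 8   # 250 MByte/s per lane

    print(f"PCIe 1.1 x1 payload: {pcie11_gbit:.1f} Gbit/s (~{pcie11_mbyte:.0f} MByte/s)")
    print(f"Classic PCI:        ~{PCI_CLASSIC_MBYTE} MByte/s")
    ```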

    I've attached a chart I made a few weeks ago that summarizes the round-trip latency of all of the attachment methods that I've used with my Midas M32 and Behringer X32 mixers, both in Win10 and macOS. To help with the nomenclature of the chart: DVS is Dante Virtual Soundcard, AES16e-50 is a Lynx PCIe card, the DN9630 is a USB 2.0-to-AES50 adapter, the MADI interface is an RME PCIe ExpressCard, and the LoopBk was a direct ADAT-out-to-ADAT-in loop on an RME HDSP 9652 connected through an external box to a PCIe slot. Lastly, X-ADAT is the way that RME 9652 card is connected through my Midas M32, which serves as the center of my studio.

     

    LatencyComparison-4.png


  5. Hey, @GreenLight

    I have a 4k screen, and attached is a screenshot of how I have it organized when I'm running Cakewalk. I run CbB at 100% so nothing is magnified. That requires a pretty large screen, so that all of the little numbers/icons in the Control Bar can be represented.

    Use a website called www.isthisretina.com to compare and calculate reasonable viewing distances. If your 4:3 screen is 1280x960 (about 19 inches diagonal), then it's about 84 pixels per inch with a "retina distance" of 41 inches. To maintain the same 84 PPI, a 52-inch diagonal screen would be required at 4k. To simply have a reasonable 35-inch viewing distance, my calculations show that a 45-inch diagonal screen would be required to run at 4k/100%. Pick the size that works for you.
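    If you'd rather compute those numbers than use the website, here's a minimal Python sketch of the same arithmetic (the 3438/PPI "retina distance" constant comes from the one-arcminute-per-pixel rule of thumb; this is my own working, not the site's code):

    ```python
    import math

    def ppi(width_px, height_px, diagonal_in):
        """Pixels per inch for a given resolution and diagonal size."""
        return math.hypot(width_px, height_px) / diagonal_in

    def retina_distance_in(ppi_value):
        """Distance (inches) at which one pixel subtends about 1 arcminute."""
        return 3438 / ppi_value      # 3438 ~= 1 / tan(1/60 of a degree)

    old_ppi = ppi(1280, 960, 19)                      # ~84 PPI on a 19-inch 4:3 screen
    print(f"{old_ppi:.0f} PPI, retina distance ~{retina_distance_in(old_ppi):.0f} inches")

    # Diagonal needed to keep that same ~84 PPI at 4k (3840x2160):
    needed_diag = math.hypot(3840, 2160) / old_ppi    # ~52 inches
    print(f"4k at {old_ppi:.0f} PPI needs a ~{needed_diag:.0f}-inch diagonal")
    ```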

     

    FullScreen-PrimaryMix.png

     

     


  6. Just adding my testimony here:  

    I have/use Nectar 2 Suite; always as an insert.  It's super handy to have around because it's a very convenient way to keep a profile of a "plug-in chain" for individuals that send me vocal tracks.  The EQ, compression, de-essing, saturation, etc. all in a single preset saved me a bunch of time yesterday when a church called me for an emergency edit/mix for some Christmas presentation content they needed.

    I find the manual pitch correction function to be clumsy, but the De-Breath works well. I have only used the Harmony Generator function twice in the past 6 years (I also have Nectar 1), so I can't say much about those functions.


  7. On 12/10/2020 at 4:28 AM, Bill Ruys said:
    • RAM is 3200, CL16
    • Motherboard is MSI MAG X570 Tomahawk WiFi
    • I have the MOTU 896 Mk3 hybrid, so I can use either USB or Firewire - currently using Firewire via a Texas Instruments based PCIe card
    • Windows 10 release 20H2

    I'm back with two questions:

    • Did you happen to run the LatencyMon tool by Resplendence on your former 3900X?
    • Can you run LatencyMon for 7 or 10 minutes on this new 5900X?

    https://www.resplendence.com/downloads


  8. This is hopefully helpful:  I have an MR18 that is similar to the XR18 which is functionally similar to your X18 (whew!). 

    First, check the hardware routing: in the X-Edit app's In/Out panel (a pop-up window), there is a 2x11 grid where the 'USB 1/2' column should have a blue dot in the cell on the 'Main' row. That's what's needed for CbB outputs 1/2 to go to the physical main outs of your X18.

    As for what you mean about "no audio going to the X18 via the USB lead", you should check the 'Meter' view in the main X-Edit app to see whether something is coming from CbB or not. I'm not familiar with the tablet app that runs the X18, but checking those two things should get you closer to a solution.

    Keep us posted. 


  9. You should run Xmeters in the taskbar of your computer. I have it configured to show operating-system work in orange and application work in the other color. You'll quickly see that a CPU logical core is never dedicated to one or the other, but *very* dynamically mixes the workload.

    Another thing is that it's quite interesting to see CbB go all out with Drum Replacement. It's actually quite beautiful to see how evenly loaded all 32 logical cores are during that process. 

    https://entropy6.com/xmeters/


  10. I have both, and work within both. 

    The Win10 machine is the AMD 3950X in my signature, and the macOS machine is a 12-core 96GB 2010 Mac Pro (affectionately called the Millennium Falcon) that runs Catalina 10.15.6, has Alpine Ridge Thunderbolt and 10GbE, and will soon benefit from running insert plugins via AudioGridder hosted on the AMD machine. Both machines are concurrently connected to my Midas M32 mixer; the AMD via USB 2.0, and the Mac via a K-T DN9630 AES50-to-USB 2.0 adapter.

    My reason for this elaborate configuration is that I collaborate with musicians who have projects in Reaper, CbB (Win-only), Logic Pro X (Mac-only), Digital Performer, and Studio One. In pre-Covid days, they'd bring over the projects on a portable drive of some sort. These days, things are so slow that I'm learning video editing and doing wild audio configuration experiments to keep myself entertained.

    As for preference, it's still a 50/50 thing after several years of having a foot in both camps.  I like the backward-compatibility of Windows that preserves the investment in hardware, allows PCI (distinguished from PCIe) devices to still be used, and allows the re-use of technology orphaned from data centers.  On the Mac side, I like the multicam features of Final Cut Pro (Mac-only) that have yet to be matched by DaVinci Resolve, and the soft-synths built into Logic Pro X.

    My eyes are happiest when using CbB.  The other DAWs are less beautiful to me. 


  11. Thanks to @msmcleod and to @Will_Kaydo for your interest in this topic. I may have to draw a proper picture of my network connectivity, but the Echo Pre8 and the RME 9652 are in separate machines across a 10GbE network, so I don't get the benefit of loopback/in-computer timing.

    RME 9652 PCI-----Computer1-----10GbitEthernet-----Computer2-----FireWire----Echo Pre8

    The 10Gbit cards are Solarflare 5122F's that have fiber optic links through a MikroTik [CRS309-1G-8S+IN] LAN switch. Solarflare cards have sophisticated drivers with TCP/IP offload capabilities. That may or may not contribute to what I'm seeing here. Computer1 and Computer2 are both HP Z220 workstation computers. Comp1 has an 8GB i5-3470 and its primary job is as a server. Comp2 is a 16GB i7-3770 whose primary job is as an administration machine.

    Those two machines are participating and showing the 12-millisecond arrival difference between the Firewire and ASIO Link network signals.

    The picture attached below is the topology of the two studio computers when the MADI and ADAT connections were merged via the RME driver in the AMD/Win10 computer that's in the lower left side of the diagram. This combination achieved a 0.4-millisecond difference. The computer in the lower right is a 2010 Mac Pro now running Catalina. Since that diagram, I'm temporarily running the M32 via good old USB 2.0.

    The fact that ASIO Link Pro works at all may be enough, since the network-sourced audio inputs would at best be supplemental/experimental. However, if the functions of ASIO Link Pro and AudioGridder ever showed up within the same product, like a poor-man's point-to-point Dante with the added sophistication of horizontal/remote VST processing, that would be a game-changing event in audio studio solutions. [hint, hint]

    TedLand-Topology-20200620B.jpg


  12. On 10/6/2020 at 6:14 AM, Will_Kaydo said:

    .... For this to work successfully, you have to uninstall all previous instances of any Asio drivers installed on your machine.  Some hardware Asio drivers clash with this.  ...

    Wow, thanks for that tip!  

    Because my test machine had been used for *many* tests along the way, I had to un-install the ASIO drivers for the Klark-Teknik KT-USB, Midas MR18, Dante Via, and the Echo Pre8 before starting to get some forward momentum. 

    The initial testing over the network was ... uninspiring. I managed a best case of a 12-millisecond difference (~576 samples) between the networked audio and the 'native' Echo Pre8 audio arriving in Cakewalk. In comparison, when I had the Audient ADAT coming into Cakewalk along with the MADI audio from the Midas M32, merged via the RME HDSP 9652 and RME MADI drivers, the difference was around 0.4 milliseconds (~20 samples).

    Since this is just an exercise in what is possible in networking with free products like AudioGridder, it's no big deal, but certainly reveals how much it'll take to eventually make this a production-friendly solution.
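    For anyone converting between those milliseconds and sample counts, a tiny sketch (assuming a 48kHz project rate, which is what the ~576-sample figure implies; the numbers are the ones quoted above, not new measurements):

    ```python
    def ms_to_samples(ms, sample_rate_hz=48000):
        """Convert a latency/arrival difference in milliseconds to samples."""
        return ms / 1000 * sample_rate_hz

    print(ms_to_samples(12.0))   # 576.0 samples - networked ASIO Link path
    print(ms_to_samples(0.4))    # 19.2 samples  - MADI/ADAT merged via the RME driver
    ```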


  13. On 5/14/2019 at 4:02 AM, msmcleod said:

    So it looks like there is a way to get ASIOLink to combine more than one ASIO interface into one:

    • Open one instance of ASIOLink, and set the ASIO driver to be your second ASIO interface, setting the output to Network (just use the local network IP) & enable network. 
    • Open a second instance of ASIOLink, set the ASIO driver to be your primary ASIO interface, set the output to ASIOLink's ASIO driver and enable network input using your local network IP.

     

    Thanks for showing us the way on this.  I'm stumped right now:

    I have ASIO Link Pro 2.4.4.2 running in two Win 10 (Home and Pro) computers in my "Test Lab" before I migrate to the Production Studio...

    • Computer-A = ASIO Echo Audio Firewire (Echo Pre8) with ASIO Link Pro and CbB
    • Computer-B = ASIO RME HDSP 9652 clock source ADAT with ADAT-1 /ADAT-2 from two different Audient ASP800's.
    • AES/EBU master clock is from a Midas M32 to an Aardvark Sync DA that uses 3x BNC 75-ohm coax word-clock connections (2x ASP800 1x Echo Pre8)
    • Network is 10GbE Ethernet; Computer-A = 192.168.1.121, Computer-B = 192.168.1.123

    I have verified that I have 'Received Audio Data' on 'Network In' coming from Comp-B to Comp-A. 'Network In' on Comp-A is patched to 'ASIO Driver Out Mix'. Signals from both the ASP800's and the Echo Pre8 show up as expected.

    When I start CbB, the first error message is that 'Analog Out 1-2' is in use by another application. The next error/problem message is that all 16 outputs of the Echo Pre8 are unavailable and muted. Another problem is that within CbB 'Preferences>>Devices', all of the Echo Pre8 check boxes work normally, but all the ASIO Link Pro check boxes for 'Input Drivers' and 'Output Drivers' are greyed out and inoperable.

    I can't shake that feeling that I've missed something obvious, so I'm appealing here for ideas and diagnostic directions to take. 


  14. On 9/11/2020 at 3:02 AM, lmu2002 said:

    ...The  new pcie  4.0 offers even faster max read/write speeds (3GB-5GB per second), twice the speed of pcie 3.0. However, the full speed in both cases is available only on the bigger drives (1TB and over). And also, most likely only the first m.2 slot runs at the maximum speed (via cpu instead of chipset) which also favours the idea of just one disc.

    To give a perspective regarding  DAW environment, if you have a 5min song with 50 full length audio tracks in 192KHz/24bit, your project is less than 10GB in size. ...

    I did an experiment a couple months ago that may be relevant to this discussion:

    The tested device was a 500GB Gen4 NVMe drive running the ATTO Benchmark in both a direct-to-CPU PCIe slot, and a switched PCIe slot.  The result was a ~1-percent difference in the throughput.  Link is below:

    https://www.gearslutz.com/board/showpost.php?p=14878090&postcount=1827

    Also, outside of the space requirements for the audio, it's important to keep both the overall speed capability and the user experience in view. There was a pretty careful test done to compare the sample-loading performance of an NVMe drive versus a conventional SATA SSD. The difference in the loading/usage experience is far smaller than the speed difference between the two technologies. I'd like to see other tests of this kind, particularly in the context of CbB. Link is below:

    https://docs.google.com/document/d/1wL8XYGgd_O9fomMrK1EpSnZJeQwhVOAn91e82byj8s4/edit

    Lastly, keep in mind that 50 mono tracks at 192kHz need less than 40 MBytes/sec to play back.
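    In case anyone wants to check that figure, the arithmetic is simple (shown for 24-bit and 32-bit-float files; my own calculation, not a measurement):

    ```python
    def playback_mbyte_per_sec(tracks, sample_rate_hz, bytes_per_sample):
        """Sustained read rate needed to stream that many mono audio tracks."""
        return tracks * sample_rate_hz * bytes_per_sample / 1_000_000

    print(playback_mbyte_per_sec(50, 192_000, 3))   # 28.8 MByte/s for 24-bit files
    print(playback_mbyte_per_sec(50, 192_000, 4))   # 38.4 MByte/s for 32-bit float files
    ```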


  15. Yes, I heartily recommend that you watch the Task Manager to see how your RAM usage goes.  

    While I was in "go-big-or-go-home" mode, I built my new AMD machine with 128GB RAM because I have ambitions of running my Vienna Ensemble Pro and some other very impressive libraries that I've purchased.  Along with that, I have the intent to run AudioGridder in the AMD machine as a VST/VSTi server to my venerable 2010 Mac Pro.

    So far, the high water mark for my RAM usage has been when I had DaVinci Resolve and CbB open concurrently.  Even that was a total of less than 12GB.  The biggest usage of the RAM is the 24GB that it's using for PrimoCache.  I gained experience with PrimoCache running in the Win10 partition of the Mac and was very pleased.

    Even though the boot drive in the new machine is a Gen4 NVMe, and my Picture Cache for CbB lives there, on a whim I ran the PrimoCache trial and was amazed at this one thing: I opened a CbB project that had not been touched since Feb 2020, so the Picture Cache would need to be completely regenerated. The project is 34 mono tracks, 2 hours long. Because of PrimoCache, the Picture Cache was generated within 15 seconds. It was truly impressive to see all those CPU cores lit up, and to get a sense of what had to be some high-rate data movement.

    You can't directly observe the data rate of disk I/O in PrimoCache, but a benchmark like ATTO shows peak speeds over 22GBytes/sec. That's not a typo...it's 22,000MBytes/sec writing, and around 17GBytes/sec peak reading.  That's the benefit of the 'L1' RAM cache in PrimoCache. The latency of RAM is about 20-50 nanoseconds, so it's 1/1000-th the wait time of even an NVMe which has latency numbers in the 20-100 microsecond range.  The other lovely benefit of the PrimoCache implementation is that the 'L2' cache essentially makes the 'L1' RAM cache non-volatile, because it replicates the L1 RAM cache during operation, and restores it during reboot.  I have 200GB of L2 cache apportioned on a Gen3 NVMe.
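    The "1/1000th" claim is just the ratio of those two ballpark latency figures (the numbers come from the paragraph above, not from a benchmark):

    ```python
    ram_latency_ns = 50          # typical DRAM access, roughly 20-50 ns
    nvme_latency_ns = 50_000     # typical NVMe access, roughly 20-100 microseconds

    print(f"NVMe makes you wait ~{nvme_latency_ns // ram_latency_ns}x longer than RAM")
    ```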

    Unfortunately, that's the *only* circus trick that makes all that RAM worthwhile if I'm not yet building an orchestral soundtrack for the next blockbuster. (Will we ever go to the movies again?) 

    So, back to my original premise, spend some time watching the telemetry of the tools within Win10 so you don't over-invest.  I think half of the RAM I have may wind up in the new machine that I will soon build for my son.


  16. Many people never get past the inflammatory headlines and thread titles, so I'm typing out the descriptive passage here, since it can't simply be cut/pasted from the source website.

    The source website displays a large portion of a book called:

    Ballroom, Boogie, Shimmy Sham, Shake
    A Social and Popular Dance Reader
    Edited by Julie Malnig (C) 2009
    Pages displayed by permission of University of Illinois Press

    The specific chapter that mentions Cakewalk starts on page 55. That chapter is:

    "Just Like Being at the Zoo"  /  Primitivity and Ragtime Dance
    by Nadine George-Graves

    I think the fair-use practices will allow me to type out and paste this portion of the chapter here.  I'll remove it if there's any issue, but the web link is below the re-typed part that I've provided here.  I added paragraph breaks for ease of reading, as the original page-56 has no breaks in it.

    The Cakewalk was the most popular black social dance to influence the social dancing of the ragtime era. In fact, many early rags are Cakewalks, and the Cakewalk's syncopated rhythms directly led to ragtime music's style. The Cakewalk had its roots on the Southern plantation when slaves would get together and hold contests in which the winners would receive a cake often provided by the master. The style of dancing had many influences, including African competitive, Seminole dancing in which couples paraded solemnly, and European dancing and promenading that the slaves witnessed in the big house.

    The Cakewalk was a mockery of these European styles, but when the slaves performed for the whites, their masters often mistook the playful derision for quaint approximations of their dances. In this couple dance, dancers stood side by side, linked arms at the elbow, leaned back and pranced about high-stepping and putting on airs (figure 3.1). The men usually wore suits with tails, top hats, canes, and bow ties. The women wore long dresses, heels, and often carried a parasol.

    Cakewalks were a regular feature in minstrel shows and black vaudeville, and because of the influence these traveling shows had on popular dancing, the Cakewalk quickly made its way to Northern dance halls. The Cakewalk led the way for Southern black dances to gain popularity in the North, thereby playing a seminal role in the creation of ragtime dance.

    [https://www.google.com/books/edition/Ballroom_Boogie_Shimmy_Sham_Shake/zCSDBjRqC5EC?hl=en&gbpv=1&dq=cakewalk+slavery&pg=PA56&printsec=frontcover]


  17. I have run Sonar and now CbB, first on Win7/Bootcamp and then natively on Win10, in my 2010 Mac Pro for many years without problems. There are lots of possible root causes for your symptoms.

    Start with the basics:

    • Let us know what MacBook model year and configuration [RAM/SSD-space/USB interface and any converters] you're running
    • Document which release/maintenance level of Win10 you're running
    • Run LatencyMon to establish its real-time friendly profile
    • See if there's a way to manage SpeedStep CPU clock changes (always a source of pain)
    • See if the behavior is different with the native audio function 

    As you do that, the community here will chime in to help you navigate into calmer waters.  There are plenty of insightful people here, so you came to the right place.


  18. On 5/24/2020 at 4:00 AM, Bill Phillips said:

    @MediaGary, are you using a 100% display scale factor?

    Yes, that's 100%.  It's only reasonable/possible to do that when the physical display is large enough. 

    I know I'm repeating myself, but I can't remember where/which forum:  The pixels-per-inch (PPI) that my eyes can manage is around 81 PPI at my working distance of about 43 inches from the screen.   That's the same PPI that my 28-inch HD-only screens had. 

    I liked the 3-screen HD arrangement with track view in the center, VST's on the left and console/busses on the right. However, there was never enough horizontal space to manage the number of busses I typically use unless I went to the narrow view, which is inconvenient.  The price of 4k screens held me back until I got a Craigslist deal on the LG OLED 55-inch screen.  

    On an unrelated note, check out how I changed the registry to show the title bar color of inactive windows as deep burgundy while the active window bars are medium green. It helps enormously when there are so many windows.
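    For anyone curious, the tweak is just a couple of DWORDs under the DWM registry key. Here's a minimal sketch of the idea in Python; the value names ("AccentColor", "AccentColorInactive", "ColorPrevalence") and the 0xAABBGGRR color packing are from my notes, so treat them as assumptions to verify (and back up your registry) before running anything like this:

    ```python
    import winreg

    def abgr(r, g, b, a=0xFF):
        """Pack an RGB color the way the DWM accent values store it (0xAABBGGRR)."""
        return (a << 24) | (b << 16) | (g << 8) | r

    key = winreg.CreateKey(winreg.HKEY_CURRENT_USER, r"Software\Microsoft\Windows\DWM")
    winreg.SetValueEx(key, "ColorPrevalence", 0, winreg.REG_DWORD, 1)  # show color on title bars
    winreg.SetValueEx(key, "AccentColor", 0, winreg.REG_DWORD,
                      abgr(0x2E, 0x8B, 0x57))   # a medium green for the active window
    winreg.SetValueEx(key, "AccentColorInactive", 0, winreg.REG_DWORD,
                      abgr(0x80, 0x00, 0x20))   # a deep burgundy for inactive windows
    winreg.CloseKey(key)
    ```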


  19. There are a couple of "moving parts" in the considerations regarding displays (monitors) and video cards. 

    First is your viewing distance. Based on that, the optimal size of the display is readily determined. As an example, I am comfortable working 42 inches away from my 55-inch diagonal display that sits behind my mixer. At my other working position, I'm comfortable 30 to 34 inches away from my 32-inch display. Both are at 4k (3840x2160).

    The top 25-percent of the 55-inch display is above my comfortable viewing angle, so in CbB I put my track view in the lower 75-percent and use the top for the console view of busses. I added a screenshot of what I do, although the area for the VST plugins is covered by a Gmail window.

    The website linked below lets you play with the calculated values, so you can use your current experience of the relationship between pixel density (PPI), resolution, and viewing distance to make a more precise estimate of what would give you a happier computing experience.

    [https://www.designcompaniesranked.com/resources/is-this-retina/]

    If you use a TV as a computer display, the TV must support 4:4:4 chroma subsampling at a 60Hz refresh rate at UHD to give satisfactory text. That was a big/rare deal a few years back, but it is much more common on TVs today. Computer displays always handle that just fine.

    FullScreen-PrimaryMix.png
