
MediaGary

  1. Hey all, I need help developing a signal routing plan for a Vienna Ensemble Pro server. Up until now, I've had no need to delve into Aux tracks, Patch Points, and external routing in CbB. Recently, I moved my PCIe UAD-2 card to a separate workstation and installed Vienna Ensemble Pro 7 in order to use it across the Ethernet network within the studio. So far, I've been able to confirm the basic functions of getting signals from the CbB machine, across the Ethernet network to the VEP7 server, through the desired effects, and back into CbB. The sticky part is that I'd like it to work like an External Insert. That would be most convenient for me, in that I could easily choose a bus-style effect like a delay and put it on a CbB bus, or choose an insert-style effect like a compressor and put it on a track. Since VEP works like a soft synth, things aren't so straightforward: the External Insert function in CbB only offers physical outputs in its list, so some creativity is needed. That's where the deep experience of this community can save me lots of pain. I'm sure some of y'all out there have been using VEP6 or VEP7 for soft-synth hosting, and this requirement of mine isn't far off that use case. There are no concerns about CPU capacity or link speed (Solarflare SFN5122F 10GbE all around), but there are limits on how complex the template for instantiating the individual UAD effects can be on the CbB side. Bonus points for suggestions on getting the VEP7 server to run at 2x the CbB project sampling rate. All guidance is very much appreciated!
  2. I keep a squeaky toy in my hand to 'mark' false starts, pronunciation problems, and inflection problems. That way, I can simply stay in the zone of reading without touching the keyboard. When reviewing/editing, the squeaks are obvious visual markers in the waveform, so they're easy to spot, cut out, and replace by splicing the good stuff into a glorious contiguous reading performance.
  3. I can easily see how loading large sample libraries can challenge HDD throughput. That's a much more throughput-sensitive and latency-sensitive application than the mere record/playback of multi-track audio. I was simply posting my multi-track capture and playback experience using a normal spinning drive (a Seagate 2.5-inch 7200 RPM HDD in the laptop). The whole thing with concerts is to reliably get the tracks back home from the venue, and the Lenovo laptop has been stellar in that respect. I would just fire up Waves Tracks Live with all 32 channels enabled (256-sample buffer size) and throw away the silent tracks during import to a DAW (CbB as the favorite) for post-production. The rest of the remote recording chain was a Behringer X32 Core connected to a Midas DL151 and a Behringer SD8. Although it's "tourist info" and not directly pertinent to the primary discussion, I have since acknowledged that more than 16 tracks remotely was a rare requirement, but having redundancy makes me a more relaxed recordist. (I was prompted into action after a friend told me a horror story of his laptop failing during a concert capture.) To that end, I now have a Midas MR18 and a QSC TouchMix-16 connected via microphone splitters, capturing 16 channels each. The MR18 records to the laptop, and the QSC records directly to a flash drive in its own USB port. In the studio, the (rare) low-latency audio requirements are met with an RME MADI ExpressCard in a PCIe slot; it connects to an X-MADI card in the M32 mixer. My work doesn't often involve soft synths (usually just a piano VSTi), although I'm hoping I can make some visiting guitar players happy with the palette of amp sims in my machine. I think the description of my perspective on drive requirements puts us in the same harmonious choir.
  4. My normal remote concert recording setup uses a lowly dual-core laptop (Lenovo T400) running a 7200 RPM 500 GB HDD in a caddy where the DVD drive normally lives. Thirty-two mono tracks are no challenge for this setup, for either capture or playback. In my studio, I never need more than 24 concurrent tracks for capture, but projects easily grow to 60 tracks through a variety of requirements. Nevertheless, a regular 2 TB 7200 RPM HDD was the capture and playback device for quite a few years. I do everything at 48 kHz/24-bit, except for audio-only CD projects. I did have to edit a complex instructional CD set that grew to 300 tracks before all the pieces were in place. That project began to be troublesome after I had more than about 180 tracks on the HDD, so I moved it over to a SATA-II-connected 500 GB SSD. Since it's convenient, I continue keeping active audio-only projects on the SATA-II SSD, but apart from the extreme track counts that happen once in a while, there's no compelling need for an SSD to manage audio-only work. (A quick data-rate calculation for these track counts is sketched after these posts.) I reserve my RAID-0 devices (3x 2 TB SSD and 3x 6 TB HDD on an Areca RAID controller) for video work. I have gobs (~50 TB) of backup HDDs, so it's no problem having a relatively small active audio SSD.
  5. Thank you, Jim [ I promise *not* to call you Mr. Roseberry again :-) ], for your pithy summation. You have prevented me from digging too deep into the weeds on this topic to develop an answer that fits my neurotic approach to technical questions. I was reading through this document in my non-existent 'spare' time: [ https://www.mindshare.com/files/ebooks/pci express system architecture.pdf ]. In its 1038 pages, it covers everything about PCIe function, but of course leaves it to the reader to infer how an "aggressive" driver would cause citizenship problems within the overall system. The document may still be handy for my project to get a GC-Titan-Ridge Thunderbolt card working properly in my 2010 Mac Pro. We'll see. Okay, back to the original topic! Thanks to all!
  6. Your instinct is right that a change of driver is necessary, but the aspect of driver behavior that's pertinent to audio performance is finding a video card/driver that's "agile" about letting go of the PCIe bus. The video card driver of the R7 250 is possibly "hogging" the bus, but that behavior is not specifically related to the number of active lanes in the slot. An audio forum I frequent has good things to say about AMD RX 580 cards regarding their compatibility with audio performance. I have an Nvidia GTX 1070 card in my machine, which is supposed to be less good in that respect, but it's working fine with no audio issues. Many people have done well for audio work using the native Intel graphics. Here is a link to a long thread on the issue. In short, there are confirmations and counter-cases, so nothing is absolute in either direction: [https://www.gearslutz.com/board/music-computers/1212416-dpc-latency-better-amd-graphic-cards-3-card-comparison.html ]
  7. As Mr. Roseberry points out, their ADAT A/D conversion is typically under 1 millisecond of delay. I have collected data about ADAT conversion latency for a couple of products that I own: the ART TubeOpto-8 has 32 samples of A/D latency at 44.1 kHz (ref: ART user manual), and the Audient ASP800 has 39 samples (ref: Audient tech support). (A quick samples-to-milliseconds conversion is sketched after these posts.) From that point onward, it's traditional to ignore any propagation delays in the fiber optics or wires because of the small distances involved. To offer some pedantic info, signals propagate at a velocity factor (VF, relative to light speed in a vacuum) of about 0.65 in fiber-optic transport, and at about 0.95 VF in copper wire. Data rate is often confused with propagation speed, which gets us into confusing discussions about latency for audio. My best thought experiment for clearing the air is to walk through the propagation delays that the Mars missions must endure. No matter the data rate in kilobits or megabits, the propagation delay from Earth to Mars averages about 14 minutes each way, depending on the relative positions of the two planets in their orbits. Best case is about 4 minutes, and worst case is about 24 minutes. Okay, pedantic itch is scratched! Carry on!
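
The HDD-capture posts above (3 and 4) lean on some implicit arithmetic, so here is a minimal back-of-the-envelope sketch of the sustained data rates behind those track counts. It assumes 48 kHz, 24-bit mono tracks stored as 3 bytes per sample and ignores file-system/container overhead; those assumptions are mine, not stated in the posts.

```python
# Hypothetical sketch: sustained write rate for uncompressed multi-track PCM
# capture. Assumes 24-bit samples stored as 3 bytes each, mono tracks, and no
# file-system or container overhead.

def capture_rate_mb_per_s(tracks: int, sample_rate_hz: int, bytes_per_sample: int = 3) -> float:
    """Sustained data rate in MB/s for a given track count."""
    return tracks * sample_rate_hz * bytes_per_sample / 1_000_000

# 32 tracks at 48 kHz / 24-bit (the concert-capture rig):
print(f"32 tracks:  {capture_rate_mb_per_s(32, 48_000):.1f} MB/s")   # ~4.6 MB/s
# ~180 tracks at 48 kHz / 24-bit (roughly where the HDD project got troublesome):
print(f"180 tracks: {capture_rate_mb_per_s(180, 48_000):.1f} MB/s")  # ~25.9 MB/s
```

Even 180 concurrent tracks work out to well under the sequential throughput of a 7200 RPM drive, which suggests the trouble at very high track counts comes more from seeking across many files than from raw transfer rate.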
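For post 7, a small sketch of the arithmetic the latency figures rest on: converting converter latency quoted in samples into milliseconds, and comparing it with one-way cable propagation delay at the velocity factor quoted in the post. The 10 m cable length is just an illustrative assumption.

```python
# Hypothetical sketch: converter latency (quoted in samples) vs. cable
# propagation delay, reusing the figures from post 7.

C = 299_792_458  # speed of light in a vacuum, m/s

def samples_to_ms(samples: int, sample_rate_hz: int) -> float:
    """Converter latency expressed in milliseconds."""
    return samples / sample_rate_hz * 1000

def propagation_delay_us(length_m: float, velocity_factor: float) -> float:
    """One-way propagation delay in microseconds over a run of given length."""
    return length_m / (C * velocity_factor) * 1e6

print(f"ART TubeOpto-8, 32 samples @ 44.1 kHz: {samples_to_ms(32, 44_100):.2f} ms")  # ~0.73 ms
print(f"Audient ASP800, 39 samples @ 44.1 kHz: {samples_to_ms(39, 44_100):.2f} ms")  # ~0.88 ms
print(f"10 m of fiber (VF 0.65): {propagation_delay_us(10, 0.65):.3f} us")           # ~0.051 us
```

The propagation figure is roughly four orders of magnitude below the converter latency, which is why it's traditionally ignored over studio distances.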