
MediaGary

Members
  • Content Count: 15
  • Joined
  • Last visited

Community Reputation: 17 Good
  1. Your question didn't mention the CPU that you have, but that's an essential part of any runtime question for MP3 encoding. MP3 encoding is an especially CPU-bound task, while the mixdown work depends on the speed of the storage drives, the number of tracks, the number and intensity of the individual plug-ins, etc. The type of audio interface and its connection to the PC are not a factor in this aspect of exporting. Another part of the question is the 'Quality' slider and the target bitrate. You should do a couple of experiments that separate the processes: first 'Bounce to Track(s)' the entire mix to a stereo track, and then 'Export' that single stereo track as an MP3 with various settings for target bitrate and the 'Quality' slider position. Separating the mixdown process from the MP3 settings will give you a sense of where the time is burned.
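A rough sketch of that second experiment, scripted in Python. It assumes the command-line LAME encoder is installed; `-b` (target bitrate in kbps) and `-q` (quality/speed trade-off, 0 = best/slowest) are standard LAME flags, but the output file-naming scheme here is made up for illustration:

```python
import itertools
import subprocess
import time

def lame_commands(infile, bitrates, qualities):
    """Build one LAME command line per (bitrate, quality) combination.

    -b sets the target bitrate in kbps; -q sets the quality/speed
    trade-off (0 = best/slowest, 9 = fastest).
    """
    commands = []
    for b, q in itertools.product(bitrates, qualities):
        out = f"mix_b{b}_q{q}.mp3"  # hypothetical output naming scheme
        commands.append(["lame", "-b", str(b), "-q", str(q), infile, out])
    return commands

def time_encodes(commands):
    """Run each encode and report wall-clock seconds (requires LAME on PATH)."""
    results = {}
    for cmd in commands:
        start = time.perf_counter()
        subprocess.run(cmd, check=True)
        results[tuple(cmd)] = time.perf_counter() - start
    return results
```

Running the same bounced stereo file through every combination makes it obvious whether the 'Quality' slider or the bitrate dominates the encode time on your CPU.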
  2. I'll provide info for the lower half of your questions: For zero cost, the exFAT file system is read/write compatible with both macOS and Win10. This is appropriate for data-only drives. I use that format for my RAID 'media' drive that is native to the 2010 Mac Pro that runs High Sierra and Win10. However, I also use (small cost, big benefit, highly recommended) Paragon Software's 'NTFS for Mac' and its companion product 'APFS for Windows'. I have used NTFS for Mac for many years, and it has been perfect. I also have HFS for Windows but haven't upgraded to anything that demands APFS yet. My machine isn't Boot Camp, but instead is a 'native' Win10 installation that uses a separate physical SSD for boot. I recommend you do the same for your Mac, although I haven't thought through the process for your machine. I've done a separate external Win10 boot drive for a friend's 2017 iMac without incident, so you should be optimistic about success. NTFS is the only choice for the Windows boot drive, much like APFS is the only choice for a macOS Catalina boot drive. Thunderbolt 3 is plenty fast for anything you intend to do with data: its throughput for data drives maxes out at about 2700 MBytes/sec, which is near the top of what a PCIe 3.0 x4 NVMe drive can deliver.
  3. So far, the "death modes" that I've directly observed with SSDs have *never* been matters of wear. A good controller will simply make the SSD read-only when the wear limits of the flash array have been reached. Of the four failures I've been able to get close to (three of mine, one in an online forum with SMART telemetry info), the failures have been as follows:
     - A new (yes, new) SanDisk 2TB CloudSpeed ("enterprise-grade") had its internal DRAM controller fail, dropping write speed down to 14 MBytes/sec.
     - A very old 90GB OCZ SSD intermittently freezes while remaining visible to the OS. That's a classic controller failure. It's now in a landfill with the others.
     - A Patriot 128GB SSD would "disappear" from the SATA port. That's another classic controller failure.
     - The online case with SMART telemetry had the controller completely lose all info about bytes written and its empty-cell map. It cannot read or write at all.
     The basic takeaway is that there will be no warning of imminent failure, so good backup discipline is essential, perhaps more essential than before.
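If you want to keep an eye on that SMART telemetry yourself, here is a minimal sketch of pulling the attribute table out of `smartctl -A` text output (smartmontools). The table layout shown (an `ID#` header row, raw value in the last column) is the common smartctl format, but attribute names vary by SSD vendor, so treat the names below as examples:

```python
def parse_smart_attributes(smartctl_output):
    """Parse the attribute table from `smartctl -A` text output.

    Returns {attribute_name: raw_value_string}. Rows are recognized by a
    numeric ID in the first column; the raw value is the last column.
    """
    attrs = {}
    in_table = False
    for line in smartctl_output.splitlines():
        if line.startswith("ID#"):      # header row marks the start of the table
            in_table = True
            continue
        if in_table:
            parts = line.split()
            if len(parts) >= 10 and parts[0].isdigit():
                attrs[parts[1]] = parts[-1]
    return attrs
```

Logging an attribute like `Total_LBAs_Written` over time at least tells you how far into the drive's rated endurance you are, even if (as above) the controller is likely to die without warning first.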
  4. You should play around with an online calculator called IsThisRetina. [ https://www.designcompaniesranked.com/resources/is-this-retina/ ] The major parameters of how you see your screen are your working distance, the pixels-per-inch (PPI) density, and the screen size. The "Retina Distance" is an ergonomic figure: beyond it, the limits of normal human visual acuity mean the image will look no better with additional resolution or pixels-per-inch. A backstory may be useful: I migrated from a 28" 1920x1200 screen used at a distance of 42 inches. That's an ~81PPI screen with a Retina Distance (RD) of 43 inches, which is why my aging eyes were happy with it; I was using it just inside the RD value. When I replaced it with a 30" Apple Cinema 2560x1600 at ~100PPI, I struggled to use this screen at 100-percent because the fonts were too small, and I wound up using it at 125-percent, with the concomitant loss of information (quantity of tracks, busses, etc.) on the screen. The RD of that Cinema was 34 inches, and I was far outside that number at my 42-inch working distance. I then got a 40" Samsung and ran it at UHD 3840x2160, and things got worse, because the RD was now 31 inches. I finally got a 55-inch UHD screen and am happy again at 100-percent. Guess what? A 55-inch UHD 3840x2160 is ~80PPI, with an RD value of 42 inches, just like the original 28" 1920x1200 I started with. The calculator says that the 34" 4K display has an RD of 27 inches. If you work that close, you have a shot.
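The numbers the calculator gives you follow from simple trigonometry. Assuming "Retina" means one pixel subtends no more than one arcminute (a common figure for normal visual acuity), the retina distance works out to roughly 3438 / PPI inches. A small sketch under that assumption:

```python
import math

def ppi(width_px, height_px, diagonal_inches):
    """Pixels per inch from resolution and diagonal screen size."""
    return math.hypot(width_px, height_px) / diagonal_inches

def retina_distance(ppi_value, acuity_arcmin=1.0):
    """Distance (inches) at which one pixel subtends `acuity_arcmin`.

    For small angles, 1 / tan(1 arcminute) ~= 3438, so this is
    approximately 3438 / PPI at the 1-arcminute assumption.
    """
    return 1.0 / (ppi_value * math.tan(math.radians(acuity_arcmin / 60.0)))

# The screens discussed above:
#   28" 1920x1200 -> ~81 PPI, RD ~42-43"
#   30" 2560x1600 -> ~101 PPI, RD ~34"
#   55" 3840x2160 -> ~80 PPI, RD ~43"
```

Plugging in your own screen and tape-measured working distance tells you immediately whether you're inside or outside the RD, which is the whole ergonomic question.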
  5. What audio device are you using? Is it ASIO? Is it shown in the 'Preferences' menu?
  6. Hey all, I need help developing a signal routing plan for a Vienna Ensemble Pro server. Up until now, I've had no need to delve into Aux tracks, Patch Points, and external routing in CbB. Recently, I've moved my PCIe UAD-2 card to a separate workstation and installed Vienna Ensemble Pro 7 in order to use it across the Ethernet network within the studio. So far, I've been able to confirm the basic functions of getting signals from the CbB machine, across the Ethernet network to the VEP7 server, through the desired effects, and back into CbB. The sticky part is that I'd like it to work like an External Insert. That would be most convenient for me, in that I could easily choose a bus-style effect like a delay and put it on a CbB bus, or choose an insert-style effect like a compressor and put it on a track. Since VEP works like a soft synth, things aren't so straightforward: the External Insert function in CbB only offers physical outputs in the list, so some creativity is needed. That's where the deep experience of this community can save me lots of pain. I'm sure some of y'all out there have been using VEP6 or VEP7 for soft-synth hosting, and this requirement of mine isn't far off that usage case. There are no concerns about CPU capacity or link speed (Solarflare SFN5122F 10GbE all around), but there are limitations on how complex the template for instantiating the individual UAD effects can be on the CbB side. Bonus points for suggestions on getting the VEP7 server to run at 2x the CbB project sampling rate. All guidance is very appreciated!
  7. I keep a squeaky toy in my hand to 'mark' false starts, pronunciation problems, and inflection problems. That way, I can simply stay in the zone of reading without touching the keyboard. When reviewing/editing, the squeaks are obvious visual markers in the waveform, and it's easy to spot/correct them and splice the good stuff into a glorious contiguous reading performance.
  8. I can easily see how the load of large sample libraries can be a challenge to HDD throughput capability. That's a much more throughput-sensitive and latency-sensitive application than the mere record/playback of multitrack audio. I was simply posting my multitrack recording capture and playback experience using a normal spinning drive (a Seagate 2.5-inch 7200RPM HDD in the laptop). The whole point with concerts is to reliably get the tracks back home from the venue, and the Lenovo laptop has been stellar in that respect. I would just fire up Waves Tracks Live with all 32 channels enabled (256-sample buffer size) and throw away the silent tracks during import to a DAW (CbB as the favorite) for post-production. The rest of the remote recording chain was a Behringer X32 Core connected to a Midas DL151 and a Behringer SD8. Although it's "tourist info" and not directly pertinent to the primary discussion, I have since acknowledged that >16 tracks remotely was a rare requirement, but having redundancy makes me a more relaxed recordist. (I was prompted into action after a friend told me a horror story of his laptop failing during a concert capture.) To that end, I now have a Midas MR18 and a QSC TouchMix-16 connected via microphone splitters, capturing 16 channels each. The MR18 records to the laptop, and the QSC records to a flash drive in its own USB port. In the studio, the (rare) low-latency audio requirements are met with an RME MADI ExpressCard in a PCIe slot; it connects to an X-MADI slot in the M32 mixer. My work doesn't often involve soft synths (usually just piano VSTi), although I'm hoping that I can make some visiting guitar players happy with the palette of amp sims in my machine. I think this description of my perspective on drive requirements puts us in the same harmonious choir.
  9. My normal remote concert recording setup uses a lowly dual-core laptop (Lenovo T400) running a 7200RPM 500GB HDD in a caddy where the DVD drive normally lives. Thirty-two mono tracks are no challenge for this setup for capture and playback. In my studio, I never need more than 24 concurrent tracks for capture, but projects easily grow to 60 tracks through a variety of requirements. Nevertheless, a regular 2TB 7200RPM HDD was the capture and playback device for quite a few years. I do everything at 48K/24-bit, except for audio-only CD projects. I did have to edit a complex instructional CD set that grew to 300 tracks before all the pieces were in place. That project began to be troublesome after I had more than about 180 tracks on the HDD, so I moved it over to a SATA-II connected 500GB SSD. Since it is convenient, I keep active audio-only projects on the SATA-II SSD, but other than the extreme track counts that can happen once in a while, there's no compelling demand for an SSD to manage audio-only work. I reserve my RAID-0 devices (3x 2TB SSD & 3x 6TB HDD on an Areca RAID controller) for the video work. I have gobs (~50TB) of backup HDDs, so it's no problem having a relatively small active audio SSD.
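A back-of-envelope calculation shows why a single spinning drive handles these track counts comfortably. Uncompressed audio throughput is just tracks x sample rate x bytes per sample, and even 60 tracks at 48K/24-bit stays far below the ~100 MB/s a 7200RPM HDD can sustain sequentially:

```python
def track_throughput_mb_s(tracks, sample_rate=48000, bit_depth=24):
    """Sustained throughput in MB/s for uncompressed multitrack audio."""
    bytes_per_sec = tracks * sample_rate * (bit_depth // 8)
    return bytes_per_sec / 1_000_000

# 32 tracks at 48 kHz / 24-bit -> ~4.6 MB/s
# 60 tracks at 48 kHz / 24-bit -> ~8.6 MB/s
```

The trouble at ~180 tracks is less about raw throughput and more about the seek load of many concurrent streams, which is exactly where an SSD's near-zero seek time pays off.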
  10. Thank you Jim [ I promise *not* to call you Mr. Roseberry again :-) ] for your pithy summation. You have prevented me from digging too deep into the weeds on this topic to develop an answer that fits my neurotic approach to technical questions. I was reading through this document in my non-existent 'spare' time [ https://www.mindshare.com/files/ebooks/pci express system architecture.pdf ] In its 1038 pages, it covers all the stuff about PCIe function, but of course leaves it to the reader to infer how an "aggressive" driver would cause citizenship problems within the overall system. This document may still be handy for my project to get a GC-Titan-Ridge Thunderbolt working properly in my 2010 Mac Pro. We'll see. Okay, back to the original topic! Thanks to all!
  11. Your instinct is right in that a change of driver is necessary, but the aspect of driver behavior pertinent to audio performance is finding a video card/driver that's "agile" in letting go of the PCIe slot. The video card driver of the R7 250 is possibly "hogging" the PCIe slot, but that behavior is not specifically related to the number of active lanes within that slot. There is an audio forum that I frequent that has good things to say about AMD RX580 cards regarding their compatibility with audio performance. I have an Nvidia GTX 1070 card in my machine, which is supposed to be less good in that respect, but it's working fine with no audio issues. Many people have done well for audio work using the native Intel graphics functions. Here is a link to a long thread on the issue. In short, there are confirmations and counter-cases, so there's nothing absolute in either direction: [https://www.gearslutz.com/board/music-computers/1212416-dpc-latency-better-amd-graphic-cards-3-card-comparison.html ]
  12. As Mr. Roseberry points out, their ADAT A/D conversion is typically under 1 millisecond of delay. I have collected data about ADAT conversion latency for a couple of products that I own:
      - The ART TubeOpto 8 has 32 samples of A/D latency at 44.1k (ref: ART user manual).
      - The Audient ASP800 has 39 samples (ref: Audient tech support).
      From that point onward, it's traditional to ignore any propagation delays in the fiber optics or wires because of the small distances involved. To offer some pedantic info: signals propagate at about 0.65 VF within fiber-optic transport, and at about 0.95 VF in copper wire, where the Velocity Factor (VF) is the fraction of the speed of light in a vacuum. Data rate is often confused with propagation speed, which gets us into confusing discussions about latency for audio. My best thought experiment for clearing the air is to walk through the propagation delays that the Mars missions must endure. No matter the data rate in kilobits or megabits, the one-way propagation delay from Earth to Mars averages roughly 12 minutes, depending on the relative positions of the two planets in their respective orbits. Best case is about 3 minutes, and worst case is about 22 minutes. Okay, pedantic itch is scratched! Carry on!
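Both halves of that argument reduce to two one-line formulas: converter latency is samples / sample rate, and propagation delay is distance / (c x VF). A small sketch (the Earth-Mars distance below is the approximate closest-approach figure, which varies with orbital positions):

```python
SPEED_OF_LIGHT_KM_S = 299_792.458  # c in km/s

def converter_latency_ms(samples, sample_rate):
    """A/D converter latency in milliseconds from its sample count."""
    return samples / sample_rate * 1000.0

def propagation_delay_s(distance_km, velocity_factor=1.0):
    """One-way propagation delay for a signal traveling at VF * c."""
    return distance_km / (SPEED_OF_LIGHT_KM_S * velocity_factor)

# ART TubeOpto 8: 32 samples at 44.1 kHz -> ~0.73 ms
# 10 m of fiber at 0.65 VF -> ~51 nanoseconds: utterly negligible
# Earth-Mars at closest approach (~54.6 million km) -> ~3 minutes one way
```

Seeing the 0.73 ms converter figure next to the 51 ns cable figure makes it clear why cable propagation is ignored, and why only interplanetary distances make propagation delay the dominant term.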