Everything posted by OutrageProductions

  1. @Starship Krupa Without diving into the bowels of Laplace & Fourier transform theory, imagine two signals, f and g, that are to be convolved into (f*g). Since computers have to access and process the data serially, who is to say, in terms of clocking, whether the series f or the series g enters the function first? In theory the outcome should be identical. But in practice, if you add jitter or other clocking anomalies, however minuscule, and/or more input data (h, i, j, k, et al.) [as in mixing multiple tracks], can one reasonably predict that every convolution operation will be identical every time? Granted, this all happens in a DAW at the boundary of the Nyquist parameters in use at the time (sample rate & bit depth), and the 'audible' outcome of the convolution subjectively appears identical, but at the individual sample level it may not actually be. It can be shown mathematically that (f*g) = (g*f), and it can be examined scientifically (see the sketch below), but the difference may not be heard by the average human. And I, for one, prefer not to sweat the small stuff. To a dog it may appear as a 'threshold shift', similar to what humans experience when air pressure differs between the ears. Along those lines, for 25 years I have used a massive software suite called EASE to analyze and predict acoustic environment responses using convolution in a virtual space. I can tell you that, all other parameters being equal, no two convolutions come out EXACTLY mathematically identical, even though the auditory stimulus is subjectively the same. Acoustics involve a multitude of parameters that are infinitely variable (temperature, humidity, air pressure, etc.) and can change instantaneously in time and space, so we learn to accept a certain margin of error. I'm at risk of losing my "drone" license...
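     To put a number on that point, here is a minimal NumPy/SciPy sketch (the signal lengths and random seed are arbitrary stand-ins): direct and FFT-based convolution compute the same mathematical result in a different operation order, so the outputs agree to floating-point tolerance without being bit-identical.

```python
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(0)
f = rng.standard_normal(8192)   # stand-in for a dry signal
g = rng.standard_normal(512)    # stand-in for an impulse response

direct = np.convolve(f, g)      # time-domain sum of products
fast = fftconvolve(f, g)        # FFT-based; different summation order

print(np.array_equal(direct, fast))   # False: last-bit differences
print(np.max(np.abs(direct - fast)))  # tiny residual, on the order of 1e-12
print(np.allclose(direct, fast))      # True: identical to any listener
```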
  2. Not if he was triggering samples that randomize round robin.
  3. I highly doubt that CbB is having any internal clocking issues, or you would have bigger problems. Have you tried to turn OFF the MMCSS in <Preferences><Playback and Recording>? (Wild guess...)
  4. So then, for the most part, the arpeggiator is working correctly. That eliminates that part of the chain. It may have to do with an input latency issue not immediately evident from your preference settings. To rule out any particular instrument, have you tried input with something simple like the TTS1 on a piano patch? It may also be useful to download and run MIDI-OX while playing input to see if there are any clues there. Additional thought: are you using input quantize when playing? Are you playing in chords that are time-justified (full ¼, ½, or whole) notes? TBH, I don't know that I've ever engaged the MIDI arpeggiator until after the chords have been recorded. >edit: I just tried it on a piano in real time and it works, but the input performance HAS TO BE EXACT or the arp can get off in timing.
  5. Try this experiment: Create a chord of whole notes in measure 1 and copy it to measure 2. Set a <Latch> arp pattern (with any other settings that you want) in the MIDI track inspector, select the clip in bar 2 (only), and bounce to clip. Then inspect the actual note positions in the PRV to see if it is printing in the wrong space, or if it is some sort of latency issue from elsewhere.
  6. Video sleds can and do use XML EDLs (Edit Decision Lists) to cut scenes together, but the audio is always synchronized via SMPTE. Even my Canon DSLR embeds timecode in the metadata when shooting video. But when I'm on the film mix (to picture) stage, it's all SMPTE: Foley, dialog, SFX, music, the whole kit-n-kaboodle. Even the engineers are a little "clocky".
  7. @John Vere: "B-b-but MY speaker cables are made out of gold, so they must be better!"... as stated by people who flunked physics.
  8. NED Synclavier was in that boat too. I used one for 15 years that cost $500k... and in the end the road cases were worth more than the 'clav.
  9. That's why the film industry invented the 'clapper' in 1927 to synchronize audio sources. Then came the NAGRA tape machine (the first ones were hand-cranked, spring-driven clockwork models), then crystal-locked oscillators and SMPTE. But even when I produce a multicam, multi-mic session, I still use a 'clicker' so that I can line up the impulse response and sync the sources. "Old skool was a kool skool."
  10. Same sample rate & bit depth? Dither on or off? (IIRC CbB defaults to dither ON, at least on my system.)
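     For what it's worth, dither is one honest reason two renders won't null: it is deliberately random, so two exports of the same mix with dither ON differ at the least-significant-bit level. A rough sketch of 16-bit TPDF dither (the helper name and seeds are illustrative):

```python
import numpy as np

def to_int16(x, dither=True, seed=0):
    """Quantize float samples in [-1.0, 1.0] to 16-bit PCM,
    optionally adding +/-1 LSB TPDF dither before rounding."""
    rng = np.random.default_rng(seed)
    lsb = 1.0 / 32768.0
    if dither:
        # Sum of two uniforms -> triangular PDF spanning +/-1 LSB
        x = x + rng.uniform(-lsb / 2, lsb / 2, x.shape) \
              + rng.uniform(-lsb / 2, lsb / 2, x.shape)
    return np.clip(np.round(x * 32767), -32768, 32767).astype(np.int16)

tone = 0.5 * np.sin(2 * np.pi * 1000 * np.arange(48000) / 48000)
a = to_int16(tone, seed=1)
b = to_int16(tone, seed=2)
print(np.mean(a != b))   # nonzero: dithered renders don't null bit-for-bit
```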
  11. Disable the Windows search engine, indexing services, and network traffic. If left enabled, as the audio files get larger and flushed from the buffer to the disk, Win will start indexing in the background and steal time slices.
  12. @Starship Krupa "using sine waves as source material (not even a great test because it leaves out transients), with Cakewalk and Mixcraft as the DAW's under test and they didn't null." Any electronic engineer worth his salt will tell you that a square wave is the premier test signal: it immediately and discernibly reveals rise time (how fast a circuit can respond to an infinitely short impulse) and over/undershoot (known as 'ringing'), which determine the quality of AD/DA conversion (the sketch below shows both effects on a band-limited square wave). AFAIK, most I/O interfaces are still built on descendants of the Motorola 56k family of DSP chips. But the real tell is the quality of the ancillary design and components, like capacitors and resistors, which shape both the objective and the subjective results. Which is why tubes and transformers are always 'colored' and generally more pleasing (to some) than discrete circuits in analog audio. Luckily, the modern ability to map and recreate those characteristics in the digital realm has come a long way recently. Hence some excellent renditions of the LA2a.
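     A minimal NumPy sketch of those two effects (the sample rate and fundamental are arbitrary): band-limit a square wave at Nyquist by summing odd harmonics, and the finite rise time and the roughly 9% Gibbs overshoot ('ringing') fall straight out of the math.

```python
import numpy as np

fs, f0 = 96000, 1000.0
t = np.arange(int(0.01 * fs)) / fs       # 10 ms of signal
sq = np.zeros_like(t)
for k in range(1, int(fs / 2 / f0), 2):  # odd harmonics below Nyquist
    sq += np.sin(2 * np.pi * k * f0 * t) / k
sq *= 4 / np.pi                          # normalize flat tops to +/-1

print("overshoot:", sq.max() - 1.0)      # Gibbs ringing, roughly 9%

i10 = np.argmax(sq > 0.1)                # 10%-90% rise time of first edge
i90 = np.argmax(sq > 0.9)
print("rise time (us):", (i90 - i10) / fs * 1e6)
```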
  13. Because of physics and the Nyquist theorem (and I think my MSEE holds some sway on this subject), the "quality" of a final audio product more likely comes down to the innate quality of the D-to-A converters that process it into the analog realm than to any minor differences in the algorithms of any particular DAW. Although dither and floating point can have a large effect, in the end a conversion of sample rate and bit depth is pretty hard to screw up. It's just mathematical conversion (see the sketch below). My 2 cents' worth.
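     As a small illustration of that last sentence (a minimal SciPy sketch; the test tone is arbitrary): converting 48 kHz to 44.1 kHz is resampling by the exact rational factor 147/160, a well-defined polyphase filtering operation.

```python
import numpy as np
from scipy.signal import resample_poly

fs_in, fs_out = 48000, 44100
t = np.arange(fs_in) / fs_in
x = np.sin(2 * np.pi * 440 * t)      # one second of A440 at 48 kHz

# 44100 / 48000 reduces to 147 / 160: upsample, filter, downsample
y = resample_poly(x, 147, 160)

print(len(x), "->", len(y))          # 48000 -> 44100 samples
```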
  14. I suppose it could depend on the practitioner, but my tests have always started each frequency inaudible and raised it until I indicated that I could discern it from background noise.
  15. Most professional audiologists will start at a low level and move up until you indicate that you can hear it. That eliminates psycho-acoustic precognition. And they will jump around in frequency, too. I have my hearing tested every 3 years, and I still have 18 kHz in both ears at age 65. Always wear hearing protection around loud sustained noise; I haven't been to a rock concert since 2003. I consider myself VERY lucky!
  16. To be honest, I've never seen this happen in any project. And you did mention that some other/demo/new projects operate normally. I might make a safety copy of this project, then start deleting tracks 1 by 1 to see if it goes away at some point. Eliminate the obvious.
  17. Well... you have me confused... because for the last 25 or so years, PPQ values have always been divisible by 2. So if you are actually running at 925 PPQ, there is something other-worldly about your system clock. (Unless that's a TYPO, in which case you're excused.)
  18. NI Kontakt (full version) is really good at spreading samples across the playable range while simultaneously using FFT to time-stretch. But most instruments (with non-constant timbre) need to be multi-sampled at minimum about every 4th or 5th semitone to be even marginally useful, as the sketch below illustrates. Synth waveforms, by contrast, are usually easy to pitch- and time-correct without too many artifacts.
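     A crude sketch of why (a hypothetical helper using plain linear interpolation, not Kontakt's actual algorithm): repitching by resampling shifts pitch, duration, and formants together, so the farther a zone sits from its source note, the less natural it sounds.

```python
import numpy as np

def pitch_shift_naive(x, semitones):
    """Repitch by resampling: duration and timbre (formants) move
    along with pitch, which is why wide transpositions sound wrong."""
    ratio = 2 ** (semitones / 12)               # frequency ratio
    idx = np.arange(int(len(x) / ratio)) * ratio
    return np.interp(idx, np.arange(len(x)), x)

note = np.sin(2 * np.pi * 261.6 * np.arange(48000) / 48000)  # middle C
fifth_up = pitch_shift_naive(note, 7)           # +7 semitones, ~2/3 the length
print(len(note), "->", len(fifth_up))
```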
  19. I've seen that happen with an inferior ASIO interface and driver set. It happened in SONAR from X1 through CbB with (some) M-Audio and MOTU FireWire drivers, from Win7 through 10. Very spotty. Got rid of those drivers and it's been a dream since 2018.
  20. @jack c. NI Session Horns Pro (sax, 1st & 2nd Tpts), Scarbee Jay Bass.
  21. OK. That's officially weird. What happens if you use the <CTRL + END> keys? Just for snicks-n-giggles, what happens if you set the Time Ruler, the Now Time display, and the Track View time display to the same measurement? In your screenshot it looks like you may be looking at milliseconds, not M/B/T or SMPTE. Not sure if that matters. One more query: does it happen the same way in one of the stock themes?
  22. Have you had a look at this setting in the Track View?: [<View><Display><Display Ghosted Data>] When it is NOT checked and you close an automation lane, the automation data will not be visible over the clip.
  23. In addition to what @scook said, you can also 'pin' the fx/vst window open with the 'push pin' icon in upper right corner of the GUI window.
  24. If you are referring to the play head (vertical now time indicator) extending well past the end of the audio in the song, you probably have some automation that goes past the song, most likely volume CC#7. Look in your automation lanes for stuff that extends beyond the logical audio ending. Easiest way to delete all of that is to engage ripple edit, select all, drag from after the end of the audio to the right where the play head finally stops, delete, then REMEMBER to disengage ripple edit! Playback will then stop after the last midi note or end of audio decay, whichever is longer.
  25. Already there. In most color themes, note velocity (other than the velocity lane at bottom) is indicated by depth of color gradient of the note itself. Higher velo is brighter, lower is dimmer.