Everything posted by OutrageProductions

  1. Also consider disabling any services that may not be necessary. Things like Search Indexing and background virus/malware scans can really chew up CPU time slices. It also helps to exclude your Cakewalk program and audio storage drive locations from the virus scan path; a sketch of that step follows.
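
In case it saves someone some clicking, here is a minimal sketch of the exclusion step, assuming Windows Defender and an elevated prompt; the paths are placeholders for your own Cakewalk and audio locations.

```python
# Hypothetical sketch: add Windows Defender scan exclusions by shelling
# out to PowerShell's Add-MpPreference cmdlet. Requires an elevated
# (administrator) prompt; both paths are placeholders.
import subprocess

paths = [
    r"C:\Program Files\Cakewalk",   # placeholder: Cakewalk program folder
    r"D:\Audio Projects",           # placeholder: audio storage drive/folder
]

for p in paths:
    subprocess.run(
        ["powershell", "-Command", f'Add-MpPreference -ExclusionPath "{p}"'],
        check=True,
    )
```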
  2. The concept is simple, but you may be slightly confused by terminology. Busses just operate as summing faders. You can add as many busses as needed and use them to 'group' things like drums, percussion, guitars, keyboards, vocals, reverb, etc. Each of those 'groups' can have overall EFX treatment if desired, and then all of those busses can be sent to a Master (aka 'Mix') buss, which in turn feeds your audio outputs. A lot of folks will (if they have enough horsepower) insert some "mastering & monitoring" tools on the Master buss for listening to (and sometimes printing) an output file. There's a toy sketch of the routing below.
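
If it helps to see that routing as data flow, here is a toy sketch; the Bus class and the gain numbers are mine for illustration, not anything from Cakewalk.

```python
class Bus:
    """A bus is just a summing point with its own fader."""
    def __init__(self, name, gain=1.0):
        self.name = name
        self.gain = gain  # the bus's fader

    def mix(self, signals):
        # Sum sample-by-sample across all inputs, then apply the fader.
        return [sum(samples) * self.gain for samples in zip(*signals)]

drums = Bus("Drums", 0.8)    # group bus: kick, snare, overheads, etc.
vocals = Bus("Vocals", 1.0)  # group bus: all vocal tracks
master = Bus("Master", 0.9)  # the Mix buss that feeds the audio outputs

kick, snare, lead_vox = [0.1, 0.2], [0.0, 0.3], [0.5, 0.4]  # toy "tracks"
out = master.mix([drums.mix([kick, snare]), vocals.mix([lead_vox])])
print(out)
```

The point is just that a bus does nothing exotic: it sums whatever feeds it, applies one fader, and its output can in turn feed another bus.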
  3. Can you play the project all the way through from head to tail? If so, try to export with <render in real time> and <audio bounce> engaged. And maybe change the output file destination just to see if it's a write permissions issue. BTW the latest version is commonly referred to as 2022.11.
  4. +1 for installing the latest version of CbB! I left SONAR X3 on my machine, installed CbB, migrated settings and plugins, then used the Windows Programs & Features control panel to uninstall only the SONAR application after I was happy with everything. All my old bundled plugs stayed installed and running.
  5. Try increasing your driver latency in CW (or in the ASIO control panel) so the round trip is just under 30 ms (the buffer size that corresponds to depends on your sample rate; see the arithmetic below) and see if it helps. Then back it down until the crackle appears again and you'll know just how fast your rig can respond. I've learned to use headphones and play just a bit in front of the beat, then iron it out in post. Do this without TH2 engaged first. Edit/Preferences/Audio/Driver Settings/Mixing Latency/Buffer Size. [Some ASIO devices require this to be set in the ASIO device control panel, if one exists.]
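
For reference, the buffer-to-latency arithmetic is simple enough to script. This is a generic sketch, not anything Cakewalk-specific, and real round-trip figures add driver/converter overhead on top of the buffer math.

```python
# latency_ms = buffer_samples / sample_rate * 1000 for one direction of
# the path. Round trip is roughly input buffer + output buffer, plus fixed
# converter/driver overhead that varies by interface, so treat these
# numbers as lower bounds.
def buffer_to_ms(samples, sample_rate=44100):
    return samples / sample_rate * 1000.0

for buf in (64, 128, 256, 512, 1024):
    one_way = buffer_to_ms(buf)
    print(f"{buf:5d} samples -> {one_way:5.1f} ms one way, "
          f"~{2 * one_way:5.1f} ms round trip (plus overhead)")
```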
  6. I have examined the minidump files from the occasions when CW very occasionally quits unexpectedly on me on big projects, and 99% of the time it is because I tried to do something (like freeze or eject a large VSTi, or cancel an Export) and move on to the next operation before the program had a chance to properly flush and reallocate RAM. I just go get a fresh cuppa, and in 7 minutes it clears out of the Task Manager and I can relaunch and continue. I'm used to it happening about once a month, but I also only reboot the machine when required by Win updates, about once a month as well. In between big sessions I flush the Standby and Working Sets of RAM with a batch file I wrote, run from a CMD window; a hypothetical equivalent is sketched below. Never fails to keep things working more or less seamlessly.
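
The batch file itself isn't posted here, so this is a hypothetical stand-in rather than the actual script: a Python sketch assuming the freeware EmptyStandbyList.exe is installed and on PATH, run from an elevated prompt. The tool choice and its arguments are my assumption.

```python
# Hypothetical equivalent of a "flush Standby and Working Sets" batch file,
# assuming the freeware EmptyStandbyList.exe (not part of Windows) is on
# PATH. Must be run from an elevated prompt.
import subprocess

for target in ("workingsets", "standbylist"):
    subprocess.run(["EmptyStandbyList.exe", target], check=True)
```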
  7. I've been using a KK S61 Mk1 for 5+ years. It works seamlessly, although it does not have the 'auto track focus' that is integrated into Cubase & Nuendo (that's on CbB, not NI). Using the Mackie Control surface protocol, the transport controls on the hardware work: rec, stop, and play behave as normal, loop toggles an existing loop marker on/off, and ff & rw move the now time by 1 measure. That's all I need. I don't know whether the 'mixer' knobs on the Mk2 will integrate as a control surface, and I've heard of too many hardware failures with the Mk2 to want to upgrade. Integration in Cubase is deeper, with automatic track/instrument focus from the hardware, but I just work around that.
  8. I would bet that both the oscillator and the oscilloscope were MODELED on some existing piece of hardware and the resulting algorithm inherently includes the anomalies found in 'real-world' impulse responses. If they were developed on a strictly mathematical basis, I imagine you would see a more geometrically square wave without artifacts, which would probably freak out anyone with even a minimal electronics background. Sinusoidal responses, and their derivatives, are generally symmetrical in the normal world. Aside from cardiac infarction, obviously. 😁 (A sketch of the band-limiting effect is below.)
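
Band-limiting alone is enough to produce those artifacts, whether it comes from modeling or from Nyquist-limited synthesis. Here is a small numeric sketch: a square wave built from finitely many harmonics overshoots about 9% at each edge (the Gibbs phenomenon), while the ideal mathematical square wave has no ringing at all.

```python
# Band-limited square wave from odd harmonics: the partial Fourier sum
# "rings" (Gibbs overshoot) near every edge, unlike the ideal square.
import numpy as np

t = np.linspace(0.0, 1.0, 1000, endpoint=False)
f0 = 5.0  # fundamental, in cycles over the window

square = np.zeros_like(t)
for k in range(1, 40, 2):  # odd harmonics only: 1, 3, 5, ...
    square += np.sin(2 * np.pi * k * f0 * t) / k
square *= 4 / np.pi

print(f"peak = {square.max():.3f}")  # ~1.09: the ~9% Gibbs overshoot
```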
  9. With the Scarbee basses, the samples in the 101 to 127 velocity range all come from the same dynamic layer, so it's an increase in volume only. In my Indiginus instruments (and especially in ISW Shreddage instruments) that can get you in serious trouble with velocity-scaled articulations.
  10. Seriously trying to ask about someone's favorite DAW (and why) is like herding cats in a rainstorm. BTW, my favorite color is the smell of the number nine... but only in the vacuum of space. 😁
  11. That was a stickler fer sure, but it controls the relative (I guess you could say 'unity') gain of the note velocity from the MIDI channel. Sort of increases the dynamic velocity curve, if you will. For example, in NI Scarbee Jay-Bass (and, IIRC, Pre-Bass) I have to bump the MIDI volume from 101 to 127 before recording so that I can get a reasonable (-6 to 0 dB) level on the instrument for input monitoring. (Yeah, yeah, I know there are ways to do it in the VSTi and save it, but it gets overwritten with every update.) On another note, I found some really cool long evolving drones, risers, and soundscapes that I really like in things like NI Reaktor and Absynth, but they are extremely stochastic. Play it one time and I love it; next pass... not so much.
  12. We're just here to observe how real nerds turn a dead horse into glue by beating it to a pulp.
  13. @Starship Krupa Without diving into the bowels of theory regarding Laplace & Fourier transforms and their integration, imagine two signals f and g that are to be convolved into (f∗g). In terms of clocking, since computers have to access and process the data serially, who is to know whether f or g enters the function first? In theory the outcome should be identical. But in practice, if you add jitter or other clocking anomalies, even minuscule ones, and/or more input data (h, i, j, k, et al.) [as in mixing multiple tracks], can one reasonably predict that every convolution operation will be identical every time?

Granted, this all happens in a DAW at the boundary limit of the Nyquist parameters in use at the time (sample rate & bit depth), and the 'audible' outcome of the convolution subjectively appears identical, but at the individual sample level it may not actually be. It can be shown mathematically that (f∗g) = (g∗f), and it can be examined scientifically, but the result may not be heard as identical by the average human. And I, for one, prefer not to sweat the small stuff; there's a numeric check below. To a dog it may appear as a 'threshold shift', similar to what humans experience when air pressure differs between the ears.

Along those lines, for 25 years I have used a massive software suite called EASE to analyze and predict acoustic environment responses using convolution in a virtual space. I can tell you that, all other parameters being equal, no two convolutions will come out EXACTLY mathematically identical, even though the auditory stimulus is subjectively the same. Acoustics have a multitude of parameters that are infinitely variable (temperature, humidity, air pressure, etc.) and can change instantaneously in time and space, so we learn to accept a certain margin of error. I'm at risk of losing my "drone" license...
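
For anyone who wants to poke at the (f∗g) = (g∗f) claim numerically, here is a minimal sketch; the array sizes and random data are arbitrary. In exact arithmetic the two orders are equal; in floating point the summation order inside the convolution can differ, so compare with a tolerance rather than expecting bitwise equality.

```python
# Commutativity of convolution, checked in floating point.
import numpy as np

rng = np.random.default_rng(0)
f = rng.standard_normal(4096)  # e.g. a signal
g = rng.standard_normal(512)   # e.g. an impulse response

fg = np.convolve(f, g)
gf = np.convolve(g, f)

print("allclose:", np.allclose(fg, gf))            # True
print("max |difference|:", np.abs(fg - gf).max())  # tiny, not necessarily 0.0
```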
  14. Not if he was triggering samples that randomize round robin.
  15. I highly doubt that CbB is having any internal clocking issues, or you would have bigger problems. Have you tried to turn OFF the MMCSS in <Preferences><Playback and Recording>? (Wild guess...)
  16. So then, for the most part, the arpeggiator is working correctly, which eliminates that part of the chain. It may have to do with an input latency issue that isn't immediately evident from all your preference settings. To rule out any particular instrument, have you tried input with something simple like the TTS-1 on a piano patch? It may also be useful to DL and run MIDI-OX while playing input to see if there are any clues there. Additional thoughts: are you using input quantize when playing? Are you playing chords that are time-justified (full ¼, ½, or whole) notes? TBH, I don't know that I've ever engaged the MIDI arpeggiator until after the chords have been recorded. Edit: I just tried it on a piano in real time and it works, but the input performance HAS TO BE EXACT or the arp can get off in timing.
  17. Try this experiment: create a chord of one-measure whole notes and copy it to measure 2. Set a <Latch> arp pattern (with any other settings that you want) in the MIDI track inspector, select the clip in bar 2 (only), and bounce to clip. Then inspect the actual note positions in the PRV to see if it is printing in the wrong place, or if it is some sort of latency issue from elsewhere. An offline way to make the same check is sketched below.
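
If you'd rather make that inspection offline, here is a hypothetical sketch that scans a MIDI export for off-grid note starts. The mido library, the filename, and the quarter-note grid are my assumptions for illustration.

```python
# Scan a standard MIDI file for note-ons that don't land on the grid.
import mido

mid = mido.MidiFile("bounced_arp_test.mid")  # hypothetical export of the clip
grid = mid.ticks_per_beat  # quarter-note grid, in ticks

for track in mid.tracks:
    now = 0
    for msg in track:
        now += msg.time  # delta ticks -> absolute ticks
        if msg.type == "note_on" and msg.velocity > 0:
            off_grid = now % grid
            if off_grid:
                print(f"note {msg.note} starts {off_grid} ticks off the grid")
```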
  18. Video sleds can and do use XML EDLs (Edit Decision List) to cut scenes together, but the audio is always synchronized via SMPTE. Even my Canon DSLR has time code embedded in the metadata when shooting video. But when I'm on the film mix (to picture) stage it's all SMPTE. Foley, dialog, SFX, music, the whole kit-n-kaboodle. Even the engineers are a little "clocky". πŸ˜‰
  19. @John Vere: "b-b-but MY speaker cables are made out of gold, so they must be better!"... as stated by people who flunked physics. 😂
  20. NED Synclavier was in that boat too. I used one for 15 years that cost $500k... and in the end the road cases were worth more than the 'clav.
  21. That's why the film industry invented the 'clapper' in 1927 to synchronize audio sources. Then came the NAGRA tape machine (the first ones were hand-cranked, spring-operated clockwork models), then crystal-locked oscillators and SMPTE. But even when I produce a multicam, multi-mic session, I still use a 'clicker' so that I can line up the impulse response and sync the sources. "Old skool was a kool skool." 😆
  22. Same sample rate & bit depth? Dither on or off? (IIRC CbB defaults to dither ON, at least on my system.)
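
As a quick illustration of why the dither setting alone can break a null: truncating to 16 bits and dithering to 16 bits produce different sample values for the same source, so two exports that differ only in that setting will not cancel. A minimal sketch with TPDF dither:

```python
# Truncation vs. TPDF-dithered quantization to 16 bits.
import numpy as np

rng = np.random.default_rng(1)
n = 48000
x = 0.5 * np.sin(2 * np.pi * np.arange(n) * 440 / 48000)  # float source

scale = 2**15 - 1
truncated = np.floor(x * scale).astype(np.int16)

# TPDF dither: sum of two independent uniform noises, +/-1 LSB total.
tpdf = rng.uniform(-0.5, 0.5, n) + rng.uniform(-0.5, 0.5, n)
dithered = np.floor(x * scale + tpdf).astype(np.int16)

print("differing samples:", np.count_nonzero(truncated != dithered))
```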
  23. Disable the Windows search engine, indexing services, and network traffic. If left enabled, as the audio files get larger and flushed from the buffer to the disk, Win will start indexing in the background and steal time slices.
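
For the Search service specifically, one way to script it (elevated prompt assumed; "WSearch" is the stock service name, and this is a generic sketch rather than the only way to do it):

```python
# Stop and disable the Windows Search service via the built-in `sc` tool.
# Re-enable later with: sc config WSearch start= delayed-auto
import subprocess

subprocess.run(["sc", "stop", "WSearch"], check=False)  # may already be stopped
subprocess.run(["sc", "config", "WSearch", "start=", "disabled"], check=True)
```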
  24. @Starship Krupa "using sine waves as source material (not even a great test because it leaves out transients), with Cakewalk and Mixcraft as the DAW's under test and they didn't null." Any electronic engineer worth his salt will tell you that a square wave is the premier test signal; it has immediate and discernible effects on rise time (how fast a circuit can respond to a step input) and over/undershoot (known as 'ringing'), which determine the quality of AD/DA conversion. AFAIK, most I/O interfaces are still using some version based on the backbone of the Motorola 56k family of chips. But the true tell is the quality of the ancillary design and components, like capacitors and resistors, which shapes both the objective and the subjective results. Which is why tubes and transformers are always 'colored' and generally more pleasing (to some) than discrete circuits in analog audio. Luckily, the modern ability to map and recreate those characteristics in the digital realm has come a long way recently. Hence some excellent renditions of the LA-2A. 😁 A minimal null-test sketch is below.
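
And for completeness, the null test from the quote as a minimal sketch, assuming two same-length 16-bit mono renders; the filenames are placeholders.

```python
# Null test: load two renders, subtract, report the peak residual in dBFS.
import numpy as np
import wave

def read_mono_wav(path):
    """Read a 16-bit mono WAV into floats in [-1, 1) (format assumed)."""
    with wave.open(path, "rb") as w:
        raw = w.readframes(w.getnframes())
    return np.frombuffer(raw, dtype=np.int16).astype(np.float64) / 32768.0

a = read_mono_wav("render_cakewalk.wav")  # placeholder filename
b = read_mono_wav("render_mixcraft.wav")  # placeholder filename

residual = np.abs(a - b).max()
if residual == 0:
    print("perfect null")
else:
    print(f"peak residual: {20 * np.log10(residual):.1f} dBFS")
```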
  25. Because of physics and the Nyquist theorem (and I think that my MSEE holds some sway on this subject), the "quality" of a final audio product is more likely a function of the innate quality of the D-to-A converters that render it into the analog realm than of any minor differences in the algorithm of any particular DAW. Although dither and floating point can have a large effect, in the end a conversion of sample rate and bit depth is pretty hard to screw up. It's just mathematical conversion. My 2 cents' worth.
Γ—
Γ—
  • Create New...