bvideo

Members

  1. Yes, the seeming failure to track the manual offset is not explained by known PDC factors. However, the constant 1020-sample offset might still be explained if we knew more about the conditions of the soloed track, the buses, the live-input settings, and which effects were in use. Dave, your suggestion to use the latency monitor is a great one for breaking down the elements of bgewin's audio-interface situation, starting with a bare project.
  2. Here's my guess at explaining the PDC control. First, the control affects only tracks with live input, meaning all echo- or record-enabled audio tracks and synth tracks (audio created by MIDI on their associated tracks or by live MIDI input). The control enables or disables PDC for just those tracks; other (non-live) tracks still have PDC enabled. I think the idea is that you can record along with your PDC-delayed backing tracks while hearing the live input echoed through the tracks you are recording without PDC-induced delay, i.e. with a minimal delay equivalent to your round-trip latency, so you are playing in sync with what you hear. When recording is stopped, the material you recorded is moved earlier in time so that it will henceforth play back properly compensated with the required PDC of the whole project.
     This leaves open questions about buses or aux tracks that are fed by these live tracks. It's easy enough to imagine that effects on your live tracks that add plugin delay could be de-compensated, but harder to imagine how buses that mix multiple tracks could disable PDC just for a live input and not for the other tracks in the mix. The manual hints at this, and it also has disclaimers about live-input tracks that already have recorded material on them: when using PDC override on such live tracks, that prerecorded data will play back uncompensated and thus out of sync with the PDC-compensated tracks. Ugh. So, paraphrasing the manual's note: if you are recording in a PDC-required environment and matching against already-recorded material, and any of that material is on a live-input (echo- or record-enabled) track you play against, your newly recorded material will be placed (advanced) in its track out of sync with that prerecorded material.
Therefore, it seems PDC override is not necessarily a one-button magic solution to recording with PDC-needing effects; it depends on the existing routing and on preexisting recorded material, and the manual's workarounds noted above need to be heeded. Bypassing effects could be a one-button solution, with the obvious limitation that what you hear while recording is not the intended sound with effects. (But that could sometimes sound better anyway 😬)
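The time shifting described above comes down to simple sample arithmetic. Here's a minimal sketch of that bookkeeping — my own model, not Cakewalk's implementation; the 44.1 kHz rate and the plugin delay values (including the 1020 figure from the earlier post) are assumed purely for illustration:

```python
# Simplified model of plugin delay compensation (PDC) bookkeeping.
# Illustrative only -- not Cakewalk's actual algorithm.

SAMPLE_RATE = 44100  # assumed project sample rate

def total_pdc_samples(path_delays):
    """Project PDC is driven by the worst-case reported delay
    among the signal paths (each value here is one path's total)."""
    return max(path_delays, default=0)

def compensate_recorded_clip(clip_start_samples, pdc_samples):
    """After recording stops, material recorded against delayed
    playback is moved earlier by the project PDC so it lines up."""
    return clip_start_samples - pdc_samples

pdc = total_pdc_samples([512, 1020, 256])   # e.g. a lookahead plugin reporting 1020
print(pdc)                                   # 1020
print(pdc / SAMPLE_RATE * 1000)              # ~23.1 ms at 44.1 kHz
print(compensate_recorded_clip(88200, pdc))  # clip start shifted earlier
```

At 44.1 kHz, a constant 1020-sample offset is about 23 ms, which is in the range a single lookahead effect could plausibly introduce.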
  3. Is some plugin failing to report its lookahead / plugin delay? (See also "Live Input PDC override" in the manual.) Or maybe PDC override is all that's needed?
  4. Have you explored the Sysx (system exclusive) view? It provides assistance in capturing and storing sysex messages, long or short. Captured messages can be copied, edited, played back, and inserted into a track. Other thoughts: Preferences > MIDI > Playback and Recording has a setting to capture vs. discard incoming system exclusive data, and some hardware synths have settings for whether or not to emit sysex messages for some of those button pushes. (Sorry to be so late in chiming in...)
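For anyone curious what "capturing" involves under the hood, sysex framing in the MIDI byte stream is simple: a message starts with status byte 0xF0 and ends with 0xF7. A minimal sketch of extracting sysex messages from raw MIDI bytes (illustrative only — the Sysx view does this for you):

```python
def extract_sysex(midi_bytes):
    """Return each complete system-exclusive message (0xF0 ... 0xF7)
    found in a raw MIDI byte stream, skipping other messages."""
    messages, current = [], None
    for b in midi_bytes:
        if b == 0xF0:              # sysex start
            current = [b]
        elif current is not None:
            current.append(b)
            if b == 0xF7:          # end-of-exclusive
                messages.append(bytes(current))
                current = None
    return messages

# A GM System On sysex followed by a note-on message:
stream = bytes([0xF0, 0x7E, 0x7F, 0x09, 0x01, 0xF7, 0x90, 0x3C, 0x64])
print(extract_sysex(stream))  # one six-byte sysex message
```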
  5. Your topic made me think about what Cakewalk can do to help with scales. Tuning has been pretty well covered. The "problem" with scales is that in Cakewalk, note names are represented by the 12-tone names, e.g. A, A#, etc. However, Cakewalk has a feature called a "drum map" that lets you define a different mapping of names to MIDI note numbers. In the general case, you can configure each note number with a name, an output port/synth, and a separate output note number. A MIDI track can be assigned to a drum map instead of a synth, and the PRV for that track then labels keys with the names you configured instead of the usual A, A#, etc. I don't think this was particularly intended for alternate scales, but maybe you can find a way to use it.
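Conceptually, a drum map is just a table from MIDI note number to a display name plus an output note (Cakewalk's maps also carry an output port, omitted here). A tiny sketch of that idea with made-up scale-degree names, purely for illustration:

```python
# A drum-map-like table: MIDI note number -> (display name, output note).
# The names here are arbitrary examples, not anything Cakewalk ships.
scale_map = {
    60: ("Sa", 60),
    62: ("Re", 62),
    64: ("Ga", 64),
    65: ("Ma", 65),
    67: ("Pa", 67),
}

def label(note_number):
    """Label a key the way the PRV would with this map assigned;
    unmapped notes fall back to showing the raw note number."""
    name, _out_note = scale_map.get(note_number, (None, note_number))
    return name if name else f"({note_number})"

print(label(62))  # Re
print(label(61))  # (61)
```

The point of the indirection is that the piano-roll labels become whatever your scale calls the pitches, without changing the notes actually sent to the synth.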
  6. The DC removed by Sound Forge or Cakewalk is only the constant bias, computed over some length of the signal. Both Sound Forge and Cakewalk offer a "first 5 seconds" option that computes the bias over that segment (instead of the whole clip) and then applies the correction to the whole clip. Either way, "remove DC offset" certainly fails to remove the VLF energy of any periodic waveform with a period shorter than, say, 4 times the length of the clip. So except for certain theatrical purposes, it does seem like a good idea to filter out inaudible periodic signals somewhere in the signal chain.
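The bias computation described here is just mean subtraction. A minimal sketch in plain Python — not Sound Forge's or Cakewalk's actual implementation — including the "measure over the first N seconds, apply to the whole clip" variant:

```python
def remove_dc_offset(samples, sample_rate, measure_seconds=None):
    """Subtract the constant bias (the mean) from a clip.
    If measure_seconds is given, compute the bias over just that
    leading segment (like the "first 5 seconds" option) but apply
    the correction to the entire clip."""
    n = len(samples)
    if measure_seconds is not None:
        n = min(n, int(measure_seconds * sample_rate))
    bias = sum(samples[:n]) / n
    return [s - bias for s in samples]

# A small wiggle riding on a constant +0.6 bias:
clip = [0.5, 0.7, 0.5, 0.7]
centered = remove_dc_offset(clip, sample_rate=4)
print(sum(centered))  # ~0.0 -- the constant bias is gone
```

Note what this can't do: if the "bias" is actually a sub-hertz waveform drifting over the clip, subtracting one constant leaves that VLF energy in place, which is the point of the post above.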
  7. Here's an example of viewing a signal with a known 0 offset, zoomed out, so that it looks horribly skewed negative. (It's not very musical, because it's a tone artificially constructed for an example.) It looks like it has a large negative offset, but it is indeed 0 (per Sound Forge). When zooming in on this signal, the line drawing is relatively thinner, so you can get a better idea of how the area above 0 might equal the area below 0 (thus DC offset = 0). Although this signal is artificially constructed, synth waves are often quite artificially generated too. The point is that you can't judge the DC offset by looking at a zoomed-out drawing on your monitor.
  8. I'm not sure DC offset can be observed with a waveform drawn at that scale. The actual "area" of the top and bottom can't be judged by eye because the thickness of the lines is so large compared to the underlying pattern. That half waveform looks really unusual at that scale, but even if it still looks that way after applying DC offset correction, it could look different zoomed way in. But here's another issue: that FM pad Dimension Pro patch seems to generate a very large, very low subsonic component (maybe less than 0.5 Hz) overlaid over the whole waveform. In the waveform view, parts of it could lie entirely in the positive or negative region. Removing DC offset won't change anything unless you focus on a short portion, and even then you can still see the effect of the VLF. Putting a high-pass filter (ProChannel EQ) on the Dimension Pro audio track results in a much more normal-looking waveform. [Image: the original waveform, about 5 seconds' worth, 7 consecutive notes.] [Image: after Apply Effects > Remove DC Offset (shifted, but not really looking like 0 offset).] [Image, replaced in an edit: recorded with the ProChannel HP filter enabled, set to 31 Hz.]
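A one-time DC correction can't track a sub-hertz wobble that drifts across the clip, which is why the high-pass filter yields a more normal-looking waveform. A minimal first-order high-pass sketch — my own illustration, not the ProChannel EQ's actual filter topology — using the 31 Hz cutoff mentioned above:

```python
import math

def one_pole_highpass(samples, sample_rate, cutoff_hz):
    """Simple first-order high-pass: attenuates DC and VLF content
    below cutoff_hz while passing audio-band material."""
    rc = 1.0 / (2.0 * math.pi * cutoff_hz)
    dt = 1.0 / sample_rate
    alpha = rc / (rc + dt)
    out, prev_x, prev_y = [], 0.0, 0.0
    for x in samples:
        y = alpha * (prev_y + x - prev_x)
        out.append(y)
        prev_x, prev_y = x, y
    return out

# Feed it a pure DC step: the output decays back toward zero,
# i.e. the constant/VLF component is removed over time.
filtered = one_pole_highpass([1.0] * 2000, 44100, 31.0)
print(filtered[0], filtered[-1])  # starts near 1.0, decays toward 0
```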
  9. Going by the one error code you got, disk I/O load or performance may be marginal. Maybe whatever changed has something to do with how disk I/O is managed. Does your task manager performance tab show anything about heavy disk load?
  10. Does Windows 10 still have some kind of limit (e.g. 10?) on historical MIDI hardware installations? There can be "ghost" devices that are not shown in Device Manager. There's a procedure to get them shown: run the command prompt as administrator, issue "set devmgr_show_nonpresent_devices=1", and then invoke Device Manager with "start devmgmt.msc". Microsoft tells how here. If there are ghost copies of your MIDI interface hardware, you can delete them. Whether that's what is keeping Cakewalk from seeing the current one remains to be seen.
  11. Have a look at the Cakewalk signal flow diagram. Maybe you can use a normal send both to your output channel and to a patch-point aux channel where you can then add the FX; the aux channel's output then takes the place of your original channel output. I don't know why the external send's audio output is suppressed when there is no return channel.
  12. The document explains that well enough, but from what we can tell, the numerology doesn't match the document. Whereas the document says "A window of 50 percent extends only a quarter of the way toward the adjacent quantization points", a 50% window actually reaches half the way, so all notes are quantized: every note is at most half the grid interval from one of the two quantization points it lies between. Also, "would you like to try it"? 😄
  13. I didn't look for "split clip to notes". That would be great for batch production of this scheme; there have been requests for it on the forum in the past. The same thing does exist for audio beats, but "Split clips at audiosnap pool" is greyed out for MIDI clips, and "Split beats into clips" only works in the AudioSnap palette (on audio clips). Using Tab to step to the next note and typing 's' to split at every note is not too bad. After the time-mangled clips are bounced into a single clip, running LEGATO.cal via the "Run CAL" process will adjust the note lengths.
  14. In order to bounce those clips back to a single clip, it is necessary to turn off the locks. So after restoring the tempo, select them all, turn the locks off, and then bounce; then it can be stretched as a clip. As for note lengths, there may be a CAL script to extend each note's length to match the interval between notes; I didn't see an option for it in the quantize dialog. It's getting to look like too much of a pain to do very often. If it's just a percussion sequence, you can store it as a groove clip in your library and modify it as needed.
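For what LEGATO.cal is being used for in these two posts — stretching each note to reach the start of the next — the underlying operation is simple. A sketch of the same idea in Python (illustrative only; CAL is Cakewalk's own scripting language, and its script may differ in details):

```python
def legato(notes):
    """Extend each note's duration so it reaches the start of the
    next note; the last note keeps its original duration.
    Each note is a (start_tick, duration, pitch) tuple."""
    notes = sorted(notes, key=lambda n: n[0])
    out = []
    for (start, dur, pitch), nxt in zip(notes, notes[1:] + [None]):
        new_dur = (nxt[0] - start) if nxt else dur
        out.append((start, new_dur, pitch))
    return out

print(legato([(0, 50, 60), (120, 30, 62), (240, 100, 64)]))
# [(0, 120, 60), (120, 120, 62), (240, 100, 64)]
```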