bvideo


Posts posted by bvideo

  1. Does the arpeggiator give options for sync to beat? Or does it simply clock each sequence starting when that key is pressed? If the latter, then the arp is just reflecting how much out of sync your keyboard technique is. For on-the-fly playing on an arp, it seems like it might be preferable to have an auto-quantizer enabled.
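    As an illustration only (not any particular arp's implementation), an auto-quantizer in front of an arpeggiator could simply snap each note-on time to the nearest beat subdivision. A minimal sketch, with BPM and grid resolution as assumed parameters:

```python
# Sketch: snap note-on timestamps to the nearest beat subdivision,
# as a hypothetical auto-quantizer in front of an arpeggiator might.
def quantize(times_sec, bpm=120, subdivision=4):
    """Snap each timestamp (seconds) to the nearest 1/subdivision of a beat."""
    grid = 60.0 / bpm / subdivision  # grid spacing in seconds
    return [round(t / grid) * grid for t in times_sec]

# A slightly sloppy performance at 120 BPM, quantized to a 16th-note grid (0.125 s):
played = [0.02, 0.49, 1.13, 1.52]
print(quantize(played))  # -> [0.0, 0.5, 1.125, 1.5]
```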

  2. Here's a page of comparisons of some measurable qualities of how various DAWs do sample rate conversion (SRC). It's one proof that all DAWs are not alike. Of course SRC is only one technology of audio processing, and depending on the project and other DAW optimizations, it may or may not play a part in outcomes.

    SRC can be invoked when a project is rendered at a sample rate that is different from some of its source files. It is also used by some DAWs and plugins in an oversampling stage to improve results of audio processing that are sensitive to aliasing in high frequencies.

    Maybe not all of the differences shown in the various graphs on that page are audible, though it looks like some of them should be. The audio material used in the tests is not the least bit musical, but it's convincing to me that DAWs are different.

    Just for fun, you can note that some older versions of Sonar can be compared against the X3 version.
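    To illustrate why SRC quality matters at all (this is deliberately the worst case, not how any DAW on that page actually resamples), here is a minimal numpy sketch: decimating 2:1 by simply dropping samples, with no lowpass filter, folds a 30 kHz tone at 96 kHz down to an audible-range 18 kHz alias at 48 kHz.

```python
import numpy as np

# Sketch: naive decimation (dropping samples, no anti-alias filter)
# folds a 30 kHz tone at 96 kHz down to 18 kHz at 48 kHz.
fs_hi, f_tone = 96_000, 30_000
n = np.arange(fs_hi)                       # one second of audio at 96 kHz
x = np.sin(2 * np.pi * f_tone * n / fs_hi)
y = x[::2]                                 # naive 2:1 decimation -> 48 kHz
spectrum = np.abs(np.fft.rfft(y))
peak_hz = np.argmax(spectrum)              # bin index == Hz for a 1 s signal
print(peak_hz)  # -> 18000 (30 kHz folded around the new 24 kHz Nyquist)
```

    A proper SRC lowpass-filters before decimating, so the 30 kHz content is removed instead of folded down; the measurable differences between DAWs come from how well each one approximates that ideal filter.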

  3. 12 hours ago, Starship Krupa said:

    I just did an experiment using 4 different DAW's.

    ...

    According to Sound Forge, For instance the integrated LUFS ranged from 17.12 to 20.02. Maximum true peak ranged from -0.76 to -3.90dB. The length of the rendered audio files ranged from 18 seconds to 25 seconds, due to the way each DAW handles silent lead-ins and lead-outs that are selected.

    ...

    If LUFS is calculated over the whole file, the renders with silence will come out different from the ones without, even if the sounding portions were identical.
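    A quick numpy sketch of the general point: any un-gated whole-file average drops when silence is appended, even though the audible content is identical. (For what it's worth, real integrated LUFS per ITU-R BS.1770 / EBU R 128 applies gating that discards quiet blocks, which should reduce this effect, but trailing-silence handling still varies by DAW.)

```python
import numpy as np

# Sketch: an *un-gated* whole-file RMS level drops when silence is
# appended, even though the audible content is identical.
def rms_dbfs(x):
    return 20 * np.log10(np.sqrt(np.mean(np.square(x))))

fs = 48_000
tone = 0.5 * np.sin(2 * np.pi * 440 * np.arange(fs) / fs)  # 1 s of tone
padded = np.concatenate([tone, np.zeros(fs)])              # + 1 s of silence

print(round(rms_dbfs(tone), 2))    # -> -9.03 dBFS
print(round(rms_dbfs(padded), 2))  # -> -12.04 dBFS (3 dB lower: doubled length)
```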

  4. On 3/26/2023 at 10:39 AM, ancjava said:

                 ... moments of silence (moresoe - those moments of silence appear also in exported wav file)...

     

    If exporting the wav file is done by non-realtime render or bounce, moments of silence are not likely caused by CPU exhaustion or anything about the sound card.

    But what, I don't know.

    Maybe this: is there a master bus plugin that is not authorized on your home system?

  5. Did you get such a discount for your Teac 3340S? I still have mine, but it badly needs service. I think it was expensive, and still can get high prices used. It has playback through the record heads so you could lay down more tracks in sync with the existing tracks. That only became obsolete when computer-based recording became readily available (Cakewalk 1.0?). Occasionally I use mine for capturing old family recordings, such as from 1958 or so.

  6. You can try toggling 64-bit processing on/off, and plugin load balancing on/off. And depending on where the rendering mistakes are, you could pre-bounce some tracks to reduce the load so you can then use real-time render, or conversely, to avoid rendering the misbehaving tracks in fast render. Or do all the tracks misbehave?

  7. That Wdf01000.sys is a "driver framework", so it could be used by any non-class-compliant device on your computer. Since it is a "gaming" computer, there may be a device built in that uses a driver tuned to gaming. Another possibility is that your anti-malware software has provided some alternate or stacked drivers, e.g. for keyboard and mouse (mine does), though wdf01000.sys is unlikely from anti-malware software.

    So mainly have a look at the driver stack for keyboard and mouse: Control Panel -> Device Manager -> Keyboards (or Mice and other pointing devices) -> double-click or right-click the device -> Properties -> Driver -> Driver Details, and see if wdf01000.sys is there.

  8. Check for the MIDI 'shift' key configuration.

    Quote

    You can designate a MIDI Key or Controller (usually a pedal) to act as a key binding shift key. That way you can require that the key binding shift key be depressed before any command is triggered, so you only lose one note or pedal from its regular function.

    [screenshot: the Preferences subwindow with the key binding shift key option]

    Check in the upper right of this "preferences" subwindow.

     

  9.  

    On 2/28/2023 at 3:04 AM, azslow3 said:

    I have not done the test, but one "theoretical" note. Let say you have 100Hz sin wave. That means its "turnaround" is 10ms. If the interface input to output latency is 5ms, recording input and output simultaneously should produce 180° phase shift. I mean visual shift between waveforms depends from the frequency and the interface RTL.

    This is the special case of "phase" corresponding with a polarity flip. It can only happen with a purely symmetrical wave and only with a 180° time shift as determined by the period of that wave's fundamental frequency. So it would be possible to construct a case where transmitting a sine wave through a time delay of exactly 1/2 the wave's period would appear as polarity reversal.

     

    (edit: some pointless material removed)
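    That special case is easy to demonstrate numerically. A minimal numpy sketch, using an assumed 100 Hz sine at 48 kHz (so the half period is exactly 240 samples): delaying the wave by half its period produces exactly the same samples as flipping its polarity.

```python
import numpy as np

# Sketch: delaying a pure sine by exactly half its period looks identical
# to a polarity flip -- a special case that only holds for a symmetric wave.
fs, f = 48_000, 100            # 100 Hz -> 10 ms period -> 5 ms half period
n = np.arange(fs // 10)        # 100 ms = exactly 10 full cycles
x = np.sin(2 * np.pi * f * n / fs)
half_period = fs // (2 * f)    # 240 samples = 5 ms
delayed = np.roll(x, half_period)  # circular shift is exact here (whole cycles)

print(np.allclose(delayed, -x))  # -> True
```

    Any asymmetric waveform (real program material) breaks the equivalence, which is one way to tell a true polarity reversal from a time delay.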

  10. The drivers for USB or DIN (usually connected by USB; only much older equipment used a serial or printer port) can be viewed in Device Manager: find the item associated with the device and inspect its driver stack. Most likely they all use the Windows "class compliant" drivers, but it's worth checking. Variations are more likely within the h/w dongle or, in your case, the synth, which, by the way, still has to pass the DIN output through another interface (also USB-connected?).

    Audio drivers, particularly ASIO, are the ones that suffer from some manufacturers' efforts.

  11. "... wait until everyone has upgraded their systems (including all gear) to MIDI 2.0 ..." (User 905133)

    This thread reminded me we had one on the old forum (midi "Jitter" it Does Exist). It was started in 2007 and it frequently referred to the imminent MIDI 2.0 😄. The final post on that thread was in 2015. It was full of wild conjecture and a lot of misinformation, but there were a few examples of good experiments.

    I just finished reading the whole thing, and I don't recommend it (it's 18 pages!). But it did point out that there were variations from one VST to another and even from one note to another for at least two VSTs (one of them seemingly purposeful). And also variations from one MIDI interface to another, and external synths made things even wilder. So experiments need to develop anchors that eliminate unpredictable latencies and jitters.

    In John Vere's post, there are indeed mysteries in the latency timings. But there are quite a few places where latency can spring up (including the drum brain converting to MIDI, including velocity, and also generating the associated audio). The 180 degrees out of phase is not especially mysterious, since the mic wirings or the speaker's amp could be responsible for that. (And it isn't strictly phase, which would relate to some timing issues -- it's a signal polarity reversal.)

    Also, the little bump on the "monitor mike" at 294 samples, where did that come from? Did that mic hear the drum pad from a short distance away?

    It could be worth trying a VST (not TTS-1), maybe a drum VST in parallel with the drum machine.

  12. "Does not work" leaves out some important description. "Does not null," maybe, leaving small differences? Or "does not line up," meaning the two signals differ in phase? If there were no difference at all, why would we need oversampling? Your test might help answer that one way or another. If there is a visible difference, is it a difference that sounds different? If there is some phase shifting in the overall oversampling/resampling process, it may be difficult or impossible to judge anything by the visuals.
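    For anyone wanting to put a number on "does not null": a common approach (sketched here in numpy, not tied to any particular editor) is to subtract the two renders sample-by-sample and report the residual's peak in dBFS. Near negative infinity means a perfect null; anything well above the noise floor means the renders genuinely differ.

```python
import numpy as np

# Sketch of a "null test": subtract two renders, report the residual peak.
def null_residual_dbfs(a, b):
    peak = np.max(np.abs(a - b))
    return float("-inf") if peak == 0 else 20 * np.log10(peak)

fs = 48_000
n = np.arange(fs)
a = 0.5 * np.sin(2 * np.pi * 1000 * n / fs)     # a stand-in "render"
print(null_residual_dbfs(a, a.copy()))          # -> -inf  (perfect null)
print(round(null_residual_dbfs(a, 1.001 * a), 1))  # -> -66.0 (0.1% gain error)
```

    Note the caveat from above: a constant time/phase offset between the renders will dominate the residual, so the files must be sample-aligned before subtracting for the number to mean anything.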

  13. There's a basic issue with USB MIDI interfaces: the standard for device configuration does not include a serial number or any other unique ID for MIDI interfaces, so it is difficult for the OS to distinguish one from another. If you are lucky, Windows will find the interfaces in the same order it did on the last boot and thereby match them up with their friendly names. If you ever unplug/replug one, or if a reboot or sleep/wake finds them in a different order, they will not match up.

    There are some other discussions about this (example)

  14. Sending the DRM from Cakewalk should be asking for a channel number. That number needs to match a global setting in your M1. I can't find my old M1 manual just now, but is it possible the channel number is 1-16 in the global setting but 0-15 in the DRM dialog...

    Same consideration when sending a sysex dump; the channel number in that sysex needs to match the M1 setting.
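    The 0-15 vs. 1-16 mismatch comes from the wire format: the MIDI channel lives in the low nibble of the status byte as 0-15, while UIs usually display 1-16. A small sketch (the helper name is mine, not Cakewalk's or Korg's) of building a Program Change message from a 1-16 display channel:

```python
# Sketch: MIDI channels are 0-15 on the wire (low nibble of the status byte)
# but usually displayed as 1-16; an off-by-one between a dialog and a synth's
# global channel setting is a classic mismatch.
def program_change(channel_1_to_16, program):
    """Build a Program Change message for a channel displayed as 1-16."""
    status = 0xC0 | (channel_1_to_16 - 1)  # 0xC0-0xCF: Program Change
    return bytes([status, program])

print(program_change(1, 5).hex())   # -> 'c005' (displayed ch 1 = wire ch 0)
print(program_change(16, 5).hex())  # -> 'cf05' (displayed ch 16 = wire ch 15)
```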

  15. 13 hours ago, Jimmy NaNa said:

    4. Another MIDI issue I've frequently had is that when I unfreeze a MIDI instrument track it will crash immediately upon playback. The only way to get it working again is to create a new instrument midi track and copy the midi to it without playing, then deleting the track that causes the crash.

    Does this happen with more than one particular synth? There have been unfreeze problems before related to at least one specific synth (sektor).

  16. In the manual, I thought I could see a playback mode (for transmitting your composition to an external synth or host such as Cakewalk) that transmits the bank/program numbers. When Cakewalk receives them, it will populate the track header widgets with the corresponding names from your .ins file.

    As for your latest problem, it is best to save your project as a Cakewalk native project (.cwp) for reopening to continue editing. You can write your .mid file separately for sending directly to your synth. A .mid file can't hold all the information of a Cakewalk project, so it is not good for revisiting/re-editing.

  17. I wonder what you are actually doing in Cakewalk. It almost sounds like you are composing on the PA900 and just using Cakewalk to create a MIDI file. But it looks like you don't really need Cakewalk for that. Isn't there some way to save your PA900 composition to a MIDI file on a thumb drive or to local storage? Most likely that file will have all the required sound setup. It looks like you can make the PA900 appear as a hard drive to your PC: save your composition to the PA900's internal storage, then copy MIDI files from the PA900 to your PC.

    I looked very briefly at the manual, and it looks like you can do without Cakewalk. Or if you want to use Cakewalk, it looks like you can configure your synth to send program/bank changes so CW will have them for writing a MIDI file (see p. 121).
