Everything posted by bvideo

  1. Which synth are your G2s being sent to? Does that synth actually play G2s, or does it have a limited range (as some synths do when they are trying to emulate real instruments)? (See the note-number sketch below.)
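
    In case numbers help pin this down: note-name octaves are not standardized, so "G2" can map to different MIDI note numbers in different products. A sketch in Python (assuming the common C4 = middle C = note 60 convention; the function is mine, just for illustration):

        NOTE_OFFSETS = {"C": 0, "D": 2, "E": 4, "F": 5, "G": 7, "A": 9, "B": 11}

        def note_number(name: str, octave: int) -> int:
            """MIDI note number, assuming C4 = 60 (some vendors use C3 = 60)."""
            return NOTE_OFFSETS[name] + (octave + 1) * 12

        assert note_number("G", 2) == 43   # G2 under C4 = 60; under C3 = 60, "G2" is note 55
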
  2. When you are in INTRANET mode, do you have a DNS server that will return a failure right away when some program makes a name request?
  3. Interesting - when I go to start a disk check from properties->tools[check] on an NTFS drive, it says: "You can keep using the drive during the scan. If errors are found you can decide if you want to fix them." That makes it sound like the drive is not unmounted. Perhaps you are using CHKDSK differently, for a special purpose.
  4. Supposing that it is taking time loading audio files before starting playback (more files, slower startup), it may be worth playing with the file I/O settings, such as buffer size and caching. If other DAWs behave very differently in this scenario, maybe they optimize by preloading the audio whenever the playback cursor lands somewhere, rather than waiting for a "start playback". Hard to imagine any other differences. Related issues may be the speed of your drive and the fragmentation of your recorded audio.
  5. Besides DC offset, it could also be an extremely low-frequency component. It could also be the nature of the waveform, which can have equal energy above and below center but different maxima on the plus and minus sides. There's some discussion about these things here.
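
    A quick way to see the waveform-shape point (a sketch in Python/NumPy; the tone frequencies and harmonic weight are just illustrative):

        import numpy as np

        # Zero-mean waveform: a fundamental plus an in-phase second-harmonic cosine.
        # There is no DC offset, yet the positive and negative peaks differ.
        t = np.arange(48000) / 48000.0
        x = np.sin(2 * np.pi * 100 * t) + 0.5 * np.cos(2 * np.pi * 200 * t)

        print("DC (mean):", round(x.mean(), 6))   # ~0.0 -> no DC offset
        print("max:", round(x.max(), 3))          # ~ +0.75
        print("min:", round(x.min(), 3))          # ~ -1.5
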
  6. In Kontakt, do they let you edit or view the instrument definition? But then, what can you do about it? Can you insert a single tempo change just before that note is played, or just after it stops?
  7. Check inside the instrument definition for that MIDI piano. The same thing happened to me with Dimension.
  8. A tempo change ought not to affect a sample. It could be a tempo-based effect, either part of the instrument definition or on the audio path.
  9. "Pressing Enter": this may trigger focused buttons.
  10. You can create a junction on your new F: drive that directs the system to find the files in the corresponding folder on H:.
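
    For example (a sketch; the drive letters and folder names are hypothetical, and the link path must not already exist):

        import subprocess

        # Create a junction at F:\Audio that resolves to H:\Audio, so software
        # looking for the old location on F: is transparently redirected to H:.
        # mklink is a cmd.exe built-in, so it has to run through the shell.
        subprocess.run(r'mklink /J "F:\Audio" "H:\Audio"', shell=True, check=True)

    The same thing works directly in a command prompt: mklink /J "F:\Audio" "H:\Audio".
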
  11. FYI for future reference: the term "class compliant" does not apply to ASIO. Most operating systems provide a common driver for "class compliant" devices so they will work with the driver model(s) built into the OS. But Windows does not provide ASIO, so class compliant doesn't apply. P.S. The term "class compliant" refers to compliance with the USB specifications for device classes. Device classes include audio, mass storage, midi, and several others.
  12. Switching back and forth between program and combination mode: if it's a Korg, it will transmit a sysex message when you push the button. Capture each button press into a Cakewalk sysex bank, and you can embed those messages in each track where you need them. Other makes probably transmit something similar on their MIDI out when you push buttons, so the same idea applies.
  13. User 905133, I'm very sorry for all the confusion. I meant to say that your rewording ("Your recent take") of my earlier post was right in its overall effect. My quoting of your modified quote of mine lost the color from your post, as well as the notion that I was requoting your modified quote of me. I haven't yet found post numbers to help make references to previous posts more precise, and now the quotes are looking nested, which is getting confusing too.

      My original "strictly speaking" post was a response to an attempt to draw a parallel between Cakewalk's input echo and standalone MIDI module connections. I think that was 12 or 13 posts back*, so a lot of posts have intervened.

      As for the workflow you mentioned in the post two back* from here: yes, I agree it's convenient to automatically mute previously recorded material that we are rerecording. I think Cakewalk has attempted to cover that with its various recording modes, take lanes, and such, so previous takes are not heard while we are rerecording. Same situation for audio too, I'm guessing. The old, old, original resurrected post was not about recording.

      * My post counting does not take into account what happens when someone posts while I am still typing.
  14. Hi User 905133, Your recent take: Right, though I was talking from the strictly hardware point of view of setting up connections. I was speaking in terms of the impossibility of connecting (cabling) the sound module's input to both of the sequencer's jacks, output and thru, at the same time. (Typical MIDI module: one input, one output, and usually a thru.)
  15. Hi Starship Krupa, Here's how I originally read your post, taking "its" and "it" = the sequencer:

      [keyboard OUT] ---> [Sequencer IN]
      [Sequencer OUT] ---> [Sound module IN]
      [Sequencer (or? ...) THRU] ---> [Sound module IN] ("to the sound module ("Thru")")

      ... so it sounded like the sound module had two inputs. We should probably sync up on the MIDI jack and cable models we are using before we use up too many colors. 😄 The thread probably got revived because I referenced it in a different thread, by someone talking about the lack of a clear MIDI signal path.
  16. Strictly speaking, if you have your sound module connected to the sequencer's MIDI output, you don't have it connected to its MIDI thru*. Also, the MIDI thru jack is usually not programmable**; there would rarely be a way to suppress (mute) the notes going through it***. In those ways it doesn't correspond very well to Cakewalk's input echo. Cakewalk's sequencer doesn't require a "thru" concept because it can deliver a single MIDI source to multiple tracks, hence to multiple synths****, without any notion of a daisy chain. The requirement to mute the recorded data while echoing the input data seems somewhat arcane. Does anybody miss having that feature for audio?

      -------------------------
      (* If you could do that, imagine the headache of your sequencer sending your synth's output through both the "output" and the "thru" to your sound module.)
      (** Sometimes a synth will have only one output, hardware-switchable between out and thru.)
      (*** Since the thru was originally meant for daisy-chaining, it was not meant to be acted on by the equipment that provides it.)
      (**** or even multiple paths to a single synth)
  17. MIDI Track Mute Q?

      This MIDI failure to mute and goofy solo behavior bothers me too, as does the lack of a MIDI signal flow chart.
  18. Midi

      MIDI output to an external synth goes to a "port" on your MIDI interface. You can route multiple MIDI tracks to a single port using the widgets on each track. For softsynths, each softsynth behaves like a port, and you can send multiple MIDI tracks to it. In another sense, there is no such thing as a bus for MIDI: no position in the track view, and no channel in the mixer view, can aggregate MIDI data the way an audio bus does for audio data.
  19. If you're interested, the Cakewalk PRV allows displaying multiple tracks. When one track of a multitrack PRV is focused, that track's notes are highlighted and the others are not. You can configure presets (called "filters") to define groups of tracks to quickly call up for displaying and editing together. It is just another idiom for multichannel editing, but it favors having one channel per track. The Event view and Staff view use a similar paradigm with different GUI operations. In this way, Cakewalk aims to address the wish to do coordinated editing on multiple channels on multiple synths.

      It would be limiting to depend on a conventional sequencer track as the container for a unitary multitimbral group, because a track doesn't really accommodate multiple ports or synths. For that purpose, a track would need to be able to address multiple output ports / synths, not just multiple channels. Over the years, Sonar and Cakewalk evolved other ways to group tracks for grouping voices and related concepts (cf. folders and views), apparently instead of generalizing the track concept.

      The DAW world is pretty competitive. DAWs certainly don't all work the same, or have the same strengths, but they all have to try to keep up while evolving in their own ways. The Cakewalk manual covers most of what we are talking about, but it's pretty big and maybe not organized to everyone's taste. For someone coming from the old days of a standalone MIDI sequencer and multiple synths, the Cakewalk manual picks up pretty well.
  20. About the selection of a list of channels: I think there is a historic idea that the receiver on the output side is the element most concerned with the disposition of channelized data, not the source, and not the recorder (as an intermediary). A synth receiving from a multi-channel track would be programmed to deal with channels, presumably sending some of them to their assigned voices and ignoring others.

      Also, the general paradigm in Cakewalk, for audio and MIDI alike, is to record as received, with little or no provision for modifying the data before it gets recorded. Recorded data can then be edited, and output data can be modified on the fly (FX). For MIDI, the obvious exception is that a track can be configured to drop all but a single channel during recording. That caters to the notion of a monotimbral synth. It also allows data coming from a single source, like a sequencer, to be easily recorded to separate tracks, divided by channel, again with monotimbral synths in mind (see the sketch below).

      Setting aside monotimbral synths, it's usually just easier to deal with single-channel tracks. For example, the control widgets in track view can only affect a single channel, and putting envelopes or controller streams in a multi-channel track makes editing more complicated. But out of curiosity: where would a multichannel data stream come from, such that a MIDI recorder would benefit from keeping some of its channels and dropping others? (No challenge implied, just curious.)

      You've picked up on a real gap in Cakewalk: there is no simple way to direct a single recorded stream to multiple synths (your 'f)'). It's easy with audio, and it's been requested for MIDI. "MIDI aux tracks, please."
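
    A sketch of why the per-track channel filter is cheap to provide: the channel travels inside every voice message's status byte (Python; the function name is mine, just for illustration):

        # MIDI voice messages carry the channel in the low nibble of the status
        # byte, so a recorder can keep or drop events with a single mask.
        def channel_of(status: int) -> int:
            """Return the 1-16 channel number of a MIDI voice-message status byte."""
            return (status & 0x0F) + 1

        assert channel_of(0x90) == 1   # Note On, channel 1
        assert channel_of(0xB3) == 4   # Control Change, channel 4
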
  21. It may be fair to say that Cakewalk and Sonar have not made a big practice of educating their users on the basics of MIDI. There are certainly primary sources available that are better for learning the MIDIoms, and that is quite sensible. MIDI in Cakewalk reflects the patterns developed in the original mission of MIDI 40 years ago. Computerization of MIDI (e.g. GUIs, event-processing plugins, soft synths, tracks and lanes, controller "envelopes" replacing series of controllers, etc.) has been (mostly) gently integrated with the basics over the years. Old hands from the 80s don't usually have problems grasping what any DAW's MIDI implementation has done.

      But one thing is truly missing from the Cakewalk documentation: MIDI signal flow. How else can we know how the pieces fit together: meters, mute and solo buttons, channelizing, merging input with recorded events, track gain, clip gain, track volume, where MIDI FX fit in, etc.? That fitting together of basic elements is truly idiomatic to Cakewalk and can't be fully derived from the logic of the MIDIoms (cf. my pet peeve: mute and solo) or even from the logic of audio. It's been my experience that audio software and hardware always comes with some kind of diagram showing signal flow. Why not MIDI in Cakewalk? It's complicated.
  22. Might as well list your MIDI hardware interface, its driver version, and your OS version here, since it's fairly likely the MIDI driver relates to both the hang and the crash.
  23. That quote fails to mention the quality of the algorithm used for upsampling or any anti-alias filtering (it also seems to assume downsampling means taking every n'th sample). Better to skim the Wikipedia article on sample-rate conversion for an idea of how digital resampling works. Also, the site http://src.infinitewave.ca/ offers comparisons of various DAWs' results for 96 kHz to 44.1 kHz resampling. The bottom line from infinitewave*: Sonar X3's downsampling, while not the very best of all possible SRC algorithms in every test scenario, doesn't appear to have audible artifacts. (They don't seem to list CbB explicitly.) (* Magix Vegas 17, yikes!)
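
    To see why the filtering matters, here's a sketch with NumPy/SciPy; it illustrates the principle, not any particular DAW's algorithm:

        import numpy as np
        from scipy.signal import resample_poly

        fs = 96000
        t = np.arange(fs) / fs
        # A 30 kHz tone: legitimate content at 96 kHz, but above the 22.05 kHz
        # Nyquist limit of 44.1 kHz, so a correct SRC should remove it.
        x = np.sin(2 * np.pi * 30000 * t)

        # Naive decimation (every 2nd sample -> 48 kHz, no anti-alias filter):
        # the 30 kHz tone folds down to an audible 18 kHz alias.
        naive = x[::2]

        # Polyphase resampling to 44.1 kHz (96000 * 147 / 320 = 44100): the
        # built-in low-pass filter suppresses the tone instead of folding it.
        good = resample_poly(x, 147, 320)
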
  24. Yes, the seeming failure to track the manual offset is not explained by known PDC factors. However, the constant 1020-sample offset might still be explained if we knew the conditions of the soloed track, the buses, and the live input settings, not to mention which effects were present. Dave, your suggestion to use the latency monitor is a great way to break down the elements of bgewin's audio interface situation, starting with a bare project.