Everything posted by bvideo
-
Hi User 905133, Your recent take: Right, though I was talking from the strictly hardware point of view of setting up connections. I was speaking in terms of the impossibility of connecting (cabling) the sound module's input to the sequencer's two jacks, output and thru, at the same time. (Typical MIDI module: one input, one output, and usually a thru.)
-
Hi Starship Krupa, Here's how I originally read your post, taking "its" and "it" to mean the sequencer: [keyboard OUT] ---> [Sequencer IN]; [Sequencer OUT] ---> [Sound module IN]; [Sequencer (or? ...) THRU] ---> [Sound module IN] ("to the sound module ("Thru")") ... so it sounded like the sound module had two inputs. We should probably sync up on the MIDI jack and cable models we are using before we use up too many colors. The thread probably got revived because I referenced it in a different thread, one about the lack of a clear MIDI signal path.
-
Strictly speaking, if you have your sound module connected to the sequencer's MIDI output, you don't have it connected to its MIDI thru*. Also, the MIDI thru jack is usually not programmable**. There would rarely be a way to suppress (mute) the notes going through it***. In those ways it doesn't correspond very well with Cakewalk's input echo. Cakewalk's sequencer doesn't require a "thru" concept because it can deliver a single MIDI source to multiple tracks, hence to multiple synths****, without any notion of a daisy chain. The requirement to mute the recorded data while echoing the input data seems somewhat arcane. Does anybody miss having that feature for audio? ------------------------- (* If you could do that, imagine the headache of your sequencer sending the same data through both the "output" and the "thru" to your sound module.) (** Sometimes a synth will have only one output jack, hardware-switchable between out and thru.) (*** Since the thru stream was originally meant for daisy-chaining, it was not meant to be acted on by the equipment that provides it.) (**** or even multiple paths to a single synth)
-
This MIDI mute failure and goofy solo behavior bother me too, as does the lack of a MIDI signal flow chart.
-
MIDI output to an external synth goes to a "port" on your MIDI interface. You can route multiple MIDI tracks to a single port using the widgets on each track. For softsynths, each softsynth behaves like a port and you can send multiple MIDI tracks to it. In another sense, there is no such thing as a bus for MIDI: no position in the track view or channel in the mixer view that can aggregate MIDI data the way an audio bus does for audio data.
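The routing described above can be sketched as a toy model (my own illustration, not Cakewalk code; the track and port names are made up): many tracks can address the same output port, but there is no intermediate "bus" object that aggregates MIDI the way an audio bus sums audio.

```python
# Toy model of MIDI routing: tracks fan in to ports directly;
# no bus object exists between them.

def route_events(tracks):
    """Group each track's events by its assigned output port."""
    by_port = {}
    for track in tracks:
        by_port.setdefault(track["port"], []).extend(track["events"])
    return by_port

tracks = [
    {"name": "piano",  "port": "UM-ONE", "events": [("note_on", 60)]},
    {"name": "bass",   "port": "UM-ONE", "events": [("note_on", 36)]},
    {"name": "synth1", "port": "TTS-1",  "events": [("note_on", 72)]},
]

routed = route_events(tracks)
# Two tracks fan in to the same hardware port:
print(routed["UM-ONE"])  # [('note_on', 60), ('note_on', 36)]
```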
-
Cakewalk Midi Track Basics Flow and Few Questions
bvideo replied to Sridhar Raghavan's topic in Cakewalk by BandLab
If you're interested, the Cakewalk PRV allows displaying multiple tracks. When one track of a multitrack PRV is focused, that track's notes are highlighted and the others are not. You can configure presets (called "filters") to define groups of tracks to quickly call up for displaying and editing together. It is just another idiom for multichannel editing, but it favors having one channel per track. The event view and staff view use a similar paradigm with different GUI operations. In this way, Cakewalk aims to address the wish to do coordinated editing on multiple channels on multiple synths.

It would be a bit limiting to depend on a conventional sequencer track to contain a unitary multitimbral group, since a track doesn't really accommodate multiple ports or synths. For that purpose, a track would need to be able to address multiple output ports/synths, not just multiple channels. Over the years, Sonar and Cakewalk evolved ways to group tracks for purposes of grouping voices or other concepts (cf. folders and views), apparently instead of generalizing the track concept.

The DAW world is pretty competitive. They sure don't all work the same, or have the same strengths, but they all have to try to keep up while evolving in their own ways. The Cakewalk manual covers most of what we are talking about, but it's pretty big and maybe not organized to everyone's taste. Coming from the old days of a standalone MIDI sequencer and multiple synths, the Cakewalk manual picks up pretty well. -
Cakewalk Midi Track Basics Flow and Few Questions
bvideo replied to Sridhar Raghavan's topic in Cakewalk by BandLab
About the selection of a list of channels: I think there is a historic idea that the receiver on the output side is the element most concerned with the disposition of channelized data, not the source, and not the recorder (as an intermediary). The synth receiving from a multi-channeled track would be programmed to deal with channels, presumably sending some of them to their assigned voices and ignoring others.

Also, the general paradigm in Cakewalk for audio and MIDI is to record as received, with little or no provision for modifying the data before it gets recorded. Recorded data can then be edited, and output data can be modified on the fly (FX). For MIDI, the obvious exception is that a track can be configured to drop all but a single channel during recording. That panders to the notion of a monotimbral synth. Also, it allows data coming from a single source, like a sequencer, to be easily recorded to separate tracks, divided by channel, again with the notion of monotimbral synths.

Setting aside the presence of monotimbral synths, it's usually just easier to deal with single-channel tracks. For example, the control widgets in track view can only affect a single channel, and putting envelopes or controller streams in a multi-channel track makes editing more complicated. But out of curiosity, where would a multichannel data stream come from such that a MIDI recorder would benefit from keeping some channels and dropping others? (No challenge implied, just curious.)

You've picked up on the gap in Cakewalk whereby there is no simple way to direct a single recorded stream to multiple synths (your 'f)'). It's easy with audio. It's been requested for MIDI: "MIDI aux tracks, please." -
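The two record-filter behaviors described above can be sketched in a few lines (a hypothetical helper, not Cakewalk's API): record everything as received, or keep only one channel per track.

```python
def record(events, channel=None):
    """Return the events a track would record.

    events  -- list of (channel, message) pairs as received
    channel -- None records all channels; an int keeps only that channel
    """
    if channel is None:
        return list(events)          # record as received
    return [e for e in events if e[0] == channel]

stream = [(1, "C4 on"), (2, "E3 on"), (1, "C4 off"), (10, "kick")]

assert record(stream) == stream                                # everything
assert record(stream, channel=1) == [(1, "C4 on"), (1, "C4 off")]  # one channel
```

Running the filter once per track with a different `channel` value is how a single multichannel source gets divided into separate single-channel tracks.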
Cakewalk Midi Track Basics Flow and Few Questions
bvideo replied to Sridhar Raghavan's topic in Cakewalk by BandLab
It may be fair to say that Cakewalk and Sonar have not made a big practice of educating their users on the basics of MIDI. There are certainly primary sources available that would be better for learning the MIDIoms, and that is quite sensible. MIDI in Cakewalk reflects the patterns developed in the original mission of MIDI 40 years ago. Computerization of MIDI (e.g. GUIs, event-processing plugins, soft synths, tracks and lanes, controller "envelopes" replacing series of controllers, etc.) has been (mostly) gently integrated with the basics over the years. Old hands from the 80s don't usually have problems grasping what any DAW MIDI implementation has done. But one thing is truly missing from the Cakewalk documentation: MIDI signal flow. How else can we know how meters, mute and solo buttons, channelizing, merging input with recorded events, track gain, clip gain, track volume, MIDI FX, etc. fit together? That fitting together of basic elements is truly idiomatic to Cakewalk, and can't be fully derived from the logic of MIDIoms (cf. my pet peeve: mute and solo) or even from the logic of audio. It's been my experience that audio software and hardware always comes with some kind of diagram showing signal flow. Why not MIDI in Cakewalk? It's complicated. -
Closing Cakewalk Causes Kernal Corruption
bvideo replied to Jerry Gerber's topic in Cakewalk by BandLab
Might as well list your midi h/w interface and its driver version here and your OS version, since it's fairly likely the midi driver relates to both the hang and the crash. -
Dowsampling question (integer vs non-integer ratio)
bvideo replied to Jakub's topic in Cakewalk by BandLab
That quote fails to mention the quality of the algorithm used for upsampling or any bandpass filtering (it also seems to assume downsampling means taking every nth sample). Better to have a quick read of the Wikipedia article on sample-rate conversion for an idea of digital resampling. Also, there's this site http://src.infinitewave.ca/ that offers comparisons of various DAWs' results for 96 kHz to 44.1 kHz resampling. Bottom line from infinitewave*: Sonar X3 downsampling, while not the very best of all possible digital SRC algorithms for all test scenarios, doesn't appear to have audible artifacts. (They don't seem to have CbB explicitly listed.) (* Magix Vegas 17: yikes!) -
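A quick arithmetic illustration of why "take every nth sample" isn't enough on its own: without a lowpass (anti-alias) filter first, content above the new Nyquist frequency folds back into the audible band. The helper below is my own illustration of the folding formula, not any DAW's algorithm.

```python
def alias_freq(f, fs_new):
    """Frequency a pure tone f lands on after naive decimation to rate fs_new.

    Content folds (aliases) around multiples of the new sample rate.
    """
    return abs(f - round(f / fs_new) * fs_new)

# A 30 kHz tone at 96 kHz, decimated 2:1 to 48 kHz by keeping every
# 2nd sample, does not disappear -- it comes back as an 18 kHz alias:
print(alias_freq(30_000, 48_000))  # 18000

# A tone already below the new Nyquist passes through unchanged:
print(alias_freq(18_000, 44_100))  # 18000
```

A proper SRC filters out everything above the new Nyquist before (or while) decimating, which is exactly the quality difference the infinitewave comparisons measure.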
Yes, the seeming failure to track the manual offset is not explained by known PDC factors. However, the constant 1020 sample offset could possibly still be explained if we knew about the conditions of the soloed track, buses, and live input settings, not to mention which effects were there. Dave, your suggestion for using the latency monitor is a great one for breaking down the elements of bgewin's audio interface situation, starting with a bare project.
-
Here's my guess at explaining the PDC control. First, the control affects just tracks with live input, meaning all echo- or record-enabled audio tracks and synth tracks (audio created by MIDI on their associated tracks or by MIDI input). The control enables or disables PDC for just those tracks; other (non-live) tracks still have PDC enabled. I think the idea is that you can record along with your PDC-delayed backing tracks while hearing the live input echoed through the tracks you are recording without PDC-induced delay (i.e. with minimal delay, equivalent to your round-trip latency). In other words, you are playing in sync with what you hear. When recording is stopped, the material you recorded is moved earlier in time so that it will henceforth play back properly compensated with the required PDC of the whole project.

This kind of leaves open all questions about buses or aux tracks that are fed by these live tracks. It's easy enough to imagine that effects on your live tracks that apply plugin delay could be de-compensated, but harder to imagine how buses that mix multiple tracks could disable PDC just for a live input and not for the other tracks in the mix. The manual hints about this in saying: Also, there are some disclaimers about live-input tracks that already have recorded material on them: when using PDC override on such live tracks, that prerecorded data will play back non-compensated and thus be out of sync with the PDC-compensated tracks. Ugh. So there is a note in the manual (bold emphasis mine): if you are recording in a PDC-required environment and matching against already-recorded material, and any of that material is on a live-input (echo- or record-enabled) track you play against, your newly recorded material will be placed (advanced) in its track out of sync with that prerecorded material.
Therefore, it seems PDC override is not necessarily a one-button magic solution to recording with PDC-needing effects, depending on existing routing or preexisting recorded material. The workarounds in the above quotes need to be heeded. Bypassing effects could be a one-button solution, with the obvious limitation that what you're hearing when you record is not the intended sound with effects. (But that could sometimes sound better anyway.)
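The bookkeeping guessed at above reduces to simple arithmetic (my reading, not Cakewalk internals): the project's compensation is the sum of the plugin-reported delays on a path, and material recorded with live-input PDC disabled is shifted earlier by that total once recording stops. The 1020-sample figure here deliberately matches the constant offset discussed earlier; the individual delays are invented for the example.

```python
def total_pdc(plugin_delays):
    """Total plugin delay (in samples) reported by a chain of effects."""
    return sum(plugin_delays)

def compensate_clip(start_sample, plugin_delays):
    """Shift a freshly recorded clip earlier so it plays back in sync
    with the PDC-delayed rest of the project."""
    return start_sample - total_pdc(plugin_delays)

chain = [512, 64, 444]           # hypothetical: limiter, EQ, linear-phase EQ
assert total_pdc(chain) == 1020  # the constant offset observed above
assert compensate_clip(88_200, chain) == 87_180
```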
-
Inserting Midi Control Sequences into a Midi Track..
bvideo replied to Sridhar Raghavan's topic in Cakewalk by BandLab
Have you explored the Sysx (system exclusive) view? It provides assistance in capturing and storing SysEx messages, long or short. Captured messages can be copied, edited, played back, and inserted into a track. Other thoughts: Preferences > MIDI > Playback and Recording has a setting to capture vs. discard incoming system exclusive data, and some hardware synths have settings for whether or not to emit SysEx messages for some of those button pushes. (Sorry to chime in so late ...) -
Cakewalk, VSTs and Custom Pitches/Tuning, Indian Music
bvideo replied to Sridhar Raghavan's topic in Cakewalk by BandLab
Yes, that should be simpler. -
Cakewalk, VSTs and Custom Pitches/Tuning, Indian Music
bvideo replied to Sridhar Raghavan's topic in Cakewalk by BandLab
Your topic made me think about what Cakewalk can do to help with scales. Tuning has been pretty well covered. The "problem" with scales is that in Cakewalk note names are represented by the 12-tone names, e.g. A, A#, etc. However, Cakewalk has a feature called a "drum map" that lets you define a different mapping of names to MIDI note numbers. In the general case, you can configure each note number with a name, an output port/synth, and a separate output note number. A MIDI track can be assigned to a drum map instead of a synth. The PRV for that track then labels keys with the names you configured instead of the usual A, A#, etc. I don't think this was particularly intended for use with alternate scales, but maybe you can find a way to use it. -
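A toy version of the drum-map idea (hypothetical data, not Cakewalk's actual format): each incoming note number gets a display name, an output port, and possibly a different output note number. The sargam-style names and the port name are invented for the example.

```python
# Hypothetical drum map: incoming note -> display name, output port,
# and remapped output note.
drum_map = {
    60: {"name": "Sa", "port": "TTS-1", "out_note": 60},
    61: {"name": "Re", "port": "TTS-1", "out_note": 62},
    62: {"name": "Ga", "port": "TTS-1", "out_note": 64},
}

def remap(note):
    """Where an incoming note actually goes on output."""
    entry = drum_map[note]
    return entry["port"], entry["out_note"]

# The PRV would label key 61 "Re"; on output it plays note 62:
assert drum_map[61]["name"] == "Re"
assert remap(61) == ("TTS-1", 62)
```

This is how a keyboard's chromatic keys could be relabeled and rerouted to the pitches of an alternate scale without editing the notes themselves.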
The DC removed by Sound Forge or Cakewalk is only the constant bias, computed over some length of the signal. Both Sound Forge and Cakewalk offer a "first 5 seconds" option that computes the bias over that segment (instead of the whole clip) and then applies the correction to the whole clip. Either way, "remove DC offset" certainly fails to remove the VLF energy of any periodic waveform whose period is shorter than, say, 4 times the length of the clip. So except for certain theatrical purposes, it does seem like a good idea to filter out inaudible periodic signals somewhere in the signal chain.
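A minimal version of "remove DC offset" as described above: measure the mean over a segment (the whole clip, or the first 5 seconds) and subtract that one constant from every sample. This is my own sketch of the technique, not either program's code.

```python
def remove_dc(samples, sr=44_100, measure_seconds=None):
    """Subtract the constant bias measured over the first
    measure_seconds (or the whole clip if None) from every sample."""
    n = len(samples) if measure_seconds is None else min(
        len(samples), int(measure_seconds * sr))
    bias = sum(samples[:n]) / n          # one constant for the whole clip
    return [s - bias for s in samples]

clip = [0.6, 0.4, 0.6, 0.4]              # waveform riding on a +0.5 bias
fixed = remove_dc(clip)
assert abs(sum(fixed)) < 1e-12           # mean is now zero
assert fixed[0] == 0.6 - 0.5             # the waveform shape is untouched
```

Note that a slow oscillation averages to its own mean over the segment, so this operation shifts it but cannot flatten it, which is exactly the VLF limitation above.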
-
Here's an example of viewing a signal with a known 0 offset zoomed out, so it looks horribly skewed negative. (It's not very musical because it's a tone artificially constructed for the example.) It looks like it has a large negative offset, but it is indeed 0 (per Sound Forge). When zooming in on this signal, the drawn line is relatively thinner, so you can get a better idea of how the area above zero can equal the area below zero (thus DC offset = 0). Although this signal is artificially constructed, synth waves are often quite artificially generated too. The point is you can't judge DC offset by looking at a zoomed-out drawing on your monitor.
-
I'm not sure DC offset can be observed with a waveform drawn at that scale. The actual "area" of top and bottom can't be judged by eye because the thickness of the lines is so large compared to the underlying pattern. That half waveform looks really unusual at that scale, but even if it still looks that way after applying DC offset correction, it could look different zoomed way in. But here's another issue: that FM pad Dimension Pro patch seems to generate a very large, very low subsonic component (maybe less than 0.5 Hz) overlaid on the whole waveform. In viewing the waveform, parts of it could lie entirely in the positive or negative region. Removing DC offset won't change that appearance unless you focus on a short portion, and even then you can still see the effect of the VLF. Putting a high-pass filter (ProChannel EQ) on the Dimension Pro audio track results in a much more normal-looking waveform. Here's the original waveform, about 5 seconds' worth, 7 consecutive notes. After Apply Effects -> DC offset (shifted, but not really looking like 0 offset). Recorded with the ProChannel HP filter enabled, set to 31 Hz:
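The difference between the two fixes can be shown with a one-pole high-pass filter, which is a rough sketch of the simplest thing a 31 Hz HP stage could be (not ProChannel's actual algorithm): unlike a one-shot DC subtraction, a high-pass continuously removes both true DC and sub-hertz drift.

```python
import math

def highpass(samples, fc=31.0, sr=44_100):
    """One-pole high-pass: y[n] = a * (y[n-1] + x[n] - x[n-1]),
    with the pole placed for an approximate cutoff fc."""
    a = math.exp(-2.0 * math.pi * fc / sr)
    out, prev_x, prev_y = [], 0.0, 0.0
    for x in samples:
        y = a * (prev_y + x - prev_x)
        out.append(y)
        prev_x, prev_y = x, y
    return out

# One second of pure DC decays to (nearly) zero instead of passing through:
dc = [1.0] * 44_100
out = highpass(dc)
assert abs(out[-1]) < 1e-3
```

A sub-0.5 Hz wobble is far below the 31 Hz cutoff, so it is attenuated the same way, which is why the filtered recording looks normal.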
-
[CLOSED] Cakewalk 2022.02 Early Access [Updated to Build 27]
bvideo replied to Morten Saether's topic in Early Access Program
Going by the one error code you got, disk I/O load or performance may be marginal. Maybe whatever changed has something to do with how disk I/O is managed. Does your task manager performance tab show anything about heavy disk load? -
Does Windows 10 still have some kind of limit (e.g. 10?) on historical MIDI hardware installations? There can be "ghost" devices that are not shown in Device Manager. There's a procedure to get them shown: run the cmd prompt "as administrator", issue "set devmgr_show_nonpresent_devices=1", and then invoke Device Manager with "start devmgmt.msc". Microsoft tells how here. If there are ghost copies of your MIDI interface hardware, you can delete them. Whether that's what is keeping Cakewalk from seeing the current one remains to be seen.
-
Have a look at the Cakewalk signal flow diagram. Maybe you can use a normal send both to your output channel and also to a patch-point aux channel where you can then add the FX. The aux channel's output then takes the place of your original channel output. I don't know why audio output from an external send is suppressed when there is no return channel.
-
The document explains that well enough. But from what we can tell, the numbers don't match the document. Whereas the document says "A window of 50 percent extends only a quarter of the way toward the adjacent quantization points", a 50% window actually reaches half the way, so all notes are quantized: every note is at most half the grid interval from one of the two quantization points it lies between. Also, "would you like to try it"?
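The two readings of "quantize window" can be stated as arithmetic; this just formalizes the discrepancy described above, with a made-up grid size of 240 ticks between adjacent quantization points.

```python
def reach_per_manual(window_pct, grid):
    """Manual's claim: a 50% window extends a quarter of the way
    toward the adjacent quantization points."""
    return (window_pct / 100) * (grid / 2)

def reach_observed(window_pct, grid):
    """Observed behavior: a 50% window reaches halfway, so every note
    is within reach of one of its two neighboring points."""
    return (window_pct / 100) * grid

grid = 240  # hypothetical ticks between adjacent quantization points
assert reach_per_manual(50, grid) == 60    # a quarter of the way (grid/4)
assert reach_observed(50, grid) == 120     # halfway (grid/2): catches all notes
```

Under the observed model, any window of 50% or more leaves no note unquantized, which is exactly what the post reports.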
-
I didn't look for "split clip to notes". That would be great for batch production of this scheme, and there have been requests for it on the forum in the past. The same thing does exist for audio beats, but "Split clips at AudioSnap pool" is greyed out for MIDI clips and "Split beats into clips" only works in the AudioSnap palette (on audio clips). Using Tab for "step to next note" and typing "s" to split at every note is not too bad. After the time-mangled clips are bounced into a single clip, running LEGATO.cal via the "Run CAL" command will adjust the note lengths.
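The legato adjustment mentioned above amounts to this (my reading of what LEGATO.cal does, sketched in Python rather than CAL): extend each note's duration so it lasts exactly until the next note starts, with the final note keeping its own length.

```python
def legato(notes):
    """notes: list of (start_tick, duration) pairs sorted by start.
    Returns the same notes with each duration stretched to the next
    note's start; the last note is unchanged."""
    out = []
    for (start, dur), nxt in zip(notes, notes[1:] + [None]):
        out.append((start, (nxt[0] - start) if nxt else dur))
    return out

notes = [(0, 50), (120, 200), (480, 60)]
assert legato(notes) == [(0, 120), (120, 360), (480, 60)]
```

Note the middle note is shortened as well as the first being lengthened: legato trims overlaps too, so gaps and overlaps both disappear.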
-
In order to bounce those clips back to a single clip, it is necessary to turn off the locks. So after restoring the tempo, select them all, turn lock off, then bounce. Then it can be stretched as a clip. As for note lengths, there may be a CAL script to extend each note's length to match the interval between notes; I didn't see it in the quantize dialog. It's getting to look like too much of a pain to do very often. If it's just a percussion sequence, you can store it as a groove clip in your library and modify it as needed.