
Posts posted by bvideo

  1. There's a thing called groove quantize that maybe you could use. The manual describes this in detail. Synopsis:

    Quote

    Groove Quantizing is a way to edit a track so that its rhythmic feeling and, optionally, controller data are similar to some other piece of music. The other piece of music forms a groove pattern that you store in a groove file, which has an extension of .grv.

First, construct one beat or measure of the pattern you'd like to emulate. Craft the timing and the accents you want to reproduce. Save that as a groove. Then you can apply that groove to a different sequence of notes to reproduce its timing and accents.

2. More than just separate tracks: when you are using the same synth patch, even in two different instances or on two different channels, playing two of the same note can sometimes cause undesirable phase cancellation. Two different patches may be needed, or at least some detuning or chorusing. By separating the parts, you guarantee you'll be able to work around any unison problems.

3. I'm not familiar with any videos, but for sure some people here can help you with that. The Cakewalk PDF manual seems to be pretty clear about what can be done with groove clips. Importing is on p. 709, looping on p. 711, and finer points, including changing normal audio into a groove clip, are in the few pages that follow. Find the manual under the "Cakewalk By Bandlab" menu at the top of this page.

4. Audiosnap & Groove/Loop are mutually exclusive. You could bounce it and then loop it, or just loop the original. A groove clip has usually been created so that enabling looping will bring it to the project tempo as well as enable dragging it out for loop purposes.

  5. Properties of tracks and buses, and also prochannel controls, clips, and now the arranger, are now all in the inspector pane. Type 'I' (eye) to open the inspector (or main menu views->inspector). Click on a track. There are icons at the top of that inspector - make sure the triple-horizontal-bar icon is selected, and there are the track properties.

    Instrument definitions are in the edit->preferences dialog (or just type 'p'), under the midi section. There is kind of a tree there, and with help from 'help' you can figure out how to configure your synth to always use the right bank select method.

  6. The sysex shown is GS reset, meaning all patches are to be set to default? Then it must be the patches embedded per track in the file that are not being set properly. The post about reading banks maybe has a big clue. The instrument definition for the chosen synth has bank numbers and also an option that chooses how the bank number should be interpreted ("bank select method"). Perhaps the older versions of Pro Audio don't use that.

    From the help pdf:

    Quote

Normal: Take the value of Controller 0, multiply it by 128, and add the value of Controller 32 to derive the bank number.

Note: A synthesizer manufacturer may refer to Controller 0 as the MSB (Most Significant Byte) and to Controller 32 as the LSB (Least Significant Byte).

Controller 0 only: The value of Controller 0 is the bank number.

Controller 32 only: The value of Controller 32 is the bank number.

So check the bank numbers in your instrument definition file; also, if the bank select method is "Controller 0 only", try changing it to one of the others, and vice versa.
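The three methods in the manual quote above boil down to simple arithmetic. Here's a minimal sketch (the function name and structure are my own illustration, not Cakewalk's API):

```python
# Derive a bank number from Controller 0 (MSB) and Controller 32 (LSB)
# according to the three bank select methods quoted from the manual.

def bank_number(cc0: int, cc32: int, method: str) -> int:
    if method == "Normal":
        # MSB * 128 + LSB
        return cc0 * 128 + cc32
    if method == "Controller 0 only":
        return cc0
    if method == "Controller 32 only":
        return cc32
    raise ValueError(f"unknown bank select method: {method}")

# Example: CC0 = 1, CC32 = 2 gives different banks under each method.
print(bank_number(1, 2, "Normal"))             # 130
print(bank_number(1, 2, "Controller 0 only"))  # 1
print(bank_number(1, 2, "Controller 32 only")) # 2
```

This is why a file that plays correctly under one method picks the wrong patches under another: the same pair of controller values maps to a different bank number.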

    Quote

    To change the Bank Select method

    1. Highlight and expand the instrument in the Instrument tree.

    2. Expand the Bank Select Method branch in the Names tree.

    3. Drag the desired bank select method from the Names tree to the Instrument tree.

     

7. 1 & 1/2 semitones is roughly the pitch ratio between 44100 Hz and 48000 Hz. In other words, if your plugin thinks it should generate audio at 44100 while your sound card is playing at 48000, or vice versa, it will be about that far out of tune. Does your project match the sound card setting? And does other audio in your project sound OK? Does Carbon Electra have its own setting for sample rate (unlikely)?
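You can check that ratio yourself: the offset in semitones between two sample rates is 12 times the base-2 log of their ratio.

```python
import math

# Pitch offset, in semitones, when audio rendered for 44100 Hz
# is played back at 48000 Hz (or vice versa).
offset = 12 * math.log2(48000 / 44100)
print(round(offset, 2))  # 1.47
```

About a semitone and a half, matching what you're hearing.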

  8. There's a thing called "snap offset" that can be set as a special property of a clip. Normally a clip snaps at its left edge to whatever landmark you're using in the track, such as "beat". The snap offset is a point in the clip that is used as the snap point of the clip instead of the left edge.

    It could be handy in the case of aligning a clip that has a pickup, strum, or some other feature before the first "beat" of the clip.  (See help for "snap offset".) I don't know if it works for loops, though. Also, since loops are a multiple of beats long, the end of the clip would be short of the end of a beat by the same amount of its snap offset (if snap offset works on loops at all). That would be OK when repeating the loop.
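The arithmetic behind that placement is just a subtraction; a hypothetical sketch (names are illustrative, not Cakewalk internals):

```python
# With a snap offset, the point `snap_offset` ticks into the clip,
# rather than the left edge, is what lands on the snap target.

def place_clip(snap_target: int, snap_offset: int) -> int:
    """Return the clip's left-edge time, in ticks."""
    return snap_target - snap_offset

# A clip with a 120-tick pickup before its first downbeat,
# snapped to a beat at tick 960: the left edge lands at tick 840.
print(place_clip(960, 120))  # 840
```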

9. Most likely the two software implementations differ in how they define %. Rather than a standard acoustical definition, the percentage just represents a reproducible setting for that particular control.

That said, your use of reverb on a bus with sends from different instruments at different levels is a reasonable approach to saving CPU with a single reverb. You may want to verify the sends are post-fader (it's the default) if you want the reverb level to track as you adjust an instrument's fader.

10. If you are using long MIDI notes and expect to start playback in the middle of any of those notes, the result is almost guaranteed not to align with the rest of your score. The reason is that the DAW has only two choices when playback starts in the middle of a note: 1. restart the note from its beginning, or 2. don't play the note at all. In case 1, some kinds of synth voices can't possibly align when started at just any point in the score, because they are being started at the "wrong" time.

    If you were to freeze those notes into their audio equivalent, you'd then be able to start playing anywhere.

    If you're not talking about long MIDI notes, then never mind. It must be something else.

  11. In real life, true stereo is represented by relative signal delay between left and right and also reflections from the environment, maybe more than volume differences. One can also argue for differences in frequency response. The relative signal delay depends on the distance between microphones or ears. Reflections and frequency response also depend on microphone orientation and ear physics. Microphone orientation can certainly also affect relative volume as well as frequency response.

    You could try this on a signal recorded in mono: put the straight signal on left and the same signal delayed by a fraction of a millisecond on the right. With the same amplitude on both sides, the signal moves to the left. Experimenting with the delay amount can be fun.

    When considering how to reposition instruments (even when recorded in stereo), why not consider relative delay?

One reason why not: folding stereo to mono when listening (or mixing) can wreck the whole sound if delay alone is used to represent stereo.

    A "true stereo" panner would have a lot of work to do.
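Here's a minimal sketch of that mono-to-stereo delay experiment, assuming float samples at 44100 Hz; the function name and structure are my own, not any DAW's API:

```python
import math

SAMPLE_RATE = 44100

def delay_right(mono, delay_ms, rate=SAMPLE_RATE):
    """Return (left, right) channels: the right channel is a
    sample-shifted copy of the mono signal, same amplitude on
    both sides, so the image pulls toward the left."""
    shift = round(delay_ms / 1000 * rate)
    left = list(mono)
    right = [0.0] * shift + mono[:len(mono) - shift]
    return left, right

# A short 440 Hz sine, with the right channel half a millisecond late.
mono = [math.sin(2 * math.pi * 440 * n / SAMPLE_RATE) for n in range(1000)]
left, right = delay_right(mono, 0.5)
# 0.5 ms at 44100 Hz is a 22-sample shift.
```

Summing left and right here would give comb filtering, which is exactly the mono-compatibility problem mentioned above.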

  12. What a pain. The Nord should have just shifted the notes before sending them. As it is, the track you record from the Nord would probably play back correctly on the Nord, but not on other synths. Also, most likely, the octave shift CC will not be sent when switching to a program preset that has a programmed octave shift. Again, not friendly to other synths.

It's possible a CAL script could run through a track, remove all the CC 29s, and shift the affected notes by the appropriate octave. Hopefully the CC event carries a value indicating which direction to shift. And beware sustain (CC 64) messages and notes held across a CC 29. But the shift implied by a patch change probably won't be visible in the track and may confuse the entire strategy. Also, what happens when you send the Nord MIDI notes outside the configured range? Would a track restructured this way still play back correctly?

CAL is a custom scripting language built into Cakewalk. If you have a programming background you could do it. There are also a few people on this forum who are skilled enough that they might volunteer to do it for you.
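To show the logic of that rewrite (in Python rather than CAL), here's a sketch. The CC 29 value semantics are an assumption for illustration, not from the Nord manual, and real MIDI handling would also need note-off pairing, sustain, and notes held across a CC 29, as noted above:

```python
# Bake octave-shift CC messages into the note data itself.
# ASSUMPTION: CC 29 value 0 = shift down an octave, 64 = no shift,
# 127 = shift up an octave. Events are (time, kind, data) tuples.

def bake_octave_shift(events):
    shift = 0  # current shift in semitones
    out = []
    for time, kind, data in events:
        if kind == "cc29":
            # Consume the shift message instead of passing it through.
            shift = {0: -12, 64: 0, 127: 12}.get(data, 0)
        elif kind == "note":
            out.append((time, kind, data + shift))
        else:
            out.append((time, kind, data))
    return out

track = [(0, "note", 60), (10, "cc29", 127), (20, "note", 60)]
print(bake_octave_shift(track))
# [(0, 'note', 60), (20, 'note', 72)]
```

The result plays the same pitches on any synth, with no dependence on the Nord's CC 29 handling.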

     

  13. 5 hours ago, Skyline_UK said:

    Thanks Noel.

    "If you have a single audio track (? MIDI?) feeding the synth this means that all the work is being done by a single core. "  I have one instance of SampleTank loaded and 14 separate MIDI tracks outputting to a MIDI channel each in SampleTank.  Is there a better way to set this up so that more cores are used?  E.g. more instances of SampleTank?

    "One thing you can try in this use case is see if plug-in load balancing helps."  

    I always have that ticked.

    My PC spec should have no trouble with such a relatively small project.  It's puzzling.

    ------------------------------------------------------------

    EDIT:

    It seems I may have been using samplers like SampleTank incorrectly for years... 

I tried loading two instances of it and directing the MIDI outputs of the MIDI tracks half to each, and the crackling stopped when using 1048 samples. I then split the load between three instances of SampleTank and could achieve 256 samples with no crackling. At 128 samples there was slight crackling, an engine load of c.108% and late buffers.

I always thought that as SampleTank is multi-channel/multi-timbral it was more efficient to load one instance of it and direct all my MIDI channels to separate channels in SampleTank. Now it seems (I'll try it tomorrow) that the best way (in terms of the PC's multi-processing engine, that is) is maybe to load a separate instance of SampleTank for every channel, each instance handling only one channel!

    This new tool has already given me a revelation! 💁‍♂️🤫

     

    It's a hot topic here
