Posts posted by azslow3

  1. If "Quick start" is working (you can control track volume), your device is delivering what it should.

    For the Instrument. Start by looking if you can write an automation for the parameter in question (without controller, just try to add automation line for that parameter or write automation using GUI). If that is possible, you can assign it to hardware slider. If not, (2) and (3) will NOT work (VSTi normally support automations for parameters which make sense to automate, but there are cases when they "forget" to do that).

    I assume possible. For (2), replace "Strip" Action in the "Quick start" with "ACT" Action, select "Slider 1" in parameters. Try "ACT Learn"... if not working, follow "AZ ACT Fix" instructions I have linked.

  2. On 4/11/2024 at 3:31 PM, Stephen Power said:

    Thank you for the help @azslow3. I cleared the control surfaces window (nothing in it now), and set the track input for one plugin with timbre and expression settings (Small String Gestures, Crow Hill Company) to 'All External - Midi Omni'. 

    Unfortunately, this did not make any difference. The controller will not 'midi learn' and it is not controlling any settings in the plugin.

    The instrument has to support the feature explicitly. When it does, you can normally right-click on a control and there is an option like "MIDI learn". You then send MIDI (e.g. a CC message) by moving/pushing/turning the controller, and the plug-in remembers the assignment. When there is no such option, the instrument may respond to some particular CCs, normally mentioned in its documentation. In that case you configure the controller externally to send what is expected.

    Going with surface plug-ins (any of them) requires first a "MIDI learn" inside the surface plug-in and then an "ACT Learn" for the parameter. As I have mentioned, the second part has had quite a few bugs over the history of Cakewalk. Check the "AZ ACT Fix" utility: https://www.azslow.com/index.php/topic,297.0.html and also read https://www.azslow.com/index.php/topic,13.0.html (some comments may be obsolete/fixed, but in general they are still valid). Using it allows editing the mapping without "ACT Learn", but read the documentation; the utility is not intuitive. Note you can use it to control the mapping for any surface plug-in; you don't have to use AZ Controller.

    AZ Controller is the most flexible but also the most complicated surface plug-in. There are a manual, tutorials and examples. If you master it (which will take some time...), you will be able to assign your sliders to anything within seconds.

    To understand how the "ACT MIDI" surface plug-in works (and optionally recreate its functionality in AZ Controller), read https://www.azslow.com/index.php/topic,107.0.html From that, even if you cannot or will not follow it, you will understand why the whole topic is not "simple".

    Note that the ideas behind control surfaces in DAWs are almost the same for all controllers and all DAWs. Inside Cakewalk, with AZ Controller, you can learn these concepts without any background. In some other DAWs you need programming skills for the same thing (C++ in REAPER, Python in Ableton, etc.), and some DAWs don't have a public API at all. So you have an opportunity to look "behind the scenes" with relatively little effort.

     

  3. 18 hours ago, Stephen Power said:

    I have set up the relevant CC's correctly in Chrome, but the controller is having no effect on any of the instruments I've tried it with (mainly Musio and Opus).
    ...

    Maybe REMOVING everything in the "Control Surfaces" preferences is what you really want... Just make sure you set Omni MIDI as the instrument input ;)

    The explanation: parameters of instruments and FXes can be controlled in 3+ different ways in Cakewalk:

    1. MIDI assign/learn inside the instrument (many CCs are pre-mapped). Some FXes may have a MIDI input, in which case they can be controlled using the same approach. But FXes in general can be controlled using the following methods only.
    2. Using the "ACT Dynamic Plug-in Mapping" feature, with the "ACT MIDI", "Generic Surface" or "AZ Controller" plug-ins and a corresponding configuration. The mapping works with the plug-in in focus and is done with the "ACT Learn" button. There were many bugs in the mapping, at least in the past; the "AZ ACT Fix" utility can help keep persistence under control.
    3. Using the "ACT Direct Plug-in Control" feature, possible with "AZ Controller" (it is also used by the Mackie Cakewalk plug-in, but that works with Mackie-compatible devices only). In this case you can control a specific parameter of a specific plug-in, so even several plug-ins in parallel. But for that you have to manually select in the configuration what exactly you want to control.
    4. * Cakewalk has a direct "Remote control" feature, so you can learn DAW parameters (and, with some workaround, instrument parameters) directly. Unfortunately, a 20+ year old design bug is still there (assigned MIDI still leaks to everything else), and I remember some other bug reports about persistence of assignments. So, unless your project uses no MIDI instruments at all, I can't recommend using it (when unlucky, the side effects can make you think you have a serious problem with your mind...).
  4. Good Sonar and Next overviews. Thanks!

    For stem separation, at least some time ago (when I was interested) my favorite was Ultimate Vocal Remover. Most (all?) separators differ just in the trained model, and results depend on the music you separate (the quality of particular tracks depends on the source and the model; one model can deliver better drums while another delivers better vocals, extracted from the same source...).

    UVR is a GUI with several parameters and models. It is offline and open source, as is everything it uses. The back-ends have good scientific explanations, comparisons and references.

  5. For me https://www.bandlab.com/membership is just black... It contacts tons of trackers (Google, TikTok, Facebook, Outbrain, etc.), so my Firefox refuses to load it.

    From the beginning, the "free" CbB was demanding re-activation every half a year. So it was clear it would not be really "free" forever. All this locking, which started in Platinum times, especially with the "new" version of Z3TA2 that had online activation as the only visible change, and especially the related bugs, was not convincing. Now the owner is different and the management's behavior is even less convincing: "our free product will be a paid product... soon..."

    But I had thought maybe there was a re-start in development. Unfortunately, the video linked earlier in this thread demonstrated the second change of GUI framework and several tiny features. No MIDI routing/VST and no offline pre-processing during playback, so I guess it is still the very same 20 year old engine.

    I am a hobbyist, but in recent years I have paid quite some money for plug-ins and a bit for the DAW. And I have no regrets, because each time it was absolutely clear what I was paying for (and how much... for the BandLab membership I can't even find the price in euros or info about VAT). I want a DAW to be able to run for a lifetime (mine! not the DAW's!). And I know that is the case with Sonar X and my current DAW. Any version, at any time, offline.

    In other words, Sonar X (and CbB, while it is possible to run it) projects will be supported by ReaCWP (I am still fixing bugs when spotted), but any changes in the format introduced by the "new Sonar" will not be supported/fixed. I mean: if you have project files from X/CbB, don't overwrite them when "testing" the "new Sonar". Just in case you need ReaCWP in the future... who knows...

     

  6. OSARA is a REAPER add-on. It has an automatic installer but can also be installed manually by copying one DLL into the expected directory (and loading the shortcuts afterward). It works with any screen reader, under Windows and OSX.

    Sonar 8.5 was so popular because of CakeTalking (paid software) and JSonar (free). From what I know, the developers were from the community; I mean it was not sighted developers with the target users as testers, but the other way around. Sonar X1 had a completely new interface, and Windows 8 also broke several things. So that direction stalled a long time ago.

    OSARA was born from the demand to move on somewhere. Since REAPER is the only other DAW with an open API (in practice with way more possibilities than the Cakewalk API), the choice of DAW was simple. The idea was to make it free and open source; also, the developer (AKA the owner) of REAPER is aware the community exists (clear from REAPER's change-logs). All of that is "alive": there is an active REAPER developer, an active OSARA developer, an active community "manager" and several sighted helpers. Most people have moved away from Sonar 8.5 (from what I know, including the JSonar developer). I mean, I am not sending Annabelle down the rabbit hole; I am just trying to convince her to take the route others have already taken (and not regretted).

    Cakewalk had the intention to make CbB accessible. Maybe some day there will be yet another attempt. There are some people who in fact use Sonar X3/CbB, after creating the project in 8.5. With an X-Touch Compact and an NI keyboard that kind of works. But I can't recommend that, at least at the moment.

  7. Unfortunately Speakerphone has no demo, so I can't check myself. Maybe it is worth asking the developers if the new version is JUCE based (the most popular framework); if so, they should not forget to check accessibility. Recent JUCE versions are accessible.

    As I have written before, for the host it is better to leave Sonar 8.5. I know that is not easy, even for experienced people. But there is a big community which can help; most REAPER/OSARA users were working with Sonar 8.5 before.

    For plug-ins, fortunately there is a movement toward native accessibility. JAWS / HSC / SIBIAC scripts for inaccessible plug-ins were fighting against the wall: once something was developed, a new version of the plug-in appeared and the work was voided.

  8. On 3/8/2024 at 4:23 AM, John Vere said:

    I says it uses a player called FluidSynth. I was thinking, is this a VST?  If so I want it!  Sadly it isn't.

    You even "liked" my post in the previous thread, about the FluidSynth VST...

    I have checked, and I am not delivering SysExes. I don't remember why; in the source I have a "todo" comment for it 😏

    There were several technical problems with GM in the VST3 format. And I have not invested time into a GUI... But that does not prevent the current version from more or less working.

  9. 14 hours ago, pulsewalk said:

     

    1. No, no tempo changes.
    2. No, no "unusual" settings, as far as I know, just standard default settings at 134 bpm.
    3. Automations are created sometimes by arming a VSTi or VST FX and moving knobs, but most often by drawing the automation. In the case of these "corrupted" automation paths, the automation was actually drawn by hand.

    That means my example with a tempo change is not (or not the only) cause of the problem you observe... I don't have other ideas at the moment, but I will think more.

    14 hours ago, pulsewalk said:

    The last example is from a completely new project file with only one audio track which I use for mastering. So it's not even an old complex project... it's completely new, fresh and with one track. Oh yeah, there's now an archived audio track also, but it's not used, and I think it was put there after I discovered the corrupted path, so yeah... but that's it.

    So maybe you can publish it (without audio, just the .cwp file). Who knows, something there may give a hint (Cakewalk and I can see the internal project structure), even when the bug is not reproducible in its current state.

    PS, OT

    14 hours ago, pulsewalk said:

    I feel you're  insinuating that I'm not taking you serious, and that I'm ignoring msmcleod. This is of course not the case and I'm sorry if you took it that way. One can miss something, which is not the same thing as ignoring.

    Thank you for the clarification. I have long been present on this and the previous Cakewalk forums, and there have been several people who for one or another reason refused to communicate with me. Please don't get me wrong, I respect such decisions. But the silence sometimes irritates me, since in that case I don't know if my attempts to help make sense. Peace :)

  10. @pulsewalk answering questions which can help identify the origin can... well... help identify the origin. If you don't take me seriously, OK, but at least don't ignore questions from msmcleod (assuming you want the problem found and fixed...).

    @msmcleod it seems I have found a method to reproduce the "reverse Z":

    [Screenshot: automation envelope showing the "reverse Z" between two points]

    1. zoom in to the sample level
    2. add an automation with 2 points, forming a "jump" composed of two linear segments. Note they are aligned to samples, and the points can be moved by whole samples only.
    3. add a tempo change before the points in question. Notice that the points are no longer aligned to the samples (at least visually).
    4. adjust the tempo change in such a way that the automation points fall behind a sample tick, but closer to the previous tick than to the next one.
    5. start dragging a point: it will snap to the nearest (previous) sample, producing the picture above.

    Note that in case the points are "closer" to the next sample, CbB does not allow dragging to the previous sample.

    In the saved project, at least in my case, the "start" time for some segments is saved as "musical" but the "duration" as "absolute" (I obviously don't know what these parameters are really called in Cakewalk, but I guess you understand what I am talking about ;)).
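The tick alignment issue above can be sketched in a few lines (a simplified model, not Cakewalk's actual code; the TPQ of 960 matches the Cakewalk default mentioned later, and 44.1 kHz is an assumed rate): a point placed on a tick is sample-aligned at constant tempo, but after a tempo change the same tick lands between samples, so dragging has to snap to one side.

```python
# Sketch only: convert a musical position (ticks) to an absolute sample
# position, walking through a list of tempo changes.
TPQ = 960          # ticks per quarter note (Cakewalk default)
SR = 44100         # assumed sample rate

def ticks_to_samples(tick, tempo_map):
    """tempo_map: list of (start_tick, bpm), sorted by start_tick."""
    seconds = 0.0
    for i, (start, bpm) in enumerate(tempo_map):
        end = tempo_map[i + 1][0] if i + 1 < len(tempo_map) else tick
        span = min(tick, end) - start
        if span <= 0:
            break
        seconds += span / TPQ * 60.0 / bpm   # ticks -> beats -> seconds
    return seconds * SR

# At constant 120 BPM, one quarter note lands exactly on a sample:
print(ticks_to_samples(960, [(0, 120.0)]))              # 22050.0
# Insert a tempo change and the very same tick position falls
# between samples (a fractional sample position):
print(ticks_to_samples(960, [(0, 120.0), (480, 134.0)]))
```

This is only meant to illustrate why a tick-aligned point can stop being sample-aligned; the real conversion inside Cakewalk is unknown to me.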

     

  11. @pulsewalk Give them a project with the problem...

    Background: automations in Cakewalk are saved as segments, using musical time (ticks). Each segment has a "start" and a "duration", specified in ticks. If "start" + "duration" is not equal to the "start" of the next segment, it can look like your screenshot.

    How (and if) that can happen, only Cakewalk knows. Even if everything is aligned in ticks, it can be that they calculate the resulting sample in a way that makes the result no longer aligned.

    Note that the default segment type is "linear". But there are other types, including "jump" (in that case there will be no "previous value" point visible).

    The VST standard is fuzzy about how automation segments should be transmitted to plug-ins. In the very same place in the documentation they claim there should be a linear segment for the whole block, then give an example with single points not aligned to block borders. Then they mention "jumps" should be specified with 2 points at the same time (sample). Finally, it is technically possible to specify several points for a block, not in time order... And so, what Cakewalk does and how that is interpreted by a particular plug-in is almost up to them 🤪
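As an illustration of the stored-segment description above, a small sketch (the segment structure here is hypothetical, not the real CWP format) that flags places where "start" + "duration" of one segment does not meet the next segment's "start":

```python
# Hypothetical representation of what the post describes: each automation
# segment stored as (start_ticks, duration_ticks), sorted by start.
def find_gaps(segments):
    """Return (expected_end, actual_next_start) pairs where the chain
    of segments is not contiguous in ticks."""
    gaps = []
    for (start, dur), (next_start, _) in zip(segments, segments[1:]):
        if start + dur != next_start:
            gaps.append((start + dur, next_start))
    return gaps

segs = [(0, 960), (960, 480), (1500, 960)]   # 960 + 480 = 1440, not 1500
print(find_gaps(segs))                        # [(1440, 1500)]
```

A mismatch like the one flagged here is exactly the situation that could render as the "Z" in the screenshot.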

    From the "user" perspective, check the following:

    • do you have tempo changes in the project? If yes, do the "Z"s appear at such changes (only), or also at constant tempo?
    • do you have "unusual" settings in Project/Clock? (not audio sync / TPQ != 960)
    • how do you create your automations? (controller, moving a knob/slider by mouse, editing points)
  12. Cakewalk is far from convenient for MIDI processing, since VST MIDI processors are not supported (they can be used as VSTi only) and MIDI routing is not flexible (apart from being hard to follow and sometimes buggy).

    You can check Ctrlr (free). It allows talking to an external MIDI port from within the plug-in (so without the DAW). I am not sure that works from several instances to the same device...
    If the target is just making a panel to control an external device with MIDI, you can stop reading here ;)
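For completeness, what such a panel ultimately sends is plain MIDI bytes; a minimal sketch of building a Control Change message (standard MIDI 1.0 byte layout, independent of Ctrlr or any DAW):

```python
def control_change(channel, controller, value):
    """Build the 3 raw bytes of a MIDI Control Change message:
    status byte 0xB0 | channel, then controller and value (7-bit each)."""
    if not (0 <= channel < 16 and 0 <= controller < 128 and 0 <= value < 128):
        raise ValueError("argument out of MIDI range")
    return bytes([0xB0 | channel, controller, value])

# CC#7 (channel volume) = 100 on MIDI channel 1:
print(control_change(0, 7, 100).hex())   # b00764
```

These three bytes are what any MIDI library or driver would deliver to the external port.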

    For complicated MIDI hardware control, REAPER is probably the most flexible. Not only does it allow VST MIDI processors (most DAWs except Cakewalk allow that), it supports:
    * arbitrary MIDI routing (including between tracks and sends to external MIDI)
    * several MIDI buses on one track (a track is not limited to just one MIDI stream)
    * input FX: plug-ins can change MIDI (and audio) before it is recorded
    * built-in MIDI modulation, including MIDI learn and MIDI control for parameters in any VSTs, even those which don't support MIDI control directly
    * built-in audio modulation, so an audio level can control an arbitrary parameter. Combined with a "parameter to MIDI" capable plug-in (e.g. a built-in one), an audio level can be "converted" to MIDI changes and then processed/routed/recorded
    * when some wish is still not covered, scripting and plug-in extensions are supported, with access to everything (MIDI, audio, content, DAW/project state, etc.). Some special MIDI-related plug-ins make use of that (e.g. ReaLearn).

    PS. Long, long ago Cakewalk had a built-in solution for controlling external hardware, called StudioWare. Unfortunately, for unknown reasons, that functionality was abandoned...

  13. 4 hours ago, Bapu said:

    FWIW, ReaCWP does create the Reaper project with small clips ( non consolidated) in their proper place, so maybe this is yet another 'bug' in ProjectConverter which I will report to Jürgen.

    ReaCWP tries to place clips with "all goodies" specified in Cakewalk (sec/beat, clip, offset, loop, etc.). I went through the DAWProject documentation and all of that is possible to specify. But what the "right" way to do it is remains unclear. Unfortunately, unlike in Cakewalk and REAPER, where "track with all related parameters (-> lane) -> clip with all related parameters" is hard to interpret wrongly (since it is logical and follows what users see and do...), DAWProject introduces a different hierarchy of containers, without a precise specification of how it should be built (at least at first look; all the examples provided with the specification are "too simple").

    PS. I hope DAWProject is not going to be another "VST3" (where no one knows how to implement many things "right", nor even whether the "right" way exists...), triggering endless incompatibilities...

  14. On 11/4/2023 at 9:03 PM, Bapu said:

    Really? Where? How?

    Sorry, that statement was theoretical... I mean, apart from Cakewalk, I am the only person who knows the CWP format sufficiently to convert it into something... ;)

    On 11/4/2023 at 9:03 PM, Bapu said:


    The reason I ask is when I use ReaCWP to open a new .rpp and then use ProjectConverter to write the .rpp to DAWProject format,

    That DAWproect file opens in Bitwig and pretty much works (some VSTs come over with odd states compared to the original).

    However, the DAWProject file opens in Studio One but no audio clips are present, all the tracks/folders are there though.

    About the VST conversion problems... Is it with VST2, VST3 or both?

    Recently I found that ReaCWP doesn't convert some VST3 presets correctly. I have somewhat better code, but I have not put it into ReaCWP yet (it was written for another VST-preset-related utility...).

    But there is yet another possible problem, not really tested by me nor confirmed by anyone else: https://forums.cockos.com/showthread.php?t=281800
    So it can happen that I have overlooked something.

    In case:

    1) the problem you observe is with VST3,
    2) the state of the VST is correct in REAPER,
    3) you have a minute (since you know the exact VST and state which does not work, I guess the test will not take long),

    please save a .vstpreset (state file) for the VST/state in question in REAPER and load it in Bitwig. Does it load correctly?
    Note that the REAPER bug (if it is a real bug) probably affects just some VST3 plug-ins and particular states, since many don't use the second section of the state (or ignore it).

    On 11/4/2023 at 9:22 PM, Bapu said:

    On further testing this *only* happened with a ReaCWP created .rpp. If I use ProjectConverter on a .rpp fully created from scratch it loads with all audio clips in Studio One and of course Bitwig. This would indicate that there is something about how ReaCWP created the .rpp and passed that off to Project Converter that boinks up Studio One but not Bitwig.

    "Standards", especially complex ones, and especially at the beginning, are rarely implemented completely and without bugs. I guess something about audio clips as created by ReaCWP is not implemented in the Studio One DAWProject importer yet. ReaCWP tries to match the clip parameters in Cakewalk (time base, length, offsets, loops); a project created directly in REAPER follows the current defaults, which can be different. If you give an example to Jürgen (the author of ProjectConverter), he can probably understand the reason quickly (since he knows exactly what is converted and how, so he can guess which part of that can be incompatible).

  15. On 10/31/2023 at 8:19 PM, DeeringAmps said:

    Yes. And since on Zoom's website it looked to me like there is no "user" way to control preamp gain, I surmise (as you did) that they are "burning in" some kind of automatic gain control in front of the ADC.
    I'll handle the compression/limiting and gain staging "Thank you very much!"

    Originally I had also thought there was some "auto gain control", but that would be complicated.

    It seems the approach actually used is way simpler. This diagram is from Tascam, but I think Zoom uses the same:

    [Diagram: Tascam Portacapture X6 dual-ADC input stage]

    so two ADCs with fixed gains, with the results merged using digital math.
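Conceptually the merge could look something like this (a toy sketch only; the gain factor and the selection rule are my assumptions, the vendors' real math is proprietary):

```python
# Conceptual sketch: two ADCs sample the same analog signal with different
# fixed gains; the digital side uses the high-gain path unless it clipped,
# and otherwise rescales the low-gain path into a float with extra headroom.
FULL_SCALE = 2**23 - 1        # 24-bit signed full scale
LOW_GAIN_FACTOR = 16.0        # hypothetical: low-gain path is -24 dB vs high

def merge(high_sample, low_sample):
    """high_sample/low_sample: integer readings from the two ADCs."""
    if abs(high_sample) < FULL_SCALE:       # high-gain path did not clip
        return float(high_sample)
    return low_sample * LOW_GAIN_FACTOR     # rescaled low-gain path

print(merge(1000, 62))                 # quiet signal: high-gain path wins
print(merge(FULL_SCALE, 600000))       # clipped: value beyond 24-bit range
```

The second result exceeds 24-bit integer range, which is exactly why the output file has to be floating point.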

  16. 4 hours ago, Starship Krupa said:

    I did some digging and PreSonus and Bitwig have DAWProject on GitHub.

    @azslow3, you may find it interesting. I'd love to hear what you have to say about it if you take a look under the hood.

    My opinion about DAWProject is still the same as earlier in this thread ;)
    I can write CWP to DAWProject, but not the reverse direction. But ReaCWP followed by ProjectConverter (probably) can do the trick.

    Every DAW has its own "extensions" for VST. Some have more capabilities than others; some are published (e.g. SO, REAPER), others are proprietary (e.g. Cakewalk). The same goes for non-DAW-specific extensions (ARA is proprietary, but the corresponding registration is possible for any developer). Note that DAW extensions intersect even less than other features; I mean if you use a specific DAW extension, it probably will not exist in other DAWs.

     

  17. Since I mentioned this only indirectly in my lengthy post:

    24-bit and 32-bit (floating point) files have the same precision for the sample amplitude. The difference is that in the first, the precision of a particular sample depends on its amplitude, and in the second it does not. E.g. if a track with a max level of -24dB is rendered into a 24-bit file, the 4 upper bits of every sample will be 0, so effectively it is a 20-bit format.
    32 bits solve the problem: a source at any level has 24-bit precision. And there are no "consequences" apart from the file size.

    32-bit (float) recording in some Zoom devices serves the same purpose. But since it is technically implemented using "tricks", there is some degradation in fidelity compared with a (properly gain-staged) 24-bit recording device. Most users claim the degradation is unnoticeable (in the corresponding price range) and well worth the fact that you can forget about the gain knob (it does not exist) and concentrate on more important things...
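The -24dB example above in plain arithmetic (a quick sketch):

```python
# A peak sample at -24 dBFS, stored as a 24-bit signed integer.
full_scale = 2**23 - 1
gain_db = -24.0
scaled = int(full_scale * 10 ** (gain_db / 20))

print(f"{scaled:024b}")       # top 4 bits are 0
print(scaled.bit_length())    # 20 -> effectively a 20-bit format

# A float32 instead stores a 24-bit significand plus an exponent, so the
# same -24 dB signal keeps full 24-bit relative precision; only the file
# size grows.
```

Every further -6 dB of level costs roughly one more leading zero bit in a fixed-point file, which is the whole point of the comparison.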

  18. 8 hours ago, Max Arwood said:

    Someone said there are no 32bit converters. Me I got to google everything lol.  Check out Mytek - True 32 bit integer. It guess this is future proofing your  recording, and emptying your pocket book simultaneously.  Wow 384kHz /32 bit integer. Now some math guy - how many bytes per minute does this thing use 😊 and how many empty 0’s are in my files per minute?

    I have found only one Mytek ADC device, in the "discontinued" section. From its documentation:

    Quote

    The Brooklyn ADC is capable to record true (not floating point) 32bit files.

    Now I get it: we are using FALSE 32-bit files, since they are all filled with floating points... 🫢

    What is not possible to find in the documentation are the characteristics of the device, normally given for mid-range audio interfaces and always known for top interfaces. But sure, in the high-end world "This is absolutely the best analog-to-digital converter ever created by Mytek" covers everything and leaves no questions (those who ask simply can't understand the whole spirit...).

  19. 10 hours ago, reginaldStjohn said:

    I was confused about people claiming some interfaces recording at 32Bit since I have never seen an A/D manufacture that actually produces a 32 bit ADC at audio Sampling rates.

    From Zoom's own website it says the Zoom L8 is a 24 bit interface.  If it allows you to record in 32 bit then it is converting t he 24 bits to some 32 bit format in the driver.  That is basically the same thing that all DAWs do . Am I missing something?

    In fact it seems the L8 is not a 32-bit recorder (unlike the F2/3/6/8, M2/3/4 and UAC-232). So saving into 32-bit from the L8 makes no sense...

  20. 10 hours ago, Will. said:

    Why would you want to record at the bit rate and deph with built in analog to digital (AD) Converters lowering the noise floor for you? 

    If you can record in 441/ 24 or 441/32 with the exact same recording as 48/24 why would you want to waste precious system resources? 

    Keep in mind, recoring at 48/24 does not give you better quality then 441/24. What bring you the quality is your preamps of your interface.  Higher bit only creates higher headroom.

    Many (most?) interfaces can record with lower latency at 48kHz than at 44.1kHz (the number of extra samples is normally the same, but the sample "length" is smaller). Some interfaces can go even lower at 96kHz, but that is less common.

    Up-sampling to 96kHz is "straightforward" (and almost all plug-ins are at least tested at 96, since that is a "common" rate, unlike 88.2...).

    3 hours ago, John Vere said:

    My new Zoom L8 can record at 32 which is a new thing in consumer gear as far as I can tell. I’m going to be using it from now on for my own original music because it will mean no dithering will be required at any step.

    In fact the opposite is true for (at least current) 32-bit recording.

    32-bit was introduced to avoid gain staging: great for field recorders (where the level is unpredictable) and other similar situations (no wonder Zoom, as a major player in field recorders, has adopted the technology). Note that fidelity is not the top priority for field recorders...
    What exactly they do is "proprietary", but by all accounts they are not using a true 32-bit ADC (all ADCs are, for physical reasons, fixed point); the 32-bit floating point output is somehow composed from the outputs of 2 different ADCs (in some devices they are 16-bit...). Neither ADC is used with "optimal" gain; the target is to avoid digital clipping even if an airplane flies right over the microphone, not to retain the best fidelity.
    And so there is quite some "math" in combining the result, which is in any case way more significant than any "dithering".

    BTW, when converting 24-bit fixed point to 32-bit floating point there is no dithering; the latter has the same 24-bit precision as the former.

    Note the consequence of the last statement: saving the input from a 24-bit interface into a 32-bit file is just wasting space (the extra 8 bits in each sample will be constant).
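This is easy to verify: a float32 has a 24-bit significand, so every 24-bit integer sample survives the round trip exactly, and no dithering is involved (a quick check):

```python
import struct

def to_f32(x):
    # Round-trip a number through an actual 32-bit IEEE 754 float.
    return struct.unpack("f", struct.pack("f", x))[0]

# Every 24-bit signed value maps to float32 and back without any loss:
for s in (-2**23, -1, 0, 1, 12345, 2**23 - 1):
    assert int(to_f32(s)) == s

# A value needing 25 bits does NOT survive (significand is 24 bits):
print(int(to_f32(2**24 + 1)))   # 16777216, not 16777217
```

The last line is also why values beyond 24-bit integer precision start to lose exactness, while everything a 24-bit interface can produce is represented bit-exactly.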

    45 minutes ago, John Vere said:

    VST instruments are audio tracks. So same thing. Lots of people upsample and claim it makes a huge difference.  

    It CAN make an audible difference, VST dependent. That was discussed many times on this (and the old Cakewalk) forum, with practical examples.

    The reason is again purely technical: some mathematical algorithms used for audio processing or generation "produce" frequencies above 20kHz, even when working with a stream of samples which can't represent such frequencies. So the resulting "numbers" land somewhere in the range of allowed numbers, generating aliased frequencies.
    "Properly written" plug-ins are aware of this; they up-sample before and down-sample (with an LPF) after such (or all) processing. But not all plug-ins are "properly written" (even some that are well known as "good").

    But a difference between 48kHz and 96kHz in a file (physiologically) can't be spotted by a human. Such claims exist on the Internet, normally in high-end equipment forums/ads/blogs, sometimes even with tests... Note that hardware (including analog) can also be "not properly made". Any such component in a high-end chain will produce aliases in the lower (audible) range. Such aliasing is so subtle that it is not only declared as "the difference", but can even be perceived as "a feature" (even though in reality it is just "a bug"...).
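The aliasing mechanism described above can be shown in a few lines: squaring an in-band sine (a crude stand-in for a non-linear plug-in) creates a double-frequency component above Nyquist, which folds back into the audible range instead of disappearing.

```python
import math, cmath

FS, N = 48000, 480                 # 48 kHz, 10 ms window -> 100 Hz bins
f0 = 13000                         # in-band sine
x = [math.sin(2 * math.pi * f0 * n / FS) for n in range(N)]
y = [v * v for v in x]             # naive nonlinearity (e.g. waveshaping)

def dft_mag(sig, k):
    # Magnitude of DFT bin k, computed directly (small N, so this is fine).
    return abs(sum(s * cmath.exp(-2j * math.pi * k * n / len(sig))
                   for n, s in enumerate(sig)))

# Squaring creates 2*f0 = 26 kHz, above Nyquist (24 kHz); it folds back
# to 48000 - 26000 = 22000 Hz.
peak = max(range(1, N // 2), key=lambda k: dft_mag(y, k))
print(peak * FS // N)              # 22000
```

At 96kHz the 26kHz component would still be below Nyquist and could be filtered out before down-sampling, which is exactly what "properly written" plug-ins do.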

    ---

    So:

    • record into 24-bit (44.1/48/96kHz, depending on your own preferences and the required side effects; 48kHz is a good balance).
    • process in 64-bit. 32 bits are sufficient, but have 24-bit precision. Every FX in a chain can "destroy" the last bit; after a huge number of processing passes that can "leak" into the result. Practically it is impossible to perceive, but theoretically it is possible, and modern computers have the power 🤪... Plug-ins are likely processing in 64 bits in any case.
    • up-sample for "buggy" plug-ins (or in general, for safety, if you have sufficient resources...).
    • render/bounce/freeze into 32-bit float. Only for the final result (so after the maximizer) is 24-bit a reasonable option. 64 bits is overkill for persistence.
    • do not bother with "dithering" options for anything above 16-bit format; no equipment will be able to reproduce it (unless you save intermediate files into 24-bit, which as I have written is a bad idea).
    • especially when working at 96kHz, put an LPF before the output to the interface (unless you know what your gear does with "high-end" frequencies...).
  21. They have finally implemented a takes system "compatible" with Cakewalk... Not that the previous one was bad, but for me it was always, let's say, inconvenient (probably just because I was used to the Cakewalk approach).

    70€... well, the last time I paid was more than 1899 days ago ;)

  22. So, one indirect way of converting CbB -> DAWProject already exists: first convert to REAPER (with ReaCWP), then convert to DAWProject (with ProjectConverter).

    Until DAW developers put real effort into doing this right (automatically, or after asking the user, rendering/converting the parts of the project unsupported by the export format, as is done for "audio export" or for saving into common formats from graphic editors), exporting to a different DAW, changing things there and exporting back is not going to be fluent. So one-direction transfer is probably the primary application for any export format in the DAW world.

  23. On 9/30/2023 at 8:29 PM, OeAi said:

    as for the ending life project, can this be a one of the latest additions ?
    well, it's a mostly the same export functionality, but through a different .dll, that should be possible to change/update later
    any needed workaround you can add into their github project prob, this can be just a "transitions" .xml file for future updates or something
    https://www.kvraudio.com/news/bitwig-and-presonus-announce-dawproject-a-daw-agnostic-project-format-58830
     

    CbB is the current version of the OLD Sonar, and there will be a NEW Sonar which is most probably the next version of CbB. So no EOL.

    If you want to "escape" from Cakewalk products, there is a solution (for one other DAW).

    That is not "mostly the same" nor simple. Every program (including Notepad, DAWs, plug-ins, games...) is just a "different .exe or .dll". Conversion between formats on the level DAWProject proposes is possible, but requires significant programming work (and skills in the topic). From an exporting perspective, at the moment DAWProject will just add yet another 2 DAWs into which you can export easily.

    Also note that DAWProject makes quite significant assumptions about the project structure; from a quick look it is "compatible" with CbB/Sonar. But that doesn't mean any DAW can fit its own paradigm into it. And even more important, the devil is in the details... OMF and similar formats are simpler and have existed for a long time, and their compatibility is still not perfect.

     
