
Multi-Sonik

Members
  • Content Count: 29
  • Joined
  • Last visited

Community Reputation: 2 Neutral

About Multi-Sonik
  • Birthday 01/01/1972

Recent Profile Visitors
406 profile views
  1. THRU ? God d***n it! Why haven't I thought of this word before? 😆 😳😳 Thanks! (Sorry 'bout that... my bad...)
  2. Hello Bandlab-ers. If I remember correctly, I'm pretty sure there was a GO TO SELECTION END shortcut in Sonar a couple of years ago. But now, looking at the documentation and the keyboard shortcuts editor, I cannot find any reference to it... So now I am even doubting there ever was such a feature... lol... Can someone help me find it again? (Or maybe there is a newer feature / workflow enhancement that I overlooked that kind of makes this "go to selection end" irrelevant nowadays?) Regards.
  3. Just a quick comment here to thank the developers for the RIPPLE EDIT mode feature. I, for one, was always a bit scared of RIPPLE EDITING modes and did not use them much in ANY software I used... (in my defense, that mode was not available in SONAR back when I was using it more frequently...) But now, in my current AUDIOBOOK session, I've been using this mode a lot and it has proven to be solid! It helps a lot with cleaning out the out-takes and other house-cleaning tasks prior to the actual audio processing.
  4. OK, tried it again after deleting all the FX in the session and making sure that all my gain knobs / faders were at their defaults... Same problem. Afterwards, I tried a "regular" export (master bus) and the rendered file has the same loudness as the original one in the session. So, I think there is something wrong with the EXPORT feature regarding ARRANGEMENTS. It looks like the exported files are louder than the source ones... Therefore, back to the old editing style for my audiobook session for now... as this issue is pushing my noise floor above the -60 dBFS requirement... Side note: I would also think it would be interesting to be able to select ARRANGEMENTS as a source for the BOUNCE TO TRACK feature, don't you think? Right now, that does not seem to be possible.
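A quick way to double-check a loudness discrepancy like this outside the DAW is to compare the overall RMS level of the exported file against the source clip. This is only a generic sketch, not anything specific to the DAW's export pipeline: the file names are placeholders, it assumes 16-bit PCM WAV, and it uses nothing but Python's standard library.

```python
import math
import struct
import wave

def rms_dbfs(path: str) -> float:
    """Return the overall RMS level of a 16-bit PCM WAV file in dBFS."""
    with wave.open(path, "rb") as wf:
        assert wf.getsampwidth() == 2, "sketch assumes 16-bit PCM"
        frames = wf.readframes(wf.getnframes())
    # Interleaved channels are pooled together, which is fine for a
    # coarse "is the export hotter than the source?" comparison.
    samples = struct.unpack("<%dh" % (len(frames) // 2), frames)
    if not samples:
        return float("-inf")
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms / 32768.0) if rms else float("-inf")

# Hypothetical file names -- compare the arrangement export with the source:
#   delta = rms_dbfs("arrangement_export.wav") - rms_dbfs("source_clip.wav")
# A delta well above 0 dB would confirm the export is hotter than the source.
```

A full loudness meter (LUFS) would be more rigorous, but for spotting a level jump between two renders of the same material, plain RMS is enough.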
  5. Hum... so, I think I just might have discovered a bug. Using the EXPORT AUDIO feature and selecting the desired arrangement, I noticed that the FX bins might always be processed. On my first export, I had kept my FX bins (tracks and buses) active while exporting. Obviously, the loudness of my rendered track was not similar to my source track. (Remember, I am using the arrangement feature just to make an edited version of the narration... audio processing will come later...) So, I did the export again. This time I deactivated all of my FX bins / ProChannels (tracks and buses) beforehand. I also UNCHECKED all the FX-related options in the export window (see capture). But STILL, the loudness of my rendered track is significantly higher. It seems to me that the FX are always processed, regardless of the export settings AND of the session's FX ON/OFF toggles... NOTE: all the tracks and buses in my session are MONO...
  6. Hello people! ... message in a bottle here... Allow me to write down my "in progress" reflections on best practices / workflows for editing VO / narrations. I am currently working on my first audiobook (coming back from an 80% DAW-activity stop that lasted the last decade... so be forgiving... building back my chops here...). Sometimes trying to sort out some thoughts might be helpful for me and others, and might even trigger a discussion about better workflows? Maybe not... anyway... bombs away. ... SO... being the lazy a**h*** that I am, I'm trying to re-invent the wheel on day one of working on a new project... sooooo me... (but I'll officially call it a "process optimization attempt") 😉

CONTEXT / OBJECTIVES
  • Working on an audiobook with a session that lasts about 3 hrs and should be edited down to about 90-120 minutes.
  • Working with, of course, TIME RULER MARKERS.
  • Trying to evaluate scenarios for sharing time-ruler markers with other software tools.
  • Testing the new ARRANGEMENT feature to edit out the outtakes and render correctly edited files (no audio processing yet...).
  • Evaluating alternative workflows.

QUESTIONS
  • Is there anything like clip-based time markers/rulers? (To comment at the "original recording" time position, in order to keep the session / project time ruler fully available for the editing process.)
  • Is there anything like a VST "time marker ruler" (in order to share time-based production notes between programs)? According to KVR in 2018, there is no such thing...
  • Is there a way to better take notes of the out-takes WHILE RECORDING? Like, I don't know... being able to use the mute tool to mute out portions you already know you won't be keeping? I was tempted to use a MIDI track with clips... Next time I'll try the ARRANGEMENT feature to mark them (I was just afraid to use it during mission-critical recording).
  • Should some time-marking capabilities be available at the TRACK FOLDER level?
  • Should 2 more marker types be made available, like a "cut start" and a "cut end" marker type? (I know I can just mark "I" and "O" in the comments, but a specific type could be more easily connected to other editing tools / workflows... or even to CAL? I dunno...)

ACTUAL ATTEMPTS
  • Tried to use the new ARRANGEMENT feature in CwbBl (like some kind of CUTLIST, as in Sound Forge... but here, of course, I identified what needed to be KEPT). So far so good... it is really scary, though, to watch the timeline shift around when rendering... it makes me feel like an audio region in the latter portion of my timeline will be deleted...
  • The "markers to arrangements" feature worked as expected.
  • The render proceeded as expected. Proceeding now to listen to this render to see if clicks and pops can be heard (of course, the "reading flow" might not be the best at that point... but I did try to evaluate the narrator's flow at the time of the edits, to minimize further editing down the road... I guess I am about to find out whether I succeeded or not... 😉)

WORKFLOW PROPOSITIONS
Since the ARRANGER feature scared me a little, I might try to use VOLUME AUTOMATION (or another type of automation that could be "shared" between programs) to cut out the bad takes of my recordings, then render the file to a new track and complete the edit using the strip-silence feature... I am kind of hoping that if I needed to also work in another program for specific reasons (like Sound Forge or SpectraLayers), I could manage to export the file along with the automation envelope, so I would have some kind of time markers shared between programs somehow...
  • PROS: would be "movable" with my original recorded clips, leaving my time-ruler markers / arrangement sections free to be used for the "finished" files...
  • CONS: navigation would not be as easy as moving between markers using shortcuts...
I'll keep you posted... maybe... Do you find this post somewhat interesting? Or should I just stop right here? Regards.
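As an aside on the cutlist idea above: the bookkeeping for keep-regions and shareable markers is simple enough to script. The sketch below is purely illustrative and generic, not tied to any DAW: the CSV layout and the function names are made up, not any program's actual marker import format, so they would need adapting to whatever the target tool (Sound Forge, SpectraLayers, etc.) actually accepts.

```python
import csv

def export_markers_csv(markers, path):
    """Write (seconds, label) markers to a plain CSV so another tool can
    re-import them. Hypothetical format -- adjust columns to the target
    program's expectations."""
    with open(path, "w", newline="") as f:
        w = csv.writer(f)
        w.writerow(["time_seconds", "label"])
        w.writerows(markers)

def kept_duration(keep_regions):
    """Total length of the kept material, given (start, end) regions in
    seconds -- i.e. the runtime of the edited cutlist."""
    return sum(end - start for start, end in keep_regions)

# e.g. a 3 h narration with two keep-regions:
#   kept_duration([(0, 1800), (2000, 5400)])  -> 5200 seconds (~87 minutes)
```

Even this trivial level of scripting gives a program-neutral record of the edit decisions, which is roughly what the "shared time markers" wish above amounts to.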
  7. Ok, finally had time to test this little beast again:
  • Unpatched the ADAT and WORD connections: using only the COAX input of the Tascam from the coax out of the JoeMeek = fail (kind of expected, though).
  • Same as above, but adding a WORD CLOCK connection (Tascam as master and JoeMeek as slave) = fail.
  • Tested the above in the Tascam's MIXER, INTERFACE and MIC PRE modes. Same result...
  • Re-patched the whole thing as it was originally, except that now I am adding a full SPDIF loop (coax) between my 01v96 and the Tascam = SUCCESS. All my analog and digital channels are working.
So, it looks like the COAX out of the JoeMeek is not working at all, or there really is a setting that I am overlooking. Checked its manual again, but everything looks fine. Giving up for now! Thanks!
  8. Thanks Craig. I'll start from your comment and experiment. I will do as much as I can to keep the US-20x20 as the master clock, though (to keep flexibility when working with Sonar projects that use different sample rates... oops... CwbBl... 😉). But still, fun fact: my whole setup stayed the same when I did my previous test, except for the device the JoeMeek SPDIF out was connected to... (test A, to the US-20x20... NOT working / test B, to the Yamaha 01v96... WORKING). The whole time, the ADAT connections (full in/out loop) between my US-20x20 and the 01v96 kept working. I'll post an update soon.
  9. Hello! I'm re-patching my studio today and stumbled on a little unexpected problem: trying to use both SPDIF and OPTICAL ADAT on the Tascam (ADAT connected to my Yamaha 01v96 and SPDIF to a JoeMeek preamp).
Setup: the studio is up and running, with no sync issues at all between devices. The device is in Audio Interface mode through the ASIO driver. Clocking: the US-20x20 is the master clock, via WORD CLOCK to the JoeMeek; the Yamaha 01v96 syncs via ADAT.
Problem: the SPDIF signal is not being received by the US-20x20 (the ADAT signals are working as expected). Sending an SPDIF signal from a JoeMeek TWINQ preamp to the SPDIF input of the US-20x20 does NOT work... BUT... when connecting the same cable from the JoeMeek directly to the SPDIF input of my 01v96, it works instantly. I also tested the US-20x20 in PREAMP and MIXER modes. SPDIF signal still not received...
Theory: must I complete the "SPDIF loop" (using both in/out connectors) on the US-20x20 for it to work? Asking because the JoeMeek only has an SPDIF out connector... Or a driver issue?
Any other US-20x20 users around here successfully using both ADAT and SPDIF signals simultaneously? Please advise. Regards, Cakewalkers!
  10. Thank you very much, Noel. Appreciated. Just to clarify, I'm not even remotely requesting advanced DAW features here... just a way to properly pass information between collaborators in the simplest way, while having a strong basis (like project-level KEY and TEMPO, and timeline-synced comments). The KISS principle definitely applies here. Thank you for considering the project info. It is great to know it is being considered. You guys are working tons and are proposing great tools. As a long-time user (since the '90s) I can see how much work has been put into CWBBL (which I still call Sonar...). Regards.
  11. Hello guys. Yesterday I attempted for the very first time to use my BandLab account and BandLab's web app to collaborate with a friend on a little project. I'll write down my reactions and questions in the hope of hearing about how people are using this workflow to collaborate with people on other DAWs. I would also like to know if we have any info about BandLab's development roadmap regarding this use case. (Heck, I am even wondering whether a section of this forum should be dedicated to this topic...)

CONTEXT
  • 2 people using a BANDLAB web project to collaborate
  • I'm using CWBBL
  • Collaborator is using ABLETON

For me, the essential data that should be ported between CWBBL and BandLab's web app could be something like:
  • Connect the comments tab (web app) and the PROJECT INFO tab in CWBBL (not connected at the moment, it seems).
  • GENERAL TEMPO DATA (not a complete tempo map). That works fine... problem is, Ableton can have decimals in its tempo... like 80.15 BPM... not possible in the web app, I think... so sync issues might be coming down the road... we will have to sync "manually" to the reference WAV file.
  • Is the KEY data connected? Forgot to test that yesterday (global project key).
  • Track comments would be great (I have not yet found them, if they are implemented).
  • Markers / comments on the project's timeline are important IMO (I could not even find a way to add a marker in the web app...).

I understand that they are developing a live session feature... I, for one, would like to see that happening, BUT only once the "basics" I'm talking about above are implemented. Am I getting something wrong? Missing something about the global concept? Overlooking features altogether? I'm a bit puzzled, actually. Would you like to share thoughts / experiences? Regards.
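On the tempo-decimals point above: the drift caused by one side rounding a fractional tempo is easy to estimate. A minimal sketch, using the 80.15 BPM figure from the post as the example (the function name is just for illustration):

```python
def drift_seconds(true_bpm: float, rounded_bpm: float, minutes: float) -> float:
    """Timeline drift accumulated after `minutes` of audio when one DAW
    plays the same beats at a rounded tempo instead of the true one."""
    beats = true_bpm * minutes            # beats elapsed at the true tempo
    true_time = minutes * 60.0            # seconds of audio at the true tempo
    rounded_time = beats * 60.0 / rounded_bpm  # same beats at the rounded tempo
    return rounded_time - true_time

# e.g. 80.15 BPM rounded down to 80 BPM:
#   drift_seconds(80.15, 80.0, 1.0) -> ~0.1125 s of drift per minute of audio
```

So after a few minutes the grids are audibly out of step, which is why syncing "manually" to a reference WAV file, as suggested above, is the safer fallback.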
  12. Hello again. I continued a little while this morning, and compared the PC373D with other combinations of regular headphones fed from a surround bus "downmixed" to stereo using Waves' NX plugin. Generally speaking, the NX seems to be more accurate... So I'll have about 3 different setups to fake "surround" with and get used to surround workflows.
  13. Hello everyone. Just a quick comment: I decided last week to buy a new mic/headset combo for my regular work in IT... and I thought about finally giving basic and affordable surround technology a shot (well... simulated, that is...). So, after about 20 minutes of online shopping, I chose the Sennheiser PC373D, clicked "order" and thought "hey, pretty sure it won't be usable as-is with any DAW... but I'll try it anyway". Well, got it today. Plugged it in 15 minutes ago, fired up Sonar (I know, but I always adored that name...), tried the WASAPI (shared) driver (I never got anything to work properly with onboard audio devices before... or anything at all, for that matter, even after 60 minutes of failed tweaking...). Well, the PC373D worked spot-on!! No snap-crackle-pop-fests going! Clear, uninterrupted audio flow right away. Sonar detected enough stereo outputs on the Sennheiser to let me quickly goof around with 5.1 up to 7.1 configurations. I then fooled around with the surround panner and thought it was actually pretty good (OK... my first time ever with surround, but I could feel the movement in "space" well enough and thought it was precise enough to actually try stuff out... I'm hoping that, from there, only minor tweaks and validation will be needed when listening to this project on a real surround setup). ... OK... OK, sometimes I can hear some light flams or artifacts, but for now I would guess that's because I randomly connected the outputs instead of looking up a proper setup reference document (saying this because sometimes panning the audio "hard behind" seems to be more precise than panning "front-standard", so...). Just thought I should let you know. Who else is goofing around with surround headphones? I'll test my Waves NX plugin later. 😉
  14. Thank you both for your replies. Very helpful.