LarsF

Posts posted by LarsF

  1. Might this setting help?

    TTSSEQ.INI settings

    "IgnoreMidiInTimeStamps=<0 or 1>   Boolean, default 0 (disabled). This line determines whether or not Cakewalk ignores any MIDI time stamping that a MIDI driver does. If you're experiencing increasing delays between the time you play a MIDI note on a controller and the time you hear Cakewalk echo it, setting this line to 1 may help. Also, if you find that Cakewalk is recording MIDI data at a different time from when the data was played, setting this line to 1 may help. If the MIDI driver is using a different clock from Cakewalk, the time discrepancy increases the longer the MIDI driver is open, so you need to tell Cakewalk to ignore the timestamp that the MIDI driver adds to the data (set the value to 1)."
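    For reference, a minimal sketch of how the tweak would look in TTSSEQ.INI. The [OPTIONS] section name is an assumption from memory, not taken from the quote above; check where the key belongs in your own copy of the file, and close Cakewalk before editing:

    ```ini
    ; TTSSEQ.INI -- section name assumed, verify against your own file
    [OPTIONS]
    ; 1 = ignore the MIDI driver's own time stamps, 0 = use them (default)
    IgnoreMidiInTimeStamps=1
    ```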

  2. This again raises the need for a full document with all the alternatives represented.

    - mono interleave - mono clips

    - mono interleave - stereo clips

    - stereo interleave - mono clips

    - stereo interleave - stereo clips

    - then all combinations of mono/mono, mono/stereo and stereo/stereo plugins in series, and how these will be routed

    - e.g. mono interleave with first a mono/mono plugin and then a mono/stereo plugin

    - and any combination thereof in series

    Just a table showing this would be incredibly useful, so you have something to work from.

    The manual also states something about a checkbox in the Plug-in Manager that changes some mono-related behavior.

    I find the manual really good and extensive, but it could be improved with an addendum article spelling this out, to take the guesswork out.

    A dialog just showing the pins of the track's plugin bay and each plugin's pins would be awesome.

    Some DAWs allow you to alter this, but as a first step just seeing how things are routed would really make a difference.

    I once accidentally inserted a plugin of the wrong kind from the Recently Used list, and you scratch your head bald over what you hear.

    Just popping up this kind of dialog showing the routing would let you fix your mistakes.

  3. Many thanks.

    So if a stereo plugin is last, can one count on both of its outs going directly to those two output points, instead of the mono panner doing acrobatics, either dropping one channel and panning the other or similar?

    So it is changed to a stereo panner, or something like that.

    As I recall, Reaper sent the same mono signal to both ins of a stereo plugin.

    Just a simple matrix to show where audio went.

     

    To be sure, I put the stereo plugin on a bus and use a pre-fader send from the mono track to it (with the track fader down), so the bus gets the same signal on both ins if the panner is centered.

    I thought about what happens if you just put the plugin directly on the track to simplify things, but it's nice if it's predictable.

    EDIT: Putting the plugin directly on a mono track, it seems you have to set the interleave to Stereo for the plugin to work correctly.

    If Cakewalk in the future gets some visual showing how the routings are done, preventing strangeness that first has to be discovered and then fixed, that would be great. How much strangeness comes from inserting a couple of mono plugins plus a stereo plugin, or whether to go stereo plugins all the way because that works best, I'm not sure. And I'd rather not spend hours finding out either.

    I know that in Reaper and Cubase you have full control over what to do with a plugin's channels: cut them, reroute them, etc.

    Before I knew about the routing options in Cubase I got really strange results, because one channel just bypassed the mono plugin.

    In Cakewalk:

    a) do you just set the interleave to stereo, and stereo plugins are then handled correctly from the mono clips?

    b) or is there another way?

    Thanks.

  5. I found a plugin that works really well in Cakewalk and can be used on a mono or stereo track. The plugin does not need surround channels. But you can create a surround environment for parts of mixes as you please, for headphones or speakers.

    https://wavearts.com/products/plugins/panorama-6/

    Seems really cool, and the good manual actually explains what every control is about and how it affects the end result.

    It models audio bouncing off harder materials and how a believable audio image of a room is created. You get exact positioning of source and listener: above, behind, below or whatever. Doppler effect as well for moving objects. For the reflection model you specify the material of each wall, the floor and the ceiling.

     

    There is a load of binaural algorithms, i.e. different models of how the listener's head is emulated.

    Licensing is OK, 3 PCs for the same user, with either a serial number or iLok, and the price tag is good too. I thought it would be perfect for my videos of timelapses and such, targeting headphones with the mixes (but speakers can be used too; there is an elimination for the crosstalk that appears between speakers).

    I've run it for a couple of hours now, and it is really cool.

  6. I listened to a whole bunch of binaural stereo demos and found the same weakness on headphones in all of them: they miss the horizontal plane in front of you and behind you. This needs to be created on headphones, especially for VR, which this plugin claims to be for, but it is useless for that purpose since that plane was not there in any demo I found.

    The demos were made for stereo speakers in front of you; then that frontal horizontal plane is there automatically. But behind you it is still just as bad, with no full horizontal plane. I thought it would be fun to do headphone-targeted mixes, but with plain stereo on headphones the image grows upwards, with more reverb moving objects away rather than placing them in front of you, which is what I expected from something claiming to be for VR.

    If you move down a hallway with a sound source in front of you, it should feel like it is in front of you, not just like your head blown out of proportion. For VR goggles for games this is essential, or it doesn't feel real.

     

    I just love Wishbone Ash and this song with this kind of demo

     

  7. It could be that you're used to other DAWs that don't have audio tracks on the outs at all, only automation tracks in the track view.

    Cubase and Studio One, as examples, make you fiddle with extra tracks. Cubase even hides frozen audio, so it's not visible for preview when doing automation.

    Cakewalk's implementation is the best there is, in my view.

    You just freeze, and you get audio where you had meters going before, which gives a good overview for doing automation later.

    Especially when doing multi-out VST instruments it is a great feature.

    And unfreezing is as easy as freezing, if you later continue working on that track.

    All other DAWs always created extra tracks to keep track of, and some even required manual handling of freeing resources and such.

     

    But if you really need separate audio somewhere else, you got your answers earlier.

  8. 5 hours ago, Lum Tham said:

    Dear all,

    Goodday.

    I watched youtube videos and the application of compressor seems to be limited to only apply as FX on vocals. 

    You might say there are two ways to use a compressor: as an effect, or to even out levels.

    On guitar you might have a nice sound strumming chords, pushing your amp in one way; then fill-in single notes may sound thin. A compressor can remedy that, so the much weaker single notes are lifted in level a bit.

    Some kind of limiter on the master is common, I think. As said, it can be seen as a compressor with a very high ratio, 10:1 or more.
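    The level-evening idea above can be sketched as a static compressor curve. This is only an illustration; the function names and the threshold/ratio numbers are mine, not from any particular plugin:

    ```python
    def compressed_level_db(level_db, threshold_db=-20.0, ratio=4.0):
        """Static compressor curve: levels above the threshold are reduced.

        A signal that exceeds the threshold by X dB comes out only X/ratio dB
        above it, so quiet notes end up closer in level to loud ones."""
        if level_db <= threshold_db:
            return level_db  # below threshold: untouched
        return threshold_db + (level_db - threshold_db) / ratio

    def limited_level_db(level_db, ceiling_db=-1.0):
        """A limiter is the same curve with a very high ratio (10:1 or more)."""
        return compressed_level_db(level_db, threshold_db=ceiling_db, ratio=20.0)

    # A strummed chord peaking at -10 dB is pulled down, while a thin single
    # note at -30 dB passes untouched, so the gap between them shrinks:
    print(compressed_level_db(-10.0))  # -17.5
    print(compressed_level_db(-30.0))  # -30.0
    ```

    The same curve with a ratio of 20:1 and the threshold used as a ceiling behaves like the master-bus limiter mentioned above.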

  9. I've just found over the years that when automatic doesn't work, you go manual to have full control.

    I assumed playback is handled by the engine, but where on the timeline anything happens is up to the devs. An adjusted seek would at least play back where the timeline is. And if you slave to your own clock, that offset would remain; that was my attempt with this angle.

    But export might have issues writing the audio in the right spot, since nothing is automatic anymore. Everything up to export could be handled with these seeks, I believe.

    But maybe, as Corivo said, the MF engine only does approximate positioning, and that may not be what Heather and other professionals need.

    So the work would be pointless if there are other issues anyway. Just a thought.

     

    I just do audio export, not video, and align the audio in the NLE. So I'm not sure why professionals don't do that, why they stick to the DAW.

    I found the implementation really good once I added black frames at the start. Just zooming in until the frame numbers on the thumbnails are consecutive and finding where the first non-black frame appears, I set the start marker and align bar 5 to that spot, and do the audio export from that spot too.

     

    So another route to solve it might be to handle the presented frame numbers, or whatever HeatherHaze needed, so that the spot is known even when introducing offset frames at the start. I'm not sure exactly how she meant it. If your actual video starts 500 frames into the timeline after the black frames, you subtract that value from what is presented, so the position in the original is always at hand. Then she could use the black-frames trick at the start. Maybe she could chime in and tell us exactly what the needs are.

    Then for export one could introduce a checkbox saying something like "substitute preview video in project, ask for original at export" or similar. So the black-frame-lead version in the project is replaced by the actual video for the export, if the project audio is written into it.

    Then the audio is replaced in that separate video. If you have 4K-8K video, you probably would not want that for preview anyway; it's an unnecessarily heavy load for preview purposes.

    Just another bunch of ideas... in case this is a turnoff for those not happy with Cakewalk for video.

    Start MS rant.....

    About MS, they seem to have been flooded with rookies in recent years. The attempt to replace the Xbox forum was a disaster made by real beginners, and the result was unusable for a year until some other people took over. They destroyed a fully working forum, and I'm not sure it has recovered yet.

    I stayed on VS2005 because they removed the fully integrated help and search system; I still use it. But it is nice that they maintained it with fixes to keep it working on Windows 7 and 8. A load of patches, but still, they did it.

    On the good side, they are learning from the customer base still left. The FLS issue was fixed, etc. But that it even was an issue shows how little they know about how the PC is used out there, how DAWs load hundreds of plugins and so on. Most households probably do fine without a PC at all; all their internet and social media needs are handled by smartphones and tablets, and MS missed that train altogether. So they had better listen to the user base with big ears if they want to keep a PC market in the future, the foundation of MS as a business.

     

    Windows 10 is a mockery, with all these updates turning the machine upside down with every new release. I have avoided getting a new machine for 4 years now; I simply don't want an ever-changing machine. They should go back to making Windows 11, 12 and so forth, and let people have fully functioning machines that are not interrupted by updates. It's just convenient for MS to maintain one platform, so they go cheap there too, making a platform targeting people who mostly surf the internet, and who probably use tablets instead anyway.

    After all these years it would be nice if MS could make a safe OS to start with, instead of just adding stuff that makes it vulnerable and has to be constantly patched.

     

    End rant....

  10. Found this article

    https://docs.microsoft.com/en-us/windows/win32/medfound/seeking--fast-forward--and-reverse-play

    Two calls handle seeking in video:

    HRESULT GetCurrentPosition(MFTIME *phnsPosition);

    HRESULT SetPosition(MFTIME hnsPosition);

    So why can't the DAW timeline offset itself relative to the video and just set another time?

     

    If using GetCurrentPosition, you add the internal timeline offset to the returned position.

    When using SetPosition(), you subtract the offset from the timeline position before passing it in.
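    The add/subtract bookkeeping described above is trivial to sketch. This is only an illustration of the arithmetic, in Python rather than the actual COM calls; the function names are mine, but MFTIME really is a count of 100-nanosecond units in Media Foundation:

    ```python
    HNS_PER_SECOND = 10_000_000  # MFTIME counts 100-nanosecond units

    def seconds_to_hns(seconds):
        """Convert seconds to the 100-ns units MF positions use."""
        return round(seconds * HNS_PER_SECOND)

    def timeline_to_media(timeline_hns, video_start_hns):
        """Position to hand to SetPosition(): subtract where the video sits
        on the DAW timeline. A negative result means the play cursor is
        still before the video begins, so no seek should be issued."""
        return timeline_hns - video_start_hns

    def media_to_timeline(media_hns, video_start_hns):
        """Timeline position for a value reported by GetCurrentPosition()."""
        return media_hns + video_start_hns
    ```

    With the video placed 8 s into the timeline, seeking the timeline to 10 s would call SetPosition() with the equivalent of 2 s, and a GetCurrentPosition() result of 0 maps back to 8 s on the timeline.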

     

  11. 35 minutes ago, Jean Corivo said:

    However, changing the starting point is impossible, and the entry points operate in an approximate manner.
     

    By "approximate" do you mean that when we come to a marker from the left or from the right, playback does not always start on the same frame?

    When you change an entry point, is that when you place the cursor at any position on the timeline and start playback?

    What kind of offset are you giving the MF calls for this?

    A frame offset from the start, or a timecode offset?

  12. I find Cakewalk's video handling pretty good, but I don't depend on a piece of music being frame-accurate at scene shifts.

    I don't notice it's there, which is a good thing. Both Cubase and Reaper felt seriously sluggish starting/stopping the transport etc.

    Since I use a mix of locked and non-locked markers, though, it would be swell to be able to jump to the previous/next locked marker and the previous/next non-locked marker separately. So you could quickly jump to the next scene (locked in time) or the next musical spot (non-locked, on the grid).

    Maybe a modifier key combined with the current previous/next-marker shortcut, or a separate key binding for each.

     

    As a retired programmer, I don't see why you couldn't just subtract a frame offset from the place where the video is located, and run.

    This frame offset is saved with the project.

    - timeline pos minus frame offset is negative: no video runs while the transport runs

    - timeline pos minus frame offset is positive: the video runs from that calculated spot while the transport runs

    When you position the cursor on the timeline, this somehow already translates into a position in the video; playback starts and that location is found.

    So why the call that starts the video could not just as well adjust the position by some frame offset beats me.

    A file has a file position telling where the next write goes; isn't there the same thing for a video?

    It just seems so unlikely that this does not exist: either a parameter that is a frame offset or a timestamp into the video, or a separate call to move that position, which I assume is what happens when you put the cursor at an arbitrary spot and then start playback. The video is synced to that.
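    The two bullet cases above can be sketched in a few lines. Purely illustrative; the function name is mine and not from any Cakewalk API:

    ```python
    def video_frame_for(timeline_frame, video_start_frame):
        """Apply the saved project offset: returns the video frame to show,
        or None while the play cursor is still before the video starts."""
        video_frame = timeline_frame - video_start_frame
        return video_frame if video_frame >= 0 else None

    # Video placed at frame 120 of the timeline:
    print(video_frame_for(100, 120))  # None (cursor before the video: nothing runs)
    print(video_frame_for(150, 120))  # 30   (video runs from its own frame 30)
    ```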

     

    Or some other solution along the lines Heather referred to: having black frames at the start to act as an offset into the timeline, where she needed the correct frame, for scene changes I assume. Could there be a calculation so that such an offset method would work for professionals doing score?

    I have never used the project offset, to say that bar 1 of the project is somewhere other than far left (if Cakewalk has that).

    Is there a solution in this?

    Since I slide in some initializing clips before the project proper, usually starting at bar 5 or 10 or so, it seems to create problems if I cannot go earlier than

  13. I don't know why MF would care if the DAW just used an offset for where to start playing the video.

    What decides whether the video plays, and from where on the timeline?

    Why can't Cakewalk just keep an offset for where the video starts? I'm just puzzled over this.

    As I recall, you can turn off playback of a loaded video.

    So why not offset it in code, is my question as a retired programmer. MF must get a time from somewhere, just as Cakewalk now relocates the video when you have moved the playback cursor and then start playback. So I guess Cakewalk already has an internal timeline offset that it relocates to.

     

    I feel that Studio One is as low on resources playing back video, like mp4, as Cakewalk, and Studio One allows an offset.

    And Studio One immediately locates the video with the transport stopped when you relocate the project cursor, which Cakewalk does not; it would be great if it did.

    I'm not sure what engine is used, but it is really good in my experience.

    Cubase's new video engine is horrible on resources, so I used VidPlayVST, which I linked to, instead: CPU dropped by 16 percentage points, the project going from 36% to 20% total with VidPlayVST instead of the native video engine. I waited a bleeding 18 months for Steinberg to finish the new video engine, and when it came it was a resource hog.

     

    VidPlayVST also allows you to place a MIDI C note anywhere to mark where the video is supposed to start, if you can make do without thumbnails.

    It is based on some open source code, I think.

     

  14. Some software seems not to accept .mov files on the grounds that the container is Apple QuickTime or similar, without even looking at what video format they contain.

    I got that from Corel's NLE products, Pinnacle and Movie Studio or something.

    I get .mov files from Canon cameras that contain H.264, bring them into PowerDirector, produce an .mp4 instead, and bring that into Cakewalk. No issues, and no need for .avi. Resource friendly and everything; I hardly notice I'm doing video (1080p).

    But if the content is an Apple QuickTime codec inside the .mov files, that may be another story.

  15. I just did it like this with an mp4:

    - dragged it into a video editor like PowerDirector or whatever

    - offset the file on the timeline, in my case just 8 s in

    - produced a new mp4, which then has black frames at the start of the video

    - this does not even re-render the file and reduce quality if you use smart rendering (called SVRT or something)

    - imported that into Cakewalk

     

    So whatever you used to convert the mov file to mp4 can probably do this too.

    Video runs really well in Cakewalk like this, and very low on resources compared to Cubase, for one.

    The new video engine in Cubase is the worst crap I have ever used for video.

    Then using scene markers locked in time works really well.

     

    If you want a second video, like when you get an updated video that changes your project because the scenes were cut differently, I use this:

    https://vidplayvst.com/downloads.htm

    So you can keep the original video in that, drag the new one into the regular place, and compare where they differ.

    VidPlayVST allows offsetting by itself, with a MIDI C note placed where the video should start.

    I used VidPlayVST instead of the native engine in Cubase while I was using that; much lower on resources (but no thumbnails).
