-
Posts
1,500 -
Joined
-
Last visited
-
Days Won
1
Everything posted by Mark Morgon-Shaw
-
DAWproject file exchange standard
Mark Morgon-Shaw replied to Starship Krupa's topic in Feedback Loop
I collab with lots of people on music for TV (vocalists, more specialised instrumentalists, etc.) and I'm yet to meet anyone working in that part of the industry who exchanges DAW files with anyone else. We all just send audio stems, even the folks who use the same DAW, because the likelihood of everyone having the same plugins and VSTis is slim to none. A guitarist buddy of mine once sent me his project to mix, and we made sure he didn't use any plugins I hadn't got, so I just took his Cakewalk project... Ughhhh... garish colours everywhere, things laid out in the wrong order, no busses, etc. It's just far easier to import audio into your own preferred template, and on the rare occasion we ever decide we need to re-do a MIDI part, we just ask for it. So I vote "No", and to spend the dev time on more fundamental issues like the big missing features people have been asking for over at least the last 5 years (i.e. sampler, chord track, better hardware integration, etc.) rather than devising a complex solution to a problem that is better solved in other ways. -
It's not a fix; it seems you have a fundamental misunderstanding of the workflow Melodyne was designed around. Let's go back in time to 2001, when Melodyne first came out. It was a standalone application that you had to load or record your audio into so it could do its thing. https://www.soundonsound.com/reviews/celemony-melodyne

A couple of years later we got Melodyne Bridge and ReWire, so we could transmit our DAW audio directly into Melodyne instead of importing/exporting between two separate apps. In 2011 we got ARA (Audio Random Access), which meant we no longer had to "transfer" our audio tracks into Melodyne manually. Melodyne could just "access" them with no other transfer necessary, which gives the feel of it being more like part of the DAW than the separate app it actually is.

Over all those years the recommended workflow has always been something like:
1. Record parts
2. Comp parts into a complete performance or section
3. Transfer the comped audio section into Melodyne
4. Perform non-destructive edits with Melodyne
5. Apply the edits destructively and transfer the corrected audio back into the DAW

Whilst the methodology has changed from separate programs, to bridge plugins, and then to ARA, this workflow has remained unchanged. From what I have read, you are trying to shoehorn an immediate/online workflow into what is essentially an offline process performed by a separate app. Now I'm not saying it wouldn't be nice if it just "worked" as you think it should and we had rock-solid, glitch-free, instant swapping of audio between Melodyne and Cakewalk regardless of where we punch in/out, etc. But the truth is it's not really designed to do that, and if you'd been using it since 2001 like I have, and been through all the various stages mentioned above plus all the changes to Cakewalk over those 20+ years, you'd probably understand why it doesn't work that way and avoid the inevitable problems it causes.
Maybe they can invent ARA 3 and make this a reality, but for now you're either going to have to change your workflow and make a comp of your vocals, then work with Melodyne once you've finished tracking (i.e. how it works in professional studios). Or alternatively, use a pitch-correction plugin like Auto-Tune that operates in real time, so you can make as many changes as you want on the fly because the audio is not being transferred to a different app to be processed. Either way, the goal would be to deliver a more fully finished track to Melodyne for further enhancement, not to use it as part of the recording process.
-
DAWproject file exchange standard
Mark Morgon-Shaw replied to Starship Krupa's topic in Feedback Loop
But do the mixes come out any better? -
DAWproject file exchange standard
Mark Morgon-Shaw replied to Starship Krupa's topic in Feedback Loop
I never said they did. I just said learn one DAW really well because they can all do everything. -
I'm not talking about loading the samples for the project. Even if it's empty, when it loads for the first time it does a scan/update for any new libraries, which takes time.
-
Have you by any chance got Kontakt 7 in any of the slow-loading projects? It does a re-scan of everything the first time you load it, which slows down project load time considerably here. Once it's done that the first time, it subsequently loads much faster.
-
Or paste them into ChatGPT and ask it to summarize
-
We have trodden very similar paths...

Fostex X28H 4-track synced to Music-X
8-track blackface ADAT synced to Cakewalk
Cakewalk Pro Audio 9 - I still synced it to the ADAT for the main audio tracks, with a few extra tracks recorded into the DAW
A bedroom full of hardware synths, keyboards, reverbs, delays, compressors and other outboard - I even had a patchbay

Then at some point I just sold it all and did everything in the box - so much easier and more productive. Now I make 100 tracks a year.
-
Sounds like throwback kinda music. I've heard worse with BIAB, but once you've heard a few they start to get very same-y. Replacing the BIAB parts would go a long way to fixing this.
-
If you're making music just for yourself, you might be missing the point. Let's hear it.
-
Interesting...

Universal Music Group (UMG) and BandLab Technologies, parent company of social music creation platform BandLab, have announced plans for what they call "an expansive, industry-first strategic relationship concentrated on artificial intelligence". According to UMG, the partnership will be "centered on empowering the next generation of artists, including within BandLab's global community". The statement adds that "the alliance will advance the companies' shared commitment to [the] ethical use of AI and the protection of artist and songwriter rights".

BandLab's partnership with the world's largest music rightsholder indicates a significant statement of intent for the Singapore-headquartered music technology firm. It also arrives at a pivotal time for the wider music industry amid the rise of AI use in music making. The company's powerful flagship music-making app BandLab, partly built on AI-driven tech, has attracted over 60 million registered users to date and is claimed to be "the world's largest social music creation platform". It also runs a tool called SongStarter that uses AI to let users generate musical "ideas", including beats, melodies, and chord progressions, that can then be built upon via the main BandLab platform.

Today's news follows BandLab's announcement of its support for the Human Artistry Campaign, becoming the first music creation platform to do so. BandLab Technologies also recently hired AI music expert Drew Silverstein (co-founder and former CEO of AI-driven music platform Amper Music) as Senior Advisor, AI, Innovation, and Strategy. Together, UMG and BandLab Technologies say their two companies will "pioneer market-led solutions with pro-creator standards to ensure new technologies serve the creator community effectively and ethically". Today's news marks Universal Music Group's latest AI-related alliance.

In August, YouTube and UMG formed a partnership that they say will jointly develop AI tools offering "safe, responsible and profitable" opportunities to music rightsholders. There are two key aspects to that partnership. First is a 'Music AI Incubator' at YouTube - a program by which new tools and innovations will be developed at the company in close conjunction with artists and the music business. For now, this 'Incubator' is kicking off via a partnership between YouTube and UMG, incorporating feedback and guidance from UMG-signed talent. The second major element of YouTube and UMG's announcement was that, within and beyond YouTube's AI 'Incubator' project, YouTube is publicly committing to three principles/pledges that will guide its development of music-based generative AI tools in the future.

Commenting on the partnership with BandLab, Sir Lucian Grainge, Chairman & CEO, Universal Music Group, said: "We welcome BandLab's commitment to an ethical approach to AI through their accessible technology, tools and platform. We are excited to add BandLab Technologies to a growing list of UMG partners whose responsible and innovative AI will benefit the creative community."

Michael Nash, EVP and Chief Digital Officer, UMG, added: "Meng Kuok and his team at BandLab Technologies, as well as the Caldecott Music Group network, have achieved impressive scale at the dynamic intersection of social music and creator technology innovation. At UMG, we constantly seek to empower and support both established and emerging artists. Given BandLab's passion for music and their dedication to nurturing early-stage artistry at the nexus of ecosystem transformation, they are an excellent partner that is compelling for us on multiple fronts. This is more important than ever right now as AI assumes an increasingly prominent place in the evolution of music creation tools. We look forward to establishing new creative, marketing, and commercial opportunities for our artists and actively engaging with BandLab's creator community through a highly synergistic structure, collectively protecting today's and tomorrow's future superstars through responsible approaches to utilization of AI in the creative process."

Meng Ru Kuok, CEO & Founder, Caldecott Music Group and CEO of BandLab Technologies, said: "BandLab Technologies and our wider Caldecott Music Group network are steadfast in their respect for artists' rights and the infinite potential of AI in music creation, and we believe our millions of users around the world share in this commitment and excitement. Though new technologies offer unbelievable possibilities to break down more barriers for creators, it's essential that artists' and songwriters' rights be fully respected and protected to give these future generations a chance of success. As demonstrated by BandLab embracing the Human Artistry Campaign principles and this collaboration with UMG, we are committed to getting it right. Through our joint efforts, we anticipate a future of music that is innovative, rewarding, and endlessly inspiring."
-
Some uses
- You can quickly check your project's waveforms and loudness levels.
- You can easily see where clipping has occurred.
- You can use the Stats tab to see your loudness stats.
- You can save time by not having to output a file, which can take a while.
- You can easily identify any issues with the waveforms before committing to a full render.

About the loudness stats
Once the dry run process is finished, you can view the loudness statistics for your project by clicking Stats/Chart → Open Render Statistics. This includes the integrated LUFS, true peak, and maximum peak levels, as well as the short-term and momentary loudness levels. This information is useful for ensuring that your track meets the loudness standards of streaming services like Spotify, Apple Music, and YouTube. These render statistics can be enabled, disabled, and configured in Preferences → Audio → Rendering.

Useful actions
There's a bunch of useful actions that let you dry-run render things like your mix, your tracks, a time selection, and even selected items! Open the Actions menu (shortcut: ?) and search for "dry run"; you'll find:
- Calculate loudness of master mix via dry run render
- Calculate loudness of master mix within time selection via dry run render
- Calculate loudness of selected items via dry run render
- Calculate loudness of selected items, including take and track FX and settings, via dry run render
- Calculate loudness of selected tracks via dry run render
- Calculate loudness of selected tracks within time selection via dry run render
- Calculate mono loudness of selected tracks via dry run render
- Calculate mono loudness of selected tracks within time selection via dry run render
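As an aside for anyone curious what those stats actually measure: true peak and LUFS require oversampling and K-weighting filters per ITU-R BS.1770, so this isn't what REAPER computes internally, but plain sample peak and RMS in dBFS can be illustrated in a few lines of Python (a rough sketch of the underlying maths only):

```python
import math

def peak_dbfs(samples):
    """Sample peak in dBFS for float samples in the range [-1.0, 1.0]."""
    peak = max(abs(s) for s in samples)
    return 20 * math.log10(peak) if peak > 0 else float("-inf")

def rms_dbfs(samples):
    """RMS level in dBFS - a crude loudness proxy, not true LUFS."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms) if rms > 0 else float("-inf")

# A half-scale square wave peaks and averages at about -6.02 dBFS,
# since 20 * log10(0.5) = -6.02.
square = [0.5, -0.5] * 100
print(round(peak_dbfs(square), 2))  # -6.02
print(round(rms_dbfs(square), 2))   # -6.02
```

The gap between a signal's peak and its RMS is the crest factor, which is why a heavily limited master can show a high integrated loudness while its sample peak stays just under 0 dBFS.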
-
True, Misha, but in the world of production music, having a taste that resonates with an audience isn't just a preference; it's a prerequisite. Whilst your passion for BIAB is admirable, I can't help but wonder why I've never heard any of your music.
-
Plug-in Manager Scan Window Shrinks Midway
Mark Morgon-Shaw replied to sjoens's topic in Feedback Loop
Are you sure you're not just moving further away? -
What could be cooler is a new breed of 'Skip Marker' that, when reached, will jump to the next marker. It could just be an added option on the existing markers
-
Multiple Multidock Instances Request
Mark Morgon-Shaw replied to SloHand Solo's topic in Feedback Loop
Yes I suggested something similar about 4yrs ago -
An informed one. I hear tracks from people that use it all the time. Dreadful.
-
What the F*** do People Do With the Power Supplies???!!!!
Mark Morgon-Shaw replied to Byron Dickens's topic in Gear
Me too. Then I still can't find the one I want and have to buy a new one -
Great. Do I take it the next release is new Sonar and not CbB ?
-
Still annoying though, especially if you have a lot of busses. I've started a new project tonight and can confirm they won't stay hidden. This has not happened in previous versions AFAIK, as I've always had them hidden. Now I can't permanently hide them; they just re-appear when the project is re-loaded. Annoying
-
You could get a much better result from EZdrummer, EZkeys & EZbass
-
I use it a lot and I've noticed no difference in that respect here.
-
Because I rarely use them and they waste space in the console buss view ...so it ends up like this Instead of this