Teegarden

Everything posted by Teegarden

  1. If well implemented, the aim is to do the exact opposite of this... hence the option to tweak afterwards to your personal liking. Good models are better at pattern recognition and identifying details than humans; see some examples below in the post. I know the possibilities of these new technologies well, as well as the limitations: years ago I initiated an international machine learning project which is now also supported by the EU.

Again: speed things up. Right now I need to check the tempos, select the kind of measure etc. Also, I need to experiment with the settings to get the desired human feel, and with complicated solos you need to select parts with entirely different notes and speed to treat separately (in my experience, but maybe I'm missing something here). A well-defined model would do most of it and would give me some fast parameters that I could use to make it perfect. I do admit that the work needed for my proposed feature might be too much compared to the potential benefit. On the other hand, I could see it being developed along with several other deep learning features aimed at different parts of audio enhancement, all from the same feed used to train the models. There are probably many other DAW procedures that could benefit from AI solutions as well.

The training: good question. I don't know the exact scale of the possibilities and limitations for the bakers. There are many free AI systems being developed, so I could imagine perhaps partnering with one of these developers. Perhaps with GPU Audio (see point 3). I guess it could run extremely well if you implement technologies like those developed by GPU Audio. They are also aiming at machine learning implementations and are IMHO an important direction for the future of audio technology. They’ve now forged official partnerships with AMD and NVIDIA, the latter of which recently hosted well-attended GPU Audio workshops at their Deep Learning Institute.
The focus of these workshops is for developers to get their hands on GPU Audio’s groundbreaking software in readiness for an upcoming SDK release.

Software like iZotope is being used by many here on the forum. Part of it makes use of AI implementations, and it usually runs well on modern computers AFAIK. @Will. I guess from all you wrote that you don't use iZotope's AI tools. Well, I just love that kind of tool, and there clearly is a market for it. My feature request is intended to be seen from the same perspective as how Neutron, RX, and Ozone implement it.

The point you guys seem to miss is the following:
  • AI is intended to give you a starting point closer to where you usually end up in a finished project.
  • It just saves time; most of the time you will need to tweak/fine-tune once a model has provided a suggestion.
  • Suggestions are not supposed to be static: just like with human error, each time a slight variation will be presented, even if you didn't change any parameter or piece of the audio.

All I asked for is extra options, not replacing existing ones. You don't need to see the benefit, as long as others like me do see it. It's personal preference: you use it or you don't. Everything is fine.

I've seen AI (a misplaced term; it is machine learning, and its subset deep learning) being implemented around me in many different fields. The systems get better quickly and can take some (often boring, needlessly time-consuming) work out of your hands, and depending on the application they are sometimes already better than humans, e.g. at diagnosing diseases. Not just audio-visual implementations, but robotics, drug development etc. I wouldn't have come up with the suggestion if I hadn't had some experience myself with implementing these models and didn't know the exceptional possibilities (and limitations) if done well...
Some examples to illustrate how entirely different fields benefit from the latest developments:

AI peer reviewers unleashed to ease publishing grind: “It doesn’t replace editorial judgement but, by God, it makes it easier.”

Lip reading: LipNet, a deep learning network created by Oxford University with funding from Alphabet’s DeepMind, has achieved a 93% success rate in reading people's lips. The best human lip readers have only a 52% success rate. A team at the University of Washington used lip sync to create a system that adds synthetic audio to an existing video.

Diagnosing diseases: 1) AI surpasses humans at diagnosing diseases: deep learning algorithms can correctly detect disease in 87% of cases, compared to 86% achieved by health-care professionals. The ability to accurately exclude patients who don’t have disease was also similar for deep learning algorithms (93% specificity) compared to health-care professionals (91%). 2) IBM Watson’s accuracy rate for lung cancer is 90%, compared to a mere 50% for human physicians.

Transcribing audio: Microsoft AI beats humans at speech recognition. The AI system had an error rate of 5.9%, comparable to that of human transcribers employed by Microsoft. When researchers repeated the test, its error rate was 11.1%, virtually on par with the human result of 11.3%.

AI can be creative and make unique pieces: 1) Critics of AI nauseatingly argue that machines could never be creative (HMMMM, sounds familiar...🤨), or curious, or discover anything of significance, because they lack consciousness. Nevertheless, a team at Tufts has proved the naysayers wrong: intelligence does not need consciousness to discover new knowledge. By combining genetic algorithms with genetic pathway simulation, the researchers created a system that produced the first scientific theory to be discovered by an AI: how flatworms (the species “planaria” to the initiated) regenerate body parts.
The AI-generated theory will have a significant impact on human regenerative medicine. 2) An A.I.-Generated Picture Won an Art Prize. Artists Aren’t Happy. I love this picture; it could just as well have been made by an artist. It could have fooled me.
  2. Professionals IMHO: studio engineers that work in and/or own a sound studio, musicians that make a good living from making music, photographers that make a good living from photography, people that are referred to in the field as experts... Please let's not try to outsmart each other, OK? I don't need or want a sampler at all. You like to have one, so I support that, because it is important to you and doesn't interfere with my workflow. Would be nice if you could approach things that way as well...
  3. @Will. I never said they are accurate. The idea is that they learn to interpret human error and sounds, and the way they are mixed and mastered in different styles by the great producers, and apply that to a piece of music that you feed it, to give you a better starting point. Plenty of professionals agree that AI tools do make life easier (and they get better every day), even when (like I indicated before) you want/need to adjust things. The main advantage is that you can start much closer to a desired end result; just some quick tweaking often does the job.

As for the human error: as long as it is not too outrageous, it can be a much desired thing. Without it everything would be (too) clean/synthetic. Just like many prefer analog-modelled processors over purely digital ones: the analog distortion makes sounds warmer, more natural. That something doesn't work for you doesn't mean that it doesn't work for others... For me and many others it certainly does work, and with AI tools I usually get results that professionals also appreciate, and they save me lots of time.

Anyway, we fully agree that (studio) acoustics and the ears are ultimately the crucial factors. That doesn't change the fact that AI tools can speed things up. So I definitely would like to see AI implemented all over the place (with quickly adjustable parameters for fine-tuning, as the top AI software usually provides). NOTE: it could be an extra option in a menu, so no one would have to sacrifice their preferred "old-school" workflow.
  4. Yes, that's exactly what I would like to see improved. Currently you have to check what you've been doing, set parameters accordingly and experiment until you finally have a satisfying result. And yes, I know you can make your own presets (and of course play more accurately, just saying before you come up with that😘) etc. I want one push of a button that automatically takes all the work out of my hands, just like e.g. the latest Luminar Neo does with a photo (or iZotope Neutron 4 does with a mix, to get you much faster to where you want to be). If you don't like the result, you can very quickly adjust it to your liking. Much faster than the traditional way of improving things. Everything that speeds up my recording process is more than welcome... I know all the basics but never dreamed of becoming a studio engineer; I prefer to put most of my time into composing and playing music.
  5. I don't know if it already exists, but with the new wave of AI tools in audio I could imagine that the following would be a next logical option for a DAW, which might not be so difficult to achieve (AFAIK mainly feeding enough source material into such a tool to let it recognise patterns, like they do with AI photo software):
  • A tempo recognition option that scans the audio and identifies the main tempo and tempo changes of the main song, as well as tempo/rhythm detection of solos. The latter especially, since there can be many time variations in solos.
  • Then the option to quantize (particularly handy for an instrument solo or voice line) according to what the quantize tool suggests based on the music style, or any value between a hard quantize and human feel, whereby all (unexpected) note timings are quantized the way the musician intended, with just one push of a button.
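The "any value between a hard quantize and human feel" part can be sketched in a few lines. This is only an illustration with hypothetical names (`quantize`, `strength`); a real tool would first need onset and tempo detection to produce the grid, which is the hard AI part:

```python
# Sketch: pull each detected note onset toward its nearest grid point by an
# adjustable strength (0.0 = keep the human timing, 1.0 = hard quantize).
# Names and values are illustrative, not from any real DAW API.

def quantize(onsets, grid, strength):
    """Move each onset toward the nearest grid time by `strength` (0..1)."""
    out = []
    for t in onsets:
        nearest = min(grid, key=lambda g: abs(g - t))
        # interpolate between the grid position and the original timing
        out.append(nearest + (1.0 - strength) * (t - nearest))
    return out

# Example: 120 BPM song -> eighth-note grid every 0.25 s
grid = [i * 0.25 for i in range(9)]
played = [0.02, 0.27, 0.46, 0.80]   # slightly loose human timing

print(quantize(played, grid, 1.0))  # hard quantize -> [0.0, 0.25, 0.5, 0.75]
print(quantize(played, grid, 0.5))  # halfway: keeps some of the human feel
```

An AI model would essentially suggest the grid (including tempo changes in a solo) and a per-section strength, which you could then fine-tune with one slider.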
  6. Your test is nicely in line with this one from TechPowerUp, confirming that cooling is not a big issue for the 7950X: Ryzen 9 7950x cooling requirements thermal throttling. However, it would be nice if the following could become a "consumer friendly" option: delidded ryzen 9 7900x delivers huge drop in temperatures. That might significantly reduce power consumption, let it run quieter and even enable it to be faster without throttling. Any thoughts on whether this (delidding with an accompanying tailored cooler) will become available for the next batches of processors?
  7. Maybe one of these links will help you out:
  • how to find audio bitsample rate
  • How to stretch audio in CbB
  • Fixing timing problems in audio clips
  • Finding the Tempo of an Audio Clip
  • How do I stop Cakewalk changing the speed/pitch of my samples?
  • How Groove Clips work in Cakewalk
Hope this helps!
  8. Exactly: the button lit shows that FX is engaged; the button lit with a strikethrough shows that bypass is engaged... Thanks for making this point😁 Well, then it should say so (and not just in the tooltip, which needs extra mouse action). When I see "FX", my first impression (and I guess most others') is that it is there to enable FX.

Why would you not support efforts to improve a standard layout so new users can immediately understand what they are doing? If Tesla had followed this same logic we would still be stuck with a future of gas-guzzling, polluting cars🥴. It is normal that older users are used to and have adapted to sometimes totally irrational workflows. However, if you want to attract many new (young) users in a fast-changing world, where many quickly abandon software and look for an alternative if it doesn't work the way they can figure out at first sight, it would be wise to have the courage and guts to change a system in such a way that it makes sense to most people.

For me, as a logically (at least trying to...) thinking person, the combination of @Starship Krupa and @Canopus would make the (almost; I'm still trying to figure out whether a red lit button should do it, or whether in the bypass state the button should simply not be lit, without a strikethrough) perfect solution (thanks for that! The more logical the buttons in the themes, the better CbB becomes!). For the record: I also adapted to an illogical layout over time and have the related muscle memory. That will never stop me from improving things and acquiring new muscle memory (nice therapy to combat neurodegeneration 😬).

Standard: FX and PDC are activated, so having the buttons lit indicates they're working. Bypass on FX and PDC activated: a button lit red with a strikethrough means bypass is working. Of course, the tooltips need to be adapted to the state of the button... If you want to make clear the way the official layout works, the buttons should be renamed "FX Bypass" and "PDC Bypass". This seems to make the most sense to me. I guess that is mainly a bar space issue.
  9. Congrats, certainly one of the better songs I've heard on the forum (to my taste😬): nice composition, well timed, punch, space, nice vocal, great solo... I'm far from a mixing expert, still struggling to master it myself. Having said that, when listening to your song my ears get tired quickly. I think it has to do with a slight lack of warmth and a bit too much mid?/high on average, especially on the cymbals. Maybe also a bit too much reverb (perhaps turned down slightly, a shorter tail and/or some EQ on the high reverb frequencies?). Have to confess that while listening I noticed that hay fever is slightly blocking my ears🥴, so it probably will sound better already when that's over...
  10. I have the same card and amount of memory, and I haven't seen issues like this. It does not seem like a performance issue (your PC is more than capable for a few tracks); it sounds like something else is not in order. Weird that 256 does work. However, do you have an AMD processor? The first generations suffer from internal latency issues (I have a first-generation Threadripper and ran into this problem once in a while; however, extensive tweaking and a little overclocking, especially of the memory, has mostly eliminated my problems). I would also use LatencyMon to see if any specific process is responsible for problems. It has helped me a lot to fine-tune my PC in the past.

The things below I wrote at first, but considering that you have no problems at 256 they are probably not applicable to your problem. I leave them here, just in case... I can't see it very well in your video, but it looks like the audio signal on the track itself is way too high. Maybe just a case of gain staging (reduce the gain on the track)? Did you check your cables, connectors and pots on external hardware? I once found out that a jack was hardly making any connection after many years of use. It took me hours of checking everything I could think of before I found the culprit; just cleaning it did the trick.

Below are some things that are probably not related to your current problem, but worth knowing anyway: Windows updates also reverse certain settings once in a while, so it doesn't hurt to check whether any tweaked audio PC settings have changed in the meantime... With GPU driver updates I also found that, at moments when I did not pay enough attention, new audio drivers were installed, messing up my preferred settings. I just found out that my latest network driver update had also restored power management (which needs to be off...). Also, regularly make sure your runtimes are up to date: up to date Runtimes all in one package
  11. I don't use this function, but someone who might be able to help you out is Mike from Creative Sauce (he's also a forum member). He has an Arturia keyboard as well as extensive knowledge of CbB. Here's a video he did on his keyboard: Arturia KeyLab 88 MKII. I assume that Arturia uses the same way of interacting with a DAW on their different types of keyboards. Hope this helps.
  12. Just watched your tutorials. I like the way you explain things and the fact that you also show some lesser-known options and other ways of using CbB with other soft/hardware, like connecting to Zoom, using the Leimma global tuning system for exotic scales and modes etc. Very useful info if you want to use those. As for the folder locations, I think @Starship Krupa has a good point. @Ricebug, no clue what's on your HD, but if you need more space I think it is better to move other things like documents, photos, movies etc. elsewhere to make room. If you can afford it, try to opt for a larger C drive, preferably a PCIe SSD. I recently upgraded to a 1 TB SSD, which gives me plenty of room for projects, which also load very fast. Most of the instrument libraries are on two other built-in 2 TB SATA SSDs. Prices seem to be going down again at the moment. For documents, finished songs, movies and the primary C backup I use an 8 TB HDD.
  13. I do get what you're saying, and you've pointed at this in other related topics before, but did you study all the recent info about their technology? Apart from VST3 plugins, they talk about delays caused by DAW architecture. A few years ago I saw an early development video where they showed a DAW running a project only on a GPU: the entire CPU load was offloaded to the graphics card at a maximum latency of 1 ms. They specifically ask DAW developers to contact them to fully unlock the GPU potential for the specific DAW. Why not just contact them and ask what the benefits for CbB could be and how to achieve them? Maybe there's a huge benefit waiting for us all...

No, it means that with your current hardware your DAW might be capable of offloading CPU processes to the GPU. That means less DAW-processing-related latency in general, and many more tracks all running at low latency on the same PC (provided your GPU meets certain specs; it doesn't mean you need a more powerful graphics card, just one that fits their requirements). Your current PC setup might just become much more powerful and capable. Next to that, plugin developers can write new versions of their VST3 software with this technology implemented, making them much more capable. This is not related to the potential integration of the GPU Audio technology in a DAW itself. So the two independent benefits are:
  • DAW integration, making a current DAW more efficient
  • VST3 integration, making plugins much better
The question is whether the CbB bakers see a possibility to implement it or not. Other DAWs are already jumping on the bandwagon, so why not our favourite DAW?
  14. I’ve been following this development for many years, hoping it would come to DAWs. It finally has arrived… GPU Audio introduces their technology with their own plugins, plus the option to work with developers to implement their GPU audio technology in DAWs and plugins and provide insane performance! They are looking to cooperate with DAW developers to significantly increase the power of DAWs. Reaper is implementing it already. Some key points:
  • 1 ms standardized buffer for VST3 use, regardless of instance count
  • 150 microsecond buffer for custom software
  • Thousands of GPU cores render your audio in real time
Here is an interview from last week where the GPU Audio guys give some very interesting info about the current status, development and possibilities for the (near) future. They offer the beta version of their FIR Convolution Reverb plugin for free: https://earlyaccess.gpu.audio/ @Noel Borthwick any chance you could work with these guys to make CbB a real powerhouse?
  15. Instead of asking the obvious (yes, depending on what you do and your system configuration, other DAWs can have the same problems, if not more; CbB is actually very stable, since in the last few years development has strongly focused on ironing out bugs instead of introducing potentially bug-prone new features), and assuming that you did watch the right, essential tutorials in order to get off to a good start... why don't you provide your hardware and software specs, describe what you're trying to do and point out your exact problems? There are plenty of very skilled users on the forum that are more than willing to help you out when you encounter a specific problem! Just for the record: CbB is a fully featured professional DAW that holds its own among the top DAWs, with a support team that actually listens to feedback and quickly implements bug fixes and improvements whenever possible. I have not seen this kind of interaction with any of the other DAWs.
  16. I was writing a reply and was interrupted for an hour. After posting I noticed you already had several posts with some good info. Anyway, hope you succeed!
  17. Just a warning: anything you do on your PC increases the chance you won't be able to recover your files...

Windows System Restore is made to protect and repair the computer software. System Restore takes a "snapshot" of some system files and the Windows registry and saves them as restore points. When an install failure or data corruption occurs, System Restore can return a system to working condition without you having to reinstall the operating system. It repairs the Windows environment by reverting to the files and settings that were saved in the restore point. See: what is system restore. So, that doesn't work for files that you need to back up. There is also Windows Backup and Restore, which lets you back up files, but you will always need another drive to back up your files safely...

I prefer to use Acronis True Image (there are several others, free and paid) to do this instead of Windows, and make a complete drive image of my C drive, which also contains my CbB projects (for speed, since it is a fast SSD). You can also set things up to automatically back up files that have just changed. I use GoodSync for this: any file I change is immediately backed up to another HD (not the one where I back up my C-drive image, so I always have at least two backups, except that the drive image is set to once a week). For the backup drive you can use a standard HD (cheaper and more space). Once every 6 months I also make backups of the backup drives on other HDs, but that's just me being overly cautious...

Assuming you haven't done this and have been using the PC, there's a big chance that files have been irrecoverably overwritten. If you are lucky there might still be some files left that can be recovered. You can try to do that with this free program: Recuva. Here's a list with several other options + explanations: free data recovery software tools (updated 2022). Hope you can get something back!
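The "back up any file that changed to another drive" idea (what GoodSync automates) boils down to a one-way mirror. A minimal sketch, assuming a source and a backup folder on separate drives (folder names are hypothetical; a real tool also handles deletions, versioning and a file watcher):

```python
# Sketch of a one-way mirror: copy every file that is new or modified since
# the last run, judged by modification time. Not a replacement for a real
# backup tool -- just an illustration of the mechanism.
import os
import shutil

def mirror(src, dst):
    """Copy files from src to dst that are missing there or have a newer mtime.

    Returns the list of destination paths that were (re)copied.
    """
    copied = []
    for root, _dirs, files in os.walk(src):
        rel = os.path.relpath(root, src)
        target_dir = os.path.join(dst, rel)
        os.makedirs(target_dir, exist_ok=True)
        for name in files:
            s = os.path.join(root, name)
            d = os.path.join(target_dir, name)
            if not os.path.exists(d) or os.path.getmtime(s) > os.path.getmtime(d):
                shutil.copy2(s, d)  # copy2 preserves timestamps, so an
                copied.append(d)    # unchanged file is skipped next run
    return copied
```

Run it periodically (or from a file-system watcher) with something like `mirror(r"D:\Projects", r"E:\Backup\Projects")`; because `copy2` preserves the modification time, a second run over unchanged files copies nothing.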
  18. I found this video and the accompanying discussion to be quite helpful: F**K SECRECY: Hearing Loss and Music Production. Let's talk. It covers the problems of mixing with imperfect hearing, and it provides tools and techniques to make sure that the final mix is tailored to the public and not to the sound engineer with impaired hearing.
  19. That's what I was referring to, and I find it a suboptimal solution...
  20. +1 for this. It has been requested several times over the years. There are several ways to indirectly create a bus template, like saving the busses in a dedicated track template, but I find that a mediocre way to do something that seems so logical and simple to implement (but the bakers might have a different opinion on that😮...). I've got many busses in a huge template, each preloaded with several plugins. The template takes long to load and fills a lot of screen. I would like to be able to start with a cleaner, lightweight template and add busses whenever I feel the need, the same way I can add tracks, and not have to eliminate them from my standard template. You might consider changing the title to something like "feature request: bus template".
  21. Is this what you're talking about: bandlab-releases-cakewalk-bandlab-2021-12-free-update? Could be handy for the bakers to get additional info regarding what matters most when further improving the DAW.
  22. Hi Scott, I like the short text combined with animated graphs. I already know how to use the Arranger, but was curious about how you explain the use of it. I usually pick up a few new workflow enhancements from new tutorials, just like this time. The one thing I would like to see improved is the quality of the animated pictures. At least on my PC they seem very fuzzy, and any text in them is unreadable, which makes the overall tutorial less informative than it could have been. Other than that, I'm looking forward to your upcoming tutorials!
  23. If it's the same plugin each time, you could do the following:
  • Press the cancel button in the VST scan screen when Cakewalk is loading.
  • Then open the Plugin Manager: Utilities -> Cakewalk Plugin Manager.
  • Locate the VST that is the culprit, select it and click Exclude Plug-in (under Manage Exclusion List).
If you have both VST2 and VST3 versions of a plugin, you could check whether the VST2 loads without problems if the VST3 was the culprit, and vice versa. While you're at it, make sure you use the latest CbB version and the latest plugin version as well, and that the runtimes are updated (Visual C++ Redistributable Runtimes All-in-One). Maybe also try to reinstall the VST plugin.