
GPU audio potential finally unlocked! Is CbB ready for it?


Teegarden


I’ve been following this development for many years, hoping it would come to DAWs. It finally has arrived…

GPU Audio is introducing their technology with their own plugins, along with the option to work with developers to implement their GPU audio technology in DAWs and plugins for insane performance!

They are looking to cooperate with DAW developers to significantly increase the power of DAWs. Reaper is already implementing it.

Some key points:

  • 1 ms standardized buffer for VST3 use, regardless of instance count (see the quick arithmetic below)
  • 150 microsecond buffers for custom software
  • Thousands of GPU cores render your audio in real time
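For a sense of scale (my own arithmetic, assuming a 48 kHz sample rate): a 1 ms buffer is 48 samples, and 150 microseconds is only about 7 samples - well below the buffer sizes most of us run our audio interfaces at today.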

Here is an interview from last week where the GPU Audio guys give some very interesting info about the current status, development, and possibilities for the (near) future.

They offer a free beta version of their FIR Convolution Reverb plugin:

https://earlyaccess.gpu.audio/

@Noel Borthwick any chance you could work with these guys to make CbB a real powerhouse?


The only way I can see this technology working is if VST3 developers modify their existing algorithms to take advantage of these new APIs. Whether they'll do this or not is anyone's guess.

The way VSTs work is pretty simple from a high level: the DAW prepares an unprocessed audio buffer in memory and passes it to the VST. The VST then does its processing and passes the effected audio buffer back to the DAW. This buffer then gets passed to the next VST, and so on, until it is ready to be mixed with all the other tracks in the DAW. Obviously there's some delay compensation logic in the DAW to ensure the buffers are all processed at the correct time, but apart from that, it's the VSTs that are doing all of the heavy calculations.
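To make that flow concrete, here's a minimal host-side sketch of the buffer chain described above (my own illustration in plain C++, not Cakewalk's or Steinberg's actual code; names like Plugin and renderTrackBlock are made up):

```cpp
#include <vector>

// Simplified stand-in for a VST-style effect: it processes a block of samples in place.
struct Plugin {
    virtual ~Plugin() = default;
    virtual void process(std::vector<float>& buffer) = 0;   // the plugin's DSP lives here
};

struct GainPlugin : Plugin {
    float gain = 0.5f;
    void process(std::vector<float>& buffer) override {
        for (float& s : buffer) s *= gain;                   // heavy calculations happen inside the plugin
    }
};

// Host side: prepare a buffer, pass it through the chain, then hand it on to the mixer.
void renderTrackBlock(const std::vector<Plugin*>& chain, std::vector<float>& buffer) {
    for (Plugin* p : chain) {
        p->process(buffer);   // the output of one plugin becomes the input of the next
    }
    // buffer is now ready to be mixed with the other tracks (delay compensation omitted)
}
```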

The current functions in GPUs are more generally suited to large matrix transformations than to pure DSP functions. Perhaps GPU Audio have found a way to leverage the GPU in a way that is more suitable for DSP - but as I said, this is more a thing for VST3 developers to deal with than for DAW developers.
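As a rough illustration of why the FIR convolution reverb mentioned above is a natural first candidate: each output sample of an FIR convolution is an independent dot product of the impulse response against a slice of the input, so thousands of output samples can in principle be computed in parallel on GPU cores. This is just my sketch of the math, not GPU Audio's implementation (real convolution reverbs normally use FFT-based partitioned convolution, but the same independence argument applies to those stages too):

```cpp
#include <cstddef>
#include <vector>

// Naive FIR convolution: y[n] = sum over k of h[k] * x[n - k].
// Each y[n] reads only the (immutable) input x and the IR h, so every
// iteration of the outer loop is independent -- the kind of work a GPU
// can spread across thousands of threads, one thread per output sample.
std::vector<float> firConvolve(const std::vector<float>& x, const std::vector<float>& h) {
    std::vector<float> y(x.size(), 0.0f);
    for (std::size_t n = 0; n < x.size(); ++n) {              // parallelisable outer loop
        for (std::size_t k = 0; k < h.size() && k <= n; ++k) {
            y[n] += h[k] * x[n - k];
        }
    }
    return y;
}
```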


7 hours ago, msmcleod said:

The only way I can see this technology working is if VST3 developers modify their existing algorithms to take advantage of these new APIs. […]

I do get what you're saying (you've also pointed this out in other related topics before), but have you studied all the recent info about their technology?

Apart from VST3 plugins, they also talk about delays caused by the DAW architecture itself.

A few years ago I saw an early development video where they showed a DAW running a project entirely on a GPU: the whole CPU load was offloaded to the graphics card at a maximum latency of 1 ms.

They specifically ask DAW developers to contact them so that the GPU potential can be fully unlocked for that particular DAW.

Why not just contact them and ask what the benefits for CbB could be and how to achieve them?  Maybe there's a huge benefit waiting for us all... 

 

7 hours ago, Skyline_UK said:

I don't understand all the gobbledegook.  Does it mean you now have to have a powerful graphics card as well as a powerful PC?  My PC was spec'd for audio and I didn't opt for an expensive graphics card for it.

No. It means that with your current hardware, your DAW might be able to offload CPU processing to the GPU. That means less DAW-related processing latency in general, and many more tracks all running at low latency on the same PC (provided your GPU meets certain specs; you don't need a more powerful graphics card, just one that meets their requirements). Your current PC setup might just become much more powerful and capable.

Besides that, plugin developers can write new versions of their VST3 software with this technology built in, making them much more capable. This is separate from the potential integration of GPU Audio's technology into a DAW itself.

So the two independent benefits are:

  1. DAW integration, making the current DAW more efficient
  2. VST3 integration, making plugins much more capable


The question is whether the CbB bakers see a possibility to implement it. Other DAWs are already jumping on the bandwagon, so why not our favourite DAW?


Judging by the number of threads about "how to optimize my PC to run with low latency", with suggestions like "convince your GPU driver not to block the system", adding the GPU into the real-time chain is going to be a pain (and I guess that's the reason they ask DAW developers to cooperate). In the end, all of that is only interesting for people with a top CPU plus a top GPU who are using so many effects that the system can't cope at all. It's just not worth the trouble otherwise.

On the positive side, maybe they'll manage to convince NVIDIA and AMD to monitor the real-time performance of their hardware/drivers. Since any increase in latency in updated drivers would make all related audio products unusable, the number and tone of the complaints would be high 🙄

Another point: GPUs traditionally have a poor general-computation-to-power-consumption ratio (the 1 kW PSUs in gaming PCs are not there for the CPU). I have nothing against two GPU fans running at full speed when a UFO is exploding, but if the same happens when I'm recording with a mic in the same room, I'm definitely going to be unhappy 😒


I've got a pretty moderate NVIDIA GPU in my PC (a fanless GT 1030). I've run 10 instances of their beta convolution reverb on it, all loading different IRs, and it uses under 20% of the GPU (I think it's more likely to run out of GPU memory, as it only has 2 GB and those IRs used almost half of it). That's not bad for a fanless, cheap and fairly old GPU; maybe not earth-shattering, but not bad.
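By my rough arithmetic, "almost half" of 2 GB across 10 instances works out to something on the order of 100 MB of GPU memory per IR instance, so on a small card like this, memory would likely become the ceiling before compute does.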


On 4/22/2022 at 12:50 PM, Teegarden said:

The question is whether the CbB bakers see a possibility to implement it. Other DAWs are already jumping on the bandwagon, so why not our favourite DAW?

The real question to ask would be: will it cost CbB any money? We're on a free DAW. I doubt they will give this to any DAW or plugin developers for free…?

Also - is it for both Windows and Mac? I seem to have missed that in the interview.


Sounds like one of those things that needs prototyping to determine any benefits - an expensive exercise with no definite/guaranteed payoff. Also, hasn't CUDA been around for years already? Why the new interest, and why hasn't this been exploited before now?


27 minutes ago, pwalpwal said:

Sounds like one of those things that needs prototyping to determine any benefits […]

Apparently you can test a demo version when you sign up. I've yet to get my "glutes" back home.

Anyone willing to run a test (on their test machines) and share some results? 


Many plugin developers already make use of the GPU. Maybe they've figured it out now, and maybe this product will make it easier to implement, but for years it was a real PIA to get GPU support to work reliably across all manufacturers' hardware because each one is different, even within a given product line. I was involved in one vendor's beta and it took months to get it working on my particular card, even though it worked flawlessly on cards from other manufacturers.

My guess is that DAW developers wouldn't be in any hurry to take this route and incur the inevitable support nightmare that would follow.  It would make far more sense for plugin developers.

But what do I know? I once predicted the end of hardware-based audio processing, and now everybody's using  TC Powercores or ProTools HD. ;)


Would it be possible/doable/worth it to develop GPU-friendly plug-ins to supplement the native versions, and recommend some specific graphics cards? At that point, it would be like having a Universal Audio accelerator version and a UA Spark version of a plug-in, but you wouldn't be locked into one software company. That would also eliminate the issue of trying to make it work across multiple graphics cards.

 


If a lot of the future development to make this happen is down to plug-in developers, I can see lots of problems.
Even now with VSTs there are many devs that don't follow the standards set down in the specs, causing all sorts of problems.
Will they now be confronted by another set of specs that will ultimately be poorly implemented, causing yet more problems?
I'm grateful to be running my Waves plugins on a SoundGrid server, so I don't have to worry too much about latency and plug-in count.
Using the power of GPUs for processing would be great, but I need to see it happening, not just read about it.


On 4/22/2022 at 3:33 AM, msmcleod said:

The only way I can see this technology working is if VST3 developers modify their existing algorithms to take advantage of these new APIs. […]

You are a smart man.

Michael

