Alexander Prokopchuk is an experienced development manager and computer scientist. He develops the DSP (audio) side of the algorithms and leads the developer team in designing and implementing the technology's core components.
Basil Sumatokhin originally developed the whole concept behind GPU AUDIO. He has more than 14 years of experience in the sound production and processing industry. His key competencies (related to product development and management) are composition, sound production, vocal tuning, mixing, and mastering. He has a solid background in the Pro Audio market and a deep understanding of its products and market propositions. He also knows the DSP area, which helps him design new sound effects and products.
Alexander “Sasha” Talashov is the most versatile team member. He is a computer scientist and engineer (he spent two years on doctoral research in HPC at a top Russian engineering university) and an experienced development manager. He is one of the authors of the GPU AUDIO technology and its architect. Having spent the last five years in the Pro Audio industry performing market research and analysis and building business networks, he has proved his value as an entrepreneur, a people organizer (he co-organized a team of 15 professionals), and a fundraiser (4 deals closed).
For the past few decades, the idea of harnessing the power of the GPU for sound processing was little more than a pipe dream. Although there were a few commercial attempts to build GPU-powered products – mostly CUDA- and OpenCL-based – none of them could get below a 10-20 millisecond latency buffer or run even ten instances simultaneously, meaning that different products (at the time, Acustica Audio and LiquidSonics products, and Nils Schneider’s reverb) could not share the same GPU. The only thing those attempts proved was that a myriad of fundamental computer science problems must be resolved – or, better yet, avoided – before any GPU-powered Pro Audio solution or product can be developed. Add to that the many more engineering problems that must be solved to make such a product stable and reliable across DPC behavior, different hardware setups, etc., and it is no wonder many gave up on the concept for so many years.

One of the foundational reasons it was impossible to bring tens, hundreds, or even thousands of GPU-based DSP tasks to run together is that a GPU is, at heart, a SIM(D/T) device. Our approach attacks this problem by working out how to make a SIMD device act as a pseudo-MIMD one for this purpose – something you cannot do with any modern GPU SDK.
Another innate problem is that no SDK API lets you transfer and dispatch data for these hundreds of instances of task chains with differently typed (and subsequently dispatched) data, so you have to figure out how to pack and run them optimally within the real-time (1-2 msec) latency scope. Resolving these fundamental issues opens the pathway to success, but it also opens the door to many other problems: how to isolate the GPU from other processes so it can operate in low-latency mode, how to offload graphics/GUI work from the powerful GPU so it can handle the Pro Audio workload, and so on. And this is before we even touch the DSP part, where there are plenty of low-level code problems related to the recursive nature of most algorithms, e.g. IIR filters. The good news? We’ve found a way to handle all of it, and we are doing it now.
By solving this entire set of problems, a new Pro Audio computing standard called GPU AUDIO has arisen, and we are proud to present it on YouTube for the first time ever. Let’s discuss the what, the how, and the why, and ultimately get into who we believe this is for and how it can revolutionize workflows and beyond!