{"id":38,"title":"Diving deeper into the tech behind AMD support: interview with GPU Audio","bg_image":{"url":"https://eap-spaces.fra1.cdn.digitaloceanspaces.com/storage/newsfeed/article/bg_image/38/8a54-image.jpg","collage":{"url":"https://eap-spaces.fra1.cdn.digitaloceanspaces.com/storage/newsfeed/article/bg_image/38/collage_8a54-image.jpg"}},"type":"Newsfeed::Article","preview":"GPU Audio has partnered with AMD to open up a plethora of music production and innovation opportunities through the revolutionary parallel processing of graphics cards. After months of the two comp...","views":0,"content":"\u003cp\u003eGPU Audio has partnered with AMD to open up a plethora of music production and innovation opportunities through the revolutionary parallel processing of graphics cards. After months of collaboration between the two companies to make this happen, AMD users around the world can now experience ultra-responsive audio technology first-hand.\u003c/p\u003e\u003cp\u003e“It is a big accomplishment — a great amount of work was put into that,” Alexander Prokopchuk, GPU Audio’s Chief Technical Officer, humbly says. And as with all ground-breaking achievements, making it happen involved a number of challenges.\u003c/p\u003e\u003cp\u003eOne of the biggest difficulties stemmed from differences in GPU architecture: although NVIDIA’s graphics cards are similar to AMD’s, they are not the same. “The software components — drivers, APIs, and how we can communicate with devices — are completely different,” Sasha explains. 
“On our side, the minimum we wanted to provide as a platform was a unified API, so that our partners can develop a product and not have to do anything special to make it work on a particular hardware configuration.”\u003c/p\u003e\u003cp\u003eDeveloping the device code and getting the scheduler to work properly proved especially cumbersome because the devices are proprietary: there were no off-the-shelf solutions for writing common code that could then be compiled for graphics cards from different vendors.\u003c/p\u003e\u003cp\u003eFor this project, GPU Audio and AMD worked together, with the latter providing support and feedback. There are high-level standards for classic GPU workloads that work very well in game engines — that’s what they were designed for — but they lack many of the features the scheduler needed. “It was bad at executing thousands of small tasks simultaneously and arranging them in a way that the GPU could execute them efficiently, maintaining high occupancy and low latency,” he continues. In a game engine, the whole graphics card is expected to be devoted to the running game. In the GPU Audio scenario, a huge number of tasks must run concurrently, with latency as low as 1 millisecond.\u003c/p\u003e\u003cp\u003eThe team began experimenting with possible solutions and found a viable option. NVIDIA developed CUDA, the industry-standard general-purpose parallel computing platform for graphics cards. What makes it special is that it is not tied to a specific task such as video rendering, yet still accelerates processing. 
To support the CUDA ecosystem on its devices, AMD created HIP, the\u003ca href=\"https://developer.amd.com/resources/rocm-learning-center/fundamentals-of-hip-programming/\" rel=\"noopener noreferrer\" target=\"_blank\" style=\"color: var(--color-action);\"\u003e\u0026nbsp;Heterogeneous-Compute Interface for Portability\u003c/a\u003e, which lets developers “design high-performance kernels” on the GPU. The resulting code can then run in any environment with minimal changes.\u003c/p\u003e\u003cp\u003eGraphics cards fall into two categories — consumer (typically installed in laptops or used for video production and gaming) and enterprise (usually deployed for cloud solutions and servers) — and both CUDA and HIP were designed primarily for the latter. The GPU Audio community, however, uses consumer GPUs, and another challenge was making everything work on devices that were largely untested and unsupported.\u003c/p\u003e\u003cp\u003eAt the same time, because the server world is largely dominated by Linux and the HIP compiler also targets Linux systems, nobody really knew how it would run — or whether it would even run — on Windows.\u003c/p\u003e\u003cp\u003e“We established a connection with AMD and consulted with them,” Prokopchuk shares, explaining that this was very helpful in understanding the challenges, the computation model, and the architecture.\u003c/p\u003e\u003cp\u003eAs a first step, Daniel Mlakar ran several performance tests on an AMD laptop with the scheduler and the GPU Audio code originally developed for NVIDIA graphics cards. “We did rendering speed measurements in order to deliver a proof of concept,” Sasha recounts. “We compiled the code so that it would run on AMD hardware under Linux, and then moved it to Windows.” It was a success: the code worked on Daniel’s machine. 
Making it work on Windows was more difficult: with support from AMD, the development teams (namely the C++ and GPU teams) figured out how to compile binaries on Linux and then launch them on Windows. The next step was moving the compilation process to Windows as well, making the whole solution Windows-compatible.\u003c/p\u003e\u003cp\u003e“From there, it was a very hard and time-consuming process of trying to understand how and where it actually works — there was no data,” Sasha says. “We acquired different AMD laptops and tried to reproduce this success on those machines.” It took two teams to test out the concept on various machines.\u003c/p\u003e\u003cp\u003eIt was a complex and meticulous process for many reasons. On Windows, the experimental enterprise GPU driver provided by AMD worked only with a discrete AMD GPU, leaving an integrated APU inaccessible. Later, those experimental updates to the device drivers and to HIP itself were released publicly, and after another cycle of testing the team got everything working with the public versions as well. Another challenge involved managing device architecture versions.\u003c/p\u003e\u003cp\u003eLike all technology, GPUs constantly evolve, with newer hardware adding new instructions and various optimizations. While in the NVIDIA world the mapping between marketed GPU names and internal compute capability versions is easy to interpret, in the AMD world this is a much more arduous task. “The marketed GPU name doesn’t guarantee that the underlying architecture will be the same,” Prokopchuk continues. “You can have two computers with the same GPU name but internally, it’s a different chip, different architecture, and different binaries.” The CTO admits that the complexity of this problem was underestimated, and that figuring out the environments in which the technology would work took as much time as the first part of the project. 
The only difference was that this time it took two development teams to solve the challenge.\u003c/p\u003e\u003cp\u003eBoth GPU Audio and AMD agree that it is a big accomplishment, not only for the two teams but for the future of audio too. After spending more than half a year developing the technology behind AMD support, GPU Audio is excited to be putting it out and\u003ca href=\"https://earlyaccess.gpu.audio/\" rel=\"noopener noreferrer\" target=\"_blank\" style=\"color: var(--color-action);\"\u003e\u0026nbsp;letting the community try it\u003c/a\u003e. AMD support is in limited Early Access testing at the moment, and with the community’s help, the company will expand it as quickly as possible.\u003c/p\u003e","pathname":"diving-deeper-into-the-tech-behind-amd-support-interview-with-gpu-audio-38","human_date":"07 Oct 2022","read_time":"4 minute read","category":null,"related_items":[{"id":1,"title":"ADOBE MAX LOS ANGELES 2022","bg_image":{"url":"https://eap-spaces.fra1.cdn.digitaloceanspaces.com/storage/newsfeed/event/bg_image/1/efa3-image.jpg","collage":{"url":"https://eap-spaces.fra1.cdn.digitaloceanspaces.com/storage/newsfeed/event/bg_image/1/collage_8a54-image.jpg"}},"type":"Newsfeed::Event","preview":"GPU Audio joins our partners, AMD, to demonstrate how GPU Audio plugins can be used to enhance workflows of post production workstations. 
Find us at the AMD booth, nearby Meta and other great compa...","views":0,"pathname":"adobe-max-los-angeles-2022-1","human_date":"18 Oct 2022","human_time":null,"event_type":null,"registration_link":""},{"id":2,"title":"AES NYC 2022","bg_image":{"url":"https://eap-spaces.fra1.cdn.digitaloceanspaces.com/storage/newsfeed/event/bg_image/2/6528-image.jpg","collage":{"url":"https://eap-spaces.fra1.cdn.digitaloceanspaces.com/storage/newsfeed/event/bg_image/2/collage_6528-image.jpg"}},"type":"Newsfeed::Event","preview":"October 19-20, Time TBA","views":0,"pathname":"aes-nyc-2022-2","human_date":"19 Oct 2022","human_time":null,"event_type":null,"registration_link":""}]}