I had the opportunity to return to NVIDIA’s annual GPU Technology Conference (GTC), held in San Jose, California, a few weeks ago. As was the case last year, I was hard-pressed to find any AEC-specific technologies. However, I did get the opportunity to learn more about some of the key advances NVIDIA has made in the broader field of graphics-enabled visualization, which is relevant to AEC as well as to other industries such as gaming, media and entertainment, manufacturing, and industrial and product design. I also got a chance to understand why GTC is billed as a “premier AI and deep learning event” even though NVIDIA is best known as a company that makes graphics processing units (GPUs) for the gaming and professional markets.
This article explores both these aspects of NVIDIA’s technology—AI (artificial intelligence) and graphics—in more detail.
At GTC, an overwhelming number of AI companies filled the Exhibit Hall, showing technologies for machine learning, deep learning, and other approaches that come under the broad umbrella of AI. Given that the use of AI in AEC applications is starting to see some traction (see the recent AECbytes article, “AI in AEC: An Introduction”), it is helpful to understand why a graphics company has come to be so focused on this technology.
While GPUs were originally designed to speed up and enhance graphics for gaming and visualization, their architecture has also made them very well suited to fields such as AI. The CPU (central processing unit) is a computer’s main processor; it typically contains just a few cores, with 2 to 4 currently being the most common and 8 in higher-end machines, and each core is powerful and optimized for sequential tasks. A GPU, by contrast (an optional component, but one that is becoming increasingly common), is composed of hundreds or even thousands of simpler cores, which makes it far better at workloads that can be broken into many small tasks and processed in parallel. In addition to accelerating graphics, this parallel processing capability is now being harnessed for non-graphics computational workloads in many fields. And given how computationally intensive AI and machine learning are, with the computer having to parse vast amounts of data and process millions of instructions, the GPU has become a critical component of AI technology. This is why companies like NVIDIA that make GPUs are turning their attention to AI. As this article from Forbes states, “nowadays, for machine learning (ML), and particularly deep learning (DL), it’s all about GPUs.”
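To make the contrast concrete, here is a minimal sketch in Python, assuming an NVIDIA GPU and the open-source CuPy library (neither of which is discussed in this article), that runs the same matrix multiplication, the kind of operation at the heart of machine learning, first on the CPU with NumPy and then on the GPU:

    # Illustrative sketch only: compare one matrix multiplication on the CPU
    # (NumPy) and on the GPU (CuPy). Assumes an NVIDIA GPU and CuPy installed.
    import time
    import numpy as np
    import cupy as cp

    n = 4096
    a_cpu = np.random.rand(n, n).astype(np.float32)
    b_cpu = np.random.rand(n, n).astype(np.float32)

    t0 = time.time()
    np.matmul(a_cpu, b_cpu)              # runs on a handful of CPU cores
    cpu_seconds = time.time() - t0

    a_gpu = cp.asarray(a_cpu)            # copy the data to GPU memory
    b_gpu = cp.asarray(b_cpu)
    t0 = time.time()
    cp.matmul(a_gpu, b_gpu)              # spread across thousands of GPU cores
    cp.cuda.Stream.null.synchronize()    # wait for the asynchronous GPU work to finish
    gpu_seconds = time.time() - t0

    print(f"CPU: {cpu_seconds:.3f}s  GPU: {gpu_seconds:.3f}s")

The GPU version wins not because each of its cores is faster than a CPU core, but because the work is divided among thousands of them at once, which is exactly the structure of the computations in machine learning and deep learning.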
The application of GPUs to AI has turned out to be a crucial business development for GPU companies like NVIDIA. There is only so much further that 3D rendering and visualization can be pushed, and CPU companies like Intel are improving their integrated graphics to compete with low-end graphics cards. While the gaming and multimedia industries still need high-end graphics cards, the ability to deploy GPUs for big data processing, cryptocurrency mining, and the AI technologies of machine learning and deep learning has given GPU makers new avenues for development and innovation. Improvements in graphics will eventually hit a ceiling, considering how advanced they already are; with AI, we are barely getting started, and there seems to be no limit to how “smart” an AI application can become. The scope of AI is also much greater than that of graphics, since it can be applied to almost any field, giving companies like NVIDIA that develop the hardware to run AI the opportunity to keep innovating and advancing their offerings. And that is precisely what NVIDIA is doing, which is why GTC is billed as a premier AI event rather than one focused on graphics and visualization alone.
While NVIDIA continues to make its GPUs more powerful, it has also launched a graphics platform called RTX, developed primarily for real-time raytracing. The importance of raytracing, an advanced rendering technique that accurately captures the effects of lighting in a scene, including reflections and shadows, for creating highly photorealistic renderings is well known in the field of architectural visualization. However, it requires so much processing power, even with high-end graphics cards, that it has so far been applied mainly to still images to create static renderings. What the RTX platform focuses on is the ability to apply raytracing even in fast-moving sequences such as games, animations, walkthroughs, and virtual reality, all in real time. For AEC, this means that a client or designer can don a VR headset and navigate through a design in real time, experiencing the same high quality of visualization that a photorealistic rendering of a static scene provides.
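As a purely illustrative example (a generic textbook calculation, not NVIDIA’s RTX code), the heart of a raytracer is an intersection test such as the ray-sphere test sketched below in Python; a renderer must evaluate tests like this for millions of rays in every frame, along with the secondary rays spawned for reflections and shadows, which is why doing it in real time has been so difficult:

    # Minimal ray-sphere intersection test, for illustration only.
    import numpy as np

    def hit_sphere(origin, direction, center, radius):
        """Return the distance along the ray to the sphere, or None if it is missed."""
        oc = origin - center
        a = np.dot(direction, direction)
        b = 2.0 * np.dot(oc, direction)
        c = np.dot(oc, oc) - radius * radius
        discriminant = b * b - 4 * a * c
        if discriminant < 0:
            return None                                   # the ray misses the sphere
        return (-b - np.sqrt(discriminant)) / (2.0 * a)   # distance to the nearest hit

    # One primary ray per pixel; reflections and shadows spawn many more rays.
    ray_origin = np.array([0.0, 0.0, 0.0])
    ray_direction = np.array([0.0, 0.0, -1.0])
    print(hit_sphere(ray_origin, ray_direction, np.array([0.0, 0.0, -3.0]), 1.0))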
The RTX platform is able to achieve the acceleration needed for real-time raytracing not just by using high-end GPUs, but also with algorithmic improvements to the rendering pipeline powered by AI. Just as with the AI denoising technology I saw at last year’s conference, which uses machine learning to train a renderer to remove graininess from a rendered scene more quickly, machine learning can be used to clean up and refine frames that have been traced with far fewer rays per pixel, making real-time performance achievable. Thus, NVIDIA’s AI developments also help to improve its own graphics capabilities, a double win for the company.
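To illustrate the underlying idea, the sketch below shows, in PyTorch, how a small convolutional network can be trained on pairs of noisy and clean images to act as a denoiser. This is a conceptual illustration using synthetic data, not NVIDIA’s actual denoiser or training pipeline:

    # Conceptual sketch of a learned image denoiser (not NVIDIA's implementation).
    # The network is trained on pairs of noisy and clean images; here the images
    # are random tensors standing in for low-sample and converged renderings.
    import torch
    import torch.nn as nn

    denoiser = nn.Sequential(
        nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 3, kernel_size=3, padding=1),
    )
    optimizer = torch.optim.Adam(denoiser.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    for step in range(100):
        clean = torch.rand(8, 3, 64, 64)                    # stand-in for converged renderings
        noisy = clean + 0.1 * torch.randn_like(clean)       # stand-in for grainy, low-sample frames
        optimizer.zero_grad()
        loss = loss_fn(denoiser(noisy), clean)              # learn to map noisy frames to clean ones
        loss.backward()
        optimizer.step()

Once trained, a network like this can clean up a frame that was traced with only a few rays per pixel, which is far cheaper than tracing enough rays for the image to converge on its own.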
There are several additional features that go into making the RTX platform capable of real-time rendering, and while a more detailed discussion of them is beyond the scope of this article, they are described at https://developer.nvidia.com/rtx.
At the conference, I saw many applications built on the RTX platform demonstrating its capabilities for real-time photorealistic rendering and AI-enhanced graphics, video, and image processing. While the majority of these were in the gaming and media industries, showcased by companies such as Epic Games, Unity, and Pixar, there were also demonstrations from graphics and design companies such as Adobe, Autodesk, and Dassault, as well as presentations from AEC firms such as Cannon Design, Arup, and KPF, and from Weta Digital, showing their use of RTX for visualization. I also had the opportunity to see Enscape, which is focused entirely on architectural visualization and develops a bidirectional plug-in for Revit, SketchUp, Rhino, and ARCHICAD that uses RTX as the key technology for real-time rendering and virtual reality (Figure 1).
While most of GTC is devoted to showcasing technologies such as AI and deep learning, robotics, and autonomous vehicles, making it not much of a draw for AEC technologists, it does provide a deeper dive into the realm of computer graphics and into how developments in graphics can benefit AI and vice versa. In addition to Enscape, I also saw a couple of other solutions that seem to have some potential for AEC: iGel, a terminal device that facilitates cloud-based rendering; and 3D Mapping Solutions, which currently focuses on capturing detailed road networks for automotive applications but could expand its capabilities to capture high-fidelity city modeling data for AEC. NVIDIA may not be a key technology company in the AEC space, but its graphics and AI developments certainly do have important ramifications for us.
Lachmi Khemlani is founder and editor of AECbytes. She has a Ph.D. in Architecture from UC Berkeley, specializing in intelligent building modeling, and consults and writes on AEC technology. She can be reached at lachmi@aecbytes.com.