In my article on the AIA 2023 conference that was held earlier this summer, I concluded that:
“… given how rapidly AI is taking over the technology airwaves in society as a whole, it was hardly surprising to see it also emerging in AEC technology solutions. In fact, I found the excitement — and potential game-changing capability — of generative AI to be almost identical to how it was with BIM when it was first introduced over twenty years ago. Rather than the mostly incremental upgrades we have seen since then, we should start seeing radically new solutions and capabilities in AEC technology powered by AI.”
Imagine a BIM application to which we feed a 2D napkin sketch of a floor plan and ask it to design a detailed 3D BIM model from it!
It’s really not that far out. At least, that’s the sense I got from watching NVIDIA’s keynote at the SIGGRAPH 2023 conference that was held earlier this month. SIGGRAPH, held annually, is the leading computer graphics conference, showcasing both academic research and industry developments in the field, and it was the perfect venue for NVIDIA, the leading vendor in computer graphics, to present its latest developments. These were mostly in the area of generative AI and its integration with NVIDIA’s own Omniverse technology, which is relevant to all industries using 3D graphics, including AEC (Figure 1).
NVIDIA has been working in the field of AI (Artificial Intelligence) for several years now, retooling its GPUs (graphics processing units) so that they can meet the increased computational requirements of AI and machine learning in addition to serving its traditional base of the gaming, movie-making, design, engineering, and manufacturing industries. (See the article on the NVIDIA GTC 2019 Conference.) The investment it has made in AI has paid off handsomely: the growing surge in generative AI has made NVIDIA, at the time of this writing, one of the most valuable companies in the world. (See the business articles, Nvidia hits record high as AI boom lifts bets on another strong forecast on msn.com, and Nvidia Revenue Doubles on Demand for A.I. Chips, and Could Go Higher on nytimes.com.)
A great example of how NVIDIA first brought artificial intelligence (the “regular” kind of AI rather than “generative” AI) and computer graphics together is in the area of real-time rendering. Its RTX platform, powered by AI, can now do real-time raytracing. Raytracing is the most photorealistic kind of visualization, traditionally reserved for static final renderings or for animations in which each frame has to be prerendered. Providing raytracing in a fully real-time animation is a giant technological leap, and it works by using AI to infer 7 out of 8 pixels in every rendered frame of the animation (Figure 2). Of course, real-time raytracing also needs advanced GPUs, and the latest GPUs NVIDIA has developed for its RTX platform can handle 100 times more geometry, perform many more lighting calculations, and output the rendering at a much higher resolution than the previous generation.
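To make the pixel-inference idea concrete, here is a minimal sketch, not NVIDIA’s actual algorithm, of tracing only 1 out of every 8 pixels and filling in the other 7. In the RTX pipeline the fill-in is done by a trained neural network; the naive nearest-neighbor fill below merely stands in for it.

```python
import numpy as np

H, W = 480, 640  # frame size; W is divisible by 8 for the sparse pattern below

def raytrace_pixel(y, x):
    # Stand-in for an expensive raytracing call; returns an RGB sample.
    return np.array([y / H, x / W, 0.5], dtype=np.float32)

frame = np.zeros((H, W, 3), dtype=np.float32)

# Step 1: raytrace only 1 out of every 8 pixels (one per 8-pixel run in each row).
for y in range(H):
    for x in range(0, W, 8):
        frame[y, x] = raytrace_pixel(y, x)

# Step 2: "infer" the 7 untraced pixels in each run from the traced one.
# A trained neural network does this in the RTX pipeline; a naive
# nearest-neighbor copy stands in for it here.
for offset in range(1, 8):
    frame[:, offset::8] = frame[:, 0::8]
```

The payoff is that only one-eighth of the frame incurs the full raytracing cost, which is what makes the real-time frame rates possible.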
Moving on to generative AI, NVIDIA sees it as the start of a new era in computing, the emergence of a brand-new computing platform after almost 15 years, so momentous that it can only be compared to previous history-defining milestones such as the introduction of the PC, the Internet, and mobile cloud computing. Made possible by a combination of large language models (such as the GPT series behind ChatGPT) and generative models, generative AI allows the generation of almost anything that can be represented by a structure, with the generation guided by human natural language. In addition to spurring the development of brand-new tools (billions of dollars are being invested in AI companies, and just about every domain and industry is pursuing ideas in generative AI), the technology is also being added to existing tools such as Microsoft Office, Adobe’s Creative Suite (Figure 3), SketchUp (Figure 4), and others, giving them exciting new capabilities.
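As an illustration of how natural language drives the generation, the snippet below uses Stable Diffusion via the open-source Hugging Face diffusers library. This is not one of the tools NVIDIA demonstrated, just a representative text-to-image workflow that happens to run on an NVIDIA GPU; the prompt and file names are made up.

```python
# pip install diffusers transformers accelerate torch
import torch
from diffusers import StableDiffusionPipeline

# Load an open-source text-to-image model onto an NVIDIA GPU.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Generation is guided entirely by a natural language prompt.
prompt = "a modern two-story house, architectural rendering, early evening light"
image = pipe(prompt).images[0]
image.save("house.png")
```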
To meet the computing requirements of generative AI, NVIDIA has developed a brand-new processor, GH200, its most advanced so far. Named after Grace Hopper, the GH200 has 72 cores, four petaflops of processing speed (one petaflop, a milestone in high-performance computing, is one thousand trillion calculations per second), 141GB of HBM3e memory, and a data transfer rate of five terabytes per second. It can be progressively scaled up to provide even more processing power and speed by connecting multiple modules together, for individual workstations as well as in trays and racks for servers and data centers (Figure 5). This kind of processing capability not only enables generative AI to be done more easily but also much more cost-effectively.
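A quick back-of-the-envelope calculation shows how those configurations add up, using the figures quoted above and assuming, purely for illustration, that capability scales linearly as modules are connected:

```python
# GH200 figures quoted in the keynote; linear scaling is an assumption
# made here only to illustrate workstation vs. rack configurations.
PETAFLOPS_PER_MODULE = 4
MEMORY_GB_PER_MODULE = 141      # HBM3e
BANDWIDTH_TBS_PER_MODULE = 5    # terabytes per second

def configuration(num_modules: int) -> dict:
    """Aggregate capability of num_modules GH200s connected together."""
    return {
        "petaflops": num_modules * PETAFLOPS_PER_MODULE,
        "memory_gb": num_modules * MEMORY_GB_PER_MODULE,
        "bandwidth_tb_s": num_modules * BANDWIDTH_TBS_PER_MODULE,
    }

print(configuration(2))   # a dual-module workstation
print(configuration(16))  # a hypothetical server rack
```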
In addition to developing new hardware solutions for generative AI, NVIDIA is also incorporating the technology into its Omniverse software. Omniverse is an open platform for real-time 3D visualization and simulation that NVIDIA introduced in 2021. (See the article, NVIDIA GTC 2021: The Omniverse.) Omniverse can host models and other assets created in different applications, allowing design professionals using different tools to work together in one environment and see, in real time, the changes that each of them is making to the overall project (Figure 6). This kind of synchronous collaboration is useful not just in AEC but in all design fields that rely on visualization, including movies, gaming, manufacturing, product design, and so on, and the number of “connectors” that plug into Omniverse for the live sync capability continues to grow. In AEC itself, the applications with connectors to Omniverse include Revit, Archicad, SketchUp, Vectorworks, Rhino with Grasshopper, 3ds Max, ESRI’s CityEngine, SimScale, Bentley’s LumenRT, Maxon Cinema 4D, Autodesk Maya, Blender, TurboSquid, and Epic Games’ Unreal Engine.
In its SIGGRAPH 2023 keynote, NVIDIA showed its vision of how generative AI and Omniverse come together. In addition to the capabilities for photorealistic visualization, building physics, and simulation that are already available, generative AI can also help to actually create the virtual world that is being designed on the Omniverse platform. NVIDIA illustrated this with the animation of a car model being designed by BYD, the world’s largest electric vehicle maker. WPP, the company working with BYD to build its next generation of car configurators, used generative AI in Omniverse to design a virtual environment for the car using natural language commands. Figure 7 shows two different backgrounds generated by AI in real time for the car model with the language prompts, “An image of a stormy sky with the sun emerging between two clouds between two mountains,” and, “An image of distant mountains on the horizon under an early evening sunset.” Figure 8 shows the car model and assets from other applications being aggregated and composited with the virtual environment in Omniverse.
NVIDIA also introduced, in relation to the same car design example, the concept of a “super digital twin.” While the design tools shown in Figure 8 create a physically accurate digital twin of the car, WPP artists are using Omniverse to extend that digital twin to include all possible variations, options, and configurations of each asset within the model, so that it can be deployed on Omniverse as a fully interactive 3D configurator in which everything is rendered in real time. This is in contrast to today’s configurators, which require hundreds of thousands of images to be pre-rendered to represent all possible options and variants. The 3D environment for the “super digital twin” can be generated by AI, as shown in Figure 7, or it can be scanned from the real world using LIDAR.
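The arithmetic behind that contrast is simple combinatorics: the number of images to pre-render is the product of all the option counts times the number of camera views. The figures below are hypothetical, not BYD’s actual data, chosen only to show how quickly the count explodes:

```python
from math import prod

# Hypothetical option counts for a car configurator.
options = {
    "paint_color": 12,
    "wheel_style": 6,
    "interior_trim": 8,
    "roof_type": 2,
}
camera_views = 36  # one image per 10 degrees around the car

images_to_prerender = prod(options.values()) * camera_views
print(images_to_prerender)  # 12 * 6 * 8 * 2 * 36 = 41,472 images
```

Even these four modest options already demand over 40,000 images; a real-time configurator simply renders each requested view on demand, so adding an option multiplies nothing.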
Another key development related to Omniverse is in the file format it uses, USD (Universal Scene Description). This is an open file format for 3D data interchange originally developed by Pixar to simplify entertainment industry workflows and allow artists to work collaboratively on a scene, and it was the foundational technology on which Omniverse was developed. There is now an effort to formalize the open-source aspect of USD, starting with rechristening it as OpenUSD and forming the Alliance for OpenUSD, with Pixar, Apple, Adobe, Autodesk, and NVIDIA as the founding members. The Alliance’s mission is to foster the development and standardization of OpenUSD, write a formal specification for it, and accelerate its adoption.
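For a feel of what USD looks like in practice, here is a minimal sketch using Pixar’s open-source Python bindings (installable as usd-core from PyPI); the file and prim names are made up for illustration.

```python
from pxr import Usd, UsdGeom

# Create a new USD stage: the shared scene description that
# different applications and artists can contribute to.
stage = Usd.Stage.CreateNew("building.usda")
UsdGeom.Xform.Define(stage, "/World")

# Add a placeholder massing volume for a building.
slab = UsdGeom.Cube.Define(stage, "/World/Massing")
slab.GetSizeAttr().Set(10.0)

# Other tools can later layer their own contributions over this file
# non-destructively, which is what makes collaboration possible.
stage.GetRootLayer().Save()
```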
The new release of Omniverse, in addition to incorporating generative AI, is being built on the expanded capabilities of the OpenUSD format, which include physical accuracy, real-time simulation, and an understanding of geospatial data. Combined with NVIDIA’s powerful computing capabilities, this makes the Omniverse platform especially well suited to heavy industries that make physical objects and would like to build them virtually first, to improve productivity, minimize errors, and reduce waste, before building them physically (Figure 9). Some examples that were shown include the BMW Group, which is using Omniverse to digitalize its global network of factories; Deutsche Bahn, which is using Omniverse to create a digital twin of its entire railway network so that it can be operated entirely digitally; and Amazon, which is using Omniverse to digitalize its warehouses so that workers don’t have to walk as far to access items for shipping (Figure 10).
I found it very illuminating to take a step back from the AEC technology space and look at the technology of generative AI as a whole. The presentation from NVIDIA helped me understand it a lot better — while it still seems “magical” to me, especially the text-to-image capability, I can better appreciate how the magic is actually happening.
It also made me realize just how much more can be done with generative AI, given that this is only the beginning. There can, of course, be terrible consequences to society as a whole from the widespread adoption of generative AI, which are being widely discussed and debated as they should be, but from a purely technology perspective, I find it an amazing testament to human ingenuity. I think it will completely transform the way we use technology and how we make it work for us.
Lachmi Khemlani is founder and editor of AECbytes. She has a Ph.D. in Architecture from UC Berkeley, specializing in intelligent building modeling, and consults and writes on AEC technology. She can be reached at lachmi@aecbytes.com.
Have comments or feedback on this article? Visit its AECbytes blog posting to share them with other readers or see what others have to say.
AECbytes content should not be reproduced on any other website, blog, print publication, or newsletter without permission.
Copyright © 2003-2024 AECbytes. All rights reserved.