AI in AEC: An Introduction
AECbytes Feature (February 28, 2019)

If there is one technology trend that stands out for generating the most “buzz” these days, it has to be AI (artificial intelligence). The buzzword a few years ago used to be “cloud computing,” but that has now become passé, at least from a discussion-and-debate perspective. While the actual implementation of cloud technology—where all our data resides on cloud servers rather than on local computers or on-premises servers—is still in progress, it is now generally understood and accepted that having our data centrally located on the cloud, where it is always securely accessible to whoever needs it, makes far more sense than passing the data back and forth using email or FTP.

Now, it’s the turn of AI. Hardly a day goes by when I don’t read about some interesting implementation of the technology. Here are just a few of the many AI implementation articles I have come across recently: for powering Amazon’s one-hour deliveries; by China’s leading ride-sharing company; for improving stock trading; for correcting grammar in writing; for creating virtual worlds quickly from real-world videos; and even for competing in debate. McKinsey has a whole article on how AI is being applied to social causes such as crisis response, economic empowerment, education, health, and the environment (https://www.mckinsey.com/featured-insights/artificial-intelligence/applying-artificial-intelligence-for-social-good).

While the AEC industry can hardly be described as being on the “leading edge” when it comes to adopting new technologies, AI in AEC is starting to see some traction. This article compiles what we have so far in terms of the use of AI in AEC applications. But first, it would be helpful to understand some of the technology underlying AI, so that we can better appreciate how the AEC applications that use it do what they do.

The Technology Underlying AI

Contrary to the impression conveyed by movies (like Ex Machina) and TV shows (like Westworld), the popular notion that AI will lead to “machines taking over the world” is nothing more than science fiction. The most fundamental thing to keep in mind about AI is that it is, first and foremost, a computing technology, and just as with other computing technologies, it is based on programs created by humans. It may seem like magic to those who don’t write the actual code that powers AI programs—just as flying a plane first felt like magic to those who did not design airplanes or pilot them—but there is actually a whole branch of computer science and programming dedicated to AI. (I have some first-hand knowledge of this, as I took an AI course during my Ph.D. and studied how to use LISP, which was the AI programming language of choice at that time.)

While an in-depth discussion of AI technology is beyond the scope of this article, an excellent primer on AI by Booz Allen Hamilton is available at: https://www.boozallen.com/s/insight/thought-leadership/the-artificial-intelligence-primer.html. In brief, though, we often hear the terms “AI” and “machine learning” used interchangeably; however, they are not the same thing. AI is more of a concept: in a broad sense, it refers to the ability to create computers that exhibit “intelligent” behavior. The word “intelligent” here does not mean “human”—it simply refers to tasks that appear “smart” to people, similar to how calculators seemed “smart” when they first came out. Using an example closer to AEC, we describe BIM as the “intelligent” representation of a building, when in fact, it is simply computer code using more advanced algorithms than CAD does.

Machine learning is currently one of the main technologies that powers AI. As the name suggests, it refers to training computers to detect patterns in large data sets, which in turn are used to classify information or to predict future trends. I found a very interesting article that highlights the human labor behind this effort: https://www.nytimes.com/2018/11/25/business/china-artificial-intelligence-labeling.html. It describes how a company in China employs people just to go through photos and videos and label everything they see so that AI can make sense of them. There is a quote by one of these workers that describes AI to a tee: “I used to think the machines are geniuses. Now I know we’re the reason for their genius.”
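To make the idea of learning from labeled data concrete, here is a toy sketch (purely hypothetical, not from any actual AEC product) of one of the simplest possible classifiers: it is “trained” on human-labeled examples by averaging their features, and then classifies new data by proximity to those averages. The feature choice (width and height of building elements) is just an illustrative assumption.

```python
# Minimal supervised-learning sketch: a nearest-centroid classifier.
# Humans supply labeled examples; the "training" step summarizes them;
# prediction classifies unseen data by the closest learned summary.

def train(examples):
    """Compute the centroid (average feature vector) for each label."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

def predict(centroids, features):
    """Return the label whose centroid is nearest to the new data point."""
    def sq_dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(features, centroid))
    return min(centroids, key=lambda label: sq_dist(centroids[label]))

# Hand-labeled training data: [width, height] in meters of model elements.
labeled = [([0.9, 2.1], "door"), ([1.0, 2.0], "door"),
           ([1.5, 1.2], "window"), ([1.4, 1.0], "window")]
model = train(labeled)
print(predict(model, [0.95, 2.05]))  # → door
print(predict(model, [1.6, 1.1]))    # → window
```

Real systems use far richer models and vastly more data, but the division of labor is the same: humans label, the machine generalizes.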

Machine learning itself can be categorized into supervised learning, unsupervised learning, and reinforcement learning, as described in this article: https://thenextweb.com/syndication/2018/11/21/the-difference-between-ai-and-machine-learning-explained/. An additional term that we are likely to hear when AI is being discussed is “deep learning”—this is a technique for implementing machine learning that uses a type of algorithm called “neural networks,” modeled after the connected networks of neurons in our own brains (see https://skymind.ai/wiki/neural-network).
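The building block of those neural networks is a single artificial “neuron”: a weighted sum of inputs passed through a nonlinear activation, with the weights adjusted gradually to reduce prediction error. As a toy sketch (assuming nothing beyond the standard logistic-regression update rule), here is one neuron learning the logical OR function by gradient descent:

```python
import math
import random

# One artificial neuron trained by gradient descent to learn logical OR.
# Deep learning stacks many layers of such neurons; this is the 1-neuron case.

random.seed(0)
w = [random.uniform(-1, 1), random.uniform(-1, 1)]  # input weights
b = 0.0                                             # bias term

def neuron(x):
    """Weighted sum of inputs plus bias, squashed by a sigmoid activation."""
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1 / (1 + math.exp(-z))

# Training data: the truth table of OR.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

lr = 1.0  # learning rate
for _ in range(2000):
    for x, target in data:
        out = neuron(x)
        grad = out - target  # gradient of cross-entropy loss w.r.t. z
        w[0] -= lr * grad * x[0]
        w[1] -= lr * grad * x[1]
        b -= lr * grad

print([round(neuron(x)) for x, _ in data])  # → [0, 1, 1, 1]
```

The network never receives a rule for OR; it infers one from examples, which is exactly the pattern-detection behavior described above, just scaled down to a single neuron.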

While further technical details of these concepts belong to the realm of computer science and engineering rather than AEC, it should be noted that technologies such as expert systems, rule-based design, and generative design that we are familiar with in AEC, and which already power some of our more advanced applications, are not AI. Of course, there’s no reason why we cannot have applications that combine the use of these technologies with AI, but we are not there yet. In fact, we are, as an industry, barely getting started with AI. Let’s see what we have so far.

AI-Based Applications in AEC

A good indication of how new AI is to AEC is that the first use of the technology I came across was only in 2018, at NVIDIA’s GTC 2018 Conference, where the company showed a new technology called AI denoising that it had incorporated into its rendering engine. AI denoising uses machine learning to train a renderer to remove “noise”—i.e., graininess—more quickly from a rendered scene, dramatically reducing the time it takes to render a high-fidelity image that is visually noiseless.

While rendering is not exclusive to the AEC domain, I did come across three more AEC-specific applications using AI towards the end of 2018. BASIS is a project planning tool that uses AI to assist and guide planners through the process of building a project plan, capturing insights and learnings from prior projects and using the stored knowledge to make informed suggestions during the planning process. BricsCAD BIM uses AI in add-ons that analyze the model and automatically organize and classify its elements without requiring the user to predefine them. And OpenSpace uses AI to automatically stitch together the thousands of video frames recorded by cameras mounted on construction hard hats into a single record of the job site, which is then transposed to the digital model of the site, allowing the entire construction team to see a 360° photographic view of any point of interest. All three applications are described in more detail in the 2018 year-end article published in AECbytes two months ago.

Conclusion

Given that we went from just one general-purpose rendering application to three AEC-specific applications using AI in such a short span of time—from April to December of 2018—we should expect to see a lot more of it going forward, both in the form of new applications and as enhanced features in existing ones. This article can be seen as just an introduction to what will hopefully be many exciting developments in AEC technology, powered by the cutting-edge and rapidly growing science of AI.

About the Author

Lachmi Khemlani is founder and editor of AECbytes. She has a Ph.D. in Architecture from UC Berkeley, specializing in intelligent building modeling, and consults and writes on AEC technology. She can be reached at lachmi@aecbytes.com.


AECbytes content should not be reproduced on any other website, blog, print publication, or newsletter without permission.