Depending on whom you talk to, architects approach artificial intelligence (AI) with anticipation, skepticism, or dread. Some say algorithms will handle drudge work and free designers to focus on the more creative aspects of their jobs. Others assert that AI won’t live up to its hype—at least not in the near future—and will make only marginal improvements in the profession. And a third group worries that software that learns on its own will put a lot of architects out of work.
Science fiction writers have been imagining robots that think like human beings for more than 100 years. But the field of artificial intelligence really began in the middle of the last century with British mathematician Alan Turing’s 1950 paper “Computing Machinery and Intelligence.” In 1956, at a conference hosted by Dartmouth College in New Hampshire, mathematician John McCarthy coined the term “artificial intelligence” and, with a group of participants, explored how to “make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves,” according to the event’s proposal.
Imbuing computers with true intelligence, though, has proved to be more difficult than originally imagined. Sixty-five years after the Dartmouth conference, computers can process huge amounts of information, analyze it to find correlations and patterns, and then make predictions based on those patterns. What makes AI different from previous forms of computation is machine learning (ML), which employs algorithms that get better at performing certain tasks the more they do them; they learn without having to be programmed to do each step. The bigger the data set used to “train” an algorithm, the better it will perform. In 1997, IBM’s chess-playing program Deep Blue beat Garry Kasparov, the world chess champion at the time. Today, Google Translate does a pretty good job of recognizing text in one language and communicating it in another. A program known as GPT-3 will take a prompt of a few words and write a paragraph of text that seems at first glance to have been written by a person. Algorithms allow autonomous vehicles to navigate city streets, radiologists to identify cancerous tumors, and online shopping services to recommend products to their customers.
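The learning-from-data idea described above can be sketched in a few lines of Python: a toy model that discovers a numerical rule from examples by gradient descent, rather than having the rule programmed in. This is only an illustrative sketch, not how production ML systems are built; the function names and the hidden rule are invented for the example.

```python
# Toy machine learning: learn the hidden rule y = 2x + 1 from examples
# alone, instead of coding the rule directly. Real systems use far
# larger models and data sets, but the principle is the same.

def train(examples, epochs=2000, lr=0.01):
    """Fit slope w and intercept b to the examples by gradient descent."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in examples:
            err = (w * x + b) - y   # how far off is the current guess?
            w -= lr * err * x       # nudge parameters to shrink the error
            b -= lr * err
    return w, b

# Training data generated by the hidden rule y = 2x + 1
data = [(x, 2 * x + 1) for x in range(6)]
w, b = train(data)
print(w, b)  # converges toward 2.0 and 1.0
```

Feed the same code a different set of examples and it learns a different rule; nothing about y = 2x + 1 is written into the program itself. That is also why the size and quality of the training data matter so much: with too few examples, the learned parameters drift far from the true rule.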
But computers still don’t think like people. They have no awareness of anything beyond their own predetermined capabilities and don’t have anything close to common sense. They know only what they have been shown and lack the ability to generalize from one task to another. A 2016 Obama Administration report on the future of AI identified the technology’s potential to “open up new markets and new opportunities for progress in critical areas such as health, education, and the environment,” but admitted that “it is very unlikely that machines will exhibit broadly applicable intelligence comparable to or exceeding that of humans in the next 20 years.”
In the past decade or so, software for architects has evolved from CAD to scripted geometry engines like Rhino and parametric BIM platforms like Revit—moving from the representation of buildings (in plan, section, elevation) to more responsive systems that show the impact of one change on the rest of the project. Thanks to faster and cheaper computers and the enormous computing power and storage capacity of the cloud, AI systems are now able to encode information and relationships in increasingly complex layers. Because they’re able to process vast amounts of data accessible from internet-based sources, they can create statistical correlations that approximate learning, says Phillip Bernstein, author of a forthcoming book on AI and an associate dean at the Yale School of Architecture. […]
Read more: www.architecturalrecord.com