AI Learns to Design Using Visual Cues As Humans Do
Trained AI agents can adopt human design strategies to solve problems, according to findings published in the ASME Journal of Mechanical Design.

Large, open-ended design problems require creative and exploratory decision-making, a skill at which humans excel. When engineers use artificial intelligence (AI), they have traditionally applied it to a problem within a defined set of rules rather than having it follow human strategies to create something new. This research considers an AI framework that learns human design strategies by observing human data, then generates new designs without explicit goal information, bias, or guidance.

The study was co-authored by Jonathan Cagan, professor of mechanical engineering and interim dean of Carnegie Mellon University's College of Engineering; Ayush Raina, a Ph.D. candidate in mechanical engineering at Carnegie Mellon; and Chris McComb, an assistant professor of engineering design at the Pennsylvania State University.

"The AI is not just mimicking or regurgitating solutions that already exist," said Cagan. "It's learning how people solve a specific type of problem and creating new design solutions from scratch." How good can AI be? "The answer is quite good."

The study focuses on truss problems because they represent complex engineering design challenges. Commonly seen in bridges, a truss is an assembly of rods forming a complete structure. The AI agents were trained to observe the sequences of design modifications that had been followed in creating a truss, using the same visual information that engineers use (pixels on a screen) but without further context. When it was the agents' turn to design, they imagined design progressions similar to those used by humans and then generated design moves to realize them.

The researchers emphasized visualization in the process because vision is an integral part of how humans perceive the world and go about solving problems. The framework was made up of multiple deep neural networks that worked together in a prediction-based setup: using a neural network, the AI looked through a set of five sequential images and predicted the next design from the information it gathered from those images.

"We were trying to have the agents create designs similar to how humans do it, imitating the process they use: how they look at the design, how they take the next action, and then create a new design, step by step," said Raina.

The researchers tested the AI agents on similar problems and found that, on average, they performed better than humans. Yet this success came without many of the advantages humans have when solving problems. Unlike humans, the agents were not working toward a specific goal (such as making something lightweight) and did not receive feedback on how well they were doing. Instead, they relied only on the vision-based human strategies they had been trained to use.

"It's tempting to think that this AI will replace engineers, but that's simply not true," said McComb. "Instead, it can fundamentally change how engineers work. If we can offload boring, time-consuming tasks to an AI, like we did in the work, then we free engineers up to think big and solve problems creatively."
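To make the five-image prediction step described above concrete, here is a minimal, hypothetical sketch in Python (using PyTorch): a small convolutional encoder-decoder that stacks five sequential design images as input channels and predicts the next image in the sequence. The architecture, layer sizes, and loss here are illustrative assumptions, not the networks from the study, which combined multiple deep neural networks.

```python
# Illustrative sketch only (not the authors' actual model): predict the
# next design image from the five most recent images in a sequence.
import torch
import torch.nn as nn

class NextDesignPredictor(nn.Module):
    """Predicts frame t+1 of a design sequence from frames t-4..t."""

    def __init__(self, history: int = 5):
        super().__init__()
        # Encoder: compress the stacked history into a spatial feature map.
        self.encoder = nn.Sequential(
            nn.Conv2d(history, 32, kernel_size=4, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2, padding=1),
            nn.ReLU(),
        )
        # Decoder: upsample back to a single predicted next frame.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(32, 1, kernel_size=4, stride=2, padding=1),
            nn.Sigmoid(),  # pixel intensities in [0, 1]
        )

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, 5, H, W) -> predicted next frame: (batch, 1, H, W)
        return self.decoder(self.encoder(frames))

# Toy usage with random data: five 64x64 frames in, one predicted frame out.
model = NextDesignPredictor()
history = torch.rand(8, 5, 64, 64)   # batch of 8 five-frame sequences
next_frame = model(history)          # shape: (8, 1, 64, 64)
target = torch.rand(8, 1, 64, 64)    # stand-in for the true next frame
loss = nn.functional.binary_cross_entropy(next_frame, target)
```

In this kind of setup, the predicted image serves as the agent's "imagined" next design state; a separate component would then translate that prediction into concrete design moves, as the article describes.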
Reference: "Learning to Design From Humans: Imitating Human Designers Through Deep Learning" by Ayush Raina, Christopher McComb and Jonathan Cagan, 16 September 2019, ASME Journal of Mechanical Design. DOI: 10.1115/1.4044256

This paper is part of a larger research project sponsored by the Defense Advanced Research Projects Agency (DARPA) on the role of AI in human/computer hybrid teams, specifically how humans and AI can work together. Building on these results, the researchers are considering how AI could be used as a partner or guide to improve human processes and achieve results better than either humans or AI could achieve on their own.