The Bartlett

Design in translation

What does AI-driven generative design mean for creativity and production in architecture?

Words: Clare Dowdy

Space syntax

The term “Artificial Intelligence” (AI) was coined as long ago as the 1950s, and, while certain disciplines have already put it into practice, recent developments mean it could soon have a significant impact on designers of 3D environments.

A handful of software companies are building vast databases of designs from which architects, with the help of AI, can learn, and which have the potential to inform their final design choices.

These experiments are coming out of research on generative design – which might loosely be described as the place where humans and machines collaborate to create things – and have the potential to change how architects operate, believes David Kirsh, Professor of Cognitive Science at the University of California, San Diego, and a Leverhulme Visiting Professor at The Bartlett’s Space Syntax Laboratory.

Kirsh is part of a conversation between researchers from across The Bartlett – including Robert Aish, Sean Hanna and Abel Maciel – who are collaborating on an inquiry into what they see as the newly energised promise of generative design.

In architectural terms, this approach has been used to solve complex optimisation problems for more than a decade. For instance, given the dimensions of a building, generative design software can work out the optimal shape of a roof that resists wind strain while minimising the building’s heat signature and material weight.
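To make the shape of that kind of problem concrete, here is a minimal sketch of a weighted multi-objective solve in Python. Everything in it is a stand-in: the design variables, the toy “physics” and the weights are all hypothetical, not taken from any real structural package.

```python
from scipy.optimize import minimize

def roof_cost(x):
    """Toy combined objective for a roof: wind strain, heat signature
    and material weight, folded into one weighted sum. All formulas
    are illustrative placeholders, not real structural models."""
    pitch_deg, thickness_mm = x
    wind_strain = (45.0 - pitch_deg) ** 2 / 100.0  # toy: strain grows away from 45 degrees
    heat_signature = 0.05 * thickness_mm           # toy: thicker panels retain more heat
    weight = 2.0 * thickness_mm                    # toy: weight scales with thickness
    return 1.0 * wind_strain + 0.5 * heat_signature + 0.2 * weight

# solve within plausible bounds for pitch (degrees) and thickness (mm)
result = minimize(roof_cost, x0=[30.0, 50.0], bounds=[(5.0, 60.0), (10.0, 200.0)])
best_pitch, best_thickness = result.x
print(f"pitch {best_pitch:.1f} deg, thickness {best_thickness:.1f} mm")
```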

However, “generative design is now going beyond optimisation and simple parameter exploration,” says Kirsh, “and this is where it gets interesting and promises to revolutionise creativity and design practices.”

One such piece of research is Autodesk’s Project Dreamcatcher, the next generation of CAD. It is based on an AI-driven generative design engine, which itself is built on a huge knowledge base created through machine learning techniques that classify pre-existing objects that perform functions. To use Dreamcatcher, designers input specific design objectives, which they characterise in terms of goals and constraints: functional requirements, material type, manufacturing method, performance criteria and cost restrictions.
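The article doesn’t publish Dreamcatcher’s actual input format, but the idea of a declarative brief can be sketched roughly: the designer states goals and constraints rather than geometry. All the field names below are hypothetical, not Autodesk’s API.

```python
from dataclasses import dataclass, field

@dataclass
class DesignBrief:
    """Hypothetical declarative brief: what the design must achieve,
    not what it looks like. Field names are illustrative only."""
    material: str
    manufacturing_method: str
    max_cost: float
    goals: dict = field(default_factory=dict)        # objectives to optimise
    constraints: dict = field(default_factory=dict)  # hard requirements

# a loadbearing wall, restated as goals rather than drawn geometry
wall = DesignBrief(
    material="cross-laminated timber",
    manufacturing_method="prefabricated panels",
    max_cost=15_000.0,
    goals={"weight": "minimise", "daylight": "maximise"},
    constraints={"design_load_kN": 400, "max_deflection_mm": 5},
)
```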

Or, as Mickey McManus, Visiting Research Fellow at Autodesk, puts it: “Now there’s the opportunity to say, I’m not going to draw what a wall looks like, I’m going to say ‘it needs to be lightweight, to deal with stresses, it needs to create spaces that allow sunshine’. I abstract my goals.”

The system then generates and searches a vast space of candidate designs that satisfy, in varying ways, the design requirements. The resulting alternatives are presented back to the user, along with performance data for each solution. The designers then evaluate the generated solutions in real time.

They can return at any point to the problem definition to adjust the goals and constraints to generate new results. Once the design space has been explored to satisfaction, the designer can output the design to fabrication tools, or export the resulting geometry for use in other software tools.  
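That generate-evaluate-adjust cycle is essentially a loop. Here is a minimal sketch, with the generative engine replaced by random sampling; everything below is a placeholder for the real machinery the article describes.

```python
import random

def generate_candidates(brief, n=200):
    """Placeholder for the generative engine: random parameter vectors.
    A real engine would synthesise geometry that meets the brief."""
    return [[random.uniform(0, 1) for _ in range(4)] for _ in range(n)]

def performance(candidate):
    """Placeholder performance data returned alongside each solution."""
    return {"weight": sum(candidate), "stiffness": max(candidate)}

brief = {"max_weight": 2.5}  # hypothetical constraint

for round_no in range(3):                       # designer iterates a few times
    scored = [(c, performance(c)) for c in generate_candidates(brief)]
    feasible = [s for s in scored if s[1]["weight"] <= brief["max_weight"]]
    print(f"round {round_no}: {len(feasible)} feasible candidates")
    brief["max_weight"] *= 0.8                  # tighten a constraint and regenerate

best, data = min(feasible, key=lambda s: s[1]["weight"])
# at this point the chosen geometry would be exported to fabrication tools
```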

“When successful, this process promises to radically transform the nature of design and industrial workflow,” Kirsh predicts. That doesn’t mean humans won’t continue to be important to the creative process. “It’s not all that likely that designers will want to use AI to generate designs,” says Kirsh. “Rather, I think they will use it for hints, in structural situations.”

Sean Hanna, Reader in Space and Adaptive Architectures at The Bartlett, backs this up: “AI certainly won’t be taking over,” he says, “because the design trade-offs that go on in an experienced mind are hard to make explicit. The imaginative leap of taking a machine-proposed candidate and turning it into something wonderful can’t be easily duplicated anytime soon. Human creativity and human judgement shouldn’t be downplayed. Good taste is something hard to program into computers.”

Where AI’s vast databases will be able to help is in imagining scenarios that are beyond human intuition. “Our intuition has always been bounded by our experience,” explains Hanna. “We don’t have intuition about things that are too big to design (such as cities), or too small to see, or that we can’t imagine. But we’re having to design these things more and more. AI can draw patterns that potentially will be the tool to make design decisions for things beyond our intuition.”

Timandra Harkness, author of Big Data: Does Size Matter? and co-presenter of BBC Radio 4’s Future Proofing series, backs this up. “Machines don’t think, in the sense that we think, but they can solve problems we set them in ways we would not think of. So a machine, incapable of original thought itself, could be a springboard to more original thought in its human user.

“As Ada Lovelace [the 19th-century mathematician who worked on Charles Babbage’s proposed computer] put it: ‘There are in all extensions of human power, or additions to human knowledge, various collateral influences, besides the main and primary object attained.’”

McManus says Autodesk’s goal is “to import work and fields quite far from the field in which the person is initially working. The goal is to begin to broaden the kind of systems you should consider to give you insight.” And when a piece of computer software has access to huge amounts of data, a shift will occur in what sort of recommendations it will make.  

Kirsh describes this data opportunity as the magic of mating. “You should get a whole contact sheet [of candidates] and you tick off the ones you like. You could ask to see the next generation, and AI would take two different shapes and try to marry them.” The fascinating part would be seeing how AI would marry these different candidates, and what unviable hybrid forms it would ‘abort’, he adds.
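Kirsh’s mating metaphor maps naturally onto genetic-algorithm crossover, though the article doesn’t say that is literally how such a system would work. A minimal sketch under that assumption, with designs reduced to flat parameter lists:

```python
import random

def crossover(parent_a, parent_b):
    """'Marry' two liked candidates: splice their parameter vectors
    at a random cut point (classic single-point crossover)."""
    cut = random.randint(1, len(parent_a) - 1)
    return parent_a[:cut] + parent_b[cut:]

def viable(child, max_mass=2.0):
    """'Abort' unviable hybrids: a hypothetical hard-constraint check."""
    return sum(child) <= max_mass

# the candidates the designer ticked off on the 'contact sheet'
liked = [[0.2, 0.9, 0.4], [0.7, 0.1, 0.8], [0.3, 0.5, 0.2]]

next_generation = []
while len(next_generation) < 10:
    child = crossover(*random.sample(liked, 2))
    if viable(child):
        next_generation.append(child)
```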

Death of the author

In the long term, “there could come a time when AI is capable of thinking a bit more like us,” says Hanna, “to be able to make ‘reasoned’ judgements, then it could act as an agent in its own right.” If that’s the case, then humans’ relationship with AI will be transformed into a more collaborative one. Hanna envisages a time when a computer will be treated as a member of the design team, with a different set of experiences and expertise.

While potential opportunities abound, the arrival of AI in the architects’ studio also throws up questions, particularly around ownership and education. To use software such as Dreamcatcher, designers freely give it access to their early ideas, concepts and intermediate steps. The software would then anonymise that work and sell the patterns back to other users.

However, Kirsh points out that “if you’re a great designer, then every trace of your work is worth something. So in the future, people will start saying, ‘you’re making billions of dollars from the information that we’re providing, in aggregate that’s worth a lot!’ That’s a looming user rights issue.”

Meanwhile, if an AI comes to be regarded as a member of the design team, “we don’t know what’s going to happen to the notion of authorship in design,” points out Hanna. Will we go back to the Middle Ages, when Gothic cathedrals were often designed and built by anonymous masons?

In schools and universities, the arrival of AI will change the way students are taught.

“Design briefs will have to be recast in the more abstract form the tools require,” Hanna predicts. Students will have to learn this translation, and will have to shift their focus towards problem solving. Could this mean that with AI’s involvement in architecture, a different type of student might be attracted to the discipline? “It’s an open question whether or not architects will have to be different sorts of people,” says McManus. “Anyone could learn this, and we need to capitalise on the learning potential of people. You might have to unlearn things because of automation.”

Kirsh also points out that new tools typically lead to new solutions. “Every time there’s a new powerful tool, the designs that emerge look different from the time before that tool was invented.” He cites parametric modelling tools: “Certain things that had been difficult became easy.” And that is one potential outcome of AI-enabled architecture that innovative designers will surely look forward to.

New dimensions

To explore the ways generative tools change thought, Kirsh ran a five-day hackathon. A group of students, post-doctoral researchers and practising architects were taught how to use two tools: Fusion 360, a state-of-the-art 3D modelling program, and Dreamcatcher, Autodesk’s generative design program.

The hackathon’s goal was to explore the way designers change their approach when working with different tools. The participants were split into two groups and were asked to design a number of objects, including a table that converted into a wall divider and a chair that became a bed.

“We asked all the participants to tell us about their design dimensions and how they were thinking about design,” says Kirsh. “We asked them to annotate their intermediate sketches, and to describe the changes they made, and the rationale for those changes.”

The team considered how the two tools changed the way the participants thought, how they framed problems, and how far apart their design candidates were. “It was delightful to hear the kinds of questions they asked, and their concerns for what it meant for creativity,” says McManus.

Afterwards, the participants were asked what they wished they had explored, what Kirsh calls “the thinking behind the thinking”. He intends to follow the hackathon with experimental studies involving designers worldwide who are already experts in using generative tools. “The nature of design dimensions is a promising area to explore for the theory of design thinking,” he says.