Architecture should prepare for major disruption by AI — here’s why…

January 1, 2023


A friend recently sent me a screenshot from the website "Will Robots Take My Job?", which gave architects a reassuringly low 3% automation risk score.  We are 'knowledge workers', meaning our value is derived from a hard-to-reproduce synthesis of specialist knowledge, applied in a bespoke manner to each unique situation.

The traditional view is that knowledge workers will be just fine in the future and that the jobs most prone to disruption by AI products will be those focused on manual, repetitive and mechanical tasks.  As an example, it is easy to imagine haulage drivers worrying about their role in a world dominated by driverless transportation.  However, this widely predicted outcome plays into a comforting bias among knowledge workers (who are also the ones making these predictions) that they will somehow be immune, that their work requires uniquely human insight and discernment.


Image: Architecture is traditionally seen as a low automation risk, source: "willrobotstakemyjob.com"

The latest wave of AI tools, which emerged in Q4 of 2022, is quickly challenging that narrative, with new "previously unimaginable" applications of this technology being deployed on a weekly basis.  The pace of change is so rapid that it is now hard to imagine any field of knowledge-based work that can remain untouched in the longer term.  Leading the charge in November was the launch of OpenAI's "ChatGPT", which just a few weeks later feels like it could be an irreversible Big Bang moment for the digital age.

Arthur C. Clarke said that "any sufficiently advanced technology is indistinguishable from magic", and using ChatGPT does feel like a mysterious kind of sorcery; it is a chatbot (built on a large language model) that is able to respond to any prompt with outlandish sophistication, writing back faster than most people can read the words as they land.

ChatGPT is not the first model of its kind, but it is the first to really capture the collective imagination of popular culture.  It will adopt (imitate) any voice or personality you ask it to; it will write a script for a debate on theology between Charles Darwin and Mother Teresa, complete a full-length essay structured into sections within a word limit, or produce a comedic version in rhyming couplets.



These are not yet great works of genius, and I am yet to be really moved by something I've read in the responses (unless you include incredulous laughter), but they do display considerable "competency" in the tasks requested.  Even if development were to stop here and go no further, the current version brings extreme levels of utility alongside this competency.  It can, for example, spin up a website from scratch in seconds with just a small amount of design brief guidance, and it can write impeccable computer code (in Python and other languages) to create new functions and workflows that solve laborious tasks between different computer programmes.  It can even provide step-by-step direction on how to 'theoretically' perform a medical procedure, finding the correct medical literature and summarising it into a neat numbered list of ordered actions.
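
As an illustration of that kind of workflow task, here is a hypothetical example of the sort of small "glue" script ChatGPT will happily produce on request, merging room areas from several exported CSV schedules into one combined file.  The file names and column headings are invented for this example, not taken from any real export format.

```python
# Hypothetical "glue" script of the kind ChatGPT can write on request:
# merge room areas from several exported CSV schedules into one combined file.
# File names and column headings are invented for illustration.
import csv
import glob

combined = []
for path in glob.glob("exports/*_areas.csv"):        # e.g. one export per floor
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            combined.append({
                "source": path,
                "room": row["Room Name"],
                "area_m2": float(row["Area (m2)"]),
            })

with open("combined_schedule.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["source", "room", "area_m2"])
    writer.writeheader()
    writer.writerows(combined)

print(f"Merged {len(combined)} rooms, "
      f"total {sum(r['area_m2'] for r in combined):.1f} m2")
```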



In parallel with these text-to-text models, more applications are emerging every day; most notably text-to-image platforms such as Midjourney, Stable Diffusion, OpenAI's DALL-E and others, which have made giant leaps forward in the quality of their recent model output.  Go exploring on Hugging Face (the "GitHub for AI") and you will find new experiments that are successfully turning text prompts into 3D forms, or models that turn 2D imagery into 3D models.  Text-to-audio is also possible, with new models able to mimic an author's voice almost flawlessly if trained on a large enough set of sample data.  We will soon see these models produce original, smooth video content combined with audio, created from nothing more than detailed text prompts.



Image: Sample of output from Stable Diffusion. Text prompt used: "a vast robotic factory space of the future, designed by both Luigi Nervi and Richard Rogers, epic scale, warm natural light through ceiling and high vaulted windows ,image style Ismail Inceoglu"

The text-based models alone bring with them the potential for huge disruption to knowledge workers in journalism, screenwriting, customer care, sales, perhaps even law.  Imagine two independently briefed large language models fighting out contract wording clause by clause, each trying to best serve its client's interests.  A sufficiently well-trained model could certainly read a contract in a few seconds, flag the key areas that will prove contentious and maybe even start proposing amendments.


As we marvel at each new magic trick that these tools can pull from the hat, the computer engineers are working on the next round of improvements, and as they do, my brain jumps to the question: "Will text-to-architecture be next?"



It has taken me a while to reach acceptance of this idea, but in just a couple of months I have gone from total sceptic to feeling that some version of it is not only likely but, given enough time, inevitable.  In a number of cases it is already beginning to happen.  Architects may not like the idea, but in practice they may not have a choice in the matter.



Architecture is a kind of practical art form, arising out of essential needs and a patron with an idea.  When done well it can transcend spatial constraints and augment the very particular characteristics of a place.  It is a form of expression that is subject to many layers of real-world constraints: budget, fashion, material properties and building physics.  Alongside music and art, I also consider architecture to be a specifically human means of expression.




Image: Example of Stable Diffusion generated image for a hospital atrium of the future (generated in 15 seconds)

The reason AI is so beguiling (or scary, depending on who you ask) is that it makes us question the "humanness" of our ideas.  If something can imitate fluent human behaviour and insight without detection (i.e. it passes the Turing Test), then when it comes to letting the best design idea win, the author of that idea (human or not) may become unimportant in practice.  The humanness of an idea will likely become a kind of nostalgic afterthought.



Here is a summary of the circumstances in which the architecture profession finds itself at the start of 2023, and why I feel this position makes us vulnerable to significant upheaval.  It is wholly subjective and based upon my own experiences:




Observation 1: High Value Problems

Architects tackle high-value, high-cost, high-risk projects where the fidelity and unique insight of strategies at early planning stages have an enormous impact on long-term value and financial performance.  These insights are mostly based upon past experience and building precedents.

An AI Perspective: Generative Design products will be deployed at the outset to allow many more options to be tested concurrently, with each analysed against quantifiable metrics held within the client's brief.  An effective ML model for this application would be able to call upon an enormous dataset of well-labelled and structured precedent studies in order to explore countless solutions for review.  (Delve, from Google's Sidewalk Labs, is well on the way to being this product.)
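
To make the idea concrete, here is a minimal sketch (in Python) of the generate-and-score loop such a tool might run.  The brief metric, the weights and the random option generator are my own placeholder assumptions, standing in for a trained model drawing on a real, labelled precedent dataset.

```python
# Minimal sketch of a "generate many options, score against the brief" loop.
# The metrics, weights and random generator are invented for illustration;
# a real product would propose options from a trained model and precedent data.
import random
from dataclasses import dataclass

@dataclass
class MassingOption:
    floors: int
    footprint_m2: float      # gross footprint per floor
    glazing_ratio: float     # proportion of facade glazed

    @property
    def gross_area_m2(self) -> float:
        return self.floors * self.footprint_m2

def score(option: MassingOption, brief_area_m2: float) -> float:
    """Score an option against a (hypothetical) client brief."""
    area_fit = 1 - abs(option.gross_area_m2 - brief_area_m2) / brief_area_m2
    daylight = option.glazing_ratio                   # crude proxy for daylight
    overheating_penalty = max(0.0, option.glazing_ratio - 0.6)
    return 0.6 * area_fit + 0.4 * daylight - 0.5 * overheating_penalty

def generate_options(n: int) -> list[MassingOption]:
    return [
        MassingOption(
            floors=random.randint(3, 20),
            footprint_m2=random.uniform(400, 1200),
            glazing_ratio=random.uniform(0.2, 0.8),
        )
        for _ in range(n)
    ]

if __name__ == "__main__":
    brief_area = 12_000   # m2 target gross area from the client brief (assumed)
    options = generate_options(500)
    best = sorted(options, key=lambda o: score(o, brief_area), reverse=True)[:5]
    for o in best:
        print(f"{o.floors} floors x {o.footprint_m2:.0f} m2 -> "
              f"{o.gross_area_m2:.0f} m2, score {score(o, brief_area):.2f}")
```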



Observation 2: Mechanical Task Work

Most small and medium-sized practices routinely deploy highly trained professionals on time-consuming "mechanical" tasks that require repetitive work across several software platforms and licences (rather than one).

An AI Perspective: Software platforms are already emerging that link Generative Design tools in 2D and 3D, with massing diagrams, rendering, environmental testing and spreadsheet capability all happening synchronously in a single platform.  This trend is likely to accelerate and will soon leverage ML models too (see Spacemaker AI, cove.tool, TestFit, Giraffe Technology, etc.).




Observation 3: Vulnerable Business Model

The business model of architectural practice operates with a high degree of waste that is difficult to avoid.  Architects will often carry out free or low-paid work to support and enable clients to bid for sites, or provide free or low-paid design work through very costly design competitions for high-profile sites; this work creates a massive budget overhead for practices.

An AI Perspective: Using generative feasibility design tools, frustrated clients will soon be able to speculate in three dimensions at development appraisal stage and build their site briefs directly, without involvement from an architect.  At the same time, products for architects will soon be able to render architectural detail instantly onto simple block massing models using basic prompts (see experiments combining Blender with ChatGPT text prompts), which can increase the speed of design testing and visualisation.


Observation 4: Low Profitability, Low Investment in R&D

Running costs are high in architectural practice, and profits as a fraction of turnover are modest to low compared with other professional services.  This drastically limits investment in computer-science-based R&D.

An AI Perspective: A lack of internal digital R&D investment in the field means architects are not currently equipped to compete in developing AI tools for themselves (unlike tech companies, who don't wait to be asked).




Observation 5: Industry Hunger for Standardisation

Clients are increasingly hungry for standardisation and repetition to bring down the cost and risk of building, particularly in housing, which is increasingly unviable to build at present even as the housing crisis grows every year.  Architects, meanwhile, continue to design projects largely from scratch.

An AI Perspective: Modern Methods of Construction (MMC) are set to accelerate with the adoption of AI models, which will allow contractors and clients to take designs and rapidly recalculate adaptations to make them suitable for off-site manufacture and system standardisation (have a look at KOPE.ai, for example).  Furthermore, typology-specific models could now be built that iterate designs based only on agreed construction system parameters, locking in organisation-wide preferences for procurement ahead of planning.
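
As a toy illustration of what "locking in construction system parameters" could mean in practice, the sketch below snaps a design's structural spans to the nearest module of an assumed off-site panel system.  The module size and maximum span are invented values, not those of any real system or product.

```python
# Hypothetical illustration of locking in construction system parameters:
# snap designed spans to the nearest module of an agreed off-site panel system.
# Module size and maximum span are placeholder assumptions.
PANEL_MODULE_MM = 600        # assumed manufacturing module
MAX_SPAN_MM = 7_800          # assumed maximum panel span

def snap_to_module(span_mm: float) -> int:
    """Round a designed span to the nearest buildable module, capped at MAX_SPAN_MM."""
    snapped = round(span_mm / PANEL_MODULE_MM) * PANEL_MODULE_MM
    return min(int(snapped), MAX_SPAN_MM)

designed_spans = [7250, 8100, 6400, 5575]          # spans from the design model, mm
standardised = [snap_to_module(s) for s in designed_spans]
print(standardised)    # [7200, 7800, 6600, 5400]
```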




Observation 6: Response to Climate Crisis

The climate crisis requires a rapid and complete technical response.  Architects currently lack the ability to take regular carbon measurements for every design decision, yet this process needs to happen synchronously with design development at every stage of a project.  Few are equipped to meet this challenge adequately, because it requires a radical rethink of current working practices and the tools are not smoothly integrated into the design process.

An AI Perspective: With sufficient data, we could have an ML model running calculations for each option and possible change in parallel with our design work, providing estimates (within an acceptable margin of error) that show us when a design decision will have a material impact on carbon consumption, be that through embodied-carbon-intensive design measures (like long-span floors and transfer structures) or through the enlarging of windows (increasing operational cooling loads in summer).  This would allow smooth 'fuzzy' estimates for decision-making on carbon while design is in process, and then static 'precise' estimates at the end of each RIBA stage.  The dataset for this does not currently exist, but if architects can come together effectively, it will in the future.
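
A crude sketch of how such a 'fuzzy' estimator might sit alongside design work is shown below.  The carbon factors and the 5% materiality threshold are placeholder assumptions for illustration only; a real model would be calibrated on measured project data.

```python
# Minimal sketch of a "fuzzy" embodied carbon check running alongside design work.
# Carbon factors and the materiality threshold are placeholder assumptions.
EMBODIED_FACTORS = {            # kgCO2e per m2 of element, assumed values
    "flat_slab": 110,
    "long_span_slab": 180,
    "transfer_structure": 350,
}

def embodied_carbon(elements: dict[str, float]) -> float:
    """Sum embodied carbon (kgCO2e) for a dict of {element_type: area_m2}."""
    return sum(EMBODIED_FACTORS[e] * area for e, area in elements.items())

def flag_material_change(before: dict[str, float], after: dict[str, float],
                         threshold: float = 0.05) -> bool:
    """Flag a design change that shifts embodied carbon by more than the threshold."""
    a, b = embodied_carbon(before), embodied_carbon(after)
    return abs(b - a) / a > threshold

baseline = {"flat_slab": 4_000}
option = {"flat_slab": 3_000, "long_span_slab": 1_000}   # swap part of the floor to long spans
print(embodied_carbon(baseline), embodied_carbon(option))   # 440000.0 510000.0
print(flag_material_change(baseline, option))               # True (~16% increase)
```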



These observations are by no means exhaustive, and if this all sounds a bit depressing, it is not meant to be; it is a description of where architects find themselves, intended to highlight that where obvious inefficiencies exist, so do opportunities.  This is where we get into the ethics of how one should utilise these new tools.



The approaching wave of innovation can be put to enormously productive and positive use, but it does require us as architects to engage and get involved in the development of these datasets and tools, and not be passive onlookers.  If we don't, the future of our cities could largely be defined by the computer engineers who build the Generative Design ML models of the future.  We must be aware that ML models rely on pre-conditioned and well-labelled training data, which is open to the bias and subjectivity of the humans building them.



We must also get organised with our data.  ML requires rich datasets to train models effectively, and architects currently hold much of this data.  Before we get comfortable with the idea that no product development can happen if we don't permit access to this data, we should remember that much of it is already publicly accessible through our publishing and promotional activities, with little to no IP controls in place that would be effective in slowing the AI freight train from hitting us.  For example, the current text-to-image models can mimic well-known illustrators, but they do not credit them, ask permission, or pay them for their use (as far as I am aware).



If history is any measure, computer science tends to jump in and break things in order to achieve progress, an approach proudly referred to as 'failing fast'.  When it comes to the built environment, this mode of thinking won't ask for IP permission to build Generative Design models from harvested, publicly accessible building drawings, diagrams and renders, and it won't approach our field with caution.



Despite these concerns, there is great "co-design" potential for machine learning models, if they are constructed to enhance and build upon current working methods.  The co-design idea imagines that alongside your team of designers, you can also instruct an ML model to run parallel design study tasks at lightning speed, like a 10x-productive team member.  We could find ourselves calling upon a series of tailor-made models built for very specific problem-solving applications and scopes, which are then put to work on demand by the architect.  Off the top of my head, here are 5 soft targets for this kind of work:



1. Automated massing option testing for predetermined area briefs and measurable 3D site constraints.

2. Data-based urban analysis tools that estimate commercial activity and pedestrian behaviour to improve the design of ground-floor uses, street activation, and transport and access strategies.

3. Deep dive contextual analysis and facade study elevation option generation.

4. Dynamic building core auto-design from areas, with preset criteria for each typology and structural/MEP combination.

5. Live Building Regs and fire compliance flagging and redesign suggestions (a toy illustration of this one is sketched just below this list).
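
To show how modest the starting point for number 5 could be, here is a toy, rule-based sketch of compliance flagging.  The travel distance and stair width limits are placeholder values, not actual Building Regulations figures, and a real tool would read its geometry live from the design model rather than from hand-typed inputs.

```python
# Toy, rule-based illustration of "live compliance flagging" (soft target 5).
# The limits below are placeholders, not actual Building Regulations values.
from dataclasses import dataclass

MAX_TRAVEL_DISTANCE_M = 18.0   # assumed single-direction escape limit (placeholder)
MIN_STAIR_WIDTH_MM = 1100      # assumed minimum stair width (placeholder)

@dataclass
class Room:
    name: str
    travel_distance_m: float   # distance to the nearest protected stair

@dataclass
class Stair:
    name: str
    width_mm: int

def flag_issues(rooms: list[Room], stairs: list[Stair]) -> list[str]:
    """Return human-readable flags for any room or stair breaching the assumed limits."""
    issues = []
    for r in rooms:
        if r.travel_distance_m > MAX_TRAVEL_DISTANCE_M:
            issues.append(f"{r.name}: travel distance {r.travel_distance_m} m exceeds limit")
    for s in stairs:
        if s.width_mm < MIN_STAIR_WIDTH_MM:
            issues.append(f"{s.name}: width {s.width_mm} mm below minimum")
    return issues

print(flag_issues(
    rooms=[Room("Meeting room 2.04", 22.5), Room("Office 2.01", 12.0)],
    stairs=[Stair("Core A stair", 1000)],
))
```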




The list will go on and on … and the longer it does, the more challenging the thought experiment gets.  In this mode, the architect becomes more of an astute editor, relied upon to ask the 'right' questions at the right time, guiding selection, discovering new avenues and bringing together insights from many sources.  This role will require great skill and judgement, and if managed well, we could find ourselves with more time for creative focus in other areas, becoming more productive and fulfilled overall.


That would be the 'best-case scenario', but right now there seem to be as many in the optimism camp as there are in the pessimism camp.  I recently asked ChatGPT to speculate on the impact of OpenAI on architecture as though it were an episode of Black Mirror.  The fact that my imagination immediately jumped to this dark conclusion says something about the apprehension with which we anticipate such era-defining innovations.  The internet age, for example, has brought such rapid transformation to society that we have a kind of techno-cultural whiplash, so that new technologies now conjure a Pavlovian response of foreboding and excitement in equal measure.  The internet has been an awesome vehicle for equal access to information and connectivity, but at the same time it has proliferated misinformation, divided people with endless culture wars and given us all smartphone addiction.




Image: OpenAI's ChatGPT "Playground", text prompt shown in grey, AI response shown in green.

Rather than jumping straight to dystopian conclusions, there is reason here for great optimism, and I can get comfortable with the idea of adopting a series of bite-sized models, purpose-built by computer engineers and architects together, to assist with labour-intensive and inherently wasteful workflows.

I am, however, much less comfortable with the idea that once these discrete models are built, one could link them all up and take out the intermediary altogether, creating a single "model of models" that would start to curate and edit in a holistic manner for itself.  In such a world, many clients would likely outsource much of their knowledge work directly to this type of product and avoid hiring an architect for many tasks.  If you know what you want and have the tools at your disposal, it would be cheaper, and in many cases easier, than having to collaborate with an architect on a vision.  This much more utilitarian outcome, while playing to the incentives of clients and software companies, would, I am certain, be unambiguously bad for the built environment.



To go further, architects are probably not going to be the target market for the new wave of products that will emerge; it is more likely that products will be aimed at developer clients instead, because they are a bigger market, with more money to spend and a hungrier appetite for productivity savings.



One such product already in existence is Delve, by Sidewalk Labs (part of Google), which launched in 2020 and is taking on the job of rapid feasibility study testing for future sites.  The simple and intuitive interface allows you to go straight to any site in the world and develop a numerical business case for a development, while carrying out design iteration automatically in 3D at the same time.



Image: Delve marketing image. Delve is a product that can generate and quantitatively appraise hundreds of ‘design options’ in minutes.

As a practice we have had a look at the product for feasibility study analysis and have, in general, been surprised to discover far more functionality than expected: it can complete 3D modelling tasks, option testing and accurate area reporting in absurdly short timeframes.  Once you have input your requirements, the model goes to work, unironically reporting that it is "looking for the perfect designs…" as it runs.  When the proposals land after about five minutes of processing, there may be more than a hundred presented, rank-ordered on quantifiable metrics such as development yield, neighbourhood walkability and sunlight potential.

Right now the product is most tailored to US sites where zoning requirements apply, so there is limited application to the more nuanced European systems of "design negotiation" such as the UK's, where sensitive context, Conservation Areas, Townscape and Visual Impact Analysis and Rights of Light all play a more pivotal role.  Currently the actual design quality of the proposals is, from an architectural perspective, naive in its understanding of how building types are best arranged and combined; they are more like a 3D representation of a spreadsheet.  But they do give you an instant sense of scale, massing, overall development potential and high-level site strategies.  One can imagine that once these models are trained on richer, better-labelled datasets of higher-quality building types, they will begin to present genuinely insightful and persuasive results that are less easily distinguished from a traditional architect's output in a feasibility study.



While we have found the quality of outputs fairly crude so far, Delve does achieve an end-to-end feasibility study process: a framework to reconcile a complex (and sometimes conflicting) series of priorities and requirements for a site, conforming to constraints while giving a live view of the business case.  The rank-order system of results feels very analytical, but perhaps architects have much to learn from presenting their qualitative work in this client-friendly and quantitative way.  We should expect the design quality of proposals from products like Delve to improve rapidly in the coming months and years.



Alongside Delve there are a host of other software products emerging in the Generative Design space, each targeting subtly different markets.  Many of the products are being grouped as “AI” but are really leveraging “parametric” software packages like Grasshopper on the backend and providing users with a simple and elegant UI to substantially improve productivity of design iteration and option testing.  Whether these products all currently deserve the label of “AI” requires a deeper dive, but if they don’t now they surely will soon.



Image: Screenshot from TestFit, a residential-focused US product that generates full building plans from minimal user inputs, conforming exactly to area and accommodation briefs which can be updated dynamically.

These parametric generative design tools apply mathematical constraints and rules to vector lines and shapes produced by the user, allowing enormous complexity to be "generated" instantly from just a few lines and inputs.  Notable names in this space include Spacemaker AI, TestFit and Giraffe Technology, each of which can simulate residential feasibility layouts for whole buildings in a couple of clicks.  Any architect who has planned residential buildings knows that the apartment type mix (for example 25% 1 bed, 65% 2 bed, 10% 3 bed) is critical to get right early for your most repeated floor plates, and that making a change is a major headache; with these tools it is as simple as moving a percentage slider up and down, and the model recalculates the plan and instantly produces a full areas schedule alongside it.
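
The underlying arithmetic of that slider is simple, which is exactly why it automates so well.  Here is a toy version in Python; the unit areas and example mixes are assumptions for illustration, not values taken from any of the products mentioned.

```python
# Toy illustration of the "percentage slider": recalculating a unit mix and
# areas schedule for a repeated floor plate. Unit areas and mixes are assumed.
UNIT_AREAS_M2 = {"1 bed": 50, "2 bed": 70, "3 bed": 93}   # assumed net areas

def areas_schedule(total_units: int, mix: dict[str, float]) -> dict[str, dict]:
    """Turn a unit mix (shares summing to 1.0) into unit counts and net areas."""
    assert abs(sum(mix.values()) - 1.0) < 1e-6, "mix shares must sum to 100%"
    schedule = {}
    for unit_type, share in mix.items():
        count = round(total_units * share)
        schedule[unit_type] = {
            "count": count,
            "net_area_m2": count * UNIT_AREAS_M2[unit_type],
        }
    return schedule

# Move the "slider": change the mix and the schedule updates instantly.
print(areas_schedule(120, {"1 bed": 0.25, "2 bed": 0.65, "3 bed": 0.10}))
print(areas_schedule(120, {"1 bed": 0.35, "2 bed": 0.55, "3 bed": 0.10}))
```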


As with the previous Delve example, we could pore over the actual quality and merit of the layouts and building designs that are produced, but the end-to-end software interface is in place, so they just need to get better at designing.  To use an analogy, we might consider all of these tools to be in the early stages of Architecture School right now, in need of some more learning about historical precedent and some pivotal design critiques in order to improve.



Image: Stable Diffusion artwork based upon "Wanderer above the Sea of Fog", a painting by Caspar David Friedrich

We stand right at the start of a new era for architecture, and we are presented with an opportunity to fine-tune and perhaps even transform how we go about our work.  There is potential to remove much of the drudgery of repetitive tasks and optioneering; the possibility of a technical "renaissance" exists, one that could reduce waste and focus instead on vision, quality and value.  I hope architects will lean into this revolution, get themselves organised and get ready to work together (as a whole profession) to lead the change, rather than find themselves subjected to it.

