Ever a divider of opinion, the computer posed questions upon its entrance into the architectural profession that are now more current and pressing than they have ever been
‘What does a playboy of the Western world do with a new toy?’ This question was posed over 50 years ago at the first conference organised by the Boston Architectural Center (BAC), titled Architecture and the Computer. The toy was the computer, and the playboy was the ‘creative’ architect: could the ‘Mother of All Arts’ really benefit from this bulky, expensive and seemingly incomprehensible machine?
In the mid-1960s, for players from the periphery of the profession − such as engineers, planners or contractors − the use of the computer was already becoming an integral part of day-to-day business. Yet for the ‘creative’ discipline of architecture, discourse on the topic was still in its infancy, and experiments in computation were being conducted in isolation at university enclaves and a few affluent corporate offices like SOM. This was a situation that the organisers of the conference intended to rectify.
Initially intended as a modest part of BAC’s ‘program in local Practitioner Education’, Architecture and the Computer soon grew beyond its intended scope. Dozens of early registrations, coming from practically all major architectural institutions and offices, made clear that the topic had struck a chord. Interest came from Master’s students (like a young Emilio Ambasz from Princeton) all the way to large offices (such as Doxiadis Associates). IBM donated $1,500 and the proceedings were published with a subsequent grant from the Graham Foundation.
On 5 December 1964, more than 600 participants gathered at the Sheraton-Plaza in Boston. As well as lectures and discussions taking place in the grand ballroom, the mirrored lounge space of the hotel’s Venetian Room accommodated a series of quirky exhibits for the entertainment of participants. These included visual simulations by the Airplane Division of Boeing, a kinescope recording of Lincoln Laboratory’s and MIT’s progress in computer graphics, and a computer-generated movie by Bell Laboratories on the production of computer-animated films. Most impressive, perhaps, was a live demonstration of MIT’s remotely controlled IBM 7094 computer, situated across the Charles River in Cambridge. The presentation of the engineering program called STRESS (Structural Engineering Systems Solver) was instantly relayed to the crowds by CCTV.
Guests included not only architects and planners (for example, Serge Chermayeff from Yale and François Vigier from Harvard), but also architectural historians (MIT’s Henry A Millon), computer engineers, mechanical engineers, electrical engineers (Marvin Minsky, then co-director of the MIT Artificial Intelligence Lab and member of Project MAC), cartographers (Northwestern University’s Howard Fisher, founder-to-be of the Laboratory for Computer Graphics at Harvard), and representatives from large American corporations such as IBM (George Swindle of the Programming Systems Division) and Westinghouse (Lisle G Russell, mechanical engineer at the Systems Engineering and Development Group).
Opinions were more muddled than divided, ranging from enthusiasm to sheer terror. Middle-aged, self-professed non-experts appeared the most panicked of all, pondering the fate of the ‘traditional role of the architect[s]’ who would be forced to ‘take a back seat’ should they choose to use the ‘tools of the devil, the enemies of humanism, art, diversity and beauty’.1 Of all invited speakers, a remarkably open-minded 81-year-old Walter Gropius pointedly addressed such concerns in a statement, titled ‘Computers for Architectural Design?’, read for him by Millon. He sarcastically observed: ‘Some people scorn violently the idea that lifeless machines could be of any advantage to inventive thinking. […] I believe that by this attitude the baby is cast away with the wash.’2
For all the confusion, there was a clear division into two camps: one of technophiles enamoured with the computer as a tool and its immediacy of application in the design process; the other of techno-intellects invested in a long-term project towards a critical theorisation of computation. For members of the latter group, such as Minsky, computation in the creative fields would only reach new, unknown limits through Artificial Intelligence (AI). Responding to discussions on the practicality of the computer in spawning plans or perspectives, he pleaded: ‘[L]et’s not worry about […] how computers are going to help us with small things. For in no more than 30 years, computers may be as intelligent, or more intelligent than people.’ As Vigier also noted, to merely utilise the computer as a quick but dumb drafting machine did not seem to be ‘really the best use of the animal’.3 The stakes were clearly higher than the conference organisers had anticipated.
In a similar vein, Christopher Alexander criticised those thoughtlessly absorbed with the application of the tool as an end in its own right. In his famous essay contributed to the proceedings, he cautioned: ‘anybody who asks “How can we apply the computer to architecture?” is dangerous, naive, and foolish […] because his preoccupation may actually prevent us from reaching that conceptual understanding, and from seeing problems as they really are.’ Articulated fifty years ago, Alexander’s polemical stance points to a sore spot of the discipline’s neurotic obsession with the use of new toys to this day: the necessity of simply thinking thoroughly through any given design ‘problem’ before attempting to mechanically compute it. ‘We do not wander about our houses, hammer and saw in hand, wondering where we can apply them.’4 Alexander’s advocacy of systems thinking, and the debatable aesthetic production that might stem from it, is still a matter of contention among architects. However, his main point in this early essay remains relevant: to fully explore the potential of any computational architecture one has to initially leave the computer aside.
Many of the innovations prophesied at Architecture and the Computer − like CAD, BIM, computer renderings or the use of robotics in fabrication − have long been fulfilled. Most have even become part of standard professional practice and education, to the point of banality. The days of time-sharing room-size mainframe machines or calculating ‘the amount of computation you get per dollar’ may be long gone. Nonetheless, the core questions about the instrumentality of the computer in the creative process remain largely unanswered. Is it possible to embed ‘the extra-sensory qualities of the architect’ in programming? What happened to the fantasy of computing problems ‘that are in essence ethical’ ones? How does one go past the notion of the computer as a ‘dumb’ tool? Most importantly perhaps, what are the prospects of an artificially intelligent architectural computation, and the relegation of truly independent judgement to the computer it presupposes? These questions are even more current and pressing today than they were half a century ago.
1. ‘Tentative Outline’, Boston Architectural College Archives, xi, 3, 21.
2. Walter Gropius, ‘Computers for Architectural Design?’, 1, Boston Architectural College Archives.
3. Architecture and the Computer, 45, 47.
4. Christopher Alexander, ‘A Much Asked Question about Computers and Design’, in Architecture and the Computer, 52-54.