digital morphogenesis
Undergraduate Dissertation
Acknowledgements
I would like to thank the following:
1. Prof. Malay Chatterjee for his co-ordination
2. Mr. Anand Bhatt for his guidance
3. Karthik for his support
4. Friends and family for inspiration
5. My computer for the warmth
6. The Lord for giving me all this.
I am dedicating this dissertation to my family.
“…Unfortunately, no one can be told what the Matrix is, you have to
see it for yourself…
You take the blue pill, the story ends, you wake up in your bed and
believe whatever you want to believe; you take the red pill, you stay in
wonderland, and I show you how deep the rabbit hole goes…
…Remember, all I am offering is the truth, nothing more.”
Contents
Abstract
1. Introduction
1.1 ‘Design’.
1.2 Externalisation.
1.3 Computer system
1.4 Algorithms
2. Why Algorithms and why computers?
3. What are we up to?
4. Till where?
5. Ouch…!!!
6. How are we doing it?
7. ∫Architecture d[algorithm] = [Algotecture]k + C
7.1 Algotecture.
7.2 Two sides of the circuit – The sixth sense
7.3 [Theories of] Design?
7.4 History of Algotecture
8. Genetic Algorithms
8.1 Who Designed the Hedgehog?
8.2 Natural Selection: The Logic.
8.3 Genetic Algorithm
8.4 Genetic Algorithm in Architecture
9. Research
9.1 Design of a high rise – Thesis project
9.2 Serpentine Pavilion
9.3 British Museum Great Court Roof.
10. The blue pill or the red one?
11. Appendix
12. Bibliography
List of illustrations
Figure 1 The left and the Right brain
Source: http://www.ritualsofhealing.com/Portals/1282/images
Figure 2 Lamborghini Murcielago
Source: http://amansworldonline.com/wp-content/gallery/reventon-jet-fighter
Figure 3 Traffic Signal
Source: http://en.wikipedia.org/wiki/File:StoplightMexico.jpg
Figure 4 Lamp Algorithm
Source: http://en.wikipedia.org/wiki/File:LampFlowchart.svg
Figure 5 Moore’s Law
Source: http://www.dai.ed.ac.uk/homes/cam/images/MoravecsLaw.jpg
Figure 6 Guggenheim Museum, Bilbao
Source: http://images.google.com/imgres?imgurl=http://serpentinegallery.org
Figure 7 Space allocation program outcome
Source: Algorithmic Architecture, Kostas Terzidis.
Figure 8 Fractal Graphic
Source: http://upload.wikimedia.org/wikipedia/commons/2/2e
Figure 9 Hedgehog
Source: http://colonos.files.wordpress.com/2008/10/435hedgehog.jpg
Figure 10 Charles Darwin
Source: Scientific American, January 2009 issue.
Figure 11 Design of a High Rise – Thesis Project, Karthik.D
Source: Karthik.D, Architect, India.
Figure 12 Parametric Experiments I
Source: Karthik.D, Architect, India.
Figure 13 Parametric Experiments II
Source: Karthik.D, Architect, India.
Figure 14 Ground floor plan of the high rise
Source: Karthik.D, Architect, India.
Figure 15 Serpentine Pavilion 2002 – Night view
Source: flickr.com
Figure 16 Algorithm – Serpentine Pavilion 2002.
Source: Digital Tectonics, Karthik.D, School of Planning and Architecture
Figure 17 Serpentine Pavilion 2002 – Day view
Source: flickr.com
Figure 18 British Museum Great Court Roof
Source: flickr.com
Figure 19 Parametric Analysis - British Museum Great Court Roof
Source: Digital Tectonics, Karthik.D
Figure 20 Stress Function – British Museum Great Court Roof
Source: Digital Tectonics, Karthik.D, School of Planning and Architecture
Figure 21 The Logic of Natural Selection
Source: Scientific American January 2009 issue.
Abstract
“…In the coming decades, we will see how we are becoming masters of intelligence; how
science will allow us to create and manipulate intelligence almost at will. We will be creating
‘intelligent machines’, moving from ‘machine intelligence’ which is happening now. And
ultimately we will redesign our own minds. Driving all this is the exponential growth of
computing power… a typical mobile phone of today can perform a billion calculations per
second, which is 300,000 times more than the IBM supercomputer of the 1970s, which was the
state-of-the-art technological development then. The exponential growth of computer power will
profoundly reshape human civilization… our world has already been made smarter than it was ten
years ago, and as computing power doubles every eighteen months, it is propelling us towards a
very different future… by 2020, intelligence will be everywhere… scientists call it ubiquitous
computing… already a large part of our lives, our society and our economy is run by machines with
specialized artificial intelligence… and due to the explosion of computing power, more and more
machines are designed to think for themselves…”1
This dissertation is about design, entirely about design – the process. In the next few
pages we will look at how it has undergone and is undergoing a paradigm shift towards what is
popularly known as emergent architecture, in this 21st century. This book will serve as both a
reference and a casual free-time read for those beginners who are keen on
understanding the basic principles behind architectural computation, which forms a huge chunk
of this new emerging field.
All this is a consequence of the exponential growth in the processing
speed of computers, enabling extremely complex processor-oriented calculations to be done at
mind-blowing speed. We are entering a new world of thought-processing systems, which, by
any means, should not be compared to the thought-processing system of the human mind,
since both work on extremely different platforms – they are complementary to each other, like
the two sides of a coin. And we will see how this system will affect architecture as a whole;
the cases of specific relevant projects will help the reader to correlate the theory to the real
world.
Before starting to read this essay, readers are advised to free up their minds and set aside
all preconceived ideas about design and computers. This will not only help in accepting facts for
analysis, but also help in avoiding frequent objections to statements made here, which
may turn out to mean something else by the end.
Design
“Design is directed toward human beings. To design is to solve human problems by identifying
them and executing the best solution.”
Ivan Chermayeff – Graphic designer
“Design can be art. Design can be aesthetics. Design is so simple, that's why it is so
complicated.”
Paul Rand – Graphic designer, US
“The management of constraints”
Dino Dini – Game developer
“As a verb, ‘to design’ refers to the process of originating and developing a plan for a product,
structure, system, or component with intention”
Wikipedia.com – The online encyclopedia
"Design is the human power to conceive, plan, and realize products that serve human beings in
the accomplishment of any individual or collective purpose."
Anonymous
"Design is virtually everything you see, and it's also almost everything you don't see."
David Fisher - architect
“Design is art optimized to meet objectives.”
Shimon Shmueli, Founder of Touch360
The very word ‘design’ is the first problem we must confront in this dissertation since it
is in everyday use and yet given quite specific and different meanings by particular groups of
people. We might begin by noting that ‘design’ is both a noun and a verb and can refer either to
the end product or to the process. This book is primarily about design as a process. We shall be
concerned with how that process works, what we understand about it and do not, and how it is
learned and performed by professionals and experts.
To some extent we can see design as a generic activity, and yet there appear to be real
differences between the end products created by designers in various domains. A structural
engineer may describe the process of calculating the dimensions of a beam in a building as
design. In truth such a process is almost entirely mechanical. You apply several mathematical
formulae and insert the appropriate values for various loads known to act on the beam and the
required size results. It is quite understandable that an engineer might use the word ‘design’
here since this process is quite different from the task of ‘analysis’, by which the loads are
properly determined. However, a fashion designer creating a new collection might be slightly
puzzled by the engineer’s use of the word ‘design’. The engineer’s process seems to us to be
relatively precise, systematic and even mechanical, whereas fashion design seems more
imaginative, unpredictable and spontaneous. The engineer knows more or less what is required
from the outset: in this case, a beam able to span the required
distance and carry the known loads. The fashion designer’s knowledge of what is required is
likely to be much vaguer. The collection should attract attention and sell well and probably
enhance the reputation of the design company.
Actually both these descriptions are to some extent caricatures since good engineering
requires considerable imagination and can often be unpredictable in its outcome, and good
fashion is unlikely to be achieved without considerable technical knowledge. Many forms of
design then, deal with both precise and vague ideas, call for systematic and chaotic thinking,
need both imaginative thought and mechanical calculation. However, a group of design fields
seems to lie near the middle of this spectrum of design activity. The three-dimensional and
environmental design fields like architecture require the designer to produce beautiful as well as
practically useful and well-functioning end products. In most cases realizing designs in these
fields needs both the left and the right brain.
Figure 1: The left and the right brain. Source: http://www.ritualsofhealing.com/Portals/1282/images
Consequently it follows that design is a process composed of two inherent
kinds of sub-processes – the creative, which requires the right brain, and the computational,
which uses the other side of it. While the computational processes are well defined and based
on proven logic, the creative process does not have a clear formula or logic behind its
execution. It is here that the so-called abstract, the intangible, the noble, the humanistic part
of the whole process begins. It is this part of the universe which has not been understood
properly in a scientific manner. Over the years, extensive research has been, and is being,
performed to unravel the mystery behind this part of human cognition.
The process of design, and especially the human process of architectural design, is
largely dependent on human cognition and thought. It is affected by various variables –
emotion, past experience, health, mood, etc., to name a few; because of this, the human process
has its own inherent merits and demerits. We are not going to discuss its merits
here. We are flipping to the other side to ponder upon the demerits of this process. The very
fact that it is dependent on these kinds of variables suggests that it is unreliable.2
The first step towards understanding this mysterious side of the human process of
architectural design is the process of externalization.
Externalization
Externalization means to put something outside of its original borders, especially to put
a human function outside of the human body. In a concrete sense, by taking notes, we can
externalize the function of memory which normally belongs in the brain.
In Freudian psychology, externalization is an unconscious defense mechanism, where an
individual "projects" his own internal characteristics onto the outside world, particularly onto
other people. For example, a patient who is overly argumentative might instead perceive others
as argumentative and himself as blameless.3
Externalization is a process in which an entity is defined completely, without even a
negligible amount of ambiguity. For example, consider this picture:
Figure 2: Lamborghini Murcielago. Source: http://amansworldonline.com/wp-content/gallery/reventon-jet-fighter
Two people comment on this picture:
Person A says
“Lamborghini is amazingly fast!”
Person B says
“361 km/h is the top speed of the Lamborghini Murcielago RGT!”
Analyzing these two statements, one may come to the conclusion that person B has
described what he wanted to say in a much more technical and unambiguous manner than
person A. As I put it, person B has externalized his thought process; A didn’t. This is what is
meant by the word externalization.
Most of the time, people like certain types of things, whether or not they are buildings,
but when you ask them why, they fumble. It is a very common issue which we may face in our
day-to-day lives. This is simply because the person is not able to externalize his thoughts so as
to make the other person understand the issue perfectly.
Summing it all up, externalization can be defined as a process in which information
within a system, numerical or otherwise, is defined externally or communicated to another
system completely, perfectly, without any uncertainty.
The term externalization is relevant here because this process is the first known
gateway by which the subjective, abstract, noble and humanistic part of the brain – that is, the
right brain – gets to see the other side, and vice versa.
Externalization can be in any definable concrete form – it depends on the end user of
the information. For another human, externalization can be in terms of the words of the
language which he/she speaks; for a neuron, it is in terms of the charge difference between the
chemicals that constitute it; for a cell phone, it might be the push of a specific button, or a touch
on a specific area of the touch screen. But transferring information in a well-defined
manner to a computer system requires the externalization to be in terms of a
language which the computer can understand, and each computer language, like English,
Mandarin, or any other human language, has a definite syntax for the arrangement of its
subcomponents, commonly known as the grammar of the language.
For understanding more about this, we first have to know more about a computer and
computer programs and how they work.
Computer system
A computer cannot be compared to a human mind, and vice versa, since both are
entirely different entities, processing information in entirely different ways.
A computer is a machine that manipulates data according to a list of instructions.
Computer programs are instructions for a computer. A computer requires programs to function.
Moreover, a computer program does not run unless its instructions are executed by a central
processor; however, a program may communicate an algorithm to people without running. 4
Figure 3: Traffic signal. Source: http://en.wikipedia.org/wiki/File:StoplightMexico.jpg
Suppose a computer is being employed to drive a traffic light. A simple stored program
might say:
1. Turn off all of the lights
2. Turn on the red light
3. Wait for sixty seconds
4. Turn off the red light
5. Turn on the green light
6. Wait for sixty seconds
7. Turn off the green light
8. Turn on the yellow light
9. Wait for two seconds
10. Turn off the yellow light
11. Jump to instruction number 1.
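As a minimal illustration, the same eleven instructions can be written as a short Python loop. This is only a sketch: the TrafficLight class below is a hypothetical stand-in for the hardware interface that actually switches the lamps, not part of any real controller.

import time

class TrafficLight:
    # hypothetical stand-in for the hardware that switches the lamps
    def clear(self):
        print("all lights off")
    def set_light(self, color):
        print(color + " light on")

def run(signal):
    while True:                      # 11. jump back to instruction number 1
        signal.clear()               # 1. turn off all of the lights
        signal.set_light("red")      # 2. turn on the red light
        time.sleep(60)               # 3. wait for sixty seconds
        signal.clear()               # 4. turn off the red light
        signal.set_light("green")    # 5. turn on the green light
        time.sleep(60)               # 6. wait for sixty seconds
        signal.clear()               # 7. turn off the green light
        signal.set_light("yellow")   # 8. turn on the yellow light
        time.sleep(2)                # 9. wait for two seconds
        # 10. the yellow light is turned off by clear() at the top of the loop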
Computers have two important characteristics:
1. They respond to a specific set of instructions in a well-defined manner.
2. They can execute a prerecorded set of instructions.
Computers work on the principle of duality – 0s and 1s.
They can understand only these two entities, represented in some form. This
representation is usually called storage (of data). Any complex information, whether it is a high-
definition video, or an amazingly detailed video game, or a high-resolution three-dimensional
render, or any other data, should first be converted to this simple format of the 0 and the 1.
This means that the operation which we intend to do using the computer should be
communicated to it in terms of 0s and 1s.
These 0s and 1s are nothing but the optical information on a compact disc, or the
magnetic information (the orientation of magnetized regions) on a hard disk drive, or the physical
presence of holes in a punched card, and so on; whatever the case may be, the computer understands only this – the
information represented with two distinct entities.
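As a small illustration of this point (the word and the encoding below are arbitrary choices, not anything from the text), the following snippet reduces the word "design" to the 0s and 1s of its ASCII encoding, which is all the machine ever stores.

text = "design"
# every character becomes one byte, and every byte becomes eight 0s and 1s
bits = " ".join(format(byte, "08b") for byte in text.encode("ascii"))
print(bits)  # 01100100 01100101 01110011 01101001 01100111 01101110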
Computers are not humans – they cannot understand subjective ideas which a human
mind can understand: love, humor, anger and so on; these qualities are the unique properties of
the human mind, often called human qualities or human feelings. A computer cannot
understand what love is, unless you define it as
The emotional state of the human mind when the brain consistently releases a certain
set of chemicals, including pheromones, dopamine, norepinephrine, and serotonin, which act in
a manner similar to amphetamines, stimulating the brain's pleasure center and leading to side
effects such as increased heart rate, loss of appetite and sleep, and an intense feeling of
excitement. (given everything is encoded into 1’s and 0’s – the word love is then said to be
externalized)
This discussion brings us to the conclusion that computers can only understand
rationalized ideas – either ideas that are rational on their own, or irrational ideas that have been
externalized in a rationalized form (which is done, basically, through an algorithm, however complex it may
be).
Algorithms
Contrary to common belief, the word algorithm is not Greek. Its origin is Arabic, derived
from the name of an 8th–9th century Persian mathematician, Al-Khwarizmi. An
algorithm is a procedure for addressing a problem in a finite number of steps using logical if-
then-else operations.5
An algorithm is a sequence of finite instructions, often used for calculation and data
processing. It is formally a type of effective method in which a list of well-defined instructions
for completing a task will, when given an initial state, proceed through a well-defined series of
successive states, eventually terminating in an end-state.
Algorithms are nothing but flow charts – well defined flow charts.
Figure 4: Lamp algorithm. Source: http://en.wikipedia.org/wiki/File:LampFlowchart.svg
The figure above shows a simple algorithm for dealing with a lamp which does not work.
Almost any kind of complex idea can be broken down into multiple simple steps with the help of
a flow chart – an algorithm.
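For the sake of illustration, a flowchart of this kind can be written directly as branching code. The sketch below assumes the familiar version of the lamp flowchart, with its two yes/no questions modelled as boolean inputs.

def diagnose_lamp(plugged_in, bulb_burned_out):
    # start state: the lamp doesn't work
    if not plugged_in:
        return "Plug in the lamp"
    if bulb_burned_out:
        return "Replace the bulb"
    return "Repair the lamp"

print(diagnose_lamp(plugged_in=True, bulb_burned_out=True))  # Replace the bulb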
Theoretically, as long as a problem can be defined in logical terms, a solution may be
produced that will address the problem’s demands. An algorithm is a linguistic expression of
the problem and as such it is composed of linguistic elements and operations arranged into
spelling, and grammatically and syntactically correct statements. The linguistic articulation
serves the purpose not only to describe the problem’s steps but also to communicate the
solution to another agent for further processing. In the world of computers, that agent is the
computer itself. An algorithm can be seen as a mediator between the human mind and the
computer’s processing power. This ability of an algorithm to serve as a translator can be
interpreted as bi-directional: either as a means of dictating to the computer how to go about
solving the problem, or as a reflection of a human thought into the form of an algorithm.
Traditionally, algorithms were used as mathematical or logical mechanisms for resolving
practical problems. With the invention of the computer, algorithms became frameworks for
implementing problems to be carried out by computers. While the connotation associated with
the action of giving instructions, commands, or directions is subconsciously assumed to be
aimed at a sentient worker, the computer, despite its once human identity, is not a human
being and therefore should not be treated as such. (Perhaps it would be more accurate if a new
name was given that would reflect more accurately its true potential, such as portal,
transverser, or, hyperion X.)
So it turns out that an algorithm is a rationalized version of human thinking.
As such, it may be characterized as being precise, definite, and logical, but at the same time it may
also lack certain unique qualities of human expression such as vagueness, ambiguity, or
ambivalence.
Design is considered an extremely complex human activity guided by several empirical
and non-empirical parameters. Now that we have understood what design, an algorithm, and a
computer are, let us proceed further by writing down three important statements which can be
derived from the above discussion:
Design is largely guided by human thought.
Algorithm is a rationalized version of human thinking.
A computer can understand rationalized ideas only.
Notes and references:
1. Michio Kaku, Intelligence revolution (documentary), 2007, BBC.
2. Bryan Lawson, How Designers Think, Architectural Press, Burlington, 2005, pg 9.
3. Wikipedia.org, accessed on October 9, 2008.
4. Ibid.
5. Kostas Terzidis, Algorithmic Architecture, Architectural Press, Burlington, 2006,
prologue.
Why algorithms and why computers?
Figure 5: Moore’s Law. Source: http://www.dai.ed.ac.uk/homes/cam/images/MoravecsLaw.jpg
Moore’s Law, in its popular form: the speed of the computer doubles, and its size and cost halve, roughly every 18
months.
The current era marks a distinct paradigm shift in the field of architectural design – the
largest since the Renaissance – which WILL change how people, and especially
architects, recognize the process of design, and consequently the word design itself.
The process of architectural design itself has undergone change whenever a new
technology was introduced to the architect. It is hard to predict the impact of information
technology on any discipline, especially one like architecture, because technology tends to
create its own uses and often changes established methods and practices in the course of its
adoption. Yet understanding the principles on which architectural design and computing are
founded is a necessary first step in bringing about these changes. Only then will the
development of methods and tools progress in a direction that can truly help the discipline and
the practice of architecture, and only then can their relevance, impacts, and desirability for the
profession of architecture and the environments it creates be fully understood.
For over a decade there has been a profound debate over the issue of artificial
intelligence contributing directly to the process of architectural design, which was otherwise
considered to be a purely human-dependent process. But with the advent of extreme
computational power, more and more people are accepting the fact that
the computer can be an active partner in the real-time architectural design process.
This dissertation will not answer any question; rather it will pose questions to the
readers, to think and ponder about – is design still a purely human activity? Are we going to
treat computers as dumb machines which tell us what two and four add up to? Should the 21st-
century architect be a programmer too?
Thinking about these questions will not only help readers understand their own
inherent limitations in a more rationalized and clear-cut manner, but will also dispel the myth
surrounding the use of computers as active partners in design.
Finally, after all, the aim is just to make better buildings, a better environment, and a
better place to live in.
What are we up to?
To study what computer algorithms are, how they work, and how they can be used to
make the design process more efficient, and thereby the product as well.
To study at least one way of algorithmic design, namely the use of genetic algorithms in
design, and analyze its features.
To study and analyze buildings which were algorithmically designed.
Till where?
This dissertation aims at understanding and questioning the way in which our
environment is being designed today, viz. the human process of architectural design, and how
the efficient use of computer processing, enabled by algorithms and commonly known as the
algorithmic design process, will allow architects to make that process more productive and
healthy, in a philosophical manner. The underlying potential of this kind of emerging field will be
discussed in a broad perspective.
Since the range of topics covered in this dissertation is broad and involves a
systematic understanding of the basics of various related fields, viz. architecture, computers,
mathematics, and evolutionary biology, this will be a book for those dedicated beginners who
intend to make themselves masters of emergent architectural computing,
for the betterment of themselves, the environment and, consequently, architecture itself.
Also, since this book focuses mainly on beginners, not much space will be devoted to
computer programming or to complex mathematics, statistics and biology. Rather, all these will be
explained to the reader in a simplified manner, with suitable examples and analogies at specific
points. There is a section devoted to advanced readers, where external resources useful
in this field are given in an organized format.
But nonetheless, the reader is expected to have a broad idea of the following:
a. Computers – programs and hardware
b. Mathematics and statistics
c. Evolutionary biology
A few examples of buildings which were constructed by employing algorithmic design
processes are also studied, towards the end of the literature.
Ouch…!!!
As this dissertation is based on an emerging field, case studies will be virtual, owing to the
absence of projects of this kind in the country.
Although this dissertation will focus on algorithmic design processes, not much will be
discussed regarding the actual programming part of the design.
A very large portion of the information comes from the world wide web, which might be
considered a limitation by some people, given questions over the authenticity of
information available on the internet. But as far as possible, information has been collected
from well-known, registered websites, which are free from data theft or unauthorized
copying.
Since many ideas in this book are personal ideas of the author himself, several
controversies may arise and some readers may not agree with certain ideas written in this
book. If such is the case, readers should feel free to write to [email protected] to
carry forward the debate, for the purpose of knowledge.
How are we doing it?
This book is mainly divided into three categories:
Algotecture1
Genetic algorithm in architectural design.
Case studies.
The first part will focus mainly on the emerging architectural culture which employs
computers as active design partners through the use of algorithms, its history, and its
temporary limitations.
The second will focus mainly on the use of genetic algorithms in architectural design, as a
specific case within Algotecture.
The third will be an analysis of some of the buildings which were algorithmically
designed, whether genetically or otherwise.
∫Architecture d[algorithm] = [Algotecture]k + C
Algotecture
Algotecture1 is a term coined here to denote the implementation of algorithms in
architecture. Before jumping into the actual subject, I would like to make one thing clear.
An algorithm is not CAD or computer graphics – the former is an independent entity which can
operate without a computer, whereas the latter are essentially features
which can run only on a computer. This is important in the sense that this differentiation
separates the process which solves the problem from the machine which carries it out.
The articulation of such a process requires the articulation of a strategy for solving
problems of two kinds:
1. Target can be defined
2. Target cannot be defined
For the second kind we slightly change the phrase ‘problem solving’ to ‘problem
addressing’, since solving a problem whose solution cannot be defined does not make sense.
We will understand this better in the coming pages.
Within the world of computation, solutions can be found [either addressed or solved]
for almost any problem of any magnitude of complexity, which justifies the use of a
computer for the task – e.g. the structural analysis of a building information model.
Yet there are some problems whose level of complexity, uncertainty, and fluidity
requires a harmonic relationship between the power of the human mind and the enormous,
ever increasing calculating power of the computer. Such a harmonic relationship can be
achievable only by the use of algorithmic strategies where the human mind communicates with
the computer for the purpose of addressing a problem.
In design, and especially in architecture, problems are not simple enough to be treated like
just another addition performed between two integers. There is something more to them, in that
the fluidity in defining both the problem and its solution is greater in design. It follows
that the algorithm is not a trend, not a cool word, not an obscure programmer’s conspiracy; it is
a way of thinking – and because of its power to translate human thoughts to the massive power
of the computer, it allows human thoughts to extend beyond their limitations.
In design, algorithms can be used to solve, organize, or explore problems with increased
visual or organizational complexity. In its simplest form, a computational algorithm uses
numerical methods to address problems.
Noting our first point, that algorithms are independent of the agent which solves the
problem, it follows that historically algorithms have been used extensively – an algorithm is
nothing but an instruction, a rule or a command, and architecture without these features is
meaningless. The realization of a design process would not have been possible without the use of
algorithms.
In the last two decades, architecture has changed from a manually driven tool based
tectonic world to a computer driven form based design and global practice. This
transformation, while impressive, has not reached its full potential because of the following
reasons
1. The lack of computational education of architects
2. The plethora of confusing literature on digital design
And there are hardly any bright examples of computers being used to their fullest potential as design
tools – not to mention the NURBS-based formal mongering by prominent avant-garde practices
such as Gehry, Morphosis and Zaha Hadid.2
Figure 6: Guggenheim Museum, Bilbao. Source: http://images.google.com/imgres?imgurl=http://www.serpentinegallery.org/
Ignoring its economic impact, the Guggenheim Museum in Bilbao was a perfect example
of how bad architecture can be.
Two sides of the circuit – The sixth sense
Algorithmic logic is about the articulation of thoughts and a vague struggle to explore
the possibilities of existential emergence. When composing an algorithm for a computer to
understand, one is closely involved with the syntax and grammar which the machine
understands and which are closely tied to its features. Unlike human languages, which
depend on the communicative power between humans, an algorithmic language depends
upon the communicative power between the brain and the computer. Such a dependence is
not superior, inferior or even equivalent, but rather complementary – computers
are complementary to the human mind. Like the five senses, the computer has become a
sixth sense for the human mind, by which thoughts and actions which were considered
unimaginable a few years ago are becoming reality today.
The true power of the algorithm lies in the synergetic relationship and the mutual effort
of both the human mind and the computer at once, in addressing problems. Both are
incomparable.
The computer is not a human mind. It is not a human designer. It is rather a counterpart
to human imagination, a source of ideas, and a portal into another world new to the human
mind.3
[Theories of] Design?
To identify the problem of design in general, and of architectural design in particular, it
is necessary to describe and understand the process of design. While many definitions and
models of design exist, most agree that “design is a process of inventing physical things which
display new physical order, organization, form, in response to function”. However, since no
formula or predetermined steps exist which can translate form and function into a new,
internally consistent physical entity, design has been held to be an art rather than a science. It is
considered to be an iterative, “trial-and-error” process that relies heavily on knowledge,
experience, and intuition.
Black box theory of design:
Traditionally, intuition is a basis of many design theories, often referred to as “black
box” theories. According to them, design, as well as its evaluation, tends to be highly subjective.
While such a position relieves the designers from explaining, justifying, or rationalizing their
decisions and actions, it also enables the designer and a circle of critics to exercise authoritative
power. The problem with this is not necessarily in the lack of objective criteria but rather in the
lack of rational consistency. If design is to be studied as a process, then a series of reasonable,
justifiable, and consistent steps should be established. The presence of intuition as a source of
inspiration, decision, or action is considered arbitrary, obscure, and, as such, “black.”
Problem solving theory of design:
In contrast, another set of theories defines the design process as a problem-solving
process. According to the latter, design can be conceived as a systematic, finite, and rational
activity. As defined by researchers over the past 40 years, for every problem a solution space
exists, that is, a domain that includes all the possible solutions to a problem. Problem-solving
then can be characterized as a process of searching through alternative solutions in this space
to discover one or several which meet certain goals and may, therefore, be considered solution
states. Alternatively, a problem space does not always necessitate the identification of a
solution as a target, but instead may involve simply addressing the problem for possible
alternative solutions that are not known in advance. In many cases, the solution to a design
problem may deviate from the original objectives.
History of Algotecture
With the advent and popularization of computers in the early 1960s, the need for
rationality in the design process began to emerge. The idea was to define the process
of design as an abstract picture which represents it [the process] as a function of the contextual
demands, i.e. an algorithm.
Since the problem presented in this form was extremely complex and inappropriate for
the computers of that age, further research was frozen. However, similar processes were used for
simple tasks like preparing the zoning diagram for an architectural project or preparing the
space allocation program for it, like the one shown in the figure.
Figure 7: Space allocation program outcome. Source: Algorithmic Architecture, Kostas Terzidis
Yet many of these lacked the aesthetic quality which was produced by a human mind
working single-handedly.
These problems, as well as the practical needs of architectural offices, led to changes in
the approach. Rather than competing with, emulating, or replacing designers, the approach in
the 1970s was predicated on the belief that computers should assist, complement or augment the
design process. The machine was introduced as an aid to instruction, as a mediator for the goals
and aspirations of the architects. The computer could communicate with architects by
accepting information, manipulating it, and providing useful output. In addition to synthesizing
form, computers are also able to accept and process non-geometric information about form.
Therefore, it is necessary for architectural design languages to be invented to describe
operations on building databases. One pioneering effort in this area is GLIDE4, a language which
allowed the user to assemble buildings. Another approach in the direction of computer-
augmented architectural design was the manipulation of architectural forms according to rules.
Basic structural and functional elements were assembled to make volumes (elements of
composition) which, in turn, were assembled to make buildings. All elements were stored in the
computer’s memory in symbolic form, and the user operated on them by manipulating symbols
in accordance with rules derived through the classic academic tradition. Design was then being
considered more of a systematic and rational activity. Since then, many of the experimental and
empirical rules of design are being explored in various form based and function based design
experiments. Form-based design is viewed as an activity, which entails invention and
exploration of new forms and their relations. Various methods of analysis have been employed
in the search for new forms: formal analysis involves the investigation of the properties of an
architectural subject. Compositional principles, geometrical attributes, and morphological
properties are extracted from figural appearances of an object. In contrast, structural analysis
deals with the derivation of the motivations and propensities which are implicit within form and
which may be used to distinguish the difference between what is and what appears to be.
One approach to form-based design is that of shape grammars5. They were developed
to carry out spatial computations visually and are used to generate designs based on
production rules. A shape grammar consists of rules and an initial shape. There are two types of
shape grammars. An interesting variation of shape grammars is that of fractal generative
systems. Based on a scheme, formulated by the German mathematician Von Koch, a fractal
process consists of an initial shape (the base) and one or more generators. From a practical
point of view, the generator is a production rule: each and every line segment of the base is
replaced by the shape of the generator. A fractal generated graphic is shown below:
Figure 8: Fractal graphic. Source: http://upload.wikimedia.org/wikipedia/commons/2/2e
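As a rough sketch of this replacement idea (an assumption of this text, not taken from the sources above), the Koch generator can be written as a string-rewriting rule in which 'F' stands for a line segment and '+'/'-' for turns; each generation replaces every segment of the base with the generator.

def koch(base="F", generations=3):
    rule = {"F": "F+F--F+F"}   # the generator: four segments with 60-degree turns
    shape = base
    for _ in range(generations):
        # production rule: every line segment is replaced by the generator
        shape = "".join(rule.get(symbol, symbol) for symbol in shape)
    return shape

print(koch(generations=2))  # F+F--F+F+F+F--F+F--F+F--F+F+F+F--F+F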
Paradoxical as it may appear, humans today have become capable of exceeding their
own intellect. Through the use of intricate algorithms, complex computations, and advanced
computer systems, designers are able to extend their thoughts into a once unknown and
unimaginable world of complexity. Yet, the inability of the human mind to single-handedly
grasp, explain, or predict artificial complexity is caused mainly by quantitative constraints, that
is, by the amount of information or the time it takes to compute it, and not necessarily by the
intellectual ability of humans to learn, infer, or reason about such complexities. Both architects
and engineers argue for the deployment of computational strategies for addressing, resolving,
and satisfying complicated design requirements. These strategies result from a logic, which is
based on the premise that systematic, methodical, and rational patterns of thought are capable
of resolving almost any design problem. While this assumption may be true for well-defined
problems, most design problems are not always clearly defined. In fact, the notion of design as
an abstract, ambiguous, indefinite, and unpredictable intellectual phenomenon is quite attuned
to the very nature of the definition, or perhaps lack of a single definition, of design. Yet, these
very ambiguous qualities – amphiboly, indefiniteness, vagueness,
equivocation, ambivalence, or coexistence – serve as patterns, metaphors, and encapsulations
that facilitate detecting, understanding, and addressing complex notions. The most
paradigmatic example of this practice is the case of architect Frank Gehry. In his office, design
solutions are not sought through methodical computer-aided design methods but rather by the
use of encapsulated symbolic schemes, such as metaphors, allegories, or analogies. The design
teams spend countless hours of thought, modeling, iterative adjustment, and redesign based
on the metaphor of a crinkled piece of paper or an ambiguous napkin sketch. Complexity
emerges not as a sum of the parts but rather as a reference to a model that serves the purpose
of a metaphor. Rather than using direct, explicit, or unequivocal terms to communicate,
designers often use instead ambiguous, tacit, or metaphorical means. For instance, designers
often use non-verbal means of communication such as sketches, drawings, analogies,
expressions, gestures, or metaphors. What makes verbal communication so problematic for
creative people is that it is too literal, leaving little, if any, ground for interpretation. It assumes
that for every notion or idea there is a word or a phrase to describe it, but that may not be the
case for those yet to be defined design concepts. In contrast, implicit and tacit information
suggests much more than their spoken counterparts.
In short, originally the role of computers in architecture was to replicate human
endeavors and to take the place of humans in the design process [1960s]. Later the role shifted
to create systems that would be intelligent assistants to designers, relieving them from the
need to perform the more trivial tasks and augmenting their decision-making capabilities
[1970s, 80s]. Today, the roles of computers vary from drafting and modeling to form-based
processing of architectural information. While the future of computers appears to include a
variety of possible roles, it is worth exploring these roles in the context provided by the
question: “Who designs?” If one takes the position that designing is not exclusively a human
activity and that ideas exist independently of human beings, then it would be possible to design
a computational mechanism which would associate those ideas.6
Notes and references:
1. Kostas Terzidis, Algorithmic Architecture, Architectural Press, Burlington, 2006, pg 37.
2. Ibid, pg 40.
3. Ibid, pg 42.
4. Yehuda E. Kalay, Principles of Computer-Aided Design: Computability of Design, pg 79.
5. Terry Knight, Report for the NSF/MIT Workshop on Shape Computation, School of
Architecture and Planning, MIT, April 1999
6. Kostas Terzidis, Algorithmic Architecture, Architectural Press, Burlington, 2006, pg 52.
Figure 9: Hedgehog. Source: http://colonos.files.wordpress.com/2008/10/435hedgehog.jpg
"Small enough to fit in your hands but too prickly to hold" is a good description of the hedgehog. Though small, it is by no means defenseless. Thousands of stiff, sharp spines-harder and sharper than those of a porcupine-cover the animal's back and sides, like a pincushion filled with needles. Even though spines, or quills, provide the hedgehog with effective protection, the animal's most striking characteristic is its practice of curling up into a tight ball, with its spines sticking out in all directions. When the hedgehog rolls up, a special, highly developed circular muscle that runs along the sides of the body and across the rump and neck contracts and forms a "bag" into which the body, head and legs are folded. The hedgehog curls up if disturbed or frightened-only the strongest predators, such as the badger, can pry it open. It also sleeps in this position, so is rarely caught unprotected.
Who designed the hedgehog?
Figure 10: Charles Darwin. Source: Scientific American, January 2009 issue.
When the 26-year-old Charles Darwin sailed into the Galápagos Islands in 1835 onboard
the HMS Beagle, he took little notice of a collection of birds that are now intimately associated
with his name. The naturalist, in fact, misclassified as grosbeaks some of the birds that are now
known as Darwin’s finches. After Darwin returned to England, ornithologist and artist John
Gould began to make illustrations of a group of preserved bird specimens brought back in the
Beagle’s hold, and the artist recognized them all to be different species of finches. From Gould’s
work, Darwin, the self-taught naturalist, came to understand how the finches’ beak size must
have changed over the generations to accommodate differences in the size of seeds or insects
consumed on the various islands. “Seeing this gradation and diversity of structure in one small,
intimately related group of birds, one might really fancy that from an original paucity of birds in
this archipelago, one species had been taken and modified for different ends,” he noted in The
Voyage of The Beagle, published after his return in 1839. Twenty years later Darwin would
translate his understanding of finch adaptation to conditions on different islands into a fully
formed theory of evolution, one emphasizing the power of natural selection to ensure that
more favorable traits endure in successive generations. Darwin’s theory, core features of which
have withstood critical scrutiny from scientific and religious critics, constituted only the starting
point for an endlessly rich set of research questions that continue to inspire present-day
scientists.
Darwin’s theory represents a foundational pillar of modern science that stands
alongside relativity, quantum mechanics and other vital support structures. Just as Copernicus
cast the earth out from the center of the universe, the Darwinian universe displaced humans as
the epicenter of the natural world. Natural selection accounts for what evolutionary biologist
Francisco J. Ayala of the University of California, Irvine, has called “design without a designer,” a
term that parries the still vigorous efforts by some theologians to slight the theory of evolution.
“Darwin completed the Copernican Revolution by drawing out for biology the notion of nature
as a lawful system of matter in motion that human reason can explain without recourse to
supernatural agencies,” Ayala wrote in 2007. Some kinds of organisms survive better in certain
conditions than others do; such organisms leave more progeny and so become more common
with time. The environment thus “selects” those organisms best adapted to current conditions.
If the environmental setting changes, organisms that happen to possess the most adaptive
characteristics for those new conditions will come to predominate. Darwinism was
revolutionary not because it made arcane claims about biology but because it suggested that
nature’s underlying logic might be surprisingly simple.
Natural Selection: The logic
Figure 21: The logic of natural selection. Source: Scientific American, January 2009 issue.
The best way to appreciate evolution by natural selection is to consider organisms
whose life cycle is short enough that many generations can be observed. Some bacteria can
reproduce themselves every half an hour, so imagine a population of bacteria made up of two
genetic types that are initially present in equal numbers. Assume, moreover, that both types
breed true: type 1 bacteria produce only type 1 offspring, and type 2 bacteria produce only
type 2s. Now suppose the environment suddenly changes: an antibiotic is introduced to which
type 1s are resistant but to which type 2s are not. In the new environment, type 1s are fitter—
that is, better adapted—than type 2s: they survive and so reproduce more often than type 2s
do. The result is that type 1s produce more offspring than type 2s do. “Fitness,” as used in
evolutionary biology, is a technical term for this idea: it is the probability of surviving or
reproducing in a given environment. The outcome of this selection process, repeated
numberless times in different contexts, is what we all see in nature: plants and animals (and
bacteria) that fit their environments in intricate ways. Evolutionary geneticists can flesh out the
preceding argument in much richer biological detail. We know, for instance, that genetic types
originate in mutations of DNA—random changes in the sequence of nucleotides (a string made
up of the letters A, G, C and T) that constitutes the “language” of the genome. We also know a
good deal about the rate at which a common kind of mutation—the change of one letter of
DNA to another—appears: each nucleotide in each gamete in each generation has about one
chance in a billion of mutating to another nucleotide. Most important, we know something
about the effects of mutations on fitness. The overwhelming majority of random mutations are
harmful—that is, they reduce fitness; only a tiny minority are beneficial, increasing fitness.
Most mutations are bad for the same reason that most typos in computer code are bad: in
finely tuned systems, random tweaks are far more likely to disrupt function than to improve it.
Adaptive evolution is therefore a two-step process, with a strict division of labor between
mutation and selection. In each generation, mutation brings new genetic variants into
populations. Natural selection then screens them: the rigors of the environment reduce the
frequency of “bad” (relatively unfit) variants and increase the frequency of “good” (relatively
fit) ones. Population geneticists have also provided insight into natural selection by describing it
mathematically. For example, geneticists have shown that the fitter a given type is within a
population, the more rapidly it will increase in frequency; indeed, one can calculate just how
quickly the increase will occur. Population geneticists have also discovered the surprising fact
that natural selection has unimaginably keen “eyes,” which can detect astonishingly small
differences in fitness among genetic types. In a population of a million individuals, natural
selection can operate on fitness differences as small as one part in a million. One remarkable
feature of the argument for natural selection is that its logic seems valid for any level of
biological entity—from gene to species. Biologists since Darwin, of course, have considered
differences in fitness between individual organisms, but in principle natural selection could act
on differences in survival or reproduction between other entities.1
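The two-type bacterial example above is simple enough to simulate directly. The toy sketch below (the fitness numbers, population size and generation count are invented purely for illustration) treats fitness as the probability of surviving to reproduce in each generation, and shows the resistant type coming to dominate.

import random

def simulate(generations=20, pop_size=1000):
    fitness = {"type1": 0.9, "type2": 0.3}   # type 1 is resistant to the antibiotic
    population = ["type1"] * (pop_size // 2) + ["type2"] * (pop_size // 2)
    for _ in range(generations):
        # only survivors reproduce, and both types breed true
        survivors = [b for b in population if random.random() < fitness[b]]
        # resample to a constant population size, as limited resources would enforce
        population = random.choices(survivors, k=pop_size)
    return population.count("type1") / pop_size

print(simulate())   # approaches 1.0: the fitter type takes over the population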
Given this basic introduction to natural selection, this is exactly the process which we
are going to replicate – not to breed hedgehogs, but buildings. It may seem a bit weird and
fictional, but it is a fact.
Let’s make it simple. This is the analogy:
Architectural Design :: Evolutionary Design
Buildings :: Organisms
Abstract buildings :: Abstract vertebrates
Design :: Gene
Computer :: Nature
Architect :: God
Genetic algorithm
A genetic algorithm is a search technique used in computing [in our case,
architectural computing] to find exact or approximate solutions to optimization problems. GAs
are a particular class of evolutionary computation inspired by the process of biological evolution.
A genetic algorithm needs two things to be defined:
1. Genetic representation of the solution domain:
This means that the final parts of the product should be defined in terms of the
variables which are used in the algorithm. For example, if the solution domain is a set of
cuboids, then, it can be defined by a minimum of three variables, defined by an array of bits.
2. Fitness function:
The fitness function is analogous to a filter in the real world. The fitness function filters out
solutions which are not fit; that is, if a random solution generated by the process does not
conform to the standards specified in the fitness function, then that particular solution is omitted
from the next iteration of the process. For example, if the fitness function requires a height of
more than 2.1 metres, then all cuboids whose height is less than this will be filtered
out and omitted from the next generation.
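A minimal sketch of these two ingredients, using the cuboid example from the text (the numeric ranges and the area-based score are assumptions added here for illustration):

import random

def random_cuboid():
    # genetic representation: three variables [length, width, height] in metres
    return [random.uniform(1.0, 10.0), random.uniform(1.0, 10.0), random.uniform(1.0, 5.0)]

def fitness(cuboid):
    length, width, height = cuboid
    if height < 2.1:          # the constraint from the text: such cuboids are filtered out
        return 0.0
    return length * width     # assumed score: reward a larger usable floor area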
The process [of using genetic algorithms for finding approximate solutions to
optimization problems] has four basic steps:
1. Initialization
2. Selection
3. Reproduction
4. Termination
Initialization:
Initially many individual solutions are randomly generated to form an initial population.
The population size depends on the nature of the problem, but typically contains several
hundreds or thousands of possible solutions. Traditionally, the population is generated
randomly, covering the entire range of possible solutions (the search space). Occasionally, the
solutions may be "seeded" in areas where optimal solutions are likely to be found.
Selection:
During each successive generation, a proportion of the existing population is selected to
breed a new generation. Individual solutions are selected through a fitness-based process,
where fitter solutions (as measured by a fitness function) are typically more likely to be
selected. Certain selection methods rate the fitness of each solution and preferentially select
the best solutions. Other methods rate only a random sample of the population, as this process
may be very time-consuming.
Most selection methods are stochastic and designed so that a small proportion of less fit
solutions are also selected. This helps keep the diversity of the population large, preventing
premature convergence on poor solutions. Popular and well-studied selection methods include
roulette wheel selection and tournament selection.
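A short sketch of roulette-wheel selection, the first of the methods named above: each solution's chance of being picked is proportional to its share of the total fitness, so less fit solutions still get through occasionally.

import random

def roulette_wheel(population, fitness_fn, k):
    # selection probability is proportional to fitness
    weights = [fitness_fn(individual) for individual in population]
    return random.choices(population, weights=weights, k=k)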
Reproduction:
The next step is to generate a second-generation population of solutions from those
selected, through
1. Genetic operators – crossover, and
2. Mutations.
For each new solution to be produced, a pair of "parent" solutions is selected for breeding from
the pool selected previously. By producing a "child" solution using the above methods of
crossover and mutation, a new solution is created which typically shares many of the
characteristics of its "parents". New parents are selected for each child, and the process
continues until a new population of solutions of appropriate size is generated.
These processes ultimately result in the next generation population of chromosomes
that is different from the initial generation. Generally the average fitness will have increased by
this procedure for the population, since only the best organisms from the first generation are
selected for breeding, along with a small proportion of less fit solutions, for reasons already
mentioned above.
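A sketch of these two operators acting on the list-of-numbers genome used in the cuboid example; the mutation rate and the size of the random tweak are assumptions chosen for illustration.

import random

def crossover(parent_a, parent_b):
    # single-point crossover: the child takes the head of one parent and the tail of the other
    point = random.randint(1, len(parent_a) - 1)
    return parent_a[:point] + parent_b[point:]

def mutate(genome, rate=0.05):
    # each gene has a small chance of being nudged up or down
    return [gene * random.uniform(0.9, 1.1) if random.random() < rate else gene
            for gene in genome]

child = mutate(crossover([4.0, 6.0, 2.4], [8.0, 3.0, 3.1]))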
All this is carried out by the computer in a virtual environment.
The computer computes the crossover and mutation values for each permutation and
combination, up to a maximum number of crossovers previously set in the algorithm.
Termination:
This generational process is repeated until a termination condition has been reached.
Common terminating conditions are:
1. A solution is found that satisfies minimum criteria.
2. Fixed number of generations reached
3. Allocated budget (computation time/money) reached
4. The highest ranking solution's fitness is reaching or has reached a plateau such that
successive iterations no longer produce better results
5. Combinations of the above2
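Putting the four steps together, the sketch below runs the whole cycle for the cuboid example, reusing the fitness, crossover and mutate helpers sketched earlier; the population size, generation cap and target score are illustrative assumptions, not prescriptions.

import random

def genetic_algorithm(fitness_fn, pop_size=200, generations=100, target=80.0):
    # 1. Initialization: a random population spread over the search space
    population = [random_cuboid() for _ in range(pop_size)]
    best = max(population, key=fitness_fn)
    for _ in range(generations):                    # termination: generation cap
        if fitness_fn(best) >= target:              # termination: minimum criteria met
            break
        # 2. Selection: fitness-proportionate (roulette-wheel) choice of parents
        weights = [fitness_fn(g) + 1e-9 for g in population]
        parents = random.choices(population, weights=weights, k=2 * pop_size)
        # 3. Reproduction: crossover of parent pairs plus occasional mutation
        population = [mutate(crossover(parents[2 * i], parents[2 * i + 1]))
                      for i in range(pop_size)]
        best = max(population + [best], key=fitness_fn)
    return best

best_cuboid = genetic_algorithm(fitness)   # converges towards cuboids with a large floor area and height above 2.1 m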
Genetic algorithm in architecture
Given how genetic algorithms work in solving optimization problems, we can
directly jump into how we can use them to solve problems in architecture. Architectural
problems, as discussed before, are not as simple as finding the shortest distance between points
A and B; they are considered to be highly complex, equivalent to an NP-complete problem1.
The computer simulation of evolutionary processes is already a well established
technique for the study of biological dynamics. One can unleash within a digital environment a
population of virtual plants or animals and keep track of the way in which these creatures
change as they mate and pass their virtual genetic materials to their offspring. The hard work
goes into defining the relation between the virtual genes and the virtual bodily traits that they
generate; everything else (keeping track of who mated with whom, assigning fitness values to
each new form, determining how a gene spreads through a population over many generations)
is a task performed automatically by genetic algorithms.
In a sense evolutionary simulations replace design, since architects can use this software
to breed new buildings rather than specifically design them. This is basically correct, but there is
a part of the process in which deliberate design is still a crucial component. Although the
software itself is relatively well known and easily available, so that users may get the
impression that breeding new forms has become a matter of routine, the space of possible
designs that the algorithm searches needs to be sufficiently rich for the evolutionary results to
be truly surprising. As an aid in design these techniques would be quite useless if the designer
could easily foresee what forms will be bred. Only if virtual evolution can be used to explore a
space rich enough so that all the possibilities cannot be considered in advance by the designer,
only if what results shocks or at least surprises, can genetic algorithms be considered useful
visualization tools. And in the task of designing rich search spaces, certain philosophical ideas come into play:
the productive use of genetic algorithms implies the deployment of three forms of
philosophical thinking –
1. populational,
2. intensive, and
3. topological thinking –
which form the basis for a brand new conception of the genesis of form.
To be able to apply the genetic algorithm at all, a particular field of design needs to first
solve the problem of how to represent the final product (a building) in terms of the process that
generated it, and then, how to represent this process itself as a well-defined sequence of
operations. It is this sequence, or rather, the computer code that specifies it, that becomes the
"genetic material" of the building in question. In the case of architects using computer-aided
design (CAD) this problem becomes greatly simplified given that a CAD model of an
architectural structure is already given by a series of operations. A round column, for example,
is produced by a series such as this:
1. Draw a line defining the profile of the column;
2. Rotate this line to yield a surface of revolution;
3. Perform a few "Boolean subtractions" to carve out some detail in the body of the
column.
Some software packages store this sequence and may even make available the actual
computer code corresponding to it, so that this code now becomes the "virtual DNA" of the
column (A similar procedure is followed to create each of the other structural and ornamental
elements of a building).
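A minimal sketch may make the notion of "virtual DNA" concrete. In the Python fragment below the column is reduced to a few numeric genes and a fixed developmental sequence (profile, revolution, subtraction of detail) that reads them; every name and value is a hypothetical illustration, and no actual CAD package is called.

# A column reduced to "virtual DNA": a handful of numeric genes plus the fixed
# sequence of operations that reads them. Names and values are hypothetical.
import math

column_genome = {
    "profile_radius": 0.45,   # proportions of the initial profile line
    "height": 6.0,            # how far the profile is swept
    "flute_count": 12,        # how many grooves the "Boolean subtraction" carves
    "flute_depth": 0.05,
}

def develop_column(genome, segments=64):
    """The fixed developmental sequence: 1) draw the profile, 2) rotate it into
    a surface of revolution, 3) subtract detail from the body of the column."""
    ring = []
    for i in range(segments):                                 # step 2: revolve
        angle = 2 * math.pi * i / segments
        radius = genome["profile_radius"]                     # step 1: profile
        # Step 3 approximated: flutes modelled as periodic dips in the radius.
        radius -= genome["flute_depth"] * max(0.0, math.cos(genome["flute_count"] * angle))
        ring.append((radius * math.cos(angle), radius * math.sin(angle)))
    base = [(x, y, 0.0) for x, y in ring]
    top = [(x, y, genome["height"]) for x, y in ring]
    return base + top

column_form = develop_column(column_genome)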
At this point we need to bring one of the philosophical resources [as mentioned earlier]
to understand what happens next: population thinking. In a nutshell, what characterizes this
style may be phrased as "never think in terms of Adam and Eve but always in terms of larger
reproductive communities". More technically, the idea is that despite the fact that at any one
time an evolved form is realized in individual organisms, the population not the individual is the
matrix for the production of form. A given animal or plant architecture evolves slowly as genes
propagate in a population, at different rates and at different times, so that the new form is
slowly synthesized within the larger reproductive community. The lesson for computer design is
simply that once the relationship between the virtual genes and the virtual bodily traits of a
CAD building has been worked out,
1. An entire population of such buildings needs to be unleashed within the computer,
not just a couple of them.
2. The architect must add to the CAD sequence of operations points at which
spontaneous mutations may occur (in the column example: the relative proportions
of the initial line; the center of rotation; the shape with which the Boolean
subtraction is performed)
and then let these mutant instructions propagate and interact in a collectivity over many
generations.
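As a sketch of this population thinking, the following Python fragment breeds a whole community of variants of a hypothetical genome of the kind sketched above for the column, allowing spontaneous mutation only at the points the designer has marked; all rates, ranges and generation counts are assumed.

# Population thinking applied to a hypothetical column genome: not one or two
# columns but a reproductive community, with spontaneous mutation allowed only
# at the points the architect has designated.
import random

seed_genome = {"profile_radius": 0.45, "height": 6.0,
               "flute_count": 12, "flute_depth": 0.05}

MUTATION_POINTS = ["profile_radius", "flute_count", "flute_depth"]

def mutate(genome, rate=0.1):
    child = dict(genome)
    for gene in MUTATION_POINTS:                  # mutation only at chosen points
        if random.random() < rate:
            child[gene] = type(child[gene])(child[gene] * random.uniform(0.8, 1.2))
    return child

def breed(parent_a, parent_b):
    # Crossover: each gene is inherited from one parent or the other at random.
    child = {gene: random.choice((parent_a[gene], parent_b[gene])) for gene in parent_a}
    return mutate(child)

# "Never Adam and Eve": start from a community of variants, not a single pair.
population = [mutate(seed_genome, rate=1.0) for _ in range(50)]
for generation in range(20):
    population = [breed(random.choice(population), random.choice(population))
                  for _ in range(len(population))]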
Following this is the idea of intensive thinking. In science, an intensive quantity is one
whose value does not halve when the system is halved; in other words, it is not divisible the
way size is. This is best understood in contrast with its opposite, an extensive quantity: volume
is an extensive quantity whereas temperature is an intensive quantity – a bucket of water at
90°C remains at the same temperature even if it is emptied into two buckets of half the volume
each. Although this lack of divisibility is important, stress should also be laid on another feature
of intensive quantities: a difference of intensity spontaneously tends to cancel itself out and, in
the process, drives fluxes of matter and energy. In other words, differences of intensity are
productive differences
since they drive processes in which the diversity of actual forms is produced. For example, the
process of embryogenesis, which produces a human body out of a fertilized egg, is a process
driven by differences of intensity (differences of chemical concentration, of density, of surface
tension).
What does this mean for the designer? That unless one brings into a CAD model the
intensive elements of structural engineering, basically, distributions of stress, a virtual building
will not evolve as a building. In other words, if the column we described above is not linked to
the rest of the building as a load-bearing element, by the third or fourth generation this column
may be placed in such a way that it cannot perform its function of carrying loads in compression
anymore. The only way of making sure that structural elements do not lose their function, and
hence that the overall building does not lose viability as a stable structure, is to somehow
represent the distribution of stresses, as well as what type of concentrations of stress endanger
a structure's integrity, as part of the process which translates virtual genes into bodies. In the
case of real organisms, if a developing embryo becomes structurally unviable it won't even get
to reproductive age to be sorted out by natural selection. It gets selected out prior to that. A
similar method would have to be simulated in the computer to make it certain that the
products of virtual evolution are viable in terms of structural engineering prior to being selected
by the designer in terms of their "aesthetic fitness". And this kind of intensive thinking not only
holds true for structural engineering but also all the other similar departments, for making the
final product efficient in all aspects of building design.
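The role of such a viability filter can be sketched as follows. In this illustrative Python fragment, candidates that fail a crude structural check are discarded before the designer's aesthetic selection ever sees them; the slenderness limit, the load figures and the capacity formula are stand-in assumptions, not real engineering checks, which would require coupling the CAD model to an analysis package.

# Intensive thinking as a viability filter: candidates that fail a crude
# structural check are "selected out" before they reach the designer's
# aesthetic selection, like an embryo that never reaches reproductive age.
import random

def random_candidate():
    return {"column_radius": random.uniform(0.1, 1.0),
            "height": random.uniform(3.0, 12.0),
            "floor_load": random.uniform(50.0, 500.0)}    # kN, assumed

def is_structurally_viable(candidate):
    # Crude stand-ins for real engineering checks: a slenderness limit and a
    # compressive-capacity limit, both assumed for illustration.
    slenderness = candidate["height"] / candidate["column_radius"]
    capacity = 1000.0 * candidate["column_radius"] ** 2   # assumed capacity, kN
    return slenderness < 25.0 and candidate["floor_load"] < capacity

def aesthetic_fitness(candidate):
    # Placeholder for the human designer's judgement in each generation.
    return random.random()

population = [random_candidate() for _ in range(100)]
viable = [c for c in population if is_structurally_viable(c)]         # culled first
selected = sorted(viable, key=aesthetic_fitness, reverse=True)[:10]   # then chosen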
Now, let's assume that all of these conditions have been met: an architect-programmer sets out
to perform this with a software package [a CAD and a structural engineering one] and writes
code that combines the functionality of both. If he uses virtual evolution as the process of
design, then the only role left for the human is to select for aesthetic fitness in every
generation. By doing [only] this, doesn't the architect's job become that of a painter's?
There is, however, another part of the process which needs the third type of
philosophical thinking to be explained – the topological thought. One way to introduce this
other style of thinking is by contrasting the results which artists have so far obtained with the
genetic algorithm and those achieved by biological evolution. When one looks at current artistic
results the most striking fact is that, once a few interesting forms have been generated, the
evolutionary process seems to run out of possibilities. New forms do continue to emerge but
they seem too close to the original ones, as if the space of possible designs which the process
explores had been exhausted. This is in sharp contrast with the incredible combinatorial
productivity of natural forms, like the thousands of original architectural "designs" exhibited by
vertebrate or insect bodies. Although biologists do not have a full explanation of this fact, one
possible way of approaching the question is through the notion of a "body plan".
As vertebrates, the architecture of our bodies makes us part of the phylum "chordata"1.
The term "phylum" refers to a branch in the evolutionary tree (the first bifurcation after animal
and plant "kingdoms") but it also carries the idea of a shared body-plan, a kind of "abstract
vertebrate" which, if folded and curled in particular sequences during embryogenesis, yields an
elephant, twisted and stretched in another sequence yields a giraffe, and in yet other
sequences of intensive operations yields snakes, eagles, sharks and humans. There are
"abstract vertebrate" design elements, such as the tetrapod limb2, which may be realized in
structures as different as the single digit limb of a horse, the wing of a bird, or the hand with
opposing thumb of a human. Given that the proportions of each of these limbs, as well as the
number and shape of digits, is variable, their common body plan cannot include any of these
details. In other words, while the form of the final product (an actual horse, bird or human)
does have specific lengths, areas and volumes, the body-plan cannot possibly be defined in
these terms but must be abstract enough to be compatible with a myriad of combinations of these
extensive quantities. [We use the term "abstract diagram" or "virtual multiplicity” to refer to
entities like the vertebrate body plan, but the concept also includes the "body plans" of non-
organic entities like clouds or mountains.]
What kind of theoretical resources do we need to think about these abstract diagrams?
In mathematics the kind of spaces in which terms like "length" or "area" are fundamental
notions are called "metric spaces", the familiar Euclidean geometry being one example of this
class. (Non-Euclidean geometries, using curved instead of flat spaces, are also metric). On the
other hand, there are geometries where these notions are not basic, since these geometries
possess operations which do not preserve lengths or areas unchanged. Architects are familiar
with at least one of these geometries, projective geometry (as in perspective projections). In
this case the operation "to project" may lengthen or shrink lengths and areas so these cannot
be basic notions. In turn, those properties which do remain fixed under projections may not be
preserved under yet other forms of geometry, such as differential geometry or topology. The
operations allowed in the latter, such as stretching without tearing, and folding without gluing,
preserve only a set of very abstract properties invariant. These topological invariants (such as
the dimensionality of a space, or its connectivity) are precisely the elements we need to think
about body plans (or more generally, abstract diagrams.) It is clear that the kind of spatial
structure defining a body plan cannot be metric since embryological operations can produce a
large variety of finished bodies, each with a different metric structure. Therefore body plans
must be topological.
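A toy sketch may help fix the idea: a single connectivity graph, standing in for a body plan, can be realised with entirely different lengths and proportions while its connectivity, a topological invariant, stays the same. The limb names and coordinates below are invented purely for illustration.

# One "abstract diagram" (a connectivity graph standing in for a body plan)
# realised with two different metric embeddings. Lengths and proportions
# differ; the connectivity does not.
import math

# The abstract diagram: which elements connect to which (topological data only).
body_plan = {("shoulder", "upper"), ("upper", "lower"), ("lower", "digit")}

# Two hypothetical metric realisations of the same plan.
horse_limb = {"shoulder": (0.0, 0.0), "upper": (0.0, -0.6), "lower": (0.0, -1.3), "digit": (0.0, -1.5)}
bird_wing = {"shoulder": (0.0, 0.0), "upper": (0.4, 0.0), "lower": (0.9, 0.1), "digit": (1.6, 0.2)}

def segment_lengths(plan, coords):
    return {pair: math.dist(coords[pair[0]], coords[pair[1]]) for pair in plan}

# The shared invariant: every connection named in the plan exists in both forms...
assert all(node in horse_limb and node in bird_wing
           for pair in body_plan for node in pair)
# ...while the metric properties (segment lengths) are entirely different.
print(segment_lengths(body_plan, horse_limb))
print(segment_lengths(body_plan, bird_wing))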
To return to the genetic algorithm, if evolved architectural structures are to enjoy the
same degree of combinatorial productivity as biological ones they must also begin with an
adequate diagram, an "abstract building" corresponding to the "abstract vertebrate". And it is
at this point that design goes beyond mere breeding, with different artists designing different
topological diagrams bearing their signature. The design process, however, will be quite
different from the traditional one which operates within metric spaces. It is indeed too early to
say just what kind of design methodologies will be necessary when one cannot use fixed lengths
or even fixed proportions as aesthetic elements and must instead rely on pure connectivities
(and other topological invariants). But what is clear is that, without this, the space of
possibilities which virtual evolution blindly searches will be too impoverished to be of any use.
Thus, architects wishing to use this new tool must not only become hackers (so that they can
create the code needed to bring extensive and intensive aspects together) but also be able "to
hack" biology, thermodynamics, mathematics, and other areas of science to tap into the
necessary resources. As fascinating as the idea of breeding buildings inside a computer may be,
it is clear that mere digital technology without populational, intensive and topological thinking
will never be enough.3
Notes and references:
1. H. Allen Orr, Testing Natural Selection, Scientific American, January 2009.
2. Wikipedia.org, accessed on November 15, 2008.
3. Anonymous, Genetic algorithm in Architecture.
Case Studies
1. Thesis Project, Karthik.D, Design of a high rise – an exploration in form finding
algorithms.
Figure 11: Source: Karthik.D, Architect, India.
This thesis project by Architect Karthik.D, undertaken by him in May 2007, was an
exploration into the merits and demerits of employing "form finding" processes in the design
of an environment [a high rise].
The reason he gave for choosing such a project for his thesis was:
“The dawn of the digital era has brought endless possibilities for architects looking to
push the envelope which is a consequence of the convergence of numerous developments in
science, technology, mathematics, economy and globalization in recent decades. Within the
discipline of architecture, the impact of this new paradigm is being especially marked. Whole
cityscapes are changing, new building types are emerging rendering the older ones obscure.
The reasons for choosing to work on this project are two-fold. One is a desire to comprehend the
changes brought about in the very mode of designing due to the advent of digital technologies,
the second being an intent to deconstruct the notion of a skyscraper iconography as we perceive
it today. This project is thus a culmination of the two factors mentioned above, both of which
are mutually dependent on each other. Contemporary architectural practices across the globe
are all exploring a wide variety of 'design processes', each of which is quite
profusely facilitated by digital technologies. This project would examine an algorithmic
approach towards the design of a high-rise. The success of this project would lie in my ability to
shape the future of this fascinating architectural genre.”
The site:
The site for the project was the central business district in Moscow which had a strong
history behind it and therefore a strong context.
Moscow International Business Center is a projected commercial district in central
Moscow, Russia. The Moscow-City area is currently under intense development. The goal of
Moscow IBC is to create the first zone in Russia (and in all of Eastern Europe) to combine
business activity, living space and entertainment: a city within a city. The whole project takes
up 1 square kilometer. The proposed site is located on parcels 17 and 18 inside Moscow IBC.
The project required an iconic tower to be constructed there, for the very purpose of asserting
Russia's status as a modern superpower.
Process:
The whole process of the design can be subdivided into the following sub categories:
1. Experiments with numbers and parameters.
2. Program for the building.
3. Devising an algorithm for the generation of form.
Experiments with numbers and parameters:
The architect started his project with a number of experiments and analyses. These included:
1. Surface analysis
2. Module study [1 and 2]
3. Ground coverage – FAR – height relationship analysis
This is where the architect analysed and studied the different factors which could
affect the appearance and other specifications of the built form.
figure 12: Source: Karthik.D, Architect, India.
figure 13: Source: Karthik.D, Architect, India.
Program for the building:
The architect prepared a program for the building based on the different functions the
building was to cater to.
Devising an algorithm for the generation of form:
The architect then devised an algorithm for the generation of form of the building; a
part of it is shown below.
“Option Explicit
Dim resFar:resFAR = 188000
Dim offFar:offFar = 166000
Dim hotFar:hotFar = 96000
Dim scale
Dim n:n = 120
Dim n1:n1 = 110
Dim n2:n2 = 90
Dim a:a = 4.5
Dim a1:a1 = 4.5
Dim a2:a2 = 3
Dim i:i = 1
Dim arrpoint:arrpoint = Rhino.getpoint ("pick point for rotation")
Dim arrend:arrend = Array (arrpoint(0), arrpoint(1), arrpoint(2)+4000)

Function don (a,n,scale)
    For i=1 To n
        'Dim arrstart:arrstart = Array (arrpoint(0),arrpoint(1),arrpoint(2))
        Dim c:c = Rhino.selectedobjects
        Dim f:f = Rhino.curveareacentroid (c(0))
        Dim g:g = Array (f(0)(0),f(0)(1),f(0)(2)+4000)
        Dim arrScale:arrScale = Array(scale,scale,1)
        Rhino.scaleobject c(0),arrpoint,arrScale
        Dim e:e = Rhino.extrudecurvestraight (c(0),f(0),g)
        'Rhino.capplanarholes e
        Rhino.rotateobject c(0), arrpoint, a, ,True
        Rhino.unselectallobjects
        Dim d:d = Rhino.firstobject (True)
        Rhino.moveobject d,arrpoint,arrend
    Next
End Function

Dim red:red = Rhino.GetObject ("pick object red")
Dim blue:blue = Rhino.GetObject ("pick object blue")
Dim green:green = Rhino.GetObject ("pick object green")

Rhino.currentlayer "layer red"
Rhino.unselectallobjects
Rhino.selectobject red
don a,n,0.996

Rhino.currentlayer "layer blue"
Rhino.layervisible "layer red",False
Rhino.unselectallobjects
Rhino.selectobject blue
don -a1,n1,0.996

Rhino.currentlayer "layer green"
Rhino.layervisible "layer blue", False
Rhino.unselectallobjects
Rhino.selectobject green
don a2,n2,0.996

Rhino.layervisible "layer red",True”
The platform used for the generation of this algorithm was RHINO. The algorithm was intended
to do the following tasks:
1. Generate a basic floor plate which consists of three triangular leaves around a central
circular core, as shown in the figure.
figure 14: Source: Karthik.D, Architect, India.
2. Relate the dimensions of each triangle in the floor plate on the nth floor to those on the
ground floor by a factor k, where the following relation holds:
Area[nth floor] = k^n × Area[ground floor]
3. Rotate each of these triangles individually through an angle α, where the following
relation holds:
αn = αG × n
where αG is the per-floor rotation angle, and k is a constant with k < 1.
Given that he has made the computer understand what he wanted through his algorithm [Visual
Basic on the RHINO platform], the task for the computer is just to generate points in Cartesian space
which satisfy the given conditions, and this is nothing but the tectonic form of the building.
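To make the algorithm's intent easier to read, here is a plain-geometry restatement in Python of the scale-and-rotate loop above, with no Rhino calls: each floor plate is obtained from the one below by scaling about the core and rotating through a constant angle. The per-floor scale and angle mirror the script (0.996 and 4.5 degrees), but the triangular "leaf" coordinates are assumed placeholders.

# Plain-geometry restatement of the scale-and-rotate loop in the RhinoScript
# above. Values mirror the script (k ≈ 0.996, 4.5°, 120 floors); the triangular
# profile itself is an assumed placeholder, not the architect's actual plate.
import math

FLOOR_HEIGHT = 4.0              # metres, assumed
SCALE_PER_FLOOR = 0.996
ANGLE_PER_FLOOR = math.radians(4.5)
FLOOR_COUNT = 120

def transform(point, floor):
    # Scale about the core by k^floor and rotate by floor * angle, then lift.
    x, y = point
    k = SCALE_PER_FLOOR ** floor
    a = ANGLE_PER_FLOOR * floor
    xr = k * (x * math.cos(a) - y * math.sin(a))
    yr = k * (x * math.sin(a) + y * math.cos(a))
    return (xr, yr, floor * FLOOR_HEIGHT)

# An assumed triangular "leaf" of the ground-floor plate, centred on the core.
ground_leaf = [(12.0, 0.0), (-6.0, 10.4), (-6.0, -10.4)]

tower = [[transform(p, floor) for p in ground_leaf] for floor in range(FLOOR_COUNT)]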
The form of the building is thus generated through a simple-looking algorithm, however
complex it may have seemed at first sight in the programming-language interface.
After the generation of its form, the building is constructed on the site through the use
of Computer Aided Manufacturing, but that is another dissertation.1
2. Serpentine Pavilion
The Serpentine Pavilion 2002 - Toyo Ito and ARUP
figure 15: source: www.flickr.com
Geometry
Geometry is an animation. It has always been so - the ideas behind Greek architecture
were based on proportionate rules that took their inspiration from the relative positions of a
point on a line. Imagine a dot traveling along a line and at different positions stopping to
produce a measure of harmonic, geometric, or arithmetic means. At one particular point the
lesser part of the line is to the greater part as the greater part is to the whole.
This defines the Golden Mean that led to the Acropolis and much of the compositional rules
behind classical art. Alberti and Le Corbusier too developed their own proportionate rules for
the making of architecture. With the computer we now have the power to look further into an
animate geometry - using feedback techniques and algorithms. Tectonic space need not be
limited to imagining structure as box like and assembled with standard post and beam
constructions; it may be viewed as a serial punctuation generated by complex processes. But
the investigation of such non-linear space needs its own rigors and, surprisingly, these come
back to aesthetic ideas of proportion, scale, and materiality.1
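The proportional rule mentioned above, that the lesser part of the line is to the greater as the greater is to the whole, can be stated as a two-line calculation. The Python sketch below simply solves that definition; it is an added illustration, not drawn from the source text.

# Split a unit line at x (the greater part) so that lesser/greater = greater/whole,
# i.e. (1 - x) / x = x / 1, which rearranges to x**2 + x - 1 = 0.
x = (5 ** 0.5 - 1) / 2          # ≈ 0.618..., the greater part of a unit line
golden_ratio = 1 / x            # ≈ 1.618..., the Golden Mean
assert abs((1 - x) / x - x / 1) < 1e-12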
Initially, Toyo Ito proposed two questions:
A - How to float a slab?
B - How to transform the box?
A. Wanting a slab to float means it loses its connection with the ground, no line shoots straight
down and amplifies gravity, no squatness or robustness or claims to an assumed efficiency
remain. Instead, there could be a wandering line, a kind of dreaming path.
No need for hard single contact - instead, there could be collector zones or gravity basins.
Instead of descent, and the idea of a load compelled to travel downwards, what if the logic
were flipped and the 'load' made to ascend, upwards? The ground is given life to rise and coil up into the
air - then a flat plane intersects, almost 'flying' at it, to be embedded. The movement in the roof
slab is, as if, frozen. If the rising ground is translucent, pools of light may fill the space between
heaven and earth, and benches and beds are fold lines. Mini program space is found within. The
traditional limit of slab on columns is now forgotten. Somewhere above, the roof floats.
B. With B the game is positive, negative. How much is void and what should remain of solid
material? Is it an eating away of the form or a flow of large bubbles that may trap a form
within? 2
Resolving A and B
The initial questions A and B Ito raised led to much speculation about form and enclosure
and how to define a traditional volume. For there was much to experiment with as we
investigated what could be contained or liberated simply by the drawing of pattern, and what
sort of risk we might inject into the unpredictable. We chose to imagine, in the event, a cubic
space made only out of vanishing lines.
Network
A straight line is a constant velocity. In speed lines it streaks from nowhere to somewhere,
and does not want to be stopped. But a crossing line that intersects the motion slows it
down. A series of crossings and the questions multiply, where direction is lost, where time
stops.
We may loop or zigzag or jump over intersections imposing a particular direction over others,
but as the network grows the puzzle becomes more intricate, for which line came first?
Algorithm
Usually to construct a rectangular or square roof, lines are drawn at right angles to each
other, parallel to the sides of the plan, to produce a grid of beams. This roof plane is then
supported by vertical columns placed evenly around the edges. Instead of following the edges
though, a more efficient pattern for the roof may be drawn by traveling across at an angle, say
from the half point on one side to the half point on the adjacent side. Repeating this for each side
produces an inner square wholly embedded within the first, but diagonal in orientation. If the
connection between adjacent sides is made more general, the start and end point of the first
line may have different ratios. This puts a skew into the pattern, and once the new square is
completed a virtual square is implied that goes beyond the boundaries of the original shape.
Repeating the idea produces a spiral of shapes.3
At the same time if all lines are projected forwards and backwards a dense field of
crossed lines appears. If anywhere on this two-dimensional field the planes of a cube or box are
laid out flat and then folded back again, the pattern picked up provides a continuous zigzag
tracing over the three-dimensional form. Daniel Bosia of Arup helped develop this algorithm, to
provide us with endless opportunity in the drawing of networks that outlined the territory.
figure 16: source: Digital Tectonics, Karthik.D, School of Planning and Architecture
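The rotated-square construction described above, and the projection of its sides into a field of crossed lines, can be restated as a short geometric sketch. The Python below implements only the simple single-ratio, inscribed case; the skewed case in the text uses different start and end ratios, and Arup's actual algorithm differs in its details. The ratio, iteration count and extension factor are assumed values.

# Sketch of the spiralling-square construction: each new square joins points a
# fixed fraction of the way along the previous square's sides, and every side
# is then extended forwards and backwards to build the dense crossed field.

def next_square(square, ratio):
    # Take a point `ratio` of the way along each side, from corner i to i+1.
    return [tuple(a + ratio * (b - a) for a, b in zip(square[i], square[(i + 1) % 4]))
            for i in range(4)]

def spiral_of_squares(side=1.0, ratio=0.35, steps=12):
    square = [(0.0, 0.0), (side, 0.0), (side, side), (0.0, side)]
    squares = [square]
    for _ in range(steps):
        square = next_square(square, ratio)   # ratio = 0.5 gives the pure diagonal case
        squares.append(square)
    return squares

def extended_lines(squares, factor=3.0):
    # Project every side forwards and backwards to produce the crossed field.
    lines = []
    for square in squares:
        for i in range(4):
            (x1, y1), (x2, y2) = square[i], square[(i + 1) % 4]
            dx, dy = x2 - x1, y2 - y1
            lines.append(((x1 - factor * dx, y1 - factor * dy),
                          (x2 + factor * dx, y2 + factor * dy)))
    return lines

pattern = extended_lines(spiral_of_squares())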
Construction
A minimum size of steel flat is chosen to materialize all lines.
Particular traces of the pattern are underlined and made thicker to act as structure.
figure 17: Source: www.flickr.com
(We should note that normally steel flats would be judged too weak to span much distance as
beams, as the thin sections buckle easily. But due to the side support made available from
crossing elements in the pattern this particular weakness is easily overcome, the density offers
a net of stability.)2
3. British Museum Great Court Roof
Figure 18: source: www.flickr.com
The new roof to the Great Court at the British Museum (architect Norman Foster) faced
a major constraint in the geometry of the surrounding buildings that the roof had to match - a
central circle and an outer rectangle. A second consideration was to make the structure
invisible - using as little material as possible so that the sky would be more visible than the roof.
A third consideration was to keep the height down to satisfy the planning constraints imposed
on the design. A fourth was to ensure that it could be built whilst the museum was working. A
fifth was to ensure a pattern of roof elements that would support the glass skin and flow
naturally between the circle and the rectangle like a single 'web'. 4
Finding a form for the roof began with a 'naturally' formed surface. A soap-film
stretched between the circle and rectangle inflated into an undulating shell. Ideally, vertical
gravity forces would have been used to define the shape rather than pressure, which is normal
to the surface, but as the roof was going to be rather flat anyway due to the planning constraint
on height it did not make a great deal of difference. Finding this form to be horribly bulbous,
Chris Williams (of Bath University), who assisted the form-finding process for Buro Happold,
played with the stress levels in the bubble effectively tightening it in areas intended to be lower
and slackening it in areas where it should be higher. In the end this wasn't quite enough and he
resorted to describing the form analytically, though with a residual memory of the soap-film
form.
The algorithm used for the geometric design of the British Museum Great Court roof
used a number of different types of rule. Initial studies used the relationship between the load
w, the stress function φ and the vertical coordinate z,
w = ε^αβ ε^λμ z,αλ φ,βμ
to derive an 'optimum' structural form. However, this approach was abandoned because other
constraints could not be accommodated. 5
figure 19: source: Digital Tectonics, Karthik.D
The final form is described by three functions, weighted and added together; x, y and z
are the Cartesian coordinates, and all other quantities are constants. The weighting functions
also vary with position in plan. The first function gives the change in level between the circular
Reading Room boundary and the outer rectangular boundary. The other two functions differ
mainly in their behavior at the corners: one is smooth and the other gives a concentration of
curvature. This was important for the structural action - the roof is supported on sliding
bearings and exerts no horizontal thrust on the existing building.
The position of the nodes of the steelwork grid upon this surface was determined by a
relaxation process applied to a 'numerical grid'. 6
The coarser structural grid is obtained by joining diagonal nodes of the numerical grid.
The relaxation process involved moving each of the nodes on the numerical grid until it was the
weighted average of the surrounding nodes. This process was repeated for the whole grid a
large number of times, until the grid stopped moving. The weighting functions vary with
position, mainly to try and limit the maximum size of glass panel.
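The relaxation described above can be sketched in a few lines. The Python fragment below repeatedly moves every interior node of a small two-dimensional grid to the average of its four neighbours until the grid stops moving; the grid size, tolerance and uniform weights are assumptions, whereas the actual roof used position-dependent weights on a curved surface in three dimensions.

def relax_grid(nodes, tolerance=1e-6, max_sweeps=10_000):
    """Move every interior node to the average of its four grid neighbours,
    sweep after sweep, until the grid stops moving. (On the actual roof the
    averaging weights varied with position, to limit the size of glass panels.)"""
    rows, cols = len(nodes), len(nodes[0])
    for _ in range(max_sweeps):
        max_move = 0.0
        for i in range(1, rows - 1):              # boundary nodes stay fixed
            for j in range(1, cols - 1):
                x = (nodes[i - 1][j][0] + nodes[i + 1][j][0]
                     + nodes[i][j - 1][0] + nodes[i][j + 1][0]) / 4.0
                y = (nodes[i - 1][j][1] + nodes[i + 1][j][1]
                     + nodes[i][j - 1][1] + nodes[i][j + 1][1]) / 4.0
                max_move = max(max_move, abs(x - nodes[i][j][0]), abs(y - nodes[i][j][1]))
                nodes[i][j] = (x, y)
        if max_move < tolerance:                  # "the grid stopped moving"
            break
    return nodes

# A deliberately distorted 6 x 6 starting grid; relaxation smooths its interior.
grid = [[(j + (0.3 if (i * j) % 2 else 0.0), float(i)) for j in range(6)] for i in range(6)]
relaxed = relax_grid(grid)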
Once this process was complete the structure was analyzed in a number of ways -
including the application of a stress function corresponding to the roof trying to work in
compression and tension only. However, sharp folds indicated that this was not possible, and
therefore significant bending and torsional moments are to be expected in the structure, as
confirmed by more conventional analysis methods. 3
figure 20: Source: Digital Tectonics, Karthik.D, School of Planning and Architecture
Notes and references:
1. Karthik.D, Design of a high rise – thesis project, School of Planning and Architecture,
New Delhi, May 2008.
2. Karthik.D, Digital Tectonics – Architectural Dissertation, School of planning and
Architecture, New Delhi, December 2006.
3. Ibid.
The blue pill or the red one?
Algorithmic architecture is still at too early a stage for designers to use it exclusively for
designing buildings; yet, just as with any other technology, more and more people are becoming
aware of the benefits of employing computers as active design partners through the use of
algorithms.
The computer as a technology is as powerful as a nuclear weapon – it can be used either
to destroy millions of lives in seconds or to generate useful energy for the benefit of millions. A
technology as powerful as the computer demands mature thinking and usage before it is applied
to a purpose as sensitive as design.
Though there are some inherent demerits associated with this form of design, they will
be temporary; and as democratic students and professionals, we should accept and follow any
new architectural idea if it is justified for a good cause.
With information technology playing a crucial role everywhere, it will, for sure, do its
part in architecture too…
Appendix I
figure 22: Source: National Geographic, April 2008 issue.
The study of nest building behavior in the higher apes is based on roughly 50 years of field research and about 200 years of observation. In 1929, chimpanzees, gorillas and orangutans were systematically studied by the American primatologist couple, the Yerkes. They for the first time scientifically termed nest building as "constructivity" and theoretically placed it at the beginning of an evolutionary process. Their conclusion: "... nesting behavior illustrates the appearance and phylogenetic development of dependence on self-adjustment to increasing dependence on manipulation or modification of environments as a method of behavioral adaptation." (Yerkes 1929:564; Egenter 1983, 1987).
In terms of architectural theory, this conclusion introduces a basic situation which can be used to research the development of human building behavior, or architectural evolution in the anthropological sense. Over the last 50 years an important question has been clarified: nest building is to a great extent learned behavior. Earlier zoologists considered it merely motorically programmed instinctive behavior, but the surveys of Bernstein (1962, 1969) and Lethmate (1977) strongly questioned this opinion. The ability to weave branches into a stable construction requires a definite learning process.
Nest building behavior consequently can be seen as a tradition in the human sense, and an important one, because it shows us that the hand can be understood as the primary tool. Nest building thus becomes a primary type of handicraft in the factual and evolutionary sense of the word.
Do apes externalize thinking?
Figure 23: Source: http://implosion.architexturez.net/00AA2_Apes_NestsFig_Lg01.html
Figure 23: Diagram showing spatial location of a group of six nests constructed and used by gorillas in a mountain forest surveyed in terms of constructional types and types of users (acc. to Kawai/ Mizuhara 1959)
# .......... tree nest
_ .......... mixed construction using branches of trees and bamboo stalks
x .......... nest constructed of bamboo
o .......... ground nest
D .......... soiled with faeces
nD ........ clean
h .......... height in metres
Figure 24: source: http://implosion.architexturez.net/00AA2_Apes_NestsFig_Lg01.html
Figure 24: Reconstruction of one Gorilla group’s night camp based on a plan measured by Izawa/ Itani. The presumably dense bamboo thicket in the centre was left out in the drawing, to make the nests clearly visible.
Figure 25: Source: http://implosion.architexturez.net/00AA2_Apes_NestsFig_Lg01.html
Figure 25: My home is my castle: spatial interpretation of the night camp as 'access-place-schema'. The female and child thus occupy the central and highly secured place. Four younger gorillas occupy and secure the corner posts of the pentagon. The ground nest of the dominant male is presumably positioned at the entrance path to the camp. The strongest and most experienced animal is thus imposed with the duties of a doorkeeper. This spatial arrangement shows a strong similarity with elementary ground plans of human dwellings. A very basic form of securing space finds expression.
ES .......... external space (jungle), extensively patrolled
IS .......... internal space (home, rest), intensively patrolled
<--> .......... external-internal relation of patrols
...... .......... inner path system
---- .......... outer path system (access)
x .......... peripheral sleeping place, at the same time individually occupied border point with social function in regard to group
o .......... central sleeping place, highly secured place
X .......... access (outside/ inside, extensive/ intensive regarding patrols)
F .......... front
C .......... centre
B .......... back
S .......... sides
Do Apes rationalize?
Appendix II
figure 26: Source: http://www.fastcursor.com/computers/images/quantum-computer-photo-gallery.jpg
Has D-Wave really demonstrated a quantum computer? Most scientists are skeptical. Even the company, D-Wave, does not seem to be very sure: it uses some quantum mechanics, says the company. The Orion-processor-powered quantum computer has not been made available by D-Wave to scientists or the public for further scientific examination, and the D-Wave website does not have many details to go by. For now, we will have to make do with this photo gallery of what is probably the world's first quantum computer. The computers we use today work on the principles of classical physics, where n bits can store only one of their possible values at a time; these are called classical bits. A quantum computer works on the basis of the mysterious world of quantum physics, where all 2^n permutations of values can be stored at the same time in n qubits, by the principle of quantum superposition. The consequence is mind-boggling: things which were once considered impossible for even the fastest supercomputers of the world will be theoretically possible for a pocket-sized quantum computer.
Is Quantum Computing the future?
Bibliography
Books and magazines:
1. Kaku, Michio, Intelligence Revolution (documentary), BBC, 2007.
2. Lawson, Bryan, How Designers Think, Architectural Press, Burlington, 2005, pg 9.
3. Terzidis, Kostas, Algorithmic Architecture, Architectural Press, Burlington, 2006.
4. Kalay, Yehuda E., Principles of Computer Aided Design: Computability of Design.
5. Knight, Terry, Report for the NSF/MIT Workshop on Shape Computation, School of
Architecture and Planning, MIT, April 1999.
6. Orr, H. Allen, Testing Natural Selection, Scientific American, January 2009.
7. Anonymous, Genetic Algorithm in Architecture.
8. Karthik.D, Design of a High Rise – Thesis Project, School of Planning and Architecture,
New Delhi, May 2008.
9. Karthik.D, Digital Tectonics – Architectural Dissertation, School of Planning and
Architecture, New Delhi, December 2006.
10. Kolarevic, Branko, Architecture in the Digital Age.
11. Architectural Design, May 2004 issue.
12. Scientific American, January 2009 issue.
Other Resources:
13. Wikipedia.org
14. Google search