the grid - first semester thesis book
DESCRIPTION
THE GRID - Thesis book as it was at the end of the first semester. This documents research and initial software testing.
TRANSCRIPT
![Page 1: THE GRID - First Semester Thesis Book](https://reader037.vdocuments.net/reader037/viewer/2022103103/568bdbbf1a28ab2034afaeb8/html5/thumbnails/1.jpg)
![Page 2: THE GRID - First Semester Thesis Book](https://reader037.vdocuments.net/reader037/viewer/2022103103/568bdbbf1a28ab2034afaeb8/html5/thumbnails/2.jpg)
1
Thesis Website
http://the--grid.tumblr.com
CMU SoArch Thesis Website
http://www.andrew.cmu.edu/course/48-509/
CMU SoArch Home
http://www.cmu.edu/architecture/
Contact
Yuriy Sountsov
Find me on LinkedIn!
Revision 9 - 12/13/2013
![Page 3: THE GRID - First Semester Thesis Book](https://reader037.vdocuments.net/reader037/viewer/2022103103/568bdbbf1a28ab2034afaeb8/html5/thumbnails/3.jpg)
2
Table of Contents 2
Introduction 3
Interest and Architecture Moving Forward 3
Advisors and Primary Contacts 5
Project Brief 8
Project Methods and Timeline 13
Research 17
Precedents 17
Literary Research 21
Interviews and Reviews 33
Software Research 39
Hardware Research 51
Deliverables 53
Applications 53
Moving Forward - Software Package 57
Moving Forward - Benefits and Death 59
Moving Forward - Imagination and Experience 65
Appendix 67
Sources 67
Terms 76
Fig. 0.1 QR code for the thesis website.
Table of Contents
Real Time 3D Visualization
![Page 4: THE GRID - First Semester Thesis Book](https://reader037.vdocuments.net/reader037/viewer/2022103103/568bdbbf1a28ab2034afaeb8/html5/thumbnails/4.jpg)
3
Fig. 1.1 The eye provides the most powerful sense humans have: vision. Architecture is often a primarily visual profession - while many architects argue the tactile and auditory aspects of architecture are also very important, the experience always comes back to the appearance of a building. Therefore it should be of paramount importance to architects how they communicate the visuality of their designs, yet one of the most powerful tools in an architect’s arsenal, the computer, remains largely underused.
I, Yuriy Sountsov, am interested in this project because I have
the opportunity to give something to the field of architecture
that it has struggled to have. During my last year here at Carnegie
Mellon University I have the time, resources, and commitment
necessary to put forth a complete, developed, and forward-
thinking project that others can take and use in their lives as
designers and practitioners of architectural theory and thought.
In the four years and counting that I have spent studying
architecture at Carnegie Mellon University...I have seen the
future. And it is a strange future, indeed. The world, reader, is on
the brink of new and terrifying possibilities. But what was made
available in my education was severely lacking. Architects spend
too long learning tools that are obsolete by the time they find
ways to teach those tools to new architects.
What if the world could see inside the mind of the architect?
What if the architect’s ideas did not travel a maze before
becoming visible?
Architects are ready to learn. One of the major aspects of
an architectural thinker is that they are open to new ideas, new
societies of thought. Over the centuries, it has taken radical
Introduction
Interest and Architecture Moving Forward
![Page 5: THE GRID - First Semester Thesis Book](https://reader037.vdocuments.net/reader037/viewer/2022103103/568bdbbf1a28ab2034afaeb8/html5/thumbnails/5.jpg)
4
thinking of the likes of Brunelleschi, Gaudi, and Candela to
advance the field of architecture in great leaps and bounds, but
it was not because they created things that had never been seen
before but that they knew what was available and created what
could be possible. The digital world is only the latest such
untapped arena. It has been exponentially growing for decades
and the time is nigh for architects to seize the tools that await
them...on THE GRID.
Fig. 1.2 Brunelleschi’s dome, a single combination of previously disparate concepts that allowed architecture to take a great leap forward.
![Page 6: THE GRID - First Semester Thesis Book](https://reader037.vdocuments.net/reader037/viewer/2022103103/568bdbbf1a28ab2034afaeb8/html5/thumbnails/6.jpg)
5
Yuriy Sountsov - Yuriy Sountsov is a fifth-year architecture
student at Carnegie Mellon University. He is dissatisfied with
the digital backwardness of the program he has been exposed
to and wonders sometimes whether architects have become so
desensitized to the creative world around them that they think
they are on the cutting edge when in fact they are on the cutting
block. He has experience with various digital design software,
various video game engines, has seen many films and has explored
film technology. He sees a problem in architectural practice and
wishes to contribute his time and energy, for free, to fix it.
Arthur Lubetz - Arthur Lubetz is an Adjunct Professor in the
School of Architecture. He brings a theoretical mindset, a creative
framework, and a rigorous approach. He is also the fall semester
instructor. I have not collaborated with Arthur before though
he once taught a parallel studio. One of Arthur’s key driving
principles is the inclusion of the body in architecture. This relates
closely to my thesis.
ad V i s o R s a n d PR i m a R y Co n T a C T s
Fig. 1.3 My Fall 2010 studio project that Art Lubetz critiqued and reviewed.
![Page 7: THE GRID - First Semester Thesis Book](https://reader037.vdocuments.net/reader037/viewer/2022103103/568bdbbf1a28ab2034afaeb8/html5/thumbnails/7.jpg)
6
Dale Clifford - Dale Clifford is an Assistant Professor in
the School of Architecture. He has a significant background in finding
simple solutions to complex problems using media not native to
the problem. I have had Dale in two previous classes, Materials
and Assembly and BioLogic, both of which involved combining
disparate systems of assembly to achieve a goal not easily
reached, or not reachable at all, by any constituent system. Dale may also
provide many connections into digital fabrication practices.
Joshua Bard - Joshua Bard is an Assistant Professor in the
School of Architecture. He should contribute some digital and
media expertise. He will be the spring semester instructor.
Joshua is co-teaching a fall course, Parametric Modeling (the
other instructor being Ramesh Krishnamurti), that focuses on
Grasshopper, a piece of software that integrates with Rhinoceros
as a plugin. Joshua may help with adapting other software.
Ramesh Krishnamurti - Ramesh Krishnamurti is a Professor
in the School of Architecture. He should contextualize my thesis
due to his background studying computer visualization and
vision. He is teaching a course I am currently taking, Parametric
Modeling. I have worked as a Teaching Assistant with him for
the class Descriptive Geometry for a few years. He is also a great
thinker - he may help me work out the nature of my thesis and any
kinks it might have.
Fig. 1.4 Samples of work made in Materials and Assembly (MnA), BioLogic, and Parametric Modeling. Top to bottom: The MnA enclosure made with zip ties; A responsive wall using nitinol; Parametrically defined surfaces.
![Page 8: THE GRID - First Semester Thesis Book](https://reader037.vdocuments.net/reader037/viewer/2022103103/568bdbbf1a28ab2034afaeb8/html5/thumbnails/8.jpg)
7
Varvara Toulkeridou - Varvara Toulkeridou is a graduate
student in the School of Architecture. I have worked with her
while being a Teaching Assistant for Descriptive Geometry under
Ramesh. As she has a similar background and knowledge to
Ramesh, she may be another useful source of advice and critique.
She is also currently a Teaching Assistant in the Parametric
Modeling course that I am taking, making her available weekly
should I have specific questions I need to ask her.
Kai Gutschow - Kai Gutschow is an Associate Professor
in the School of Architecture and is the thesis coordinator. He
is developing the program as it runs, and manages all of the
students’ time and projects.
![Page 9: THE GRID - First Semester Thesis Book](https://reader037.vdocuments.net/reader037/viewer/2022103103/568bdbbf1a28ab2034afaeb8/html5/thumbnails/9.jpg)
8
Project Brief
The architectural render has long been the pinnacle of drawn
design - a constructed image that shows the viewer an idealized
view of an architectural project from a specific location within
the project at a specific time of day. Traditionally, the architect’s
primary tool for image-making was a drafting board. Some time
in the last few decades architects have adopted the computer
to serve the same role yet advance it in many ways, making the
digital render an evolution over what was possible with drafting.
Yet, despite approaching a visual quality near that of human
sight, the digital render failed to harness the full
power of a computer. The digital render took a horse cart and
made it into an automobile but failed to then also make a van, a
truck, or even a race car.
The allure of a digital world has fascinated people ever
since computers were able to create early vector and later raster
graphics. The idea has been explored in such films as Tron (1982)
and The Matrix (1999) and more recently in Avatar (2009), where
over half of the film was photorealistic computer effects, as
well as hundreds of student or collegiate art projects. It has led
to the development of hardware to augment the human frame,
extending what the human mind is limited to by the body. Digitally
Fig. 1.5 The complete toolset in Rhinoceros for animations.
Fig. 1.6 Diagram created by a developer of Brigade 3, a cutting-edge path tracing renderer made by OTOY, the same people behind Octane. It posits that, after a certain amount of geometric detail, ray tracing always beats rasterized meshes.
![Page 10: THE GRID - First Semester Thesis Book](https://reader037.vdocuments.net/reader037/viewer/2022103103/568bdbbf1a28ab2034afaeb8/html5/thumbnails/10.jpg)
9
Fig. 1.7. Three approaches to my thesis. Top to bottom: taking a render, creating many renders from it, then showing them together as an animated sequence under the control of the viewer, faster than just a series of renders; The render and the model are combined into a visual system whereby the user can explore the model in a virtual world, allowing her or him to share the model with anyone; With a real time render the concept of presence comes into play, since a moving realistic image
allows the viewer to inhabit the image.
fabricated films have gradually replaced hand-drawn films and
have even entered the mainstream as a respected category of
film. Architectural designers have tapped this field, but not as
fully as they could have.
Another way the digital world has entered the social
consciousness is through video games. While not all video games
involve a 3D virtual environment, the ones that do often go for a
highly photorealistic portrayal of a digital environment. The tools
video game designers use are often made specifically to quickly
develop virtual environments. Students have often tried to use
such tools in their projects, but although they tended to find
success, architectural firms have rarely followed suit.
It is true that video game designers create objects that are
meant for mass production, and film companies make objects
meant for mass exposure. This kind of thinking dodges the aim
of my thesis though, because I am not proposing architecture
become video game-like or film-like. I am proposing it use the
tools they use to maximize digital communication.
+ + + + + + + + + +
The thesis is a field produced by two axes - the vertical
axis is that of architectural image-making: how designers have
evolved their tools to match current technological advances; the
horizontal axis is that of digital interfaces and interaction: more
![Page 11: THE GRID - First Semester Thesis Book](https://reader037.vdocuments.net/reader037/viewer/2022103103/568bdbbf1a28ab2034afaeb8/html5/thumbnails/11.jpg)
10
and more, society is finding ways to interconnect with itself - such
interaction in architecture, a field entirely involved in the business
of being around others, seems largely absent or unused.
The first axis, visualization:
While many designers in the field have advanced the static
render into something more dynamic, making videos, flythroughs,
or virtual habitats, more often than not these cases were one-
time gimmicks that have not become established as a versatile
aspect of architectural design.
The second axis, interaction:
The concept of digital interaction has often been explored by
artists trying to cope with the digital frontier yet the possibility of
delivering an architectural project with extra-sensory exposure does
not seem to have gained traction among architectural designers,
even though technology exists to allow interaction beyond that
which is seen or heard.
The project, therefore, is to explore and define the extent
of such efforts in both directions, identify what was tried, what
failed, and how those attempts could be improved, identify the
best candidates (by evolving criteria as the project develops)
for a concentrated push into versatility, and produce a working
example of the next evolution of drafting.
Fig. 1.8 Is there a possibility here?
Fig. 1.9 The GRID. Neither interaction nor visualization alone will achieve any greatness; it is through the collaboration of the two axes that a far greater
advancement can evolve.
How can
Model > Render > Edit > Review > Improve >>
Become
Model > Virtual Review > Improve >>
?
![Page 12: THE GRID - First Semester Thesis Book](https://reader037.vdocuments.net/reader037/viewer/2022103103/568bdbbf1a28ab2034afaeb8/html5/thumbnails/12.jpg)
11
The primary deliverable will be a software package which
parallels or replaces the point in design when a designer of
architecture would make a static render and, instead of producing
a mere digital render, would create an interactive simulation
serving as proof of experience much like an architectural model
is a proof of assembly.
A distinction has to be made between a pre-rendered
animation and a real time interactive environment. While pre-
rendered animation is a side-effect of this under-utilized function
of computers, it is absolutely a rut of possibility. It is a linear
evolution of a digital render - why stop there when a render can
evolve planarly?
+ + + + + + + + + +
A breakdown of the thesis into one sentence, three short
sentences, and a short paragraph is a useful tool for understanding
the thesis:
1: To Seek a Means and the Benefits of a System to Interact in
Rendered Real Time With Digital Models.
3: Such a system would provide architects and clients a
preview of the visual and aural aspects of a building in their
entirety before the building is built. Much like how a physical
model is a proof of assembly, this would be a proof of experience.
So what?
Fig. 1.10 An example of a virtual environment that can be explored. It is both dynamic and interactive - it goes beyond what a set of renders could have done and also gives the user something a render could never have - a sense of presence in
the project.
![Page 13: THE GRID - First Semester Thesis Book](https://reader037.vdocuments.net/reader037/viewer/2022103103/568bdbbf1a28ab2034afaeb8/html5/thumbnails/13.jpg)
12
9: Architects traditionally make analog products - visual
stimuli that mimic the rays of light that true sight gives. For
presentations (renders) and data analysis (orthographics), these
products are nearly always static images. Yet, much of architectural
design requires the input of a user’s movement to activate. No
static image will ever describe to the designer the experience
of natural movement within a project. Without an interactive
experience to iterate from, the final, built, experience cannot
be prototyped. Interpreting a static image requires a skill called
mental rotation that is learned through studies of descriptive
geometry, long exposure to architectural orthographics, and
CAD. Mental rotation is a skill not every client has and not every
architect develops fully. Without this skill static images become
severely lacking because too much of the design process relies on
interpreting these images with the aim of improving the design.
Opportunities exist to replace or complement static images with
real time renders that closely resemble the built design both
experientially and conceptually, which would allow a more in-
depth design pipeline.
![Page 14: THE GRID - First Semester Thesis Book](https://reader037.vdocuments.net/reader037/viewer/2022103103/568bdbbf1a28ab2034afaeb8/html5/thumbnails/14.jpg)
13
Research - the first step of the thesis is to generate a
foundation of knowledge in the field of visualization and
architectural visualization in particular. The thesis combines
several schools of thought - Representation, Automation
through Technology, Simulation, Video Gaming, Interfaces,
and, naturally, Architecture. Each field would contain several
informative areas: History, Technology, Application or Practice.
These areas would inform what is available in the field as well as
dictate possible constraints. For a broad spectrum I would expect
at least six established literary sources and six other collateral
sources (videos, talks, examples of work).
Definition - in the meantime, I would continue to refine the
grounds of my thesis - the product, the deliverable, is a tool. The
means is often more important than the end because the means
is inherently repeatable. The research would mold the form and
function of the thesis and its ultimate deliverable, a visualization
tool.
Project Methods and Timeline
[Timeline chart for September-October: bars marked D (Definition), R (Research), and E (Experimentation and Evaluation).]
Sep. 3 - Version 2 of Thesis
Sep. 9 - Version 3 of Thesis, focus on methods
Sep. 16 - Version 4 of Thesis, expand on all sections
Sep. 18 - Version 5 of Thesis, presented as a poster
Oct. 4 - List of deliverables
Oct. 18 - Midsemester break
Oct. 21 - Version 6, review
![Page 15: THE GRID - First Semester Thesis Book](https://reader037.vdocuments.net/reader037/viewer/2022103103/568bdbbf1a28ab2034afaeb8/html5/thumbnails/15.jpg)
14
Experimentation and Evaluation - the second step is
an exhaustive analysis of existing visualization software (or
hardware, if it is available through CMU) for the purpose of
design (NOT making a final product but as another step, or a
better step, in an iterative process). This would involve its own
research on what tools architecture firms have used in the past
(and documented) for time-based deliverables, and a subjective
evaluation of them based on those deliverables. Following research
on what tools practicing architects use, I would perform research
on tools students have used, what artists of various caliber have
used, and video game engines. While the time each visualization
tool takes to render (from hours per frame to frames per second)
is crucial, I will also look for other design features, keeping the
root of my thesis in mind - the possibility for the digital real time.
Theoretically this research will come across examples of work,
but the focus would be on how those were made, not what they
are.
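Since the render times above range from hours per frame down to frames per second, it helps to put both regimes in a common unit. A minimal sketch (my own illustration, not part of the thesis) in Python:

```python
def ms_per_frame(fps):
    """Frame budget in milliseconds at a given real time frame rate."""
    return 1000.0 / fps

def offline_frame_ms(hours_per_frame):
    """Cost in milliseconds of one offline-rendered frame."""
    return hours_per_frame * 3600.0 * 1000.0

# A real time engine at 30 fps must finish each image in roughly 33 ms,
# while an offline render at 2 hours per frame spends 7,200,000 ms on
# one image - a gap of over five orders of magnitude.
print(ms_per_frame(30))
print(offline_frame_ms(2))
```

This gap is why a real time tool cannot simply reuse an offline renderer's techniques; it must trade some fidelity for interactivity.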
[Timeline chart for October-December: bars marked D (Definition), R (Research), and E (Experimentation and Evaluation).]
Nov. 28 - Thanksgiving
Dec. 8 - Review of thesis development
Dec. 13 - Submittal of thesis book
Dec. 16 - Last day of first semester
![Page 16: THE GRID - First Semester Thesis Book](https://reader037.vdocuments.net/reader037/viewer/2022103103/568bdbbf1a28ab2034afaeb8/html5/thumbnails/16.jpg)
15
Compilation - the two threads of research will combine. At
this point I would have a steel-hard definition of my thesis. There
may be at least two deliverables, one for each body of research.
The literary deliverable will be an opinion piece drawing from
all the sources I compiled that projects the possibility (that I
believe is the case) of what architects could embrace in the
field of visualization given the power of computers and what
effect it would have on current design paradigms. This opinion
piece should predict the possibility of the second deliverable.
The software deliverable will be a proof of concept or a
redistributable software package (depending on whether the software
I end up choosing is licensed for educational use or distribution).
This software package would support the opinion in the first
deliverable, ultimately proving architects can evolve the render
into something that interacts on a level above the visual or tactile.
The software package will address the range of interactivity
that is missing in architectural delivery. Depending on what
software I use, there will be a way for both the client
and the designer to enrich their communication. The software
package will be, necessarily, an all digital item, as having a video
or a screenshot of it defeats the point of interaction.
[Timeline chart for January-March: bars marked E (Experimentation and Evaluation), D (Definition), and C (Compilation).]
Jan. 13 - First day of second semester
Jan. 20 - MLK Day, no classes
Mar. 5 - Midsemester thesis review
Mar. 7 - Spring Break starts
![Page 17: THE GRID - First Semester Thesis Book](https://reader037.vdocuments.net/reader037/viewer/2022103103/568bdbbf1a28ab2034afaeb8/html5/thumbnails/17.jpg)
16
Beyond - if there is yet more time I may develop more
deliverables to parallel the two main deliverables in the
Compilation step. One would be documentation on the use
of the software package and tool. A certain number of basic
tutorials smoothing the learning curve would already be part
of the software deliverable, but, like any software, much of the
tool would be difficult to approach for a new user. If there is time
I could develop detailed explanations of various functions within
the software package. Importantly, this would heavily depend on
the nature of the software package. If it is a video game engine
editor then it may grow to have dozens of tutorials. If it is a small
utility (perhaps an architectural firm has developed one), then
there may only be a small handful.
Mar. 17 - Spring Break ends
[Timeline chart for March-May: bars marked C (Compilation) and B (Beyond).]
Apr. 10 - No classes for Carnival
Apr. 13 - Carnival ends
May. 2 - Last day of classes
May. 5 - Thesis due
![Page 18: THE GRID - First Semester Thesis Book](https://reader037.vdocuments.net/reader037/viewer/2022103103/568bdbbf1a28ab2034afaeb8/html5/thumbnails/18.jpg)
17
Precedents are difficult to find because the bulk of
professional architectural animation focuses on pre-rendered
scenes. The videos that are produced by companies that focus
on this kind of animation are often flythroughs or disembodied
gliding camera views moving through completed designs, either
as part of a submission to a competition or after the design was
built.
The short Wikipedia page on architectural animation
mentions how difficult it is to render animations and how firms
rarely have access to the hardware or tools to assemble such
products. However it also mentions that, more and more, firms
are recognizing that animations are better at conveying the ideas
of a project than design diagrams. Otherwise, there seems to be
little effort anywhere to document the most effective animations
or even any attempts at real time interaction with animation.
Two companies exist that have begun using game-like
software to create virtual versions of architectural projects.
Both focus on Unity3D and create services ranging from training
simulations to marketing packages. Both companies have
harnessed Unity3D’s ability to work cross-platform as well as its
ability to efficiently handle a complex scene with pre-computed
Research
Precedents
Fig. 2.1 Arch Virtual’s web version of one of their projects.
![Page 19: THE GRID - First Semester Thesis Book](https://reader037.vdocuments.net/reader037/viewer/2022103103/568bdbbf1a28ab2034afaeb8/html5/thumbnails/19.jpg)
18
Fig. 2.4 A deliverable from Real Visual on a mobile platform.
Fig. 2.5 Arch Virtual’s Unity3D booklet and their application of the Oculus Rift virtual reality headset.
shadows and materials.
The first company, Real Visual, focuses on a high quality
of delivery in simulations, training, marketing, and outsourced
design work. They cover work in various multi-national sectors
aside from architecture - energy, transport, and defense. This
displays flexibility and expandability, and shows how such
technology and its application are quickly burgeoning in the
wider world. They work closely with the developers of Unity3D
to ensure the software is as cutting edge as possible. If architects
could learn from the technical expertise of this company then the
field would only be enriched.
The second company, Arch Virtual, focuses more on cutting
edge hardware and integrating it with Unity3D. They have
worked with the Oculus Rift, a virtual reality headset currently
in development, bringing in projects developed in Unity3D, which
are also configured to work on mobile platforms like those of
Real Visual, and setting them up to work with the headset. They
also have an ebooklet detailing the steps required to create an
architectural project within Unity3D. This booklet is a step in the
right direction for the profession, but it is far from enough, as
at 65 pages it offers only a set of guidelines rather than thorough
educational materials.
Autodesk also has software designed for the purpose of
accelerating architectural animation. I mention this not because
Fig. 2.2 Real Visual’s logo.
Fig. 2.3 Arch Virtual’s logo.
![Page 20: THE GRID - First Semester Thesis Book](https://reader037.vdocuments.net/reader037/viewer/2022103103/568bdbbf1a28ab2034afaeb8/html5/thumbnails/20.jpg)
19
it is an effective precedent for my thesis but because it is exactly
the wrong approach - it does not use a human viewpoint, it
does not offer a high level of realism in its graphics, and it favors
presentation over interaction.
This software, Autodesk Showcase, takes models and allows
the user to dress them up, applying materials and environments
to the scene. It offers various alternate rendering types, like
cartoon or sketched, as well as options for sets of materials to be
shown by themselves. The workflow is one of setting up renders
or animations with a preview viewport and then rendering them,
akin to what a full-screen V-Ray would look like.
The biggest drawback I perceive in this software is that,
despite its effort to offer architects a more intuitive rendering
solution, it fails to advance the field. It is an example of
stagnation: nothing in it is radically new over what is already
possible in AutoCAD, Maya, 3DSMax, or Rhinoceros with Vray. It
is a horizontal advancement and fails to use advanced rendering
methods, new interaction methods, or take advantage of newer
hardware.
+ + + + + + + + + +
It is also important to note video game graphics precedents.
There is a stigma within the commercial culture today that
video games and their technology are beneath professionals
Fig. 2.6 Autodesk Showcase screenshots. Clockwise from top left: Regular preview view; Cartoon preview; Different material sets; Publishing, or rendering, an image.
Fig. 2.7 Autodesk Showcase logo.
![Page 21: THE GRID - First Semester Thesis Book](https://reader037.vdocuments.net/reader037/viewer/2022103103/568bdbbf1a28ab2034afaeb8/html5/thumbnails/21.jpg)
20
Fig. 2.8 QR code for a demo video of
CryEngine.
Fig. 2.9 QR code for a demo video of the
Fox Engine.
and their interests. While it is true that the gameplay aspects
of video games have little bearing in most professional fields,
the technology and simulation aspects behind video games are,
by now, entirely applicable in other fields. (There are also such
things as GWAPs, games with a purpose, video games designed
specifically to be training materials and high-fidelity simulations
of real-world scenarios).
For the purposes of my thesis I will argue that the graphics
advancements of video games have, over the past several years,
reached such high levels of realism, among the video games that
use cutting edge engines, that they contend with professional
rendering software in terms of speed, quality, and production
value.
Modern video game engines generate lighting and shadows
dynamically, meaning there is no pre-computation except that
which is necessary to place the geometry into the scene. For
materiality many games still use shaders, simplifying computation
and sacrificing some real time effects, but some engines have
begun developing real time shader effects, namely refraction and
reflection.
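To make concrete what generating lighting dynamically means, the core of real time diffuse lighting is a per-frame Lambertian term evaluated for every visible surface point. A minimal sketch (my own illustration, not from the thesis) in Python:

```python
import math

def normalize(v):
    """Scale a vector to unit length."""
    mag = math.sqrt(sum(c * c for c in v))
    return tuple(c / mag for c in v)

def lambert_shade(normal, light_dir, base_color, light_intensity=1.0):
    """Diffuse (Lambertian) shading: brightness is the cosine of the
    angle between the surface normal and the light direction, clamped
    at zero for surfaces facing away from the light."""
    n = normalize(normal)
    l = normalize(light_dir)
    cos_theta = max(0.0, sum(a * b for a, b in zip(n, l)))
    return tuple(c * light_intensity * cos_theta for c in base_color)

# A surface facing the light is fully lit...
print(lambert_shade((0, 1, 0), (0, 1, 0), (0.8, 0.8, 0.8)))  # (0.8, 0.8, 0.8)
# ...while one at 90 degrees receives no direct light.
print(lambert_shade((1, 0, 0), (0, 1, 0), (0.8, 0.8, 0.8)))  # (0.0, 0.0, 0.0)
```

Because this term depends only on the current positions of light and geometry, an engine can re-evaluate it every frame as the viewer moves, with no baking step.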
As for simulation, all video games with a first person
perspective already have immersive interaction and exploration,
key features for visualization that are lacking in professional
software packages.
Fig. 2.10 Super Mario 64, not an example of a contemporary video game.
Fig. 2.11 Crysis 2, an example of a contemporary video game using realistic graphics.
![Page 22: THE GRID - First Semester Thesis Book](https://reader037.vdocuments.net/reader037/viewer/2022103103/568bdbbf1a28ab2034afaeb8/html5/thumbnails/22.jpg)
21
Fig. 2.12 Mind Map as of October 21st, 2013. My thesis subject area is in the top left corner.
[Mind map labels visible in the figure: Mental rotation; Collaboration; Specialized keys.]
Fig. 2.13 QR code for the mind map.
The literature took up the bulk of the work during the first
half of the first semester after the thesis program got started.
The literature review pulled from over 30 sources, more than a
third of which provided valuable insight into the context of my thesis.
This proved to peers that this is an academic subject and bears
worth in the field of architecture. The very shadowy nature of the
subject of my thesis is exactly why I am proposing my thesis - to
raise awareness of what can be done with modern tools.
I also created a mind map, open to my advisors to flesh out,
as I continue to insert data siblings and children. The mind map
charts everything in the field of computing that could relate to
my thesis - it is an attempt to contextualize my work, to bring it
from computer science to a position that is understandable by
architects. The semantics of my thesis automatically raise various
stigmas in readers or reviewers, so having a way to visually place
my thesis among other academic subjects is important.
Ideally, any interaction with computers that architects could
have should have a spot on this mind map and right now my thesis
only occupies a small portion of it. But one of the points of my
thesis is that this should not be so. Interactive visualization can be
a powerful ally in developing a design, and by expanding that field
Literary Research
![Page 23: THE GRID - First Semester Thesis Book](https://reader037.vdocuments.net/reader037/viewer/2022103103/568bdbbf1a28ab2034afaeb8/html5/thumbnails/23.jpg)
2 2
architects could learn more powerful, more flexible tools.
Following is literature research, a review of several sources
that bring up important points for visuality and rendering as they
apply to architecture:
Visual Digital Culture: Surface Play
and Spectacle in New Media Genres
This book by Andrew Darley explored visuality and spectacle
in digital media. I drew parallels between it and architecture in how
early digital modeling was show-driven - digital rendering is
often about what a project could be like and not what it is. The
greater definition of ‘model’ - to simulate - comes into context,
showing how lacking static renders are. It also showed how video
game software could be photorealistic, an important point that
I must continually clarify. The themes of illusion and wanting to
be fooled, repetition and customization, the sense of occupancy,
and a comparison between video games and virtual environments
round out the content of the book.
Relevant quotes in textual order:
“A key example of such research was that into real-time interactive computer graphics. This came to practical fruition in 1963 in a system called Sketchpad, which allowed a user to draw directly on to a cathode display screen with a ‘light-pen’ and then to modify or ‘tidy-up’ the geometrical image possibilities so obtained with a keyboard. Though extremely primitive by today’s standards, Sketchpad is viewed as a crucial breakthrough from which have sprung most of the later technical developments in the areas of so-called ‘paint’ and interactive graphics systems. By the mid-1960s, a similar system involving computer image modification was being used in the design of car bodies - a precursor of current CAD/CAM (Computer Aided Design/Computer Aided Manufacture) systems. And by 1963, computer generated wire-frame animation films - visual simulations of scientific and technical ideas - were being produced using the early vector display technique.” - pg. 12
This is significant as a historical precedent on the type of
interactive software that my thesis belongs to. Sketchpad, Ivan
Sutherland’s own thesis, was the grandfather of CAD modeling.
While it crucially combined hardware and software, within the
realm of modern software and interface systems my thesis does
not have to have the same intertwined nature. Ideally my thesis
should be able to do everything with a keyboard and mouse,
however exploration into alternative hardware input is possible.
The point is to separate the relatively tangential development of
software like Revit and AutoCAD from this original thread.
“The desire on the part of scientists to model or simulate physical processes and events in space (and time) was a central impulse in the production of the earliest computer graphics and films. Whilst concurrent with the initiation of applied forms, work was underway on computer produced figurative imagery as a research activity in its own right. Even the work conducted in collaboration with artists had a decided leaning towards more figurative kinds of imagery. At the end of the 1960s experimentation began into the production of algorithms for the production and manipulation of still, line-based figurative images.” - pg. 14
The notion that early computer graphics were, in a way,
show-driven, relates well to how architects do things with
technology. Architects often use computers and rendering to
show what a project could be like, as opposed to showing what
it actually is. The original scientific drive to model, however,
encompasses more than just showing the project itself, but also
showing what the project could do. Here the greater definition
Fig. 2.14 Visual Digital Culture: Surface Play and Spectacle in New Media Genres cover.
![Page 24: THE GRID - First Semester Thesis Book](https://reader037.vdocuments.net/reader037/viewer/2022103103/568bdbbf1a28ab2034afaeb8/html5/thumbnails/24.jpg)
23
of ‘model’ applies, in that ‘to model’ means ‘to simulate’, where
various possibilities enter the game and a static representation
becomes lacking.
“The one that came to discursive prominence within computer image research and practice is perhaps the one with which we are all most familiar. Quite simply it turns upon the notion of the proximate or accurate image: the ‘realisticness’ or resemblance of an image to the phenomenal everyday world that we perceive and experience (partially) through sight. For the majority of those involved with digital imaging at the time, the yardstick of such verisimilitude was photographic and cinematographic imagery.” - pg. 17
This is another thing to keep in mind: while my thesis may
include video game software, an important benchmark is that I
do not sacrifice photorealism. I am mentioning this because one
aspect of my thesis is that it takes several steps forward, and very
few, if any, back.
“In this case, of course, the set is virtual or latent - itself a simulation created and existing in the program of a computer. Such programs are now able to simulate three dimensional spatial and temporal conditions, natural and artificial lighting conditions and effects, surface textures, the full spectrum of colours, solidity and weight, the movement of objects and, as well, the complete range of movements of a camera within and around their virtual space. When cartoon characters - and, just as important, cartoon tropes such as anthropomorphism - are imaged through this studio simulacrum, then new registers of mimetic imagery are achieved within the cartoon: a consequence of this peculiar crossing or fusing of traditionally distinct forms of film.” - pg. 85
A parallel discipline to my thesis is digital film animation.
With digital film animation, the software technology is, by
necessity, highly configurable and allows total control of a
virtual scene. While such control is not applicable to architectural
design, because the digital in architecture is merely a step in the
development, seeing what is possible in the field will allow me to
find an upper bound in software capabilities.
“A technical problem - the concrete possibility of achieving ‘photography’ by digital means - begins to take over, and to determine the aesthetics of certain modes of contemporary visual culture. Attempts - such as those focused upon here - to imitate and simulate, are at the farthest remove from traditional notions of representation. They displace and demote questions of reference and meaning (or signification) substituting instead a preoccupation with means and the image (the signifier itself) as a site or object of fascination: a kind of collapsing of aesthetic concerns into the search for a solution to a technical problem.” - pg. 88
This is the other side of the problem. Attempting to focus
too much on the signifier at the expense of the signified may break
the relation of the image to the model or what it is modeling. The
effort to produce a visually realistic image moves too far from the
ideal that the task of creating the image in the first place started
off from - in visual representation that ideal is to show truthfully
what the virtual environment looks like, and in architecture and
my thesis that ideal is to show a model experientially - through
space and time.
“This involves surface or descriptive accuracy: naturalism. At the same time as distinguishing itself as other (alien) in relation to the human characters and the fictional world, the pseudopod must appear as indistinguishable at the level of representation, that is to say in its representational effect. It had to appear to occupy - to be ontologically coextensive with - the same profilmic space as the human actors. This involved the seamless combining of two differently realised sets of realistic imagery: of which one is properly analogical, i.e. photographic, the other seemingly photographic, i.e. digital simulation. Additionally however, it must also integrate, again in a perfectly seamless manner, into the diegetic dimension: the story space. In order for this to occur an exceptional amount of pre-planning had to enter into the carefully orchestrated decoupage that eventually stitches the shots together. Here, finally, surface accuracy is subordinated to the rather different codes of narrative illusionism.” - pg. 108
Here the author was analyzing a scene from the film The
![Page 25: THE GRID - First Semester Thesis Book](https://reader037.vdocuments.net/reader037/viewer/2022103103/568bdbbf1a28ab2034afaeb8/html5/thumbnails/25.jpg)
24
Abyss (1989) where a computer generated tentacle is made to
coexist within the filmic space with the real characters and setting,
and also within the presentational space, where the story as shot
has to make room for this element, which will be added later in
the production of the film. The importance of this is again that
the purpose of a render, or real time interaction, is not the pretty
image itself but what the image does, its performative element.
The quality and the believability of the frame in a film example
has to kneel to the frame as a narrative element - this tentacle
in The Abyss has to make sense as a tentacle first, the image of a
tentacle later. Likewise in architectural representation, an image
of a project has to come after what the image will do, which is a
proof of experience.
“The contradiction - ever present in special effects - between knowing that one is being tricked and still submitting to the illusory effect is operative here. Yet, particularly (though certainly not solely) in those scenes involving computer imaging discussed here, the more photographically perfect or convincing the images, the more - paradoxically - does their sutured and suturing aspect seem to recede and their fabricated character come to the fore.” - pg. 113
This pertains to the effect of illusion and wanting to be
fooled. Sometimes a fabricated image, a computer generated
mosaic, becomes too artificial. This is important to note because
it is possible that so much effort can be spent on making an
architectural image perfect photographically that its photorealism
eclipses its narrative - its experiential conduit. Just like there are
technological functionality bounds - software exists that can do
many, perhaps too many, things in a virtual environment - there
are aesthetic bounds - software cannot be so focused on being
realistic that the realism gets in the way of the representation.
“It is both the bizarre and impossible nature of that which is represented and its thoroughly analogical character (simulation of the photographic), that fascinates, produces in the viewer a ‘double-take’ and makes him or her want to see it again, both to wonder at its portrayal and to wonder about ‘just how it was done’.” - pg. 115
This, on the other hand, produces a lower bound on the
aesthetics of the image. It is likewise cautionary to make an image
too experiential, too generative of wonder. The combination of
seemingly impossible imagery rendered (by computer) with
accurate realism, so to say, produces a kind of inquisitiveness
that places the generation of the image itself before what the
image represents. The way the image was made becomes more
interesting than what the image is about.
“Thus the fact that we can make many identical copies (prints) of a particular film, means not only that more people get to see it but also that as a work it is thereby made less precious.” - pg. 125
This passage refers to Walter Benjamin’s theories on
mechanical reproduction. It is always a good idea to keep in
mind the fact that quantity, even if it maintains quality, does
not necessarily increase the popularity of a work. Since a part of
my thesis is to explore if architectural simulations can become
portable, it will be important to see what effects such mobile
qualities have on architectural design.
“today it is not what is repeated between given tokens of a series that counts for spectators, so much as the increasingly minimal differences in the way this is achieved. Burgeoning ‘replication’, the repetition at the heart of commodity culture, forestalls the threat of saturation and exhaustion by nurturing a homeopathic-like principle of formal variation (i.e. based on infinitesimal modifications and changes).” - pg. 127
![Page 26: THE GRID - First Semester Thesis Book](https://reader037.vdocuments.net/reader037/viewer/2022103103/568bdbbf1a28ab2034afaeb8/html5/thumbnails/26.jpg)
25
The issue of repetition versus customization further explores
what architectural representation could become in a mass mobile
environment. This particular passage refers to the phenomenon
of television shows, comic strips, and serial novels where only
small changes are made between versions, only enough so that
a new installment is different from the last. Theoretically the
proliferation of architectural representation into the mainstream
could go this way - an architectural firm produces an interactive
architectural simulation, or a few, and a client modifies it only
slightly. Perhaps that is an undesirable future.
“Even fields such as computer games and simulation rides, which are the most recent and appear to depend more on the novelty of the technology itself, are - as we shall see in coming pages - just as much subject to this aesthetic of repetition. They may involve new formal elements - the much vaunted ‘interactivity’ and ‘immersion’, for example - and these may well affect their individual aesthetics. However, just as much as the more established forms, they also seem destined to operate within the logic of self-referentiality and the preponderance of the ‘depthless image’. All are manifestations of an altogether new dimension of formal concerns that established itself within the mass cultural domain of the late twentieth century, helping to constitute both cultural forms and practices of production and aesthetic sensibilities.” - pg. 129
Here the author combined the two threads of thought -
repetition of the image in culture and a focus on the image itself
over the substance of the image. The idea here is that as an image
spreads it does not necessarily mean that people see it more, or
see through it more. The proliferation of an image may shift the
audience’s concern towards the formal quality of the image, put
another way, more people see less. Being able to have a large
audience for an image may be a large factor - in an architectural
firm and with a client only a small number of people see the image
and can control it - once such limitations are lifted, if they can be
lifted, the image may be diluted even if it gains other properties,
like interactivity.
“Living in cultures in which we are surrounded on all sides by moving images, we are now particularly accustomed to the kind of montage that strives to hide its artifice.” - pg. 131
Architecture is, independent of what some architects think,
part of the global digital stage and as such has to compete with
other visual fields. The more graphically advanced the rest of our
culture becomes, the more certain qualities will be expected of
the visual elements of architecture. This means that fleshing out
this aspect of architecture, or at least exploring it in my thesis, will
also require me to know what is expected of real time interaction
as well as what it can do.
“The sheer sense of presence, however, conveyed in the best of them - and here Quake is a key example - compensates for such defeats. In other words, it is the experience of vicarious kinaesthesia itself that counts here: the impression of controlling events that are taking place in the present.” - pg. 157
Here the author brings in the experience of video games,
saying how, in the interaction with the game, the fact that the
player may sometimes need to repeat areas in a video game is
overshadowed by the fundamental fact that the player is actually
controlling something in the virtual realm. This is an aspect of real
time interactive simulations that needs to be put in the forefront
because it simply does not exist in renders or even CAD programs.
![Page 27: THE GRID - First Semester Thesis Book](https://reader037.vdocuments.net/reader037/viewer/2022103103/568bdbbf1a28ab2034afaeb8/html5/thumbnails/27.jpg)
26
There is no sense of time in Revit or Sketchup, and watching an
animation gives the user no control. While substance is key in the
image, presence is important outside it.
“interactive representation involves a mode of representing that is ‘inside the time of the situation being described’. That is to say, time is represented as viewed from a first person perspective - literally as if one were really there, thereby, producing the impression that things are continually open to any possibility... Indeed, it becomes difficult to untangle space from time in this respect so intimate is their relation. We might say that the illusion of experiencing events as if they are taking place in present time in computer games is largely dependent upon visual simulation.” - pg. 158
Here the author points out that the mere introduction of
time to a virtual environment already creates the impression of
interaction by the simple virtue of providing limitless possibilities
on ‘what could happen next.’ In video games, the visual alone
can do this. Likewise in my thesis, establishing this effect by
the photorealistic representation of architectural models could
already be a huge step towards interaction.
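The “present time” effect the author describes is, mechanically, the product of a continuous update-and-render loop: the world state advances in small time steps whether or not the viewer acts, so anything could happen next. A minimal sketch of such a fixed-timestep real-time loop (function names are illustrative, not tied to any real engine):

```python
import time

def run_realtime_loop(update, render, duration_s=0.1, dt=1.0 / 60.0):
    """Advance a simulation in fixed steps while rendering each frame.

    `update(dt)` advances world state by dt seconds of simulated time;
    `render()` draws the current frame. Returns the frame count.
    """
    accumulator = 0.0
    previous = time.perf_counter()
    start = previous
    frames = 0
    while time.perf_counter() - start < duration_s:
        now = time.perf_counter()
        accumulator += now - previous
        previous = now
        # Consume elapsed real time in fixed simulation steps,
        # so the virtual world keeps its own steady clock.
        while accumulator >= dt:
            update(dt)
            accumulator -= dt
        render()
        frames += 1
    return frames
```

The fixed step keeps simulated time decoupled from rendering speed, which is what gives the viewer the impression of a world that runs on its own.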
“given the increasing surface realism of the moving imagery, the sophistication of real-time graphic representation and the use of first-person perspective, the impression of actual occupancy and agency within the space of the game’s fictional world can be extremely convincing.” - pg. 163
Another aspect of video games that can be transferred to
interactive architectural simulation is the sense of occupancy.
Through a combination of realistic imagery, realistic depth
(material effects and believability of presence), and a simulation
of what it would be like as if one was there, occupancy can be
achieved. Since occupancy is a major aspect of experience, such
a conceptual framework is important for the field of my thesis.
“However, such ‘active participation’ should not be confused with increased semantic engagement. On the contrary, the kinds of mental processes that games solicit are largely instrumental and/or reactive in character. As I suggest above, the space for reading or meaning-making in the traditional sense is radically reduced in computer games and simulation rides.” - pg. 164
Here the author steps back and concedes that the actual
interaction with a video game is not the same thing as interaction
with the virtual environment. The user is still fundamentally
looking at an image. This is also very important to keep in mind
because my thesis does not seek to redefine how architecture
is made - it seeks to augment or improve only the computer
representation aspect of architecture.
Generating Three-dimensional Building Models
From Two-dimensional Architectural Plans
The only relevant quote:
“The building model used to develop and demonstrate the system was produced by iteratively applying “clean-up” algorithms and user interaction to convert a grossly inadequate 3D AutoCAD wire-frame model of Soda Hall (then in the design stages) into a complete polyhedral model with correct face intersections and orientations. The Berkeley UniGrafix format was used to describe the geometry of the building, because of its compatibility with the modeling and rendering tools available within the group. The interior of the building, including furniture and light fixtures, was modeled by hand, through instancing of 3D models of those objects. In all, the creation of the detailed Soda Hall model required two person-years of effort. It became clear that better modeling systems were needed.” - pg. 3
While the research report, by Rick Lewis, was written in
1996, before significant advances in CAD had taken root among
the designing audience, the general gist of what this quote refers
Fig. 2.15 Generating Three-dimensional Building Models From Two-dimensional Architectural Plans
cover.
![Page 28: THE GRID - First Semester Thesis Book](https://reader037.vdocuments.net/reader037/viewer/2022103103/568bdbbf1a28ab2034afaeb8/html5/thumbnails/28.jpg)
27
to remains true today. For my thesis, the argument pertains more
to having to customize every render for a (presumably) flawless
end result. The notion that accurately modeling an entire building
in a computer is manually labor intensive is still true - partly
because many designs are so unique that there are no tools
for efficiently spreading geometric complexity within a model
without resorting to grids or simple patterns. With rendering and
interaction, the manual difficulty lies in preparing a render scene
and then setting lighting and material properties, all of which take
a large percentage of the total time it takes to develop a render.
Perhaps there is a way to develop a pipeline where materials and
lighting can be established more easily, without treating them as
necessary preparation for each render scene.
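One way to read this is as an argument for a project-wide material library that is defined once and reused across render scenes, instead of being rebuilt for each one. A rough sketch of the idea (all names and properties are hypothetical, not tied to any real renderer):

```python
# A shared library of material definitions, established once per
# project rather than re-created for every render scene.
MATERIAL_LIBRARY = {
    "concrete": {"diffuse": (0.6, 0.6, 0.6), "roughness": 0.9},
    "glass":    {"diffuse": (0.9, 0.9, 1.0), "roughness": 0.05},
}

def apply_materials(scene_objects, assignments, library=MATERIAL_LIBRARY):
    """Attach library materials to scene objects by name.

    `scene_objects` maps object names to property dicts; `assignments`
    maps object names to material names defined in the library.
    """
    for obj_name, mat_name in assignments.items():
        # Copy the definition so per-scene tweaks don't mutate the library.
        scene_objects[obj_name]["material"] = dict(library[mat_name])
    return scene_objects
```

The point of the sketch is the separation of concerns: material and lighting decisions live in one place, and each new scene only states which object gets which material.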
Visuality for Architects: Architectural Creativity and
Modern Theories of Perception and Imagination
This book by Branko Mitrović introduced an idea to my thesis:
mental rotation, the ability to rotate a 2D representation in the
mind. It criticized architects for blindly relying on narrative as the
prime way of communicating projects and designs, and proposed
that architecture evolve into a visual profession. Generally, it
noted a tendency among architects to avoid or ignore architecture’s
purely visual aspects. The idea of ideological bias versus the
opportunity to see architecture visually is critical to expanding
the use of interactive media in architecture, yet architects first
need to open their minds to the notion that architecture is not
narrative by default.
Relevant quotes in textual order:
“What psychologists describe as mental rotation is the same kind of task that is performed by computers in modern architectural practice.” - pg. 6
This book argued that what CAD does is not fundamentally
different from what a human brain does when it views a plan
or a perspectival image - though the separation of conceptual
thinking from visual thinking becomes easier in a computer.
Thus relying on creating static images just so the brain can be
forced to hold visual and conceptual thinking near each other,
forcing connections, is a fairly outdated concept - the process
can be separated: CAD can give the full visual stimulus that real
experience provides with a real building, and the brain can be fully
used for conceptual thinking.
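As a concrete illustration of the computation the book compares to mental rotation: orbiting a model on screen amounts to multiplying every vertex by the same rotation matrix. A toy sketch of one such rotation, about the vertical axis (not any CAD package's actual API):

```python
import math

def rotate_about_z(point, angle_rad):
    """Rotate a 3D point about the z (vertical) axis.

    This is the core arithmetic behind orbiting a CAD model:
    every vertex is transformed by the same 2D rotation in the
    x-y plane while its height is unchanged.
    """
    x, y, z = point
    c, s = math.cos(angle_rad), math.sin(angle_rad)
    return (c * x - s * y, s * x + c * y, z)
```

What a viewer does laboriously in the mind's eye, the machine does exactly and instantly for every vertex, freeing the viewer for conceptual judgment.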
“The same tendency to base design on stories that can be told about architectural works is common in contemporary architectural practice as well. Here it is strengthened by the fact that in order to get commissions, architects often have to explain in words their design decisions to their clients. Sometimes they (are expected to) invent stories about what the building represents.” - pg. 11
Another key theme the book brought up was the stubborn
reliance of contemporary architects on narrative - describing a
building’s ‘concept’ in words, or believing that words are the only
way to do so. Why rely on speaking about an almost inherently
visual idea (granted, tactility and sound matter) when you can
communicate it visually?
Fig. 2.16 Visuality for Architects: Architectural Creativity and Modern Theories of Perception and
Imagination cover.
![Page 29: THE GRID - First Semester Thesis Book](https://reader037.vdocuments.net/reader037/viewer/2022103103/568bdbbf1a28ab2034afaeb8/html5/thumbnails/29.jpg)
28
“In fact, much bigger issues are at stake. Architecture does not live in isolation from its intellectual and cultural environment. If antivisual biases are going to be credible among architects, architectural academics, or theorists, this can happen only if such views are based on and derive from assumptions that are credible in the society in which they live.” - pg. 13
Socially, one can argue that the visual has grown faster
and faster in developed society. Take the internet - experienced
almost exclusively visually: computer screens, smartphones,
tablets, even printouts of web content are visual objects. Film,
video games, advertising - it is all visual. Perhaps even literature
is falling behind financially against visual storytelling through film,
TV, Netflix, and so on. Therefore architecture must develop,
somehow paradoxically, into a visual profession. That is nearly at
the core of my thesis.
“Applied to architecture, this means that there are no visual properties of architectural works that are not ultimately derived from the ideas we associate with these works. Visual perception of buildings is merely a result of the knowledge and beliefs we already have about them.” - pg. 14
A bit of theory here. The more the brain is forced to draw
from its reservoir of constructible memories, when exposed to
a single image of a piece of architecture, the more the brain will
generalize to the archetype. The brain, when it has to make up
information, will just use what it already knows. Thus it is in fact
detrimental to the review or design of architecture if people view
it in a reduced manner, that is, in a manner far from the actual
experience of architecture. I propose that a greater reliance
on interactive visualizations, being that those are closer to said
experience, would promote a truer review of architecture.
“If we are going to talk about the aesthetic qualities of architectural works, we need to be aware that these works are going to be thought about not only as perceived from a single point in space but as three-dimensional objects. We perceive a building from one side, from another, from inside, we observe the composition of spaces, and after some time we have formed a comprehensive understanding of the building’s three-dimensionality. Or, we don’t have to be dealing with a built building at all; we can grasp its spatial qualities by studying its plans, sections, and elevations. By analogy with 3-D computer modeling, one could say that we have formulated a 3-D mental model of the building in our minds” - pg. 71-72
Again with mental rotation. Much of architectural experience
revolves around understanding the visual composition and
relationships of a design or building. This is possible from a human
vantage point with a built building, but with design products, the
observer has to effectively rebuild the model inside their mind.
It would only accelerate the understanding if the observer could
interpret something only a step away from actual experience, an
interactive render.
“In a situation where it is recognized that architectural works can be perceived, imagined, thought about, mentally rotated, and that their geometries can be studied, their colors discussed, and so on, independently of any concepts or meanings we associate with these works, only an ideologically biased professor can insist on evaluating the work exclusively on the basis of the story that can be told about it.” - pg. 85
This pertains to the general issue where architects are not
grasping the full breadth of the tools that are available to them.
The somewhat hesitant tendency of architectural reviews to
generalize renders to drawings, paired with a reliance on printed
material, is stifling architectural design flexibility. Thus, in an effort
to justify their views (ironically), review boards pretend that they are
in fact not interested in the visual and are looking for (inescapable
irony) a more narrative description of the project.
![Page 30: THE GRID - First Semester Thesis Book](https://reader037.vdocuments.net/reader037/viewer/2022103103/568bdbbf1a28ab2034afaeb8/html5/thumbnails/30.jpg)
29
The idea of ideological bias versus the opportunity to see architecture
visually is critical to expanding the use of interactive media in
architecture.
One Approach for Creation of Images and Video
for a Multiview Autostereoscopic 3D Display
This research report by Emiliyan Petkov outlines a method
for creating images for 3D screens, useful to know for my thesis.
A relevant quote:
“A matter of interest is exploring the possibility for developing interactive applications for 3D displays. This kind of applications gives users the opportunity to interact with objects in a computer simulated world in real time. Thus the time for remaining in this virtual environment is not limited and decisions what to do and where to go are made by the user. These applications will offer an opportunity for creation of virtual worlds through the multiview autostereoscopic 3D displays.” - pg. 322
Somewhat tangentially, part of my thesis is exploring possible
hardware for interaction, one option being 3D displays,
monitors, or screens. A strong aspect of that would be not
just review of a design using this hardware, but also creation,
potentially collaborative.
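Rendering for a multiview autostereoscopic display essentially means rendering the same scene from several horizontally offset cameras, one per view, so each eye position receives its own image. A hedged sketch of generating those camera offsets (the parameters are illustrative, not the specification of any actual Philips display):

```python
def multiview_camera_positions(center, n_views, baseline):
    """Return horizontal camera positions for an n-view render.

    Cameras are spread symmetrically about `center` across a total
    horizontal `baseline` (in scene units), one camera per view.
    """
    if n_views == 1:
        return [center]
    step = baseline / (n_views - 1)
    half = baseline / 2.0
    return [center - half + i * step for i in range(n_views)]
```

For example, a hypothetical 9-view display would need nine renders of the architectural model per frame, which is one reason real-time interactive content for such hardware is demanding.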
Touchable 3D Video System
This research report by Jongeun Cha, Mohamad Eid, and
Abdulmotaleb El Saddik introduces the idea of presence - the
immersive feeling of being inside a virtual environment.
Relevant quotes in textual order:
“Recent advances in multimedia contents generation and distribution have led to the creation and widespread deployment of more realistic and immersive display technologies. A central theme of these advances is the eagerness of consumers to experience engrossing contents capable of blurring the boundaries between the synthetic contents and reality; they actively seek an engaging feeling of ‘being there,’ usually referred to as presence.” - pg. 29:2
In the entertainment industry, displays are getting larger and
larger, with more accurate color rendition and higher contrast
ratios. This is driven by consumers: people buy what they like
more, and natural selection kills off the displays in the population
that are not selected. Part of that drive is, naturally, the need
to be entertained, but another part is that the more powerful
the display, the more data it can deliver. This can and should be
harnessed by architects.
“When viewers have the ability to naturally interact with an environment, or are able to affect and be affected by environmental stimuli, they tend to become more immersed and engaged in that environment.” - pg. 29:2
There is an argument for critical distance - maintaining a
distance from a design being reviewed so that the design does
not influence the review itself. However, architecture cannot be
reduced to a set of images as it often is in design reviews. When
a film production team looks at a cut of a film, they do so in a dark
room - much like the audience would view the film when it comes
out. Likewise in architecture, being able to experience a design
while it is being made, as it would be experienced by its users
after it is built, seems like a useful ability to have.
Fig. 2.17 One Approach for Creation of Images and Video for a Multiview Autostereoscopic 3D Display cover.
Fig. 2.18 Touchable 3D Video System cover.
29
Touchable 3D Video System
JONGEUN CHA, MOHAMAD EID, and ABDULMOTALEB EL SADDIKUniversity of Ottawa
Multimedia technologies are reaching the limits of providing audio-visual media that viewers consume passively. An important factor, which will ultimately enhance the user's experience in terms of impressiveness and immersion, is interaction. Among daily life interactions, haptic interaction plays a prominent role in enhancing the quality of experience of users, and in promoting physical and emotional development. Therefore, a critical step in multimedia research is expected to bring the sense of touch, or haptics, into multimedia systems and applications. This article proposes a touchable 3D video system where viewers can actively touch a video scene through a force-feedback device, and presents the underlying technologies in three functional components: (1) contents generation, (2) contents transmission, and (3) viewing and interaction. First of all, we introduce a depth image-based haptic representation (DIBHR) method that adds haptic and heightmap images, in addition to the traditional depth image-based representation (DIBR), to encode the haptic surface properties of the video media. In this representation, the haptic image contains the stiffness, static friction, and dynamic friction, whereas the heightmap image contains roughness of the video contents. Based on this representation method, we discuss how to generate synthetic and natural (real) video media through a 3D modeling tool and a depth camera, respectively. Next, we introduce a transmission mechanism based on the MPEG-4 framework where new MPEG-4 BIFS nodes are designed to describe the haptic scene. Finally, a haptic rendering algorithm to compute the interaction force between the scene and the viewer is described. As a result, the performance of the haptic rendering algorithm is evaluated in terms of computational time and smooth contact force. It operates marginally within the 1 kHz update rate that is required to provide stable interaction force, and provides smoother contact force with depth images that have high-frequency geometrical noise by using a median filter.
Categories and Subject Descriptors: H.5.1 [Information Interfaces and Presentation]: Multimedia Information Systems—Video; H.5.2 [Information Interfaces and Presentation]: User Interfaces—Haptic I/O; I.4.10 [Image Processing and Computer Vision]: Image Representation—Multidimensional
General Terms: Design, Algorithms
Additional Key Words and Phrases: Haptic surface properties, haptic rendering algorithm, video representation
ACM Reference Format: Cha, J., Eid, M., and El Saddik, A. 2009. Touchable 3D video system. ACM Trans. Multimedia Comput. Commun. Appl. 5, 4, Article 29 (October 2009), 25 pages. DOI = 10.1145/1596990.1596993 http://doi.acm.org/10.1145/1596990.1596993
1. INTRODUCTION
Recent advances in multimedia contents generation and distribution have led to the creation and widespread deployment of more realistic and immersive display technologies. A central theme of these advances is the eagerness of consumers to experience engrossing contents capable of blurring the
Authors’ address: Multimedia Communications Research Lab., School of Information Technology and Engineering, University of Ottawa, 800 King Edward, Ottawa, CA, K1N 6N5. © 2009 ACM 1551-6857/2009/10-ART29 $10.00 DOI 10.1145/1596990.1596993 http://doi.acm.org/10.1145/1596990.1596993
ACM Transactions on Multimedia Computing, Communications and Applications, Vol. 5, No. 4, Article 29, Publication date: October 2009.
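The median-filter smoothing mentioned in the abstract can be sketched in a few lines. This is a minimal illustration in pure Python, not the authors' implementation; the 3x3 window size and list-of-rows image representation are assumptions:

```python
# Sketch: suppressing high-frequency geometrical noise in a depth image
# with a 3x3 median filter, as the abstract describes doing before
# haptic force computation. Border pixels are left untouched.

def median_filter_3x3(depth):
    """Return a median-filtered copy of a 2D depth image (list of rows)."""
    h, w = len(depth), len(depth[0])
    out = [row[:] for row in depth]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = sorted(
                depth[y + dy][x + dx]
                for dy in (-1, 0, 1)
                for dx in (-1, 0, 1)
            )
            out[y][x] = window[4]  # median of the 9 samples
    return out

# A depth image with a single-sample spike; the median removes it.
noisy = [
    [5, 5, 5, 5],
    [5, 9, 5, 5],
    [5, 5, 5, 5],
]
smooth = median_filter_3x3(noisy)
```

A median filter suits this job better than simple averaging because it removes isolated spikes without blurring depth discontinuities, which matters when the depth image doubles as touchable geometry.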
Computer Games and Scientific Visualization
This article by Theresa-Marie Rhyne examines the use and impact of video game technology in scientific visualization.
Relevant quotes in textual order:
“The market dynamics of computer game applications are thus influencing computer architectures historically associated with scientific visualization.” - pg. 42
While scientific visualization does not sound like it relates to architectural visualization, one can make poignant comparisons. Both are data-driven. Both are group-reviewed. Both develop diagrammatic visual products. Both require iterative or prototype design stages. Both are model-based, forgoing an exhaustive translation of the entire product, instead focusing on a simplified representation. If scientific visualization can learn from video games, architecture can too.
“Shortcuts in the rendering software to produce a more engaging experience for the user might work well in a game, but geologists using the same digital terrain data in a visual simulation of fault structures are unlikely to trust what they’re seeing or be able to apply it on a real-life scientific mission.” - pg. 42
A point against interactive visualization: sometimes simplification of data renders it too unreliable. That holds in a purely scientific framework. In architecture, however, the simplification happens from an impossible ideal - no architectural render has ever become reality. Ever. Thus simplifying from a pretty picture to a less pretty picture, but gaining real-time interaction, works in architecture. At the same time, there are still moments in design where data is crucial, and in those moments making the design interactive in real time gains little for the designer. One simply has to exercise professional judgment about when to use a certain tool and when not to.
“Games now represent the leading force in the market for interactive consumer graphics. Not surprisingly, the graphics hardware vendors tend to anticipate the needs of game developers first, expecting scientific visualization requirements to be addressed in the process.” - pg. 43
Here is an interesting observation - hardware development occurs first for the lucrative business, video games, and only second for the less popular data-analysis business, even though the data-analysis business should have closer contact with hardware development, since it has more specific requirements for hardware. This points out that architecture should still piggy-back on something else when it comes to visualization and interaction tools - until, or if ever, it becomes a powerful business, tools will not be made for it. It will have to find them itself.
Component-Based Modeling of Complete Buildings
This research report by Luc Leblanc, Jocelyn Houle,
and Pierre Poulin examines another system for automatically
generating architecture. While this is not directly related to my thesis, it
is important to be aware of what else computer technology is
capable of that architects have not harnessed yet.
The only relevant quote:
“Shape grammars constitute the state-of-the-art in procedural
Fig. 2.19 Computer Games and Scientific
Visualization cover.
Fig. 2.20 Component-Based Modeling of Complete Buildings
cover.
Component-Based Modeling of Complete Buildings
Luc Leblanc∗, Jocelyn Houle, Pierre Poulin
LIGUM, Dept. I.R.O., Université de Montréal
Figure 1: Variations on a building. Top: Random variations on the distribution of apartments, secondary corridors, rooms, and furniture for one randomly generated configuration of wings in a multi-storey building. Bottom: Random variations on the wing shapes and their content.
ABSTRACT
We present a system to procedurally generate complex models with interdependent elements. Our system relies on the concept of components to spatially and semantically define various elements. Through a series of successive statements executed on a subset of components selected with queries, we grow a tree of components ultimately defining a model.
We apply our concept and representation of components to the generation of complete buildings, with coherent interior and exterior. It proves general and well adapted to support subdivision of volumes, insertion of openings, embedding of staircases, decoration of facades and walls, layout of furniture, and various other operations required when constructing a complete building.
Keywords: Procedural Modeling, Architecture, Shape Grammar, Boolean Operation
Index Terms: Computer Graphics [I.3.5]: Computational Geometry and Object Modeling
1 INTRODUCTION
Buildings host a great deal of modern human activity. As such, every immersive computer graphics (CG) project, whether it be movie special effects, virtual reality systems, or video games, is bound to eventually require buildings. Our familiarity with buildings mandates a high degree of fidelity, and therefore, many adopted solutions rely mainly on manual labor from artists. Consequently, creating an entire building, or worse, all the buildings of a city, quickly becomes a daunting endeavor.
Procedural modeling is an excellent method to tackle the complexity of reality. Instead of relying on long and sustained human
∗e-mail: {leblanc, houlejo, poulin}@iro.umontreal.ca
involvement, arbitrarily complex objects can be generated with little input from a user. This approach forgoes defining every little manual detail in favor of a succinct set of automatic rules able to satisfy most cases reasonably well. Various procedural techniques have been fairly popular in specialized modeling domains of CG, such as fractals for landscapes, L-systems for plants, particle systems for fluids, and shape grammars for building exteriors.
Shape grammars constitute the state-of-the-art in procedural modeling of building exteriors, and have produced high-quality results [4]. However, even though modeling building interiors and exteriors appears similar, shape grammars have not yet proven to be a good solution for modeling complete buildings. In fact, since their creation, only a small number of grammars, such as the palladian [30], have been produced for 2D floor plan generation, and better solutions have been provided by optimization techniques. Moreover, despite 10 years of development, shape grammars have seemingly yet to be used to model complete buildings.
This paper presents our solution to generate procedural buildings with coherent interiors and exteriors. We introduce a system capable of simulating split grammars and executing CSG (Constructive Solid Geometry) operations within a unified context. Our technique consists of executing a series of operations (i.e., a program) on a set of shapes selected by a query mechanism. These operations and queries are implemented as a programming language, and consequently, our system retains the flexibility and generality of programming languages, which is an asset in procedural modeling. The language is devoted to modeling with components, which is different than a library of tools on top of a regular programming language. Our system is currently not intended for general artists, but rather for designers with some programming skills. Moreover, our goal is to generate believable and coherent buildings for game and special effects environments, similar to those from recent CG shape grammars. While we hope to explore more advanced architectural issues in the future, we are not architects, and our system first addresses the basic needs for building design. It provides tools, but intelligence is still in the designer’s hands. However, with careful design, the procedural modeling aspect in our system allows for
Graphics Interface Conference 2011, 25-27 May, St. John's, Newfoundland, Canada. Copyright held by authors. Permission granted to CHCCS/SCDHM to publish in print form, and ACM to publish electronically.
modeling of building exteriors, and have produced high-quality results. However, even though modeling building interiors and exteriors appears similar, shape grammars have not yet proven to be a good solution for modeling complete buildings. In fact, since their creation, only a small number of grammars, such as the palladian, have been produced for 2D floor plan generation, and better solutions have been provided by optimization techniques. Moreover, despite 10 years of development, shape grammars have seemingly yet to be used to model complete buildings.” - pg. 87
While tools exist to parametrically generate exteriors, or otherwise surfaces, those tools are not being applied to spaces, or are only being applied in a limited manner. Architects spend too long marginalizing their own trailblazers - this report claims over a decade has been spent on developing procedural shape grammars, yet none of those years yielded a complete procedural building. Is this an unimportant field in architecture? Perhaps, but if so, why has it been in development for so long?
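The split-grammar idea this report builds on can be sketched briefly. This is a hypothetical illustration of the concept, not code from Leblanc et al.; the component names and dimensions are invented:

```python
# Sketch: a component tree grown by splitting a parent extent into
# labeled children - the core move of split grammars for buildings.

def split(name, extent, sizes):
    """Split the 1D extent (start, end) into labeled child components."""
    start, end = extent
    children, cursor = [], start
    for label, size in sizes:
        children.append((label, (cursor, cursor + size)))
        cursor += size
    assert abs(cursor - end) < 1e-9, "child sizes must fill the parent"
    return (name, extent, children)

# Split a 9 m tall volume into three 3 m storeys along its height.
building = split("building", (0.0, 9.0), [("storey", 3.0)] * 3)
```

Repeating the same move inside each storey (into corridors, rooms, openings) grows the tree of components the report describes; the question the authors raise is why, after a decade of such rules, published grammars still stop at exteriors.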
Exploring the Use of Ray Tracing for Future Games
This research report by Heiko Friedrich, Johannes Günther,
Andreas Dietrich, Michael Scherbaum, Hans-Peter Seidel, and
Philipp Slusallek introduces a software technique called ray
tracing and applies it to full virtual scene generation, including
shadows, reflection, refraction, caustics and other complex
effects. The report proposes that computers are now powerful enough that this is possible at realistic hardware scales.
Relevant quotes in textual order:
“Computer games are the single most important force pushing the development of parallel, faster, and more capable hardware.” - pg. 41
One more reason to look to video games for cutting-edge
visualization in a field that is almost primarily... visual. Architects
can spend all the time they want making window schedules but
at the end of the day the product will be something that is seen.
“Some features of this engine are realistic glass with reflection and refraction, correct mirrors, per-pixel shadows, colored lights, fogging, and Bézier patches with high tessellation. All of these effects are simple to implement with rudimentary ray tracing techniques” - pg. 45
This quote is useful because, on the off chance that I attempt to develop visualization software, I know that it may not require a high-end graphics engine with hundreds of shaders and visual tricks - it can all be done with one system.
“Because ray tracing computes visibility and simulates lighting on the fly, the pre-computed data structures needed for rasterization are unnecessary. Thus dynamic ray tracing would most likely allow for simulation-based games with fully dynamic environments as sketched above, leading to a new level of immersion and game experience.” - pg. 47
Here the technology of ray tracing is advertised on the fact that, since it does not need pre-computation (like having to wait for a render), it would provide the opportunity for immersive interaction. This makes sense: the less time that passes between designing an experience and accessing it, the more responsive the user can be, as the conceptual thread in the mind simply continues from one medium to another.
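The advantage the report keeps returning to is the recursive visibility query: from any point already being shaded, cast another ray and ask what it hits. A toy sketch of the simplest such query, a shadow test against a sphere-only scene (illustrative only, not the paper's engine; all names are invented):

```python
# Sketch: a shadow test as a recursive visibility query. Rasterization
# cannot ask "is the light visible from this point?" mid-pipeline;
# a ray tracer answers it by casting one more ray.
import math

def hit_sphere(origin, direction, center, radius):
    """Nearest positive ray parameter t, or None (direction normalized)."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * c  # quadratic with a == 1 for a unit direction
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2
    return t if t > 1e-6 else None

def shadowed(point, light, spheres):
    """Cast a ray from the shading point toward the light source."""
    to_light = [l - p for l, p in zip(light, point)]
    dist = math.sqrt(sum(v * v for v in to_light))
    d = [v / dist for v in to_light]
    return any(
        (t := hit_sphere(point, d, c, r)) is not None and t < dist
        for c, r in spheres
    )
```

Reflection, refraction and caustics are the same query asked recursively with different ray directions, which is why one mechanism covers the whole list of effects quoted above.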
Adding a Fourth Dimension to
Three Dimensional Virtual Spaces
The only relevant quote (on facing page):
Fig. 2.21 Exploring the Use of Ray Tracing for
Future Games cover.
Sandbox Symposium 2006, Boston, Massachusetts, July 29–30, 2006. © 2006 ACM 1-59593-386-7/06/0007 $5.00
Exploring the Use of Ray Tracing for Future Games
Heiko Friedrich∗
Saarland University
Johannes Günther†
MPI Informatik
Andreas Dietrich∗
Saarland University
Michael Scherbaum‡
inTrace GmbH
Hans-Peter Seidel†
MPI Informatik
Philipp Slusallek∗
Saarland University
Figure 1: Screenshots from fully interactive, ray traced game prototypes featuring highly realistic images together with richness in scene details. Ray tracing greatly simplifies the creation of games with advanced shading effects including accurate shadows and reflections even for complex geometry and realistic material appearance in combination with sophisticated illumination. Interactive ray tracing performance is already possible using software-only solutions (left three images) but dedicated hardware support is also becoming available (right).
Abstract
Rasterization hardware and computer games have always been tightly connected: The hardware implementation of rasterization has made complex interactive 3D games possible while requirements for future games drive the development of increasingly parallel GPUs and CPUs. Interestingly, this development – together with important algorithmic improvements – also enabled ray tracing to achieve realtime performance recently.
In this paper we explore the opportunities offered by ray tracing based game technology in the context of current and expected future performance levels. In particular, we are interested in simulation-based graphics that avoids pre-computations and thus enables the interactive production of advanced visual effects and increased realism necessary for future games. In this context we analyze the advantages of ray tracing and demonstrate first results from several ray tracing based game projects. We also discuss ray tracing API issues and present recent developments that support interactions and dynamic scene content. We end with an outlook on the different options for hardware acceleration of ray tracing.
CR Categories: I.3.1 [Hardware Architecture]: Graphics processors— [I.3.4]: Graphics Utilities—Software support; I.3.6 [Methodology and Techniques]: Graphics data structures and data types— [I.3.7]: Computer Graphics—Ray tracing
Keywords: Games development, realtime ray tracing, simulation, dynamic scenes, global illumination, graphics hardware
∗e-mail: {friedrich,dietrich,slusallek}@graphics.cs.uni-sb.de
†e-mail: {guenther,hpseidel}@mpi-inf.mpg.de
‡e-mail: [email protected]
1 Introduction
Computer games are the single most important force pushing the development of parallel, faster, and more capable hardware. Some of the recent 3D games (e.g. Elder Scrolls IV: Oblivion [Bethesda Softworks LLC 2005]) require an enormous throughput of geometry, texture, and fragment data to achieve high realism. They increasingly use advanced and computationally costly graphics effects like shadows, reflections, multi-pass lighting, and complex shaders. However, these advanced effects become increasingly difficult to implement due to some fundamental limitations of the rasterization algorithm. One major limitation is its inability to perform recursive visibility queries from within the rendering pipeline, which results in a number of significant problems when trying to implement advanced rendering effects. We analyze these limitations in more detail in Section 2.
Ray tracing, on the other hand, has several advantages and avoids many of these limitations (also discussed in Section 2). It is, for example, specifically designed to efficiently answer exactly these recursive visibility queries, which enables it to accurately simulate the light transport and the appearance of objects in a scene. However, ray tracing had been much too slow for interactive use in the past.
Due to significant research efforts in recent years, ray tracing has achieved tremendous progress in software ray tracing performance [Wald et al. 2001; Reshetov et al. 2005; Wald et al. 2006a; Wald et al. 2006b] to the point where realtime frame rates can already be achieved for non-trivial scenes on standard CPUs and at full screen resolution (see Table 1).
Table 1 compares the rendering performance of several realtime ray tracing implementations, namely the original OpenRT system [Wald et al. 2002a], multi level ray tracing (MLRT) [Reshetov et al. 2005], both using kd-trees as spatial index structures, and very recent implementations with Bounding Volume Hierarchies (BVH) [Wald et al. 2006a] and Grids [Wald et al. 2006b]. These numbers give an overview of the ray tracing performance that can be achieved in software, but it is important to note that these systems vary significantly in their feature set and thus are not directly comparable. Images of the used test scenes are shown in Figure 2.
Fig. 2.22 Adding a Fourth Dimension to Three Dimensional Virtual Spaces cover.
© 2004 ACM 1-58113-845-8/04/0004 $5.00
Adding a Fourth Dimension to Three Dimensional Virtual Spaces
Robina E. Hetherington, Liverpool Hope University College
John P. Scott, University College Chester
ABSTRACT
The development of new standards for distributed data offer new possibilities to combine and display multiple types of information. This paper is concerned with an architectural and historical application of X3D and XML to objects, such as buildings, which have an organic quality and tend to evolve over time. The display of a 3D computer model does not always adequately describe the building or artifact and additional data are often required.
This paper describes and evaluates techniques for the integration of three-dimensional data in the form of X3D and other data contained in XML format, such as temporal data. The capabilities of X3D to display a model with associated temporal data in different states or times are outlined. The relationship of X3D to XML is considered and methods described to enable 3D models and temporal data to be meaningfully combined. The use of XML to represent temporal data is outlined along with the use of XSLT (eXtensible Stylesheet Language Transformations) and DOM (Document Object Model) to filter both model and temporal data. The use of an API (Application Programming Interface) to alter the state of an X3D model is described. These methods are applied to a simple model and data file to display temporal data along with a 3D model at different points in time. Conclusions are drawn as to the appropriate method to employ for client-side manipulation of different types of 3D models and related data.
CR Categories: C.2.4 [Distributed Systems] Distributed Systems – Client/Server, H.5.3 [Information Interfaces and Presentation] Group and Organization Interfaces – Web-based interaction, I.3.7 [Computer Graphics] Three-Dimensional Graphics and Realism – Virtual Reality.
Keywords: Information Visualization, Interactive 3D Graphics, Architecture, X3D, XML, Cultural Heritage
1 INTRODUCTION
The last decade has seen a phenomenal growth in the use of the World Wide Web as a communications medium. This has been mainly through the use of HTML, an open source mark up language. However, the limitations of HTML have led to the development of eXtensible Markup Language (XML), which is a data formatting specification language based on the Standard Generalised Markup Language (SGML). XML is a markup language, like HTML, but the tags in XML are not predefined. Authors have to define their own through either a Document Type Definition (DTD) or an XML Schema. XML was created to store, structure and to exchange information. HTML may well be used for many years to come and it will work with XML to display data in Webpages. However, with an XML data file, the same information will be available for display on many other platforms. Because an XML document is a plain text file, it provides a software and platform independent way of sharing data.
In parallel with the development of the World Wide Web there has been a growth in the ability to model three-dimensional objects on computers. In the main this has been using propriety software, both to develop and to display the three-dimensional models. Exchange of data relating to models produced using modelling software has typically involved the use of DXF files. In the early 1990s the Virtual Reality Modelling Language (VRML) was developed to enable three-dimensional models to be displayed over the WWW, with the first official version released in May 1995. However, it has not seen comparable uptake to that of HTML and a new standard has been proposed, in the form of X3D (eXtensible 3D), which is an XML application.
Technological problems such as slow connections and the limited power of computers have, until recently, inhibited the widespread use of Web3D. (Web3D is a generic term for the delivery of any 3D model over the World Wide Web). The growth of broadband Internet connections and a significant rise in the number of relatively low-priced computers readily available, which can handle both the file size and rendering requirements of 3D models, means that the time is now right for wider applications of Web3D graphics.
Although there is a significant body of work on both VRML and XML, there is very little work in the application of X3D combined with XML. Polys (2003) has demonstrated how chemical structures can be displayed through a combination of CML (Chemical Modelling Language) and X3D. Kim and Fishwick (2002) have examined the concept of creating dynamic models with X3D. According to Polys (2003) the potential impact of convergence between W3D and XML has yet to be understood or explored.
“This paper first outlines the capabilities of X3D to show buildings at different times or states. It then examines how temporal data can be stored within XML and combined with model data in the form of X3D. This data is then extracted and filtered on the client computer through the use of XML technologies. The way in which buildings can be displayed at different times or states along with associated descriptive text is demonstrated.” - pg. 164
The general gist of this research report, by Robina E. Hetherington and John P. Scott, is the apparent simplicity of encoding time data into a model at the pseudocode level. That is, it is not fundamentally difficult to store temporal versions of a design within the files of the design. This is significant because, again, it is so simple for architects to use these tools, or to develop them, that it boggles the mind that they have not used them yet, or frown on their use. The ability to encode time data within the design, separate from animation, could show clients, or a review board, what the design would look like during different times of the year, which sounds like a powerful tool.
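As a sketch of how little machinery this takes: temporal states of a model stored in plain XML and filtered on the client. The schema below is invented for illustration; the paper itself works with X3D plus XSLT and the DOM, not this toy format:

```python
# Sketch: temporal versions of a design kept in one XML file and
# filtered by year on the client side.
import xml.etree.ElementTree as ET

DOC = """
<building name="chapel">
  <state year="1850"><wing id="nave"/></state>
  <state year="1920"><wing id="nave"/><wing id="tower"/></state>
</building>
"""

def wings_at(xml_text, year):
    """Return wing ids from the latest recorded state not after `year`."""
    root = ET.fromstring(xml_text)
    states = sorted(root.findall("state"), key=lambda s: int(s.get("year")))
    current = [s for s in states if int(s.get("year")) <= year]
    if not current:
        return []  # nothing recorded yet at this date
    return [w.get("id") for w in current[-1].findall("wing")]
```

The same filtering, done with XSLT or the DOM as in the paper, then drives which parts of the 3D model are displayed at a chosen date.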
Service-Oriented Interactive 3D Visualization of
Massive 3D City Models on Thin Clients
This research report, by Dieter Hildebrandt, Jan Klimke,
Benjamin Hagedorn, and Jürgen Döllner, points out how
cumbersome specialized hardware and software can become. In a
system designed to visualize massive models of cities, specialized
hardware was developed with specialized software and an expert
was trained to operate all of that...just to make a moving picture
of a city. This is a point against the tendency with architects to
make tools that are highly specific to one purpose or, worse, one
project.
“Until today only “monolithic” geovisualization systems can cope with all these challenges of providing high-quality, interactive 3D visualization of massive 3D city models, but still have a number of limitations. Such systems typically consist of a workstation that is equipped with large storage and processing capabilities, as well as specialized rendering hardware and software, and is controlled by an expert who controls the virtual camera and decides which information to integrate into the visualization through a graphical user interface.” - pg. 1
Generally, tools need to be general. A hammer that works on
only one type of nail is not a very good hammer. A rendering setup
that only works during day scenes is not very useful in the large
scheme of things. Likewise, a system for interactively visualizing
designs should remain flexible so that all architects can use it.
“these systems mostly lack the emotional factor that is immanent to today’s presentation and interaction devices such as smartphones and tablets” - pg. 1
This is an aspect I have strangely ignored - the emotional
factor of being immersed in a design. There is zero emotion,
except despair, in an architectural review. Let the building speak
for itself, let it inspire, motivate, drive the review. Such are the
fruits of an interactive visualization system.
Fig. 2.23 Service-Oriented Interactive 3D Visualization of Massive 3D City Models on Thin
Clients cover.
Service-Oriented Interactive 3D Visualization of Massive 3D City Models on Thin Clients
Dieter Hildebrandt, Jan Klimke, Benjamin Hagedorn, Jürgen Döllner
Hasso-Plattner-Institut, University of Potsdam, Germany
{dieter.hildebrandt|jan.klimke|benjamin.hagedorn|doellner}@hpi.uni-potsdam.de
ABSTRACT
Virtual 3D city models serve as integration platforms for complex geospatial and georeferenced information and as medium for effective communication of spatial information. In this paper, we present a system architecture for service-oriented, interactive 3D visualization of massive 3D city models on thin clients such as mobile phones and tablets. It is based on high performance, server-side 3D rendering of extended cube maps, which are interactively visualized by corresponding 3D thin clients. As key property, the complexity of the cube map data transmitted between server and client does not depend on the model’s complexity. In addition, the system allows the integration of thematic raster and vector geodata into the visualization process. Users have extensive control over the contents and styling of the visual representations. The approach provides a solution for safely, robustly distributing and interactively presenting massive 3D city models. A case study related to city marketing based on our prototype implementation shows the potentials of both server-side 3D rendering and fully interactive 3D thin clients on mobile phones.
Categories and Subject Descriptors
C.2.4 [Computer-Communication Networks]: Distributed Systems—Client/server, distributed applications; C.5.5 [Computer System Implementation]: Servers; D.2.11 [Software Engineering]: Software Architectures—Service-oriented architecture (SOA); D.2.1 [Software Engineering]: Requirements/Specifications; I.3.2 [Computer Graphics]: Graphics Systems—Distributed/network graphics
General Terms
Algorithms, Design, Performance, Standardization
Keywords
Service-oriented architecture, mobile device, distributed geovisualization, 3D geovirtual environment, virtual 3D city model, 3D computer graphics
1. INTRODUCTION
3D geovirtual environments (3DGeoVEs) are a conceptual and technical framework for the integration, management, editing, analysis, and visualization of complex 3D geospatial information. Virtual 3D city models as a specialized and frequent type of 3DGeoVE serve as integration platforms for complex geospatial and georeferenced information. For application areas such as city planning and marketing, virtual 3D city models turned out to be effective means for the communication of planning related information, e.g., about land usage, transportation networks, public facilities, and real estate markets. Such systems have to provide up-to-date data, most efficient interaction capabilities, as well as effective, high-quality visual representations. Typically, the geodata required for representing virtual 3D city models in real world software applications have massive storage requirements. To give users interactive access to high-quality virtual 3D city models, the resources required by a 3D geovisualization system in terms of storage and computing capacity can be significant. This currently restricts the applicability of 3D geovisualization especially on mobile devices and for service-based and web-based systems.
Until today only “monolithic” geovisualization systems can cope with all these challenges of providing high-quality, interactive 3D visualization of massive 3D city models, but still have a number of limitations. Such systems typically consist of a workstation that is equipped with large storage and processing capabilities, as well as specialized rendering hardware and software, and is controlled by an expert who controls the virtual camera and decides which information to integrate into the visualization through a graphical user interface. Typically, only a single view is available on a single screen or projection; multi-user access and collaboration is usually not supported; and these systems mostly lack the emotional factor that is immanent to today’s presentation and interaction devices such as smartphones and tablets. Often, such a system does not allow for easy and seamless integration of new or updated information, as data needs to be preprocessed to fit a specific internal format for enabling high-performance rendering. Furthermore, it may be difficult to adapt such an encapsulated visualization system to specific data and usages that require new, advanced visualization techniques. Even for today’s high-performance visualization systems, it is a challenging task to combine the visualization of massive, large-scale 3D data with the visu-
On September 18th, I met with Thomas Cortina, Associate
Teaching Professor in Computer Science at the Gates-Hillman
Center. Below are important points from the meeting:
• Thomas mentioned a number of names I could pursue for further inquiry: Jessica Hodgins and Kayvon Fatahalian (with whom I eventually had an interview), both of whom work in computer graphics; Alexey Efros, who is at Berkeley and works with computational photography; and Guy Blelloch, who was the lead on the client-side design committee for the Gates Center while it was being built. Some of these ended up being unreachable.
• He also mentioned several libraries that I could look into (which I eventually did): the ACM (Association for Computing Machinery) and SIGGRAPH, both of which could have articles and research on graphics related to architecture.
• Yet a third line of inquiry he mentioned was the research branches of large tech giants such as Microsoft, Google, IBM, and Pixar, which often publish reports on cutting-edge research and technology.
All of these paths helped me develop my literary research.
Interviews and Reviews
Fig. 2.25 Thomas Cortina.
Fig. 2.22 The College of Fine Arts compared to the Gates-Hillman Center at CMU. Both reflect the style of their age: the College of Fine Arts is rigid, uniform, and measured, while the Gates-Hillman Center is open, dynamic, and constantly adapting.
Fig. 2.26 Kayvon Fatahalian.
Fig. 2.27 Near-exhaustive computation brought up during the interview.
On October 8th, I met with Kayvon Fatahalian, Assistant Professor of Computer Science in Smith Hall. Below are important points from the meeting:
• Simple lighting can be done up to any arbitrary geometric complexity, but baking complex shadows becomes tricky, and is the area where graphics systems start taking shortcuts.
• One aspect of a thesis is making this statement: "I believe it is possible..." Where are the situations where existing tools do not meet the needs of architects; what is not good enough?
• If I asked about what architects want, the deliverable would be a proposed solution. Conducting a survey of the efficacy of visualization software in the field would be fruitful.
• With an interactive render versus a static one, there is an aesthetic trade-off: the first looks worse, the second looks very good. What particular things do architects want to do?
• The idea that pre-rendered videos can account for every possible virtual scenario; that, or a mix of pre-rendered and real time. How does that apply to architecture?
The biggest point I got from this meeting was to ask myself how an architect would approach such software and what they would need of it. This allowed me to move forward with software analysis.
During the first poster session, on September 18th, I got
feedback from various professors in the School of Architecture
as well as my advisors and other students. Below are points from
that feedback:
• A feasibility analysis would be useful, in the form of a flowchart with yes/no pathways that would narrow down the nature of the thesis. I later incorporated this idea into both the Mind Map and the software flowchart.
• It was suggested that the architectural design process was important to keep in mind. The problem had to be framed both from the point of view of the client (what does the client want to see?) and from that of the architect (what does the architect want to show?).
From the first poster session I got ideas on what my
midreview should include to explain and ground my thesis.
Fig. 2.28 Poster #1, shown at the first poster session.
Fig. 2.29 The midreview plot.
Fig. 2.30 The midreview brochure, showing both the outside and the inside.
The midreview, on October 21st, was when the greater ideas of how I was presenting my thesis came into play. The plot's color scheme was designed as if one were staring at the world with one's eyes closed. There were also brochures, and my website was available for perusal, making its official debut that day.
The midreview had the following feedback:
• What is the dimensionality of inquiry? What is too interactive? What is not visual enough? Where is this on a scale of realism to representation to abstraction? This pushes the nature of belief.
• Every tool changes the field. Speculate on what this will kill. Find how it will negatively impact architectural practice.
• In 1994, renders were made with 600 kHz processors and mimicked hand drawings. At some further point, firms began experimenting with realistic renderings, with no technical expertise.
• Is technology pushed just so it can wow someone? Anything with technology or design has this eventuality, but is that the point?
• There is a caveat: that I am not a technical designer.
• Lastly, comments were made to the effect of "this is a thesis. Where is your project?"
The second poster session, on October 25th, fell in the same week as the midreview, so it featured little development beyond the work shown there. It was more of a 'coming attractions' setup: I had a projector with a video set up in front of my poster, showing a glimpse of things to come.

The positive feedback from the midreview and the second poster session allowed me to continue in full swing with the software evaluations. However, I knew that, for many, getting a basic understanding of my thesis was important, and I had to focus on that as well.
Fig. 2.31 Highlights from the second poster session. The top right image shows the setup with projector.
Fig. 2.32 QR code for gif animations of
the poster.
The final review was on December 8th. The review panel provided a number of new and interesting perspectives that I can use to move forward with my thesis:
• I need to address how architects will use this, especially with BIM and delivering construction documents. My assumptions are far above the set of common assumptions of architects. I need to bridge this gap. I need to look again at other firms doing this; consider why animation is paid for, not done in-house.
• With video games, there are aspects other than the visuals that can benefit architects, like pathing, AI simulation, etc.
• When are beautiful sketches used compared to the GRID? Is it detrimental to show this to a client, since they won't use their imagination anymore? Different audiences will use it differently.
• Different levels of information can be shown; maybe abstraction is a tool architects want: the GRID can still have motion, but does not have to be photorealistic.
• Video games and films are made to be mass produced, very unlike architecture; consider the social aspects.
• This exists, so what is the question? Will it eventually become mainstream? Address the trend, and why accelerate it.
• What is a tutorial? Develop demonstrations; show not why, but how: prove by example.
Fig. 2.33 The full final review plot, not including the projected video.
Fig. 2.34 The projector and speaker setup in front of the final review plot. The projector was used to project a moving graphic and a video.
The second half of the semester focused on engaging software research with the literary research I did during the first half of the semester. That involved an extensive analysis of various software packages, which are outlined in the following pages. The analysis follows the thorough path outlined on the facing page.

The main purpose of this part of my thesis is, within the general context that my literary research created, to find a place for visualization software in architectural practice. This is a two-pronged development: the first prong is to actually find a capable software package that can perform baseline photo-realistic rendering and is flexible enough for a variety of applications. The second prong is to approach the problem from the side of architects: if one of these software packages is capable of these basic tasks, what advanced, architecture-specific techniques should it be able to do? For example, should this software be able to simulate people mingling in a project? Water collecting on roofs after a heavy rain? Structural fatigue?
Software Research
Fig. 2.35 Software research path. The original figure is a yes/no flowchart; its stages are summarized below.
• License: Is the candidate software (UDK, CryEngine 3, Rhinoceros 5 3D, Lumion 3D, Blender, or other software) or hardware (Oculus Rift, Cave2, 3D glasses and monitor, Google Glass, or other hardware) available as an educational or free license? If not, discard: while commercial software may still be viable, an important aspect of this thesis is that the software be available to students of architecture too.
• Import options: What filetypes does it import? Wavefront (.obj), Rhinoceros (.3dm), 3D Studio (.3ds, .max), COLLADA (.dae), or SketchUp (.skp)? Does a middleman importer exist, via a filetype that can be exported through a third-party converter? Supporting one of these filetypes is crucial for rapid development on a software platform; discard otherwise. Supporting only some of these formats is a caution: not a damnable issue, but it may hamper moving from one design software to another. NURBS support is not an issue, since most software simplifies to a mesh.
• Interface options: Does it use proprietary hardware? Does it replace the monitor (DVI or VGA ports), require a standard monitor, or require other hardware? The more extra hardware is needed, the more cumbersome the setup becomes; ideally the hardware more than pays off the added expense. Requiring a monitor is not an issue, since the presence of a monitor is a given.
• Editing: Does it allow visual and interactive editing of the mesh, materials (does it import material data?), lighting, and interaction? Texture (or material) and lighting setup is crucial to delivering a photo-realistic, and therefore immersive, environment; discard without it. Mesh editing is a caution: not yet a major issue, but having to reimport every time there is a change in geometry may become cumbersome. Materials can be redefined in-program, and at this point interaction is not prioritized.
• Testing: What kind of visualization can it do? Virtual exploration (walkthroughs) or rapid animation (near real time rendering): one or the other is almost required for the software goal, though neither in conjunction with specialized hardware may work. Advanced shaders (water, refraction, etc.) go a long way towards making something virtual appear photo-realistic; without them things look flat and fake, a caution. A multi-user experience and mobile export can be useful, but aren't required.
• Is it capable of design? Can the project be (re)designed within this software? Does it make plans as a step in design obsolete? This is the last step and may define the software's true usefulness: it may not be enough that the software merely shows the project before it is complete.
• Experimentation: Experiment with all software and/or hardware that survives this path, benchmarking features, learning curve, and time per step of design. Whatever remains becomes the software and hardware of choice for the thesis.
The software I reviewed were Octane, as a plugin for Rhinoceros; Vray RT, which is part of Vray; Arauna2, a separate program; UDK and CryEngine, which are video game software suites; Blender Cycles and LuxRender, both experimental, the first built into Blender; Unity3D, a video game development suite; and Lumion, which was made specifically for architectural visualizations. On the left are comparisons for each software in each category, on a subjective scale of 0 to 10.
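The per-category ratings combine into the totals plotted on the facing charts; a sketch of one plausible aggregation (the numbers here are invented for illustration and are not my actual ratings):

```python
# Hypothetical 0-10 ratings per category; cons, drawbacks, and delay load
# count against a package, so they are subtracted from the total.
POSITIVE = ("pros", "software_context", "features")
NEGATIVE = ("cons", "drawbacks", "delay_load")

def total(scores):
    return sum(scores[c] for c in POSITIVE) - sum(scores[c] for c in NEGATIVE)

ratings = {  # invented numbers, for illustration only
    "Octane":    {"pros": 8, "cons": 4, "software_context": 9,
                  "features": 7, "drawbacks": 5, "delay_load": 1},
    "CryEngine": {"pros": 9, "cons": 6, "software_context": 4,
                  "features": 9, "drawbacks": 3, "delay_load": 7},
}
ranked = sorted(ratings, key=lambda name: total(ratings[name]), reverse=True)
print(ranked)  # ['Octane', 'CryEngine']
```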
I thoroughly analyzed each software for its pros (useful features and benefits); its cons (where the software was hard to use or had drawbacks); its software context (how it related to a default installation of Rhinoceros and Vray); its rendering features (what kind of rendering effects it could do); its rendering drawbacks (what kind of shortcuts it took to achieve real time rendering); and its delay load (how much more time it would take to work with this software compared to a render in Vray).
After considering everything, I found that none of the software achieved high points in all categories. The choices I think I have are Arauna2, Octane, CryEngine, and Lumion. Ultimately it will be either Octane, given that an interactive walking script is made for Rhinoceros, or CryEngine, if I can streamline its import process. Arauna2 would be nice, but it is still in development. Lumion is almost there, but has too many interactive drawbacks and does not appear to support scripts.
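The "interactive walking script" Octane would need is conceptually small: a free camera constrained to eye height. A sketch of the per-frame update (the function, its parameters, and the fixed eye height are assumptions of mine, not a Rhinoceros or Octane API):

```python
import math

EYE_HEIGHT = 1.7   # meters; typical standing eye level (assumption)
WALK_SPEED = 1.4   # meters per second; ordinary walking pace

def walk(position, heading_deg, forward, strafe, dt):
    """Advance a first-person camera one frame.

    position: (x, y, z); heading_deg: heading in degrees (0 = +x axis);
    forward/strafe: input axes in -1..1; dt: frame time in seconds.
    """
    h = math.radians(heading_deg)
    dx = (math.cos(h) * forward - math.sin(h) * strafe) * WALK_SPEED * dt
    dy = (math.sin(h) * forward + math.cos(h) * strafe) * WALK_SPEED * dt
    x, y, _ = position
    # The camera stays locked to eye height - the essential difference
    # from the free-flying cameras these renderers ship with.
    return (x + dx, y + dy, EYE_HEIGHT)

# One second of walking straight ahead along +x covers 1.4 m.
print(walk((0.0, 0.0, 1.7), 0.0, forward=1.0, strafe=0.0, dt=1.0))
```

A real script would also clamp the position against the model's floor plan; this sketch only shows the camera math.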
Fig. 2.36 Summary of the software evaluations. The chart rates Blender Cycles, Blender LuxRender, UDK (Unreal Development Kit), CryEngine Sandbox, Unity3D, Lumion, Arauna2, Vray (Vray RT), and Octane for Rhinoceros in six categories: pros, cons, software context, features, drawbacks, and delay load, plus a total.
Octane sets up very quickly once loaded in Rhinoceros. The default values are very good for an average Rhinoceros model. The controls and materials are easy to define. It can also sync with Rhinoceros' camera. It includes its own sunlight and sky system, like Vray, but it is built in and needs to be reconfigured if the scene already has a sun light. There are a lot of options, but not all of them have much visible effect. It is GPU based, so other programs are not heavily affected.
The biggest drawback is the renderer itself: path tracing appears very fuzzy until the camera stops, after which the view resolves within seconds. The camera can be set to only show the view after a few samples have been calculated. The rendering quality is fixed, so if it is slow then it will always be slow. Scene complexity does affect it somewhat. Also, the viewport needs to be updated when new geometry is created. Other cons are that lights have to be set up as emitter surfaces, and it does not appear to use bump maps to simulate detail.
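This fuzziness is inherent to progressive path tracing: each frame averages another batch of random samples toward the true pixel value, and any camera move invalidates the accumulated samples. A toy sketch of the principle, using a single noisy "pixel" rather than a real renderer:

```python
import random

def render_pixel(true_value, frames, samples_per_frame, camera_moves):
    """Accumulate Monte Carlo samples; a camera move discards them."""
    random.seed(0)  # deterministic for the example
    acc, n = 0.0, 0
    for frame in range(frames):
        if frame in camera_moves:      # movement restarts accumulation
            acc, n = 0.0, 0
        for _ in range(samples_per_frame):
            acc += true_value + random.gauss(0.0, 0.2)  # noisy sample
            n += 1
    return acc / n, n

# Holding still for 60 frames converges; moving on frame 59 leaves
# only one frame's worth of samples, hence the visible grain.
still, n_still = render_pixel(0.5, 60, 16, camera_moves=set())
moved, n_moved = render_pixel(0.5, 60, 16, camera_moves={59})
print(n_still, n_moved)  # 960 16
```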
Otherwise, it can do all material types and depth of field, has advanced camera controls (exposure, ISO, gamma, saturation, etc.), and can be networked.

The delay load is marginal. Time might be spent on setting up materials, converting lights to emissive surfaces, or trying to find features of Vray that are not present in Octane, such as the different renderers, animation controls, camera types, etc.
Fig. 2.37 Octane render logo.
Fig. 2.38 Snapshots of Octane's controls and render viewport in Rhinoceros. Clockwise from top left: Basic scene featuring sunlight and sky modeling, depth of field, and reflections; Another example of an imported scene, featuring materials; Complex scene rendered rapidly; Scene with millions of triangles with minimal mesh conversion into Octane; Comparison with Vray RT, with similar materials; Comparison with the regular Vray using the sample scene I created, with lighting matched as closely as possible.
The workflow in Lumion, which is separate from Rhinoceros, is rapid and configurable. It definitely seems to come from a video game background, as it has easy quality controls (compared to CAD software, where preview controls are hard to access). The import process is fast and intuitive, with a large library of models of people, trees, and objects. It features terrain sculpting and water bodies, with an ocean that has configurable waves. However, the full version is not free.
The biggest drawback is that the aim is for pre-rendered videos and images only. There is no walking mode; the camera is a standard flying camera, though it can switch to orbit via a button press. Below the 'high quality' setting, the rendering looks very cheap. There are only a few fixed cloud arrangements, though that is understandable given the task of photographing a variety of clouds. There is a compromising feature, though: the clouds can be adjusted in density (which seems to have no effect on the sun, and the clouds do not cast shadows). The water customization is nice, but it is fairly fixed in style.
Otherwise, models can have any materials, but refraction is by normal map only. Since it imports .obj files well, UV mapping can be done in Rhinoceros.

Using it takes only several minutes. A scene can be set up with the library of objects quickly, and the camera and UI controls are fairly intuitive.
Fig. 2.39 Snapshots of Lumion's controls and viewport. The menus are all flyout, meaning that once a scene is loaded it takes up the whole screen except when a menu is opened. Clockwise from top left: The sample scene with approximate shaders (note how water was used to approximate refraction); The same scene with materials and higher quality shadows (this was a performance hit on my laptop); The scene with the packaged elements included, a tree and a man, both affected by light and animated, though the man walks in place.
Fig. 2.40 Lumion logo.
Arauna2 is a new experimental renderer that recently released an evaluation version. So far it has many features: full material support, including refractive, reflective, and specular; light support; built-in post processing and full-screen filtering; and a fixed sun model. It has a very easy to use UI, though the camera controls are somewhat unintuitive. Another useful feature, rare to see, is its set of extra rendering modes, such as normals, depth, pure GI, rendering cost, and others.

Aside from the lack of a walking camera, the only drawback is that it is still in development: there is no way to test how well it imports models, or whether it will have any more advanced features. The camera does not collide with anything, but one can assume there will be some way to use model collision. The evaluation version uses a Unity scene as data, but that may be temporary. It is also unknown if it will even be released as a separate program for visualization; perhaps it will only be licensed to video game developers. It does use path tracing, which is, as always, grainy during motion. One minor point is that lights had hard shadows.

The delay load is unknown, but most likely marginal to fractions of an hour, depending on the import process. This renderer is very promising.
Fig. 2.42 Snapshots of Arauna2's controls and viewport. The menus are all overlays, meaning that once a scene is loaded it takes up the whole screen except where a menu is, and everything can be hidden via a button. Clockwise from top left: Pure GI shading with depth focus in the back; Path tracing with focus in the front, showing light effects and simple specular; Example of full scene reflection, which had no impact on performance; Example of refraction, some caustics, and customizable light.
Fig. 2.41 Arauna2 logo.
Vray RT is the narrowest transition from regular Vray use, though it lacks many features that the other renderers have. Its main draw is that, simply, it is a different button to press to do a Vray render.

It appears to be a reduced renderer and does not approach Vray's usual quality, thus seeming to be only for preview purposes. Otherwise, ray-traced shadows and materials are rendered accurately. The sun and lights are still processed properly. The camera can be synced to Rhinoceros' camera, but it does not feature any other camera controls, like walking.

However, compared to more focused efforts like Octane or Arauna2, it is grainy and resolves fairly slowly.

The delay load is minimal. It is only a different button away from a regular Vray render. If nothing else can be done or used, it is an available alternative.
Fig. 2.43 Snapshots of Vray RT’s viewport in Rhinoceros. Top to bottom: The render viewport by itself; The renderer, left, compared to Octane.
Fig. 2.44 Vray logo.
UDK (Unreal Development Kit) is a free software package specifically made to develop video games. It is a large download (1.9 GB) that features an extensive library of models and other elements that can populate a scene, and several rapid template setups with preset sky and sun arrangements.

The import process must use Blender to convert .obj files to a file format for UDK, .ase. Then, shadow maps need to be baked; this happens fairly quickly, but must be done again after any change. Materials have to be set within UDK and are limited to simple shaders. The sky and sun can be changed, and UDK has various types of lights. Collision is a matter of a toggle.
The biggest drawback is that mesh import glitches at 65,535 triangles, limiting the detail of complex models and requiring them to be split into several chunks. It also takes around five minutes to start. Many features in UDK are totally unnecessary for the visualization itself. The sun light does not interact with the atmosphere, requiring manual adjustment. Lastly, UDK uses vertex lighting, causing shadows to appear off or inaccurate.
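That 65,535 figure is the maximum value of a 16-bit index, which suggests the glitch stems from 16-bit mesh buffers; splitting a model before export works around it. A minimal sketch of the chunking (plain Python, not UDK's actual import pipeline):

```python
MAX_TRIS = 65535  # 2**16 - 1, the apparent per-mesh limit in UDK

def split_mesh(triangles, limit=MAX_TRIS):
    """Split a triangle list into chunks that each fit under the limit."""
    return [triangles[i:i + limit] for i in range(0, len(triangles), limit)]

# A 150,000-triangle model becomes three importable chunks.
chunks = split_mesh(list(range(150_000)))
print([len(c) for c in chunks])  # [65535, 65535, 18930]
```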
Otherwise, UDK has interactive walking. The camera bobs to the motion of moving legs and there is a slight motion blur.

The delay load can be fractions of an hour, depending on any issues with the import and whether basic materials exist or can be found.
Fig. 2.46 Snapshots of UDK’s controls and viewport. The viewport functions just like the one in Rhinoceros, where wireframe orthographic views can be set up. Clockwise from top left: The raw scene import with basic shadows calculated; The same scene with materials applied from the included library; The content browser, which shows the materials, objects, and other elements that come with
the software.
Fig. 2.45 UDK logo.
CryEngine is another software suite for making video games. Even though it is newer than UDK, it runs fairly smoothly (~20 fps) on low-end systems. It also comes with a large library of models that can populate a scene, like trees and rocks.
The huge drawback I experienced was that the export process is long and arduous, and requires either Blender (unofficially) or 3DS Max (or Maya). The export process requires significant setup in Blender. 3DS Max export is faster, except material definition is faulty. In both, very specific steps need to be taken, with nearly any step prone to glitches, and a slip anywhere may mean improperly assigned materials or a lack of collision.
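Since a slip in this pipeline only surfaces after a long import, a pre-export check is worth scripting. A sketch of the idea (the scene structure here is hypothetical, not Blender's or CryEngine's actual API):

```python
def preflight(scene):
    """Collect pipeline problems before exporting a scene to CryEngine.

    scene: list of mesh dicts with 'name', 'material', 'has_collision'.
    """
    problems = []
    for mesh in scene:
        if not mesh.get("material"):
            problems.append(f"{mesh['name']}: no material assigned")
        if not mesh.get("has_collision"):
            problems.append(f"{mesh['name']}: no collision proxy")
    return problems

scene = [  # hypothetical example
    {"name": "walls", "material": "concrete", "has_collision": True},
    {"name": "glazing", "material": None, "has_collision": True},
]
print(preflight(scene))  # ['glazing: no material assigned']
```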
However, once the meshes are imported, it is fairly simple to set up a scene, especially with a template file. All material effects can be simulated with shaders, the sky and sun are realistically modeled, and an ocean or bodies of water can be made. There is interactive walking just like in UDK, with the addition of the walker's shadow. The shadow maps are entirely dynamically generated and approximate GI. Lighting is simple but effective.

It may take multiples of an hour to bring a scene into CryEngine from Rhinoceros. Even with practice there is a lot of preparation that has to happen, and not all of it is intuitive.
CONS SOFTCNTXT
FEATRS DRAWBACKS
DELAYLOAD
TOTALPROS
Blender: Cycles
Blender: LuxRender
UDK - Unreal Development Kit
CryEngine Sandbox
Unity3D
Lumion
Arauna2
Vray (Vray RT)
Octane for Rhinoceros
CONS SOFTCNTXT
FEATRS DRAWBACKS
DELAYLOAD
TOTALPROS
Blender: Cycles
Blender: LuxRender
UDK - Unreal Development Kit
CryEngine Sandbox
Unity3D
Lumion
Arauna2
Vray (Vray RT)
Octane for Rhinoceros
Fig. 2.47 Snapshots of CryEngine's controls and viewport. The viewport only shows a 3D view of the scene, concordant with WYSIWYG. Clockwise from top left: The sample scene imported without any materials, featuring real-time shadowing, sun, and sky; The same scene being tested, with materials, a shadow from the viewer, and the sky altered due to a lower sun angle; A view of the 3DS Max import pipeline, where materials are assigned.
Fig. 2.48 CryEngine logo.
Blender comes with an experimental path tracing renderer called Cycles. It has very few controls, which replace Blender's default controls once it is activated, so there is less to learn of the actual renderer once one knows how Blender works. The path tracing rendering is very fast: the scene resolves to an acceptable quality within seconds if the camera is still. Also, since Blender is free, Cycles is free as well. This also means there is a large DIY community of graphic modelers and designers.

Cycles supports Blender's light objects and material definitions, with many presets including reflective, refractive, cartoon, and others. While moving, the view is pixelated but not choppy, which is a better solution than that used in Octane.
The biggest drawback is that it requires some knowledge of Blender, which has a steep learning curve. If geometry is imported from an .obj file, materials have to be reassigned. Blender's sun, as handled by Cycles, does not have sunlight modeling; it is just a distant light at a given angle, though a modeled sky can be set up. Blender does not easily support walking.

The delay load is fractions of an hour or more, added to however much time it would take to learn Blender; setting up a project here compared to Vray takes more effort, including changing mouse controls, changing how objects are placed and moved, and more.
Fig. 2.50 Snapshots of Blender's controls and render viewport. The viewports in Blender can be variously configured. Clockwise from top left: The sample scene with soft shadows and full materials; The same scene with harder shadows; The scene as it resolves with one sampling, showing the graininess it begins with.
Fig. 2.49 Blender's logo. Cycles does not have a logo.
Unity is similar to both the game suites and to Blender in that
it is designed to make games but has its own modeling tools. Its UI
is relatively straightforward. The free version has many features,
enough to do basic visualizations. While it can import .obj directly,
Blender may be required for additional material or UV setup.
A big drawback is that many advanced features present
in the other software are not included in the free version and
the features that are present are fairly weak in quality. The sun
and sky need to be faked to achieve various daytime lighting
situations and the shadows seem to be fairly low quality and need
to be calculated, a process that takes several minutes.
Otherwise it has material shaders, a form of real time
shadows, and support for light objects. Walking works after
some setup, and mesh collision can be set easily.
The delay load is fairly small - fractions of an hour - any
extra setup in Blender and importing assets into Unity take time,
although template scenes may be possible.
[Chart: Unity3D rated against the other packages surveyed on pros, cons, features, drawbacks, software context, and delay load.]
Fig. 2.51 Snapshots of Unity’s controls and render viewport. The viewport switches to game mode when the walking is activated. Top to bottom: The sample scene with some basic materials showing dynamic shadows; Precomputed
shadows, but at a low quality.
Fig. 2.52 Unity logo.
LuxRender is fairly fast and comes with a material library,
but it does not provide any interactivity. It is a step backwards:
it uses new rendering software without using it to advantage.
It is a plugin renderer for Blender and works on the same level as
Cycles. It likewise changes various settings and generates a new
viewport when the render is started.
It renders a frame at a time, like Vray, and due to the new
viewport it is difficult to move back and forth between the design
window and the render window.
Its delay load can reach fractions of an hour. Material
settings and assignments are nothing like those of Vray and are
somewhat clunky, on top of learning the workflow of Blender.
[Chart: Blender: LuxRender rated against the other packages surveyed on pros, cons, features, drawbacks, software context, and delay load.]
Fig. 2.54 Snapshot of LuxRender’s controls and render viewport. There are more controls in Blender’s menus. This is highly similar to Vray’s viewport.
Fig. 2.53 LuxRender logo.
On November 18th I received a new graphics card I had purchased
a few days earlier: an Nvidia GeForce GTX 660 Ti,
replacing an ATI HD 5770. The reason was purely that it
had hardware enabling the use, or the faster operation, of
several of the software packages I looked into. Nvidia graphics
processors (GPUs) have a technology called CUDA that uses
parallel processing for graphics tasks. The software that uses
this technology - Octane, Arauna2, and other path tracers - would
not work with the ATI card that I had before. I was
able to use Octane at reduced settings on my laptop, as it had an
Nvidia card, albeit one of lesser quality, but the others would not
work with that card because it was too old.
The laptop card was a GeForce 130M with compute capability
(a property of CUDA technology) of 1.1, whereas the 660 Ti, by
comparison, has one of 3.0. The laptop card also has only 32 CUDA
cores, whereas the new desktop card has 1344. Also, for the video
game engines, the new card is roughly 50% stronger than the ATI
card I had before, so I can push those engines further to achieve
higher quality visualizations.
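As a rough illustration of why the upgrade matters for the path tracers: treating sample throughput as proportional to CUDA core count is a deliberate simplification (clock speed, architecture, and memory bandwidth all matter too), but the core counts above already tell the story:

```python
# Back-of-envelope comparison of the two Nvidia cards discussed above.
# Core counts come from the text; the linear-scaling assumption is mine.
laptop_cuda_cores = 32     # GeForce 130M, compute capability 1.1
desktop_cuda_cores = 1344  # GeForce GTX 660 Ti, compute capability 3.0

core_ratio = desktop_cuda_cores / laptop_cuda_cores  # 42x more cores
```

Even at half the per-core efficiency, a path tracer would still accumulate samples an order of magnitude faster on the new card.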
Buying the new card (a $259.99 value) was the best option
for my thesis in terms of hardware because it was readily available,
Hardware Research
Fig. 2.57 The old ATI card.
Fig. 2.55 The new card, left, versus the old card, right.
Fig. 2.56 The new card inside the desktop tower.
enabled the use of software for my thesis, and demonstrated that
my thesis can exist without expensive or cutting edge hardware
like virtual reality headsets, new means of interaction like the
hardware Adobe is developing, or immersive room-sized display
setups.
The initial limitation of the low-end hardware on my laptop
and the unusable hardware on my desktop still played an
important part in my thesis because it showed that this software
could be used on existing, potentially old, hardware, though with
severe drawbacks and shortcuts.
Fig. 2.58 The Oculus Rift virtual reality headset in action. This is an example of unattainable
hardware.
Fig. 2.60 Unboxing the new card. It came with an instruction manual, a drivers disc, and extra cables. The card was distributed by ASUS, which also added the
cooling system.
Fig. 2.59 Adobe Mighty and Napoleon. Mighty is the triangular pen, Napoleon is the ruler.
Fig. 3.1 Serious Editor 3.5 by Croteam. This kind of software is used by video game developers to create virtual worlds - much like architects do with CAD software,
except with materiality and lighting as part of the toolset.
Fig. 3.3 Unreal 4 by Epic Games. This is a future engine currently in development that, while it still uses shaders, simple lighting, and other standard methods, pushes
them to their limits to achieve photorealism.
Fig. 3.2 CryEngine Sandbox by Crytek. This is a much more recent video game engine and favors dynamic shadow generation over the use of pre-computed
shadowmaps.
Fig. 3.4 Luminous Engine by Square Enix. This is also a future engine currently in development. Engines like this are at the forefront of video game engine technology,
pushing what is possible with shaders and graphics software.
Deliverables

Applications
Fig. 3.5 Help files and documentation for various graphics software. Clockwise from top left: Unity; Blender; UDK; Rhinoceros. These range in quality and depth, with some featuring text and image descriptions and others even including video. Unity was the only one that read from an included file; the others either embedded or opened a browser
page to an online database.
Fig. 3.6 Fallingwater in Half-Life 2 by Kasperg. This is a demonstration of modeling a real building in a video game environment.
Fig. 3.8 City scenes in Brigade 3 by Otoy. This is the cutting edge of path tracing.
Fig. 3.9 Fox Engine by Konami. One of the images in each set is the engine, the other is a comparative real life photograph. Which images are the engine?
Fig. 3.10 Euclideon Engine. This uses a method I did not explore - voxels - as it is more about generating geometry rather than photorealism.
Fig. 3.7 House in UDK by Luigi Russo. This student project, modeled in video game software, showed that the same goals that students use CAD
software for can be applied to video game engines.
Fig. 3.11 Path tracing method, sample images. This shows an exhaustively detailed physical environment rendered with full lighting and materiality at interactive speeds. On the right, water effects are also simulated.
Fig. 3.12 Las Vegas Bellagio Comparison in CryEngine by IMAGTP. This is a photo-realistic demonstration of a real building compared to a photograph taken at the same location.
[Diagram: Rhinoceros + Octane → interactive script (Python or internal) → output → me → users.]
Depending on which software I move forward with, the
next steps of my project will be either lightweight coding or
heavyweight streamlining or coding.
The Octane approach assumes that Octane is set up within
Rhinoceros and the only thing missing is an interactive control.
The range and nature of this control will vary, as simple horizontal
camera control by forward impetus and turning is much simpler
than also having camera bob, gravity, or collision detection.
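The "simple horizontal camera control" case reduces to a small per-frame state update. This is a generic sketch in plain Python with names of my own choosing, not Octane's or Rhinoceros's API; a real script would feed the resulting position and heading to the renderer's camera each frame:

```python
import math

def walk_update(state, forward, turn, dt, speed=1.4, turn_rate=math.pi,
                gravity=9.8, floor=0.0):
    """Advance a first-person camera one frame.

    state   : dict with x, y (plan position), z (eye height), vz, heading
    forward : -1..1 input along the view direction
    turn    : -1..1 input rotating the heading
    """
    s = dict(state)
    s["heading"] += turn * turn_rate * dt            # turning
    s["x"] += forward * speed * dt * math.cos(s["heading"])
    s["y"] += forward * speed * dt * math.sin(s["heading"])
    s["vz"] -= gravity * dt                          # gravity
    s["z"] += s["vz"] * dt
    if s["z"] < floor + 1.6:                         # crude floor collision,
        s["z"], s["vz"] = floor + 1.6, 0.0           # eye held 1.6 m up
    return s

cam = {"x": 0.0, "y": 0.0, "z": 1.6, "vz": 0.0, "heading": 0.0}
for _ in range(60):                                  # one second at 60 fps
    cam = walk_update(cam, forward=1.0, turn=0.0, dt=1 / 60)
# cam has moved about 1.4 m forward at walking speed, eye height held
```

Camera bob and mesh collision would be further terms in the same update, which is why they make the control "much simpler" or harder to build.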
The CryEngine approach assumes that it is installed and a
Rhinoceros project is available and the only thing in the way is the
cumbersome and complex import process. The range and nature
of streamlining this process will vary from simply documenting
comprehensively and cleanly how to do it with the least mistakes,
to attempting to enhance the plug-ins already existing to attempt
to automate the process further.
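One hypothetical flavor of that streamlining is a script that plans a per-layer export from Rhinoceros, so each building system arrives in the engine as its own mesh. The sketch below only builds file paths and command strings; inside Rhino they would be passed to the scripting interface, and the `-_ExportSelected` syntax is an assumption to be verified against the actual import pipeline:

```python
import os

def plan_layer_exports(layers, out_dir, fmt="obj"):
    """Build the per-layer export steps a Rhino automation script would
    run. Returns (path, command) pairs; this sketch only prepares the
    strings -- inside Rhino they would be fed to the command interface."""
    steps = []
    for layer in layers:
        safe = layer.lower().replace(" ", "_")        # filesystem-safe name
        path = os.path.join(out_dir, safe + "." + fmt)
        command = '-_ExportSelected "{}" _Enter'.format(path)
        steps.append((path, command))
    return steps

# Hypothetical layer names for a small project:
steps = plan_layer_exports(["Walls", "Glazing", "Site Context"], "export")
```

Documenting the manual process first, then scripting pieces like this, matches the "document, then automate" range described above.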
Once one of the above is in place, the next steps are more or
less identical. After both real time rendering and interaction are
achieved I need to document further features such as material
assignment, any way for collaboration or portability, streamlining
controls, general use principles or shortcuts, and the like.
Moving Forward - Software Package
[Diagram: Rhinoceros → CryEngine → output (interaction built in) → me → users.]
Fig. 3.13 The Octane approach, where only an interactive script needs to be made.
Fig. 3.14 The CryEngine approach, where the import path needs to be streamlined.
[Diagram: Octane or CryEngine as a separate download; help files, videos, tutorials, scripts, and download info in a zip package → output → me → users.]
After that, I would attempt to compile a software package.
With Octane, that would involve everything but the software itself,
as it is not free and would need to be purchased. Otherwise, there
would be a zip file, or even a small self-installer, consisting
of plug-ins, help documents, videos, and so on. With CryEngine,
the package will be more robust, as, at least theoretically, it may
include all of CryEngine, which is hefty at 1.9 GB. Since there
may be licensing issues I may also require that it be downloaded
separately, but as it is free that is less of an issue.
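The zip-file side of the package is straightforward to prototype with Python's standard zipfile module; all file names below are placeholders:

```python
import io
import zipfile

def build_package(files):
    """Pack {archive_name: bytes} into an in-memory zip, the way the
    help files, tutorials, and scripts would be bundled for download.
    (Octane or CryEngine itself would be a separate download, as noted.)"""
    buffer = io.BytesIO()
    with zipfile.ZipFile(buffer, "w", zipfile.ZIP_DEFLATED) as zf:
        for name, data in sorted(files.items()):
            zf.writestr(name, data)
    return buffer.getvalue()

package = build_package({
    "readme.txt": b"The GRID - setup instructions (placeholder)",
    "scripts/walk_camera.py": b"# interactive camera script (placeholder)",
    "docs/import_guide.txt": b"Rhino-to-engine import steps (placeholder)",
})
names = zipfile.ZipFile(io.BytesIO(package)).namelist()
```

A self-installer would wrap the same archive; the licensing question only affects whether the engine binaries can sit inside it.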
The software package, by its very existence, would be the
proof of concept for my thesis. However, at this time it is of a very
vague nature, since there are too many variables in how I would
approach developing it. Workload-wise, developing the help files
and tutorials alone is a lot of documentation, and if I choose to do
some sort of scripting I would need time to familiarize myself with
the scripting languages involved.
Also, knowing my audience will be very important. As
different users will use it differently, I will need to frame it as such.
For an architect looking to use it as a design tool, to rapidly view
a project interactively with progressive visuals, it will be one thing
and have a certain feature set. For a client wishing to explore
a realistic simulation of a project, it will be another thing. For a
contractor wishing to see the assembly of certain elements it will
be yet something else.
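Those audience-specific framings could eventually be expressed as feature presets. The breakdown below is purely illustrative - my own guess at how the feature sets might split, not a finalized design:

```python
# Hypothetical feature presets per audience, following the text's split
# between architect, client, and contractor. All names are illustrative.
PRESETS = {
    "architect":  {"progressive_render": True,  "edit_tools": True,
                   "assembly_layers": False},
    "client":     {"progressive_render": True,  "edit_tools": False,
                   "assembly_layers": False},
    "contractor": {"progressive_render": False, "edit_tools": False,
                   "assembly_layers": True},
}

def features_for(audience):
    """Return the enabled feature names for a given audience."""
    preset = PRESETS[audience]
    return sorted(name for name, on in preset.items() if on)

client_features = features_for("client")
```

Framing the package as one engine with several presets like this would keep the documentation effort in one place while still addressing each audience.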
Fig. 3.15 The breakdown of the software package.
Moving Forward - Benefits and Death
The software package foresees potentially great benefits
in the field of architecture. To understand where these benefits
come from, it is useful to review the GRID in a nutshell. The GRID is a tool
meant to preclude the physicality of an architectural design via
software and hardware that is currently available and immerse,
by visual and other means, the client or architect in the design
before it is built. Since architecture is experienced through both
time and space, it is necessary that such a tool exist during some
early stage of design, before the design is finalized and converted
into construction documents. That conversion is generally
done with BIM, as BIM is accurate and collaborative. Once the
BIM phase starts, the need for the GRID lessens, as it can be
assumed the design will not change dramatically at that point.
Using construction documents, the architect and contractor
collaborate to produce a built space, too late to make many
changes. In the current design process, the real space and the
real time are only reachable in snapshots or animations generated
beforehand (animations could be understood as simply a series
of snapshots). The problem with this is that, due to the way one
experiences a still image versus a physical space, there will always
be experiential differences between what the design was before
it was built and what the design becomes after construction.
Benefit #1 - Reducing the Gap: The first benefit of the tool is
that the experiential gap is narrowed or even removed. With the
ability to see a building through the medium of a computer screen
with realistic shadows, movement, light, and materiality, both the
client and the architect are brought to the same level. The client
probably has little experience working with CAD models and
renders, or animations, and lacks the preparation that the CAD
model, and working with that model, gives the architect.
What the client lacks is an understanding of the space.
This can be addressed by teaching the client how to read
architectural orthographics, which is arduous and exacerbates
the problem (by reducing experience rather than expanding it),
or the client can be given what they already understand: a visual
substitute for the real thing. Sketches work, but ultimately the
building will be something real, and this reality has to somehow
manifest early on.
Benefit #2 - Personalization: This allows the client to make
the project their own. By using computer interfaces they can
inhabit and explore the project. The power of the simulation is
that it uses the user’s own brain to their advantage by letting it
translate the motion within the virtual world to motion of their
physical self. The project becomes familiar and understandable.
This assumes that the client was not already swayed by
beautiful sketches, or other abstract representations of the
Fig. 3.16 The experiential gap between a still image and the physical space.
Fig. 3.17 The GRID allows a viewer to experientially inhabit the project.
project. Where these other representations used imagination to
create the space, the GRID will use it to explore the space.
Benefit #3 - Prototyping: Even before the client uses this
tool, the architect her- or himself can use it to rapidly prototype
the experience rather than the assembly or the totality alone.
Architecture is far too big to be prototyped in full, and prototyping
little chunks only goes so far.
One can approach this by breaking down what architecture
is - the memory of time and space. Memory is the passage of
experience, time is a series of moments, and space is moment
given shape. Space is the easiest to prototype because an
architect can build a scale model - this will provide a sense of
the space. Time is also easy to prototype, because the architect
need only hold the model for a while. Memory is a little harder
because the brain is smart - it knows the model is just a small
object. The architect needs to trick her or his brain, to get down
near the model and pretend it was big and thus come close to a
memory.
The GRID does that and goes further. It also has the architect
make a model, and spend time with it, and get down close to it.
But it goes beyond - the architect can walk inside the model, the
architect can change the lights and the time of day, the architect
can flood a room with water or place other people in the space.
True creativity can flower then, when memory is achieved.
Fig. 3.18 Architecture is composed of the memory of time and space. Space can be prototyped with models, time can be prototyped by simply being around and examining the models, but memory is harder because it involves tricking the brain.
The thesis would also be malicious. As a tool it would
contend with orthographics, renders, and physical models -
tools that, between what can be called sketches and the
construction documents, have come to be standard in the
design pipeline.
Death #1 - Orthographics: With orthographics, should the
client be enamored with the GRID, they may not be interested
in plans or sections, even though those tools still provide
valuable insight into the spatial organization of a project and the
interaction of the systems within or between the spaces. Likewise,
if an architect is using an axonometric diagram to explain the
order in a project but the client does not see that order in the
GRID, the client may put less faith in the work the architect put
into the diagrams, demanding instead, perhaps unrealistically,
that the diagrams match the experience found on the GRID. In
an in-firm review the orthographics may be quickly cast aside
as experiential conversations arise that are only visible on the
GRID, raising questions as to why the architect spent time on the
orthographics instead of working on the GRID.
Death #2 - Renders: With renders, the overlap is sharper.
Given a regular pretty render and the GRID, the client may wonder
why the architect bothered to take one picture when the GRID
allows them to move around and take any and all the pictures
Fig. 3.19 A client may not care about orthographics if the GRID is compelling.
Fig. 3.20 Two architects, one waiting on a traditional render, the other already being group reviewed.
that they want, from any attainable angle. Back at the firm, the
architect is spending many hours working on a few renders while
another architect, working on the same project, in the same time
finalized the GRID, rapidly creating countless renders and videos
of the same project, all at an even higher quality.
Death #3 - Physical Models: Even physical models may feel
the heat - much as with a render, one architect spends the whole
night crafting a model while another has created the GRID, with
full materiality, realistic sun shading, water bodies, and more. The
only difference is that the physical model is twirled in the hands
while the GRID is controlled by a keyboard and mouse. Even assuming
advanced hardware exists, the physical model is 3D printed with
full geometric detail...and the GRID architect uses an Oculus Rift
to create a virtual 3D display that delivers a near-real experience,
complete with depth information.
Sketching too may be impacted, though not killed - imagine
a precedent study being not just looking at photos and drawings
but exploring the GRID version of that building, perhaps modeled
with LiDAR, and documenting the experience. Perhaps one step
of design is quickly molding spaces on the GRID and experiencing
them for inspiration.
Construction documents can be reinforced by the GRID. An
architect can show a polished GRID on the construction site to
the team, showing what the project would look like - materials,
Fig. 3.21 Prototyping small pieces of architecture with physical models takes time and does not give an accurate rendition of the built end result. With a digital
substitute, the entire project can be prototyped and reviewed.
shading, landscape elements and all - as one moves from one
end of the building to the other. Since the GRID is intuitively
understood, the contractor would not need to learn a new means
of communication with the architect.
Also during construction, the architect - who perhaps
now sends a floor plan to the tenants of a future apartment
building so that they can mock up their furniture arrangements -
can send the tenants the GRID, which they can use to explore
and make their choices in a medium they can understand. No
more need for the billboard proclaiming a future building - just
go on the website of the firm and download that building's GRID.
Fig. 3.22 A client customizing a house using a real time visualization to get the exact appearance they want.
Moving Forward - Imagination and Experience
The thesis needs to find its audience, for its audience
does not know the show is on. There are certain assumptions
inherent in the GRID that are far removed from the common
assumptions of architects and their connected fields. I attempted
to break them down, and I isolated a few and tried to
address them, but many remain.
One aspect that I overshadowed was the reality that the
software implementation my thesis is exploring is already
present in some arenas - some firms have used this as a design tool
and delivered it to clients as such. These unsung firms, however,
do not themselves see the benefits of spreading this knowledge
to the rest of the field. Perhaps this is because they feel
entitled to uniqueness; perhaps they do not see results, or believe
this delivery is more work than it is worth. Perhaps every person
they use it with has desired different things from it.
Different audiences will indeed react differently to the GRID.
Well-entrenched firms will not allow yet another piece of software
into their pipeline, while more open firms will see it as a design
tool, perhaps devaluing the photorealism in favor of the interaction
and layered data sets.
The data sets firms may choose could differ from the general
one I focused on - that of photorealistic interaction by walking
in real time. Some firms, or even the client and maybe the
contractor, may desire to explore the project while only focusing
on the hierarchy of spaces, or perhaps while emphasizing the
structure behind the walls. Their imagination could then be
guided depending on the type of communication.
The imagination of the recipient, be it client, contractor,
or fellow architect, would nevertheless still be engaged. While
abstract sketches or diagrams can communicate, nothing yet
gives the user the element of choice, the choice of experience,
over memory. The choice of what can be done, over what has
been done.
Would this lead to a death of the outdated cultural belief
that architectural products are drawings, and instead herald an
age where people see architects embracing the digital? What if an
architect wanted to do something other than what their profession
had intended for them? What if an architect dreamed of
something more, some means of taking their understanding and
making it the understanding of others?
The GRID will give architects an ideal to strive towards. They
will still render, still make animations, still rely on CAD. But in time,
they will learn to use it, to make it shine as the sun. In time, it will
help them accomplish wonders.
BOOKS AND RESEARCH REPORTS
Darley, Andrew. Visual Digital Culture: Surface Play and Spectacle in New Media Genres. London ; New York: Routledge, 2000.
Dieter Hildebrandt, Jan Klimke, Benjamin Hagedorn, and Jürgen Döllner. 2011. Service-oriented interactive 3D visualization of
massive 3D city models on thin clients. In Proceedings of the 2nd International Conference on Computing for Geospatial Research
& Applications (COM.Geo '11). ACM, New York, NY, USA, Article 6, 1 page. DOI=10.1145/1999320.1999326 http://doi.acm.org/10.1145/1999320.1999326
Emiliyan Petkov. 2010. One approach for creation of images and video for a multiview autostereoscopic 3D display. In Proceedings
of the 11th International Conference on Computer Systems and Technologies and Workshop for PhD Students in Computing on
International Conference on Computer Systems and Technologies (CompSysTech ‘10), Boris Rachev and Angel Smrikarov (Eds.). ACM,
New York, NY, USA, 317-322. DOI=10.1145/1839379.1839435 http://doi.acm.org/10.1145/1839379.1839435
Heiko Friedrich, Johannes Günther, Andreas Dietrich, Michael Scherbaum, Hans-Peter Seidel, and Philipp Slusallek. 2006.
Exploring the use of ray tracing for future games. In Proceedings of the 2006 ACM SIGGRAPH symposium on Videogames (Sandbox
‘06). ACM, New York, NY, USA, 41-50. DOI=10.1145/1183316.1183323 http://doi.acm.org/10.1145/1183316.1183323
Jongeun Cha, Mohamad Eid, and Abdulmotaleb El Saddik. 2009. Touchable 3D video system. ACM Trans. Multimedia Comput.
Commun. Appl. 5, 4, Article 29 (November 2009), 25 pages. DOI=10.1145/1596990.1596993 http://doi.acm.org/10.1145/1596990.1596993
Lewis, Rick. Generating Three-dimensional Building Models From Two-dimensional Architectural Plans. Berkeley, Calif.: University
of California, Berkeley, Computer Science Division, 1996.
Appendix

Sources
Luc Leblanc, Jocelyn Houle, and Pierre Poulin. 2011. Component-based modeling of complete buildings. In Proceedings of Graphics
Interface 2011 (GI ‘11). Canadian Human-Computer Communications Society, School of Computer Science, University of Waterloo,
Waterloo, Ontario, Canada, 87-94.
Mitrovic, Branko. Visuality for Architects: Architectural Creativity and Modern Theories of Perception and Imagination. University
of Virginia Press, 2013.
Rhyne, Theresa-Marie. “Computer Games and Scientific Visualization.” Association for Computing Machinery. Communications
of the ACM 45.7 (2002): 40-4. ProQuest. Web. 24 Sep. 2013.
Robina E. Hetherington and John P. Scott. 2004. Adding a fourth dimension to three dimensional virtual spaces. In Proceedings of
the ninth international conference on 3D Web technology (Web3D ‘04). ACM, New York, NY, USA, 163-172. DOI=10.1145/985040.985064
http://doi.acm.org/10.1145/985040.985064
YOUTUBE AND VIMEO
alvaroignc. (2010, March 17). Zumthor’s Thermae of Stone in Source SDK part 5: Props. [Video File]. Retrieved from http://www.
youtube.com/watch?v=hh4nGEAKm4s
- Zumthor’s Therme Vals rendered in Source.
Archimmersion. (2010, June 25). UDK - Family House in Realtime 3D [Video File]. Retrieved from http://www.youtube.com/
watch?v=AV802r_Pr0k&feature=youtu.be
- More UDK - again, note the cheap quality.
Autodesk. (2011, April 12). Autodesk Showcase 2012 for Architectural, Construction, and Engineering Users - YouTube [Video File].
Retrieved from http://www.youtube.com/watch?v=ioP0CVRJvUI#t=17
- This is for reference - this is a very bad implementation of the subject of my thesis as it provides no presence, no true interactivity and is not at all designed for the user.
bigkif. (2007, November 17). Ivan Sutherland : Sketchpad Demo (1/2) [Video file]. Retrieved from http://www.youtube.com/
watch?v=USyoT_Ha_bA
bigkif. (2007, November 17). Ivan Sutherland : Sketchpad Demo (2/2) [Video file]. Retrieved from http://www.youtube.com/
watch?v=BKM3CmRqK2o
- Ivan Sutherland’s 1963 Sketchpad thesis, archival footage.
EliteGamer. (2012, November 28). Luminous Engine - Live Edit Tech Demo “Agni’s Philosophy” [Video file]. Retrieved from http://www.youtube.com/watch?v=eHSGBh1z474
- Luminous Engine tech demo.
GameNewsOfficial. (2013, March 29). Metal Gear Solid 5 Fox Engine Tech Demo [Video file]. Retrieved from http://www.youtube.com/watch?v=_18nXt_WMF4
- Fox Engine tech demo.
gametrailers. (2012, June 7). Unreal Engine 4 - GT.TV Exclusive Development Walkthrough [Video file]. Retrieved from http://www.youtube.com/watch?v=MOvfn1p92_8
- Unreal 4 tech demo.
Hammack, David. [hammack710]. (2013, January 3). Unity 3D Simulation Project [Video file]. Retrieved from https://www.youtube.com/watch?v=EEA5_he3pRk
- A demo of Unity3D, which looks very cheap and old.
HD, RajmanGaming. (2013, August 21). CryEngine Next Gen (PS4/Xbox One) Tech Demo [1080p] TRUE-HD QUALITY [Video file]. Retrieved from http://www.youtube.com/watch?v=4qGK5lUyCwI
- CryEngine demo reel.
Inc, Marketing Department Ideate. (2013, February 26). Autodesk Showcase 3D Visualization Software [Video file]. Retrieved from http://www.youtube.com/watch?v=IvBL2kX6CME
- Autodesk Showcase video.
lxiguis. (2012, August 28). Real time Architectural Visualization - After Image Studios [Video File]. Retrieved from http://www.
youtube.com/watch?v=HPtQyBDpatg&feature=youtu.be
- UDK demonstration. It is not that great and a little old, but is a capable engine.
Lapere, Samuel. [SuperGastrocnemius]. (2012, April 6). Real-time photorealistic GPU path tracing: Streets of Asia [Video File].
Retrieved from http://www.youtube.com/watch?v=gZlCWLbwC-0
Lapere, Samuel. [SuperGastrocnemius]. (2013, August 13). Real-time path tracing: 4968 dancing dudes on Stanford bunny [Video
File]. Retrieved from http://www.youtube.com/watch?v=huvbQuQnlq8
Lapere, Samuel. [SuperGastrocnemius]. (2012, May 29). Real-time photorealistic GPU path tracing at 720p: street scene [Video file]. Retrieved from http://www.youtube.com/watch?v=evfXAUm8D6k
- GPU path tracing demonstrations. This is a highly realistic rendering method, short of the grainy appearance.
Lumion3D. (2010, November 1). Architectural visualization: Lumion 3D software is easy to use [Video file]. Retrieved from http://www.youtube.com/watch?v=uoLV8QIm02M
- Demonstration of Lumion 3D.
Naing, Yan. [MegaMedia9]. (2013, May 31). Realtime 3D Architectural Visualization With Game Engines [Video file]. Retrieved from http://www.youtube.com/watch?v=uXzy3V3N2uw
- CryEngine3 demonstration in a sandbox environment.
Skiz076. (2012, January 3). FallingWater in Realtime 3d (UDK) [Video File]. Retrieved from http://www.youtube.com/
watch?v=QdF4rvw64rg
- A model of Fallingwater in UDK.
spacexchannel. (2013, September 5). The Future of Design [Video File]. Retrieved from http://www.youtube.com/watch?v=xNqs_S-zEBY#t=134
- Video showcasing tactile hardware interaction. This is the future, but we are not there yet.
Storus, Matt. (2011, February 9). Video Game Engine Architectural Visualization Test [Video File]. Retrieved from http://vimeo.com/19774547
- Another CryEngine3 demonstration.
T.V., Arocena. [arocenaTM]. (2011, February 17). Presenting Architecture through Video Game Engine [Video File]. Retrieved from
http://www.youtube.com/watch?v=S8HUj85Cq1s
- Demo by Max Arocena with CryEngine showing interactive lighting.
Timeshroom. (2013, July 30). Architectural Visualisation - Oculus Rift Demo [Video File]. Retrieved from http://www.youtube.com/watch?v=gaFZH8Z70vk
- Oculus Rift demo showing the views provided by the headset. Note how they are slightly offset; this produces the illusion of 3D.
Visual, Real. [RealVisual3D]. (2012, October 23). iPad 4th Generation: Unity 3d Realtime Architectural Visualisation [Video File]. Retrieved from http://www.youtube.com/watch?v=n6eb4KB2k2U
- iPad demonstration of Unity3D and its cross-platform capability.
ARTICLES
(2013, August 20). Arch Virtual releases architectural visualization application built with Unity3D game engine, including Oculus
Rift compatibility. Arch Virtual. Retrieved from http://archvirtual.com/2013/08/20/arch-virtual-releases-architectural-visualization-
application-built-with-unity3d-game-engine-including-oculus-rift-compatibility/
(2013, August 20). Arch Virtual. Retrieved from http://www.archvirtual.com/Panoptic/2013-08-19-arch-virtual-panoptic.html
- Premade realtime visualization demo by Arch Virtual. It is interactive within a web browser. This is a very good example of the subject of my thesis.
(2013, June 3). Arch Virtual. Retrieved from http://archvirtual.com/2013/06/03/tutorial-ebook-now-available-unity3d-and-
architectural-visualization-1-week-preview-edition-discount/
- Arch Virtual’s ebooklet on architectural visualization in Unity3D.
Elkins, James. (2010, November 6). How Long Does it Take To Look at a Painting? Huffpost Arts & Culture. Retrieved from http://
www.huffingtonpost.com/james-elkins/how-long-does-it-take-to-_b_779946.html
- Article showing that visitors to the Mona Lisa spend about 15 seconds looking at it.
Hudson-Smith, Andrew. digital urban. Retrieved September 2, 2013, from http://www.digitalurban.org/ (deprecated page: http://
www.digitalurban.blogspot.com/)
- Blogging platform that publishes research about connecting digital modeling and the real world with an emphasis on the profession of architecture.
Jobson, Christopher. (2013 September 22). Full Turn: 3D Light Sculptures Created from Rotating Flat Screen Monitors at High
Speed. Colossal. Retrieved from http://www.thisiscolossal.com/2013/09/full-turn-light-sculpture/?src=footer
- A project using alternate projection - this is useful because hardware exploration is part of my thesis, though here the technology is very artsy.
Kasperg. (2006, January 23). Kaufmann House. The Whole Half-Life. Retrieved September 2, 2013, from http://twhl.info/vault.php?map=3657
- Website of the Fallingwater digital recreation. This establishes a kind of benchmark for the possibilities of the area.
Russo, Luigi. Architectural Visualization. Unreal Engine. Retrieved September 3, 2013, from http://www.unrealengine.com/
showcase/visualization/architectural_visualization_1/
- Website of a project done in UDK. This is in place to be licensed (educational use included).
simulation. (n.d.) Random House Kernerman Webster’s College Dictionary. (2010). Retrieved October 20, 2013, from http://www.
thefreedictionary.com/simulation
- Definition of simulation.
Varney, Allen. (2007, July 8). London in Oblivion. The Escapist. Retrieved September 2, 2013, from http://www.escapistmagazine.com/articles/view/issues/issue_109/1331-London-in-Oblivion
- Article that mentions several attempts to visualize architectural work in video game engines. This could be a good springboard for collating past efforts in this area.
Vella, Matt. (2007, December 21). Unreal Architecture. Bloomberg Businessweek. Retrieved from http://www.businessweek.com/
stories/2007-12-21/unreal-architecturebusinessweek-business-news-stock-market-and-financial-advice
- Article detailing the use of UDK for architectural purposes.
Wikipedia contributors, “Architectural Animation,” Wikipedia, The Free Encyclopedia, http://en.wikipedia.org/wiki/Architectural_
animation (accessed November 29, 2013).
- Wikipedia article on architectural animation.
IMAGES
Act-3D. 19 April 2012. Lumion logo. [logo]. Retrieved from http://lumion3d.com/forum/general-discussion/lumion-logo/?action=dlattach;attach=8515
- Lumion logo.
alexglass. 11 October 2013. Ray Tracing vs Rasterized. [chart]. Retrieved from http://www.ign.com/boards/threads/generation-8-
starts-with-brigade-not-x1-ps4.453427233/
- Chart of raster vs. ray tracing technologies.
Blender Foundation, The. n. d. Blender logo. [logo]. Retrieved from http://download.blender.org/institute/logos/blender-plain.
png
- Blender logo.
Chaos Group. n. d. Vray logo. [logo]. Retrieved from http://upload.wikimedia.org/wikipedia/fa/a/a1/Vray_logo.png
- Vray logo.
CryEngine. n. d. CryEngine logo. [logo]. Retrieved from http://www.n3rdabl3.co.uk/wp-content/uploads/2013/08/logo_vertical_
black.jpg
- CryEngine logo.
Epic Games. n. d. UDK logo. [logo]. Retrieved from http://epicgames.com/files/technologies/udk-logo.png
- UDK logo.
Euclideon. 22 November 2011. Euclideon Unlimited Detail. [screenshot]. Retrieved from http://media1.gameinformer.com/
imagefeed/featured/gameinformer/infdetail/infpower610.jpg
- Euclideon screenshot.
Fatahalian, Kayvon. n. d. Kayvon Fatahalian. [photo]. Retrieved from http://www.cs.cmu.edu/~kayvonf/
- Photo of Kayvon Fatahalian.
Fatahalian, Kayvon, et al. July 2013. Visualization graph. [graph]. Retrieved from http://graphics.cs.cmu.edu/projects/exhaustivecloth/
- Graph from Kayvon Fatahalian's exhaustive cloth project.
History Blog, The. n. d. Dome design. [drawing]. Retrieved from http://www.thehistoryblog.com/wp-content/uploads/2013/01/
Dome-design.jpg
- Brunelleschi’s dome image.
IGXPro.com. n. d. Mario 64. [screenshot]. Retrieved from http://www.igxpro.com/wp-content/uploads/2012/09/mario64.jpg
- Mario 64, an early 3D video game.
Jean-Philippe Grimaldi, et al. n. d. LuxRender logo. [logo]. Retrieved from http://upload.wikimedia.org/wikipedia/commons/f/f5/
Luxrender_logo_128px.png
- LuxRender logo.
Konami. 27 March 2013. Title. [logo]. Retrieved from http://babysoftmurderhands.com/wp-content/uploads/2013/04/FOX-Engine-Kojima-Productions-GDC-2.jpg
- Comparison of the Fox Engine to real life.
Mh. 10 March 2010. The Gates-Hillman Complex. [photo]. Retrieved from http://upload.wikimedia.org/wikipedia/commons/a/a6/
CMU_Gates_Hillman_Complex.jpg
- Photo of the Gates-Hillman Center.
n. d. Tom Cortina. [photo]. Retrieved from http://sigcse2014.sigcse.org/authors/
- Photo of Thomas Cortina.
Otoy, Inc. 22 November 2012. Octane Render logo. [logo]. Retrieved from http://en.wikipedia.org/wiki/File:Octane_Render_logo.
png
- Octane logo.
PcGamesHardware. n. d. Crysis 2 screenshot 5. [screenshot]. Retrieved from http://www.pcgameshardware.com/screenshots/original/2010/03/crysis-2-screenshots-gdc-2010__5_.jpg
- Crysis 2 image.
Persage. 5 April 2007. Carnegie Mellon University College of Fine Arts building. [photo]. Retrieved from http://upload.wikimedia.
org/wikipedia/commons/3/3a/CFA.JPG
- Photo of the College of Fine Arts.
Unity Technologies. n. d. Unity logo. [logo]. Retrieved from http://upload.wikimedia.org/wikipedia/ru/a/a3/Unity_Logo.png
- Unity logo.
MISCELLANEOUS
Adobe & Touch. n. d. Projects Mighty & Napoleon. Retrieved from http://xd.adobe.com/mighty/notify.html
- Website of Adobe Mighty and Napoleon.
Autodesk. n. d. 3D visualization software brings design to life. Retrieved from http://www.autodesk.com/products/showcase/
overview
- Website of Autodesk showcase.
Crydev. (2013, October 18). CRYENGINE® Free SDK (3.5.4) [Computer software]. Retrieved from http://www.crydev.net/dm_eds/
download_detail.php?id=4
- CryEngine3 SDK.
Lumion. (2013). Lumion 3D [Computer software]. Retrieved from http://lumion3d.com/
- Lumion website; note that a new version is available, but its evaluation version is not yet.
NHTV University of Applied Sciences. (2013, November 11). ARAUNA2 demo. [Computer software]. Retrieved from http://ompf2.
com/viewtopic.php?f=5&t=1887#p4233
- Arauna2 demo.
Schroeder, Scott A. (2011, January 1). Adopting Game Technology for Architectural Visualization. Purdue e-Pubs. Retrieved from
http://docs.lib.purdue.edu/cgi/viewcontent.cgi?article=1005&context=cgttheses
- Possible precedent thesis.
TERMS
3D - A digital representation of three-point perspective that approaches how the eyes interpret light. 3D is often mapped onto a planar screen, but newer technologies use curved screens, or even a screen for each eye, to even more closely replicate vision.
ACM - Association for Computing Machinery.
Animation - In the context of this thesis, refers to a disembodied flythrough that architects are so fond of - an unnatural
movement that lacks artistic merit and generally does not approach human experience. Animations can and do exist that give the
viewer an experience they can understand, but technology can go beyond that.
AO - Ambient occlusion. A technique that replicates GI shading by determining where deep corners are and shading them accordingly. Combined with other effects, this is an efficient method to fake radiosity shading.
Architecture - The study of the memory of time and space. Encompasses the thought, theory, tools, design, construction,
evaluation, and history of buildings.
Baking - Taking pre-computed data and turning it into a texture that can be applied in a material.
BIM - Building Information Modeling. A type of modeling, not necessarily visual, that digitally covers architectural systems.
Bump map - Either another name for a normal map, or a greyscale image that resembles the grain or small-scale detail of a material and is applied to a material in the scene to very efficiently fake said detail. A bump map is the simplest way to add complexity to a mesh on a small scale by only using a material.
CAD - Computer Aided Design. Digital precision tools used in product, aviation, automotive, and architectural design.
Compute capability - A ranking of CUDA technology, roughly the version number, that relates to how well the CUDA cores can process their tasks.
CUDA - A technology Nvidia developed for their graphics processors that uses parallel processing that developers can directly
access for graphics purposes.
CMU - Carnegie Mellon University. This is my university and the home of the School of Architecture, where I am doing my thesis.
Delay load - A term I coined that describes the relative time it would take to use one program or pipeline compared to another. For the purposes of the software evaluations, I compared a regular pipeline of modeling in Rhinoceros and rendering in Vray to each set of alternative software.
DIY - Do It Yourself. A field of development not necessarily informed by professional practice, where users attempt to find their own ways to achieve a task. These attempts are not always successful, but the culture is one of sharing - the attempts that work are often documented and refined.
Drivers - Software middle-men between hardware on a computer and other software that aims to use that hardware.
Engine - Graphics software (that can also be embedded in other software) used to render virtual worlds. In video games, this is what makes the graphics work, though it is often also responsible for physics calculation, the menus and UI, and AI.
Environment map - A set of six snapshots, one along each cardinal direction around an object, each with a FOV of 90°, that are composited to get a 360° view completely around the object. This is used to fake reflections. Doing this in real time is very taxing on performance.
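To illustrate the compositing described above, here is a minimal Python sketch of the first step: deciding which of the six 90° snapshots (cube faces) a reflection direction lands on, by its dominant axis. This is my own illustration, not code from any engine; the function and face names are invented.

```python
def direction_to_cube_face(d):
    """Pick which of the six 90-degree snapshots (cube-map faces)
    a reflection direction vector (x, y, z) falls on."""
    x, y, z = d
    ax, ay, az = abs(x), abs(y), abs(z)
    # The dominant axis determines the face; the sign picks which of the pair.
    if ax >= ay and ax >= az:
        return '+x' if x > 0 else '-x'
    if ay >= ax and ay >= az:
        return '+y' if y > 0 else '-y'
    return '+z' if z > 0 else '-z'
```

A real engine then samples the chosen snapshot at the position where the direction pierces that face, which is why six 90° views suffice for a full 360° of fake reflection.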
FOV - Field of view. The geometric angle that is subsumed by the view cone of a viewer.
Fps - Frames per second, also frame rate. A measure of the number of frames a graphics processor can generate on a monitor every second to simulate fluid motion. Values between 30 and 60 are good goals for graphics-heavy software, as at lower values choppiness and stuttering become apparent, and higher values may produce incompatibility with the monitor hardware (usually not an issue with modern software). This can be measured as an average over the last few seconds or as a value every few seconds.
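The "average over the last few seconds" measurement described above can be sketched in a few lines of Python. This is an illustrative fragment of my own, not taken from any particular engine; the class and method names are invented.

```python
import time

class FpsCounter:
    """Average frame rate over a sliding window of recent frame timestamps."""

    def __init__(self, window=60):
        self.window = window      # how many recent frames to average over
        self.timestamps = []

    def tick(self, now=None):
        """Call once per rendered frame."""
        now = time.perf_counter() if now is None else now
        self.timestamps.append(now)
        if len(self.timestamps) > self.window:
            self.timestamps.pop(0)

    def fps(self):
        """Frames per second averaged across the window."""
        if len(self.timestamps) < 2:
            return 0.0
        elapsed = self.timestamps[-1] - self.timestamps[0]
        return (len(self.timestamps) - 1) / elapsed if elapsed > 0 else 0.0
```

Calling `tick()` every frame and reading `fps()` once a second would reproduce the smoothed counters that engines display.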
Gameplay - The actions a user performs in relation to the environment or other players within a game. People often fail to make the distinction between graphics and gameplay, as either one may define a video game more than the other. For this thesis, I am ignoring almost all aspects of gameplay except those involving interaction, walking, and other movement controls.
GI - Global Illumination. This refers to an even distribution of light in a scene such that more exposed surfaces get more light and
less exposed surfaces get less light. This ends up making corners darker and smoothly shading other geometry. This is useful as a step
in generating realistic shadows.
GPU - Graphics Processing Unit. The piece of hardware in a computer largely responsible for computing what is seen on a monitor.
Over the years the GPU has grown in importance, not only for video games but for design number crunching as well.
GWAP - Games With A Purpose. Video games designed or heavily repurposed for training real-world jobs. These video games are high fidelity and take into account nearly all aspects of a real-world scenario. They often focus less on graphics, however.
Mesh - A set of connected or related triangles in 3D space that combine to make a virtual shape or surface. The triangles are solid; however, their appearance can change when an image, or texture, is applied to the mesh via predefined operations (a material), using coordinates assigned to each point of the triangles. Meshes can have billions of triangles.
Normals and normal map - A normal is the perpendicular direction from a plane; in meshes the planes are the triangles. A normal map is a purple and green image that replicates height data, which is projected along the normals of the mesh. This fake height data appears as ridges or other shapes, depending on the map, that receive lighting and shading but are only a visual effect on the geometry - it is clipped by the visible edges. A technique called parallax mapping or displacement mapping works around the clipping, appearing to make physical geometry on top of the original mesh.
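The characteristic color of a normal map comes from packing a direction into color channels: each channel's 0..255 range maps to a component in -1..1, so a flat "straight up" normal stores as the texel (128, 128, 255). A minimal Python sketch of decoding one texel back into a unit vector (my own illustration; the function name is invented):

```python
def decode_normal(r, g, b):
    """Unpack an 8-bit normal-map texel (0..255 per channel) into a
    unit direction vector with components in [-1, 1]."""
    n = [c / 255.0 * 2.0 - 1.0 for c in (r, g, b)]
    # Renormalize: 8-bit quantization leaves the vector slightly off unit length.
    length = (n[0] ** 2 + n[1] ** 2 + n[2] ** 2) ** 0.5
    return [c / length for c in n]
```

A shader performs this same unpacking per pixel, then uses the decoded vector instead of the triangle's true normal when computing lighting.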
NURBS - Non-Uniform Rational Basis Spline. A mathematical method for defining a curve that can also be used to define complex surfaces. Since the definition is mathematical, the surfaces are exact, though a given graphics program approximates the surface with a mesh for preview purposes. The mesh simply takes a small number of points on the surface and connects them, but the mesh is no longer the NURBS surface; it is just a very near approximation. Many methods exist to sample those points.
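The simplest of those sampling methods is uniform sampling, which can be sketched in Python. This is my own illustration: an exact parametric curve stands in for the NURBS definition, and the returned polyline stands in for the preview mesh.

```python
def approximate_curve(curve, samples):
    """Approximate an exact parametric curve (a function of t in [0, 1])
    by sampling a small number of points and connecting them in order.
    The resulting polyline is no longer the curve, only a near approximation."""
    return [curve(i / (samples - 1)) for i in range(samples)]
```

For example, sampling a quarter circle at 5 points yields a 4-segment polyline; more samples bring the polyline nearer the exact curve, just as denser meshing brings a preview mesh nearer the NURBS surface.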
Path tracing - A method of ray tracing that determines where the photons that comprise a pixel most likely came from, taking
into account all the light in a scene. Over enough samples, path tracing should generate an image indistinguishable from reality.
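The "over enough samples" behavior above is Monte Carlo averaging: each sample is one random light path, and the grain shrinks as the sample count grows. A toy Python sketch of my own (a real path tracer traces rays through geometry and materials; here a bare radiance function stands in for one random path):

```python
import random

def estimate_pixel(radiance, samples, rng=None):
    """Monte Carlo pixel estimate: average many random light-path samples.
    The estimate converges to the true value as `samples` grows."""
    rng = rng or random.Random(0)
    total = 0.0
    for _ in range(samples):
        u = rng.random()        # one random 'path' parameter in [0, 1)
        total += radiance(u)    # radiance carried back along that path
    return total / samples
```

With few samples the per-pixel variance is visible as grain; quadrupling the sample count roughly halves the noise, which is why real-time path tracing demos look grainy while offline renders do not.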
Photo-realistic - A digital simulation that is visually very near or indistinguishable from a photo taken by a real camera.
Pre-computed - Computed beforehand, usually by a process that takes many minutes or hours, but the results are reusable.
Raster - Very general term for taking a mathematically perfect form and simplifying it for viewing. Raster can refer to pre-
computing shadows in a scene and baking them into the materials in the scene instead of having the shadows be dynamic.
Ray tracing - A graphical technique where photons from lights are traced around a scene, taking into account all possible material
properties, to determine how that scene is lit.
Real time - A digital refresh, or frame, rate at which the screen looks fluid, like a movie. Reality is in real time.
Render - A technique where a graphical algorithm is applied to a scene to generate how that scene would look, usually as it would in real life. It is also a general term for creating a high-quality image, so many realistic paintings could be understood as renders.
Rhinoceros - NURBS modeling software developed by Robert McNeel & Associates that is primarily used for nautical, product, and architectural design. It is fairly streamlined and includes hundreds of functions. Supports scripting and plug-ins.
Scene - A set of geometry, lights, materials, effects, and other features that combine to be used for rendering or interaction. Design software either imports files to combine into a scene, or saves the scene as a file which references other files.
Shader - A rapid computational process where visual effects like refraction, bumpy surfaces, and reflection are processed as materials that can be applied to geometry. Shaders are much cheaper than brute-force methods but rely on environment maps and fairly complex material definitions to replicate how these effects appear in real life. Depending on the software, they allow behavior that would otherwise be difficult to replicate; for example, a material can fade depending on how close the viewer is to it.
Shadow map - Precomputed shadows that are applied to all geometry. Shadow maps are stored as color image files, depending
on the lights in the scene, that are then used (usually automatically) in the material shaders of the scene geometry; this is called
baking. Just like with other textures, they use object UV coordinates.
SIGGRAPH - Special Interest Group on Graphics. An annual conference held by ACM that reviews and publishes research on
computer graphics.
Simplified lighting - The use of simple models of how light propagates in space. This ranges from a linear hotspot/falloff model - 100% light inside a small sphere of arbitrary radius, 0% light beyond a larger sphere, and a linear gradient in between - to more complex models where certain shapes are achieved on surfaces that mimic how real lenses distribute light.
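The linear hotspot/falloff model described above is small enough to sketch directly. An illustrative Python version of my own (parameter names invented):

```python
def falloff(distance, hotspot_radius, falloff_radius):
    """Linear hotspot/falloff attenuation: full light inside the hotspot
    sphere, no light beyond the falloff sphere, a linear gradient between."""
    if distance <= hotspot_radius:
        return 1.0
    if distance >= falloff_radius:
        return 0.0
    # Linear gradient between the two spheres.
    return 1.0 - (distance - hotspot_radius) / (falloff_radius - hotspot_radius)
```

An engine multiplies a surface's lit color by this attenuation factor; the more complex lens-shaped models mentioned above replace this function with a measured or authored profile.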
Ultimatype - Direct opposite of prototype - what the object or space will eventually be.
User interaction - The concept of a person using controls on a device to change how that device operates; often this feedback is displayed on a monitor or screen.
Vector - A mathematically defined curve. Vector graphics have infinite resolution, but cannot exist in real life, so they have to be turned into a raster image. Likewise, digital photons are also vectors, but they have to be turned into bright spots and dark spots on surfaces for a user to understand them.
Vertex lighting - An alternate method of generating shadows in a scene. Vertex lighting applies a color value to each vertex of a
geometry that corresponds to the color of the shadow or the light at that spot. Geometry is sometimes subdivided for this purpose to
have a more even distribution of points. The advantage this has over regular shadow maps is that it is not pixel based and will always
have smooth shadows, but at the potential cost of detail.
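The per-vertex light values described above are blended smoothly across each triangle at draw time. A minimal Python sketch of that blend using barycentric weights (my own illustration; names invented):

```python
def interpolate_vertex_light(bary, vertex_colors):
    """Shade a point inside a triangle by blending the light/shadow colors
    stored at its three vertices with barycentric weights (w0 + w1 + w2 = 1)."""
    w0, w1, w2 = bary
    c0, c1, c2 = vertex_colors
    return tuple(w0 * a + w1 * b + w2 * c for a, b, c in zip(c0, c1, c2))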
Vray - Rendering suite developed by ASGvis. Features a fast dual-renderer pipeline that incorporates material definitions, lights, a sun and sky, and caustics, and has support for crude animation. Exists as a plugin for Rhinoceros and other modeling programs.
WYSIWYG - What You See Is What You Get. A design concept where the visual development of something is exactly what that thing would look like once it is finished. Microsoft Word is a good example of a WYSIWYG program.