REAL-TIME 3D ARCHVIZ
Luís António Gomes Tarrafa Ramos
Instituto Superior de Tecnologias Avançadas
Rua Dr. Alves da Veiga, 142 4000-072 Porto
Portugal
June 12th, 2015
REAL-TIME 3D ARCHVIZ
Luís António Gomes Tarrafa Ramos
Project developed within the scope of the Multimedia
Engineering course at Instituto Superior de
Tecnologias Avançadas, supervised by Professor André
Cruz.
Instituto Superior de Tecnologias Avançadas
Rua Dr. Alves da Veiga, 142 4000-072 Porto
Portugal
June 12th, 2015
ABSTRACT
Keywords: Architecture visualization, computer graphics, game development, real-time
rendering, game engine, physically-based rendering, photorealism, workflow.
Distinct professional areas – Architecture, Computer Graphics, and more recently Game
Development – are merging to make real-time 3D architecture visualization possible, something seen by many as the next logical step in the world of digital architecture
visualization.
This is a very recent field that hasn’t been thoroughly explored yet. What made this
possible was the creation of game engines that can render physically-based materials in
real time and that allowed other rendering techniques to finally be adapted to real-time
rendering engines, which greatly improved the photorealism achievable in the resulting
applications.
What this project intends to do is to provide an easy way for 3D artists to create an
interactive environment without the need to spend time learning about game engines and
programming. After the creation of the 3D assets, the artist only needs to drag them into
the game engine and set up a few options, making everything ready to be built into a final
product. It comes with an easy-to-follow User Manual (see Appendix D – User Manual)
and simple interaction that anyone can get used to in minutes.
This project’s planning and development use techniques and tools learned
across the three years of this course, including scheduling, UML, prototyping,
vector design, programming, 3D modeling and texturing.
RESUMO
Palavras-chave: Visualização de arquitectura, computação gráfica, desenvolvimento de
videojogos, renderização em tempo real, motor de jogo, renderização fisicamente correcta,
fotorrealismo, fluxo de trabalho.
As diferentes áreas profissionais de Arquitectura, Computação Gráfica e
Desenvolvimento de Videojogos estão a unir-se para criar a possibilidade de visualização
de arquitectura 3D em tempo real, que é vista por muitos como o próximo passo lógico no
mundo da visualização de arquitectura.
Este é um campo muito recente, que ainda não foi profundamente explorado. O
que o tornou possível foi a criação de motores de jogo que conseguem renderizar materiais
fisicamente correctos em tempo real e permitir que outras técnicas fossem adaptadas aos
motores de rendering em tempo real, o que aumentou as possibilidades da criação de
aplicações com fotorrealismo.
Este projecto pretende criar um método fácil para artistas 3D poderem desenvolver
um ambiente interactivo sem a necessidade de passar tempo a aprender um motor de jogo e
programação. Após a criação dos elementos 3D, o artista apenas terá que os arrastar para o
motor de jogo e definir um pequeno número de opções, ficando então tudo pronto para que
seja criado o seu produto final. O projecto inclui um Manual do Utilizador (ver Apêndice
D – User Manual) fácil de seguir e envolve uma interacção simples à qual qualquer pessoa
se habitua em minutos.
O planeamento e desenvolvimento deste projecto utiliza técnicas e ferramentas
adquiridas ao longo dos três anos de estudos nesta licenciatura, incluindo: cronogramas,
UML, prototipagem, desenho vectorial, programação, modelação 3D e texturização.
ACKNOWLEDGEMENTS
None of this work would be possible without the support and patience of my parents
António and Cristina Ramos, who helped me despite the ups and downs I went through
before I finally realised what I wanted from life.
I would also not be here without my girlfriend Ânia Guerra, who opened my eyes to
the path I never knew I had the courage to follow, and who is and will be the reason I do
everything I do.
My thanks go, in particular, to Prof. André Cruz for all the support and dedication
throughout this project, but also to Hélder José Silva for letting me know what’s waiting for
me in the future and for not letting my motivation fall, and to my brother João Ramos, who
made me see “work” from a different perspective.
I’d also like to acknowledge the teachers I’ve had throughout this course who had
something to share with me that affected, directly or indirectly, the work in this project.
TABLE OF CONTENTS
1. INTRODUCTION ................................................................................................................................ 15
1.1 PROBLEM ANALYSIS ................................................................................................................................ 16
1.2 OBJECTIVES AND REQUIREMENTS ............................................................................................................... 16
1.3 DOCUMENT STRUCTURE........................................................................................................................... 17
2. STATE OF THE ART ............................................................................................................................ 19
2.1 COMPUTER GRAPHICS ............................................................................................................................. 19
2.2 ARCHITECTURE VISUALIZATION .................................................................................................................. 20
2.3 PRE-RENDERING VS REAL-TIME ................................................................................................................. 20
2.4 GAME ENGINES ...................................................................................................................................... 22
2.5 3D MODELING SOFTWARE ....................................................................................................................... 25
2.6 LIGHT THEORY........................................................................................................................................ 26
2.7 PHYSICALLY BASED RENDERING ................................................................................................................. 30
2.8 EXISTING PROJECTS ................................................................................................................................. 33
2.9 CHAPTER SUMMARY................................................................................................................................ 36
3. METHODOLOGY AND PLANNING ...................................................................................................... 39
3.1 PROJECT SCHEDULE ................................................................................................................................. 39
3.2 UML ................................................................................................................................................... 40
3.3 INTERACTION MODE ............................................................................................................................... 41
3.4 PROTOTYPING ........................................................................................................................................ 42
3.5 CHOOSING THE SOFTWARE ....................................................................................................................... 42
3.6 USER GUIDE .......................................................................................................................................... 43
3.7 CHAPTER SUMMARY................................................................................................................................ 43
4. DEVELOPMENT ................................................................................................................................. 45
4.1 PREPARING UNITY .................................................................................................................................. 45
4.2 SCENE HIERARCHY .................................................................................................................................. 46
4.3 INTERACTING WITH THE SYSTEM................................................................................................................ 46
4.4 INTERACTION MODE AND OBJECT SELECTION ............................................................................................... 47
4.5 PERSONALIZATION .................................................................................................................................. 48
4.6 USER INTERFACE ..................................................................................................................................... 49
4.7 OBSTACLES ............................................................................................................................................ 50
4.8 CHAPTER SUMMARY................................................................................................................................ 52
5. CONCLUSION AND DEVELOPMENT PROSPECT ................................................................................... 53
6. REFERENCES ..................................................................................................................................... 55
APPENDICES .......................................................................................................................................... 59
LIST OF FIGURES
FIGURE 1: PRE-RENDERING (TOP) AND REAL-TIME RENDERING (BOTTOM) IMAGES FROM THE LAST OF US, PLAYSTATION 3 GAME (IMTIAZ, 2014) ........................................................ 21
FIGURE 2: BLUEPRINT SYSTEM ON UNREAL ENGINE 4 (EPIC GAMES, 2015) .................................. 24
FIGURE 3: REFLECTION AND REFRACTION (VALLANCE GROUP, N.D.) ............................................. 27
FIGURE 4: RENDERING FIREFLIES (PRICE, 2014) ............................................................................. 28
FIGURE 5: FRESNEL EFFECT (MCDERMOTT, 2015) .......................................................................... 29
FIGURE 6: CAMERA LENS (WILSON, N.D.) ....................................................................................... 30
FIGURE 7: METALNESS AND SPECULAR WORKFLOWS (MCDERMOTT, 2015) .................................. 31
FIGURE 8: IKEA HOME PLANNER (IKEA, 2014) ............................................................................... 33
FIGURE 9: LONDON APARTMENT (UE4ARCH, 2015) ....................................................................... 34
FIGURE 10: WINTER CHALET (XOIO STUDIOS, 2015) ..................................................................... 35
FIGURE 11: HYATT RESORT (TETRAVOL, 2006) .............................................................................. 36
FIGURE 12: GANTT CHART ............................................................................................................... 40
FIGURE 13: USE CASE DIAGRAM ..................................................................................................... 41
FIGURE 14: GRAPHICAL USER INTERFACE PROTOTYPE ................................................................... 42
FIGURE 15: SCENE HIERARCHY........................................................................................................ 46
FIGURE 16: USER INTERACTION OVERVIEW .................................................................................... 47
FIGURE 17: UNITY TOOLTIP EXAMPLE ............................................................................................ 49
FIGURE 18: SCROLLVIEW EXAMPLE ................................................................................................ 49
FIGURE 19: SUN MOVEMENT ........................................................................................................... 50
FIGURE 20: SCROLLRECT WITHOUT MASK ..................................................................................... 51
Chapter 1
1. Introduction
Computer Generated Imagery (CGI) is, nowadays, present in most aspects of our lives.
We use it for our leisure, when we play a video game or watch a movie; we see it
everywhere, in huge digital publicity boards, or when we need to withdraw money from
the ATM; we owe it our safety, when vehicle manufacturers use it to run safety
simulations; and we owe CGI our health and well-being, when it makes it possible for scientists to
reproduce the interior of the human body as a digital image, or lets them create artificial
body parts that can then be printed and used to save lives.
Every day, new ideas, technologies and needs are found or invented. Every field of
science and technology is a never-stopping machine that’s always producing and requiring
new solutions to new problems. And every day new solutions are created.
Architecture had its origins as early as 10,000 BC, in Southwest Asia
(“Architecture”, 2015), and considering how long it has been around, architects have
only recently started making use of CGI.
The use of CGI in the simulation of an architectural environment as a realistic image
or animation is known as ArchViz (Architecture Visualization). The range of applications
for this recent field is still expanding and creating new purposes and opportunities for
itself. One of those purposes is the main theme of this project: using game engines to
create real-time, photorealistic, interactive architectural visualization.
1.1 Problem Analysis
Although this new technology is available to anyone almost free of charge (see
Chapter 2.4 – Game Engines), the number of developers experimenting with it in the field
of Architecture Visualization is not that large. Most developers and hobbyists use game
engines for their primary purpose: creating games.
The professionals of the Architecture Visualization field are usually 3D artists, who
are proficient in creating photorealistic still images of architecture subjects. They have
knowledge of 3D modeling, texturing and rendering, and they are very good at that – but
they seldom have programming skills, or even knowledge about how a game engine works.
Without these skills, it is impossible for a professional of the ArchViz industry
to develop ArchViz interactive applications that render in real time using the new
Physically Based Rendering technologies (see Chapter 2.7 – Physically Based Rendering)
offered by the recent versions of the top game engines.
Most 3D artists in the field of ArchViz strive for realism. By giving the final user
the possibility of interacting with the environment, the user is no longer just looking at a distant
image of the product – the user is inside the house – which is a new form of realism that
creates a richer user experience. By interacting with the environment, the user can also
change it according to their own taste, swapping the furniture and wall materials.
The interaction feature is also interesting for the companies behind the real-world objects represented in the application (furniture, appliances, etc.), since the
final user will be able to view each object’s details, including its brand, and follow a direct
link to the seller’s webpage from inside the application – something that would not be practical (or
possible) in a standard still image or video walkthrough.
1.2 Objectives and Requirements
The main goal of this project is to solve the problem stated in chapter 1.1, by creating
an easy method for the ArchViz professionals of the various industries (architecture,
interior design, furniture manufacturers, etc.) to be able to use their 3D assets to create an
interactive application without having to spend a lot of time learning new skills and
without affecting their workflow significantly.
Functional Requirements
To achieve this goal, a new workflow will be created so that future developers can
interact with a game engine without the need for additional time-consuming
skills. Along with that workflow, a document will be created, instructing first-time
developers on how to use the system.
The most technical or complex aspects of interacting with the game engine
environment will also come pre-configured, so that future developers do not need to address
them.
The final user must be able to interact with the objects and environment.
Non-Functional Requirements
The interaction between the developer and the system must be user-friendly and easy
to get used to. It must have as few steps as possible and avoid complicated tasks that
would force the developer to constantly consult the User Manual (see Appendix D – User
Manual). The User Manual must be clear and easy to understand, while still being
thorough.
The final product must be stable and easy to interact with.
1.3 Document Structure
Following the introduction chapter, this document is structured around the three main phases
of the project itself: the State of the Art (see Chapter 2 – State of the Art) refers to the first
phase of the process – the gathering of information – which was possibly the most
extensive part of the project, and gives the reader an overview of the tools and technologies
that are addressed by or related to this project; the Methodology and Planning (see Chapter
3 – Methodology and Planning), which is the phase in the project where everything is
planned and decided; and the Development (see Chapter 4 – Development) of the final
product, where the creation of the product is analysed along with the problems
encountered and their resolutions.
After the main parts of this document follows an overview of the project as a whole
(see Chapter 5 – Conclusion and Development Prospect), where there is a summary of the
conclusions and lessons taken from the development of the project, and a brief analysis of
some of the future implementations that the project could go through after this first, but
final, version.
The Appendices at the end of the document consist of documents that could not, for
aesthetic or technical reasons, be included in the main sections, and will be referred to
throughout the document.
Chapter 2
2. State of the Art
2.1 Computer Graphics
Computer Graphics (CG) is a term used to refer to the representation or manipulation
of image data with or by a computer, usually with the help of specialized graphic hardware
and software (“Computer Graphics”, 2015). Another name used specifically for generated
images is Computer Generated Imagery (CGI) (“Computer-Generated Imagery”, 2015).
Although screens capable of displaying images have existed since the late nineteenth century
– they were used by the Lumière brothers in their first projected motion picture in 1895
(Walters, 2002) – the term CG can also imply the existence of some kind of interactivity.
The first actual use of CG that contained an interactive element dates from 1954, in a
project by Douglas T. Ross, a pioneering American computer scientist, who wrote a
program that could capture the movement of his finger and translate it into a vector image
on the screen (“Computer Graphics”, 2015).
Soon after the introduction of computers, CG was used for the display of data on
hardcopy plotters and cathode ray tube (CRT) screens (Foley, van Dam, Feiner, & Hughes,
1996, p.1). Since then, the field has seen numerous improvements in the creation,
storage, manipulation and visualization of images.
CG has always been used in a wide array of knowledge fields, such as
mathematics, physics, engineering, architecture and concept design, and also in different
professional areas, like physics simulations, architecture visualization, gaming, web
development, branding, product visualization, prototyping and development, and business
presentations – and the list keeps expanding every day, with imagination as its only limit.
One of the precursors of the use of 3D CGI in the world of cinema was the well-known
movie director George Lucas, who created the Star Wars franchise in 1977 with
extensive use of CGI (Lobo, 2014).
“Until the early 1980’s, computer graphics was a small, specialized field, largely
because the hardware was expensive and graphics-based application programs that were
easy to use and cost-effective were few.” (Foley, van Dam, Feiner, & Hughes, 1996, p.1)
When personal computers with built-in graphics displays started being sold to the
public at accessible prices, graphical interfaces that used bitmap graphics for user-computer
interaction also became available. From that point onwards, more and more
graphics-based applications started making an appearance, and, as we know, today most
applications we use on our computers (including smartphones, tablets, ATMs, and
almost every kind of digital interface) are presented to us through a graphical interface.
2.2 Architecture Visualization
Architecture Visualization (ArchViz) is a union of the fields of architecture and CG.
That union is a fairly recent innovation in the history of both fields. The need for realistic
project previews has stimulated the creation of software for that purpose, focusing on
specific tasks, such as the creation of plants, roofs, terrain, urban environments, parametric
models, skies, crowds, etc.
“The influence of architectural visualization in 3D computer graphics is so strong
that it has affected the growth and development of render engines themselves, and brought
a new wave of real-time incarnations aimed straight at ArchViz projects.” (Neoscape –
Creative Studio, 2012).
2.3 Pre-Rendering vs Real-Time
“Rendering is the process of generating an image from a 2D or 3D model (or models
in what collectively could be called a scene file), by means of computer programs.”
(“Rendering”, 2015)
There are numerous processes for rendering a 3D model, along with its textures,
lighting information and other data, into an image. Some of those processes focus on
rendering something in real-time, which means that we can get immediate visual results
when we interact with the model, and some focus on pre-rendering a model or scene
for later use of the rendered images.
Figure 1: Pre-rendering (top) and real-time rendering (bottom) images from The
Last of Us, Playstation 3 game (Imtiaz, 2014)
Pre-rendering is widely used in the architecture and product visualization industries,
since the product of the rendering is usually required to be a high-resolution image with as
realistic an appearance as possible, often intended for printing (nervouschimp,
RobertZamber, Son Kim, 2015). On the other hand, real-time rendering is the technique
used in the gaming industry, since games need to provide the user with, at least, 60
rendered images (frames) each second to provide a seamless visual experience.
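For reference, the per-frame time budget follows directly from that target frame rate; taking the 60 frames per second mentioned above,

$t_{\text{frame}} = \frac{1}{60\ \text{fps}} \approx 16.7\ \text{ms}$

so all geometry, lighting, materials and post-processing for a frame must be produced in under roughly 17 milliseconds.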
As expected, each method has its advantages and disadvantages. Pre-rendering is
known for giving more realistic results, with more customization, at the cost of
much higher rendering times – in some animated movies, a single frame can take more
than one day to render; in Pixar’s Monsters University, 100 million CPU
hours, spread across a render farm of 2,000 computers, were needed to render the whole
movie (Bettinger, 2013). Real-time rendering, as the name implies, produces very
fast results but, until recently, could not create images with a high level of realism (see
Figure 1: Pre-rendering (top) and real-time rendering (bottom) images from The Last of
Us, Playstation 3 game (Imtiaz, 2014)).
With the recent introduction of Physically Based Rendering (PBR – see chapter 2.7
for more information) on the main Game Engines (software that is used to create games or
interactive applications using real-time rendering methods), the visual quality and realism
of the images generated by real-time rendering engines are getting closer each day to the
capabilities of the pre-rendering methods. Although this doesn’t mean that pre-rendering
methods will become obsolete, it gives real-time rendering a new range of opportunities,
namely in the field of Architecture Visualization in real-time (nervouschimp,
RobertZamber, Son Kim, 2015), which is the main theme of this project.
Another factor that can be taken into account when deciding between a real-time and a
pre-rendering engine is that most of the top companies providing
game engines with PBR support have now released their software for free,
charging only after the company or individual makes a certain amount of profit from an
application created with it. Since the best pre-rendering engines commonly used in
ArchViz are somewhat expensive, at least for an independent user, many people may
choose to spend more time learning real-time rendering instead, with all its advantages.
2.4 Game Engines
“Drawings and static renderings often fail to capture the experiential and spatial
qualities of a building, but imagine being able to walk through a design to explain its
circulation, for example.” (Rawn, 2015)
A game engine is a piece of software that offers a great number of tools that a game
developer may need to build a game or interactive application. Usually those tools are
integrated in a graphical environment, and as time passes they become more accessible and
easier to learn and use, which makes game engines increasingly attractive for architects and
professionals from areas outside the game development industry.
Real-time rendering is always the technique used by 3D game engines to render
the high number of frames per second necessary to achieve a seamless visual user
experience while playing a 3D game or application. Most game engines also include, or
have some way of achieving, collision detection and response, sound, scripting, animation,
artificial intelligence, a physics engine, networking, memory management, threading, etc.
The most popular 3D game engines in use today are Unity 5, Unreal Engine 4 (UE4,
successor to the UDK) and CryENGINE (Masters, 2015).
Unity and UE4 have both recently released free versions that are only paid for after the
game is released, in very appealing ways. While Unity remains completely free until your
projects reach $100,000 in revenue, and only then do you need to purchase
the Pro version (which is not very expensive), UE4 opted for a different approach,
allowing free use of the engine but demanding a small 5% royalty on every project
you release.
As for CryENGINE, it frees you from any kind of royalty, but it is not free to use. It
offers subscription and full-license options, so you can choose between paying once or paying a
small amount each month.
In this project, the price of the rendering engine played an important part in the
selection process, so the choice came down to analysing the differences between Unity
and UE4.
While they are both similar in the quality of the end product, the processes used to get there
are very different. The main difference between the two game engines is the programming
language, environment and methods.
Unity uses the standard approach, letting the developer build scripts using one of
three programming languages (C#, JavaScript or Boo – a Python-inspired language for the
.NET/Mono platform) that can later be attached to game objects to provide them with functionality.
While scripting is the standard approach in game development, this particular workflow –
where scripts are attached to individual objects – is not universal.
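To make that workflow concrete, the following minimal sketch shows the typical shape of such a script; the class name and field are hypothetical illustrations, not code taken from this project. Once saved in the project, the script can be dragged onto any GameObject in the scene to give that object behaviour.

using UnityEngine;

// Minimal example of a Unity component script (hypothetical name and field).
public class Rotator : MonoBehaviour
{
    // Public fields show up in the Inspector, so values can be tweaked
    // without touching code.
    public float degreesPerSecond = 45f;

    // Called by Unity once per rendered frame.
    void Update()
    {
        transform.Rotate(0f, degreesPerSecond * Time.deltaTime, 0f);
    }
}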
On the other hand, UE4 uses a new and innovative approach called blueprints (see
Figure 2: Blueprint System on Unreal Engine 4 (Epic Games, 2015)), where the developer
interacts with blocks of premade actions or events in a node-based environment, and links
them to each other to determine a flow of information or a sequence of actions to be
taken in a given situation.
Figure 2: Blueprint System on Unreal Engine 4 (Epic Games, 2015)
This method requires almost no programming experience for most basic
to intermediate actions, while allowing the use of the C++ programming language for
more advanced actions if required. Using blueprints, the developer can build a full
application without the need to write a single line of code.
While the UE4 approach has some advantages, namely that it can target
people outside the game development industry without requiring them to learn
programming first, for those who already know how to program it can be a difficult
change in workflow and in the way of thinking about game development, which may
take some time to get used to.
Another strong capability that must be considered for this type of project is cross-platform
integration. Both Unity and UE4 provide integrated support for developing
for multiple platforms, spanning desktop (Windows, Mac OSX, Linux), mobile
devices (Android, iOS), game consoles (Xbox One, Playstation 4) and the web (WebGL for Unity
and HTML5 for UE4), and both can target VR devices such as Oculus Rift
and Morpheus. Unity still wins the “war” on multi-platform support, since it offers a
few options that UE4 doesn’t, such as support for the PS Vita and Wii U game consoles,
a wider range of mobile operating systems (Windows Phone,
BlackBerry 10 and Tizen), and Samsung Smart TV development.
Both game engines have good support for importing assets from the major 3D
applications (see chapter 2.5 – 3D Modeling Software, for more information), and Unity
now has native support for 2D game development as well, although that isn’t a factor in the
choice of software for this project.
2.5 3D Modeling Software
There is a wide array of software with good 3D modeling capabilities nowadays. The
most commonly used in the game industry are still Autodesk’s 3D Studio Max and Maya, but a
number of capable competitors have appeared recently, namely The Foundry’s MODO,
Maxon’s Cinema 4D, and Blender.
Any of these software packages is able to achieve the desired results. There is a
general opinion that, while some applications may be better at doing some specific actions
than others, the choice of software highly depends on the artist’s personal preference
regarding its workflow and interface.
The one factor that may be decisive when making a choice for a project that
has a small budget (or none at all) is the price of the software and its payment options.
Blender easily wins the argument on pricing, since it is a free software package that
requires no royalties. Another interesting choice would be Maya LT, a more inexpensive
version of Maya, recently released with indie developers in mind (Masters, 2015, para. 9-13).
Most game engines can import, and most 3D modeling applications can export,
objects or scenes in the most commonly used file formats, so incompatibility is no longer a real
issue, which means that artists can usually choose whatever set of applications they are
comfortable working with and use them without any major problems.
2.6 Light Theory
Most of the rendering process, whether we’re talking about PBR or older rendering
methods, is about how light interacts with a given material applied to an object
(Shirley, Morley, Sloan, & Wyman, 2013, pp. 2-3), so it is extremely important that a 3D
artist – particularly one in charge of texturing, lighting and/or rendering – has a deep
knowledge of light physics and of how light behaves when hitting or passing
through an object or when moving between two different media (such as light passing from air to water).
Absorption and scattering
When light travels in a medium that is not homogeneous, it can be absorbed or
scattered. Absorption is when the light intensity fades and its color is altered, while it is
being transformed into another form of energy and its wavelength changes – but the
direction of the light ray doesn’t. With scattering, the light ray direction is changed with its
deviation depending on the material it’s passing through. As opposed to absorption, when
light scatters, the intensity stays the same. To better understand scattering, we can compare
a thin pane of clear glass with a human ear: both have low absorption, which means that the
light intensity doesn’t fade much, but while there is no scattering when light passes through the
glass – it keeps traveling in the same direction it had before – when it passes
through the ear it is scattered in multiple directions (McDermott, 2015, p. 3).
Reflection and Refraction
Figure 3: Reflection and Refraction (Vallance group, n.d.)
Light behaviour when it hits a surface can be interpreted as three simple situations: it
can bounce off the surface (reflection), pass through the surface from one medium to
another (refraction), or both (see Figure 3: Reflection and Refraction (Vallance group,
n.d.)).
There are two types of reflection – diffuse and specular. Specular reflection is when
light simply bounces off the surface of an object, and diffuse reflection happens when the
light enters the object’s surface (refracts), then is scattered and finally refracted back out
(McDermott, 2015, pp. 3-4).
In specular reflection, the roughness (other names can be used for the same purpose,
depending on the software and workflow, such as smoothness, glossiness or microsurface)
of the surface – surface irregularities – plays an important part. Smoother surfaces, like a
pool ball, behave differently from a rougher surface like rubber. On a smoother surface, the
reflections seem more focused and brighter, despite the reflection intensity being the same.
In diffuse reflection, the roughness of the surface is often negligible, since the light either
travels so long inside the material that it is completely absorbed, or is refracted back out at
roughly the same point it entered (McDermott, 2015, p. 5).
Color
The color of a surface is defined by the light wavelengths the object reflects or
absorbs. The wavelengths that aren’t absorbed are reflected, and those will be the ones we
see and interpret as color (McDermott, 2015, p. 6).
Energy Conservation
The principle of energy conservation plays a big role in physically based rendering
methods. It states that the amount of light reflected by a surface is never higher than the
amount it received. In other rendering methods, the absence of energy conservation may
result in rendering artifacts or fireflies (white dots in the final render that are often
removed in post-production or with clamping or blurring methods – see Figure 4:
Rendering Fireflies (Price, 2014)) (McDermott, 2015, p. 7).
Figure 4: Rendering Fireflies (Price, 2014)
In physically based rendering, the principle of energy conservation is always
enforced, since light is supposed to behave as it actually does in the real world.
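For readers used to the rendering literature, one standard way to state this constraint – given here as a general formulation, not one taken from the cited sources – is as a bound on the BRDF $f_r$: for every incoming direction $\omega_i$,

$\int_{\Omega} f_r(\omega_i, \omega_o)\,\cos\theta_o \,\mathrm{d}\omega_o \le 1$

meaning the total energy reflected over the hemisphere never exceeds the energy received.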
Fresnel
“The Fresnel effect [Augustin-Jean Fresnel (1788-1827)] . . . states that the amount
of light you see reflected from a surface depends on the viewing angle at which you
perceive it.” (McDermott, 2015, p. 7).
Figure 5: Fresnel Effect (McDermott, 2015)
What that means is that, depending on where the viewer is positioned in relation to
the target object and its surface angle, our perception of the light reflection coming from a
given point on that object can vary (see Figure 5: Fresnel Effect (McDermott, 2015)). For
instance, on a sphere with a smooth surface, the reflection value increases up to 100%
specular reflection as the angle of the object’s surface relative to the viewer position
approaches 90º (McDermott, 2015, p. 7).
In a PBR engine, the developer doesn’t need to control the Fresnel values, since their
calculation is already integrated in the PBR process.
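A common approximation used by real-time engines for this effect is Schlick’s approximation – given here as a standard reference formula rather than one taken from the cited source – where $F_0$ is the reflectance at normal incidence and $\theta$ is the angle between the view direction and the surface normal:

$F(\theta) \approx F_0 + (1 - F_0)\,(1 - \cos\theta)^5$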
2.7 Physically Based Rendering
Figure 6: Camera Lens (Wilson, n.d.)
Physically Based Rendering (PBR) is a very recent technology that is changing the
world of 3D rendering, in particular the workflow and applications of real-time rendering
engines.
“Much of what makes a physically-based shading system different from its
predecessors is a more detailed reasoning about the behavior of light and surfaces.
Shading capabilities have advanced enough that some of the old approximations can now
be safely discarded, and with them some of the old means of producing art.” (Russell,
2015, para. 2)
With PBR, we can create materials that react in a realistic way with the environment
that surrounds them (see Figure 6: Camera Lens (Wilson, n.d.)), behaving like an actual
physical material, interpreting light and all its components and behaviours in an accurate
way (McDermott, 2015, p. 3). With older reflection models like the Phong model (Bùi
Tường Phong, 1942-1975), which is the most commonly used model in real-time rendering
engines, the light reflection on an object is described by three factors: ambient color, diffuse
color and specular color. This is not an ideal approach, since by defining these factors it
becomes difficult (or impossible) to achieve a material that looks consistent under
different lighting environments (Nehon, 2014, para. 8-12). This kind of workflow relies
heavily on the artist’s capabilities of adapting the material to the environment it’s going to
be in, which may not be ideal for use in a different environment under different lighting.
With PBR the material creation workflow is simplified and at the same time, it
produces consistent results that adapt to the surrounding environment. The PBR engines
also use the principle of Energy Conservation (see chapter 2.6 – Light Theory), which
prevents a surface from reflecting more light than it has received, making its behaviour
more realistic.
There are two distinct workflows for creating a PBR shader: the
Metalness/Roughness and the Specular/Glossiness workflows (see Figure 7: Metalness and
Specular workflows (McDermott, 2015)). Each of them has three common maps: ambient
occlusion, normal and height maps that are used in the same way as most traditional
rendering methods, and both workflows also have another three exclusive maps that make
the workflows different despite achieving similar results in the end (McDermott, 2015).
Figure 7: Metalness and Specular workflows (McDermott, 2015)
When using the Metalness workflow, we use the base color (also known as Albedo
map), roughness and metallic maps. The base color represents the raw color of the
material, and it’s similar to a diffuse map in older workflows, but without containing any
shading information other than color – diffuse maps often have ambient occlusion baked in
through blending. The base color is an RGB map that defines the reflected color of
the material when it is a dielectric, and the reflectance values when it is a metal. The metalness
map defines where the material is metallic and where it is dielectric: it’s a grayscale
map that tells the engine which parts of the material should be interpreted as reflected color
(dielectric) or as metal reflectance values – it often contains only black and white,
with few intermediate values. The roughness map, as the name implies, represents
the degree of surface irregularities of the material (see chapter 2.6 – Light Theory) and it’s
a grayscale map. As stated by Wes McDermott (2015), “The roughness map is the most
creative map as it allows the artist to visually define the character of a surface. In essence,
it allows you to creatively tell a story about the condition of a surface.” (p. 10).
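To make the metalness workflow concrete in the engine used by this project, the sketch below assigns these maps to Unity 5’s built-in Standard shader, which follows the Metalness workflow and exposes smoothness (the inverse of roughness) in the alpha channel of the metallic map. The component and field names are hypothetical; this is an illustrative sketch, not code from this project.

using UnityEngine;

// Hypothetical helper that builds a Metalness-workflow material at runtime
// with Unity 5's Standard shader. Textures would be assigned in the Inspector.
public class PbrMaterialExample : MonoBehaviour
{
    public Texture2D albedoMap;    // base color, no baked shading
    public Texture2D metallicMap;  // grayscale metalness; smoothness in alpha
    public Texture2D normalMap;    // RGB-encoded surface normals
    public Texture2D occlusionMap; // grayscale ambient occlusion

    void Start()
    {
        Material mat = new Material(Shader.Find("Standard"));
        mat.SetTexture("_MainTex", albedoMap);
        mat.SetTexture("_MetallicGlossMap", metallicMap);
        mat.SetTexture("_BumpMap", normalMap);
        mat.SetTexture("_OcclusionMap", occlusionMap);
        // Keywords tell the shader which optional maps it should sample.
        mat.EnableKeyword("_METALLICGLOSSMAP");
        mat.EnableKeyword("_NORMALMAP");
        GetComponent<Renderer>().material = mat;
    }
}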
With the Specular workflow, the three particular maps used are the diffuse/albedo,
specular and glossiness maps. This workflow is easier for seasoned artists to get used to,
since it’s similar to the traditional workflows. The diffuse map (RGB) works in a similar
way as in the Metalness workflow, except it doesn’t account for reflectance values. The
specular map defines the reflectance values, and it’s an RGB map that is supposed to have
greyscale values for dielectric materials and color (if needed) values for metallic surfaces,
since metals absorb light at different wavelengths depending on the type of metal in use.
The glossiness map is a grayscale map that describes the surface irregularities in a similar
way to the roughness map in a Metalness workflow, but inverted – while in the
roughness map a white value represents a rough surface, in the glossiness map it represents a
smooth (glossy) surface (McDermott, 2015, pp. 13-17).
Regarding the maps common to both workflows, the ambient occlusion map is a
grayscale map that defines how much the ambient environment lighting is accessible to a
surface point, which affects the diffuse contribution to the final result. The height map is a
grayscale map that helps in adding some apparent depth and realism to normal maps,
which are RGB maps that simulate surface detail, where the R, G and B values correspond
to the X, Y and Z coordinates of the surface normal (“Normal Mapping”, 2015).
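Since a texture channel stores values in the $[0, 1]$ range while normal components live in $[-1, 1]$, engines decode each texel of a normal map with the usual remapping – a standard convention, stated here for clarity rather than taken from the cited source:

$\vec{n} = 2\,(R, G, B) - (1, 1, 1)$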
2.8 Existing Projects
Real-Time ArchViz is a very recent area that hasn’t yet been explored extensively,
namely regarding the usage of PBR-capable rendering engines. Most projects developed in this area
have a low degree of realism and/or little interactivity.
IKEA Home Planner
Figure 8: Ikea Home Planner (IKEA, 2014)
The furniture multinational IKEA released an online tool that allows visitors to
create a floor plan of their rooms and add furniture to it. Once the plan is complete,
it lets you see the room in three dimensions, although not with photorealistic graphics – it
presents the room as 3D models with little detail (a very low polygon count) and low-resolution
textures.
The purpose of this tool is to let clients simulate how a future purchase
would look in the intended space in their homes.
UE4Arch
Figure 9: London Apartment (UE4Arch, 2015)
Using the Unreal Engine 4 game engine, this company specializes in interior 3D
walkthroughs of the kind this project proposes to achieve. They (UE4Arch,
2015) advertise themselves as “a company dedicated to creating and pushing forward
hyper-realistic virtual creations that can be experienced in virtual reality” (para. 1), and
they certainly are.
Their 3D interface allows the user to walk around a 3D environment (so far
consisting only of architecture projects) while interacting with the objects in the scene,
changing their colors or materials.
XOIO studios
Figure 10: Winter Chalet (XOIO Studios, 2015)
This company from Berlin works in various areas of 3D art: product visualization,
animation, architecture visualization, concept art, etc.
They have recently started using Unreal Engine 4 for architecture visualization
purposes, still without interactivity (just a video fly-through showing the house’s interior,
exterior and surrounding environment). They seem interested in pursuing this approach,
and interactivity is being implemented and will soon be released.
Tetravol
Figure 11: Hyatt Resort (Tetravol, 2006)
“Tetravol is an architecture and urbanism buffet specialized on Computer Image
Generation (...)” (Tetravol, n.d.), is how Tetravol describes itself. This Spanish company
uses Unity3D to provide downloadable 3D environments related to architecture
visualization, which allows clients to try their product.
They don’t provide interaction with the environment other than walking around, and
the graphics are far from photorealistic, but the environments are of considerable size, and
there are a lot of different previews to choose from, ranging from apartments and resorts to
yachts and museums.
2.9 Chapter Summary
The usage of the combination of Real-Time Rendering Engines and Physically Based
Rendering for Architecture Visualization purposes is a very recent field, and it has yet to
be properly explored. This means there are a lot of implementations, techniques and
opportunities to be discovered, which makes it an area worth studying.
The fact that the main software companies have reduced or even eliminated the prices of their
software, adopting other forms of remuneration such as royalties, allows a
wider range of users to try and learn how to develop with professional software, which
will lead to faster development in the field.
There is no denying that this is an expanding field, and that it is here to stay and make
a difference. The development and progress of the field of architecture visualization has
reached a point where the next natural step is to allow a client to interact with a
photorealistic environment, previewing and changing it without
the need to be physically there.
Chapter 3
3. Methodology and Planning
Planning often plays the most important role in a project. It is used to prevent
mistakes and to create a pre-defined workflow that helps build the project in a sequential,
organized manner.
In this particular project, there were some limitations on how deep the planning
could go. Both the technology and its application are new, so the lack of documentation
and work standards is noticeable. One of this project’s purposes was also to explore new
tools, without being fully aware of the mistakes that inevitably come with
experimentation, which makes planning in advance difficult.
Despite that, the planning was composed of multiple distinct parts, where some may
have been used in a somewhat unorthodox way (see chapter 3.2 – UML), but always
serving their purpose and helping the project move forward with as few obstacles as
possible.
3.1 Project Schedule
Defining deadlines for each step of the project is often crucial, particularly in projects that
involve multiple phases with strict deadlines and milestones.
Figure 12: Gantt Chart
While a Gantt chart (see Figure 12: Gantt Chart, or Appendix A – Gantt Chart for a better
visualization) isn’t ideal for very large projects, it is an extremely helpful tool in
smaller projects that need a timeline and task organization (Milosevic, 2003, p. 176), such as this
one. By using a Gantt chart the developer(s) get an organized view of the tasks and deadlines to
come, and can adjust the time they’re going to spend on a particular task accordingly.
3.2 UML
In most projects that involve programming, Unified Modeling Language (UML) is a
standard tool used for planning the execution of the project. As stated by Fowler (2004), “A good
diagram can often help communicate ideas about a design” and “diagrams both help understanding
and communicate that understanding throughout a team” (p.26).
The most important diagrams for UML planning that give the developer (and the client, if
necessary) an overview of the whole project are Use Case and Class diagrams. The Use Case
diagrams show what a project can do, while the Class diagrams show how the project is structured
– both of them in a simple and visual way, using defined standards that are easily understandable.
Programming for recent game engines such as Unreal Engine and Unity has a very different
workflow from traditional programming methods. When using a game engine, the developer
must build separate scripts that are intended to be attached to a particular object in a particular
scene, which could quickly make a Class diagram very complex and unintelligible, hence losing its
purpose.
Figure 13: Use Case Diagram
On the other hand, a Use Case diagram is a very important tool for any project that involves
using a game engine, since games and interactive applications – and game engine workflows
in general – are built around what the application, or a specific element of it, can
do. By creating a Use Case diagram, we can easily get a visual representation of what the project
will be capable of doing, and start creating prototypes (see chapter 3.4 – Prototyping) based on the
UML diagram.
The use of a single UML Use Case diagram (see Figure 13: Use Case Diagram, or
Appendix B – UML Use Case Diagram for a better visualization) proved crucial to this project’s
workflow, being used throughout all steps of the development.
3.3 Interaction Mode
Since this project involves interactivity between the user and the environment (furniture, walls,
etc.), an interaction mode needs to be specified before the development phase begins.
There are two common interaction modes that could be used in this situation: either the user
moves the camera near a piece of furniture and presses a key to interact with it, or he is given the
possibility of using a mouse cursor to interact with the environment.
In this case, we opted for the second option, since each object will have more than one action
associated with it, and the user might have to interact with menus and submenus, both in the
furniture interaction and in the overall environment interaction features.
3.4 Prototyping
“Getting from a design idea to a realised project is not always straightforward and may
require a considerable amount of prototyping to get things right” (Ambrose & Harris, 2009, p.98).
Ambrose and Harris (2009) inferred that prototyping is an effective tool that allows a
developer to experiment with various configurations or solutions and decide which one suits the
project better (p.98).
Figure 14: Graphical User Interface Prototype
Two different prototyping processes were used at two points during this project: a static,
visual, almost final prototyping process for the graphical user interface (GUI) of the software (see
Figure 14: Graphical User Interface Prototype, or Appendix C – Graphical User Interface
Prototypes for the complete collection of prototypes created), and a dynamic and interactive
prototype for the software interactivity features (Cerejo, 2010, para. 13-16) that would persist
throughout the whole development process.
3.5 Choosing the Software
As we’ve concluded in previous chapters (see chapters 2.4 – Game Engines and 2.5 – 3D
Modeling Software), nowadays, the choice of which software to use is often decided by the artist’s
personal preference, since there is a wide array of viable applications for each particular task.
Nevertheless, software choices must be made at the start of the project, to avoid incompatibilities.
The applications were chosen based on their purchase price (when applicable) and
functionality, prioritizing applications that provide a faster workflow over others when
the price is reasonable:
• The Gantt chart (see Appendix A – Gantt Chart) was made using Gantt Project (Barashev, 2003);
• The UML diagram (see Appendix B – UML Use Case Diagram) was made using a non-commercial version of Visual Paradigm (Visual Paradigm International, 1999);
• Prototypes and final 2D art for the Graphical User Interface were made with Adobe Illustrator (Adobe Systems Incorporated, 1987) because of its fast workflow and ease of use;
• The Game Engine used for merging everything together was Unity 5 (Unity Technologies, 2005), along with its built-in programming editor MonoDevelop (Xamarin Inc., 2011, & MonoDevelop Contributors, 2004);
• The 3D modeling software used for the creation of 3D assets to help in the development process and presentation was Blender (Roosendaal, 2002).
3.6 User Guide
The nature of this project requires the creation of a user guide (see Appendix D – User
Manual), so that future developers (who will be the users of this project) have a clear
understanding of how to create and set up a scene inside Unity while using this project’s features to
their full extent.
3.7 Chapter Summary
Every step of the planning phase of a project is important and must be given enough time,
to make sure problems can be avoided later on.
By using tools such as Gantt charts, UML and prototyping, and by thinking
carefully about what other tools and knowledge will be needed in the development phase
of the project, we can ensure a smoother workflow without too many mistakes and
unexpected problems.
Most of the software was chosen with its cost in mind, since this is a zero-budget
project and most free tools nowadays can do as much as paid tools, although some
tasks may need a little more effort to achieve the same results.
Chapter 4
4. Development
4.1 Preparing Unity
Before starting a project in Unity, there are a few options that need our attention so that
the overall project and the scene we’ll be working on are configured properly.
Our application will need a few input keys set up. Most of the input keys for standard
movement are already defined in Unity, but we’ll need one extra key to activate the
Interaction Mode (see Chapter 3.3 – Interaction Mode), which can be added in Unity’s input
configuration settings (the Input Manager). This doesn’t give the key any functionality yet;
that will come later in development, through scripting.
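As a sketch of what that later scripting can look like – the button name “InteractionMode” and the class name are assumptions that would have to match the entry created in the Input Manager, not code taken from this project:

using UnityEngine;

// Sketch of reading a custom input button each frame.
public class InteractionToggleExample : MonoBehaviour
{
    private bool interactionMode;

    void Update()
    {
        // GetButtonDown is true only on the frame the key is first pressed.
        if (Input.GetButtonDown("InteractionMode"))
        {
            interactionMode = !interactionMode;
            Debug.Log("Interaction Mode: " + interactionMode);
        }
    }
}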
One issue that is often noticeable, particularly when developing FPS games (which are, in
essence, what this project resembles), is a light-coloured line between two objects that have a gap
between them (Glick, n.d.). That can happen for two reasons: first, the objects may not be placed as
close against each other as they should be, so the gap shows – but this isn’t always the case, since
sometimes the line appears even when the objects are as close as possible – and second, the camera
background color may be set to a lighter color than it should be. The latter can be easily
fixed by setting it to black, or at least a darker color, in the camera settings.
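The background colour can be changed directly in the camera’s Inspector; as a tiny illustration – the component name is hypothetical, not part of this project – the same fix can also be applied from a script attached to the camera:

using UnityEngine;

// One-line safeguard against light-coloured seams between objects.
public class CameraBackgroundFix : MonoBehaviour
{
    void Start()
    {
        GetComponent<Camera>().backgroundColor = Color.black;
    }
}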
Unity provides a lot of standard assets that can be imported into our scene, which
saves a lot of work and assures us that the assets provided will work as intended on all platforms
(they have been extensively tested). There are at least two standard asset packages that interest us
and that we will be using in this project: the movement package, which includes a first-person
camera prefab with mouse-look and movement scripts and associated sounds, and a post-effects
package that will help our environment look more realistic by applying some of the effects
usually seen when using actual cameras instead of virtual ones – virtual cameras often look
too clean, lacking imperfections and real-world physics effects (Calver, 2015, video 13).
4.2 Scene Hierarchy
In this project, the scene hierarchy differs from the usual hierarchy of Unity projects, since one
of the objectives of our default scene is to be easily personalized by someone who is not very
familiar with Unity. The usual method of working with scripts in Unity is to attach a script to the
object it is going to handle – for instance, a script that changes a submenu’s text would normally be
attached to that same submenu – but, in this case, most of the scripts must
be placed near the base of the hierarchy (see Figure 15: Scene Hierarchy), and most of the
functionality must be merged together in a small number of places, to make it easier for
inexperienced developers to personalize the scene.
Figure 15: Scene Hierarchy
For more information on how to set up the scene, see Appendix D – User Manual.
4.3 Interacting With the System
Users can interact with the system using two different methods (see Figure 16: User
Interaction Overview): the system’s Graphical User Interface, which appears when the application is
in Interaction Mode (see Chapter 3.3 – Interaction Mode), and the objects’ menus, which are
shown as a graphical interface floating over the selected object(s).
Figure 16: User Interaction Overview
With the GUI interaction the user can turn off the sounds (the ambient sound and the music
independently) using the icons in the lower right corner, change the time of day by clicking the
clock image in the upper right corner, and use the main menu in the lower left corner, where the
user can (from top to bottom): take a screenshot of the current view, change the weather (affecting the
ambient colors), change the material of the walls, and exit the application.
The object menu floats over the selected object and is, initially, a single horizontal strip of
options. The available options are (from left to right): play the object’s animation (when
available), change the object’s material, view information about the real object, such as its brand and
dimensions, and close the floating menu.
4.4 Interaction Mode and Object Selection
When the Interaction Mode button (see Chapter 3.3 – Interaction Mode) is pressed, the
software switches between the normal visualization mode, where the user can walk around the
scene without any visual User Interface (UI) or cursor lowering its sense of realism, and the
interaction mode, where the UI menus appear and the user can no longer move the camera, but
can interact with the scene by using a mouse cursor to click objects and their menus.
This is accomplished by a script (see Appendix F – Interaction Mode Script) that
detects when the Interaction Mode button is pressed and toggles between the two states, applying
the changes to the user interface; it uses a Raycast to check when an object is clicked (Unity
Technologies, 2015) and UI buttons to detect clicks on the UI options.
When a click on an object is detected by the use of Raycast, the Interaction Mode script
searches for the Menu Handler script (see Appendix G – Menu Handler Script) that is attached to
that same object and calls the function ShowMenu() in it. That will show the individual object
menu that can now be interacted with.
4.5
Personalization
The use of classes as a means to store information about an object, even if the class has no other
methods, is very common in object-oriented programming. In Unity it isn't a very
common practice to create classes without declaring them as children of the MonoBehaviour class,
which provides the default Unity functionality such as the Start() and Update() methods (Unity
Technologies, 2015), since scripts are usually attached to the object they hold information
about or interact with.
In this project, there was a need to create two simple classes to store information about
objects (see Appendix H – Furniture Class), as it was a practical way to organize the information
that would be shown by other scripts. By using this method and adding a Serializable attribute
(Unity Technologies, 2015) to the class, the information of a given piece of
furniture becomes accessible to future developers in an organized manner, despite being paired
with other types of information.
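As an illustration of how such a class is exposed, the sketch below is simplified and its class name hypothetical, but the field names match the ones used by the Menu Handler script (see Appendix G – Menu Handler Script): a MonoBehaviour declares a public Furniture field and, because Furniture is marked Serializable, Unity serializes it and displays its variables as an editable group in the Inspector.

using UnityEngine;

// Simplified, illustrative sketch – the project's real Object Handler script contains more than this.
public class ObjectHandlerSketch : MonoBehaviour {
    public GameObject meshObject;        // the animated mesh this handler controls
    public Furniture objectInformation;  // editable as a group in the Inspector thanks to [System.Serializable]
    public Material[] materials;         // materials the user can switch between
}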
Because classes in Unity work differently from traditional programming, the
traditional data validation methods (validation inside properties) don't apply. Instead, the data
validation is done inside an OnValidate default method (see Appendix H – Furniture Class), which
runs each time the values of the variables are changed (including in the editor) (Unity
Technologies, 2015).
[Tooltip("Maximum 20 characters")]
Inside the classes, each variable is preceded (when relevant) by a Tooltip attribute (see the
example above) that sets the tooltip text shown in the Unity editor when the
developer hovers the mouse cursor over the variable name (see Figure 17: Unity Tooltip Example).
Figure 17: Unity Tooltip Example
4.6
User Interface
The User Interface (UI) (see Appendix C – Graphical User Interface Prototypes) is
composed of elements of the Unity UI type. The UI elements are a recent feature, available
only since version 4.6 of Unity. The old GUI elements were little more than images or text,
while the new UI brings many new functionalities and elements, such as scroll views (see Figure 18:
ScrollView Example), a new anchoring system that allows the UI to scale properly to screens of
different resolutions, and 9-slicing (a method of slicing an image into a 3 by 3 matrix, where the
middle square represents the background of a button and the marginal squares represent its
borders), which allows the creation of buttons with minimal stretching on resizing.
Figure 18: ScrollView Example
The materials lists are populated dynamically (at runtime) by a
MaterialContentHandler script (see Appendix I – Material Content Handler Script). This script
gets the materials the developer included in the object when adding it to the scene and, using a
default list button prefab (which can be edited if needed), creates multiple buttons and adds
them to the list.
The clock in the UI (see Appendix C – Graphical User Interface Prototypes) works as a
button, activating the rotation of the sun (a directional light) until it reaches the desired position. The sun
works together with the skybox to create the light transitions between day and night (see Figure
19: Sun Movement).
Figure 19: Sun Movement
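The sun rotation script itself is not listed in the appendices; as an illustration of the idea only (the class, field and method names below are assumptions, not the project's actual code), a directional light can be rotated towards a target orientation over several frames:

using UnityEngine;

// Hypothetical sketch: rotates the directional light ("sun") towards a target
// orientation after the clock button is clicked, producing the day/night
// transition together with the skybox.
public class SunRotatorSketch : MonoBehaviour {
    public Transform sun;                // the directional light's transform (assumption)
    public float degreesPerSecond = 30f; // rotation speed (assumption)

    private Quaternion target;
    private bool rotating = false;

    // Hooked to the clock button's onClick event; the pitch angle encodes the time of day.
    public void SetTimeOfDay(float sunPitchDegrees) {
        target = Quaternion.Euler(sunPitchDegrees, sun.eulerAngles.y, 0f);
        rotating = true;
    }

    void Update() {
        if (!rotating) { return; }
        sun.rotation = Quaternion.RotateTowards(sun.rotation, target, degreesPerSecond * Time.deltaTime);
        if (Quaternion.Angle(sun.rotation, target) < 0.1f) { rotating = false; }
    }
}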
The weather buttons alternate between three different procedural skyboxes that were created
inside Unity. Their colors and settings vary according to the weather they represent.
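As an illustration of how such a switch can work (the class and method names are assumptions, not the project's actual script), swapping the active skybox in Unity amounts to assigning a different material to RenderSettings.skybox:

using UnityEngine;

// Hypothetical sketch: swaps the scene skybox between the procedural skybox
// materials, one per weather button.
public class WeatherSwitcherSketch : MonoBehaviour {
    public Material[] skyboxes; // one procedural skybox material per weather type

    // Hooked to each weather button's onClick event, passing that button's index.
    public void SetWeather(int index) {
        RenderSettings.skybox = skyboxes[index];
        DynamicGI.UpdateEnvironment(); // refresh ambient lighting to match the new sky
    }
}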
The sound system is composed of two different sound tracks: an ambient track and a
music track. They can be turned off independently of each other with the buttons in the lower
right corner.
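A minimal sketch of this behaviour (illustrative only – the project's actual GUI script may differ, and the class and method names are assumptions) simply mutes and unmutes two AudioSource components independently:

using UnityEngine;

// Hypothetical sketch: toggles the ambient and music AudioSources independently
// when their respective corner buttons are clicked.
public class SoundTogglesSketch : MonoBehaviour {
    public AudioSource ambient;
    public AudioSource music;

    public void ToggleAmbient() { ambient.mute = !ambient.mute; }
    public void ToggleMusic()   { music.mute = !music.mute; }
}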
4.7
Obstacles
The new UI, although it has many advantages, can be an obstacle to development when
the developer is not acquainted with it, and it raised many problems throughout the development
phase of this project. Most of those problems were related to the use of a ScrollRect element
along with a mask, and to populating this element with the list of materials of a given object.
By default, the ScrollRect doesn't mask its content (see Figure 20: ScrollRect Without
Mask), so it shows the underlying list's content whether it is inside or outside the ScrollRect.
This problem was solved by using a Mask component along with an empty Image
component – the Image component occupies the inside of the ScrollRect and the Mask
component uses that image as a mask, showing only the content that overlaps with it.
Figure 20: ScrollRect Without Mask
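In the project this setup is done in the editor, not in code; purely to illustrate how the three components relate (the class name below is hypothetical), the same arrangement could be created at runtime like this:

using UnityEngine;
using UnityEngine.UI;

// Illustrative only: shows how the ScrollRect, the empty Image and the Mask relate.
// In the project these components are added and configured in the editor.
public class MaskedScrollViewSetupSketch : MonoBehaviour {
    void Awake() {
        // The empty Image defines the visible area of the ScrollRect...
        Image maskImage = GetComponent<Image>();
        if (maskImage == null) { maskImage = gameObject.AddComponent<Image>(); }

        // ...and the Mask clips the content list to that image's area.
        Mask mask = gameObject.AddComponent<Mask>();
        mask.showMaskGraphic = false; // use the image only as a clipping mask, don't draw it
    }
}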
Another important feature in the UI that required some effort before it worked properly
was the screenshot button's functionality. While Unity provides a function that allows
screenshots to be easily implemented, when the user clicks the "take screenshot"
button the application is in Interaction Mode (see Chapter 3.3 – Interaction Mode), so the UI is being
shown and would appear in the screenshot. The Unity function for taking screenshots also has an
inconvenient behavior (in this case): when the file already exists, it automatically overwrites it
instead of renaming it.
This was not the desired result, and the solution seemed simple: hide the UI when taking a
screenshot and give a unique name to each screenshot image. But this was not easily
accomplished: a ScreenshotHandler script had to be created to handle the file naming – appending
the date and time to the end of the file name – hide the UI, and add a delay of 0.5
seconds before and after the screenshot is taken (see Appendix J – Screenshot Handler Script).
The delay serves two purposes. First, since the date and time appended to the filename
contain the time in seconds, adding a delay of 1 second between screenshots ensures that
every screenshot has a unique name while keeping the naming of the images understandable – there
are other ways of generating unique identifiers for an image name, but they would just be a
meaningless group of characters. Second, since the UI menus need to be turned off and on again while
the screenshot is taken, and toggling them isn't instant (it waits for the next frame
to be rendered), it is good to have some delay between the operations. As a bonus,
it gives the user a visual hint that a screenshot was taken (the blinking of the user interface
menus).
The most problematic obstacle encountered during the development process was related to
the objects' animations. While Unity has a surprisingly easy interaction with Blender (it supports
Blender's standard save files, without the need to export the object as FBX or another standard
format), the way these files work inside Unity created a big issue – fortunately, that issue would
ultimately lead to an even more user-friendly interaction in the final product.
The structure of the furniture objects was planned with a pre-created prefab in mind: the
user would drag it into the scene and then customize its values and components (such
as the mesh and animations) by dragging-and-dropping them from the imported object into the
prefab. While this initially seemed like a good solution, it complicates the workflow – which should
be as easy as possible for inexperienced users – and brings up a lot of problems that would be
too hard to solve without creating a number of additional issues.
The imported objects aren't just a simple mesh with animations. They often contain an
armature and bones, and they are positioned and moved according to them. They are also
composed of a parent GameObject that allows the animated mesh to be positioned in the world –
when trying to animate an unparented mesh, it would jump back to the origin point (0, 0, 0) before
performing its animation.
The solution to this obstacle was a complete reformulation of the whole object-interaction
system and scripts: instead of creating a default prefab where the user would customize object
values, the system now lets the user drag-and-drop the imported object straight into the
scene and add a prefab with all the functionality and menus into the object's hierarchy. This method
requires much less effort from the user: the prefab only needs to be told where the mesh object is to work
properly, and the user can then customize the furniture piece's information that will be shown in
the menus (such as name, description and materials).
4.8
Chapter Summary
In this chapter we approached the project from a development point of view, going
through the process of preparing the scene and analysing some of the methods used
for creating the main scripts required for this project.
There is also an overview of the scene hierarchy and of how the personalization process
works, both elements designed with ease of use for the future developer in mind.
Finally, the most relevant problems and obstacles encountered throughout the development
process were analysed, along with an explanation of the solution used to solve each of them.
Chapter 5
5. Conclusion and Development Prospect
This project is, by its nature, a perpetual work in progress: the developer always
has the possibility of implementing new features, or adding new materials, audio
tracks, or other elements that would increase the project's value.
Although this project's final result lacks some of those features, it fulfills the goal of
the project: creating a final version of a new workflow that removes the need for game
development and programming knowledge when creating an interactive real-time 3D
architecture visualization application.
Every aspect of the planning and development was a rewarding experience and a new
learning opportunity, and while each of them presented smaller or bigger obstacles, every
obstacle was tackled and overcome, with something learned from it.
The usage of prefabs and scripts in Unity can be a confusing experience for the new
user, since there isn't a lot of documentation explaining how the architecture of a system
should be organized in Unity (in essence, explaining exactly where scripts should or shouldn't be
attached, and which ways of making them interact are better).
As this project was developed inside Unity 5, the development prospects can be as
big as one wants them to be. Every aspect of the project has something that can be
improved, as stated before, and beyond the features inside the actual project,
since Unity is a platform-friendly engine, it could easily be adapted to run on different
platforms (mobile, consoles, web, etc.) – one platform that looks extremely appealing
is the Oculus Rift virtual reality headset.
Overall, this project was a success: the primary requirements were met, a lot of
knowledge and experience was gained from it, and it may eventually help people create
better virtual environments – environments that provide a better, more futuristic user
experience and lead users to see the company behind them as avant-garde and
trustworthy.
6. References
Adobe Systems, Inc. (1987). Adobe Illustrator CS6 (Version 16.0.0 – 64 Bit) [Computer
Software]. Mountain View, CA: Adobe Systems, Inc.
Ambrose, G., & Harris, P. (2009). The Fundamentals of Graphic Design. Worthing,
Sussex: AVA Publishing
Architecture. (2015, May 29). Retrieved May 30, from
http://en.wikipedia.org/wiki/History_of_architecture
Bettinger, B. (2013, June 21). Pixar by the Numbers – From Toy Story to Monsters
University [Blog Post]. Retrieved from http://collider.com/pixar-numbersmonsters-university/2/
Barashev, D. (2003). Gantt Project (Version 2.6.1) [Computer Software]. Brno: Gantt
Project Team
Roosendaal, T. (2002). Blender (Version 2.70a) [Computer Software]. Amsterdam:
Blender Foundation.
Calver, M. (2015). Digital Tutors – Creating Architectural Visualizations Using Enlighten
in Unity. Retrieved from http://www.digitaltutors.com/tutorial/2141-CreatingArchitectural-Visualizations-Using-Enlighten-in-Unity
Cerejo, L. (2010, June 16). Design Better And Faster With Rapid Prototyping. Smashing
Magazine. Retrieved from http://www.smashingmagazine.com/2010/06/16/designbetter-faster-with-rapid-prototyping/
Computer-Generated Imagery. (2015, May 13). Retrieved May 30, from
http://en.wikipedia.org/wiki/Computer-generated_imagery
Computer Graphics. (2015, April 23). Retrieved May 30, from
http://en.wikipedia.org/wiki/Computer_graphics
Epic Games. (2004). Introduction to Blueprints. Retrieved from
https://docs.unrealengine.com/latest/INT/Engine/Blueprints/GettingStarted/index.ht
ml
Foley, J., van Dam, A., Feiner, S., & Hughes, J. (1996). Computer Graphics, Principles
and Practice (2nd ed.). Boston, MA: Addison-Wesley Professional
Fowler, M. (2004). UML Distilled: A Brief Guide to the Standard Object Modeling
Language (3rd ed.). Boston, MA: Addison-Wesley Professional
Game Engine. (2015, May 19). Retrieved May 22, from
http://en.wikipedia.org/wiki/Game_engine
Glick, C. (n.d.). Digital Tutors – Beginner’s Guide to Unity. Retrieved from
http://www.digitaltutors.com/tutorial/572-Beginners-Guide-to-Unity
IKEA. (2014). IKEA Home Planner. Retrieved from
http://kitchenplanner.ikea.com/JP/UI/Pages/VPUI.htm?Lang=en-GB
Imtiaz, K. (2014, March 23). The Last of Us Real-Time vs Pre-Rendered Models Tease
The Graphical Jump For Uncharted On The PS4. Gearnuke. Retrieved from
http://gearnuke.com/last-us-cutscene-vs-gameplay-graphics-tease-graphical-jumpexpect-uncharted-ps4/
Lobo, R. (2014, October 9). How Star Wars Changed the Special Effects Industry. The
New Economy. Retrieved from http://www.theneweconomy.com/home/how-starwars-changed-the-special-effects-industry
MacLeod, R., Norgren, N., Lopez, R., & Cristerna, C. (2012, March 29). Neoscape on the
state of the Archviz Industry [Interview]. Retrieved from
http://cgpress.org/archives/cgarticles/neoscape_on_the_state_of_the_archviz_indust
ry
Masters, M. (2015, January). 3ds Max, Maya LT or Blender - Which 3D Software Should I
Choose for Asset Creation? [Blog Article]. Retrieved from
http://blog.digitaltutors.com/3ds-max-maya-lt-blender-3d-software-choose-assetcreation/
Masters, M. (2015, February). Unity, Source 2, Unreal Engine 4, or CryENGINE – Which
Game Engine Should I Choose? [Blog Article]. Retrieved from
http://blog.digitaltutors.com/unity-udk-cryengine-game-engine-choose/
McDermott, W. (2015). The Comprehensive PBR Guide by Allegorithmic (vol. 1)- Light
and Matter: The theory of Physically-Based Rendering and Shading
[Allegorithmic]. Retrieved from https://www.allegorithmic.com/pbr-guide
McDermott, W. (2015). The Comprehensive PBR Guide by Allegorithmic (vol. 2) - Light
and Matter: Practical guidelines for creating PBR textures [Allegorithmic].
Retrieved from https://www.allegorithmic.com/pbr-guide
Milosevic, D. (2003). Project Management Toolbox: Tools and Techniques for the
Practicing Project Manager. Hoboken, NJ: John Wiley & Sons
Nehon. (2014, November 25). Physically Based Rendering – Part One [Blog post].
Retrieved from http://jmonkeyengine.org/299803/physically-based-rendering-partone/
nervouschimp, RobertZamber, & Son Kim (2015, March 8). Could Unreal Engine replace
V-ray in the arch-viz industry? [Forum]. Retrieved from
http://community.thefoundry.co.uk/discussion/topic.aspx?f=4&t=106749&page=0
Normal Mapping. (2015, April 29). Retrieved May 24 from
http://en.wikipedia.org/wiki/Normal_mapping
Price, A. (2014, June 26). 7 Ways to Get Rid of Fireflies Once and For All [Blog Article].
Retrieved from http://www.blenderguru.com/articles/7-ways-get-rid-fireflies/
Rawn, E. (2015, March 10). Unreal Visualizations: 3 Pros and 3 Cons of Rendering with a
Video Game Engine [Blog Post]. Retrieved from
http://www.archdaily.com/607849/unreal-visualizations-3-pros-and-3-cons-ofrendering-with-a-video-game-engine/
Rendering. (2015, May 29). Retrieved May 30, from
http://en.wikipedia.org/wiki/Rendering_%28computer_graphics%29
Russell, J. (2015). Basic Theory of Physically-Based Rendering. Retrieved from
https://www.marmoset.co/toolbag/learn/pbr-theory
Shirley, P., Morley, R., Sloan, P., & Wyman, C. (2013). Basics of Physically-based
Rendering (SIGGRAPH Asia course notes). Retrieved from
http://www.cs.utah.edu/~shirley/papers/
Tetravol. (n.d.). About Tetravol. Retrieved from http://www.tetravol.com/CompanyEng.html
UE4Arch. (2015). UE4Arch – About. Retrieved from http://ue4arch.com/
Unity Technologies. (2015, April 27). Raycasting. Retrieved from
https://unity3d.com/pt/learn/tutorials/modules/beginner/physics/raycasting
Unity Technologies. (2015). MonoBehaviour. Retrieved from
http://docs.unity3d.com/ScriptReference/MonoBehaviour.html
Unity Technologies. (2015). Scripting API: MonoBehaviour.OnValidate(). Retrieved from
http://docs.unity3d.com/ScriptReference/MonoBehaviour.OnValidate.html
Unity Technologies. (2015). Scripting API: Serializable. Retrieved from
http://docs.unity3d.com/ScriptReference/Serializable.html
Unity Technologies. (2005). Unity (Version 5.0.0f4 Personal) [Computer Software]. San
Francisco, CA: Unity Technologies
Vallance group – Chemistry Research Laboratory. (n.d.). Reflection and Refraction of
Light. Retrieved from http://vallance.chem.ox.ac.uk/pdfs/ReflectionRefraction.pdf
Visual Paradigm International (1999). Visual Paradigm Community Edition (Version 12.0)
[Computer Software]. Hong Kong: Visual Paradigm International
Walters, J. (2002). Pioneers. Retrieved from
http://www.earlycinema.com/pioneers/lumiere_bio.html
Wilson, J. (n.d.). Lens. Retrieved from http://www.marmoset.co/viewer/gallery
Xamarin Inc. (2011). & MonoDevelop Contributors (2004). MonoDevelop (Version 4.0.1)
[Computer Software]. Provo, UT: Novell, Inc.
XOIO Studios. (2015). XOIO – Profile. Retrieved from http://xoio.de/
Appendices
APPENDIX A – GANTT CHART
APPENDIX B – UML USE CASE DIAGRAM
APPENDIX C – GRAPHICAL USER INTERFACE PROTOTYPES
Gameplay Example – Without GUI
Gameplay Example – With GUI
Gameplay Example – Interacting With an Object
Gameplay Example – Selecting Object Material
Gameplay Example – Object Information
Gameplay Example – Selecting Weather
APPENDIX D – USER MANUAL
1. Creating and setting up a new scene:
When opening Unity, you will be prompted to select an existing project or create a
new one. For the sake of this manual, let's create a new project by clicking the "New Project"
button. After selecting the project's name and the desired location, click "Create Project".
Using the standard view, you can see:
 The Hierarchy panel, on the left – in this panel you can see the objects that
are currently in your scene;
 The Project panel, on the lower left corner – here you can add, remove, and
browse through the assets you will use in your project;
 The Scene/Game panel, in the middle – here you can see your scene, change
your scene objects' positions, and drag and drop assets from the Project
panel directly into the scene to add them to your scene. When
you click the "Play" button on top, the selected tab on this panel will
become the "Game" tab, where you can play and pause your game;
 The Console panel, in the bottom – here you will have a textual feedback
when errors occur;
 The Inspector panel, on the right – this panel will change depending on
which game object or asset you select, and it’s useful to change each
object’s settings.
The first thing we need to do is delete all the objects in the Hierarchy panel (the
Main Camera and Directional Light objects). To do this, simply select them and press the
Delete key on your keyboard.
Next, we need to import the EasyRTArchViz package into our scene. To do this,
right-click your "Assets" folder in the Project panel, choose Import
Package > Custom Package, then browse to and select the EasyRTArchViz.unitypackage file.
When the Importing Package window pops up, make sure everything is selected and click
"Import".
At this point, you should have 3 new folders inside your Assets folder:
“EasyRTArchViz”, “Editor” and “Standard Assets”. It is recommended that you never
change anything inside these folders.
The system also needs you to select a button that will become the switch between
normal (view) mode and interaction mode. To do this, in the menu bar click Edit > Project
Settings > Input to open the Input Manager on the Inspector Panel.
In the Input Manager, click Axes to show the input list, and in the "Size" field
add one to the value (i.e. if the value is 18, change it to 19). A new entry will appear at
the bottom of the list, which you must edit. Change its "Name" field to "Interaction Mode",
and in the "Positive Button" field input the key you wish to use as the switch key (i.e.
if you want the F1 key on your keyboard to be the switch key, write "f1" in the Positive
Button field).
2. Adding the standard functionality scripts:
The assets you will be interacting with are called prefabs. They can be found inside
the “Prefabs” folder in the “EasyRTArchViz” folder.
In the “Prefabs” folder, you can find 4 prefabs:
 FPSController – is a prefab created by Unity that is used to enable the user
to walk through the scene, and which adds features like the sound of
footsteps and the ability to look around with the mouse;
 FurnitureHandler – this is the prefab you will be attaching to each piece of
furniture that requires interactivity (see chapter 4 for more details);
 GeneralScripts – this will provide most of the functions and GUI to the
scene;
 Room – this prefab will be used to define each room in the scene.
To add the standard functionalities to the scene, simply drag the GeneralScripts
prefab into the Hierarchy game objects’ list. Right now the scene already has most of its
functionality, but you can’t yet experience it. This prefab contains some options that can
(and some that must) be changed, and that issue will be addressed shortly.
3. Adding the building or environment
The models for the floor, walls and roof of the building must be separated into
different objects and they must be composed of objects with thickness (instead of just a
simple plane). The walls of a room must also be separated from the walls of other rooms,
and they can be imported as one single object for each room, or one for each wall.
To add the building to your scene, you must import the assets. This can be done in
three ways:
 By right-clicking the target folder in the Unity Assets hierarchy (on the
Project tab) and choosing Import New Asset;
 By dragging and dropping the asset directly from your explorer window into
the desired folder in Unity;
 By manually saving or exporting the file into the project's Assets folder on
your hard drive (when using Blender save files instead of exporting, this is
the recommended method).
The way to export the object from a modeling program so that it can be
imported into Unity varies from software to software, but the standard format for a 3D
object with animations and materials included is the .fbx format, which most 3D
applications can export. Unity also supports the actual save files of some applications
(such as Blender), so the objects don't need to be exported or converted, and they will be
reloaded into Unity automatically when they are saved in the 3D application.
After the assets are imported, just drag them into the scene or the Hierarchy panel;
you can then change their position/rotation/scale by using the standard 3D software keys
for that purpose (W, E, and R, respectively) or by using the Inspector panel's Transform
component.
If you need to hide an object, you can disable it by unchecking the checkbox at the
top left of the Inspector panel. Remember to re-enable it before building your
application, or the object won't be visible in the final version.
After adding each of these elements to the scene, you should attach a Mesh
Collider component to it. To do this, select the object and, in the Inspector panel, click Add
Component and search for "Mesh Collider", or click Physics > Mesh Collider.
4. Adding the player
To add the player, you only need to find the FPSController prefab, located inside
EasyRTArchviz > Prefabs folder, and drag it into the scene or hierarchy panel.
After adding it, make sure you position it inside the building, and above the floor
level. If the FPSController prefab intersects with the floor level, it may fall through it when
the application starts running.
At this point, you can already click the “Play” button on the top of the scene panel
and try walking around the empty room, to see if everything is working as expected.
5. Setting up the walls and rooms:
To enable the option to change the walls’ materials, you need to give a “tag” to
every wall so that the system recognizes it as a wall. If you click on the GeneralScripts
game object on your hierarchy panel and open its hierarchy tree by clicking on the arrow
next to it, you will see a game object named “Walls”. Select it, and its information will
appear in the Inspector panel.
There are two fields in this panel that you can change: the "Tag Prefix" field, which can be
changed but shouldn't need to be, and the "Materials" field, which should be changed from the
default materials to the materials you want to be able to apply to the walls.
Every wall in the game must have a tag whose name is "[tag prefix][room
name]" (i.e. "wallKitchen"). To create a tag, select the desired object and, in the Inspector
panel, the Tag drop-down list (at the top) should say Untagged – click it and select "Add
Tag..." and you will see the Tags and Layers menu. On the tags list, click the plus (+) icon
and change its name according to the rules mentioned above. Select the object again,
and this time when you click the "Tag" drop-down list, the name you created will be there,
and you can now select it.
To edit the materials list for the walls, you must first create the materials. The
materials can either be included in your FBX file and imported together with the imported
objects, or be created inside Unity by right-clicking the desired folder in the Project
panel and clicking Create > Material. The new material can then be renamed and its
maps edited in the Inspector panel.
After creating the materials, you can replace the default materials in the Walls
gameobject (inside the General Scripts game object) and change the list size if you want to
add more.
To differentiate the rooms from each other and be able to change the wall materials
of one room at a time, you need to set up the room areas. To do this, you use the "Room"
prefab (located in the EasyRTArchViz > Prefabs folder). Just drag the prefab into the scene
panel and adjust its size and position using the standard 3D software keys (W, E and R) or
the Inspector panel. Lastly, rename the game object (in the Hierarchy panel) to the
name of your room, which must match the name you set in the tag for its walls
(i.e. if you set the walls' tag to wallKitchen, the room game object you just created must be
renamed to "Kitchen").
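For developers curious about how this naming rule is most likely used internally, the sketch below is an illustration only (the actual Walls script may work differently, and the class and method names are assumptions): the tag prefix and room name are combined to find that room's walls and apply a material to them.

using UnityEngine;

// Hypothetical sketch of how wall tags are likely resolved to walls.
public class WallMaterialSketch : MonoBehaviour {
    public string tagPrefix = "wall";

    public void ApplyMaterial(string roomName, Material material) {
        // e.g. roomName "Kitchen" -> tag "wallKitchen"
        foreach (GameObject wall in GameObject.FindGameObjectsWithTag(tagPrefix + roomName)) {
            wall.GetComponent<Renderer>().material = material;
        }
    }
}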
6. Final Scene Setup
Everything (except the interactive objects – we'll save those for last) is now placed
in the Hierarchy panel, but there are some links the system requires us to make before
proceeding:
 Select the InteractionMode game object (child of GeneralScripts) and, in the
InteractionMode script (in the Inspector panel), you see the “Player” field
says “none”. To assign it, drag the FPSController object from the Hierarchy
panel into this field;
 In the Music game object (child of GeneralScripts > Ambient), there is also
a "Player" field. Repeat the process described above. Here you
can also replace the music tracks you want to play in your application;
 In the GUI game object (child of GeneralScripts), you can change the
Screenshot Size Multiplier field in the “GUI Script” script. This scales the
size of the output screenshot by a factor of n (n being the number you input
here). If you leave it at 1, the output image will have the size of the screen
you’re running the application on.
 There needs to exist a tag named “Furniture” for the system to work
properly. To create it, follow the instructions in Chapter 5 on how to create
a tag, but this time, give it the name “Furniture” (you don’t need to assign it
to any object);
 There needs to exist an EventSystem object for the GUI interaction to take
place. Just click on “Create” on top of the Hierarchy Panel and select UI >
Event System.
7. Adding interactive furniture objects:
To add a new interactive furniture object, it first needs to be imported into the Unity
Assets folder (see Chapter 3 for detailed instructions on how to import an object).
After the object is imported, select it (in the Project panel), open the Rig options in the
Inspector (at the top), change the Animation Type to "Legacy" and
click "Apply". It can then simply be dragged-and-dropped from the project folder inside
Unity into the scene window.
Figure 1: FurnitureHandler prefab hierarchy
When the object can be seen on the scene window (or in the Hierarchy window), to
add functionality to it, you need to add the “FurnitureHandler” prefab to its hierarchy. The
prefab must be placed on the first level of the hierarchy (being a child of the root object).
In this case (see Figure 1: FurnitureHandler prefab hierarchy), let’s assume
“animatedCube” is the root object and “Cube” is the actual object. The “FurnitureHandler”
prefab must be placed as a child of “animatedCube”, as is seen in the picture.
After the “FurnitureHandler” prefab is properly placed, left-click it to view its
contents in the Inspector tab. There you can see three main sections: Transform, Menu
Handler and Object Handler. The only one that requires our attention is the Object Handler
script.
In the Object Handler script, you will find a "Mesh Object" field. The script
expects you to tell it which object in the hierarchy is the mesh object (in this case, the
object named "Cube"). To do so, just drag and drop the object from the Hierarchy tab into
the "Mesh Object" field. If the object's name (in this case, "Cube") appears in front of
"Mesh Object", you're good to go.
Under the "Mesh Object" field, you have the Object Information and Materials
groups. In the first one, you can change all the values you want: it contains the information
that will be shown about the furniture piece when requested by the user. In the Materials
group, you can set the list of materials that the object has. To change its size, change the
"Size" field value, and to change the materials just drag-and-drop them from your Assets
folder into the desired "Element n" field (they should be automatically imported along with
the object and a Materials folder should be created in the same place – if not, you're going
to have to do it manually).
After everything is set, you can click play and everything should work as intended.
APPENDIX F – INTERACTION MODE SCRIPT
using UnityEngine;
using System.Collections;
using UnityStandardAssets.Characters.FirstPerson;
public class InteractionMode : MonoBehaviour {
// player object, where the movement script is
public GameObject player;
public Canvas gui;
private bool interactionMode = false;
void Start () {
gui.enabled = false;
}
void Update () {
// toggle interaction mode
if (Input.GetButtonDown("Interaction Mode")) {
ToggleInteractionMode();
}
// check mouse input
if (Input.GetMouseButtonDown(0)) {
if (interactionMode) {
// check if an object is being selected
CheckForObjectSelection();
}
}
}
private void ToggleInteractionMode() {
interactionMode = !interactionMode;
// hide GUI submenus
gui.GetComponent<GUIScript>().HideAllMenus();
FirstPersonController fpc =
player.GetComponent<FirstPersonController>();
// stop/start movement
fpc.enabled = !interactionMode;
// show/hide mouse cursor
Cursor.visible = interactionMode;
// show/hide GUI
gui.enabled = interactionMode;
}
private void CheckForObjectSelection() {
// ray information (output)
RaycastHit hitInfo = new RaycastHit();
// origin and direction of the ray
Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
// check if any object was hit
bool objectHit = Physics.Raycast(ray, out hitInfo);
if (objectHit) {
GameObject selectedObject =
hitInfo.transform.gameObject;
if (selectedObject.tag == "Furniture") {
GameObject scriptObject =
selectedObject.transform.parent.Find("FurnitureHandler").gameObject;
MenuHandler mh =
scriptObject.GetComponent<MenuHandler>();
if (mh != null) { mh.ShowMenu(); }
}
}
}
}
APPENDIX G – MENU HANDLER SCRIPT
using UnityEngine;
using System.Collections;
using UnityEngine.UI;
public class MenuHandler : MonoBehaviour {
public GameObject menu;
public GameObject defaultMatsButton;
private ObjectHandler objHandler;
private Canvas mainMenu;
private Canvas matMenu;
private Canvas infoMenu;
private RectTransform materialContainer;
private MaterialContentHandler matHandler;
void Start () {
// find object handler
objHandler = GetComponent<ObjectHandler>();
// find submenus
mainMenu =
menu.transform.Find("FurnitureMenu").GetComponent<Canvas>();
matMenu =
menu.transform.Find("FurnitureMatMenu").GetComponent<Canvas>();
infoMenu =
menu.transform.Find("FurnitureInfoMenu").GetComponent<Canvas>();
// initialize menus
InitializeInfoMenu();
InitializeMaterialsMenu();
// position and hide menus
PositionMenus();
HideAllMenus();
}
public void ShowMenu() {
mainMenu.enabled = true;
}
public void ShowMaterials() {
infoMenu.enabled = false;
matMenu.enabled = true;
}
public void ShowInfo() {
matMenu.enabled = false;
infoMenu.enabled = true;
}
public void HideAllMenus() {
mainMenu.enabled = false;
infoMenu.enabled = false;
matMenu.enabled = false;
}
private void InitializeMaterialsMenu() {
materialContainer =
matMenu.gameObject.transform.Find("scrollView").transform.Find("Content")
.gameObject.GetComponent<RectTransform>();
// create the materials list
matHandler = new
MaterialContentHandler(objHandler.meshObject, objHandler.materials,
materialContainer, defaultMatsButton);
GameObject[] matList = matHandler.CreateList();
// create the instantiated list
GameObject[] instObjects = new GameObject[matList.Length];
for (int i = 0; i < matList.Length; i++) {
instObjects[i] = Instantiate(matList[i]);
}
// customize list
matHandler.CustomizeButtons(instObjects);
}
private void InitializeInfoMenu() {
Furniture f = objHandler.objectInformation;
FindAndFill(infoMenu, "txtName", f.objectName);
FindAndFill(infoMenu.transform.Find("btnCompanyName"),
"txtCompanyName", f.sourceName);
FindAndFill(infoMenu, "txtDimensions", f.dimensions.x + "x" +
f.dimensions.y + "x" + f.dimensions.z + " cm");
FindAndFill(infoMenu, "txtDescription", f.description);
// set company hyperlink
Button companyWebsite =
infoMenu.transform.Find("btnCompanyName").GetComponent<Button>();
companyWebsite.onClick.AddListener(() => {
Application.OpenURL(f.sourceURL);
});
}
private void FindAndFill(Component menu, string labelName, string
newText) {
menu.transform.Find(labelName).GetComponent<Text>().text =
newText;
}
private void PositionMenus() {
SetMenuPosition(mainMenu);
SetMenuPosition(infoMenu);
SetMenuPosition(matMenu);
}
private void SetMenuPosition(Canvas menu) {
Vector3 objSize =
objHandler.meshObject.GetComponent<SkinnedMeshRenderer>().bounds.size;
float yTranslation = objSize.y / 3 * 2;
float zTranslation = objSize.z / 2 * -1;
Vector3 translation = new Vector3(0, yTranslation,
zTranslation);
RectTransform rt = menu.GetComponent<RectTransform>();
rt.Translate(translation);
}
}
APPENDIX H – FURNITURE CLASS
using UnityEngine;
using System.Collections;
[System.Serializable]
public class Furniture {
private const int MAX_CHARS_NAME = 20;
private const int MAX_CHARS_SOURCE = 20;
private const int MAX_CHARS_DESC = 150;
// object data
[Tooltip("Maximum 20 characters")]
public string objectName = "Default Object";
[Tooltip("Maximum 20 characters")]
public string sourceName = "Company Name";
public string sourceURL = "www.somecompanywebsite.com";
[Tooltip("Maximum 150 characters")]
public string description = "Nothing to see here, just a default
cube object!";
public Vector3 dimensions = new Vector3(10,20,30);
// makes sure the values changed in the editor aren't out of range
void OnValidate() {
// trim strings
objectName = objectName.Trim();
sourceName = sourceName.Trim();
sourceURL = sourceURL.Trim();
description = description.Trim();
// check maximum string sizes
if (objectName.Length > MAX_CHARS_NAME) { objectName =
objectName.Substring(0, MAX_CHARS_NAME); }
if (description.Length > MAX_CHARS_DESC) { description =
description.Substring(0, MAX_CHARS_DESC); }
if (sourceName.Length > MAX_CHARS_SOURCE) { sourceName =
sourceName.Substring(0, MAX_CHARS_SOURCE); }
// check dimension
if (dimensions.x < 0) { dimensions.x = 0; }
if (dimensions.y < 0) { dimensions.y = 0; }
if (dimensions.z < 0) { dimensions.z = 0; }
// keep URL in a standard form
if (sourceURL.Length > 0) {
// remove http
if (sourceURL.StartsWith("http://")) {
sourceURL = sourceURL.Substring(7);
}
// remove '/' at the end
while (sourceURL[sourceURL.Length-1] == '/') {
sourceURL = sourceURL.Remove(sourceURL.Length-1);
if (sourceURL.Length == 0) { break; }
}
}
}
}
APPENDIX I – MATERIAL CONTENT HANDLER SCRIPT
using UnityEngine;
using System.Collections;
using System.Collections.Generic;
using UnityEngine.UI;
public class MaterialContentHandler {
private GameObject obj;
private Material[] mats; // object materials
private RectTransform rectTransform; // component of the content
gameobject
private Vector2 baseContentSize; // size of the default
recttransform
private GameObject defButton; // default button
public MaterialContentHandler (GameObject meshObj, Material[]
materials, RectTransform container, GameObject defaultButton) {
obj = meshObj;
mats = materials;
rectTransform = container;
baseContentSize = rectTransform.sizeDelta;
defButton = defaultButton;
}
public GameObject[] CreateList() {
List<GameObject> buttonsList = new List<GameObject>();
// set content panel height based on number of materials
rectTransform.sizeDelta = new Vector2(baseContentSize.x,
baseContentSize.y * mats.Length);
// create material list
for (int i=0; i<mats.Length; i++) {
GameObject newButton = defButton;
buttonsList.Add(newButton);
}
return buttonsList.ToArray();
}
public void CustomizeButtons(GameObject[] buttons) {
// set button height
float btnHeight = 0;
if (buttons.Length > 0) { btnHeight =
buttons[0].GetComponent<RectTransform>().sizeDelta.y; }
// customize buttons
for (int i=0; i < buttons.Length; i++) {
Material mat = mats[i];
// set position in the container
float newY = buttons[i].transform.position.y - btnHeight * i;
Vector3 newPos = new
Vector3(buttons[i].transform.position.x, newY,
buttons[i].transform.position.z);
buttons[i].transform.position = newPos;
// set as a child of the container
buttons[i].transform.SetParent(rectTransform.transform,
false);
// set button text
Text t =
buttons[i].transform.Find("Text").GetComponent<Text>();
t.text = mat.name;
// set onClick event to change the material
Button b = buttons[i].GetComponent<Button>();
b.onClick.AddListener(() => {
obj.GetComponent<SkinnedMeshRenderer>().material
= mat;
});
}
}
}
APPENDIX J – SCREENSHOT HANDLER SCRIPT
using UnityEngine;
using System.Collections;
using System;
public class ScreenshotHandler {
public string baseFilename = "";
private Canvas gui;
private int sizeMultiplier = 1;
public ScreenshotHandler(Canvas gui, int sizeMultiplier) {
this.gui = gui;
this.sizeMultiplier = sizeMultiplier;
if (baseFilename == "") { baseFilename = "screenshot"; }
}
public IEnumerator CaptureScreenshot() {
gui.enabled = false;
yield return new WaitForSeconds(0.5f);
string fileName = baseFilename + "-" +
DateTime.Now.ToString("dMMMyyyy-HHmmss") + ".png";
Application.CaptureScreenshot(fileName,sizeMultiplier);
yield return new WaitForSeconds(0.5f);
gui.enabled = true;
}
}