Before we started to execute this project, we searched for a great architectural reference to replicate in CG. It did not take long before we stumbled upon the Los Faiques Dwellings by DURAN&HERMISA arquitectos asociados on ArchDaily.
We were impressed by the simple architectural concept of the house and its integration with the natural environment (as part of the foreground), as well as by the color palette and the general mood of the photos taken by photographer Sebastián Crespo. We absolutely loved the contrast between the dark blue skies and the bright orange artificial lights showcased in them.
First of all, in every step of the creation of a photorealistic render (modeling, texturing, lighting and post-production), we keep one thing clear in our minds – in life, “perfection” doesn’t exist! So when trying to achieve the highest photorealism possible, we always force ourselves to add details, imperfections and mess – the Dirt and Chaos of life.
The presence of these elements may not be immediately noticeable, but their absence is surely what makes our brain immediately label the image in front of us as FAKE.
We started the modeling process using AutoCAD (yes… AutoCAD), creating a simple plan and an elevation based on the reference drawings we found on ArchDaily.
This step is very important to establish the correct dimensions of the building. We then imported them into 3ds Max, purely as a reference guide, and built the whole model using simple polygonal modeling. We worked on the camera setup and refined the basic model until we were satisfied with the final result.
We did take the time to work on the small details of all objects, trying to model everything we could without having to rely on displacement in the materials. Once we finished the building, all the other objects were added without using snaps or perfectly geometrical arrays, to achieve some natural imperfection. Small random movements and/or rotations were also added to reach the same goal.
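The idea of breaking up perfect placement is easy to sketch outside of 3ds Max. Here is a minimal Python illustration (not the SoulburnScripts themselves, just the underlying math): each object receives a small random offset and rotation. The function name and the default ranges are our own illustrative values:

```python
import random

def jitter_transform(position, rotation_deg, max_offset=0.02, max_angle=3.0, rng=None):
    """Apply a small random offset and rotation to break up perfect placement.

    position: (x, y, z) in scene units; rotation_deg: Z rotation in degrees.
    max_offset and max_angle are illustrative defaults, not values from the article.
    """
    rng = rng or random.Random()
    x, y, z = position
    jittered = (x + rng.uniform(-max_offset, max_offset),
                y + rng.uniform(-max_offset, max_offset),
                z)  # keep the object on the ground plane
    angle = (rotation_deg + rng.uniform(-max_angle, max_angle)) % 360.0
    return jittered, angle
```

Applied over a whole array of chairs or planks, these barely visible offsets are enough to kill the "computer-perfect" look.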
The SoulburnScripts by Neil Blevins were very useful for this purpose.
We wanted to create very realistic trees, because many of them were in the foreground of the views we intended to make, and they needed lots of detail: convincing roughness on the bark, cut trunks, thin branches without leaves, etc.
We used the GrowFX plugin to generate the trees. We did lots of tests, and we soon discovered that simply changing some seed values on a base tree inside GrowFX was not enough to get something believable. Detailed and time-consuming work was necessary to reach a satisfying result.
In the end we modeled 24 different trees (to be placed in the foreground, around the house), trying to mimic the natural chaos we found in the reference images in terms of number of trunks, global shape of branches, etc. We then manually placed these “main” trees around the house, while the faraway ones were scattered in the background using MultiScatter. We also needed to extend the terrain mesh and fill it with lots of scattered trees to populate the backgrounds in a convincing way.
Because of the number of trees we used, we were obliged to convert them into V-Ray proxies to be scattered around, but this meant we could no longer use a displacement modifier on them. So… to achieve convincing bark bumpiness we created it directly inside GrowFX, using high-tessellation meshes for trunks and branches and a bark texture to displace them.
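What effectively gets baked into the proxy here can be pictured as classic vertex displacement: each vertex of a densely tessellated trunk moves along its normal by the grayscale value of the bark texture. A Python sketch of the principle (the function names and the `amount` default are ours, not GrowFX's):

```python
def displace_vertex(vertex, normal, height, amount=0.01):
    """Move a vertex along its (unit) normal by height * amount.

    height is a grayscale texture sample in [0, 1]; amount is the maximum
    displacement in scene units (illustrative value, not from the article).
    """
    return tuple(v + n * height * amount for v, n in zip(vertex, normal))

def displace_mesh(vertices, normals, heights, amount=0.01):
    """Displace every vertex of a densely tessellated trunk mesh."""
    return [displace_vertex(v, n, h, amount)
            for v, n, h in zip(vertices, normals, heights)]
```

Because the offsets live in the geometry itself, they survive the conversion to a V-Ray proxy, unlike a render-time displacement modifier.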
Looking at the reference images we realized we had to generate two different types of grass: one pretty regular, and another more patchy in nature, with gaps and holes among the grass wisps.
For the regular grass, we modeled two different meshes of standard grass blades and scattered them around using MultiScatter. For the patchy grass, we modeled several different grass wisps with lots of variation and noisiness. We then grouped them by the overall dimension of the single wisps, and scattered them using three different MultiScatters with a noise map controlling the distribution.
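The noise-map-controlled distribution boils down to rejection sampling: a candidate scatter point survives only if a random draw falls below the map's value at that location. A small Python sketch of the idea (the `density_map` callable stands in for the noise map; this is an illustration of the principle, not MultiScatter's actual code):

```python
import random

def scatter_points(density_map, width, height, count, rng=None):
    """Rejection-sample scatter positions over a width x height region.

    density_map(x, y) returns a value in [0, 1] playing the role of the
    noise map: a candidate point is kept only if a uniform draw falls
    below the map's value there, producing patches and gaps.
    """
    rng = rng or random.Random()
    points = []
    while len(points) < count:
        x, y = rng.uniform(0, width), rng.uniform(0, height)
        if rng.random() < density_map(x, y):
            points.append((x, y))
    return points
```

Feeding the same map to several scatters (one per wisp size) keeps the patches of the different grass groups aligned with each other.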
In the end, we also decided to create some rather tall grass wisps to be manually distributed around some tree trunks and other specific points (such as floor edges), to give the feeling of uncut, wild grass, as seen in some of the reference pictures.
To model the ground with the correct heights we used the contour lines drawn in the reference drawings. We replicated them inside AutoCAD, then imported the splines into 3ds Max and placed them at the correct elevations.
At this point we used the Terrain compound object to create a base mesh. To get a better topology for the final mesh, we created a highly tessellated plane and projected it onto the base terrain using the Geometry Projection script found on ScriptSpot.
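The projection step can be pictured as dropping each vertex of a regular, dense grid straight down onto the terrain surface. In this Python sketch a `heightfield` function stands in for the ray hit against the base terrain mesh (an illustration of the principle, not the ScriptSpot script itself):

```python
def project_grid(heightfield, nx, ny, size_x, size_y):
    """Build a regular (nx x ny) grid of quads and project each vertex
    straight down onto a heightfield, mimicking the projection of a
    dense plane onto the Terrain compound object's surface.

    heightfield(x, y) stands in for a vertical ray hit against the terrain.
    """
    verts = []
    for j in range(ny + 1):
        for i in range(nx + 1):
            x = size_x * i / nx
            y = size_y * j / ny
            verts.append((x, y, heightfield(x, y)))
    return verts
```

The payoff is an evenly spaced quad grid that follows the terrain, which deforms and subdivides much more predictably than the long thin triangles the Terrain object produces.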
Finally, we added some more tessellation (only for the ground around the house) and did some refinement using the Paint Deformation tool. For a more realistic effect, we also created a bunch of other objects, such as little stones, leaves, dry branches and roots, and scattered them on the ground as well, to enrich the parts without grass.
TEXTURE / MATERIALS
All materials are based on the same combination of base textures: diffuse, reflection, refraction and bump (normal bump, or displacement where necessary). Then the individual materials are combined and mixed in various ways, but the base of the process is always the same, and there are no particular tricks or secrets. It is a time-consuming part of the process, so we try to be patient and take care of the details.
In particular, we pay attention to reflection parameters, such as IOR; otherwise it would be impossible to achieve the desired photorealism. Sadly, even from the early stages we have to work with high material parameters (subdivisions) and render with a highly detailed HDRI for the surfaces to reflect, in order to judge the images properly during fine tuning. This means very long render times for every single test, but it is the only way to avoid bad surprises in the final images.
Bearing in mind “details and imperfections”, nearly 90% of our materials have a dirty look. To achieve this we use two methods. The first is the well-known V-Ray Dirt map, with a suitable texture driving the radius parameter. The second is the V-Ray Distance map. With the latter, which is based on non-renderable helper objects, we have more control over where and how the dirt effect will be visible.
We used this method for the dirt between the wooden staves and for the sealant at the base of the metal columns.
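The distance-based dirt idea reduces to a simple falloff: full dirt at the helper object, fading to clean at a chosen radius. A Python sketch of that mask (our own simplification of what a distance-based texture computes per shading point; the linear falloff is an assumption):

```python
def dirt_amount(point, helpers, radius):
    """Return a dirt blend factor in [0, 1] for a shading point, based on
    its distance to the nearest non-renderable helper object.

    Full dirt (1.0) at a helper, fading linearly to clean (0.0) at radius.
    """
    def dist(a, b):
        return sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
    d = min(dist(point, h) for h in helpers)
    return max(0.0, 1.0 - d / radius)
```

The factor then drives a blend between a clean and a dirty version of the material, so moving the invisible helpers moves the dirt without repainting any texture.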
GRASS / FOLIAGE
For these kinds of objects it’s very important to use two-sided materials with the proper maps. There are lots of references around the web on this topic, especially the brilliant Leaf Material tutorial by Peter Guthrie. Another thing to keep in mind is that nature is extremely varied and never repeats itself, so we always try to create lots of variations of the same material to apply to similar objects, in order to eliminate CG uniformity.
In this case we created a single multi-sub material for the trees, with 7 types of bark and 2 types of leaves. The tree models were created with randomly generated material IDs; this way we could apply the same material to all the trees while creating variation among them in a pretty automated way.
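The random-ID trick can be sketched in a few lines of Python: each tree picks one of the 7 bark sub-material IDs and one of the 2 leaf IDs at random, so a single multi-sub material produces varied trees. The 1-based ID numbering here is our assumption:

```python
import random

def assign_tree_ids(num_trees, num_barks=7, num_leaf_types=2, rng=None):
    """Pick a random bark and leaf sub-material ID for each tree.

    Assumes 1-based IDs: barks occupy 1..num_barks, leaves the next slots
    (8..9 with the article's 7 barks + 2 leaves). Returns (bark_id, leaf_id)
    per tree.
    """
    rng = rng or random.Random()
    return [(rng.randint(1, num_barks),
             rng.randint(num_barks + 1, num_barks + num_leaf_types))
            for _ in range(num_trees)]
```

With the IDs baked into the meshes, editing one multi-sub material updates every tree at once while the per-tree variation stays put.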
One of the main driving aspects of the reference images was precisely the general mood and illumination, with a strong tonal contrast between artificial and natural light. So we worked hard, focusing on this point, to obtain images as close to the references as possible.
All the artificial lights are sphere or area V-Ray Lights. We subdivided the ceiling spots into groups (living room, kitchen table, outdoor porch) to manage their separate contributions during the post-production phase. When dealing with artificial light sources, we found it very important to set the color according to a realistic temperature/color scale (even when the result looks very far from white). It may sound obvious, but white balance is a very important aspect, and the human brain performs it continuously. Still, looking at pictures taken under artificial lights, it’s always surprising how yellow-orange the objects we think of as white turn out to be! So it is important to rely on objective data.
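One source of such objective data is a black-body curve fit. The widely circulated approximation by Tanner Helland turns a color temperature in Kelvin into an RGB value; this Python sketch is our illustrative choice, not necessarily the exact scale used on the project:

```python
import math

def kelvin_to_rgb(kelvin):
    """Approximate the RGB color of a black-body light source at a given
    temperature in Kelvin, using Tanner Helland's published curve fit.
    Useful for picking plausible light colors (e.g. ~2700 K for tungsten).
    """
    t = kelvin / 100.0
    if t <= 66:
        r = 255.0
        g = 99.4708025861 * math.log(t) - 161.1195681661
    else:
        r = 329.698727446 * (t - 60) ** -0.1332047592
        g = 288.1221695283 * (t - 60) ** -0.0755148492
    if t >= 66:
        b = 255.0
    elif t <= 19:
        b = 0.0
    else:
        b = 138.5177312231 * math.log(t - 10) - 305.0447927307
    clamp = lambda v: max(0, min(255, int(round(v))))
    return clamp(r), clamp(g), clamp(b)
```

For example, ~2700 K (typical tungsten) comes out clearly orange while ~6500 K is near neutral, matching how "white" household bulbs actually photograph.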
For natural light, we started with an HDRI, the simplest and most effective way to get a photorealistic result. After the first tests, however, we had to change our minds because of the unacceptable render times caused by the billions of leaves with two-sided materials applied. So we went back to the V-Ray Sun + Sky combination, which renders faster even though there are more aspects to take into account.
For example, we had to set the correct sky (and, when needed, sun) parameters for every shot, and we had to choose the right sky photograph for each shot and place it on one or more planes, directly visible in the frame but also in every possible reflection. We couldn’t manage this aspect in post-production, because there would have been lots of problems with the alpha channel on the leaves of the background trees.
The complete scene contained about 8,000,000 polygons. We rendered the images at 1980×1280 pixels on a workstation with an Intel i7-3770K 3.50GHz CPU, an NVidia GeForce GTX 570 graphics card and 16GB of RAM. V-Ray 2.40.3 was used, and the final renderings took from 8 to 19 hours to complete, depending on the complexity of each view.
One thing we found very useful and time-saving was doing all the fine tuning of the image in post-production. For this reason we saved a lot of channels via render elements. We chose to output the final renderings as OpenEXR files, which allowed us to store as many channels as we wanted in a single file, and this was pretty useful in the post-production stage.
All the post-production was done in Adobe After Effects. We basically used all the elements we exported to fine-tune the different parts of the image.
One of the most useful channels turned out to be the V-Ray Light Select elements. They isolate the contribution of each single light, allowing us to boost or decrease the intensity of, and color correct, every light source separately. This was absolutely fundamental for recreating the exact mood we wanted to give each of the final images.
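Grading Light Select elements works because the per-light passes sum back to the full beauty render, so each pass can be scaled independently before the sum. A per-pixel Python sketch of the recombination (illustrative, not After Effects internals):

```python
def recombine_lights(passes, gains):
    """Rebuild one pixel by summing per-light contributions, each scaled
    by its own gain; gain 1.0 everywhere reproduces the original render.

    passes: list of (r, g, b) contributions for this pixel, one per light;
    gains: matching list of scalar multipliers.
    """
    out = [0.0, 0.0, 0.0]
    for (r, g, b), k in zip(passes, gains):
        out[0] += r * k
        out[1] += g * k
        out[2] += b * k
    return tuple(out)
```

Raising the gain on, say, the porch spots alone is then equivalent to re-rendering with those lights brighter, at zero render cost.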
We also used the material ID channel to quickly isolate groups of objects like grass, ground, trunks and leaves, in order to apply color correction adjustments exactly where we needed them.
A Z-Depth pass was used to add some fog which really boosted the sense of depth in the images, separating the trees in the background from the foreground objects.
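Depth-driven fog is just a linear blend toward the fog color, weighted by the Z-depth pass. A per-pixel Python sketch (the start/end distances are illustrative parameters):

```python
def apply_fog(color, depth, fog_color, fog_start, fog_end):
    """Blend a pixel toward the fog color based on its Z-depth.

    No fog before fog_start, full fog past fog_end, linear in between;
    this is the idea behind driving a fog layer with a Z-depth pass.
    """
    t = (depth - fog_start) / (fog_end - fog_start)
    t = max(0.0, min(1.0, t))
    return tuple(c * (1 - t) + f * t for c, f in zip(color, fog_color))
```

Because distant trees get a larger blend factor than foreground objects, the fog automatically separates the depth planes without any manual masking.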
Another little trick we used to make the images more realistic was to distort some straight edges (such as wall or ceiling edges) that we hadn’t distorted directly in the meshes. To do this we simply used an adjustment layer with a Turbulent Displace filter applied, masked only where needed.
As final touches, aimed at eliminating any “fake CG look”, we added to all the images some slight chromatic aberration, a very subtle vignette, a bit of glow and some flare effects on the light sources.
Thank you for taking the time to read this article. We hope you liked it, and if you found something interesting, we’ll be happy to hear about it.
See you soon!