- Could you guys introduce yourselves? Where are you based, what do you do, what projects have you worked on? What services do you provide?
We’re a VFX and animation studio with two locations in the UK. Our foundations are in cinematic trailers for games such as SMITE, War Thunder and Jurassic World, but we also have a world-leading automotive department and we’ve recently had a big break into TV, which should be on screens very soon. Basically, we love creating visually stunning content for all audiences.
- I’m really interested to learn a bit more about the way you worked on the Winter is Coming trailer. Could you tell us how it all started, and what the requirements were? As far as the trailer goes, it seems like you had to start with only some sketches and build everything on your own?
We are all massive fans of the show, so this project was a dream to work on. That came with huge pressure, as we know how well known and loved the characters are all over the world – so we had to nail the quality and create a truly authentic GoT experience for the viewer.
Our task was to introduce viewers to the official HBO licensed game and create a trailer that was authentic to the TV show in every way. The message of the trailer is the impending threat of ‘winter is coming’ and how that links the key houses from the show as they prepare for what is to come. The trailer is split into two sections: one literal, showing the characters in real locations from the show, and the other more abstract, showing the sigils that represent the houses being overcome by the frost of winter. We decided internally that the raven was going to be the thread that wove through the two separate sections and illustrated the journey of the message of war. The trailer was designed to be a teaser for the epic battles that would follow!
This was the first time we had been tasked with creating CG versions of well-known characters, and there is nothing like jumping in at the deep end! To top off the complexity of the task, we knew from the start that we would not have access to the actors or any scan data. So the search for reference began: the client provided some good on-set photography, which was especially helpful for the clothing, and apart from that the whole team scoured the web for any material we could find. The main challenge that arises from using multiple sources of reference is the variation in camera lenses and lighting; it’s amazing how completely these two factors change the perception of form in the human face. We would always find the camera lens data in any reference we used and do test renders with CG cameras to match, but that still left us short of an instantly recognisable read on the characters. After multiple iterations the amazing character modelling and lookdev team had created these great assets, but we found we lost the likeness in the shots from the animatic. We then realised we would have to go back and adjust all the lenses and camera positions to mimic the cinematography used in the TV show; that was the only way we could get the instant read on character we had been looking for. This was the tipping point when the whole thing came to life, and we knew all the amazing lookdev would find its way into the final trailer.
- What’s your pipeline like? What are the pieces of software you’re using most often? What do you use for modeling in particular?
The main pieces of software we use for character work are 3ds Max, ZBrush, Mari and Substance Painter, and the groom is done using Ornatrix. For the simulations and other FX, we use Houdini.
- The trailer has this outstanding little interior. Could you talk a bit about the way you worked on it? What were the challenges you faced? How did you manage to have such a high level of quality in this scene? It’s absolutely amazing.
The challenge here was that we had little reference as to how the rookery should look. So we tried to match other Winterfell sets from the actual show, making sure it’s all cohesive. Our modeling and texturing team really went to town on creating several props and scattering them around the scene. The purpose was to give a feeling of an old, dusty, cluttered interior, where a maester would spend his time sorting out correspondence.
- How do you usually build the materials for your props and characters? Do you mostly rely on Mari? Could you tell us a bit about the way you found the right materials for this scribe’s room? How did you work on it?
Props are textured in Substance Painter, as it allows us to get a result out sooner and iterate faster. Characters are done using a combination of Substance Painter and Mari: Painter for the elements that don’t span multiple UDIMs or that have real-life seams, so they can be separated naturally, and Mari for large continuous meshes like skin.
After gathering all the references, we were able to break down all the scenes from the animatic into smaller pieces (pillars, chandeliers, chairs, tables, walls, etc.) and then assign them to artists. Usually one artist will make one asset from tip to toe, unlike many bigger companies where jobs are divided between artists by job phase: modelling, texturing or shading. In most cases modelling was done in 3ds Max and texturing in Substance Painter. Sometimes we use Mari for bigger surfaces where it’s necessary to easily paint across different UDIMs.
Tweaking the materials for the final render is always easier in Max than relying on the output of Substance alone. V-Ray’s interactive rendering was a great help here.
Certain assets needed special attention, like the Iron Throne or the candles, where a traditional workflow would have been too tedious or simply wouldn’t have given a good enough result in the time available. For example, instead of traditional modelling or sculpting for the candles, we used particles to get a realistic base, which was then polished to get a nice set of candles in the end.
- You’ve got some amazing characters here. Very photorealistic and true to the original cast. How were they created? How did you manage to get these amazing photorealistic details in your characters? What are the challenges?
The likenesses were all based on studying photography, as there were no scans of the actors. We would constantly go back and forth comparing the sculpts until they were at a level we were happy with. We would find photos with the camera data embedded and match that data in our scene to help us judge where things needed tweaking. Once we were in a good place with the proportions, we would then go to Mari and do all of the texture work using Texturing.xyz displacement and albedo maps.
The main challenge was creating the actual likeness. With no scans to go off, it was a constant process of making changes, comparing the renders to photography and making further changes: to the sculpt, to the camera, to the lighting. It was a constant guessing game of what needed to change in order to make the character more believable.
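Since the team mentions matching the camera data embedded in reference photos, here is a small, purely illustrative Python sketch of how that metadata might be pulled from a still before setting up a matching CG camera. It assumes Pillow is installed; the file path is a placeholder, not anything from the production.

```python
# Illustrative only: read camera/lens EXIF tags from a reference photo
# so a CG camera can be set up to match. Assumes Pillow is installed;
# the photo path below is a placeholder.
from PIL import Image

def camera_info(photo_path):
    exif = Image.open(photo_path).getexif()
    # The body model sits in the main IFD; lens data lives in the
    # Exif sub-IFD (tag 0x8769).
    exif_ifd = exif.get_ifd(0x8769)
    return {
        "camera_model": exif.get(0x0110),        # Model
        "focal_length_mm": exif_ifd.get(0x920A), # FocalLength
        "f_number": exif_ifd.get(0x829D),        # FNumber
    }

print(camera_info("reference/onset_still_001.jpg"))
```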
- What do you usually use for rendering? What is the optimal render to get the right kind of quality going?
The show was rendered in V-Ray, with the hair using the VRayOrnatrixMod to keep render times on the hair strands efficient; it also allowed us to achieve a more natural result out of the box.
To allow more than one artist to work on a scene at once, we referenced all elements of the actual set into one scene, except the walls and grounds. This was a combination of XRef objects/scenes and V-Ray proxies. This way it was easier to handle bigger, more complex scenes while maintaining the ability to update them with the latest changes. Plus, the hardware footprint of the scene was much friendlier.
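As a rough illustration of that kind of scene assembly, here is a short pymxs sketch that would run inside 3ds Max. It assumes V-Ray is installed (for the VRayProxy class), and every path and object name is a made-up placeholder rather than anything from the actual pipeline.

```python
# Rough sketch of shot-scene assembly in 3ds Max via pymxs (runs inside Max).
# Assumes V-Ray is installed for the VRayProxy class; all paths and names
# below are placeholders.
from pymxs import runtime as rt

# Bring the shared set in as a scene XRef, so set-dressing updates
# propagate to every shot file that references it.
rt.xrefs.addNewXRefFile(r"P:\got\sets\rookery_set_v012.max")

# Heavy props come in as V-Ray proxies, keeping the working scene light;
# the full geometry is only pulled in at render time.
table = rt.VRayProxy(fileName=r"P:\got\props\maester_table_v004.vrmesh")
table.name = "prx_maester_table"
table.pos = rt.Point3(120.0, 45.0, 0.0)
```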
V-Ray’s support for Cryptomatte was critical in allowing us to control and fine-tune every element of the shots in post-production. The progressive renderer was useful for achieving quick local renders before sending to the farm. We also used IPR to help with setting up lights and making sure the shadows were cast exactly where we wanted them.
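To give a feel for what that Cryptomatte control looks like on the comp side, here is a hedged Nuke Python sketch. It assumes a Cryptomatte node (the gizmo or Nuke’s built-in) is available and that the render carries cryptomatte AOVs; the file path and matte name are placeholders, not taken from the actual shots.

```python
# Sketch: isolate a single object with Cryptomatte in Nuke and restrict a
# grade to it. Assumes a Cryptomatte node is available and the EXR carries
# cryptomatte AOVs; the path and matte name are placeholders.
import nuke

chars = nuke.nodes.Read(file="renders/chars/chars.####.exr", first=1001, last=1096)

crypto = nuke.nodes.Cryptomatte(inputs=[chars])
crypto["matteList"].setValue("raven")      # pick the object to isolate

graded = nuke.nodes.Grade(inputs=[chars])  # an adjusted copy of the pass
graded["multiply"].setValue(1.2)

# Keymix the adjusted copy over the original only where the matte is white
# (inputs: B, A, mask).
tweak = nuke.nodes.Keymix(inputs=[chars, graded, crypto])
```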
Photorealism is always the goal on projects like this Game of Thrones trailer, and to reach that takes time. Nuke gives us a lot of power in how we manage shots. It allows us to dive deep into fine-tuning minute aspects of the renders without getting lost in the comp.
We use multi-channel EXRs for each pass. For example, there will be an environment EXR sequence and a character EXR sequence. Each contains multiple render elements, which we then access in Nuke for the compositing process.
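As a minimal sketch of that setup in Nuke Python (the paths, frame ranges and layer names here are placeholders, not the studio’s actual naming), each pass gets its own Read node, and the render elements can be listed straight off the multi-channel EXR:

```python
# Minimal sketch: one Read per pass, each a multi-channel EXR sequence,
# then list the render elements (layers) packed inside. Paths and frame
# ranges are placeholders.
import nuke

env   = nuke.nodes.Read(file="renders/env/env.####.exr", first=1001, last=1096)
chars = nuke.nodes.Read(file="renders/chars/chars.####.exr", first=1001, last=1096)

for node in (env, chars):
    # e.g. ['rgba', 'diffuse', 'reflection', 'lighting', 'depth', ...]
    print(node.name(), nuke.layers(node))
```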
Chris Scubli – Lead artist: I like to keep my comps in the RGB channels, so from each Read node I will shuffle any render elements I need into RGB and then merge those into my B pipe as necessary, to adjust things like reflection or light selects, or to add fog based on world/camera data.
I always have a B pipe, and everything gets fed into that, with A pipes coming in from the left, and masks for merges or various nodes coming in from the right. I try to keep everything tidy and labelled. I find this gives me instant feedback on what’s going on at any place within the comp. I can then focus more on iterating the shots, instead of navigating an ever-growing comp.
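Here is a small Nuke Python sketch of that B-pipe wiring: one render element shuffled into RGB on an A pipe, adjusted, then merged back into the main stream. The layer name and grade values are assumptions for illustration, not taken from the actual comp.

```python
# Sketch of the B-pipe pattern: shuffle one render element into RGB on an
# A pipe, adjust it, then merge it back into the main (B) pipe. The layer
# name "light_key" and the values are illustrative assumptions.
import nuke

chars = nuke.nodes.Read(file="renders/chars/chars.####.exr", first=1001, last=1096)

# A pipe: pull a light select into plain RGB and rebalance it.
key = nuke.nodes.Shuffle(inputs=[chars], label="light_key")
key["in"].setValue("light_key")
key_grade = nuke.nodes.Grade(inputs=[key])
key_grade["multiply"].setValue(0.5)

# B pipe: the main stream, with the adjusted element plussed back on top
# (a simple example of an A pipe feeding the B pipe from the left).
comp = nuke.nodes.Merge2(inputs=[chars, key_grade], operation="plus")
```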