When Green Screen Isn’t Enough
In this day and age, it’s hard to imagine modern cinema without special effects. Whether they’re CGI or practical, it would be impossible for our favourite movies to exist without these reality-bending techniques, which, when done well, go completely unnoticed by the naked eye. A lot of you may be familiar with some of the tried and tested techniques of CGI, such as green and blue screens and full 3D rendering and animation, all of which require specially made software and clever filming tech to bring these creations to life.
While these techniques are all relatively old hat at this point, filmmakers are always looking for newer and more efficient ways to create digital imagery, whether it’s a scene’s environment or the components within it. One of the latest innovations can be seen in the Disney+ series The Mandalorian, and today I’m going to go into detail about how the show’s directors are using the Unreal game engine in the filming of the show.
A quick disclaimer: I am by no means an expert in special effects, but I do have a working knowledge of how CGI works, and I can confidently explain a lot of the elements that make up special effects in basic terms.
First I want you to look at this image and try to guess which part is CGI and which part is a practical effect:
Have you guessed it yet? The foreground is all real: the characters, the costumes and the rocks in the immediate area are all practical effects, while the background is 100% CGI. If you look closely enough at the characters’ outlines it starts to become a little more obvious, but the effect is exceptionally well done here, and when all the parts are moving it is completely invisible.
The $15 million per episode budget is substantial for TV; unless you’re HBO, which spent roughly that much per episode on Season 8 of Game of Thrones (GOT), few shows come close. Even so, it’s nothing compared to most film budgets; Rise of Skywalker, for example, had a budget of $275 million. Which raises the question: how did The Mandalorian create film-like effects on what is comparatively a small budget? Although GOT and The Mandalorian had similar budgets, the quality of their special effects does differ. While the special effects in GOT were fantastic for TV, The Mandalorian has surpassed them: its effects look incredibly clean without hitting the uncanny valley that GOT tended to struggle with, especially in its location settings.
When you think of difficult special effects, living creatures often come to mind, which to a certain extent is fair: they are one of the most difficult things to recreate in CGI, again because of those uncanny-valley issues. But environments also have problems when it comes to getting them spot-on. For example, if you picked up two rocks and looked at them closely, you would see that they are both unique because of how the environment has acted upon them over time. The reason this is so important is that your brain recognizes this subconsciously.
There are ways to mimic this effect within CGI, using techniques such as photo-scanning (photogrammetry), where you take a lot of photos of a real-life object and feed them into software that rebuilds it in 3D. Techniques like this create the closest thing you can get to a realistic-looking object in CGI, but the problem starts when that object moves. When the camera moves around an object, the first thing people notice is how the light reacts with it, and this is where visual effects artists and directors often make the biggest mistakes. For example, in the Netflix series The Witcher (mild spoilers for S1E6), when the golden dragon first descends into the cave, many people noticed that the effect looked out of place. It wasn’t that the dragon itself looked bad; it was how the lighting reacted to both the dragon and the environment that ruined the realism of the scene.
I’ve put a clip of that scene below. Watch it, but don’t look at the dragon itself; study the environment and how the light reacts to it, and you should be able to see that something isn’t quite right.
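To make the photo-scanning idea a bit more concrete, here is a minimal sketch of its very first step: finding the same physical points across two photos of an object. It uses OpenCV’s ORB feature detector; the file names are placeholders I’ve made up, and a real photogrammetry pipeline would go on from here to triangulate thousands of these matches into a 3D point cloud.

```python
# A minimal sketch of photogrammetry's first step: matching features
# between two photos of the same object. File names are placeholders.
import cv2

img1 = cv2.imread("rock_photo_01.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("rock_photo_02.jpg", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=2000)          # detect distinctive points
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Each good match says "this pixel in photo 1 is the same physical
# spot on the rock as that pixel in photo 2".
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

print(f"{len(matches)} candidate correspondences found")
# A full pipeline triangulates these matches (across dozens of photos)
# into a 3D point cloud, then rebuilds a textured mesh from it.
```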
The way The Mandalorian got around these issues was by using a different style of CGI for the backgrounds: instead of post-production techniques, it used real-time rendering. The production built a big centre stage where the actors and the props would go, and surrounded most of it with big LED screens, which projected, at all times, a real-time rendering of the environment they wanted to use. The cleverest part is that the images on those screens react to the position of the camera, which allows The Mandalorian to have shots where the camera moves and the environment matches its movements. The whole effect is a more advanced version of having a painting as your background, except the painting now moves.
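The “screen reacts to the camera” trick boils down to a well-known bit of graphics maths: an off-axis (generalized) perspective projection, where the view frustum is rebuilt from the tracked camera position and the physical corners of the LED wall. I don’t know exactly how Unreal implements this internally, but a minimal sketch of the idea looks like this (the wall and camera coordinates at the bottom are made-up numbers):

```python
# A sketch of the off-axis projection behind camera-tracked LED walls:
# the frustum is rebuilt every frame from the camera's tracked position
# and the fixed corners of the screen (Kooima's generalized projection).
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def off_axis_projection(eye, pa, pb, pc, near=0.1, far=1000.0):
    """eye: tracked camera position; pa, pb, pc: the LED wall's
    lower-left, lower-right and upper-left corners (world space, metres)."""
    vr = normalize(pb - pa)            # screen right axis
    vu = normalize(pc - pa)            # screen up axis
    vn = normalize(np.cross(vr, vu))   # screen normal, towards the eye

    va, vb, vc = pa - eye, pb - eye, pc - eye
    d = -np.dot(va, vn)                # eye-to-screen distance

    # Frustum extents on the near plane, shifted by where the eye sits
    l = np.dot(vr, va) * near / d
    r = np.dot(vr, vb) * near / d
    b = np.dot(vu, va) * near / d
    t = np.dot(vu, vc) * near / d

    P = np.array([  # standard glFrustum-style projection matrix
        [2*near/(r-l), 0,            (r+l)/(r-l),           0],
        [0,            2*near/(t-b), (t+b)/(t-b),           0],
        [0,            0,            -(far+near)/(far-near), -2*far*near/(far-near)],
        [0,            0,            -1,                     0]])
    M = np.eye(4); M[:3, :3] = np.vstack([vr, vu, vn])  # rotate into screen space
    T = np.eye(4); T[:3, 3] = -eye                      # translate eye to origin
    return P @ M @ T

# Made-up numbers: a 6m x 3m wall, camera 4m back and 1m to the left.
wall = [np.array([-3.0, 0.0, 0.0]), np.array([3.0, 0.0, 0.0]),
        np.array([-3.0, 3.0, 0.0])]
print(off_axis_projection(np.array([-1.0, 1.5, 4.0]), *wall))
```

Every frame, the engine re-runs something like this with the camera’s latest tracked position, which is why the background on the wall shifts in perspective exactly as a real distant landscape would.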
The Mandalorian isn’t the first production to use this technique, but the technique as a whole is still relatively new. What makes it so interesting is that, so far, it can only be achieved with game engines, and the only engine being used for it (as far as I am aware) is the Unreal engine. The reason game engines can do this while traditional CGI software can’t is that game engines are designed to be reactive on purpose; it wouldn’t be a fun game if you clicked a button and nothing happened. Effectively, the special effects artists load up Unreal, create the environment and feed it to the LED screens on the stage (it’s more complicated than this, but those are the basics).

This is better than traditional CGI methods because the projection is as much a real part of the set as the actors and props. A lot of the time, when CGI looks bad, it isn’t the effect’s fault; it can be how the actor reacts to the effect that ruins the scene. With this kind of effect, because the actors can see what is going on in a scene, they can react to it in real time. Another benefit is that the light originates from the screens themselves, so the director no longer has to worry about where the light is coming from, as it’s already provided for them.
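That “reactive by design” point is really about the render loop. An offline CGI renderer can spend minutes or hours on a single frame; a game engine has to produce a fresh frame every time the camera (or anything else) changes, dozens of times per second. A toy sketch of that loop, with the camera tracking and rendering stubbed out as hypothetical placeholders, might look like this:

```python
# A toy sketch of the real-time loop that makes game engines "reactive".
# get_tracked_camera_pose and render_environment are hypothetical stubs
# standing in for the stage's camera tracker and Unreal's renderer.
import time

def get_tracked_camera_pose():
    # In reality this would read the physical camera's tracked position.
    return (0.0, 1.5, 4.0)

def render_environment(pose):
    # In reality the engine redraws the whole scene for this viewpoint.
    return f"frame rendered for camera at {pose}"

TARGET = 1 / 24  # film runs at 24 frames per second

for _ in range(5):  # in production this loop never stops
    start = time.perf_counter()
    frame = render_environment(get_tracked_camera_pose())
    # An offline renderer could take hours here; the LED wall gets
    # at most ~42 milliseconds before the next frame is due.
    time.sleep(max(0.0, TARGET - (time.perf_counter() - start)))
    print(frame)
```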
There are some little ways this makes filming easier as well, such as when you film a translucent object like a glass. The image on the other side of the glass is distorted, so when you try to paint a CGI background behind such an object, the distortion has to be done manually, and it never looks quite right because the distortion depends on the exact shape of the glass and the camera angle. Real-time rendering solves this problem because the background is now a physical part of the scene, so the camera picks up the distortion naturally. Another way this makes filming easier is that if something is wrong in the background, a visual effects artist can jump into Unreal and fix the problem on the spot.
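That glass example is really about refraction: the background seen through the glass is bent according to Snell’s law, and the bending changes with every camera move. As a rough illustration, here is the standard refraction formula (the same one GLSL’s refract() uses) in a small Python sketch; the surface normal and viewing angles are made-up inputs:

```python
# Snell's-law refraction (the same formula as GLSL's refract()):
# shows why the distortion behind glass is view-dependent, not paintable.
import numpy as np

def refract(incident, normal, eta):
    """incident, normal: unit vectors; eta: ratio of refractive indices
    (about 1/1.5 going from air into glass)."""
    cos_i = -np.dot(normal, incident)
    k = 1.0 - eta**2 * (1.0 - cos_i**2)
    if k < 0.0:
        return None  # total internal reflection: no transmitted ray
    return eta * incident + (eta * cos_i - np.sqrt(k)) * normal

n = np.array([0.0, 0.0, 1.0])  # glass surface facing the camera
for angle in (10, 30, 60):     # three different viewing angles
    rad = np.radians(angle)
    ray = np.array([np.sin(rad), 0.0, -np.cos(rad)])  # incoming view ray
    print(angle, "deg ->", refract(ray, n, 1.0 / 1.5))
```

Because the refracted direction is different at every viewing angle, a background physically displayed behind the glass distorts correctly for free, while a composited one has to be warped by hand for every shot.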
I personally find this filming technique fascinating, because it shows that the gaming medium has evolved to the point where the software used to create games is now being used in real-world applications. Where video games used to take inspiration from TV and film, it is now the other way around, and this offers the gaming medium more legitimacy in the world of art (and make no mistake, video games are art). I, for one, am excited to see how this is used in bigger productions in the future and how it will change the way films are made.