By TREVOR HOGG
No longer simply the means for spectacle, visual effects have become a hybrid of technology and artistry to the point that live-action and digital animation are indistinguishable from one another. To gain insight into emerging patterns, directions and applications in visual effects filmmaking and production, VFX Voice consulted a virtual panel of executives and supervisors on the creative and technical trends shaping the industry in 2018. Following are their comments.
I’m not sure if it will come to play in 2018, but there has been so much exciting development around both image processing and image capture that I am highly interested to see what the future holds. I’m specifically looking forward to seeing if there is any adoption of technology to provide additional information on a per-pixel (or even sub-pixel) basis regarding depth.
We’ve seen the development of some cameras (Lytro Cinema comes to mind) that can capture this kind of data, which raises a host of questions surrounding the future of chroma screens and how movies are currently made. Additionally, seeing so much development regarding camera tracking in phone applications like Snapchat, where proxy geometry is created and tracked from a single camera source in real time, fascinates me. Any technology that helps improve the quality and reduce the time spent prepping an individual shot in post so that more time can be spent on the artistry side of things is inspiring to me.
“Any technology that helps improve the quality and reduce the time spent prepping an individual shot in post so that more time can be spent on the artistry side of things is inspiring to me.”
“As Virtual Production has become less obtrusive to the process, it has been used more and more to drive productions.”
Two technologies will continue to grow in the coming year and years: Virtual Production and Performance Capture. Both of these technologies have been in use for a few years, but have been somewhat clunky and intrusive to the process. Only recently have the hardware and software become streamlined enough that the process is smooth, organic and, most importantly, doesn’t slow production down. As Virtual Production has become less obtrusive to the process, it has been used more and more to drive productions. As more DPs get exposed to and embrace this technology we will see some very interesting and clever use of it.
To be able to choreograph a scene and then dynamically change camera angles, lenses and timings really frees up the filmmakers to first focus on the performances and then on the coverage of those performances. More companies are offering this service and allowing filmmakers to basically ‘live-vis’ a scene. They stage the action, and everything is captured and tracked so that the scene can be replayed in Unity. Then the director and DP can walk around the virtual set with virtual cameras and get the coverage of the scene they want. In the hands of an experienced filmmaker this can be a powerful tool. In some hands, it can create an expensive mess, so it will be interesting to see what happens.
The landscape in our industry is rapidly developing to include new audiences. The area of overlap between gaming, mixed reality and conventional media is growing into its own viable, sustainable medium. That growth is being accelerated because Silicon Valley is playing a significant role in Hollywood and has made some big bets on where they believe audiences, creators and the overall business is headed in the coming years. As a visualization company, we have seen an early reluctance by the major studios to embrace virtual production, augmented reality and the integration of the game engine melt away as these technologies become more effective at increasing production value and reducing costs. Our clients have crossed the threshold and are very familiar with tools that were first forged in the gaming industry and are now a familiar sight on set.
The next wave of content producers is tech-savvy and less averse to disrupting the status quo with applied science than content producers in the past. They have access to analytics that offer unprecedented insight into audience wants and needs. This new resource will fuel dramatic change. The mixed-reality toolset is evolving out of the production pipeline and will become essential to the way ‘next generation’ audiences experience media. We see this happening in the location-based entertainment space now, where audiences are used to early adoption of state-of-the-art technology. As the technology matures, a new audience will emerge. We are designing our pipeline to serve that audience by empowering the expanding and diverse creative community with tools that amplify their voices.
“Growth is being accelerated because Silicon Valley is playing a significant role in Hollywood and made some big bets on where they believe audiences, creators and the overall business is headed in the coming years.”
“In the next year and into the future, the march toward photo-real digital characters – human and otherwise – will continue.”
In the next year and into the future, the march toward photoreal digital characters – human and otherwise – will continue. I was really impressed with the work MPC did on The Jungle Book, and what Framestore has been doing with Rocket in the Guardians of the Galaxy films. The artists at WETA have been doing a phenomenal job on the Planet of the Apes movies, and I’m excited to see what comes next.
An increasing emphasis and value given to “invisible effects” has been great to see, and it will only continue to grow, especially with the rising volume of television/streaming VFX work being done. The desire to blend digital effects with practical effects will continue, and it will lead to some great collaborations between artists and companies of different backgrounds.
On the technical side of things, I’m very excited about the technological prospects of getting rid of chroma-keying. It would dramatically affect VFX processes on set for the better and, one would hope, allow us more time in post to focus on integrating elements rather than isolating them.
I’ve been very pleased to see VFX departments getting involved with productions earlier on, and that will continue to happen more frequently. Having the VFX teams involved during development can save tremendous amounts of time and money during production and post, and ultimately lead to better-looking effects, a better work environment for the artists and a better experience for the audience.
‘Virtual Production’ has been an increasingly used term throughout our operations this year. It’s likely the application of this new form of digital filmmaking will intensify in 2018. Combining complex CG content with cameras and real-time rendering technology is an exciting and powerful tool, one which will become integral to the ever-escalating ambition of cinema to portray the fantastic and bring to audiences images and stories never before seen on screen.
Somewhat hand in hand with that, we should expect to see the envelope of photoreal rendering push further. The next generation of believable, immersive digital worlds and fully-realized digital characters will continue to raise the bar. Photo and motion-capture techniques, as well as hair, muscle and shading tools, will continue to improve in both efficiency and accuracy. Add the potential of machine-learning techniques and the horizon broadens for faster as well as stronger results.
Spanning all technological advancements, the perennial requirement for the sharpest artistic eyes creating and nurturing VFX images will only compound. The number of VFX shots the industry produces as a whole is climbing, and audiences are smart and discerning. The illusions we created yesterday will be found out tomorrow, so it goes without saying that the detail and realism produced by the tools and techniques we harness will be held to closer scrutiny from the accumulating authors who put their names to the images.
“Spanning all technological advancements, the perennial requirement for the sharpest artistic eyes creating and nurturing VFX images will only compound.”
“My eyes are specifically set on the latest advances in real-time tracking/compositing that allow us to lock the actors into digital sets or set extensions during the shoot.”
Looking at War for the Planet of the Apes and Furious 7 it becomes crystal-clear that digital characters in live-action feature films are on the rise. But one of the biggest breakthroughs from a pure storytelling point of view is the possibility that de-aging well-known actors brings to the table. In 2017, we have seen a younger Anthony Hopkins, Kurt Russell, Johnny Depp and Sean Young.
In 2005, our company, Uncharted Territory, was approached by director Harald Zwart [The Karate Kid remake] who came to us with an incredible screenplay about a protagonist who meets his younger self and has to first bond and then work with this character throughout the story of the film. It was clear from the get-go that a young look-alike actor would not do. It clearly had to be that same person. Unfortunately, we had to turn him down in 2005 because the technology was still in its infancy. Now I look forward to this and other screenplays with similar storylines and technical challenges being turned into thrilling feature films.
VR and AR will for sure be a big topic in 2018. My eyes are specifically set on the latest advances in real-time tracking/compositing that allow us to lock the actors into digital sets or set extensions during the shoot. We used the Ncam system extensively on Independence Day: Resurgence. It is a multi-sensor hybrid technology that creates a point cloud of the environment and instantly locks our pre-created digital environment to the camera image the director sees on his monitor. Director Roland Emmerich calls it his “favorite tool.” My favorite moment of the shoot was when Roland discovered he could include pre-animated objects or characters, i.e. show a half dozen jet fighters vertically lift off inside a hangar. The camera operator was able to pan with the moving jets, and also the actors finally knew what they were looking at, besides a big blue screen.
With the critical and box-office success of Disney’s The Jungle Book, there seems to be a dramatic uptick in the interest in fully-CG photorealistic feature film production. The state of the art has clearly risen to a tipping point of making this style of film production both creatively and financially viable.
The ongoing production of Disney’s The Lion King and Fox’s commencement of the Avatar sequels signals that at least two of the major studios have gone “all in” with this modern production paradigm. This radically different means of producing feature film content is also finding its way into projects that blend traditional plate-based VFX work involving actors in sets and on location, with other portions which are wholly CG created using virtual production methodologies.
This sort of hybrid-type production seems to be a further evolution of the modern filmmaking process. It is a sign of how, in the future, the line between physical production and virtual production will only continue to get more and more blurred.
“In the future, the line between physical production and virtual production will only continue to get more and more blurred.”
“As filmmakers’ and audiences’ visual expectations grow, digital humans are becoming more prevalent – even though their presence is under the radar.”
I see the biggest trends in VFX moving toward digital humans or digital characters with very human facial characteristics. Significant progress in facial motion capture and the continually improving high skill level of facial modelers and lookdev artists have made it possible to have otherworldly, yet believable full CG characters next to live-action actors. We’re able to create such photoreal digital doubles now that the integration is seamless. As the visual expectations of filmmakers and audiences grow, digital humans are becoming more prevalent – even though their presence is under the radar.
A key technical trend that will continue to drive processes for VFX shops is global integration. At Method and other facilities, different outposts used to handle different shows or specialties, but over the past year we’ve identified one set of best practices to implement across the board, thereby standardizing production operations at every Method location worldwide. We can adjust on the fly to redistribute capacity and share capabilities as one fully integrated global operation. Cloud-based tools have matured in the past year and had a big impact in making this possible.
On the creative side, world building will become more important with the continued rise of “shared” cinematic universes, including assets and characters. We’ve experienced this as longtime Marvel vendors, but it’s happening across the board now as well. The software and tools are so advanced now that VFX artists can create massive alternate universes that are photoreal. World building will also be key as VR continues to take hold. Consumers want to step inside these fantastic worlds they’ve seen on screen. It’s becoming more important, especially with VFX-driven features, to think about how that IP can also exist in VR/AR mediums. This not only affects environments, but also character design and even assets like weaponry and costumes.
In 2018, we also will see much greater use of virtual production as the tools continue to improve. The ability for a director to visualize a mostly or even fully CG scene in real time as he or she walks around the set will also have a positive impact on VFX artists and their work, as more looks can be ironed out further in advance – meaning less time iterating and more time achieving the filmmaker’s creative vision.
“We will see much greater use of virtual production as the tools continue to improve. The ability for a director to visualize a mostly or even fully CG scene in real time as he or she walks around the set will also have a positive impact on VFX artists and their work.”
“Audiences’ notion of real vs. artificial on the big screen will be blurred as VFX continues to test existing boundaries of what stories can be told.”
Looking back on the releases of the last few years, I believe that studios and filmmakers have become less fearful of incorporating entirely digital actors into their productions. We have seen with notable films such as Avatar and War for the Planet of the Apes that these VFX advancements allowed a greater creative range, letting new story and character developments be pursued. Apart from more stylized examples, as demonstrated in the upcoming Alita: Battle Angel and Ready Player One, the versatility of full CG character construction can also produce photorealistic portrayals, as was seen in Gravity’s space sequences. With this evolving development, audiences’ notion of real vs. artificial on the big screen will be blurred as VFX continues to test existing boundaries of what stories can be told.
The ‘youthening’ of actors is becoming more prevalent and will continue to be a creative device used; it’s allowing filmmakers to go back in time, and to do flashbacks in a more seamless way. This, along with the pursuit of creating the perfect digital human, will continue to further our art, and help enable filmmakers to tell new stories.
Overall, however, the overuse of CGI is often blamed for many movies’ failings, and critics are jumping on the bandwagon of condemnation. The challenge moving forward is going to be knowing where and when to draw the line on where VFX and CGI are best used. As practitioners of the art, we need to know how to best guide the process. We are asked to do impossible things every day; we just need to remember that just because you can, doesn’t mean you should.
“We are asked to do impossible things every day; we just need to remember that just because you can, doesn’t mean you should.”
“Capture-driven performances of digital characters are nothing new since Benjamin Button, but the extent to which this technology is trickling down to even independent filmmakers’ skill sets is a major creative and technical trend that is liberating filmmakers to tell new, different kinds of stories.”
Since our work on Logan featuring a completely photoreal digital Hugh Jackman, we have seen a real increase in the number of filmmakers considering ideas around convincing digital human performances. Capture-driven performances of digital characters are nothing new since Benjamin Button, of course, but the extent to which this technology is trickling down to even independent filmmakers’ skill sets is a major creative and technical trend that is liberating filmmakers to tell new, different kinds of stories. That is to say nothing of the prevalence of digital actors in movies made by the major studios. The fact that a mid-sized visual effects house like Image Engine can execute digital human work that rivals that of much larger facilities is a trend to watch!
Three areas of development will make a significant impact on VFX/animation production in the new year.
Scene Graph Based Workflows: While in the past a lot of focus has been placed on standardizing file formats for specific use cases – animated mesh caches, volumes, point clouds – we will see a more widespread adoption of workflows that deal with the complexities of a scene graph. While these workflows themselves are not new – tools like Katana or Gaffer have been giving TDs and artists these capabilities for many years – the release of USD and its support by both the standard DCC tools and facility-sponsored open-source projects will make a tangible difference in how and at what level data is exchanged between applications and vendors.
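As a rough illustration of what "dealing with a scene graph" buys over flat per-use-case formats, here is a toy Python sketch of layered opinion composition in the spirit of USD. This is illustrative only: the class names, prim paths and attribute values are invented, and it does not use the real pxr/USD API.

```python
# Toy sketch of a layered scene graph: stronger layers override attribute
# "opinions" from weaker ones, so a shot can tweak a vendor-delivered asset
# without rewriting it. Names and data here are invented for illustration.

class Layer:
    def __init__(self, name):
        self.name = name
        self.opinions = {}  # {prim_path: {attribute: value}}

    def set(self, prim_path, attr, value):
        self.opinions.setdefault(prim_path, {})[attr] = value

def compose(layers, prim_path, attr):
    """Return the strongest opinion for an attribute (first layer wins)."""
    for layer in layers:  # layers ordered strongest -> weakest
        value = layer.opinions.get(prim_path, {}).get(attr)
        if value is not None:
            return value
    return None

# A vendor ships an asset layer; the shot layer overrides only what it needs.
asset = Layer("asset")
asset.set("/Ape/Body", "displayColor", "brown")
asset.set("/Ape/Body", "subdivLevel", 2)

shot = Layer("shot_042")
shot.set("/Ape/Body", "subdivLevel", 4)  # higher detail for a close-up

stage = [shot, asset]  # shot opinions win over asset opinions
print(compose(stage, "/Ape/Body", "displayColor"))  # → brown (from asset)
print(compose(stage, "/Ape/Body", "subdivLevel"))   # → 4 (shot override)
```

The point of the sketch is the exchange model: because vendors swap whole composable layers rather than baked caches, a downstream facility can re-light or re-dress a scene non-destructively.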
Machine Learning: The exponential rise of machine learning/neural networks over the last couple of years will increasingly have an impact on VFX/animation productions. While simple data-driven solutions have been used in VFX production for a while – for example, to control geometry deformations in rigs – more applications for this new, more powerful wave of technology are being found at a steady pace and making their way into production workflows. De-noising, increasingly accurate alpha-matting, rotoscoping, context-aware painting, simplified versions of complex shading effects, example-based facial animation and secondary deformations, 3D feature, object and pose-tracking are all areas where machine learning has already demonstrated the potential to speed up the ‘time to first presentable iteration.’ More of these techniques are making their way into the everyday toolset. It will be interesting to see the impact this will have on work that might today still be a candidate for outsourcing.
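The "simple data-driven solutions" mentioned above can be as small as a least-squares fit. The sketch below, with invented pose data, learns a corrective offset for one vertex as a linear function of a rig control value, fitted from a handful of artist-sculpted example poses:

```python
# Minimal sketch of a data-driven rig corrective: fit a per-vertex offset as
# a linear function of a rig control, from sculpted example poses.
# The control values and offsets are made up for illustration.

def fit_linear(xs, ys):
    """Closed-form least squares fit of y ~ a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    var = sum((x - mean_x) ** 2 for x in xs)
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Training data: elbow-bend control value -> sculpted offset for one vertex.
controls = [0.0, 0.5, 1.0]    # rig control samples
offsets  = [0.0, 0.11, 0.21]  # artist-sculpted corrective offsets

a, b = fit_linear(controls, offsets)

def corrective(control_value):
    """Predict the corrective offset for an unseen control value."""
    return a * control_value + b

print(round(corrective(0.75), 3))  # → 0.159
```

The newer wave of techniques listed above (denoising, matting, rotoscoping) replaces this hand-rolled linear model with deep networks, but the workflow idea is the same: learn from examples, then predict for unseen inputs.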
Big Data & Production Efficiency: As budget and time constraints become tighter, the need to be as efficient as possible, while at least maintaining the same quality standard, becomes an imperative. VFX and animation production generate a huge amount of data from a variety of sources: production tracking software, bidding, time-keeping and accounting systems, render farms, IT infrastructure and asset-management systems. These are rich sources of information that reveal a huge amount about how you spend your available human and machine resources. Just as ‘traditional’ businesses have already embraced using this data to gain production insights, so will the creative industries in order to optimize common processes. Using business intelligence and data-mining tools will be more commonplace to measure the performance of anything from ‘machine utilization,’ ‘rendering speed’ and ‘the time it takes for an update to make it through into a final image’ to ‘usable data generated during overtime’. These historical insights will help in more rational decision-making to validate past changes and investments, and they will help in predicting possible problems earlier.
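A small sketch of the kind of mining described here: the snippet below computes "the time it takes for an update to make it through into a final image" from event records. The event names, shot codes and timestamps are hypothetical, standing in for whatever a facility's tracking and render-farm systems actually emit.

```python
# Sketch: mine hypothetical production-tracking events for per-shot
# turnaround (artist update submitted -> frame approved), plus the median.
from datetime import datetime
from statistics import median

events = [
    {"shot": "sh010", "event": "update_submitted", "time": "2018-01-08 09:15"},
    {"shot": "sh010", "event": "frame_approved",   "time": "2018-01-08 17:40"},
    {"shot": "sh020", "event": "update_submitted", "time": "2018-01-08 10:05"},
    {"shot": "sh020", "event": "frame_approved",   "time": "2018-01-09 11:30"},
]

def parse(ts):
    return datetime.strptime(ts, "%Y-%m-%d %H:%M")

def turnaround_hours(events):
    """Hours from update_submitted to frame_approved, keyed by shot."""
    submitted, result = {}, {}
    for e in events:
        if e["event"] == "update_submitted":
            submitted[e["shot"]] = parse(e["time"])
        elif e["event"] == "frame_approved":
            delta = parse(e["time"]) - submitted[e["shot"]]
            result[e["shot"]] = delta.total_seconds() / 3600.0
    return result

hours = turnaround_hours(events)
print({s: round(h, 2) for s, h in sorted(hours.items())})
print("median hours:", round(median(hours.values()), 2))
```

Even this trivial aggregation makes an outlier shot visible at a glance; real deployments would feed the same kind of per-shot metric into a business-intelligence dashboard.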
“The exponential rise of machine learning/neural networks over the last couple of years will increasingly have an impact on VFX/animation productions.”
“There have been significant advancements in technology over the last couple years that can make the virtual filmmaking process intuitive for filmmakers, and a large part of that is the utilization of game engines, such as Unreal.”
Virtual production and the use of game engines in film are both technical and creative trends that will become even more widely used in 2018 effects films. There have been significant advancements in technology over the last couple years that can make the virtual filmmaking process intuitive for filmmakers, and a large part of that is the utilization of game engines, such as Unreal.
In our field – visualization – planning out sequences in previs can directly transfer over to a virtual scout or even a shoot with a virtual camera. Using previs animation and preliminary motion capture exported into a game engine, we can set up master scenes for the filmmakers to shoot which can be rather complex. The rendering capabilities of the game engine make crowds, lighting, atmospherics and depth of field all adjustable and shootable in real-time. We used Unreal on Logan, and deployed it on War for the Planet of the Apes for the entire visualization pipeline through to finals.
The animation and assets made in that early stage can be used in live-action production for Simulcam setups or as virtual assets in virtual production. HALON’s use of Unreal for previs and postvis work make these technical developments work in all stages of a film’s production. It gives filmmakers a more intuitive and accurate idea of how their film is going to look, giving them the ability to better realize their vision, and we are continuing to develop and push the technical and creative on our slate of 2018 films.
The new trend that people are playing with is machine learning and deep neural networks. I’ve seen it being explored to blend animations, de-noise and accelerate renders, accelerate long and complicated simulations – like water and fluid simulations – and to assist rotoscoping and matte generation. I don’t know if it’s been successfully used in production much yet, but given its incredible potential, everyone seems to be playing around with it. It’s only a matter of time before it gets widely used in production. The arrival of the USD [Universal Scene Description] format is getting a lot of interest. Pipelines are getting more complex, and there are a lot more shots shared between FX facilities, which makes this format interesting. MaterialX also shows potential for exchanging materials between renderers and between facilities that use different renderers.
The lines are being blurred between layout and final renders. Software like Clarisse makes it much easier to quickly build very complex shots with a ton of assets and geometry in a very user-friendly way. Facilities are building libraries of assets for quick and easy reuse. Facilities rely more and more on custom libraries of animation vignettes to quickly lay out crowds instead of turning to complicated AI-based systems. We even build huge libraries of pre-simulated FX elements that we can quickly lay out when building a shot. These can often be used directly in a final render, or they can serve as reference for the FX teams to guide the final simulations.
Since scenes are increasingly huge and complex, people are turning more towards procedural and simulation tools. That’s one of the main reasons a tool like Houdini is gaining popularity. It already contains many tools and is so flexible that it can help automate content creation that would otherwise be very costly to create by hand. OpenVDB is one such tool that opened up so many possibilities that simply weren’t possible before. Deep images are still a good tool for rendering and merging complex assets that can even come from various renderers. The cloud and GPUs are more easily accessible these days and can help distribute otherwise very costly computational jobs. Color spaces are easier to manage thanks to better and easier-to-use standards.
On-set work is changing as well. The use of VR and AR opens all sorts of possibilities for artists to play around and find their shots. The industry is also moving away from blue and greenscreens and trying to use and capture as much practical as possible. It’s great reference for actors and directors of photography. We still have to replace and enhance a bunch of things in CG, but we now have great references to match. That increases the amount of rotoscoping we need to do to insert our 3D elements into the plates but, hopefully, machine-learning rotoscoping can help us ease that pain somewhere down the line. I also see a lot of LED setups used to project pre-rendered elements to help actors visualize the upcoming CG work and better react to their environment, while helping the director of photography light and better composite their shots. It also means it’s much easier to integrate our CG into the plates, as what is captured in those plates is already close to what the final render is going to be. Rotoscoping and prep work is much easier to do in these cases.
“The new trend that people are playing with is machine learning and deep neural networks. … Given its incredible potential, everyone seems to be playing around with it. It’s only a matter of time before it gets widely used in production.”
“When we get a good response on a vignette it’s much easier to build the pipeline – only once! – knowing we’ve cracked the big design problems without affecting scores of artists.”
2018 needs to become the year of VFX Rapid Prototyping! As filmmakers become more accustomed to thinking about Visual Effects as an environment where they can explore design solutions rather than just executing a design that has been created by another department, we need to find quicker and more effective ways to ‘sketch’ in motion, in 3D.
To illustrate the point: Creating a five-second cool-looking ‘character vignette’ that is technically bare-bones – essentially held together with digital duct-tape and glue – may well save a VFX facility the incredible pain of building and rebuilding a fully fledged industrial-scale pipeline over and over again as they try to keep up with the filmmakers’ creative exploration. The test shot/vignette doesn’t need to be long, doesn’t even need to look photoreal, but it should be crammed full of attitude with some basic FX if the character design supports it. Think of it as Concept Art in motion.
When we get a good response on a vignette it’s much easier to build the pipeline – only once! – knowing we’ve cracked the big design problems without affecting scores of artists.
As audiences binge on ever-magnified spectacles of the fantastic and on ever-developing platforms, the pressure for new ideas is the greatest it has ever been. There is a clear trend to defer idea creation in the belief that the best creativity evolves over time. The danger is that the visual effects process that empowers this deferment is seen simply as technical and not creative in its own right. Time must be shared for VFX artists to gestate a creative answer to creative questions, and VFX filmmakers will find it increasingly necessary to find techniques and practices that allow the creative space to deliver volume on tighter schedules at a sustained quality.
“Time must be shared for VFX artists to gestate a creative answer to creative questions, and VFX filmmakers will find it increasingly necessary to find techniques and practices that allow the creative space to deliver volume on tighter schedules at a sustained quality.”
“Invisible effects are everywhere in the industry; they just don’t get the spotlight. However, with all invisible effects, VFX can fix small continuity errors, set issues, dress issues and even the makeup.”
We will continue with the strong trend that has emerged in recent years and I expect will continue to increase in the future – more digital characters. Creating a highly realistic CG human has always been a goal for the VFX industry. Fortunately, we are getting closer to that dream.
There’s a big trend of using digital characters to enhance the actor’s performance or replace them completely. Also, younger or living CG versions of actors. This is a massive display of how far technology and art have come. Rogue One‘s stunning work to recreate digital versions of Peter Cushing’s Governor Tarkin and Carrie Fisher’s young Princess Leia, a young Johnny Depp as Jack Sparrow in Pirates of the Caribbean: Dead Men Tell No Tales, followed in the footsteps of Marvel’s 2016 Captain America: Civil War. Captain America revealed a young Robert Downey Jr., while HBO’s Westworld showed a de-aged Anthony Hopkins. Martin Scorsese intends to use it on Robert De Niro for the upcoming The Irishman.
Expect further gains for invisible effects. Invisible effects are everywhere in the industry; they just don’t get the spotlight. However, with all invisible effects, VFX can fix small continuity errors, set issues, dress issues and even the makeup. I’m also happy practical effects are back in fashion. A multitude of CG will ultimately be part of the final film, but the reliance on practical explosions and stunts is clear. Wonder Woman and Dunkirk have that sense of grit and grime designed to make them look more hand-crafted than some of the big effects-driven blockbusters.
We are seeing digital animals and humans that are really blurring the lines between live action and CG. As studios and vendors get more comfortable approaching these challenges, we will continue to see digital humans in more than just the tent-pole VFX movies. The success of these will be varied – similar to the initial move away from miniatures, when some CG shots looked amazing and others were obviously a bit overly ambitious. With ever-improving facial-capture technology and artists pushing the envelope at each opportunity, the result of all of these digital-human projects will be to net out more reliable and successful methodologies for future shows to take advantage of.
In the realm of digital animals, we will be seeing the tech perfected in the Planet of the Apes films and The Jungle Book being tapped for a larger range of VFX-budget movies as well. Scripts featuring sidekick characters, whether fantasy or real, will be more achievable and likely to make it on-screen. Current movies like Guardians of the Galaxy Vol. 2, Okja or Paddington 2 that feature fantasy creatures are definitely benefiting from the lessons learned trying to make real animals look photoreal over the past decade.
“With ever-improving facial-capture technology and artists pushing the envelope at each opportunity, the result of all of these digital-human projects will be to net out more reliable and successful methodologies for future shows to take advantage of.”
This is an expanded version of the article that appeared in the Spring 2018 print edition of VFX Voice.