Interviews


“That’s What Public Art is All About…Everyone is Invited”: Andrey M Paounov on his Christo doc Walking on Water

Andrey M Paounov’s Walking on Water centers on the legendary environmental artist Christo and the realization of his most recent project (his second since the 2009 death of his beloved partner in life and art Jeanne-Claude). 2016’s The Floating Piers was a two-mile-long walkway of monk-yellow fabric that allowed over a million visitors to “float” on foot across Italy’s Lake Iseo. What Walking on Water is not, thankfully, is your standard celebratory portrait of an unconventional maestro (though Christo, who brings to mind a Bulgarian version of Bernie Sanders, is certainly that). Indeed, what makes Paounov’s Locarno-premiering film so refreshingly […]

“Everything We Could Embrace about Motherhood Happened Through the Making of this Movie”: Director Ash Mayfair on Her Vietnam-Set Debut, The Third Wife

The Third Wife marks the ambitious debut of Vietnamese director Ash Mayfair, who gained her MFA from NYU. Set in the late 19th century, her film tracks the fortunes of 14-year-old Mây (Nguyễn Phương Trà My), who is selected as the third wife of a much older man who expects her to bear him a son. Her life in rural Vietnam becomes further complicated as she begins to develop feelings for the second wife Xuân (Mai Thu Hường) and as pressure builds in the family. Shot by Chananun Chotrungroj (Pop Aye, Hotel Mist), the film largely uses natural light and […]

“Shows of Women Who Eat Bananas Seductively are Banned”: Shengze Zhu on Present.Perfect

Before authorities cracked down in June 2017, over 400 million customers watched live streaming in China, primarily on three internet sites: douyu.com, huya.com and panda.tv. (According to Variety, panda.tv closed in March 2019.) Live streaming in China resembles amateur YouTube broadcasts here, with a slightly different vocabulary. In China, “anchors” host “showrooms,” or channels, and transmit “bullets” to their followers. Documentary filmmaker Shengze Zhu (Another Year, 2016) screened hundreds of hours of footage for Present.Perfect. What starts as a survey of live streaming narrows down to focus on a handful of anchors, including a seamstress assembling underwear in a clothing […]

“We Don’t Do Pickups, It’s Not Fair to the Actors”: Ritesh Batra on Photograph

Two strangers from different classes meet in Mumbai by accident in Photograph, an Amazon Studios release opening theatrically May 17. Rafi (Nawazuddin Siddiqui) scrapes along by selling snapshots of tourists. The middle-class Miloni (Sanya Malhotra) has her life planned for her: a course in accounting, followed by an arranged marriage. Through a familiar screwball-comedy twist, she agrees to pose as Rafi’s betrothed when his grandmother Didi (Farrukh Jaffar) visits. Photograph is not strictly a comedy, but more a study of two deeply unhappy people taking tentative steps out of isolation. Writer and director Ritesh Batra explores his characters with an […]

“Arrogance and Confidence Comes with Film School and That Age”: Joanna Hogg on The Souvenir, Shooting 16mm and Film School

There are many movies about making movies, far fewer about film school. Joanna Hogg’s The Souvenir (the first in a diptych—part two is supposed to shoot this summer) grounds itself in the early ’80s at the UK’s National Film and Television School (NFTS), where Hogg herself went to school. It was there that she experienced a tumultuous relationship, dramatized here as the story of clean-living Julie (Honor Swinton Byrne), a student who falls for Anthony (Tom Burke) after they meet at a party. All well and good, but what Julie doesn’t clock is that Anthony is a heroin addict. A real-life […]

Streaming, Theatrical and Film School: Barry Jenkins, Boots Riley and Aaron Stewart-Ahn at the Miami International Film Festival

Three of current American independent cinema’s most prominent filmmakers recently came together at the Miami International Film Festival to impart some of the hard-earned knowledge they’ve acquired. Barry Jenkins (Moonlight, If Beale Street Could Talk), musician, activist, and storyteller Boots Riley (Sorry to Bother You), and journalist-turned-screenwriter Aaron Stewart-Ahn (Mandy) were honored at the festival as the first trio of guests to be part of the inaugural Knight Heroes masterclass and symposium. Ahead of their presentations in front of a crowded Olympia Theater in Downtown Miami, the three creators sat down with Filmmaker to discuss a wide range of topics: the […]

Breaking new ground in African philosophy

Richard Bright: Can we begin by you saying something about your background?

Jonathan O. Chimakonam: I am Igbo from Nigeria. I hail from the region at the eye of the rising sun, in the lands across the great Niger River, snaked through by the mystic river called Idemmili, at the bank of which is the land of my ancestors, Oba. I come from the family of Okeke-mpi, in the lineage of Ezeneche, from the brave clan of Umudimego in Okuzu, a community on the hill. I trained as a philosopher, obtaining my B.A. honours from Ebonyi State University, followed by a master's and a doctorate from the University of Calabar, Nigeria, where I currently work. It is a nearly impossible position to work from as a researcher, as with most universities in sub-Saharan Africa, with little or no funding and, of course, no mentoring. Because of this, your development as a researcher is slow and you tend to make mistakes in the absence of mentors. But sheer doggedness kept me going.

Having gone through some really difficult times and experiences in my formative years as a scholar, I decided I was going to solve this problem for my students. So I gathered some of my postgraduate students and started the Calabar School of Philosophy, initially as a mentoring club, which later metamorphosed into the Conversational School of Philosophy (CSP). Today, I am proud to say that I am the convener of this forum, and its membership cuts across several universities on the continent and beyond. It has developed into a school of thought in the African philosophical tradition and a movement of difference-makers in Africa's intellectual history.

As a researcher, I aim to break new ground in African philosophy by formulating a system that unveils new concepts and opens new vistas for thought (Conversational philosophy, 2015a,b); a method that represents a new approach to philosophising in African and intercultural philosophies (Conversational thinking, 2017a,b, 2018); and a system of logic that grounds them both (Ezumezu, 2017, 2018, 2019). I give everything to my research in African philosophy because, above all else, I wish to be remembered as an African philosopher and not just a philosopher.

RB: Have there been any particular influences to your philosophical practice?

JC: Definitely; every philosopher has had some influence, and mine is no different. I started off as a logician, having been influenced by one of my teachers at Ebonyi State University (EBSU), Professor Uduma Oji Uduma. To me, he was more than a teacher. Back then in the Department of Philosophy, he was a figure that was larger than life. I remember our first year; we would gather in circles and discuss him. He seemed to have made a name for himself in the school generally. So there were a lot of expectations and impressions about him. We were not sure what to expect and, to make matters worse, he skipped the early classes, thus prolonging the suspense. When he did turn up for his first class with us, I recall the tension: some of our colleagues who had a phobia of logic were scared to death, others were too excited, but I remember I was focused on discerning what made the man think. You could see inspiring confidence with a touch of challenging arrogance in his demeanour. A combination of these two traits was not lacking in Dr. Joseph N. Agbo, our firebrand lecturer at EBSU and an unabashed Marxist, a virus, unfortunately, he could not infect me with, much as he tried. While Uduma challenged me the most, Agbo was the one who inspired me the most. I think it is safe to say that the influences from these two were foundational in my formation as a scholar. Over the years, I have come to realise that a good scholar must have a tincture of confidence and arrogance: confidence to inspire their students and arrogance to challenge them. The humble and timid scholar, no matter how brilliant, neither inspires nor challenges anyone, and that makes them a bad scholar as far as I am concerned. The academe is no place for timidity or the idea of humility bandied around nowadays. Humility is a concept that is terribly misunderstood and misinterpreted, especially in the African academe, rife with jealousy, fat egos and mediocrity. The idea of academic modesty or humility encourages peers not to brag about their accomplishments in ways that would rub others' failures or under-achievements in their faces. It does not discourage inspiring confidence and challenging arrogance. A certain level of cockiness is important in academia. Unfortunately, fat-ego mediocrities in the African academe waste valuable research time castigating and plotting the downfall of their more aspiring peers who represent the true spirit of the academe: that of inspiring confidence, challenging arrogance, charisma and charm, all of which my two teachers above possessed.

After my honours programme, I went for one year of mandatory national youth service. On returning, I went to the University of Calabar (UNICAL) for my postgraduate programmes on the recommendation of Dr. Kanu Macaulay, an amiable gentleman who supervised my honours project. I wanted to continue my studies in logic, and UNICAL appeared to be the ideal place. There were Professors Princewill Alozie (who was retiring at the time), Chris Ijiomah, Andrew Uduigwomen and Dorothy Ucheaga (now Oluwagbemi-Jacob), and both Professor Uduma and Dr. Kanu had been trained at UNICAL, Uduma as an undergraduate and Kanu through to his doctorate. It was during my time as a postgraduate student that I began to study African philosophy. The African philosopher and metaphysician Professor Innocent Asouzu was already well known for his metaphysical system dubbed 'ibuanyidanda ontology'. I did not take any of his classes, but I took time to read his works. Even though I conducted my master's and doctoral research in the field of logic, moderated by Uduigwomen and Ijiomah, I did a lot of personal study in African philosophy. It was in African philosophy that I became heavily influenced as a researcher by the trio of Innocent Asouzu, whose thinking style I adopted; Pantaleon Iroegbu, whose writing style I adopted; and Campbell S. Momoh, whose radical style I adopted.

Today, I am probably known in academia more as an African philosopher than as a logician. My contributions to knowledge in the folds of conversational thinking, conversational philosophy and Ezumezu logic have been shaped by influences from the three African philosophers above. I am grateful to them for influencing my research, and to Uduma and Agbo for shaping my character as a scholar. This is not to suggest that others who taught me at various levels have not contributed anything to my development; they all did, one way or another, and for that I am grateful. But I am here focusing on two specific forms of influence: my character as a scholar and my research.

RB: What are the factors behind the contemporary understanding of identity?

JC: Well, that question can have different answers depending on the inclination of the philosopher. But for me, I would like to say that Reductive Physicalism and Non-reductive Physicalism are shaping most philosophers of mind nowadays. While the Reductive Physicalist position holds that with time, scientific accounts would be able to explain all mental states and properties, the Non-reductive Physicalist position holds that the predicates we employ in describing mental states cannot be reduced to the language and lower-level explanations of physical science, even though the mind is not a separate substance. There is this movement towards monism and away from dualism. You see, the days when substance dualism was the pop culture of the field are in the past. The influence of religion has since waned following the collapse of the Holy Roman Empire and the displacement of supernaturalism by science as a framework of choice. The individual is no longer largely seen as an entity with two aspects; one physical and the other spiritual. Scientific understanding of the human being is gaining prominence due mainly to the works of the neuroscientists which influenced the neurophilosophers, of which Patricia Churchland is the egg-head.

When you study the works of physicalists like Daniel Dennett and those of the neurophilosophers, you would understand why physicalism (of different shades) is making more sense in this science-guided era than, say, the metaphysical option promoted by consciousness scholars like David Chalmers. With physicalism and neurophilosophy, there is hope and a clear path for the realisation of that hope: that one day scientific explanations can help us make sense of it all. But the metaphysics of consciousness, the type hyped by Chalmers and inspired by Thomas Nagel's 'what is it like to be a bat' experience, does not offer a similar level of hope. This latter position excites the mind, no doubt, but does not inspire much hope.

From the foregoing, you can see why the contemporary understanding of identity is going the way of physicalism. What makes me, me? This question tends to suggest a sort of introspection until we ask again: what makes me different from others? Then we begin to understand that the question of identity is not an internal thing; it is a social phenomenon. My identity can only be sorted out in connection with the identity of others. It is a property that distinguishes me from others and others from me, and since the interaction between me and others can only be created in a physical space, identity becomes a social phenomenon. The African conception of the self as articulated by Chukwudum Okolo, in a way, can be likened to a physicalist position on identity.

RB: Are they different from any past understandings of identity?


The post Breaking new ground in African philosophy appeared first on Interalia Magazine.

AVENGERS – ENDGAME: Simon Stanley-Clamp – VFX Supervisor – Cinesite

In 2014, Simon Stanley-Clamp explained to us Cinesite's work on HERCULES. Since then he has worked on many films, such as ANT-MAN, THE REVENANT, CAPTAIN AMERICA: CIVIL WAR and ROBIN HOOD.

How did you get involved on this show?
Simon: I became involved at the bidding stage, so pretty early on. I have prior experience of working with Marvel – this was my fourth production.

How did it feel to be back in the MCU?
Simon: I had just come off working client-side and overseeing ROBIN HOOD for Summit Entertainment, so this was an entirely different direction and a very exciting opportunity. My last Marvel production was CAPTAIN AMERICA: CIVIL WAR, a couple of years back.

How was the collaboration with the Russo brothers and VFX Supervisor Dan DeLeeuw?
Simon: We had no contact with the directors. VFX supervisor Dan DeLeeuw gave all the kick-off briefs for our sequences. Once we were up and running we dealt with associate VFX supervisor Mårten Larsson and VFX producer Jen Underdahl, who were both a pleasure to work with. Initially we communicated remotely via cineSync sessions with weekly conference calls. As the project accelerated, the number of cineSyncs and conference calls increased, and at the busiest period, during the final push to delivery, the supervisor and producers came to London for one-to-one sessions, which were very helpful.

How did you organize the work with your VFX Producer?
Simon: Our work naturally divided into two, so we split the artists into two teams; I had supervisors leading each team for me and both shared the resources of our lighting, FX and assets departments.

Which sequences did Cinesite work on?
Simon: We worked across six sequences, the largest of which were:

  • The opening Lost In Space sequence where Tony is stranded; interior and exterior shots of Quill's M-Ship, from Tony and Nebula playing table football to wide shots of the ship in space.
  • Nebula, War Machine, Hawkeye and Black Widow visiting the planet Morag.
  • New York City in 2013, inside the Stark Tower lobby: Tony and Ant-Man's planned heist to steal the tesseract from a suitcase.
  • Tony Stark and Captain America at Camp Lehigh in the 1970s, where Tony meets his father while searching for the tesseract.
  • Following the destruction of the Avengers base, Hawkeye is pursued by Outriders as he searches for and retrieves the gauntlet.

Can you explain in detail the creation of the spaceship?
Simon: We received the ship as an asset from DNEG. We modified it for the “Lost in Space” sequence, showing more damage and wear and tear from its escape from Titan in INFINITY WAR.

Can you tell us more about the shaders and textures work?
Simon: Our head of assets Tim Potter is best placed to answer that.

Tim: With the Lost in Space sequence we received the ship asset from another vendor and started to build up all the connections between the texture maps in the shaders. As we progressed we found ourselves having to sculpt bespoke damaged areas of the space ship and create additional texture maps, adding further detail in the form of rivets and panelling, dirt and grime, as well as a larger break-up to help add a sense of scale. We also created frost maps for the windscreen, which were used in various shots. Lookdev were constantly updating the shaders with our new maps and pushing various values to get the best look for Quill's 'lost' ship.

How did you handle the lighting challenge in deep space?
Simon: We started implementing Gaffer about a year ago as our main lighting tool. Our Head of Lighting Roberto Clochiatti oversaw lighting on ENDGAME.

Roberto: Gaffer gave us a totally different approach, with its more procedural and modular structure. We had a huge amount of scalability to manage lighting scenes and we moved quickly from a one-shot approach to a multi-shot approach. This reduced the workload but maintained consistency within the sequence. We were also able to introduce tools that could be built within the software, without bothering the pipeline department, so that made a big difference to us.

Quill's M-Ship is floating, surrounded by nothing but distant nebulas and stars. The challenge was to maintain a sense of loneliness and emptiness. We kept the lighting fairly subtle, only using a few lights with mapped stars and distant sources. We used some star-field HDRIs painted by compositing in order to get some plausible reflections, as well as to help ground the ship, knowing that we didn't have any plates to match the lighting to.

Can you explain in detail the light creation and animation when the ship goes super fast?
Roberto: When the ship goes into hyper drive, we used a combination of lighting techniques. On one side we have the lighting before the jump, with a clear light direction given by the sun. We could reference the planets seen in the background, and once the background was approved it was fairly quick to light the pre-jump part of the shot. The only rule is to always make the ship read easily while it’s moving.

For the jump part, we placed a geometry representing the tunnel of lights surrounding the ship. With values based on what is on screen, we could replicate a fairly complex set of strobing lights and streaks hitting the silver metal of the wings and the body of the ship. Once the "tunnel" look was approved we made sure the lighting was in sync with it by offsetting the animation of the lights passing by, so it would look like Quill's ship was flying through at high speed.

Can you explain in detail the design and creation of the beautiful space backgrounds?
Simon: We established the look of the M-Ship early on for the first full trailer and that was used for the rest of the movie. During the Lost in Space sequence, where we see the marooned ship, there are three main shots which show the environment: one close up, one wider and a pullback from Tony in the cockpit. We created several iterations as concept stills which I vetted before presenting 20 or so designs to the client and settling on the final look. Dan wanted to express the loneliness of space and one way we helped communicate that was to progressively show less detail in the more distant shots. The widest shot is almost entirely black space, with small colour accents from a gaseous nebula, so you really get a sense of the ship’s isolation and the emptiness of its location.

How did you create the 1970s version of Camp Lehigh?
Simon: Two of the scenes we were working on were derivatives from previous films. The Morag sequence revisited GUARDIANS OF THE GALAXY, and Camp Lehigh revisited the 1970s and CAPTAIN AMERICA: THE FIRST AVENGER.

The briefs were aesthetic; essentially, we needed to match the tone of the original. For the underground laboratory shots, where Stark finds the tesseract, we extended the interior set pieces in both directions away from the camera. Our build had to match the look of the full-sized set and borrow elements from the original film.

As Tony and Howard leave the laboratory and cross the forecourt, there is a huge camera pan up above them, showing the full extent of the base. Dan supplied us with very specific reference materials from army bases in the US, particularly for the colourways and layout of the soldiers' accommodation. I'm pretty sure he said that he spent a period of his childhood on army bases, so he had a clear recollection of how it should look.

What was the real size of this set?
Simon: The foreground portion of the shot encompasses all the live action elements of the plate, about a third of the full base layout.

How did you manage the crowd creation and animations?
Simon: We extended the set by around 70%, adding buildings and vehicles and creating a busy, populated scene utilising production-shot bluescreen elements of hundreds of extras. We retimed and repositioned the action on cards and placed them in Nuke to fill out the shot.

Another time period the film revisits is New York in 2013. How did you recreate Stark’s RT interior mechanics?
Simon: This is the sequence where Ant-Man shrinks down to slip inside Tony's t-shirt and RT chest unit, adjusting the mechanics and causing it to short out. The point of this is to create a diversion enabling a second attempt to steal the tesseract. We based the RT unit designs on IRON MAN 2 and designed and built a 3D environment which mimicked the look of that version. We pitched designs to Dan, who gave preferences which we modified. We blocked out the scene and made minor alterations to the layout to favour seeing Ant-Man convincingly in the macro environment.

Can you explain in detail the creation of the Outriders?
Simon: We received the model for the Outriders from ILM.

How did you handle their rigging and animation?
Simon: Choreography of the Outriders was key to making the chase sequence work. Hawkeye must never be caught but the Outriders move at blistering speed, so we often have them zigzagging around the tunnels, almost like a skateboarder going up the side of a tube, taking the longest path possible to slow them down. Joint CG Sup Chris Petts can give a more detailed overview of the challenges of this sequence.

Chris: The Outriders were a particular challenge in terms of animation. Drawing on apes, dogs and even spiders for their movement, we were aware that in style, they were both powerful and dangerously fast. We saw in INFINITY WAR that these creatures are capable of outrunning a human easily on open ground, so our challenge was to understand how Hawkeye could keep ahead of them while being chased through the underground tunnels. It quickly became apparent in blocking the animation for the shots, that their strength in numbers was also their weakness within a tunnel environment. They would quickly get in each other’s way in a confined space – each intent on their prey, regardless of the actions of the others.

To begin with, we created a number of animation cycles – running, climbing, crawling and leaping. These were used to block out the scenes, using the appropriate action depending on their position in the shot. As the action within a shot became more defined, the animation was refined for each creature within that specific shot. In the finished shots, bespoke animation replaced these early cycles, as each Outrider interacts so closely with its environment. With eight limbs for each creature and a great deal of interaction between them, this could be particularly time-consuming for the animators. With the Outriders in such close confinement, any change to the animation of one would quickly impact the movement of the others. Every creature also had a simulated muscle rig built by our creature effects department, which was configured to be previewable by the animators in order to see the final body shape within the animation scene itself.

Can you tell me more about the sequence revisiting the planet Morag?
Simon: The sequence on Morag opens with four full CG shots showing the planet from space, then the escape pod being deposited from the M-Ship's loading bay onto the planet surface. The final shot in this short sequence was an exact match to Quill's view landing on Morag in GUARDIANS OF THE GALAXY, but with the camera dollied over to screen left to give a slightly different viewpoint. As the sequence progresses the same technique is repeated and we see Quill singing and dancing, as in GUARDIANS, viewed from War Machine and Nebula's shifted perspective. We extended the green screen set piece by 100%, adding columns, dripping water and foliage to fill out the environment. Later, still on Morag, Nebula and War Machine enter the inner chamber to extract the orb with the power stone inside from a laser net, again repeating a corresponding sequence from GUARDIANS. Nebula reaches in, partly destroying her arm, which is stripped back to hot, bare metal. She hands the orb to War Machine as the molten metal of her arm cools.

Which sequence or shot was the most complicated to create and why?
Simon: The wide shot where Quill’s ship lands on the surface of Morag used many disciplines, from environment builds through FX, lighting, animation, texturing and comp. It was pretty demanding.

Was there something specific that gave you some really short nights?
Simon: The late finaling of designs for the time suit gave us little room for manoeuvre for the final compositing of some shots. But this was the same for every vendor and late deliveries are the norm in VFX.

What is your favourite shot or sequence?
Simon: I really like the shot mentioned previously, with Quill’s ship landing on the surface of Morag. Although it was challenging, it was well planned and came together well. Seeing it up on the big screen and looking great was ultimately very rewarding.

There's also a close-up of Nebula touching Tony in the opening Lost in Space sequence, as he drifts off into unconsciousness. The plate is beautifully lit, and we added our very subtle space background, very out of focus. Throughout, it's a full CG Nebula arm, and you only really become aware of this when her wrist is exposed as an open space connected by metal rods to her metal hand. Subtle but effective.

How long have you worked on this show & what was the size of your team?
Simon: We had about 126 crew, with 40 support, so I’d say around 160 in total.
We started builds for the assets in around September 2018.

What is your best memory on this show?
Simon: We had a great team of people who were a pleasure to work with. We all pulled together and there were lots of moments where, even when it was late in the evening and we were all tired, we managed to retain our sense of humour. I’m proud of the work we pulled off.

What is your next project?
Simon: It’s too early to say, but I’m looking forward to the next challenge!

Roberto Clochiatti – Head of Lighting

Which shots or aspects of Cinesite’s work were you most closely involved with?
I supervised lighting for the last three months of the project on pretty much all sequences with characters, vehicles, props and environments, except the environment in the Morag sequence, which was done in a non-standard way. I gave lighting feedback to the artists, performed quality reviews ahead of the VFX supervisor's, reorganised the lighting crew, maintained relations with the other departments to discuss technical and organisational aspects, and attended the morning production meetings and VFX dailies.

What were the most challenging parts of that work?
For most of the sequences the workflow was really smooth. A few challenging situations arose from the artistic development of parts of the complex sequences involving character animation, render-heavy FX simulations and their interaction with the environment. The design of the pictures ranged from a very graphic look to a very photo-realistic look; finding a balance point between them took many iterations and development across several different ideas. Nonetheless, to achieve it we needed to change our technical and organisational approach close to the end of the project.

Another lighting challenge related to the delivery of assets (from the assets and animation departments) over a pretty long period of time while the concept kept changing; from an organisational point of view this was a stress test for lighting.

The EBB sequence presented a difficult lighting situation: the initial concept was rethought during the shoot, and a very saturated environment did not allow the characters to read as they would appear on a lookdev turntable, so we needed to change lighting and shading to create a more interesting look for the characters.

In the MOR sequence, the characters were shot within a closed rig of lights, which made them look like they were on a stage rather than in an open environment. The challenge was to integrate actors shot on a stage with a fully CG open environment; many tests were done to keep the perspective correct while maintaining a sense of openness and distance. We also had to keep an eye on references from the previous movies and maintain a consistent look as well as a consistent mood.

Can you tell us about Gaffer and whether it was used successfully?
We started implementing Gaffer less than a year ago as our main lighting tool, replacing Maya and its lighting tools. Gaffer gave us a totally different approach: its procedural and modular structure gives us a huge amount of scalability in managing lighting scenes, and we moved quickly from a one-shot approach to a multi-shot approach, which reduced the workload while keeping each sequence consistent. We were also able to introduce powerful tools built within the software itself, without bothering the pipeline department.

The goal was to put any lighting TD in a position to manage complex scenes. We organised the sequences by grouping shots with similar artistic and technical challenges, and each group of shots was assigned to a single lighting artist who managed it entirely, so the lighting team was fairly small. With Gaffer, sharing pieces of scripts, macros and solutions is very easy. Pipeline integration with Gaffer continued throughout the project, which did create some problems, but it also gave us real conditions in which to improve the workflow and tools, and by the end of the project we had made noticeable improvements. To answer the question: yes, it was a success, and I am very happy with what the RnD and pipeline teams did to integrate and improve the software within the current Cinesite pipeline, and with our lighting TDs, who took on the challenge very positively.
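As a rough illustration of the one-shot versus multi-shot distinction Roberto describes, here is a minimal, hypothetical Python sketch of a sequence-level light rig with per-shot overrides. This is not Gaffer's actual API; every class, light and shot name below is invented purely for illustration.

```python
from dataclasses import dataclass, field, replace

@dataclass(frozen=True)
class Light:
    name: str
    intensity: float
    color: tuple  # RGB

@dataclass
class SequenceRig:
    """Sequence-level light rig shared by every shot in the sequence."""
    lights: dict = field(default_factory=dict)

    def add(self, light: Light) -> None:
        self.lights[light.name] = light

    def for_shot(self, overrides: dict) -> dict:
        """Resolve the rig for one shot: shared lights plus per-shot tweaks."""
        resolved = dict(self.lights)
        for name, attrs in overrides.items():
            resolved[name] = replace(resolved[name], **attrs)
        return resolved

# One shared rig is defined once for the whole sequence...
rig = SequenceRig()
rig.add(Light("star_field_dome", intensity=0.2, color=(0.6, 0.7, 1.0)))
rig.add(Light("distant_sun", intensity=1.5, color=(1.0, 0.95, 0.9)))

# ...and each shot only stores its deltas instead of rebuilding the scene.
shot_0040 = rig.for_shot({"distant_sun": {"intensity": 0.8}})
shot_0050 = rig.for_shot({"star_field_dome": {"color": (0.5, 0.9, 0.7)}})
print(shot_0040["distant_sun"].intensity)  # 0.8
```

The point is only that the shared rig lives at sequence level while each shot carries just its overrides, which is what keeps a sequence consistent while reducing the per-shot workload.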

Is there anything else you would like to say or contribute?
In general I am pretty happy with the job we did. Despite all the difficulties we faced, I can see an improvement in our procedures, workflow and tools. I am sure things could be done better, but I think we are headed in the right direction to improve the final quality and reduce struggles.

How did you handle the lighting challenge in deep space in the Benatar sequence?
For the deep space lighting, we used an early concept to understand what kind of mood was going to be set during the sequence. The Benatar is floating, surrounded by nothing but distant nebulas and stars. The challenge was to get a good representation of that: maintaining a sense of loneliness and emptiness, keeping the lighting fairly subtle, using only a few lights with mapped stars and distant sources, and giving some of the greenish/red tint used in the shot where Tony Stark is in the cockpit.
We used some star-field HDRIs that were painted by compositing in order to get some plausible reflections, as well as to help ground the Benatar, knowing that we didn't have any plates to match the lighting to. Balancing the Benatar against the background required going back and forth, finding a brighter spot behind the ship to highlight the silhouette and play with its shape as well.

Chris Petts – CG Supervisor

Which shots or aspects of Cinesite’s work were you most closely involved with?
My involvement was with any shots involving creature- or human-based effects, including any limb or suit replacements.

What were the most challenging parts of that work?
Any human and human-like action can be a challenge in animation. With human-proportioned characters, motion capture, or movement taken directly from a shot plate, can usually be used successfully. But certain characters – although often conforming to the humanoid shape – have sufficient differences from a human to make motion capture impractical, and hand animation a necessity.

The Outriders were one such case. In addition to hand-animating human-like movement, the added challenge with the Outriders was that they had four pairs of limbs to interact with the environment and with each other. Animating each one was like animating two humans at once.

Can you talk through 2-3 key shots and how you worked with animation to get them completed?

Ant-Man in NYC 4360/4400/4540
These fully CG shots involved Cinesite’s artists designing the interior of the iconic RT, and planning and creating the action of the shots using the available previsualisation and Paul Rudd’s voice tracks as a guide. Several design variations of the RT interior were created by Cinesite and proposed to the client, each drawing influences from the exterior appearance of the RT in other movies. A favoured version was chosen by the clients, and we began laying out shots based on this preferred design.

The action had to be carefully choreographed to the soundtrack, using the tone of Paul Rudd's pre-recorded voice to guide Ant-Man's action in each shot. When shots had been sufficiently blocked for camera and character movement, character action could be directly captured using Cinesite's in-house motion capture system, and this action carefully aligned with the CG set and props. It was important to keep a visual link to the exterior of the RT throughout the sequence in order to tie the action in to the surrounding shots. This was achieved through the use of colour palette – using the pale cyan-blue of the RT as a base colour for the shots – and by keeping the distinctive RT triangle visible in the background of the shots. The shots were given a shallow depth of field to help sell the micro-environment look, and careful attention was paid to the size and detail level of components, and to the types of textures and lighting within the RT, to maintain the macro-lens appearance.

Is there anything else you would like to say or contribute?
It’s a rare privilege to be working on such a high-profile and highly-anticipated film.

Tim Potter – Head of Assets

Anything else you can tell me about Benatar or the sequence from your perspective?
This sequence was challenging as we needed to give the Benatar a look that suggested it had been lost in space for a period of time so had picked up damage, but then make sure that it was still able to fly. At first the destruction that we sculpted into the model was difficult to read in certain shots due to the darkness of space so we had to do some shot specific shader work to help bring out this detail, but in the end we were very happy with the look we achieved.


How did you enhance your Ant-Man since his first movie?
The Ant-Man costume has gone through a number of upgrades and changes since INFINITY WAR. Our asset was ingested from another vendor, but with our NYC sequence having the suit so close to camera we had to create additional maps to help push the level of detail in the textures and shaders further. In Cinesite's NYC shots Ant-Man shrinks down and goes inside the reactor on Tony Stark's chest to create a malfunction. We ended up building a large environment inside the RT, and detailed one particular area for Ant-Man to land in and cause havoc. We were aiming to create a close, claustrophobic feel inside, as well as giving him room to move around and keeping focus on the area of action. As the interior of the reactor environment was very blue, it was a challenge to keep the red look of the suit, so we ended up lighting Ant-Man with very neutral tones to get around this, as it was important to be able to see and read the character.

A big thanks for your time.

WANT TO KNOW MORE?
Cinesite: Dedicated page about AVENGERS: ENDGAME on Cinesite website.

© Vincent Frei – The Art of VFX – 2019

The post AVENGERS – ENDGAME: Simon Stanley-Clamp – VFX Supervisor – Cinesite appeared first on The Art of VFX.

AVENGERS – ENDGAME: Dan DeLeeuw – Overall VFX Supervisor – Marvel Studios

In 2018, Dan DeLeeuw explained his work on AVENGERS: INFINITY WAR. Today he describes his work and the challenges on AVENGERS: ENDGAME.

INFINITY WAR and ENDGAME were filmed back to back. What were the main advantages on the VFX side?
I think creating our infrastructure early – staffing the teams with super talented people and creating dynamic systems – was the greatest advantage. After finaling 5119 shots over the course of two films, we have one of the best teams in the business!

After INFINITY WAR, did the directors want to change anything about their VFX approach?
The directors didn't have any specific requests. The infrastructure for the show was quite robust and flowed seamlessly into ENDGAME.

How did you organize the work with VFX Producer Jen Underdahl and the Marvel VFX Supervisors?
Jen worked to split the work by sequences as much as possible. This methodology worked better for INFINITY WAR because characters and assets slotted into specific battles throughout the film. For ENDGAME, characters and assets were shared across sequences. As a result, there was much more sharing between VFX houses. I took a more high-level overview on these two films. I oversaw pre-vis and pre-production planning, as well as running shots in post-production. Swen Gilberg covered first unit and Mårten Larsson covered second unit. When photography was complete, Swen and Mårten moved into post-production to help push shots to final.

How did you split the work amongst the VFX vendors?
We had the good fortune of working with many of the same vendors from INFINITY WAR. As a result, many of the VFX houses already had assets required for ENDGAME. We tried to cast the show based on work we could leverage from INFINITY WAR, as well as each house's strengths.

Did you update any models or techniques, such as Thanos, thanks to the experience of INFINITY WAR?
Digital Domain and Weta Digital updated their Thanos rigs. Weta used a new technique called Deep Shapes to provide a procedural way to get a finer level of complexity as the face moves from one expression to another. Digital Domain modified their process for tracking the dots on Brolin’s face. The new system used machine learning to track the dots in a few hours rather than one to two weeks.

How did you design and create the Quantum Space suits?
The Quantum Suits were designed by Ryan Meinerding in Marvel's look development department. From their designs, Framestore, ILM, and DNEG worked in parallel, detailing out the suits and making them functional. The suits could build over the characters' bodies using Stark's bleeding-edge technology. We based the effect on Framestore's work from INFINITY WAR on the MK 50 Iron Man armor.

A new CG character makes his appearance: a new version of Hulk. Can you explain his design and creation in detail?
We received designs from Ryan, and ILM ran with the sculpt of Smart Hulk's face. We decided on a great design pretty quickly, which allowed us to spend more time perfecting the rigging and performance. We used systems created by the Disney Research team in Zurich named Medusa and Anyma. Medusa would capture incredibly detailed meshes of Mark Ruffalo's face and Anyma would interpret the movements of the face from the FACS session. For ENDGAME, the research team rewrote Anyma to work from the dots captured in the helmet-cam footage. ILM rewrote their retargeting software to capture as much of Ruffalo's performance as possible. Framestore worked on Smart Hulk as well and created a machine learning system that could be trained to recognize Mark's performance.

How did you use your experience with Thanos for this new Hulk?
All of our work with Weta and Digital Domain focused on retaining the smallest details possible from Josh Brolin's subtle performance. With Mark Ruffalo, it was almost the opposite extreme. Where Josh's face was subtle and intense, Ruffalo's face was broad and elastic. The teams needed to focus on translating the intent of Mark's performance onto Smart Hulk's larger features. So for ENDGAME we had a great new character with his own set of challenges.

Can you tell us more about your work on set with the crew to get all the necessary information for post?
One of the many benefits of working with Marvel is the continuity of the VFX tools across 22 feature films. For the most part, the VFX teams use the same databases, HDR tools, and the same shooting crews. As a result, everyone on set is familiar with the data we need. That familiarity is the only way to get films this big done!

Some sequences give us a different look at the first AVENGERS movie. What were the main challenges with that?
Sequences from the older films shared original and new photography. For the original films we unarchived the final VFX shots and converted them into ENDGAME’s color space. In the DI the new and old shots were color timed to match to create the illusion of a new perspective on a scene. The great advantage is that we have access to all the assets from the other Marvel films. For instance we had the old LIDAR data of Morag from the GUARDIANS OF THE GALAXY set. When we went back and recreated some of the shots, we could re-build the live action set and incorporate it into our new photography. For scenes like the Battle of New York, ILM unarchived their old shots and data to help complete their new shots.
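As a side note on the colour-space step Dan mentions, this kind of conversion is commonly handled with OpenColorIO, the colour-management library used throughout VFX pipelines. The sketch below assumes OCIO v2's Python bindings; the config path and colour-space names are placeholders, not the actual spaces or tools used on the show.

```python
# Hypothetical sketch: converting pixels from an archived shot's colour space
# into a new working space with OpenColorIO (assuming the OCIO v2 Python API).
import PyOpenColorIO as OCIO

# Load a show config; the file name and colour-space names are placeholders.
config = OCIO.Config.CreateFromFile("show_config.ocio")
processor = config.getProcessor("legacy_log_space", "working_linear_space")
cpu = processor.getDefaultCPUProcessor()

# Pixels are passed as a flat RGB float list; applyRGB returns the converted data.
pixels = [0.18, 0.18, 0.18,   # a mid-grey pixel
          0.90, 0.10, 0.05]   # a saturated red pixel
converted = cpu.applyRGB(pixels)
print(converted)
```

In a real pipeline the same processor would be applied to whole frames rather than individual pixels, with the final shot-to-shot match handled in the DI as Dan describes.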

How did you approach the fight between the two Captain Americas?
For Cap vs Cap, we shot over green screen with two stunt performers. Chris Evans would step in for fighting and dialogue. When we see two Caps, Chris would perform against a stunt double. Lola then created the face replacement to complete the illusion of the two Captain Americas. Originally, we shot the sequence without Cap wearing his helmet, but it was confusing as to which Cap was which. DNEG completed the shots with an awesome all-CG background and added the helmet to reduce the confusion.

The final battle is really epic! What was your approach with such a big sequence?
We always try to map out as much of our big sequences as possible in pre-vis. The Third Floor did an amazing job of creating a road map for the entire final battle. For specific beats, the stunt team would create fight choreography for the hand-to-hand combat. This stunt-vis would then be cut into the pre-vis and used by production to shoot the sequence. Any all-CG shots would get turned over as soon as possible because they didn't have to wait for the live action photography.

We shot on stage with minimal sets and dressing. Ninety percent of what you see in the final battle is created in CG. Weta created a crater shape and layout from The Third Floor’s model. The rough crater was shared with ILM and both houses created their own incredibly detailed virtual sets.

We tried to split the sequence as logically as possible. We knew that the sequence would grow in post-production and we needed two of the biggest VFX houses in the world to complete it.

Can you tell us more about the previs work with the directors on the final battle?
As soon as we get the script, I will start working with pre-vis to flesh out the big fight sequences. The awesome team from The Third Floor was headed up by Gerardo Ramirez – they have artists that go all the way back to the first THOR movie. Gerardo and I will sit down with the script – or even just concepts if the script isn't written yet – and start to figure out the main beats. We typically over-animate a sequence, but that allows us to trim down the edit and keep the best parts.

The most important scenes were Cap picking up Thor’s hammer, the portals opening, and Tony’s sacrifice at the end. Once these main sequences were laid out, we would show the Russo brothers and would get input and notes. I think the biggest note was always to reduce – we wanted to make the sequence bigger than it already was!

How did you help the cast and the crew to visualize this epic sequence?
We would show the pre-vis and concept art to the cast and crew to help with the lighting as well as set the mood for the actors. For CG characters, we would always dress up stunt performers in mocap outfits so that everyone knew where our digital characters were on set.

The battle has a crazy amount of CG assets (characters, FX, environments…). What is your secret to not getting lost with so much information?
The MVPs of the show were the VFX on-set crew, production coordinators, and the VFX editorial team. Matt Lloyd, David Bosco, and Emily Denker headed up each of those groups respectively. The amount of data that they flawlessly handled was amazing. It all comes down to great tracking systems and crews with the experience and talent to keep it straight.

Can you elaborate on the complex process of shooting all the elements for the final battle?
Much of the final battle would be completed digitally, which offset the complexity of the shoot. The set itself was pretty simple. It consisted of a dirt floor and wreckage that could be moved to dress in elevations and foreground for camera. In some cases we would remove set dressing because it would hamper the stunt performers' movements. SFX created fire effects for interactive light and to add complexity to the set.

What are the main challenges with full CG shots?
For this film, it was the density of digital characters and effects. When we shot on stage we knew we wouldn’t get the scale and scope we needed. As a result VFX would have to carry a heavy load. When INFINITY WAR ended, we still had to go back and finish photography for ENDGAME. This caused us to have to turn over all CG shots later than we normally would have. The VFX houses had to back load their staffing to create enormous teams to get the work done!

What was the VFX shot count?
2496.

What was your best memory or moment on ENDGAME?
I have wanted Captain America to pick up Mjolnir since the first AVENGERS. The fact that I had the opportunity to design and bring the Cap vs. Thanos fight to the big screen was the best!

Now that those two huge AVENGERS films are done, how do you feel?
I feel like we climbed the movie equivalent of Mt. Everest. It’s an amazing view from up here. Everyone is seeing the movie and we don’t have to keep secrets anymore!

A big thanks for your time.

WANT TO KNOW MORE?
Digital Domain: Dedicated page about AVENGERS: ENDGAME on Digital Domain website.
DNEG: Dedicated page about AVENGERS: ENDGAME on DNEG website.
Framestore: Dedicated page about AVENGERS: ENDGAME on Framestore website.
Industrial Light & Magic: Dedicated page about AVENGERS: ENDGAME on ILM website.
Weta Digital: Dedicated page about AVENGERS: ENDGAME on Weta Digital website.

© Vincent Frei – The Art of VFX – 2019

The post AVENGERS – ENDGAME: Dan DeLeeuw – Overall VFX Supervisor – Marvel Studios appeared first on The Art of VFX.