Quentin Tarantino is getting ready to boldly go to the Final Frontier with his new Star Trek movie.
What do you want to see in Tarantino’s Trek? What would you do if you were making your first sci-fi studio blockbuster?
Before you pitch your takes, it might be helpful to know just what the Once Upon a Time In Hollywood filmmaker’s Trek has in store for the franchise — and when it takes place. On a recent episode of the Happy Sad Confused podcast, the Oscar-winner (and big Star Trek fan) revealed some story details about the top-secret movie.
“I don’t know how much I can say,” Tarantino said. “The one thing I can say is it would deal with the Chris Pine timeline.”
Setting the R-rated movie in the alternate reality where the last three Star Trek movies featuring Pine’s Kirk and Zachary Quinto’s Spock exist is both exciting for fans and, well, somewhat daunting for Tarantino himself.
Quentin Tarantino’s idea for a crossover movie featuring some of his most beloved characters is just one of the many awesome ideas stuck in Development Hell.
Which Tarantino characters would you like to see paired up in a crossover movie? What shared universe movie would you love to make?
Thanks to Tarantino’s Once Upon a Time In Hollywood arriving in theaters soon, we now know more details about one movie fans will (likely) never see: The long-rumored Vega Brothers team-up.
The Oscar-winning filmmaker has been doing the press rounds lately promoting his latest film, and recently chatted about his long-rumored Pulp Fiction prequel centered on the Vega siblings: Vincent (John Travolta) and Vic (Reservoir Dogs’ Michael Madsen). In the late ’90s, QT’s fans were nearly obsessive about the then-novel idea of a crossover team-up between the two brothers, but unfortunately, Double V Vega (the official title) never came to be. Here’s why:
Recreational drone pilots in the United States can now obtain near-instant authorization from the U.S. Federal Aviation Administration (FAA) to fly in controlled airspace near approximately 600 airports. This opens an estimated 2,000 square miles of airspace to safe and responsible recreational drone pilots, just as such authorizations were first made available to professional drone …
The Art of the Cut podcast brings the fantastic conversations that Steve Hullfish has with world renowned editors into your car, living room, editing suite and beyond. In each episode, Steve talks with editors ranging from emerging stars to Oscar and Emmy winners. Hear from the top editors today about their careers, editing workflows and about their work on some of the biggest films and TV shows of the year.
Recently, Steve talked with editor Sean Lahiff about editing the Netflix film “I Am Mother”. Sean has extensive experience as a VFX editor, having worked on “Harry Potter and the Deathly Hallows Part 2”, “The Hunger Games”, and more. Listen to Steve’s conversation with Sean about editing “I Am Mother” below:
In 1989, two movies about lions who would be king went into production. One was based on a cartoon from the ’60s; the other was marketed as a wholly original idea with no outside inspiration. Which have you heard of?
The Lion King remake crossed $185M in its opening weekend. The title is one of the most famous in the world, with merchandise and a Broadway show worth billions of dollars. As the remake continues to wow at the box office, news outlets like The Hollywood Reporter have recently revived a stunning would-be copyright case asserting that The Lion King may not be such an original idea.
Today I want to look at the Kimba vs. Simba case, talk about the overarching details, and see what you think.
Before we dig into the fact sheet, check out this video by Alli Kat in which she delves into the controversy!
A photographer has found herself a victim of an online scam that cost her both her money and her camera gear, after one user managed to get around eBay’s “protections” by making false claims they’d been sent an older camera model than that listed on the advert.
I loved shooting on film. Nothing was more exciting than the day of the telecine transfer—I finally got to see how the movie would look—but the day the bill was due, my attitude would always change. I hoped that one day the inflated price of the film transfer would come down to a realistic number. That day is now! My filmmaking career started in the mid-1990s in Florida, around the same time digital was born. I can remember all my friends shooting on Mini-DV and nudging each other saying, “Check out how great the Canon XL-1 looks on TV. It […]
Imagine Products – an American company specializing in workflow solutions for both Mac and PC users – has just released ShotPut Pro Mac 2019.2. This update to their well-known offloading application for filmmakers adds support for several new camera formats and is compatible with the Imagine HQ iOS app. That app lets you monitor ShotPut Pro’s offloads and transcodes directly on your phone while you are doing something else, letting the computer do the job. Let’s take a closer look at it!
ShotPut Pro Mac 2019.2
If you are not familiar with ShotPut Pro, it’s Windows- and Mac-compatible software for filmmakers, media creatives, and DITs that has been around for nearly ten years. The primary purpose of ShotPut Pro is to safely transfer your files, with checksum verification, from your memory card or hard drive to one or more destinations, and to do so as fast as possible.
Once all your footage is in a safe place, ShotPut Pro can automatically create a report that you can send to the post-production house, for example.
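The core idea behind such tools — copy, then verify every destination against a source checksum before declaring the offload safe — can be sketched in a few lines of Python. This is a simplified illustration of checksum-verified offloading in general, not ShotPut Pro’s actual implementation:

```python
import hashlib
import shutil
from pathlib import Path

def _checksum(path: Path, algo: str = "md5", chunk: int = 1 << 20) -> str:
    """Hash a file in chunks so large camera files do not fill RAM."""
    h = hashlib.new(algo)
    with path.open("rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

def offload(source: Path, destinations: list[Path]) -> dict:
    """Copy every file under `source` to each destination, then verify
    each copy's checksum against the original before reporting success."""
    report = {}
    for src in source.rglob("*"):
        if not src.is_file():
            continue
        expected = _checksum(src)
        for dest_root in destinations:
            dest = dest_root / src.relative_to(source)
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dest)  # copy2 preserves timestamps
            if _checksum(dest) != expected:
                raise IOError(f"Checksum mismatch for {dest}")
        report[str(src.relative_to(source))] = expected
    return report
```

The returned dictionary of relative paths and digests is exactly the raw material a report generator needs.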
The ShotPut Pro interface. Image credit: Imagine Products
The latest ShotPut Pro Mac 2019.2 update includes several improvements:
Redesign of the “simple mode” for quick and efficient offloading
Support for ARRI .ARI files offloads with Codex HDE integration, compatible with the ARRI Alexa Mini LF
Support for Blackmagic RAW and Canon RAW formats
More PDF reporting options and the possibility to create Media Hash List (MHL) reports
New RED dropped-frame flag and updated RED SDK for the most current metadata retrieval
Thumbnails for Codex .ARX RAW frames
Real-time status updates via the Imagine HQ iOS app for iPhone and iPad
Image credit: Imagine Products
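For readers unfamiliar with Media Hash List reports: an MHL file is essentially an XML list of files with their sizes and checksums, which a post house can use to re-verify the media it receives. The sketch below writes a simplified MHL-style report; the element names are illustrative and not guaranteed to match the official MHL schema or ShotPut Pro’s output:

```python
import hashlib
import xml.etree.ElementTree as ET
from datetime import datetime, timezone
from pathlib import Path

def write_mhl(files: list[Path], out_path: Path) -> None:
    """Write a simplified MHL-style XML report: one <hash> entry per
    file, carrying its name, size, and MD5 digest. Element names here
    are illustrative, not the official MHL schema."""
    root = ET.Element("hashlist", version="1.1")
    for f in files:
        entry = ET.SubElement(root, "hash")
        ET.SubElement(entry, "file").text = f.name
        ET.SubElement(entry, "size").text = str(f.stat().st_size)
        ET.SubElement(entry, "md5").text = hashlib.md5(f.read_bytes()).hexdigest()
        ET.SubElement(entry, "hashdate").text = datetime.now(timezone.utc).isoformat()
    ET.ElementTree(root).write(str(out_path), encoding="utf-8", xml_declaration=True)
```

Because the digests are recomputed from the delivered files, anyone downstream can re-hash the media and diff against the report.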
Imagine HQ iOS app
If you are tired of waiting at your computer while it copies hundreds of gigabytes of data, the Imagine HQ iOS app will save you time.
To make it work, download the app from the App Store, log into your ShotPut Pro account, and voilà! You can now track all your tasks directly on your smartphone. Once one or more tasks are completed, you will get a notification.
Image credit: Imagine Products
Pricing and Availability
ShotPut Pro costs $149.00 for a perpetual license with 12 months of free updates. Once those 12 months have passed, you can keep using the license as is or purchase an update plan to continue receiving further updates. Also, if you don’t need it extensively but wish to use it for a particular project, you can rent it for 15 or 45 days.
For comparison, I can’t help but mention Hedge – you can read our review here – which offers similar features for €119. I advise you to try both for free before choosing the one you prefer.
What do you think of this new ShotPut Pro update? Have you already tried it on some projects? Do you often use offloading software to make sure your footage is safe? Let us know in the comments!
G-Casper is a new and upgraded version of the call sheet creator Casper. It is now cloud-based and ported to Google Sheets, offering many new features that automate most parts of the process and reduce human error. G-Casper is open source and available for free.
G-Casper is a new online tool for creating call sheets. Source: Prodigium Pictures
For larger productions, a call sheet is more or less the alpha and omega of the whole planning and shooting process. A well-written call sheet makes sure that the shooting day goes according to plan and that every crew member or extra knows where and when they are supposed to be, and what is going to happen. Various tools to ease and automate the process of creating call sheets have been around for years. Back in 2016, we took a closer look at the online platform Studio Binder, which aimed to do exactly that. Another example of call sheet creation software is Casper from ThinkCrew. Its newest version, called G-Casper, is now ported to Google Sheets. What exactly is new?
G-Casper – Free Cloud-Based Tool for Easy Call Sheets
The original Casper – a tool for creating call sheets based on Excel – was created by ThinkCrew in 2013. It has now been upgraded and ported into Google Sheets by Prodigium Pictures, a production company based in Los Angeles, which is how G-Casper was born.
G-Casper comes with 30 new features and 20 new error trackers to increase efficiency when making call sheets. It offers live-synchronized, cloud-based call sheets, cast & crew lists, production reports, Exhibit Gs and more. Best of all, it is open source and free of charge.
G-Casper workflow. Source: Prodigium Pictures
As call sheets are sometimes written (or updated) under time pressure after a long and stressful day on set, human errors can easily happen. A crew member or actor may get called too late or too early, and an unpleasant (and sometimes costly) situation can occur. To lower the chance of human error, G-Casper has a feature called Error Warnings. It checks the call sheet for missing or unusual data and tells you if it finds something suspicious.
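To illustrate the idea behind such error warnings, a validator only needs to scan each row for missing or out-of-range values. The rules below are hypothetical examples, not G-Casper’s actual checks:

```python
from datetime import time

def check_call_sheet(rows: list[dict]) -> list[str]:
    """Scan call-sheet rows for missing or unusual data and return a
    list of human-readable warnings. Each row is expected to carry a
    "name" and a "call" time; these rules are illustrative only."""
    warnings = []
    for row in rows:
        name = row.get("name") or "<unnamed>"
        if not row.get("name"):
            warnings.append("A row is missing a name")
        if row.get("call") is None:
            warnings.append(f"{name}: no call time set")
        elif not time(4, 0) <= row["call"] <= time(22, 0):
            # A 2:30am call is probably a typo made at the end of a long day
            warnings.append(f"{name}: unusual call time {row['call']}")
    return warnings
```

The point is that the sheet itself flags suspicious entries before they go out to the crew, rather than relying on a tired AD to spot them.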
G-Casper automates as much as possible to avoid having you re-write the same columns over and over. It auto-fills scene info based on scene number, which was one of the features in the initial Casper. Sunrise/sunset/dawn times are all added automatically, based on the shooting location and date.
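Auto-filling from the breakdown is conceptually a keyed merge: the scene number looks up the scene’s details so they only ever have to be typed once. A minimal sketch, with illustrative field names:

```python
def autofill_scenes(rows: list[dict], breakdown: dict) -> list[dict]:
    """Merge breakdown details (set, day/night, etc.) into each
    call-sheet row, keyed by scene number. Values already on the row
    win, and unknown scene numbers are left untouched."""
    out = []
    for row in rows:
        info = breakdown.get(row["scene"])
        out.append({**info, **row} if info else dict(row))
    return out
```

Change a scene’s description once in the breakdown and every sheet that references that scene number picks it up, which is exactly the class of repeated typing this automation removes.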
Other examples of automation would be:
Food allergies – based on who’s on the call sheet, the information gets printed on the sheet for the catering team.
Automated drop-downs for actors and crew members, based on the scene breakdown.
Thanks to its Google Sheets integration, G-Casper supports cooperation. A production can check in on the AD department and vice versa. Daily Production Reports are created dynamically, based on the call sheet, leaving space for manual corrections.
G-Casper includes a full shooting schedule to plan ahead with, so locations and parking spaces can be determined in advance. It works for up to 300 people on set, and there is a low-ink mode for printing call sheets directly on set, too. You can download G-Casper and start using it by clicking this link.
Do you write call sheets often? Which tool do you use for them? Do you have experience with Casper or G-Casper already? Let us know in the comments underneath the article.
Sony has introduced the MRW-S3 – the world’s fastest UHS-II SD and microSD card reader and USB hub with USB 3.1 Gen 2 – with support for up to 100W USB Power Delivery (USB PD) and HDMI output. Sony is also expanding the TOUGH SDXC line with SF-M cards in capacities of 64GB, 128GB and 256GB. All of the new products will be available during fall 2019.
New Sony MRW-S3 USB hub with UHS-II reader and new Sony memory cards
Roughly a year ago, Sony introduced their TOUGH series of ultra-durable ultra-fast SD cards. These cards are completely sealed, ribless, switchless and one-piece molded to offer protection against drops, bends, water and dust. They are also very fast, offering speeds of up to 300MB/s for read and up to 299MB/s for write. The available capacities were 32GB, 64GB, and 128GB.
The Sony MRW-S3 is a portable USB hub offering fast and reliable data transfer. Based on Sony’s internal testing (as of June 2019), the MRW-S3 is the world’s fastest UHS-II SD and microSD card reader and USB 3.1 Gen 2 hub. It supports high-speed data transfer of up to 1000MB/s (transfer speed may vary depending on the host device, OS version, and usage conditions).
The MRW-S3 USB hub connects to a host computer via a detachable USB-C to USB-C cable. It has two USB-C connectors, a USB-A connector, microSD and SD UHS-II card readers, and one HDMI output with 4K 30fps capability. Therefore, it is possible to connect external monitors without extra adapters (given that the host PC supports DisplayPort output over its USB Type-C port).
The Sony MRW-S3 USB hub with UHS-II microSD/SD cards reader
In addition, the MRW-S3 can receive up to 100W of USB Power Delivery (USB PD) to ensure stable and reliable connections with USB devices. This works only with USB PD-compatible AC adapters, and Sony recommends using a 35W or higher USB PD AC adapter. With the USB power level indicators, users are notified when each port is ready to connect at maximum power supply levels, eliminating worries of sudden disconnection due to poor power management.
The USB hub has a durable aluminum body with a wave surface to prevent scratches, while the grip makes handling easier. The MRW-S3 weighs 95g and comes with a detachable USB-C to USB-C cable for connecting to the host PC. Thanks to its embedded eMarker, this cable can also be used to connect to USB 3.1 Gen 2 devices or to a USB PD AC adapter at up to 100W.
New Sony TOUGH SF-M and SF-E Cards
The Sony TOUGH cards introduced last year were part of the top-tier SF-G series with a V90 rating. Now, Sony is expanding its TOUGH SD card line with the SF-M series. These new cards provide high-speed data transfer of up to 277MB/s read and up to 150MB/s write. They are, however, only rated V60, which means the minimum guaranteed sustained write speed is 60MB/s (480Mbps). These SDXC cards will be available in 64GB, 128GB, and 256GB capacities.
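The V-class maths is worth spelling out: a Video Speed Class Vn guarantees a minimum sustained write of n MB/s, so a codec fits a card if its bitrate in megabits per second, divided by 8, stays at or below that number. A quick sketch (approximating 1GB as 1000MB):

```python
def fits_v_class(bitrate_mbps: float, v_class: int) -> bool:
    """True if a codec's bitrate (in megabits per second) stays within
    a Video Speed Class guarantee of `v_class` MB/s sustained write."""
    return bitrate_mbps / 8 <= v_class

def record_minutes(card_gb: int, bitrate_mbps: float) -> float:
    """Approximate recording time in minutes (1GB taken as 1000MB)."""
    return card_gb * 1000 * 8 / bitrate_mbps / 60
```

For example, a 400Mb/s codec writes 50MB/s, so it fits a V60 card with headroom, while a 600Mb/s codec (75MB/s) would need V90; a 256GB card at 400Mb/s holds roughly 85 minutes of footage.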
New Sony TOUGH SF-M memory cards
Just like the SF-G TOUGH cards, the SF-M TOUGH cards are the world’s toughest UHS-II SD cards (based on Sony’s internal testing and compared to a bending test standard for consumer SD cards – June 2019). They are 18x stronger than a standard SD card and have the highest-grade waterproof (IPX8) and dustproof (IP6X) ratings.
Additionally, Sony has introduced a new conventional UHS-II SD card, as part of the SF-E series, with fast transfer speeds of up to 270MB/s read and up to 120MB/s write speed for 128GB and 256GB models (up to 70MB/s write speed for the 64GB model). These cards are rated V60 (128GB and 256GB version) and V30 (64GB version).
New Sony SF-E memory cards
Both the SF-M TOUGH SD cards and the SF-E series come with file rescue software for macOS and Windows to help recover data or photos that were accidentally deleted.
The Sony MRW-S3 USB hub, the TOUGH SF-M series and the SF-E series memory cards will be available during fall 2019.
What do you think of Sony’s new USB hub? Do you have problems with slow USB hubs? Do you have any experience with Sony’s TOUGH cards? Let us know in the comments underneath the article.
Mark Sanger, ACE was one of the very first guests on Art of the Cut when he won the Oscar for Best Editing for Gravity in 2013 – a film for which he was also nominated for an ACE Eddie for Best Edited Dramatic Feature Film.
Mark started as an assistant editor and VFX editor back in the late 1990s and worked on films like The Mummy Returns and 102 Dalmatians.
As a VFX editor he worked on Die Another Day, Charlie and the Chocolate Factory, Children of Men, Sweeney Todd and Alice in Wonderland.
In addition to editing Gravity, Sanger has also edited Last Knights, Transformers: The Last Knight, and Mowgli: Legend of the Jungle.
In this interview, we’ll be discussing his latest film, Pokémon Detective Pikachu, directed by Rob Letterman and starring Ryan Reynolds and Justice Smith.
SANGER: As far as I’m concerned, when it comes down to the editing of picture, I apply the same methodology to 2D and 3D as I do to cutting sound and picture — and that is, I start with the most basic of principles, which is: do the visuals go together? When it comes down to 2D and 3D, my idea is always to cut in 2D, because if you know that the cuts are working at that level, then everything else is just a bonus on top of that. If you start working in 3D and you cut in the three-dimensional environment, my argument is that you would have to go back and recut your 2D version, because cutting in 3D is a very, very different universe than cutting in two dimensions. So I always start in 2D, and if that’s working, what you can then do — in the stereographic environment at the end, during post-production — is adjust the curve of the 3D wherever a cut is jarring — say, if you’re cutting from a close-up to a wide-angle, which are invariably the cuts that tend to give you a headache in 3D. So on that cut, for instance, you can make the blend across that cut almost a two-dimensional blend and then gradually open it up across the following cut. But I would argue that you have to do it that way around. If you shot in 2D, then you HAVE to do it that way round, because that’s the only way you’re genuinely ensuring that you have a good edit in the 2D environment and a good edit in the 3D environment. If you shot in 3D, it’s a very different kettle of fish.
HULLFISH: And so this was not shot in 3D?
SANGER: It was not. It was shot on film. The intention was always to shoot it in a way and cut it in a way that harks back to the nostalgia of the film noir environment.
HULLFISH: When you were saying “blend in 3D,” you aren’t talking about a dissolve, right? You were talking about adjusting the 3D-ness of the shot?
SANGER: Yeah. There are some changes in Gravity, for instance, where we would cut from a big close-up of Sandra Bullock’s face to a wide-shot of the void of space, so you’re definitely dealing with very close close-ups jumping to very wide expanses and some of those cuts — in the three dimensional world — when you cut to those wide shots, you’re actually almost cutting to just a two dimensional image.
SANGER: But the reason, I would argue, that the 3D works in any of these movies at all is that those cuts are imperceptible to the viewer. While there will be some big, wide, expansive shots where you are very aware that you’re seeing a three-dimensional image, the editing of the two-dimensional version of the film, for me, is what will always drive the stereographic version of the post-production process.
HULLFISH: I would think over the course of the lifetime of the film, most people are watching in 2D. You’re watching it on your home TV set. You’re watching it on a 2D movie screen. You’re getting it on your phone.
SANGER: Absolutely. And it’s for that very reason that it is always my preference to make it work at the most granular level — depending upon what the film is. I will invariably cut a scene mute, just to get the rhythm and the pacing of the visuals correct. And then the dialog part is something that then informs that mute cut. So I won’t stick with the mute cut necessarily, but the dialogue part will then inform that and it will grow from that point.
But you have to start with a seed and let it grow, and that seed is the raw visual cut. Now, in the case of Detective Pikachu, I did stray from that slightly in that — when you are cutting, for instance, a dialogue between one actor who has been prerecorded and another actor who is reacting to playback of that recording on set — what I would invariably do is create a radio play of the scene. In the case of Detective Pikachu, you’ve got a buddy-cop format for the movie, which means that the dialogue is what is driving the rhythm and pacing of the scene. There’s some great dialogue, particularly in the bar scene where Tim is talking to Pikachu and Pikachu is jumping up and running back and forth in the bar. You’ve got this banter going back and forth. That’s something that we have to CREATE in post-production. And so in the case of that film I will do a pass which is: edit the dialogue together; find the best takes in terms of performance — choose the performances that we think best drive the scene in the way that we want it to go from the practical shoot side; find any pre-recorded material of, for instance, Ryan Reynolds, who was often recorded separately — and then create a radio play, without the visuals, that runs the way it should. At that point it’s almost a reverse of what I would do on a more conventional shoot: that radio play has driven it, because that’s the only way you can cut a scene when you’re dealing with an actor and a bunch of empty plates that will ultimately be filled with some animation. Drive the rhythm of the scene with the radio play of the dialogue and then let the cutting of the visuals inform that rhythm. Then you don’t stick to it, because then you realize the rhythm is working but Pikachu literally doesn’t have enough time to get from A to B. So at that point you’re thinking, “Well, note to the director and note to the visual effects guys. 
Can we discuss what Pikachu could be doing at this moment that would allow the necessary blocking of the scene, but also won’t betray the rhythm of the dialogue?”
HULLFISH: Did you get Ryan Reynolds recording performance — audio performance — before they shot the movie?
SANGER: Yeah. Rob Letterman was very, very wise, having had lots of experience in this realm before. We did two sessions of recordings with Catherine, Justice, and Ryan in a room together, where they were basically doing a read-through, but we were also going to be extracting some of Ryan’s performances and using those in the film. So that was a very useful tool for everybody. That was pre-shoot, and it was very informative for everybody in order to gauge what the movie is, and also for Rob to go away and think, “Well, actually there are a few gags that Ryan clearly improvised there that we absolutely need to incorporate visually into the blocking of the scene.” So those two pre-recorded sessions were very, very helpful. And Rob was very smart — I think it may actually have been Ryan’s suggestion — for Ryan to be available for the first few days of filming so that Justice got an opportunity, in the real environment they were shooting in, to get a gauge of the story that we’re making. So for a few days, Ryan was there doing off-camera lines, and that really got the rhythm solidified. Rob was very specific about which scenes we would shoot upfront in order to really maximize the back and forth we needed to ensure carried on through the rest of the shoot. And then, of course, as we were going through the shoot, there would be the recordings of those original rehearsals. I created a radio play of all of Rob’s selected performances of Ryan, and they were used as playback on set for Justice to react to.
HULLFISH: Was Justice listening to an earpiece?
HULLFISH: I just cut a film with a person who spends a lot of time on the phone and all the performances on the phone were done by a non-actor off-camera and the timing of the performances were so different, and the delivery was so different for the on-camera actor to play against. That’s really smart. I love the fact that they did the pre-recordings with multiple actors. It wasn’t just a voice-over session with Ryan.
HULLFISH: That’s huge.
SANGER: It is huge. And again, it comes from Rob’s experience doing other things like Monsters vs. Aliens. He was able to have the foresight to see some of the pitfalls of where we could have ended up had we not done that. There were a lot of movies that maybe didn’t make that decision, and I would argue there’s a disconnect — a subliminal disconnect for the audience, where they won’t necessarily understand what it is that they’re feeling, but they are witnessing a false event. They can tell that this dialogue was actually something going on between two different people at two different times. So I would argue that anything where you can counteract that subliminal feeling of a lack of reality that’s going to make your overall storytelling more cohesive. So yes, I think that worked for everybody because what we couldn’t end up having is a rhythm to the scene that was one thing when you shoot it with the three actors doing a read through and then something else entirely — something that didn’t have that life and energy — when we came to shoot it practically.
That said, there was always somebody on set who was available to read back Ryan’s lines. But — I think this is crucial: they had listened to Ryan’s selected takes and learned the rhythm and the pacing of those lines so that occasionally, if it wasn’t practical for playback to be going on, that actor on set could be at least giving Justice the rhythm and pacing that would honor the performance that Ryan would ultimately deliver.
HULLFISH: Because he had a very distinctive, kind of a manic character. So you can’t have a laconic performance against it.
SANGER: No. One of the greatest joys I’ve had in the industry was sitting on an ADR stage with Ryan Reynolds as he improvised lines and brought so much life to the movie. There’s obviously a more explicit version of the jokes — invariably those are the ones that you really want to use because they’re the ones that I found funniest — but he would sit there and give Rob Letterman, the director, a whole broad spectrum of different performances. That puts you in a position where you’ve got a wealth of different options that you can follow, and auditioning with Rob Letterman which ones we were going to use… those days were some of the most fun on the show.
HULLFISH: I believe it. Talk to me a little bit about editing improv-ed lines. It’s not the same as pure scripted. You’re not looking at a script asking, “Which performance of this exact line do I like better?” You’re saying, “Which LINE do I like better?”
SANGER: Well, there’s that, and then — in the visual effects room — it goes a step deeper as well. It’s one thing when you’re editing improvised lines between a group of people who are all sharing the same space as those lines are being improvised. That in itself has its pros and cons. It’s another thing entirely when the lines are being improvised in an ADR session that is recorded after the practical shoot has been done. In that scenario, if there’s a line you really want to use but there’s no reaction, you’re either in the position where you go off and seek a reaction and just can’t find one that justifies using the line — it would feel so manufactured in the edit that the line would fall flat — OR, in the beautiful scenario, you scroll through and find a reaction from someone like Justice Smith, who gives such a wonderful palette of different reactions on every single take, that often you could go in and find a moment with Justice and say, “Oh my God, look at that!” It wasn’t meant for that moment, but it worked so perfectly, it was divine intervention.
HULLFISH: And do you have to do some kind of organization to be able to find those reactions? Do you create selects reels of reactions because you know you’re going to be in that situation?
SANGER: My working process is always based upon performance anyway, and there is a process that I go through on any film — whatever it is — that is exactly the same. This is a fairly common process, so it’s not like I have a trademark on it, but it’s what works for me on every single movie: I have my assistants go through and mark up every single line that is said in every single take — sometimes those lines are identical in every take, and sometimes they might be slightly different. That way, when constructing the scene, I can see every single delivery of a line and work out immediately that it’s going to be between take four and take six; you audition both of those and you make a decision. Everything is all about decision after decision after decision. And that’s how I construct a scene.
Now, when we get into the world of improvisation, what you’ll find is that you may have already assembled a scene one way but then need to go back and do a deep dive on some of the ADR — all the lines that were recorded subsequent to the practical shoot. In those scenarios, it’s about creating fresh dailies performance sequences which *I* will then put together (as opposed to having an assistant put them together). I’ll go through and say, “I need that moment, that moment, and that moment from each take.” I’ll say to Rob Letterman, “Please just give me five minutes. I’m going to go and find all those moments.” He’ll have an espresso, he’ll come back, and then I’ll present each one of those moments to him and we will make a decision. So the process of discovering the performances is always ongoing, and you may find yourself going back to those select sequences six weeks later looking for a slightly different moment — but without that basis, the way that my dailies are formatted for me by my amazing assistants, that process would not be possible.
HULLFISH: I work very similarly. The only difference for me is that I don’t tend to go by line because I feel like it breaks it up too much. So I usually break a scene up into like six or eight beats and then I just do the beats instead of the lines.
SANGER: I do exactly the same thing. The initial process is a two-stage process: I get the dailies broken down in that format by the assistants, and then I create a separate version of that sequence where I go in at a little more granular level and break the scene up into beats, as you say. And to me, the moment when you get six cameras on a conversation — regardless of how big or small the movie is, or how big or small the scene is — that’s always daunting, because you don’t know your “in” to a scene and you don’t know your “out.” You can only really be informed by finding how you left the previous scene and what performance, camera angle, and camera move is going to work best editorially with that. That to me is the moment where I start to see the shape of the dailies, and if you don’t have that scene broken up into beats, then I certainly wouldn’t be able to focus on how one line is interconnected with the next. So going into a scene initially — exactly as you said — if you don’t break it up line by line and then beat by beat as a consequence of that, then I’d very quickly get lost within the scene and just have to start again. That may be just me, but there’s something very comforting about it: if you’ve got four pages of dialogue and your assistant has broken it down into lines and then you break it down into beats, all of a sudden everything becomes very clear and the route that you’re going to take to cut the scene becomes a lot more satisfying.
HULLFISH: Since I use that same technique I want to play “devil’s advocate” on two points of what I think are the problems of that technique. The first one: we started this conversation about breaking the scenes down because of reactions. You specifically said you were using reactions that weren’t necessarily supposed to be for that line. So that means, if you’ve broken the beats down beat-for-beat, you’re now looking for reactions that are outside of those beats.
SANGER: Yes. It’s an extremely valid point, but that is one of those moments where, if you do need to find a fresh reaction that had not been broken down into those beats, I would say to the director, “OK, I need to do a deep dive on the whole scene,” because it could be that the reaction we’re looking for isn’t necessarily exactly what we have in our heads but could even be pre-“action” or post-“cut”.
SANGER: And so that requires a deep dive on all of the dailies and, yes, essentially you have to sort again — re-break the scene down but that’s where you need the patience of someone like Rob Letterman to be able to say, “OK. I get it. I’ll give you 30 minutes.” Then you can go away and really scrutinize. It’s so beneficial for the director too because then the director isn’t sitting with the editor desperately trying to find something amongst the four hours of dailies. Instead, they’re being presented with 90 seconds of options from which you can then whittle down a bunch of selects.
HULLFISH: That’s the value I find in breaking the selects down into those beats. If you’ve got a scene with 40 minutes of dailies, I can’t wrap my head around 40 minutes of dailies; but if it’s broken down by — say — the blocking of the scene, so I’m only looking at where they come into the room and go to the table, for example, now I only have to watch three minutes of dailies.
The other danger I find with doing that is that, because you’ve pre-edited the scene into these little beats, you don’t tend to let the edits play longer, because you’re no longer watching the totality of a single take.
SANGER: Yes. Invariably. But I think the scene has to grow organically, and so the most difficult part for me is the beginning of the scene, because at the beginning you can then see, “Now I can see where this is going to grow.” You don’t necessarily then need to stick to the selects that you’ve chosen, but most importantly, in that secondary process — after the assistants have put it together, when I’m going through and breaking it down into my own sequence, part of which is the selection of the beats — that’s how I learn the dailies. At that point, once the scene begins to grow organically, you might be looking for a moment that wasn’t necessarily in your original selects but that you’re very aware of, because you’ve broken the whole sequence down and you know your dailies. Then it’s just a case of grabbing what you know. Then you need to decide whether the shape of what you were originally selecting is actually working. Maybe what you actually need to do is go off and find something different. But I only ever make selects — as it were — in that process when the director and I need to go and find something slightly different for a unique moment. Other than that, I will always have all of the takes — whether or not the director selected them — marked up by the assistant in the way that I outlined, because they’re all potential selects.
HULLFISH: One of the great things I find with those kinds of broken down selects reels is that they’re great for collaborating with the director.
HULLFISH: It’s the classic “is that the best take?” So you can quickly run through just the takes of that line. “Here you go. Here are three minutes. I can show you everything you got.”
SANGER: Exactly. What those sequences do is give your director the confidence to know that they’re seeing everything — that nothing’s been missed along the way. Because that would be a tragedy. The worst possible outcome would be that you design a scene one way, and then six weeks later you turn the scene over to visual effects, and then actually there WAS the take that the director was looking for and it was never presented to them. That would be the cardinal sin. So I think for a director to be able to see each and every performance that they committed to film on the day of the shoot gives them the confidence to be able to say, “Yes! Thank God! We’ve got it.” Or, “Well, we didn’t get it quite as planned, but at least I can see everything here in order to make the decisions about how we move forward.”
HULLFISH: One of the other things you mentioned was how important it is to find your way into the scene and to know how you’re going to get out of it. When you’re cutting dailies you don’t always have that opportunity. Very few movies are shot linearly, and it’s rare that you get the chance to edit a scene only after everything around it has been shot.
SANGER: Yes. There are three scenarios, basically. The first is that unique situation where the movie’s already been assembled and you’re just going in to refine it in some way, which means you get to reassemble the movie in chronological order. That’s rare for me, just because of the very nature of the films that I seem to be offered, which are VFX-heavy movies where the visual effects schedule drives post-production, and therefore you’re forced to turn over scenes early for the sake of the visual effects work without necessarily knowing what the scene is coming from and going to. That’s scenario one. Scenario two — the one I tend to be working in — is where you are not presented with the benefit of knowing what’s preceding and what’s following.
HULLFISH: You’re just cutting dailies as they’re shot.
SANGER: In scenario two, all you can do is put together the best possible version of the scene that you can hope for, and hopefully the ins and the outs of the scene are going to bind with the scenes around it. The most beneficial is scenario three, where there is an ongoing conversation between you and the director about the planned ins and outs of a scene based on the script. The director is in a position to actually have that conversation with you before the scenes are shot. That’s the ideal, but I’ve often gone back to a director and said, “Look, we shot scene 4. You like the assembly of scene 4. I know you had this idea about the in on scene 5, but how about this?” And the director may tell you, “Absolutely not. No. I want to stick with exactly what I planned, and therefore please make sure that scene 4 ends to accommodate that.” Or occasionally the director will say, “You’re right. I didn’t anticipate actually shooting the end of scene 4 that way, but because of that I need to rethink the start of scene 5.”
It’s part of the evolutionary process of making a film. With all the best intentions in the world, once you’re actually shooting, if something better than planned comes from a moment of epiphany on set, then clearly you need to work with that. And part of the editor’s role with the director is to remind them, “Hey, by the way, I know your head’s in that scene in a minute, but just think about this when you go into scene five.” Directors are completely overburdened with people telling them “no” all day long because of the nature of the logistics of filming movies. It’s very, very difficult to come away on any day thinking, “That was a brilliant day! I got everything that I wanted!” So it’s important for the editor to always be on the phone at the end of the day and say, “Hey, hope you had a great day. Here’s something to consider for next week.”
HULLFISH: How much conversation do you actually have during shooting? Do you keep your eye on the schedule so you know: “Rob’s going to be shooting this tomorrow and I can inform that?” How carefully are you looking at the production schedule?
SANGER: In the case of Detective Pikachu, very closely, because I would need to make sure that the radio plays for each scene of Ryan Reynolds — if it was one of the days when he wasn’t on set — were supplied to set so that everybody had enough time for the technical process of making those radio plays available to the actors. So it may be that I’m cutting a scene that is required urgently by visual effects, but I also need to be keeping my eye on the ball with the schedule because, literally, they won’t be able to turn over on the day’s shoot if they haven’t got the Ryan Reynolds performance to work to the following day. But also — on any movie where you have a very tight shooting schedule and a very tight visual effects schedule — you always have to be keeping an eye on the ball for the multitude of different events that are going to hit you all in the face once you’re in post-production. You need to be thinking about sticking to the visual effects schedule so that there are no penalties incurred by production. You know you’re going to be previewing the movie at some stage. You have to preview a version of the director’s cut for the studio. What is it that you need to be looking at in terms of the overall schedule that will help those screenings? Because you’re going to be screening a version of the movie that has very, very few animated characters in it. It’s a RAW version of the film, and that’s going to be difficult for anybody to watch. So you’re always thinking about what’s going to be happening not only the next day but three months from now, six months from now, because if you drop the ball during the very early days of the shoot, you lose momentum. For want of a better metaphor — the wheels come off the car pretty fast.
On Detective Pikachu, I had a little bit of a battle with some of the execs at Legendary because I normally bring in a music editor very, very early in the show — during the shoot. The reason for that is that when you’re editing scenes so quickly for the visual effects schedule, you don’t often have the time to lay in music and sound the way you might on a non-visual-effects movie. My inclination is never to do music or sound editing myself; I always want to be able to offer that up to other people. But it’s a natural part of the process in the 21st century that it’s expected. And so the problem — as you say — is that if you don’t get the music edit right, the tone of the movie you ultimately end up presenting is dramatically affected by it. And if you’re trying to sell this movie to people and the tone of the movie isn’t right musically, then it is very, very difficult to salvage that at the last minute and try to make it work, because by that time everybody’s snow-blind from hearing the track that they’ve been listening to, and often there is an unfortunate side effect that people start questioning whether or not the picture edit is correct. So bringing on a music editor very early in the process is, for me, one of those crucial decisions that you need to make sooner rather than later, because you can have a great version of the movie that works for you and the director with no sound effects and no music. Sound effects are always going to help you, but layering in the wrong temp track for a presentation to the studio or for a preview screening can drastically alter the way the film is received. So my argument is always: if you have — for instance — a 10-week schedule for a music editor budgeted, use two or even three of those weeks during the shoot, because then at least you get the clarity of everybody agreeing what the temp track needs to be moving forward.
HULLFISH: You were talking about screenings. Tell me a little bit of what you did with animation or previs to be able to show at a screening since Pikachu wasn’t in any of the plates?
SANGER: I didn’t think it was anything too different from the way people work nowadays. It was a combination of things, including a post-vis team who were working for me. During the shoot, editorial was actually based within pre-vis and post-vis. I came on three months before we started shooting to put together some of the key sequences we needed, because they needed to be locked down very closely for the shoot. I always like to be part of that process because it means that the director and the editor have had an opportunity to work out the mechanics of a scene before it’s dictated by….
HULLFISH: By what was shot.
SANGER: By the shot! And so on Detective Pikachu I came on board and worked with the pre-vis team to design some of the sequences — some of the big sequences — with and for Rob, because he’s got so much on his plate. He could go away and leave me to work with them, and then I’d present something at the end of the day so he could give notes. That was always a benefit to him. That pre-vis team — once we got into the shooting process — became the post-vis team, and we would do a pre-official turnover just for post-vis, where we would start getting some of the blocking sketched out during the shooting process. Framestore, in particular, had a great process called Sketchviz. It’s not great for presentation, but it’s extremely useful for solidifying early conversations about animation between the director, the editor, and the visual effects company. Basically, it was like watching an early Walt Disney movie where they literally just sketch-animate on the sequence. You wouldn’t necessarily want to present it to an audience, but what it meant was that we weren’t having a disconnected conversation between a pre-vis company and a director and an editor that was then followed up by the visual effects company and the director and the editor. It was almost a live set of animation notes that we could update really fast to accommodate the way the sequence was coming together. So it was a combination of different tools that we used, and as with any process, you start off using one palette and by the end, you learn to use that palette and adapt it.
HULLFISH: Did you use The Third Floor for pre-vis?
SANGER: We used The Third Floor. They were on for some of the post-vis as well, then Framestore were using the Sketchviz process.
HULLFISH: Do you think Sketchviz was an in-house software for Framestore?
SANGER: I’d never used it before, so I don’t know if anybody else out there is using it, but Johnathan Faulkner, the supervisor at Framestore, presented it to us and we thought it was just amazing. We don’t need to wait on tracked plates for post-vis. We don’t need to wait on the process. We can literally supply them a scene and they can come back to us overnight with a full set of animation proposals for the entire scene. It was invaluable.
HULLFISH: The reason I asked about The Third Floor is that I’ve interviewed them and they have their own editors, but pre-vis editors cut things very differently than you might cut them.
SANGER: That goes exactly to my point about why I find it useful to come on early in the process, because pre-vis editors are some of the greatest unsung heroes in the industry. They’ll be handed one sequence and told to put it together without necessarily having the context of the editorial style of how the rest of the movie is going to be put together. And that’s not their fault, but it does present you with a problem when the scenes around a pre-vised sequence are cut one way, and then the sequence that was pre-vised nine months earlier is shot exactly the way it was assembled by the pre-vis editor, and you have a conflict of styles and tone. That’s something we tried to combat on Detective Pikachu by having me assemble pre-vis from the very, very beginning.
HULLFISH: I’ve talked to those pre-vis editors and one of the things that they mention is they’re working with pretty crappy visuals compared to the final film, so if you see a wide shot like that wide-shot in Detective Pikachu that sets up this massive universe of the characters, it’s so visually stunning that you just want to hold on it for a while to soak it in. But in the pre-vis, it might look pretty bland and boring and you’d maybe want to get off of it quicker than you would if you had the full visual complexity of the shot. They also don’t get the actual characters so when you have to sit down and look at Iron Man’s face as he cries about a lost friend or something, those people are just seeing a crappy Iron Man mask and they may not hold for the full emotional context. So they’re definitely hampered.
SANGER: Hampered is absolutely the right word — or hindered, certainly, by a lack of respect from the people who might initially be putting the movie together, but also by a lack of context. Everything is context when you’re putting a scene together, and if you haven’t read the script of the movie, how on earth are you going to put together a scene that honors everything that precedes and follows it? You have pre-vis editors who desperately want to honor the film they’re cutting a sequence for. I’ve been in a situation on one occasion where a pre-vis supervisor was charged with putting a sequence together based on a series of notes the director and I had put together, and came back with something that had nothing to do with what we had pitched them, purely because they thought the scene would be better if it opened one way and ended another. The irony being that the character they opened the scene with had died earlier in the film. They had this amazing CGI asset that they wanted to use and present. In that situation, it is very, very frustrating that there is still a disconnect in preproduction between the pre-vis houses and production, because time can be wasted when somebody is desperate to do the best job they can but isn’t presented with all of the tools to do it — or just decides to go off on a tangent.
HULLFISH: Movies with a lot of pre-vis tend to be movies that are kind of secretive. So they don’t give the entire script to the pre-vis editor — it’s not even that they’re ignoring the previous scene and the next scene. They don’t even have them.
SANGER: It is a real problem, because so much of the budget and the shooting process of the movie is now based upon work done early on by this group of extremely talented people — as I say, unsung heroes — and yet they aren’t always given all of the information they need in order to present the best possible sequence. That is why I think it is important on these big visual effects movies for editors to come on long before the shoot, because it enables a continuity of tone, structure, rhythm, and pace that helps the pre-vis company do the best job they possibly can, and the production ends up shooting something that has been tonally, creatively, and aesthetically agreed with the director.
HULLFISH: You mentioned earlier that you tend to get a lot of these visual effects movies. Do you think the trajectory of your career has been because of a background as a VFX editor early on or VFX work you did?
SANGER: Most definitely. It’s a very interesting career path in that — as I suspect with anybody’s career — you could never really predict the way that it went. In the case of visual effects, the reason for that is that visual effects don’t exactly float my boat. My favorite films don’t have visual effects. And yet at the same time, I feel very privileged.
I started taking visual effects editing work frankly because it paid more money than the first assistant job. The reason it did — in the early days at least — was that you were paid out of the visual effects budget rather than the editorial budget, and the visual effects budget is clearly gargantuan in comparison. That wasn’t me being money-grabbing in any way. That was me as a new parent with a mortgage needing to get a little bit of extra cash if I could. The downside of visual effects editing is that it’s a lot of admin — a lot of technical admin — which frankly never interested me. And if you ask many of the visual effects producers who were forced to work with me in the past, they would all agree. (all laugh) For all of the people that had to put up with my visual effects admin, it’s amazing that I was offered more than one job, to be honest with you.
However, the reason I survived was that another aspect of visual effects editing was putting together Avid comps — something quick and dirty that the director and the editor could look at and say, “Well, I can see this will work,” or, “Well, I can see now that we need to extend that shot by 16 frames.” Because that ingratiates you with the editor and director, I started to get a few visual effects editing jobs and, from that, worked with some of the greatest directors in the world. So I can’t knock it, but the consequence is that, on the whole, the large majority of the movies I end up being offered are visual effects-heavy films. I would like to think, at least, that with the exception of a couple, I normally err on the side of movies that are driven by story and character, where visual effects are one of the tools you use as a storyteller. My favorite films are conversations between people in a cafe, or the end of The Good, the Bad and the Ugly: three men looking at each other in a graveyard. Those are my favorite films. It’s just by the nature of the career path that I’ve had that I end up cutting different ones.
I met with Rob Letterman over Skype about the job. I hadn’t read the script at the time, but I was taken by his pitch about how this was a father-and-son story, and it just happened that there were some characters who would need to be generated using visual effects. But ultimately it was the heart and the charm of what he wanted to bring to the movie that made me want to do it. And I did have to say to him, “I’ll be honest with you. I don’t know what Pokémon are.” But he was great, because he said, “Well, I know exactly what they are, and the benefit of the two of us putting a film together is that you will always have an eye on whether or not this film plays to people who aren’t aware of what the Pokémon universe is.”
I have to say I’m very pleased that some of the reviews talked about how this wasn’t a movie just for Pokémon fans, but a movie that everybody could enjoy. That was a deliberate move on Rob’s part. And for me, the fact that we created a story about Pokémon that reached out to a much broader audience — and seems to have succeeded in that in terms of storytelling — I’m very proud of that.
HULLFISH: I completely enjoyed the movie, and, like you, I had no idea what a Pokémon was. And I loved the film. What I’d love to have you talk about a little more is this: when you did that interview on Skype with Rob, you could have prepped, read up on Pokémon, and claimed to be a fan, but instead you were just yourself — and that’s actually what got you the job.
SANGER: I would hope so. As an editor, you spend so much time with the director locked up in a room with them.
HULLFISH: And you can’t fake it.
SANGER: And you can’t — I can’t fake it, let’s put it that way. So for me, it would be doing both myself and the director a disservice to try to lie my way through an interview. That’s not to say that I don’t need the work, because I’m a jobbing editor like anybody else. We all have bills to pay. So when a job comes up, it’s not like I decide which job I’m going to do. It’s the same for any of my colleagues at my level. We all need to work. But at the same time, I think you have to respect the director enough to be able to say, “Look, honestly, I don’t know what Pokémon are. But from everything that you’re saying, and the fact that I think you and I get on, and the fact that I respect the experience you have as a director and your intentions — if I’m honest with you, you can be honest with me, and we should have a much more collaborative experience together for it.”
HULLFISH: Did you feel at a disadvantage in the interview because you hadn’t seen the script? Is that something you like to do before you have that kind of conversation?
SANGER: It depends on the movie. In the case of Detective Pikachu, the reason I was initially interested was that I’d heard great things about Rob Letterman and I’d seen some of his movies, and he was somebody I wanted to work with. There are some projects where you don’t necessarily know the director or the production company, and in those situations, clearly, you’ll get a script, you’ll like the script, and it’s off the back of that that you hope to have a conversation. But as for what entices me about a story and what ultimately may get me the job — there’s never really any set rule.
HULLFISH: There are a lot of flashbacks in this movie. What’s the key to using a flashback or getting in and out the transitions between flashbacks?
SANGER: That’s a great question, because in Detective Pikachu the whole structure of the movie is built around flashbacks. It opens with flashbacks that are Tim’s own memories, and it ends with imposed flashbacks, where one of the main characters is projecting his own memories upon Tim and Detective Pikachu. It’s interesting when you get into flashbacks — it’s all about the perspective of the character from the point of view of the story that you’re telling. The specific aesthetic transitions are always a conversation. For instance, in the case of Tim, we decided to put a slight sepia effect on — which some may say is a little stereotypical — but at the same time, I think it served the purposes of what we’re talking about. Then you’re also dealing with the storytelling aspects of it. For instance, there is a flashback in Detective Pikachu that is interrupted, and information that is being given to the audience during that flashback is cut off at a crucial moment. The decisions you make over how much story to convey to the audience during that flashback affect everything that precedes the scene and everything that follows it. So flashbacks are, for me, at once the most interesting part of any film that I’ve worked on, because there are so many decisions that need to be made within them and around them, and they can affect the entire structure of the movie. And for those exact same reasons, they’re the most difficult part of anything I’ve been involved in. Hopefully they appear simple, part of the overall DNA of the storytelling, but the real challenge is WHERE to place them, how much information to give, and — crucially — how much information to give from the perspective of that character: enough for the audience to follow, but not so much that you betray the fact that you’re only seeing that flashback from that character’s perspective.
All of those decisions are at the very heart of what film editing is all about. Rob and I were always talking very respectfully about Rashomon because clearly there are nods of respect to that format. Flashbacks are probably one of the most underrated tools the director and editor can use because they can be used in a very simple fashion or they can be used in a very complex fashion. I hope that we used them in a way that the audience can understand. And yet it was the point in the movie that we spent most time debating.
HULLFISH: For the people who haven’t seen the movie: there is one moment that is revealed over and over again, and what is revealed always depends on whose perspective it’s revealed from. So what I wanted to ask was: how different was your edit from the script in regards to the flashbacks?
SANGER: Quite different.
HULLFISH: Very interesting. Why?
SANGER: Because there is that usual evolution that happens on a movie of this size where, as you’re going along, we may change some dialogue, and then we may have another conversation because we weren’t due to shoot some of the flashbacks until later in the schedule. So it becomes: “Wait a second! Should we actually be showing Pikachu in that shot?” — because that affects the perspective of this character, and there’s a chain reaction that you have to consider, and it’s constantly evolving. Let’s put it this way: the story didn’t change, but the point of view of who was seeing what was definitely part of the ongoing debate, because we had to audition it several times in the context of the whole movie. You can come up with a great idea: “What if we actually played that moment from this character’s point of view?” It might radically clarify that story moment, but then you really do have to sit back and watch that one change — that one flashback — in the context of the whole movie, because it does have a ripple effect. So it’s a very time-consuming process. The content and the nature of all of the flashbacks in the movie were different from what was originally scripted, but always in a very positive way. With flashbacks, it’s the surrounding material that you’re putting together that informs the flashbacks, and vice versa.
HULLFISH: Putting a sepia tone might be a little clichéd but if you don’t have some kind of method either that there’s a transition effect or sepia or black and white or blurry edges, sometimes you lose the audience. The audience has to immediately know they’re in a flashback — unless it’s some kind of device where you’re not supposed to know — but if the audience doesn’t know you’re in a flashback then all this information goes past them and then they think, “Wait a minute! We’re in a flashback! What did I just listen to?” And then they’re lost.
SANGER: And by the way, there clearly were points when I was experimenting with something like that, and I would run it for Rob and he would say, “I’m sorry. I’ve lost the context.” He would be telling me exactly what I suspected, which was that it wasn’t working that way — just adding a visual aesthetic like a sepia tone isn’t always enough. But we believed that, for the audience we were trying to make this film for, simple tools are often the best. They work for a reason. Again, it’s about perspective and it’s about context. If you can give the audience enough context to understand not only that they’re seeing a flashback, but that on this occasion they’re seeing a flashback of the same event from a different character’s point of view — as long as you have given them enough information, visually and audibly, to make that mental transition and understand where they’re at — then you don’t lose them. The moment they’re confused about what the context and the perspective of that moment are, you lose them, and you’re in danger of losing them from that point on.
HULLFISH: Mark, thanks so much for chatting with me.
SANGER: An absolute pleasure — talking to fellow editors, any time. It’s frankly one of the few joys that we get. Certainly, for me, there’s a lot of pain in the process of what I do as a job.
HULLFISH: Well, I really appreciate your generosity of sharing so much with us.
SANGER: Thanks, man. All the very best. Hope to see you soon.
The first 50 interviews in the series provided the material for the book, “Art of the Cut: Conversations with Film and TV Editors.” This unique book breaks down interviews with many of the world’s best editors and organizes them into a virtual roundtable discussion centering on the topics editors care about. It is a powerful tool for experienced and aspiring editors alike. Cinemontage and CinemaEditor magazine both gave it rave reviews. No other book provides the breadth of opinion and experience. Combined, the editors featured in the book have edited for over 1,000 years on many of the most iconic, critically acclaimed, and biggest box office hits in the history of cinema.
I love soft light. It makes everything look better, and when shooting tabletop or interviews, having a soft overhead adds a nice touch and looks great. The Intellytech LiteCloth LC-160 is a versatile light that is bright, broad, and soft when used with the included softbox. It gets a lot of use around the station … Continued
We often see stories of photographers or social media influencers falling into danger in order to get some shot, and this is just the latest in that trend. Several Instagrammers became seriously ill after jumping into a highly toxic lake in Spain.
Kevin Corrigan will always have a special spot in the Back To One pantheon, not just because he was the very first guest, but because he set the stage for the discussions on the craft of acting that were to come—personal, steeped in the work, confessional at times, often inspirational, always educational. In this hour, he shares some more inspiring personal experiences from a life in acting, and also talks about the work of those who’ve inspired him, from his friend Natasha Lyonne and his current co-star Pete Davidson, to Marlon Brando, Glenda Jackson, Taylor Negron, the actor Bob Dylan, […]
The Federal Aviation Administration (FAA) announced today that it has expanded the Low Altitude Authorization and Notification Capability (LAANC) system to include recreational drone pilots. LAANC is a collaboration between the drone industry and the FAA that allows for near real-time approval of flights in controlled airspace at altitudes below 400 feet. This latest development gives recreational flyers access to the roughly 2,000 square miles of airspace surrounding the 600 airports participating in LAANC.
The world’s leading drone manufacturer, DJI, welcomed the good news for recreational pilots, who until today were restricted by contradictory rulemaking, and the change was fully endorsed by San Francisco-based UAS service provider Kittyhawk. Kittyhawk recently released LAANC 2.0, featuring ‘the latest FAA data sources, rules and requirements packaged into one seamless workflow.’ Enterprise and app users can also apply for LAANC authorizations up to 90 days in advance.
Kittyhawk will be offering its LAANC authorization services to hobbyists for free in hopes of inspiring them to consider a professional career path in the drone industry. ‘The American drone industry needs a strong supply of drone innovators, entrepreneurs and hands-on pilots to continue its rapid growth,’ said the company’s founder, Josh Ziering. ‘Drones are helping businesses, nonprofits, governments and researchers do their work better, faster, safer and cheaper, and accelerating those benefits requires a steady pipeline of talented drone enthusiasts who turn their recreational curiosity into a profession.’
‘Giving recreational drone pilots a free and easy way to access the nation’s controlled airspace is a way to help ensure America achieves all the benefits that drones can offer,’ he continued. LAANC, which replaces the old requirement of notifying nearby airports individually, provides FAA Air Traffic Control facilities with critical information about all approved flights, enhancing situational awareness and improving overall safety.
‘Drones have earned an admirable safety record around the world, and the FAA has recognized that they may be operated safely in certain areas near airports by both professional and recreational operators,’ adds Brendan Schulman, DJI’s Vice President of Policy & Legal Affairs. Kittyhawk’s app is free to download and available for both iOS and Android. For the latest updates on LAANC capabilities, bookmark the FAA’s official resource.