David Lynch Masterclass: Learn Creativity and Filmmaking from the Legend

Called a renaissance man of modern filmmaking, award-winning director David Lynch has been experimenting with film since the late 1960s. As a painter, musician, actor, and photographer, David approaches cinema with the eye of an artist, earning praise from mainstream audiences and critics alike. His…
This tutorial, and the several after it, came from a very simple place: someone posted on the Avid Editors of Facebook page about not understanding the proper workflow for dealing with dailies handed off from another editing application. In this lesson and the next few that follow it, I want to discuss alternate dailies workflows when doing offlines in Avid Media Composer.
I’ll be honest with you right out of the gate: in my experience, transcoding in Media Composer is slow. Brutally slow. That’s why, whether I’m creating dailies or simply taking 4K footage (that I know will never be shown in 4K) and downconverting it to HD, I’ll always use a third-party application rather than doing it in app. Again, for the simple reason that Media Composer is brutally slow when it comes to transcoding larger-than-HD media. This does, however, raise a new issue: if you’re not offlining and onlining inside the application, what is the best way to acquire, offline, relink, and master your offline Media Composer timelines? Well, that’s where this arc of tutorials will take us. Starting with dailies from scratch, and moving on to Resolve and Media Encoder, we’ll look at the process of acquiring your dailies from external applications, the best way to get them into Media Composer, and then the best way to link back to your high-resolution media when you’re done, so you can finish the edit inside Media Composer. Enjoy!
Edelkrone has released a new Stop Motion Module for the HeadPLUS system that provides Dragonframe integration. You can still create stop motion films using the edelkrone app alone, but pairing the system with Dragonframe is where you can really unlock the potential of the motion control system, using the SliderPlus, Slide Module & HeadPlus Pro.
The first release candidate for Darktable 3.0—the popular free, open source Lightroom alternative—was announced earlier today, and it comes with some major improvements over 2.6, including UI improvements, a major rewrite of the Lighttable module, bug fixes, and more.
The release of Darktable 3.0.0rc0 comes (perhaps on purpose?) just as Adobe revealed its latest build of Lightroom at Adobe MAX, and it adds a bunch of features and enhancements that should make Darktable easier to use, navigate, and personalize.
Major improvements include (but are hardly limited to):
A new CSS-controlled GUI that allows for preset themes like darktable-elegant-darker, darktable-elegant-gray and others
A more versatile color picker that lets you sample any area
The addition of undo/redo support for tags, color labels, rating, metadata, and more in the Lighttable module
A new timeline view in the Lighttable
A new “culling” mode in the Lighttable
And a “quite extensive rewrite” of both the Lighttable and the Filmstrip that promises “drastically” improved performance.
That last point addresses one of the complaints we’ve seen most regularly when writing about Darktable, so it has the potential to really improve the Darktable experience.
There’s way too much in this first release candidate to cover here, but suffice to say that the first build of Darktable 3.0 comes with a slew of new features, usability & UI improvements, and bug fixes, and you can read about all of them in detail at this link.
To learn more about this editor or pick up the first Release Candidate, head over to the Darktable website or go straight to GitHub to download Version 3.0.0rc0 for Windows, macOS or Linux. And if you’ve never heard of Darktable (or you’ve heard of it but never actually given it a try), be sure to check out this video, which offers a comprehensive introduction to the software.
Free, open source software comes with its fair share of quirks, but Darktable (and the other popular option, RawTherapee) has served many an Adobe deserter very well for the price of “on the house.”
There’s an unbelievable auction currently live on eBay that might rank as the most expensive item we’ve ever seen on the site. Uncovered by the folks over at The Phoblographer, the auction is offering hundreds of historic WWII prints, a Kodak Pocket camera, and an extremely rare negative of the Hiroshima bombing, all for the whopping buy-it-now price of $2,000,000.33.
Historic collections such as these are typically sold through one of the storied auction houses, like Sotheby’s or Christie’s, which is why it’s so strange to see a $2,000,000 item for sale on eBay. That said, eBay seller classicbooks has a sparkling track record selling collectible items such as rare comics, posters, coins and even paintings. For all intents and purposes, this auction seems like the real deal.
Which brings us to the contents.
According to the auction’s description, that $2M will snag you:
233 first-generation silver prints, of which 37 are 8×10″; a Kodak camera belonging to a soldier of the 9th Photographic Technical Squadron; and the main item: a 9×9″ Kodak black-and-white negative depicting the aftermath of the bombing of Hiroshima on August 6th, 1945.
The photographs were taken in Guam and Japan and developed by the “9th Photographic Technical Squadron”. This unit was mostly unknown until 2016.
The main item in question is the print shown below, which was “likely made with a Fairchild K-22 aerial camera” on a “Boeing F-13A Reconnaissance Super Fortress (a modified B-29 bomber) from the 3rd Photo Reconnaissance Squadron.”
The auction claims that this is “the only negative of [the] August 6th, 1945 atomic bombing ever offered for auction,” writing that it was discovered “after 74 years” among the possessions of a soldier who served in the 9th Photographic Technical Squadron. According to the description, the photo was found folded with two small tears in the center, and was “gently flattened” for scanning.
Here’s a scan of the photo, and a closer crop on the atomic cloud in the top right of the frame:
These highly collectible items are being kept in a bank safety deposit vault, and if you win the auction, you must “pay within 24 hours or it will be offered to the second highest bidder.” You’ll also need to pick up the items in person in Stamford, CT, which makes sense—if you’re willing to spend $2 million on this collection, you’re probably not going to want to risk having your items “lost in the mail.”
To learn more about this lot or possibly put in an offer, head over to eBay. As of this writing, the auction has no end date or best offer listed… but there are 21 watchers.
The Art of the Cut podcast brings the fantastic conversations that Steve Hullfish has with world renowned editors into your car, living room, editing suite and beyond. In each episode, Steve talks with editors ranging from emerging stars to Oscar and Emmy winners. Hear from the top editors of today about their careers, editing workflows and about their work on some of the biggest films and TV shows of the year.
On today’s podcast, Steve talks with Joe Klotz, ACE about his editing work on “Motherless Brooklyn”. Joe received an Oscar nomination for his work on “Precious.” He also edited the Netflix breakout film “To All The Boys I’ve Loved Before.” You can listen to Steve’s full conversation with Joe about editing “Motherless Brooklyn” below:
This week’s episode of the Art of the Cut Podcast is brought to you by LaCie. As one of the leading media storage companies in the entertainment industry, LaCie consistently brings innovative ideas to the market. Make sure to listen to the above interview for a special offer from LaCie when you shop on Filmtools.com!
Creative consultant and talented videographer Daniel DeArco is one of the best there is at creating impressive transitions from shot-to-shot in his videos. In his latest video, he’ll take you behind the scenes to show you exactly how he created one of the coolest cuts in his recent empathy video.
The video is all about “match cuts,” and while this technique might require quite a bit of thoughtful planning before you go out and shoot, DeArco explains that it’s “honestly nothing special” when it comes to putting these transitions together in post production.
First things first: a match cut is “when you use the texture, shape or composition from one video to transition into a subsequent one.” DeArco shared a few examples on his Instagram, which he also included in the video above. Done well, a match cut looks something like this:
After explaining the basics, DeArco goes on to show you how he created one of the coolest shots (in our humble opinion) from his recent video on how empathy can transform your work and help you connect with clients. It’s a few-second shot of something simple—unlocking a door—but by using two match cuts he creates something engaging that pulls you in.
Getting the shots required took some thoughtful planning—and in this case, some actual fabricating of props—but the actual shots themselves are less than a second long and nothing “special” at all. A key approaching a lock, a swinging shot to change perspective on the key, and a key entering the lock—but put them together, and you get this:
Check out the video up top to find out more about match cuts and follow along as DeArco creates and captures each of the three shots he needs. Then check back in tomorrow for Part 2, where he’ll show us how to put them all together in post.
Robert Eggers and DP Jarin Blaschke went through a painstaking process of retrofitting ’30s lenses, optimizing Kodak black-and-white film stock, developing a custom orthochromatic filter, and designing frames specifically for 1.19:1.
In the late 19th century, a young drifter and a veteran lighthouse keeper arrive at a desolate, storm-wracked island. Their mission is to operate and maintain the lighthouse, in total isolation, throughout an unforgiving winter. The scruffy young Ephraim Winslow (Robert Pattinson) hopes to learn the art of being a “wickie” from Thomas Wake, an aging, volatile seaman who speaks in gruff nautical verse. Wake takes his job seriously; he won’t have Winslow messing it all up. He sees himself as the guardian of the lighthouse’s traditions and superstitions: “Never kill a seabird,” he implores Winslow, “unless you want to disturb the soul of a sailor that met his maker.”
The Irishman director took a break from his “Marvel isn’t cinema” contributions to reveal why he didn’t direct the highest-grossing R-rated movie of all time.
Despite spending four years working in some capacity developing Joker — director Todd Phillips’ controversial blockbuster about the origins of Batman’s nemesis — producer Martin Scorsese ultimately decided not to direct the film.
As audiences know, Joaquin Phoenix’s gritty take on the Clown Prince of Crime is inspired by and borrows from such Scorsese classics as The King of Comedy and Taxi Driver (especially the former). And the idea of Scorsese making his first comic book movie, despite his divisive comments about the artistic merits of Marvel films, is worthy of the admission price alone. So why did he abandon the project before it got the greenlight in 2018? His answer may surprise you.
Judas Collar is a story that made me quit my job as a television documentary director and decide to take the plunge into the world of writing and directing drama.
When I discovered that lone camels, known as ‘Judas’ animals, were collared with a tracking device used as bait to betray their herd to hunters, I knew it was a story I wanted to tell as a drama and not a documentary. But how would you even begin to direct a film starring camels, let alone a film…starring camels…complete with action sequences?
The shoot for our short film Judas Collar was incredibly challenging—filming with eight camels and a helicopter in the remote Australian desert. We had a small but extremely dedicated crew of fifteen people who had to juggle camel wrangling without compromising their film roles. Over the course of six days, we endured eight flat tires, two bogged vehicles, and a blown head gasket on our camel truck. Filming an action set piece in the desert with camels certainly wasn’t easy.
A judge in Ohio has decided that the two teenagers charged with killing 44-year-old photographer Victoria Schafer in Hocking Hills State Park two months ago will be tried as adults. If convicted, they could face life in prison.
In an update to the story of Schafer’s tragic death, local news station NBC4i is reporting that 16-year-olds Jaden Churchheus and Jordan Buckley are being charged with murder, involuntary manslaughter, felonious assault, and reckless homicide, and will be tried as adults in the Court of Common Pleas.
Schafer was killed in Hocking Hills State Park on September 2nd, when a large section of tree fell and hit her during a photo shoot near Old Man’s Cave. But what initially seemed like a tragic accident was quickly ruled foul play, when investigators discovered evidence that the falling tree may not have been “a natural occurrence.”
Ohio Crime Stoppers offered a $10,000 reward for any information that might lead to arrest and conviction of the responsible parties, and the incentive seems to have worked. On October 10th, Churchheus and Buckley were arrested and confessed to playing a part in Schafer’s death.
According to WLWT, the teens were arrested after authorities received a tip about a text message one of the teens sent to a classmate saying that “he and a friend did something serious.” Once arrested, the teens admitted to “forcing a 74-pound log off a cliff,” which fell more than 75 feet, hitting and killing Schafer while she was taking senior portraits for a group of students.
The teens appeared in court today, where a Hocking County judge decided that they would both be tried as adults and issued each a bond of $100,000. According to NBC4i, if convicted on all four charges mentioned above, they could face life in prison.
Over the past six months, it’s been a season of new camera releases, each more tempting than the last. The latest crop of mirrorless hybrids and digital cinema cameras presents some compelling new features and innovations designed to make shooting more efficient and, to my eye, the output more impressive.
The past few months have seen several new cameras announced, but the ones that come to mind immediately as the most interesting are:
Blackmagic Design Pocket Cinema Camera 6K — $2,495
Panasonic Lumix DC-S1H — $3,997
Sony PMW-FX9 — $10,998
Canon EOS C500 MKII — $15,999
Within such an enormous price range, what features make these cameras so interesting? Let’s review what makes the latest crop of cameras compelling:
6K Sensors
One of the new cameras, the Sony PMW-FX9, features a 6K sensor with 4K recording, while the other three all feature native internal 6K recording.
Internal RAW Recording
Two of the cameras (the Blackmagic and the Canon) allow for internal RAW recording. The Sony and Panasonic will both allow external RAW recording, which, to me, is a non-starter. Once you’ve shot with internal RAW recording, shooting RAW externally seems like a step backward, but it’s nice that all four cameras at least have the option to shoot RAW, period.
Modularity
The Blackmagic Design Pocket Cinema Camera 6K can interface with a Blackmagic external battery grip, which goes a long way toward solving its short single-battery internal runtime. The Panasonic S1H can interface with the same optional Panasonic external audio interface that the GH5 and GH5S have utilized over the last few years.
Both of these lower-dollar cameras pale in comparison with the Sony FX-9 and Canon C500 MKII when it comes to modularity. The Sony will interface with an accessory back that allows for various additional external interface functions, and the Canon C500 MKII has a whole new lineup of optional EVFs, camera backs, and other accessories that will let you customize the camera’s connections and interfaces to a degree no other C Series camera has offered before.
Higher Bit Rates And Data Rates
Some customers require certain bit rates and data rates. It’s fair to say that 8-bit video recording is now considered passé, at least on pro digital cinema cameras, although 8-bit recording is still common with mirrorless cameras. All four of these cameras offer a minimum of 10-bit recording, with some offering 12-bit recording and even 16-bit output. All four offer data rates that are impressively robust and would have been unheard of just a few short years ago. As recording media has improved, so too has digital cinema and mirrorless cameras’ ability to record in higher and higher data-rate formats, including RAW at up to 5.9K (5952 × 3140) and an astounding 2.1 Gbps, which requires the new CFexpress card format.
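To put that 2.1 Gbps figure in perspective, here is a quick back-of-the-envelope calculation of how fast such a stream eats storage. The 512 GB card size is an assumed example for illustration, not a figure from the article or any spec sheet:

```python
# Back-of-envelope math for a 2.1 Gbps RAW stream.
# The 2.1 Gbps figure is from the article; the card size is assumed.

GBPS = 2.1                       # gigabits per second
gigabytes_per_second = GBPS / 8  # 8 bits per byte -> 0.2625 GB/s
gigabytes_per_minute = gigabytes_per_second * 60

card_gb = 512                    # hypothetical CFexpress card capacity
minutes_per_card = card_gb / gigabytes_per_minute

print(f"{gigabytes_per_minute:.2f} GB per minute")              # 15.75
print(f"{minutes_per_card:.1f} minutes per {card_gb} GB card")  # ~32.5
```

At roughly 15.75 GB per minute, even a large card fills in about half an hour, which is why these formats demand the newest, fastest media.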
I’ve shot with two of these four new cameras, the Blackmagic and the Panasonic. Unfortunately, the Sony and the Canon aren’t yet available to review, but based upon previous experience with the Sony PMW-FS7 and FS7 MKII, with the Canon EOS C100, C100 MKII, C300 MKI and MKII, and with the C200 that I own, I can surmise at least roughly how the Canon and Sony will perform. In my opinion, we’ve finally reached the point where any new camera hitting the market will be better, but how many of us really need a better camera than this crop of technology?
Are We Hitting A Wall?
A question raised repeatedly on discussion boards and in digital cinema forums is whether we’re basically already at the saturation point for new digital cinema camera technology. What do we mean by “saturation point”? To answer that, let’s look at what customers and clients are looking for when they hire you, either to shoot footage for them as a production services provider or, as a production company, to shepherd their project all the way through the creative process, from idea to final product.
Now that the latest crop of cameras has hit the 6K barrier, perhaps it makes sense to take a look at what real clients in the real world are actually asking for.
In our personal experience over the past two or three years, the majority of clients in the markets we shoot and produce in are still requesting 1080 acquisition. Wait, aren’t we in the era of 4K video already? Well, yes and no. What we’re hearing over and over again is that many of our clients’ internal workflows for editing, monitoring, archiving, and outputting are still optimized for 1080.
4K UHD is four times the size of 1080, creating a resolution profile that’s two times wider and two times higher than 1080 HD, for a total pixel count exactly four times larger overall. Some of these clients are fine shooting a project in 4K UHD, but the final output still needs to be 1080 for the majority of projects we’re hired for. About 35 to 40 percent of the time, the clients don’t specify which format and frame size they want to shoot in, and we often recommend shooting a project UHD (3840×2160) even if we’re going to edit the footage in a 1080 timeline. In this way, at least the client’s footage, if not the edit, will be somewhat “future-proofed”, as they could always go back and re-edit the project in UHD resolution. About 20 percent of the time, clients specify and request that the project be entirely shot and delivered in UHD.
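The pixel math behind that “four times” figure is easy to verify; the DCI cinema 4K number is included only for comparison:

```python
# Pixel-count arithmetic for common delivery resolutions.
hd = 1920 * 1080      # 1080 HD:  2,073,600 pixels
uhd = 3840 * 2160     # 4K UHD:   8,294,400 pixels
dci_4k = 4096 * 2160  # DCI 4K:   8,847,360 pixels (cinema 4K, for comparison)

print(uhd / hd)               # 4.0  -> UHD is exactly 4x 1080
print(round(dci_4k / hd, 2))  # 4.27 -> DCI 4K is slightly more than 4x
```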
Listening To Who Pays The Bills
What conclusions can we draw from what our customers are telling us? Simple. The sum of all projects being shot in at least 4K and delivered in 4K is still quite a bit smaller than many in our industry would have projected just two years ago. If we look at where we are today with shooting and delivering 4K, does it make sense to buy any camera based on its ability to shoot and record in 6K resolution? What about 8K? That’s a question you have to ask yourself. We now know that with Bayer sensors and the DeBayering process, to obtain optimal down-sampled UHD 4K footage, it helps if the sensor can shoot at a native 5.7K to 5.9K resolution, since you lose resolution during DeBayering. If a 4K native sensor is used instead, the DeBayered image will resolve less than UHD and will always fall short of fulfilling the potential of a UHD specification. Of course, this is all a resolution discussion and not image quality or image characteristic talk, which is a totally different set of criteria.
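As a rough sketch of why oversampling helps, here is the arithmetic using an assumed Bayer effective-resolution factor of 0.75. That factor is a commonly cited rule of thumb for the luma detail a Bayer sensor actually resolves after DeBayering, not a measured figure from the article:

```python
# Why a ~5.9K Bayer sensor helps when delivering true UHD.
# BAYER_FACTOR is an assumed rule-of-thumb value, not a measurement.
BAYER_FACTOR = 0.75

def effective_horizontal_res(sensor_width_px: int) -> float:
    """Approximate resolved horizontal luma detail after DeBayering."""
    return sensor_width_px * BAYER_FACTOR

print(effective_horizontal_res(3840))  # 2880.0 -> falls short of 3840 UHD
print(effective_horizontal_res(5952))  # 4464.0 -> comfortably above 3840
```

Under this assumption, a native 4K Bayer sensor resolves well under the 3840 horizontal pixels a UHD delivery implies, while a 5.9K sensor clears the bar with room to spare.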
The Business Case
A lot of your decisions and my own decisions about when to buy a new camera and which camera to buy should center on the business case. Here’s an example. Right now, in 2019, in our market, which is centered in Los Angeles, mostly in the entertainment media, shooting mainly EPK, BTS, and documentary-type footage, with some occasional corporate and event work thrown in for good measure, we’re able to charge clients a day rate for the camera package of around $450 to $650, which includes the camera, media, batteries, charger, tripod, and a zoom lens. Adding wireless video transmission and a monitor, better and longer lenses, and external recording to ProRes HQ as options takes the base $450 rate up to around $650.
Looking at our clients, their needs, and their preferences, our current C200 package fulfills most of their needs most of the time, so we can surmise that for the majority of our clients, our camera, or a similar one (Panasonic EVA 1, Canon C300 MKII, Sony FS7/FS7 MKII), would fill their needs nicely. A Canon C200 or any of its competitors would cost around $6,000 to $7,500 new for the camera body only. While I expect the two new digital cinema offerings, the Sony FX-9 and the Canon C500 MKII, would be a delight to shoot with, and either would offer superior features in some areas over our C200, I can say with some confidence that none of those features would motivate our clients to pay more than the current $450 to $650 per day for our camera package.
In extrapolating this financial strategy, I’ve come to the conclusion that it won’t be worth it, from a business perspective, for us and our clients to upgrade from our C200 to the FX-9 or the C500 MKII in the near future. This is not to say that the situation couldn’t change and evolve, but viewing it through the lens of today’s work with today’s clients and their current needs, we feel no immediate urge to sell off our year-and-a-half-old C200 to update to the latest and greatest successors.
If we were new to buying digital cinema cameras, we might find the new features offered by either to be very appealing and either could prove to be the right choice as our new first digital cinema camera. For quick turnaround day playing, the Sony FX-9 seems as if it will be a very worthy successor to Sony’s immensely popular FS7/FS7 MKII cameras. For higher budgeted, more involved projects that will be color corrected, graded and have longer production timelines, the internal RAW capability will make the C500 MKII appealing for a large population of users, clients and projects.
The real question is, what’s your business case for buying a new camera or for trading up from your current camera to the latest and greatest?
Accusonus’ ERA 4 Voice Leveler plugin is currently on sale – costing only $9 (instead of the $59 regular price) – until the 8th of November. The one-knob tool can significantly speed up your audio work when it comes to leveling a person’s voice. Let’s take a closer look!
ERA 4 Voice Leveler
The Accusonus ERA 4 series of plugins are one-knob tools made for filmmakers who want to fix and improve their audio quickly. These plugins are compatible with every NLE and audio application on the market that accepts VSTs. If you wish to hear and learn more about these one-knob tools, take a look at the full review I did some time ago, which includes audio examples.
So, what does this plugin do, and how can it help you? When you record someone speaking – the most typical scenario being a talking-head interview – the voice intensity of the person can vary a lot. Of course, you can adjust the volume input directly on set, on your camera or mixer, but as a one-man-crew, you have a lot of things to deal with already, and it’s nearly impossible to do.
If you need the voice of the person speaking to be “constant” – not low at certain moments and higher at others – voice leveling the dialogue by hand is a time-consuming task. The ERA 4 Voice Leveler plugin helps you smooth out your voice track without adding background noise when the person is not speaking.
I’ve used this plugin a lot over the past six months, and it does a fantastic job. Of course, you have to be gentle and subtle when you play with the knob to get great results. It’s become a core part of my audio chain for interviews.
Pricing and Availability
Currently, there is a special offer for the ERA 4 Voice Leveler plugin that lasts until the 8th of November. You can get the plugin for only $9 (instead of the $59 regular price). At this price, it’s a real bargain. If you’re not quite sure whether it can aid your workflow, you can try it for free. To get access to this offer, follow this link to the Accusonus website.
Have you already tried ERA 4 plugins? What do you think of these easy to use tools? Let us know in the comments!
Ever since Google debuted Night Sight, people have marveled at how capable the computational photo technology has proven to be. But how does it compare to good old-fashioned sensor size? Andrew Branch of the YouTube channel Denae & Andrew decided to find out.
In his latest video, Branch pits the Google Pixel 4 and the latest Night Sight technology against the full-frame mirrorless Canon EOS RP in a blind “taste” test that’s meant to simulate how most consumers would try and capture each low-light scene he shot.
Branch is kindly allowing us to share a few of the comparison photos with you below, and you can see all 11 comparison scenes by watching the full video.
Scroll down to see five different scenes, each captured with both the EOS RP and Google Pixel 4’s Night Sight mode. Click on each photo to see it in full resolution, make your guesses as to which photo was shot with which camera, and we’ll reveal the answers at the very bottom.
There was a time when comparing the low-light photography chops of a smartphone against a full-frame camera seemed ludicrous, but with the advances in computational photography that we’ve seen from both Google and Apple over the past year, that’s no longer the case. For everyday use and especially for Web consumption, the results that Google is able to produce by combining multiple images on the Pixel 4, despite its tiny image sensor, are downright incredible.
If you guessed that, overall, the Google Pixel 4 photos were the cleaner and brighter of each pair above, you’d be 100% correct. As you can see from the Answer Key, Google’s computational photography allows for much cleaner low-light imagery, despite its tiny sensor, than shooting the EOS RP hand-held:
Google Pixel 4
Google Pixel 4
Google Pixel 4
Canon EOS RP
Canon EOS RP
Google Pixel 4
Google Pixel 4
Canon EOS RP
Now, before the comments section fills up with claims that this isn’t an apples-to-apples comparison, Andrew wanted to share three important disclaimers that explain why he designed this shootout the way that he did:
Of course the FF camera will do better if multiple photos are stacked and processed in a similar way as the Pixel 4 is doing in post. But the point of the comparison was to see how each performs in-camera.
Of course the results of the FF camera would have been better with a tripod. The point was to compare hand-held.
Of course there are better FF cameras for low-light photography. But the EOS RP is the cheapest Canon FF mirrorless consumer camera, putting it closer in price to the Pixel 4. So that’s why we chose it.
So, would the EOS RP (or any other full-frame, APS-C or M43 camera) have outperformed the Pixel 4 given a tripod and a few seconds of exposure time per shot? Probably. But given the same parameters—the ones most typical consumers are using when they take a low-light photo—it’s clear that computational photography has a major advantage.
Now… imagine what a “real” camera could do given the same hand-held, high-speed image stacking technology. That’s what we’re really waiting for.
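For a sense of why stacking works at all, here is a toy numpy sketch of frame averaging: random sensor noise shrinks by roughly the square root of the number of aligned frames. This illustrates only the averaging principle, not Google’s actual Night Sight pipeline, which also handles alignment, merging, and tone mapping:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate one dim scene and 8 hand-held "exposures" of it,
# each corrupted by random sensor noise (toy model only).
scene = np.full((100, 100), 0.2)  # true low-light brightness
frames = [scene + rng.normal(0, 0.1, scene.shape) for _ in range(8)]

single = frames[0]                 # one noisy exposure
stacked = np.mean(frames, axis=0)  # average of 8 aligned frames

# Residual noise (std dev of the error) drops by roughly sqrt(8) ≈ 2.8x.
print(np.std(single - scene))   # ~0.10
print(np.std(stacked - scene))  # ~0.035
```

Real pipelines must first register the frames against hand shake before averaging, which is where most of the engineering difficulty lives.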
Check out the full video up top for more side-by-side comparisons. And if you like this sort of thing, click here to see Andrew and Denae’s blind comparison of Fuji vs Canon color science.
Image credits: Photos by Andrew Branch and used with permission.
Canon very quietly today released a dedicated astrophotography version of its EOS R camera, the EOS Ra.
Much like the Nikon D810a and Canon’s own 20Da, the EOS Ra has been modified to better capture celestial objects in the night sky, but aside from that remains unchanged from its more conventional counterpart. Specifically, Canon has modified the IR-cut filter in front of the full-frame sensor, allowing as much as four times more hydrogen-alpha light (656nm wavelength) to reach the sensor compared to the standard EOS R. This alteration makes it easier to capture the deep red light given off by nebulae and other objects in space.
The EOS Ra also offers a 30x magnification option in the EVF and in Live View, a dramatic increase from the 10x magnification found in the standard EOS R. This should make it easier to nail precise focus on celestial bodies.
Aside from those two alterations, the EOS Ra is effectively identical to the EOS R, complete with the 30MP sensor, 3.69M-dot OLED EVF, dot-matrix LED panel and magnesium-alloy body.
The Canon EOS Ra is currently available to pre-order for $2,500. No estimated shipping timeframe has been given at this time.
The first release candidate for Darktable 3.0 has been released for users to test. The new version represents a major upgrade for the software, which joins RawTherapee in being one of the best open-source applications for photographers.
Among many details that will mostly be of interest to developers, not casual users, the team behind Darktable notes that Darktable 3.0 features ‘a full rework’ of its user interface, making it possible to fully theme the software’s GUI. Multiple themes will be included with Darktable 3.0, including the default Darktable theme alongside darker and lighter variants.
Another major change will be the addition of undo and redo support in lighttable for ratings, metadata, tags, color labels, and more. Beyond that, users can expect a new ‘Culling’ lighttable mode and new timeline view, ‘drastically’ improved performance and usability with 4K and 5K displays, plus support for reordering modules.
Darktable 3.0 likewise brings a new histogram profile, multiple changes to denoise, a new 3D LUT transformations module, an update to the Picasa module that transitions it to Google Photos with support for the latest Google Photos API, a faster and generally improved tagging module, ‘many’ code optimizations for SSE and CPU paths, and several new modules such as ‘RGB Curve’ and ‘RGB Levels,’ plus a huge number of tweaks and small additions.
Users can also expect a large number of bug fixes to arrive with Darktable 3.0, which is currently available to download as a release candidate for Linux, macOS and Windows through GitHub.