Quick Tip: Get Better Exposure Results with Luxi Light Meter

A light meter as a standalone device. Do you even need one nowadays? And if so, do you have to spend a high three-digit dollar figure? The Luxi For All might be a first step into these waters. It utilizes your existing phone, so it's cheap, it's very easy to use, and you can expand its capabilities with third-party apps. Pretty neat, I'd say.


The Luxi For All light meter is not a new product, far from it. It originally started as a Kickstarter campaign back in May 2014. It is the successor of the original Luxi (also a Kickstarter campaign) but with an upgraded mechanism for attaching it to virtually any mobile device – that's why it's called Luxi For All.

Luxi For All

This device is really nothing fancy, but maybe that's the beauty of it. It's just a clip that holds a translucent dome. The whole thing can be clipped onto your phone so that the dome covers your front-facing camera. That's it; the magic is applied via software.

So what's the advantage of this over your average camera? Well, the camera measures the brightness of a particular scene by the intensity of light reflected from objects within the frame. A dedicated light meter, on the other hand, measures the light falling on the subject before it is reflected, which is known as incident light metering. This gives you far better control over what precisely you measure.
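To make the incident approach concrete, here is a minimal sketch of how a lux reading taken through a dome like the Luxi's could be turned into exposure settings. It assumes the standard incident metering equation N²/t = E·S/C with a typical calibration constant of about 250 (meters and diffusers vary); the function names are mine, not from any Luxi app.

```python
import math

def ev_at_iso100(lux, calibration=250.0):
    """Convert an incident-light reading in lux to EV at ISO 100.

    Based on the incident metering equation N^2 / t = E * S / C, where C is
    a meter calibration constant, commonly around 250 (it varies by meter
    and diffuser type).
    """
    return math.log2(lux * 100.0 / calibration)

def shutter_seconds(lux, aperture, iso, calibration=250.0):
    """Suggest a shutter time in seconds for a given aperture and ISO."""
    return (aperture ** 2) * calibration / (lux * iso)

# Example: a reading of 5000 lux, shooting at f/4 and ISO 100
print(round(ev_at_iso100(5000), 1))                  # ~11.0 EV
print(shutter_seconds(5000, aperture=4.0, iso=100))  # 0.008 s, i.e. about 1/125 s
```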


You can use a professional, fully-featured light meter, like a Sekonic, and you'll get what you pay for. But maybe you don't really need all the features of such a device and you still want to get better readings. That's where a cheap device like the Luxi For All comes in. It certainly isn't the best tool out there, but to get started, that little thing is all you need, really. The accompanying app is basic but gets the job done. If you want a few more features, other (third-party) apps work with the Luxi, too. Cine Meter II by Adam Wilt would be a possibility here (iOS only, available on the App Store). For Android users, Lux Light Meter by Doggo Apps could be worth a closer look (Google Play Store).

The original Luxi app is also available for both iOS and Android.

Price

The Luxi For All is a mere $25 US. Whether it’s worth it is a question that you can only answer for yourself. I think it’s a great product and a door opener for beginners who want to get better results when exposing a scene, or just to learn a little bit about lighting. Contrast ratios, dynamic range, grey cards, you name it. Another good starting point is this article by Attila Kun of ExposureGuide.com.

Alternatives

I personally used to use a Sekonic L-758c but I sold it since I barely used it anymore. What I do use a lot is the Lumu Power light meter (see our article). It's iOS only since it sports a Lightning connector for attaching to your iPhone. There's a version 2, but I haven't upgraded to it since the original Lumu Power works just fine for me.


Whether your phone's camera delivers more accurate readings with the Luxi attached, I can't say; since the Luxi depends on 1) the built-in camera of its host phone and 2) the software running the measurements, there's no real way to define its performance. But, and this is key, it enables you to take a real incident light reading, and that alone is a good thing to do.


Sekonic L-758c (left), Lumu Power (right).

Plus, the Lumu or the Luxi can live in your bag and won’t be in your way until you need them. A big light meter might be another story.

Links: Website

What do you think? Is a light meter still a thing? Do you use one? Share your experiences in the comments below!


Nikon Releases NIKKOR Z 58mm f/0.95 S Noct Lens – World’s Fastest Z-Mount Lens

Nikon releases the NIKKOR Z 58mm f/0.95 S Noct manual-focus lens, which is currently the fastest lens for the Z-Mount and one of the world’s fastest lenses ever. It covers full-frame sensors, offers a function button, and comes with a dedicated case. This unique lens is now available for pre-order for slightly under $8,000.

NIKKOR Z 58mm f/0.95 Noct lens released. Image credit: Nikon

Nikon is working hard to extend its mirrorless Z-mount lineup. The Japanese company recently announced the new APS-C camera Nikon Z50 along with two new budget DX lenses. Nikon has now also announced the fastest lens in the Z-mount lineup – the NIKKOR Z 58mm f/0.95 S Noct. Let's take a look at its features.

Nikon NIKKOR Z 58mm f/0.95 S Noct Lens

The NIKKOR Z 58mm f/0.95 S Noct is an ultra-fast, standard prime, manual-focus lens for the Nikon Z mount system's full-frame (Nikon FX-format) mirrorless cameras – the Z 6 and Z 7. It takes advantage of the Z-mount's large inner diameter (55 mm) and short 16 mm flange focal distance to realize an f/0.95 maximum aperture.

NIKKOR Z 58mm f/0.95 Noct lens released. Image credit: Nikon

In a way, this lens pays homage to the original AI Noct NIKKOR 58mm f/1.2 standard prime lens, which was released in 1977 and was well received for its ability to reproduce sharp and clear point images. That lens was named after the musical compositions known as nocturnes and was optimized for nighttime photography, with a fast maximum aperture and very good point-image reproduction characteristics.

The optics for this lens are constructed of 17 elements in 10 groups, for which four ED glass elements and three aspherical lens elements are employed. These elements realize a high degree of correction for various types of aberration, including distortion and spherical aberration.

NIKKOR Z 58mm f/0.95 Noct lens optical structure. Image credit: Nikon

The lens is equipped with a lens info panel that can be used even in dark surroundings to confirm information such as aperture value, shooting distance, and depth of field. It has been designed with consideration given to dust- and drip-resistance, featuring a fluorine coat that effectively repels dust, water droplets, grease, and dirt.

When it comes to coatings, Nikon used anti-reflection coatings, ARNEO Coat and Nano Crystal Coat. The combination of these two coating technologies effectively reduces ghost and flare generated by incident light for sharp and clear images. The lens features unique spatial expression with ideal blur characteristics as the degree of bokeh transitions smoothly and naturally with increasing distance from the focal plane.

NIKKOR Z 58mm f/0.95 Noct lens case. Image credit: Nikon

The lens is designed with machined metal exterior components and a yellow engraved “Noct” logo. It is equipped with focus and control rings that rotate smoothly for operation with a precision feel. The NIKKOR Z 58mm f/0.95 S Noct would really shine in portrait, night landscape, and starscape photography and cinematography. The lens comes with a dedicated trunk case bearing the Noct logo.

NIKKOR Z 58mm f/0.95 S Noct – Key Specs

  • Focal length: 58mm
  • Aperture: f/0.95 – f/16
  • Lens mount: Nikon Z
  • Format compatibility: Full-frame
  • Angle of view: 40° 50′
  • Maximum magnification: 0.19x
  • Minimum focus distance: 1.64′ / 50 cm
  • Optical design: 17 elements in 10 groups
  • Diaphragm blades: 11
  • L-Fn (lens function) button with a variety of assignable functions
  • Dust and drip resistance
  • Focus type: Manual focus
  • Image stabilization: No
  • Tripod collar: Fixed and rotating
  • Filter size: 82 mm (front)
  • Comes with a felt-lined lens hood
  • Dimensions (ø x L): 4.02 x 6.02″ / 102 x 153 mm
  • Weight: 4.4 lb / 2000 g

Price and Availability

The Nikon NIKKOR Z 58mm f/0.95 S Noct lens is available now for pre-order. It should start shipping at the end of November 2019. The price has been set at slightly under $8,000, which reflects the uniqueness of this lens.

The NIKKOR Z Lens Lineup Expansion to 2021

Along with the ultra-fast Noct lens, Nikon also released the following table showing its existing and planned lens lineup for Z-mount. Please note that the information in this lens lineup table, including release dates, is subject to change.

NIKKOR Z-mount lenses roadmap. Image credit: Nikon

What do you think of the new ultra-fast NIKKOR Noct lens? Would you invest in such a lens? Let us know in the comments underneath the article.


Why Vidiots Coming Back to LA Is a Big Deal for Filmmakers

Los Angeles filmmakers: Get excited that this indie film staple is coming back to make your jobs a little easier — and more fun.

November 2019 marks the beginning of what has become known, in the press at least, as the great Streaming Wars. Apple and Disney will both be launching their flashy, direct-to-consumer platforms, offering entire libraries of new and classic programming, available to viewers as quickly as they can turn on their iPad.

But what is fascinating is that, at the same time as the digital revolution threatens to change the way we consume our entertainment (if it hasn’t already), an old-school way of watching has reemerged from the not-so-distant past and it turns out that it could be a huge deal for Los Angeles filmmakers.


Zhiyun’s New Weebill S is a Robust Gimbal for Mirrorless and DSLR Cameras

The Weebill S improves upon its predecessor with a new motor, upgraded algorithm, and a new image transmission system.

If you’re a fan of Zhiyun, then you’ve most likely heard of/wanted/bought one of their most popular gimbals, the Weebill Lab, a versatile stabilizer that can not only handle mirrorless and DSLR cameras but is also very affordable. Now, the company has unveiled a new and improved version of their flagship gimbal, the Weebill S, which boasts motors that offer 300% more torque, an upgraded algorithm that allows for faster and smoother operation, and much, much more.

Check out the promo below:

The Weebill S has many of the same interesting features that made the Weebill Lab so enticing, including a lightweight design, multiple operational modes, and of course, that adorable mini-tripod.


How to Create and Animate a Custom Google Map

Want to create your own map and bring it to life?

In this tutorial, I’m going to show you how to create a custom Google Map animation. First, I’ll add markers and a route to a map inside of Google My Maps. I’ll export my map as a KMZ file, and then bring it over to Google Earth Studio where I’ll animate the markers and route.

Here's what I'll be using in this tutorial:

What you’ll need

  • A Google account
  • Google Chrome
  • Video editing software that can import image sequences (or a short script, as sketched after this list)
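Since Google Earth Studio renders its output as an image sequence, that last requirement can also be covered with a small script instead of an NLE. Here's a minimal sketch using OpenCV; the folder name, file pattern, and frame rate are placeholders to adjust to your own render.

```python
import glob
import cv2  # pip install opencv-python

# Placeholder pattern: point this at the frames rendered by Earth Studio
frames = sorted(glob.glob("earth_studio_render/frame_*.jpeg"))

first = cv2.imread(frames[0])
height, width = first.shape[:2]

# Assemble the frames into a 30 fps MP4 for further editing or upload
writer = cv2.VideoWriter(
    "map_animation.mp4",
    cv2.VideoWriter_fourcc(*"mp4v"),
    30,
    (width, height),
)
for path in frames:
    writer.write(cv2.imread(path))
writer.release()
```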

Step 1 – Create a Custom Map

For the first step, I'll go to Google My Maps. This page lets me create and customize my own Google Map. To begin, I'll add markers. I'll put each marker on a separate layer, which will give me more versatility when I go to animate the map inside of Google Earth Studio. Next, I'll customize the color and icon style of each marker, and then add a walking route traveling between the four marker points.


5 Essential Screenwriting Lessons You Can Learn From ‘Fleabag’ Season 2

The Fleabag scripts are full of lessons and inspiration, especially the script for the Season 2 premiere.

Writing a historically amazing first season was pressure enough, but Phoebe Waller-Bridge delivered when Fleabag came back for more.

Fleabag came into our lives and nothing was the same after. Certainly not priests. Now, we get the chance to look back, talk about the first episode of that Emmy-winning second season, and reflect on the lessons we can apply to our own writing.

Download the Fleabag S2E1 Script here!

5 Lessons from the Fleabag S2E1 Script

1. Start late

One of the many things to admire about this script — and the series in general — is how late we get into scenes. People are already sitting at the table, already having conversations, and already in the bathroom. This approach infuses every scene with kinetic energy.


Kevin Smith Screenplays (Download)

Take a listen to Kevin Smith as he discusses his screenwriting and filmmaking process. The screenplays below are the only ones that are available online. If you find any of his missing screenplays, please leave the link in the comment section. (NOTE: For educational and research purposes only.)


Why ‘Scream’ Has One of the Best Final Girls in Horror Movies

While the sequels are hit and miss, you can learn a lot from how the Scream franchise scripts treat their lead character, Sydney Prescott — one of the few Final Girls to have an arc in horror movies.

In 1992, professor and writer Carol Clover released Men, Women, and Chainsaws: Gender in the Modern Horror Film, a feminist text that analyzed the recent slate of slasher films. It’s in this text that Clover described The Final Girl, usually a virtuous woman who — thanks to her upstanding morals and plucky ingenuity — will outsmart the marauding killer and make it out alive.


These are the most important Google Pixel 4 camera updates

Google yesterday announced the Pixel 4 and Pixel 4 XL, updates to the popular line of Pixel smartphones.

We had the opportunity recently to sit down with Marc Levoy, Distinguished Engineer and Computational Photography Lead at Google, and Isaac Reynolds, Product Manager for Camera on Pixel, to dive deep into the imaging improvements brought to the lineup by the Pixel 4.


Note that we do not yet have access to a production-quality Pixel 4. As such, many of the sample images in this article were provided by Google.

More zoom

The Pixel 4 features a main camera module with a 27mm equivalent F1.7 lens, employing a 12MP 1/2.55″ type CMOS sensor. New is a second ‘zoomed-in’ camera module with a 48mm equivalent, F2.4 lens paired with a slightly smaller 16MP sensor. Both modules are optically stabilized. Google tells us the net result is 1x-3x zoom that is on par with a true 1x-3x optical zoom, and pleasing results all the way out to 4x-6x magnification factors. No doubt the extra resolution of the zoomed-in unit helps with those higher zoom ratios.

The examples below are captured from Google’s keynote video, so aren’t of the best quality, but still give an indication of how impressive the combination of a zoomed-in lens and the super-res zoom pipeline can be.

Captions: 6x, achieved with the telephoto module and super-res zoom; 1x, to give you an idea of the level of zoom possible while still achieving good image quality.

Marc emphasized that pinching and zooming to pre-compose your zoomed-in shot is far better than cropping after the fact. I’m speculating here, but I imagine much of this has to do with the ability of super-resolution techniques to generate imagery of higher resolution than any one frame. A 1x super-res zoom image (which you get by shooting 1x Night Sight) still only generates a 12MP image; cropping and upscaling from there is unlikely to get you as good results as feeding crops to the super-res pipeline for it to align and assemble on a higher resolution grid before it outputs a 12MP final image.
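To illustrate why merging a burst onto a finer grid beats upscaling a single frame, here is a toy shift-and-add sketch. It is emphatically not Google's pipeline (which aligns tiles robustly and merges in the raw domain), and the sub-pixel shifts are assumed to be known here, whereas a real pipeline has to estimate them.

```python
import numpy as np

def shift_and_add(frames, shifts, scale=2):
    """Toy super-resolution: drop each low-res frame onto a finer grid at its
    (known) sub-pixel offset and average the contributions.

    frames: list of HxW arrays; shifts: list of (dy, dx) offsets in low-res pixels.
    """
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    count = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, shifts):
        # Nearest cell on the high-res grid for every low-res sample
        ys = (np.arange(h)[:, None] * scale + int(round(dy * scale))) % (h * scale)
        xs = (np.arange(w)[None, :] * scale + int(round(dx * scale))) % (w * scale)
        acc[ys, xs] += frame
        count[ys, xs] += 1
    return acc / np.maximum(count, 1)

# With shifts like (0, 0), (0, 0.5), (0.5, 0) and (0.5, 0.5), four frames fill all
# four sub-positions of the 2x grid, recovering detail no single frame holds.
```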

We’re told that Google is not using the ‘field-of-view fusion’ technique Huawei uses on its latest phones where, for example, a 3x photo gets its central region from the 5x unit and its peripheries from upscaling (using super-resolution) the 1x capture. But given Google’s choice of lenses, its decision makes sense: from our own testing with the Pixel 3, super-res zoom is more than capable of handling zoom factors between 1x and 1.8x, the latter being the magnification factor of Google’s zoomed-in lens.

Dual exposure controls with ‘Live HDR+’

The results of HDR+, the burst-mode multi-frame averaging and tonemapping behind every photograph on Pixel devices, are compelling, usually retaining details in brights and darks in a pleasing, believable manner. But it's computationally intensive to show the end result in the 'viewfinder' in real-time as you're composing. This year, Google has opted to use machine learning to approximate HDR+ results in real-time. Google calls this 'Live HDR+'. It's essentially a WYSIWYG implementation that should give photographers more confidence in the end result, and possibly leave them feeling less of a need to adjust the overall exposure manually.

“If we have an intrinsically HDR camera, we should have HDR controls for it” – Marc Levoy

On the other hand, if you do have an approximate live view of the HDR+ result, wouldn't it be nice if you could adjust it in real-time? That's exactly what the new 'dual exposure controls' allow for. Tap on the screen to bring up two separate exposure sliders. The brightness slider, indicated by a white circle with a sun icon, adjusts the overall exposure, and therefore brightness, of the image. The shadows slider essentially adjusts the tonemap, so you can tune shadow and midtone visibility and detail to suit your taste.

Captions from Google's examples: default HDR+ result; brightness slider (top left) lowered to darken the overall exposure; shadows slider (top center) lowered to create silhouettes; the final result.

Dual exposure controls are a clever way to operate an ‘HDR’ camera, as it allows the user to adjust both the overall exposure and the final tonemap in one or two swift steps. Sometimes HDR and tonemapping algorithms can go a bit far (as in this iPhone XS example here), and in such situations photographers will appreciate having some control placed back in their hands.

And while you might think this may be easy to do after-the-fact, we’ve often found it quite difficult to use the simple editing tools on smartphones to push down the shadows we want darkened after tonemapping has already brightened them. There’s a simple reason for that: the ‘shadows’ or ‘blacks’ sliders in photo editing tools may or may not target the same range of tones the tonemapping algorithms did when initially processing the photo.
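As a rough mental model of the two sliders, here's a toy that applies a global gain for the brightness control and a simple gamma-style curve for the shadows control. The actual HDR+ tonemapper is far more sophisticated; this only shows why the two adjustments are independent of one another.

```python
import numpy as np

def dual_exposure(img, brightness=1.0, shadows=1.0):
    """Toy model of the two sliders on a linear image in [0, 1].

    brightness: overall exposure gain (the sun-icon slider).
    shadows:    values < 1 push shadows and midtones down, > 1 lifts them;
                a gamma curve stands in for the real tonemap adjustment.
    """
    out = np.clip(img * brightness, 0.0, 1.0)  # capture brightness
    return out ** (1.0 / shadows)              # shadow / midtone shaping

# e.g. darken the exposure and crush the shadows to get clean silhouettes:
# silhouettes = dual_exposure(linear_frame, brightness=0.6, shadows=0.5)
```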

Improved Night Sight

Google’s Night Sight is widely regarded as an industry benchmark. We consistently talk about its use not just for low light photography, but for all types of photography because of its use of a super-resolution pipeline to yield higher resolution results with less aliasing and moire artifacts. Night Sight is what allowed the Pixel 3 to catch up to 1″-type and four-thirds image quality, both in terms of detail and noise performance in low light, as you can see here (all cameras shot with equivalent focal plane exposure). So how could Google improve on that?

Well, let’s start with the observation that some reviewers of the new iPhone 11 remarked that its night mode had surpassed the Pixel 3’s. While that’s not entirely true, as I covered in my in-depth look at the respective night modes, we have found that at very low light levels the Pixel 3 does fall behind. And it mostly has to do with the limits: handheld exposures per-frame in our shooting with the Pixel 3 were limited to ~1/3s to minimize blur caused by handshake. Meanwhile, the tripod-based mode only allowed shutter speeds up to 1s. Handheld and tripod-based shots were limited to 15 and 6 total frames, respectively, to avoid user fatigue. That meant the longest exposures you could ever take were limited to 5-6s.

The Pixel 4 extends the per-frame exposure, when no motion is detected, to at least 16 seconds, with up to 15 frames. That's a total of 4 minutes of exposure, which is what allows the Pixel 4 to capture the Milky Way:

(4:00 exposure: 15 frames, 16s each)

Remarkable is the lack of user input: just set the phone up against a rock to stabilize it, and press one button. That’s it. It’s important to note you couldn’t get this result with one long exposure, either with the Pixel phone or a dedicated camera, because it would result in star trails. So how does the Pixel 4 get around this limitation?

The same technique that enables high quality imagery from a small sensor: burst photography. First, the camera picks a shutter speed short enough to ensure no star trails. Next, it takes many frames at this shutter speed and aligns them. Since alignment is tile-based, it can handle the moving stars due to the rotation of the sky just as the standard HDR+ algorithm handles motion in scenes. Normally, such alignment is very tricky for photographers shooting night skies with non-celestial, static objects in the frame, since aligning the stars would cause misalignment in the foreground static objects, and vice versa.

Improved Night Sight will not only benefit starry skyscapes, but all types of photography requiring long exposures

But Google's robust tile-based merge can handle displacement of objects from frame to frame of up to ~8% of the frame width¹. Think of it as tile-based alignment where each frame is broken up into roughly 12,000 tiles, with each tile individually aligned to the base frame. That's why the Pixel 4 has no trouble treating stars in the sky differently from static foreground objects.
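A toy version of per-tile alignment looks like the sketch below: a brute-force search for the offset that minimizes the sum of absolute differences for every 16×16 block. Google's actual merge is coarse-to-fine, sub-pixel and robust to mismatches, so treat this only as an illustration of why drifting stars and a static foreground can be aligned independently.

```python
import numpy as np

def best_tile_offset(base_tile, alt_frame, y, x, search=8):
    """Brute-force offset (dy, dx) that best matches one tile of the base
    frame inside an alternate frame, by sum of absolute differences."""
    th, tw = base_tile.shape
    best_err, best_off = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if yy < 0 or xx < 0 or yy + th > alt_frame.shape[0] or xx + tw > alt_frame.shape[1]:
                continue  # candidate window falls off the edge of the frame
            err = np.abs(alt_frame[yy:yy + th, xx:xx + tw].astype(np.float64) - base_tile).sum()
            if err < best_err:
                best_err, best_off = err, (dy, dx)
    return best_off

def align_frame(base, alt, tile=16, search=8):
    """Per-tile offsets for merging: each block is free to move on its own,
    so moving stars and static foreground objects get different alignments."""
    return {
        (y, x): best_tile_offset(base[y:y + tile, x:x + tile], alt, y, x, search)
        for y in range(0, base.shape[0] - tile + 1, tile)
        for x in range(0, base.shape[1] - tile + 1, tile)
    }
```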

Another issue with such long total exposures is hot pixels. These pixels can become ‘stuck’ at high luminance values as exposure times increase. The new Night Sight uses clever algorithms to emulate hot pixel suppression, to ensure you don’t have white pixels scattered throughout your dark sky shot.
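Google hasn't published the details of its hot-pixel handling, but a common approach is to flag pixels that sit far above their local neighborhood and replace them, roughly as in this sketch (the threshold is a placeholder, not a known value).

```python
import numpy as np
from scipy.ndimage import median_filter

def suppress_hot_pixels(frame, threshold=0.1):
    """Replace pixels that are implausibly brighter than their neighbors.

    A hot pixel stays stuck near full brightness regardless of the scene, so
    it stands well above the median of its 3x3 neighborhood. `frame` is
    assumed normalized to [0, 1]; the threshold is illustrative only.
    """
    local = median_filter(frame, size=3)
    hot = (frame - local) > threshold
    cleaned = frame.copy()
    cleaned[hot] = local[hot]
    return cleaned
```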

DSLR-like bokeh

This is potentially a big deal, and perhaps underplayed, but the Google Pixel 4 will render bokeh, particularly out-of-focus highlights, closer to what we’d expect from traditional cameras and optics. Until now, while Pixel phones did render proper disc-shaped blur for out of focus areas as real lenses do (as opposed to a simple Gaussian blur), blurred backgrounds simply didn’t have the impact they tend to have with traditional cameras, where out-of-focus highlights pop out of the image in gorgeous, bright, disc-shaped circles as they do in these comparative iPhone 11 examples here and also here.

The new bokeh rendition on the Pixel 4 takes things a step closer to traditional optics, while avoiding the ‘cheap’ technique some of its competitors use where bright circular discs are simply ‘stamped’ in to the image (compare the inconsistently ‘stamped’ bokeh balls in this Samsung S10+ image here next to the un-stamped, more accurate Pixel 3 image here). Have a look below at the improvements over the Pixel 3; internal comparisons graciously provided to me via Google.

The impactful, bright, disc-shaped bokeh of out-of-focus highlights is due to the processing of the blur at a Raw level, where linearity ensures that Google's algorithms know just how bright those out-of-focus highlights are relative to their surroundings.

Previously, applying the blur to 8-bit tonemapped images resulted in less pronounced out-of-focus highlights, since HDR tonemapping usually compresses the difference in luminosity between these bright highlights and other tones in the scene. That meant that out-of-focus ‘bokeh balls’ weren’t as bright or separated from the rest of the scene as they would be with traditional cameras. But Google’s new approach of applying the blur at the Raw stage allows it to more realistically approximate what happens optically with conventional optics.
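The effect is easy to demonstrate numerically. The sketch below blurs a single bright highlight with a disc kernel once in linear ('raw-like') space and once after a stand-in tonemap that clips to an 8-bit-style [0, 1] range; the gamma tonemap is my placeholder, not Google's.

```python
import numpy as np
from scipy.signal import fftconvolve

def disc_kernel(radius):
    """Disc-shaped kernel, a rough stand-in for a lens's out-of-focus circle."""
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    k = (x ** 2 + y ** 2 <= radius ** 2).astype(float)
    return k / k.sum()

def tonemap(linear):
    """Placeholder display transform: gamma, then clip to a [0, 1] '8-bit' range."""
    return np.minimum(np.clip(linear, 0, None) ** (1 / 2.2), 1.0)

scene = np.zeros((101, 101))
scene[50, 50] = 50.0  # a highlight 50x brighter than anything around it

kernel = disc_kernel(10)
raw_domain = tonemap(fftconvolve(scene, kernel, mode="same"))      # blur, then tonemap
display_domain = fftconvolve(tonemap(scene), kernel, mode="same")  # tonemap, then blur

# The raw-domain disc stays bright (~0.43 here); blurring the already-clipped
# highlight leaves only a faint smudge (~0.003), which is why bokeh balls lose punch.
print(raw_domain.max(), display_domain.max())
```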

Portrait mode improvements

Portrait mode has been improved in other ways apart from simply better bokeh, as outlined above. But before we begin I want to clarify something up front: the term ‘fake bokeh’ as our readers and many reviewers like to call blur modes on recent phones is not accurate. The best computational imaging devices, from smartphones to Lytro cameras (remember them?), can actually simulate blur true to what you’d expect from traditional optical devices. Just look at the gradual blur in this Pixel 2 shot here. The Pixel phones (and iPhones as well as other phones) generate actual depth maps, gradually blurring objects from near to far. This isn’t a simple case of ‘if area detected as background, add blurriness’.

The Google Pixel 3 generated a depth map from its split photodiodes with a ~1mm stereo disparity, and augmented it using machine learning. Google trained a neural network using depth maps generated by its dual pixel array (stereo disparity only) as input, and ‘ground truth’ results generated by a ‘franken-rig‘ that used 5 Pixel cameras to create more accurate depth maps than simple split pixels, or even two cameras, could. That allowed Google’s Portrait mode to understand depth cues from things like defocus cues (out-of-focus objects are probably further away than in-focus ones) and semantic cues (smaller objects are probably further away than larger ones).


The Pixel 4’s additional zoomed-in lens now gives Google more stereo data to work with, and Google has been clever in its arrangement: if you’re holding the phone upright, the two lenses give you horizontal (left-right) stereo disparity, while the split pixels on the main camera sensor give you vertical (up-down) stereo disparity. Having stereo data along two perpendicular axes avoids artifacts related to the ‘aperture problem‘, where detail along the axis of stereo disparity essentially has no measured disparity.

Try this: look at a horizontal object in front of you and blink to switch between your left and right eye. The object doesn’t look very different as you switch eyes, does it? Now hold out your index finger, pointing up, in front of you, and do the same experiment. You’ll see your finger moving dramatically left and right as you switch eyes.

Deriving stereo disparity from two perpendicular baselines affords the Pixel 4 much more accurate depth maps. In the example below, provided by Google, the Pixel 4 result is far more believable than the Pixel 3 result, which has parts of the upper green stem, and the horizontally-oriented green leaf at bottom right, accidentally blurred despite falling within the plane of focus.

The combination of two baselines, one short (split pixels) and one significantly longer (the two lenses), also has other benefits. The longer stereo baselines of dual-camera setups can run into the problem of occlusion: since the two perspectives are considerably different, one lens may see a background object that to the other lens is hidden behind a foreground object. The shorter 1mm disparity of the dual pixel sensor means it's less prone to errors due to occlusion.

On the other hand, the short disparity of the split pixels means that further away objects that are not quite at infinity appear the same to 'left-looking' and 'right-looking' (or up/down) photodiodes. The longer baseline of the dual cameras means that stereo disparity can be calculated for these further away objects, which allows the Pixel 4's portrait mode to better deal with distant subjects, or groups of people shot from further back, as you can see below.
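The tradeoff between the two baselines follows from the pinhole-stereo relation depth ≈ focal length × baseline / disparity. The numbers below are illustrative only: the focal length in pixels and the dual-camera baseline are my assumptions, not Google's published specs.

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Pinhole stereo: depth = f * B / d, with f and d in pixels, B and depth in meters."""
    return focal_px * baseline_m / disparity_px

FOCAL_PX = 3000.0  # illustrative focal length in pixels

# With the ~1 mm split-pixel baseline, one pixel of disparity already corresponds
# to a subject ~3 m away, so anything much farther looks essentially flat to it:
print(depth_from_disparity(FOCAL_PX, 0.001, 1.0))   # ~3 m

# An assumed ~10 mm dual-camera baseline still measures a pixel of disparity
# at ~30 m, which is what helps with distant subjects and group shots:
print(depth_from_disparity(FOCAL_PX, 0.010, 1.0))   # ~30 m
```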

There’s yet another benefit of the two separate methods for calculating stereo disparity: macro photography. If you’ve shot portrait mode on an iPhone, you’ve probably run into the error message ‘Move farther away’. That’s because the telephoto lens has a minimum focus distance of ~20cm. Meanwhile, the minimum focus distance of the main camera on the Pixel 4 is only 10cm. That means that for close-up photography, the Pixel 4 can simply use its split pixels and learning-based approach for believable results (though I’d note that iPhone 11’s wide angle portrait mode also allows you to get closer to subjects).

Google continues to keep a range of planes in perfect focus, which can sometimes lead to odd results where multiple people in a scene remain focused despite being at different depths. However, this approach avoids prematurely blurring parts of people that shouldn’t be blurred, a common problem with iPhones.

Oddly, portrait mode is unavailable with the zoomed-in lens, instead opting to use the same 1.5x crop from the main camera that the Pixel 3 used. This means images will have less detail compared to some competitors, and also means you don’t get the versatility of both wide-angle and telephoto portrait shots. And if there’s one thing you probably know about me, it’s that I love my wide angle portraits!

Pixel 4’s portrait mode continues to use a 1.5x crop from the main camera. This means that, like the Pixel 3, it will have considerably less detail than portrait modes from competitors like the iPhone 11 Pro that use the full-resolution image from wide or tele modules. Click to view at 100%.

Further improvements

There are a few more updates to note.

Learning-based AWB

The learning-based white balance that debuted in Night Sight is now the default auto white balance (AWB) algorithm in all camera modes on the Pixel 4. What is learning-based white balance? Google trained its traditional AWB algorithm to discriminate between poorly, and properly, white balanced images. The company did this by hand-correcting images captured using the traditional AWB algorithm, and then using these corrected images to train the algorithm to suggest appropriate color shifts to achieve a more neutral output.
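For contrast, the kind of hand-crafted heuristic that a learned AWB improves on is something like gray-world balancing, sketched below. This is explicitly not Google's method; it only shows what 'suggesting a color shift' means in practice: per-channel gains applied to a linear image.

```python
import numpy as np

def gray_world_awb(img):
    """Classical gray-world white balance on a linear RGB image (H x W x 3).

    Scales each channel so the average of the scene comes out neutral. A
    learned AWB instead predicts the correcting shift from training data,
    which is what lets it cope with dominant-color artificial light.
    """
    channel_means = img.reshape(-1, 3).mean(axis=0)
    gains = channel_means.mean() / channel_means  # per-channel gains toward neutral
    return np.clip(img * gains, 0.0, 1.0)
```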

Google tells us that the latest iteration of the algorithm is improved in a number of ways. A larger training data set has been used to yield better results in low light and adversarial lighting conditions. The new AWB algorithm is better at recognizing specific, common illuminants and adjusting for them, and also yields better results under artificial lights of one dominant color. We’ve been impressed with white balance results in Night Sight on the Pixel 3, and are glad to see it ported over to all camera modes.

Comparison: learning-based AWB (Pixel 3 Night Sight) vs. traditional AWB (Pixel 3).

New face detector

A new face detection algorithm based solely on machine learning is now used to detect, focus on, and expose for faces in the scene. The new face detector is more robust at identifying faces in challenging lighting conditions. This should help the Pixel 4 better focus on and expose for strongly backlit faces, for example. The Pixel 3 would often prioritize exposure for highlights and underexpose faces in backlit conditions.

Though tonemapping would brighten the face properly in post-processing, the shorter exposure would mean more noise in shadows and midtones, which after noise reduction could lead to smeared, blurry results. In the example below the Pixel 3 used an exposure time of 1/300s while the iPhone 11 yielded more detailed results due to its use of an exposure more appropriate for the subject (1/60s).

Along with the new face detector, the Pixel 4 will (finally) indicate the face it's focusing on in the 'viewfinder' as you compose. In the past, Pixel phones would simply show a circle in the center of the screen every time they refocused, which was a confusing experience that left users wondering whether the camera was in fact focusing on a face in the scene, or simply on the center. Indicating the face it's focusing on should allow Pixel 4 users to worry less, and feel less of a need to tap on a face in the scene if the camera is already indicating that it's focusing on it.

On previous Pixel phones, a circle focus indicator would pop up in the center when the camera refocused, leading to confusion. Is the camera focusing on the face, or the outstretched hand? On the Huawei P20, the camera indicates when it’s tracking a face. The Pixel 4 will have a similar visual indicator.

Semantic segmentation

This isn’t new, but in his keynote Marc mentioned ‘semantic segmentation’ which, like the iPhone, allows image processing to treat different portions of the scene differently. It’s been around for years in fact, allowing Pixel phones to brighten faces (‘synthetic fill flash’), or to better separate foregrounds and backgrounds in Portrait mode shots. I’d personally point out that Google takes a more conservative approach in its implementation: faces aren’t brightened or treated differently as much as they tend to be with the iPhone 11. The end result is a matter of personal taste.

Conclusion

We've covered a lot of ground here, including both old and new techniques Pixel phones use to achieve image quality unprecedented from such small, pocketable devices. But the questions for many readers are: (1) what is the best smartphone for photography I can buy, and (2) when should I consider using such a device instead of my dedicated camera?

We have much testing to do and many side-by-sides to come. But from our tests thus far and our recent iPhone 11 vs. Pixel 3 Night Sight article, one thing is clear: in most situations the Pixel cameras are capable of a level of image quality unsurpassed by any other smartphone when you compare images at the pixel (no pun intended) level.

But other devices are catching up, or exceeding Pixel phone capabilities. Huawei’s field-of-view fusion offers compelling image quality across multiple zoom ratios thanks to its fusion of image data from multiple lenses. iPhones offer a wide-angle portrait mode far more suited for the types of photography casual users engage in, with better image quality to boot than Pixel’s (cropped) Portrait mode.

The Pixel 4 takes an already great camera and refines it to achieve results closer to, and in some cases surpassing, traditional cameras and optics

And yet Google Pixel phones deliver some of the best image quality we've seen from a mobile device. No other phone can compete with its Raw results, since those Raws are the result of a burst of images stacked using Google's robust align-and-merge algorithm. Night Sight is now improved to allow for superior results with static scenes demanding long exposures. And Portrait mode is vastly improved thanks to dual baselines and machine learning, with fewer depth map errors, better ability to 'cut around' complex objects like pet fur or loose hair strands, and pleasing out-of-focus highlights thanks to 'DSLR-like bokeh'. AWB is improved, and a new learning-based face detector should improve focus and exposure of faces under challenging lighting.

Ultimately what all this means is that the Pixel 4 takes an already great camera in the Pixel 3, and refines it further to achieve results closer to, and in some cases surpassing, traditional cameras and optics. Stay tuned for more thorough tests once we get a unit in our hands.

Finally, have a watch of Marc Levoy’s keynote presentation yesterday by clicking the link below the video. And if you haven’t already, watch his lectures on digital photography or visit his course website from the digital photography class he taught while at Stanford.

Acknowledgements: I would like to thank Google, and particularly Marc Levoy and Isaac Reynolds, for the detailed interview and for providing many of the samples used in this article.

Footnotes:
1 The original paper on HDR+ by Hasinoff and Levoy claims HDR+ can handle displacements of up to 169 pixels within any 3MP single raw color channel image. That's 169 pixels of a 2000-pixel-wide image, which amounts to ~8.5%. Furthermore, tile-based alignment is performed on up to 16×16 pixel blocks, which for the 2000×1500 (3MP) raw color planes of a 4000×3000 12MP image amounts to ~12,000 effective tiles that can be individually aligned.

Thieves Steal Equipment Worth Over $100,000 From LA Studio, Caught on Clear CCTV


One Los Angeles studio suffered a break-in over the weekend, with over $100,000 worth of gear being taken in a time period of just 6 minutes. Unfortunately for the thieves, CCTV caught clear images of their faces, which the studio owner is now using to try and identify them.


NFL Photographer Knocked Down by Player Who Later Messages Her to Check She’s OK


Baltimore Ravens quarterback Lamar Jackson is being heaped with praise by fans online over the way he handled accidentally knocking a photographer to the ground. In a clip from the game, he is seen helping her up, and he later messaged her privately through her social pages to check she was OK.


The Zhiyun WEEBILL-S is a compact 3-axis gimbal for mirrorless and DSLR cameras

Zhiyun, a leading gimbal manufacturer, announced the WEEBILL-S 3-axis gimbal earlier this week. Designed for mainstream mirrorless and DSLR cameras plus lens combos, the new gimbal offers ultra-low latency image transmission in 1080p with a brand new TransMount Image Transmission Module while ViaTouch 2.0 allows your smartphone to function as a professional monitor and multi-functional remote controller.

The WEEBILL-S, the latest iteration of the line, has motors with 300% more torque along with a 50% increase in responsiveness. It's compatible with multiple camera/lens combos, including Sony's A7 III + FE 24-70mm F2.8 or the Canon 5D Mark IV + EF 24-70mm F2.8. A unique ergonomic sling mode lets operators easily switch between high- and low-angle shots using the TransMount quick setup kit. The 8th version of the Instune algorithm enables the gimbal to automatically recognize the payload weight and select the best motor strength for the best shooting accuracy.

The all-new image transmission module enables streaming at a maximum of 1080p / 30p with 100-meter range, featuring LUTs, pseudo coloring, focus peaking, and zebra adjustment for professional monitoring and livestream publishing. The TransMount image transmission module allows you to add up to three devices to the stabilizer – a smartphone, tablet, or professional monitor. Interchangeable batteries let you run the device for 14 hours straight, and you can charge your camera in real time, which comes in handy for day-long shoots. Other features include:

  • ViaTouch 2.0, which creates a seamless connection between smartphone and camera.
  • SmartFollow 2.0, which lets you select a point of interest from the ViaTouch 2.0 interface so the camera follows its movement with ultra-low latency for a cinematic result.
  • An all-new motion sensor control system, Sync Motion, which lets you control the stabilizer's direction with a smartphone at ultra-high responsiveness for an immersive filmmaking experience.
  • Support for electronic focus and mechanical focus/zoom control with a control wheel on the grip, for fast and accurate focus or zoom when shooting. Using the servo focus/zoom motor, users can control zoom and focus for a more professional filmmaking experience.

The WEEBILL-S is available to order starting at $439. The Zoom/Focus Pro package retails at $519 while $679 will get you the Image Transmission Pro package.

UK photo retailer Jessops is reportedly looking for administrators to help salvage the company

Jessops’ current online storefront

British photo retailer Jessops is looking for administrators to ‘help salvage the struggling High Street brand,’ according to BBC News.

Serial entrepreneur Peter Jones purchased Jessops from administrators back in 2013 in a joint venture with restructuring company Hilco Capital, after the photo retailer racked up £81 million in debt and closed more than 187 stores. At the time, Jones said in the below interview with BBC News that Jessops would reopen ’30-40′ of its stores with the intention of charging the same price in stores as it did online.

After not initially reaching Jones' £80 million revenue goal during his first year of ownership (2015), Jessops went on to show revenue of £80.3 million and £95 million in 2016 and 2017, respectively. However, recent trade conditions have negatively impacted revenue, and as a result the company is reportedly seeking a company voluntary arrangement (CVA) with landlords and lenders of the chain's 46 stores, leased under Jessops' retail property firm, JR Prop Limited. As explained by BBC News, a CVA 'is an insolvency process that allows a business to reach an agreement with its creditors to pay off all or part of its debts [over an agreed period of time] and is often used as an opportunity to renegotiate rents.'

Sky News has reported store closures and rent cuts are expected, but sources close to Jessops say Jones is still optimistic about the presence of its brick-and-mortar locations, according to BBC News.

Sources close to Jones have also told Sky News that ‘Mr Jones had decided that placing JR Prop into insolvency proceedings would provide the most effective means of streamlining Jessops’ operations to ensure their survival.’

Jessops was established by Frank Jessops in Leicester, United Kingdom in 1935. Currently, Jessops’ headquarters are located in Marlow, United Kingdom.

Thanks to Nolan and Eastwood, Large Format Cameras Have Forever Changed Movies

Large format cameras have changed the way we watch and make movies. But how are they changing the medium’s language?

Cinematographers have spent the last five years easing into large format cameras. They started on blockbuster movies like The Dark Knight and Rogue One, but have since branched out into more accessible and smaller projects using the same cameras.

Not since the rise of digital cinematography has the whole of cinema changed this much. Now, large format cameras are expanding the screen and our conversation about what we show and the cinematography behind movies and TV.

What are large-format cameras?

Large format cameras, like the Panavision Millennium DXL, Sony F65, and ARRI LF series, capture images with significantly more detail. They create a clearer picture and a shallower depth of field. That means using a 50mm lens on a 65mm format camera produces a field of view roughly equivalent to a 25mm lens on 35mm format.
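That equivalence is just the ratio of the two formats' widths. A quick check with approximate gate widths (exact sensor dimensions vary by camera) lands close to the figures in the paragraph above.

```python
def equivalent_focal(focal_mm, native_width_mm, reference_width_mm):
    """Focal length giving the same horizontal field of view on the reference format."""
    return focal_mm * reference_width_mm / native_width_mm

# Approximate image-area widths; actual dimensions vary by camera and gate.
WIDTH_65MM_FORMAT = 52.5   # 5-perf 65mm film; the ARRI ALEXA 65 sensor is ~54.1 mm
WIDTH_SUPER35 = 24.9

print(round(equivalent_focal(50, WIDTH_65MM_FORMAT, WIDTH_SUPER35), 1))  # ~23.7 mm, roughly the 25mm cited above
```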


How to Use the Ansel Adams Zone System in the Digital World


Our histogram shows 256 shades of gray. Besides pure black and pure white, Ansel Adams used only nine shades to manipulate the contrast in his famous landscape photos. His zone system can still be used in our modern digital photography.


Sarah Kerruish and Matt Maude Wield Magic in New Documentary

Concerning a little-known offshoot of Apple Computer, the new documentary General Magic chronicles the rise and fall of the same-named company which, in the late 1980s and early 1990s, was on the cutting edge of new technologies that are now commonplace 30 years on. Co-directed by Sarah Kerruish and Matt Maude, General Magic explores the personalities […]


Kevin Smith Breaks Down This Key Scene From ‘Jay and Silent Bob Reboot’

Writer-director Kevin Smith has been a staple of truth-telling and directing for more than 20 years. See him back in his indie role behind the camera and watch him break down a scene. Snoogans!

It seems like there are no new ideas in Hollywood. They’re rebooting everything. Even… Jay and Silent Bob?

Kevin Smith is one of the most controversial stoners and filmmakers out there. His directing style is laid back and so is the way he breaks down a scene.

Check out the video from Vanity Fair and let’s talk after the jump.

There's a meta-narrative here of a reboot movie within a reboot. One of Smith's best talents has always been casting. And his story about making Val Kilmer into Bluntman in this movie is so heartwarming.

Kilmer is one of our great actors going through a hard time. F**k cancer. But seeing him appear as the silent Bluntman was the best part of this video. His casting also shows how Smith uses his personality and likability to always cast up.
