Bit depth is about dynamic range, not the number of colors you get to capture

Shooting this image in 14-bit helped capture the full dynamic range of the scene. Most of the time, with most cameras, 12-bit is enough.

Raw bit depth is often discussed as if it improves image quality and as if more is always better, but that's not really the case. In fact, if your camera doesn't need the greater bit depth, you'll just end up using hard drive space to record noise.

In fairness, it does sound as if bit depth is about the subtlety of color you can capture. After all, a 12-bit Raw file can record each pixel's brightness with 4096 steps of subtlety, whereas a 14-bit one can capture tonal information with 16,384 levels of precision. But, as it turns out, that's not really what ends up mattering. Instead, bit depth is primarily about how much of your camera's captured dynamic range can be retained.
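The level counts above come straight from the bit depth: each extra bit doubles the number of discrete values available per pixel. A quick sketch (the bit depths chosen here are just illustrative):

```python
# Each additional bit doubles the number of discrete tonal levels per pixel.
for bits in (10, 12, 14, 16):
    levels = 2 ** bits
    print(f"{bits}-bit Raw: {levels:,} discrete levels per pixel")
# A 12-bit file gives 4,096 levels; a 14-bit file gives 16,384.
```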

Much of this comes down to one factor: unlike our perception of brightness, Raw files are linear, not logarithmic. Let me explain why this matters.

Half the values in your Raw file are devoted to the brightest stop of light you captured

The human visual system (which includes the brain's processing of the signals it gets from the eyes) interprets light in a non-linear manner: double the brightness of a light source by, say, turning on a second, identical light, and the perceptual difference isn't that things have become twice as bright. Similarly, we're much better at distinguishing subtle differences in midtones than vast differences in bright tones. This is part of how we're able to cope with the high dynamic range of the scenes we encounter.
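One common way to model this non-linearity is logarithmically: each doubling of linear light intensity corresponds to one photographic "stop," rather than a doubling of perceived brightness. A minimal sketch of that relationship (the log2 model is an assumption used for illustration, not a claim about the exact response of human vision):

```python
import math

# Doubling linear intensity adds one stop (log base 2), so 16x the light
# is only 4 stops brighter -- nothing like 16x the perceived brightness.
for intensity in (1, 2, 4, 8, 16):
    stops = math.log2(intensity)
    print(f"linear {intensity:>2}x light -> {stops:.0f} stop(s) above baseline")
```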

Digital sensors are different in this respect: double the light and you'll get double the number of electrons released, which results in double the value recorded in the Raw file.
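Because the recorded values are linear, each stop down from clipping occupies half as many values as the stop above it, which is exactly why half of a Raw file's values describe its brightest stop. A sketch of how a 12-bit file's 4096 values are shared out by stop:

```python
# For a linear 12-bit Raw file, count how many of the 4096 values fall
# within each stop below the clipping point.
bits = 12
max_val = 2 ** bits  # 4096 values: 0..4095
for stop in range(1, 6):
    hi = max_val // 2 ** (stop - 1)   # top of this stop (exclusive)
    lo = max_val // 2 ** stop         # bottom of this stop (inclusive)
    share = (hi - lo) / max_val
    print(f"stop {stop} below clipping: values {lo}..{hi - 1} "
          f"({hi - lo} values, {share:.0%} of the file)")
# The brightest stop alone spans values 2048..4095: half the file.
```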

From: DPReview