The only way to change the perspective of a shot is to change the position of the camera relative to the subject or scene. Put a 1.5x wider lens on a s35 camera and you have exactly the same angle of view as a Full Frame camera. It is an internet myth that Full Frame changes the perspective or the appearance of the image in a way that cannot be exactly replicated with other sensor or frame sizes. The only thing that changes perspective is how far you are from the subject. It’s one of those laws of physics and optics that can’t be broken. The only way to see more or less around an object is by changing your physical position.
The only thing that changing the focal length or sensor size alters is magnification, and you can change the magnification either by changing sensor size or focal length – the effect is exactly the same either way. So in terms of perspective, angle of view or field of view, an 18mm s35 setup will produce an identical image to a 27mm FF setup. The only difference may be in DoF, depending on the aperture: f4 on FF will provide the same DoF as f2.8 on s35. If both lenses are at f4 then the FF image will have a shallower DoF.
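The equivalence arithmetic above can be sketched in a few lines of Python (the function name and the 1.5x crop factor are my own illustration; exact crop factors vary slightly from camera to camera):

```python
def ff_equivalent(focal_mm, f_stop, crop=1.5):
    """Convert an s35 focal length and aperture to their Full Frame
    equivalents: same angle of view and same depth of field."""
    return focal_mm * crop, f_stop * crop

# An 18mm f/2.8 s35 setup matches a 27mm f/4.2 (roughly f/4) FF setup.
focal, aperture = ff_equivalent(18, 2.8)
print(focal, round(aperture, 1))  # 27.0 4.2
```

Multiply both the focal length and the f-stop by the crop factor and the two setups frame and render the scene the same way.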
Again, physics plays a part here: if you want that shallower DoF from a FF camera then the lens has to be the same aperture as the s35 lens. To do that the elements in the FF lens need to be bigger, gathering twice as much light so that the lens can spread the same amount of light across the twice-as-large surface area of the FF sensor. So generally you will pay more for a FF lens than for a s35 lens of a like-for-like aperture. Or you simply won’t be able to get an equivalent in FF because the optical design becomes too complex, too big, too heavy or too costly. This in particular is a big issue for parfocal zooms. At FF and larger imager sizes they can be fast or have a big zoom range, but to do both is very, very hard and typically requires some very exotic glass. You won’t see anything like the affordable super 35mm Fujinon MKs in full frame, certainly not at anywhere near the same price. This is why for decades 2/3″ sensors, and 16mm film before that, ruled the roost for TV news: lenses with big zoom ranges and large, fast apertures were relatively affordable.
Perhaps one of the commonest complaints I see today with larger sensors is “why can’t I find an affordable, fast, parfocal zoom with more than a 4x zoom range?”. Such lenses do exist; for s35 you have lenses like the $22K Canon CN7 17-120mm T2.9, which is pretty big and pretty heavy. For Full Frame the nearest equivalent is the more expensive $40K Fujinon Premista 28-100 T2.9, which is a really big lens weighing in at almost 4kg. But look at the numbers: both will give a very similar AoV on their respective sensors at the wide end, but the cheaper Canon has a greatly extended zoom range and will get a tighter shot than the Premista at the long end. Yes, the DoF will be shallower with the Premista, but you are paying almost double and have to deal with a significantly heavier lens with a reduced zoom ratio. So you may need both the $40K Premista 28-100 and the $40K Premista 80-250 to cover everything the Canon does (and a bit more). So as you can see, getting that extra-shallow DoF may be very costly. And it’s not so much about the sensor, but more about the lens.
The History of large formats:
It is worth considering that back in the 50’s and 60’s we had VistaVision, a horizontal 35mm format equivalent to 35mm FF, plus 65mm and a number of other larger-than-s35 formats, all in an effort to get better image quality.
VistaVision (the closest equivalent to 35mm Full Frame).
VistaVision didn’t last long (7 or 8 years) as better quality film stocks meant that similar quality could be obtained from regular s35mm film, and shooting VistaVision was difficult due to the very shallow DoF and the focus challenges that came with it, plus it was twice the cost of regular 35mm film. It did make a brief comeback in the 70’s for shooting special effects sequences where very high resolutions were needed. VistaVision was superseded by Cinemascope, which uses 2x anamorphic lenses and conventional vertical super 35mm film, and Cinemascope was subsequently largely replaced by 35mm Panavision (the two being virtually the same thing and often used interchangeably).
At around the same time there were various 65mm (with 70mm projection) formats including Super Panavision, Ultra Panavision and Todd-AO. These too struggled and very few films were made using 65mm film after the end of the 60’s. There was a brief resurgence in the 80’s, and again recently there have been a few films, but production difficulties and cost have meant they tend to be niche productions.
Historically there have been many attempts to establish mainstream larger-than-s35 formats. But by and large audiences couldn’t tell the difference, and even when they could they wouldn’t pay extra for it. Obviously today the cost implication is tiny compared to the extra cost of 65mm film or VistaVision. But the bottom line remains that normally the audience won’t actually see any difference, because in reality there isn’t one other than a marginal resolution increase. Yet it is harder to shoot FF than s35. Comparable lenses are more expensive, lens choices are more limited, and focus is more challenging at longer focal lengths or large apertures. If you get carried away with too large an aperture you can get miniaturisation and cardboarding effects if you are not careful (although these can occur with s35 too).
Can The Audience Tell – Does The Audience Care?
Cinema audiences have not been complaining that the DoF isn’t shallow enough, or that the resolution isn’t high enough (Arri’s success has proved that resolution is a minor image quality factor). But they are noticing focus issues, especially in 4K theaters.
So while FF and the other larger formats are here to stay, Full Frame is not the be-all and end-all. Many, many people believe that FF has some kind of magic that makes the images different to smaller formats because they “read it on the internet so it must be true”. Once they realise that actually it isn’t different, I’m quite sure many will return to s35, because that does seem to be the sweet spot where DoF and focus are manageable and IQ is plenty good enough. Only time will tell, but history suggests s35 isn’t going anywhere any time soon.
Today’s modern cameras give us the choice to shoot either FF or s35. Either can result in an identical image; it’s only a matter of aperture and focal length. So pick the one that you feel most comfortable with for your production. FF is nice, but it isn’t magic.
Almost all modern-day video and electronic stills cameras have the ability to change the brightness of the images they record. The most common way to achieve this is through the addition of gain, that is the amplification of the signal that comes from the sensor.
On older video cameras this amplification was expressed as dB (decibels) of gain. A brightness change of 6dB is the same as one stop of exposure or a doubling of the ISO rating. But you must understand that adding gain to raise the ISO rating of a camera is very different to actually changing the sensitivity of a camera.
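The dB-to-ISO relationship can be written as a tiny Python helper (the function name and example values are mine, purely for illustration):

```python
def gain_db_to_iso(base_iso, gain_db):
    """Each 6dB of gain is one stop of exposure,
    i.e. a doubling of the ISO rating."""
    return base_iso * 2 ** (gain_db / 6)

# Starting from a hypothetical base of ISO 800:
print(gain_db_to_iso(800, 6))   # 1600.0  (+6dB  = one stop)
print(gain_db_to_iso(800, 12))  # 3200.0  (+12dB = two stops)
```

So +12dB of gain quadruples the rating, exactly as two stops of exposure would.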
The problem with increasing the amplification or adding gain to the sensor output is that when you raise the gain you increase the level of the entire signal that comes from the sensor. So, as well as increasing the levels of the desirable parts of the image, making it brighter, the extra gain also increases the amplitude of the noise, making that brighter too.
Imagine you are listening to an FM radio. The signal starts to get a bit scratchy, so in order to hear the music better you turn up the volume (increasing the gain). The music will get louder, but so too will the scratchy noise, so you may still struggle to hear the music. Changing the ISO rating of an electronic camera by adding gain is little different. When you raise the gain the picture does get brighter, but the increase in noise means that the darkest things that can be seen by the camera remain hidden in the noise, which has also increased in amplitude.
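The radio analogy can be put into numbers with a toy Python sketch (the signal and noise values are arbitrary, chosen only to illustrate the point):

```python
def snr(signal, noise, gain=1.0):
    """Signal-to-noise ratio after applying gain to the sensor output.
    Gain multiplies signal and noise alike, so the ratio never improves."""
    return (signal * gain) / (noise * gain)

# The ratio is identical at 0dB (gain 1) and at +12dB (gain 4).
print(snr(10.0, 2.0))            # 5.0
print(snr(10.0, 2.0, gain=4.0))  # 5.0
```

Whatever gain you apply, the faintest detail stays exactly as buried in the noise as it was before.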
Another issue with adding gain to make the image brighter is that you will also normally reduce the dynamic range that you can record.
This is because amplification makes the entire signal bigger. So bright highlights that may be recordable within the recording range of the camera at 0dB or the native ISO may exceed the upper range of the recording format when even only a small amount of gain is added, limiting the high end.
At the same time the increased noise floor masks any additional shadow information so there is little if any increase in the shadow range.
Reducing the gain doesn’t really help either, as now the brightest parts of the image from the sensor are not amplified sufficiently to reach the camera’s full output. Very often the recordings from a camera with -3dB or -6dB of gain will never reach 100%.
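The highlight-clipping effect described above can be modelled with a short Python sketch (the 0-to-1 scale and the sample levels are my own simplification of a recording format’s range):

```python
def record(sensor_levels, gain, clip=1.0):
    """Apply gain to the sensor output and clip anything that exceeds
    the recording format's maximum level."""
    return [min(level * gain, clip) for level in sensor_levels]

levels = [0.1, 0.5, 0.9]    # shadow, midtone, highlight on a 0..1 scale
print(record(levels, 1.0))  # [0.1, 0.5, 0.9] - everything preserved
print(record(levels, 2.0))  # [0.2, 1.0, 1.0] - highlights clip together
```

With +6dB (2x) of gain the midtone and the highlight both hit the ceiling and become indistinguishable, while the lifted shadow gains no new detail because the noise floor rose with it.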
A camera with dual base ISOs works differently.
Instead of adding gain to increase the sensitivity of the camera, a camera with a dual base ISO sensor will operate the sensor in two different sensitivity modes. This allows you to shoot in the low sensitivity mode when you have plenty of light, avoiding the need to add lots of ND filtering when you want to obtain a shallow depth of field. Then when you are short of light you can switch the camera to its high sensitivity mode.
When done correctly, a dual ISO camera will have the same dynamic range and colour performance in both the high and low ISO modes and only a very small difference in noise between the two.
How dual sensitivity with no loss of dynamic range is achieved is often kept very secret by the camera and sensor manufacturers. Getting good, reliable and solid information is hard. Various patents describe different methods. Based on my own research this is a simplified description of how I believe Sony achieve two completely different sensitivity ranges on both the Venice and FX9 cameras.
The image below represents a single microscopic pixel from a CMOS video sensor. There will be millions of these on a modern sensor. Light from the camera lens passes first through a micro lens and colour filter at the top of the pixel structure. From there the light hits a part of the pixel called a photodiode. The photodiode converts the photons of light into electrons of electricity.
In order to measure the pixel output we have to store the electrons for the duration of the shutter period. The part of the pixel used to store the electrons is called the “image well” (in an electrical circuit diagram the image well would be represented as a capacitor, and is often simply the capacitance of the photodiode itself).
Then as more and more light hits the pixel, the photodiode produces more electrons. These pass into the image well and the signal increases. Once we reach the end of the shutter opening period the signal in the image well is read out, empty representing black and full representing very bright.
Consider what would happen if the image well, instead of being a single charge storage area was actually two charge storage areas and there is a way to select whether we use the combined image well storage areas or just one part of the image well.
When both areas are connected to the pixel the combined capacity is large. So it will take more electrons to fill it up, so more light is needed to produce the increased amount of electrons. This is the low sensitivity mode.
If part of the charge storage area is disconnected and all of the photodiodes output is directed into the remaining, now smaller storage area then it will fill up faster, producing a bigger signal more quickly. This is the high sensitivity mode.
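The two modes described above can be modelled with a toy charge-well simulation in Python (the well capacities and photon count are made-up numbers, purely to illustrate the principle, not real sensor specifications):

```python
def pixel_signal(photons, well_capacity):
    """Fraction of the image well filled by the end of the shutter
    period: 0.0 is black, 1.0 is a full (clipped) well."""
    return min(photons / well_capacity, 1.0)

LOW_SENS_WELL = 40000   # both storage areas connected (illustrative)
HIGH_SENS_WELL = 10000  # smaller storage area only (illustrative)

# The same amount of light produces a bigger signal in the high
# sensitivity mode because the smaller well fills up faster.
print(pixel_signal(5000, LOW_SENS_WELL))   # 0.125
print(pixel_signal(5000, HIGH_SENS_WELL))  # 0.5
```

The photodiode itself is unchanged; only the capacity it fills is different, which is why the two modes behave like two genuinely different sensitivities rather than added gain.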
What about noise?
In the low sensitivity mode, with the bigger storage area, any unwanted noise generated by the photodiode will be more diluted by the greater volume of electrons, so noise will be low. When the size of the storage area or image well is reduced, the noise from the photodiode will be less diluted, so the noise will be a little bit higher. But overall the noise will be much less than that which would be seen if a large amount of extra gain was added.
Note for the more technical amongst you: Strictly speaking the image well starts full. Electrons have a negative charge so as more electrons are added the signal in the image well is reduced until maximum brightness output is achieved when the image well is empty!!
As well as what I have illustrated above, there may be other things going on, such as changes to the amplifiers that boost the pixel’s output before it is passed to the converters that turn the pixel output from an analog signal into a digital one. But hopefully this will help explain why dual base ISO is very different to the conventional gain changes used to give electronic cameras a wide range of different ISO ratings.
On the Sony Venice and the PXW-FX9 there is only a very small difference between the noise levels when you switch from the low base ISO to the high one. This means that you can pick and choose between either base sensitivity level depending on the type of scene you are shooting without having to worry about the image becoming unusable due to noise.
NOTE: This article is my own work and was prepared without any input from Sony. I believe that the dual ISO process illustrated above is at the core of how Sony achieve two different base sensitivities on the Venice and FX9 cameras. However I cannot categorically guarantee this to be correct.
A firmware bug has been identified with the Sony AXS-CR1 card reader that can result in the corruption of the data on a card when performing concurrent data reads. To ensure this does not happen you should update the firmware of your AXS-CR1 immediately.
For more information please see the post linked below on the official Sony Cine website, where you will find instructions on how to perform the update and where to download the necessary update files.