One of the great features of the PXW-FX9 is the ability to connect a phone or tablet to the camera via WiFi so that you can view a near live feed from the camera (there’s about a 5 to 6 frame delay).
To do this you need to install the latest version of the free Sony Content Browser Mobile application on your phone. Then you would normally connect the phone to the camera's WiFi by placing the FX9 into Access Point mode and establishing the connection either via NFC, if your phone has it, or by manually connecting your phone's WiFi to the camera.
However, for many people this method does not provide a stable connection, with frequent drop-outs and disconnects. Fortunately there is another way to connect the camera and phone, and it seems much more stable.
First, put the camera's WiFi into “Station Mode” instead of “Access Point” mode. Then set up your phone to act as a WiFi hotspot. Now you can connect the camera to the phone by performing a network search on the camera. Once the camera finds the phone's WiFi hotspot, connect the camera to it.
Once the connection from the camera to the phone has been established, open Content Browser Mobile and it should find the FX9. If it doesn't find it straight away, swipe down with your finger to refresh the connection list. Then select the camera to connect to it.
Once connected this way you will have all the same options that you would have if you had connected the other way around (using Access Point mode), but the connection tends to be much, much more stable. In addition, you can now use the camera's FTP functions to upload files from the camera to remote servers via your phone's cellular data connection.
If you want to create a bigger network then consider buying one of the many small battery-powered WiFi routers or a dedicated 4G MiFi hotspot and connecting everything to that. Content Browser Mobile should be able to find any camera connected to the same network. Plus, if you use a WiFi router you can connect several phones to the same camera.
Explaining consciousness is one of the hardest problems in science and philosophy. Recent neuroscientific discoveries suggest that a solution could be within reach – but grasping it will mean rethinking some familiar ideas. Consciousness, I argue in a new paper, may be caused by the way the brain generates loops of energetic feedback, similar to the video feedback that “blossoms” when a video camera is pointed at its own output.
I first saw video feedback in the late 1980s and was instantly entranced. Someone plugged the signal from a clunky video camera into a TV and pointed the lens at the screen, creating a grainy spiralling tunnel. Then the camera was tilted slightly and the tunnel blossomed into a pulsating organic kaleidoscope.
Video feedback is a classic example of complex dynamical behaviour. It arises from the way energy circulating in the system interacts chaotically with the electronic components of the hardware.
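The essential mechanics of that loop are easy to sketch in code. The toy simulation below (my own illustration, not part of any demonstration described here; the `feedback_step` function name and the zoom, angle and gain values are arbitrary choices) repeatedly rotates, zooms and amplifies a frame, then feeds the result back in as the next input. This is the same recurrence that drives real video feedback: output becomes input, over and over.

```python
import numpy as np

def feedback_step(frame, zoom=0.9, angle=0.1, gain=1.05):
    """One pass of a simulated camera-pointed-at-screen loop:
    the previous frame is rotated, zoomed, amplified and clipped."""
    h, w = frame.shape
    # coordinate grid centred on the middle of the frame
    ys, xs = np.indices((h, w))
    yc, xc = ys - h / 2, xs - w / 2
    # inverse rotate-and-zoom: sample the old frame at transformed coordinates
    cos_a, sin_a = np.cos(angle), np.sin(angle)
    src_y = (cos_a * yc - sin_a * xc) / zoom + h / 2
    src_x = (sin_a * yc + cos_a * xc) / zoom + w / 2
    src_y = np.clip(src_y.astype(int), 0, h - 1)
    src_x = np.clip(src_x.astype(int), 0, w - 1)
    # "camera gain" amplifies the signal; the "screen" saturates at 1.0
    return np.clip(gain * frame[src_y, src_x], 0.0, 1.0)

# seed the loop with a single bright dot and let it run
frame = np.zeros((64, 64))
frame[32, 32] = 1.0
for _ in range(50):
    frame = feedback_step(frame)
```

Small changes to the zoom, angle or gain tip the system between fading away, saturating, and sustained spiralling patterns, which is exactly the sensitivity that makes the physical effect so chaotic and hypnotic.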
As an artist and VJ in the 1990s, I would often see this hypnotic effect in galleries and clubs. But it was a memorable if unnerving experience during an LSD-induced trip that got me thinking. I hallucinated almost identical imagery, only intensely saturated with colour. It struck me then there might be a connection between these recurring patterns and the operation of the mind.
Brains, information and energy
Fast forward 25 years and I’m a university professor still trying to understand how the mind works. Our knowledge of the relationship between the mind and brain has advanced hugely since the 1990s when a new wave of scientific research into consciousness took off. But a widely accepted scientific theory of consciousness remains elusive.
A popular idea is that consciousness arises from the brain's processing of information. I doubt this claim for several reasons. First, there is little agreement among scientists about exactly what information is. Second, when scientists refer to information they are often actually talking about the way energetic activity is organised in physical systems. Third, brain imaging techniques such as fMRI, PET and EEG don't detect information in the brain, but changes in energy distribution and consumption.
Brains, I argue, are not squishy digital computers – there is no information in a neuron. Brains are delicate organic instruments that turn energy from the world and the body into useful work that enables us to survive. Brains process energy, not information.
Recognising that brains are primarily energy processors is the first step to understanding how they support consciousness. The next is rethinking energy itself.
What is energy?
We are all familiar with energy but few of us worry about what it is. Even physicists tend not to. They treat it as an abstract value in equations describing physical processes, and that suffices. But when Aristotle coined the term energeia he was trying to grasp the actuality of the lived world, why things in nature work in the way they do (the word “energy” is rooted in the Greek for “work”). This actualised concept of energy is different from, though related to, the abstract concept of energy used in contemporary physics.
When we study what energy actually is, it turns out to be surprisingly simple: it’s a kind of difference. Kinetic energy is a difference due to change or motion, and potential energy is a difference due to position or tension. Much of the activity and variety in nature occurs because of these energetic differences and the related actions of forces and work. I call these actualised differences because they do actual work and cause real effects in the world, as distinct from abstract differences (like that between 1 and 0) which feature in mathematics and information theory. This conception of energy as actualised difference, I think, may be key to explaining consciousness.
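The standard textbook formulas (included here only as a familiar illustration, not from the paper) already express energy this way, as a difference measured relative to some reference state:

```latex
% Kinetic energy: a difference due to motion, relative to rest (v = 0)
E_k = \tfrac{1}{2} m v^2
% Potential energy: a difference due to position, relative to a reference height (h = 0)
E_p = m g h
```

In each case the quantity is only defined against a baseline, and it is the difference from that baseline that can do work and cause effects.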
The human brain consumes some 20% of the body’s total energy budget, despite accounting for only 2% of its mass. The brain is expensive to run. Most of the cost is incurred by neurons firing bursts of energetic difference in unthinkably complex patterns of synchrony and diversity across convoluted neural pathways.
What is special about the conscious brain, I propose, is that some of those pathways and energy flows are turned upon themselves, much like the signal from the camera in the case of video feedback. This causes a self-referential cascade of actualised differences to blossom with astronomical complexity, and it is this that we experience as consciousness. Video feedback, then, may be the nearest we have to visualising what conscious processing in the brain is like.
The neuroscientific evidence
The suggestion that consciousness depends on complex neural energy feedback is supported by neuroscientific evidence.
Researchers recently discovered a way to accurately index the amount of consciousness someone has. They fired magnetic pulses through the brains of healthy, anaesthetised and severely injured people, then measured the complexity of an EEG signal recording how the brains reacted. The complexity of the EEG signal predicted the person's level of consciousness: the more complex the signal, the more conscious the person was.
The researchers attributed the level of consciousness to the amount of information processing going on in each brain. But what was actually being measured in this study was the organisation of the neural energy flow (EEG measures differences of electrical energy). Therefore, the complexity of the energy flow in the brain tells us about the level of consciousness a person has.
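A toy sketch shows the flavour of such compression-based complexity measures. The version below is my own illustration, not the study's actual pipeline: it binarizes a signal at its median and counts distinct Lempel-Ziv phrases, so a repetitive signal scores low and an irregular one scores high. The function names and the median threshold are my choices.

```python
import numpy as np

def binarize(signal):
    """Threshold a signal at its median to get a binary string."""
    med = np.median(signal)
    return "".join('1' if s > med else '0' for s in signal)

def lz_phrase_count(bits):
    """Count distinct phrases in a simple Lempel-Ziv parse of a
    binary string: a crude proxy for how incompressible it is."""
    phrases, phrase = set(), ""
    for b in bits:
        phrase += b
        if phrase not in phrases:   # new phrase: store it, start afresh
            phrases.add(phrase)
            phrase = ""
    return len(phrases) + (1 if phrase else 0)  # count any trailing partial phrase

# a flat, repetitive "signal" scores lower than an irregular (Thue-Morse) one
regular = lz_phrase_count("0000000000000000")
irregular = lz_phrase_count("0110100110010110")
```

On this crude measure a constant string parses into very few phrases while an irregular one of the same length needs more, mirroring the idea that a more complex EEG response indicates a higher level of consciousness.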
Also relevant is evidence from studies of anaesthesia. No one knows exactly how anaesthetic agents annihilate consciousness, but recent theories suggest that compounds including propofol interfere with the brain's ability to sustain complex feedback loops in certain brain areas. Without these feedback loops, the functional integration between different brain regions breaks down, and with it the coherence of conscious awareness.
What this, and other neuroscientific work I cite in the paper, suggests is that consciousness depends on a complex organisation of energy flow in the brain, and in particular on what the biologist Gerald Edelman called “reentrant” signals. These are recursive feedback loops of neural activity that bind distant brain regions into a coherent functioning whole.
Explaining consciousness in scientific terms, or in any terms, is a notoriously hard problem. Some have worried it’s so hard we shouldn’t even try. But while not denying the difficulty, the task is made a bit easier, I suggest, if we begin by recognising what brains actually do.
The primary function of the brain is to manage the complex flows of energy that we rely on to thrive and survive. Instead of looking inside the brain for some undiscovered property, or “magic sauce”, to explain our mental life, we may need to look afresh at what we already know is there.
There are already a few set-up and staged video samples from the new Sony PXW-FX9 circulating on the web. These are great. But how will it perform, and what will the pictures look like, on an unscripted, unprepared shoot? How well will the autofocus work out in the street, by day and by night? How do the S-Cinetone gamma and colour in custom mode compare with S-Log3 and the s709 Venice LUT?
To answer these questions I took a pre-production FX9 into the nearby town of Windsor with a couple of cheap Sony E-Mount lenses. The lenses were the Sony 50mm f1.8 which costs around $350 USD and the 28-70mm f3.5-f5.6 zoom that costs about $400 USD and is often bundled as a kit lens with some of the A7 series cameras.
To find out how good the autofocus really is I decided to shoot entirely using autofocus with the AF set to face priority. The only shot in the video where AF was not used is the 120fps slow-mo shot of the swans at 0:53, as AF does not work at 120fps.
Within the video there are examples of both S-Cinetone and S-Log3 plus the s709 LUT. So that you know which is which, I have indicated this in the video. I needed to do this as the two cut together really well. There is no grading as such. The S-Cinetone content is exactly as it came from the camera. The CineEI S-Log3 material was shot at the indicated base ISO and EI; there was no exposure offset. In post-production all I did was add the s709 LUT. That's it, no other corrections.
The video was shot using the Full Frame 6K scan, recording to UHD XAVC-I.
For exposure I used the camera's built-in waveform display. When in CineEI mode I also used the Viewfinder Gamma Display Assist function, which gives the viewfinder the same look as the 709(800) LUT. What's great about this is that it works in all modes and at all frame rates. So even when I switched to the 2K Full Frame scan and 120fps, the look of the image in the viewfinder remained the same, and this allowed me to get a great exposure match between the slow-motion footage and the normal-speed footage.
There are some great examples of the way the autofocus works throughout the video. In particular, in the shot at 0:18 the face priority mode follows the first two girls who are walking towards the camera, then as they exit the frame it switches to the two ladies following behind (who don't like being filmed) without any hunting. I could not have done that any better myself. Another great example is at 1:11, where the focus tracks the couple walking towards the camera and, once they exit the shot, smoothly transitions to the background. One of the nice things about the AF system is that you can adjust the speed at which the camera refocuses, and in this case I had slowed it down a bit to give it a more “human” feel.
Even in low light the AF works superbly well. At 1:33 I started on the glass of the ornate arch above the railway station and panned down as two people walked towards me. The camera took this completely in its stride, doing a lovely job of shifting the focus from the arch to the two men. Again, I really don't think I could have done this any better myself.
Also, I am still really impressed by how little noise there is from this camera. Even in the high ISO mode the camera remains clean and the images look great. The low noise levels help the camera to resolve colour and details right down into the deepest shadows. Observe how at 2:06 you can clearly see the different hues of the red roses against the red leather of the car door, even though this is a very dark shot.
The reduction in noise and increase in real sensitivity also helps the super slow motion. Compared to an FS7 I think the 120fps footage from the FX9 looks much better. It seems to be less coarse and less grainy. There is still some aliasing which is unavoidable if you scan the sensor at a lower resolution, but it all looks much better controlled than similar material from an FS7.
And when there is more light the camera handles this very well too. At 1:07 you can see how well S-Cinetone deals with a very high contrast scene. There are lots of details in the shadows and, even though the highlights on the boats are clipped, the way the camera reaches the end of its range is very nice. It doesn't look nasty, it just looks very bright, which it was.
For me the big take-away from this simple shoot was just how easy it is to get good-looking images. There was no grading, no messing around trying to get nice skin tones. The focus is precise and it doesn't hunt. The low noise and high sensitivity mean you can get good-looking shots in most situations. I'm really looking forward to getting my own FX9, as it's going to make life just that little bit easier on many of my more adventurous shoots.