Technology


Cosmic alchemy: Colliding neutron stars show us how the universe creates gold


Illustration of hot, dense, expanding cloud of debris stripped from the neutron stars just before they collided.
NASA’s Goddard Space Flight Center/CI Lab, CC BY

Duncan Brown, Syracuse University and Edo Berger, Harvard University

For thousands of years, humans have searched for a way to turn matter into gold. Ancient alchemists considered this precious metal to be the highest form of matter. As human knowledge advanced, the mystical aspects of alchemy gave way to the sciences we know today. And yet, with all our advances in science and technology, the origin story of gold remained unknown. Until now.

Finally, scientists know how the universe makes gold. Using our most advanced telescopes and detectors, we’ve seen it created in the cosmic fire of two colliding neutron stars, first detected by LIGO through the gravitational waves they emitted.

The electromagnetic radiation captured from GW170817 now confirms that elements heavier than iron are synthesized in the aftermath of neutron star collisions.
Jennifer Johnson/SDSS, CC BY

Origins of our elements

Scientists have been able to piece together where many of the elements of the periodic table come from. The Big Bang created hydrogen, the lightest and most abundant element. As stars shine, they fuse hydrogen into heavier elements like carbon and oxygen, the elements of life. In their dying years, stars create the common metals – aluminum and iron – and blast them out into space in different types of supernova explosions.

For decades, scientists have theorized that these stellar explosions also explained the origin of the heaviest and most rare elements, like gold. But they were missing a piece of the story. It hinges on the object left behind by the death of a massive star: a neutron star. Neutron stars pack one-and-a-half times the mass of the sun into a ball only 10 miles across. A teaspoon of material from their surface would weigh 10 million tons.

Many stars in the universe are in binary systems – two stars bound by gravity and orbiting around each other (think Luke’s home planet’s suns in “Star Wars”). A pair of massive stars might eventually end their lives as a pair of neutron stars. The neutron stars orbit each other for hundreds of millions of years. But Einstein’s theory says that their dance cannot last forever: as they orbit, they lose energy to gravitational waves, so eventually they must collide.

Massive collision, detected multiple ways

On the morning of August 17, 2017, a ripple in space passed through our planet. It was detected by the LIGO and Virgo gravitational wave detectors. This cosmic disturbance came from a pair of city-sized neutron stars colliding at one third the speed of light. The energy of this collision surpassed that of any atom-smashing laboratory on Earth.

Hearing about the collision, astronomers around the world, including us, jumped into action. Telescopes large and small scanned the patch of sky where the gravitational waves came from. Twelve hours later, three telescopes caught sight of a brand new star – called a kilonova – in a galaxy called NGC 4993, about 130 million light years from Earth.

Astronomers had captured the light from the cosmic fire of the colliding neutron stars. It was time to point the world’s biggest and best telescopes toward the new star to see the visible and infrared light from the collision’s aftermath. In Chile, the Gemini telescope swung its large 26-foot mirror toward the kilonova. NASA steered the Hubble Space Telescope to the same location.

Movie of the visible light from the kilonova fading away in the galaxy NGC 4993, 130 million light years away from Earth.

Just like the embers of an intense campfire grow cold and dim, the afterglow of this cosmic fire quickly faded away. Within days the visible light faded away, leaving behind a warm infrared glow, which eventually disappeared as well.

Observing the universe forging gold

But in this fading light was encoded the answer to the age-old question of how gold is made.

Shine sunlight through a prism and you will see our sun’s spectrum – the colors of the rainbow spread from short wavelength blue light to long wavelength red light. This spectrum contains the fingerprints of the elements bound up and forged in the sun. Each element is marked by a unique pattern of lines in the spectrum, reflecting its distinct atomic structure.

The spectrum of the kilonova contained the fingerprints of the heaviest elements in the universe. Its light carried the telltale signature of the neutron-star material decaying into platinum, gold and other so-called “r-process” elements.

Visible and infrared spectrum of the kilonova. The broad peaks and valleys in the spectrum are the fingerprints of heavy element creation.
Matt Nicholl, CC BY

For the first time, humans had seen alchemy in action, the universe turning matter into gold. And not just a small amount: This one collision created at least 10 Earths’ worth of gold. You might be wearing some gold or platinum jewelry right now. Take a look at it. That metal was created in the atomic fire of a neutron star collision in our own galaxy billions of years ago – a collision just like the one seen on August 17.

And what of the gold produced in this collision? It will be blown out into the cosmos and mixed with dust and gas from its host galaxy. Perhaps one day it will form part of a new planet whose inhabitants will embark on a millennia-long quest to understand its origin.

Duncan Brown, Professor of Physics, Syracuse University and Edo Berger, Professor of Astronomy, Harvard University

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Why Can’t I Get Third Party BP-U Batteries any more?

In the last month or so it has become increasingly hard to find dealers or stores with 3rd party BP-U style batteries in stock.

After a lot of digging around and talking to dealers and battery manufacturers, it became apparent that Sony were asking the manufacturers of BP-U style batteries to stop making and selling them or face legal action. The reason given was that the batteries infringe Sony’s Intellectual Property rights.

Why Is This Happening Now?

It appears that the reason for this clampdown is that the design of some of these 3rd party batteries allowed the battery to be inserted into the camera in such a way that power flowed through the data pins instead of the power pins. This burns out the circuit boards in the camera and the camera will no longer work.

Users of these damaged cameras, unaware that the problem was caused by the battery, were sending them back to Sony for repair under warranty. I can imagine that many arguments would then have followed over who was to pay for these potentially very expensive repairs or camera replacements.

So it appears that to prevent further issues Sony is trying to stop potentially damaging batteries from being manufactured and sold.

This is good and bad. Of course no one wants to use a battery that could result in the need to replace a very expensive camera with a new one (and if you were not aware it was the battery you could also damage the replacement camera). But many of us, myself included, have been using 3rd party batteries so that we can have a D-Tap power connection on the battery to power other devices such as monitors.

Only Option – BP-U60T?

Sony don’t produce batteries with D-Tap outlets. They do make a battery with a 4 pin Hirose connector (the BP-U60T), but that’s not what we really want, and compared to the 3rd party batteries it’s very expensive and its capacity isn’t all that high.

Sony BP-U60T with 4 pin Hirose DC out.

So where do we go from here?

If you are going to continue to use 3rd party batteries, do be very careful about how you insert them and be warned that there is the potential for serious trouble. I don’t know how widespread the problem is.

We can hope that Sony will either start to produce batteries with a D-Tap output of their own, or work with a range of chosen 3rd party battery manufacturers to find a way to produce safe batteries with D-Tap outputs under licence.



What is Dual Base ISO and why is it important?

Almost all modern day video and electronic stills cameras have the ability to change the brightness of the images they record. The most common way to achieve this is by adding gain, amplifying the signal that comes from the sensor.

On older video cameras this amplification was expressed as dB (decibels) of gain. A brightness change of 6dB is the same as one stop of exposure or a doubling of the ISO rating. But you must understand that adding gain to raise the ISO rating of a camera is very different to actually changing the sensitivity of a camera.
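
To put numbers on that rule of thumb, here is a quick sketch (my own, not part of the original post) showing how a hypothetical base ISO of 800 scales as gain is added:

```python
# 6dB of gain = 1 stop = a doubling of the ISO rating.

def iso_from_gain(base_iso: float, gain_db: float) -> float:
    """Return the equivalent ISO rating after adding `gain_db` of gain."""
    stops = gain_db / 6.0
    return base_iso * (2.0 ** stops)

if __name__ == "__main__":
    base = 800  # hypothetical base ISO, purely for illustration
    for db in (0, 6, 12, 18):
        print(f"{db:>2}dB gain -> ISO {iso_from_gain(base, db):.0f}")
    # 0dB -> 800, 6dB -> 1600, 12dB -> 3200, 18dB -> 6400
```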

The problem with increasing the amplification or adding gain to the sensor output is that when you raise the gain you increase the level of the entire signal that comes from the sensor. So, as well as increasing the levels of the desirable parts of the image, making it brighter, the extra gain also increases the amplitude of the noise, making that brighter too.

Imagine you are listening to an FM radio. The signal starts to get a bit scratchy, so in order to hear the music better you turn up the volume (increasing the gain). The music will get louder, but so too will the scratchy noise, so you may still struggle to hear the music. Changing the ISO rating of an electronic camera by adding gain is little different. When you raise the gain the picture does get brighter, but the increase in noise means that the darkest things that can be seen by the camera remain hidden in the noise, which has also increased in amplitude.
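
Here is a small illustration of that point (my own sketch, with made-up numbers): the gain lifts the wanted signal and the noise by exactly the same factor, so the signal-to-noise ratio doesn’t improve.

```python
import numpy as np

rng = np.random.default_rng(0)
signal = np.full(10_000, 0.10)               # a dim, constant grey patch
noise = rng.normal(0.0, 0.02, signal.size)   # random sensor noise

for gain_db in (0, 6, 12):
    gain = 10 ** (gain_db / 20)              # convert dB to a voltage gain factor
    out = gain * (signal + noise)            # gain amplifies signal and noise alike
    print(f"{gain_db:>2}dB: level {out.mean():.3f}, SNR {out.mean() / out.std():.1f}")
# The level (brightness) rises with gain, but the SNR stays the same.
```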

Another issue with adding gain to make the image brighter is that you will also normally reduce the dynamic range that you can record.


This is because amplification makes the entire signal bigger. So bright highlights that may be recordable within the recording range of the camera at 0dB or the native ISO may exceed the upper range of the recording format when even a small amount of gain is added, limiting the high end.

Adding gain amplifies the brighter parts of the image so they can now exceed the camera’s recording range.

 

At the same time the increased noise floor masks any additional shadow information so there is little if any increase in the shadow range.

Reducing the gain doesn’t really help either, as now the brightest parts of the image from the sensor are not amplified sufficiently to reach the camera’s full output. Very often the recordings from a camera with -3dB or -6dB of gain will never reach 100%.

Negative gain may also reduce the camera’s dynamic range.

A camera with dual base ISOs works differently.

Instead of adding gain to increase the sensitivity of the camera, a camera with a dual base ISO sensor will operate the sensor in two different sensitivity modes. This allows you to shoot in the low sensitivity mode when you have plenty of light, avoiding the need to add lots of ND filters when you want to obtain a shallow depth of field. Then, when you are short of light, you can switch the camera to its high sensitivity mode.

When done correctly, a dual ISO camera will have the same dynamic range and colour performance in both the high and low ISO modes and only a very small difference in noise between the two.

How dual sensitivity with no loss of dynamic range is achieved is often kept very secret by the camera and sensor manufacturers. Getting good, reliable and solid information is hard. Various patents describe different methods. Based on my own research this is a simplified description of how I believe Sony achieve two completely different sensitivity ranges on both the Venice and FX9 cameras.

The image below represents a single microscopic pixel from a CMOS video sensor. There will be millions of these on a modern sensor. Light from the camera lens passes first through a micro lens and colour filter at the top of the pixel structure. From there the light hits a part of the pixel called a photodiode. The photodiode converts photons of light into electrons of electrical charge.

Layout of a sensor pixel including the image well.

In order to measure the pixel output we have to store the electrons for the duration of the shutter period. The part of the pixel used to store the electrons is called the “image well” (in an electrical circuit diagram the image well would be represented as a capacitor, and is often simply the capacitance of the photodiode itself).

The pixel’s image well starts to fill up and the signal output level increases.

Then as more and more light hits the pixel, the photodiode produces more electrons. These pass into the image well and the signal increases. Once we reach the end of the shutter opening period the signal in the image well is read out, empty representing black and full representing very bright.


Consider what would happen if the image well, instead of being a single charge storage area, was actually two charge storage areas, with a way to select whether we use the combined storage areas or just one part of the image well.

Dual ISO pixel where the size of the image well can be altered.

When both areas are connected to the pixel the combined capacity is large, so more electrons, and therefore more light, are needed to fill it up. This is the low sensitivity mode.

If part of the charge storage area is disconnected and all of the photodiode’s output is directed into the remaining, now smaller storage area, then it will fill up faster, producing a bigger signal more quickly. This is the high sensitivity mode.

What about noise?

In the low sensitivity mode, with the bigger storage area, any unwanted noise generated by the photodiode will be more diluted by the greater volume of electrons, so noise will be low. When the size of the storage area or image well is reduced, the noise from the photodiode will be less diluted, so the noise will be a little bit higher. But overall the noise will be much less than what would be seen if a large amount of extra gain was added.
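
To make the idea concrete, here is a very simplified sketch of the model described above. The numbers (well capacities, quantum efficiency, exposure) are entirely hypothetical and are not Sony specifications; the point is simply that a smaller charge storage area reaches full output with less light.

```python
# Hypothetical figures, purely for illustration of the simplified model above.
FULL_WELL_LOW_ISO = 60_000    # capacity in electrons with both storage areas connected
FULL_WELL_HIGH_ISO = 15_000   # capacity with the extra storage area disconnected

def output_level(photons: float, quantum_efficiency: float, full_well: float) -> float:
    """Fraction of full output (0 = black, 1 = clipped) for a given exposure."""
    electrons = photons * quantum_efficiency
    return min(electrons / full_well, 1.0)

exposure = 12_000  # photons reaching the pixel during the shutter period
qe = 0.6           # assumed fraction of photons converted to electrons

print("Low base ISO :", output_level(exposure, qe, FULL_WELL_LOW_ISO))   # 0.12, a dark tone
print("High base ISO:", output_level(exposure, qe, FULL_WELL_HIGH_ISO))  # 0.48, four times brighter
```

The same exposure produces a four times bigger output simply because the well is a quarter of the size, with no extra amplification involved.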

Note for the more technical amongst you: Strictly speaking the image well starts full. Electrons have a negative charge so as more electrons are added the signal in the image well is reduced until maximum brightness output is achieved when the image well is empty!!

As well as what I have illustrated above, there may be other things going on, such as changes to the amplifiers that boost the pixel’s output before it is passed to the converters that turn the pixel output from an analog signal into a digital one. But hopefully this will help explain why dual base ISO is very different to the conventional gain changes used to give electronic cameras a wide range of different ISO ratings.

On the Sony Venice and the PXW-FX9 there is only a very small difference between the noise levels when you switch from the low base ISO to the high one. This means that you can pick and choose between either base sensitivity level depending on the type of scene you are shooting without having to worry about the image becoming unusable due to noise.

NOTE: This article is my own work and was prepared without any input from Sony. I believe that the dual ISO process illustrated above is at the core of how Sony achieve two different base sensitivities on the Venice and FX9 cameras. However, I cannot categorically guarantee this to be correct.



How we identified brain patterns of consciousness


Brain connections have been linked to consciousness.
whitehoune/Shutterstock

Davinia Fernández-Espejo, University of Birmingham

Humans have learned to travel through space, eradicate diseases and understand nature at the breathtakingly tiny level of fundamental particles. Yet we have no idea how consciousness – our ability to experience and learn about the world in this way and report it to others – arises in the brain.

In fact, while scientists have been preoccupied with understanding consciousness for centuries, it remains one of the most important unanswered questions of modern neuroscience. Now our new study, published in Science Advances, sheds light on the mystery by uncovering networks in the brain that are at work when we are conscious.

It’s not just a philosophical question. Determining whether a patient is “aware” after suffering a severe brain injury is a huge challenge both for doctors and families who need to make decisions about care. Modern brain imaging techniques are starting to lift this uncertainty, giving us unprecedented insights into human consciousness.

For example, we know that complex brain areas including the prefrontal cortex or the precuneus, which are responsible for a range of higher cognitive functions, are typically involved in conscious thought. However, large brain areas do many things. We therefore wanted to find out how consciousness is represented in the brain on the level of specific networks.

The reason it is so difficult to study conscious experiences is that they are entirely internal and cannot be accessed by others. For example, we can both be looking at the same picture on our screens, but I have no way to tell whether my experience of seeing that picture is similar to yours, unless you tell me about it. Only conscious individuals can have subjective experiences and, therefore, the most direct way to assess whether somebody is conscious is to ask them to tell us about them.






But what would happen if you lost your ability to speak? In that case, I could still ask you some questions and you could perhaps sign your responses, for example by nodding your head or moving your hand. Of course, the information I would obtain this way would not be as rich, but it would still be enough for me to know that you do indeed have experiences. If you were not able to produce any responses, though, I would not have a way to tell whether you’re conscious and would probably assume you’re not.

Scanning for networks

Our new study, the product of a collaboration across seven countries, has identified brain signatures that can indicate consciousness without relying on self-report or the need to ask patients to engage in a particular task, and can differentiate between conscious and unconscious patients after brain injury.

When the brain gets severely damaged, for example in a serious traffic accident, people can end up in a coma. This is a state in which you lose your ability to be awake and aware of your surroundings and need mechanical support to breathe. It typically doesn’t last more than a few days. After that, patients sometimes wake up but don’t show any evidence of having any awareness of themselves or the world around them – this is known as a “vegetative state”. Another possibility is that they show evidence only of a very minimal awareness – referred to as a minimally conscious state. For most patients, this means that their brain still perceives things but they don’t experience them. However, a small percentage of these patients are indeed conscious but simply unable to produce any behavioural responses.

fMRI scanner.
wikipedia

We used a technique known as functional magnetic resonance imaging (fMRI), which allows us to measure the activity of the brain and the way some regions “communicate” with others. Specifically, when a brain region is more active, it consumes more oxygen and needs a higher blood supply to meet its demands. We can detect these changes even when the participants are at rest, and measure how activity varies across regions to create patterns of connectivity across the brain.
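
As a rough illustration of the general idea (this is my own toy example, not the study’s actual analysis pipeline), connectivity patterns of this kind are often built by correlating each region’s activity time course with every other region’s:

```python
import numpy as np

rng = np.random.default_rng(1)
n_regions, n_timepoints = 6, 200

# Hypothetical resting-state BOLD time courses, one row per brain region.
bold = rng.standard_normal((n_regions, n_timepoints))
bold[1] += 0.8 * bold[0]          # make regions 0 and 1 "communicate" for the demo

connectivity = np.corrcoef(bold)  # region-by-region correlation matrix
print(np.round(connectivity, 2))  # a clear positive entry appears at (0, 1)
```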

We used the method on 53 patients in a vegetative state, 59 people in a minimally conscious state and 47 healthy participants. They came from hospitals in Paris, Liège, New York, London, and Ontario. Patients from Paris, Liège, and New York were diagnosed through standardised behavioural assessments, such as being asked to move a hand or blink an eye. In contrast, patients from London were assessed with other advanced brain imaging techniques that required the patient to modulate their brain activity to produce neural responses instead of external physical ones – such as imagining moving one’s hand instead of actually moving it.

In consciousness and unconsciousness, our brains have different modes to self-organise as time goes by. When we are conscious, brain regions communicate with a rich temperament, showing both positive and negative connections.
Credit: E. Tagliazucchi & A. Demertzi

We found two main patterns of communication across regions. One simply reflected physical connections of the brain, such as communication only between pairs of regions that have a direct physical link between them. This was seen in patients with virtually no conscious experience. The other represented very complex brain-wide dynamic interactions across a set of 42 brain regions that belong to six brain networks with important roles in cognition (see image above). This complex pattern was almost only present in people with some level of consciousness.

Importantly, this complex pattern disappeared when patients were under deep anaesthesia, confirming that our methods were indeed sensitive to the patients’ level of consciousness and not their general brain damage or external responsiveness.

Research like this has the potential to lead to an understanding of how objective biomarkers can play a crucial role in medical decision making. In the future it might be possible to develop ways to externally modulate these conscious signatures and restore some degree of awareness or responsiveness in patients who have lost them, for example by using non-invasive brain stimulation techniques such as transcranial electrical stimulation. Indeed, in my research group at the University of Birmingham, we are starting to explore this avenue.

Excitingly, the research also takes us a step closer to understanding how consciousness arises in the brain. With more data on the neural signatures of consciousness in people experiencing various altered states of consciousness – ranging from taking psychedelics to experiencing lucid dreams – we may one day crack the puzzle.

Davinia Fernández-Espejo, Senior Lecturer, School of Psychology and Centre for Human Brain Health, University of Birmingham

This article is republished from The Conversation under a Creative Commons license. Read the original article.


We’ve discovered the world’s largest drum – and it’s in space


The Earth’s magnetosphere bangs like a drum.
E. Masongsong/UCLA, M. Archer/QMUL, H. Hietala/UTU

Martin Archer, Queen Mary University of London

Universities in the US have long wrangled over who owns the world’s largest drum. Unsubstantiated claims to the title have included the “Purdue Big Bass Drum” and “Big Bertha”, which interestingly was named after the German World War I cannon and ended up becoming radioactive during the Manhattan Project.

Unfortunately for the Americans, however, the Guinness Book of World Records says a traditional Korean “CheonGo” drum holds the true title. This is over 5.5 metres in diameter, some six metres tall and weighs over seven tonnes. But my latest scientific results, just published in Nature Communications, have blown all of the contenders away. That’s because the world’s largest drum is actually several tens of times larger than our planet – and it exists in space.

You may think this is nonsense. But the magnetic field (magnetosphere) that surrounds the Earth, protecting us by diverting the solar wind around the planet, is a gigantic and complicated musical instrument. We’ve known for 50 years or so that weak magnetic types of sound waves can bounce around and resonate within this environment, forming well defined notes in exactly the same way wind and stringed instruments do. But these notes form at frequencies tens of thousands of times lower than we can hear with our ears. And this drum-like instrument within our magnetosphere has long eluded us – until now.

Massive magnetic membrane

The key feature of a drum is its surface – technically referred to as a membrane (drums are also known as membranophones). When you hit this surface, ripples can spread across it and get reflected back at the fixed edges. The original and reflected waves can interfere by reinforcing or cancelling each other out. This leads to “standing wave patterns”, in which specific points appear to be standing still while others vibrate back and forth. The specific patterns and their associated frequencies are determined entirely by the shape of the drum’s surface. In fact, the question “Can one hear the shape of a drum?” has intrigued mathematicians from the 1960s until today.
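
For readers who like to see the maths, here is a small sketch (my own, not from the paper) of the superposition described above: an outgoing wave plus its reflection adds up to a standing wave whose nodes stay fixed while the rest of the pattern simply grows and shrinks.

```python
import numpy as np

k, omega = 2 * np.pi, 2 * np.pi          # wavenumber and angular frequency
x = np.linspace(0.0, 1.0, 11)            # positions across the membrane

for t in (0.0, 0.25, 0.5):
    outgoing = np.sin(k * x - omega * t)
    reflected = np.sin(k * x + omega * t)
    standing = outgoing + reflected       # equals 2*sin(kx)*cos(wt)
    print(f"t={t:.2f}:", np.round(standing, 2))
# The zeros of the pattern (the nodes) sit at the same positions at every time
# step, while the points in between only change in amplitude.
```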

The outer boundary of Earth’s magnetosphere, known as the magnetopause, behaves very much like an elastic membrane. It grows or shrinks depending on the varying strength of the solar wind, and these changes often trigger ripples or surface waves to spread out across the boundary. While scientists have often focused on how these waves travel down the sides of the magnetosphere, they should also travel towards the magnetic poles.

Physicists often take complicated problems and simplify them considerably to gain insight. This approach helped theorists 45 years ago first demonstrate that these surface waves might indeed get reflected back, making the magnetosphere vibrate just like a drum. But it wasn’t clear whether removing some of the simplifications in the theory might stop the drum from being possible.

It also turned out to be very difficult to find compelling observational evidence for this theory from satellite data. In space physics, unlike say astronomy, we’re usually dealing with the completely invisible. We can’t just take a picture of what’s going on everywhere; we have to send satellites out and measure it. But that means we only know what’s happening in the locations where there are satellites. The conundrum is often whether the satellites are in the right place at the right time to find what you’re looking for.

Over the past few years, my colleagues and I have been further developing the theory of this magnetic drum to give us testable signatures to search for in our data. We were able to come up with some strict criteria that we thought could provide evidence for these oscillations. It basically meant that we needed at least four satellites all in a row near the magnetopause.

Thankfully, NASA’s THEMIS mission gave us not four but five satellites to play with. All we had to do was find the right driving event, equivalent to the drumstick hitting the drum, and measure how the surface moved in response and what sounds it created. The event in question was a jet of high-speed particles impulsively slamming into the magnetopause. Once we had that, everything fell into place almost perfectly. We have even recreated what the drum actually sounds like (see the video above).

This research really goes to show how tricky science can be in reality. Something which sounds relatively straightforward has taken us 45 years to demonstrate. And this journey is far from over: there’s plenty more work to do in order to find out how often these drum-like vibrations occur (both here at Earth and potentially at other planets, too) and what their consequences for our space environment are.

This will ultimately help us unravel what kind of rhythm the magnetosphere produces over time. As a former DJ, I can’t wait – I love a good beat.

Martin Archer, Space Plasma Physicist, Queen Mary University of London

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Altered States: 2D digital displays become 3D reality – Digital Technology Lets You Touch Great Art

It’s a natural impulse to reach out and touch an original artwork, perhaps to feel the strong brushstrokes in van Gogh’s Starry Night or to trace the shape of a compelling sculpture. You can’t though, and for good reason: a multitude of individual touches would soon damage the work, so museums sensibly use “Please don’t touch” signs, velvet ropes and alert guards to keep viewers at a distance. It helps that those same museums have put their collections online so you can appreciate great art right on your digital device. However, even at high resolution, images on flat screens do not clearly show surface texture or convey volumes in space. But now researchers in art and technology are finding ways for viewers to experience the texture of artworks in 2D and the solidity of those in 3D.

The missing third dimension is significant even for flat works, which typically show the texture of the background paper or canvas, or of the pigment. Some nominally two-dimensional works are inherently textured, such as Canadian artist Brian Jungen’s People’s Flag (2006), an immense vertical hanging piece made of red textiles. Helen Fielding of the University of Western Ontario has perceptively noted how vision and touch intertwine in this work:

As my eyes run across the texture of the flag, I can almost feel the textures of the materials I see; my hands know the softness of wool, the smoothness of vinyl. Though touching the work is prohibited…my hands are drawn to the fabrics, subtly reversing the priority of vision over touch…

Textural features like these are a material record of the artist’s effort that enhances a viewer’s interaction with the work. Such flat but textured works are art in “2.5D” because they extend only slightly into the third dimension. Now artworks shown in 2.5D and 3D on flat screens and as solid 3D models are giving new pleasures and insights to art lovers, curators, and scholars. As exact copies, these replicas can also help conserve fragile works while raising questions about the meaning of original art.

One approach, developed at the Swiss Federal Institute of Technology (EPFL) in Lausanne, creates a digital 2.5D image of an artwork by manipulating its lighting. Near sunset, when the sun’s rays enter a scene at an angle, small surface elevations cast long shadows that make them stand out. Similarly, the EPFL process shines a simulated light source onto a digital image. As the source is moved, it produces highlights and shadows that enhance surface details to produce a quasi-3D appearance.

This approach has links to CGI, computer-generated imagery, the technology that creates imaginary scenes and characters in science fiction and fantasy films. One powerful CGI tool is an algorithm called the bidirectional scattering distribution function (BSDF). For every point in an imagined scene, the BSDF shows how incoming light traveling in any direction would be reflected or transmitted to produce the outgoing ray seen by a viewer. The result fully describes the scene for any location of the light source.
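
As a very rough illustration of what such a function does (my own sketch, not EPFL’s or Artmyn’s code), here is the simplest possible case: a perfectly matte Lambertian surface lit by a single movable light. Moving the virtual light changes the shading, which is what makes surface relief visible in a relightable image.

```python
import math

def lambertian_brdf(albedo: float) -> float:
    """A matte surface reflects equally in all directions: f = albedo / pi."""
    return albedo / math.pi

def reflected_radiance(albedo: float, light_intensity: float, incidence_deg: float) -> float:
    """Outgoing radiance for one directional light hitting the surface point."""
    cos_theta = max(math.cos(math.radians(incidence_deg)), 0.0)
    return lambertian_brdf(albedo) * light_intensity * cos_theta

# Tilting the light away from the surface normal darkens the point.
for angle in (0, 45, 80):
    print(f"light at {angle:>2} deg -> radiance {reflected_radiance(0.5, 1.0, angle):.3f}")
```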

In films, the BSDF is obtained from optical theory and the properties of the imaginary scene. The EPFL group, however, generated it from real art. In 2014, they illuminated a pane of stained glass with light from different directions and recorded the results with a high-resolution camera, creating a BSDF and showing that the method works for nearly planar items. This approach has been commercialized by Artmyn, a Swiss company co-founded by Loïc Baboulaz, who led the EPFL team. Artmyn makes 2.5D digital images of artworks by lighting them with LEDs at different visible wavelengths to provide color fidelity, and at infrared and ultraviolet wavelengths to further probe the surface. The result is a BSDF with up to a terabyte of data.

As an illustration, Artmyn has worked with Sotheby’s auction house to digitize two Marc Chagall works: Le Printemps (1975, oil on canvas), a village scene with a couple embracing, and Dans L’Atelier (1980, tempera on board), an artist’s studio. The Artmyn software lets a viewer zoom from the full artwork down to the fine scale of the weave of the canvas, while moving the lighting to display blobs, islands and layers of pigment. This reveals how Chagall achieves his effects and clearly illustrates the difference between oils and tempera as artistic media. Currently in process for similar digitization, Baboulaz told me, are a Leonardo da Vinci painting and a drawing, in recognition of the 500th anniversary of his death this year.

Artmyn has also digitized cultural artifacts such as a Sumerian clay tablet circa 2,000 BCE covered in cuneiform script; signatures and letters from important figures in the American Revolution; and a digital milestone, the original Apple-1 computer motherboard. These 2.5D images display signs of wear and of their creator’s presence that hugely enhance a viewer’s visceral appreciation of the real objects and their history.

For the next step, creating full 3D representations and physical replicas, the necessary data must be obtained without touching the original. One approach is LIDAR (light detection and ranging), where a laser beam is scanned over the object and reflected back to a sensor. The distance from the laser to each point on the object’s surface is found from the speed of light and its travel time, giving a map of the surface topography. LIDAR is most suitable for big artifacts such as a building façade at a coarse resolution of millimeters. Other approaches yield finer detail. In the triangulation method, for instance, a laser puts a dot of light on the object while a nearby camera records the dot’s location, giving data accurate to within 100 micrometers (0.1 millimeter). Copyists typically combine scanning methods to obtain the best surface replication and color rendition.
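
A small sketch of the time-of-flight principle behind LIDAR (my own illustration; real scanners add calibration, scanning optics and noise handling):

```python
C = 299_792_458.0  # speed of light in m/s

def distance_from_travel_time(round_trip_seconds: float) -> float:
    """Distance to the surface: the laser pulse covers the range twice (out and back)."""
    return C * round_trip_seconds / 2.0

# A pulse that returns after 20 nanoseconds has hit a surface about 3 m away.
print(f"{distance_from_travel_time(20e-9):.3f} m")   # ~2.998 m
```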

One big 3D copying effort is underway at the Smithsonian Institution, whose 19 museums preserve 155 million cultural and historic artifacts and artworks. Since 2013, the Smithsonian has put over 100 of these online as interactive 3D displays that can be viewed from different angles, and as data for 3D printers so people can make their own copies. The objects, chosen for popularity and diversity, include the original 1903 Wright Brothers flyer; a highly decorated 2nd century BCE Chinese incense burner; costume boots from the Broadway musical The Wiz from 1975; a mask of Abraham Lincoln’s face from shortly before his assassination in 1865; and for the 50th anniversary of the Apollo 11 moon landing, astronaut Neil Armstrong’s spacesuit. Recently added is a small 3D version of a full-sized dinosaur skeleton display at the National Museum of Natural History showing a T-rex attacking a triceratops, for which hundreds of bones were scanned by LIDAR and other methods.

A different goal animates the 3D art and technology studio Factum Arte in Madrid, Spain. Founded by British artist Adam Lowe in 2001, Factum Arte protects cultural artifacts by copying them, using its own high-resolution 3D scanning, printing and fabrication techniques.

Museums already use copies to preserve sensitive artworks on paper that need long recovery times in darkness and low humidity between showings. During these rests, the museum displays instead high-quality reproductions (and informs patrons that they are doing so). In a recent interview entitled “Datareality,” Adam Lowe expressed his similar belief that an artistically valid copy can provide a meaningful viewing experience while preserving a fragile original. One of his current projects is to replicate the tombs of the pharaohs Tutankhamun (King Tut) and Seti I, and queen Nefertari, in the Egyptian Valley of the Kings. The tombs were sealed by their builders, but once opened, they are deteriorating due to the throngs of visitors. As Lowe recently explained, “by going to see something that was designed to last for eternity, but never to be visited, you’re contributing to its destruction.”

The copies, approved by Egypt’s Supreme Council of Antiquities, will give visitors alternate sites to enter and view. At a resolution of 0.1 millimeter, the copies provide exact reproductions of the intricate colored images and text adorning thousands of square meters in the tombs. The first copy, King Tut’s burial chamber, was opened to the public in 2014, and in 2018, Factum Arte displayed its copied “Hall of Beauties” from the tomb of Seti I.

Earlier, Factum Arte had copied the huge Paolo Veronese oil on canvas The Wedding Feast at Cana (1563, 6.8 meters x 9.9 meters), which shows the biblical story where Jesus changes water into wine. The original was plundered from its church in Venice by Napoleon’s troops in 1797 and now hangs in the Louvre. The full-size copy, however, commissioned by the Louvre and an Italian foundation, was hung back at the original church site in 2007.

Factum Arte’s efforts highlight the questions that arise as exact physical copies of original art become available. Museums, after all, trade in authenticity. They offer viewers the chance to stand in the presence of a work that once felt the actual hands of its creator. But if the copy is indistinguishable from the work, does that dispel what the German cultural critic Walter Benjamin calls the “aura” of the original? In his influential 1935 essay The Work of Art in the Age of Mechanical Reproduction, he asserted that a copy lacks this aura:

In even the most perfect reproduction, one thing is lacking: the here and now of the work of art – its unique existence in a particular place. It is this unique existence – and nothing else – that bears the mark of the history to which the work has been subject.

The Factum Arte reproductions show that “original vs copy” is more nuanced than Benjamin indicates. The Egyptian authorities will charge a higher fee to enter the original tombs and a lower one for the copies, giving visitors the chance to feel the experience without causing damage. Surely this helps preserve a “unique existence in a particular place” for the original work. And for the repatriated Wedding at Cana, Lowe tellingly points out that a copy can bring its own authenticity of history and place:

Many people started to question about whether the experience of seeing [the copy] in its correct setting, with the correct light, in dialogue with this building that it was painted for, is actually more authentic than the experience of seeing the original in the Louvre.

We are only beginning to grasp what it means to have near-perfect copies of artworks, far beyond what Walter Benjamin could have imagined. One lesson is that such a copy can enhance an original rather than diminish it, by preserving it, and by recovering or extending its meaning.

Copying art by technical means has often been an unpopular idea. Two centuries ago, the English artist William Blake, known for his unique personal vision, expressed his dislike of mechanical reproduction such as imposing a grid to copy an artwork square by square. Current technology can also often stand rightfully accused of replacing the human and the intuitive with the robotic and the soulless. But properly used, today’s high-tech replications show that technology can also enlarge the power and beauty of an innately human impulse, the need to make art.


Can You Shoot Anamorphic with the PXW-FX9?

The simple answer as to whether you can shoot anamorphic on the FX9 is no, you can’t. The FX9, certainly to start with, will not have an anamorphic mode, and it’s unknown whether it ever will. I certainly wouldn’t count on it ever getting one (but who knows, perhaps if we keep asking for it we will get it).

But just because a camera doesn’t have a dedicated anamorphic mode it doesn’t mean you can’t shoot anamorphic. The main thing you won’t have is de-squeeze. So the image will be distorted and stretched in the viewfinder. But most external monitors now have anamorphic de-squeeze so this is not a huge deal and easy enough to work around.

1.3x or 2x Anamorphic?

With a 16:9 or 17:9 camera you can use 1.3x anamorphic lenses to get a 2.39:1 final image. So the FX9, like most 16:9 cameras, will be suitable for use with 1.3x anamorphic lenses out of the box.

But for the full anamorphic effect you really want to shoot with 2x anamorphic lenses. A 2x anamorphic lens will give your footage a much more interesting look than a 1.3x anamorphic. But if you want to produce the classic 2.39:1 aspect ratio normally associated with anamorphic lenses you need a 4:3 sensor rather than a 16:9 one.
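
A quick bit of arithmetic (my own sketch; the exact delivered ratio depends on the lens and on any final crop) shows how the squeeze factor and the sensor’s aspect ratio combine:

```python
def desqueezed_aspect(sensor_aspect: float, squeeze: float) -> float:
    """Delivered aspect ratio = sensor aspect ratio x anamorphic squeeze factor."""
    return sensor_aspect * squeeze

# 1.3x / 1.33x squeezes on a 16:9 sensor land close to the 2.39:1 "scope" frame.
for squeeze in (1.3, 1.33):
    print(f"16:9 sensor, {squeeze}x squeeze -> {desqueezed_aspect(16 / 9, squeeze):.2f}:1")

# A 2x squeeze on a 4:3 sensor gives 2.67:1, which is cropped at the sides to 2.39:1.
print(f"4:3 sensor, 2x squeeze   -> {desqueezed_aspect(4 / 3, 2.0):.2f}:1")
```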

What about Full Frame 16:9?

But that’s super 35mm 4:3 or s35mm open gate. The FX9 has a 6K full frame sensor, and a full frame sensor is bigger: most importantly it’s taller than s35mm, and tall enough for use with a 2x s35 anamorphic lens! The FX9 sensor is approximately 34mm wide and 19mm tall in FF 6K mode.

In comparison the Arri 35mm 4:3 open gate sensor area is 28mm x 18mm, and we know this works very well with 2x anamorphic lenses. The important bit here is the height – 18mm with the Arri open gate and 18.8mm for the FX9 in Full Frame Scan Mode.

Crunching the numbers.

If you do the maths, start with the FX9 in FF mode and use a s35mm 2x anamorphic lens.

Because the image is 6K subsampled to 4K, the resulting recording will have 4K resolution.

But you will need to crop the sides of the final recording by roughly 30% to remove the left/right vignette caused by using an anamorphic lens designed for s35 on a full frame sensor (the exact amount of crop will depend on the lens). This then results in a roughly 2.8K resolution image, depending on how much you need to crop.
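
Roughly, in numbers (my own sketch, assuming a DCI 4K recording and the approximately 30% crop mentioned above):

```python
RECORDED_WIDTH = 4096     # assuming a DCI 4K recording; UHD would be 3840
CROP_FRACTION = 0.30      # approximate side crop quoted above (lens dependent)

remaining = RECORDED_WIDTH * (1 - CROP_FRACTION)
print(f"~{remaining:.0f} horizontal pixels after the crop")   # ~2867, i.e. roughly 2.8K
```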

4K Bayer won’t give 4K resolution.

That doesn’t seem very good until you consider that a 4K 4:3 bayer sensor will only yield about 2.8K resolution anyway.

And Arri’s s35mm cameras are open gate 3.2K bayer sensors, so will result in an even lower resolution image, perhaps around 2.2K? Do remember that the original Arri ALEV sensor was designed when 2K was the norm for the cinema and HD TV was still new. The Arri super 35 cameras were for a long time the gold standard for anamorphic. But now cameras like Sony’s Venice, which can shoot the equivalent of 6K open gate or 6K 4:3 and 6:5, are taking over.

What about Netflix?

While Netflix normally insist on a minimum of a sensor with 4K pixels horizontally for capture, they are permitting sensors with lower horizontal pixel counts to be used for anamorphic capture, because the increased sensor height needed for 2x anamorphic means that there are more pixels vertically. The total pixel count when using a camera such as the Arri LF with a super 35mm 2x anamorphic lens is 3148 x 2636 pixels. That’s a total of roughly 8 megapixels, which is similar to the 8 megapixel total pixel count of a 4K 16:9 sensor. The argument is that the total captured picture information is similar for both, so both should be (and are) allowed.
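
The arithmetic behind that argument is easy to check (my own sketch, using the figures quoted above):

```python
arri_lf_anamorphic = 3148 * 2636   # Arri LF with a s35 2x anamorphic, as quoted above
uhd_16x9 = 3840 * 2160             # a 4K (UHD) 16:9 sensor

print(f"LF 2x anamorphic: {arri_lf_anamorphic / 1e6:.1f} megapixels")   # ~8.3
print(f"4K 16:9         : {uhd_16x9 / 1e6:.1f} megapixels")             # ~8.3
```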

So could the FX9 get Netflix approval for 2x Anamorphic?

The FX9’s sensor is 3168 pixels tall when shooting FF 16:9, as its pixel pitch is finer than that of the Arri LF sensor. When working with a 2x anamorphic super 35mm lens the image circle from the lens will cover around 4K x 3K of pixels, a total of 12 megapixels on the sensor when it’s operating in the 6K Full Frame scan mode. But then the FX9 will internally downscale this to that vignetted 4K recording that needs to be cropped.

Downscaling 6K to 4K means that the 4K of sensor width covered by the lens becomes roughly 2.7K. But the 3.1K from the Arri, once debayered, will more than likely be even less than this, perhaps only 2.1K.
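
Putting that reasoning into numbers (my own sketch; the debayer factor is an assumption, not a measured figure):

```python
DEBAYER_FACTOR = 0.7                      # assumed effective resolution of a Bayer sensor

# FX9 route: the s35 2x anamorphic lens covers ~4000 of the 6000-ish sensor pixels,
# and the camera downscales the full 6K width to a 4K recording.
fx9_covered_width = 4000
fx9_after_downscale = fx9_covered_width * (4000 / 6000)
print(f"FX9 (after 6K->4K downscale): ~{fx9_after_downscale:.0f} pixels wide")   # ~2667

# Arri LF route: 3148 photosites wide, reduced by debayering.
arri_lf_width = 3148
print(f"Arri LF (after debayer)     : ~{arri_lf_width * DEBAYER_FACTOR:.0f} pixels wide")   # ~2200
```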

But whether Netflix will accept the in-camera down conversion is a very big question. The maths indicates that the resolution of the final output of the FX9 would be greater than that of the LF, even taking the necessary crop into account. But this would need to be tested in practice. If the maths is right, I see no reason why the FX9 won’t be able to meet Netflix’s minimum requirements for 2x anamorphic production. If this is a workflow you wish to pursue I would recommend taking the 10 bit 4:2:2 HDMI output to a ProRes recorder and recording using the best codec you can, until the FX9 gains the ability to output raw. Meeting the Netflix standard is speculation on my part; perhaps it will never get accepted for anamorphic. But to answer the original question:

Can you shoot anamorphic with the FX9? Absolutely, yes you can, and the end result should be pretty good. But you’ll have to put up with a distorted image in the supplied viewfinder (for now at least).

