Sonnet has announced their new Dual-Slot RED MINI-MAG Pro Thunderbolt 3 Card Reader. The SF3 Series RED MINI-MAG Pro Card Reader features dual card slots and a 40Gbps Thunderbolt 3 interface. The card reader is compatible with Mac, Windows, and Linux computers with Thunderbolt 3 ports.
Sigma will officially announce the new lenses at 9am London time: 1am in Los Angeles, 4am in New York, 10am in Berlin, 16:00 in Beijing, and 17:00 in Tokyo. We know they will announce the new 35mm f/1.2, 14-24mm f/2.8 and 45mm…
Fatherly advice is something most of us would do well to heed. Recent news offers a striking example: a father’s wise counsel to hold on to some cheaply acquired film footage should soon lead one former NASA intern to riches.
A new video essay explains how ‘Shazam’ fixed plot holes and visual goofs.
When you see a $100M movie in theaters, you’re paying attention to the action, the story, and the characters. It’s not until you get home and turn on CinemaSins or visit the film’s IMDb trivia page that you start to notice even the slightest detail out of place. Don’t believe me? Okay, show of hands: how many of you caught Khaleesi’s water bottle on first viewing?
The director doesn’t have the luxury of ignoring minor flaws. It’s their job to make the final product as close to perfect as possible using any means necessary. Sometimes, that requires taking a few shortcuts in the hopes that nobody will notice.
David F. Sandberg, the director of Shazam, Annabelle: Creation, and Lights Out, has a new video essay out about his experience solving problems on the set of the Warner Bros. superhero flick Shazam. You can take a look at it here.
I was first introduced to the incredible talents of Emmy Harrington on the set of Caveh Zahedi’s The Show About The Show, where she plays “Slut Machine,” and witnessed, first-hand, her ability to adapt to all types of run-and-gun shooting environments and unorthodox directing styles and deliver a great performance take after take. You can also see her work in shows like High Maintenance and Jessica Jones, and an award-winning film she wrote, directed, and stars in — Two Little Bitches — is currently making the festival circuit. I sat down with her a couple of days after directing […]
Considering your target audience when creating content is one of the most crucial aspects of any marketing strategy. That’s why creators need to find out exactly who the majority of their viewers are when making and editing material.
Operated continuously for over 30 years, the Prime Time Softball League is now open for registration for the fall season. Any television show whose production is based in Los Angeles is eligible to participate. Teams are coed and are made up of the shows’ cast, crew, staff, and their immediate family members. The season will […]
According to a notice published by Nikon Japan, Nikon will stop offering free repair service for affected D600s on January 10th, 2020—exactly six months from today.
“As of February 26, 2014, to ‘Customer for Nikon Digital SLR Cameras D600,’ we were able to provide free support if this phenomenon occurred,” reads the translated statement. “We would like to inform you that the registration for the free service will be closed on January 10, 2020. From January 11, 2020, we will respond according to our repair regulations.”
That means no more free out-of-warranty repairs if your D600 is affected by the sensor dust issue and you didn’t already have it fixed (or replaced by a D610). The move isn’t entirely surprising or upsetting given that the issue (and free fix) has existed for over five years now. Nikon likely believes that most, if not all, of the D600s with this issue have been serviced, and no longer wants to stock the parts required for a fix. Or maybe they finally ran through the $17.7 million they reportedly put aside to address the issue.
Regardless, if your D600 is one of those affected by this issue and you haven’t yet sent it in to Nikon for a free service, it seems you have six months left to make it happen. We say “seems” because the statement has only been published by Nikon Japan, but we’ve reached out to Nikon USA to find out if this notice applies world-wide, and will update this post if and when we hear back.
Drawing tablets have always been a necessary tool for photo retouchers, but just about everyone can benefit from incorporating one into their workflow. The new Huion HS64 is a great, affordable entry-level option.
Researchers with the University of Zurich (UZH) and ETH Zurich have detailed the development of a novel recurrent neural network that can reconstruct ultra-high-speed videos from data captured by event cameras. Unlike conventional cameras, which capture data as individual image frames, event cameras are ‘bio-inspired vision sensors’ that continuously capture movements via pixel-level brightness changes.
Event camera sensors perceive and record the world in a manner similar to how human vision works. Information is continuously recorded, meaning there’s no loss of data that would result from capturing the scene as individual frames. According to UZH researchers, event cameras offer multiple benefits over traditional cameras, including latency measured in microseconds, a complete lack of motion blur, and very high dynamic range.
A figure presented in the researchers’ paper highlighting how fine details in an image were preserved while presenting what they refer to as ‘bleeding edges.’
However, unlike traditional cameras, the resulting output is a sequence of asynchronous events rather than actual intensity images (frames). Traditional vision algorithms can’t be used on the event output, a limitation the researchers have addressed with their newly detailed recurrent network.
Until now, reconstruction of the camera events into intensity images depended on ‘hand-crafted priors and strong assumptions about the imaging process as well as the statistics of natural images,’ according to the researchers. The newly developed approach differs by learning to reconstruct the images directly from the event data.
In describing the fruits of their work, the team says:
‘Our quantitative experiments show that our network surpasses state-of-the-art reconstruction methods by a large margin in terms of image quality (> 20%), while comfortably running in real-time. We show that the network can synthesize high framerate videos (> 5,000 frames per second) of high-speed phenomena (e.g., a bullet hitting an object) and can provide high dynamic range reconstructions in challenging lighting conditions. As an additional contribution, we demonstrate the effectiveness of our reconstructions as an intermediate representation for event data.’
The team has released its reconstruction code and pre-trained model on GitHub to aid future research into the technology. Event cameras may one day be used for capturing ultra-high-speed footage, as well as very high dynamic range videos.
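The asynchronous event-stream format the researchers describe can be illustrated with a toy accumulation step. This is a naive baseline, not the paper’s learned reconstruction (their network replaces exactly this kind of hand-crafted step), and all names here are my own:

```python
# Toy illustration of event-camera data: each event is (x, y, timestamp, polarity),
# fired whenever a single pixel's brightness changes, rather than a full frame.
# Summing event polarities over a time window yields a crude "edge image".

def accumulate_events(events, width, height, t_start, t_end):
    """Sum event polarities per pixel within the window [t_start, t_end)."""
    frame = [[0] * width for _ in range(height)]
    for x, y, t, polarity in events:
        if t_start <= t < t_end:
            frame[y][x] += polarity  # +1 brightness increase, -1 decrease
    return frame

# A bright edge sweeping right across a 4x4 sensor produces +1 events.
events = [
    (0, 1, 0.001, +1),
    (1, 1, 0.002, +1),
    (2, 1, 0.003, +1),
    (3, 1, 0.004, +1),  # this event arrives after the window closes
]
frame = accumulate_events(events, width=4, height=4, t_start=0.0, t_end=0.0035)
print(frame[1])  # -> [1, 1, 1, 0]
```

The paper’s recurrent network consumes richer spatio-temporal representations than this, but the idea of windowing an asynchronous stream into frame-like tensors is the common starting point.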
A high school girl was traveling in Zambia, Africa, last month when she came across an elephant that apparently hates having its picture taken. When the girl pulled out her smartphone to snap a photo, the elephant lashed out with its trunk and smacked the phone out of her hands.
The girl was in Africa for a 12-day mission trip to help the boys and men living on the streets get back on their feet, and after a 20-hour flight and 10-hour bus ride, the group decided to spend its first day in the country staying near a safari reserve and relaxing.
“We saw the beautiful elephant perched on the edge of its evening enclosure,” the girl tells ViralHog. “As we stood there, we were all admiring the beauty of this animal by taking photos and petting it as it was reaching out to us.
“I stood there smiling with joy, as elephants are one of my favorite animals, so I started to take some photos. I thought to myself, ‘Wow, no one is going to believe I was this close to a real live elephant’ but boy did I get ‘close.’
“Next thing I knew, WHAM!! Right in the side stomach. I felt like 10 people had punched me at once as I was catapulted backwards and my phone flew forward to the ground. […] Yes, I had the wind knocked out of me, no it didn’t hurt, yes I’m okay, and yes I still love elephants!
“It took me a couple of minutes to fully process the fact that I just got smacked by a real living elephant. One of the best parts is that it was all captured on video so everyone, knowing I was okay, could get a good laugh out of it.”
At DigiPro 2019, Foundry’s Head of Research Dan Ring will be presenting on “Jumping in at the Deep End: How to Experiment with Machine Learning in Post-Production Software”.
The 8th DigiPro, or Digital Production Symposium 2019, happens in Los Angeles on 27 July 2019, as part of SIGGRAPH 2019. The event is an open window to the future, as it brings together the world’s premier creators of digital visual effects, animation, and interactive experiences. During the one-day event, scientists, engineers, artists, and producers share ideas, insights, and techniques that bring innovation to real-world production.
Foundry, the leading developer of creative software for the Digital Design, Media and Entertainment industries, reveals its plans for SIGGRAPH 2019, and those include participating at DigiPro 2019 Symposium. Foundry’s Head of Research Dan Ring will be presenting on “Jumping in at the Deep End: How to Experiment with Machine Learning in Post-Production Software”.
Behind-the-Scenes of Spider-Man
The DigiPro 2019 event will open with a keynote that sets the tone. Under the title “Swing Behind-the-Scenes of Spider-Man: Into the Spider-Verse”, Peter Ramsey from Sony Pictures Animation and Danny Dimian from Sony Pictures Imageworks take the audience through the challenges of creating the big screen version of Spider-Man.
From animation to technology, artists were challenged with developing new tools and techniques to create a groundbreaking visual style for this fresh and highly original film, which introduces Brooklyn teen Miles Morales and the limitless possibilities of the Spider-Verse, where anyone can wear the mask. For this year’s keynote, co-director Peter Ramsey and visual effects supervisor Danny Dimian will discuss their experiences while making the film.
The program includes three sessions (The Nitty-Gritty, Learnings, and Artist Tools), divided into a total of 10 segments. “Physically Based Lens Flare Rendering in The Lego Movie 2,” “Distributed Multi-Context Interactive Rendering,” and “Creating Interactive VR Narratives through Experimentation and Learning” are some of the titles of the different presentations, a clear indication of the diversity of themes and techniques covered.
Machine-Learning in Post-Production
A team from Foundry composed of Dan Ring, Johanna Barbier, Guillaume Gales, and Ben Kent will participate in the session entitled “Jumping in at the Deep End: How to Experiment with Machine Learning in Post-Production Software”, which also includes Sebastian Lutz from Trinity College Dublin.
The session description notes that in recent years we’ve seen an explosion in Machine Learning (ML) research. “The challenge is now”, continues the note, “to transfer these new algorithms into the hands of artists and TDs in visual effects and animation studios, so that they can start experimenting with ML within their existing pipelines.”
The session at DigiPro 2019 presents some of the current challenges to experimentation with and deployment of ML frameworks in the post-production industry. It introduces Foundry’s open-source “ML-Server” client/server system as a way to enable rapid prototyping, experimentation, and development of ML models in post-production software. Data, code, and examples for the system can be found on the GitHub repository page.
Foundry’s All Stars 2019
Foundry’s presence at SIGGRAPH 2019 extends beyond the Digital Production Symposium (DigiPro 2019). Returning for the third year, Foundry’s own All Stars event will take place on Sunday, July 28th, with speakers from some of the most innovative companies around the world, including Digital Domain, DNEG, Nike, More VFX, LAIKA, Method Studios, and Weta Digital. The event features an inspirational keynote from Marvel Studios and opening remarks from Mikki Rose, SIGGRAPH 2019 conference chair.
Foundry is also supporting the SIGGRAPH Student Volunteer Program, the annual JPR SIGGRAPH Luncheon and the new Pipeline Conference, taking place on Sunday, July 28th. Foundry’s Exhibitor Sessions will take place on Monday, July 29th and Tuesday, July 30th. They will focus on Education, Look Development and Lighting, Cloud-Based Solutions, solving creative challenges with Nuke and Modo in 3D design.
Foundry at SIGGRAPH 2019
Jody Madden, Chief Product and Customer Officer at Foundry, commented: “SIGGRAPH is a great opportunity for us to meet our customers and share the latest updates on our products and innovations with the wider industry. We are thrilled to return this year with our third annual All Stars event with another impressive line-up of speakers sharing their work that truly brings imagination to life with our products. Complemented by a suite of exhibitor sessions on everything from education to research, we’ll have something for everyone.”
Mikki Rose, SIGGRAPH 2019 Conference Chair, commented: “I am very excited to be speaking at Foundry’s All Stars event this year, where I plan to address how our industry can continue to thrive and foster creativity by creating a more diverse and inclusive future for artists. Foundry’s All Stars speakers this year truly show how powerful software can inspire creativity in artists and studios from all around the world.”
Foundry will be located at Booth 925 during SIGGRAPH 2019, where they’ll be showcasing products in partnership with Lenovo and NVIDIA. Foundry’s team will also be present at the NVIDIA Limelight event on Monday, July 29th with the latest Modo updates and in the NVIDIA Booth on Wednesday, July 31st, discussing Machine Learning. Foundry’s products will also be showcased at the following booths: WACOM, NVIDIA and AWS.
Retouching is an essential part of the photography process. A photo does not get published without some finishing applied in post-production. Automation tools such as presets and actions help speed up this process, but there is a danger in using them. This article discusses the problem with presets.
Complete your lens lineup with Sony’s new FE 35mm F1.8.
Just five years ago, Sony shipped the a7 II and has been catering to its dedicated mirrorless following ever since. If you’re a part of that following, good news: your pleas for a fast, all-purpose 35mm lens have been heard.
Sony has announced the release of the FE 35mm F1.8, a full-frame wide-angle lens that joins the company’s 34-lens lineup for a7 and a9 cameras.
For video, the lens uses a linear motor that delivers linear focus response in MF mode, and fast, quiet autofocus when needed. A customizable focus hold button allows for instantaneous switching between auto and manual focus. The lens has a compact build, coming in at less than 3 inches long, and it’s incredibly light, weighing in at less than ten ounces, which makes it a very versatile lens. The 9-blade circular aperture produces a natural bokeh effect and a smooth blurring of backgrounds.
While looking at my own images on Shutterstock, I noticed the Shutterstock algorithm was suggesting my photos as “similar” images. I thought it was a bug on the Shutterstock website until I noticed that others had downloaded my photos from other sites then uploaded them to Shutterstock. Shutterstock’s similar photos algorithm then noticed this and suggested the stolen photos along with my photos.
After a little research, I found that this has been happening for at least eight years, and it doesn’t look like Shutterstock has done more than delete the offending photos when they’re found; more just get uploaded. However, with Shutterstock’s new machine learning similar photos feature, it is now easy to figure out if others have uploaded your photos to Shutterstock.
In the following video, I walk through how to use the Shutterstock similar photos search to figure out if copyright thieves have uploaded your photos to Shutterstock and how to send a DMCA takedown notice to get Shutterstock to take down the photos.
With 1.5 million photos uploaded every week, historically it might have been difficult to check if duplicate images were being uploaded, but with machine learning algorithms that are used to show similar images, it would be very easy for Shutterstock to catch these copyright thieves before the photos are published online. But Shutterstock hasn’t done this, so photographers need to do it manually.
I think this may be because it costs less for them to deal with the DMCA takedown notices than it would be to review and deal with duplicate images being uploaded, but maybe we can change that. If even a small percentage of Shutterstock photographers checked to see if their most popular photos were uploaded to Shutterstock and sent Shutterstock DMCA takedown notices, I think Shutterstock would see a spike in takedown notices and may make changes to stop this from happening in the future.
I am an amateur photographer but a professional software developer, so I understand how easy it would be to write the code to check images. As proof, I have sent a request to Shutterstock to provide access to their similar image search so I can create a free tool to make it easier for photographers to find out if others are selling their photos online. I will even make the code open source so Shutterstock can use it at no cost if they want.
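As a rough sketch of what such a check could look like, here is a generic average-hash comparison. This is not Shutterstock’s actual algorithm (which isn’t public); it’s a minimal illustration of perceptual hashing, and the function names are my own:

```python
# Minimal average-hash ("aHash") sketch: reduce an image to a bit string,
# then compare hashes by Hamming distance. Re-uploaded copies of a photo
# (recompressed, lightly resized) tend to land within a few bits of the original.

def average_hash(pixels):
    """pixels: 2D list of grayscale values (0-255). Returns a bit string."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return ''.join('1' if p > mean else '0' for p in flat)

def hamming(h1, h2):
    """Number of bit positions where the two hashes differ."""
    return sum(a != b for a, b in zip(h1, h2))

original = [[200, 200, 10, 10],
            [200, 200, 10, 10]]
# The same photo after recompression: pixel values shifted slightly.
reupload = [[190, 205, 15, 5],
            [198, 210, 12, 8]]
unrelated = [[10, 200, 10, 200],
             [200, 10, 200, 10]]

print(hamming(average_hash(original), average_hash(reupload)))   # -> 0
print(hamming(average_hash(original), average_hash(unrelated)))  # -> 4
```

In practice you would build the grayscale grid by downscaling each photo to something like 8x8 pixels with an imaging library, and flag any pair whose Hamming distance falls under a small threshold for manual review.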
So, use the steps in the video to see if any of your photos have been uploaded to Shutterstock and if they have, use this template to send them a DMCA takedown notice.
About the author: James Wheeler is an amateur photographer and co-founder of Photerloo, a website for photographers that simplifies posting and managing their photos across photography, social media, and microstock sites. The opinions expressed in this article are solely those of the author.
In my last blog, I mentioned that I wanted to give an editor’s perspective on the increased amount of data—footage clips—that editors have to deal with. It’s not just that file sizes have ballooned due to resolution increases or the need for more and more coverage. There’s also a lack of urgency to say “Cut!” quickly once a take is done.
Directors’ takes on this have been, “Deal with it. I have better things to worry about than your complaints about how much storage you’ll need for my project.” Or more succinctly, “I have a solution for your problem, and it doesn’t involve you!”
Okay, maybe that’s an exaggeration. But I’ve been on set and have seen it happen. A take finishes and the director never says “Cut.” They talk with the talent and the camera rolls. They talk with the crew or the client and the camera still rolls. Eventually, someone on the crew might gently ask, “Should we cut?”
Directors might avoid cutting because they want to get several takes, one right after the other. When I’ve talked with directors about the multiple-takes issue, the comeback is that the energy or the actors’ performance drops off when a director says “Cut!” It takes a bit to get that energy back. By not saying cut, a director can get a number of different performances quickly and efficiently. It makes perfect sense, and changing methods to appease the post workflow isn’t what this is about.
But here are some side effects of long takes:
During production, DITs (or camera assistants or data wranglers) still deal with limited storage on camera cards. The cards are getting bigger and are easy to swap out, but it still takes time to transfer footage to drives. Fortunately, DITs are prepared with more and more empty cards. The larger the cards, the better.
But as takes get long, the DIT’s focus becomes critically centered on making sure that footage is offloaded as quickly as possible. If those takes get even longer, it becomes a stress point. That’s when bad things can happen. You don’t want bad things to happen anywhere near memory cards.
Long takes are yet another reason the DIT is one of the last crew members to leave the set. Keeping crew happy pays off in the long run.
During production, log notes can be critical for communication between production and post-production. A log note on a single clip/single take is easy to interpret, but if there’s a single clip with multiple takes, then a log note might get confusing. If it’s a “starred” take, does the star refer to the first take, the seventh, the tenth?
There aren’t easy solutions. The director calls the shots and you need one person calling “Cut” or chaos ensues. Is the camera operator able to prompt the director about cutting in some way? For multiple takes on the same clip, can the director and the camera operator agree on a signal to quickly cut to start another clip? This won’t help with total project size, but it might help with logging issues.
Maybe there isn’t a solution. But next time I’ll give a concrete example where long takes directly affect the final outcome of a project.