Saturday, March 08, 2014

Protune 2.0

There is new firmware out for your HERO3+ cameras.  I'm going to expand on some of the new HERO3+ Black Edition features related to Protune:

  • New advanced Protune™ controls for Color, ISO Limit, Sharpness and Exposure
  • Changes the Protune default settings for Color to "GoPro Color" and Sharpness to "High" 

Protune History

Protune was originally introduced in a firmware update for HERO2, developed to meet the needs of professionals already using GoPro cameras in feature film and TV projects.  Protune added a range of modes previously unavailable in any POV camera system: 24p, log encoding, and H.264 long GOP at 35+Mb/s. See my first introduction of Protune at NAB 2012 -- this video was six months before its release.

Here is my October 2012 blog entry discussing the HERO2 Protune availability, just weeks before the HERO3 launch.

With the HERO3 Black Edition, Protune was included from day one. There were some small changes: 24p was now a standard video mode (no longer requiring Protune to be active), and white balance controls were now offered, including CAMRAW.  The Protune white balance controls for HERO3 and HERO3+ are:

  • AUTO - Same as in the standard modes
  • 3000K - Locked white balance for indoor warm lighting with an sRGB color space
  • 5500K - Locked white balance for sun conditions with an sRGB color space
  • 6500K - Locked white balance for daylight overcast conditions with an sRGB color space
  • CAMRAW - Locked white balance with the sensor's native color space

CAMRAW is the only non-obvious addition: it doesn't attempt to saturate the image into the reduced, standard sRGB color gamut. Shooting in CAMRAW improves the ability to cut GoPro footage with larger cinema cameras, but it requires more post color correction -- except within GoPro Studio, which applies the required color matrix automatically.

My shooting tip:  I always shoot Protune CAMRAW. The subtler color image is a nice starting point for color correction, but CAMRAW is also lower noise. To saturate any image, the differences between the color channels are gained up, so blue channel noise is crossed into the green and red channels, and vice versa.  This happens in all cameras, and it happens in post saturation, but with CAMRAW it is under your control.
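To see what I mean, here is a minimal numpy sketch (my own illustration, not GoPro's actual pipeline) that treats saturation as gaining up each channel's difference from luma -- the per-channel noise is amplified right along with the color:

    import numpy as np

    rng = np.random.default_rng(0)
    pixel = np.array([0.40, 0.45, 0.50])      # a desaturated R, G, B value
    noisy = pixel + rng.normal(0.0, 0.01, 3)  # independent per-channel noise

    def saturate(rgb, amount):
        # Rec.709 luma weights; saturation scales the chroma (rgb - luma)
        luma = rgb @ np.array([0.2126, 0.7152, 0.0722])
        return luma + amount * (rgb - luma)

    print(saturate(noisy, 1.0) - saturate(pixel, 1.0))  # noise as shot
    print(saturate(noisy, 2.0) - saturate(pixel, 2.0))  # noise after 2x saturation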


The New Protune


Protune within the new firmware on the HERO3+ Black Edition has changed again.  The original Protune was the "pro tune" that GoPro designed for all professionals, yet professionals don't all have the same needs. Now it is the mode for pros to tune their GoPro cameras themselves.

Note: The new Protune defaults are very different from previous releases.

Protune is accessed within the tools menu by selecting CAPTURE SETTINGS:


Use the Mode button to scroll to PROTUNE and press the shutter.

While previously there was just on and off, with the number of new modes we added a reset that restores Protune to the default configuration.  The default configuration turns all of Protune off except for the high bit-rate.  Protune is still the way to get the least compression / highest image quality.  If you're new to color correction, but want the least compression (for high action, high detail video), then turning Protune on is all you need to do.  Protune does require quality SD media; see GoPro SD Card Recommendations.
The next menu item exposed (with Protune set to On) is the white balance mode.   I'm showing my favorite CAMRAW mode here.
The white balance menu is unchanged from the last firmware with the default set to AUTO.

The first new menu sets the encoding curve, described in terms of the video's appearance.  I use the FLAT Protune log curve.
However, the default for this mode is GOPRO COLOR, the standard high contrast, high saturation of the classic GoPro look. This is different from earlier Protune implementations, which offered only the FLAT log curve encoding of the video image.  GoPro Color was added to Protune to help with broadcast / news applications that rarely do significant color correction, but still want the best compression quality possible.

The next menu item is not completely obvious, as it isn't common in other camera systems. While I know many pro users want full manual exposure, the nature of a camera with no mechanical iris on a super wide F2.8 lens makes that tricky. ISO Limit is a step toward manual control. A GoPro's exposure is controlled through shutter speed and sensor gain; ISO Limit will restrict the sensor gain to the value selected or lower.
For night scenes you don't typically want to gain up the shadows to 6400 ISO (that is the default.)  If you want dark to be dark, consider a 1600 or 400 ISO limit.  I haven't decided my favorite yet, other than not using 6400.  This is an improvement over the older Protune, which didn't limit the sensor gain.  For those accessorizing their cameras with variable ND filters, you can use ISO Limit 400 as a manual exposure mode of sorts. Through the ND, slowly stop the light down until the camera output begins to darken; you are now running ISO 400 with a 360 degree shutter (hint: shoot 48p, then in post drop every other frame for 24p at 180 degrees, as sketched below -- you will also have 2X slow motion ready when you need it.)
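Here is a hedged sketch of that frame-drop step using OpenCV (file names are hypothetical; any NLE can do the same thing):

    import cv2

    reader = cv2.VideoCapture("gopro_48p.mp4")
    fps = reader.get(cv2.CAP_PROP_FPS)                  # ~47.95 for GoPro 48p
    size = (int(reader.get(cv2.CAP_PROP_FRAME_WIDTH)),
            int(reader.get(cv2.CAP_PROP_FRAME_HEIGHT)))
    writer = cv2.VideoWriter("out_24p.mp4",
                             cv2.VideoWriter_fourcc(*"mp4v"), fps / 2.0, size)

    index = 0
    while True:
        ok, frame = reader.read()
        if not ok:
            break
        if index % 2 == 0:       # keep every other frame: 180 degrees at 24p
            writer.write(frame)
        index += 1

    reader.release()
    writer.release()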
I wouldn't know that icon meant sharpness either, but it is hard to visually depict sharpness on an on/off monochrome display. Previous Protune modes had camera sharpening completely off, whereas the new default and standard modes have the sharpness set to HIGH.
While I'm a fan of the old Protune, with everything designed for post corrections including adding sharpness, I'm liking the MEDIUM mode as a nice balance.  It doesn't often need additional sharpening (GoPro Studio will not default to adding any,) and it doesn't have any obvious sharpening artifacts.
The mode I really needed: EV compensation.  I shoot a lot of events under stage lighting with my GoPros, and the bright lights on faces often blow out.  This problem is solved through EV compensation.

The EV compensation ranges from -2EV to +2EV stops of compensation, defaulting at zero.  I like to use -0.5 as my default, ready for anything, but I have used -1.0EV or -1.5EV for stage events.
Here is Protune 2.0 using the default settings, a 1:1 pixel crop from a 1920x1080 image.  No color correction applied.  The image is nice, but we can do more.


Here is CAMRAW with sharpening at MEDIUM and EV at -0.5.  The color of the petals is no longer clipping under magic hour sunlight (EV -0.5 and CAMRAW helped.) 
Here is the same image as above with the color matrix applied (automatically) in GoPro Studio, some white balancing, contrast, and sharpness added, output through the Protune preview LUT. The final image is more true to the original rose coloring (this is a wide gamut magenta flower.)

My Favorite Configuration for HERO3+BE

While I've listed some of my favorite settings above, here are defaults for all my HERO3+ Black Edition cameras (one is not enough):

  • Video 2.7Kp30 (24p for night events or creative projects) with a Medium FOV (this is the sharpest, highest resolution mode for video presentation, with very little lens curvature.)  For aerial video I use 1080p60 Medium FOV (low light off.)
  • Photo Burst 5/1 at 12MPixels
  • Timelapse -- I don't use time-lapse photo; I use the 2.7Kp24 or 1440p24 video modes and compute my time-lapses in post (a GoPro Studio feature.)  See Rethinking Time-lapse
  • Protune On
  • FLAT - Log curve
  • CAMRAW
  • ISO Limit 400 or 1600 as needed.
  • Sharpness Medium
  • EV -0.5 (or downward as needed)
  • Sound/Beeps OFF
  • Auto shutdown after 120 seconds of inactivity.

Added March 9th, 2014

GoPro App (Android and iOS)

These new Protune controls are all available through the latest GoPro App (free.)  An added bonus: the latest GoPro App (version 2.3) will update your HERO3+ camera for you -- no need to update through the web site.

Hint for updating the camera software via the App: get the latest app and connect to the camera as usual.  Now the App knows which model you have, and it will contact the server to request any camera software updates.  This can take up to 24 hours after connecting to your camera (behavior on iOS and Android differs slightly.)

The next time you connect to your GoPro, this "Install Update" will appear -- click and follow the clear instructions.


Hint 2: For those that just can't wait 24 hours: after connecting to your camera, switch back to an internet-connected WiFi network (i.e. not the camera's.)  Then open the GoPro App and go to the App Settings (top right gear icon.)  Toggle Auto Download on and off -- that should start the camera software download from the server.  See what will be updated under Camera Models.

Thursday, February 13, 2014

QuickTime 16-bit

This might be beating a dead horse, but QuickTime truly sucks.

For those using 16-bit (deep color) applications, always use the Force 16-bit encoding option; it is the highest quality and, surprisingly, it is often the lowest data rate.

Now for the weird reason.

QuickTime loves 8-bit, it prefers it greatly, and its support for deep color is difficult at best.  Over the years we tried to make 16-bit the preferred mode for our codec within QuickTime, yet many video tools broke when we did this.  The compromise was to add Force 16-bit to the QuickTime compression options, to allow the user to control the codec's pixel-type preference – applications that can handle 16-bit will benefit, and applications that can't still work.

Using After Effects as my test environment (the same applies to other QuickTime-enabled deep color applications), I created a smooth gradient 16-bit image, then encoded it at 8-bit using an 8-bit composite, at 16-bit using a 16-bit composite, and at 16-bit using a 16-bit composite with the Force mode enabled (pictured above.)
Without post color correction, all three encodes looked pretty much the same*, yet the data rates are very different.

* Note: QuickTime screws up the gamma for the middle option, so with the image gamma corrected to compensate, they looked the same.
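For anyone wanting to reproduce the test, a rough sketch of the source (my reconstruction -- the exact gradient doesn't matter much), showing where 8-bit's flat spots come from:

    import numpy as np

    grad16 = np.linspace(0, 65535, 1920)               # one smooth scanline
    grad8 = np.round(grad16 / 257.0).astype(np.uint8)  # the 8-bit version

    # 8-bit collapses ~7.5 adjacent pixels per code, producing the contours
    # that show up under extreme color correction.
    print(len(np.unique(grad16.astype(np.uint16))), "levels in 16-bit")
    print(len(np.unique(grad8)), "levels in 8-bit")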

The resulting file sizes for 1080p 4:4:4 CineForm encodes at Filmscan quality:

  • 8-bit – 13.4Mbytes/s
  • 16-bit – 28.4Mbytes/s
  • 16-bit Forced – 5.3Mbytes/s

Our instincts that higher bit-rate is higher quality will lead us astray in this case.

Under color correction you can see the difference, so I went extreme using this curve:
To output this (16-bit Forced)
The results are beautiful, really a great demo for wavelets.

Zooming in the results are still great. Nothing was lost with the smallest of the output files.

Of course we know 8-bit will be bad.
We also see the subtle wavelet compression ringing at the 8-bit contours, enhanced by this extreme color correction.  This is normal, yet it shows you something about the CineForm codec: it always uses deep color precision.  8-bit looks better using more than 8 bits to store it.  That ringing mostly disappears using an 8-bit composite; an 8-bit DCT compressor could not do as well.
Storing 8-bit values in a 12-bit encoder, steps of 1,1,1,1,2,2,2,2 (in 8-bit, gradients are quantized, producing these flat spots) are encoded as 16,16,16,16,32,32,32,32; the larger steps take more bits to encode – all with the aim of delivering higher quality.  Most compression likes continuous tones and gradients; edges are harder. Here the 8-bit source breaks the smooth gradients into contours, which have edges. The clean 16-bit forced encode above is all gradients, no edges, resulting in a smaller, smoother, beautiful image.
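A worked version of that arithmetic (my restatement, not codec internals):

    import numpy as np

    steps8 = np.array([1, 1, 1, 1, 2, 2, 2, 2])  # per-pixel deltas in 8-bit
    steps12 = steps8 * 16                         # same deltas as 12-bit codes
    print(steps12)                                # [16 16 16 16 32 32 32 32]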

Now for the QuickTime craziness: 16-bit without forcing 16-bit.
The image is dithered.  This is the “magic” of QuickTime; I didn’t ask for dithering, and I didn’t want dithering. Dithering is why the file is so big when compressed.  QuickTime is given a 16-bit format for a codec that can do 16-bit, but it sees the codec can also do 8-bit, so it dithers to 8-bit, screws up the gamma, then gives that to the encoder.  Now nearly every pixel has an edge, therefore a lot more information to encode.  CineForm still successfully encodes dithered images with good results, yet this is not what you expect.  If you want noise, you can add it as needed; you don't want your video interface (QuickTime) to add noise for you.
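To see why dithering inflates the encode, a small sketch (the dither model here is my assumption -- uniform noise added before rounding -- not QuickTime's actual method):

    import numpy as np

    rng = np.random.default_rng(1)
    ramp = np.linspace(0.0, 255.0, 1920)          # smooth 8-bit-range ramp
    plain8 = np.round(ramp)
    dither8 = np.round(ramp + rng.uniform(-0.5, 0.5, ramp.size))

    # Count the pixel-to-pixel changes ("edges") the encoder must represent.
    print(np.count_nonzero(np.diff(plain8)))      # few transitions, cheap
    print(np.count_nonzero(np.diff(dither8)))     # nearly every pixel changes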

If anyone can explain why QuickTime does this, I would love for users not to have to manually select “Force 16-bit encoding”.


P.S. Real world deep 10/12-bit sources pretty much always produce smaller files than 8-bit.  This was an extreme example to show why this happens.


Sunday, December 30, 2012

Rethinking Time-lapse

Since I got my hands on the HERO3 Black Edition, I've been doing significantly more time-lapse.  This is not because the stills are so much better than HERO2's (they are), it is due to the new video modes at 2.7K and 4K, combined with Protune. There is no need to time-lapse with JPEG sequences with this camera, unless you know you need a very long interval.  Most time-lapse shoots that document human experiences are better with shorter intervals, between 0.5 and 10 seconds. Knowing which interval is best takes practice, but forget that! With the new camera shooting 4K at 12 or 15fps, basically a continuous 9MPixel motor drive sequence (stored in an MP4), the time-lapse guesswork can be left to post. Resampling 4K video to your needed interval is a straightforward process in most video tools, and a standard feature in the free GoPro CineForm Studio software.

At 4K you get most of the spatial resolution of the still mode, at 6+ times the temporal resolution (12/15fps vs a maximum of 2fps for JPEG), at approximately half the data rate.  So there is a data rate saving using the video modes to simulate the shorter intervals, but for long intervals there are still reasons to shoot ultra-HD video over stills -- simulated longer shutter intervals.  The HERO cameras mainly use shutter speed to control exposure, which is fine for high action moments, but for the scenes best suited to time-lapse a fast shutter may not be desirable.  With a DSLR, you can stop down the camera's aperture, but that only gets you so far. For time-lapse exposures of 5 seconds or more, that would require a lot of neutral density filters for daylight shooting.  For a HERO camera, aperture control is not available and adding neutral density is highly impractical, so we need to do the camera operation in post.

So let's say you want to simulate a 5 second exposure with a 10 second interval in full daylight (simulating a 180 degree shutter at play speed.)  HERO's default exposure might be around 1/1000th of a second in strong daylight, nowhere near the 5 second exposure target.  Yet the camera could be recording 4K at 12fps over those 5 seconds, collecting 60 individual frames.  If you average those 60 frames, you get very close to the look of a single long exposure from a DSLR with a hell of a lot of ND filtration, without the setup headache. Typically blending over 30 frames for daylight simulates the motion blur of a single exposure.  With darker shots that might have the camera's shutter exposure near 360 degrees (1/30th for 30 fps video), far fewer frames can be blended for a natural look.  Of course, the more frames used in averaging, the smoother the results.  I have been asked how a GoPro achieved this high action shot with so much motion blur:

Now you know.  This was shot 1920x1440 at 24p, with 30 frames averaged for each single frame in the time-lapse output.
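Here is a minimal sketch of the averaging step with OpenCV (file name and window size are hypothetical; GoPro Studio's FRAME BLEND does this for you):

    import cv2
    import numpy as np

    reader = cv2.VideoCapture("gopro_4k_12fps.mp4")
    N = 60                                   # 5 seconds of frames at 12fps
    window, outputs = [], []

    while True:
        ok, frame = reader.read()
        if not ok:
            break
        window.append(frame.astype(np.float32))
        if len(window) == N:
            # One simulated long exposure per N-frame window
            outputs.append(np.mean(window, axis=0).astype(np.uint8))
            window = []

    reader.release()
    print(len(outputs), "long-exposure frames")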

Continuing with the target of a 5 second exposure and a 10 second interval: I was intending to model a 180 degree shutter; however, CineForm Studio with Motion Blur enabled will simulate 360 degrees (this was by design.)  So setting "SPEED UP" to 60, "FRAME BLEND (MOTION BLUR)" on, and the output frame rate to 23.976p, the result will be a clip with a 5 second exposure and a 5 second interval.
To get this to simulate a 10 second interval simply place it in your editing tool's timeline and double the playback speed (with frame blending off.)  Now every other 5 second exposure will be displayed for 180 degree shutter emulation.
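The arithmetic, restated (my numbers for this example):

    fps = 12                    # 4K capture rate
    speed_up = 60               # CineForm Studio SPEED UP setting
    exposure = speed_up / fps   # 60 frames blended = 5.0 second exposure
    interval = speed_up / fps   # 360 degree blend: interval equals exposure
    print(exposure, interval)   # 5.0 5.0 -> double playback for a 10 s interval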

Protune helps greatly, particularly in low light. Now that we are averaging frames together, we get an excellent side effect:  a large reduction in noise. Averaging frames reduces random noise by roughly the square root of the frame count (average four frames and the noise halves.) Combined with Protune, which preserves much more shadow detail, you can basically see new details that would normally be lost to noise with regular video or stills time-lapse.  Protune lifts the shadow detail so that it is no longer crushed to black. In standard mode, averaging crushed black only results in more crushed black, yet in Protune averaging noisy shadow detail results in more shadow detail.

I've used this technique in most of my recent videos, such as this one (the night time-lapses are very clean, because of HERO3 Black and this averaging technique using CineForm Studio):

24 Hours of Lemons at Chuckwalla Dec 2012. from David Newman on Vimeo.

Update Jan 5/2013: Example comparing classic and video blended time-lapse

P.S. For those who have been following my Instagram feed (http://instagram.com/0dan0) or Twitter (@David_Newman), you are likely aware that I've been combining the above time-lapse technique with a motion controller I have been experimenting with.  This combines a GoPro with a 3D printed motion controller that runs on toy G-scale train track. I've just posted its design on thingiverse.com for this project. Let me know if you successfully build one, and link me to your videos.

Tuesday, October 16, 2012

Why I shoot Protune -- Always!

If you are reading here and you don't yet know about Protune, read this entry first: Protune

Here are some images that should speak for themselves.  This wide dynamic range scene, with outside lighting to the right (frosted glass windows) and indoor shadows in the back left, showcases the improvements the Protune curve offers for color correction:

Stock mode converted within CineForm Studio 
(Premium version of Studio added the waveform.)



Protune mode automatically corrected when converted in
CineForm Studio to be similar to stock
(check out those improved highlights.)



Stock mode with contrast reduced in Studio to show the
dynamic range limitations.

Protune mode with the same contrast applied as above 
(more shadow detail, reduced highlight clipping.)

Resetting Protune to no automatic corrections in
CineForm Studio also gives you a nice starting 
place for color correction.

P.S. I shot this with a HERO3 Black Edition.

Wednesday, October 10, 2012

Protune

If you haven’t heard already, Protune™ firmware and GoPro App for the WiFi BacPac are now available -- get them now.  While my team worked substantially more on developing the GoPro App, this blog is about the origins and design of Protune.

Before I geek out on why and how Protune is so cool, some readers may want to know what it is and does (and may want to skip the rest.)  Protune is a suite of features designed to enable even more professional image capture from your GoPro, while still being accessible to every GoPro user.  Protune's strongest emphasis is image quality, increasing the data-rate (decreasing compression) from an average of 15Mb/s to 35Mb/s.  Small artifacts that can occur in detailed scenes or extreme motion are gone at 35Mb/s.  Next is the addition of the 24p frame rate (to the existing frame rate options), greatly easing the combination of GoPro footage with other 24p cameras, common in professional markets. Finally, the Protune image is designed for color correction; it starts with a flatter look that is more flexible for creative enhancement of the image in post-production. With the latest HERO2 firmware installed, Protune is enabled within the secondary tools menu.

Now for the why and how.

Protune has been a long time coming, and so has this blog entry.  Protune is an acknowledgement that so many GoPro cameras are used for professional content creation – Discovery Channel looks so much like a GoPro channel to me.  Protune is also the first clear influence the CineForm group has had on in-camera features, for which we are super proud, yet most of the engineering was done by the super smart camera imaging team at GoPro HQ.  For the novice Protune user, CineForm Studio 1.3 is set up to handle Protune image development, so all users can benefit from this cool new shooting mode. This synergy between the software and camera groups allows us to push both further. In the old CineForm days (non-GoPro) I would probably have blogged about helping with the design of a new camera log curve, and all the pluses and minuses of color tuning, months before we would have had anything to show, but that was before we became part of a consumer electronics company.  Some things must remain secret. Working at CineForm was exciting, but it is nothing compared to the adventures I’ve already had at GoPro, with so much more to come. 

Protune for me started when HERO2 launched.  Here was a camera that I could use in so many ways, yet in certain higher dynamic range scenarios (I shoot a lot of live theatre and was experimenting with placing GoPros around the stage), the naturally punchy image limited the amount of footage I could intercut with other cameras.  It is of course the intercutting of multiple camera types that is of greatest need for the professional user. Note: there is one professional group I know of that exclusively uses GoPro HEROs, and that is our own media team – even though they now use Protune shooting modes. Protune gets you more dynamic range, and I was amazed how much. 

Sensor technology continues to improve, and we are seeing awesome wide dynamic range images coming from premium cameras like the ARRI Alexa and even the amazingly affordable Blackmagic Cinema Camera, but as sensor size (really pixel size) shrinks, there is an impact on dynamic range.  Smaller pixels often result in reduced dynamic range, yet so much has changed in so few years.  Back in 2006, CineForm was very much involved with Silicon Imaging and the development of the SI-2K camera, which was highly praised and generally confirmed to have around 11 stops of dynamic range – good enough to be used on the first digitally acquired feature (well, mostly digital) to win Oscar Cinematography and Best Picture awards.  The HERO2 sensor is smaller and has a significantly higher pixel count (11MPixel versus SI-2K’s 2MPixel; HERO2 pixels are way smaller), yet we are also seeing a similar dynamic range.  

It was not just five years of sensor technology that made all the difference; it was using a log curve instead of contrast added to Rec709 with 2.2 gamma -- geek speak for calibrating cameras to make the default image look good on your TV.  Making images look great out of the box is the right thing to do for all consumer cameras, and you get just that with HERO2 via HDMI to your TV. Yet TVs do not generally have 11 stops of dynamic range, maybe 9 on a good set, and that is after you’ve disabled all the crazy image “enhancements” TVs default to having switched on (which typically reduce dynamic range further.)

So why shoot wider dynamic range for something that may only be seen on a TV, computer monitor or smart phone (all decreasing in dynamic range)?  The answer is somewhat obvious to professional users, as color correction is part of the workflow.  Color correction simply works better with more source information from which to choose the output range. Even the average consumer today is more open to color correction of an image thanks to the likes of Instagram filters. The more dynamic range you start with, the better such stylized looks can work.  Our own media team wasn’t using great tools like Red Giant’s Magic Bullet Looks until shooting Protune, which greatly increased the creative flexibility of the GoPro image output.

So why a log curve, rather than just reduced contrast with the regular gamma?  This is a trickier question.  The full dynamic range can be presented with the standard TV 2.2 gamma; it will look a little bland (flatter or milkier), just as log curves do on a TV without color correction, so gamma holds no aesthetic advantage over log.  Log curves do have an advantage over gamma curves when your goal is to preserve as much of the source dynamic range as possible for later color correction.

Some imaging basics:  Light hitting the sensor, and the sensor’s response to that light, is effectively linear (not the incorrect use of "linear" to describe video gamma that still seems to be popular.) Linear has the property that as light doubles (increasing one stop), its sensor value doubles.  With an ideal 12-bit sensor, ignoring noise, there are 4096 values of linear light.  After the first detectable level of light brings our ideal sensor from 0 to 1, a doubling of light goes from 1 to 2, the next stop from 2 to 4, and so on, producing the series 1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048 and 4095 of doubling brightness (to the point where the sensor clips.) An ideal 12-bit sensor has a theoretical maximum of 12 stops of dynamic range.  If we were storing this 12-bit data as uncompressed, it would be the most flexible data set (for color correction), yet this would be over 1000Mbits/s, compared with today’s standard 1080p30 mode on HERO2 at 15Mb/s – think how fast your SD card would fill, if it could actually support that fire hose of data. Fortunately it turns out that linear is a very inefficient way of representing light when humans are involved, as we see brightness changes logarithmically -- a stop change is the same level of brightness change to us, whether it is from linear levels 1 to 2 or from 1024 to 2048.  As a result, most cameras map their sensor’s 12, 14 or 16-bit linear image to an 8, 10 or 12-bit output with a log or gamma curve, exploiting the fact that we humans will not notice.  Even the uncompressed mode of the new Blackmagic camera maps its 16-bit linear output and only stores 12-bit with a curve – this is not lossless, but you will not miss it either. Lossless versus lossy is an argument you might have heard me present before, to the same conclusions.

If we remained in linear, converting from 12-bit to 8-bit would truncate the bottom 4 stops of shadow detail, and we would notice that.  So a conventional 2.2 gamma curve does the following with its mapping (top 5 stops shown):
  12-bit Linear input    8-bit Gamma 2.2 output    Codes per stop
  256                    73                        19
  512                    100                       27
  1024                   137                       37
  2048                   187                       50
  4095                   255                       88
So gamma curves don’t fully embrace a human visual model, with many more codes used in the brightest stop as compared with the darker stops.  The perfect scenario might be to have the 256 codes divided evenly among the usable stops, e.g. 11 stops would be around 23 codes per stop.  Remember, this is for an ideal sensor (i.e. noise free), and that is not going to happen.  The darkest usable stop is mostly noise, whereas the brightest stop is mostly signal; we need a curve that allocates our code words with this in mind.  
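A quick sketch that approximately reproduces the gamma table above (idealized; a real camera pipeline adds lift and knee handling, so the top stop in particular won't match exactly):

    prev = 128   # one stop below the first row shown
    for x in [256, 512, 1024, 2048, 4095]:
        out = round(255 * (x / 4095) ** (1 / 2.2))
        low = round(255 * (prev / 4095) ** (1 / 2.2))
        print(x, out, out - low)   # input, 8-bit output, codes per stop
        prev = x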

The top 5-stops of the Protune log curve:
  12-bit Linear input (idealized)    8-bit Protune output    Codes per stop
  256                                112                     33
  512                                146                     34
  1024                               181                     35
  2048                               218                     37
  4095                               255                     37

While the darkest usable stop has a similar number of code words to the gamma curve's, Protune distributes the codes more evenly over the remaining stops; more code words are reserved for shadow and mid-tone information. 
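The actual Protune curve is GoPro's own design, but a simple log2 fit to the published values (my approximation only) shows the near-constant codes per stop:

    import math

    def protune_like(x, codes_per_stop=36.5, white=4095):
        # Pure log2 curve; the real Protune curve differs, especially in the
        # darkest stops where noise handling matters.
        return round(255 + codes_per_stop * math.log2(x / white))

    for x in [256, 512, 1024, 2048, 4095]:
        print(x, protune_like(x))   # ~109, 146, 182, 219, 255 -- close, not exact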

While I glossed over this before: again, why not just have 23 code words per stop?  This has to do with compression and noise.  Noise is not compressible, at least without looking substantially different from the input, and the compressor -- H.264, CineForm or any other codec -- can’t know signal from noise.  So if too many code words represent noise, quality or data-rate has to give.  The Protune curve shown above produces smaller files, and is generally more color correctable, than using a fixed number of code words per stop. We determined the best curve to preserve dynamic range without wasting too much data preserving noise.

Side note for other RAW cameras: We have extended the knowledge gained while developing the Protune curve to calculating the best log curve for a particular dynamic range. This feature is now included in the commercial version of CineForm Studio (Windows versions of Premium and Professional), so that RAW camera shooters, from Canon CR2 time-lapse videography to Blackmagic CinemaDNG files, can optimize the log encoding of their footage.  Of course transcoding to CineForm RAW at 12-bit rather than 8-bit H.264 helps greatly, yet the same evening out of the code words per stop applies just as it does in the HERO2 camera running Protune.

Protune couldn’t exist as just a log curve applied on top of the existing HERO2 image processing pipeline; we had to increase the bit-rate so that all the details of the wider dynamic range image could be preserved. But we didn’t stop there.  As we tuned the bit-rate, we also tweaked the noise reduction and sharpening, turning both down so that much more natural detail is preserved before compression is applied (at the higher data rate required to support more detail.) Automatically determining what is detail and what is noise is a very difficult problem, so delaying more of these decisions into post allows the user to select the level of noise reduction and sharpening appropriate to their production.  I personally do not apply post noise reduction, happy working with Protune as it comes from the camera, adding sharpening to taste.  

The CineForm connection:  35Mb/s H.264 is hard to decode, much harder than 15Mb/s. So transcoding to a faster editing format certainly helps, and that comes for free with the GoPro CineForm Studio software.  Also, the new Protune GoPro clips carry metadata that CineForm Studio detects and automatically develops to look more like a stock GoPro mode, cool-looking and ready for show.  All these changes are stored as CineForm Active Metadata, are non-destructive and reversible, and are all controlled with the free CineForm Studio software.  GoPro is working to get professional features into the hands of the everyday shooter, and the CineForm codec and software is an increasing part of that solution.  

There is so much to this story, but I’m sure I’ve gone on too long already. Thank you for reading.

P.S. Sorry for the lack of sample images; Protune launched while I'm on vacation, and my internet connection is way limited at the moment. 

---

Added sample images in the next blog  Why I Shoot Protune -- Always!

Sunday, August 21, 2011

How did we do that?

As regular readers know, I have had a team in the 48 Hour Film Project every year since its beginning in San Diego. This year we came second in the whole competition, competing against a record 64 teams. We also received the audience award for our premiere screening, and best sound design. We do not have a professional team; we only do this once a year with friends and family. For example, our festival-winning audio was operated by a 12 year old who was an actress for us in previous years (thank you, Julianna.) The one exception is Jake Segraves (you may have corresponded with him through CineForm support), who does not quite have amateur status like the rest of us, with some real world production experience. Still, Jake and I shot two-camera scenes with our personally owned Canon 7Ds, with a single cheap shotgun mic on a painter's pole (attached with gaffer tape) recording to a Zoom H4n I got secondhand off a Twitter friend. The only additional camera for our opening scene was a GoPro time-lapse, shot while we were setting up for the first part of the shoot. This was not a gear fest; fast and light is key for extreme filmmaking within 48 hours.

As this is a CineForm blog, we of course used our own tools and workflow throughout this process. We used four computers: two regular PCs (Jake's and my office desktops,) an i7 laptop, and an older MacBook Pro (for end credits.) During the shoot day, whenever we moved location, we would quickly transfer video data, converting directly from compact flash to CineForm AVIs stored on local media storage. That data was immediately cloned onto the other desktop PC using a standard GigE network. Getting two copies fast is so important; we have had a drive crash during a 48 hour competition before. I used GoPro-CineForm Studio to convert the JPEG photo sequence from the GoPro Hero into a 2.5K CineForm AVI, and used FirstLight to crop it to 2.35:1 and re-frame it. By 1am Sunday morning we had completed our shoot, and ingested and converted all our media. One additional step that saved time for audio sync: I used a tool to batch rename all the flash media to the time and date of the capture, rather than Canon's MVI_0005.MOV or the Zoom H4n's default naming. Now all the imported media is named 11-35-24-2011-08-06.WAV or .AVI etc., making it very fast to find video and audio pairs within the NLE, without properly timecoded sources (see the sketch below.) Last year we used DualEyes to sync the audio with picture, which works great, yet you have to make a secondary intermediate file, which takes a little time; we found slating for manual sync plus the batch renaming to be a tad faster. This was the first time we tried slating everything, and it was certainly worth it.
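A hedged sketch of that renaming step (my Python restatement; the actual tool was an off-the-shelf batch renamer):

    import datetime
    import os

    folder = "ingest"                      # hypothetical media folder
    for name in os.listdir(folder):
        src = os.path.join(folder, name)
        ext = os.path.splitext(name)[1]
        stamp = datetime.datetime.fromtimestamp(os.path.getmtime(src))
        dst = os.path.join(folder, stamp.strftime("%H-%M-%S-%Y-%m-%d") + ext)
        if not os.path.exists(dst):        # don't clobber same-second captures
            os.rename(src, dst)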

Starting at 1am Sunday, the value of FirstLight really kicked in. One of the two Canon 7Ds' color temperature was way off; it seems my camera had overheated during 6 hours of operating under the San Diego sun (yet the other camera was fine -- any ideas on this, readers?) The color grew worse from take to take, yet was fine at the beginning of each new setup (weird.) I had to color match the two cameras BEFORE the edit began; anything too hard to correct would be removed from edit selection (but I recovered everything.) This is where FirstLight has no competition: I color corrected footage between 1 and 4am, for every take in the movie, without rendering a single frame, and without knowing which shots would make the final cut. The correction included the curve conversion from the CineStyle shooting profile to a Rec709 video gamma (Encode curve set to CStyle, Decode curve to Video Gamma 2.2), adjusting the framing for a 2.35:1 mask (images were moved up or down, others were zoomed slightly if needed -- boom mic or camera gear in frame, etc.) and adding some look/style to each scene. As the footage was already on the two editing desktops, we simply shared a Dropbox folder to carry all our color correction metadata. If you are not already using Dropbox with FirstLight, please learn more here: http://vimeo.com/10024749. Through Dropbox the color corrections were instantly on the second desktop PC, Jake's PC, for our primary editing. The correction data for the entire project of 302 clips was only 144,952 bytes -- way less than one frame of compressed HD.

I set up the edit with base clips for the second half of the movie before crashing out for a two hour sleep. Jake arrived refreshed with more like 5-6 hours of sleep to begin the real edit -- I was secondary editor working on some early scenes (several of which didn't make the final cut.) Editing was done in Premiere Pro 5.5 using the CineForm 2D 1920x1080p 23.976 preset. We had some effects elements, so once the edit was locked for a segment, Jake saved the current project, made a short segment for the effects elements, then did a project trim and sent that new small project and its trimmed media to Ernesto (our awesome lead actor and effects artist) running After Effects on my i7 laptop. The laptop was also sharing the color correction database via Dropbox. I loaded the latest full edit onto my PC (relinking media to the local data) while Ernesto was preparing the effects composition. I could now complete the color correction based on the edit Jake had completed around the effects area. Again we exclusively used FirstLight, as those color corrections automatically populate the AE comp. The trimmed media has the same global IDs as the parent clips -- which is why this works so well. Once the color pass was done (about 5 minutes is all I had time for with the pending submission deadline) Ernesto was done with the composition; we purged any cached frames so the latest color corrections would be used, then rendered out a new CineForm AVI for adding back to the edit.

This workflow resulted in very little data transfer and hardly any rendering for the entire project -- lots of speed without quality compromise. The only other renders were tiny H264 exports emailed to our composer Marie Haddad throughout the day as the edit was locking in, as she was scoring the movie from her home. The final eight minute movie took about seven minutes to export to a thumb drive (I got a fast thumb drive, as they are normally the slowest element.) We sent the film off to the finish line with 40 minutes to spare (a 30 minute drive.) We then checked what we had rendered out from a second copy (we render out from both desktops at the same time,) checking the audio levels, which were fine. If we had any audio changes, we would have rendered only the audio to a wave file (only seconds) then used VirtualDub to replace the audio (only a minute or so) -- you learn many shortcuts doing this competition for so many years. We sent a second thumb drive to the finish just in case, which was needed as the first car ran out of fuel (of all things?!) The second copy arrived with only 1 minute to spare.

Hope you enjoy our film.

Sunday, June 05, 2011

GoPro Hero 3D with LCD/viewfinder

Clearly there is no better 3D camera system for POV shooting than the GoPro Hero 3D kit. If you already have a couple of Hero HD cameras, adding $99 for the 3D housing, sync cable and accessories is a no-brainer -- you've got to do it. But what about non-POV, hand-held shooting? The 2D GoPro Hero HD allows you to add the LCD BacPac for simple point-and-shoot image framing, but the connector it uses (the HERO Bus™) is occupied by the sync cable required for 3D to work. So we need to use the camera's video out to drive another display.

I saw someone with a 7" Marshall monitor on a 3D GoPro at NAB, so I knew it could be done. I believe they had modified the camera, and I didn't want to do that; plus I wanted to spend much less on the screen. Also, a large screen is not needed for focus -- everything is in focus on a GoPro. I found the perfect screen on ebay.com, promoted as a "2.5" LCD WRIST CCTV CAMERA TESTER" with its own battery and NTSC/PAL video input, shipped for under $60.

The technical issue is that the video out is in-between the stereo paired cameras, but there is a little bit of room if you modify a cable and trim the 3D housing; the cameras are untouched. The video connectors are tiny, and I didn't have any of this size, so I hacked the video cable that comes with the camera, taking the connector down to its core by crushing the plastic connector exterior in a vice repeatedly until it basically fell off. Using wire cutters I trimmed off the solder pads for the audio (red and white lines) so only the solder pad for the video (yellow) connection remained. Now only about 3-4mm of the connector extrudes from the camera. I removed the BNC connector from the cable that ships with the 2.5" LCD and soldered the video and ground lines to the remaining connector elements.

To make this 3-4mm extrusion and the newly attached video cable fit, I trimmed a 'V' shape out of the plastic wall that separates the two cameras, using a pair of tin-snips or maybe garden shears (whatever was lying around did the job great.)

To mount the LCD, everything needed comes with the camera or 3D housing. I used a flat sticky mount on the back of the LCD (on the lid of the battery compartment) and used the multi-jointed mount from the 3D kit to attach the LCD to the 3D rig. This allowed for nice, controlled placement of the LCD.

At this point I've only spent $60 on the LCD and used exclusively parts and accessories that came with the camera/3D housing. To make this one step better, I used a spare magnetic LCDVF mount, so I can share my viewfinder between my Canon 7D and my new 3D rig. This has been so much fun to shoot with.