Realities of using RED camera


At Cinematography.com there is an interesting ongoing discussion that anyone thinking about using the RED should pay attention to. You can find the entire thread at this link: http://www.cinematography.com/index.php?showtopic=44231&st=100&start=100

One of the users has a list of potential "gotchas" that anyone considering the RED for a project should take into account:

quote:


Revised 6-25-09
Informal Notes on the Red camera:

I've been keeping up with the Red camera on the internet, at the ASC, and at various trade shows and vendor presentations. It's a fast-moving target. These notes eventually became excessively long and disorganized. The purpose of this revision is to pare them down (hopefully without omitting anything that'll bite you in the tush) and organize the issues in a more reasonable and useful order, rather than in order of discovery.

Bottom line, our answer to shows that want to use the Red is "Yes, but you have to do your homework first." Here's the part of that homework assignment that I can provide: a summary of reported issues from actual users. They're mostly production rather than post things:

1. Dynamic Range: Red's range is limited, even compared with other digital cameras. Genesis, D-21, and F-35 give you a couple more stops in the highlights before they clip. You may have to set more nets and fill light with Red than with other cameras. Because the raw recording philosophy moves all color correction to post, it may take longer to time. You need to test this thoroughly before doing your first Red job.

2. Counterintuitive ASA settings: You get better shoulder handling/highlight detail at the expense of more noise in the shadows by going to a *higher* ASA setting on the Red, and vice versa. The ASA setting on the camera isn't at all like loading a film stock of a particular ASA. The sensitivity and color balance of the camera never change; the setting controls the level where the false color indicators come on. You could think of it as if it were the ASA setting on a light meter instead. Strictly speaking, ASA is based on film density curves and isn't even defined for the Red. Many have found 320 to be overly optimistic and rate it lower, say 200.

3. Neutral Density/Infrared: The camera is sensitive to infrared in addition to visible light. Conventional ND filters are neutral across the visible spectrum but pass a great deal of IR. Use enough ND (roughly 0.9 or more), and the ratio of IR to visible light gets high enough to cause visible and unpredictable results, primarily color shifts in specific objects. Formatt and Tiffen now make combined IR/ND (hot mirror) filters specifically to solve this problem.
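To see why heavier ND makes this worse, here's a minimal sketch; the transmission numbers are illustrative assumptions, since the actual IR leakage of a given filter varies by manufacturer:

```python
# Why heavy ND shifts the IR-to-visible balance (illustrative numbers only;
# real IR leakage of a given ND filter varies by manufacturer).
nd = 0.9                       # ND 0.9 = 3 stops of visible attenuation
visible_trans = 10 ** (-nd)    # ~12.5% of visible light passes
ir_trans = 0.90                # assumed: the ND dye barely touches IR

print(f"visible transmission: {visible_trans:.3f}")
print(f"IR/visible ratio rises ~{ir_trans / visible_trans:.0f}x behind the filter")
```

Stack enough ND and the sensor's IR response starts to dominate in certain objects, which is the usual source of the color shifts mentioned above.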

4. Storage and Backup: With CF cards, you get 4 minutes on an 8 Gig card, or 8 minutes on a 16 Gig card (a back-of-envelope data-rate estimate follows after this item). There's a hard drive, but it has reliability problems in handheld use. The motion of the camera can be more than the head arm can stand. The drives are not RAID protected and have had non-motion-related failures. They should be backed up frequently; don't put too many eggs in that basket. With a special vibration mount, the hard drive has been used successfully on a helicopter.

With either cards or drives, you need some kind of backup station. The best solution at the moment seems to be a Mac computer with card readers feeding a big RAID array, plus some SATA shuttle drives and/or LTO tapes. You need an assistant to watch every take as it backs up and alert you immediately in case something disastrous happens. So, it's a full-time job.
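For planning purposes, the capacities quoted above imply a data rate you can sanity-check. A rough sketch (the 10-hour figure is just a hypothetical shoot load; actual REDCODE rates vary with settings):

```python
# Back-of-envelope data rate implied by "4 minutes on an 8 Gig card".
# Actual REDCODE rates vary with resolution and compression settings.
card_gb = 8
minutes_per_card = 4

bytes_per_sec = card_gb * 1e9 / (minutes_per_card * 60)   # ~33 MB/s
print(f"~{bytes_per_sec / 1e6:.0f} MB/s (~{bytes_per_sec * 8 / 1e6:.0f} Mbit/s)")

hours_shot = 10   # hypothetical day's footage
tb_needed = bytes_per_sec * hours_shot * 3600 / 1e12
print(f"~{tb_needed:.1f} TB per {hours_shot} hours, before any backup copies")
```

Double or triple that once the RAID copy and the shuttle-drive or LTO copies are counted, which is part of why the backup station is a full-time job.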

5. Overheating: This used to be a big problem, and may still be if you shoot a lot of long takes in a hot environment. It seems to vary from body to body. A white barney is the wrong idea, because the problem is getting rid of heat that the camera generates internally, not heat it absorbs from outside. It needs shade and air flow, not insulation. Cold packs have been used in extreme cases.

6. Rolling Shutter: This is generic to all CMOS cameras. The term is also a little misleading, because there's no pulldown, and therefore no need to actually physically shut anything.

Suppose we have a film camera running 24 fps with a 180 degree shutter, and a CMOS camera shooting 24p with a 1/48 second exposure time. Any point in the aperture of the film camera sees image light for 180 degrees, and darkness for 180 degrees. Any one pixel on the CMOS chip is sensitive to light and accumulating charge for the equivalent of the same 180 degrees, and then it's read out and inactive for the rest of the cycle. Those are the similarities.

The big difference is that the edge of the shutter blade on the film camera spends maybe 45 to 60 degrees passing over the aperture, while the CMOS camera is designed for a continuous, uniform flow of data from the chip, so the readout "edge" takes the whole 360 degrees to sweep over the whole image and start over. Therefore, worst case, the top and bottom of the film frame are 60 degrees apart in time, while the top and bottom of the CMOS image are nearly 360 degrees apart, six times as long. This can introduce some subtle distortions. Things that move rapidly in the horizontal direction take on a kind of leaning effect; car wheels, for instance, become slightly oval. Handheld footage gets a strange rubbery look, sometimes called jell-o-vision. High-vibration situations, such as vehicle mounts on rough roads or at high speeds, can look even more rubbery.
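Putting numbers on that comparison (a small sketch using the figures from the paragraph above; actual readout times vary by camera and mode):

```python
# Top-to-bottom time skew: film shutter edge vs. CMOS rolling readout.
fps = 24
frame_period = 1.0 / fps                # one frame = 360 degrees

film_sweep_deg = 60                     # shutter edge crossing the gate (upper estimate)
cmos_sweep_deg = 360                    # readout edge sweeps the whole frame

film_skew = frame_period * film_sweep_deg / 360     # ~6.9 ms
cmos_skew = frame_period * cmos_sweep_deg / 360     # ~41.7 ms
print(f"film: ~{film_skew * 1e3:.1f} ms, CMOS: ~{cmos_skew * 1e3:.1f} ms "
      f"({cmos_skew / film_skew:.0f}x worse)")

# Matching the film camera's skew in continuous rolling mode would require
# reading the chip six times faster, the 144 fps equivalent noted below.
print(f"required readout rate: {fps * cmos_sweep_deg / film_sweep_deg:.0f} fps")
```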

Those distortions are mostly subtle. The big problem happens if we have a very brief bright flash of light, one that goes on and off very quickly rather than fading up and down. This is trouble if you have stuff like nightclub or party strobes, night scenes with police or fire vehicles, and some kinds of muzzle flashes.

Do a bunch of flashes at random, and there's some chance that you'll catch the film shutter edge during its rapid pass over the aperture. But mostly they'll either hit when the shutter is open, and go on the film, or hit when the shutter is closed, and go into the viewfinder. (That's something to watch out for when you operate: if you see the flash in the finder, you *didn't* get it on the film, and vice versa.)

On the CMOS camera the "start" and "stop/readout" edges are always in the frame somewhere. So, no matter when the flash happens, you're going to catch the "shutter" edges. To fix that, you'd have to sync both the beginning and end of the flash to the camera, with the flash starting when the readout edge is at the top of the frame, and ending when the turn-on edge is at the bottom of the frame. In this example, the flash would have to have a duration of 90 degrees or 1/96 second.

Another difference is that the CMOS "edges" are absolutely horizontal and pixel-boundary sharp. The film shutter being between the lens and the film casts a soft-edged shadow, and is only horizontal at the middle of the frame, sweeping across the rest of the frame at an angle. (Or it's vertical at the middle in cameras with the shutter under rather than alongside the gate).

Another approach would be to read the CMOS out faster, and pause between frames, which would be more film-like. But to get the same time difference as a film camera, you'd have to read the chip six times faster, which would be equivalent to being able to shoot 144 frames per second in continuous rolling mode. That's not cheap or easy, which is why it doesn't happen in real world cameras.

7. The Black Sun issue: The sensor shuts down individual pixels that are severely overloaded. It was discovered in day exteriors, in which the sun appears as a small black circle. This has been improved but not eliminated. It's a fairly easy post fix, but alas not a freebie.

8. Max Mode: Avoid Max mode unless the camera says to use it. It's for extremely complex, difficult-to-compress images. It makes everything downstream go much slower.

9. There have been a few total failures of the camera reported, so it would be wise to have at least one spare body on the truck. The bigger the show, the more spares you should carry.

10. Codec Error: Very seldom, the camera shuts down with this error message. You lose the take and have to re-boot, but the camera isn't totally dead.

11. Don't Mix Builds: Test end to end with the firmware build you'll use, and don't change builds during a shoot. New builds sometimes break post-production software (Build 17 did this).

12. Firmware Backup: It's recommended to always have a copy of the firmware build you're using on a card so it can be re-installed in the field. (1-09)

13. Booting: The camera takes a long time to boot up, about 1.5 to 2 minutes. That wouldn't be so bad if it didn't have to be re-booted every time you change batteries. One solution is a hot-swap adapter; they're commercially available for both block and V-lock batteries. One DP who shot with it says that the solution is big batteries: you can boot in the morning and swap batteries at lunch, making it much less of a problem, provided that you live on the dolly all day. Current builds now give you an image about 10 to 15 seconds into bootup, so you can see to start setting up. That makes this issue less important than it was. There's a report of the camera failing to boot with both the viewfinder and LCD screen plugged in. Unplugging the LCD fixed that.

14. Battery Indicator: Batteries from vendors other than Red will work, but the Red can't give you an indication of how much life they have left. You have to go by a meter on the battery.

15. The connectors on the camera are non-standard, fragile mini-BNC and mini-XLR types. It needs a breakout box or a bunch of pigtails. Breakout boxes are readily available as aftermarket accessories. Hot swap could also be built into a breakout box.

16. You can now mix different frame rates and 2K/3K/4K on the same card. The time base setting, though, does have to be the same for the whole card (23.976, 24, 25, or 29.97). You have to reformat the card to change its time base.

17. The run-stop button is close to the user-assignable buttons, and it's easy to accidentally press the wrong one.

18. Early firmware didn't do 16:9, only 2:1. On such cameras, it's necessary to frame for cropping in post. More recently (10-08), with Build 17, they've added a mode called 4K HD which does true 16:9, using most but not all of the chip. It's really quad-HD: 3840 x 2160 photosites. Reports are that this works very well, and because the scaling to HD is exactly 2:1 rather than a long decimal fraction, render times are about 40% faster and the images are sharper (the scaling arithmetic is sketched after this item). This mode only works with 16 gig cards or larger.

For us in TV, this is the best mode for most purposes. The only exception would be for higher frame rates, which are only available with smaller active areas. 3K mode is acceptable, but the 2K mode doesn't have adequate resolution. (2-09)
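The render-speed and sharpness claim comes down to simple scaling arithmetic; a quick illustration (widths as given above):

```python
# Why quad-HD (4K HD mode) downscales to 1080p more cleanly than full 4K.
full_4k_width = 4096    # photosites across the full chip
quad_hd_width = 3840    # 4K HD mode
hd_width = 1920

print(f"full 4K -> HD: {full_4k_width / hd_width:.4f}:1")   # 2.1333:1
print(f"quad-HD -> HD: {quad_hd_width / hd_width:.1f}:1")   # exactly 2:1
```

At an exact 2:1 ratio every output pixel maps onto a fixed 2x2 block of input, so the resampling filter is trivial; at 2.1333:1 every row lands at a different fractional phase and costs more math.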

19. You can't pre-slate an MOS take like you can with film. The slate would be stored as its own separate clip.

20. The savings are illusory: The camera body may be only $17,500, but it's like a Barbie doll: the rest of the stuff you need pushes the total price up into the same region as some conventional 2/3-inch cameras.

21. Offline editing: The broadcast-quality output from the camera is in the form of compressed raw Bayer images in their proprietary .R3D file format. Red started out working with Apple to build some compatibility into Final Cut Pro. With FCP, you can edit immediately using Red-generated proxies, but at less than full HD resolution. For better resolution, or to use Avid, the raw Red files have to be rendered, which takes a lot of computer time. Prior to Build 17, full 4K to HD at full quality took 25 hours to render one hour of material. One workaround is to work at lower resolution offline and render only the selects for online. But even working with Build 17, it still takes 15 times the running time to render. This isn't just a matter of building an extra day into the schedule; it's also a facility scheduling bottleneck. (1-09): Some facilities can now render multiple takes simultaneously, thus reducing the overall time to get dailies out.
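To see what those ratios do to a schedule, a rough sketch (the footage load and station count are hypothetical):

```python
# Dailies turnaround implied by the render ratios quoted above.
hours_shot = 5                                   # hypothetical day's footage
ratios = {"pre-Build 17": 25, "Build 17": 15}    # render hours per hour of material

for build, ratio in ratios.items():
    total = hours_shot * ratio
    print(f"{build}: {total} render-hours ({total / 24:.1f} days on one station)")

# Rendering takes in parallel divides wall-clock time by roughly the
# number of stations, which is how facilities now get dailies out sooner.
stations = 4
print(f"{stations} stations, Build 17: ~{hours_shot * 15 / stations:.0f} hours")
```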

Red has recently released a software development kit, which will allow other vendors to work with the same inside information that Apple has had.

22. Sub-Prime Lenses: The introduction of thousands of new PL-mount cameras has generated an unexpected demand for lenses. One major rental house restricts their best glass to customers who rent their cameras, so as not to have bodies left on the shelf for lack of lenses. They're keeping older lenses in their rental inventory just to serve the Red market.

23. Dropped Frame Counter: The camera sometimes doesn't get all the frames recorded. When this happens, a little red square appears in the finder giving the number of missing frames. It seems not to happen often, but when it does, you need to shoot another take. (12-08)

24. Green Screen/Blue Screen: Green screen works well, except with tungsten-balanced light. Ideally, work with HMIs at 5600K. If you have to go tungsten, hang at least an 80D filter. Forget about blue screen: the Bayer sensor has only half as much blue resolution as green, and the blue channel is the noisiest on any CMOS or CCD camera.

25. There are reports from a major video facility of blocky artifacts in the blacks, traced to problems in the Red post software. Going to .DPX files doesn't have that problem. (2-09)

26. For accurate time code, a Lockit or equivalent outboard box is recommended.

27. Color decisions should be set in the metadata for automatic transfer. .RSX files created on set will override the camera's color metadata.

28. Color gamut of the camera is limited, especially for saturated blues and violets. The green primary is also quite yellowish. So, if you need to distinguish between fairly saturated colors, test first. If your colors are subtle, there's nothing to worry about here. (2-09)

29. Color space on the HD-SDI monitor output is more limited than in RedCine or the RAW files. The color space you monitor in also makes a difference. Recommended gamma is Rec709 and color space is RedSpace. It's best to monitor on set in Rec709. If you monitor in RedSpace, you will probably underexpose a little too much and get a noisier image in post.

30. Set the "RAW" view mode to one of the user buttons so you can quickly see whether something is clipping in the actual raw, or just in the color space you are monitoring in.

31. The meaning of K: When Red refers to 4K, they're counting the photosites across their Bayer-masked chip. That's not the way the rest of the industry uses the term. 4K as used elsewhere refers to pixels, where each pixel is a complete three-color RGB data set for a single location in the grid. Red counts one color per location, not three. So, it's apples and oranges.
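A quick way to see the apples-and-oranges point in numbers (the 4096 x 2304 grid is an assumed 16:9-ish sensor layout, just for illustration):

```python
# Red's "4K" counts photosites; conventional 4K counts full RGB pixels.
width, height = 4096, 2304          # assumed Bayer photosite grid
photosites = width * height

# A Bayer mosaic records one color per photosite: half are green,
# a quarter red, a quarter blue.
green = photosites // 2
red = blue = photosites // 4
print(f"{photosites / 1e6:.1f} M photosites: "
      f"{green / 1e6:.1f} M green, {red / 1e6:.1f} M red, {blue / 1e6:.1f} M blue")

# A conventional "4K" RGB frame carries three full color samples at
# every grid point, i.e. three times the color data per location.
```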

32. The RedAlert software blows away the .RSX files from the camera and substitutes new default files. This can throw you into the wrong color space. (4-09)

From our point of view, the important things are a few mindset issues:

First, Red is not a video camera. Neither is it a film camera. It's a raw data camera. So the requirement to squeeze the large dynamic range of the sensor into the limited dynamic range of a digital video tape format no longer exists on the set. That means that the DIT is no longer making irreversible color and dynamic range decisions. What we need is a sort of second second AC. Transferring data from CF cards to a RAID array and SATA shuttle drives (or perhaps LTO tapes) isn't a high-end DIT function. It's really the traditional job of the 2nd AC, only using cards and drives or tapes instead of magazines and cans. It requires extreme care and organization, but not a whole bunch of tech knowledge. But the requirement to look at every take to be sure it's OK makes it vastly too time-consuming for the existing second.
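The careful-and-organized part of that card-wrangling job is essentially checksummed duplication. Here's a minimal sketch of the idea in Python; the paths are hypothetical, and any real offload tool does considerably more:

```python
import hashlib
import shutil
from pathlib import Path

def sha1sum(path, bufsize=1 << 20):
    """Hash a file in chunks so multi-gigabyte .R3D clips don't exhaust RAM."""
    h = hashlib.sha1()
    with open(path, "rb") as f:
        while chunk := f.read(bufsize):
            h.update(chunk)
    return h.hexdigest()

def verified_copy(card: Path, backup: Path) -> None:
    """Copy every clip off the card, then re-read both sides and compare."""
    backup.mkdir(parents=True, exist_ok=True)
    for src in sorted(card.rglob("*.R3D")):
        dst = backup / src.name
        shutil.copy2(src, dst)
        if sha1sum(src) != sha1sum(dst):
            raise IOError(f"verify FAILED on {src.name}: do not wipe this card")
        print(f"OK {src.name}")

# Hypothetical mount points for the card reader and the RAID array:
verified_copy(Path("/Volumes/RED_CF_001"), Path("/Volumes/RAID/day01/card001"))
```

The eyeball check of every take is the part no script replaces, which is why it still amounts to a full-time position.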

The video that comes out of the Red should be treated just like a video tap on a film camera, only a very, very nice video tap. It is possible to render single frames on location at full resolution to check focus and the look of the image. The DP can create lookup tables (LUTs) on set to show post something closer to the desired final color timing. Nothing is baked into the raw .R3D output, but it is baked into the viewing proxies.

From the point of view of the DP, Red shoots more like reversal film than any previous technology. It's like film, only without the headroom of negative.

-- J.S.


quote:


That list needs a bit of updating now that there is the new MX sensor -- sensitivity and dynamic range have increased, noise reduced, the "black sun" sensor protection artifact is no longer an issue (so I've heard), and the read-out time of the CMOS sensor has been shortened so that rolling shutter artifacts have been reduced.

Don't know if the IR sensitivity is still the same.

Also, there is the option of solid state drives -- RED RAMS -- if you don't want to use the HDD RED DRIVES.

Of course, workflow issues still have to be worked out, but there are more post houses that are Red-experienced now.

Without having done my own testing, I would say that the new MX-sensor in the Red now puts the camera on par with the F35 and Genesis in terms of dynamic range, and probably is even higher in sensitivity and/or lower in noise. In other words, I don't see any image compromising involved in making a choice between these so the only issue for TV production (your area of expertise) is the data-centric capture, post workflow, and archiving issues and costs.

Now that it is pilot season, it's been interesting in job interviews for me to hear which networks or producers are interested in working with the Red and which are dead-set against going that route due to the lack of an HDCAM-SR workflow starting with image capture.

--------------------
David Mullen, ASC
Los Angeles
http://www.davidmullenasc.com


Brian Dzyak
Cameraman/Author
IATSE Local 600, SOC
http://www.whatireallywanttodo.com
http://www.realfilmcareer.com


 
Posted : 09/02/2010 9:05 am