3D Imaging Principles
Continued from Chapters 1 & 2 ...
Jijo. 27 March 2010. (revised August 2018).
(Article by the director of S3D movie My Dear Kuttichathan / Chotta Chetan )
illustrations by Narayana Moorthy.
A schematic representation of shooting and monitoring in 3D Digital (2004).
720p, 60fps (genlocked, progressive) dual HD SDI output
red denotes the right eye stream, blue the left eye stream.
Stereographer's SD Monitor
for setting convergence with 3D reticle
Dual Projectors Stereo aligned
left eye beam -45deg polarised
right eye beam +45deg polarised
left, right streams onto two systems
HD to SDTV
for compositing left/right images on the convergence chart
HD aspect ratio Silver Screen
Shooting 3D Digital in 2004 was cutting-edge technology. We started our design and development with improvised 3D beamsplitter rigs. As early as 2004, we had a couple of custom-built studio rigs made by the ride fabricators of our amusement park. Down the years, with the arrival of newer and newer digital formats, 3D rig designs have proliferated. Today many special-purpose 3D rigs are available on hire.
Digital 3D Imaging (2004)
3D Rig - MRT Prototype Ark of the Covenant
Design by RealFun. Fabrication by SICCO Engineering. Chennai.
This requires two cameras to be mounted on a beamsplitter 3D rig − much the same as the dual film cameras of the 1950s ... except that today it is digital cameras. The advantage of this ‘dual stream digital imaging’ over the ‘single filmstrip format’ we used for Kuttichathan (1984) and Magic Magic (2003) is the possibility of using any professional film lens (in matched pairs) of different focal lengths.
Another advantage is the variable interocular, which gives better control over 3D (such as the possibility of extreme close-ups and the use of miniatures – which are not possible with single-lens 3D). It is also possible to monitor 3D on location with 3D-Ready TVs and compatible glasses (refer to the schematic diagram above).
For actual theater simulation, this requires setting up a mobile dark cabin on a trailer, which also houses your media storage devices. With such a visual reference for every shot, the stereographer would be able to convince the director and the cinematographer about his calculations for positioning things in 3D space.
Digital Stereoscopic 3D commercial
was executed from concept to exhibition in 9 days
India's first 3D commercial
Whirlpool 3 door fridge
Dual RED Cameras, Cooke lenses
Shot at Chennai 20 Feb 2010
Premiered Dubai Atlantis 4 March
during product launch.
Senior Brand Manager: Neelima Burra (Client)
Agency: Draft FCB Ulka – Delhi
GM Draft FCB Ulka: P.Shridhar Iyer
Creative: Mukesh Kumar
Production house: Tellywise
3D Team: Navodaya
Producer: Bindu Akash (Tellywise)
Director: Shiva (Tellywise)
Art Director: Kiran
3D Technical Head: Jijo Punnoose
(Director of India’s first 3D film “My Dear Kuttichattan”)
3D Stereographer: Nambiathiri
(Stereographer, films “Chotta Chetan”, “Magic Magic”)
3D Stills: Jainul Abdeen
DIT / 3D Encoding & Playback: G.Balaji
Post Production: Red Post. Online: Nawaz (Red Post)
CG: Shafi (Red Post)
Navodaya Team: Robert, Tony Adrial, Jai Sankar,
Kamaldas, Santosh, Abinandhan, Preethi
Camera: Two Red ONE
Genlock: Black Magic Sync Generator Mini Converter
3D Rig: Custom made MRT 3D Rig
Rig Type: Beam Splitter (side by side)
Converters: AJA HI5 (HDSDI to HDMI converter)
Projectors: QUBE E-Cinema HD Projectors (2 Nos.) [For Live 3D]
Optima 720 HD Projectors (2 Nos.) [ For Stereographer]
Camera specs :
Software version: build 21.4.1
Resolution: 4K 2:1
Frame guide: 16:9
Frame Rate: 24 fps
Connectivity: USB (Master / Slave) [ Between Two Red One]
XLR (Jam Sync) [ One camera TC out to Other Camera]
Computer: MacPro 2.66 GHz, 8 GB RAM, 3 TB HDD /
MacPro 2.53 GHz, 3 GB RAM, 2 TB HDD /
MacBook Pro 15″ 2.4GHz, 2 GB RAM, 320 GB HDD
Display: 24″ Apple Cinema Display (2 Nos.)
Converter: Matrox Triple Head 2 Go (2 Nos.) [side by side projection]
Operating System: Mac OS 10.6.2, Win XP & Win XP (64 bit)
Edit: Avid Media Composer 4.0.5
Compositing: Adobe After Effects CS4
3D Convergence: Adobe After Effects CS4
3D: Autodesk Softimage
Online: Avid DS Nitris
File Type: R3D / DPX Linear
CG generated and Composited
Final Delivery :
Resolution: 720p 24fps (HD delivery) / 1080p 24fps
(digital cinema delivery)
3D format: side by side
Location: Atlantis Hotel, Dubai
Projectors: Christie 1080i Projectors (2 Nos) [Linear Polarized]
Silver Screen: 40 feet wide
Viewers Stereoscopic :
Polarizer: Linear Polarizer
Glass: Linear Polarizer
The disadvantage of shooting dual stream 3D Digital is that a rig has to be used for mounting the two cameras. This means many of the available camera support systems cannot be used for shooting 3D. This can be a major concern for filmmakers who are used to improvising shots on the sets. For example, you just cannot decide at the shooting spot to put your camera on the side of a car. If such a shot is called for, it should be planned in the storyboard session, and a rig support and mounting system should be organised for a camera car.
Film to Digital scan - 3D Imaging
ONE 35mm FRAME. 4K SCAN (Gemini Lab, Chennai)
Scanned Image Pairs - total 145200 frames.
Action-set automation split Left/Right on
Dual Image Stream
encoded Qube Stereo
Chotta Chetan digital restoration year 2010
Scan, DI - Gemini Color Laboratories. Image restoration - PIXION. CG Animation - Indian Artists.
Digital Cinema Prints by RealImage.
Shown above is a schematic representation of how pairs of Stereovision left/right images - a total of 145200 frames of the film Chotta Chetan - were scanned at 4K at Gemini Lab, Chennai (resolution 4096 x 3112).
Each image was split into halves (4096 x 1556)
with an Action-set on the Photoshop application at Navodaya, Chennai.
The scan parameters were calculated and programmed on the Gemini ARRI Film Scanner by Jainul Abdeen - Navodaya stereographer - such that when an image was split into two halves, they matched exactly in vertical alignment. This was possible only because the Stereovision dual image is composed symmetrically with respect to the four perforations of the 35mm negative. These sets of values - the scanner frame setting, the number of lines scanned and the split point in the Photoshop application - are unique to a 3D film negative. For a different negative, the values need to be recalculated. This would be the same as setting up an (Oxberry) Animation Film Printer in the good old days.
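For illustration, the over/under split can be sketched in a few lines of Python. This is a hypothetical stand-in for the actual Photoshop Action-set automation; the function and variable names are mine.

```python
# Sketch of the over/under split described above (a stand-in for the
# actual Photoshop 'Action-set' automation, not the production tool).
# Each 4096 x 3112 scanned frame carries the right-eye image in the top
# half and the left-eye image in the bottom half; splitting at the
# calibrated split point yields two 4096 x 1556 images.

SCAN_H = 3112
SPLIT_POINT = SCAN_H // 2  # 1556; in practice re-calculated per negative

def split_stereo_frame(frame):
    """frame: list of SCAN_H pixel rows. Returns (right_eye, left_eye)."""
    right_eye = frame[:SPLIT_POINT]  # top half (exposed first in camera)
    left_eye = frame[SPLIT_POINT:]   # bottom half
    return right_eye, left_eye

# Tiny demonstration: stand-in rows tagged with their row index
frame = [[y] for y in range(SCAN_H)]
right, left = split_stereo_frame(frame)
print(len(right), len(left))  # 1556 1556
```

The real job also had to compensate for per-negative differences, which is why the split point is a calibrated parameter rather than a constant.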
In fact, the scanner calibration for Kodak 5248 100T (camera negative) was different from Kodak 5248 color reversal film (internegative) stock. This is because, during the contact printing to make the 'master positive', there would have occurred a minuscule shift in the image centring of the pair of images with respect to the perforations.
The Film to Digital workflow is given below ...
A. Film Scan at Gemini on ARRI Scanner.
B. The scanned files were transferred to PIXION, Chennai for image restoration. The process took 200 digital artists about 45 days to touch up - frame by frame - and remove specks, scratches and blemishes on the negative.
C. After the above restoration, the files were transferred back to Gemini, where DI color correction was done - shot by shot.
D. The dpx files were transferred to Navodaya, Chennai where computer systems split the combined frame, as said before, in a process-assembly of ‘Action Set’ on Photoshop. Minor stereo alignment errors, vertical misalignment of lenses, shading of images and anomalous objects in individual shots were removed by Jainul’s team.
E. The files were transferred as left/right image sequences to RealImage for Digital Cinema Print encoding.
The QUBE encoding application picks up the matching pairs and encodes them symmetrically for the sequence across the entire reel.
Digitally enhanced, the digital images looked more brilliant than those of a film projection.
One complaint seasoned cinematographers used to make is the excessive sharpness of 'digital image' when compared to 'analogue film image'. But for 3D, the dictum has always been 'the sharper, the better'.
One other interesting observation I made from our film-to-digital conversion is that, on digital projection, the fastest of our F.P. shots (Fire Arrows, Stone, etc.) do not seem as startling as they did on film projection! I thought maybe it was the nostalgia of the good old days playing tricks on my mind ... before I did an actual comparison and found out the reason ...
The bottom image shows that the explosion - a pyrotechnic charge that propels the arrow towards the lens - has advanced in time by the time the shutter cuts it.
The bottom image shows that the arrow has moved further towards the lens, by the time the shutter cuts it.
The bottom image shows that the stone has moved further towards the lens, by the time the shutter cuts it. (NOTE - The Images are vertically inverted in a camera gate).
With its vertical-plane revolving shutter, an ARRI reflex 35mm camera exposes the top half of the frame first. This means the right-eye Stereovision image on the top gets exposed 1/48th of a second earlier than the left-eye image at the bottom. In a film projection system, this is fine ... because a film projector's shutter also functions in the same manner. [i.e., in the camera, the revolving shutter cuts from top down once the film, after being pulled down by the 'claw', becomes steady in the film gate. Likewise, the revolving shutter of the projector cuts the film gate from top down as soon as the picture is pulled down by the 'intermittent sprocket' and becomes steady in the gate.] But on digital projection, both images fall simultaneously (without the 1/48th second delay) on the screen. So there is a minuscule time-mismatch between film-originated 3D images when projected digitally. This is apparent only in very fast movements ...
as you see above in
a) a trigger charge that ejects the arrow,
b) final frame of arrow hitting the lens,
c) stone coming fast to hit the lens.
The way to avoid this is to use a horizontal plane shutter - as in an ECLAIR movie camera. But then, nobody anticipated a subsequent digital projection when this 3D movie was shot on film!
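The mismatch can be put into numbers with a little arithmetic. The object speed below is my own illustrative assumption, not a figure measured from the shots.

```python
# Back-of-envelope arithmetic for the shutter time-mismatch described
# above. With the vertical revolving shutter at 24 fps, the bottom
# (left-eye) half of the Stereovision frame is exposed about 1/48 s
# after the top (right-eye) half. Film projectors repeat the same
# top-down sweep, so the delay cancels; digital projection shows both
# eyes simultaneously, so a fast-approaching object appears displaced
# between the two eye images.

FPS = 24
EYE_DELAY_S = 1.0 / (2 * FPS)  # approx 0.021 s between the two halves

def stereo_lag_displacement_m(speed_m_per_s):
    """Extra distance the object travels between the two eye exposures."""
    return speed_m_per_s * EYE_DELAY_S

# e.g. a pyrotechnic arrow flying at an assumed 20 m/s
print(round(stereo_lag_displacement_m(20.0), 3))  # 0.417
```

Roughly 40 cm of spurious travel between the eye images explains why the fastest forward-projection shots lose some of their punch on digital projection.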
When shooting for 3D Cinema in Digital, and also while exhibiting them with Digital Projectors, there are certain guidelines to be followed. Most of them come from traditional 3D cinematography of celluloid film times. Let us discuss them before coming to Digital times.
There needs to be a stereographer on a 3D film project. A stereographer collaborates with the cinematographer as regards the lighting of the objects, the aperture value and the plane of focus (these become critical when attempting to bring objects 'off the screen'). The stereographer also (like the cinematographer) needs to consult with the C.G. Supervisor/ Art Director/ Editor during the pre-production and post-production phases of the project. It is the stereographer’s responsibility to see that all images are converged well - if not, it will give eye strain to the audience.
3D requires pre-planning. It requires guidance through scripting, production design, art direction, C.G., D.I., release printwork/ encoding ... and even up to theatre 3D alignment. The agency that provides the film company with 3D expertise should advise the personnel the company engages in the above areas of filmmaking.
Cost factor. More than the time spent and effort involved, cost is the deterrent to 3D filmmaking. Now, the real cost of your 3D film is not your 3D equipment or consultancy costs. From experience we have found that 3D takes 2½ to 3 times the execution time of a normal shoot. With the best of planning, this can be brought down to about 2 times. Also, to maintain maximum depth of field (the reason for this is explained later) it is mandatory to shoot at a minimum of T5.6 (after giving due allowance for the 50% division of illumination between the two images on the beamsplitter). Indoors, this means very high-key lighting. Outdoors, shooting digital, it means a lot of fill. In economic terms, every day the lighting budget is about twice that of an equivalent 2D production. Down the years people have tried to circumvent this cost with production alternatives, but with limited success.
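The beamsplitter's light cost can be made concrete with a quick sketch (the helper below is mine, not a production formula): a 50/50 beamsplitter costs one full stop, so lighting for T5.6 behind the rig is like lighting an ordinary 2D shoot at roughly T8.

```python
import math

# Effective T number the lighting must support, given the rig's light
# transmission (0.5 for a 50/50 beamsplitter = exactly one stop lost).
# T numbers scale with the square root of the light ratio.

def effective_t_stop(t_stop, transmission):
    return t_stop / math.sqrt(transmission)

print(round(effective_t_stop(5.6, 0.5), 1))  # 7.9
```

This one-stop penalty, on top of the already deep stop required for 3D, is where the roughly doubled lighting budget comes from.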
When you take up a special format for your presentation, you must have ingredients that justify the selection of the format. For example, a large format (like Imax) works only when it has nature’s wide-open vistas. Likewise, for 3D to work, it needs (1) subjects moving between the auditorium space and the screen space, (2) tightly composed subjects, well moulded in the x, y, z axes, fascinating to watch. True, one can always find instances in a script that bring forth 3D possibilities ... but I am afraid that alone would not be sufficient. One has to incorporate narrative elements that would work for sure in 3D.
Stereo Imaging - thematic possibilities and prerequisites
I am giving some examples here….
Flying / gliding objects work well in 3D. Kites, flags ... etc. If there are ribbons flying out in song sequences ... well, a director can work ahead from that.
There was once an Airtel ad (by JS) which had streams of digital numerals flowing in gentle waves from one subscriber to another. Such flowing subjects work well in 3D.
Below, I have used music score notations emanating from musical pillars of Hampi. Could even be swarms of bees from beehives or butterflies from flowers.
Exploding objects (captured high speed) work well in 3D.
Water sprays, soap bubbles, shattering objects, fireballs, flame jugglery … etc. can be used as part of people’s activities.
C.G. Live Action blend
In today’s cinematic visualization, C.G. and live shots cannot be segregated. Shots should have a seamless blend of live action and special effects. That is what makes the visuals magical. For instance, here is a cricket shot which may find its excuse as being part of a community activity.
To portray the drama of a ball hitting the stumps (and coming out into theatre space) it would require a combination of the following elements -
1) Live shot of the ball being bowled and the batsman hitting it
2) CG generated ball taking the trajectory we desire
3) Mechanical rigging for the stumps and bails to smash and scatter ... at the moment the ball hits them.
4) Pyrotechnics for particles to fly (motion slowed adequately)
5) Further particles added in CG so as to fly into the theatre space.
Only in such combinations does the magic work.
Thematic appropriateness for 3D
Giving thousands of such ideas is one thing ... but finding thematic sensibility is something else. How the master filmmaker Alfred Hitchcock made 3D sensible in his Dial M for Murder (1954) is an example of thematic appropriateness.
Incidentally, Hitchcock never intended his classic to be shot in 3D! Eventually, it was not even released in 3D!! But Warner Bros., at the height of the 3D fad, browbeat the master to shoot the film in 3D.
In the film Dial M for Murder, the ‘Hitchcock twist’ occurs at the plot point where the hired killer C.A. Swan (played by Anthony Dawson) gets killed by the intended victim Margot Wendice (played by Grace Kelly). As Swan tries to strangle Margot, she reaches behind to grab a pair of scissors and stabs her would-be assassin to death.
For this dramatic moment, here is how Hitchcock made use of Forced Perspective in 3D.
At the point of blackout, from her bed the struggling heroine manages to reach out behind her - arm extended right into theatre space towards the audience - to grab the scissors before stabbing her assailant!
Technical pertinence for 3D
I have discussed creative appropriateness (an abstract parameter, always open to the beholder’s interpretation) in 3D filmmaking. Now I am on to technical factors (which are exact science ... well, almost) in 3D filmmaking.
In general, classic photography - the kind you see in films like The Sound of Music or those shot by cinematographer Freddie Young, is safe for 3D Imaging. Sharp, saturated and well-lit. That was what Chris Condon advised me during my first lesson with him. Let us understand those tried and tested rules of film cinematography before we break them and write new rules in the Digital era.
In 2D cinematography, it is always the DOP’s call how sharp the image ought to be. It is a creative decision where the focus should be and how unfocused or blurred other areas need to be. Over the years, an entire industry developed around manufacturing filters to selectively diffuse the photographic image. Sure, but when shooting 3D ... aah, with apologies to every self-respecting DOP, it has to be stated that all areas and objects in your image should look as sharp as possible ... the maximum sharpness possible.
Before anybody screams murder, let me make the case.
Unlike 2D imaging, stereo imaging has to fool the viewer’s brain so that what is perceived before his/her eyes seems real. In real life, whenever you try reading in low light, or whenever you look hard to distinguish objects in dying sunlight, you tend to get a headache or eyestrain. The same is the case with 3D.
In real life, when we 'look' at a subject, our eyes converge on it ... and automatically focus on it. This is instinctual. It would take terrible effort and intense training to focus on one plane while converging on another. But while viewing a 3D image, the viewer has the freedom to roam the eyes all over the X, Y and Z axes and take in information. Hence, in 3D, everything upon which the viewer chooses to converge should also hold focus. This is possible only if everything in the image pair is in the best possible focus.
In 3D cinematography, how do we realise that?
While shooting 3D, focus on the main subject. Then, to increase the depth of field, use the highest possible T stop for the lens aperture (ideally between T5.6 and T11). Even if there are dark areas in the field, make sure they are lit so that some information, however small, gets registered in the recorded image.
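Why stopping down keeps more of the scene sharp can be seen from the standard hyperfocal-distance approximation. This is a sketch only; the 0.025 mm circle-of-confusion value is my assumed figure for 35mm-format work, not one from this article.

```python
# Standard thin-lens hyperfocal approximation: everything from half the
# hyperfocal distance to infinity is acceptably sharp. Stopping down
# from T2.8 towards T11 pulls the hyperfocal distance in dramatically,
# so far more of the 3D scene holds focus.

def hyperfocal_m(focal_mm, f_number, coc_mm=0.025):
    """Hyperfocal distance in metres (coc_mm = circle of confusion)."""
    return (focal_mm ** 2 / (f_number * coc_mm) + focal_mm) / 1000.0

for stop in (2.8, 5.6, 8.0, 11.0):
    print(stop, round(hyperfocal_m(35.0, stop), 1), "m")
```

For an assumed 35mm lens, the hyperfocal distance falls from about 17.5 m at T2.8 to under 5 m at T11, which is exactly the working range the T5.6 to T11 guideline aims for.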
Clarity of an image is associated with 1) image sharpness, 2) image resolution, and also 3) intensity of illumination. The principle behind good 3D imaging is to provide the best possible of the above three during shooting as well as during exhibition.
To have adequate separation between objects and the background, you need sufficient contrast. Adequate contrast enhances three-dimensional depth. With too much contrast, there is the problem of ‘ghosting’ at places where the images overlap. Stereo imaging has always had the characteristics we associate with the first generation of chemical films and the earliest of video tapes - meaning low latitude and high contrast. This has to do with the limitation of the 3D glass filters’ polarising efficiency/shutter speed as well. It is yet to be seen how things improve with technological advancement.
One other factor that can provide separation between an object and the background is highlight. In photography this is achieved with backlight, morning/evening light in the outdoors and cross-lighting in the indoors. Same with 3D cinematography. Now, a ‘burning highlight’ is sometimes an artistic rendition in 2D cinematography. But not so in 3D. If a highlight is more than 4 T stops above the subject lighting, ‘fringing’ or ‘image leak’ occurs. This fringing, as noted before, would be acute if adjacent areas overlapped by the subject and its highlight happen to be still darker. The further the parallax, the worse the ‘leak’.
To sum up, too much contrast is not good for 3D. This, as said, has much to do with the limited efficiency of polariser glasses, which are bound to start ‘leaking’ when the latitude becomes more than 4 stops.
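The 4-stop rule of thumb can be expressed as a simple check. The threshold comes from the text (4 stops is a 16:1 light ratio); the helper names and example levels are mine.

```python
import math

# 'No more than 4 stops above the subject': each stop doubles the
# light, so the limit corresponds to a 2**4 = 16:1 highlight-to-subject
# ratio before polariser leak ('fringing') becomes a risk.

def stops_above(subject_level, highlight_level):
    """How many stops brighter the highlight is than the subject."""
    return math.log2(highlight_level / subject_level)

def risks_fringing(subject_level, highlight_level, limit_stops=4.0):
    return stops_above(subject_level, highlight_level) > limit_stops

print(risks_fringing(100, 1000))  # ~3.3 stops -> False
print(risks_fringing(100, 2000))  # ~4.3 stops -> True
```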
Warm colors towards the foreground and cooler colors towards the background work well in enhancing depth. It has to do with our association of blue skies and green trees with far distances. We are also familiar with warm objects - fire, for example - nearer to us. It seems that while our brain evolved, nature imprinted in it the Rayleigh scattering of the light spectrum - by which distant objects seem blueish. Whatever the case ... it is my personal experience that warmer objects (a red ball, a fire arrow) seem to come ‘off the screen’ into the auditorium space more than cooler objects do.
Creating separation with contrasting colors works well in 3D. Costumes, flowers, balloons, lights ... you name it. Yet too much contrast between adjacent colors can cause ‘fringing’. It is good to be ‘gaudy’ and colorful for 3D’s sake. But again, aesthetics is something else.
Texture, Smoothness, specular-highlights, etc.
It is always ideal to have textured surfaces in 3D. On a textured surface, with the incident light playing different shades on it, the surface and hence its plane get better defined to our eyes. This is good for 3D. A detail-less surface (such as a plain white wall or a smooth metallic surface), when composed with a subject, may look artistic in 2D. But if seen in 3D, the same would leave our brain wondering "what is happening there?" So it has been our practice to make the art directors shoot textured/smooth surfaces of their choice and see for themselves how they look in 3D. The art directors would invariably select dresses, props, artifacts and backgrounds with maximum textural/design details on them. (An example - for the cellar scenes in our 3D films, we had the background walls covered with flaking masonry, mossy growths, cracks and creases. The flooring was either bevelled wooden planks or textured terracotta tiles.) There have been notable exceptions; but it is better to do some tests before you land in serious trouble.
3D silver screens are never 100% perfect. Large, smooth, plain areas in your 3D images sometimes reveal the deformities of the theater screen.
When shooting 3D at actual locations, it has been the practice of our art department to diffuse with matte sprays the specular highlights on objects (like glassware) and the light glares from metallic surfaces (like car chromium). The reason is that no specular reflection or light glare presents itself uniformly to both your eyes. In 3D photography this causes an 'anomaly', and hence is bad for 3D (as explained later). Do you plan to have translucent glasses in the foreground and mirror mazes in the background for your 3D scene? Sure, go ahead!! Such layers would add depth to 3D fantastically. Just make sure to avoid glares and diffused reflections from such props.
There is a whole section in Condon’s Stereovision manual about eyestrain caused by anomalous objects. When shooting 3D and also while screening 3D, headache can happen to the audience due to image shading.
The reason behind the problem caused by anomaly between left/right images is simple. While the human brain ‘understands’ depth by analysing subtle differences in parallax between the left-eye/right-eye images, it does not forgive any anomaly between them - anomaly which does not occur in real life.
For example, as shown in the above illustration, suppose there is a shading on the right side of the right-eye image.
[NOTE : This could occur due to image obstruction either during shooting or during the screening of 3D].
This is a case of image anomaly because the shading is there only on one of the images.
In real life, such a situation could happen to you only when there is something lurking (my emphasis) at the extreme right of your vision. Sitting today within the comforts of a cinema hall or home theatre, this may be of no serious concern to you. But for your brain, it is a matter of survival or extinction! You or I wouldn’t be here discussing this topic if a mammalian ancestor of ours had not reacted by quickly jumping when a predator pounced from the periphery of its vision. Since our ancestor recognized the danger and reacted fast, it survived the predator ... and passed on this gene to us as we evolved into humans. Even today, somewhere inside our brain there is a primal circuit furiously computing to recognize danger when faced with visual anomaly. We can’t dismiss this function as completely useless today. We still have human predators and speeding cars in urban neighbourhoods.
Some of you may have noticed that in real life we get easily distracted when something is viewed in only one of our eyes. This something could be a partial vision through the crack of a window or a glare visible to one of the eyes. The brain is trying to make sense of it. Even if you disregard this, the brain is urging you to resolve it. This conflict manifests in the form of an eyestrain or headache.
While on the topic of visual anomaly, another important factor to be careful about is the ‘vertical alignment’.
As objects recede from your eyes, their parallaxes keep varying between the left/right images. But there can never be a shift in the vertical. A vertical parallax shift can occur only when a misalignment of lens/camera/projector happens while imaging 3D. The effect of this on the audience would be as strenuous as if they were forced to squint their eyes.
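A toy check in the spirit of this rule: corresponding features between the left and right images may differ horizontally (that is the parallax), but any vertical offset indicates misalignment. The matched points below are hypothetical data, and the helper name is mine.

```python
# Matched feature points between a left/right image pair, as
# ((x_left, y_left), (x_right, y_right)). Horizontal differences are
# legitimate parallax; vertical differences are alignment errors.

def worst_vertical_offset(pairs):
    """Returns the largest vertical offset in pixels across all pairs."""
    return max(abs(yl - yr) for (_, yl), (_, yr) in pairs)

matches = [((812, 540), (790, 540)), ((1520, 300), (1502, 302))]
print(worst_vertical_offset(matches))  # 2
```

In a real pipeline such offsets would be measured automatically and corrected in post, as Jainul's team did for Chotta Chetan.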
(pair of left/right images interchanged).
In the 3D imaging process chain, if the pair of left and right images erroneously get ‘reversed’ or 'interchanged' (i.e., the right eye gets to see the image meant for the left and the left eye gets to see the image meant for the right), the beholder would be subjected to ‘pseudo 3D’. It is kind of strange how different people articulate this effect. Some would say the images look ‘ghosted’ ... or that there is a feeling that ‘oil got into the eyes’. Some would correctly realize that what should have been in front seems to have gone behind. Everybody would agree that objects which ought to have ‘come out of the screen’ fail to do so.
At every stage of the 3D process, care should be taken to keep the pair of images from getting ‘reversed’ or 'interchanged'. Since images are tagged L/R in computers, this hopefully should not happen in Digital. But there have been instances where image interchange has occurred during the encoding of DCP prints and even during digital projection in the best of multiplexes.
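A sketch of the sort of L/R tagging sanity check this hints at. The filename convention here is entirely hypothetical; the point is only that pairing should be verified mechanically before encoding.

```python
import re

# Before encoding, every frame number should have exactly one left and
# one right file; anything else (a missing eye, a stray file) is flagged.

def pairs_complete(filenames):
    frames = {}
    for name in filenames:
        m = re.fullmatch(r"(left|right)_(\d+)\.dpx", name)
        if not m:
            return False  # unexpected name: send for manual checking
        eye, num = m.groups()
        frames.setdefault(num, set()).add(eye)
    return all(eyes == {"left", "right"} for eyes in frames.values())

print(pairs_complete(["left_0001.dpx", "right_0001.dpx"]))  # True
print(pairs_complete(["left_0002.dpx", "left_0003.dpx"]))   # False
```

A check like this catches missing or doubled eyes, though not an outright swap of the two streams - that still needs the viewing test described below.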
(Given below are safeguards during the film era)
(Instructions to cabin staff in Hindi language).
During projection, it may be rather difficult to say whether an image interchange has actually occurred. To ascertain that it is ‘pseudo 3D’ and thereby rule out misalignments of another nature, the method is to get hold of two 3D glasses and view the screen with the glasses swapped before your eyes. If the problem resolves, you can be sure it is a case of ‘pseudo 3D’.
‘Pseudo 3D’ can creep in during matting and compositing of CG animation ... especially if you are playing around with reduced or extended interoculars.
End, Chapter 3 & 4
Chapter 5 - vanishing point