Looking to buy some cheap 3D glasses to view this footage? Click on these!
1/29/11 IMPORTANT NOTE: It has been brought to my attention that the “Tim Dashwood Final Cut Levels Filter” method I posted in my video blog above is for red/cyan glasses. If you want to edit 3D magenta/green using the Dashwood tip, all you need to do is remove ‘green’ from V1 (right eye) and remove ‘red’ and ‘blue’ from V2 (left eye). Thanks to Bryan Golder and Juhani Väihkönen for helping me with this.

Check out YouTube 3D and try the different display formats by clicking here. I have posted some raw content shot on the Panasonic AG-3DA1 camera using the 3D feature on YouTube. Just click the little “3D” on the lower right of the player and choose your flavor of 3D based on the glasses you own. The footage looks amazing.
3D. You either love it or hate it.
In the entertainment business, whether you are talking television or films, I am a huge proponent of progress. I want people to create stuff that keeps us all moving forward. If we thought silent films or black and white television were adequate, then my side of the business would be a pretty boring place.
I can remember the transition from standard definition television to 16:9 high definition television like it happened yesterday. But a lot has happened since yesterday.
Back in 2003, we got the latest-and-greatest pro-level HD cameras and lenses at work for the Boston Red Sox baseball season. Shortly after that, I jumped onto the HDV bandwagon and then leaped to XDCAM HD. The transition renewed my love for the business that I am very passionate about. I think progress, creativity, drive and a thirst for improvement are very important to any craft. But what about the next phase in visual media? What about 3D?
Well, to give my opinion in a nutshell: 3D works great for movies but it does not work for home television viewing. At least not with our current at-home viewing habits. You are tricking the brain into thinking it is actual real-life 3D and that could create health problems.
First off, when you go to the movie theater, you are headed out for a three-hour event without interruption. For people who are serious about viewing a film, the person sitting with them is not the focus of the night. You are locked onto that movie screen. 3D works well in this setting because of the confined viewing environment, the large screen, the limited distractions and the restricted viewing time.
Viewing television at home, at least for me, is much different. Sitting in the living room with glasses and viewing a smaller screen is inconvenient. Long viewing sessions strain my eyes and make my head hurt. I did a few experiments at local electronics stores with 3D view times to see what would happen. It only takes a few minutes for me to start feeling the effects, and they last for several minutes after I remove the glasses. Prolonged viewing on current 3DTVs gave me a headache, and I think it is because of the distractions outside the television and around the room.
Think about this…normally, when in a theater, you stare at the movie screen 95% of the time. Perhaps 5% of the time you glance at a friend or stare at that annoying illuminated “exit” sign in the corner of the room. When you have the 3D glasses on, you forget about ‘em!
Now, at home, people do not stare at a TV screen 95% of the time! There are so many more distractions. Phone, friends, a well-lit room. If you are wearing glasses, you are always reminded of that. The only exception would be if you have a dedicated 3D home theater room. But the average person does not fall into that category.
I have seen first-generation glasses-free television sets, but as of now, I am not impressed. I find that they work a bit like “magic eye” cartoons in the Sunday comics. The effect is there after a few seconds, but once you look away or move your head, the depth of the image seems to drift away.
When a broadcast cameraman uses lenses with shallow depth of field or a dolly to create the sliding parallax effect, that is 3D for me. I know the foreground and background in the frame. I visualize depth in a 2D picture without the nauseous side effects. But is this just “old school” thought? And coming from me?!
I watched “Avatar” in RealD and in IMAX 3D. I was very happy with the result. I had a great time in a normal theater watching in RealD. The IMAX version made me feel sick. Perhaps the L/R projectors were out of alignment, I don’t know. Avatar changed my opinion on using 3D in a theater to pull the audience into the film. But, when done incorrectly, like in “Clash of the Titans”, it can take away from the story and kill the experience. I also loved the new “Tron” movie and felt that the 3D was used perfectly in the film not only to help tell the story, but to make for an immersive experience in a fictional world.
I say television broadcasters should focus on high-speed digital cinema (Phantom), better compression and color space, or higher resolution mastering (2K, 3K, etc.) as our next great leap in home television viewing. At least that does not make people sick! And if someone out there can make a futuristic holographic 3D display, I won’t stop them. I may not buy the technology and use it at home, but I want progress. I will “rent” the cutting-edge technology for three hours when I visit a movie theater equipped with it.
Sports television is a great place to test out and experiment with new technology. It has happened before with HDTV, virtual advertisements, first and ten football markers, and other computer aided visuals. I am fortunate to be locked into this progressive field and love trying out new gear. 3D is getting a big push in sports and ESPN has launched a 3D sports channel. Live sports will make or break this 3D freight train soon since we are not working on a scripted, take-eight-hours-to-set-up-a-shot film set.
To go off on a tangent, ESPN and entertainment directors who sit in remote television trucks and view 3D as they work are getting ill. I spoke to a director who said he almost fell down the stairs in a dizzy daze exiting a 3D truck after viewing stereo images for five hours straight. He also said that when he went out for drinks later that night, the alcohol hit him twice as hard. Interesting.
The biggest reason widespread 3D will fail is because of the cost. It is much more labor intensive and expensive to produce real 3d content for broadcast television. I am talking real 3d, two cameras, two lenses, and proper stereography. Perhaps the capturing technology will change and get much better, but the cost is not going down anytime soon.
“Fake 3D” is a cheaper option for a quick answer to getting 3D on the air. This is a very bad idea. Synthetic 3D is where they take a normal video image or single print of film and duplicate it to create an artificial left and right eye. If you have seen any of that rubbish, you will think you are stuck in a pop-up story book. If that is the only 3D you have seen, then you have formed your opinions about 3D without really seeing what the technology can actually do! The only way to do it is with two eyes (two cameras) and convergence at the time of recording.
So all this leads to 3D acquisition for the rest of us. None of us here will be lugging around two cameras with a convergence stereo op, sled or a beam splitter to cover our little brother’s lacrosse game. But maybe, if a camera was small enough, cheap enough and did a lot of the math for us, we could all try out the 3D gimmick. My biggest fear is that now, in addition to an internet wasteland of horribly shot high definition video, we will have an even more disgusting vomit-inducing wasteland of bad 3D content existing on the interwebs. Hey YouTube, BUY A TRIPOD!
Panasonic is leading the way in portable 3D video cameras. They recently created the AG-3DA1 stereoscopic camera. The system is not cheap; it comes with a retail $21,000 price tag. And even with that high cost, it is not easy to shoot with. Well, I should say, not easy to get good-looking 3D. Anyone can point and shoot a camera. Recording quality 3D is quite complicated and requires great skill.
I was given the opportunity to test out the stereoscopic camera for a couple days this past summer, thanks to my friends at Rule Boston Camera. Mike Sutton also tested out the camera and wrote an excellent blog about his experience. You can read it by clicking here at the Rule Boston Camera Blog.
I had worked with 3ality Digital (Burbank, CA) on big dual Sony HDC-1500 3D rigs. I wrote a blog about that day here. But these systems required lots of people and gear to make them work. They were not designed for portability or a single user. The idea that Panasonic had created a 3D camera in a single, lightweight package was of great interest to me.
The AG-3DA1 is the world’s first integrated twin-lens HD 3D camera recorder. It features a camera section with two integrated lens systems that are configured to resemble the human eyes, and a recorder section that records left-channel and right-channel 1080p HD images using the AVCHD format. You basically have two cameras in one. Everything across the two systems is exactly the same: time code, white balance, focus, zoom, etc. And you have six sensors! Each of the two chip blocks has three CMOS sensors. The camera can also record 60p at 720p and captures to cheap SD cards. It has all the normal audio inputs and controls you would expect.
The camera is an investment at over $20k, but it does do a lot to automatically make 3D work. Vertical deviation, angle deviation, differences in brightness and color, and rotational error are taken care of inside the camera. Two major things you have to worry about are correct adjustment of the parallax and appropriate image composition.
Convergence is the new buzzword and something you will only find on a 3D camera. This is also known as the 3D plane. It is kind of like a focus plane, the area in a shot that is the most sharp when setting your focus. But with 3D, everything that is between the camera’s lens and the convergence point seems to float off the screen. This 3D plane is the spot where the two lenses cross in space. There is a way to see both left and right channels in the viewfinder to find the convergence point. This method of viewing is tricky since the picture looks like a half dissolve between two offset images. Plus, trying to focus the camera in the mix mode is nearly impossible.
The stuff in the foreground is called the negative parallax and the stuff in the background past the convergence point is called the positive parallax. You have to keep the camera lens about 8 feet away from an object for this to even work. I know this is all confusing. It is much easier (I think) to understand if you view the video blog I shot.
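For those who like numbers, the parallax behavior can be sketched with a little arithmetic. This is a simplified parallel-camera model with convergence treated as a horizontal image shift, not how the 3DA1 works internally, and the lens spacing and focal length below are illustrative guesses, not Panasonic specs:

```python
def sensor_parallax_mm(interaxial_mm, focal_mm, convergence_m, subject_m):
    """Approximate horizontal disparity on the sensor, in millimeters.
    Zero at the convergence distance; negative values mean negative
    parallax (the subject floats in front of the screen), positive
    values mean positive parallax (the subject sits behind it)."""
    c_mm = convergence_m * 1000.0
    z_mm = subject_m * 1000.0
    return focal_mm * interaxial_mm * (1.0 / c_mm - 1.0 / z_mm)

# With ~60 mm between lenses and a 40 mm focal length, converged at 2.5 m:
print(sensor_parallax_mm(60, 40, 2.5, 2.5))   # subject on the 3D plane -> zero
print(sensor_parallax_mm(60, 40, 2.5, 1.5))   # closer subject -> negative
print(sensor_parallax_mm(60, 40, 2.5, 10.0))  # distant subject -> positive
```

The toy model at least shows why the 8-foot rule exists: the closer a subject gets to the lenses, the faster the negative parallax blows up.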
The camera came with a tutorial DVD from Panasonic. I watched the 16-minute DVD and immediately became nervous. It seemed very complicated. Not the operation of the camera itself, that was easy. The hard part was capturing 3D that would not make the viewer’s retinas detach. The DVD tried to explain it to me, using graphs and numbers, but I did not respond to that very well. The DVD said that you had to take your audience’s screen size into consideration, not get too close to objects, worry about glare on just one eyeball, and even shoot differently for children because their eyes are closer together! I just wanted to get out and shoot with the camera. I would set my convergence point to an object or subject in frame using mix mode in the viewfinder. I hoped everything would fall into place from there.
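The screen-size and children warnings boil down to one rule of thumb: positive parallax on the final screen should never exceed the viewer's eye separation, or the eyes are forced to diverge. Here is a rough back-of-the-envelope check; the 65 mm adult and 50 mm child eye separations are commonly quoted averages, not figures from the Panasonic DVD:

```python
def max_positive_parallax_px(screen_width_mm, image_width_px, eye_sep_mm):
    """Largest 'safe' positive parallax in pixels: the on-screen offset
    between the left and right images must stay under the viewer's eye
    separation to avoid forcing the eyes to diverge."""
    return eye_sep_mm * image_width_px / screen_width_mm

# A 50-inch 16:9 screen is roughly 1100 mm wide. At 1920 pixels across,
# the parallax budget for an adult (~65 mm eyes) is noticeably bigger
# than for a child (~50 mm) -- hence "shoot differently for children."
adult_px = max_positive_parallax_px(1100, 1920, 65)
child_px = max_positive_parallax_px(1100, 1920, 50)
print(round(adult_px), round(child_px))
```

The same shot played on a giant theater screen spreads that parallax over far more millimeters, which is why screen size has to be decided before you shoot, not after.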
My initial impression of the camera was that it was very lightweight and compact. It was amazing that I was holding two video cameras somehow fitted into a single case. Both of these cameras were chained together for every function. I found that to be quite amazing. The only thing that was different was the idea of pulling depth and crossing the lenses to hit a convergence point. I was worried about the camera taking a bump and the lens contraption on the front getting knocked out of alignment. I’m sure that could happen, so I was very careful with the camera.
The convergence knob is located just below the lens. A big problem with this knob is that it doubles as the iris. A toggle switch lets you change its functionality. That means you cannot pull depth and adjust the iris at the same time! But, I guess Panasonic figures you’ve got enough on your plate trying to focus, act as a stereographer and zoom the thing. Adding iris adjustment to that cocktail would mean having a third arm.
The 3DA1 does have a “3D Guide” to help you get the best results. The guide pops up in the viewfinder as distance figures but I did not really use this feature. To be honest, I did not really understand how it worked. Perhaps next time I will try to measure out distance of subject to lens and try harder to correctly adjust the parallax.
The camera did not have an ND filter and there was no way of screwing ND filters to the front of the 3DA1′s eyeballs. So, a wide matte box is necessary with 4×5.65 ND plates when shooting in very bright conditions. I did not have time to use a matte box, so my solution was to add some shutter. When I get more time on the camera I will try it.
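Swapping shutter for ND is just trading exposure in stops, and the math is quick. The shutter speeds below are examples, not what I actually dialed in:

```python
import math

def stops_of_light_cut(old_shutter_denom, new_shutter_denom):
    """Stops of exposure removed by going from 1/old to 1/new shutter.
    Each doubling of the denominator cuts one stop, like one grade of ND."""
    return math.log2(new_shutter_denom / old_shutter_denom)

# 1/60 -> 1/120 cuts exactly one stop; 1/60 -> 1/250 cuts roughly two.
print(stops_of_light_cut(60, 120))
print(stops_of_light_cut(60, 250))
```

Of course, unlike real ND, cranking the shutter also changes how motion renders, which is why the matte box is the proper fix.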
Battery life seemed to be about the same as with Panasonic HVX cameras. The viewfinder was the same low resolution, hard to focus LCD but the menu was easy to navigate. The camera has two HD-SDI outputs for sending video to a 3D monitor using each channel to display a stereoscopic image. I did not have the chance to test this feature and in fact, the only 3D that I have seen off this camera is anaglyph red/green. So take that into consideration. I hope to update this post when I can view my content on a proper polarized 3d monitor.
This leads me to post production. I shot with this camera several months ago for just two days. The footage from each of the SD card channels looked like any other Panasonic HD image. The 1920×1080 MPEG-4 AVC/H.264 AVCHD in PH mode looked very clean and free of compression artifacts. I have not published anything about the left and right footage (until now) because I had no idea how to edit 3D using Final Cut Pro. I knew there were plug-ins and software for creating 3D movies and performing post convergence, but I did not know which one to try. Cineform Neo3D is one of them and you can test out a trial version. I did, and found it a bit too complicated.
I was doing some 3D post-production research and found a guy named Tim Dashwood. He has a few programs out for working with 3D. You can find them at Dashwood Cinema Solutions. Tim gave me a tip to edit 3D using what was already included with Final Cut Pro. This simple tip gets you the red/green blur on the footage, but it does not let you adjust the convergence to dial in the 3D after the fact. But that is fine since I nailed convergence during acquisition… or did I?
I have included a full tutorial in my video blog to show you how to perform basic 3D editing without convergence adjustment in post. Basically, you place the left and right channels on V1 and V2 in the FCP timeline. Make sure they match up perfectly, in start and finish. Then, drag the “levels” filter onto the top clip in V2. Remove all of the red channel by dragging output and output tolerance to ‘0’. Next, drag the “levels” filter onto the other eye in V1. Apply the filter twice and remove green on one and blue on the other. The final step is to click on V2 and set the composite output to “screen” mode. Make sense? If you are confused, watch my video blog.
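For anyone curious what those levels filters are doing under the hood, here is the same channel surgery sketched in NumPy. This is my own illustration, not a Dashwood tool; it assumes the red lens of your glasses sits over whichever eye you pass as the first argument, so swap the arguments (or the kept channels, for magenta/green) to match your glasses:

```python
import numpy as np

def anaglyph_red_cyan(red_eye, cyan_eye):
    """Combine two RGB frames (uint8 arrays, shape (H, W, 3)) into a
    red/cyan anaglyph: keep only red from one eye and only green/blue
    from the other, then merge them -- the same result as zeroing
    channels with the levels filter and compositing in 'screen' mode."""
    out = np.empty_like(red_eye)
    out[..., 0] = red_eye[..., 0]     # red channel from one eye
    out[..., 1:] = cyan_eye[..., 1:]  # green and blue from the other
    return out
```

The “screen” composite mode works in FCP because each clip’s zeroed channels contribute nothing, so screening the two mattes together simply adds the surviving channels back into one frame.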
There are many ways to edit and display 3D. Anaglyph may be the easiest, but it is also the worst in my opinion. The resulting blurry image can be viewed in 3D on any television set, computer screen or projector. Watch “Journey To The Center of The Earth” starring Brendan Fraser for an example of bad anaglyph red/green 3D. I bought the anaglyph Blu-ray DVD for the glasses that came with it!
My philosophy is that if you can get red/green anaglyph to look even remotely decent on any display, then when you do things right and view real polarized 3D on a 3DTV using a 3D-enabled Blu-ray player, the images should look amazing. I have yet to confirm that though!
I am not totally opposed to having 3D technology in my living room. I have seen some good 3D content using powered polarized glasses and an expensive 3DTV. If I needed a new TV set, I would consider buying a 3D-ready one, as long as it cost the same and I had the ability to turn off the 3D functionality. But I won’t run out and buy a new set just because I want 3D. As of right now, I am much more interested in a television with excellent HD images that can connect to streaming content and the internet.
This Panasonic 3D camera is in high demand at the moment. Back in December 2010, Panny had moved more than 1000 units, with even more on back order. ESPN and other networks trying to establish 3D channels and create content are experimenting with the AG-3DA1 as a cheaper alternative to shooting stuff with big expensive two camera rigs. They are using the 3DA1 for NASCAR and even NASA has a few units.
I have included a few left and right files straight off the Panasonic 3DA1 memory cards for you to play with. If you figure out a better way to edit and display my 3D test files, please let me know. I would love to view the footage. Good luck and please feel free to use this content anywhere.
Files directly from the Panasonic AG-3DA1 memory cards:
Converted to Apple ProRes 422 HQ .mov
DUE TO HEAVY TRAFFIC, DOWNLOADS ARE DISABLED. SORRY.
For those of you who want to watch the Panasonic DVD tutorial on how to shoot with this camera, I have included it below. This video is only viewable on this page; please do not display or copy it. I do not own it and I am only posting it for demonstration/education purposes.
Some of the illustrations and diagrams used on this page are from the Panasonic Website.