HD Camera Results (DVinfo.net)

Greetings all,

As you guys may be aware, there have been some VERY interesting experiments on DVinfo.net concerning homemade cinema cameras. The current *general* goals are to create a camera with:

1280x720 (and eventually 1920x1080) images
4:4:4 uncompressed images!
Variable frame rates from 4-60fps

and various other details. For more information see the thread HERE
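To put some numbers on what "uncompressed 4:4:4" actually means, here's a quick back-of-the-envelope calculation (assuming 8 bits per channel, i.e. 3 bytes per pixel; deeper bit depths would scale these figures up):

```python
# Back-of-the-envelope data rate for uncompressed 4:4:4 video.
# Assumes 8 bits per channel (3 bytes per pixel).

def data_rate_mib_per_s(width, height, fps, bytes_per_pixel=3):
    """Raw data rate in MiB/s for uncompressed RGB frames."""
    return width * height * bytes_per_pixel * fps / (1024 * 1024)

for fps in (4, 24, 60):
    rate = data_rate_mib_per_s(1920, 1080, fps)
    print(f"1920x1080 @ {fps:2d} fps: {rate:6.1f} MiB/s")
```

At 24fps that works out to well over 100 MiB/s that has to be written to disk continuously, which gives you an idea of why these projects are non-trivial.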

I've been following the developments there for some time now with great interest. Just recently there have been some VERY successful demonstrations by a German filmmaker. His camera, while not perfect, is certainly a step in an amazing new direction. Screw SONY and HDV! Take a look at some of these absolutely amazing clips:

www.drachenfeder.com/int/hand1.avi
www.drachenfeder.com/int/hand_still.bmp
www.drachenfeder.com/int/feuer1.avi
www.drachenfeder.com/int/fackel1.avi
www.drachenfeder.com/int/cool_soft_gamma.avi
www.drachenfeder.com/int/nah_soft_gamma.avi
www.drachenfeder.com/int/nah_hard_gamma.avi
www.drachenfeder.com/int/bow_hard_gamma.avi

Welcome to the future of indie cinema!
 
Ugh, I'm gonna edit this.

OK, I understand more of what he's doing.


But it's still a bit choppy. I'm sure he'll work those kinks out and get it more fluid.

Interesting. So his chip has to be the same width as the film to emulate the same quality as 16mm?


I'm curious how those chips work. I understand a projector chip is basically pixels with mirrors that close for white and open so no light reflects, causing a black image, and it uses RGB on top of that to fill in color and color shades.
 
I think the choppiness may be due to both the encoding and the high data rates. It takes a LOT of processing power to handle even lightly compressed HD material.

I'm not sure how large the chip is on that specific camera. I know that the 1080p cameras in development are going to utilize a single 2/3" CMOS sensor. Because it is one chip, the camera will use a Bayer filter, which is what you describe.

While this isn't the same width as film (I'm unsure how it compares to 16mm, but it's smaller than 35mm for sure), it is pretty darn good considering the estimated cost of $5K for the camera!

I suspect that as sensors evolve over the next year or so, we will see full 35mm-sized chips being utilized in homemade cameras. Watch out, Mr Lucas! These things are already threatening to be quite a bit better than what he shot Episodes I and II with!
 
See, but here is where I'm confused.

You're taking a 16mm film camera, gutting it, then installing a digital chip but using the lens of a film camera? Or is it still film, with a digital processor helping project the image onto the film so real-time digital effects can be applied?

I'm guessing you're just using a gutted film camera to turn it into a digital camera. So how does that improve the digital side of it? It's still under 2 million pixels, where 35mm and 16mm are above 8 million "pixels" of zinc and gold flakes.

I was watching a show on a San Francisco TV station where they interviewed Lucas at his ranch in Novato, and he showed how 35mm film has a form of pixelation in the form of micro metal flakes. It's above 10 million. Digital cameras (HDV) are under 1 million, and the high-end million-dollar digi cams are 2 million.

It will take about 5 years to get there professionally and about 10-12 years before prosumer cams get there.

I know this is coming soon for us, but I think all this guy is doing is showing how to build a high-end digital camera for under two thousand bucks, maybe even a few hundred?
 
Mr Goldfish:

There are actually approx 5 different HD cameras in development over there. The gutted 16mm camera was an early test. I honestly don't remember the results of that project, if any.

Super 16 film, depending heavily upon the type of stock chosen, can barely surpass 1920x1080 HD (highest-spec HD). 35mm film does indeed have an approximate resolution of 12 million pixels. The interesting thing, though, is that film projectors themselves can only distinguish approximately 2000 vertical lines of resolution! That's why Lord of the Rings, for example, was scanned at 2K resolution for effects work and then transferred back out to 35mm film. The reason HD often beats 16mm in terms of perceived resolution is that (well-shot) HD has no grain whatsoever. Transfers to 35mm are therefore much cleaner and retain more of the initial resolution.
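Just to put those figures side by side, here's a quick sanity check. The 12 million figure is the estimate quoted above; the 2048x1556 "full aperture" frame size is my assumption for the 2K scan, since all that's stated here is "2K":

```python
# Rough pixel-count comparison between a 35mm negative and a 2K scan.
film_35mm_pixels = 12_000_000    # estimated resolving power of 35mm stock (figure quoted above)
scan_2k_pixels = 2048 * 1556     # a typical 2K full-aperture scan (assumed frame size)
hd_pixels = 1920 * 1080          # highest-spec HD frame

print(f"HD frame: {hd_pixels:>10,} pixels")
print(f"2K scan:  {scan_2k_pixels:>10,} pixels")
print(f"35mm est: {film_35mm_pixels:>10,} pixels")
print(f"A 2K scan captures ~{scan_2k_pixels / film_35mm_pixels:.0%} of the film's estimated detail")
```

So a 2K scan throws away most of the negative's theoretical resolution, yet it still exceeds what a projector can actually resolve, which is why the workflow holds up.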

Regarding the actual cameras in development:

The clips I posted are from a camera which captures 8-bit HD at 1280x720. As I mentioned, this is a first step along the way. The other cameras in development right now (3 I can think of that fall into the following category) will capture 100% uncompressed footage at either 1280x720 or 1920x1080, with a frame rate that can be varied between 4 and 60 frames per second. These cameras will easily match the digital cinema cameras that Lucas shot Episodes I and II with. And all of this will be available for approximately $5K, as estimated by those building the cameras!

These cameras will allow the indie filmmaker to compete at the same level as current pros! Even the CineAlta (1080i compressed 4:2:2) and the Panasonic VariCam (720p 4:2:2, I believe) can't capture this sort of footage!
 
Well, I wasn't challenging you. I just wanted you to break it down more for me. Sometimes the learning process sounds like I'm head-butting people. :D


I have 4 computers in my house. 3 are being reinstalled because of a worm on my network. I'm on an old P3 800MHz with a 19-inch monitor that only goes up to 1024, and there is a lot of color bleeding going on. But it still looks good.
 
Oh I didn't mean for my reply to sound irritated! Quite the opposite! Sorry! :) Certainly no offense was taken for my part.

There is indeed quite a bit of bleeding. It will be interesting to see what some of the other more advanced cameras will be capable of since they all utilize a single CMOS sensor.
 
CMOS (complementary metal oxide semiconductor) is a technology similar to CCD (charge-coupled device). The benefit of using CMOS is that it requires less power, and some say it has better dynamic range than a CCD. I'm not sure I believe the second part, but plenty of people do.

3 is indeed better. The reason one sensor is used instead of three is the large datastream that needs to be captured and written to disk. In fact, what these cameras do is capture a greyscale image and send that to disk. The color is only reconstructed in post, where there is extra time to do the processing.
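A rough sketch of what that buys you in data rate. The 1280x720 / 8-bit figures are taken from the current camera described earlier; the one-chip case writes a single greyscale sample per pixel, where a full three-chip capture would write three:

```python
# Why one sensor instead of three: a single Bayer-filtered chip writes
# one 8-bit sample per pixel to disk, while three chips would write three.
width, height, fps = 1280, 720, 24

one_chip_bytes = width * height * 1     # raw greyscale mosaic: 1 sample/pixel
three_chip_bytes = width * height * 3   # full RGB: 3 samples/pixel

print(f"one chip:    {one_chip_bytes * fps / 1e6:5.1f} MB/s")
print(f"three chips: {three_chip_bytes * fps / 1e6:5.1f} MB/s")
```

Cutting the write load to a third is what makes capturing to ordinary disks feasible, at the cost of doing the color reconstruction later.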
 
Shaw said:
CMOS (complementary metal oxide semiconductor) is a technology similar to CCD (charge-coupled device). The benefit of using CMOS is that it requires less power, and some say it has better dynamic range than a CCD. I'm not sure I believe the second part, but plenty of people do.

3 is indeed better. The reason one sensor is used instead of three is the large datastream that needs to be captured and written to disk. In fact, what these cameras do is capture a greyscale image and send that to disk. The color is only reconstructed in post, where there is extra time to do the processing.


Haha, there are some salespeople out there that need to do their homework then. I asked what CCD stood for and he said Color Chip Device. LOL

So it's not a red chip, green chip, and blue chip, just a way to process information better?

Or are we talking about CMOS only?

anyways, Thanks for the input.
 
I asked what CCD stood for and he said Color Chip Device. LOL

LOL! That just made my day :D

So it's not a red chip, green chip, and blue chip, just a way to process information better?

Let me back up a bit. CCD and CMOS are just different forms of capturing light. They route the energy produced by light striking the surface of the chip differently. Which is "better" is certainly up for debate.

However, you can have one chip or three chips of either CCD or CMOS. They are just different ways of handling the light and don't have any effect on how color is captured.

Color can be captured in two ways:

1) 3 chips (CCD or CMOS), with one chip dedicated to each color (R, G, B)
2) Use a single chip and dedicate pixels to colors. Thus in a 2x2 pixel grid you have two pixels dedicated to capturing green, one to blue, and one to red. This is inferior to a 3-chip setup, as you capture far less color information. With three chips you have a green image, a blue image, and a red image, which are combined to create specific pixel values.
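For what it's worth, the repeating tile in option 2 is the Bayer pattern. A toy sketch of the layout (the RGGB ordering here is one common arrangement, not any particular camera's):

```python
# A toy Bayer mosaic: each sensor pixel captures only one color, in a
# repeating 2x2 tile of two greens, one red, and one blue (RGGB order
# assumed). Illustrative only -- real cameras interpolate the missing
# colors per pixel with far more sophisticated algorithms.

def bayer_color(row, col):
    """Which color a given sensor pixel captures in an RGGB layout."""
    if row % 2 == 0:
        return "R" if col % 2 == 0 else "G"
    return "G" if col % 2 == 0 else "B"

# Print a small patch of the pattern
for row in range(4):
    print(" ".join(bayer_color(row, col) for col in range(4)))
```

Half the pixels are green because our eyes are most sensitive to green; that's where most of the perceived detail comes from.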

Hope that made some sense. I often ramble and forget to explain important things! :)
 
There are some salespeople out there that need to do their homework then. I asked what CCD stood for and he said Color Chip Device. LOL


I know what you mean about getting accurate information from salespeople. I've just had maybe the hardest week of my life getting our HD feature onlined. The biggest problem was finding reliable information to help us solve the technical problems we were having.

After sitting at an Avid Nitris editing suite all week, I really wonder how this guy is going to deal with post-production. The processing power required to cope with HD as it is, is staggering, and actually looking at full-resolution HD, I've realised that it already contains almost too much information.
 
Those are images straight from the camera, as far as I know. MUCH better systems are under development. The current camera (which the footage is from) captures 8-bit 4:2:2 footage. The newer ones will capture 4:4:4 uncompressed.

The extra chroma information is what makes it look so much more like film to me. The resolution doesn't hurt either though ;)
 