It can't be my XH-A1 (bad picture quality)

Hey guys, I'm in need of advice.

Here's my problem: when I view my film on DVD or on the computer, my subjects' colors look pixelated, and the edges of objects and subjects are squiggly... choppy...


I highly doubt it is my XH-A1.

I made 2 mistakes in post-production. Let me know if, in your opinion, this is the root of the problem.

1. I shot in 24f; however, when I opened Sony Vegas for a new project, I selected NTSC DV (720x480, 29.970 fps). I captured my footage into the editor, edited it, then rendered it back to 24p. Since I originally shot my footage in 24f, imported it as 30p, then rendered it back to 24p, could this be the cause of the pixelation and uneven edges? I'm assuming I should have selected NTSC DV 24p Widescreen (720x480, 23.976 fps), captured my project into the editor, and then rendered it to 24p...

2. I color corrected the same footage twice by accident. For example, I dragged the "RGB to Computer color corrector" over my clips, forgot about it later, and did it again. It seems each time you add color correction, the footage gets brighter and more blown out.

I am pretty worried. :( I want my end result to be perfect, crisp quality when I view it on my TV, just as if I had put in a real DVD.

So what do you guys think I'm doing wrong?

Thanks in advance! :)
 
Actually, you didn't import it as progressive. NTSC DV is interlaced (60i, 2 fields), so you converted from progressive to interlaced back to progressive (depending on export settings), and clearly something was lost during the translation. What you describe with the edges sounds like deinterlacing artifacts. Blockiness could be compression artifacts (or a low quality export). What were your export settings (bitrate, video size, progressive vs. interlaced, etc.)?

Can't you import it as progressive? And why would you want to use computer color correction if you're targeting DVD?

Where's Oakstreet? :)

Also, it sounds like you need to set your bit precision higher in the editor.
 
The Canon XH-A1 shoots in 1080i (60 fields/second NTSC or 50 fields/second PAL). Period. If you shoot in 24f, the camera adds 2:3 pulldown (the footage is still 60i, but it's 24 virtual frames per second that are then interlaced into 60 fields). If your software can remove the 2:3 pulldown, you can get back to a true 24 frames per second.
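To make the arithmetic concrete, here's a rough Python sketch of my own (purely an illustration; this is not anything the camera actually runs, and the names are made up) of how 2:3 pulldown spreads 24 frames across 60 fields:

```python
# Illustration only: 2:3 pulldown repeats each source frame as either
# 2 or 3 fields, alternating, so 24 frames/second become 60 fields/second.

def pulldown_23(frames):
    """Expand source frames into fields using a 2,3,2,3,... pattern."""
    fields = []
    for i, frame in enumerate(frames):
        repeat = 2 if i % 2 == 0 else 3  # 2 fields, then 3, alternating
        fields.extend([frame] * repeat)
    return fields

one_second = [f"frame{i}" for i in range(24)]  # 24 virtual frames
fields = pulldown_23(one_second)
print(len(fields))  # 60
```

Twelve frames contribute 2 fields and twelve contribute 3, so 12x2 + 12x3 = 60 fields per second.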

I don't know what you are seeing, but you may well be seeing comb effects from the interlaced fields, and you may have trashed a bunch of information by rendering out to 24p without first reconforming the fields. You may also be doing too much processing and/or compression, as you and VP suggested.

The good news is that your video on the tape is probably fine. The bad news is that you may need to recapture everything.

Doug
 
Correction: If you have the original captures that were DV NTSC 29.97 interlaced, those were probably correct, but they require removal of the 2:3 pulldown to be converted to 24p. You can simply flag the clips as anamorphic to make them widescreen.
 
Additional information (regarding the Canon XH-A1): in HDV (which is how I do all my recording), I have only the option of 2:3 pulldown, but if you were shooting in SD, you have 2:3 and 2:3 advanced. It will matter which one of those you selected, as one records fields in a 2:3:2:3 pattern and the other records them in a 2:3:3:2 pattern (advanced), which is better/easier to conform to 24p; but your software has to know which one you used in order to rearrange the fields correctly.
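To see why the advanced pattern is easier to conform, here's a little Python sketch of my own (purely illustrative; the helper names are invented) comparing the two cadences over a group of four source frames A-D:

```python
def cadence_fields(frames4, pattern):
    """Expand 4 source frames into 10 fields using the given field counts."""
    fields = []
    for frame, count in zip(frames4, pattern):
        fields.extend([frame] * count)
    return fields

def weave(fields):
    """Pair consecutive fields into interlaced frames."""
    return [(fields[i], fields[i + 1]) for i in range(0, len(fields), 2)]

standard = weave(cadence_fields("ABCD", (2, 3, 2, 3)))  # 2:3 pulldown
advanced = weave(cadence_fields("ABCD", (2, 3, 3, 2)))  # 2:3:3:2 advanced

def dirty(frames):
    """Frames whose two fields come from different source frames."""
    return [f for f in frames if f[0] != f[1]]

print(dirty(standard))  # [('B', 'C'), ('C', 'D')] -- two mixed frames
print(dirty(advanced))  # [('B', 'C')] -- only one mixed frame
```

With 2:3:3:2, the single mixed frame can simply be thrown away and A, B, C, D recovered whole from the remaining clean frames, which is why it conforms to 24p more easily; the standard 2:3 cadence leaves two mixed frames, so fields have to be re-matched across frames instead.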
 
Now I wish I knew how to advise someone using Sony Vegas on how to deal with it. :(

Yup. I could cover the Premiere angle, but so far I am only shooting 60i, importing and editing 60i, and then outputting 30p. My first interlaced outputs were trashed, too, especially when using Ken Burns effect on still photos. If I rendered interlaced, the combing was horrible. My progressive DVDs looked really good with both still image animation and 60i footage from my XL1s (7Mb VBR, 720x480).

EDIT: This is why I've previously questioned the need to use 24p when the target output is NTSC DVD and not film. I mean, if the camera is doing 3:2 (or 2:3:3:2) in camera, then why not just shoot 60i, import NTSC DV, edit, color correct, and then convert the footage based on the intended display medium? My preferred output format (for now) is progressive-scan DVD (MPEG2), and so far it has worked well. Keep it in the native format as long as possible to avoid problems with multiple conversions.
 
OK, I am really ignorant on the whole "progressive vs. interlaced" subject. ><

I just finished shooting my new project. It is a small documentary for my Psychology class in high school.

Can someone please give me a quick step-by-step idiot's guide?

I shot using an XH-A1.
60i
16:9
SD

So I open Sony Vegas 8 and select NTSC DV Widescreen (720x480, 29.970 fps)?

Then I capture?

Someone mentioned that I shouldn't select Studio RGB to Computer RGB color corrector if I am viewing it as a DVD. Does this mean I should select Computer RGB to Studio RGB color corrector instead?

And then I render it as NTSC DV Widescreen (720x480, 29.970 fps)?

That should give me good-quality DVDs?

And for the future, when I shoot in 24f, what are progressive and interlaced?


By the way, here is an example of what my footage looks like, on the left: http://www.pichotel.com/pic/2999dHTAP/140077.jpg
 
If you shot using 60i, then you did not shoot using 24f? If you shot using 60i, just do as you said and capture NTSC DV Widescreen.

Regarding progressive/interlaced, "f" is a form of progressive that reduces the vertical resolution slightly in order to avoid interlacing artifacts. "i" of course is interlaced, which means the frame is made up of two separate images, taken at different moments 1/60 second apart, that are then interlaced into a single image.
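If it helps, here's a toy Python sketch of my own (purely illustrative, not any real capture code) of what "weaving" two fields into one interlaced frame means:

```python
def weave_fields(top_field, bottom_field):
    """Interlace two half-height fields into one full frame, line by line."""
    frame = []
    for top_line, bottom_line in zip(top_field, bottom_field):
        frame.append(top_line)     # lines 0, 2, 4, ... from the first field
        frame.append(bottom_line)  # lines 1, 3, 5, ... from the second field
    return frame

# Two fields captured 1/60 second apart; if the subject moved between
# those two moments, the woven frame shows the "comb" edges discussed above.
field_t0 = ["A0", "A2", "A4"]  # sampled at t = 0
field_t1 = ["B1", "B3", "B5"]  # sampled at t = 1/60
print(weave_fields(field_t0, field_t1))  # ['A0', 'B1', 'A2', 'B3', 'A4', 'B5']
```

In a true progressive (or "f") frame, both halves come from the same instant, so there is nothing to comb.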

Check out this link for some more information; My web page on interlacing, etc.
 
The simplest way I can put this is that the video from an NTSC version of the Canon XH-A1 is always 60i. Therefore, if you do not want to apply a 2nd step to conform it to 24p, then you should shoot either 60i, or 30f. 30f gives you a progressive look at 30 frames per second, which does not require any special handling; just capture it as NTSC 29.97 interlaced, and treat it like it is 30p, because there is no difference except for the lack of comb artifacts, since both fields (both halves of the image) are captured at the same time.

Doug
 
Doug,

Please forgive me, because I'm trying to understand here. You are stating that the A1 and its little brother the HV20/30 shoot 24p which is wrapped in a 2:3 pulldown, resulting in a 60i framerate, correct? In which case, that is what I have found.

I am just hoping that you are not insinuating that the Canons do not shoot 24p, because they do, after you IVTC them. My HV20 has to be forced to remove the 60i wrapper using a program called pulldown.exe I found on another forum, because it seems no NLE can natively recognize it. Once that is done, you end up with true 1080p24, not deinterlaced 1080i.
 
The simplest way I can put this is that the video from an NTSC version of the Canon XH-A1 is always 60i. Therefore, if you do not want to apply a 2nd step to conform it to 24p, then you should shoot either 60i, or 30f. 30f gives you a progressive look at 30 frames per second, which does not require any special handling; just capture it as NTSC 29.97 interlaced, and treat it like it is 30p, because there is no difference except for the lack of comb artifacts, since both fields (both halves of the image) are captured at the same time.

Doug

Oak, so what you are saying is: if I shoot at 30f, I won't have artifacts anymore?

And second: in the future, if I shoot at 24f, how do I render it as true 24p?
 
...

By the way! here is an example of what my footage looks like! on the left. http://www.pichotel.com/pic/2999dHTAP/140077.jpg

In your image example, that looks like aliasing. This can happen with any raster image at lower resolutions and diagonal lines. Even with still images, scaling them down can produce this type of artifacting. Some programs are better than others. A good example: scale a digital photo down using Microsoft Paint and then scale the same image using Photoshop. MS Paint doesn't interpret the image very well and introduces aliasing. The lower the resolution, the more prominent the "jaggies" become. And with video this can get introduced through poor deinterlacing and/or image scaling. Going back to the topic of 2:3:3:2 pulldown, if you look at the images in that Wiki link I posted earlier, you can see in the "Resulting Video Frame" section how this can get introduced, with a merge of the odd field from frame B with the even field from frame C, and then again with the C and D fields.
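Here's a tiny Python illustration of my own (not any real scaler's code; the numbers are made up) of why naive scaling aliases while filtered scaling doesn't:

```python
def nearest_downscale(row, factor):
    """Keep every Nth sample -- what a naive scaler like MS Paint does."""
    return row[::factor]

def box_downscale(row, factor):
    """Average each group of N samples -- a basic filtered scaler."""
    return [sum(row[i:i + factor]) / factor for i in range(0, len(row), factor)]

# A fine alternating pattern, like the pixel stairsteps of a diagonal edge:
stripes = [0, 100] * 4  # [0, 100, 0, 100, 0, 100, 0, 100]
print(nearest_downscale(stripes, 2))  # [0, 0, 0, 0] -- the detail aliases away
print(box_downscale(stripes, 2))      # [50.0, 50.0, 50.0, 50.0] -- averaged smoothly
```

The naive scaler throws away every other sample, so fine detail turns into false patterns (jaggies); the filtered one averages it into something faithful. The same thing happens, in two dimensions, each time your footage is scaled or badly deinterlaced.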

So your workflow could introduce all kinds of artifacts:

1) shooting SD on an HD camera - scaling occurs in-camera
2) shooting 24p or "f" and importing as NTSC DV (taking each full frame and breaking it into two separate interlaced images for a second time - the camera already performed one interlacing step - and also converting from 24fps to 30fps during the capture)
----- a) technically, you get more image data at 60i because an additional 6 frames per second are captured and stored
3) layering effects and/or color correction in low precision mode, forcing mathematical "averaging", thus reducing image quality, and then doing it again (this is like photocopying a photocopy - each subsequent generation gets worse and worse)
4) compressing the image into a low bitrate video stream, further reducing image quality
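On point 3, here's a quick Python sketch of my own (an illustration with made-up pixel values, not Vegas's actual math) of why stacking corrections at low precision blows out highlights for good:

```python
def brighten_8bit(pixels, gain):
    """Brighten and clamp to the 8-bit range, as each corrector pass does."""
    return [min(255, round(p * gain)) for p in pixels]

original = [10, 80, 150, 210, 240]
once = brighten_8bit(original, 1.2)
twice = brighten_8bit(once, 1.2)  # the accidental second pass

print(once)   # [12, 96, 180, 252, 255] -- 240 is already clipped
print(twice)  # [14, 115, 216, 255, 255] -- 252 and 255 merge; detail is gone for good
```

Once two different highlight values have both been clamped to 255, no later correction can tell them apart again, which matches the "brighter and more blown out" result described earlier in the thread. Higher bit precision in the editor delays the rounding and clamping until the final render.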

The frame mode on my XL1s is only taking one of the two fields captured by the CCDs, so I'm losing half of the available resolution in this mode. I suspect your camera does the same thing. You may get better SD recording if you shoot in 1080f and then scale it within your NLE. This will get rid of at least one step where aliasing can occur. Of course, this will eat up a lot more hard disk space in the interim.

EDIT: There is another variable. Some consumer DVD players and computer DVD playback applications have poor deinterlacing capability, and this can further introduce "jaggies". To check your equipment, pick up a copy of the HQV Benchmark. Your graphics hardware can also play a role. The ATI graphics cards have sophisticated algorithms to improve video playback and reduce many of these artifacts.

EDIT2: Look at the "Cadence Detection" example in the HQV link above. Look familiar? 2:3:3:2 is the "cadence" your camera uses. If the player doesn't properly detect and interpret this cadence, you could see the artifacts in your image example. And the deinterlacing example shows how poor-quality conversion and/or playback can affect the image in a similar way.

Bottom line, like Doug said, the footage on the camera is probably fine. It's all the scaling, framerate and interlacing/de-interlacing conversion you're doing in your workflow that's messing things up.
 
Wideshot: yes, the Canon shoots 1080i (60 fields/second, regardless), but uses 2:3 or 2:3:3:2 pulldown (the latter only available in SD) to encode 24p/f into 60 fields, which can then be conformed and/or converted back to true 24p. However, if you skip that conversion, you could get into some trouble; especially with 2:3:3:2, which is designed for easy, lossless conversion, not smooth playback.

MelonDome, read what Wideshot wrote; he offers a solution to make 24p from 1080i (shot as 24f) on Windows. I haven't used Windows for video editing in years, so this is an unknown to me. Listen to what Wideshot says; he's done it.

What I'm trying to say, people, is simply that you can get 24(p/f) from the Canon, but it's going to be written on the tape in a special field sequence we call 2:3 pulldown and, until it is converted, it is simply 60i.

I'm sorry if this is so confusing. Maybe a page with illustrations is in order.

Doug
 
Oak,

I'm referring to 30f now. I think I understand 24f now, thanks to you, VP, and Wide. On 30f, you mentioned that

"30f gives you a progressive look at 30 frames per second, which does not require any special handling; just capture it as NTSC 29.97 interlaced, and treat it like it is 30p"

In layman's terms: are you basically saying that if I use the 30f mode, I won't have to worry about any artifacts if I just render it at NTSC 29.97?

One more quick thing about 24f. I noticed that I can make a custom template when rendering. There is this thing I can select called "Field Order"; I can choose between Upper Field, Lower Field, and None - Progressive Scanning. Next time I shoot in 24f, do I choose the progressive scanning option before I render? Is that what you guys are talking about?
 