In terms of pixel count, DSLRs capture much larger images for stills than for video. I was wondering how the sensor is used in each case. I can think of two possibilities.
1) In video mode, only a smaller rectangle inside the whole sensor is read out.
2) In video mode, the whole width and height of the sensor are used, but some kind of subsampling takes place: for instance, several pixels could be averaged into one, or only one out of every several pixels could be read.
In the first case, the video image is like a cropped version of the corresponding still. In the second case, the still and the video cover exactly the same field of view, but the video image should lose some sharpness.
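The two possibilities (plus the two flavours of subsampling) can be sketched numerically. The sketch below treats the sensor as a plain 2D array; the 24 MP sensor size and the 1080p target are illustrative assumptions, not a statement about any particular camera:

```python
import numpy as np

# Hypothetical sensor: 24 MP stills (6000 x 4000 photosites),
# target video frame 1920 x 1080. Numbers are illustrative only.
sensor = np.random.rand(4000, 6000)
vid_h, vid_w = 1080, 1920

# Possibility 1: crop -- read only a centred 1920 x 1080 window,
# so the video shows a narrower field of view than the still.
top = (sensor.shape[0] - vid_h) // 2
left = (sensor.shape[1] - vid_w) // 2
cropped = sensor[top:top + vid_h, left:left + vid_w]

# Possibility 2a: skipping -- read every nth row/column across the
# full sensor area (same field of view, fewer samples).
step_y = sensor.shape[0] // vid_h   # 3
step_x = sensor.shape[1] // vid_w   # 3
skipped = sensor[::step_y, ::step_x][:vid_h, :vid_w]

# Possibility 2b: binning -- average each 3 x 3 block of photosites
# into one video pixel (same field of view, smoothed detail).
h, w = vid_h * step_y, vid_w * step_x
binned = (sensor[:h, :w]
          .reshape(vid_h, step_y, vid_w, step_x)
          .mean(axis=(1, 3)))

print(cropped.shape, skipped.shape, binned.shape)
# All three are (1080, 1920); only the crop changes the framing.
```

All three outputs have the same pixel dimensions, which is the point: you cannot tell from the frame size alone which readout mode was used, only from the framing and the fine detail.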