I remember well the first days that I frequented DPReview.com, when its proprietor was very concerned about rises in pixel density – the number of pixels per unit area on the image sensor. There was a trend within the industry at the time to move from 6MP on an APS-C size sensor to 8MP or more. The fear was that this would result in a drop in image quality, and was therefore to be resisted. The rises occurred, but the predicted drop in image quality didn’t. In fact, the new sensors, with their higher pixel density, managed to produce higher-quality images than their predecessors.
Today’s argument against pixel density
Today, when that APS-C sensor can carry up to 40MP, there is a renewed case against pixel density. The argument is usually based on the idea that ‘larger pixels can gather more light’. This is true, if you consider a pixel in isolation. However, a sensor is an array of pixels, and a larger pixel occupies a larger proportion of that sensor. Apart from one factor, which will be discussed later, the size of the pixels does not affect how much light a sensor can collect. If you look at weather forecasts, the amount of rain is measured in millimetres. This is the depth of rainfall collected in the bottom of a straight-sided vessel left in the rain. The area of the vessel does not matter, since the depth will always be the same.
The same principle applies to pixels, so from the light-gathering point of view, their size is irrelevant. The aforementioned factor which may be affected by pixel size is what is called ‘quantum efficiency’ (QE), which is the proportion of the light particles (or ‘quanta’) that a sensor collects. The argument goes that larger pixels should have a higher QE because proportionately less of their area is committed to circuitry that does not collect light. In practice, things are not so simple, for three reasons:
1. Microlenses
All modern sensors use a microlens on each pixel to concentrate the light from the whole pixel area onto the sensitive part of the circuitry. The smaller the pixel, the easier it is to make an effective microlens, so this factor tends to level out any difference in the size of that sensitive area.
2. Geometry processes
Sensors with smaller pixels are often made using finer geometry processes, so the proportionate size of the sensitive area does not change.
3. Back-side illumination
Many modern sensors use back-side illumination, where the circuitry is at the back of the pixel, so its size is less important.

Put all this together, and the fact of life is that quantum efficiencies have not, in general, decreased as pixels have shrunk.
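The rain-gauge argument above can be sketched numerically. The figures below are illustrative, not real sensor data: an evenly lit APS-C-sized sensor with an assumed quantum efficiency that is the same at every pixel size.

```python
# Sketch of the rain-gauge argument: the total photoelectrons a sensor
# collects depend on its area, the photon flux, and the exposure time --
# not on how many pixels the area is divided into. Illustrative figures only.

PHOTON_FLUX = 1.0e6        # photons per mm^2 per second (illustrative)
EXPOSURE_S = 0.01          # 1/100 s
SENSOR_AREA_MM2 = 366.6    # APS-C, roughly 23.5 mm x 15.6 mm
QE = 0.8                   # quantum efficiency, assumed equal for all pixel sizes

def total_electrons(num_pixels: int) -> float:
    """Sum the photoelectrons over all pixels of an evenly lit sensor."""
    pixel_area = SENSOR_AREA_MM2 / num_pixels
    electrons_per_pixel = PHOTON_FLUX * EXPOSURE_S * pixel_area * QE
    return electrons_per_pixel * num_pixels

for mp in (6_000_000, 24_000_000, 40_000_000):
    print(f"{mp / 1e6:.0f} MP: {total_electrons(mp):.0f} electrons")
```

Whatever the pixel count, the total comes out the same, just as the depth in the rain gauge does not depend on the width of the vessel.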
What then are the drivers behind reduced pixel size and increased pixel counts?
A 200MP phone sensor is well above the point where it can produce any improvements in resolution. Instead, resolution will be limited by the diffraction of the lens, which is a physical limit that cannot be overcome. However, there are some advantages to having pixels smaller than that diffraction limit.
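A rough comparison makes the point. The diameter of the diffraction (Airy) disk to its first minimum is about 2.44 times the wavelength times the f-number; the pixel pitch below is an assumed round figure broadly typical of current 200MP phone sensors, not a specification for any particular camera.

```python
# Rough sketch: compare the Airy disk diameter of a fast phone lens with
# an assumed 200 MP phone-sensor pixel pitch. Illustrative figures only.

WAVELENGTH_UM = 0.55   # green light, 550 nm
F_NUMBER = 1.8         # typical fast phone lens
PIXEL_PITCH_UM = 0.6   # assumed, roughly typical of 200 MP phone sensors

# First-minimum diameter of the Airy pattern: d = 2.44 * wavelength * N
airy_diameter_um = 2.44 * WAVELENGTH_UM * F_NUMBER
print(f"Airy disk diameter: {airy_diameter_um:.2f} um")        # prints 2.42 um
print(f"Pixels across one Airy disk: {airy_diameter_um / PIXEL_PITCH_UM:.1f}")  # prints 4.0
```

With around four pixels spanning a single diffraction spot, the pixels are indeed well past the point where they limit resolution.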
Firstly, it means that anti-aliasing filters are not required. These filters are far from perfect and inevitably degrade resolution. If the pixels are smaller than the diffraction limit, there will be no aliasing. Secondly, smaller pixels ultimately provide more information about the image projected on them. Information content can be quantified as dynamic range per pixel times the number of pixels.
Suppose we halve the area of each pixel, doubling the number of pixels on the sensor. All else being equal, theory tells us that per-pixel dynamic range will decrease by a factor of 1.4 (the square root of two), but since there are twice as many pixels, the overall information content increases by that same factor.
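The scaling above can be checked with a short calculation, under two simple assumptions (mine, to make the arithmetic concrete): full-well capacity scales with pixel area, and per-pixel dynamic range scales with the square root of full-well capacity, as in the shot-noise-limited case.

```python
import math

# Sketch of the scaling argument: halving pixel area while doubling the
# pixel count. Assumes full well ∝ pixel area, and per-pixel dynamic
# range ∝ sqrt(full well) (shot-noise limited).

def information(num_pixels: float, pixel_area: float) -> float:
    full_well = pixel_area                 # arbitrary units: full well ∝ area
    dr_per_pixel = math.sqrt(full_well)    # shot-noise-limited dynamic range
    return dr_per_pixel * num_pixels       # DR per pixel times pixel count

base = information(num_pixels=1.0, pixel_area=1.0)
halved = information(num_pixels=2.0, pixel_area=0.5)
print(f"Information ratio: {halved / base:.3f}")   # prints 1.414
```

Per-pixel dynamic range falls by the square root of two, the pixel count doubles, and the product rises by the square root of two: the factor of 1.4 in the text.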
Bob Newman is currently Professor of Computer Science at the University of Wolverhampton. He has been working with the design and development of high-technology equipment for 35 years and two of his products have won innovation awards. Bob is also a camera nut and a keen amateur photographer.