Does Size Matter? New Image Sensors Bring More Pixels, More Problems

In February 2015, Canon announced plans for a 50.6 Megapixel (MP) DSLR camera. When released in June, the EOS 5DS will boast the largest image file size among DSLRs currently on the market. A file size of 50.6 MP is a massive increase from the 1.3 to 1.75 MP available in the first commercial DSLRs, released in the early 1990s. It’s even a substantial increase over the 36.3 MP of Nikon’s latest, the D810.
Canon’s announcement made me pause and wonder just how big is big enough. The first hurdle DSLR manufacturers needed to overcome was image quality. An image file size of a few megapixels simply didn’t contain enough information to generate a high quality photograph. As technology improved, more information could be captured in larger file sizes, resulting in better quality photographs. And thus, image quality became linked with file size and the competition for bigger and better image file size was launched.
When it comes to file size, have we reached a point of diminishing returns or is there still room—figuratively and literally—for growth? And if bigger image file size is still a worthwhile pursuit, what are the benefits of that increase? This article considers those questions, weighs the costs and benefits of larger image file size, and explores who might benefit from an investment in larger image file sizes.
I discovered that I couldn’t clearly answer my questions about file size without first understanding the basics of how digital images are captured. And I mean the basics: How does a digital camera sensor work? What is a pixel? Do we need more or bigger? How does capture size relate to output? I thought I understood all of that, but realized that the terminology is slippery and the descriptions conflicting. So, we begin with the basics.
A sensor is an electronic equivalent of a frame of film: both capture light and use the light to generate a picture. Film uses a chemical process; digital uses an electronic process. Both create pictures with an amalgam of small fragments of information. Film uses light sensitive crystals; digital uses light sensitive diodes. When viewed from a distance, our eyes are fooled into seeing the grid of tiny points as continuous tones.

Georges Seurat A Sunday on La Grande Jatte
George Seurat used bits of information—in his case, tiny dots of paint—to create his pictures. When the dots are small, numerous, and placed closely together, they blend into smooth continuous tones. (Georges Seurat, A Sunday on La Grande Jatte, 1884 [Public domain], via Wikimedia Commons)
George Seurat A Sunday on La Grande Jatte detail showing dots of paint
Close examination of the picture reveals Seurat’s use of dots, a technique known as “pointillism.”

In digital photography, these points of information are generally identified as pixels and it’s the number and size of pixels in a file that determine an image file size. Logic would have it that larger image files contain more pixels and thus more points of information, and more information means a better picture. Right? Well, not quite.
A digital image begins with information captured by light sensitive diodes, known as photosites. The sensor in a digital camera is covered with photosites. Each photosite reacts proportionately to the total amount of light that strikes it, converting the light energy into an electrical signal. The signal is measured by the sensor and translated by a digital algorithm into binary digits (1s and 0s), or bits. These bits, which represent the colour and brightness information captured by the photosites, are recorded in digital picture elements, or pixels. The pixels also record coordinates identifying where in the picture that colour information belongs.
Photosites are little physical sensors; pixels are little digital packages filled with information collected from the photosites. Pixel size is limited only by the amount of information passed on from the photosites. The amount of information a pixel holds from a photosite is referred to as bit-depth. A pixel with an 8-bit depth, for example, can hold information that is an 8-digit combination of 1s and 0s. This means that the pixel holds information within a range of 256 tones of colour: 2 possible values (1 or 0) raised to the power of 8 bits, or 2⁸ = 256. A pixel with 16-bit depth can place the information within about 65,500 tones, and a pixel with 24-bit depth can place the information within about 16.7 million tones.
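The powers of two behind these tone counts are easy to verify. Here is a quick Python sketch (my own illustration, not camera firmware):

```python
# Number of distinct tonal values a pixel can record at a given bit depth.
def tones(bit_depth: int) -> int:
    return 2 ** bit_depth

for bits in (8, 14, 16, 24):
    print(f"{bits:>2}-bit depth: {tones(bits):,} tones")
# → 8-bit: 256; 14-bit: 16,384; 16-bit: 65,536; 24-bit: 16,777,216
```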
While pixels can flex their size, photosites are fixed and limited. Photosites can’t distinguish between colours of light; for them, light is light. Therefore, in most sensor designs, a colour filter is placed over the top of each photosite to restrict which light can enter. A red filter allows only red light to enter the photosite, a green filter only green, and a blue filter only blue. Each photosite thus contributes information about one of the three colours that, together, comprise the complete colour system in photography (RGB). This also means that each photosite registers only about one-third of the light that strikes it. The size of photosites therefore needs to be optimized so they can gather as much light as possible.
A photosite’s surface area is more important than how deep or shallow it is. A larger surface area means more light strikes the photosite. However, because each photosite is limited to capturing only one piece of information, about only one of the three colours in the light, more photosites offer the potential to collect more detailed information. It’s a balance between making photosites big enough to be sensitive to light and numerous enough to capture sufficient detail.
It’s also important to consider how information from the photosites gets translated into the information recorded in the pixels. If light is low and the photosites are unable to collect enough information, their signals need to be amplified when they are converted into digital information. The colour filter system used with photosites also limits the information each photosite can collect. That missing information is calculated and added to the conversion into digital information. Any amplification or manipulation of information needed to convert the photosite signal into a digital representation can introduce unwanted, random information into the process, recorded by the pixels as noise.
Consider ISO, for example. ISO is used with both film and digital sensors to describe light sensitivity. Film with a high ISO is made with crystals capable of holding more light; in other words, the film captures more information. The trade-off is that crystals usually need to be bigger to capture more light; therefore, as ISO increases, crystal size becomes more visible, manifesting as film grain. Digital sensors do not change as you adjust the ISO on your camera. The photosites don’t get bigger or more sensitive, so in low-light situations the photosites record less information. To compensate for this lack of information, the signals from the photosites are amplified. The result is not an increase in a pattern such as film grain, but signal noise, which the pixels record in certain areas of the photograph as random, unwelcome artifacts.

The process of capturing digital images
How digital images are captured

Sensor Size and Pixel Pitch
The number of pixels recorded by a camera is, for all intents and purposes, the same as the number of photosites on the sensor. (Some photosites perform other functions, but their number is comparatively small.) It would seem, then, that more pixels means more photosites, and more photosites means more information. But, as we sorted out in the basics, the data stored by pixels is only as good as the information captured by the photosites, and the quality of photosites is related to their size.
Photosite size is referred to as pixel pitch. Large photosites have a large pixel pitch and small photosites have a small pixel pitch. Larger photosites are more receptive to light. They capture more information and have strong signal strength. Smaller photosites gather less light. Transforming their low signal strength into digital information results in more recorded noise.

Increasing photosite surface area allows photosites to capture more light
The depth of photosites is irrelevant. Photosites with a greater surface area will capture more light.

Smaller photosites can also cause poor image quality as the lens aperture is stopped down. Smaller apertures—f/16 as opposed to f/5.6, for example—cause light to bend through the lens at sharper angles. This angled light glances across the sensor instead of striking it directly. Photosites need a large pixel pitch to capture this angled light well. As photosites get smaller and the pixel pitch decreases, this diffraction effect appears at larger (wider) f-stops.
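To get a feel for when diffraction starts to matter, the blur spot can be estimated with the standard Airy-disk formula (spot diameter ≈ 2.44 × wavelength × f-number). This is a simplified, lens-agnostic sketch assuming green light at 550 nm:

```python
# Estimate the diameter of the Airy disk (diffraction blur spot) in micrometres.
# When the disk spans several photosites, diffraction visibly softens the image.
WAVELENGTH_UM = 0.55  # green light, 550 nm expressed in micrometres

def airy_disk_um(f_number: float) -> float:
    return 2.44 * WAVELENGTH_UM * f_number

for f_stop in (5.6, 11, 16):
    print(f"f/{f_stop}: blur spot ≈ {airy_disk_um(f_stop):.1f} µm")
# At f/16 the spot is about 21.5 µm across—several times wider than the
# roughly 4 µm photosites of a densely packed full-frame sensor.
```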
For these reasons, the number of photosites—and, thus, the number of pixels—must be considered in relation to the size of the sensor. Increasing the number of photosites on a sensor increases the number of pixels but if the sensor size remains the same, the pixel pitch must get smaller in order to fit the increased number of photosites on the sensor. Many large photosites will improve overall image quality, but, as we’ve established, a large number of small photosites will only provide more details in well-lit situations photographed with wide apertures.
So, the first consideration when evaluating a camera’s pixel dimensions is the relationship between the number of pixels and the size of the sensor. Sensor size varies between camera types and models. The benchmark is a full-frame sensor—a sensor with the same dimensions as a frame of 35mm film; that is, 36 × 24 mm. While higher-end DSLRs are made with full-frame sensors, the sensor size in DSLRs can range from 40 to 100% of full-frame size.

Photosite size shrinks when more are placed on the same sized sensor
Increasing the number of photosites on the same-sized sensor results in photosites with smaller pixel pitch.

Canon’s 5D Mark III has a full-frame sensor that captures 22.1 megapixels (5760 × 3840 pixels) of information. All other things being equal, the overall image quality captured by those pixels will be better than that captured by the same number of pixels on a smaller sensor. Canon will be packing 50.6 megapixels (8712 × 5813 pixels) onto a full-frame sensor in the new 5DS. That’s a lot of photosites on a full-frame sensor; the pixel pitch will have to shrink to fit them all.
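Pixel pitch for a full-frame sensor can be approximated by dividing the 36 mm sensor width by the horizontal pixel count. A quick sketch using the figures above:

```python
# Approximate pixel pitch (photosite width) on a 36 mm wide full-frame sensor.
SENSOR_WIDTH_UM = 36_000  # 36 mm in micrometres

def pixel_pitch_um(pixels_across: int) -> float:
    return SENSOR_WIDTH_UM / pixels_across

print(f"5D Mark III (5760 px across): {pixel_pitch_um(5760):.2f} µm")  # 6.25 µm
print(f"5DS (8712 px across):         {pixel_pitch_um(8712):.2f} µm")  # ≈ 4.13 µm
```

Squeezing more than twice the pixel count onto the same sensor area shrinks each photosite’s width by about a third.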
If you shoot in low-light conditions and thus use high ISOs, you may want a lower ratio of megapixels to sensor size in order to take advantage of the light sensitivity of large photosites. If, however, you shoot in well-lit conditions—for example, in a studio with light kits—you may prefer a higher ratio of megapixels to sensor size in order to capture more and finer detail.
When evaluating the ratio of megapixels to sensor size, also consider the aperture settings you favour on your shoots. If you prefer small apertures to control the light or increase the depth of field, small photosites (a high ratio of megapixels to sensor size) may add noise to your images. A low ratio of megapixels to sensor size will provide larger photosites and deliver better image quality with small aperture settings.
Pixel size, or bit-depth, may also be a factor when evaluating a camera’s image file size.
I mentioned that pixel size increases with the amount of information recorded from the photosites. As technology advances, sensors are able to define colours more precisely by recording the information in more bits. Most DSLRs can now record information at 14-bit depth. This doesn’t mean more colours are recorded, but that each colour is recorded more precisely, providing finer tonal gradation between pixels. The image is said to have greater colour depth.
The question is whether higher bit depth translates into higher image quality.
High bit depth slices the data more finely, but the process of slicing the data can introduce digital noise. Therefore, slicing the data for a very high bit depth may, in fact, result in lower image quality. Bit-depth may also be wasted if the bits are capable of registering a greater range of tones than the range of light the sensor can capture. Sensors (and film) cannot capture the darkest shadows and the brightest highlights of an average scene. Many DSLRs can now register a range of 12 stops of light; a bit-depth that can record more than that range would be wasted. A bit-depth of 14-bits is considered to be more than sufficient to capture a range of 12 stops of light.
As sensor technology continues to evolve, reducing digital noise and improving dynamic range, higher bit depth will become more relevant. Still, bit-depth will always need to be considered in light of the generated image file size. Canon’s 5D Mark III captures images at 14-bit depth. Those weighty pixels will result in an image file size of about 27 MB for each RAW image taken. You can choose to shoot with a lower bit-depth if using JPG format, but RAW images are captured with full bit-depth.
Increasing the number of pixels and increasing the bit-depth of pixels both result in larger image file sizes. This affects everything from capture to print or display.
Larger files are slower to record or write. Granted, the difference in speed is now down to milliseconds, but it takes more circuitry, more computing power, and better memory cards to keep up the writing speed for larger files. Those all add cost and heft to the camera. Memory cards that can write larger files faster also increase sharply in price. A memory card that will deliver the performance offered by a camera such as the new Canon 5DS currently costs as much as three times the price of a regular memory card.
A file size of 27 MB (the size of RAW images produced by Canon’s 5D Mark III) grows to more than 60 MB when the file is opened in image-editing software. If you work and save the file at 16-bits per channel in order to preserve the full bit-depth captured by the camera, the file size doubles to over 120 MB. That means one regular DVD will hold about 36 finished TIFF images, recorded at full resolution. That’s the same as one old-fashioned roll of film.
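These figures follow from simple arithmetic: pixel count × three colour channels × bytes per channel. A sketch (reporting mebibytes, which is roughly how editing software displays size):

```python
# Uncompressed image size in memory: width × height × channels × bytes/channel.
def uncompressed_mib(width: int, height: int, bits_per_channel: int) -> float:
    bytes_total = width * height * 3 * (bits_per_channel // 8)
    return bytes_total / (1024 * 1024)

size_8bit = uncompressed_mib(5760, 3840, 8)    # ≈ 63 MB when opened in an editor
size_16bit = uncompressed_mib(5760, 3840, 16)  # ≈ 127 MB at 16 bits per channel

# A single-layer DVD holds about 4.7 GB (decimal), so roughly 35 such TIFFs fit.
dvd_mib = 4.7e9 / (1024 * 1024)
print(int(dvd_mib // size_16bit))
```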
While there’s an argument to be made for the merits of improving image quality by limiting selection, the reality is most photographers shoot hundreds of images on each assignment. That’s a lot of DVDs. And if storing both original files and finished images, as most photographers do, the need only increases for hard-drive space, cloud storage, and backup room.
There’s also the cost of internet usage and the amount of time needed if uploading large files to cloud storage or an online gallery. And increased file size places a demand on computer specifications. For example, larger file sizes demand more RAM and faster processors.
Some DSLRs now offer the option of saving “reduced resolution” RAW files. While photographers have long been able to reduce file size by saving images in a small JPG format, the ability to save smaller RAW files is relatively new. Canon’s 5D III, for example, allows photographers to save RAW files as full-size (22.1 MP), medium (10.5 MP), or small (5.5 MP). For photographers who don’t always need a full-sized RAW file, the “reduced resolution” option helps to bring file size down to something more manageable.
It must be noted, however, that a RAW file smaller than full-size still uses data from the same photosites. A smaller file will not compensate for poor image quality delivered by small photosites. If you routinely use a small or medium-sized RAW file, a sensor with fewer—and larger—megapixels might be a better choice.
It’s misleading to talk about resolution at capture. A digital camera captures images of a certain pixel dimension (5760 × 3840 pixels, for example) or file size (22.1 megapixels), but you determine the resolution when you prepare your image for its final end use.
If you intend to display your final image on the web, the number of pixels available in your image file will govern the size of the image once posted. Web designers consider a standard size for computer monitors to be 1024 × 768 pixels or bigger. I use a 27″ iMac, which has a monitor resolution of 2560 × 1440 pixels. My 60″ LED television has a resolution of 1920 × 1080 pixels. A full-sized image captured with a Canon 5D Mark III provides an image of 5760 × 3840 pixels. That means that, without reducing the file size, the image would be more than double the size of my television and my iMac, and several times larger than an average computer monitor. Clearly, if you take photographs with the sole intent of displaying them electronically, a moderate file size is plenty.
Resolution becomes very important when preparing prints. The standard for best quality photographic prints has been to print them at a resolution of 300 pixels per inch (ppi). That means that an image of 5760 × 3840 pixels, for example, can be printed up to a size of 19.2 × 12.8 inches without sacrificing any quality.
Many printmakers argue that the larger the print, the lower the resolution can be, because the viewer stands further back to view the print. The points in Seurat’s painting, for example, only become clear as you step closer to the painting. Printmakers will work with a resolution of 240 pixels per inch for large prints if they are going to be seen from a distance of a few feet or more. If the paper being used for the print has a low matte finish or is textured, some printmakers will happily work with a resolution as low as 200 pixels per inch for large prints. Therefore, if I use the same 5760 × 3840 pixel file for a large print, I could make a print up to 24 × 16 inches at 240 pixels per inch, or 28.8 × 19.2 inches at 200 pixels per inch. Those are large prints.
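The print sizes above come straight from dividing pixel dimensions by the chosen resolution. A quick check:

```python
# Maximum print size in inches: pixel dimensions divided by resolution in ppi.
def print_size(px_w: int, px_h: int, ppi: int) -> tuple:
    return px_w / ppi, px_h / ppi

for ppi in (300, 240, 200):
    w, h = print_size(5760, 3840, ppi)
    print(f"{ppi} ppi: {w:.1f} × {h:.1f} inches")
# → 300 ppi: 19.2 × 12.8 | 240 ppi: 24.0 × 16.0 | 200 ppi: 28.8 × 19.2
```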
Large file sizes are helpful if you know you will be cropping your photographs before printing. Event photographers, for example, are often restricted from taking photographs from an ideal vantage point. Or they may take generously framed successive images of an active scene in order to catch the whole of a precise moment needed in the image. In these cases, large file sizes allow a photographer to crop the final image and still offer a generously-sized print of the result.
Commercial and art photographers who need image files for massive prints or posters also need large file sizes. However, if capturing digital images, they will typically use medium-format cameras in order to get more and larger photosites. Medium format also offers superior quality lenses, and the lens on your camera can make or break your image.
Shooting for detail and better resolution is a worthwhile pursuit, but it’s only an achievable pursuit if you’ve got the right equipment to feed information to the sensor. A camera's lens is the major determinant of image quality. Regardless of the quality or number of photosites, you can’t improve your image quality if you are limited by the quality of your lens.
An average quality kit lens will not deliver the same detail or clarity as a prime lens or high quality zoom. Clarity and detail are also lost if you use filters on your lenses. Even a quality UV filter put on a lens for protection will diminish the image quality enough to waste the value of the biggest file sizes.
If you are considering a new camera body in order to get better and larger file sizes, spend your money first on great lenses. Then get rid of the filters on your lenses (except when used creatively) and, last, consider upgrading your camera body.
To get the most out of more megapixels, you should also be a photographer with meticulous technique. Shooting handheld at anything but the fastest shutter speed will diminish the quality of the image delivered to the photosites. The same holds true for dirty lenses, compromised exposures, and inaccurate white balances.
If you’re after better quality images and more megapixels is a reasonable answer for you, then be sure to also use the best lenses and practice the best shooting style.
We can already capture more colours than our human eyes can discern. Each 8-bit channel (red, green, or blue) records colour on a scale of 256 values. Combining the channels (256 × 256 × 256) means that even at 8-bit depth, a photograph can offer a theoretical maximum of more than 16 million colours. It’s estimated that the human eye can detect somewhere between 10 and 12 million colours.
Calculating the range of detail that a human eye can see is more challenging. Our vision works more like a movie camera than a still image. We are continuously scanning to paint in details and offer more information to our brains. We also process information from two eyes, which our brains merge and combine to “see” even more detail. And we see in three dimensions, not two. Nonetheless, scientists have attempted to understand what pixel resolution equivalency our eyes see.
The print standard of 300 pixels per inch is based on a calculation (of unknown source) suggesting that our eyes can no longer distinguish individual pixels when an image is printed at a resolution of 300 pixels per inch and the print is held 10 to 12 inches from our eyes.
Phil Plait, who writes for Discover magazine, did some research and calculations and concluded that for someone with perfect vision, a pixel needs to be 0.0021 inches or smaller for our eye to be unable to resolve the dots at 12 inches. For a person of average vision, a pixel only needs to be 0.0035 inches or smaller. Plait’s conclusion is that for most people, Steve Jobs was correct when he claimed that a human eye cannot detect the pixels in an iPhone 4 screen held at 12 inches from our eyes.
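Plait’s figures can be reproduced with basic trigonometry: the smallest feature resolvable at a given distance is roughly distance × tan(the eye’s angular resolution). The sketch below assumes about 0.6 arcminutes for excellent vision and 1 arcminute for average vision (commonly cited values, and my assumption here):

```python
import math

# Smallest feature (in inches) the eye can resolve at a viewing distance,
# given the eye's angular resolution in arcminutes.
def resolvable_inches(distance_in: float, arcminutes: float) -> float:
    return distance_in * math.tan(math.radians(arcminutes / 60))

print(f"excellent vision: {resolvable_inches(12, 0.6):.4f} in")  # ≈ 0.0021
print(f"average vision:   {resolvable_inches(12, 1.0):.4f} in")  # ≈ 0.0035
```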
Dr. Roger Clark, a Ph.D. graduate from MIT, specializes in scientific imaging. He proposes that when viewing a 20 x 13.3-inch print from a distance of 20 inches, the print needs to be about 10600 x 7000 pixels (about 74 megapixels) to show detail at the limits of our human ability.
Clark also suggests that we need about 576 megapixels filling our whole field of vision to reach a resolution where the average human eye can no longer distinguish the points that make up the image. However, if that information is broken down to understand what resolution we would need as an instantaneous captured equivalent in the centre of our vision, the suggestion is 7 megapixels will do.
We will probably never know the answer. Regardless, I think it’s safe to say that we have reached an ability to make prints at a high enough resolution that, at a reasonable viewing distance, our eyes cannot distinguish the individual elements that make the picture. Can the detail still be improved? No doubt, but likely we are at a stage of needing significant changes to make improvements that are noticeable.
We also need to ask if we want to make those improvements. Some photographers will want their photographs to appear as life-like as possible and will pursue every improvement in image quality. Others will embrace the representational aspect of photography and prefer a less than real-life re-creation.
Determining how many pixels you need in your image files, and whether to spend money on more pixels, is not a straightforward evaluation. Many factors need to be considered, with some features traded off for others. Starting with the following considerations, though, should get you well on your way to deciding whether to place an order in June for Canon’s new 5DS.
  • What is the sensor size and the ratio of pixel number to sensor size? Has photosite size been compromised for photosite quantity?
  • Do you shoot primarily in low light and need large photosites, or do you shoot in studio and need the details offered by small photosites?
  • What aperture range do you typically use? Do you use small apertures that would benefit from larger photosites, or do you use large apertures that will feed light directly into even small photosites?
  • What bit-depth do you use when processing your digital images? Is colour and fine detail critical in your images? Do you need 16-bit depth and thus, work with large files, or are you happy at 8-bits with the smaller files?
  • What file size can you manage? Do you have the resources you need or are you willing to purchase them to manage larger file sizes? Is the investment needed to support larger file sizes worth the quality you’ll gain in return?
  • Do you generate images for the web or for print, and if for print, what size, on what paper, and viewed from what distance?
  • What lenses do you have for your camera? Are they high enough quality to make the extra megapixels worthwhile? If they are not, are you willing and able to invest in high quality lenses? Are you a “clean shooter” with good technique or do you prefer to be rough and ready?
  • How real to life do you want your photographs to be?
Your last consideration might be the balance of your bank account!  
Finally, if you are considering a high-pixel-count DSLR, consider renting a medium format first to assess the results. Do you see the quality you expected to find? Can you manage the larger file size? Is the extra work worth the outcome? You may discover that you can have the best of both: keep a DSLR with a reasonable pixel count for day-to-day use, and when you need the quality of large, high-pixel count files, rent a Hasselblad or Phase One/Mamiya for the shoot.



