
Executive Summary: buy a camera that supports RAW mode and use it for everything but your snapshots - JPEG will do just fine for them.

What is RAW mode, anyway?

Inside your camera there are two main chips: the sensor device that captures the light, and the imaging processor that makes sense of what the sensor "saw". The sensor responds to light completely differently than the human eye does, and usually it doesn't deliver as sharp an image as you'd like. The imaging processor compensates for this by correcting the colors, sharpening a bit, and writing the result as a JPEG file.

Now, this is extremely convenient, because you can copy the JPEG images and mail them off to friends, the printing shop, or publish them on your website. However, there are drawbacks:

  • JPEG is a compressed format, and the compression is "lossy" - it dumps information the human eye doesn't care about in order to achieve greater compression ratios. Your camera may have "regular", "fine" and "superfine" settings to control the amount of compression, but information is lost.
  • JPEG contains only 8 bits (256 values) worth of information per color. Your camera's imaging chip is typically capable of finer color discrimination (usually 12 bits - 4096 values), so the processor simply maps 16 color values in "image sensor space" to 1 color value in "JPEG space" - again, information is lost.
  • The algorithms used in your camera are fixed. Manufacturers do their best to employ the best algorithms available, but the camera's processing power is limited and new research turns up better algorithms continuously.
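The bit-depth point above can be illustrated with a short Python sketch. Note this is a deliberate simplification: the shift-based mapping below is hypothetical, as real cameras apply a tone curve before reducing to 8 bits, but the counting argument is the same.

```python
# 12-bit sensor values range over 0..4095; 8-bit JPEG values over 0..255.
sensor_values = list(range(4096))               # every value the sensor can record
jpeg_values = {v >> 4 for v in sensor_values}   # drop the 4 least significant bits

print(len(sensor_values))   # 4096 distinct sensor values
print(len(jpeg_values))     # 256 distinct JPEG values
# 4096 / 256 = 16: sixteen different sensor readings collapse
# into one and the same JPEG value, and cannot be told apart afterwards.
```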
The conclusion is that JPEG images are convenient but not the best you can get. Enter RAW format: the data straight from the imaging chip, unprocessed. It's a proprietary format, so you will need special software to convert it into something your photo editing program understands. In return, nothing has been compressed away, the full 12 bits of color information are preserved, and you can process the data with more powerful algorithms on your multi-gigahertz PC - and, best of all, re-process it five years from now when a math student publishes a thesis with an even better sharpening algorithm.

Compared with film-based photography, a RAW image is like the latent image on an undeveloped negative. On film, you can choose and apply a developer only once - development destroys the latent image by turning it into the negative image (or, in the case of slides, a positive image). A RAW file, by contrast, can be "developed" over and over again with different "developers", whenever you like. That, and the better quality you get, is reason enough to set your camera to RAW mode whenever you're shooting for your portfolio.

Note: people will likely say that I'm a snob because no one will see the difference between 8 and 12 bits. However, when you are editing (color correcting, polishing, whatever) in Photoshop, you are always losing bits of information, and when you start with more information (12 bits) you will end up with a better quality image.
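Why editing loses bits can be shown with a tiny Python sketch. The `edit_8bit` function is a made-up example, not what Photoshop actually does: it simulates one round-trip edit (darken by half, then brighten back) in 8-bit integer math, where rounding throws information away for good.

```python
def edit_8bit(v):
    """Darken an 8-bit value by 50%, then brighten by 200%."""
    darker = v // 2              # integer division drops the lowest bit
    return min(255, darker * 2)  # brightening cannot bring it back

originals = list(range(256))
after = [edit_8bit(v) for v in originals]
print(len(set(after)))   # only 128 distinct values survive out of 256
```

Starting from 12-bit data, the same edit would still leave far more distinct values than 8-bit output needs, which is why the extra bits pay off the moment you start editing.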


Copyright (C)2000-2011 Cees de Groot -- All rights reserved.