It seems a lot of people – even professional photographers – don’t have a good understanding of bit depth as it relates to developing an image for the web. I got into the middle of a discussion today about the ability to bring up underexposed areas of an image, and in trying to explain it I couldn’t find a comprehensive write-up covering all the elements that go through my head when I’m processing images. So I decided to try to make one.
I don’t consider myself an expert on digital imaging, but I believe that to truly understand something you have to be able to explain it. I do have a background in computer science and math and worked in TV for several years, so I have a pretty good understanding of how all the elements work together. Hopefully this will be useful to some people out there.
This goes a bit more into the nuts and bolts of how a camera works. Tons of articles have been written on how to change things like ISO, shutter speed, and aperture to achieve the best results. I’m going to go a bit “under the hood,” so to speak, in the hopes of explaining how much you can expect to change while developing your image.
At its most basic level, the bit depth of your image is how many bits it takes to go from 0% (solid black) to 100% (solid white). The larger the bit depth, the more values you have from 0% to 100%. In any image, the bit depth determines the number of levels between black and white in each channel. For an 8-bit image (which is what most of the web is showing you), you get 256 different levels from black (0) to white (255). For a color image, you have 8 bits for Red, 8 bits for Green, and 8 bits for Blue for a total of 24 bits of color. This gives a very rich color palette of 16.7 million colors (256 x 256 x 256 = 16,777,216), but for each channel you still only have 256 values in which to expose your image.
Camera sensors these days can capture much more information than this though. 12-bit and 14-bit sensors are very common and I’m sure 16-bit is on the horizon (after I posted this, the guys at One River Media pointed out that most of the video cameras that shoot Raw are 16-bit at the sensor and output to a 12-bit file). 12-bit gives you 4,096 levels from black (0) to white (4,095) and 14-bit gives you 16,384 values. Unfortunately most images still end up as 8-bit JPGs or part of an 8-bit h.264 video on YouTube, so why would you want all that data at the capture stage? In short, it comes down to control, but let’s stop talking in numbers and talk about what these bits actually look like.
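The level counts above all follow from the same rule: each extra bit doubles the number of distinct values between black and white. A quick sketch of the arithmetic:

```python
# Tonal levels per channel for common sensor bit depths.
# Each additional bit doubles the number of distinct values.
for bits in (8, 12, 14, 16):
    levels = 2 ** bits
    print(f"{bits}-bit: {levels:>6,} levels (0 to {levels - 1:,})")

# For 24-bit color: 8 bits per channel, three channels.
print(f"Total colors: {(2 ** 8) ** 3:,}")  # 16,777,216
```

Running this prints 256, 4,096, 16,384, and 65,536 levels respectively, matching the figures in the paragraph above.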
So 0% is black and 100% is white, but what about all the levels in between? They’re all just various shades of gray, right? In theory, yes, but no sensor is perfect. We all know that taking images at high ISOs introduces noise, but even the best sensors still have an inherent amount of noise even at low ISOs. In a properly exposed image, you won’t see much noise due to a high signal-to-noise ratio. Put simply, when the amount of light (signal) is greater than the amount of noise generated by the sensor, all you see is signal. The lower the amount of signal (underexposed parts of an image), the more noise becomes an issue. This means in an 8-bit image, out of those 256 values from black to white, a good percentage at the bottom of the scale will be too noisy to be usable if you try to increase their brightness. An image with more bits of data (higher bit depth) makes it possible to have a higher signal-to-noise ratio, so in addition to more stops of exposure, you also gain more control over underexposed parts of the image. A dark area of an image that would be too noisy in an 8-bit image will be much more useful in a 12- or 14-bit image. In addition, an area of an image that looks black in an 8-bit image will always be black, no matter how much you try to raise the black level. In a higher bit-depth image, what you see as black may actually have more data than is visible on the screen.
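The “black stays black” point can be shown with a few lines of arithmetic. This is a minimal sketch with hypothetical sensor values (not real camera data): four shadow tones so dark that they all quantize to 0 in an 8-bit file, while a 14-bit file keeps them distinct and liftable.

```python
# Hypothetical linear sensor readings, as fractions of full scale.
# All four sit below 1/510 of full white -- deep shadow.
shadow = [0.0005, 0.0010, 0.0015, 0.0019]

# Quantize the same scene at each bit depth.
as_8bit  = [round(v * 255)   for v in shadow]
as_14bit = [round(v * 16383) for v in shadow]

print(as_8bit)   # [0, 0, 0, 0] -- every tone collapses to pure black
print(as_14bit)  # [8, 16, 25, 31] -- four distinct tones survive

# Lifting the shadows by 4 stops (x16) in post:
lifted_8bit  = [min(v * 16, 255)   for v in as_8bit]   # still all 0
lifted_14bit = [min(v * 16, 16383) for v in as_14bit]  # real gradation appears
```

Raising the black level of the 8-bit version can only turn that single 0 into a single flat gray; the 14-bit version has separate values to spread apart, which is exactly what the jacket example below shows.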
Below is an example from a recent wedding I shot. The room was very dark, so I was having to shoot at a relatively high ISO setting to get proper exposure. This is taken with a 14-bit sensor, so as you’ll see, there is a lot of data that’s not immediately visible.
Normal exposure. You can see some of the texture, but the folds in the jacket are mostly in shadow.
Increased the black levels. You can see more details in the shadows, but that also increases the amount of visible noise. If the original was an 8-bit image though, this same adjustment would just make the blacks more gray.
I think this is what some people refer to when they say cameras like the C300 record “the right 8-bits”. Even though it doesn’t give you a Raw codec, if the sensor can increase the signal-to-noise ratio (minimize noise), the 8-bits it does record can still be very beautiful. It also means it can record in lower light since you can crank up the ISO fairly high with minimal noise.
WHAT ABOUT HIGHLIGHTS?
Signal-to-noise ratio mainly affects how the shadows look, but that’s only part of the benefit of higher bit-depth images. At the other end of the spectrum, you’ve got highlights. High bit-depth sensors help tremendously here as well. The higher the bit depth of your image, the more data you have in what would otherwise be seen as a pure white part of an image (sometimes referred to as “blown highlights”). In an 8-bit image, if you try to lower those white parts of an image, you just end up making the white more gray. In a higher bit-depth image, what you see as white on your display may actually have quite a bit of detail once you start bringing the highlights down. For instance, a sky that looks blown out in an 8-bit image could be blue in a 12-bit image.
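The same quantization arithmetic applies at the white end. Here is a minimal sketch with hypothetical near-white values: four sky tones that all clip to 255 in an 8-bit file but remain distinct at 14 bits, so pulling the exposure down recovers gradation only from the higher bit-depth file.

```python
# Hypothetical near-white sky tones, as fractions of full scale.
sky = [0.9985, 0.9990, 0.9995, 1.0]

as_8bit  = [min(round(v * 255), 255)     for v in sky]
as_14bit = [min(round(v * 16383), 16383) for v in sky]

print(as_8bit)   # [255, 255, 255, 255] -- the sky reads as flat "blown" white
print(as_14bit)  # [16358, 16367, 16375, 16383] -- gradation survives

# Pulling the exposure down one stop (halving the values):
recovered_8bit  = [v // 2 for v in as_8bit]   # all identical: flat gray
recovered_14bit = [v // 2 for v in as_14bit]  # still four distinct tones
```

This is the effect the One River Media video below demonstrates: lowering highlights in the 8-bit file just turns white into gray, while the high bit-depth file reveals detail that was there all along.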
I was going to put together an example to show this, but while I was writing this post, I came across a video from One River Media that explains it very well. This is an example using the new BlackMagic Cinema Camera, but this applies just as well to any DSLR that shoots high bit-depth raw images.
BUT EVERYTHING ON THE WEB ENDS UP BEING 8-BIT, RIGHT?
Mostly true. The PNG format supports up to 16 bits per channel, and some displays can show more than 8 bits per channel, so technically there are ways to display an image on the web in higher than 8-bit. But that’s not really the point. The world around us is not 8-bit. If you were in a studio environment or could otherwise control the light very carefully, then you certainly should do that and shoot gorgeous images in 8-bit. More data is mainly about having more control in post. As an independent photographer/cinematographer, you don’t always have the budget or crew to get the lighting perfect. Or sometimes you don’t have the time to re-shoot something that is otherwise perfect. In those situations, having a higher bit-depth image is a huge help.