
Seeing in Colour: How Our Eyes Sense and Cameras Record


You had so much fun at the family party on the weekend. You snapped a great photo of your grandparents—one you’d like to print and frame for them. You made a few little tweaks when you processed the photo, then printed it on your photo printer at home. The result was far too dark; the photo looked muddy. Rather than play around with it, you uploaded the image to your local photo printing service, but when you picked up the print, your grandparents’ skin looked unnaturally yellow and your grandmother’s violet-coloured sweater looked blue. What the heck? You set your white balance before taking the photos and the image looked okay on your computer. The image seemed to post to the web without any changes in colour or density. But every version of the image looks a little different and none of them are what you thought you saw in the moment.

Welcome to the world of colour management in photography and video. It all sounds like it should be so perfectly technical and mathematical. Colour balancing gets the whites white. A ColorChecker Target helps you achieve colour accuracy between shoot and processing. ICC (International Color Consortium) profiles standardize colour management. And still, your results aren’t what you remembered or expected. Even your black and white photos sometimes look ‘off.’

An apple on a white plate with a colour cast

Sanity and reliable outcomes are possible. It takes a bit of work and some trial and error, but oh, there’s no describing the feeling of satisfaction of printing a photo perfectly the first time, or getting the colours in an image just right to express the mood of the moment.

We’re launching a series of articles on working with colour, which will help you get to the pot of satisfaction at the end of a multi-coloured rainbow. In this article, I look at what colour is and how we see it. What makes colour? And why does colour look different to different people and in different circumstances?

What You See Is Not Necessarily What You Get

When I was a kid, I used to play a game with my friends: we’d try to determine if we all saw a certain colour the same way. Did they see red as I did, or did they see red as I saw purple but we both called our perceptions “red” because that’s the word we were taught to associate with what we were seeing? It was a child’s game perhaps, but the wonder wasn’t that far from reality.

What We See

Human vision is complex: not only do we have a varying capacity to see colour and light, we also process what we see through our brains, which add layers of interpretation to colour and light.

"We don't see the world as it is, we see it as we are." —Anaïs Nin

Our eyes perceive colour and light with two kinds of cells, known as “rods” and “cones.” One collection of cells—the cones—is sensitive to colour but requires good light. These cells give us our highest visual acuity. The other collection—the rods—is sensitive to luminance (how bright or dark the light is) and works well in dim light, but registers almost no colour. The result is that colour, depth, and detail are lost as the light gets darker. What we see in dim light, we perceive as flat and desaturated. In contrast, we see extraordinary detail in bright light.

These two types of cells do not exist in equal portions nor are they distributed evenly in our eyes. The cells that see colour and require bright light are fewer in number and are concentrated in the centre of our vision. The cells that see in dim light are more numerous and are concentrated primarily around the edges of our vision. If you are a camper or hiker, you’ll know that the best way to get around in the dark is to focus more on what is on either side of you rather than what is directly in front. If using a flashlight, instead of shining it directly ahead, you’ll navigate through the dark better if you swing the light from side to side. This is because the cells that see details in dim and dark light are most active in our peripheral vision.

Human eye anatomy
Illustration source: iStock. Edited by Dawn Oosterhoff.

Whether the light grows darker or brighter, the decline in what we can see is gradual. We can see detail in bright light, and will see colour, if not fine detail, into the very brightest highlights. Our ability to distinguish colours and details declines gradually as the light fades, yet we can still detect motion and make out shapes in very deep shadow.

When we take in a view, the cells of our eyes register colour, luminance, and detail, but our brains tell us what we see. Our brains interpret the information and fill in gaps, calling on our memories and experiences to do so. We don’t notice how lines converge as they recede into the distance because our brains correct the distortion. Similarly, we don’t notice how much yellow or pink or green might be in a room’s light, because our brains don’t consider the colour cast as important as noticing, say, that the red meat has turned somewhat grey.

What the Camera Gets

What a camera “sees” can be described simply: a camera’s sensor records a narrow range of light and colour, and the photo receptors respond uniformly across the field of view. Photo receptors do not desaturate colour in shadows, nor do they record more detail as the light gets brighter. Similarly, they do not record more colour in the centre of the field of view. Each photo receptor, regardless of its location on the sensor, records colour and light as they exist within the sensor’s range of luminance. Further, a sensor’s ability to record colour and detail simply ends at either end of that range: highlights clip to white and shadows clip to black.
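A minimal sketch in Python can make that clipping concrete. The luminance values here are invented for illustration; the point is only that the response is linear inside the range and cut off outside it:

```python
import numpy as np

# Hypothetical scene luminances, normalised so the sensor's usable
# range runs from 0.0 (black) to 1.0 (white).
scene = np.array([-0.2, 0.0, 0.4, 0.9, 1.3])

# Within its range the sensor responds linearly; anything beyond
# either end is simply lost.
recorded = np.clip(scene, 0.0, 1.0)

print(recorded)  # [0.  0.  0.4 0.9 1. ] -- shadows and highlights clip
```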

Cameras interpret what the photo receptors register, but the interpretation is limited and based on fixed algorithms. Interpretation involves comparing and extrapolating existing information to fill in tiny gaps with logic; it is not fluid or flexible. Converging lines will still converge, and the yellow cast of incandescent light will be recorded just as faithfully as the yellow of a banana.

Colorimeters and spectrophotometers—devices used to colour-calibrate displays such as monitors—work in the same way as camera sensors: they register colour uniformly and in a linear fashion. That means digital colour management will be consistent across all calibrated devices, but the calibration will not adapt to how we see colour and light.

Living Versus Digital Viewing

There’s another layer of visual variance to consider when looking at the difference between how we and digital devices see colour and light. When we look at a scene, our eyes are moving—even if only subtly—and taking in a great deal of information outside our main field of view. We may not be aware of the colour, light, and shapes in our peripheral vision, but our brains get that information regardless and use it to interpret what we are seeing immediately before us.

Cameras may pick up light and colour that originate outside of the field of view, but only as they cross into the camera’s field of view.

To add yet another layer of complexity, consider that what we were looking at when we took the photograph included visual information that the camera would not have captured. We then reproduce the photograph and view it in the centre of yet another field of view. Visual information that started as a broad expanse is captured differently from how we would have seen it, then compressed and presented back to us in the centre of another field of view that contributes its own, different information to our brains. It’s the photographic equivalent of funhouse mirrors at a carnival.

Panoramic view of Parliament Hill, Ottawa, Canada
Our camera "sees" a contained portion of everything we see.
Man looking at photographs on a gallery wall
We add another layer of complexity to what we originally saw when we look at the photographed scene in a new view. Base image source: iStock. Image insertions and editing by Dawn Oosterhoff.

Colour Theories

When it comes to understanding colour and its role in photography, it’s important to also review how colours combine to create other colours. You may have learned at some point—likely in art class—that red, yellow, and blue are primary colours, and mixing them produces the secondary colours of green, orange, and purple. The idea has been around since the 17th century and is still the predominant approach used in classical art. However, while that theory may work when mixing paint, that’s not how we see colour and it’s not how colour is reproduced in photography or print.

Trichromatic Theory

There are two theories explaining how we see colour. According to the trichromatic theory, we have different receptors for different colours in the cone cells of our eyes (the cells that see colour). The receptors pick up three different wavelengths of light—long, medium, and short—which we perceive as red, green, and blue respectively. These three colours combine to give us all other visible colours.

It shouldn’t be a surprise, then, that all colours in devices that capture or emit light (cameras, computer monitors, projectors, and so on) are composed of varying combinations of red, green, and blue. Because RGB are the colours of light, adding all three colours together gives white; removing all three gives black. That is the basis of the RGB colour model.
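Here’s a small Python sketch of additive mixing. The add_light helper is hypothetical, but the arithmetic is the RGB model itself:

```python
# Additive (RGB) mixing: each channel is an amount of light, 0-255.
red   = (255, 0, 0)
green = (0, 255, 0)
blue  = (0, 0, 255)

def add_light(*colours):
    """Combine light sources by summing each channel, capped at 255."""
    return tuple(min(sum(channel), 255) for channel in zip(*colours))

print(add_light(red, green, blue))  # (255, 255, 255) -> white
print(add_light(red, green))        # (255, 255, 0)   -> yellow
```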

The print colour model—CMY—is the inverse of the RGB model and, thus, also based on the trichromatic theory. CMY are the colours of print: ink absorbs certain wavelengths of light and reflects others to create colour. If you subtract each of red, green, and blue from white, you get their colour opposites: cyan, magenta, and yellow, or CMY. Add all three together and you get (almost) black. (K—black—is added to the print colour model to provide a true black, and to save the expense of layering all three inks to produce black.)
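That inverse relationship is simple enough to write out. Here’s a naive RGB-to-CMYK conversion in Python—real print workflows use ICC profiles rather than this bare arithmetic, so treat it as a sketch of the idea:

```python
def rgb_to_cmyk(r, g, b):
    """Naive conversion from RGB (0-255) to CMYK (0.0-1.0).

    CMY is simply the inverse of RGB; K is pulled out as the shared
    'black' component so the printer needn't mix three inks for black.
    """
    c, m, y = 1 - r / 255, 1 - g / 255, 1 - b / 255
    k = min(c, m, y)
    if k == 1.0:  # pure black: avoid dividing by zero below
        return 0.0, 0.0, 0.0, 1.0
    return ((c - k) / (1 - k), (m - k) / (1 - k), (y - k) / (1 - k), k)

print(rgb_to_cmyk(255, 0, 0))  # (0.0, 1.0, 1.0, 0.0): red = magenta + yellow
print(rgb_to_cmyk(0, 0, 0))    # (0.0, 0.0, 0.0, 1.0): pure K ink
```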

A collage of colour created with red green and blueA collage of colour created with red green and blueA collage of colour created with red green and blue
A collage of colour created by shining light through two layers of red, green, and blue gelatin.

Opponent Process Theory

The opponent process theory suggests that the cone cells of our eyes are neurally linked to form three opposing pairs of colour: blue versus yellow, red versus green, and black versus white. When one of the pair is activated, activity is suppressed in the other. For example, as red is activated, we see less green, and as green is activated, we see less red.

If you stare at a patch of red for a minute, then switch to look at an even patch of white, you’ll see an afterimage of green in the middle of the white. This is the opponent process at work in your vision. The reason we see green after staring at red is because by staring we have fatigued the neural response for red. This allows the neural response for green to increase.

You’ve seen this colour theory at work when colour balancing images. As you decrease red, your image becomes more green, and as you increase yellow, your image becomes less blue. Opposition of black and white affects the luminance of an image.
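A rough Python sketch of that kind of adjustment—nudging an image along the red/green and yellow/blue axes—might look like this. It illustrates the opponent pairs; it is not how any particular editor implements its sliders:

```python
import numpy as np

def balance(image, red_green=0.0, yellow_blue=0.0):
    """Shift an RGB image (floats, 0.0-1.0) along opponent-style axes.

    Positive red_green pushes toward red and away from green;
    positive yellow_blue pushes toward yellow and away from blue.
    """
    out = image.astype(float).copy()
    out[..., 0] += red_green        # more red...
    out[..., 1] -= red_green        # ...means less green
    out[..., 0] += yellow_blue / 2  # yellow is red plus green,
    out[..., 1] += yellow_blue / 2  # so raise both a little...
    out[..., 2] -= yellow_blue      # ...and pull blue down
    return np.clip(out, 0.0, 1.0)

# A single mid-grey pixel, warmed slightly toward yellow:
pixel = np.array([[[0.5, 0.5, 0.5]]])
print(balance(pixel, yellow_blue=0.1))  # [[[0.55 0.55 0.4 ]]]
```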

Blue hose on a yellow tank
Red wall with green shutters

Trichromatic Plus Opponent Process Equals Colour Vision

Initially, researchers thought our colour vision could be explained by only one of the two theories. However, although researchers are unable to provide definitive proof, it is now widely accepted that we use both methods in combination to see colour. The trichromatic theory explains how our eyes receive colour and the opponent process theory explains the neural connections that help our brains process colour.

Again, we see these theories, now in combination, at work in photography. Images are created with red, green, and blue channels. The opposites of red, green, and blue are cyan, magenta, and yellow. Colour is balanced between red and green, and between yellow and blue. Adjusting the black (shadows) and white (highlights) balance gives an image its density.

Adobe Photoshop panels showing colour theories at work

Lab Colour

When used in photography, both trichromatic (RGB) and opponent process (R/G, Y/B, B/W) colour systems are flat. What I mean is that adjustments within those processes affect only one variable at a time. More red and less blue will tip a colour toward orange. Reduce just the green and you will be working with a shade of purple. Shifting between black and white will make the colour darker or lighter.

Lab colour, in contrast, attempts to replicate the complexity of human vision by combining the two colour processes in a three-dimensional model. Each colour is the result of combined, simultaneous balances of red and green (“a”), blue and yellow (“b”), and black and white (lightness, or “L”). The result is a colour model that represents the full range of colour the human eye can see.
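The standard formulae for moving from sRGB to Lab fit in a few lines of Python. This is a self-contained sketch of the published maths; in practice a library such as scikit-image (skimage.color.rgb2lab) does the same work for you:

```python
import numpy as np

def srgb_to_lab(r, g, b):
    """Convert one sRGB colour (0-255) to CIE Lab (D65 white point)."""
    # 1. Undo the sRGB gamma curve to get linear light.
    rgb = np.array([r, g, b]) / 255.0
    lin = np.where(rgb <= 0.04045, rgb / 12.92,
                   ((rgb + 0.055) / 1.055) ** 2.4)

    # 2. Linear RGB -> CIE XYZ (sRGB matrix, D65 illuminant).
    M = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    xyz = M @ lin

    # 3. XYZ -> Lab, relative to the D65 white point.
    white = np.array([0.95047, 1.0, 1.08883])
    t = xyz / white
    f = np.where(t > (6 / 29) ** 3,
                 np.cbrt(t),
                 t / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116 * f[1] - 16        # lightness
    a = 500 * (f[0] - f[1])    # red-green axis
    b = 200 * (f[1] - f[2])    # yellow-blue axis
    return L, a, b

print(srgb_to_lab(255, 0, 0))  # roughly (53.2, 80.1, 67.2) for pure red
```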

Three-dimensional illustration of Lab colour
Illustration source: International Color Consortium (ICC) [Public Domain] 

Because Lab colour is so vast and so precise, every colour in every other colour reproduction model has a corresponding value in Lab. ICC colour management uses Lab as a “profile connection space”: an intermediate model through which colours are calculated when moving between devices. It is, therefore, also a reliable system for translating colours from one model to another.

Some photographers and digital artists prefer to work in Lab, but for many, the system is too large and too complex for general purpose use. In contrast, RGB and its companion, CMYK, are convenient, conceptually simple models that deliver more than enough colours.

Waves, Paths, and Objects

There’s one more property to consider if we want to fully understand colour and how it works in photography: colours are components of light, which travels in waves. If you shine white light into a prism, the prism will bend (refract) the light and a rainbow of colours will emerge from the other side.

Light shining through a prism
Photograph by Kelvinsong [CC0], via Wikimedia Commons

Each colour travels at its own wavelength. When the colours all travel together in a straight path, the result is white light. But when the light is forced to change direction, each colour bends differently, depending upon its wavelength. Violet, with the shortest wavelength, bends the most; red, with the longest wavelength, bends the least. And so when white light is redirected as it meets a surface, it can break apart into its component colours.
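You can put numbers to that with Snell’s law. The sketch below uses Cauchy’s approximation for the refractive index of common crown glass; the coefficients are rough, illustrative values:

```python
import math

def refraction_angle(wavelength_nm, incidence_deg=30.0):
    """Angle (from the normal) of a ray entering glass from air.

    Cauchy's approximation: n = A + B / wavelength^2, with rough
    crown-glass coefficients A = 1.5046 and B = 4200 nm^2.
    """
    n = 1.5046 + 4200 / wavelength_nm ** 2
    theta_i = math.radians(incidence_deg)
    return math.degrees(math.asin(math.sin(theta_i) / n))

for name, wl in [("violet", 400), ("green", 550), ("red", 700)]:
    print(f"{name:6s} {wl} nm -> {refraction_angle(wl):.2f} degrees")
# Violet bends the most (smallest angle from the normal), red the least.
```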

Add to this the fact that some materials, such as glass, transmit light; others, such as a flat rock, absorb light; and yet others, such as dried varnish, reflect light. As we’ve seen with a prism, unless an object is perfectly flat, light will break into its component colours as it interacts with the object. Further, even a perfectly flat material, if not perfectly clear, will absorb some wavelengths of light and reflect others. Thus a flat rock absorbs most light but also reflects back some wavelengths, giving the rock, for example, its grey-brown colour.

How light is transmitted, absorbed, and reflected affects not just the colours we see, but also affects the quality of the colours we see. An object that absorbs a great deal of light—our rock, for example—will reflect back a desaturated, flat colour. In contrast, a material that reflects a great deal of light—dried varnish, for example—will provide us with a bright, deep sense of colour.

Varnished boat reflecting colour and a wooden stump absorbing colour

Add It All Up

By now, you may well be thinking that this was all very interesting, but what difference does it make for me when I’m taking or processing a photograph?

Digital photography has provided us with an opportunity to manipulate colour in a way we’ve not experienced before. Traditional artists are schooled in colour theory and use colour to great advantage to create contrast, convey moods, and direct a viewer’s attention. Photographers now have the same opportunities for expanded creativity.

Digital photography has also introduced technical variations that impact and change what we see and reproduce. By understanding colour theories and how colour works, we can improve our technical approach for colour accuracy.

A deeper understanding of colour and colour management results in better photography. Images will better capture what you perceived and felt when you took the photograph, and your ability to use colour to your advantage will improve the emotional impact of, and interest in, the photograph.

Photography is the art of light, and light is a composite of colours. In this series we'll take a deep dive into colour. You'll learn how to apply the principles and theory we learned above to make better decisions and take more control of colour in your photography.
