Is it blue and black?
Or, is it white and gold?
How does expectation determine perception?
A recent picture of a colored dress became an Internet sensation because people divided into two distinct camps: one sees blue and black, the other white and gold. This phenomenon brought the complexity and variation of perception to everyone’s attention. There are, in fact, many ways in which perceptions differ between individuals. A unique convergence of complex factors allowed people to experience firsthand that we do not all perceive the same way. Most of us assume we are seeing “reality” with our eyes and other senses. But, in fact, we are seeing a construction of the mind based on many different factors, most prominently expectation.
This post will first discuss the question of the color of the popular dress. An easy answer is given first, followed by a more complex discussion of the multiple visual factors involved. What seems so simple is actually very complex, involving many factors that affect people differently.
Then, the post will outline many other elements that determine how our brain chooses a particular perception.
Is the Dress Blue/Black or White/Gold?
The very simple answer is that the brain determines color and makes corrections based on the qualities of the light source (sun, incandescent bulb, candle, etc.). The picture of the dress is ambiguous concerning the light source; therefore it can be interpreted in several ways. The brain is correcting for an imagined light source.
The more complex answer follows. Color perception depends on multiple factors.
- The nature of the object—in this case the dress—the way the various materials reflect light and the particular combination of pigments
- The ambient light source—electric bulb, candle, sunlight at dawn or noon, blue sky or shadows
- Color constancy—the brain tries to make a tomato appear red in any light source, so we will know it’s okay to eat it
- The sensor of the camera and the display of the computer that shows the picture
- The sensors in the retina—cones and rods
- The individual differences in strength and number of cones and rods
- The top down effect of the cortex synthesizing the incoming sensory data from the retina and the decision to make it “seen” as a particular color
Retina—Rods and Cones
The two types of receptors responding to light are rods and cones. Rods are cylindrical and cones taper, giving the retina a textured appearance. There is one type of rod, which responds to shades of black, white and grey; rods can see in dim light, but only in black and white. There are three types of cones: short-wavelength cones respond most strongly to blue, medium-wavelength cones to green, and long-wavelength cones to red.
People differ in what percentage of their roughly 6 million cones pick up red, green or blue, particularly blue. Blue receptors are always the fewest in number. With fewer blue receptors an object looks whiter; with more blue receptors it looks bluer.
With different combinations of the three cone signals, we can see thousands of distinct colors. These colors in the real world are mixtures of many different frequencies. A small number of people have four types of cones, and some have fewer than three. In dim light, the sensitivity of cones can make a difference: some people see a star as red, some white and some pink, because some have more sensitive cones than others.
In the dress, the color that has been seen as gold or black is in fact a mixture of yellow/gold/brown wavelengths. The color viewed as light blue or white is also a mixture of gold and very light blue.
Just as coordinates in three-dimensional space describe exactly where an object is, the amounts of red, green and blue triggered in the three cones act as coordinates that describe a color exactly. Colors are almost never pure (e.g., a pure red), but almost always complex combinations.
An analysis of different parts of the dress shows various combinations of red/green/blue, even for areas that should be the same color. For example, two different dark dots register at R: 128, G: 110, B: 70 and R: 93, G: 76, B: 50. Two different bright dots register R: 135, G: 148, B: 190 and R: 131, G: 140, B: 195.
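These sampled values can be checked with a few lines of code. The helper below is a toy sketch, not part of the original analysis; it simply reports which channel is strongest in each sampled dot:

```python
# RGB triplets sampled from the dress photo (values quoted in the text).
# A pixel is just three numbers; which color we "see" depends on context.
dark_dots = [(128, 110, 70), (93, 76, 50)]        # seen as black or gold
bright_dots = [(135, 148, 190), (131, 140, 195)]  # seen as blue or white

def dominant_channel(rgb):
    """Return which channel (R, G or B) is strongest in a pixel."""
    return "RGB"[rgb.index(max(rgb))]

for dot in dark_dots + bright_dots:
    print(dot, "->", dominant_channel(dot))
# The dark dots lean toward red/yellow (the gold family), while the
# bright dots lean toward blue.
```

Neither set of pixels is a pure color; both are the muddled mixtures the text describes, which is part of why the brain’s interpretation can tip either way.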
Also, people’s retinas vary, making it even more complex. This particular color combination happens to be in the middle of many different decisions that the brain makes. Also, the same colors on different backgrounds look different.
Color Balance and Constancy
Because animals need to know whether a food is safe or dangerous, the brain constantly adjusts color balance, much as Photoshop does. White balance can be observed with a piece of white paper in different light environments: outside in sunlight, inside with electric light, or by candlelight. The brain performs a white balance so the paper looks the same under all light sources, shifting the color back to the basic white.
Light comes from a source and bounces off an object in the world. This rebounded light hits the cones in the retina at particular wavelengths. The brain estimates the color of the light source, subtracts it from the incoming signal, and is left with the color of the object. In effect, the brain throws out the source of light and just “sees” what comes from the object. Natural light sources range from pinkish red at dawn, to bluish white at noon, and back to reddish at sunset.
The brain normally tries to keep color constant: it determines that the color of an object doesn’t change even when the ambient light source changes. The dress photo doesn’t provide any context, so the brain has to make up the context. Some people subtract an assumed blue light and see white and gold; others subtract an assumed gold light and see blue and black.
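The “discounting the illuminant” step can be sketched as a toy von Kries-style correction. The illuminant triplets below are invented for illustration, not measured from the photo:

```python
def white_balance(pixel, illuminant):
    """Von Kries-style correction: divide out an assumed illuminant.
    Both arguments are (R, G, B) tuples with components in 0..255."""
    return tuple(min(255, round(255 * p / i)) for p, i in zip(pixel, illuminant))

dress_pixel = (131, 140, 195)  # one of the ambiguous "blue or white" dots

# If the brain assumes cool, bluish daylight, correcting for it
# leaves a near-white pixel...
print(white_balance(dress_pixel, (140, 150, 210)))  # (239, 238, 237)

# ...but if it assumes warm, golden light, the same pixel comes out blue.
print(white_balance(dress_pixel, (230, 210, 160)))  # (145, 170, 255)
```

The same three numbers yield a near-white or a clearly blue result depending entirely on which light source is assumed, which is the crux of the dress ambiguity.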
If the brain assumes the dress is lit by blue daylight coming in a window, it subtracts the blue cast and leaves white and gold. If it assumes warm electric light, the dress is seen as blue, its true color.
Shadows can have dramatic effects on outdoor color, from the yellow of direct sun to the blue of open sky without sun. If the brain thinks the dress is in yellow outdoor light, it will consider the dress blue. If it thinks bluish light is shining through a window, it will see the dress as white.
Nearly half of the cortex is involved in vision, more than for any other sense. This massive number of neurons calculates all of the factors involved in vision and sends signals down from the cortex to meet the incoming data. This is called top-down processing and is discussed below.
Most optical illusions involve top-down processing. That is, the brain makes a decision based on many factors and determines what we expect and what we are “seeing.” In many black-and-white illusions the viewer can switch back and forth, seeing it alternately one way or the other. In this color ambiguity, once the brain chooses a color, it doesn’t readily switch to the other.
Technically, colors engage in additive mixing (as with light) or subtractive mixing (as with paint). In additive mixing, the three primaries red, green and blue add together to form white. With paint, the more colors that are added, the darker the mixture gets, until it is black.
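The two kinds of mixing can be sketched numerically. In this simplified model, lights add while idealized pigments multiply reflectances:

```python
def additive_mix(c1, c2):
    """Mixing lights: channel values add, clipped at 255."""
    return tuple(min(255, a + b) for a, b in zip(c1, c2))

def subtractive_mix(c1, c2):
    """Mixing pigments (idealized): reflectances multiply,
    so each added pigment absorbs more light."""
    return tuple(round(a * b / 255) for a, b in zip(c1, c2))

red, green, blue = (255, 0, 0), (0, 255, 0), (0, 0, 255)

print(additive_mix(red, green))                            # (255, 255, 0), yellow/gold
print(additive_mix(additive_mix(red, green), blue))        # (255, 255, 255), white
print(subtractive_mix(subtractive_mix(red, green), blue))  # (0, 0, 0), black
```

Note that adding red and green light alone already yields yellow/gold, which connects to the discussion of how some viewers arrive at the gold interpretation.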
If someone sees blue and black, their cones are very sensitive, and the eyes effectively perform subtractive mixing. If someone sees white and gold, the cones aren’t working as well in dim light: being less light sensitive, they produce additive mixing, with green and red combining to make gold. Cones don’t see dim light well, and in that situation the rods will see the dress as white; someone else will respond to the dim color and see it as blue.
The key thing from all of this is that the brain is correcting in many complex ways for an imagined light source.
Variation and Limits of the Senses
A striking recent finding showed that most people perceive smells differently from one another. Humans have about 400 olfactory receptor genes, which combine into roughly 900,000 receptor variations. Any two people will differ in approximately 30% of their receptors. So, in fact, no two people smell in exactly the same way. This makes comparisons of smells, which we take for granted, very inexact.
Another recent finding is that individual neurons in the olfactory system evaluate information on their own, without any analysis by the brain. Neurons in mouse noses are primed especially for danger and individually react to odors perceived as threatening, before the signal has a chance to reach the brain. Each such neuron recognizes a dangerous odor and reacts without any central reflex.
The Visual System
While remarkable progress has been made in defining the very detailed structure of the visual system, in fact, understanding how we see has proven to be very complex. It is based on many different factors. The human visual system’s limitations start with a very narrow bandwidth in the electromagnetic spectrum. Many animals have a much wider range and can therefore “see” much more. Some animals, including ants, see polarized light, as well.
The system is based on rods and cones. Rods handle black-and-white and peripheral vision; cones respond to color in the central region of the eye. Both have serious limitations. As many as 100 rods connect to a single output neuron, so most of their information is not available to the brain; the eye filters for what it thinks it wants, often motion. The cones, while better innervated, have other limitations. The eyes constantly scan back and forth, which limits the exact input of information, and the cones are constantly being refreshed, leaving moments with no information.
For both rods and cones, much of the information is filtered out before it reaches the visual cortex. The brain fills in what is missing in various ways.
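The convergence of many rods onto one output neuron can be illustrated with a toy pooling function (the group size of 100 follows the figure quoted above; the signal values are invented):

```python
def rod_pooling(rods, group=100):
    """Average each group of ~100 rod signals into one output value,
    mimicking many rods converging on a single neuron."""
    return [sum(rods[i:i + group]) / group for i in range(0, len(rods), group)]

rods = [0.0] * 200
rods[57] = 1.0            # one rod catches a tiny flash of light
print(rod_pooling(rods))  # [0.01, 0.0]: the flash is almost averaged away
```

The fine detail within each group is lost before the signal ever leaves the retina, which is why the brain must fill in so much of what we “see.”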
Another filter operates before the data reaches the cortex. This emotional filter responds very rapidly to perceived threats, such as someone about to throw a punch. The flinch occurs so rapidly that there is no time for cortical analysis; these signals are routed very quickly to the much closer emotional centers for a rapid reflex.
Recent studies show that when the visual information reaches the cortex, there are a variety of ways that the information is processed. One is very similar to the compression protocols used in computers to make the large amount of video information easy to process. These algorithms, like video protocols, leave alone the information that has been constant over a period of seconds and send only the changes, such as movements or changing colors. Redundant information is, also, suppressed. The cortex controls what is done to the incoming signals and what we “see” and experience.
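A minimal sketch of that change-only strategy, loosely analogous to the delta encoding used in video compression:

```python
def frame_delta(prev, curr):
    """Send only the pixels that changed since the previous frame,
    as (index, new_value) pairs: a crude version of video compression."""
    return [(i, c) for i, (p, c) in enumerate(zip(prev, curr)) if p != c]

frame1 = [10, 10, 10, 10, 10]
frame2 = [10, 10, 99, 10, 10]  # a single "pixel" changed

print(frame_delta(frame1, frame2))  # [(2, 99)]
```

Everything that stayed constant is simply not transmitted; only the single change gets through, just as the cortex suppresses redundant, unchanging input.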
Multi Sensory Experiences
When the sensory data finally reaches the cortex it is combined and merged with a wide variety of other information. For many years the brain was thought to be modular in the sense that each type of sense would have its own regions. Recent research shows that, in fact, most neurons are multisensory. That means that most neurons combine a variety of senses such as visual with sound, touch and smell. This visual information is, also, merged with thoughts and emotions.
There are many everyday examples of the multisensory brain that we are usually unaware of. One can be experienced by watching a video of a waterfall while placing one’s hands on a desk: because of the visual information from the downward movement of the waterfall, the desk seems to be rising. A second example involves orange juice: if the juice is colored red, it will taste like cherry.
A series of studies also shows that sight usually trumps sound in the brain. If someone mouths a particular word but the soundtrack plays a different word, the brain will “hear” what is seen, not what is heard.
An even more surprising study demonstrated this effect in a musical competition. When expert critics rated performances of piano concertos, the ratings differed depending on whether they had only sound or both sight and sound. When the critics could see the performer as well as hear the performance, they rated the music higher when the performer was visually flamboyant. With sound alone, the rating rested purely on the musical performance.
Expectation Determines Perception
The over arching analysis of visual signals depends on what is expected. A famous experiment has been described where people are told to concentrate on the activity of a group of people on a stage, such as passing a basketball. Then, a man in a gorilla suit walks across the stage. Very surprisingly, most people do not “see” this man and don’t remember that it happened. This is because the visual system is focused on a task and didn’t expect a gorilla.
Top Down and Bottom Up Signals
Recent research shows that there are a large number of neurons that come down from the cortex analyzing, synthesizing and altering the incoming signals of senses. This top down effect is how expectation can determine perceptions. The signals of the senses are called bottom up signals and have the limitations already described, but basically represent filtered data from the outside world.
Remarkably, the top-down neurons from the cortex far outnumber the incoming sensory neurons. Therefore, the influence of the brain and expectation is far greater than that of the raw data.
There are, also, many examples in daily life that attest to the influence of top down effects on perceptions. These are rarely noticed.
One example is that a picture of bright light will cause the eye pupils to react, just the same as a light used in the doctor’s office. In studies, people who think of good deeds, perceive the room as brighter. The opposite occurs when considering a negative deed—the room appears darker. If someone is hungry, words related to food appear brighter. Good hitters in baseball view the ball as larger. Poor children view coins as larger.
People see what they expect. They see what is new and useful based on previous experience. Expectations can be conscious or unconscious.
Self Observation and Breath Affect Perception
It has been clear for a long time that breathing in particular ways can affect how we see the world (please see the post Breathing Affects Perception). Breath exercises can decrease anxiety, and meditation and yoga practices are based on particular types of breathing. In fact, breathing is highly connected to altering perceptions related to emotions and different physical activities, and breathing exercises are well known to alter perceptions, including helping change fear to relaxation.
Recently, data showed a very interesting connection of breathing and perception. In the brain, breathing rates create neuronal oscillations of particular frequencies that are connected to the arrival of sensory data in the cortex. It appears that these oscillations created by breathing are crucial for connecting sensory data into a perception.
Expectations can be conscious or unconscious—related to desires, memories, pain, and medical conditions.
Meditation is a conscious activity. It is based on the acquired skill of observing the mind without altering it. This self-observation allows stepping back from the impact of emotional situations. A painful situation can become clearer and less severe. A situation that creates anxiety can be changed to calm. This skill of living in the present can make the world brighter.
Self-observation of this type, also, increases inner experience, alters perceptions and becomes a new source of expectation for the brain to make determinations about what is occurring. It increases new types of top down effects on perception.
The popular fascination with the dress brought to public awareness that we don’t really see “reality.” Instead, we “see” what the brain determines based on expectations. Doesn’t this make a stronger case for mind interacting with this complex brain?