Table of contents
People discovered very early that white light, such as sunlight, separates into spectral colors after being refracted by a prism, forming a band of seven colors: red, orange, yellow, green, blue, indigo, and violet. This band is called a spectrum. Red light has the longest wavelength, violet light the shortest, and the other colors lie in between, as shown in the figure below.
If a colored light separated from white light passes through a prism again, it does not decompose further into other colors. Colored light that cannot be decomposed any further is called monochromatic light. The color of monochromatic light is determined by the frequency f of the light, or equivalently by its wavelength λ; the two can be converted into each other through the speed-of-light relation c = λf. Light formed by mixing monochromatic lights is called composite light. The color of composite light mainly depends on the frequency of the component carrying the most energy. When composite light contains all visible frequencies with equal energy at each frequency, it appears white. The parameters of many optical media (such as glass) depend on the frequency of light: they show different refractive indices for light of different frequencies, and light of different frequencies also propagates through the medium at different speeds. These two phenomena are collectively referred to as the dispersion of light. Dispersion by a prism produces the visible spectrum; dispersion in an optical fiber broadens the wave packet and limits signal-transmission performance.
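The conversion c = λf can be sketched in a few lines of Python (the constant is the standard SI value; function names are illustrative):

```python
# Convert between wavelength and frequency of light using c = lambda * f.

C = 2.998e8  # speed of light in vacuum, m/s

def wavelength_to_frequency(wavelength_nm: float) -> float:
    """Return frequency in Hz for a wavelength given in nanometers."""
    return C / (wavelength_nm * 1e-9)

def frequency_to_wavelength(frequency_hz: float) -> float:
    """Return wavelength in nanometers for a frequency given in Hz."""
    return C / frequency_hz * 1e9

# Red light at 700 nm has a frequency of about 4.3e14 Hz;
# violet light at 400 nm about 7.5e14 Hz.
print(wavelength_to_frequency(700))
print(wavelength_to_frequency(400))
```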
Humans can perceive the colorful scenes of the objective world because the human eye receives light emitted, reflected, or scattered by objects. More concretely, photons carrying information about objects are absorbed by the photoreceptor cells on the retina and converted into electrical signals, which are transmitted through the optic nerve to the brain for interpretation and processing, ultimately producing vision. We now know that light is an electromagnetic wave, essentially the same as the electromagnetic waves radiated by mobile-phone antennas, microwave ovens, and X-ray machines, differing only in frequency. Electromagnetic waves that can stimulate the photoreceptor cells of the human eye are called visible light, with a spectrum ranging from 380 to 760 nm. As the electromagnetic radiation spectrum diagram below shows, visible light occupies only a small part of the full electromagnetic spectrum.
The color of an object is essentially a matter of which light it absorbs and which it reflects. For example, when white light passes through a copper sulfate solution, the copper ions selectively absorb part of the yellow light, leaving blue as the main component of the transmitted light, so the solution appears blue. If the same solution is illuminated with yellow light only, it cannot appear blue; it appears black instead. This example shows that the spectral composition of the light source plays a vital role in the formation of color: frequencies missing from the source spectrum reduce the range of colors that can be observed.
From the perspective of observing colors, sunlight is a very ideal light source. As shown in the figure below, the spectrum of sunlight in the visible light band is continuous (covering all wavelengths) and flat (the power of each wavelength is very close), so it is defined as the standard light source for observing colors.
The generation of light originates from the movement of electrons. According to quantum mechanics, the electrons of an atom can only occupy a series of discrete orbits, each corresponding to an energy level; the farther an orbit is from the nucleus, the higher its energy level. When an electron jumps from a high energy level to a low one, it releases a photon. For a low-energy electron to jump to a higher level, it must absorb energy (through thermal excitation or absorption of a photon), and the absorbed energy must exactly equal the difference between the two levels. High energy levels are generally not very stable: after staying there for a period of time, the electron spontaneously jumps back to a lower level, releasing a photon as it does so.
The figure below shows the energy level structure of the hydrogen atom and the corresponding spectrum.
As shown in the figure above, when electrons transition from high energy levels to low energy levels, photons are emitted, resulting in an emission spectrum (b). When a continuous spectrum light source passes through hydrogen atomic gas, photons of specific wavelengths are absorbed by electrons, leaving holes in the continuous spectrum, resulting in an absorption spectrum (c).
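The positions of the hydrogen emission lines follow from the energy-level differences and can be computed from the Rydberg formula; a minimal sketch using the standard Rydberg constant:

```python
# Wavelengths of hydrogen emission lines from the Rydberg formula:
#   1/lambda = R * (1/n1^2 - 1/n2^2), with n2 > n1.
R = 1.097e7  # Rydberg constant, 1/m

def hydrogen_line_nm(n1: int, n2: int) -> float:
    """Wavelength (nm) of the photon emitted when an electron drops n2 -> n1."""
    inv_lambda = R * (1.0 / n1**2 - 1.0 / n2**2)
    return 1e9 / inv_lambda

# The visible Balmer series corresponds to transitions down to n1 = 2.
for n2 in (3, 4, 5, 6):
    print(f"n={n2} -> n=2: {hydrogen_line_nm(2, n2):.1f} nm")
# H-alpha (3 -> 2) comes out near 656 nm, H-beta (4 -> 2) near 486 nm.
```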
Some substances (such as halogen elements) have energy levels that are prone to electronic transitions, so they are often used to make artificial light sources. The light energy emitted by such substances is usually concentrated in several specific wavelengths. The figure below is the spectrum of a metal halide lamp. It can be seen that the wavelength with the most concentrated energy is 591nm, and the color of light at this wavelength is yellow-green.
The most common fluorescent lamp uses mercury vapor as the light-emitting substance. The spectrum of mercury atoms is shown in the figure below.
The human visual system (HVS) has three types of cones, which sense long, medium, and short wavelengths of visible light. The frequency responses of the three types of cones to equal-energy light stimuli are shown in the figure below.
Since the HVS has only three physical color detectors, human color perception is effectively reconstructed by the nervous system through interpolation. For any given color perception, there are infinitely many spectral distributions that can produce the same percept. For example, the four cases shown in the figure below are, from top to bottom: (a) the continuous spectrum of daylight, (b) a discrete spectrum of red, green, and blue, (c) a discrete spectrum of yellow and blue, and (d) a fluorescence spectrum.
Experiments have shown that all four types of light appear white to the human eye, so it seems that HVS is not particularly concerned about the specific spectral composition of the light source.
An ideal black body (such as the sun) emits a continuous spectrum, and the power distribution of the spectrum depends on the temperature of the black body, that is, the color temperature. In the examples (b) to (d) in the above figure, the spectrum lines are discrete and do not meet the definition of color temperature. Therefore , the concept of correlated color temperature (CCT) is introduced. If any light source stimulates the same color perception as a black body with a color temperature of T, the correlated color temperature of the light source is defined as T.
The concept of correlated color temperature was proposed by D. B. Judd in 1936 and is defined in terms of color temperature and isotherms: all colors along a given isotherm have the same correlated color temperature. The figure below shows Judd's recommended isotherms, which were incorporated into the CIE 1960 UCS uniform color space standard, described in detail later.
For the human eye, the frequency of light determines the color of light, and the energy of light determines the intensity of visual stimulation. For two beams of light with the same frequency but different energies, the human eye perceives the same color, but the brightness of the color is different.
The white curve in the figure below is the spectral response of the S cone that senses short waves in the human eye, the black solid dot curve is the discomfort of young people around 20 years old to equal power glare, and the black hollow dot curve is the discomfort of the elderly around 60 years old to equal power glare. Experiments show that all kinds of people are more sensitive to the stimulation of blue glare. I believe that many people have had the painful experience of being blinded temporarily by high-brightness high-beam headlights, and the main cause is the blue light component.
LED stands for Light Emitting Diode. The popular high-power LED emits blue light with a peak wavelength of 460nm. In order to obtain a white light source, people add phosphor compounds to the LED, which can absorb blue light and emit yellow-green light, thereby changing the color tone of the LED light source and modulating light source products with various color temperatures such as cool white and warm white.
From the example below, we can see that different spectral components (color temperature) have a great impact on the quality of photography.
Luminous intensity is a physical quantity that describes the light emitted by a source per unit solid angle in a specified direction. Its unit is the candela (cd), a name derived from candles: initially, the British defined the candlepower unit as the light emitted by a foot-long candle made of one pound of white wax. Today, the definition of the candela has changed.
According to the decision of the 16th General Conference on Weights and Measures (CGPM) in 1979: if a monochromatic light source with a frequency of 540×10^12 Hz has a radiant intensity of 1/683 W/sr in a given direction, then its luminous intensity in that direction is 1 cd.
Note 1: 1Watt/sr means that the radiation energy passing through 1 unit solid angle in 1 second is 1 joule.
Note 2: The SI unit of solid angle is steradian (sr). The solid angle of a complete sphere to any point on the sphere is 4π sr .
When a light source is used for lighting purposes, it is not enough to measure the light energy emitted by the light source per unit time by radiant power alone. It is also necessary to consider the spectral response of the human eye to light of different wavelengths. The introduction of luminous flux solves this problem.
The International Commission on Illumination (CIE) selected 555nm, the wavelength at which the human eye is most sensitive under photopic vision, as the reference wavelength for converting from "power" to "luminous flux", and defined the luminous flux of 555nm monochromatic light with a power of 1W as 683 lumens.
The lumen (lm) is the unit of luminous flux. The factor "683" is historical: it keeps the modern definition consistent with the earlier candle-based units.
According to the definition of luminous intensity, we can know that for an isotropic light source with a luminous intensity of 1cd, the total luminous flux at a solid angle of 4π is 12.57lm.
The energy carried by a photon with a wavelength of 555 nm is E = hc/λ ≈ (6.626×10⁻³⁴ J·s × 2.998×10⁸ m/s) / (555×10⁻⁹ m) ≈ 3.58×10⁻¹⁹ J.
At an illuminance of 1 lux (1 lm/m²), the luminous flux at 555 nm corresponds to a radiant power of 1/683 W per square meter, so the number of photons passing through an area of 1 m² in 1 second is N ≈ (1/683 W) / (3.58×10⁻¹⁹ J) ≈ 4.1×10¹⁵.
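Both quantities can be computed directly from the definitions; a sketch using standard SI constants:

```python
# Photon energy at 555 nm and the photon flux corresponding to 1 lux.
H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s

wavelength = 555e-9                      # m
photon_energy = H * C / wavelength       # J per photon, ~3.58e-19

# By definition, 1 lm at 555 nm corresponds to 1/683 W of radiant power,
# so 1 lux = 1 lm/m^2 carries (1/683) W through each square meter.
power_per_m2 = 1.0 / 683.0               # W/m^2
photons_per_m2_per_s = power_per_m2 / photon_energy

print(f"photon energy: {photon_energy:.3e} J")
print(f"photon flux:   {photons_per_m2_per_s:.3e} per m^2 per s")
```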
Illuminance is a physical quantity that describes the total amount of light falling on a surface. It is defined as the luminous flux per unit area and is measured in lumens per square meter (lm/m²), also called lux (lx). 1 lux equals 1 lumen per square meter; it is the illuminance produced on a surface one meter away that is illuminated perpendicularly by a source with a luminous intensity of 1 candela.
Luminance is a physical quantity that describes the brightness of a luminous surface. It is defined as the luminous flux per unit luminous area in the normal direction and per unit solid angle. The unit is nit , which is 1 lumen per square meter per steradian, or candela per square meter (cd/m2).
Note 1: The m2 in candela per square meter (cd/m2) describes the area of the luminous surface itself.
Note 2: Luminance is sometimes loosely referred to as "brightness".
Note 3: "Light intensity" is an informal term, and it is difficult to say which photometric concept it corresponds to. In some cases, "light intensity" may refer to luminous intensity, sometimes to illuminance, and sometimes to brightness.
Lambert: the unit of luminance in the centimeter-gram-second system, equivalent to the luminance of a perfectly diffusing surface that emits or reflects one lumen per square centimeter.
The purity of a light source is a measure of the spectral purity of the light. Monochromatic light has the highest purity, and white light has the lowest purity.
In color-mixing experiments, people found that most colors in nature can be obtained by selecting three monochromatic lights of different colors and mixing them in suitable proportions. Three monochromatic lights with this property are called primary lights, and the corresponding three colors are called primary colors, for example red, green, and blue.
The Basics of Color Mixing
The principles of the three primary colors are:
The three primary colors must be independent of each other, and any one of them cannot be obtained by mixing the other two.
Most colors in nature can be obtained by mixing the three primary colors in a certain proportion, or in other words, most colors can be decomposed into the three primary colors.
The mixing ratio of the three primary colors determines the hue and saturation of the mixed color.
The brightness of a mixed color is equal to the sum of the brightness of the primary colors that make up the mixed color.
As people's understanding of the three primary colors deepens, their applications are becoming more and more extensive. For example, three-primary-color fluorescent lamps provide humans with a richer light source system, and people are increasingly using products based on the three-primary-color principle in the fields of painting and photography. Different applications have different definitions of the three primary colors. Currently, there are mainly the following categories:
Three primary colors of light
In 1931, the International Commission on Illumination (CIE) defined the spectral components at 700 nm, 546.1 nm, and 435.8 nm (the latter two being mercury spectral lines) as the red, green, and blue primaries. CIE stipulated that red light with a luminous flux of 1 lumen constitutes one red primary unit, green light with a luminous flux of 4.5907 lumens one green primary unit, and blue light with a luminous flux of 0.0601 lumens one blue primary unit, recorded as [R], [G], and [B] respectively. Mixed in suitable proportions, the three primaries can present a wide range of colors.
When the three primaries are mixed in unit amounts, E white light is obtained. E white light refers to the equal-energy white point E on the CIE 1931 xy chromaticity diagram; its correlated color temperature is about 5400 K.
F(E white) = 1 + 4.5907 + 0.0601 = 5.6508 (lm)
Three primary colors of color TV
The fluorescent screen of a color TV is coated with three different phosphors. When the electron beam hits it, one can emit red light, one can emit green light, and one can emit blue light . By mixing these three primary colors in different proportions and strengths, various color changes in nature can be produced.
Pigment three primary colors
The three primary colors of pigments and other non-luminous objects are magenta (similar to rose red or pink), cyan (similar to a darker sky blue or lake blue), and yellow (similar to lemon yellow). These three primaries, selected by the British scientist David Brewster (1781-1868), can be mixed to produce a wide variety of colors, but they cannot produce true black, only a dark gray. Therefore, in color printing, a black plate must be added to the three primaries to produce deep, dark colors.
Art practice has proved that magenta plus a small amount of yellow can produce bright red (red=M100+Y100), but bright red cannot produce magenta; cyan plus a small amount of magenta can produce blue (blue=C100+M100), but blue plus white produces a dull cyan.
Printing three primary colors
The colors of printed materials are actually the light reflected by the paper that we see; mixing pigments in painting follows the same logic. Pigments work by absorbing light rather than adding light. Therefore, the three printing primaries are the colors that absorb the R, G, and B wavelengths respectively; research settled on cyan, magenta, and yellow (CMY), the complementary colors of RGB.
When yellow pigment and cyan pigment are mixed together, the yellow pigment absorbs blue light and the cyan pigment absorbs red light, so only green light is reflected. This is why yellow pigment plus cyan pigment forms green.
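The subtractive logic above can be sketched numerically by treating each pigment as a filter that removes its complementary primary from white light. This is a toy model for illustration, not a colorimetrically accurate pigment simulation:

```python
# Toy model of subtractive mixing: each pigment transmits some fraction
# of each RGB channel, and stacked pigments multiply their transmittances.

def mix_pigments(*pigments):
    """Each pigment is an (r, g, b) transmittance triple in [0, 1]."""
    r = g = b = 1.0  # start from white light
    for (pr, pg, pb) in pigments:
        r *= pr
        g *= pg
        b *= pb
    return (r, g, b)

yellow = (1.0, 1.0, 0.0)  # absorbs blue
cyan   = (0.0, 1.0, 1.0)  # absorbs red

# Yellow over cyan leaves only the green channel, matching the text.
print(mix_pigments(yellow, cyan))  # (0.0, 1.0, 0.0)
```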
A color model is an abstract mathematical model, a method of representing colors to facilitate the encoding and storage of colors. A typical color model usually contains three to four components. If constraints are further defined on the basis of the color model so that the color coded values of the model can be accurately mapped to actual colors, then all the colors that the color model can represent are completely determined, forming a color space.
In practice, people tend to use the term "color model" less often, and instead use "color space" more often to refer to the color model. Strictly speaking, this is not very accurate, and you should pay attention to the conceptual distinction when using it. For example, sRGB is a color space that uses the RGB color model to represent colors, defines the specific chromaticity values of the three primary colors R, G, B, and white, and defines constraints such as the environment (brightness range) required to observe colors. These constraints ensure that the encoding of the RGB color model can be accurately and repeatably mapped to specific visual stimuli. Images encoded in sRGB can reproduce the same effect on all displays that support sRGB.
Common color models
For a color display system, the color space that the system can render is also called the color gamut. It should be noted that the concept of color gamut is generally only applicable to color rendering systems, such as monitors, printers, etc. For a color measurement system, such as a camera or spectrometer, there is only the concept of color response, not the concept of color gamut.

Despite this, many camera manufacturers still associate the RGB value output by the camera/sensor with a specific color gamut. At this time, the RGB code output by the camera should be directly mapped to a color point in that gamut. If it is interpreted according to another gamut, the color restored on the display system will differ considerably from the real color.

If the target display system does not support the color gamut defined by the camera manufacturer, then in order to display the color correctly, the camera RGB must first be mapped to a standard color model (such as XYZ), and the XYZ values either displayed directly or converted to the display's RGB values before display.

The main reason camera manufacturers choose this approach is that the color range a sensor can record is usually larger than the color range supported by ordinary monitors (sRGB). Selecting a larger target gamut when recording reduces the loss of original color information and reserves ample room for later video editing and processing.
The most widely used color space in computer technology is the RGB color space, as shown in the figure below.
For the most common 24-bit three-channel color digital images, each pixel Pixel (x, y) needs to be described using three components R, G, and B. Each component is represented by one byte and the range is [0, 255].
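The byte layout described above can be sketched as follows (a minimal example; the 0xRRGGBB packing order is one common convention):

```python
# Packing and unpacking a 24-bit RGB pixel, one byte per channel.

def pack_rgb(r: int, g: int, b: int) -> int:
    """Pack three 8-bit components into one 24-bit integer (0xRRGGBB)."""
    assert all(0 <= c <= 255 for c in (r, g, b))
    return (r << 16) | (g << 8) | b

def unpack_rgb(pixel: int):
    """Split a 24-bit 0xRRGGBB integer back into (r, g, b)."""
    return (pixel >> 16) & 0xFF, (pixel >> 8) & 0xFF, pixel & 0xFF

p = pack_rgb(255, 128, 0)   # an orange pixel
print(hex(p))               # 0xff8000
print(unpack_rgb(p))        # (255, 128, 0)
```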
RGB color space is a model closely related to the structure of the human visual system, because the physiological structure of the human eye determines that all colors that people can perceive can be seen as different combinations of the three primary colors R, G, and B. Most displays use this color model. Color cathode ray tubes and color raster graphics displays use R, G, and B values to drive R, G, and B electron guns to emit electrons, and respectively stimulate the R, G, and B phosphors on the screen to emit light of different brightness, and produce various colors by adding and mixing. The working principle of the scanner is also to sample the R, G, and B components of the light reflected from the original, and use them to represent the color of the original.
RGB color space is called device-dependent color space, because different scanners will obtain image data of different colors when scanning the same image; different models of monitors will display the same image with different color display results. The RGB space used by monitors and scanners is different from the CIE 1931 RGB true three-primary color system space, which is a device-independent color space.
YUV is a general term for a type of signal. It was first used in the color television systems PAL and SECAM developed in Germany and France, and is now widely used in computer systems.
The "Y" in YUV stands for brightness (Luminance, Luma), and "U" and "V" stand for chrominance and concentration (Chrominance, Chroma).
YCbCr is not an absolute color space, but a derivative version based on YUV that is compressed and offset. It is generally used in continuous image processing in movies or digital photography systems. JPEG, MPEG, DVD, cameras, digital television, etc. all use this format. Therefore, the commonly known YUV mostly refers to YCbCr.
The Y in YCbCr has the same meaning as that in YUV. Cb and Cr refer to colors just like UV, with Cb referring to blue chromaticity and Cr referring to red chromaticity.
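As an illustration, one common flavor of this conversion is the BT.601 full-range matrix used in JPEG-style pipelines (coefficients and offsets vary between standards; this sketch assumes 8-bit full-range values):

```python
# RGB -> YCbCr using the BT.601 full-range (JPEG-style) matrix.
# Inputs and outputs are 8-bit values in [0, 255]; in practice the
# results are clipped to that range.

def rgb_to_ycbcr(r: float, g: float, b: float):
    y  =  0.299    * r + 0.587    * g + 0.114    * b
    cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128
    cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128
    return y, cb, cr

# Pure white maps to maximum luma and neutral chroma.
print(rgb_to_ycbcr(255, 255, 255))  # (255.0, 128.0, 128.0)
# Pure red carries most of its information in the Cr channel.
print(rgb_to_ycbcr(255, 0, 0))
```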
HSV is a relatively intuitive color model, so it is widely used in many image editing tools. The color parameters in this model are: hue (H, Hue), saturation (S, Saturation), and value (V, Value).
Hue H is measured in degrees, ranging from 0° to 360°, starting from red and counting counterclockwise, red is 0°, green is 120°, and blue is 240°. Their complementary colors are: yellow is 60°, cyan is 180°, and magenta is 300°;
Saturation S indicates how close a color is to a pure spectral color. A color can be seen as the result of mixing a certain spectral color with white: the greater the proportion of the spectral color, the closer the color is to it and the higher the saturation. Higher saturation means a deeper, more vivid color. A pure spectral color contains no white component and has the highest saturation. The value usually ranges from 0% to 100%; the larger the value, the more saturated the color.
The brightness V indicates the brightness of the color. For the light source color, the brightness value is related to the brightness of the light source; for the object color, this value is related to the transmittance or reflectance of the object. The value range is usually 0% (black) to 100% (white).
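These HSV conventions can be tried out with Python's standard colorsys module (note that colorsys normalizes all components, including hue, to [0, 1]):

```python
# RGB <-> HSV conversion using Python's standard colorsys module.
# colorsys works with components in [0, 1]; hue 0-1 maps to 0-360 degrees.
import colorsys

h, s, v = colorsys.rgb_to_hsv(0.0, 1.0, 0.0)   # pure green
print(h * 360, s * 100, v * 100)                # 120 degrees, 100%, 100%

h, s, v = colorsys.rgb_to_hsv(1.0, 1.0, 0.0)   # yellow
print(h * 360)                                  # 60 degrees
```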
The International Commission on Illumination (CIE) proposed an RGB color model in 1931, but it was soon found to be flawed: the red color-matching function takes negative values in the 435.8-546.1 nm band, which is very inconvenient for color-matching calculations, as shown in the figure below.
To solve this problem, CIE soon introduced an improved model. Since the RGB name was already taken, the new color model was called the XYZ color space.
The International Commission on Illumination (CIE) first defined a color space model using purely mathematical methods in 1931, namely the CIE 1931 XYZ color space, which covers all colors that the human eye can perceive and does not rely on any specific physical implementation. When a color is represented in XYZ form, it will render a consistent color on any display that supports the XYZ standard, regardless of the specific physical characteristics of the display.
In this model, CIE defines three functions to simulate the response curves of the three types of cone cells in the human eye to the visible spectrum, as shown in the figure below.
CIE defines the values of three functions in the form of a data table. This set of data is called the CIE standard observer response and is applicable to situations where the observation angle is less than 2°.
The origin of 2° is that Wright, Guild and others organized a research experiment on the Color Matching Function (CMF) in the 1920s. They asked subjects to observe colors through a small hole that provided a 2° viewing angle. Later, CIE released a standard observer response with a 10° viewing angle, called CIE 1964 10° RGB CMF.
Assuming that the power spectrum of a certain colored light is S(λ), the CIE XYZ tristimulus values are defined through the standard observer response functions as

X = ∫ S(λ) x̄(λ) dλ,  Y = ∫ S(λ) ȳ(λ) dλ,  Z = ∫ S(λ) z̄(λ) dλ,

where the integrals are taken over the visible range. In numerical calculations the integrals become sums, with the step dλ generally taken as 5 nm or 10 nm.
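Numerically, the integral reduces to a weighted sum over sampled wavelengths. A sketch with a flat toy spectrum and placeholder CMF samples; real computations use the tabulated CIE standard observer data, which these placeholders merely stand in for:

```python
# Numerical XYZ from a sampled power spectrum S(lambda):
#   X = sum S(l) * xbar(l) * dl, and similarly for Y and Z.
# The CMF values below are placeholders, NOT the real CIE table.

def tristimulus(spectrum, xbar, ybar, zbar, dl=5.0):
    X = sum(s * x * dl for s, x in zip(spectrum, xbar))
    Y = sum(s * y * dl for s, y in zip(spectrum, ybar))
    Z = sum(s * z * dl for s, z in zip(spectrum, zbar))
    return X, Y, Z

# Toy 3-sample example with dl = 5 nm:
S    = [1.0, 1.0, 1.0]        # flat spectrum
xbar = [0.1, 0.3, 0.2]        # placeholder CMF samples
ybar = [0.0, 0.4, 0.3]
zbar = [0.5, 0.1, 0.0]

print(tristimulus(S, xbar, ybar, zbar))
```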
When the color light is reflected from the surface of an object, S(λ) depends on the illumination source and the reflectivity of the surface of the object. The figure below gives a specific example.
The XYZ coordinates contain the energy information of the light source, so there is no upper limit on the value, as shown in the figure below
CIE-1931 xyY color space
If only chromaticity is considered, the normalized form of XYZ is more convenient. From the CIE tristimulus values, three quantities x, y, and z can be derived:

x = X / (X + Y + Z), y = Y / (X + Y + Z), z = Z / (X + Y + Z), with x + y + z = 1.
The x and y components measure the chromaticity of a color, while its luminance is represented by the Y component of the tristimulus values. This is the CIE xyY space.
The basis for defining the CIE xyY color space is that for a given color, if its brightness is increased, the luminous flux of each primary color must also be increased proportionally, and the ratio of X:Y:Z remains unchanged, so that this color can be matched. Since the chromaticity value is only related to the wavelength (hue) and purity, but not to the total radiant energy, when calculating the chromaticity of the color, X, Y and Z can be normalized by the total radiant energy (X+Y+Z), or equivalently, only the case of the X+Y+Z=1 section is considered, and the color matching equation is simplified to x+y+z=1. Since z can be derived from x+y+z=1, the color can be expressed by only x and y.
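The normalization described above can be sketched as:

```python
# Chromaticity coordinates from tristimulus values:
#   x = X/(X+Y+Z), y = Y/(X+Y+Z); Y itself is kept as the luminance.

def xyz_to_xyY(X: float, Y: float, Z: float):
    s = X + Y + Z
    return X / s, Y / s, Y

# Equal-energy white (X = Y = Z) lands at x = y = 1/3:
x, y, Y = xyz_to_xyY(1.0, 1.0, 1.0)
print(round(x, 4), round(y, 4))  # 0.3333 0.3333
```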
The figure below shows the famous CIE chromaticity diagram, commonly known as the horseshoe diagram or tongue diagram.
The tongue-shaped curve along the outer edge of the chromaticity diagram is the spectral locus; the numbers marked on it are spectral wavelengths in nanometers. The straight line at the bottom connecting 380 nm and 700 nm is called the line of purples. Color saturation decreases gradually from the outer edge inward, converging to a special point E at the center of the diagram. This point is called the equal-energy white point, with coordinates (1/3, 1/3) and a correlated color temperature of about 5400 K.
In the chromaticity diagram, red is located in areas with large x values, green is located in areas with large y values, and blue is located in areas with small x and y values (and therefore large z values).
The concave curve in the middle of the chromaticity diagram is called the blackbody locus , or Planckian locus . There are some special points on the curve, namely
Point A (2856K), represents incandescent lamp
Point B (4874K) was once recommended as the daylight standard, but was later abolished and replaced by Point D
Point C (6774K) represents the skylight on a cloudy day
Point D (6500K), represents daylight
Point E, the equal-energy white light point, is an ideal standard that does not exist in reality.
It is worth noting that the scale along the color-temperature locus is uneven: equal temperature steps at one end of the curve span roughly ten times the distance they do at the other.
The spectra corresponding to each special point are shown in the figure below.
The following figure is the projection of RGB space in XYZ space. As can be seen from the figure, many colors that the human eye can perceive are not in the RGB space. In other words, the RGB space does not contain all the colors visible to the human eye.
Each point on the xy chromaticity diagram represents a specific color, but if two points are too close together, the human eye cannot distinguish them. The range within which the human eye cannot perceive a color change is called the color bandwidth. The research of MacAdam et al. showed that the color bandwidth differs across the chromaticity diagram; its distribution is shown in the figure below (the ellipses are drawn at 10 times their actual size for visibility).
An important conclusion from the MacAdam experiments is that the distance between two points on the chromaticity diagram is not linearly related to the chromaticity difference perceptible to human vision.
Perceived uniform space
A color space is called perceptually uniform if a unit change in color value at any position always corresponds to the same perceptual change. The characteristic of a perceptually uniform space is that the color bandwidth is the same everywhere, regardless of the specific color value, as shown in the figure below.
By this definition, the CIE-1931 xyY space is perceptually non-uniform. A natural question is, is there a metric that is linearly related to the human visual response? The answer is yes, and that is the micro-reciprocal representation.
Micro-reciprocal
Color bandwidth is also called the Just Noticeable Difference (JND). It is a function of color temperature and is written E(T). The research of Judd et al. shows that the JND and the color temperature satisfy the empirical relation E(T) ≈ 5.5×10⁻⁶ · T².
The formula can be rearranged into the form (10⁶ / T²) · E(T) ≈ 5.5.
Let M(T) = 10⁶/T; then dM/dT = −10⁶/T². Substituting into the formula above shows that one JND in color temperature always corresponds to a change |ΔM| ≈ 5.5, a constant independent of T, so M is a linear measure of the perceptible difference E.
M is therefore defined as the Micro Reciprocal Degree (MRD) of the color temperature, with the unit mired (MK⁻¹); the minimum difference the human eye can discern is about 5.5 mireds.
For example, the micro-reciprocal of the color temperature T = 5000 K is M = 200 mireds; the next distinguishable point is M = 205.5, corresponding to T = 4866 K, a color temperature interval of 134 K. The micro-reciprocal of T = 2000 K is M = 500 mireds; the next distinguishable point is M = 505.5, corresponding to T = 1978 K, an interval of only 22 K. This shows that the human eye distinguishes color temperature much more finely in the low-temperature region.
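The mired arithmetic in these examples can be sketched as:

```python
# Mired (micro reciprocal degree) arithmetic:
#   M = 1e6 / T, and a just-noticeable difference is about 5.5 mireds.

def kelvin_to_mired(T: float) -> float:
    return 1e6 / T

def next_distinguishable_kelvin(T: float, jnd_mired: float = 5.5) -> float:
    """Next color temperature the eye can tell apart, stepping by one JND."""
    return 1e6 / (kelvin_to_mired(T) + jnd_mired)

print(next_distinguishable_kelvin(5000))  # ~4866 K, a 134 K step
print(next_distinguishable_kelvin(2000))  # ~1978 K, only a 22 K step
```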
The following diagram shows the color bandwidth at a range of color temperatures. The human eye cannot distinguish the differences in color within the ellipse. For illustration purposes, the radius of the ellipse is 24 times the true scale.
The following table lists the color temperature points and the corresponding micro-reciprocals in the figure. It can be seen that if the micro-reciprocal coordinates are used, the points in the figure are just equally spaced on the micro-reciprocal coordinates.
The CIE 1931 XYZ color space has a major flaw: when calculating color differences, the allowable error differs from one region of the diagram to another. To unify the calculation and comparison of colors, CIE introduced a uniform color space. In 1937, MacAdam converted (x, y) into a (u, v) coordinate system, which CIE adopted in 1960:
u=4x/(-2x+12y+3);
v=6y/(-2x+12y+3)
Or equivalently, the inverse transform is
x = 3u/(2u - 8v + 4);
y = 2v/(2u - 8v + 4)
Planck locus, isotherms and special color temperature points on the CIE-1960 UV chromaticity diagram.
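The (u, v) conversion above and its inverse can be sketched as follows (helper names are ours):

```python
def xy_to_uv(x, y):
    """CIE 1931 (x, y) -> CIE 1960 (u, v)."""
    d = -2*x + 12*y + 3
    return 4*x/d, 6*y/d

def uv_to_xy(u, v):
    """Inverse transform: CIE 1960 (u, v) -> CIE 1931 (x, y)."""
    d = 2*u - 8*v + 4
    return 3*u/d, 2*v/d

u, v = xy_to_uv(0.3127, 0.3290)   # D65 white point -> (~0.1978, ~0.3122)
x, y = uv_to_xy(u, v)             # round-trips back to (0.3127, 0.3290)
```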
On the Planckian locus, the slope at color temperature T can be calculated according to the following formula:
where
Since the isotherm is defined as the line perpendicular to the Planckian locus at that point, its slope can be expressed as the negative reciprocal of the tangent slope,
Given an arbitrary point S(u,v) on the chromaticity diagram, there are many ways to calculate the correlated color temperature (CCT) corresponding to the point, such as interpolation method, perpendicular foot method, etc.
The specific implementation process can be found in the following paper.
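The method from the cited paper is not reproduced here. As one simple, widely used alternative, McCamy's polynomial approximation estimates CCT directly from CIE 1931 (x, y); this is our illustration, not the method in the paper:

```python
def cct_mccamy(x, y):
    """Approximate correlated color temperature (K) from CIE 1931 (x, y)
    using McCamy's cubic formula. Reasonably accurate for daylight-like
    chromaticities between roughly 2850 K and 6500 K."""
    n = (x - 0.3320) / (0.1858 - y)
    return 449.0*n**3 + 3525.0*n**2 + 6823.3*n + 5520.33

print(cct_mccamy(0.3127, 0.3290))  # D65 -> ~6500 K
```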
CIE-1976 UCS color space
The (u, v) coordinate system still did not track perceived color well, so MacAdam continued his research and in 1973 proposed scaling the v coordinate up by 50%. This system was adopted as the CIE 1976 UCS (Uniform Chromaticity Scale) color coordinate system:
u'=u=4x/(-2x+12y+3);
v'=1.5v=9y/(-2x+12y+3)
CIE 1976 UCS converts CIE 1931 chromaticity coordinates so that the color gamut it forms is close to a uniform chromaticity space, allowing color differences to be quantified. It is also called CIE LUV color space in various documents.
The difference ∆u'v' between two colors in the (u',v') coordinate system is proportional to the color difference perceived by humans.
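A minimal sketch of that difference measure (helper name is ours):

```python
import math

def delta_uv_prime(uv1, uv2):
    """Euclidean distance in the (u', v') plane, proportional to perceived difference."""
    return math.hypot(uv2[0] - uv1[0], uv2[1] - uv1[1])

d = delta_uv_prime((0.1978, 0.4683), (0.2000, 0.4700))  # ~0.0028
```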
Using the CIE 1964 and 1976 versions to mark the Munsell color sequence, it can be found that the 1976 version has better color uniformity.
CIE-1976 L*a*b* color space
CIE L*a*b*, also known as the CIELAB color space, is one of the most widely used color spaces. Its main advantage is that distances between colors conform better to a linear relationship with human perception, and it is especially accurate for describing darker colors. A slight disadvantage is that this linearity breaks down in the yellow region; that is, the diameter of the color-tolerance circle changes near yellow.
The CIELAB color space is similar to CIE 1976 UCS in terms of perceptual uniformity. Neither standard is perfect, but both are widely used, and the academic community has not reached a clear conclusion on which of the two is more uniform. At present, CIELAB appears to be used more widely. The figure below uses CIELAB to mark the Munsell color sequence. It can be seen that the distribution of some hues is indeed not uniform, especially blue.
The CIELAB color space does not solve the hue constancy problem very well, and it cannot explain the color appearance phenomena such as Hunt and Stevens. In 1997, CIE proposed an interim standard to improve hue constancy, which was called CIECAM97s. It was subsequently improved to simplify the complexity and improve the accuracy, and finally finalized as the CIECAM02 standard in 2002.
Mathematical definition of the CIE L*a*b* color model:

L* = 116 f(Y/Yn) - 16
a* = 500 [ f(X/Xn) - f(Y/Yn) ]
b* = 200 [ f(Y/Yn) - f(Z/Zn) ]

where

f(t) = t^(1/3)             if t > (6/29)^3
f(t) = (841/108) t + 4/29  otherwise

In the above formula,
L* represents the lightness of the color, ranging from 0 to 100;
a* and b* represent chromaticity, each ranging from -128 to 128: a* is the red-green axis (positive = red, negative = green) and b* is the yellow-blue axis (positive = yellow, negative = blue).
Xn, Yn, Zn represent the XYZ tristimulus values of the reference white (the light source)
The above formula uses a 1/3 exponential curve to simulate the log curve characteristics of human visual response, which can simplify the calculation.
The distance between two color points in L*a*b* space is defined as

∆E*ab = sqrt( ∆L*^2 + ∆a*^2 + ∆b*^2 )

where ∆L* = L1* - L2*, ∆a* = a1* - a2*, ∆b* = b1* - b2*.
Based on a* and b*, saturation C (more precisely, chroma) and hue angle h can be defined:

C*ab = sqrt( a*^2 + b*^2 ),  h_ab = atan2( b*, a* )
The formula for transforming from L*a*b* space back to XYZ space is as follows,

X = Xn f^-1(fx),  Y = Yn f^-1(fy),  Z = Zn f^-1(fz)

where

fy = (L* + 16)/116,  fx = fy + a*/500,  fz = fy - b*/200
f^-1(t) = t^3 if t > 6/29, otherwise (108/841)(t - 4/29)
If we ignore the lightness L* of the color and only consider the chromatic components, we can define the color difference ∆C as

∆C = sqrt( ∆a*^2 + ∆b*^2 )
It is generally believed that a mean color difference in the range of 4 to 5 indicates very good color quality, a value in the range of 5 to 6 is relatively good, and a value greater than 10 is poor.
The following figure is a report generated by Imatest software when evaluating the color reproduction effect of the CCM matrix. The figure marks the theoretical and actual positions of each color point on the 24-color card in the L*a*b* space, and calculates the color error mean. It is obvious that all 6 gray blocks on the 24-color card are located at the position of a*=b*=0.
Matlab reference code for transforming XYZ to CIE LAB space,
clc; clear all; close all;

X = 19.4100; Y = 28.4100; Z = 11.5766;

Xn = 94.811;   % reference white
Yn = 100;
Zn = 107.304;

if X/Xn > (6/29)^3
    fx = (X/Xn)^(1/3);
else
    fx = (841/108)*(X/Xn) + 4/29;
end

if Y/Yn > (6/29)^3
    fy = (Y/Yn)^(1/3);
else
    fy = (841/108)*(Y/Yn) + 4/29;
end

if Z/Zn > (6/29)^3
    fz = (Z/Zn)^(1/3);
else
    fz = (841/108)*(Z/Zn) + 4/29;
end

%% converting XYZ to CIE LAB
L = 116 * fy - 16;
a = 500 * (fx - fy);
b = 200 * (fy - fz);
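For reference, an equivalent Python sketch that also inverts the transform (a round-trip check; the white point values follow the Matlab snippet above):

```python
DELTA = 6/29

def f(t):
    """CIELAB forward nonlinearity."""
    return t**(1/3) if t > DELTA**3 else (841/108)*t + 4/29

def f_inv(t):
    """Inverse of the CIELAB nonlinearity."""
    return t**3 if t > DELTA else (108/841)*(t - 4/29)

def xyz_to_lab(X, Y, Z, wp=(94.811, 100.0, 107.304)):
    fx, fy, fz = (f(c/n) for c, n in zip((X, Y, Z), wp))
    return 116*fy - 16, 500*(fx - fy), 200*(fy - fz)

def lab_to_xyz(L, a, b, wp=(94.811, 100.0, 107.304)):
    fy = (L + 16)/116
    fx, fz = fy + a/500, fy - b/200
    return tuple(n*f_inv(t) for n, t in zip(wp, (fx, fy, fz)))

L, a, b = xyz_to_lab(19.41, 28.41, 11.5766)
X, Y, Z = lab_to_xyz(L, a, b)   # round-trips back to the input XYZ
```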
The figure below shows the Munsell color marking system. In an ideal color space, a group of color points of the same hue should be on the same straight line.
However, the situation is not ideal in CIELAB space. When lightness or chroma changes, the color hue will shift.
Online tool for converting between various color spaces: colorizer.org (color picker and converter for RGB, HSL, HSB/HSV, CMYK, HEX, LAB).
Hue constancy is an important property of color space. Compared with lightness and chromaticity, the changing law of hue is more difficult to describe with mathematical formulas.
In 1998, Fritz Ebner, a doctoral student at the Rochester Institute of Technology (RIT) in the United States, and his advisor Mark D. Fairchild proposed the IPT color space in Ebner's doctoral thesis; IPT stands for Intensity, Protan, and Tritan.
The numerical ranges of I, P, and T are: I in [0, 1], and P and T in [-1, 1]. Therefore, to convert to CIELAB scales, I must be multiplied by 100, and P and T each by 150.
Independent studies have shown that the IPT space performs similarly to the CIELAB space in terms of brightness uniformity and chromaticity uniformity, but in terms of hue uniformity, the linear relationship of blue hue is better than that of the CIELAB space.
The academic community has been continuously researching and developing the IPT color space. A recent hot topic in academic research is the constant hue line, which means that when a ray is drawn from the white point of the color space in any direction, the points in space that the ray passes through have the same hue, but only different saturation. This led to the ICaCb space.
ITU-R Recommendation BT.709, often abbreviated as Rec. 709, BT.709, ITU709, is the HDTV color gamut standard established by the ITU organization in 1990. Displays that comply with the HDTV standard should support all colors in this space.
Rec.709 Gamma Formula

V = 4.5 L                    for 0 <= L < 0.018
V = 1.099 L^0.45 - 0.099     for 0.018 <= L <= 1
This is an unexpected result. According to Adobe employees, in 1998, as Photoshop 5 was about to be released, engineer Thomas Knoll wanted to improve the software's built-in color management by consulting the source of BT.709, the SMPTE 240M standard, to determine the color gamut range. However, there was no online version of the standard, and with Photoshop 5 about to ship they could not wait for a paper copy to arrive, so Thomas took a set of SMPTE 240M data from a website that looked fairly official and used it in Photoshop. After release, the feedback was very positive: users generally found that the new SMPTE 240M configuration performed well in color range and in conversion to CMYK color systems, which is exactly where sRGB is weak. Many books and magazines recommended Adobe's SMPTE 240M color gamut standard.
However, before long, a user familiar with the SMPTE 240M standard pointed out to Adobe that the SMPTE 240M values provided in Photoshop were wrong: they were not the color gamut values specified in the real SMPTE 240M, but an "idealized value" from an annex to the standard. Worse, Thomas had made a typo when entering the red coordinate, so the red value did not even match the "idealized value" in the annex. After learning of this, Adobe tried various corrections, but nothing they produced could surpass the color gamut performance created by this accident. Finally Adobe gave up correcting the "error" and named the space Adobe RGB to avoid trademark and legal issues.
Adobe RGB mainly solves the problem of different colors displayed in printing and computer monitors, and improves the performance in cyan and green. The CIE color gamut coverage rate reaches 50%. Currently, only some high-end monitors can support 99% of the Adobe RGB color gamut, which are basically used in professional design fields.
The sRGB standard was jointly developed by Microsoft and HP in 1996 and has been widely supported by the computer industry. The monitors, graphics software, video games, pictures, and videos that people use in daily life all support the sRGB standard by default, and computer monitors typically cover more than 95% of the sRGB color space. The sRGB color space adopts the primaries and white point of Rec.709, so its gamut is identical to that of Rec.709 (the transfer functions differ).
Because this color standard was established so early, when many technologies and concepts were immature, it covers only about 30% of the CIE color gamut, its color reproduction is limited, and its green coverage is especially low. For the same reason it places few demands on monitors, so most monitors on the market can reach 100% of sRGB.
Adobe RGB was proposed in 1998. It has wider color coverage than sRGB and better supports CMYK printing, so it is widely used in professional publishing. A monitor that supports the Adobe RGB standard produces output very close to that of a CMYK printer, giving a "what you see is what you get" effect. Images captured in Adobe RGB usually look flat when displayed on an sRGB monitor.
As shown in the figure above, the only difference between Adobe RGB and sRGB is the chromaticity coordinate of the green primary; the other parameters are identical, yet the effect of this difference on images is very noticeable.
sRGB and XYZ space conversion (for linear RGB values, D65 white):

X = 0.4124 R + 0.3576 G + 0.1805 B
Y = 0.2126 R + 0.7152 G + 0.0722 B
Z = 0.0193 R + 0.1192 G + 0.9505 B
sRGB Gamma formula

V = 12.92 L                    for 0 <= L <= 0.0031308
V = 1.055 L^(1/2.4) - 0.055    for 0.0031308 < L <= 1
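A small Python sketch of the standard sRGB transfer function (function names are ours):

```python
def srgb_encode(l):
    """Linear light (0..1) -> gamma-encoded sRGB value (0..1)."""
    return 12.92*l if l <= 0.0031308 else 1.055*l**(1/2.4) - 0.055

def srgb_decode(v):
    """Gamma-encoded sRGB value (0..1) -> linear light (0..1)."""
    return v/12.92 if v <= 0.04045 else ((v + 0.055)/1.055)**2.4

v = srgb_encode(0.18)   # 18% gray encodes to ~0.46
l = srgb_decode(v)      # decodes back to 0.18
```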
The Academy Color Encoding System (ACES) is a global standard for interchanging digital image files, managing color workflows and creating masters for delivery and archiving.
It is a combination of SMPTE standards, best practices, and sophisticated color science developed by hundreds of professional filmmakers and color scientists under the auspices of the Science and Technology Council of the Academy of Motion Picture Arts and Sciences. It aims to be the filmmaking industry standard for managing color.
ACES can be used on any type of production from features to television, commercials, AR/VR and more.
ACES2065-1 defines a particularly wide color gamut that includes all visible colors, with the three primary color points defined outside the visible region. The white point uses CIE D60, with coordinates x = 0.32168, y = 0.33767. ACES2065-1 is often referred to as AP0, or "ACES Primaries 0", and is mainly used for video data storage, using linear gamma.
Differences among the ACES series of color gamuts
DCI-P3 is a wide color gamut standard introduced by the American film industry and is one of the current color standards for digital movie playback equipment. It has a larger color gamut and a wider range of green and red compared to sRGB. DCI-P3 can better meet the human visual experience and is suitable for digital movies, TV series post-production, color grading, etc.
In the CIE 1931 xy color space, the DCI-P3 color space covers 45.5% of the full color gamut and 86.9% of the common color gamut, and in the CIE 1976 u'v' chromaticity diagram, the coverage is 41.7% and 85.5% respectively. The blue primary color is the same as sRGB and Adobe RGB; the red primary color is a monochromatic light source with a wavelength of 615 nanometers.
Compared with Adobe RGB, DCI-P3 does not cover much more of the CIE color gamut, but it better matches the human visual experience and can meet all the color requirements of film. In other words, DCI-P3 is a gamut that emphasizes visual impact rather than color comprehensiveness, and compared with other color standards it has a wider red/green range.
DCI-P3 is defined by the Digital Cinema Initiatives (DCI) organization and published by the Society of Motion Picture and Television Engineers (SMPTE) in SMPTE EG 432-1 and SMPTE RP 431-2. It was expected to see wider adoption in television systems and home theaters around 2020 as a step toward implementing the BT.2020 standard.
Definition: SMPTE-EG-0432-1:2010 Digital Source Processing – Color Processing for D-Cinema
Responsible Organization: The Society of Motion Picture and Television Engineers
Color space
Type: Colorimetric RGB color space
RGB primaries:
|   | x     | y     | z     |
|---|-------|-------|-------|
| R | 0.680 | 0.320 | 0.000 |
| G | 0.265 | 0.690 | 0.045 |
| B | 0.150 | 0.060 | 0.790 |
Color component transfer function: 2.6 gamma
White point luminance: 48 cd/m^2
White point chromaticity:
D65: x = 0.3127, y = 0.3290
DCI: x = 0.3140, y = 0.3510
DCI has clear requirements for picture brightness: the calibrated white luminance at the center of the screen must reach 48 nits (14 fL), which is also the brightness requirement in well-known commercial cinemas. Interestingly, for actual theater screenings DCI allows an error of ±3 fL, which means the center brightness in some cinemas may be only 11 fL.
Regarding contrast, DCI also has relevant requirements. Theoretically, the inter-frame contrast needs to reach 2000:1, and the intra-frame contrast needs to reach 150:1. However, in the actual environment of a cinema, the relevant requirements are reduced to a minimum of 1200:1 and 100:1.
The DCI-P3 color gamut required in the consumer field uses a white point consistent with BT.2020 and BT.709: both are D65, at x = 0.313, y = 0.329 in the CIE 1931 color space. However, the white point of the DCI-P3 gamut used in commercial theaters is x = 0.314, y = 0.351, which is greener and yellower than D65. When calibrating the DCI-P3 gamut for a home theater projection system, we must pay attention to the position of the white point, otherwise the white balance accuracy of the whole picture will suffer. In addition, the gamma required by DCI also differs from the familiar BT.1886 standard, adopting a gamma 2.6 curve.
Because DCI set out to establish industry technical standards for digital cinema, most advanced cameras such as DSLRs and mirrorless cameras provide two recording color spaces, sRGB and Adobe RGB (abbreviated ARGB), and ARGB gamut coverage is an important benchmark for professional monitors. But movie and video playback is also one of the main applications of mobile phones, tablets, computer monitors, and flat-panel TVs, so products from companies such as Apple, Sony, and Samsung are gradually adopting DCI-P3 as their wide color gamut standard, with Apple being the most thorough: both the camera and the display of the iPhone 7 series use the P3 gamut. Starting with the 2015 iMac, many phones, tablets, and professional monitors have also gradually supported the P3 gamut.
Since sRGB is still the absolute mainstream for Internet content, many P3 display devices compromise to accommodate sRGB images, setting gamma to 2.2 while keeping the standard D65 white point. Apple generally calls this Display P3. Microsoft's Surface Studio (jokingly described as a monitor that comes with a free PC) provides three color modes: besides standard sRGB and DCI-P3 there is a Vivid mode, which is in fact the P3 gamut with gamma 2.2 and a D65 white point.
The gamma of DCI-P3 is 2.6, while Display P3, sRGB, and Adobe RGB all use approximately 2.2. The main reason for this difference is that DCI-P3 was designed for dark movie theaters with no other light sources, while Display P3 is designed for modern monitors in lit environments.
In 1980, Michael R. Pointer defined the maximum possible color gamut of the surface colors of common objects, which included 4089 samples. This color gamut has become a powerful tool for studying color reproduction and has been highly praised. Visually, Pointer Gamut represents most of the colors that people may see in nature. Colors outside Pointer Gamut are generally artificial light sources, including neon lights and colors generated by computer animation.
Pointer color gamut covers 47.9% of the color range of CIE1931 xy space. From its irregular shape, it is not difficult to imagine that it is not easy to make a display that supports Pointer color gamut. In fact, it is indeed the case. Research has found that it is theoretically impossible to use three primary colors to realize a display system that supports Pointer color gamut. In fact, at least four primary colors are required.
ProPhoto RGB is also known as the ROMM RGB color space (Reference Output Medium Metric RGB Color Space), which was designed by Kodak for photographic output purposes.
Compared with typical RGB color spaces, the gamut this color space provides is very generous, including more than 90% of the surface colors in the CIE Lab color space and 100% of the surface colors in the Pointer gamut. One disadvantage of the ProPhoto RGB color space is that about 13% of its colors are imaginary, i.e. they do not correspond to visible colors.
A color space defined in 1905 by artist and professor Albert H. Munsell.
Munsell was the first to decompose color into three independent components: hue, value, and chroma. The biggest advantage of the Munsell color space is that it is perceptually uniform, so it remains useful today, especially for evaluating the perceptual uniformity of another space.
In the 1940s, color scientists discovered that the Munsell system had some flaws that needed correction, so they organized a large-scale color discrimination experiment with participants from several continents. The result was a body of corrected data called the Munsell renotation system.
The following link contains some Stack Overflow discussion of Munsell color space conversion: "Color Theory: How to convert Munsell HVC to RGB/HSB/HSL" (stackoverflow.com/questions/3620663/color-theory-how-to-convert-munsell-hvc-to-rgb-hsb-hsl).
In the field of color management, the input and output characteristics of devices (such as monitors) are described using ICC Profiles, which define the transformation matrix from input to output, with D50 as the reference white point. If the reference white point of the input signal is not D50, the Bradford matrix or an equivalent method is needed to transform the input to a space with D50 as the white point. This process is called Chromatic Adaptation Transformation.
In color management, an ICC profile is a set of data that characterizes a color input or output device, or a color space, according to standards promulgated by the International Color Consortium (ICC). Profiles describe the color attributes of a particular device or viewing requirement by defining a mapping between the device source or target color space and a profile connection space (PCS). This PCS is either CIELAB (L*a*b*) or CIEXYZ. Mappings may be specified using tables, to which interpolation is applied, or through a series of parameters for transformations.
http://www.brucelindbloom.com/index.html?Eqn_RGB_XYZ_Matrix.html
To quantitatively evaluate the color reproduction ability of a device, a variety of evaluation formulas have been developed; the most common are dE76 and dE2000 (dE is also written Delta-E). Both operate in the CIE 1976 L*a*b* color space, but dE2000 applies corrections to the lightness, chroma, and hue terms, so its results agree better with human perception.
Given two points (L1*, a1*, b1*) and (L2*, a2*, b2*) in the CIELAB color space, if only Delta-E is mentioned without a specific suffix, the dE76 standard is assumed, with the formula

∆E76 = sqrt( (L2* - L1*)^2 + (a2* - a1*)^2 + (b2* - b1*)^2 )
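A minimal Python sketch of dE76 (function name is ours):

```python
import math

def delta_e76(lab1, lab2):
    """CIE 1976 color difference: Euclidean distance in L*a*b* space."""
    return math.dist(lab1, lab2)

delta_e76((50, 10, -5), (52, 12, -4))  # = 3.0
```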
The symbol definition of dE00 or dE2000 is as follows
The figure below shows a common method for evaluating the color reproduction ability of a display. First, six typical colors are selected: Red, Green, Blue, Cyan, Magenta, and Yellow. Then, the difference between the ideal value and the actual value is compared at five saturation levels (20, 40, 60, 80, and 100%).
Theoretically, if the dE2000 of two colors is less than 1, the human eye cannot tell the difference between them. A dE2000 value between 3 and 6 meets the quality requirements of general commercial products, but may not be good enough for professional-level prints or video applications. The following are some reference standards.
• 13 – 25: Deemed as different color tones, if the value exceeds this range the two colors are considered two different colors.
• 6.5 – 13: The difference between the two colors is observable, but the two colors are considered the same color tone.
• 3.2 – 6.5: The difference between the two colors is observable, but the impression given by both is basically the same.
• 1.6 – 3.2: From a given distance, the difference between the two colors is basically indistinguishable. Most of the time the two are considered the same color.
The CIE dE2000 standard was released in 2001. The formula is relatively complex and the specific steps are as follows.
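The step-by-step formula is lengthy and not reproduced here; below is a Python sketch of the published CIEDE2000 computation (our implementation of the standard's definition):

```python
import math

def delta_e2000(lab1, lab2):
    """CIEDE2000 color difference between two (L*, a*, b*) triples."""
    L1, a1, b1 = lab1
    L2, a2, b2 = lab2
    C1, C2 = math.hypot(a1, b1), math.hypot(a2, b2)
    C_bar = (C1 + C2) / 2
    G = 0.5 * (1 - math.sqrt(C_bar**7 / (C_bar**7 + 25**7)))
    a1p, a2p = (1 + G) * a1, (1 + G) * a2       # chroma-dependent a' rescaling
    C1p, C2p = math.hypot(a1p, b1), math.hypot(a2p, b2)
    h1p = math.degrees(math.atan2(b1, a1p)) % 360
    h2p = math.degrees(math.atan2(b2, a2p)) % 360

    dLp, dCp = L2 - L1, C2p - C1p
    dh = h2p - h1p
    if C1p * C2p == 0:
        dhp = 0.0
    elif abs(dh) <= 180:
        dhp = dh
    else:
        dhp = dh - 360 if dh > 180 else dh + 360
    dHp = 2 * math.sqrt(C1p * C2p) * math.sin(math.radians(dhp) / 2)

    Lp_bar, Cp_bar = (L1 + L2) / 2, (C1p + C2p) / 2
    if C1p * C2p == 0:
        hp_bar = h1p + h2p
    elif abs(h1p - h2p) <= 180:
        hp_bar = (h1p + h2p) / 2
    else:
        hp_bar = (h1p + h2p + 360) / 2 if h1p + h2p < 360 else (h1p + h2p - 360) / 2

    T = (1 - 0.17 * math.cos(math.radians(hp_bar - 30))
           + 0.24 * math.cos(math.radians(2 * hp_bar))
           + 0.32 * math.cos(math.radians(3 * hp_bar + 6))
           - 0.20 * math.cos(math.radians(4 * hp_bar - 63)))
    d_theta = 30 * math.exp(-(((hp_bar - 275) / 25) ** 2))
    R_C = 2 * math.sqrt(Cp_bar**7 / (Cp_bar**7 + 25**7))
    S_L = 1 + 0.015 * (Lp_bar - 50) ** 2 / math.sqrt(20 + (Lp_bar - 50) ** 2)
    S_C = 1 + 0.045 * Cp_bar
    S_H = 1 + 0.015 * Cp_bar * T
    R_T = -math.sin(math.radians(2 * d_theta)) * R_C    # blue-region rotation term

    return math.sqrt((dLp / S_L) ** 2 + (dCp / S_C) ** 2 + (dHp / S_H) ** 2
                     + R_T * (dCp / S_C) * (dHp / S_H))
```

As a sanity check, identical colors give 0, and Sharma's published test pair (50, 2.6772, -79.7751) vs (50, 0, -82.7485) evaluates to about 2.04.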
Below is the algorithm process of a certain software to evaluate the color accuracy of the camera. You can see that both dE76 and dE2000 standards are used.
FAQ: What does the “E” in delta E or E* stand for?
The “E” in delta E or delta E* is derived from “Empfindung”, the German word for sensation. Delta E means a “difference in sensation” for any delta E-type metric, CIE or Hunter. (Source: Delta E, PrintWiki: printwiki.org/Delta_E)
In 1902, German physiologist Von Kries proposed a hypothesis. He believed that "the cone cells in the human eye that perceive color and the human psychological perception of color are two independent entities that do not affect each other."
When the CIE tristimulus values (XYZ) of two colors are the same, the stimulus intensity received by the cone cells of the human eye is the same, so must the perception of these two colors be the same? The answer is no. According to experimental results, visual perception will be the same only when the surrounding environment, background, sample size, sample shape, sample surface characteristics and lighting conditions are the same. Once two identical colors are placed under different observation conditions, although the tristimulus values are still the same, the visual perception of the human eye will change. This is the so-called color appearance phenomenon. The following are some examples of color appearance phenomenon.
Therefore, the physical stimulus alone cannot fully determine the color the human eye sees; the influence of the surrounding environment must also be considered. Likewise, simply applying a color difference formula to two color points cannot approximate the visual difference the human eye perceives between two color patches: besides the physical difference, the effect of human vision on color must be taken into account.
Another conclusion can be drawn: when the tristimulus values of two colors are the same, if the color appearance of the two colors is different, it must be because their observation conditions are different. Different color appearance phenomena describe the relationship between changes in observation conditions and changes in color appearance.
Listed below are some properties and phenomena related to color appearance.
Hue : The human eye perceives a visual attribute of a main color based on a certain amount of stimulation.
Lightness : The human eye perceives the relative brightness ratio (relative value) of a certain stimulus to the surrounding white points or the brightest area.
Brightness : The degree (absolute value) of the amount of light that the human eye perceives based on a certain stimulus.
Colorfulness : The intensity (absolute value) of the color of a certain primary hue that the human eye perceives based on a certain amount of stimulation.
Saturation : The relative ratio (relative value) of visual color and visual brightness perceived by the human eye based on a certain amount of stimulation.
Chroma : The visual chromaticity of a stimulus judged relative to the brightness of the surrounding white point or brightest area (a relative value).
Hue shift : When the brightness changes, the hue of the monochromatic stimulus will drift. That is, the hue of the sample does not remain constant when the brightness of the illuminant changes. When the brightness value of the light source changes, the hue will shift with the brightness change.
Abney effect : When white light is mixed into a monochromatic light, the purity of the stimulus changes, and according to the hue shift effect the perceived hue of the sample changes as well. This phenomenon is called the Abney effect.
Helmholtz-Kohlrausch effect : According to earlier theory, the human eye's perception of brightness depends only on the Y value of the tristimulus values. However, Helmholtz discovered through experiments that changes in chromaticity also affect perceived brightness: at equal absolute luminance, the more saturated the color, the brighter it appears, as shown in the figure below.
Hunt effect : The color of an object changes significantly as the overall brightness changes. That is, the chromaticity changes as the brightness changes.
Hunt found that the higher the luminance of the light source, the more colorful colors appear. For example, the color of an object appears vivid and bright on a summer afternoon but softer in the evening. Under brighter light, colors appear more vivid and the contrast between light and dark is more intense; visual saturation increases with luminance.
As shown in the figure below, the point (0.35, 0.33) at 1000 cd/m^2 matches the point (0.55, 0.33) at 1 cd/m^2, which shows that as dark adaptation deepens, the human eye's ability to distinguish colors decreases. Therefore, absolute luminance must be taken into account when evaluating color appearance.
Stevens effect : Brightness contrast or lightness contrast increases with the increase of brightness. When the brightness increases, the color contrast will also increase, which is similar to the conclusion of the Hunt effect.
Memory colors : People have formed deep memories of certain colors in long-term practice, so there are certain rules and inherent habits in the understanding of these colors. These colors are called memory colors. Red apples, gray clouds, blue sky, green grass, green trees and yellow lemons are all common memory colors. Most people know when these colors are right or wrong. Most of these colors are brighter in memory than the actual colors.
Generally speaking, skin color and blue sky are very important memory colors and often require special correction processing.
In the study of color appearance, chromatic adaptation is more important than light and dark adaptation. It is therefore appropriate for a color appearance model to include a chromatic adaptation transform, known as CAT (Chromatic Adaptation Transform).
The chromatic adaptation conversion model has been developed for more than 100 years, and all current chromatic adaptation models are mainly based on the concept hypothesis first proposed by Johannes von Kries in 1902. Von Kries proposed that "human visual receptors and human eye perception should be independent of each other and not affect each other."
Studies have shown that there are three types of cone cells in the human eye: L, M, and S. They adjust their sensitivity relatively independently according to the intensity of the stimulus they sense. Therefore, for the same intensity of stimulus input, the signal output generated by the cone cells will vary with the environment, as shown in the figure below.
Therefore, to model the chromatic adaptation the human eye undergoes, an appropriate model should convert the tristimulus values of the observed object into cone response values related to human vision, so that color appearance can be predicted under different viewing environments. The method is to use the ratio between the input and output viewing conditions, together with the transformation matrix of the chosen model, to convert the color observed under the input light source into the chromaticity values expressed under the output light source.
Color constancy detection uses a chromatic adaptation model to predict the color appearance of any color stimulus under different light sources or illuminance levels, or even on different media, and then evaluates its color constancy. Many chromatic adaptation transforms have been published, including von Kries, Bartleson, BFD, CIE (Nayatani et al.), Hunt, CIE L*a*b*, and RLAB.
Von Kries Chromatic Adaptation Model
Assuming the light source is β, the first step of the Von Kries transform is to convert the color tristimulus value XYZ into the stimulus amount sensed by each RGB (or LMS) cone cell of the human eye.
where
The second step is to normalize each stimulus value using the white field response under the β light source. The normalization coefficient is,
Get the stimulus response after the human eye adapts.
The third step is to predict the response under any new light source (δ light source), and infer the stimulus values of the three types of cone cells under the new light source based on the white field response under the new light source, and then use the Von Kries inverse transformation matrix to obtain the three stimulus values in XYZ space.
where,
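The three steps above can be sketched as follows. Since the cone matrix used in the text is not reproduced here, we assume the classic Hunt-Pointer-Estevez matrix; the Von Kries idea works with any cone primaries matrix:

```python
# Hunt-Pointer-Estevez matrix, XYZ -> LMS cone responses (an assumed choice)
M = ((0.38971, 0.68898, -0.07868),
     (-0.22981, 1.18340, 0.04641),
     (0.0, 0.0, 1.0))
# Published inverse of the HPE matrix
M_INV = ((1.910197, -1.112124, 0.201908),
         (0.370950, 0.629054, -0.000008),
         (0.0, 0.0, 1.0))

def _mul(m, v):
    """3x3 matrix times 3-vector."""
    return tuple(sum(m[i][j] * v[j] for j in range(3)) for i in range(3))

def von_kries_adapt(xyz, white_src, white_dst):
    """Step 1: XYZ -> cone responses; step 2: normalize by the source white's
    cone responses; step 3: rescale by the destination white and map back to XYZ."""
    lms, ws, wd = _mul(M, xyz), _mul(M, white_src), _mul(M, white_dst)
    adapted = tuple(c * d / s for c, s, d in zip(lms, ws, wd))  # diagonal scaling
    return _mul(M_INV, adapted)

# Example: adapt a color from illuminant A to D65 (white points as XYZ, Y = 100)
white_A, white_D65 = (109.85, 100.0, 35.58), (95.047, 100.0, 108.883)
von_kries_adapt((50.0, 40.0, 30.0), white_A, white_D65)
```

By construction, adapting the source white itself yields the destination white.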
BFD Chromatic Adaptation Model
In his doctoral thesis in 1985, Lam proposed an improved model based on Von Kries transform, called Bradford model, or BFD model for short.
Similar to the Von Kries transform, the first step of the BFD transform converts the observed tristimulus values XYZ into the stimulus amounts sensed by the RGB (or LMS) cone cells of the human eye:

$$\begin{bmatrix} R \\ G \\ B \end{bmatrix} = M_{BFD} \begin{bmatrix} X/Y \\ Y/Y \\ Z/Y \end{bmatrix}$$

where

$$M_{BFD} = \begin{bmatrix} 0.8951 & 0.2664 & -0.1614 \\ -0.7502 & 1.7135 & 0.0367 \\ 0.0389 & -0.0685 & 1.0296 \end{bmatrix}$$
The characteristics of the BFD transform are:
Brightness Y is used to normalize XYZ, and the resulting RGB tristimulus response is called a "sharp response";
If X = Y = Z, then R = G = B, i.e. the cones perceive a white response;
Since XYZ is normalized by brightness, BFD eliminates the effect of sample brightness on the cone response;
The sharp response narrows the spectral sensitivity curves while keeping color saturation unchanged, so it is well suited to color constancy calculations.
To predict XYZ under a new light source δ, the BFD model scales the sharp responses by the ratio of the white-point responses and then inverts the transform:

$$\begin{bmatrix} X_\delta \\ Y_\delta \\ Z_\delta \end{bmatrix} = Y \cdot M_{BFD}^{-1} \begin{bmatrix} R \cdot R_\delta / R_\beta \\ G \cdot G_\delta / G_\beta \\ B \cdot B_\delta / B_\beta \end{bmatrix}$$

where $(R_\beta, G_\beta, B_\beta)$ and $(R_\delta, G_\delta, B_\delta)$ are the sharp responses to the white points of the source and destination light sources, respectively. (The full BFD model additionally applies a small non-linear correction to the blue channel; the linearized form is shown here.)
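A minimal NumPy sketch of the linearized Bradford transform follows (the non-linear blue correction of the full model is omitted, as in common ICC-style usage); the function name is illustrative.

```python
import numpy as np

# Bradford (BFD) matrix as published by Lam (1985). Its rows each sum to
# ~1, so an equal-energy stimulus X = Y = Z maps to R = G = B (white).
M_BFD = np.array([
    [ 0.8951,  0.2664, -0.1614],
    [-0.7502,  1.7135,  0.0367],
    [ 0.0389, -0.0685,  1.0296],
])

def bfd_adapt(xyz, white_src, white_dst):
    """Linearized Bradford adaptation: diagonal von Kries scaling performed
    in the sharpened RGB space instead of the cone (LMS) space."""
    rgb = M_BFD @ np.asarray(xyz, dtype=float)
    rgb_src = M_BFD @ np.asarray(white_src, dtype=float)
    rgb_dst = M_BFD @ np.asarray(white_dst, dtype=float)
    return np.linalg.inv(M_BFD) @ (rgb * (rgb_dst / rgb_src))
```

Only the matrix differs from the Von Kries procedure; the sharper spectral shape of the Bradford sensors is what improves its color constancy predictions.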
Finlayson Matrix
Finlayson et al. proposed an improved matrix based on the BFD model in 2000. Experiments have shown that the improved matrix is better than the BFD matrix for color prediction.
The figure below compares the spectral characteristics of the Finlayson sharpened response (solid lines) and the BFD response (dashed lines). The sharpened response is narrower than the BFD response in the long-wavelength band, and its measured prediction results are better.
The following link describes methods for evaluating the effects of various chromatic adaptation matrices: http://www.brucelindbloom.com/index.html?Eqn_RGB_XYZ_Matrix.html
As shown in the figure below, the color of a certain ColorChecker patch under light source A is Sample A, and its color under light source C is Sample C. Sample A is chromatically adapted (CAT) to light source C using the Bradford, Von Kries, and XYZ Scaling matrices, producing the colors shown in the three small squares. No matrix outputs a completely correct result, but all are generally close to Sample C.
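The comparison in the figure can be reproduced numerically. The sketch below adapts a patch from illuminant A to illuminant C with both XYZ Scaling and the linearized Bradford transform; the sample values are illustrative, not measured ColorChecker data.

```python
import numpy as np

# Illuminant white points (2-degree observer), Y normalized to 1.
WHITE_A = np.array([1.09850, 1.00000, 0.35585])
WHITE_C = np.array([0.98074, 1.00000, 1.18232])

# Bradford matrix (Lam 1985).
M_BFD = np.array([
    [ 0.8951,  0.2664, -0.1614],
    [-0.7502,  1.7135,  0.0367],
    [ 0.0389, -0.0685,  1.0296],
])

def xyz_scaling(xyz, ws, wd):
    # "XYZ Scaling": diagonal von Kries scaling applied directly in XYZ space.
    return np.asarray(xyz, dtype=float) * (wd / ws)

def bradford(xyz, ws, wd):
    # Linearized Bradford: diagonal scaling in the sharpened RGB space.
    scale = (M_BFD @ wd) / (M_BFD @ ws)
    return np.linalg.inv(M_BFD) @ (scale * (M_BFD @ np.asarray(xyz, dtype=float)))

# A hypothetical reddish patch under illuminant A (illustrative values only).
sample_a = np.array([0.30, 0.25, 0.10])
print(xyz_scaling(sample_a, WHITE_A, WHITE_C))
print(bradford(sample_a, WHITE_A, WHITE_C))
```

Both transforms map the source white exactly to the destination white, yet their predictions for chromatic samples differ, which is why the three small squares in the figure do not coincide.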