• The concept of a color model. The RGB and CMY(K) models and the relationship between them. The color wheel. RGB, CMYK, XYZ and other image color schemes

    Whenever color computer maps are printed, the problem of accurately reproducing the original colors inevitably arises. This problem has several causes.

    First, scanners and monitors work in the additive RGB color model, which is based on the rules of color addition, while printing is done in the subtractive CMYK model, in which the rules of color subtraction apply.

    Second, an image is rendered differently on a computer monitor than on paper.

    Third, reproduction proceeds in stages across several devices (scanner, monitor, phototypesetting machine), which must be adjusted to minimize color distortion throughout the entire technological cycle; this adjustment is called process calibration.

    RGB model.

    The RGB color model (Fig. 1) (R - Red, G - Green, B - Blue) is used to describe colors seen in transmitted or direct light. It matches the color perception of the human eye, which is why monitors, scanners, digital cameras and other optical devices build images according to the RGB model. In the computer RGB model, each primary color can take 256 brightness levels, corresponding to 8-bit mode.

    Fig. 1. RGB color model

    Model CMY (CMYK)

    The CMY color model (Fig. 2) (C - Cyan, M - Magenta, Y - Yellow) is used to describe colors seen in reflected light (for example, the color of ink applied to paper). In theory, the sum of the CMY colors at maximum intensity should produce pure black. In practice, because of imperfections in the ink pigments, the sum of cyan, magenta and yellow produces a dirty brown instead. Therefore printing uses a fourth ink - black (the K in CMYK, from blacK) - which gives a rich, uniform black. It is used for printing text and other fine details, and for adjusting the overall tonal range of an image. Ink coverage in the CMYK model is measured as a percentage, so each color has 100 gradations.

    The main task of the reproduction process is to convert an image from the RGB model to the CMYK model. The conversion is performed by special software filters that take into account the future printing conditions: the process-ink system, the dot gain coefficient, the black-generation method, ink balance and so on. Color separation is therefore a complex process on which the quality of the final image largely depends. But even with an optimal RGB-to-CMYK conversion, some shades are inevitably lost because of the different nature of the two color models. Note also that neither RGB nor CMYK can reproduce the full range of colors visible to the human eye.
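    As a sketch of the basic arithmetic behind such a conversion, here is a naive RGB-to-CMYK transform in Python. Real color separation also accounts for dot gain, ink limits and the black-generation strategy; the function below is an illustration, not a production formula.

```python
def rgb_to_cmyk(r, g, b):
    """Naive RGB -> CMYK conversion, all channels in 0..1."""
    c, m, y = 1 - r, 1 - g, 1 - b      # CMY is the complement of RGB
    k = min(c, m, y)                   # black ink replaces the common gray component
    if k == 1:                         # pure black: avoid division by zero
        return 0.0, 0.0, 0.0, 1.0
    return (c - k) / (1 - k), (m - k) / (1 - k), (y - k) / (1 - k), k

print(rgb_to_cmyk(1.0, 0.0, 0.0))  # pure red -> (0.0, 1.0, 1.0, 0.0)
```

    Note how the black (K) channel absorbs the shared gray component of the three chromatic inks, which is exactly why the fourth ink is needed in print.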

    Fig. 2. CMY color model

    Model HSB.

    Color can also be characterized by other visual components. In the HSB model, the color space is built on three coordinates: hue (Hue), saturation (Saturation) and brightness (Brightness). These three parameters can be treated as coordinates that determine the position of a visible color in the color space.

    Fig. 3. HSB color model

    Brightness is plotted along the central vertical axis (Fig. 3) and saturation along the horizontal one. Hue corresponds to the angle of the saturation axis around the brightness axis. Saturated, vivid hues lie near the outer radius; toward the center they mix and become less saturated. Moving along the vertical axis makes colors of any hue and saturation lighter or darker.

    In the center, where all the color tones mix, a neutral gray color is formed.

    This color model maps well onto human perception: hue corresponds to the wavelength of the light, saturation to its intensity, and brightness to the amount of light.
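    The HSB description can be computed directly from RGB values; Python's standard colorsys module implements this conversion (it calls the model HSV, for hue-saturation-value):

```python
import colorsys

# Convert an RGB color (0..1 per channel) to HSB/HSV:
# hue, saturation and value (brightness).
r, g, b = 1.0, 0.5, 0.0                  # a saturated orange
h, s, v = colorsys.rgb_to_hsv(r, g, b)
print(round(h * 360), s, v)              # hue in degrees; s and v in 0..1
```

    For this orange the hue comes out at 30° on the color wheel, with full saturation and full brightness, matching the geometric picture described above.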

    CIE system.

    A color space can be used to describe the range of colors perceived by an observer or reproduced by a device; this range is called the gamut. The three-dimensional form is also very convenient for comparing two or more colors. Three-dimensional color models and three-component color systems such as RGB, CMY and HSB are called three-coordinate (tristimulus) colorimetric systems.

    Any measurement system requires a repeatable set of standard scales. The RGB color model cannot serve as a colorimetric standard because it is not unique: the space depends on the specific device. A universal color system was therefore needed, and the CIE system became that standard. To obtain a set of standard colorimetric scales, in 1931 the International Commission on Illumination (Commission Internationale de l'Eclairage, CIE) approved several standard color spaces describing the visible spectrum. With these systems, the color spaces of individual observers and devices can be compared against one another on the basis of repeatable standards.

    CIE color systems are similar to the 3D models discussed above in that they also use three coordinates to locate a color in color space. Unlike them, however, the CIE spaces - CIE XYZ, CIE L*a*b* and CIE L*u*v* - are device independent: the range of colors that can be defined in these spaces is not limited by the reproduction capabilities of a particular device or by the visual experience of a particular observer.

    CIE XYZ.

    The main CIE color space is CIE XYZ. It is built on the visual capabilities of a so-called standard observer - a hypothetical viewer whose capabilities were carefully studied and recorded in long-term studies of human vision conducted by the CIE. The system has three primary colors (red, green and blue), standardized by wavelength, with fixed coordinates in the xy plane:

    λ, nm    700.0    546.1    435.8
    x        0.72     0.28     0.18
    y        0.27     0.72     0.08

    From the data obtained in these studies, the xyY chromaticity diagram was constructed (Fig. 11).

    All shades visible to the human eye lie inside the closed curve. The primary colors of the RGB model form the vertices of a triangle that contains the colors a monitor can display. The CMYK colors reproducible in print are enclosed within a polygon. The third coordinate, Y, is perpendicular to the xy plane and represents the brightness gradations of a given color.
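    The claim that the triangle contains the displayable colors can be checked numerically: a chromaticity point is displayable if it lies inside the triangle of the display's primaries. A small Python sketch, using the well-known Rec. 709 / sRGB primaries as an example of a typical monitor gamut:

```python
# (x, y) chromaticities of the Rec. 709 / sRGB primaries: R, G, B
REC709 = [(0.640, 0.330), (0.300, 0.600), (0.150, 0.060)]

def cross(o, a, b):
    # z-component of the 2D cross product (a - o) x (b - o)
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def in_gamut(p, tri=REC709):
    """True if chromaticity point p = (x, y) lies inside the primaries' triangle."""
    signs = [cross(tri[i], tri[(i + 1) % 3], p) for i in range(3)]
    return all(s >= 0 for s in signs) or all(s <= 0 for s in signs)

print(in_gamut((0.3127, 0.3290)))  # D65 white: inside the monitor gamut
print(in_gamut((0.7347, 0.2653)))  # spectral 700 nm red: outside it
```

    Spectrally pure colors on the curve itself, such as 700 nm red, fall outside the triangle, which is precisely why a monitor cannot show them.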

    CIE Lab model

    This model was created as an improvement on the CIE XYZ model and is likewise device independent. The idea behind the Lab model is perceptual uniformity: equal numeric steps in a channel's value should correspond to equal perceived changes in color.

    In the Lab model:

    the value L characterizes lightness (Lightness), from 0 to 100%;

    the index a defines the range on the color wheel from green (-120) to red (+120);

    the index b defines the range from blue (-120) to yellow (+120).

    At the center of the wheel, the color saturation is 0.

    Lab's color gamut completely encompasses the gamuts of all other color models and of the human eye. Publishing programs use Lab as an intermediate model when converting from RGB to CMYK.
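    As a sketch of how Lab values are derived, here is the standard CIE XYZ-to-L*a*b* formula in Python; the D65 white point is assumed as the reference white:

```python
def xyz_to_lab(X, Y, Z, white=(0.95047, 1.0, 1.08883)):
    """CIE XYZ -> CIE L*a*b*, relative to a reference white (D65 here)."""
    def f(t):
        # cube root above a small threshold, linear segment below it
        return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29
    fx, fy, fz = (f(v / w) for v, w in zip((X, Y, Z), white))
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

L, a, b = xyz_to_lab(0.95047, 1.0, 1.08883)  # the white point itself
print(round(L), round(a), round(b))          # L = 100, a = 0, b = 0
```

    The reference white comes out with L = 100 and zero a and b, i.e. maximum lightness and no chromatic component, as the channel definitions above require.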


    I'm going to take a tour through the history of the science of human perception that led to the creation of modern video standards. I will also try to explain the commonly used terminology, and briefly discuss why the typical game production process will, over time, come to resemble the process used in the film industry.

    Pioneers of color perception research

    Today we know that the retina of the human eye contains three different types of photoreceptor cells called cones. Each of the three types of cone contains a protein from the opsin family that absorbs light in different parts of the spectrum:

    Light absorption by opsins

    Cones correspond to the red, green and blue parts of the spectrum and are often called long (L), medium (M) and short (S) according to the wavelengths to which they are most sensitive.

    One of the first scientific works on the interaction of light and the retina was the treatise “Hypothesis Concerning Light and Colors” by Isaac Newton, written between 1670-1675. Newton had a theory that light of different wavelengths caused the retina to resonate at the same frequencies; these vibrations were then transmitted through the optic nerve to the "sensorium".


    “Rays of light falling on the bottom of the eye excite vibrations in the retina, which propagate along the fibers of the optic nerves to the brain, creating the sense of vision. Different types of rays create vibrations of different strengths, which, according to their strength, excite sensations of different colors ... "

    More than a hundred years later, Thomas Young concluded that since resonance frequency is a system-dependent property, the retina would need an infinite number of different resonance systems to absorb light of every frequency. Young considered this unlikely and reasoned that the number was limited to one system each for red, yellow and blue - the colors traditionally used in subtractive paint mixing. In his own words:

    Since, for reasons given by Newton, it is possible that the movement of the retina is of an oscillatory rather than a wave nature, the frequency of the oscillations must depend on the structure of its substance. Since it is almost impossible to believe that each sensitive point of the retina contains an infinite number of particles, each of which is capable of vibrating in perfect harmony with any possible wave, it becomes necessary to assume that the number is limited, for example, to the three primary colors: red, yellow and blue...
    Young's assumption about the retina was wrong, but he concluded correctly: there are a finite number of cell types in the eye.

    In 1850, Hermann Helmholtz was the first to obtain experimental proof of Young's theory. Helmholtz asked subjects to match the colors of various light samples by adjusting the brightness of several monochromatic light sources. He concluded that three light sources - in the red, green and blue parts of the spectrum - are necessary and sufficient to match all the samples.

    The Birth of Modern Colorimetry

    Fast forward to the early 1930s. By then the scientific community had a fairly good understanding of the inner workings of the eye. (Although it took another 20 years for George Wald to experimentally confirm the presence and function of the light-sensitive pigments in retinal cones - a discovery that earned him the 1967 Nobel Prize in Physiology or Medicine.) The Commission Internationale de l'Eclairage (International Commission on Illumination), CIE, set out to create a comprehensive quantitative description of human color perception. The quantification was based on experimental data collected by William David Wright and John Guild under conditions similar to those first chosen by Hermann Helmholtz. The primary wavelengths chosen were 435.8 nm for blue, 546.1 nm for green and 700 nm for red.


    John Guild's experimental setup, three knobs adjusting primary colors

    Because of the significant overlap between the M and L cone sensitivities, some wavelengths in the blue-green part of the spectrum could not be matched. To “match” these colors, a little of the primary red had to be added to the test sample:

    If we allow a primary color to contribute negatively, the matching equation can be rewritten as:

    The result of the experiments was a table of RGB triples for each wavelength, which was plotted as the following graph:


    CIE 1931 RGB color matching functions

    Of course, colors with a negative red component cannot be displayed using the CIE primaries.

    We can now find the tristimulus values for a light source with spectral intensity distribution S as the following inner products:

    R = ∫ S(λ) r̄(λ) dλ,  G = ∫ S(λ) ḡ(λ) dλ,  B = ∫ S(λ) b̄(λ) dλ

    It may seem obvious that sensitivity to different wavelengths can be integrated in this way, but it actually depends on the eye's physical response being linear at each wavelength. This was empirically confirmed in 1853 by Hermann Grassmann, and the integrals above in their modern formulation are known to us as Grassmann's law.
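    Grassmann's law in its discrete form is just an inner product of sampled functions. The sketch below uses made-up color-matching-function samples purely to show the computation; they are not real CIE data:

```python
# Toy color matching function samples on a coarse 50 nm grid, 400-700 nm.
# These values are invented for illustration - NOT the real CIE CMFs.
r_bar = [0.0, 0.0, 0.1, 0.3, 0.8, 0.9, 0.2]
g_bar = [0.0, 0.1, 0.5, 0.9, 0.6, 0.1, 0.0]
b_bar = [0.3, 0.9, 0.4, 0.1, 0.0, 0.0, 0.0]

S = [1.0] * 7        # a flat (equal-energy) test spectrum
dl = 50.0            # integration step, nm

def tristimulus(S, cmf, dl):
    # discrete approximation of the integral of S(λ) * cmf(λ) dλ
    return dl * sum(s * c for s, c in zip(S, cmf))

R, G, B = (tristimulus(S, cmf, dl) for cmf in (r_bar, g_bar, b_bar))
print(R, G, B)
```

    With real CIE 1931 tables in place of the toy samples, this is exactly how tristimulus values are computed from a measured spectrum.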

    The term “color space” arose because the primary colors (red, green and blue) can be treated as the basis of a vector space. In this space, the different colors perceived by a person are represented by rays emanating from the origin. The modern definition of a vector space was introduced in 1888 by Giuseppe Peano, but more than 30 years earlier James Clerk Maxwell was already using the nascent theories of what later became linear algebra to formally describe the trichromatic color system.

    The CIE decided that, to simplify calculations, it would be more convenient to work with a color space in which the coefficients of the primary colors are always positive. The three new primary colors were expressed in CIE RGB coordinates as follows:

    This new set of primary colors cannot be realized in the physical world. It is simply a mathematical tool that makes working with the color space easier. In addition to making the primary coefficients always positive, the new space is arranged so that the Y coefficient of a color corresponds to its perceived brightness. This component is known as the CIE luminance (you can read more about it in Charles Poynton's excellent Color FAQ).

    To make the resulting color space easier to visualize, we perform one last transformation. Dividing each component by the sum of the components gives dimensionless chromaticity values that do not depend on brightness:

    x = X / (X + Y + Z),  y = Y / (X + Y + Z),  z = Z / (X + Y + Z) = 1 - x - y

    The x and y coordinates are known as the chromaticity coordinates, and together with the CIE luminance Y they make up the CIE xyY color space. If we plot the chromaticity coordinates of all colors with a given luminance, we get the following diagram, which is probably familiar to you:
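    The projection to chromaticity coordinates is a one-liner; a small Python sketch (the XYZ values below are the commonly quoted D65 white point, used here as an example input):

```python
def xyz_to_xyy(X, Y, Z):
    """Project CIE XYZ tristimulus values into the xyY space:
    x and y locate the color on the chromaticity diagram,
    Y carries the luminance as the third coordinate."""
    s = X + Y + Z
    return X / s, Y / s, Y

# D65 white point tristimulus values, normalized so that Y = 1
x, y, Y = xyz_to_xyy(0.95047, 1.0, 1.08883)
print(round(x, 4), round(y, 4))  # x ≈ 0.3127, y ≈ 0.3290
```

    These x and y values are exactly the D65 white-point chromaticity that reappears later in the Rec. 709 and Rec. 2020 standards.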


    CIE 1931 xyY diagram

    The last thing to know is what counts as white in this color space. In such a display system, white is the x and y chromaticity obtained when all three RGB primary coefficients are equal to one another.

    Over the years, several new color spaces have emerged that improve upon the CIE 1931 spaces in various ways. Despite this, the CIE xyY system remains the most popular color space for describing the properties of display devices.

    Transfer functions

    Before looking at video standards, two more concepts need to be introduced and explained.

    Optoelectronic transfer function

    The opto-electronic transfer function (OETF) determines how linear light captured by a device (a camera) is encoded into a signal, i.e. it is a function of the form V = OETF(L).

    V used to be an analog signal, but nowadays it is encoded digitally. Game developers rarely deal with the OETF directly. One example where it matters is when a game needs to combine recorded video with computer graphics: you must know which OETF the video was recorded with in order to recover the linear light and mix it correctly with the rendered image.

    Electro-optical transfer function

    The electro-optical transfer function (EOTF) performs the opposite task of the OETF, i.e. it determines how the signal is converted back into linear light: L = EOTF(V).

    This function is more important to game developers because it determines how the content they create will appear on users' TV screens and monitors.
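    The relationship between the two functions can be illustrated with a toy pure-gamma pair. Real standards use more elaborate curves, and the gamma value here is only an assumption for illustration:

```python
# A toy pure-gamma transfer pair: the OETF encodes linear light into a
# signal, the EOTF decodes it back. GAMMA = 2.4 is an assumed value.
GAMMA = 2.4

def oetf(L):
    # linear scene light (0..1) -> signal
    return L ** (1 / GAMMA)

def eotf(V):
    # signal (0..1) -> linear display light
    return V ** GAMMA

L_in = 0.25
print(eotf(oetf(L_in)))  # an exact-inverse pair round-trips the value
```

    In broadcast practice the OETF and EOTF are deliberately not exact inverses (see the Rec. 709 discussion below), but the round trip shown here is the conceptual starting point.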

    Relationship between EOTF and OETF

    The concepts of EOTF and OETF, although related, serve different purposes. The OETF is needed to obtain a representation of the captured scene from which the original linear lighting can later be reconstructed (this representation is conceptually similar to the HDR (High Dynamic Range) framebuffer of an ordinary game). The production stages of a typical film look like this:
    • Capture scene data
    • Inverting OETF to restore linear lighting values
    • Color correction
    • Mastering for various target formats (DCI-P3, Rec. 709, HDR10, Dolby Vision etc.):
      • Reducing the dynamic range of a material to match the dynamic range of the target format (tone mapping)
      • Convert to target format color space
      • Invert the EOTF for the material (so that when the display device applies its EOTF, the image is restored as intended).
    A detailed discussion of this process is beyond the scope of this article, but I recommend studying the formalized description of the ACES (Academy Color Encoding System) workflow.

    Until now, the standard technical pipeline of a game has looked like this:

    • Rendering
    • HDR Frame Buffer
    • Tonal correction
    • Invert EOTF for the intended display device (usually sRGB)
    • Color correction
    Most game engines use the color-grading technique popularized by Naty Hoffman's Siggraph 2010 presentation "Color Enhancement for Videogames". The technique was practical when only SDR (Standard Dynamic Range) targets existed, and it allowed artists to use color-grading software already installed on their computers, such as Adobe Photoshop.


    Standard SDR color grading workflow (image credit: Jonathan Blow)

    After the introduction of HDR, most games began moving toward a process similar to the one used in film production. Even without HDR, a film-like pipeline allowed performance to be optimized. Doing color grading in HDR means the entire dynamic range of the scene is available, and some effects that were previously impossible become possible.

    Now we are ready to look at the different standards currently used to describe television formats.

    Video standards

    Rec. 709

    Most standards related to video broadcasting are issued by the International Telecommunication Union (ITU), a UN body primarily concerned with information technology.

    ITU-R Recommendation BT.709, more commonly known as Rec. 709, is the standard describing the properties of HDTV. The first version of the standard was released in 1990, the latest in June 2015. The standard covers parameters such as aspect ratio, resolution and frame rate; most people are familiar with these, so I will skip them and focus on the standard's color and brightness sections.

    The standard describes chromaticity in terms of the CIE xyY color space. The red, green and blue illuminants of a compliant display must have the following chromaticity coordinates:

    R: x = 0.640, y = 0.330
    G: x = 0.300, y = 0.600
    B: x = 0.150, y = 0.060

    Their relative intensities must be adjusted so that the white point has the chromaticity x = 0.3127, y = 0.3290.

    (This white point is also known as CIE Standard Illuminant D65 and approximates the chromaticity of the spectral intensity distribution of average daylight.)

    Color properties can be visually represented as follows:


    The Rec. 709 gamut

    The area of the chromaticity diagram bounded by the triangle formed by a display system's primary colors is called its gamut.

    Now we move on to the brightness portion of the standard, and this is where things get a little more complicated. The standard states that the "overall opto-electronic transfer characteristic at the source" is:

    V = 1.099 L^0.45 - 0.099 for 1 ≥ L ≥ 0.018
    V = 4.5 L for 0.018 > L ≥ 0

    There are two problems here:

    1. There is no specification on what physical brightness corresponds to L=1
    2. Although it is a video broadcast standard, it does not specify EOTF
    This came about historically because the display device - the consumer CRT TV - effectively was the EOTF. In practice, the captured luminance range was adjusted through the OETF above so that the image looked good on a reference monitor with the following EOTF:

    L = V^2.4

    where L = 1 corresponds to a luminance of approximately 100 cd/m² (the industry calls the unit cd/m² a "nit"). The ITU confirms this in the latest versions of the standard with the following comment:

    In standard production practice, the encoding function of the image sources is adjusted so that the final image has the desired appearance as seen on the reference monitor. The decoding function from Recommendation ITU-R BT.1886 is taken as a reference. The reference viewing environment is specified in ITU-R Recommendation BT.2035.
    Rec. 1886 is the result of an effort to document the characteristics of CRT monitors (the standard was published in 2011), i.e. it is a formalization of existing practice.
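    These two curves can be sketched in Python. The OETF coefficients are the well-known Rec. 709 ones; the EOTF is the simplified gamma-2.4 curve (BT.1886 with zero black level):

```python
# Rec. 709 OETF and the reference gamma-2.4 EOTF, values normalized to 0..1.

def rec709_oetf(L):
    # linear scene light -> signal
    return 4.5 * L if L < 0.018 else 1.099 * L ** 0.45 - 0.099

def bt1886_eotf(V):
    # signal -> linear display light (BT.1886 simplified: zero black level)
    return V ** 2.4

# The two curves are NOT exact inverses: the end-to-end result is a mild
# contrast boost, historically tuned for dim living-room viewing.
print(round(bt1886_eotf(rec709_oetf(0.5)), 3))
```

    Feeding mid-gray (0.5) through the whole chain returns roughly 0.43 rather than 0.5, which is the deliberate system gamma mentioned above.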


    A graveyard of CRT TVs

    The way CRT monitors are physically constructed makes brightness a nonlinear function of the applied voltage. By pure coincidence, this nonlinearity is (very) approximately the inverse of human brightness perception. When signals became digital, this had the fortunate effect of distributing the quantization error roughly evenly across the brightness range.

    Rec. 709 specifies 8-bit or 10-bit encoding; most content uses 8 bits. For 8-bit content, the standard states that the signal's brightness range must be mapped to code values 16-235.
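    The mapping to the 16-235 code range is simple to sketch:

```python
# Map a normalized Rec. 709 signal value (0..1) to an 8-bit "video range"
# code value, 16..235, as the standard prescribes for luma.
def encode_video_range(V):
    return round(16 + V * (235 - 16))

print(encode_video_range(0.0), encode_video_range(1.0))  # 16 235
```

    Codes below 16 and above 235 are reserved (for synchronization and out-of-range excursions), which is why full-range and video-range confusion is a classic source of washed-out or crushed pictures.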

    HDR10

    When it comes to HDR video, there are two main contenders: Dolby Vision and HDR10. In this article I focus on HDR10 because it is an open standard that gained popularity faster; it is the standard chosen for the Xbox One S and PS4.

    We'll start again with the chromaticity part of the color space used in HDR10, defined in ITU-R Recommendation BT.2020 (UHDTV). It specifies the following chromaticity coordinates for the primary colors:

    R: x = 0.708, y = 0.292
    G: x = 0.170, y = 0.797
    B: x = 0.131, y = 0.046

    Once again, D65 is used as the white point. Visualized on the xy plane, the Rec. 2020 gamut looks like this:


    The Rec. 2020 gamut

    It is clearly visible that this color space's gamut is significantly larger than that of Rec. 709.

    Now we move on to the brightness section of the standard, and this is where things get interesting again. In his 1999 Ph.D. thesis "Contrast Sensitivity of the Human Eye and Its Effects on Image Quality", Peter Barten presented a somewhat intimidating equation:

    (Many of the variables in this equation are themselves complex equations; brightness, for instance, is hidden inside the equations for E and M.)

    The equation determines how sensitive the eye is to changes in contrast at different luminances, with its various parameters describing the viewing conditions and certain properties of the observer. The "just noticeable difference" (JND) is the inverse of Barten's equation, so for an EOTF quantization step to be invisible under the given viewing conditions, the following must hold:

    The Society of Motion Picture and Television Engineers (SMPTE) decided that Barten's equation would be a good basis for a new EOTF. The result was what we now call SMPTE ST 2084 or Perceptual Quantizer (PQ).

    PQ was created by choosing conservative values for the parameters of Barten's equation, i.e. those of typical consumer viewing conditions. PQ was then defined as the quantization that, for a given luminance range and number of samples, most closely tracks Barten's equation with the chosen parameters.
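    The PQ EOTF itself is published in closed form in SMPTE ST 2084; here is a direct Python transcription using the standard's exact rational constants:

```python
# SMPTE ST 2084 (PQ) EOTF: converts a normalized signal E (0..1) into
# absolute luminance in cd/m² (nits), up to the 10,000-nit peak.
M1 = 2610 / 16384
M2 = 2523 / 4096 * 128
C1 = 3424 / 4096
C2 = 2413 / 4096 * 32
C3 = 2392 / 4096 * 32

def pq_eotf(E):
    p = E ** (1 / M2)
    return 10000 * (max(p - C1, 0.0) / (C2 - C3 * p)) ** (1 / M1)

print(pq_eotf(0.0))          # 0.0 nits
print(round(pq_eotf(1.0)))   # 10000 nits
```

    Unlike the relative signals of Rec. 709, a PQ code value denotes an absolute luminance, which is what allows HDR10 content to be mastered against fixed nit levels.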

    The discretized EOTF values can be found using the following recurrence relation for k < 1; the last sample value gives the required maximum brightness:

    For a maximum brightness of 10,000 nits using 12-bit sampling (which is used in Dolby Vision), the result looks like this:


    EOTF PQ

    As you can see, sampling does not cover the entire brightness range.

    The HDR10 standard also uses the PQ EOTF, but with 10-bit quantization. This is not enough to stay below the Barten threshold across the whole 10,000-nit range, but the standard allows metadata to be embedded in the signal to dynamically adjust peak brightness. Here is what 10-bit PQ quantization looks like for different brightness ranges:


    Various EOTF HDR10

    But even so, quantization still ends up slightly above the Barten threshold. However, the situation is not as bad as the graph makes it seem, because:

    1. The curve is logarithmic, so the relative error is actually not that great
    2. Remember that the parameters used to construct the Barten threshold were chosen conservatively.
    At the time of writing, HDR10 TVs on the market typically have a peak brightness of 1000-1500 nits, for which 10 bits is sufficient. It is also worth noting that TV manufacturers decide for themselves what to do with brightness above the range they can display: some clip hard, others roll off more gently.

    Here is what 8-bit Rec. 709 quantization with a 100-nit peak brightness looks like:


    EOTF Rec. 709 (16-235)

    As you can see, we are well above Barten's threshold - and importantly, even the least demanding buyers will set their TVs well above 100 nits of peak brightness (usually 250-400 nits), which pushes the Rec. 709 curve even higher.

    In conclusion

    One of the biggest differences between Rec. 709 and HDR is that HDR brightness is specified in absolute values. In theory, this means that content mastered for HDR will look the same on all compatible TVs - at least up to their peak brightness.

    There is a popular misconception that HDR content will be brighter overall, but this is generally not the case. HDR films are most often mastered so that the mid-level image brightness is the same as for Rec. 709, but the brightest parts of the image become brighter and more detailed, which means the midtones and shadows are darker. Combined with HDR's absolute brightness values, this means that optimal HDR viewing requires good conditions: in bright light the pupil constricts, making details in dark areas of the image harder to see.


    This article describes the color models used by Adobe Photoshop.

    The world around us is full of all kinds of colors and shades of color. From a physical point of view, color is a set of specific wavelengths reflected from an object or transmitted through a transparent object. However, now we are interested not in the question of what color is, what its physical nature is, but in how one can obtain this or that color in practice. With the development of many industries, including printing and computer technology, the need for objective methods for describing and processing color has arisen.

    Colors in nature are rarely pure. Most colors are obtained by mixing others: a combination of red and blue produces purple, blue and green produce cyan. Thus, by mixing a small number of simple colors you can obtain a great many complex (composite) ones. This is why the concept of a color model is introduced - a way of representing a large number of colors by decomposing them into simple components.

    Color wheel

    One such model is the color wheel, mentioned many times before. It is shown in the figure and is known as the large Ostwald circle.

    Along with Ostwald's circle there is also Goethe's circle, in which the primary colors sit at the corners of an equilateral triangle and the secondary colors at the corners of an inverted triangle. A diagram of such a circle is presented below; contrasting colors lie opposite each other.

    Color gamut

    Before looking at the color models individually, let us first consider the concept of color gamut, which indicates how well a particular color model represents colors. Color gamut is the maximum range of colors that a device or the human eye can reproduce or detect.

    The cathode-ray tube of a monitor or TV, color models, printing inks and, of course, the human eye each have a certain color gamut. Figure 3 schematically compares the gamuts of the human eye, a monitor and a printing press. The monitor's gamut roughly corresponds to the RGB model in its various variations; the printing press's gamut corresponds to CMYK.

    So, in computing, printing and many other image-related industries, color is represented as a combination of a small number of components. Such a representation is called a color model. Different types of model have different color schemes, and this is the source of their main advantages and disadvantages. Reflected and emitted color are described differently. There are quite a few color models, but we will focus only on those most often used in graphics packages.

    RGB color model

    This is one of the most common and frequently used models. It is used in light-emitting devices such as monitors, projectors and televisions. The model is based on three primary colors: Red, Green and Blue. Each component can vary from 0 to 255, forming different colors and thus providing access to all 16 million of them. When working in Adobe Photoshop, you can select a color not only visually but also, when necessary, by entering numeric values, which gives precise control over the process, especially during color correction.

    This color model is additive: as the brightness of the individual components increases, so does the brightness of the resulting color. Mixing all three colors at maximum intensity gives white; the absence of all three gives black.

    Important to know: the numerical channel values in Photoshop indicate the brightness of the given color - the larger the number, the lighter the channel looks. To better understand this fundamental principle, experiment with the color picker dialog by entering different values for one channel while keeping the others at zero.

    The advantage of this mode is that it offers 16 million colors at 8 bits per channel (2^24 colors); the disadvantage is that some of these colors, mainly the brightest and most saturated ones, are lost when the image is printed, and blue hues are particularly problematic.

    The RGB color model is considered the easiest to master, and the vast majority of tutorials for beginning and intermediate users are written for it. But a high level of proficiency in Photoshop requires knowing the basics of, and being able to work in, the other color models.

    CMYK color model

    The CMYK color model is much closer to the color gamut of a printed image.

    Unlike the previous RGB color model, this model uses so-called subtractive color synthesis, working with the parameters of reflected light. If an object's color is cyan, it absorbs red from white light - in other words, red is subtracted from white. If an object's color is magenta, it absorbs green; if it is yellow, it absorbs blue. If an object absorbs all colors, we see it as black. In the CMYK model, black is called the skeletal or key color. The abbreviation CMYK is formed from the first letters of the subtractive inks - Cyan, Magenta, Yellow - plus K for the key (black) color.

    Important to know: the channel values of the CMYK color model in Photoshop indicate the amount of ink of a given color - the higher the numerical value of a channel, the darker it is. This is the fundamental difference from the previous model. In addition, since CMYK has four channels, more subtle, even jeweler-grade, color correction becomes possible, which is why professional users prefer to perform color correction in this model.

    Preparing an image for printing in a printing house or on a printer also requires knowledge of and the ability to work in CMYK, since printing presses, including desktop printers, create images according to exactly this principle.

    The disadvantage of CMYK is that it has a narrower color gamut, so some colors are irretrievably lost when converted from another color model.

    Lab color model

    If the previous color models usually present no difficulties, with the Lab model the situation is quite different: understanding the interaction of its color channels is a little harder. The point is that in the Lab space color is separated from contrast. A single L (lightness) channel contains information about image detail and luminance contrast; it is almost a black-and-white version of the image. Channel a covers the palette from magenta (+127) to green (-128). Channel b covers the palette from yellow (+127) to blue (-128). Zero values of a and b correspond to neutral tones, that is, all shades of gray.
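    To illustrate how L, a and b relate to RGB values, here is a from-scratch Python sketch of the standard sRGB → XYZ → Lab formulas (D65 white point). The helper names are our own, and a color-managed application would use ICC profiles rather than hand-coded formulas:

```python
def srgb_to_lab(r, g, b):
    """Convert an 8-bit sRGB color to CIE Lab (D65 white point)."""
    # 1. Undo sRGB gamma to get linear light in 0..1.
    def lin(u):
        u /= 255
        return u / 12.92 if u <= 0.04045 else ((u + 0.055) / 1.055) ** 2.4
    rl, gl, bl = lin(r), lin(g), lin(b)
    # 2. Linear RGB -> XYZ (standard sRGB matrix).
    x = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl
    y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl
    # 3. XYZ -> Lab relative to the D65 reference white.
    xn, yn, zn = 0.95047, 1.0, 1.08883
    def f(t):
        return t ** (1 / 3) if t > 0.008856 else 7.787 * t + 16 / 116
    fx, fy, fz = f(x / xn), f(y / yn), f(z / zn)
    L = 116 * fy - 16      # lightness, 0..100
    a = 500 * (fx - fy)    # green (-) .. magenta (+)
    b_ = 200 * (fy - fz)   # blue (-) .. yellow (+)
    return (round(L, 1), round(a, 1), round(b_, 1))

print(srgb_to_lab(255, 255, 255))  # white: L = 100, a and b near zero
print(srgb_to_lab(128, 128, 128))  # gray: a and b stay near zero
print(srgb_to_lab(0, 0, 255))      # blue: strongly negative b
```

    As the text says, any neutral gray lands at a ≈ 0, b ≈ 0, with all the image information carried by L alone.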

    Lab is also called a hardware-independent model. In fact, the entire operation of the Photoshop program is based on the algorithms of this color model (although most people are unaware of it). Lab's color gamut covers all the colors we can see, which is why almost half of them cannot be reproduced in print, and about a fifth cannot be reproduced by a monitor.

    Mastering the work in Lab is not easy, but mastering even a few techniques of working in this space allows you to perform corrections that are either impossible to make in other models, or the result obtained in Lab in a few seconds is achieved with a lot of effort and time.

    In conclusion, I would like to add that no matter what color space you choose to work with, this in itself does not mean anything. To achieve a good result, you need to clearly know the principles of color formation for each model, and, of course, the basics of working with all the tools of the Photoshop program.

    I wish you creative success!
    Evgeniy Kartashov

    The RGB color model (from the English Red, Green, Blue - red, green, blue) is an additive color model that describes a method of color synthesis for color reproduction. In the Russian tradition it is sometimes referred to as KZS.

    History
    In 1861, the English physicist James Clerk Maxwell proposed a method for obtaining a color image that is known as additive color mixing. An additive (summing) color-reproduction system means that colors in this model are added to black. Additive color mixing can be interpreted as the process of combining light streams of different colors before they reach the eye.
    Additive color models (from the English add) are color models in which a luminous flux with a spectral distribution, visually perceived as the desired color, is created by proportionally mixing the light emitted by three sources. Mixing schemes can differ; one of them is shown in the figure.
    The additive color model assumes that each light source has its own constant spectral distribution, and its intensity is adjustable.
    There are two types of additive color model: hardware-dependent and perceptual. In a device-dependent model, the color space depends on the characteristics of the image output device (monitor, projector). Because of this, the same image represented in such a model will look slightly different when reproduced on different devices.
    The perceptual model is built taking into account the characteristics of the observer's vision rather than the technical characteristics of devices.
    In 1931, the International Commission on Illumination (CIE) standardized the color system and completed work on a mathematical model of human vision. The CIE 1931 XYZ color space was adopted, which remains the basic model to this day.

    Mechanism of color formation
    When a person perceives color, the three primary colors are what the eye perceives directly; all other colors are mixtures of the three primary colors in different proportions: R+G=Y (Yellow); G+B=C (Cyan); B+R=M (Magenta). The sum of all three primary colors in equal parts gives white: R+G+B=W (White). For example, on the screen of a monitor with a cathode ray tube (as in a similar TV set), the image is created by illuminating a phosphor with a beam of electrons. Under this impact the phosphor begins to emit light, and depending on the composition of the phosphor this light has one color or another.
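    The mixing rules above (R+G=Y, G+B=C, B+R=M, R+G+B=W) can be demonstrated in a few lines of Python; the helper add_colors is our own illustration of channel-wise additive mixing:

```python
def add_colors(c1, c2):
    """Additive mixing of two RGB triples: channel-wise sum, clipped to 255."""
    return tuple(min(a + b, 255) for a, b in zip(c1, c2))

RED, GREEN, BLUE = (255, 0, 0), (0, 255, 0), (0, 0, 255)

print(add_colors(RED, GREEN))                    # (255, 255, 0)   yellow
print(add_colors(GREEN, BLUE))                   # (0, 255, 255)   cyan
print(add_colors(BLUE, RED))                     # (255, 0, 255)   magenta
print(add_colors(add_colors(RED, GREEN), BLUE))  # (255, 255, 255) white
```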
    Intermediate shades are obtained because grains of different colors are located close to each other. Their images merge in the eye, and the colors form a mixed shade. If grains of one color are lit differently from the others, the mixed color will not be a shade of gray but will acquire a hue. This method of color formation is reminiscent of lighting a white screen in complete darkness with multi-colored spotlights.

    If we encode the color of one image point with three bits, each indicating the presence (1) or absence (0) of the corresponding RGB component (1 bit per component), we get eight different colors. In practice, 3 bytes (24 bits) are usually allocated to store the color of each point of a color image in the RGB model: 1 byte (8 bits) for each component. Each RGB component can therefore take a value from 0 to 255 (2 to the 8th power = 256 values), and by mixing the components in different proportions you can obtain 256 x 256 x 256 = 16,777,216 colors. RGB coordinates varying from 0 to 255 form a color cube; any color lies inside this cube and is described by its own set of coordinates, showing in what proportions the red, green and blue components are mixed in it. The ability to display at least 16.7 million shades defines the full-color image type, sometimes called True Color, because the human eye is unable to distinguish greater variety. The maximum brightness of all three components corresponds to white, the minimum to black. White therefore has the code (255, 255, 255) in decimal, or FFFFFF in hexadecimal; black is encoded as (0, 0, 0), or 000000. All shades of gray are formed by mixing three components of equal brightness. For example, (200, 200, 200), or C8C8C8, produces light gray, while (100, 100, 100), or 646464, produces dark gray. The darker the desired shade of gray, the lower the number you need to enter in each field. Black results when the intensity of all three components is zero, white when their intensity is maximum.
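    The encodings described above are easy to verify in code. This illustrative Python sketch (pack_rgb is our own helper) packs three 8-bit channels into one 24-bit value, lists the eight colors of the 1-bit-per-channel palette, and reproduces the gray shades C8C8C8 and 646464 from the text:

```python
def pack_rgb(r, g, b):
    """Pack three 8-bit channels into one 24-bit integer (hex RRGGBB)."""
    return (r << 16) | (g << 8) | b

# The eight colors of a 3-bit palette: 1 bit per RGB component.
for bits in range(8):
    r = 255 * ((bits >> 2) & 1)
    g = 255 * ((bits >> 1) & 1)
    b = 255 * (bits & 1)
    print(f"{bits:03b} -> #{pack_rgb(r, g, b):06X}")

# Gray shades: equal values in all three channels.
print(f"#{pack_rgb(200, 200, 200):06X}")  # C8C8C8, light gray
print(f"#{pack_rgb(100, 100, 100):06X}")  # 646464, dark gray
```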

    Limitations
    The RGB color model has three fundamental disadvantages. The first is an insufficient color gamut. Whatever the size of the RGB color space, it is impossible to reproduce many colors perceived by the eye (for example, spectrally pure blue and orange). In the RGB color formula such colors have negative intensities of a base color, and in a technical implementation of the additive model it is very difficult to realize subtraction of base colors rather than addition. This shortcoming is eliminated in the perceptual additive model.
    The second disadvantage of the RGB color model is the inability to reproduce colors consistently across devices (hardware dependence), because the base colors of this model depend on the technical parameters of the image output device. Strictly speaking, there is no single RGB color space: the range of reproducible colors is different for each output device, and even comparing these spaces numerically is possible only through other color models. The third drawback is the correlation of the color channels: the channels are not independent, so increasing the brightness of one changes the appearance of the others.

    Advantages
    Much computer equipment works with the RGB model; in addition, the model is very simple. Its "genetic" kinship with the hardware (scanner and monitor) and its wide color gamut (the ability to display a variety of colors close to the capabilities of human vision) explain its wide distribution.
    The main advantages of the RGB color model are its simplicity, clarity, and the fact that any point in its color space corresponds to a visually perceived color.
    Due to the simplicity of this model, it is easy to implement in hardware. In particular, in monitors, microscopic particles of three types of phosphor serve as controlled light sources with different spectral distributions. They are clearly visible through a magnifying glass, but when the monitor is viewed with the naked eye, the phenomenon of visual fusion makes the image appear continuous.
    In monitors based on cathode-ray tubes, the intensity of the light radiation is controlled by three electron guns that excite the glow of the phosphors. Further advantages are the availability of many image-processing procedures (filters) in raster graphics programs and the small volume (compared to the CMYK model) that an image occupies in the computer's RAM and on disk.

    Application
    The RGB color model is widely used in computer graphics for the reason that the main output device (monitor) works in this system. The image on the monitor is formed from individual luminous dots of red, green and blue colors. By looking at the screen of a working monitor through a magnifying glass, you can see individual colored dots - and it’s even easier to see this on a TV screen, since its dots are much larger.
    Widely used in the development of electronic (multimedia) and printed publications.
    Illustrations made using raster graphics are rarely created manually using computer programs. Most often, scanned illustrations prepared by the artist on paper or photographs are used for this purpose.
    Recently, digital photo and video cameras have been widely used for inputting raster images into a computer. Accordingly, most graphic editors designed for working with raster illustrations are focused not so much on creating images as on processing them. On the Internet, raster illustrations are used where the full range of shades of a color image must be conveyed.


    This is one of the most common and frequently used models. It is used in devices that emit light, such as monitors, spotlights, filters and other similar devices.

    In the RGB model, derived colors are obtained by adding or mixing base, primary colors, called color coordinates. The coordinates are red (Red), green (Green) and blue (Blue). The RGB model got its name from the first letters of the English names of color coordinates.

    Each of the above components can vary from 0 to 255, forming different colors and thus providing access to all 16 million shades (the total number of colors representable in this model is 256 * 256 * 256 = 16,777,216).

    This model is additive. The word additive (from "addition") emphasizes that a color is obtained by adding points of three basic colors, each with its own brightness. The brightness of each base color can take values from 0 to 255 (256 values), so the model can encode 256^3, or about 16.7 million, colors. These triads of base points (luminous points) are located very close to each other, so each triad merges for us into a larger point of a certain color. The brighter a colored dot (red, green, blue), the more of that color is added to the resulting (merged) point.

    When working in the Adobe Photoshop graphics editor, we can choose a color relying not only on what we see but, if necessary, by specifying a numerical value, thereby controlling the work process, which is especially useful during color correction.

    This color model is considered additive; that is, increasing the brightness of the individual components increases the brightness of the resulting color. If you mix all three colors at maximum intensity, the result is white; conversely, in the absence of all colors the result is black.

    Table 1

    The values of some colors in the RGB model

    Color       R     G     B
    Black       0     0     0
    Red         255   0     0
    Green       0     255   0
    Blue        0     0     255
    Yellow      255   255   0
    Cyan        0     255   255
    Magenta     255   0     255
    White       255   255   255

    The model is hardware-dependent, since the values ​​of the basic colors (as well as the white point) are determined by the quality of the phosphor used in the monitor. As a result, the same image looks different on different monitors.

    The properties of the RGB model are well described by the so-called color cube (see Fig. 3). This is a fragment of three-dimensional space, the coordinates of which are red, green and blue. Each point inside the cube corresponds to a certain color and is described by three projections - color coordinates: the content of red, green and blue. Adding all the primary colors of maximum brightness gives the color white; the starting point of the cube means zero contributions of the primary colors and corresponds to the color black.

    If the color coordinates are mixed in equal proportions, the result is gray of varying lightness. Points corresponding to gray lie on the diagonal of the cube. Mixing red and green produces yellow, red and blue produce magenta, and green and blue produce cyan.

    Fig. 3.

    Color coordinates: red, green, and blue are sometimes called primary or additive colors. The colors cyan, magenta, and yellow, which are obtained as a result of pairwise mixing of primary colors, are called secondary. Since addition is the basic operation of color synthesis, the RGB model is sometimes called additive (from the Latin additivus, which means added).

    The principle of adding colors is often depicted as a flat pie chart (see Fig. 4) which, although it provides no new information about the model compared to the spatial image, is easier to perceive and to remember.

    Fig. 4.

    Many technical devices work on the principle of color addition: monitors, televisions, scanners, overhead projectors, digital cameras etc. If you look through a magnifying glass at the monitor screen, you can see a regular grid, at the nodes of which there are red, green and blue phosphor grain dots. When excited by a beam of electrons, they emit basic colors of varying intensities. The addition of radiation from closely spaced grains is perceived by the human eye as color at a given point on the screen.

    In computer technology, the intensity of primary colors is usually measured by integers in the range from 0 to 255. Zero means the absence of a given color component, the number 255 means its maximum intensity. Since primary colors can be mixed without restriction, it is easy to calculate the total number of colors that an additive model produces. It is equal to 256 * 256 * 256 = 16,777,216, or more than 16.7 million colors. This number seems huge, but in reality the model produces only a small part of the color spectrum.

    Any natural color can be decomposed into its red, green and blue components and their intensities measured, but the reverse transition is not always possible. It has been shown experimentally and theoretically that the range of colors in the RGB model is narrower than the visible spectrum. To obtain the part of the spectrum lying between blue and green, emitters with a negative red intensity would be required, which of course do not exist in nature. The range of colors a model or device can reproduce is called its color gamut. One of the serious disadvantages of the additive model, paradoxical as it may sound, is its narrow color gamut.

    It might seem that a given set of color coordinates uniquely defines a color on any device that works on the principle of adding base colors. In reality, things are much more complicated: the color produced by a device depends on a variety of external factors that are often impossible to account for.

    Display screens are coated with phosphors that differ in chemical and spectral composition. Monitors of the same brand have different wear and lighting conditions. Even one monitor produces different colors when warmed up and immediately after turning on. By calibrating devices and using color management systems, you can try to approximate the color gamuts of different devices. This is discussed in more detail in the next chapter.

    It is impossible not to mention another drawback of this color model. From the point of view of a practicing designer or computer artist, it is non-intuitive: working in it, it can be difficult to answer the simplest questions related to color synthesis. For example, how should the color coordinates be changed to make the current color a little brighter or less saturated? Answering even this simple question correctly requires considerable experience with this color system.