Understanding Photoshop color modes. Image Basics

    HEX/HTML

    A HEX color is simply a hexadecimal representation of an RGB value.

    Colors are written as three groups of hexadecimal digits, where each group is responsible for its own channel: #112233, where 11 is red, 22 is green, and 33 is blue. Each value must lie between 00 and FF.

    Many applications allow a shortened form of hexadecimal notation. If each of the three groups consists of a doubled digit, as in #112233, the color can be written as #123.

    h1 { color: #ff0000; } /* red */
    h2 { color: #00ff00; } /* green */
    h3 { color: #0000ff; } /* blue */
    h4 { color: #00f; } /* same blue, shorthand */
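
    To make the mapping concrete, here is a minimal Python sketch (the helper names are my own) that converts between hex notation and RGB triples, including the shorthand form:

    def hex_to_rgb(hex_color):
        """Parse '#RRGGBB' or shorthand '#RGB' into an (r, g, b) tuple."""
        s = hex_color.lstrip('#')
        if len(s) == 3:                      # shorthand: each digit is doubled
            s = ''.join(ch * 2 for ch in s)  # '00f' -> '0000ff'
        return tuple(int(s[i:i + 2], 16) for i in (0, 2, 4))

    def rgb_to_hex(r, g, b):
        """Format an (r, g, b) tuple as '#rrggbb'."""
        return '#{:02x}{:02x}{:02x}'.format(r, g, b)

    print(hex_to_rgb('#112233'))  # (17, 34, 51)
    print(hex_to_rgb('#00f'))     # (0, 0, 255) -- same blue as '#0000ff'
    print(rgb_to_hex(255, 0, 0))  # '#ff0000'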

    RGB

    The RGB (Red, Green, Blue) color space consists of all possible colors that can be created by mixing red, green, and blue. This model is popular in photography, television, and computer graphics.

    RGB values are specified as integers from 0 to 255. For example, rgb(0,0,255) is displayed as blue, because the blue parameter is set to its highest value (255) and the other two are set to 0.

    Some applications (particularly web browsers) also support percentage notation for RGB values (from 0% to 100%).

    h1 { color: rgb(255, 0, 0); } /* red */
    h2 { color: rgb(0, 255, 0); } /* green */
    h3 { color: rgb(0, 0, 255); } /* blue */
    h4 { color: rgb(0%, 0%, 100%); } /* same blue, percentage notation */

    RGB color values are supported in all major browsers.

    RGBA

    Modern browsers have also learned to work with the RGBA color model: an extension of RGB with support for an alpha channel, which determines the opacity of an object.

    The RGBA color value is specified as: rgba(red, green, blue, alpha). The alpha parameter is a number ranging from 0.0 (fully transparent) to 1.0 (fully opaque).

    h1 { color: rgb(0, 0, 255); } /* blue in regular RGB */
    h2 { color: rgba(0, 0, 255, 1); } /* the same blue in RGBA, since opacity is 100% */
    h3 { color: rgba(0, 0, 255, 0.5); } /* opacity: 50% */
    h4 { color: rgba(0, 0, 255, .155); } /* opacity: 15.5% */
    h5 { color: rgba(0, 0, 255, 0); } /* completely transparent */

    RGBA is supported in IE9+, Firefox 3+, Chrome, Safari, and Opera 10+.
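
    The alpha value feeds directly into compositing arithmetic. As a rough sketch (standard source-over blending over an opaque background, not any particular browser's exact pipeline):

    def blend_over(fg_rgb, alpha, bg_rgb):
        """Source-over blend: foreground with alpha composited on an opaque background."""
        return tuple(round(alpha * f + (1 - alpha) * b)
                     for f, b in zip(fg_rgb, bg_rgb))

    # rgba(0, 0, 255, 0.5) drawn over a white page:
    print(blend_over((0, 0, 255), 0.5, (255, 255, 255)))  # (128, 128, 255)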

    HSL

    The HSL color model is a representation of the RGB model in a cylindrical coordinate system. HSL represents colors in a more intuitive and human-readable way than typical RGB. The model is often used in graphics applications, in color palettes, and for image analysis.

    HSL stands for Hue, Saturation, Lightness (also called luminance; not to be confused with brightness).

    Hue specifies the position of the color on the color wheel (from 0 to 360). Saturation is given as a percentage (from 0% to 100%), as is lightness (from 0% to 100%).

    h1 { color: hsl(120, 100%, 50%); } /* green */
    h2 { color: hsl(120, 100%, 75%); } /* light green */
    h3 { color: hsl(120, 100%, 25%); } /* dark green */
    h4 { color: hsl(120, 60%, 70%); } /* pastel green */

    HSL is supported in IE9+, Firefox, Chrome, Safari, and Opera 10+.
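
    For illustration, here is a compact HSL-to-RGB conversion using Python's standard colorsys module (note that colorsys takes hue, lightness, saturation in that order, all in the 0..1 range):

    import colorsys

    def hsl_to_rgb255(h_deg, s_pct, l_pct):
        """Convert CSS-style hsl(h, s%, l%) to an (r, g, b) tuple of 0..255 ints."""
        r, g, b = colorsys.hls_to_rgb(h_deg / 360, l_pct / 100, s_pct / 100)
        return tuple(round(c * 255) for c in (r, g, b))

    print(hsl_to_rgb255(120, 100, 50))  # (0, 255, 0) -- green
    print(hsl_to_rgb255(120, 100, 25))  # (0, 128, 0) -- dark green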

    HSLA

    Similar to RGB/RGBA, HSL has an HSLA variant that adds an alpha channel to indicate the opacity of an object.

    The HSLA color value is specified as: hsla(hue, saturation, lightness, alpha). The alpha parameter is a number ranging from 0.0 (fully transparent) to 1.0 (fully opaque).

    h1 { color: hsl(120, 100%, 50%); } /* green in regular HSL */
    h2 { color: hsla(120, 100%, 50%, 1); } /* the same green in HSLA, since opacity is 100% */
    h3 { color: hsla(120, 100%, 50%, 0.5); } /* opacity: 50% */
    h4 { color: hsla(120, 100%, 50%, .155); } /* opacity: 15.5% */
    h5 { color: hsla(120, 100%, 50%, 0); } /* completely transparent */

    CMYK

    The CMYK color model is usually associated with color printing and the printing industry. Unlike RGB, CMYK is a subtractive model, which means that higher values correspond to darker colors.

    Colors are defined by the proportions of cyan, magenta, and yellow, with the addition of black (the Key color).

    Each of the numbers that define a color in CMYK represents the percentage of ink of that color in the color combination or, more precisely, the size of the halftone dot output for that ink on the imagesetter film (or directly on the printing plate in the case of CTP).

    For example, to obtain the PANTONE 7526 color, you would mix 9 parts cyan, 83 parts magenta, 100 parts yellow, and 46 parts black. This can be denoted as follows: (9,83,100,46). Sometimes the following designations are used: C9M83Y100K46, or (9%, 83%, 100%, 46%), or (0.09/0.83/1.0/0.46).
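
    As a rough illustration of the arithmetic (a naive formula; real prepress relies on ICC profiles rather than this), RGB can be converted to CMYK percentages like this:

    def rgb_to_cmyk(r, g, b):
        """Naive RGB (0..255) -> CMYK (0..100) conversion with black generation."""
        if (r, g, b) == (0, 0, 0):
            return (0, 0, 0, 100)
        c, m, y = (1 - r / 255, 1 - g / 255, 1 - b / 255)
        k = min(c, m, y)                       # black ink replaces the common gray
        c, m, y = ((x - k) / (1 - k) for x in (c, m, y))
        return tuple(round(v * 100) for v in (c, m, y, k))

    print(rgb_to_cmyk(255, 0, 0))      # (0, 100, 100, 0) -- red
    print(rgb_to_cmyk(128, 128, 128))  # (0, 0, 0, 50)    -- mid gray, all in the K channel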

    HSB/HSV

    HSB (also known as HSV) is similar to HSL, but they are two different color models. Both are based on cylindrical geometry, but HSB/HSV derives from the "hexcone" model, while HSL derives from the "bi-hexcone" model. Artists often prefer this model, since HSB/HSV is generally considered closer to the natural perception of color. In particular, the HSB color model is used in Adobe Photoshop.

    HSB/HSV stands for Hue, Saturation, Brightness (also called Value).

    Hue specifies the position of the color on the color wheel (from 0 to 360). Saturation is given as a percentage (from 0% to 100%), as is brightness (from 0% to 100%).
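
    Again using Python's standard colorsys module, a small sketch of how an RGB color maps to HSB/HSV components:

    import colorsys

    def rgb_to_hsb(r, g, b):
        """Convert RGB (0..255) to (hue in degrees, saturation %, brightness %)."""
        h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
        return (round(h * 360), round(s * 100), round(v * 100))

    print(rgb_to_hsb(255, 0, 0))    # (0, 100, 100) -- pure red
    print(rgb_to_hsb(128, 128, 0))  # (60, 100, 50) -- dark yellow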

    XYZ

    The XYZ color model (CIE 1931 XYZ) is a purely mathematical space. Unlike RGB, CMYK, and other models, in XYZ the principal components are "imaginary," meaning you cannot associate X, Y, and Z with any physical set of colors to mix. XYZ is the master model for almost all other color models used in technical fields.

    LAB

    The LAB color model (CIELAB, "CIE 1976 L*a*b*") is computed from the CIE XYZ space. Lab's design goal was to create a color space in which color changes are more linear in terms of human perception (compared to XYZ), i.e., the same change in color coordinate values in different regions of the space should produce the same sensation of color change.

    The RGB model describes emitted colors. It is based on three primary (basic) colors: red, green, and blue. The RGB model can be called "native" for displays. The remaining colors are obtained by combining the primaries. Color models of this type are called additive.

    The figure shows that combining green and red produces yellow, combining green and blue produces cyan, and combining all three colors produces white. From this we can conclude that colors in RGB are combined additively.

    The primary colors come from human biology: they are based on the physiological response of the human eye to light. The eye has photoreceptor cells that respond most strongly to green (M), yellow-green (L), and blue-violet (S) light (peak wavelengths of 534 nm, 564 nm, and 420 nm respectively). The brain can easily distinguish a wide range of colors from the differences in the signals coming from these three receptor types.

    The RGB color model is most widely used in LCD and plasma displays, such as TVs and computer monitors. Each pixel on a display is represented in the hardware interface (for example, in the graphics card) as red, green, and blue intensity values. Cameras and scanners work the same way: they capture color with sensors that record the RGB intensities at each pixel.

    In 16-bits-per-pixel mode, also known as Highcolor, there are either 5 bits per channel (often referred to as 555 mode) or an extra bit for green (known as 565 mode). Green gets the extra bit because the human eye can distinguish more shades of green than of any other color.

    RGB values in 24-bits-per-pixel (bpp) mode, also known as Truecolor, are typically given as three integers between 0 and 255, representing the intensities of red, green, and blue respectively.

    RGB has three channels: red, green, and blue, i.e., RGB is a three-channel color model. Each channel can take values from 0 to 255 in decimal or, equivalently, from 00 to FF in hexadecimal. This is because each channel is encoded by one byte, and a byte consists of eight bits, each of which can take 2 values, 0 or 1, for a total of 2^8 = 256. In RGB, for example, red can have 256 gradations: from pure red (FF) to black (00). It is thus easy to calculate that the RGB model contains 256^3 = 16,777,216 colors.

    RGB has three channels, and each is encoded with 8 bits. The maximum value, FF (or 255), gives a pure color. White is obtained by combining the extreme gradations of all three colors: the code for white is FF (red) + FF (green) + FF (blue) = FFFFFF. Accordingly, black is 000000, yellow is FFFF00, magenta is FF00FF, and cyan is 00FFFF.
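
    The channel codes combine mechanically; a small sketch of packing three 8-bit channels into one 24-bit value and back:

    def pack_rgb(r, g, b):
        """Pack three 8-bit channels into a single 24-bit 0xRRGGBB integer."""
        return (r << 16) | (g << 8) | b

    def unpack_rgb(value):
        """Split a 24-bit 0xRRGGBB integer back into its channels."""
        return ((value >> 16) & 0xFF, (value >> 8) & 0xFF, value & 0xFF)

    print(hex(pack_rgb(0xFF, 0xFF, 0xFF)))  # 0xffffff -- white
    print(unpack_rgb(0xFFFF00))             # (255, 255, 0) -- yellow
    print(256 ** 3)                         # 16777216 possible colors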

    There are also 32-bit and 48-bit color display modes.

    RGB is not used for printing on paper; for print there is the CMYK color space.

    CMYK is a color model used in color printing. A color model is a mathematical model that describes colors with sets of numbers. The CMYK model is based on cyan, magenta, yellow, and black.


    I'm going to take a tour through the history of the science of human color perception that led to the creation of modern video standards. I will also try to explain the commonly used terminology, and briefly discuss why the typical game production process will, over time, come to resemble the process used in the film industry more and more.

    Pioneers of color perception research

    Today we know that the retina of the human eye contains three different types of photoreceptor cells called cones. Each of the three types of cones contains a protein from the opsin family that absorbs light in a different part of the spectrum:

    Light absorption by opsins

    Cones correspond to the red, green and blue portions of the spectrum and are often called long (L), medium (M) and short (S) according to the wavelengths to which they are most sensitive.

    One of the first scientific works on the interaction of light and the retina was the treatise "Hypothesis Concerning Light and Colors" by Isaac Newton, written between 1670 and 1675. Newton theorized that light of different wavelengths caused the retina to resonate at matching frequencies; these vibrations were then transmitted through the optic nerve to the "sensorium."


    "Rays of light falling on the bottom of the eye excite vibrations in the retina, which propagate along the fibers of the optic nerves to the brain, creating the sense of vision. Different types of rays create vibrations of different strengths, which, according to their strength, excite sensations of different colors..."

    More than a hundred years later, Thomas Young came to the conclusion that, since resonance frequency is a system-dependent property, there would have to be an infinite number of different resonance systems in the retina to absorb light of all frequencies. Young considered this unlikely and reasoned that the number was limited to one system each for red, yellow, and blue, the colors traditionally used in subtractive paint mixing. In his own words:

    Since, for reasons given by Newton, it is possible that the movement of the retina is of an oscillatory rather than a wave nature, the frequency of the oscillations must depend on the structure of its substance. Since it is almost impossible to believe that each sensitive point of the retina contains an infinite number of particles, each of which is capable of vibrating in perfect harmony with any possible wave, it becomes necessary to assume that the number is limited, for example, to the three primary colors: red, yellow and blue...
    Young's assumption about the retina was wrong, but he concluded correctly: there are a finite number of cell types in the eye.

    In 1850, Hermann Helmholtz was the first to obtain experimental proof of Young's theory. Helmholtz asked subjects to match the colors of various sample light sources by adjusting the brightness of several monochromatic light sources. He concluded that three light sources are necessary and sufficient to match all samples: one each in the red, green, and blue parts of the spectrum.

    The Birth of Modern Colorimetry

    Fast forward to the early 1930s. By then, the scientific community had a fairly good understanding of the inner workings of the eye. (Although it took another 20 years for George Wald to experimentally confirm the presence and function of rhodopsins in retinal cones, a discovery that brought him the Nobel Prize in Medicine in 1967.) The Commission Internationale de l'Éclairage (International Commission on Illumination, CIE) set out to create a comprehensive quantitative description of human color perception. The quantification was based on experimental data collected by William David Wright and John Guild under conditions similar to those first chosen by Hermann Helmholtz. The primary wavelengths were chosen to be 435.8 nm for blue, 546.1 nm for green, and 700 nm for red.


    John Guild's experimental setup, three knobs adjusting primary colors

    Due to the significant overlap of the M and L cone sensitivities, it proved impossible to match some wavelengths in the blue-green portion of the spectrum. To "match" these colors, a small amount of the red primary had to be added to the reference color:

    C + r·R = g·G + b·B

    If we imagine for a moment that primary colors can contribute negatively, the equation can be rewritten as:

    C = −r·R + g·G + b·B

    The result of the experiments was a table of RGB triads for each wavelength, which was displayed on the graph as follows:


    CIE 1931 RGB color matching functions

    Of course, colors with a negative red component cannot be displayed using the CIE primaries.

    We can now find the trichromatic coefficients for a light source with spectral intensity distribution S as the following inner products with the color matching functions:

    R = ∫ S(λ) r̄(λ) dλ
    G = ∫ S(λ) ḡ(λ) dλ
    B = ∫ S(λ) b̄(λ) dλ

    It may seem obvious that sensitivity to different wavelengths can be integrated in this way, but in fact it rests on the eye's response to light being linear. This was confirmed empirically in 1853 by Hermann Grassmann, and the integrals above, in their modern formulation, are known to us as Grassmann's law.

    The term "color space" arose because the primary colors (red, green, and blue) can be treated as the basis of a vector space. In this space, the different colors perceived by a person are represented by rays emanating from the origin. The modern definition of a vector space was introduced in 1888 by Giuseppe Peano, but more than 30 years earlier James Clerk Maxwell was already using the nascent theories of what would later become linear algebra to formally describe the trichromatic color system.

    The CIE decided that, to simplify calculations, it would be more convenient to work with a color space in which the coefficients of the primary colors are always positive. The three new primary colors were expressed in RGB color-space coordinates as follows:

    This new set of primary colors cannot be realized in the physical world; it is simply a mathematical tool that makes working with the color space easier. In addition to ensuring that the coefficients of the primaries are always positive, the new space is arranged so that the Y coefficient corresponds to perceived brightness. This component is known as CIE luminance (you can read more about it in Charles Poynton's excellent Color FAQ).

    To make the resulting color space easier to visualize, we will perform one last transformation. Dividing each component by the sum of the components gives a dimensionless description of color that does not depend on brightness:

    x = X / (X + Y + Z)
    y = Y / (X + Y + Z)

    The x and y coordinates are known as chromaticity coordinates, and together with the CIE luminance Y they make up the CIE xyY color space. If we plot the chromaticity coordinates of all colors with a given brightness on a graph, we get the following diagram, which is probably familiar to you:
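
    The projection down to chromaticity coordinates is simple arithmetic; a minimal sketch:

    def xyz_to_xyY(X, Y, Z):
        """Project CIE XYZ onto chromaticity (x, y), keeping the luminance Y."""
        total = X + Y + Z
        return (X / total, Y / total, Y)

    # The D65 white point in XYZ (normalized so that Y = 1):
    x, y, Y = xyz_to_xyY(0.95047, 1.0, 1.08883)
    print(round(x, 4), round(y, 4))  # 0.3127 0.329 -- the familiar D65 chromaticity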


    CIE 1931 xyY diagram

    The last thing to establish is what counts as white in the color space. In such a display system, white is the chromaticity (x, y) obtained when the coefficients of all three RGB primaries are equal.

    Over the years, several new color spaces have emerged that improve upon the CIE 1931 spaces in various ways. Despite this, the CIE xyY system remains the most popular color space for describing the properties of display devices.

    Transfer functions

    Before looking at video standards, two more concepts need to be introduced and explained.

    Optoelectronic transfer function

    The opto-electronic transfer function (OETF) determines how the linear light captured by a device (a camera) is encoded in the signal, i.e., it is a function of the form:

    V = OETF(L)

    The signal V used to be analog, but now, of course, it is digitally encoded. Game developers rarely encounter the OETF. One example where it does matter is when a game needs to combine recorded video with computer graphics: in that case you must know which OETF the video was recorded with in order to recover the linear light and mix it correctly with the rendered image.

    Electro-optical transfer function

    The electro-optical transfer function (EOTF) performs the opposite task of the OETF, i.e., it determines how the signal is converted back into linear light:

    L = EOTF(V)

    This function matters more to game developers, because it determines how the content they create will be displayed on users' TV screens and monitors.
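
    As a concrete, familiar example, here is the sRGB decoding function (an EOTF) and its inverse sketched in Python, with the constants from the sRGB specification:

    def srgb_eotf(v):
        """Convert an sRGB-encoded signal value (0..1) to linear light (0..1)."""
        if v <= 0.04045:
            return v / 12.92                 # linear segment near black
        return ((v + 0.055) / 1.055) ** 2.4  # power-law segment

    def srgb_inverse_eotf(l):
        """Encode linear light (0..1) back into an sRGB signal value."""
        if l <= 0.0031308:
            return l * 12.92
        return 1.055 * l ** (1 / 2.4) - 0.055

    print(round(srgb_eotf(0.5), 3))  # 0.214 -- half signal is only ~21% linear light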

    Relationship between EOTF and OETF

    The concepts of EOTF and OETF, although interrelated, serve different purposes. The OETF gives us a representation of the captured scene from which we can later reconstruct the original linear lighting (this representation is conceptually the HDR (High Dynamic Range) framebuffer of an ordinary game). Here is what happens during the production stages of a typical film:
    • Capture scene data
    • Invert the OETF to restore linear lighting values
    • Color grading
    • Mastering for various target formats (DCI-P3, Rec. 709, HDR10, Dolby Vision, etc.):
      • Reducing the dynamic range of a material to match the dynamic range of the target format (tone mapping)
      • Convert to target format color space
      • Apply the inverse EOTF for the material (so that when the display device applies its EOTF, the intended image is reproduced).
    A detailed discussion of this technical process is beyond the scope of this article, but I recommend studying the detailed, formalized description of the ACES (Academy Color Encoding System) workflow.

    Until now, the standard technical process of a game has looked like this:

    • Rendering
    • HDR Frame Buffer
    • Tone mapping
    • Apply the inverse EOTF for the intended display device (usually sRGB)
    • Color grading
    Most game engines use the color grading technique popularized by Naty Hoffman's SIGGRAPH 2010 presentation "Color Enhancement for Videogames." The technique was practical when the only targets were SDR (Standard Dynamic Range), and it allowed artists to use color grading software already installed on their computers, such as Adobe Photoshop; a sketch of how it works follows.
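
    The core of the technique is simple: a screenshot is saved with an identity lookup table (LUT) strip embedded, the artist grades the screenshot in Photoshop, and the now-graded strip is read back and used to remap colors at runtime. A minimal sketch of the runtime half, using a per-channel 1D LUT for brevity (real engines typically use a small 3D LUT with trilinear filtering):

    # Identity LUT: this strip is embedded in the screenshot that the artist grades.
    identity_lut = list(range(256))

    # After grading, the strip is read back; here we fake a simple brightness lift.
    graded_lut = [min(255, round(i * 1.1)) for i in identity_lut]

    def apply_lut(rgb, lut):
        """Remap each 8-bit channel of a pixel through the graded lookup table."""
        return tuple(lut[c] for c in rgb)

    print(apply_lut((100, 150, 200), graded_lut))  # (110, 165, 220)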


    Standard SDR color grading workflow (image credit: Jonathan Blow)

    After the introduction of HDR, most games began moving toward a process similar to the one used in film production. Even in the absence of HDR, a film-like process allows performance to be optimized. Doing color grading in HDR means the whole dynamic range of the scene is available, and some effects that were previously unattainable become possible.

    Now we are ready to look at the various standards currently used to describe television formats.

    Video standards

    Rec. 709

    Most standards related to video broadcasting are issued by the International Telecommunication Union (ITU), a UN body primarily concerned with information technology.

    ITU-R Recommendation BT.709, more commonly referred to as Rec. 709, is the standard that describes the properties of HDTV. The first version of the standard was released in 1990, the latest in June 2015. The standard describes parameters such as aspect ratio, resolution, and frame rate. Most people are familiar with those, so I will skip them and focus on the color and brightness sections of the standard.

    The standard describes chromaticity in terms of the CIE xyY color space. The red, green, and blue primaries of a conforming display must be chosen so that their individual chromaticity coordinates are:

    Red: x = 0.640, y = 0.330
    Green: x = 0.300, y = 0.600
    Blue: x = 0.150, y = 0.060

    Their relative intensities must be adjusted so that the white point has chromaticity x = 0.3127, y = 0.3290.

    (This white point is also known as CIE Standard Illuminant D65; it approximates the chromaticity of the spectral intensity distribution of ordinary daylight.)

    Color properties can be visually represented as follows:


    Rec. 709 gamut

    The area of the chromaticity diagram bounded by the triangle formed by a display system's primary colors is called its gamut, or coverage.

    Now we move on to the luminance portion of the standard, and this is where things get a little more complicated. The standard states that the "overall opto-electronic transfer characteristic at source" is:

    V = 1.099 L^0.45 − 0.099  for 1 ≥ L ≥ 0.018
    V = 4.500 L  for 0.018 > L ≥ 0

    There are two problems here:

    1. There is no specification of what physical brightness corresponds to L = 1
    2. Although it is a video broadcasting standard, it does not specify an EOTF
    Historically, this was because the display device, i.e., the consumer TV set, was considered to be the EOTF. In practice, the captured luminance range was adjusted through the OETF above so that the image looked good on a reference monitor with the following EOTF:

    L = V^2.4

    where L = 1 corresponds to a luminance of approximately 100 cd/m² (the industry calls the unit cd/m² a "nit"). The ITU confirms this in the latest versions of the standard with the following comment:

    In standard production practice, the encoding function of the image sources is adjusted so that the final image has the desired appearance, as viewed on a reference monitor. The decoding function of Recommendation ITU-R BT.1886 is taken as the reference; the reference viewing environment is specified in Recommendation ITU-R BT.2035.
    Rec. 1886 is the result of an effort to document the characteristics of CRT monitors (the standard was published in 2011) and is thus a formalization of existing practice.


    A CRT elephant graveyard

    The nonlinear dependence of brightness on applied voltage is a consequence of how CRT monitors are physically constructed. By pure chance, this nonlinearity is (very) roughly the inverse of the nonlinearity of human brightness perception. When we moved to digital representation of signals, this had the fortunate effect of spreading the quantization error roughly evenly across the brightness range.

    Rec. 709 is designed for 8-bit or 10-bit encoding. Most content uses 8-bit encoding, for which the standard specifies that the signal's brightness range be distributed over code values 16-235.
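
    Putting the pieces together, here is a sketch of decoding an 8-bit Rec. 709 code value into relative linear light, assuming the simple L = V^2.4 reference EOTF described above and the 16-235 code range:

    def rec709_code_to_light(code, black=16, white=235, gamma=2.4):
        """Map an 8-bit limited-range code value to relative linear light (0..1)."""
        v = (code - black) / (white - black)  # normalize 16..235 to 0..1
        v = min(max(v, 0.0), 1.0)             # clip codes outside the nominal range
        return v ** gamma                     # reference-monitor EOTF: L = V^2.4

    print(rec709_code_to_light(16))   # 0.0 -- reference black
    print(rec709_code_to_light(235))  # 1.0 -- reference white (~100 cd/m^2)
    print(rec709_code_to_light(126))  # ~0.19 -- the midpoint code is much darker in linear light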

    HDR10

    When it comes to HDR video, there are two main contenders: Dolby Vision and HDR10. In this article I will focus on HDR10 because it is an open standard that became popular faster; it is the standard chosen for the Xbox One S and PS4.

    We'll start again by looking at the chromaticity portion of the color space used in HDR10, which is defined in ITU-R Recommendation BT.2020 (UHDTV). It uses the following chromaticity coordinates for the primary colors:

    Red: x = 0.708, y = 0.292
    Green: x = 0.170, y = 0.797
    Blue: x = 0.131, y = 0.046

    Once again, D65 is used as the white point. Visualized on an xy diagram, Rec. 2020 looks like this:


    Rec. 2020 gamut

    It is immediately apparent that the gamut of this color space is significantly larger than that of Rec. 709.

    Now we move on to the brightness section of the standard, and this is where things get interesting again. In his 1999 Ph.D. thesis, "Contrast sensitivity of the human eye and its effects on image quality," Peter Barten presented a slightly scary equation:

    (Many of the variables in this equation are themselves complex equations; for example, brightness is hidden inside the equations that calculate E and M).

    The equation describes how sensitive the eye is to contrast changes at different brightness levels, with various parameters describing the viewing conditions and certain properties of the observer. The just noticeable difference (JND) is the inverse of Barten's equation, so for a sampled EOTF to be free of visible banding under the given viewing conditions, the step between adjacent samples must stay below the JND:

    EOTF(i+1) − EOTF(i) < JND(EOTF(i))

    The Society of Motion Picture and Television Engineers (SMPTE) decided that Barten's equation would be a good basis for a new EOTF. The result was what we now call SMPTE ST 2084 or Perceptual Quantizer (PQ).

    PQ was created by choosing conservative values for the parameters of Barten's equation, i.e., those matching expected typical consumer viewing conditions. PQ was later defined as the sampling that, for a given luminance range and number of samples, most closely matches Barten's equation with the chosen parameters.
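
    For reference, here is the resulting transfer function in code, a minimal sketch using the constants published in SMPTE ST 2084:

    # SMPTE ST 2084 (PQ) constants
    M1 = 2610 / 16384        # ~0.1593
    M2 = 2523 / 4096 * 128   # 78.84375
    C1 = 3424 / 4096         # 0.8359375
    C2 = 2413 / 4096 * 32    # 18.8515625
    C3 = 2392 / 4096 * 32    # 18.6875

    def pq_eotf(v):
        """Map a PQ-encoded signal value (0..1) to absolute luminance in nits."""
        p = v ** (1 / M2)
        return 10000 * (max(p - C1, 0) / (C2 - C3 * p)) ** (1 / M1)

    print(round(pq_eotf(1.0)))     # 10000 -- full signal = 10,000 nits
    print(round(pq_eotf(0.5), 1))  # ~92.2 -- half the code range is spent on dark tones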

    The discretized EOTF values can be found using the following recurrence for some k < 1; the last sample value gives the required maximum brightness:

    L(i+1) = L(i) + k · JND(L(i))

    For a maximum brightness of 10,000 nits using 12-bit sampling (which is used in Dolby Vision), the result looks like this:


    PQ EOTF

    As you can see, the sampling stays below the Barten threshold over the entire brightness range.

    The HDR10 standard also uses the PQ EOTF, but with 10-bit sampling. That is not enough to stay below the Barten threshold over a 10,000-nit brightness range, but the standard allows metadata to be embedded in the signal to dynamically adjust peak brightness. Here is what 10-bit PQ sampling looks like for different brightness ranges:


    HDR10 EOTF for various brightness ranges

    Even so, the curve sits slightly above the Barten threshold. However, the situation is not as bad as the graph might suggest, because:

    1. The curve is logarithmic, so the relative error is actually not that great
    2. Do not forget that the parameters taken to create the Barten threshold were chosen conservatively.
    At the time of writing, HDR10 TVs on the market typically have a peak brightness of 1,000-1,500 nits, and 10 bits is sufficient for them. It's also worth noting that TV manufacturers can decide what to do with brightness levels above the range they can display: some clip hard, others roll off more gently.

    Here's what 8-bit Rec. 709 sampling with 100 nits of peak brightness looks like:


    Rec. 709 EOTF (codes 16-235)

    As you can see, we're well above the Barten threshold, and importantly, even the least discriminating buyers will tune their TVs well above 100 nits of peak brightness (usually 250-400 nits), which pushes the Rec. 709 curve even higher.

    In conclusion

    One of the biggest differences between Rec. 709 and HDR is that HDR brightness is specified in absolute values. In theory, this means content mastered for HDR will look the same on all compatible TVs, at least up to their peak brightness.

    There is a popular misconception that HDR content will be brighter overall, but this is generally not the case. HDR films are most often mastered so that the average image brightness is the same as for Rec. 709, but the brightest parts of the image are brighter and more detailed, which means the midtones and shadows end up darker. Combined with HDR's absolute brightness values, this means that optimal HDR viewing requires good viewing conditions: in bright light the pupil constricts, and details in dark areas of the image become harder to see.


    Many people probably wonder what sRGB means in camera settings, why it is needed, and which is better: sRGB or Adobe RGB?

    RGB is an abbreviation of the names of the primary colors (Red, Green, Blue). Why are they primary? Because humans, unlike some other species, have trichromatic vision: the eye contains receptors sensitive to these three colors. Our brain contributes enormously to the perception of color, so displaying color correctly is a non-trivial task that requires significant tricks.

    Color space is the set of colors that we can observe or display. There are many ways to display color spaces graphically, but clever mathematicians have come up with one very elegant way that you see all the time on the Internet.

    The concept of color can be thought of as having two components: brightness and tonality. That is, gray differs from white only in brightness; their tonality is the same. Experiments at the beginning of the 20th century made it possible to determine the range of colors perceived by humans. Using mathematical transformations, the entire set of tonalities was mapped onto a plane, and the resulting diagram was named CIE 1931 (1931 being the year it was introduced). It thus became possible to describe a color by x, y coordinates on the graph, plus a brightness.

    The colors shown in the diagram are for illustrative purposes; these are not the colors you see in everyday life.

    Capturing color has never been a problem: the gamut any digital camera sensor sees is much wider than what a person can see. This is partly why infrared and ultraviolet filters are used inside the camera, to simplify subsequent signal processing.

    Displaying color, however, was a problem, especially on monitor screens. Display capabilities are severely limited for physical reasons, and reproducing the full set of colors the human brain distinguishes was practically impossible. There were many attempts to build a color display that shows most shades, but a workable compromise between color reproduction and device price was reached in the 1950s with CRT displays.

    To rein in the variety of color displays and make professional image processing on computers more predictable, the sRGB standard was developed in the 1990s. It grew out of an analysis of the capabilities of the CRT monitors most common at the time. Back then no one even dreamed of LCD displays; in both characteristics and price, LCDs lagged far behind CRTs and could not serve as the basis for a standard.

    The operating principle of CRT screens is simple: various shades are obtained by mixing the three primary colors (red, green, blue). There are two problems:

    1. the number of available shades depends on the purity of the primary colors, and pure colors are very difficult to achieve;
    2. you cannot get all visible colors just by mixing three primary colors.

    The sRGB standard describes exactly how pure the primary colors should be and which shades can be achieved by mixing them. It also defines where the white point lies. On the CIE diagram, the sRGB standard looks like a triangle with the coordinates of the primary colors at its vertices:
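
    The triangle can be treated as plain geometry. Here is a sketch that uses the published chromaticity coordinates of the sRGB primaries and a standard point-in-triangle test to check whether a chromaticity is inside the gamut:

    # Chromaticity coordinates (x, y) of the sRGB primaries and the D65 white point.
    RED, GREEN, BLUE = (0.64, 0.33), (0.30, 0.60), (0.15, 0.06)
    D65 = (0.3127, 0.3290)

    def sign(p, a, b):
        """Signed area: tells which side of the edge a->b the point p lies on."""
        return (p[0] - b[0]) * (a[1] - b[1]) - (a[0] - b[0]) * (p[1] - b[1])

    def in_srgb_gamut(p):
        """True if chromaticity point p lies inside the sRGB triangle."""
        d1, d2, d3 = sign(p, RED, GREEN), sign(p, GREEN, BLUE), sign(p, BLUE, RED)
        has_neg = d1 < 0 or d2 < 0 or d3 < 0
        has_pos = d1 > 0 or d2 > 0 or d3 > 0
        return not (has_neg and has_pos)

    print(in_srgb_gamut(D65))           # True  -- white sits inside the gamut
    print(in_srgb_gamut((0.15, 0.80)))  # False -- a green this saturated can't be shown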

    It is easy to see how modest the capabilities of technology are compared to what nature has endowed us with.

    Even if you obtain primary colors of exceptional purity, as laser displays do, you still will not get the full gamut of colors we see in the world around us. Everything such a display can show is limited by its triangle:

    In printing, by the way, there are no such strict limits on the number of primary-color sources, so for quite reasonable money good photo printers use, for example, 8-color printing. This expands the color gamut at a modest cost, and on the diagram the gamut looks like a polygon. Here is the gamut of a not-so-fancy printer compared with sRGB:

    But printers have a lot of other problems, in particular, the dependence of color rendering on paper quality and so on.

    Adobe RGB is a different but very similar standard; it is slightly wider and covers more colors:

    You'll probably want to run off and switch your camera from sRGB to Adobe RGB right away, but don't rush.

    Adobe RGB is only needed by those who print professionally and know exactly what they are doing (such people don't need to read our articles). The vast majority of screens and programs work in the sRGB standard and know nothing about Adobe RGB; that's just how it happened historically. Moreover, if you try to display Adobe RGB colors on an sRGB screen, color rendering problems can occur. sRGB guarantees that at least most people will see roughly the same colors as you do.

    Because of sRGB's limited range, you have probably noticed that after photographing a red rose you cannot make out the individual petals in the photo: the screen simply cannot reproduce all the detail in shades of red.

    Of course, a lot depends on monitor settings, so photographers prefer monitors with IPS panels and look for models calibrated at the factory, such as the LG IPS236V. All manufacturers try to comply with the sRGB standard; some do it better, some worse.

    Technology has advanced greatly lately, and LCD monitors now sometimes demonstrate a color gamut even wider than CRT monitors, although until recently this was impossible, which is why the old bulky screens could not be pushed out of design departments for a long time. Here is the color gamut of a professional LCD monitor:

    Attentive readers are probably bursting with the question of what diagram appears in the article's title and what monitor it belongs to. It is not a monitor but a Samsung Galaxy Note phone. The trick is that modern smartphones use a new display technology: AMOLED (organic light-emitting diodes). Full-size AMOLED monitors are still very expensive, but I believe the future belongs to them.

    AMOLED achieves purer primary colors and, as a result, a wider color gamut. In practice this means that on a Samsung Galaxy Note the picture will be richer and more contrasty than on screens of previous generations.

    Thank you for your attention.

    An image in Photoshop can be converted, displayed, and edited in any of eight modes: Bitmap, Grayscale, Duotone, Indexed Color, RGB, CMYK, Lab, and Multichannel. Simply select the desired mode from the Image > Mode submenu; see Fig. 2.7.

    Fig. 2.7. The Mode submenu

    To use a mode that is unavailable (its name appears dimmed), you must first convert the image to a different representation. For example, to convert an image to Indexed Color, it must first be in RGB or Grayscale mode.

    Some mode changes cause noticeable color shifts; others affect only subtle nuances. Dramatic changes can occur when converting an image from RGB to CMYK, as rich, vibrant RGB colors are replaced by their printable equivalents. Color accuracy can also degrade if you repeatedly convert an image from RGB to CMYK and back again.

    Mid-range and low-end scanners usually produce only RGB images. If you are creating an image in Photoshop that will later be printed, work on it in RGB mode to speed up editing and the application of filters, and convert it to CMYK only when it is ready to print. To preview how a CMYK image will look in print, use the View > Proof Setup submenu commands (Fig. 2.8) together with View > Proof Colors, or press Ctrl+Y.

    Fig. 2.8. Submenu for setting proof-print options

    You can preview a CMYK image in one window and open a second Photoshop window that displays the same image without first converting it to CMYK.

    Some Photoshop conversions cause layers to be merged, such as converting to Indexed Color, Multichannel, or Bitmap. For other conversions, if you want to be sure the layers are preserved, check the Don't Flatten option.

    High-end scanners can produce CMYK images, and to avoid losing color data, this mode should not be changed. If such large files are cumbersome to work with, you can replace the image with a lower-resolution copy, record your commands with the Actions palette, and then apply the action to the high-resolution CMYK image. Some operations, however, will still have to be done manually, such as strokes applied with the Brush tool.

    Some output devices require that the image be saved in a specific representation. The availability of some commands and tool options in Photoshop may also change depending on the current image mode.

    In Bitmap mode (Fig. 2.9, 2.11), pixels are either 100% white or 100% black; there is no access to layers, filters, or the commands of the Adjustments submenu,

    Fig. 2.9. Image in Bitmap mode, converted with the Diffusion Dither method

    Fig. 2.10. Image in Grayscale mode

    Fig. 2.11. Bitmap mode

    except for the Invert command. Before you can convert an image to this representation, it must first be in Grayscale mode.

    In Grayscale mode (Fig. 2.10, 2.12), pixels can be black, white, or any of 254 shades of gray. If you convert a color image to grayscale and then save and close it, the brightness information is retained but the color information is permanently lost.

    Fig. 2.12. Grayscale mode

    An image in Indexed Color mode (see Fig. 2.13) contains one channel, and its color table can hold at most 256 colors or shades (8-bit color). This is the maximum number of colors in the Web-friendly GIF and PNG-8 formats. However, when preparing graphics for Web browsers it is better to use Photoshop's Save for Web command. Reducing the number of colors to an 8-bit representation is often useful when images are used in multimedia applications. You can also convert an image to Indexed Color to create artistic color effects.

    Fig. 2.13. Indexed Color mode

    RGB mode is the most versatile, since only in this mode are all of Photoshop's filters and tool options available (Fig. 2.14). Some video and multimedia applications can import RGB images in Photoshop format.

    Fig. 2.14. RGB mode

    Photoshop is one of the few programs that can display and edit an image in CMYK mode (Fig. 2.15). You can convert an image to this mode when it is ready to be printed on a color printer or when color separations need to be produced.

    Fig. 2.15. CMYK mode

    Lab mode (Fig. 2.16) has three channels. It was designed to make colors consistent between printers and monitors. The channels contain lightness information and two color components: one on the green-to-red axis and the other on the blue-to-yellow axis. Photo CD images are usually opened in Photoshop in the Lab (or RGB) representation. Files are sometimes saved in this mode to move them to other operating systems.

    Duotone mode (Fig. 2.17) corresponds to a printing method that uses two or more plates to give a halftone image richer, deeper tone.

    Fig. 2.16. Lab mode

    Fig. 2.17. Duotone mode

    An image in Multichannel mode (Fig. 2.18) consists of several grayscale channels with 256 shades each. This mode is used when printing certain halftone images. You can also use it to assemble individual channels from different images before converting the new image to a color mode. When you switch to Multichannel mode, custom (spot) color channels are preserved. If you convert an image from RGB to Multichannel, the Red, Green, and Blue channels are converted to Cyan, Magenta, and Yellow respectively. The image may become a little lighter, but there will be no significant changes.

    Fig. 2.18. Multichannel mode