What is a discrete image? What is hardware resolution? Fourier processing of digital images

    A digital photograph, like any other raster image, is an array of numbers recorded by brightness sensors in a two-dimensional plane. Knowing that, from a mathematical point of view, a thin lens performs a Fourier transform of images placed in its focal planes, it is possible to create image-processing algorithms analogous to image processing by a classical optical system.

    The general scheme of such algorithms looks like this:

    1. Z = FT(X) – direct two-dimensional discrete Fourier transform
    2. Z′ = T(Z) – application of a function ("transparency") T to the Fourier transform of the image
    3. Y = RFT(Z′) – inverse two-dimensional discrete Fourier transform
    To compute the Fourier transforms, fast discrete Fourier transform (FFT) algorithms are used. Although an optical system of lenses performs the Fourier transform over a continuous range of the argument and for a continuous spectrum, when moving to digital data processing the Fourier transform formulas can be replaced by the discrete Fourier transform formulas.
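    To make the scheme concrete, here is a minimal single-channel sketch in C# of the three steps, with a deliberately naive O(N^4) DFT standing in for a real FFT (the library code below uses FFTW instead). All names here are illustrative, and the transparency T is passed in as a delegate:

    using System;
    using System.Numerics;

    static class FourierPipeline
    {
        // Naive 2D DFT, O((N1*N2)^2): fine for illustration, far too slow for real images.
        static Complex[,] Dft2D(Complex[,] x, bool inverse)
        {
            int n1 = x.GetLength(0), n2 = x.GetLength(1);
            var result = new Complex[n1, n2];
            double sign = inverse ? 2 * Math.PI : -2 * Math.PI;
            for (int k1 = 0; k1 < n1; k1++)
            for (int k2 = 0; k2 < n2; k2++)
            {
                Complex sum = Complex.Zero;
                for (int m1 = 0; m1 < n1; m1++)
                for (int m2 = 0; m2 < n2; m2++)
                    sum += x[m1, m2] * Complex.Exp(new Complex(0,
                        sign * ((double) k1 * m1 / n1 + (double) k2 * m2 / n2)));
                result[k1, k2] = inverse ? sum / (n1 * n2) : sum;
            }
            return result;
        }

        // Y = RFT(T(FT(X))), the three-step scheme from the text.
        public static Complex[,] Process(Complex[,] x, Func<Complex[,], Complex[,]> t)
        {
            var z = Dft2D(x, inverse: false);  // 1. Z  = FT(X)
            var zt = t(z);                     // 2. Z' = T(Z)
            return Dft2D(zt, inverse: true);   // 3. Y  = RFT(Z')
        }
    }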

    Implementation examples

    • Image blur algorithm
    • Image sharpening algorithm
    • Image scaling algorithm

    The implemented algorithms are part of the open-source FFTTools library. Internet address: github.com/dprotopopov/FFTTools

    Image Blur Algorithm

    In optical systems, the diaphragm located in the focal plane is simply a hole in a screen. When the luminous flux passes through the diaphragm, high-frequency waves (with shorter wavelengths) pass through the hole, while low-frequency waves (with longer wavelengths) are cut off by the screen. This increases the sharpness of the resulting image. If the hole in the screen is replaced by an obstacle of the same shape, the result is a blurred image, since it is formed only from the long-wavelength frequencies.

    Algorithm:

    1. Let X(N1,N2) be an array of image pixel brightnesses.
    2. Calculate Px = average (rms) brightness of the pixels in array X.
    3. Calculate the array Z = FT(X) – the direct two-dimensional discrete Fourier transform.
    4. Calculate the array Z′ = T(Z), where T is the zeroing of the rows and columns lying in the given internal region of the argument matrix, which correspond to high frequencies (that is, zeroing of the Fourier expansion coefficients corresponding to high frequencies); a simplified sketch of this masking step is given after the list.
    5. Calculate the array Y = RFT(Z′) – the inverse two-dimensional discrete Fourier transform.
    6. Calculate Py = average (rms) brightness of the pixels in array Y.
    7. Normalize the array Y(N1,N2) by the average brightness level Px/Py.
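    A simplified two-dimensional, single-channel version of the masking step T might look as follows. It assumes the unshifted DFT layout, in which the highest frequencies sit in the middle rows and columns of the spectrum (the library's Blind method later in the text does the same on a three-dimensional array with a color-channel axis):

    using System;
    using System.Numerics;

    static class BlurMask
    {
        // Zero the central band of rows and the central band of columns of an
        // unshifted 2D spectrum; these hold the highest spatial frequencies.
        public static void ZeroHighFrequencies(Complex[,] z, int bandHeight, int bandWidth)
        {
            int n0 = z.GetLength(0), n1 = z.GetLength(1);
            int s0 = Math.Max(0, (n0 - bandHeight)/2), e0 = Math.Min((n0 + bandHeight)/2, n0);
            int s1 = Math.Max(0, (n1 - bandWidth)/2), e1 = Math.Min((n1 + bandWidth)/2, n1);
            for (int i = s0; i < e0; i++)          // central rows: clear entirely
                for (int j = 0; j < n1; j++) z[i, j] = Complex.Zero;
            for (int i = 0; i < n0; i++)           // remaining rows: clear central columns
                if (i < s0 || i >= e0)
                    for (int j = s1; j < e1; j++) z[i, j] = Complex.Zero;
        }
    }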

    Image sharpening algorithm

    In optical systems, the diaphragm located in the focal plane is simply a hole in a screen. When the light passes through the diaphragm, high-frequency waves (with shorter wavelengths) pass through the hole, while low-frequency waves (with longer wavelengths) are cut off by the screen. This increases the sharpness of the resulting image.

    Algorithm:

    1. Let X(N1,N2) be an array of image pixel brightnesses.
    2. Calculate Px = average (rms) brightness of the pixels in array X.
    3. Calculate the array Z = FT(X) – the direct two-dimensional discrete Fourier transform.
    4. Save the value L = Z(0,0), corresponding to the average brightness of the pixels of the original image.
    5. Calculate the array Z′ = T(Z), where T is the zeroing of the rows and columns lying in the given external region of the argument matrix, which correspond to low frequencies (that is, zeroing of the Fourier expansion coefficients corresponding to low frequencies).
    6. Restore the value Z′(0,0) = L, corresponding to the average brightness of the pixels of the original image.
    7. Calculate the array Y = RFT(Z′) – the inverse two-dimensional discrete Fourier transform.
    8. Calculate Py = average (rms) brightness of the pixels in array Y.
    9. Normalize the array Y(N1,N2) by the average brightness level Px/Py. Steps 4, 6, 8 and 9 are sketched in code after this list.
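    The details that distinguish sharpening from blurring, namely saving and restoring the DC coefficient Z(0,0) and renormalizing the result, can be sketched in isolation for the single-channel case (names and delegate signatures here are illustrative, not the library's API):

    using System;
    using System.Linq;
    using System.Numerics;

    static class SharpenSteps
    {
        // Keep the mean brightness (DC term) while the low-frequency corners
        // are zeroed, then restore the rms level of the original image.
        public static double[,] Apply(double[,] x,
                                      Func<Complex[,], Complex[,]> ft,
                                      Func<Complex[,], Complex[,]> rft,
                                      Action<Complex[,]> zeroLowFrequencies)
        {
            int n0 = x.GetLength(0), n1 = x.GetLength(1);
            double px = Math.Sqrt(x.Cast<double>().Average(v => v * v)); // step 2: rms of X

            var z = ft(ToComplex(x));          // step 3
            Complex dc = z[0, 0];              // step 4: save mean brightness
            zeroLowFrequencies(z);             // step 5
            z[0, 0] = dc;                      // step 6: restore mean brightness
            var y = rft(z);                    // step 7

            var result = new double[n0, n1];
            for (int i = 0; i < n0; i++)
                for (int j = 0; j < n1; j++) result[i, j] = y[i, j].Magnitude;
            double py = Math.Sqrt(result.Cast<double>().Average(v => v * v)); // step 8
            for (int i = 0; i < n0; i++)
                for (int j = 0; j < n1; j++) result[i, j] *= px / py;         // step 9
            return result;
        }

        static Complex[,] ToComplex(double[,] x)
        {
            var c = new Complex[x.GetLength(0), x.GetLength(1)];
            for (int i = 0; i < x.GetLength(0); i++)
                for (int j = 0; j < x.GetLength(1); j++) c[i, j] = x[i, j];
            return c;
        }
    }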

    Image scaling algorithm

    In optical systems, the luminous flux in the focal plane of the system is the Fourier transform of the original image. The size of the image at the output of an optical system is determined by the ratio of the focal lengths of the objective lens and the eyepiece.

    Algorithm:

    1. Let X(N1,N2) be an array of image pixel brightnesses.
    2. Calculate Px = average (rms) brightness of the pixels in array X.
    3. Calculate the array Z = FT(X) – the direct two-dimensional discrete Fourier transform.
    4. Calculate the array Z′ = T(Z), where T either pads the matrix with zero rows and columns at the positions corresponding to high frequencies, or removes the rows and columns corresponding to high frequencies, so as to obtain the required size of the final image (see the sketch after this list).
    5. Calculate the array Y = RFT(Z′) – the inverse two-dimensional discrete Fourier transform.
    6. Calculate Py = average (rms) brightness of the pixels in array Y.
    7. Normalize the array Y(M1,M2) by the average brightness level Px/Py.
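    The core of step 4 can be sketched for the single-channel case: in the unshifted DFT layout the low frequencies occupy the four corners of the spectrum, so resizing amounts to copying the corner quadrants into a spectrum of the new size (compare the library's Copy method later in the text, which adds a channel axis):

    using System;
    using System.Numerics;

    static class SpectrumResize
    {
        // Copy the four low-frequency corner quadrants of the source spectrum
        // into the destination spectrum; the destination may be larger (zero
        // padding, upscaling) or smaller (truncation, downscaling).
        public static void CopyCorners(Complex[,] src, Complex[,] dst)
        {
            int n0 = src.GetLength(0), n1 = src.GetLength(1);
            int m0 = dst.GetLength(0), m1 = dst.GetLength(1);
            int h = Math.Min(n0, m0)/2, w = Math.Min(n1, m1)/2;
            for (int i = 0; i <= h; i++)
                for (int j = 0; j <= w; j++)
                {
                    dst[i, j] = src[i, j];                                     // top-left
                    dst[m0 - i - 1, j] = src[n0 - i - 1, j];                   // bottom-left
                    dst[i, m1 - j - 1] = src[i, n1 - j - 1];                   // top-right
                    dst[m0 - i - 1, m1 - j - 1] = src[n0 - i - 1, n1 - j - 1]; // bottom-right
                }
        }
    }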
    Software used

    • Microsoft Visual Studio 2013, C# – development environment and programming language
    • EmguCV/OpenCV – a .NET wrapper for the OpenCV C++ library of image-processing structures and algorithms
    • FFTWSharp/FFTW – a .NET wrapper for the FFTW C library implementing fast discrete Fourier transform algorithms

    Image Blur Algorithm

    Algorithm code

    /// <summary>
    /// Clear internal region of array
    /// </summary>
    /// <param name="data">Array of values</param>
    /// <param name="size">Internal blind region size</param>
    private static void Blind(Complex[,,] data, Size size)
    {
        int n0 = data.GetLength(0);
        int n1 = data.GetLength(1);
        int n2 = data.GetLength(2);
        int s0 = Math.Max(0, (n0 - size.Height)/2);
        int s1 = Math.Max(0, (n1 - size.Width)/2);
        int e0 = Math.Min((n0 + size.Height)/2, n0);
        int e1 = Math.Min((n1 + size.Width)/2, n1);
        // Central rows (high vertical frequencies): clear entirely.
        for (int i = s0; i < e0; i++)
        {
            Array.Clear(data, i*n1*n2, n1*n2);
        }
        // Remaining rows: clear the central columns (high horizontal frequencies).
        for (int i = 0; i < s0; i++)
        {
            Array.Clear(data, i*n1*n2 + s1*n2, (e1 - s1)*n2);
        }
        for (int i = e0; i < n0; i++)
        {
            Array.Clear(data, i*n1*n2 + s1*n2, (e1 - s1)*n2);
        }
    }

    /// <summary>
    /// Blur bitmap with the Fastest Fourier Transform
    /// </summary>
    /// <returns>Blurred bitmap</returns>
    public Bitmap Blur(Bitmap bitmap)
    {
        using (var image = new Image<Bgr, double>(bitmap))
        {
            int length = image.Data.Length;
            int n0 = image.Data.GetLength(0);
            int n1 = image.Data.GetLength(1);
            int n2 = image.Data.GetLength(2);

            // Copy pixel data to a flat array and remember the rms brightness.
            var doubles = new double[length];
            Buffer.BlockCopy(image.Data, 0, doubles, 0, length*sizeof(double));
            double power = Math.Sqrt(doubles.Average(x => x*x));

            // Forward 3D FFT (height x width x color channels).
            var input = new fftw_complexarray(doubles.Select(x => new Complex(x, 0)).ToArray());
            var output = new fftw_complexarray(length);
            fftw_plan.dft_3d(n0, n1, n2, input, output, fftw_direction.Forward, fftw_flags.Estimate).Execute();
            Complex[] complex = output.GetData_Complex();

            // Reinterpret the flat spectrum as a 3D array (via pinned memory)
            // and zero the internal high-frequency region.
            var data = new Complex[n0, n1, n2];
            var buffer = new double[length*2];
            GCHandle complexHandle = GCHandle.Alloc(complex, GCHandleType.Pinned);
            GCHandle dataHandle = GCHandle.Alloc(data, GCHandleType.Pinned);
            IntPtr complexPtr = complexHandle.AddrOfPinnedObject();
            IntPtr dataPtr = dataHandle.AddrOfPinnedObject();
            Marshal.Copy(complexPtr, buffer, 0, buffer.Length);
            Marshal.Copy(buffer, 0, dataPtr, buffer.Length);
            Blind(data, _blinderSize);
            Marshal.Copy(dataPtr, buffer, 0, buffer.Length);
            Marshal.Copy(buffer, 0, complexPtr, buffer.Length);
            complexHandle.Free();
            dataHandle.Free();

            // Inverse FFT and normalization to the original rms brightness.
            input.SetData(complex);
            fftw_plan.dft_3d(n0, n1, n2, input, output, fftw_direction.Backward, fftw_flags.Estimate).Execute();
            double[] array2 = output.GetData_Complex().Select(x => x.Magnitude).ToArray();
            double power2 = Math.Sqrt(array2.Average(x => x*x));
            doubles = array2.Select(x => x*power/power2).ToArray();
            Buffer.BlockCopy(doubles, 0, image.Data, 0, length*sizeof(double));
            return image.Bitmap;
        }
    }

    Image sharpening algorithm

    Algorithm code

    /// <summary>
    /// Clear external region of array
    /// </summary>
    /// <param name="data">Array of values</param>
    /// <param name="size">External blind region size</param>
    private static void Blind(Complex[,,] data, Size size)
    {
        int n0 = data.GetLength(0);
        int n1 = data.GetLength(1);
        int n2 = data.GetLength(2);
        int s0 = Math.Max(0, (n0 - size.Height)/2);
        int s1 = Math.Max(0, (n1 - size.Width)/2);
        int e0 = Math.Min((n0 + size.Height)/2, n0);
        int e1 = Math.Min((n1 + size.Width)/2, n1);
        // Outer rows: clear the outer columns, i.e. the low-frequency corners.
        for (int i = 0; i < s0; i++)
        {
            Array.Clear(data, i*n1*n2, s1*n2);
            Array.Clear(data, i*n1*n2 + e1*n2, (n1 - e1)*n2);
        }
        for (int i = e0; i < n0; i++)
        {
            Array.Clear(data, i*n1*n2, s1*n2);
            Array.Clear(data, i*n1*n2 + e1*n2, (n1 - e1)*n2);
        }
    }

    /// <summary>
    /// Sharp bitmap with the Fastest Fourier Transform
    /// </summary>
    /// <returns>Sharpened bitmap</returns>
    public Bitmap Sharp(Bitmap bitmap)
    {
        using (var image = new Image<Bgr, double>(bitmap))
        {
            int length = image.Data.Length;
            int n0 = image.Data.GetLength(0);
            int n1 = image.Data.GetLength(1);
            int n2 = image.Data.GetLength(2);

            var doubles = new double[length];
            Buffer.BlockCopy(image.Data, 0, doubles, 0, length*sizeof(double));
            double power = Math.Sqrt(doubles.Average(x => x*x));

            var input = new fftw_complexarray(doubles.Select(x => new Complex(x, 0)).ToArray());
            var output = new fftw_complexarray(length);
            fftw_plan.dft_3d(n0, n1, n2, input, output, fftw_direction.Forward, fftw_flags.Estimate).Execute();
            Complex[] complex = output.GetData_Complex();

            // Save the DC coefficient Z(0,0), the average brightness of the image.
            Complex level = complex[0];

            var data = new Complex[n0, n1, n2];
            var buffer = new double[length*2];
            GCHandle complexHandle = GCHandle.Alloc(complex, GCHandleType.Pinned);
            GCHandle dataHandle = GCHandle.Alloc(data, GCHandleType.Pinned);
            IntPtr complexPtr = complexHandle.AddrOfPinnedObject();
            IntPtr dataPtr = dataHandle.AddrOfPinnedObject();
            Marshal.Copy(complexPtr, buffer, 0, buffer.Length);
            Marshal.Copy(buffer, 0, dataPtr, buffer.Length);
            Blind(data, _blinderSize); // zero the external low-frequency region
            Marshal.Copy(dataPtr, buffer, 0, buffer.Length);
            Marshal.Copy(buffer, 0, complexPtr, buffer.Length);
            complexHandle.Free();
            dataHandle.Free();

            // Restore the average brightness and transform back.
            complex[0] = level;
            input.SetData(complex);
            fftw_plan.dft_3d(n0, n1, n2, input, output, fftw_direction.Backward, fftw_flags.Estimate).Execute();
            double[] array2 = output.GetData_Complex().Select(x => x.Magnitude).ToArray();
            double power2 = Math.Sqrt(array2.Average(x => x*x));
            doubles = array2.Select(x => x*power/power2).ToArray();
            Buffer.BlockCopy(doubles, 0, image.Data, 0, length*sizeof(double));
            return image.Bitmap;
        }
    }

    Image scaling algorithm

    Algorithm code

    /// <summary>
    /// Copy arrays
    /// </summary>
    /// <param name="input">Input array</param>
    /// <param name="output">Output array</param>
    private static void Copy(Complex[,,] input, Complex[,,] output)
    {
        int n0 = input.GetLength(0);
        int n1 = input.GetLength(1);
        int n2 = input.GetLength(2);
        int m0 = output.GetLength(0);
        int m1 = output.GetLength(1);
        int m2 = output.GetLength(2);
        int ex0 = Math.Min(n0, m0)/2;
        int ex1 = Math.Min(n1, m1)/2;
        int ex2 = Math.Min(n2, m2);
        Debug.Assert(n2 == m2);
        // Copy the four low-frequency corner quadrants of the spectrum, channel by channel.
        for (int k = 0; k < ex2; k++)
        {
            for (int i = 0; i <= ex0; i++)
            {
                for (int j = 0; j <= ex1; j++)
                {
                    int ni = n0 - i - 1;
                    int nj = n1 - j - 1;
                    int mi = m0 - i - 1;
                    int mj = m1 - j - 1;
                    output[i, j, k] = input[i, j, k];
                    output[mi, j, k] = input[ni, j, k];
                    output[i, mj, k] = input[i, nj, k];
                    output[mi, mj, k] = input[ni, nj, k];
                }
            }
        }
    }

    /// <summary>
    /// Resize bitmap with the Fastest Fourier Transform
    /// </summary>
    /// <returns>Resized bitmap</returns>
    public Bitmap Stretch(Bitmap bitmap)
    {
        using (var image = new Image<Bgr, double>(bitmap))
        {
            int length = image.Data.Length;
            int n0 = image.Data.GetLength(0);
            int n1 = image.Data.GetLength(1);
            int n2 = image.Data.GetLength(2);

            var doubles = new double[length];
            Buffer.BlockCopy(image.Data, 0, doubles, 0, length*sizeof(double));
            double power = Math.Sqrt(doubles.Average(x => x*x));

            var input = new fftw_complexarray(doubles.Select(x => new Complex(x, 0)).ToArray());
            var output = new fftw_complexarray(length);
            fftw_plan.dft_3d(n0, n1, n2, input, output, fftw_direction.Forward, fftw_flags.Estimate).Execute();
            Complex[] complex = output.GetData_Complex();

            using (var image2 = new Image<Bgr, double>(_newSize))
            {
                int length2 = image2.Data.Length;
                int m0 = image2.Data.GetLength(0);
                int m1 = image2.Data.GetLength(1);
                int m2 = image2.Data.GetLength(2);

                // Reinterpret the flat spectrum as 3D arrays of the old and new sizes.
                var data = new Complex[n0, n1, n2];
                var data2 = new Complex[m0, m1, m2];
                var complex2 = new Complex[length2];

                var buffer = new double[length*2];
                GCHandle complexHandle = GCHandle.Alloc(complex, GCHandleType.Pinned);
                GCHandle dataHandle = GCHandle.Alloc(data, GCHandleType.Pinned);
                Marshal.Copy(complexHandle.AddrOfPinnedObject(), buffer, 0, buffer.Length);
                Marshal.Copy(buffer, 0, dataHandle.AddrOfPinnedObject(), buffer.Length);
                complexHandle.Free();
                dataHandle.Free();

                // Transfer the low-frequency quadrants into the new-size spectrum.
                Copy(data, data2);

                var buffer2 = new double[length2*2];
                complexHandle = GCHandle.Alloc(complex2, GCHandleType.Pinned);
                dataHandle = GCHandle.Alloc(data2, GCHandleType.Pinned);
                Marshal.Copy(dataHandle.AddrOfPinnedObject(), buffer2, 0, buffer2.Length);
                Marshal.Copy(buffer2, 0, complexHandle.AddrOfPinnedObject(), buffer2.Length);
                complexHandle.Free();
                dataHandle.Free();

                // Inverse FFT at the new size and normalization to the original rms brightness.
                var input2 = new fftw_complexarray(complex2);
                var output2 = new fftw_complexarray(length2);
                fftw_plan.dft_3d(m0, m1, m2, input2, output2, fftw_direction.Backward, fftw_flags.Estimate).Execute();
                double[] array2 = output2.GetData_Complex().Select(x => x.Magnitude).ToArray();
                double power2 = Math.Sqrt(array2.Average(x => x*x));
                double[] doubles2 = array2.Select(x => x*power/power2).ToArray();
                Buffer.BlockCopy(doubles2, 0, image2.Data, 0, length2*sizeof(double));
                return image2.Bitmap;
            }
        }
    }
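    Putting the three operations together might look as follows. The builder class names and constructors in this sketch are assumptions made for illustration, not necessarily the actual FFTTools API; check the repository for the real entry points:

    using System.Drawing;

    class Demo
    {
        static void Main()
        {
            // Hypothetical wrapper objects; the fields _blinderSize and _newSize
            // used in the methods above would be set through constructors like these.
            var source = new Bitmap("input.png");

            var blur = new BlurBuilder(new Size(64, 64));          // blinded high-frequency band
            blur.Blur(source).Save("blurred.png");

            var sharp = new SharpBuilder(new Size(64, 64));        // blinded low-frequency band
            sharp.Sharp(source).Save("sharpened.png");

            var stretch = new StretchBuilder(new Size(1280, 960)); // target image size
            stretch.Stretch(source).Save("resized.png");
        }
    }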

    Images consisting of discrete elements, each of which can take only a finite number of distinguishable values that change over a finite time, are called discrete. It should be emphasized that the elements of a discrete image may, generally speaking, have unequal areas, and each of them may have a different number of distinguishable gradations.

    As was shown in the first chapter, the retina transmits discrete images to the higher parts of the visual analyzer.

    Their apparent continuity is just one of the illusions of vision. This "quantization" of initially continuous images is determined not by the limitations associated with the resolution of the eye's optical system, nor even by the morphological structural elements of the visual system, but by the functional organization of the nerve networks.

    The image is divided into discrete elements by receptive fields, each uniting some number of photoreceptors. The receptive fields perform the primary selection of the useful light signal by spatial and temporal summation.

    The central part of the retina (the fovea) is occupied only by cones; in the periphery outside the fovea there are both cones and rods. Under night-vision conditions, the cone fields in the central part of the retina have approximately the same size (about 5′ in angular measure). The number of such fields in the fovea, whose angular dimensions are about 90′, is about 200. The main role in night vision is played by the rod fields, which occupy the entire remaining surface of the retina. They have an angular size of about 1° over the whole surface of the retina, and their number is about 3 thousand. Under these conditions, not only detection but also examination of dimly lit objects is carried out by the peripheral areas of the retina.

    As illumination increases, another system of storage cells, the cone receptive fields, begins to play the main role. In the fovea, an increase in illumination causes a gradual decrease in the effective field size until, at a brightness of about 100 asb, it is reduced to a single cone. At the periphery, with increasing illumination, the rod fields gradually switch off (are inhibited) and the cone fields come into action. The cone fields in the periphery, like the foveal ones, are able to shrink depending on the light energy incident on them. The largest number of cones that a cone receptive field can contain with increasing illumination grows from the center to the edges of the retina and, at an angular distance of 50-60° from the center, reaches approximately 90.

    It can be calculated that in good daylight conditions the number of receptive fields reaches about 800 thousand. This value approximately corresponds to the number of fibers in the human optic nerve. Discrimination (resolution) of objects in daytime vision is carried out mainly by the fovea, where a receptive field can be reduced to a single cone and the cones themselves are packed most densely.

    While the number of storage cells of the retina can be determined to a satisfactory approximation, there are not yet sufficient data to determine the number of possible states of the receptive fields. Only rough estimates can be made on the basis of studies of the differential thresholds of receptive fields. The threshold contrast in the foveal receptive fields over a certain operating range of illumination is of the order of 1%. In this case the number of distinguishable gradations is small: over the entire range of restructuring of the cone foveal receptive field, 8-9 gradations are distinguished.

    The accumulation period of the receptive field, the so-called critical duration, averages about 0.1 second, but at high illumination levels it may apparently decrease significantly.

    In fact, a model describing the discrete structure of the transmitted images should be still more complex. One would have to take into account the relationship between receptive-field sizes, thresholds and critical duration, as well as the statistical nature of visual thresholds. But for now there is no need for this. It is enough to take as an image model a set of elements of equal area whose angular dimensions are smaller than those of the smallest detail resolved by the eye, whose number of distinguishable states is greater than the maximum number of distinguishable brightness gradations, and whose time of discrete change is shorter than the flicker period at the critical flicker-fusion frequency.

    If images of real continuous objects of the external world are replaced by such discrete images, the eye will not notice the substitution.* Consequently, discrete images of this kind contain at least as much information as the visual system perceives.**

    * Color and volume images can also be replaced with a discrete model.
    ** The problem of replacing continuous images with discrete ones is important for film and television technology. Time quantization is the basis of this technique. In pulse-code television systems, the image is also divided into discrete elements and quantized by brightness.

    In the previous chapter we studied linear spatially invariant systems in a continuous two-dimensional domain. In practice we deal with images that have limited dimensions and, moreover, are measured at a discrete set of points. The methods developed so far therefore need to be adapted, extended and modified for application in such a domain. Several new points also arise that require careful consideration.

    The sampling theorem tells us under what conditions a continuous image can be accurately reconstructed from a discrete set of values. We will also learn what happens when the conditions of its applicability are not met. All this has a direct bearing on the design of visual systems.

    Methods that require moving to the frequency domain have become popular partly owing to algorithms for the fast computation of the discrete Fourier transform. Care must be taken, however, since these methods assume that the signal is periodic. We will discuss how this requirement can be met and what the consequences of violating it are.

    7.1. Image Size Limit

    In practice, images always have finite dimensions. Consider a rectangular image of width $W$ and height $H$. Then there is no need to take the integrals in the Fourier transform over infinite limits:

    $$F(u, v) = \int_0^H \!\! \int_0^W f(x, y)\, e^{-i(ux + vy)}\, dx\, dy .$$

    It is interesting that we do not need to know $F(u, v)$ at all frequencies to restore $f(x, y)$. The knowledge that $f(x, y) = 0$ outside the $W \times H$ rectangle represents a hard constraint. In other words, a function that is nonzero only in a limited region of the image plane contains much less information than a function that does not have this property.

    To see this, imagine that the image plane is covered with copies of the given image; in other words, we extend our image to a function that is periodic in both directions:

    $$f_p(x, y) = f\big(x - W \lfloor x/W \rfloor,\ y - H \lfloor y/H \rfloor\big).$$

    Here $\lfloor x \rfloor$ is the largest integer not exceeding $x$. The Fourier transform of such a replicated image has the form

    $$F_p(u, v) = \iint f_p(x, y)\, e^{-i(ux + vy)}\, dx\, dy .$$

    Using suitably chosen convergence factors, it is proved in Exercise 7.1 that this transform converges to an array of impulses. Hence,

    $$F_p(u, v) = \frac{4\pi^2}{WH} \sum_{k=-\infty}^{\infty} \sum_{l=-\infty}^{\infty} F\!\left(\frac{2\pi k}{W}, \frac{2\pi l}{H}\right) \delta\!\left(u - \frac{2\pi k}{W},\ v - \frac{2\pi l}{H}\right),$$

    from which we see that $F_p(u, v)$ is equal to zero everywhere except on the discrete set of frequencies $u = 2\pi k/W$, $v = 2\pi l/H$. Thus, to find $F_p$ it is enough to know $F(u, v)$ at these points. The function $f(x, y)$, in turn, is obtained from $f_p(x, y)$ by simply cutting out the section for which $0 \le x < W$, $0 \le y < H$. Therefore, in order to restore $f(x, y)$, it is enough to know $F(2\pi k/W, 2\pi l/H)$ for all integers $k$ and $l$: a countable set of numbers.

    Note that the Fourier transform of a periodic function turns out to be discrete, and the inverse transform can be represented as a series.
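    Written out explicitly for a $W \times H$ image $f(x, y)$, with the floor notation as above, the periodic extension and its series are:

    $$f_p(x, y) = \sum_{k=-\infty}^{\infty} \sum_{l=-\infty}^{\infty} F_{kl}\, e^{i 2\pi (k x / W + l y / H)}, \qquad F_{kl} = \frac{1}{WH} \int_0^H \!\! \int_0^W f(x, y)\, e^{-i 2\pi (k x / W + l y / H)}\, dx\, dy .$$

    The coefficients $F_{kl}$ are, up to the factor $1/WH$, precisely the samples $F(2\pi k/W,\ 2\pi l/H)$ of the continuous transform, i.e. the countable set of numbers mentioned above.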

    A continuous image can be replaced by a discrete one in various ways. One can, for example, choose any system of orthogonal functions and, having calculated the coefficients of the image representation in this system (in this basis), replace the image with them. The variety of bases makes it possible to form various discrete representations of a continuous image. Most commonly used, however, is periodic sampling, in particular, as mentioned above, sampling with a rectangular raster. This discretization method can be considered as one way of using an orthogonal basis whose elements are shifted $\delta$-functions. In what follows we consider in detail the main features of rectangular sampling.

    Let $x(t_1, t_2)$ be a continuous image, and $x(n_1, n_2)$ the corresponding discrete one, obtained from it by rectangular sampling. This means that the relationship between them is determined by the expression:

    $$x(n_1, n_2) = x(n_1 \Delta_1,\ n_2 \Delta_2), \qquad (1.1)$$

    where $\Delta_1$ and $\Delta_2$ are the vertical and horizontal steps (sampling intervals), respectively. Fig. 1.1 illustrates the location of the samples on the plane under rectangular sampling.

    The main question that arises when replacing a continuous image with a discrete one is determining the conditions under which such a replacement is complete, i.e. is not accompanied by a loss of the information contained in the continuous signal. There are no losses if, given the discrete signal, the continuous one can be restored. From a mathematical point of view, the question therefore amounts to reconstructing the continuous signal in the two-dimensional spaces between the nodes at which its values are known or, in other words, to performing two-dimensional interpolation. This question can be answered by analyzing the spectral properties of continuous and discrete images.

    The two-dimensional continuous frequency spectrum $X(\omega_1, \omega_2)$ of the continuous signal is determined by the two-dimensional direct Fourier transform:

    $$X(\omega_1, \omega_2) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} x(t_1, t_2)\, e^{-i(\omega_1 t_1 + \omega_2 t_2)}\, dt_1\, dt_2, \qquad (1.2)$$

    to which corresponds the two-dimensional inverse continuous Fourier transform:

    $$x(t_1, t_2) = \frac{1}{4\pi^2} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} X(\omega_1, \omega_2)\, e^{i(\omega_1 t_1 + \omega_2 t_2)}\, d\omega_1\, d\omega_2 . \qquad (1.3)$$

    The last relation holds for any values of $(t_1, t_2)$, including the nodes of the rectangular lattice $t_1 = n_1 \Delta_1$, $t_2 = n_2 \Delta_2$. Therefore, for the signal values at the nodes, taking (1.1) into account, relation (1.3) can be written as:

    $$x(n_1, n_2) = \frac{1}{4\pi^2} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} X(\omega_1, \omega_2)\, e^{i(\omega_1 n_1 \Delta_1 + \omega_2 n_2 \Delta_2)}\, d\omega_1\, d\omega_2 . \qquad (1.4)$$

    For brevity, let us denote by $D_{k,l}$ the rectangular section $\{(2k-1)\pi/\Delta_1 \le \omega_1 \le (2k+1)\pi/\Delta_1,\ (2l-1)\pi/\Delta_2 \le \omega_2 \le (2l+1)\pi/\Delta_2\}$ of the two-dimensional frequency domain. The calculation of the integral in (1.4) over the entire frequency domain can be replaced by integration over the individual sections $D_{k,l}$ and summation of the results:

    $$x(n_1, n_2) = \frac{1}{4\pi^2} \sum_{k=-\infty}^{\infty} \sum_{l=-\infty}^{\infty} \iint_{D_{k,l}} X(\omega_1, \omega_2)\, e^{i(\omega_1 n_1 \Delta_1 + \omega_2 n_2 \Delta_2)}\, d\omega_1\, d\omega_2 .$$

    By replacing the variables according to the rule $\omega_1 \to \omega_1 + 2\pi k/\Delta_1$, $\omega_2 \to \omega_2 + 2\pi l/\Delta_2$, we make the integration domain independent of the numbers $k$ and $l$:

    $$x(n_1, n_2) = \frac{1}{4\pi^2} \int_{-\pi/\Delta_1}^{\pi/\Delta_1} \int_{-\pi/\Delta_2}^{\pi/\Delta_2} \sum_{k} \sum_{l} X\!\left(\omega_1 + \frac{2\pi k}{\Delta_1},\ \omega_2 + \frac{2\pi l}{\Delta_2}\right) e^{i(\omega_1 n_1 \Delta_1 + \omega_2 n_2 \Delta_2)}\, d\omega_1\, d\omega_2 .$$

    It is taken into account here that $\exp(i 2\pi k n_1) = \exp(i 2\pi l n_2) = 1$ for any integer $k$, $l$, $n_1$, $n_2$. This expression is very close in form to the inverse Fourier transform; the only difference is the form of the exponential factor. To give it the required form, we introduce the normalized frequencies $\lambda_1 = \omega_1 \Delta_1$, $\lambda_2 = \omega_2 \Delta_2$ and change variables accordingly. As a result we get:

    $$x(n_1, n_2) = \frac{1}{4\pi^2} \int_{-\pi}^{\pi} \int_{-\pi}^{\pi} \frac{1}{\Delta_1 \Delta_2} \sum_{k} \sum_{l} X\!\left(\frac{\lambda_1 + 2\pi k}{\Delta_1},\ \frac{\lambda_2 + 2\pi l}{\Delta_2}\right) e^{i(\lambda_1 n_1 + \lambda_2 n_2)}\, d\lambda_1\, d\lambda_2 . \qquad (1.5)$$

    Now expression (1.5) has the form of an inverse Fourier transform; therefore, the function under the integral sign,

    $$\bar{X}(\lambda_1, \lambda_2) = \frac{1}{\Delta_1 \Delta_2} \sum_{k=-\infty}^{\infty} \sum_{l=-\infty}^{\infty} X\!\left(\frac{\lambda_1 + 2\pi k}{\Delta_1},\ \frac{\lambda_2 + 2\pi l}{\Delta_2}\right), \qquad (1.6)$$

    is the two-dimensional spectrum of the discrete image. In the plane of non-normalized frequencies, expression (1.6) takes the form:

    $$\bar{X}(\omega_1 \Delta_1, \omega_2 \Delta_2) = \frac{1}{\Delta_1 \Delta_2} \sum_{k=-\infty}^{\infty} \sum_{l=-\infty}^{\infty} X\!\left(\omega_1 + \frac{2\pi k}{\Delta_1},\ \omega_2 + \frac{2\pi l}{\Delta_2}\right). \qquad (1.7)$$

    From (1.7) it follows that the two-dimensional spectrum of a discrete image is rectangularly periodic with periods $2\pi/\Delta_1$ and $2\pi/\Delta_2$ along the frequency axes $\omega_1$ and $\omega_2$, respectively. The spectrum of a discrete image is formed as the result of summing an infinite number of spectra of the continuous image, differing from each other by the frequency shifts $2\pi k/\Delta_1$ and $2\pi l/\Delta_2$. Fig. 1.2 qualitatively shows the relationship between the two-dimensional spectra of continuous (Fig. 1.2a) and discrete (Fig. 1.2b) images.

    Fig. 1.2. Frequency spectra of continuous and discrete images

    The summation result depends significantly on the values of these frequency shifts or, in other words, on the choice of the sampling intervals. Let us assume that the spectrum of the continuous image is nonzero only in some two-dimensional region in the vicinity of zero frequency, that is, it is described by a two-dimensional finite function: $X(\omega_1, \omega_2) = 0$ for $|\omega_1| > \Omega_1$, $|\omega_2| > \Omega_2$. If the sampling intervals are chosen so that the shifted copies do not overlap when forming the sum (1.7), then within each rectangular section $D_{k,l}$ only one term of the sum will differ from zero. In particular, for $|\omega_1| \le \pi/\Delta_1$, $|\omega_2| \le \pi/\Delta_2$ we have:

    $$\bar{X}(\omega_1 \Delta_1, \omega_2 \Delta_2) = \frac{1}{\Delta_1 \Delta_2}\, X(\omega_1, \omega_2) \quad \text{at } |\omega_1| \le \pi/\Delta_1,\ |\omega_2| \le \pi/\Delta_2 . \qquad (1.8)$$

    Thus, within this frequency domain the spectra of the continuous and discrete images coincide up to a constant factor. In this case, the spectrum of the discrete image in this frequency region contains full information about the spectrum of the continuous image. We emphasize that this coincidence occurs only under the specified conditions, determined by a successful choice of the sampling intervals. According to (1.8), these conditions are met at sufficiently small sampling intervals, which must satisfy the requirements:

    $$\Delta_1 \le \pi/\Omega_1 , \qquad \Delta_2 \le \pi/\Omega_2 , \qquad (1.9)$$

    in which $\Omega_1$ and $\Omega_2$ are the boundary frequencies of the two-dimensional spectrum.
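    A quick numeric check of requirements (1.9): if, for example, the spectrum is bounded by $\Omega_1 = \Omega_2 = 2\pi \cdot 50$ rad/mm (50 cycles per millimetre), then

    $$\Delta_1 = \Delta_2 \le \frac{\pi}{\Omega_1} = \frac{\pi}{2\pi \cdot 50} = 0.01\ \text{mm},$$

    that is, at least 100 samples per millimetre must be taken along each axis.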

    Relationship (1.8) determines the method of obtaining a continuous image from a discrete one. To do this, it is enough to perform two-dimensional filtering of the discrete image with a low-pass filter having the frequency response

    $$K(\omega_1, \omega_2) = \begin{cases} \Delta_1 \Delta_2, & |\omega_1| \le \pi/\Delta_1,\ |\omega_2| \le \pi/\Delta_2, \\ 0 & \text{otherwise}. \end{cases} \qquad (1.10)$$

    The spectrum of the image at its output contains nonzero components only in the frequency domain $|\omega_1| \le \pi/\Delta_1$, $|\omega_2| \le \pi/\Delta_2$ and, according to (1.8), is equal there to the spectrum of the continuous image. Hence the output image of the ideal low-pass filter coincides with $x(t_1, t_2)$.

    Thus, ideal interpolation reconstruction of a continuous image is performed by a two-dimensional filter with the rectangular frequency response (1.10). It is not difficult to write down the reconstruction algorithm explicitly. The two-dimensional impulse response of the reconstruction filter, easily obtained from (1.10) by the inverse Fourier transform, has the form:

    $$h(t_1, t_2) = \frac{\sin(\pi t_1/\Delta_1)}{\pi t_1/\Delta_1} \cdot \frac{\sin(\pi t_2/\Delta_2)}{\pi t_2/\Delta_2} .$$

    The filter output can be found as the two-dimensional convolution of the input image with this impulse response. Representing the input image as a two-dimensional sequence of $\delta$-functions,

    $$x_\delta(t_1, t_2) = \sum_{n_1=-\infty}^{\infty} \sum_{n_2=-\infty}^{\infty} x(n_1, n_2)\, \delta(t_1 - n_1 \Delta_1,\ t_2 - n_2 \Delta_2),$$

    after performing the convolution we find:

    $$x(t_1, t_2) = \sum_{n_1=-\infty}^{\infty} \sum_{n_2=-\infty}^{\infty} x(n_1 \Delta_1, n_2 \Delta_2)\, \frac{\sin\big(\pi (t_1 - n_1 \Delta_1)/\Delta_1\big)}{\pi (t_1 - n_1 \Delta_1)/\Delta_1} \cdot \frac{\sin\big(\pi (t_2 - n_2 \Delta_2)/\Delta_2\big)}{\pi (t_2 - n_2 \Delta_2)/\Delta_2} . \qquad (1.11)$$

    The resulting relationship indicates a method for exact interpolation reconstruction of a continuous image from the known sequence of its two-dimensional samples: two-dimensional functions of the form $\sin(x)/x$ should be used as the interpolating functions. Relation (1.11) is a two-dimensional version of the Kotelnikov-Nyquist theorem.
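    A direct, deliberately brute-force implementation of formula (1.11) for one output point can be sketched as follows (illustrative names; every sample of the discrete image contributes, which is precisely what makes exact reconstruction computationally heavy):

    using System;

    static class SincReconstruction
    {
        static double Sinc(double t) => t == 0.0 ? 1.0 : Math.Sin(Math.PI * t) / (Math.PI * t);

        // Evaluate the reconstructed continuous image at (t1, t2) from samples
        // x[n1, n2] taken on the grid (n1*d1, n2*d2), following formula (1.11).
        public static double Reconstruct(double[,] x, double d1, double d2, double t1, double t2)
        {
            double sum = 0.0;
            for (int n1 = 0; n1 < x.GetLength(0); n1++)
                for (int n2 = 0; n2 < x.GetLength(1); n2++)
                    sum += x[n1, n2] * Sinc(t1 / d1 - n1) * Sinc(t2 / d2 - n2);
            return sum;
        }
    }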

    Let us emphasize once again that these results are valid only if the two-dimensional spectrum of the signal is finite and the sampling intervals are sufficiently small. The conclusions drawn cease to hold if at least one of these conditions is not met. Real images rarely have spectra with pronounced cutoff frequencies. One of the reasons for an unbounded spectrum is the limited image size. Because of this, when summing in (1.7), terms from neighboring spectral zones make themselves felt in each zone. In this case, exact restoration of a continuous image becomes impossible. In particular, the use of a filter with a rectangular frequency response does not lead to exact reconstruction.

    A feature of optimal image restoration in the intervals between samples is the use of all the samples of the discrete image, as prescribed by procedure (1.11). This is not always convenient; it is often necessary to reconstruct the signal in a local area, relying on a small number of available discrete values. In these cases it is advisable to use quasi-optimal restoration with various interpolating functions. This kind of problem arises, for example, when registering two images, when, due to geometric misalignment, the available samples of one image may correspond to points lying between the nodes of the other. The solution of this problem is discussed in more detail in subsequent sections of this manual.

    Fig. 1.3. The influence of the sampling interval on the reconstruction of the "Fingerprint" image

    Fig. 1.3 illustrates the effect of the sampling intervals on image restoration. The original image, a fingerprint, is shown in Fig. 1.3a, and one of the sections of its normalized spectrum in Fig. 1.3b. The image is discrete; the cutoff frequency used for it is such that, as follows from Fig. 1.3b, the value of the spectrum at this frequency is negligible, which guarantees high-quality reconstruction. In fact, the picture observed in Fig. 1.3a is itself the result of restoring a continuous image, the role of the restoring filter being played by the visualization device, a monitor or printer. In this sense the image in Fig. 1.3a can be considered continuous.

    Fig. 1.3c and 1.3d show the consequences of an incorrect choice of the sampling intervals. When obtaining them, the "continuous" image of Fig. 1.3a was sampled by thinning out its counts. Fig. 1.3c corresponds to an increase of the sampling step along each coordinate by a factor of three, and Fig. 1.3d by a factor of four. This would be acceptable if the cutoff frequencies were lower by the same factors. In fact, as can be seen from Fig. 1.3b, requirements (1.9) are violated, especially severely when the samples are thinned out four times. Therefore, the images restored by algorithm (1.11) are not only defocused but also strongly distort the texture of the print.

    Fig. 1.4. The influence of the sampling interval on the reconstruction of the "Portrait" image

    Fig. 1.4 shows a similar series of results obtained for an image of the "portrait" type. The consequences of stronger thinning (four times in Fig. 1.4c and six times in Fig. 1.4d) manifest themselves mainly in a loss of clarity. Subjectively, the quality loss seems less significant than in Fig. 1.3. This is explained by the much smaller spectral width of the portrait compared to the fingerprint image. The sampling of the original image corresponds to a cutoff frequency that, as can be seen from Fig. 1.4b, is much higher than the true one. Therefore, the increase of the sampling interval illustrated in Fig. 1.4c and 1.4d, although it worsens the picture, does not lead to such destructive consequences as in the previous example.
