Spectral (Fourier) analysis and the fast Fourier transform

    The analysis method was based on the so-called Fourier series. The analysis begins with the decomposition of complex shapes into simple ones. Fourier showed that a complex waveform can be represented as a sum of simple waves. As a rule, the equations describing classical systems can easily be solved for each of these simple waves. Fourier then showed how these simple solutions can be summed to give the solution of the entire complex problem. (Mathematically speaking, a Fourier series is a method of representing a function as a sum of harmonics - sines and cosines - which is why Fourier analysis was also known as "harmonic analysis".)

    According to Fourier's hypothesis, practically any function of interest can be expanded into a trigonometric series. Let us consider how this expansion is carried out. Consider the following orthogonal system of functions on the interval [-π, π]:

    1, cos(t), sin(t), cos(2t), sin(2t), cos(3t), sin(3t), …, cos(nt), sin(nt), …

    Since this system of functions is orthogonal, the function f(t) on the interval [-π, π] can be approximated as follows:

    f(t) = α0 + α1 cos(t) + α2 cos(2t) + α3 cos(3t) + … + β1 sin(t) + β2 sin(2t) + β3 sin(3t) + …   (6)

    The coefficients α_n, β_n are calculated through the scalar product of the function and the corresponding basis function, according to the formulas discussed earlier, and are expressed as follows:

    α0 = <f, 1> = (1/2π) ∫_{-π}^{π} f(t) dt,

    α_n = <f, cos(nt)> = (1/π) ∫_{-π}^{π} f(t) cos(nt) dt,

    β_n = <f, sin(nt)> = (1/π) ∫_{-π}^{π} f(t) sin(nt) dt.

    Expression (6) can be written in compact form as follows:

    f(t) = a0/2 + a1 cos(t) + a2 cos(2t) + a3 cos(3t) + … + b1 sin(t) + b2 sin(2t) + b3 sin(3t) + …,   (7)

    where

    a0 = 2α0 = (1/π) ∫_{-π}^{π} f(t) dt,

    a_n = α_n = (1/π) ∫_{-π}^{π} f(t) cos(nt) dt,   (8)

    b_n = β_n = (1/π) ∫_{-π}^{π} f(t) sin(nt) dt.   (9)

    Since cos(0) = 1 at n = 0, the constant a0/2 expresses the general form of the coefficient a_n taken at n = 0.

    The coefficients a n and b n are called Fourier coefficients, and the representation of the function f(t) according to formula (7) is called the Fourier series expansion. Sometimes a Fourier series expansion presented in this form is called a real Fourier series expansion, and the coefficients are called real Fourier coefficients. The term “real” is introduced in order to distinguish this decomposition from a complex decomposition.
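The real Fourier coefficients above can be estimated numerically. The following sketch (not part of the original text; function and variable names are arbitrary) applies formulas (8)-(9) to an odd square wave by midpoint-rule integration, for which the theoretical coefficients are a_n = 0 and b_n = 4/(πn) for odd n:

```python
import numpy as np

# Illustrative sketch: real Fourier coefficients (8)-(9) on [-pi, pi],
# approximated by midpoint-rule numerical integration.
def fourier_coeffs(f, n_max, m=20000):
    dt = 2 * np.pi / m
    t = -np.pi + (np.arange(m) + 0.5) * dt       # midpoint grid on [-pi, pi]
    y = f(t)
    a = np.array([np.sum(y * np.cos(n * t)) * dt / np.pi for n in range(n_max + 1)])
    b = np.array([np.sum(y * np.sin(n * t)) * dt / np.pi for n in range(1, n_max + 1)])
    return a, b

square = lambda t: np.sign(np.sin(t))            # odd square wave
a, b = fourier_coeffs(square, 7)
# Theory: all a_n = 0; b_n = 4/(pi*n) for odd n, 0 for even n,
# i.e. b is approximately [1.273, 0, 0.424, 0, 0.255, 0, 0.182].
print(np.round(b, 3))
```

The slow 1/n decay of b_n reflects the discontinuities of the square wave; smooth functions have much faster-decaying coefficients.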

    Let us analyze expressions (8) and (9). The term a0/2 represents the average value of the function f(t) on the segment [-π, π], i.e. the constant component of the signal f(t). The coefficients a_n and b_n (for n > 0) are the amplitudes of the cosine and sine components of the function (signal) f(t) with angular frequency equal to n. In other words, these coefficients specify the magnitude of the frequency components of the signal. For example, when we speak of an audio signal with low frequencies (say, the sound of a bass guitar), we mean that the coefficients a_n and b_n are larger for smaller values of n, and vice versa: in high-frequency sound vibrations (say, the sound of a violin) they are larger for larger values of n.

    The oscillation with the longest period (lowest frequency), represented by the sum a1 cos(t) + b1 sin(t), is called the oscillation of the fundamental frequency, or the first harmonic. An oscillation with a period equal to half the period of the fundamental is the second harmonic; an oscillation with a period equal to 1/n of the period of the fundamental is the n-th harmonic. Thus, by expanding the function f(t) into a Fourier series we can pass from the time domain to the frequency domain. This transition is usually needed to reveal signal features that are "invisible" in the time domain.

    Note that formulas (8) and (9) apply to a periodic signal with period equal to 2π. In the general case, a periodic signal with period T can be expanded into a Fourier series using the segment [-T/2, T/2]. The period of the first harmonic equals T, its components take the form cos(2πt/T) and sin(2πt/T), and the components of the n-th harmonic are cos(2πnt/T) and sin(2πnt/T).

    The function f(t) on the interval [-T/2, T/2] can be approximated as follows:

    f(t) = a0/2 + a1 cos(2πt/T) + a2 cos(4πt/T) + a3 cos(6πt/T) + … + b1 sin(2πt/T) + b2 sin(4πt/T) + b3 sin(6πt/T) + …,   (10)

    a_n = (2/T) ∫_{-T/2}^{T/2} f(t) cos(2πnt/T) dt,

    b_n = (2/T) ∫_{-T/2}^{T/2} f(t) sin(2πnt/T) dt.

    If we denote the angular frequency of the first harmonic by ω0 = 2π/T, the components of the n-th harmonic take the form cos(nω0t), sin(nω0t), and

    f(t) = a0/2 + a1 cos(ω0t) + a2 cos(2ω0t) + a3 cos(3ω0t) + … + b1 sin(ω0t) + b2 sin(2ω0t) + b3 sin(3ω0t) + … =

    = a0/2 + Σ_{n=1}^{∞} [ a_n cos(nω0t) + b_n sin(nω0t) ],   (11)

    where the Fourier coefficients are calculated using the formulas

    a_n = (2/T) ∫_{-T/2}^{T/2} f(t) cos(nω0t) dt,

    b_n = (2/T) ∫_{-T/2}^{T/2} f(t) sin(nω0t) dt.

    Any wave of complex shape can be represented as a sum of simple waves.

    Joseph Fourier was very keen to describe in mathematical terms how heat passes through solid objects (see Heat Exchange). His interest in heat may have been sparked while he was in North Africa: Fourier accompanied Napoleon on the French expedition to Egypt and lived there for some time. To achieve his goal, Fourier had to develop new mathematical methods. The results of his research were published in 1822 in the work "Analytical Theory of Heat" (Théorie analytique de la chaleur), where he explained how to analyze complex physical problems by breaking them down into a series of simpler ones.

    The analysis method was based on the Fourier series already described above. In accordance with the principle of superposition, the analysis begins with the decomposition of a complex form into simple ones: for example, a change in the earth's surface is attributed to an earthquake, a change in the orbit of a comet to the attraction of several planets, a change in the flow of heat to its passage through an irregularly shaped obstacle of heat-insulating material.

    Before the advent of computers in the mid-twentieth century, Fourier methods and those like them were the best tools in the scientific arsenal for attacking the complexities of nature. Armed with these methods, scientists could go beyond the simple problems that yield to direct application of Newton's laws of mechanics and other fundamental equations. Many of the great achievements of Newtonian science in the 19th century would in fact have been impossible without the methods pioneered by Fourier. Subsequently, these methods were used to solve problems in fields ranging from astronomy to mechanical engineering.

    Jean-Baptiste Joseph FOURIER
    Jean-Baptiste Joseph Fourier, 1768-1830

    French mathematician. Born in Auxerre; at the age of nine he was left an orphan. Already at a young age he showed aptitude for mathematics. Fourier was educated at a church school and a military school, then worked as a mathematics teacher. Throughout his life he was actively involved in politics; was arrested in 1794 for defending victims of terror. After Robespierre's death he was released from prison; took part in the creation of the famous Polytechnic School (Ecole Polytechnique) in Paris; his position provided him with a springboard for advancement under Napoleon's regime. He accompanied Napoleon to Egypt and was appointed governor of Lower Egypt. Upon returning to France in 1801, he was appointed governor of one of the provinces. In 1822 he became permanent secretary of the French Academy of Sciences, an influential position in the French scientific world.

    FOURIER TRANSFORM AND CLASSICAL DIGITAL SPECTRAL ANALYSIS.
    Medvedev S.Yu., Ph.D.

    Introduction

    Spectral analysis is one of the signal processing methods that allows you to characterize the frequency composition of the measured signal. The Fourier transform is a mathematical framework that relates a temporal or spatial signal (or some model of that signal) to its frequency domain representation. Statistical methods play an important role in spectral analysis, since signals, as a rule, are random or noisy during propagation or measurement. If the basic statistical characteristics of a signal were precisely known, or they could be determined from a finite interval of this signal, then spectral analysis would represent a branch of “exact science.” However, in reality, from a signal segment one can only obtain an estimate of its spectrum. Therefore, the practice of spectral analysis is a kind of craft (or art?) of a rather subjective nature. The difference between the spectral estimates obtained as a result of processing the same signal segment by different methods can be explained by different assumptions made regarding the data, different averaging methods, etc. If the signal characteristics are not known a priori, it is impossible to say which of the estimates is better.

    Fourier transform - the mathematical basis of spectral analysis
    Let us briefly discuss the different types of Fourier transform (see the literature for more details).
    We begin with the Fourier transform of a continuous-time signal

    X(f) = ∫_{-∞}^{∞} x(t) exp(-j2πft) dt,   (1)

    which identifies the frequencies and amplitudes of the complex sinusoids (exponentials) into which an arbitrary oscillation is decomposed.
    The inverse transform is

    x(t) = ∫_{-∞}^{∞} X(f) exp(j2πft) df.   (2)


    The existence of the direct and inverse Fourier transforms (which we will further call the continuous-time Fourier transform, CTFT) is determined by a number of conditions. A sufficient one is absolute integrability of the signal:

    ∫_{-∞}^{∞} |x(t)| dt < ∞.   (3)

    A less restrictive sufficient condition is finiteness of the signal energy:

    ∫_{-∞}^{∞} |x(t)|² dt < ∞.   (4)


    Let us present a number of basic properties of the Fourier transform and of the functions used below, noting that the rectangular window is defined by the expression

    w(t) = 1 for |t| ≤ 1,  w(t) = 0 for |t| > 1,   (5)

    and the sinc function by the expression

    sinc(t) = sin(πt)/(πt).   (6)

    The time-domain sampling function (Dirac comb) is given by

    Ш_T(t) = Σ_{n=-∞}^{∞} δ(t - nT).   (7)

    This function is sometimes also called the periodic continuation function.

    Table 1. Basic properties of the CTFT and of the functions used

    Property / function             Function              Transform
    Linearity                       ag(t) + bh(t)         aG(f) + bH(f)
    Time shift                      h(t - t0)             H(f)exp(-j2πft0)
    Frequency shift (modulation)    h(t)exp(j2πf0t)       H(f - f0)
    Scaling                         (1/|a|)h(t/a)         H(af)
    Time-domain convolution         g(t)*h(t)             G(f)H(f)
    Frequency-domain convolution    g(t)h(t)              G(f)*H(f)
    Window function                 Aw(t/T)               2ATsinc(2Tf)
    Sinc function                   2AFsinc(2Ft)          Aw(f/F)
    Impulse function                Aδ(t)                 A
    Sampling function               Ш_T(t)                FШ_F(f),  F = 1/T

    Another important property is established by Parseval's theorem for two functions g(t) and h(t):

    ∫_{-∞}^{∞} g(t)h*(t) dt = ∫_{-∞}^{∞} G(f)H*(f) df.   (8)

    If we put g(t) = h(t), Parseval's theorem reduces to the energy theorem

    ∫_{-∞}^{∞} |h(t)|² dt = ∫_{-∞}^{∞} |H(f)|² df.   (9)

    Expression (9) is, in essence, simply a statement of the law of conservation of energy in the two domains (time and frequency). The left side of (9) is the total signal energy, so the function

    S_h(f) = |H(f)|²   (10)

    describes the frequency distribution of energy for the deterministic signal h(t) and is therefore called the spectral energy density (SED). Using the expressions

    A(f) = |H(f)|,   φ(f) = arctan[ Im H(f) / Re H(f) ],   (11)

    the amplitude and phase spectra of the signal h(t) can be calculated.

    Sampling and weighting operations

    In the next section we will introduce the discrete-time Fourier series (DTFS), otherwise known as the discrete Fourier transform (DFT), as a special case of the continuous-time Fourier transform (CTFT), using two basic signal processing operations: taking samples (sampling) and weighting with a window. Here we consider the influence of these operations on the signal and on its transform. Table 2 lists the functions that perform weighting and sampling.

    For uniform samples taken at an interval of T seconds, the sampling rate F equals 1/T Hz. Note that the weighting and sampling functions in the time domain are designated TW (time windowing) and TS (time sampling), and in the frequency domain FW (frequency windowing) and FS (frequency sampling), respectively.


    Table 2. Weighting and sampling functions

    Operation                                     Time function                 Transform
    Time-domain weighting (window width NT s)     TW = w(2t/NT - 1)             F{TW} = NTsinc(NTf)exp(-jπNTf)
    Frequency-domain weighting (width 1/T Hz)     (1/T)sinc(t/T)                FW = w(2Tf)
    Time sampling (interval T s)                  TS = T·Ш_T(t)                 F{TS} = Ш_{1/T}(f)
    Frequency sampling (interval 1/NT Hz)         NT·Ш_{NT}(t)                  FS = Ш_{1/NT}(f)

    Let us assume that samples are taken of a continuous real signal x(t) with a band-limited spectrum whose upper frequency equals F0. The CTFT of a real signal is always a symmetric function of full width 2F0; see Fig. 1.
    Samples of the signal x(t) can be obtained by multiplying this signal by the sampling function:

    x_S(t) = x(t) · T·Ш_T(t) = T Σ_{n=-∞}^{∞} x(nT) δ(t - nT).   (12)

    Fig. 1 - illustration of the time-domain sampling theorem for a real band-limited signal:
    a - original time function and its Fourier transform;
    b - sampling function in time and its Fourier transform;
    c - time samples of the original function and its periodically continued Fourier transform for the case F0 < 1/(2T);
    d - frequency window (ideal low-pass filter) and its Fourier transform (sinc function);
    e - original time function restored through convolution with the sinc function.


    According to the frequency-domain convolution theorem, the CTFT of the sampled signal x_S(t) is simply the convolution of the spectrum of x(t) with the Fourier transform of the time-sampling function (TS):

    X_S(f) = X(f) * Ш_{1/T}(f).   (13)

    The convolution of X(f) with the Fourier transform of the sampling function, F{TS} = Ш_{1/T}(f), simply periodically continues X(f) with a frequency interval of 1/T Hz. Therefore X_S(f) is the periodically extended spectrum of X(f). In general, taking samples in one domain (say, time) leads to periodic continuation in the transform domain (say, frequency). If the sampling rate is chosen too low (F < 2F0), the periodically continued spectra overlap with their neighbors. This overlap is called the aliasing effect in the frequency domain.
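The aliasing effect can be seen directly in the samples themselves. In this small sketch (an illustration added here, with arbitrary example frequencies), a 9 Hz sine sampled at only 10 Hz, i.e. below the 18 Hz required by F ≥ 2F0, produces exactly the same sample values as a 1 Hz sine:

```python
import numpy as np

# Illustrative sketch of aliasing: with Fs = 10 Hz, a 9 Hz sine aliases
# to 9 - 10 = -1 Hz and is indistinguishable from a 1 Hz sine.
Fs = 10.0                              # sampling rate, Hz (below Nyquist for 9 Hz)
n = np.arange(20)
t = n / Fs
x_fast = np.sin(2 * np.pi * 9 * t)     # the 9 Hz component, sampled
x_alias = -np.sin(2 * np.pi * 1 * t)   # its alias at -1 Hz, sampled
print(np.allclose(x_fast, x_alias))    # -> True
```

No processing applied after sampling can separate the two cases, which is why the band limit must be enforced before sampling.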
    In order to restore the original time signal from its samples, i.e. to interpolate a continuum of values between these samples, the sampled data can be passed through an ideal low-pass filter with a rectangular frequency response (Fig. 1d):

    X(f) = X_S(f) w(2Tf).   (14)

    As a result (see Fig. 1e), the original Fourier transform is restored. Using the convolution theorems in the time and frequency domains, we obtain

    x(t) = Σ_{n=-∞}^{∞} x(nT) sinc[(t - nT)/T].   (15)

    Expression (15) is the mathematical statement of the time-domain sampling theorem (the Whittaker-Kotelnikov-Shannon theorem, WKS), which states that by means of the interpolation formula (15) a real band-limited signal can be exactly restored from an infinite number of known time samples taken at the rate F ≥ 2F0. The dual of theorem (15) is the frequency-domain sampling theorem for signals of limited duration.
    The time-domain operation analogous to (14) is described by the expression

    x(t) = x_p(t) w(t/T0),   (16)

    where x_p(t) is the periodically continued signal corresponding to the sampled spectrum, and the corresponding transform relation is the interpolation formula

    X(f) = Σ_{n=-∞}^{∞} X(nF1) sinc[(f - nF1)/F1].   (17)

    Thus, the CTFT X(f) of a signal of limited duration can be unambiguously restored from equidistant samples of the spectrum of that signal if the chosen frequency sampling interval satisfies F1 ≤ 1/(2T0) Hz, where T0 is the signal duration.
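The interpolation formula (15) can be tried numerically. The following sketch (an added illustration; the sum is necessarily truncated to a finite number of samples, so the reconstruction is only approximate near the edges of the record) rebuilds a 13 Hz cosine from samples taken at 100 Hz:

```python
import numpy as np

# Sketch of WKS interpolation (15): sum of sinc functions centered on the
# samples. np.sinc(x) = sin(pi*x)/(pi*x), matching definition (6).
T = 0.01                                     # sampling interval, F = 100 Hz
n = np.arange(-500, 501)                     # finite stand-in for the infinite sum
samples = np.cos(2 * np.pi * 13 * n * T)     # 13 Hz < F/2 = 50 Hz, so no aliasing
t = np.linspace(-0.5, 0.5, 101)              # points between the samples
x_rec = np.array([np.sum(samples * np.sinc((ti - n * T) / T)) for ti in t])
err = np.max(np.abs(x_rec - np.cos(2 * np.pi * 13 * t)))
print(err)                                   # small truncation error
```

Because the sinc tails decay only as 1/t, truncating the sum leaves a small residual error; the infinite sum of (15) is exact.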

    Relationships between continuous and discrete transformations

    The transform pair for the conventional definition of the N-point discrete Fourier transform (DFT) of a time sequence x[n] and the corresponding N-point sequence of transform values X[k] is given by the expressions

    X[k] = Σ_{n=0}^{N-1} x[n] exp(-j2πnk/N),   (18)
    x[n] = (1/N) Σ_{k=0}^{N-1} X[k] exp(j2πnk/N).   (19)

    In order to obtain spectral estimates from data samples in the corresponding units of energy or power, we write the discrete-time Fourier series (DTFS), which can be considered an approximation of the continuous-time Fourier transform (CTFT) based on a finite number of data samples:

    X[k] = T Σ_{n=0}^{N-1} x[n] exp(-j2πnk/N),   (20)
    x[n] = (1/NT) Σ_{k=0}^{N-1} X[k] exp(j2πnk/N).   (21)

    In order to show the nature of the correspondence between the DTFS (functions discrete in both the time and frequency domains) and the CTFT (functions continuous in both domains), we need a sequence of four linear commutative operations: weighting in the time and frequency domains, and sampling in both the time and frequency domains. If a weighting operation is performed in one of these domains, then, by the convolution theorem, it corresponds to a filtering (convolution) operation with a sinc function in the other domain. Similarly, if sampling is performed in one domain, a periodic continuation takes place in the other. Since weighting and sampling are linear and commutative operations, they can be ordered in different ways, giving the same final result with different intermediate results. Figure 2 shows two possible sequences for performing these four operations.

    Fig. 2. Two possible sequences of two weighting operations and two sampling operations connecting the CTFT and the DTFS: FW - application of a window in the frequency domain; TW - application of a window in the time domain; FS - taking samples in the frequency domain; TS - taking samples in the time domain.
    1 - continuous-time Fourier transform, equation (1);
    4 - discrete-time Fourier transform, equation (22);
    5 - continuous-time Fourier series, equation (25);
    8 - discrete-time Fourier series, equation (27)


    As a result of performing the weighting and sampling operations at nodes 1, 4, 5 and 8, four different types of Fourier relations arise. Nodes at which the frequency-domain function is continuous correspond to Fourier transforms, while nodes at which the frequency-domain function is discrete correspond to Fourier series (see the literature for details).
    Thus at node 4, weighting in the frequency domain and sampling in the time domain generate the discrete-time Fourier transform (DTFT), which is characterized by a spectrum that is a periodic function of frequency with period 1/T Hz:

    X(f) = T Σ_{n=-∞}^{∞} x[n] exp(-j2πfnT),   (22)

    x[n] = ∫_{-1/(2T)}^{1/(2T)} X(f) exp(j2πfnT) df.   (23)

    Note that expression (22) defines a periodic function that coincides with the original transform of node 1 only in the frequency range from -1/(2T) to 1/(2T) Hz. Expression (22) is related to the Z-transform of the discrete sequence x[n], X_z(z) = Σ_{n=-∞}^{∞} x[n] z^(-n), by the relation

    X(f) = T · X_z(z) evaluated at z = exp(j2πfT).   (24)

    So the DTFT is simply the Z-transform calculated on the unit circle and multiplied by T.
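Relation (24) is easy to verify numerically. In this small sketch (an added illustration; the sequence and frequency are arbitrary examples), the Z-transform of a short sequence is evaluated at a point on the unit circle and compared with the DTFT sum of (22):

```python
import numpy as np

# Sketch of (24): DTFT = T * Z-transform on the unit circle z = exp(j*2*pi*f*T).
T = 0.125                                       # sampling interval, s
x = np.array([1.0, -2.0, 3.0, 0.5])
n = np.arange(len(x))
f = 0.7                                         # an arbitrary frequency, Hz
z = np.exp(2j * np.pi * f * T)
Xz = np.sum(x * z ** (-n))                      # Z-transform evaluated at z
Xdtft = T * np.sum(x * np.exp(-2j * np.pi * f * n * T))   # DTFT sum per (22)
print(np.isclose(T * Xz, Xdtft))                # -> True
```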
    If we move from node 1 to node 8 in Fig. 2 along the lower branch, at node 5 the operations of weighting in the time domain (restricting the signal duration) and sampling in the frequency domain generate the continuous-time Fourier series (CTFS). Using the properties and definitions of the functions given in Tables 1 and 2, we obtain the following transform pair:

    X[k] = (1/NT) ∫_{0}^{NT} x(t) exp(-j2πkt/NT) dt,   (25)

    x(t) = Σ_{k=-∞}^{∞} X[k] exp(j2πkt/NT).   (26)

    Note that expression (26) defines a periodic function which coincides with the original one (at node 1) only on the time interval from 0 to NT.
    Regardless of which of the two sequences of the four operations is chosen, the final result at node 8 is the same discrete-time Fourier series, which corresponds to the following transform pair obtained using the properties listed in Table 1:

    X[k] = T Σ_{n=0}^{N-1} x[n] exp(-j2πnk/N),   k = -N/2, …, N/2 - 1,   (27)

    x[n] = (1/NT) Σ_{k=-N/2}^{N/2-1} X[k] exp(j2πnk/N),   n = 0, …, N - 1.   (28)

    The energy theorem for this DTFS is

    T Σ_{n=0}^{N-1} |x[n]|² = (1/NT) Σ_{k=-N/2}^{N/2-1} |X[k]|²,   (29)

    and characterizes the energy of the sequence of N data samples. Both sequences x[n] and X[k] are periodic modulo N, so (28) can be written in the form

    x[n] = (1/NT) Σ_{k=0}^{N-1} X[k] exp(j2πnk/N),   (30)

    where 0 ≤ n ≤ N - 1. The factor T in (27)-(30) is necessary so that (27) and (28) are in fact an approximation of the integral transform over the region of integration:

    X(f_k) = ∫_{0}^{NT} x(t) exp(-j2πf_k t) dt ≈ T Σ_{n=0}^{N-1} x[nT] exp(-j2πnk/N),   f_k = k/(NT).   (31)
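The energy theorem (29) can be checked with an FFT. The sketch below (an added illustration; the random test sequence is arbitrary) uses the DTFS convention of (27), which is T times the usual unnormalized FFT:

```python
import numpy as np

# Numerical check of the DTFS energy theorem (29), with X[k] = T * FFT(x)
# per the convention of (27) (bin ordering does not affect the energy sum).
rng = np.random.default_rng(0)
N, T = 64, 0.01
x = rng.standard_normal(N) + 1j * rng.standard_normal(N)
X = T * np.fft.fft(x)                       # DTFS values per (27)
lhs = T * np.sum(np.abs(x) ** 2)            # energy computed in the time domain
rhs = np.sum(np.abs(X) ** 2) / (N * T)      # energy computed in the frequency domain
print(np.isclose(lhs, rhs))                 # -> True
```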

    Zero padding

    Through a process called zero padding, the discrete-time Fourier series can be modified to interpolate between the N values of the original transform. Let the available data samples x[0], …, x[N-1] be supplemented with the zero values x[N] = … = x[2N-1] = 0. The DTFS of this zero-padded 2N-point data sequence is given by

    X[k] = T Σ_{n=0}^{N-1} x[n] exp(-j2πnk/2N),   k = 0, …, 2N - 1,   (32)

    where the upper limit of the sum on the right is modified to account for the zero data. Setting k = 2m, we get

    X[2m] = T Σ_{n=0}^{N-1} x[n] exp(-j2πnm/N),   (33)

    where m = 0, 1, …, N - 1 defines the even values of X[k]. This shows that for even values of the index k the 2N-point discrete-time Fourier series reduces to the N-point series. Odd values of k give interpolated DTFS values located between the values of the original N-point DTFS. As more and more zeros are appended to the original N-point sequence, still more interpolated values can be obtained. In the limiting case of an infinite number of appended zeros, the DTFS can be considered the discrete-time Fourier transform of the N-point data sequence:

    X(f) = T Σ_{n=0}^{N-1} x[n] exp(-j2πfnT).   (34)

    Transform (34) corresponds to node 6 in Fig. 2.
    There is a misconception that zero padding improves resolution because it increases the length of the data sequence. However, as Fig. 3 shows, zero padding does not improve the resolution of the transform obtained from a given finite data sequence; it simply yields an interpolated transform of smoother shape. In addition, it removes ambiguities caused by narrow-band signal components whose frequencies lie between the N points corresponding to the estimated frequencies of the original DTFS. Zero padding also increases the accuracy of estimating the frequencies of spectral peaks. By spectral resolution we mean the ability to distinguish the spectral responses of two harmonic signals. A generally accepted rule of thumb, often used in spectral analysis, is that the frequency separation of distinguishable sinusoids cannot be less than the equivalent bandwidth of the window through which the segments of these sinusoids are observed.



    Fig. 3. Interpolation using zero padding:
    a - magnitude of the DTFS of a 16-point data record containing three sinusoids, without zero padding (ambiguities are visible: it is impossible to say how many sinusoids - two, three or four - are in the signal);
    b - magnitude of the DTFS of the same sequence after doubling the number of its samples by appending 16 zeros (the ambiguities are resolved, since all three sinusoids are distinguishable);
    c - magnitude of the DTFS of the same sequence after a fourfold increase in the number of its samples by appending zeros.
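The key fact behind equation (33) - that padding only interpolates and leaves the original transform values untouched - can be demonstrated directly. In this sketch (an added illustration; the three test frequencies are arbitrary), a 16-point record is padded to 64 points, and every fourth bin of the padded transform coincides with the original 16-point transform:

```python
import numpy as np

# Sketch of zero-padding interpolation (cf. Fig. 3): the padded transform
# passes through all original bins and only fills in values between them.
N = 16
n = np.arange(N)
x = (np.sin(2 * np.pi * 0.19 * n)
     + np.sin(2 * np.pi * 0.27 * n)
     + np.sin(2 * np.pi * 0.33 * n))      # three sinusoids, 16 samples
X16 = np.fft.fft(x)                       # original 16-point transform
X64 = np.fft.fft(x, 64)                   # same data padded with 48 zeros
print(np.allclose(X64[::4], X16))         # -> True: original bins preserved
```

Resolution is fixed by the 16-sample record length; the denser 64-point grid merely smooths the display.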


    The equivalent window bandwidth can be defined as

    Be = ∫_{-1/(2T)}^{1/(2T)} |W(f)|² df / |W(0)|²,

    where W(f) is the discrete-time Fourier transform of the window function, for example the rectangular one (5). Similarly, one can introduce the equivalent window duration

    Te = ∫ |w(t)|² dt / |w(0)|².

    It can be shown that the equivalent duration of a window (or of any other signal) and the equivalent bandwidth of its transform are mutually inverse quantities: TeBe = 1.
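For the rectangular window this reciprocity is easy to verify numerically. The definitions used in the sketch below, Be = ∫|W(f)|² df / |W(0)|² and Te = T·Σ|w[n]|² / |w[0]|², are common textbook choices assumed for this illustration (the article's own formulas did not survive extraction):

```python
import numpy as np

# Numerical check of Te*Be = 1 for a rectangular window of duration NT.
N, T = 32, 0.5                         # 32 samples, T = 0.5 s -> duration NT = 16 s
w = np.ones(N)                         # rectangular window samples
Te = T * np.sum(np.abs(w) ** 2) / np.abs(w[0]) ** 2     # equivalent duration = NT
M = 4096                               # dense frequency grid over one period 1/T
f = (np.arange(M) / M - 0.5) / T
W = T * np.exp(-2j * np.pi * np.outer(f, np.arange(N)) * T).dot(w)  # DTFT of w
W0 = T * np.sum(w)                     # W(f) at f = 0
Be = np.sum(np.abs(W) ** 2) * (1 / (M * T)) / np.abs(W0) ** 2  # equiv. bandwidth
print(round(Te * Be, 6))               # -> 1.0
```

Here Te = NT = 16 s and Be = 1/NT = 0.0625 Hz, consistent with the rule of thumb that a record of length NT cannot resolve sinusoids closer than about 1/NT Hz.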

    Fast Fourier Transform

    The fast Fourier transform (FFT) is not yet another type of Fourier transform but the name of a family of efficient algorithms designed for fast computation of the discrete-time Fourier series. The main problem arising in the practical implementation of the DTFS is the large number of computational operations, proportional to N². Although several efficient computational schemes that could significantly reduce the number of operations were proposed long before the advent of computers, the real revolution came with the publication in 1965 of the article by Cooley and Tukey containing a practical algorithm for fast (N·log2 N operations) computation of the DTFS. After that, many variants, improvements and extensions of the basic idea were developed, constituting the class of algorithms known as the fast Fourier transform. The basic idea of the FFT is to divide an N-point DTFS into two or more smaller DTFSs, each of which can be computed separately and then linearly combined with the others to obtain the DTFS of the original N-point sequence.
    Let us represent the discrete Fourier transform (DFT) in the form

    X[k] = (1/N) Σ_{n=0}^{N-1} x[n] W_N^(nk),   (35)

    where W_N = exp(-j2π/N) is called the twiddle factor (here and below in this section the sampling period is T = 1). Let us separate the elements of x[n] with even and odd indices:

    X[k] = (1/N) [ Σ_{m=0}^{N/2-1} x[2m] W_N^(2mk) + W_N^k Σ_{m=0}^{N/2-1} x[2m+1] W_N^(2mk) ].   (36)

    But since W_N² = W_{N/2}, (36) can be written in the form

    X[k] = X_e[k] + W_N^k X_o[k],   (37)

    where each term is a transform of length N/2:

    X_e[k] = (1/N) Σ_{m=0}^{N/2-1} x[2m] W_{N/2}^(mk),
    X_o[k] = (1/N) Σ_{m=0}^{N/2-1} x[2m+1] W_{N/2}^(mk).   (38)

    Note that the sequence W_{N/2}^(mk) is periodic in k with period N/2. Therefore, although the index k in expression (37) takes values from 0 to N-1, each of the sums need be computed only for k from 0 to N/2-1. Let us estimate the number of complex multiplications and additions required to compute the Fourier transform by algorithm (37)-(38). The two N/2-point Fourier transforms of formulas (38) involve 2(N/2)² multiplications and about the same number of additions. Combining the two N/2-point transforms by formula (37) requires another N multiplications and N additions. Therefore, computing the Fourier transform for all N values of k requires N + N²/2 multiplications and additions, whereas direct computation by formula (35) requires N² multiplications and additions. Already for N > 2 the inequality N + N²/2 < N² holds, so computation by algorithm (37)-(38) requires fewer mathematical operations than direct computation of the Fourier transform by formula (35). Since computing an N-point Fourier transform via two N/2-point transforms saves operations, each of the N/2-point DFTs should in turn be computed by reducing it to N/4-point transforms:

    X_e[k] = X_ee[k] + W_{N/2}^k X_eo[k],   (39)

    X_ee[k] = (1/N) Σ_{l=0}^{N/4-1} x[4l] W_{N/4}^(lk),
    X_eo[k] = (1/N) Σ_{l=0}^{N/4-1} x[4l+2] W_{N/4}^(lk).   (40)

    In this case, owing to the periodicity of the sequence W_{N/4}^(lk) in k with period N/4, the sums (40) need be computed only for values of k from 0 to N/4-1. Hence computing the sequence X[k] by formulas (37), (39) and (40) requires, as is easy to verify, only 2N + N²/4 multiplication and addition operations.
    Continuing along this path, the amount of computation of X[k] can be reduced further and further. After m = log2 N decompositions we arrive at two-point Fourier transforms of the form

    X_2[k] = X_1[p] + W_2^k X_1[q],   k = 0, 1,   (41)

    where the "one-point transforms" X_1 are simply scaled samples of the signal x[n]:

    X_1[q] = x[q]/N,   q = 0, 1, …, N-1.   (42)

    As a result, we can write the FFT algorithm, which for obvious reasons is called the decimation-in-time algorithm:

    X_2[k] = (x[p] + W_2^k x[q]) / N,   k = 0, 1;   p, q = 0, 1, …, N/2 - 1;

    at an intermediate stage, transforms of length N/M are combined into transforms of length 2N/M:

    X_{2N/M}[k] = X_{N/M}[k] + W_{2N/M}^k X'_{N/M}[k],   k = 0, 1, …, 2N/M - 1;

    and at the final stage

    X[k] = X_N[k] = X_e[k] + W_N^k X_o[k],   k = 0, 1, …, N - 1.   (43)

    At each stage of the computation, N complex multiplications and additions are performed. Since the number of decompositions of the original sequence into half-length subsequences equals log2 N, the total number of multiply-add operations in the FFT algorithm equals N·log2 N. For large N this is a substantial saving of computational operations compared to direct DFT computation; for example, for N = 2^10 = 1024 the number of operations is reduced by roughly a factor of 100.
    The decimation-in-time FFT algorithm considered here is based on computing the Fourier transform by forming subsequences of the input sequence x[n]. However, it is also possible to use a decomposition into subsequences of the Fourier transform X[k]. The FFT algorithm based on that procedure is called the decimation-in-frequency algorithm. More about the fast Fourier transform can be found, for example, in the literature.
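The even/odd splitting of (37)-(38) maps directly onto a short recursive implementation. The sketch below (an added illustration, checked against NumPy's FFT; note that np.fft.fft, like this sketch, omits the 1/N factor used in the text's DTFS convention) implements radix-2 decimation in time:

```python
import numpy as np

# Compact recursive radix-2 decimation-in-time FFT, following (37)-(38):
# split into even/odd subsequences, combine with twiddle factors W_N^k.
def fft_dit(x):
    N = len(x)                         # N must be a power of two
    if N == 1:
        return x.astype(complex)
    E = fft_dit(x[0::2])               # N/2-point transform of even-index samples
    O = fft_dit(x[1::2])               # N/2-point transform of odd-index samples
    W = np.exp(-2j * np.pi * np.arange(N // 2) / N)   # twiddle factors
    # W_N^(k + N/2) = -W_N^k gives the second half of the combined output:
    return np.concatenate([E + W * O, E - W * O])

rng = np.random.default_rng(1)
x = rng.standard_normal(16)
print(np.allclose(fft_dit(x), np.fft.fft(x)))   # -> True
```

Production FFTs use the same idea iteratively with bit-reversed ordering, avoiding recursion overhead, but the operation count is the same N·log2 N.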

    Random processes and power spectral density

    A discrete random process x[n] can be considered a certain set, or ensemble, of real or complex discrete-time (or spatial) sequences x_i[n], each of which could be observed as the result of some experiment (n is the time index, i is the number of the observation). The sequence obtained in one particular observation will be denoted x[n]. The operation of averaging over the ensemble (i.e. statistical averaging) will be denoted by the operator < >. Thus, <x[n]> is the mean value of the random process x[n] at time n. The autocorrelation of the random process at two different times n1 and n2 is defined by the expression r_xx[n1, n2] = <x[n1] x*[n2]>.

    A random process is called wide-sense stationary if its mean value is constant (independent of time) and its autocorrelation depends only on the difference of the time indices m = n1 - n2 (the time shift, or delay, between samples). Thus, a wide-sense stationary discrete random process x[n] is characterized by a constant mean value <x[n]> = μ_x and by an autocorrelation sequence (ACS)

    r_xx[m] = <x[n+m] x*[n]>.   (44)

    Let us note the following properties of the ACS:

    r_xx[0] ≥ |r_xx[m]|,   r_xx[-m] = r*_xx[m],   (45)

    which hold for all m.
    The power spectral density (PSD) is defined as the discrete-time Fourier transform (DTFT) of the autocorrelation sequence:

    P_xx(f) = T Σ_{m=-∞}^{∞} r_xx[m] exp(-j2πfmT).   (46)

    The PSD, whose width is assumed limited to ±1/(2T) Hz, is a periodic function of frequency with period 1/T Hz. The PSD describes the frequency distribution of the power of a random process. To confirm the name chosen for it, consider the inverse DTFT

    r_xx[m] = ∫_{-1/(2T)}^{1/(2T)} P_xx(f) exp(j2πfmT) df,   (47)

    evaluated at m = 0:

    r_xx[0] = ∫_{-1/(2T)}^{1/(2T)} P_xx(f) df = <|x[n]|²>.   (48)

    The autocorrelation at zero shift characterizes the average power of the random process. By (48), the area under the curve P_xx(f) gives the average power, so P_xx(f) is a density function (power per unit of frequency) that characterizes the frequency distribution of power. The transform pair (46), (47) is often called the Wiener-Khinchin theorem for the discrete-time case. Since r_xx[-m] = r*_xx[m], the PSD must be a real non-negative function. If the ACS is strictly real, then r_xx[-m] = r_xx[m] and the PSD can be written in the form of a Fourier cosine transform

    P_xx(f) = T [ r_xx[0] + 2 Σ_{m=1}^{∞} r_xx[m] cos(2πfmT) ],

    which also means that P_xx(f) = P_xx(-f), i.e. the PSD is an even function.
    Until now, when determining the average value, correlation and power spectral density of a random process, we used statistical averaging over the ensemble. However, in practice it is usually not possible to obtain an ensemble of implementations of the required process from which these statistical characteristics could be calculated. It is advisable to evaluate all statistical properties using one sample realization x(t), replacing y ensemble averaging time averaging. The property that allows such a replacement to be made is called ergodicity. A random process is said to be ergodic if, with probability equal to one, all its statistical characteristics can be predicted from one implementation from the ensemble using time averaging. In other words, the time averages of almost all possible implementations of the process converge with probability one to the same constant value - the ensemble average

    . (49)

    This limit, if it exists, converges to the true mean if and only if the time variance of the mean tends to zero, which means the following condition holds:

    . (50)


    Here c xx [m] is the true value of the covariance of process x[n].
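The replacement of ensemble averaging by time averaging can be illustrated with white noise, whose ensemble mean is zero; the record length and random seed below are arbitrary:

```python
import numpy as np

# One long realization of a zero-mean white-noise process: its time
# average approaches the ensemble mean (zero) as the record grows.
rng = np.random.default_rng(4)
x = rng.standard_normal(100_000)

m_short = abs(x[:100].mean())    # time average over a short record
m_long = abs(x.mean())           # time average over the full record
```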
    Similarly, averaging the product of the process samples x[n] taken at two points in time, one may expect that the time average converges to

    \lim_{N\to\infty}\frac{1}{2N+1}\sum_{n=-N}^{N} x[n+m]\,x^{*}[n] = E\{x[n+m]\,x^{*}[n]\} = r_{xx}[m].    (51)

    The ergodicity assumption allows us not only to define the mean value and the autocorrelation through time averaging, but also to give a similar definition for the power spectral density

    P_{xx}(f) = \lim_{N\to\infty} E\left\{\frac{T}{2N+1}\left|\sum_{n=-N}^{N} x[n]\, e^{-j2\pi f n T}\right|^{2}\right\}.    (52)

    This equivalent form of the PSD is obtained by statistically averaging the squared modulus of the DTFT of the weighted data, divided by the length of the data record, as the number of samples increases to infinity. Statistical averaging is necessary here because the DTFT itself is a random variable that changes for each realization of x[n]. In order to show that (52) is equivalent to the Wiener-Khinchin theorem, we represent the squared modulus of the DTFT as a product of two series and change the order of the summation and statistical averaging operations:


    P_{xx}(f) = \lim_{N\to\infty} \frac{T}{2N+1} \sum_{n=-N}^{N}\sum_{k=-N}^{N} E\{x[n]\,x^{*}[k]\}\, e^{-j2\pi f (n-k)T} = \lim_{N\to\infty} \frac{T}{2N+1} \sum_{n=-N}^{N}\sum_{k=-N}^{N} r_{xx}[n-k]\, e^{-j2\pi f (n-k)T}.    (53)

    Using the well-known identity

    \sum_{n=-N}^{N}\sum_{k=-N}^{N} g[n-k] = \sum_{m=-2N}^{2N}\left(2N+1-|m|\right) g[m],    (54)


    relation (53) can be reduced to the following:


    P_{xx}(f) = \lim_{N\to\infty} T \sum_{m=-2N}^{2N} \left(1 - \frac{|m|}{2N+1}\right) r_{xx}[m]\, e^{-j2\pi f m T} = T \sum_{m=-\infty}^{\infty} r_{xx}[m]\, e^{-j2\pi f m T}.    (55)

    Note that at the last stage of derivation (55) the assumption was used that the autocorrelation sequence “decays”, so that

    \lim_{N\to\infty} \frac{T}{2N+1}\sum_{m=-2N}^{2N} |m|\, r_{xx}[m]\, e^{-j2\pi f m T} = 0.    (56)

    The relationship between the two definitions of PSD (46) and (52) is clearly shown by the diagram presented in Figure 4.
    If the mathematical expectation operation in expression (52) is omitted, we obtain an estimate of the PSD

    \hat{P}_{xx}(f) = \frac{T}{N}\left|\sum_{n=0}^{N-1} x[n]\, e^{-j2\pi f n T}\right|^{2},    (57)

    which is called the sample spectrum.
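A sketch of the sample spectrum (57) using NumPy's FFT; the sampling interval, record length, and test frequency below are chosen for illustration only:

```python
import numpy as np

# Sample spectrum (57): P_hat(f) = (T/N) * |sum x[n] exp(-j 2 pi f n T)|^2,
# evaluated here on the DFT frequency grid f_k = k/(N*T).
T = 0.01            # sampling interval, s (assumed for illustration)
N = 256             # record length
n = np.arange(N)
x = np.sin(2 * np.pi * 12.5 * n * T)           # 12.5 Hz test sinusoid

P_hat = (T / N) * np.abs(np.fft.fft(x)) ** 2   # sample spectrum
freqs = np.fft.fftfreq(N, d=T)                 # frequency grid, Hz

# The peak of the sample spectrum falls on the sinusoid frequency
f_peak = abs(freqs[np.argmax(P_hat[:N // 2])])
```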

    Fig. 4. Relationship between two methods for estimating power spectral density

    Periodogram method of spectral estimation

    Above we introduced two formally equivalent methods for determining the power spectral density (PSD). The indirect method is based on using an infinite data sequence to calculate an autocorrelation sequence, whose Fourier transform gives the desired PSD. The direct method is based on calculating the squared modulus of the Fourier transform of an infinite data sequence, with appropriate statistical averaging. A PSD obtained without such averaging turns out to be unsatisfactory, since the root-mean-square error of such an estimate is comparable to its mean value. We now consider averaging methods that provide smooth and statistically stable spectral estimates from a finite number of samples. PSD estimates based on direct transformation of the data with subsequent averaging are called periodograms. PSD estimates for which correlation estimates are first formed from the initial data are called correlograms. With any PSD estimation method, the user must make many trade-off decisions in order to obtain statistically stable spectral estimates with the highest possible resolution from a finite number of samples. These trade-offs include, among others, the choice of window for weighting the data and the correlation estimates, and of time-domain and frequency-domain averaging parameters that balance the requirements of reducing the side lobes due to weighting, performing effective averaging, and providing acceptable spectral resolution. Fig. 5 shows a diagram of the main stages of the periodogram method.



    Fig. 5. Main stages of estimating PSD using the periodogram method

    The application of the method begins with the collection of N data samples, taken at an interval of T seconds per sample, followed by an (optional) detrending step. In order to obtain a statistically stable spectral estimate, the available data must be divided into segments (overlapping, if possible), and the sample spectra obtained for each segment must then be averaged. The parameters of this averaging are adjusted by appropriately selecting the number of samples per segment (NSAMP) and the number of samples by which the start of the next segment is shifted (NSHIFT), see Fig. 6. The number of segments is chosen depending on the required smoothness (variance) of the spectral estimate and the required spectral resolution. A small value of NSAMP yields more segments over which averaging is performed, and therefore estimates with lower variance, but also with lower frequency resolution. Increasing the segment length (the NSAMP parameter) increases the resolution, but at the cost of a larger variance of the estimate due to the smaller number of averages. The return arrow in Fig. 5 indicates the need for several repeated passes through the data with different segment lengths and numbers of segments, which makes it possible to obtain more information about the process under study.
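The segment-averaging step can be sketched as follows; NSAMP and NSHIFT are the parameters named in the text, while the function name and the white-noise test record are illustrative assumptions:

```python
import numpy as np

# Averaging sample spectra over (possibly overlapping) segments.
def averaged_periodogram(x, T, nsamp, nshift):
    """Average the sample spectra of segments of length nsamp,
    whose starts are shifted by nshift samples."""
    starts = range(0, len(x) - nsamp + 1, nshift)
    spectra = [(T / nsamp) * np.abs(np.fft.fft(x[s:s + nsamp])) ** 2
               for s in starts]
    return np.mean(spectra, axis=0), len(spectra)

rng = np.random.default_rng(0)
T = 0.01
x = rng.standard_normal(1024)          # white-noise test record

# Shorter segments -> more averages -> lower variance of the estimate,
# at the cost of coarser frequency resolution.
p_short, k_short = averaged_periodogram(x, T, nsamp=64, nshift=32)
p_long, k_long = averaged_periodogram(x, T, nsamp=512, nshift=256)

var_short = float(np.var(p_short))
var_long = float(np.var(p_long))
```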

    Fig.6. Splitting data into segments to calculate a periodogram

    Windows

    One important issue common to all classical spectral estimation methods is data weighting. Windowing is used to control the side-lobe effects in spectral estimates. Note that it is convenient to regard the available finite data record as a portion of the corresponding infinite sequence, seen through the applied window. Thus, the sequence of observed data x 0 [n] of N samples can be written mathematically as the product of the infinite sequence x[n] and a rectangular window function

    x_{0}[n] = x[n]\,\mathrm{rect}[n].
    This makes the obvious assumption that all unobserved samples are equal to zero, regardless of whether this is actually the case. The discrete-time Fourier transform of a weighted sequence is equal to the convolution of the transformations of the sequence x[n] and the rectangular window rect[n]

    X_{0}(f) = X(f) * D_{N}(f), where
    D_{N}(f) = T\, e^{-j\pi f T (N-1)}\,\frac{\sin(\pi f T N)}{\sin(\pi f T)}.

    The function D N (f), called the discrete sinc function, or Dirichlet kernel, is the DTFT of the rectangular window. The transform of the observed finite sequence is thus a distorted version of the transform of the infinite sequence. The effect of a rectangular window on a discrete-time sinusoid with frequency f 0 is illustrated in Fig. 7.


    Fig.7. Illustration of discrete-time Fourier transform bias due to leakage caused by data weighting: a, c - original and weighted sequences; b, d - their Fourier transforms.

    It can be seen from the figure that the sharp spectral peaks of the DTFT of the infinite sinusoidal sequence are broadened by the convolution with the window transform. Thus, the minimum width of the spectral peaks of a window-weighted sequence is determined by the width of the main lobe of that window's transform and is independent of the data. The side lobes of the window transform alter the amplitudes of adjacent spectral peaks; this effect is called leakage. Since the DTFT is a periodic function, the overlap of side lobes from neighboring periods can introduce additional bias; increasing the sampling rate reduces this side-lobe overlap. Similar distortions are naturally observed in the case of non-sinusoidal signals as well. Leakage not only introduces amplitude errors into the spectra of discrete signals, but can also mask the presence of weak signals.

    A number of other window functions have been proposed that reduce the side lobes relative to the rectangular window. Reducing the side-lobe level decreases the bias of the spectral estimate, but at the cost of widening the main lobe of the window spectrum, which naturally degrades the resolution. Consequently, here too a compromise must be chosen between the width of the main lobe and the level of the side lobes. Several parameters are used to evaluate the quality of windows. The traditional indicator is the main-lobe bandwidth at half power. The second uses the equivalent bandwidth introduced above. Two indicators are also used to evaluate the side-lobe characteristics: the first is their maximum level, the second is the decay rate, which characterizes how quickly the side lobes decrease with distance from the main lobe. Table 3 gives definitions of some commonly used discrete-time window functions, and Table 4 gives their characteristics.
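The main-lobe/side-lobe trade-off can be seen numerically. A sketch comparing the rectangular window with the Hamming window (the helper function and its peak-search logic are illustrative; the quoted dB levels are approximate):

```python
import numpy as np

# Rectangular vs Hamming window: the Hamming window trades a wider main
# lobe for much lower side lobes (less leakage).
N = 64
rect = np.ones(N)
hamming = np.hamming(N)

def sidelobe_level_db(w, nfft=4096):
    """Peak side-lobe level of a window transform, in dB below the main lobe."""
    W = np.abs(np.fft.fft(w, nfft))
    W /= W[0]                       # normalize the main-lobe peak to 1
    # walk down the main lobe to its first minimum, then take the
    # largest value beyond it (the highest side lobe)
    k = 1
    while W[k + 1] < W[k]:
        k += 1
    return 20 * np.log10(W[k:nfft // 2].max())

rect_sll = sidelobe_level_db(rect)        # roughly -13 dB
hamming_sll = sidelobe_level_db(hamming)  # roughly -43 dB
```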
    Table 3. Definitions of typical N-point discrete-time windows (the table itself, together with Table 4 listing characteristics such as the maximum side-lobe level in dB, is not reproduced here)

    P_{xx}(f) = T \sum_{m=-\infty}^{\infty} r_{xx}[m]\, e^{-j2\pi f m T}.    (46)

    The correlogram method of PSD estimation simply substitutes into expression (46) a finite sequence of autocorrelation estimates (a correlogram) in place of the infinite sequence of unknown true autocorrelation values. More information about the correlogram method of spectral estimation can be found in the literature listed below.
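The correlogram idea can be sketched as follows: estimate the autocorrelation sequence from the data, then transform it as in (46). A biased autocorrelation estimate and the cosine-transform form are used here; all parameter values are illustrative:

```python
import numpy as np

def correlogram_psd(x, T, max_lag):
    """Correlogram PSD sketch: biased ACS estimate + cosine transform."""
    N = len(x)
    # biased autocorrelation estimate r[m] = (1/N) sum x[n+m] x*[n]
    r = np.array([np.dot(x[m:], np.conj(x[:N - m])) / N
                  for m in range(max_lag + 1)])
    f = np.fft.rfftfreq(2 * max_lag, d=T)        # frequency grid, Hz
    # cosine-transform form, using r[-m] = conj(r[m]) for real data
    psd = T * (r[0].real + 2 * np.sum(
        [r[m].real * np.cos(2 * np.pi * f * m * T)
         for m in range(1, max_lag + 1)], axis=0))
    return f, psd

T = 0.01
n = np.arange(512)
x = np.sin(2 * np.pi * 10 * n * T)               # 10 Hz test sinusoid
f, psd = correlogram_psd(x, T, max_lag=128)
f_peak = f[np.argmax(psd)]                        # lies near 10 Hz
```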

    Literature

    1. Rabiner L., Gould B. Theory and application of digital signal processing. M.: Mir, 1978.

    2. Marple Jr. S.L. Digital spectral analysis and its applications: Transl. from English -M.: Mir, 1990.

    3. Goldberg L.M., Matyushkin B.D., Polyak M.N., Digital signal processing. - M.: Radio and Communications, 1990.

    4. Otnes R., Enokson L. Applied analysis of time series. - M.: Mir, 1982.

    1. Fourier transform and signal spectrum

    In many cases, the task of obtaining (calculating) the spectrum of a signal looks like this. There is an ADC that, at a sampling frequency Fd, converts the continuous signal arriving at its input during a time T into digital samples: N of them. The array of samples is then fed into a certain program that outputs N/2 numerical values (the programmer, who lifted the code off the Internet, assures us that it performs the Fourier transform).

    To check whether the program works correctly, we will form an array of samples as the sum of two sinusoids sin(10*2*pi*x)+0.5*sin(5*2*pi*x) and slip it into the program. The program drew the following:


    Fig.1 Graph of signal time function


    Fig.2 Signal spectrum graph

    On the spectrum graph there are two peaks (harmonics): 5 Hz with an amplitude of 0.5 V and 10 Hz with an amplitude of 1 V, exactly as in the formula of the original signal. Everything is fine; well done, programmer! The program works correctly.
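This check is easy to reproduce. A sketch assuming a 100 Hz sampling rate (the text does not state Fd; the 2/N amplitude normalization is the standard one for a real-valued DFT):

```python
import numpy as np

# 5 s of sin(10*2*pi*x) + 0.5*sin(5*2*pi*x), sampled at an assumed 100 Hz.
Fd = 100.0                    # sampling rate, Hz (assumption)
Tdur = 5.0                    # record length, s
t = np.arange(0, Tdur, 1 / Fd)
N = len(t)                    # 500 samples
x = np.sin(10 * 2 * np.pi * t) + 0.5 * np.sin(5 * 2 * np.pi * t)

X = np.fft.rfft(x)
amps = 2 * np.abs(X) / N                   # harmonic amplitudes
freqs = np.fft.rfftfreq(N, d=1 / Fd)       # grid step 1/Tdur = 0.2 Hz

a5 = amps[np.argmin(np.abs(freqs - 5))]    # amplitude at 5 Hz
a10 = amps[np.argmin(np.abs(freqs - 10))]  # amplitude at 10 Hz
```

Both test frequencies fall exactly on the 0.2 Hz grid, so the two lines come out clean: 0.5 at 5 Hz and 1.0 at 10 Hz.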

    This means that if we apply a real signal from a mixture of two sinusoids to the ADC input, we will get a similar spectrum consisting of two harmonics.

    In total: our real measured signal, 5 seconds long, digitized by the ADC, that is, represented by discrete samples, has a discrete, non-periodic spectrum.

    From a mathematical point of view, how many errors are there in this phrase?

    Now management has decided that 5 seconds is too long; let's measure the signal over 0.5 seconds.



    Fig.3 Graph of the function sin(10*2*pi*x)+0.5*sin(5*2*pi*x) for a measurement period of 0.5 sec


    Fig.4 Function spectrum

    Something doesn't seem right! The 10 Hz harmonic is drawn normally, but instead of the 5 Hz stick, several strange harmonics appear. We look on the Internet to see what’s going on...

    Well, they say that you need to add zeros to the end of the sample and the spectrum will be drawn as normal.


    Fig.5 Added zeros up to 5 seconds


    Fig.6 Received spectrum

    It is still not the same as it was at 5 seconds. We will have to deal with the theory. Let's go to Wikipedia, the source of knowledge.
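Before diving into the theory, the effect itself can be checked with a short script. With a 0.5 s record the frequency grid step is 1/0.5 = 2 Hz, so 10 Hz falls exactly on a grid point while 5 Hz falls between points and smears. A sketch, again assuming a 100 Hz sampling rate:

```python
import numpy as np

Fd = 100.0                          # assumed sampling rate, Hz
t = np.arange(0, 0.5, 1 / Fd)       # 50 samples
x = np.sin(10 * 2 * np.pi * t) + 0.5 * np.sin(5 * 2 * np.pi * t)

amps = 2 * np.abs(np.fft.rfft(x)) / len(t)
freqs = np.fft.rfftfreq(len(t), d=1 / Fd)    # 0, 2, 4, ... Hz

a10 = amps[freqs == 10][0]                   # still close to 1.0
# several non-zero "strange" lines appear around 5 Hz instead of one
near5 = amps[(freqs >= 2) & (freqs <= 8)]
n_strange = int(np.sum(near5 > 0.05))
```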

    2. Continuous function and its Fourier series representation

    Mathematically, our signal with a duration of T seconds is a certain function f(x) specified on the interval (0, T) (x in this case is time). Such a function can always be represented as a sum of harmonic functions (sine or cosine) of the form:

    f(x) = A_{0} + \sum_{k=1}^{\infty} A_{k}\cos\!\left(\frac{2\pi k x}{T} + \phi_{k}\right)    (1), where:

    k - number of the trigonometric function (number of the harmonic component, harmonic number)
    T - segment on which the function is defined (signal duration)
    Ak - amplitude of the k-th harmonic component
    φk - initial phase of the k-th harmonic component

    What does it mean to “represent a function as the sum of a series”? This means that by adding the values ​​of the harmonic components of the Fourier series at each point, we obtain the value of our function at this point.

    (More strictly, the root-mean-square deviation of the partial sums from the function f(x) tends to zero; but despite this mean-square convergence, the Fourier series of a function is, generally speaking, not required to converge to it pointwise. See https://ru.wikipedia.org/wiki/Fourier_Series.)
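Mean-square convergence is easy to observe numerically. A sketch using a square wave, whose Fourier series (odd sine harmonics with coefficients 4/(πk)) is standard; the RMS error shrinks as harmonics are added, even though the pointwise error at the jump persists (the Gibbs phenomenon):

```python
import numpy as np

T = 1.0
t = np.linspace(0, T, 1000, endpoint=False)
f = np.where(t < T / 2, 1.0, -1.0)          # square wave on (0, T)

def partial_sum(n_harm):
    """Partial Fourier sum of the square wave up to harmonic n_harm."""
    s = np.zeros_like(t)
    for k in range(1, n_harm + 1, 2):       # odd harmonics only
        s += (4 / (np.pi * k)) * np.sin(2 * np.pi * k * t / T)
    return s

rms5 = np.sqrt(np.mean((f - partial_sum(5)) ** 2))
rms50 = np.sqrt(np.mean((f - partial_sum(50)) ** 2))   # smaller than rms5
```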

    This series can also be written as:

    f(x) = \sum_{k=-\infty}^{+\infty} C_{k}\, e^{\,j 2\pi k x / T}    (2),
    where C_k is the k-th complex amplitude.

    The relationship between coefficients (1) and (3) is expressed by the following formulas:

    Note that all three of these representations of the Fourier series are completely equivalent. Sometimes, when working with Fourier series, it is more convenient to use exponentials of an imaginary argument instead of sines and cosines, that is, the Fourier transform in complex form. But it is convenient for us to use formula (1), where the Fourier series is presented as a sum of cosine waves with the corresponding amplitudes and phases. In any case, it is incorrect to say that the Fourier transform of a real signal yields complex harmonic amplitudes. As the Wiki correctly says, "the Fourier transform is an operation that maps one function of a real variable to another function, also of a real variable."

    Total:
    The mathematical basis for spectral analysis of signals is the Fourier transform.

    The Fourier transform allows you to represent a continuous function f(x) (signal), defined on the segment (0, T) as the sum of an infinite number (infinite series) of trigonometric functions (sine and/or cosine) with certain amplitudes and phases, also considered on the segment (0, T). Such a series is called a Fourier series.

    Let us note some more points, the understanding of which is required for the correct application of the Fourier transform to signal analysis. If we consider the Fourier series (the sum of sinusoids) on the entire X-axis, we can see that outside the segment (0, T) the function represented by the Fourier series will periodically repeat our function.

    For example, in the graph of Fig. 7, the original function is defined on the segment (-T/2, +T/2), while the Fourier series represents a periodic function defined on the entire x-axis.

    This happens because sinusoids themselves are periodic functions, and accordingly their sum will be a periodic function.


    Fig.7 Representation of a non-periodic original function by a Fourier series

    Thus:

    Our original function is continuous, non-periodic, defined on a certain segment of length T.
    The spectrum of this function is discrete, that is, it is presented in the form of an infinite series of harmonic components - the Fourier series.
    In fact, the Fourier series defines a certain periodic function that coincides with ours on the segment (0, T), but for us this periodicity is not significant.

    The periods of the harmonic components are multiples of the value of the segment (0, T) on which the original function f(x) is defined. In other words, the harmonic periods are multiples of the duration of the signal measurement. For example, the period of the first harmonic of the Fourier series is equal to the interval T on which the function f(x) is defined. The period of the second harmonic of the Fourier series is equal to the interval T/2. And so on (see Fig. 8).


    Fig.8 Periods (frequencies) of the harmonic components of the Fourier series (here T = 2π)

    Accordingly, the frequencies of the harmonic components are multiples of 1/T. That is, the frequencies of the harmonic components Fk are equal to Fk = k/T, where k ranges from 0 to ∞: k = 0 gives F0 = 0 (the constant component); k = 1, F1 = 1/T; k = 2, F2 = 2/T; k = 3, F3 = 3/T; ...; Fk = k/T.

    Let our original function be a signal recorded during T=1 sec. Then the period of the first harmonic will be equal to the duration of our signal T1=T=1 sec and the harmonic frequency will be 1 Hz. The period of the second harmonic will be equal to the signal duration divided by 2 (T2=T/2=0.5 sec) and the frequency will be 2 Hz. For the third harmonic T3=T/3 sec and the frequency is 3 Hz. And so on.

    The step between harmonics in this case is 1 Hz.

    Thus, a signal with a duration of 1 second can be decomposed into harmonic components (obtaining a spectrum) with a frequency resolution of 1 Hz.
    To increase the resolution by 2 times to 0.5 Hz, you need to increase the measurement duration by 2 times - up to 2 seconds. A signal lasting 10 seconds can be decomposed into harmonic components (to obtain a spectrum) with a frequency resolution of 0.1 Hz. There are no other ways to increase frequency resolution.
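The link between record length and resolution can be demonstrated with two close tones; the frequencies (10 and 10.5 Hz), the sampling rate, and the peak-counting helper are illustrative assumptions:

```python
import numpy as np

# Two tones 0.5 Hz apart merge on a 1 s record (1 Hz grid) but are
# separated on a 4 s record (0.25 Hz grid).
Fd = 100.0

def n_peaks(duration):
    """Count significant local maxima in the 9-12 Hz band of the spectrum."""
    t = np.arange(0, duration, 1 / Fd)
    x = np.sin(2 * np.pi * 10 * t) + np.sin(2 * np.pi * 10.5 * t)
    amps = 2 * np.abs(np.fft.rfft(x)) / len(t)
    freqs = np.fft.rfftfreq(len(t), d=1 / Fd)
    band = amps[(freqs >= 9) & (freqs <= 12)]
    interior = band[1:-1]
    peaks = (interior > band[:-2]) & (interior > band[2:]) & (interior > 0.1)
    return int(np.sum(peaks))

p_1s = n_peaks(1.0)    # the tones merge into a single peak
p_4s = n_peaks(4.0)    # two distinct peaks
```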

    There is a way to artificially increase the duration of a signal by adding zeros to the array of samples. But it does not increase the actual frequency resolution.

    3. Discrete signals and discrete Fourier transform

    With the development of digital technology, the methods of storing measurement data (signals) have also changed. If previously a signal could be recorded on a tape recorder and stored on tape in analog form, now signals are digitized and stored in files in computer memory as a set of numbers (samples).

    The usual scheme for measuring and digitizing a signal is as follows.


    Fig.9 Diagram of the measuring channel

    The signal from the measuring transducer arrives at the ADC during a period of time T. The signal samples (sampling) obtained during the time T are transmitted to the computer and stored in memory.


    Fig. 10 Digitized signal - N samples received during time T

    What are the requirements for signal digitization parameters? A device that converts an input analog signal into a discrete code (digital signal) is called an analog-to-digital converter (ADC) (Wiki).

    One of the main parameters of an ADC is the maximum sampling frequency (sampling rate): the rate at which a time-continuous signal is sampled. It is measured in hertz (Wiki).

    According to Kotelnikov's theorem (the sampling theorem), if a continuous signal has a spectrum limited by the frequency Fmax, then it can be completely and uniquely reconstructed from its discrete samples taken at intervals T = 1/Fd, i.e. with frequency Fd ≥ 2·Fmax, where Fd is the sampling frequency and Fmax is the maximum frequency of the signal spectrum. In other words, the digitization frequency (the ADC sampling frequency) must be at least twice the maximum frequency of the signal that we want to measure.

    What will happen if we take samples with a lower frequency than required by Kotelnikov’s theorem?

    In this case, the "aliasing" effect occurs (also known as the stroboscopic effect, or moiré effect), in which a high-frequency signal, after digitization, turns into a low-frequency signal that does not actually exist. In Fig. 11 the red high-frequency sine wave is the real signal, and the blue sinusoid of lower frequency is a fictitious signal that arises because more than half a period of the high-frequency signal passes during one sampling interval.


    Fig. 11. The appearance of a false low-frequency signal at an insufficiently high sampling rate

    To avoid the aliasing effect, a special anti-aliasing filter is placed in front of the ADC - a low-pass filter (LPF), which passes frequencies below half the ADC sampling frequency, and cuts off higher frequencies.
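Aliasing can be demonstrated in a couple of lines; the 90 Hz tone and the 100 Hz sampling rate below are illustrative:

```python
import numpy as np

# A 90 Hz sine sampled at 100 Hz is indistinguishable from a sine at the
# alias frequency 90 - 100 = -10 Hz (i.e. a 10 Hz tone with inverted sign).
Fd = 100.0
n = np.arange(1000)
t = n / Fd

x_high = np.sin(2 * np.pi * 90 * t)
x_alias = np.sin(2 * np.pi * (-10) * t)      # = -sin(2*pi*10*t)

indistinguishable = bool(np.allclose(x_high, x_alias, atol=1e-9))
```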

    In order to calculate the spectrum of a signal from its discrete samples, the discrete Fourier transform (DFT) is used. Note once again that the spectrum of a discrete signal is, by definition, limited by the frequency Fmax, which is less than half the sampling frequency Fd. Therefore, the spectrum of a discrete signal can be represented by the sum of a finite number of harmonics, in contrast to the infinite sum of the Fourier series of a continuous signal, whose spectrum can be unlimited. According to Kotelnikov's theorem, the maximum harmonic frequency must be such that at least two samples fall within its period; therefore, the number of harmonics equals half the number of samples of the discrete signal. That is, if there are N samples in the record, the number of harmonics in the spectrum will be N/2.

    Let us now consider the discrete Fourier transform (DFT):

    x[k] = A_{0} + \sum_{s=1}^{N/2} A_{s}\cos\!\left(\frac{2\pi s k}{N} + \phi_{s}\right)

    Comparing this with the Fourier series

    f(x) = A_{0} + \sum_{k=1}^{\infty} A_{k}\cos\!\left(\frac{2\pi k x}{T} + \phi_{k}\right)

    we see that they coincide, except that time in the DFT is discrete in nature and the number of harmonics is limited to N/2, half the number of samples.

    The DFT formulas are written in dimensionless integer variables k, s, where k is the sample number and s is the number of the spectral component.
    The value s shows the number of complete harmonic oscillations over the period T (the duration of the signal measurement). The discrete Fourier transform is used to find the amplitudes and phases of the harmonics numerically, i.e. "on a computer".
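A direct implementation of the DFT sum in the integer variables k and s makes the "on a computer" part concrete; the complex-exponential form used here is the standard DFT definition, and it matches NumPy's FFT:

```python
import numpy as np

# Direct DFT: X[s] = sum_k x[k] * exp(-j * 2*pi * k * s / N),
# k = sample number, s = spectral component number.
def dft(x):
    N = len(x)
    k = np.arange(N)
    return np.array([np.sum(x * np.exp(-2j * np.pi * k * s / N))
                     for s in range(N)])

rng = np.random.default_rng(1)
x = rng.standard_normal(32)
matches_fft = bool(np.allclose(dft(x), np.fft.fft(x)))
```

The direct sum costs O(N²) operations; the FFT computes the same values in O(N log N), which is why it is used in practice.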

    Returning to the results obtained at the beginning. As mentioned above, when expanding a non-periodic function (our signal) into a Fourier series, the resulting Fourier series actually corresponds to a periodic function with period T (Fig. 12).


    Fig. 12 Periodic function f(x) with period T0, with measurement period T>T0

    As can be seen in Fig. 12, the function f(x) is periodic with period T0. However, due to the fact that the duration of the measurement sample T does not coincide with the period of the function T0, the function obtained as a Fourier series has a discontinuity at point T. As a result, the spectrum of this function will contain a large number of high-frequency harmonics. If the duration of the measurement sample T coincided with the period of the function T0, then the spectrum obtained after the Fourier transform would contain only the first harmonic (sinusoid with a period equal to the sampling duration), since the function f(x) is a sinusoid.

    In other words, the DFT program “does not know” that our signal is a “piece of a sinusoid”, but tries to represent a periodic function in the form of a series, which has a discontinuity due to the inconsistency of individual pieces of the sinusoid.

    As a result, harmonics appear in the spectrum, which should sum up the shape of the function, including this discontinuity.

    Thus, in order to obtain the “correct” spectrum of a signal, which is the sum of several sinusoids with different periods, it is necessary that an integer number of periods of each sinusoid fit into the signal measurement period. In practice, this condition can be met for a sufficiently long duration of signal measurement.


    Fig. 13 Example of the function and spectrum of the gearbox kinematic error signal

    With a shorter duration, the picture will look “worse”:


    Fig. 14 Example of the function and spectrum of a rotor vibration signal

    In practice, it can be difficult to understand where are the “real components” and where are the “artifacts” caused by the non-multiple periods of the components and the duration of the signal sampling or “jumps and breaks” in the signal shape. Of course, the words “real components” and “artifacts” are put in quotation marks for a reason. The presence of many harmonics on the spectrum graph does not mean that our signal actually “consists” of them. This is the same as thinking that the number 7 “consists” of the numbers 3 and 4. The number 7 can be represented as the sum of the numbers 3 and 4 - this is correct.

    So our signal... or rather not even “our signal”, but a periodic function composed by repeating our signal (sampling) can be represented as a sum of harmonics (sine waves) with certain amplitudes and phases. But in many cases that are important for practice (see the figures above), it is indeed possible to associate the harmonics obtained in the spectrum with real processes that are cyclic in nature and make a significant contribution to the shape of the signal.

    Some results

    1. A real measured signal with a duration of T seconds, digitized by an ADC, that is, represented by a set of discrete samples (N pieces), has a discrete non-periodic spectrum, represented by a set of harmonics (N/2 pieces).

    2. The signal is represented by a set of real values ​​and its spectrum is represented by a set of real values. Harmonic frequencies are positive. The fact that it is more convenient for mathematicians to represent the spectrum in complex form using negative frequencies does not mean that “this is correct” and “this should always be done.”

    3. A signal measured over a time interval T is determined only over a time interval T. What happened before we started measuring the signal, and what will happen after that, is unknown to science. And in our case, it’s not interesting. The DFT of a time-limited signal gives its “true” spectrum, in the sense that, under certain conditions, it allows one to calculate the amplitude and frequency of its components.


    Spectral analysis

    Spectral analysis is a wide class of data processing methods based on their frequency representation, or spectrum. The spectrum is obtained by decomposing the original function, which depends on time (time series) or spatial coordinates (for example, an image), into the basis of some periodic function. Most often, for spectral processing, the Fourier spectrum obtained on the basis of the sine basis (Fourier decomposition, Fourier transform) is used.

    The main meaning of the Fourier transform is that the original non-periodic function of an arbitrary shape, which cannot be described analytically and is therefore difficult to process and analyze, is represented as a set of sines or cosines with different frequencies, amplitudes and initial phases.

    In other words, a complex function is transformed into many simpler ones. Each sine wave (or cosine wave) with a certain frequency and amplitude, obtained as a result of Fourier expansion, is called spectral component or harmonic. The spectral components form Fourier spectrum.

    Visually, the Fourier spectrum is presented as a graph on which the circular frequency, denoted by the Greek letter ω (omega), is plotted along the horizontal axis, and the amplitude of the spectral components, usually denoted by the Latin letter A, along the vertical axis. Each spectral component can then be represented as a line whose horizontal position corresponds to its frequency and whose height corresponds to its amplitude. The harmonic with zero frequency is called the constant component (in the time representation it is a straight line).

    Even a simple visual analysis of the spectrum can tell a lot about the nature of the function on the basis of which it was obtained. It is intuitively clear that rapid changes in the initial data give rise to components in the spectrum with high frequency, and slow ones - with low. Therefore, if the amplitude of its components decreases rapidly with increasing frequency, then the original function (for example, a time series) is smooth, and if the spectrum contains high-frequency components with large amplitude, then the original function will contain sharp fluctuations. Thus, for a time series, this may indicate a large random component, instability of the processes it describes, or the presence of noise in the data.

    Spectral processing is based on spectrum manipulation. Indeed, if you reduce (suppress) the amplitude of high-frequency components, and then, based on the changed spectrum, restore the original function by performing an inverse Fourier transform, then it will become smoother due to the removal of the high-frequency component.

    For a time series, for example, this means removing information on daily sales, which is highly susceptible to random factors, and leaving behind more consistent trends, such as seasonality. You can, on the contrary, suppress low-frequency components, which will remove slow changes and leave only fast ones. In the case of a time series, this will mean suppression of the seasonal component.

    By using the spectrum in this way, you can achieve the desired change in the original data. The most common use is to smooth time series by removing or reducing the amplitude of high-frequency components in the spectrum.
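The smoothing recipe just described (suppress high-frequency components, then inverse-transform) can be sketched directly; the test series, cutoff frequency, and seed below are illustrative:

```python
import numpy as np

# Crude low-pass smoothing via the spectrum: zero out components above a
# cutoff and inverse-transform.
rng = np.random.default_rng(2)
t = np.linspace(0, 1, 500, endpoint=False)
trend = np.sin(2 * np.pi * 2 * t)                 # slow component
noisy = trend + 0.3 * rng.standard_normal(500)    # plus noise

X = np.fft.rfft(noisy)
freqs = np.fft.rfftfreq(500, d=t[1] - t[0])
X[freqs > 5] = 0                                  # cutoff at 5 Hz (assumed)
smoothed = np.fft.irfft(X, n=500)

err_before = float(np.mean((noisy - trend) ** 2))
err_after = float(np.mean((smoothed - trend) ** 2))   # much smaller
```

Note that simply zeroing spectral bins is the bluntest possible frequency response; practical filters use smoother transitions to avoid ringing.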

    To manipulate spectra, filters are used: algorithms that can control the shape of the spectrum, suppressing or enhancing its components. The main property of any filter is its amplitude-frequency response (frequency response), whose shape determines the transformation of the spectrum.

    If a filter passes only spectral components with a frequency below a certain cutoff frequency, then it is called a low-pass filter (LPF), and it can be used to smooth the data, clear it of noise and anomalous values.

    If a filter passes spectral components above a certain cutoff frequency, then it is called a high-pass filter (HPF). It can be used to suppress slow changes, such as seasonality in data series.

    In addition, many other types of filters are used: band-pass filters, band-stop (notch) filters, as well as more complex ones used in signal processing in radio electronics. By selecting the type and the shape of the frequency response of a filter, you can achieve the desired transformation of the original data through spectral processing.

    When performing frequency filtering of data for smoothing and noise removal, the low-pass filter bandwidth must be chosen correctly. If it is too wide, the degree of smoothing will be insufficient and the noise will not be completely suppressed; if it is too narrow, changes that carry useful information may be suppressed along with the noise. While in technical applications there are strict criteria for determining the optimal characteristics of filters, in analytical technologies mainly experimental methods have to be used.

    Spectral analysis is one of the most effective and well-developed data processing methods. Frequency filtering is just one of its many applications. In addition, it is used in correlation and statistical analysis, synthesis of signals and functions, building models, etc.