• Radio engineering signals: signal theory, classification, basic characteristics of signals. Source: Gonorovsky I. S., Radio Engineering Circuits and Signals: A Textbook for Universities. Basic radio engineering processes and their characteristics.

    Chapter 1 Elements of the general theory of radio signals

    The term “signal” is often found not only in scientific and technical matters, but also in everyday life. Sometimes, without thinking about the rigor of terminology, we identify concepts such as signal, message, information. This usually does not lead to misunderstandings, since the word “signal” comes from the Latin term “signum” - “sign”, which has a wide range of meanings.

    However, when embarking on a systematic study of theoretical radio engineering, one should, if possible, clarify the substantive meaning of the concept “signal”. In accordance with the accepted tradition, a signal is the process of changing the physical state of an object over time, which serves to display, register and transmit messages. In the practice of human activity, messages are inextricably linked with the information contained in them.

    The range of issues built on the concepts of “message” and “information” is very wide. It is the object of close attention of engineers, mathematicians, linguists, and philosophers. In the 1940s, C. Shannon completed the initial stage of development of a profound scientific field: information theory.

    It should be said that the problems mentioned here, as a rule, go far beyond the scope of the course “Radio Engineering Circuits and Signals”. Therefore, this book will not outline the relationship that exists between the physical appearance of a signal and the meaning of the message contained in it. Moreover, the question of the value of the information contained in the message and, ultimately, in the signal will not be discussed.

    1.1. Classification of radio signals

    When starting to study any new objects or phenomena, science always strives to carry out their preliminary classification. Below, such an attempt is made in relation to signals.

    The main goal is to develop classification criteria, and also, which is very important for what follows, to establish certain terminology.

    Description of signals using mathematical models.

    Signals as physical processes can be studied using various instruments and devices - electronic oscilloscopes, voltmeters, receivers. This empirical method has a significant drawback. Phenomena observed by an experimenter always appear as private, isolated manifestations, deprived of the degree of generalization that would allow us to judge their fundamental properties and predict results under changed conditions.

    In order to make signals objects of theoretical study and calculations, one must indicate a method for their mathematical description or, in the language of modern science, create a mathematical model of the signal being studied.

    A mathematical model of a signal can be, for example, a functional dependence, the argument of which is time. As a rule, in the future such mathematical models of signals will be denoted by Latin symbols s(t), u(t), f(t), etc.

    Creating a model (in this case, a physical signal) is the first significant step towards systematically studying the properties of a phenomenon. First of all, the mathematical model allows us to abstract from the specific nature of the signal carrier. In radio engineering, the same mathematical model equally successfully describes current, voltage, electromagnetic field strength, etc.

    An essential aspect of the abstract method, based on the concept of a mathematical model, is that we get the opportunity to describe precisely those properties of signals that objectively act as decisively important. In this case, a large number of secondary signs are ignored. For example, in the overwhelming majority of cases it is extremely difficult to select exact functional dependencies that would correspond to the electrical vibrations observed experimentally. Therefore, the researcher, guided by the totality of information available to him, selects from the available arsenal of mathematical signal models those that in a particular situation best and most simply describe the physical process. So, choosing a model is a largely creative process.

    Functions that describe signals can take both real and complex values. Therefore, in what follows we will often speak of real and complex signals. The use of one form or the other is a matter of mathematical convenience.

    Knowing the mathematical models of signals, you can compare these signals with each other, establish their identity and difference, and carry out classification.

    One-dimensional and multidimensional signals.

    A typical signal for radio engineering is the voltage at the terminals of a circuit or the current in a branch.

    Such a signal, described by a single function of time, is usually called one-dimensional. In this book, one-dimensional signals will be studied most often. However, it is sometimes convenient to introduce multidimensional, or vector, signals of the form

    s(t) = {s1(t), s2(t), ..., sN(t)},

    formed by some set of one-dimensional signals. The integer N is called the dimension of such a signal (terminology borrowed from linear algebra).

    For example, the system of voltages at the terminals of a multi-terminal network serves as a multidimensional signal.

    Note that a multidimensional signal is an ordered collection of one-dimensional signals. Therefore, in the general case, signals with different orderings of the components are not equal to each other: {s1(t), s2(t)} ≠ {s2(t), s1(t)}.

    Multidimensional signal models are especially useful in cases where the functioning of complex systems is analyzed using a computer.
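    The ordering property noted above can be made concrete in a short sketch. This is a minimal illustration, not from the text: the component functions u1, u2 and the helper vector_signal are invented names, and an N-dimensional signal is modeled simply as an ordered tuple of one-dimensional signals.

```python
import math

def u1(t):
    return math.sin(t)   # first one-dimensional component (e.g., a voltage)

def u2(t):
    return math.cos(t)   # second one-dimensional component

def vector_signal(components, t):
    """Evaluate an N-dimensional (vector) signal at time t; N = len(components)."""
    return tuple(f(t) for f in components)

s_a = vector_signal((u1, u2), 0.0)   # ordering {u1, u2}
s_b = vector_signal((u2, u1), 0.0)   # ordering {u2, u1}: same components, reordered
print(s_a, s_b)   # the two vector signals differ, although the components coincide
```

    The example shows why a multidimensional signal must be treated as an ordered collection: exchanging components yields a different signal.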

    Deterministic and random signals.

    Another principle of classification of radio signals is based on the possibility or impossibility of exactly predicting their instantaneous values at any moment of time.

    If the mathematical model of the signal allows such a prediction to be made, then the signal is called deterministic. The methods for specifying it can be varied - a mathematical formula, a computational algorithm, and finally, a verbal description.

    Strictly speaking, deterministic signals, as well as deterministic processes corresponding to them, do not exist. The inevitable interaction of the system with the physical objects surrounding it, the presence of chaotic thermal fluctuations and simply incomplete knowledge about the initial state of the system - all this forces us to consider real signals as random functions of time.

    In radio engineering, random signals often manifest themselves as interference, preventing the extraction of information from a received vibration. The problem of combating interference and increasing the noise immunity of radio reception is one of the central problems of radio engineering.

    The concept of a "random signal" may seem contradictory. However, this is not true. For example, the signal at the output of a radio telescope receiver aimed at a source of cosmic radiation represents chaotic oscillations, which, however, carry a variety of information about a natural object.

    There is no insurmountable boundary between deterministic and random signals.

    Very often, in conditions where the level of interference is significantly less than the level of a useful signal with a known shape, a simpler deterministic model turns out to be quite adequate to the task.

    The methods of statistical radio engineering, developed in recent decades to analyze the properties of random signals, have many specific features and are based on the mathematical apparatus of probability theory and the theory of random processes. A number of chapters of this book will be entirely devoted to this range of issues.

    Pulse signals.

    A very important class of signals for radio engineering are pulses, i.e., oscillations that exist only within a finite interval of time. A distinction is made between video pulses (Fig. 1.1, a) and radio pulses (Fig. 1.1, b). The difference between these two main types of pulses is as follows: if s_v(t) is a video pulse, then the corresponding radio pulse is s_r(t) = s_v(t) cos(ω0 t + φ0) (the frequency ω0 and the initial phase φ0 are arbitrary). The function s_v(t) is then called the envelope of the radio pulse, and the function cos(ω0 t + φ0) its filling.

    Fig. 1.1. Pulse signals and their characteristics: a - video pulse; b - radio pulse; c - determination of the numerical parameters of a pulse

    In technical calculations, instead of a full mathematical model that takes into account the details of the “fine structure” of the pulse, numerical parameters are often used that give a simplified idea of its shape. Thus, for a video pulse close in shape to a trapezoid (Fig. 1.1, c), it is customary to specify its amplitude (height) A; among the time parameters one specifies the pulse duration, the duration of the front (rise), and the duration of the decay (fall).
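    The numerical pulse parameters just listed can be extracted from samples of a pulse. A hedged sketch, assuming a trapezoidal test pulse and the common engineering conventions (duration at the 50% level, front and decay between the 10% and 90% levels); the function names and thresholds are illustrative, not prescribed by the text.

```python
def crossing(t, s, level, rising=True):
    """Linearly interpolated time at which s first crosses `level`."""
    for i in range(1, len(t)):
        a, b = s[i - 1], s[i]
        if (rising and a < level <= b) or (not rising and a >= level > b):
            return t[i - 1] + (level - a) * (t[i] - t[i - 1]) / (b - a)
    return None

def trapezoid(t):
    """Illustrative trapezoidal video pulse of amplitude 2."""
    if t < 1.0:
        return 2.0 * t          # front: rises 0 -> 2 over 1 s
    if t < 6.0:
        return 2.0              # flat top
    if t < 8.0:
        return 8.0 - t          # decay: falls 2 -> 0 over 2 s
    return 0.0

ts = [0.01 * i for i in range(1001)]
ss = [trapezoid(x) for x in ts]

A = max(ss)                                               # amplitude (height)
tau_p = crossing(ts, ss, 0.5 * A, rising=False) - crossing(ts, ss, 0.5 * A)
tau_front = crossing(ts, ss, 0.9 * A) - crossing(ts, ss, 0.1 * A)
tau_decay = (crossing(ts, ss, 0.1 * A, rising=False)
             - crossing(ts, ss, 0.9 * A, rising=False))
print(A, tau_p, tau_front, tau_decay)
```

    For this test pulse the amplitude is 2, the duration at the 50% level is 6.5 s, the front takes 0.8 s and the decay 1.6 s, which matches the piecewise-linear shape.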

    In radio engineering, we deal with voltage pulses whose amplitudes range from fractions of a microvolt to several kilovolts, and whose durations reach fractions of a nanosecond.

    Analog, discrete and digital signals.

    Concluding this brief overview of the principles of classification of radio signals, we note the following. Often the physical process that produces a signal evolves in time in such a way that the signal's value can be measured at any moment of time. Signals of this class are usually called analog (continuous).

    The term “analog signal” emphasizes that this signal is “analogous”, completely similar to the physical process that generates it.

    A one-dimensional analog signal is clearly represented by its graph (oscillogram), which can be either continuous or with break points.

    Initially, radio engineering used exclusively analog signals. Such signals made it possible to successfully solve relatively simple technical problems (radio communications, television, etc.). Analogue signals were easy to generate, receive and process using the means available at the time.

    Increased demands on radio systems and the variety of their applications have forced a search for new principles of system construction. In some cases, analog systems have been replaced by pulsed systems, whose operation is based on the use of discrete signals. The simplest mathematical model of a discrete signal is a countable set of points {t_i} (i is an integer) on the time axis, at each of which a sample value s_i of the signal is defined. As a rule, the sampling step Δ = t_(i+1) − t_i is constant for each signal.

    One of the advantages of discrete signals over analog signals is that there is no need to reproduce the signal continuously at all times. Due to this, it becomes possible to transmit messages from different sources over the same radio link, organizing multi-channel communication with time-separated channels.

    Intuitively, fast time-varying analog signals require a small step size to be sampled. In ch. 5 this fundamentally important issue will be explored in detail.
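    The need for a small sampling step for rapidly varying signals can be shown numerically. A minimal sketch (the frequencies are illustrative): a 9 Hz sine sampled at only 10 samples per second yields exactly the same samples as an inverted 1 Hz sine, so the fast signal is indistinguishable from a slow one (aliasing).

```python
import math

fs = 10.0                      # sampling rate, samples per second (too low for 9 Hz)
n = range(20)                  # sample indices

fast = [math.sin(2 * math.pi * 9.0 * k / fs) for k in n]    # 9 Hz sine, sampled
alias = [-math.sin(2 * math.pi * 1.0 * k / fs) for k in n]  # inverted 1 Hz sine

# The two sample sequences coincide: sin(2*pi*9k/10) = -sin(2*pi*k/10)
max_diff = max(abs(a - b) for a, b in zip(fast, alias))
print(max_diff)
```

    With a sufficiently small step (here, more than 18 samples per second for a 9 Hz sine) the ambiguity disappears, which is the subject of the sampling theorem treated in Chapter 5.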

    A special type of discrete signals are digital signals. They are characterized by the fact that the reference values ​​are presented in the form of numbers. For reasons of technical convenience of implementation and processing, binary numbers with a limited and, as a rule, not too large number of digits are usually used. Recently, there has been a trend towards widespread implementation of systems with digital signals. This is due to the significant advances achieved by microelectronics and integrated circuit technology.

    It should be borne in mind that, in essence, any discrete or digital signal (we are speaking of the signal as a physical process, not of its mathematical model) is an analog signal. Thus, an analog signal slowly changing in time can be associated with its discrete image in the form of a sequence of rectangular video pulses of identical duration (Fig. 1.2, a); the height of these pulses is proportional to the values of the signal at the sample points. However, one can also proceed differently, keeping the height of the pulses constant but changing their duration in accordance with the current sample values (Fig. 1.2, b).

    Fig. 1.2. Sampling of an analog signal: a - with variable amplitude; b - with variable duration of the counting pulses

    Both analog-signal sampling methods presented here become equivalent if we assume that the values of the analog signal at the sampling points are proportional to the areas of the individual video pulses.

    Recording of the sample values in the form of numbers is also carried out by representing the latter as a sequence of video pulses. The binary number system is ideally suited for this procedure: one can, for example, associate a high potential level with one and a low potential level with zero. Discrete signals and their properties will be studied in detail in Chapter 15.
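    The formation of a digital signal described above can be sketched in a few lines. This is a hedged illustration, not the book's procedure: the 4-bit word length, the value range, and the function name to_digital are all assumptions chosen for compactness.

```python
def to_digital(samples, bits=4, v_min=0.0, v_max=1.0):
    """Quantize sample values to `bits`-digit binary words (illustrative)."""
    levels = 2 ** bits - 1
    words = []
    for v in samples:
        q = round((v - v_min) / (v_max - v_min) * levels)  # nearest quantization level
        q = min(max(q, 0), levels)                          # clip to the representable range
        words.append(format(q, f"0{bits}b"))                # fixed-width binary word
    return words

print(to_digital([0.0, 0.5, 1.0]))   # e.g. ['0000', '1000', '1111']
```

    A small, fixed number of binary digits per sample is exactly the trade-off mentioned in the text: it simplifies the hardware at the cost of quantization error.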

    Basic radio engineering processes are the processes of converting signals that contain and carry messages. These basic processes are approximately the same (similar) for all radio-electronic systems, regardless of the class and generation of technology to which the systems belong, and regardless of their structure and purpose.

    13. Emission of high-frequency radio signals and propagation of radio waves

    13.1. Radio signals and electromagnetic waves

    In accordance with the law of electromagnetic induction, an emf appears in a circuit enveloping a changing magnetic field, which excites a current in this circuit. The conductor does not play a significant role here. It only allows you to detect the induced current. The true essence of the phenomenon of induction, as established by J.C. Maxwell, is that in space where the magnetic field changes, a time-varying electric field appears. Maxwell called this time-varying electric field the electric displacement current.

    Unlike the field of stationary charges, the field lines of a time-varying electric field (electrical displacement current) can be closed in the same way as the magnetic field lines. Therefore, there is a close connection and interaction between electric and magnetic fields. It is established by the following laws.

    1. A time-varying electric field at any point in space creates a changing magnetic field. The lines of force of the magnetic field envelop the lines of force of the electric field that created it (Fig. 13.1, a). At each point in space, the electric field strength vector E and the magnetic field strength vector H are orthogonal to each other.

    2. A time-varying magnetic field at any point in space creates a changing electric field. The electric field lines envelop the alternating magnetic field lines (Fig. 13.1, b). At each point of the space under consideration, the magnetic field strength vector H and the electric field strength vector E are mutually perpendicular.

    3. An alternating electric field and an inextricably linked alternating magnetic field together form an electromagnetic field.

    Fig. 13.1. The first (a) and second (b) laws of the electromagnetic field (Maxwell's laws)

    The transfer of electromagnetic energy by a wave in space is characterized by the vector P, equal to the vector product of the electric and magnetic field strengths:

    P = [E × H].

    The direction of the vector P coincides with the direction of wave propagation, and its modulus is numerically equal to the amount of energy that the wave transfers per unit time through a unit area oriented perpendicular to the direction of wave propagation. The concept of a flow of energy of any kind was first introduced by N. A. Umov in 1874. The formula for the vector P was obtained from the equations of the electromagnetic field by J. H. Poynting in 1884. Therefore the vector P, whose modulus is equal to the power flux density of the wave, is called the Umov-Poynting vector.
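    The vector product defining the Umov-Poynting vector can be written out directly. A minimal sketch (field values and units are illustrative only; the cross-product helper is an assumption, not from the text):

```python
def cross(a, b):
    """Vector (cross) product of two 3-vectors given as tuples."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

E = (1.0, 0.0, 0.0)   # electric field strength along x
H = (0.0, 1.0, 0.0)   # magnetic field strength along y (orthogonal to E)

P = cross(E, H)       # Umov-Poynting vector: energy flow along z
print(P)
```

    The result is perpendicular to both E and H, in agreement with the statement that P points along the direction of wave propagation.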

    The most important feature of the electromagnetic field is that it moves in space in all directions from the point at which it originated. The field can exist even after the source of the electromagnetic disturbance has ceased to operate. Changing electric and magnetic fields, moving from point to point in space, propagate in a vacuum at the speed of light (3·10^8 m/s).

    The process of propagation of a periodically changing electromagnetic field is a wave process. Electromagnetic waves of the radiated field, meeting conductors on their way, excite in them an EMF of the same frequency as the frequency of the electromagnetic field creating the induced EMF. Part of the energy carried by electromagnetic waves is transferred to currents arising in conductors.

    The distance over which the wave front moves in a time equal to one period of the electromagnetic oscillation is called the wavelength:

    λ = cT = c/f.
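    The wavelength relation above is easy to evaluate. A small sketch using the approximate free-space value c = 3·10^8 m/s; the sample frequencies are illustrative.

```python
c = 3.0e8                   # speed of light in vacuum, m/s (approximate)

def wavelength(f_hz):
    """Free-space wavelength: lambda = c*T = c/f."""
    return c / f_hz

print(wavelength(100e6))    # 100 MHz (VHF broadcast) -> 3 m
print(wavelength(3e9))      # 3 GHz -> 0.1 m, i.e. a centimeter-range wave
```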

    Radio waves, thermal and ultraviolet radiation, light, X-rays and γ-radiation are all waves of an electromagnetic nature, but of different lengths, and all of these waves are used by various radio-electronic systems. The scale of electromagnetic waves, ordered by frequency f, wavelength λ, and the name of the range, is presented in Fig. 4.2.

    Knowledge of the conditions of propagation of the electromagnetic field is very important for determining the range and coverage of radio-electronic systems, dangerous distances at which unauthorized access of technical intelligence equipment to the information contained in intercepted signals is possible. If possible, the area within which there is a risk of interception is controlled to exclude the presence of technical reconnaissance equipment. In other cases, it is necessary to take other measures to protect information carried by signals informative for reconnaissance by electromagnetic fields.

    The conditions for the propagation of electromagnetic fields depend significantly on frequency (wavelength). The propagation of radio waves is significantly different from the propagation of infrared radiation, visible light and harder radiation.

    The speed of propagation of radio waves in free space (in vacuum) is equal to the speed of light. The total energy transferred by the radio wave remains constant, while the energy flux density decreases with increasing distance r from the source in inverse proportion to r². The propagation of radio waves in other media occurs at a phase velocity different from c and is accompanied by the absorption of electromagnetic energy. Both effects are explained by the excitation of oscillations of electrons and ions in the medium by the action of the electric field of the wave. If the field strength |E| of the harmonic wave is small compared with the field strength acting on the charges within the medium itself (for example, on an electron in an atom), then the oscillations also occur according to the harmonic law, with the frequency of the arriving wave. The oscillating electrons emit secondary radio waves of the same frequency but with different amplitudes and phases. As a result of the addition of the secondary waves with the incoming one, a resulting wave with a new amplitude and phase is formed. The phase shift between the primary and re-emitted waves leads to a change in the phase velocity. Energy losses in the interaction of the waves with atoms are the cause of the absorption of radio waves.

    The amplitude of the electric (and, of course, magnetic) field of the wave decreases with distance according to the law

    E ∝ (1/r) exp(−κωr/c),

    and the phase of the wave changes as

    φ = ω(t − nr/c),

    where κ is the absorption index and n is the refractive index, both depending on the dielectric permittivity ε of the medium, its conductivity σ, and the wave frequency ω; in Gaussian units they are related by

    n² − κ² = ε,    2nκ = 4πσ/ω.

    The medium behaves as a dielectric if 4πσ/(ωε) << 1, and as a conductor if 4πσ/(ωε) >> 1. In the first case n ≈ √ε and the absorption is small; in the second, n ≈ κ ≈ √(2πσ/ω) and the wave is strongly absorbed.
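    The dielectric-versus-conductor criterion can be checked numerically. A hedged sketch in SI form, where the criterion is the loss tangent σ/(ω·ε0·εr); the sea-water values (σ ≈ 4 S/m, εr ≈ 80) are typical handbook figures used only for illustration.

```python
import math

EPS0 = 8.854e-12            # vacuum permittivity, F/m

def loss_tangent(sigma, eps_r, f_hz):
    """SI loss tangent sigma/(omega*eps0*eps_r); >>1 conductor, <<1 dielectric."""
    return sigma / (2 * math.pi * f_hz * EPS0 * eps_r)

# Sea water at two frequencies: conductor for long waves, dielectric for microwaves
print(loss_tangent(4.0, 80.0, 1e6))    # ~9e2 at 1 MHz: behaves as a conductor
print(loss_tangent(4.0, 80.0, 10e9))   # ~0.09 at 10 GHz: behaves as a dielectric
```

    This is the same frequency dependence noted later in the text: for centimeter waves all types of the earth's surface act as dielectrics, while for longer waves they act as conductors.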

    In a medium in which ε and σ depend on frequency, dispersion of the waves is observed. The form of the frequency dependence of ε and σ is determined by the structure of the medium. The dispersion of radio waves is especially significant when the wave frequency is close to the characteristic natural frequencies of the medium, for example when radio waves propagate in ionospheric and cosmic plasma.

    When radio waves propagate in media that do not contain free electrons (in the troposphere, in the depths of the Earth), the bound electrons in the atoms and molecules of the medium are displaced in the direction opposite to the wave field E; in this case n > 1 and the phase velocity v_ph < c (a radio signal carrying energy propagates at the group velocity v_gr < c). In a plasma, the wave field displaces the free electrons in the direction of E; in this case n < 1 and v_ph > c.

    In homogeneous media, radio waves propagate in straight lines, like light rays; the process of propagation then obeys the laws of geometrical optics. Taking the sphericity of the Earth into account, the line-of-sight range can be estimated from simple geometric constructions by the relation

    R ≈ 3.57 (√h_prd + √h_prm),

    where h_prd and h_prm are the heights of the transmitting and receiving antennas in meters, and R is the line-of-sight range in kilometers.
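    The line-of-sight relation can be evaluated directly. A minimal sketch; the coefficient 3.57 follows from the Earth's radius for heights in meters and range in kilometers, and the refraction-corrected coefficient of about 4.12 is a commonly used engineering value, stated here as an assumption rather than taken from the text.

```python
import math

def line_of_sight_km(h_tx_m, h_rx_m, k=3.57):
    """Horizon range in km for antenna heights in m; k=4.12 adds normal refraction."""
    return k * (math.sqrt(h_tx_m) + math.sqrt(h_rx_m))

# 100 m transmitting mast, 16 m receiving antenna:
print(line_of_sight_km(100.0, 16.0))        # geometric horizon, ~50 km
print(line_of_sight_km(100.0, 16.0, 4.12))  # with normal tropospheric refraction
```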

    However, real media are inhomogeneous: in them n, and hence v_ph, differ in different parts of the medium, which curves the trajectory of the radio wave, so that refraction of the radio waves occurs. When normal tropospheric refraction is taken into account, the maximum range comes out somewhat greater than the purely geometric estimate (the coefficient 3.57 is commonly replaced by approximately 4.12).

    If n depends on only one coordinate, for example the height h (a plane-layered medium), then, as the wave passes through each flat layer, a ray incident on the inhomogeneous medium at a point with n0 = 1 at an angle θ0 is bent so that at an arbitrary height h in the medium the following relation holds:

    n(h) sin θ(h) = sin θ0.

    If n decreases with increasing h, then, as a result of refraction, the ray deviates from the vertical as it propagates, at a certain height h_m becomes parallel to the horizontal plane, and then travels downward. The maximum height h_m to which the ray can penetrate into an inhomogeneous plane-layered medium depends on the angle of incidence θ0 and can be determined from the condition n(h_m) = sin θ0.
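    The turning-height condition n(h_m) = sin θ0 can be solved for a concrete profile. A sketch under stated assumptions: the linear profile n(h) = 1 − 10⁻⁴·h is a toy model chosen only so that the condition inverts in closed form; real atmospheric profiles differ.

```python
import math

GRAD = 1e-4                      # assumed linear decrease of n with height (toy model)

def n_profile(h):
    """Illustrative plane-layered refractive-index profile, decreasing with height."""
    return 1.0 - GRAD * h

def turning_height(theta0_deg):
    """Height h_m at which n(h_m) = sin(theta0) for the linear profile above."""
    target = math.sin(math.radians(theta0_deg))
    return (1.0 - target) / GRAD

h_m = turning_height(80.0)       # ray launched 80 degrees from the vertical
print(h_m)                       # the ray turns back at this height
```

    Rays launched closer to the vertical (smaller θ0) have a smaller sin θ0 and therefore penetrate to a greater h_m, in agreement with the text.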

    Rays do not penetrate into the region h > h_m, and, according to the approximation of geometrical optics, the wave field in this region should be zero. In fact, near the plane h = h_m the wave field increases, and at h > h_m it decreases exponentially. Violation of the laws of geometrical optics in the propagation of radio waves is associated with wave diffraction, owing to which radio waves can penetrate into the region of the geometric shadow. At the boundary of the geometric-shadow region a complex distribution of the wave fields arises. Diffraction of radio waves occurs when there are obstacles (opaque or translucent bodies) in their path. Diffraction is especially significant when the size of the obstacles is comparable with the wavelength.

    If the propagation of radio waves occurs near a sharp boundary (sharp on the scale of the wavelength λ) between two media with different electrical properties (for example, the atmosphere and the surface of the Earth, or, for sufficiently long waves, the troposphere and the lower boundary of the ionosphere), then when radio waves fall on such a boundary, reflected and refracted (transmitted) radio waves are formed.

    In inhomogeneous media, waveguide propagation of radio waves is possible, in which the energy flow is localized between certain surfaces, owing to which the wave fields between them decrease with distance more slowly than in a homogeneous medium. This is how atmospheric waveguides are formed.

    In a medium containing random local inhomogeneities, secondary waves are emitted randomly in different directions. The scattered waves partially carry away the energy of the original wave, which weakens it. When the scattering inhomogeneities are of size l << λ, the scattered waves propagate almost isotropically. In the case of scattering by large-scale transparent inhomogeneities, the scattered waves propagate in directions close to that of the original wave. At l ≈ λ, strong resonant scattering occurs.

    The influence of the Earth's surface on the propagation of radio waves depends on the location of the transmitter and receiver relative to it. The propagation of radio waves is a process that covers a large region of space, but the most significant role in it is played by the region bounded by a surface in the shape of an ellipsoid of revolution, at the foci of which the transmitter and receiver are located.

    If the heights h1 and h2 at which the antennas of the transmitter and receiver are raised above the surface of the Earth are large compared with the wavelength, the Earth does not affect the propagation of the radio waves. When both or one of the end points of the radio path is lowered, nearly specular reflection from the Earth's surface will be observed. In this case the radio wave at the receiving point is determined by the interference of the direct and reflected waves. Interference maxima and minima produce a lobe structure of the field in the reception area. This picture is especially typical for meter and shorter radio waves. The quality of radio communication in this case is determined by the conductivity of the soil. The soils that form the surface layer of the earth's crust, as well as the waters of the seas and oceans, possess noticeable electrical conductivity. But since n and κ depend on frequency, for centimeter waves all types of the earth's surface have dielectric properties, while for meter and longer waves the Earth is a conductor, into which the waves penetrate only to a depth that grows with the vacuum wavelength λ0. Therefore mainly long and ultra-long waves are used for underground and underwater radio communications.

    The convexity of the earth's surface limits the distance at which the transmitter is visible from the receiving point (the line-of-sight region). However, as a result of diffraction, radio waves can bend around the Earth (whose radius is R_z) and penetrate some distance into the shadow region. In practice, only kilometer and longer waves can penetrate into this region by diffraction. Beyond the horizon the field grows with the height h1 to which the emitter is raised, and decreases quickly (almost exponentially) with distance from it.

    The influence of the relief of the earth's surface on the propagation of radio waves depends on the height h of the irregularities, their horizontal extent l, the wavelength λ, and the angle θ of incidence of the wave on the surface. If the irregularities are sufficiently small and gentle that kh cos θ < 1 (k = 2π/λ is the wave number) and the Rayleigh criterion is satisfied, they weakly affect the propagation of the radio waves. The influence of the irregularities also depends on the polarization of the waves; for example, for horizontally polarized waves it is smaller than for vertically polarized ones. When the irregularities are neither small nor gentle, the energy of the radio wave can be scattered (the radio wave is reflected from them). High mountains and hills with h > λ form shadowed areas. Diffraction of radio waves on mountain ridges sometimes leads to amplification of the wave owing to the interference of the direct and reflected waves: the mountain top serves as a natural repeater.

    The phase velocity of radio waves propagating along the earth's surface (ground waves) depends, near the emitter, on its electrical properties. However, at a distance of several λ from the emitter, v_ph ≈ c. If radio waves propagate over an electrically inhomogeneous surface, for example first over land and then over the sea, then on crossing the coastline the amplitude and direction of propagation of the radio waves change sharply (coastal refraction is observed).

    Propagation of radio waves in the troposphere. The troposphere is the region in which the air temperature usually decreases with height h. The height of the tropopause over the globe is not the same: it is greater above the equator than above the poles, and in the middle latitudes, where there is a system of strong westerly winds, it changes abruptly. The troposphere consists of a mixture of gases and water vapor; its conductivity for radio waves longer than a few centimeters is negligible. The troposphere has properties close to those of vacuum: near the Earth's surface the refractive index is n ≈ 1.0003, and the phase velocity is only slightly less than c. As the altitude increases, the air density decreases, and with it n decreases, approaching unity still more closely. This causes the trajectories of radio rays to bend toward the Earth. Such normal tropospheric refraction assists the propagation of radio waves beyond the line of sight, since due to refraction the waves can bend around the bulge of the Earth. In practice this effect plays a role only for VHF; for longer wavelengths, bending around the Earth's bulge due to diffraction predominates.

    Meteorological conditions can weaken or enhance refraction compared with the normal case, since air density depends on pressure, temperature and humidity. Typically, in the troposphere, gas pressure and temperature decrease with altitude, while water vapor pressure increases. However, under certain meteorological conditions (for example, when air heated over land moves out over the sea), the air temperature increases with height (temperature inversion). The deviations are especially large in summer at altitudes of 2...3 km. Under these conditions, temperature inversions and cloud layers often form, and the refraction of radio waves in the troposphere can become so strong that a radio wave leaving at a slight angle to the horizon changes direction at a certain height and returns to the Earth. In the space bounded below by the earth's surface and above by the refracting layer of the troposphere, a wave can propagate over very long distances (waveguide propagation). In tropospheric waveguides, as a rule, waves with λ < 1 m propagate.

    The absorption of radio waves in the troposphere is negligible for all radio waves down to the centimeter range. The absorption of centimeter and shorter waves increases sharply when the oscillation frequency coincides with one of the natural vibration frequencies of air molecules (resonant absorption). The molecules receive energy from the incoming wave, which is converted into heat and only partially transferred to secondary waves. A number of resonant absorption lines in the troposphere are known: λ = 1.35 cm, 1.5 cm, 0.75 cm (absorption in water vapor) and λ = 0.5 cm, 0.25 cm (absorption in oxygen). Between the resonance lines lie regions of weaker absorption (transparency windows).

    The attenuation of radio waves can also be caused by scattering by irregularities that arise during the turbulent movement of air masses . Scattering increases sharply when droplet inhomogeneities in the form of rain, snow, and fog are present in the air. Almost isotropic Rayleigh scattering on small-scale irregularities makes radio communication possible at distances significantly greater than line of sight. Thus, the troposphere significantly influences the propagation of VHF. For decameter and longer waves, the troposphere is almost transparent and their propagation is influenced by the earth's surface and higher layers of the atmosphere (ionosphere).

    Propagation of radio waves in the ionosphere. The ionosphere is formed by the upper layers of the earth's atmosphere, in which the gases are partially (up to 1%) ionized under the influence of ultraviolet, X-ray and corpuscular solar radiation. The ionosphere is electrically neutral; it contains equal amounts of positively and negatively charged particles, i.e., it is a plasma.

    Sufficiently large ionization, which affects the propagation of radio waves, begins at an altitude of about 60 km (layer D), increases up to heights of 300...400 km (layers E, F1, F2), and then slowly decreases. At the main maximum the electron concentration N reaches about 10^12 m^-3. The dependence of N on height varies with the time of day, the season, solar activity, and also with latitude and longitude.

    Depending on the frequency, the main role in the propagation of radio waves is played by one or another type of natural oscillation; therefore, the electrical properties of the ionosphere are different for different parts of the radio range. At high frequencies the ions do not have time to follow the field changes, and only the electrons take part in the propagation of radio waves. The forced oscillations of the free electrons of the ionosphere proceed in antiphase with the acting force and polarize the plasma in the direction opposite to the electric field E of the wave. Therefore the dielectric constant of the ionosphere ε < 1; it decreases with decreasing frequency: ε = 1 − ω₀²/ω², where ω₀ = √(Ne²/(ε₀m)) is the plasma (Langmuir) frequency.
    Taking into account the collisions of electrons with atoms and ions gives more accurate formulas for the dielectric constant and conductivity of the ionosphere:

    ε = 1 − ω₀²/(ω² + ν²),   σ = Ne²ν/(m(ω² + ν²)),

    where ν is the effective collision frequency.

    For decameter and shorter waves, over most of the ionosphere ν² ≪ ω², and the refractive index n and the absorption index κ approach the values

    n ≈ √(1 − ω₀²/ω²),   κ ≈ ω₀²ν/(2cnω²).

    Since for the ionosphere n < 1, the phase velocity of radio-wave propagation is v_ph = c/n > c, while the group velocity is v_gr = c·n < c.
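These relations are easy to check numerically. Below is a minimal sketch in Python, assuming SI units; the electron concentration and the wave frequency are illustrative values, not taken from the text:

```python
import math

C = 3.0e8  # speed of light, m/s

def plasma_freq(n_e):
    """Plasma (Langmuir) frequency ω0 = sqrt(N·e²/(ε0·m)) in rad/s, SI units."""
    e, m, eps0 = 1.602e-19, 9.109e-31, 8.854e-12
    return math.sqrt(n_e * e ** 2 / (eps0 * m))

omega0 = plasma_freq(1e12)                 # N at the F2 maximum, m^-3
omega = 2 * math.pi * 15e6                 # a 15 MHz decameter wave
n = math.sqrt(1 - (omega0 / omega) ** 2)   # refractive index, real since ω > ω0
v_phase = C / n                            # exceeds c (carries no energy)
v_group = C * n                            # below c; the energy travels at this speed
assert 0 < n < 1 and v_phase > C > v_group
```

Note that v_ph · v_gr = c², so the two velocities always straddle the speed of light.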

    Absorption in the ionosphere is proportional to the collision frequency ν, since the more collisions there are, the larger the part of the energy received by an electron that is converted into heat. Therefore absorption is greater in the lower regions of the ionosphere (layer D), where the gas density is higher. As the frequency increases, absorption decreases. Short waves experience weak absorption and can travel over long distances.

    Refraction of radio waves in the ionosphere. Only radio waves with frequency ω > ω₀ can propagate in the ionosphere. At ω < ω₀ the refractive index n becomes purely imaginary, and the electromagnetic field decreases exponentially with depth into the plasma. A radio wave of frequency ω incident vertically on the ionosphere is reflected from the level at which ω = ω₀ and n = 0. In the lower part of the ionosphere the electron concentration N, and with it ω₀, increases with height; therefore, with increasing ω, a wave emitted from the Earth penetrates deeper into the ionosphere. The maximum frequency of a radio wave that is still reflected from an ionospheric layer at vertical incidence is called the critical frequency of the layer:

    ω_cr = ω₀max = √(N_max·e²/(ε₀m)),   or, in practical units, f_cr [Hz] ≈ 8.98·√(N_max [m⁻³]).

    The critical frequency of the F2 layer (the main maximum) varies during the day and over the year within wide limits (from 3...5 to 10 MHz). For waves with ω > ω_cr the refractive index does not vanish, and a vertically incident wave passes through the ionosphere without being reflected.

    When a wave is incident obliquely on the ionosphere, refraction occurs, as in the troposphere. In the lower part of the ionosphere the phase velocity increases with height (along with the increase in electron concentration N), so the ray path is bent toward the Earth. A radio wave incident on the ionosphere at an angle φ₀ turns back toward the Earth at the altitude h for which the condition ω₀(h) = ω·cos φ₀ is satisfied. The maximum frequency of a wave that is still reflected from the ionosphere when incident at the angle φ₀ is called the maximum usable frequency: ω_max = ω_cr·sec φ₀. Waves with ω < ω_max, reflecting from the ionosphere, return to Earth. This effect is used for long-range radio communication and over-the-horizon radar. Because of the Earth's sphericity the angle φ₀ is limited, and the communication range for a single reflection from the ionosphere does not exceed 3500...4000 km. Communication over longer distances is achieved through several successive reflections from the ionosphere and the Earth (hops). More complex, waveguide-like trajectories are also possible, arising due to a horizontal gradient of N or to scattering on inhomogeneities of the ionosphere during the propagation of radio waves with frequency ω > ω_max. As a result of scattering, the angle of incidence of the ray on the F2 layer turns out to be greater than in normal propagation. The ray experiences a series of successive reflections from the F2 layer until it falls into a region with such a gradient of N that part of the energy is reflected back to Earth.
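The critical frequency and the secant law can be sketched numerically. A minimal illustration in Python, assuming the practical SI formula f_cr ≈ 8.98·√N; the concentration and the incidence angle are illustrative values:

```python
import math

def critical_freq(n_max):
    """Critical frequency of a layer: f_cr ≈ 8.98·sqrt(N_max) Hz, N_max in m^-3."""
    return 8.98 * math.sqrt(n_max)

def max_usable_freq(f_cr, phi0_deg):
    """Secant law: the highest frequency still reflected at oblique incidence φ0."""
    return f_cr / math.cos(math.radians(phi0_deg))

f_cr = critical_freq(1e12)           # about 9 MHz for a typical F2 maximum
f_muf = max_usable_freq(f_cr, 75.0)  # a ray launched at a grazing angle
assert f_muf > f_cr                  # oblique incidence raises the usable frequency
```

At vertical incidence (φ₀ = 0) the maximum usable frequency coincides with the critical frequency, as the secant law requires.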

    The influence of the Earth's magnetic field of intensity H₀ comes down to the fact that an electron moving with velocity v is acted upon by the Lorentz force F = −(e/c)[v × H₀], under the influence of which it rotates in a circle in the plane perpendicular to H₀ with the gyromagnetic frequency ω_H. The trajectory of each charged particle is a helical line with its axis along H₀. The action of the Lorentz force changes the character of the forced oscillations of electrons in the electric field of the wave and, consequently, the electrical properties of the medium. As a result, the electrical properties of the ionosphere become dependent on the direction of propagation of radio waves and are described not by a scalar quantity but by the dielectric-constant tensor ε_ij. A wave incident on such a medium experiences double refraction, i.e., it splits into two waves that differ in velocity and direction of propagation, absorption and polarization. If the direction of propagation of the radio waves is perpendicular to H₀, the incident wave can be represented as the sum of two linearly polarized waves with E ⊥ H₀ and E ∥ H₀. For the first, "extraordinary", wave the character of the forced motion of the electrons in the field of the wave changes (an acceleration component perpendicular to E appears), and therefore ε changes. For the second, "ordinary", wave the forced motion remains the same as in the absence of the field H₀.

    The bulk of the energy of low-frequency (LF) and very-low-frequency (VLF) radio waves practically does not penetrate into the ionosphere. The waves are reflected from its lower boundary (during the day due to strong refraction in the D layer, at night from the E layer, as from the boundary of two media with different electrical properties). The propagation of these waves is well described by a model according to which the homogeneous and isotropic Earth and ionosphere form a surface waveguide with sharp spherical walls, in which the radio waves propagate. This model explains the observed decrease of the field with distance and the increase of the field amplitude with height. The latter is associated with the sliding of the waves along the concave surface of the waveguide, which leads to a kind of focusing of the field. The amplitude of the radio waves increases significantly at the point of the Earth antipodal to the source. This is explained by the addition of radio waves that circle the Earth in all directions and converge on the opposite side.

    The influence of the Earth's magnetic field determines a number of features of the propagation of low-frequency waves in the ionosphere: ultra-long waves can escape from the surface waveguide beyond the ionosphere, propagating along the geomagnetic field lines between the magnetically conjugate points A and B of the Earth.

    Nonlinear effects during radio-wave propagation in the ionosphere appear already for radio waves of relatively low intensity and are associated with a violation of the linear dependence of the polarization of the medium on the electric field of the wave. "Heating" nonlinearity plays the main role when the characteristic dimensions of the plasma region perturbed by the field are many times larger than the electron free path. Since the mean free path of electrons in the plasma is significant, an electron manages to receive noticeable energy from the field during a single free path. The transfer of energy in collisions from electrons to ions, atoms and molecules is hindered by the large difference in their masses. As a result, the plasma electrons become strongly "heated" even in a relatively weak electric field, which changes the effective collision frequency. Therefore the dielectric constant and conductivity of the plasma become dependent on the electric field strength E of the wave, and the propagation of radio waves becomes nonlinear.

    Nonlinear effects can manifest themselves as self-interaction of a wave and as interaction of waves with each other. Self-interaction of a powerful wave leads to a change in its absorption and depth of modulation. The absorption of a powerful radio wave depends nonlinearly on its amplitude. The collision frequency, as the temperature (electron energy) increases, can either increase (in the lower layers, where collisions with neutral particles play the main role) or decrease (in collisions with ions). In the first case absorption increases sharply with increasing wave power (saturation of the field in the plasma). In the second case absorption decreases (this effect is called bleaching of the plasma for a powerful radio wave). Because of the nonlinear change in absorption, the amplitude of the wave depends nonlinearly on the amplitude of the incident field, so its modulation is distorted (self-modulation and demodulation of the wave). A change in the refractive index n in the field of a powerful wave leads to distortion of the ray trajectory. When narrowly directed beams of radio waves propagate, this effect can lead to self-focusing of the beam, similar to the self-focusing of light, and to the formation of a waveguide channel in the plasma.

    The interaction of waves under nonlinear conditions leads to a violation of the superposition principle. In particular, if a powerful wave of frequency ω₁ is amplitude-modulated, then, owing to the change in absorption, this modulation can be transferred to another wave of frequency ω₂ passing through the same region of the ionosphere. This phenomenon is called cross-modulation.

    The propagation of radio waves in outer space has its peculiarities, since the wide spectrum of electromagnetic waves arriving at the Earth from space must pass through the ionosphere and the troposphere. Waves of two main frequency ranges propagate through the Earth's atmosphere without noticeable attenuation: the "radio window" corresponds to the range from the ionospheric critical frequency to the frequencies of strong absorption by atmospheric aerosols and gases (10 MHz...20 GHz); the "optical window" covers the range of visible and IR radiation (1 THz...10³ THz). The atmosphere is also partially transparent in the low-frequency range up to 300 kHz, where whistling atmospherics and magnetohydrodynamic waves propagate.

    Propagation of radio waves of different ranges. Radio waves of very low (3...30 kHz) and low (30...300 kHz) frequencies bend around the earth's surface owing to waveguide propagation and diffraction, penetrate relatively weakly into the ionosphere and are little absorbed by it. They are characterized by high phase stability and the ability to cover large areas uniformly, including the polar regions. This makes it possible to use them for stable long- and ultra-long-range radio communication and radio navigation, despite the high level of atmospheric interference. The frequency band from 150 kHz to 300 kHz is used for radio broadcasting. Difficulties in using the very-low-frequency range are associated with the bulkiness of the antenna systems, the high level of atmospheric interference, and the relatively limited information transmission rate: the slow oscillations of very-low-frequency waves cannot be modulated by fast processes that carry information at a high rate. As N. Wiener put it, "you cannot play a jig on the lower register of the organ."

    Medium waves (300 kHz...3000 kHz) propagate during the day along the surface of the Earth (ground, or direct, wave). There is practically no wave reflected from the ionosphere, since the waves are strongly absorbed in the D layer of the ionosphere. At night, owing to the absence of solar radiation, the D layer disappears and an ionospheric wave reflected from the E layer appears; the propagation range, and accordingly the reception range, increases. The addition of the direct and reflected waves entails strong variability of the field at the receiving point. Therefore the sky wave is a source of interference for many services that use ground-wave propagation.

    Short waves (3 MHz...30 MHz) are weakly absorbed by the D and E layers and are reflected from the F layer when their frequencies ω < ω_max. As a result of reflection from the ionosphere, communication is possible both at short and at long distances with a considerably lower transmitter power and much simpler antennas than in the lower-frequency ranges. A peculiarity of radio communication in this range is the presence of signal fading due to changes in the reflection conditions in the ionosphere and to interference effects. Short-wave communication links are subject to atmospheric interference, and ionospheric storms cause interruptions of communication.

    The VHF range (30...1000 MHz) is characterized by the predominance of radio-wave propagation within the troposphere and by penetration through the ionosphere. The role of the ground wave declines. Interference fields in the low-frequency part of this range can still be determined by reflections from the ionosphere, and up to a frequency of 60 MHz ionospheric scatter continues to play a significant role. All types of radio-wave propagation, with the exception of tropospheric scatter, allow the transmission of signals with a frequency bandwidth of several megahertz.

    UHF and microwave waves (1000 MHz...10,000 MHz) propagate mainly within line of sight and are characterized by a low noise level. In this range the known regions of maximum absorption and the emission frequencies of chemical elements play a role in radio-wave propagation (for example, the resonant absorption line of atomic hydrogen near the frequency 1.42 GHz).

    Microwave waves (>10 GHz) propagate only within line of sight. Losses in this range are somewhat higher than at lower frequencies, and their magnitude is strongly influenced by precipitation. The increase in losses at these frequencies is partially compensated by the increase in the efficiency of antenna systems. A diagram illustrating the features of the propagation of radio waves of different ranges is shown in Fig. 13.3.

    Fig. 13.3. Propagation of electromagnetic waves in near-Earth space

    Although historically radiation in the optical wavelength range began to be used by humanity much earlier than any other electromagnetic fields, the propagation of optical waves through the atmosphere is the least studied in comparison with the propagation of radio waves. This is explained by the more complex picture of the propagation phenomena, and also by the fact that a broader study of these phenomena began only recently, after the invention and widespread use of optical quantum generators (lasers).

    Three main phenomena determine the patterns of optical-wave propagation through the atmosphere: absorption, scattering and turbulence. The first two determine the average attenuation of the electromagnetic field under fixed atmospheric conditions and the relatively slow changes of the field (slow fading) when meteorological conditions change. The third, turbulence, causes rapid field changes (fast fading) that are observed in any weather. In addition, turbulence gives rise to a multipath effect: the structure of the beam arriving at the receiver can differ significantly from the structure of the beam at the output of the transmitting device.

    Lecture No. 2 Radio signals

    Signal theory. Classification. Main characteristics of signals

    The change in voltage, current, charge or power over time in electrical circuits is called an electrical oscillation. An electrical oscillation used to transmit information is a signal. The complexity of the processes in electrical circuits depends on the complexity of the source signals; it is therefore convenient to use the spectral representation of signals. Fourier series and the Fourier transform, known from mathematics, make it possible to represent signals as a set of harmonic components. In practice it is also useful to have a characteristic that gives an idea of the rate of change and the duration of a signal; this is provided by correlation analysis.

    2.1. General information about radio signals

    Traditionally, electrical (and now also optical) signals related to the radio range are considered radio-engineering signals. From a mathematical point of view, any radio signal can be represented by some function of time u(t), which characterizes the change in its instantaneous values of voltage (the representation used most often), current, charge or power. Each class of signals has its own characteristics and requires specific methods of description and analysis. One of the key components of signal representation and processing is analysis. The main purpose of analysis is to compare signals with each other in order to identify their similarities and differences. There are three main components of electrical signal analysis:

    Measurement of numerical parameters of signals (energy, average power and root mean square value);

    Decomposition of a signal into elementary components, either to consider them separately or to compare the properties of different signals; such a decomposition is carried out by means of series and integral transformations, the most important of which are the Fourier series and the Fourier transform;

    Quantitative measurement of the degree of "similarity" of different signals and of their parameters and characteristics; this measurement is carried out using the apparatus of correlation analysis.
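The first and third components of the analysis can be illustrated with a short Python sketch; the test signals below are arbitrary sinusoids chosen for the example:

```python
import math

def rms(x):
    """Root-mean-square value, one of the numerical parameters of a signal."""
    return math.sqrt(sum(v * v for v in x) / len(x))

def correlation_coeff(x, y):
    """Normalized measure of the 'similarity' of two equal-length signals."""
    num = sum(a * b for a, b in zip(x, y))
    den = math.sqrt(sum(a * a for a in x) * sum(b * b for b in y))
    return num / den

N = 1000
t = [i / N for i in range(N)]
s1 = [math.sin(2 * math.pi * 5 * ti) for ti in t]        # a 5 Hz tone
s2 = [math.sin(2 * math.pi * 5 * ti + 0.1) for ti in t]  # the same tone, slightly shifted
s3 = [math.sin(2 * math.pi * 50 * ti) for ti in t]       # a different frequency

assert abs(rms(s1) - math.sqrt(0.5)) < 1e-9              # RMS of a unit sine
assert correlation_coeff(s1, s2) > 0.99                  # nearly identical signals
assert abs(correlation_coeff(s1, s3)) < 0.01             # orthogonal tones: near zero
```

The correlation coefficient close to one flags "similar" signals, while tones of different frequencies, being orthogonal over a whole number of periods, give a value near zero.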

    In order to make signals objects of study and calculation, it is necessary to indicate a method for their mathematical description, i.e., to create a mathematical model of the signal under study. In radio engineering each class of signals has its own mathematical representation, its own mathematical model, and the same mathematical model can almost always adequately describe voltage, current, charge, power, electromagnetic field strength, etc. The most common methods of representing (describing) signals are the temporal, spectral, analytical, statistical, vector, graphical and geometric ones. The functions that describe signals can take both real and complex values; therefore, later in the book we will often speak of real and complex signals. Part of a brief classification of signals according to a number of characteristics is shown in Fig. 2.1.

    Fig.2.1. Classification of radio signals

    One of the main characteristics distinguishing signals is the predictability of the signal (of its values) in time. Deterministic signals are radio signals whose instantaneous values are reliably known at any moment of time. The simplest examples of a deterministic signal are a harmonic oscillation with a known initial phase and high-frequency oscillations modulated according to a known law. A deterministic signal cannot be a carrier of information. Deterministic signals are divided into periodic and non-periodic (pulse) signals. A signal of finite energy that differs appreciably from zero during a limited time interval commensurate with the duration of the transient process in the system it is intended to act upon is called a pulse signal.

    Random signals are those whose instantaneous values at any moment of time are not known and cannot be predicted with a probability equal to one. Only a random signal can be a carrier of useful information.

    Random processes whose parameters and properties can be determined from a single random realization (sample function) are called ergodic: for them, averaging over time along one realization is equivalent to averaging over the ensemble of realizations.

    Often, when describing and analyzing certain types of signals (primarily narrowband ones), it is convenient to use the complex form of their representation

    u(t) = U(t)·e^{j[ω₀t + φ(t)]},

    where U(t) and ψ(t) = ω₀t + φ(t) are, respectively, the modulus and the phase of the complex quantity.

    The complex function u(t) can also be represented as

    u(t) = Re u(t) + j·Im u(t),

    where Re and Im denote the real and imaginary parts of the complex function. From the two formulas we obtain

    Re u(t) = U(t)·cos[ω₀t + φ(t)],   Im u(t) = U(t)·sin[ω₀t + φ(t)].

    In the vector representation, the complex signal is a vector on the complex plane, with the real axis as abscissa and the imaginary axis as ordinate (Fig. 2.5). The vector rotates on the plane in the positive direction (counterclockwise) with angular velocity ω₀. The length of the vector equals the modulus of the complex signal; the angle between the vector and the abscissa axis is the argument (phase) φ₀. The projections of the vector onto the coordinate axes are equal, respectively, to the real and imaginary parts of the complex quantity.
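A short numerical illustration of the complex (vector) representation; the amplitude U, rate ω₀ and initial phase φ₀ below are arbitrary illustrative values:

```python
import cmath
import math

# Complex form of a narrowband signal: u(t) = U·exp(j(ω0·t + φ0))
U, omega0, phi0 = 2.0, 2 * math.pi * 1.0, math.pi / 6

def u(t):
    """The rotating vector on the complex plane."""
    return U * cmath.exp(1j * (omega0 * t + phi0))

z = u(0.0)
assert abs(abs(z) - U) < 1e-12                    # |u| equals the modulus (length) U
assert abs(cmath.phase(z) - phi0) < 1e-12         # arg u equals the phase φ0 at t = 0
assert abs(z.real - U * math.cos(phi0)) < 1e-12   # projection on the real axis
assert abs(z.imag - U * math.sin(phi0)) < 1e-12   # projection on the imaginary axis
```

The modulus stays constant while the argument grows linearly with time, which is exactly the counterclockwise rotation described above.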

    Before starting to study any new phenomena, processes or objects, science always strives to classify them according to as many criteria as possible. To consider and analyze signals, we single out their main classes. This is necessary for two reasons. First, checking whether a signal belongs to a specific class is itself a procedure of analysis. Second, to represent and analyze signals of different classes it is often necessary to use different tools and approaches. Basic concepts, terms and definitions in the field of radio signals are established by the national (formerly state) standard "Radio signals. Terms and definitions." Radio signals are extremely diverse. Part of a brief classification of signals according to a number of characteristics is shown in Fig. 1; more detailed information about a number of concepts is presented below. It is convenient to consider radio signals as mathematical functions specified in time and physical coordinates. From this point of view, signals are usually described by one (one-dimensional signal; n = 1), two (two-dimensional signal; n = 2) or more (multidimensional signal; n > 2) independent variables. One-dimensional signals are functions of time only, while multidimensional signals also reflect position in n-dimensional space.

    Fig.1. Classification of radio signals

    For definiteness and simplicity, we will mainly consider one-dimensional signals that depend on time; however, the material of the textbook can be generalized to the multidimensional case, in which the signal is represented as a finite or infinite collection of points, for example in space, whose position depends on time. In television systems, a black-and-white image signal can be considered as a function f(x, y, t) of two spatial coordinates and time, representing the radiation intensity at the point (x, y) at the instant t at the cathode. When transmitting a color television signal we have three functions f(x, y, t), g(x, y, t), h(x, y, t) defined on a three-dimensional set (these three functions can also be considered as components of a three-dimensional vector field). In addition, other types of television signals arise when television images are transmitted together with sound.

    A multidimensional signal is an ordered collection of one-dimensional signals. A multidimensional signal is created, for example, by a system of voltages at the terminals of a multi-terminal network (Fig. 2). Multidimensional signals are described by complex functions, and their processing is often possible in digital form. Therefore multidimensional signal models are especially useful in cases where the functioning of complex systems is analyzed using computers. So, multidimensional, or vector, signals consist of a set of one-dimensional signals:

    u(t) = {u₁(t), u₂(t), ..., u_n(t)},

    where n is an integer, the dimension of the signal.

    Fig. 2. Multi-port voltage system

    According to the peculiarities of the structure of time representation (Fig. 3), all radio signals are divided into analog (analog), discrete (discrete-time; from the Latin discretus - divided, intermittent) and digital (digital).

    If the physical process generating a one-dimensional signal can be represented as a continuous function of time u(t) (Fig. 3, a), then such a signal is called analog (continuous) or, more generally, continual, if the latter has jumps or discontinuities along the amplitude axis. Note that traditionally the term "analog" is used to describe signals that are continuous in time. A continuous signal can be treated as a real or complex oscillation u(t) that is a function of a continuous real time variable. The concept of an "analog" signal reflects the fact that its every instantaneous value is analogous to the law of variation of the corresponding physical quantity in time. An example of an analog signal is a voltage applied to the input of an oscilloscope, producing on the screen a continuous curve as a function of time. Since modern continuous-signal processing using resistors, capacitors, operational amplifiers, etc., has little in common with analog computers, the term "analog" today seems not entirely apt. It would be more correct to call what is today commonly called analog signal processing continuous signal processing.

    In radio electronics and communications technology, pulse systems, devices and circuits are widely used, the operation of which is based on the use of discrete signals. For example, an electrical signal that reflects speech is continuous both in level and in time, and a temperature sensor that produces its values ​​every 10 minutes serves as a source of signals that are continuous in value but discrete in time.

    A discrete signal is obtained from an analog signal through a special conversion. The process of converting an analog signal into a sequence of samples is called sampling, and the result of such conversion is a discrete signal or discrete series.

    The simplest mathematical model of a discrete signal u_T(t) is a sequence of points on the time axis taken, as a rule, at equal time intervals T = Δt, called the sampling period (or interval, sampling step, sample time), at each of which the values of the corresponding continuous signal are specified (Fig. 3, b). The reciprocal of the sampling period is called the sampling frequency: f_D = 1/T (another designation: f_D = 1/Δt). The corresponding angular (circular) frequency is determined as ω_D = 2π/Δt.
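A minimal sketch of sampling in Python; the tone frequency and the sampling frequency below are illustrative values:

```python
import math

f_sig = 5.0                 # frequency of the analog tone, Hz
f_D = 100.0                 # sampling frequency f_D = 1/T
T = 1.0 / f_D               # sampling period (step) Δt
omega_D = 2 * math.pi / T   # angular sampling frequency ω_D = 2π/Δt

u = lambda t: math.sin(2 * math.pi * f_sig * t)   # the continuous signal
samples = [u(n * T) for n in range(100)]          # the discrete signal u(nT)

assert abs(omega_D - 2 * math.pi * f_D) < 1e-9    # ω_D = 2π·f_D
assert samples[5] == u(5 * T)                     # a sample stores u at t = nT
```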

    Discrete signals can be created directly by the source of information (in particular, discrete readings of sensor signals in control systems). The simplest example of a discrete signal is the temperature information transmitted in radio and television news programs; in the pauses between such transmissions there is usually no weather information. It should not be assumed that discrete messages are necessarily converted into discrete signals, and continuous messages into continuous signals. Most often, continuous signals are used to transmit discrete messages (as their carriers, i.e., carrier oscillations). Discrete signals can be used to transmit continuous messages.

    Obviously, in the general case the representation of a continuous signal by a set of discrete samples leads to a certain loss of useful information, since we know nothing about the behavior of the signal in the intervals between the samples. However, there is a class of analog signals (signals with a band-limited spectrum) for which, according to the Kotelnikov (Nyquist–Shannon) sampling theorem, such loss of information practically does not occur, and which can therefore be reconstructed with a high degree of accuracy from the values of their discrete samples.
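For band-limited signals this reconstruction can be sketched with the classical interpolation series; the sum below is truncated to the available samples, so the accuracy is finite, and the tone and sampling rate are illustrative:

```python
import math

def reconstruct(samples, f_d, t):
    """Kotelnikov (Whittaker) interpolation: a sum of shifted sinc kernels.

    Truncated to the available samples, so it is only approximate near the edges.
    """
    T = 1.0 / f_d
    total = 0.0
    for n, s in enumerate(samples):
        x = math.pi * (t - n * T) / T
        total += s if abs(x) < 1e-12 else s * math.sin(x) / x
    return total

u = lambda t: math.sin(2 * math.pi * 3.0 * t)   # a 3 Hz tone, band-limited
f_d = 50.0                                      # far above the 6 Hz Nyquist rate
samples = [u(n / f_d) for n in range(500)]      # 10 s of samples

# a value between the sample instants is recovered from the samples alone:
t0 = 5.013
assert abs(reconstruct(samples, f_d, t0) - u(t0)) < 0.05
```

At a sample instant the series collapses to the stored sample itself, since all other sinc kernels vanish there.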

    A digital signal is a particular kind of discrete signal. When discrete signal samples are converted into digital form (usually binary numbers), the signal is quantized by voltage level: the signal levels are numbered with binary numbers having a finite, required number of digits. A signal that is discrete in time and quantized in level is called a digital signal. (Signals that are quantized in level but continuous in time are rare in practice.) To form a digital signal, the discrete signal values are first quantized by level (Fig. 3, c), and then the quantized samples are replaced by numbers, most often represented in binary code by high ("one") and low ("zero") voltage levels, i.e., by short pulses (Fig. 3, d). Such a code is called unipolar. Since the samples can take on only a finite set of voltage level values (see, for example, the second sample in Fig. 3, d, which in digital form can almost equally well be written either as the number 5, code 0101, or as the number 4, code 0100), rounding is inevitable when representing the signal. The rounding errors that arise in this case are called quantization errors (or quantization noise).
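The quantization step just described can be illustrated with a short Python sketch; the 4-bit depth, the [0, 1) signal range, and the sample values are assumptions for illustration, not data from Fig. 3:

```python
import numpy as np

# Sketch: level quantization of discrete samples and the resulting
# quantization error; bit depth and sample values are illustrative.
bits = 4
levels = 2 ** bits                       # 16 quantization levels on [0, 1)
x = np.array([0.07, 0.29, 0.51, 0.93])  # discrete samples (assumed in [0, 1))

codes = np.floor(x * levels).astype(int)   # level number of each sample
words = [format(c, "04b") for c in codes]  # unipolar binary code words
x_hat = (codes + 0.5) / levels             # values restored from the codes
quant_error = x - x_hat                    # the "quantization noise"
```

By construction, the magnitude of the quantization error never exceeds half a quantization step.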

    The sequence of numbers representing a signal in digital processing is a discrete series. The numbers that make up this sequence are the values of the signal at separate (discrete) moments in time and are called digital signal samples. The quantized value of the signal is then represented as a set of pulses encoding the zeros ("0") and ones ("1") of its binary representation (Fig. 3, d). This set of pulses can be used to amplitude-modulate a carrier wave, producing a pulse-code radio signal.

    As a result of digital processing, nothing “physical” is obtained, only numbers. And numbers are an abstraction, a way of describing the information contained in a message. Therefore, we need to have something physical that will represent the numbers or “carry” the numbers. So, the essence of digital processing is that a physical signal (voltage, current, etc.) is converted into a sequence of numbers, which is then subjected to mathematical transformations in a computing device.

    The transformed digital signal (sequence of numbers) can be converted back into voltage or current if necessary.

    Digital signal processing provides ample opportunities for transmitting, receiving and converting information, including those that cannot be realized using analog technology. In practice, when analyzing and processing signals, digital signals are most often replaced by discrete ones, and their difference from digital ones is interpreted as quantization noise. In this regard, the effects associated with level quantization and signal digitization will in most cases not be taken into account. We can say that both discrete and digital circuits (in particular, digital filters) process discrete signals, only within the structure of digital circuits these signals are represented by numbers.
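The remark that the difference between digital and discrete signals can be treated as quantization noise admits a quick numerical check. Below is a sketch under the standard assumption that the rounding error is uniform over one quantization step of width q, so its variance is close to q²/12; the 8-bit step and the uniform signal model are illustrative:

```python
import numpy as np

# Sketch: a digital signal modeled as a discrete signal plus quantization
# noise; for rounding with step q the noise variance is close to q^2 / 12.
rng = np.random.default_rng(0)
q = 1.0 / 256                 # step of an assumed 8-bit quantizer on [0, 1)
x = rng.random(200_000)       # discrete samples, uniform on [0, 1)
x_q = np.round(x / q) * q     # level quantization by rounding
noise_var = np.var(x - x_q)   # empirical quantization-noise variance
```

With this many samples, `noise_var` agrees with q²/12 to well within a few percent.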

    Computing devices designed for signal processing can operate with digital signals. There are also devices built primarily on analog circuitry that operate with discrete signals presented in the form of pulses of various amplitudes, durations, or repetition rates.

    One of the main characteristics by which signals differ is the predictability of the signal (its values) over time.

    Fig. 3. Radio signals:

    a - analog; b - discrete; c - quantized; d - digital

    According to the degree of a priori knowledge available about them (from the Latin a priori, "from what comes before", i.e., pre-experimental knowledge), all radio signals are usually divided into two main groups: deterministic (regular) and random signals (Fig. 4).

    Deterministic are radio signals whose instantaneous values ​​at any time are reliably known, that is, predictable with a probability equal to one. Deterministic signals are described by predetermined time functions. By the way, the instantaneous value of a signal is a measure of how much and in what direction the variable deviates from zero; thus, instantaneous signal values ​​can be both positive and negative (Fig. 4, a). The simplest examples of a deterministic signal are a harmonic oscillation with a known initial phase, high-frequency oscillations modulated according to a known law, a sequence or burst of pulses, the shape, amplitude and temporal position of which are known in advance.

    If the message transmitted through communication channels were deterministic, that is, known in advance with complete reliability, then its transmission would be meaningless. Such a deterministic message essentially does not contain any new information. Therefore, messages should be considered as random events (or random functions, random variables). In other words, there must be a certain set of message options (for example, a set of different pressure values ​​​​produced by the sensor), of which one is realized with a certain probability. In this regard, the signal is a random function. A deterministic signal cannot be a carrier of information. It can only be used for testing a radio engineering information transmission system or testing its individual devices. The random nature of messages, as well as interference, determined the critical importance of probability theory in the construction of the theory of information transmission.

    Fig. 4. Signals:

    a - deterministic; b - random

    Deterministic signals are divided into periodic and non-periodic (pulse) signals. A pulse signal is a signal of finite energy that differs significantly from zero only during a limited time interval, commensurate with the duration of the transient process in the system it is intended to act upon.

    Random signals are those whose instantaneous values are unknown in advance and cannot be predicted with a probability equal to one. For a random signal, one can know only the probability that it will take on a particular value.

    It may seem that the concept of “random signal” is not entirely correct.

    But that is not so. For example, the voltage at the output of a thermal imager receiver aimed at a source of infrared radiation consists of chaotic oscillations that carry a variety of information about the object under analysis. Strictly speaking, all signals encountered in practice are random, and most of them are chaotic functions of time (Fig. 4, b). Paradoxical as it may seem at first glance, only a random signal can carry useful information: the information is contained in the variety of amplitude, frequency (phase) or code changes of the transmitted signal. Communication signals change their instantaneous values over time, and these changes can be predicted only with a probability less than one. Thus, communication signals are essentially random processes, and they are therefore described by the methods of the theory of random processes.

    In the process of transmitting useful information, radio signals can be subjected to one or another transformation. This is usually reflected in their name: signals modulated, demodulated (detected), encoded (decoded), amplified, delayed, sampled, quantized, etc.

    According to their role in the modulation process, signals are divided into modulating (the primary signal, which modulates the carrier) and modulated (the carrier oscillation).

    According to their belonging to a particular type of radio engineering system, in particular to information transmission systems, one distinguishes communication, telephone, telegraph, radio broadcasting, television, radar, radio navigation, measuring, control, service (including pilot) and other signals.

    The given brief classification of radio signals does not fully cover all their diversity.

    Radio circuits and elements used to implement the signal and oscillation conversions listed in § 1.2 can be divided into the following main classes:

    linear circuits with constant parameters;

    linear circuits with variable parameters;

    nonlinear circuits.

    It should immediately be pointed out that in real radio devices, a clear distinction between linear and nonlinear circuits and elements is not always possible. Classification of the same elements as linear or nonlinear often depends on the level of signals affecting them.

    However, the above classification of circuits is necessary for understanding the theory and technology of signal processing.

    Let us formulate the basic properties of these circuits.

    2. LINEAR CIRCUITS WITH CONSTANT PARAMETERS

    We can proceed from the following definitions.

    1. A circuit is linear if the parameters of the elements included in it do not depend on the external action (voltage, current) applied to the circuit.

    2. A linear circuit obeys the principle of superposition (overlay).

    In mathematical form, this principle is expressed by the following equality:

    L[u1(t) + u2(t) + ... + un(t)] = L[u1(t)] + L[u2(t)] + ... + L[un(t)], (1.1)

    where L is an operator characterizing the effect of the circuit on the input signal, and u1(t), u2(t), ..., un(t) are the input signals.

    The essence of the superposition principle can be formulated as follows: when several external actions are applied to a linear circuit, the response of the circuit (current, voltage) can be determined by superposing the solutions found for each action separately. An equivalent formulation: in a linear circuit, the sum of the effects of the individual actions coincides with the effect of the sum of the actions. It is assumed that the circuit is free of initial energy stores.

    The superposition principle underlies the spectral and operator methods for analyzing transient processes in linear circuits, as well as the superposition integral method (Duhamel integral). Using the principle of superposition, any complex signals, when transmitted through linear circuits, can be decomposed into simple ones that are more convenient for analysis (for example, harmonic).

    3. For any action, however complex, no oscillations at new frequencies arise in a linear circuit with constant parameters. This follows from the fact that when a harmonic action is applied to a linear circuit with constant parameters, the output oscillation remains harmonic at the same frequency as the input; only its amplitude and phase change. Decomposing an arbitrary signal into harmonic oscillations and substituting the result into (1.1), we see that only oscillations with frequencies contained in the input signal can exist at the output of the circuit.

    This means that none of the signal transformations that involve the appearance of new frequencies (i.e., frequencies not present in the spectrum of the input signal) can, in principle, be carried out using a linear circuit with constant parameters. Such circuits find wide application for solving problems unrelated to spectrum transformation, such as linear signal amplification, filtering (by frequency), etc.
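Both properties, superposition and the absence of new frequencies, can be checked on a toy discrete LTI model. In the sketch below, circular convolution with a fixed impulse response plays the role of the operator L; the kernel and test frequencies are arbitrary illustrative choices, not a circuit from the text:

```python
import numpy as np

# Sketch: a discrete LTI operator (circular convolution with a fixed
# impulse response h) obeys superposition and generates no new frequencies.
h = np.array([0.25, 0.5, 0.25])   # fixed impulse response (illustrative)

def L_op(x):
    """Apply the fixed linear operator: circular convolution with h."""
    return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h, x.size)))

n = np.arange(64)
x1 = np.cos(2 * np.pi * 4 * n / 64)   # harmonic at frequency bin 4
x2 = np.cos(2 * np.pi * 9 * n / 64)   # harmonic at frequency bin 9

# Superposition: the response to a sum equals the sum of the responses.
superposition_holds = np.allclose(L_op(x1 + x2), L_op(x1) + L_op(x2))

# The output spectrum of L_op(x1) contains only the input bins 4 and 64 - 4.
out_bins = np.nonzero(np.abs(np.fft.fft(L_op(x1))) > 1e-6)[0]
```

Only the amplitude and phase of the input harmonic change; no energy appears at any other frequency bin.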

    3. LINEAR CIRCUITS WITH VARIABLE PARAMETERS

    This refers to circuits in which one or more parameters change over time (but do not depend on the input signal). Such circuits are often called linear parametric.

    Properties 1 and 2 formulated in the previous paragraph are also valid for linear parametric circuits. However, unlike the previous case, even the simplest harmonic action on a linear circuit with variable parameters produces a complex oscillation with a spectrum of frequencies. This can be illustrated with the following simple example. Let a harmonic EMF

    e(t) = E cos ωt

    be applied to a resistive element whose conductance varies with time according to the law

    G(t) = G0(1 + m cos Ωt).

    The current through the element is

    i(t) = G(t)e(t) = G0E cos ωt + (m G0 E / 2) cos(ω + Ω)t + (m G0 E / 2) cos(ω − Ω)t.

    As we can see, the current contains components with frequencies ω ± Ω that are not present in e(t). Even this simplest model shows that by varying a resistance (conductance) in time, it is possible to transform the spectrum of the input signal.
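A numerical sketch of this effect is below; the time-varying element is modeled as a conductance so the current is a simple product, and all parameter values (50 Hz signal, 5 Hz parameter variation, m = 0.4) are illustrative:

```python
import numpy as np

# Sketch: a harmonic EMF through a conductance G(t) = G0*(1 + m*cos(Omega*t))
# yields current components at omega - Omega, omega, and omega + Omega.
N = 1024
t = np.arange(N) / N                      # one "second" of time, N samples
f_sig, f_par = 50, 5                      # signal and parameter frequencies, Hz
G0, m, E = 1.0, 0.4, 1.0

e = E * np.cos(2 * np.pi * f_sig * t)     # harmonic EMF
G = G0 * (1 + m * np.cos(2 * np.pi * f_par * t))
i = G * e                                 # current through the element

spectrum = np.abs(np.fft.rfft(i)) / N
present = np.nonzero(spectrum > 1e-6)[0]  # frequency bins actually present
# present -> [45, 50, 55]: omega - Omega, omega, omega + Omega
```

The side components at ω ± Ω each have amplitude mG0E/2, half the modulation index times the main component, matching the expansion above.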

    A similar result, albeit with more complicated mathematics, can be obtained for a circuit with variable parameters containing reactive elements: inductors and capacitors. This question is discussed in Chap. 10. Here we only note that a linear circuit with variable parameters transforms the frequency spectrum of the action and, therefore, can be used for signal transformations accompanied by spectrum transformation. It will also become clear later that a periodic variation in time of the inductance or capacitance of an oscillatory circuit makes it possible, under certain conditions, to "pump" energy into the circuit from the auxiliary device that varies this parameter ("parametric amplifiers" and "parametric generators", Chap. 10).

    4. NONLINEAR CIRCUITS

    A radio circuit is nonlinear if it includes one or more elements whose parameters depend on the level of the input signal. The simplest nonlinear element is a diode with a current-voltage characteristic shown in Fig. 1.4.

    Let us list the main properties of nonlinear circuits.

    1. The principle of superposition is not applicable to nonlinear circuits (and elements). This property is closely related to the curvature of the current-voltage (or other similar) characteristics of nonlinear elements, which destroys the proportionality between current and voltage. For example, for a diode, if a voltage u1 corresponds to a current i1 and a voltage u2 to a current i2, then the total voltage u1 + u2 corresponds to a current that differs from the sum i1 + i2 (Fig. 1.4).

    From this simple example it is clear that when analyzing the effect of a complex signal on a nonlinear circuit, it cannot be decomposed into simpler ones; it is necessary to look for the response of the circuit to the resulting signal. The inapplicability of the superposition principle to nonlinear circuits makes spectral and other analysis methods based on the decomposition of a complex signal into components unsuitable.

    2. An important property of a nonlinear circuit is the transformation of the signal spectrum. When a nonlinear circuit is exposed to the simplest harmonic signal, in addition to oscillations of the fundamental frequency, harmonics appear in the circuit with frequencies that are multiples of the fundamental frequency (and in some cases, a constant component of current or voltage). In the future, it will be shown that with a complex signal shape in a nonlinear circuit, in addition to harmonics, oscillations with combination frequencies also arise, which are the result of the interaction of individual oscillations that make up the signal.
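As a minimal numerical illustration of harmonics and combination frequencies, assume a square-law element i = u² (a crude stand-in for the curved diode characteristic, not the textbook's model; the two input frequencies are arbitrary):

```python
import numpy as np

# Sketch: a square-law nonlinearity applied to a two-tone input produces a
# DC component, second harmonics, and combination frequencies f1 +/- f2.
N = 1024
t = np.arange(N) / N
f1, f2 = 10, 3
u = np.cos(2 * np.pi * f1 * t) + np.cos(2 * np.pi * f2 * t)  # two-tone input
i = u ** 2                                                   # nonlinear element

spectrum = np.abs(np.fft.rfft(i)) / N
present = np.nonzero(spectrum > 1e-6)[0]
# present -> [0, 6, 7, 13, 20]: DC, 2*f2, f1 - f2, f1 + f2, 2*f1
```

This follows directly from u² = 1 + ½cos(2a) + ½cos(2b) + cos(a + b) + cos(a − b) for u = cos(a) + cos(b): the output spectrum contains none of the input frequencies themselves, only DC, second harmonics, and the combination frequencies.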

    From the point of view of transforming the signal spectrum, the fundamental difference between linear parametric and nonlinear circuits should be emphasized. In a nonlinear circuit, the structure of the output spectrum depends not only on the shape of the input signal, but also on its amplitude. In a linear parametric circuit, the spectrum structure does not depend on the signal amplitude.

    Of particular interest for radio engineering are free oscillations in nonlinear circuits. Such oscillations are called self-oscillations, since they arise and can exist stably in the absence of external periodic influence. Energy consumption is compensated by a DC power source.

    Basic radio engineering processes: generation, modulation, detection and frequency conversion are accompanied by transformation of the frequency spectrum. Therefore, these processes can be carried out using either nonlinear or linear parametric circuits. In some cases, both nonlinear and linear parametric circuits are used simultaneously. It should also be emphasized that nonlinear elements work in combination with linear circuits that select useful components of the converted spectrum. In this regard, as already noted at the beginning of this paragraph, the division of circuits into linear, nonlinear and linear parametric is very arbitrary. Usually, to describe the behavior of various components of the same radio device, it is necessary to use a variety of mathematical methods - linear and nonlinear.

    Fig. 1.4. Current-voltage characteristic of a nonlinear element (diode)

    The above basic properties of circuits of three classes - linear with constant parameters, linear parametric and nonlinear - are preserved in any form of implementation of circuits: with lumped parameters, with distributed parameters (lines, radiating devices), etc. These properties also apply to devices digital signal processing.

    It should be emphasized, however, that the superposition principle underlying the division of circuits into linear and nonlinear was formulated above for the operation of summing signals at the input of a circuit [see (1.1)]. This operation does not exhaust the requirements of modern signal processing systems. Important in practice, for example, is the case when the signal at the input of a circuit is the product of two signals. It turns out that such signals can be processed in a way that obeys the principle of superposition, but this processing is a combination of specially chosen nonlinear and linear operations. Such processing is called homomorphic.

    The synthesis of such devices is considered at the end of the course (see Chapter 16), after studying linear and nonlinear circuits, as well as digital signal processing, the development of which was the impetus for the widespread use of homomorphic processing.