• What types of signals exist. Concepts of information, message, signal. Types of signals and their main characteristics. Comparison of digital and analog signals

An analog signal is a continuous function of a continuous argument, i.e. it is defined for any value of the independent variable. Sources of analog signals are, as a rule, physical processes and phenomena whose development (the dynamics of change of certain properties) is continuous in time, in space, or in some other independent variable; the recorded signal is similar (analogous) to the process that generates it. An example of a mathematical notation for a specific analog signal: y(t) = 4.8 exp[-(t-4)²/2.8]. A graph of this signal is shown in Fig. 2.2.1; both the values of the function itself and of its argument can take any values within certain intervals y1 ≤ y ≤ y2, t1 ≤ t ≤ t2. If the intervals of the signal values or of its independent variables are not limited, they are by default assumed to run from -∞ to +∞. The set of possible signal values thus forms a continuous space, in which any point can be specified with infinite precision.

    Fig. 2.2.1. Graph of the signal y(t) = 4.8 exp[-(t-4)²/2.8].
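As a quick numerical check of this model, the function can be evaluated directly; a minimal sketch in Python (the name y simply follows the notation in the text):

```python
import math

def y(t):
    # Analog-signal model from the text: the Gaussian pulse 4.8*exp[-(t-4)^2/2.8]
    return 4.8 * math.exp(-(t - 4.0) ** 2 / 2.8)

# The peak of the pulse is at t = 4, where the exponent vanishes:
print(y(4.0))   # 4.8
```

The pulse falls off symmetrically on either side of t = 4, which matches the bell shape in Fig. 2.2.1.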

A discrete signal is also continuous in its values, but it is defined only for discrete values of the argument. The set of its values is finite (countable), and it is described by the discrete sequence y(nΔt), where y1 ≤ y ≤ y2, Δt is the interval between samples (the sampling interval), and n = 0, 1, 2, ..., N numbers the discrete samples. If a discrete signal is obtained by sampling an analog signal, it is a sequence of samples whose values are exactly equal to the values of the original signal at the coordinates nΔt.

An example of sampling the analog signal shown in Fig. 2.2.1 is given in Fig. 2.2.2. With Δt = const (uniform sampling), the discrete signal can be described by the shorthand notation y(n).
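Uniform sampling of the signal above with Δt = 1, as in Fig. 2.2.2, can be sketched as follows (an illustration, not the figure's exact data):

```python
import math

def y(t):
    return 4.8 * math.exp(-(t - 4.0) ** 2 / 2.8)

dt = 1.0                                   # sampling interval, Δt = const
samples = [y(n * dt) for n in range(9)]    # discrete signal y(nΔt), n = 0..8

# With uniform sampling, the shorthand y(n) denotes this same sequence.
print(len(samples), max(samples))   # 9 4.8
```

The maximum sample lands exactly on the peak because t = 4 happens to be a multiple of Δt; with a different Δt, samples would straddle the peak.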

When a signal is sampled non-uniformly, the discrete sequence is usually denoted (in text descriptions) in curly brackets, {s(ti)}, and the sample values are given as tables together with the coordinate values ti. For short non-uniform numerical sequences the following notation is also used: s(ti) = {a1, a2, ..., aN}, t = t1, t2, ..., tN.

A digital signal is quantized in its values and discrete in its argument. It is described by a quantized lattice function yn = Qk[y(nΔt)], where Qk is the quantization function with k quantization levels; the quantization intervals may be either uniform or non-uniform, for example logarithmic. A digital signal is usually specified as a numeric array of successive values of the argument with Δt = const, but in general it can also be specified as a table for arbitrary values of the argument.



Essentially, a digital signal is a formalized version of a discrete signal whose values have been rounded to a certain number of digits, as shown in Fig. 2.2.3. In digital systems and in computers a signal is always represented to within a finite number of bits, and is therefore always digital. For this reason, when describing digital signals the quantization function is usually omitted (uniform quantization is implied by default), and the rules for describing discrete signals are used.

    Fig. 2.2.2. Discrete signal. Fig. 2.2.3. Digital signal.

    y(nΔt) = 4.8 exp[-(nΔt-4)²/2.8], Δt = 1. yn = Qk[y(nΔt)], Δt = 1, k = 5.
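A uniform quantizer Qk can be sketched as follows; this is a toy illustration with k levels spread over a given range (the nearest-level rounding convention and the range [0, 4.8] are assumptions for the example):

```python
def quantize(x, k, x_min, x_max):
    # Uniform quantizer Q_k: snap x to the nearest of k levels on [x_min, x_max].
    step = (x_max - x_min) / (k - 1)
    level = round((x - x_min) / step)   # level number, 0 .. k-1
    return x_min + level * step

# Rounding a sample to k = 5 levels on [0, 4.8] (quantization step 1.2):
print(quantize(2.9, 5, 0.0, 4.8))   # 2.4, the nearest quantization level
```

Applying this to every sample of the discrete sequence y(nΔt) yields the digital signal yn of Fig. 2.2.3.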

In principle, an analog signal recorded by appropriate digital equipment can also be quantized in its values (Fig. 2.2.4). But there is no point in separating such signals into a distinct type: they remain analog, piecewise-continuous signals with a quantization step determined by the permissible measurement error.

Most of the discrete and digital signals you deal with are sampled analog signals. But there are signals that belong to the discrete class from the outset, for example streams of gamma quanta.

    Fig. 2.2.4. Quantized signal y(t) = Qk[y(t)], k = 5.

Spectral representation of signals. In addition to the usual time (coordinate) representation of signals and functions, the analysis and processing of data make wide use of descriptions of signals by frequency functions, i.e. by arguments inverse to those of the time (coordinate) representation. The possibility of such a description follows from the fact that any signal, however complex in shape, can be represented as a sum of simpler signals and, in particular, as a sum of elementary harmonic oscillations, whose totality is called the frequency spectrum of the signal. Mathematically, the spectrum of a signal is described by functions of the amplitudes and initial phases of the harmonic oscillations over a continuous or discrete argument, the frequency. The spectrum of amplitudes is usually called the amplitude-frequency response of the signal, and the spectrum of phase angles its phase-frequency response. A description in terms of the frequency spectrum represents a signal just as unambiguously as the coordinate description.

In Fig. 2.2.5 a segment of a signal function is shown, obtained by summing a constant component (the frequency of the constant component is 0) and three harmonic oscillations. The mathematical description of the signal is given by the formula:

    s(t) = Σ An cos(2π fn t + φn), summed over n = 0, 1, 2, 3,

where An = (5, 3, 6, 8) are the amplitudes; fn = (0, 40, 80, 120) the frequencies (Hz); φn = (0, -0.4, -0.6, -0.8) the initial phase angles (in radians) of the oscillations.
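Assuming the conventional cosine form An cos(2π fn t + φn) for each component (an assumption, since only the parameters are listed here), the signal of Fig. 2.2.5 can be synthesized directly:

```python
import math

A   = [5, 3, 6, 8]               # amplitudes
f   = [0, 40, 80, 120]           # frequencies, Hz
phi = [0, -0.4, -0.6, -0.8]      # initial phases, rad

def s(t):
    # Sum of the constant component (f = 0) and three harmonic oscillations.
    return sum(A[n] * math.cos(2 * math.pi * f[n] * t + phi[n]) for n in range(4))

print(round(s(0.0), 3))   # 18.289
```

Note that all three harmonics complete an integer number of cycles in 1/40 s, so s(t) is periodic with period 0.025 s.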

    Fig. 2.2.5. Time representation of the signal.

The frequency representation of this signal (the signal spectrum in the form of its amplitude and phase responses) is shown in Fig. 2.2.6. Note that the frequency representation of the periodic signal s(t), which contains a limited number of spectral harmonics, consists of only eight samples and is very compact compared with the continuous time representation defined on the interval from -∞ to +∞.

    Fig. 2.2.6. Frequency representation of the signal.
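This compactness can be illustrated numerically: sampling the signal over an integer number of periods and taking a discrete Fourier transform recovers the four amplitudes exactly. A sketch using a plain DFT; the sampling rate of 480 Hz and window of 48 samples are illustrative assumptions, chosen so that 0, 40, 80 and 120 Hz fall exactly on DFT bins:

```python
import cmath
import math

A, f, phi = [5, 3, 6, 8], [0, 40, 80, 120], [0, -0.4, -0.6, -0.8]
fs, N = 480.0, 48   # 48 samples = 0.1 s = an integer number of periods

x = [sum(A[n] * math.cos(2 * math.pi * f[n] * k / fs + phi[n]) for n in range(4))
     for k in range(N)]

def dft(seq):
    n = len(seq)
    return [sum(seq[m] * cmath.exp(-2j * math.pi * k * m / n) for m in range(n))
            for k in range(n)]

X = dft(x)
# Bin spacing is fs/N = 10 Hz, so the components sit in bins 0, 4, 8 and 12.
# DC amplitude is |X[0]|/N; each harmonic amplitude is 2|X[k]|/N.
amps = [abs(X[0]) / N] + [2 * abs(X[k]) / N for k in (4, 8, 12)]
print([round(a, 6) for a in amps])   # [5.0, 3.0, 6.0, 8.0]
```

The corresponding phases could be read off as the angles of X[4], X[8], X[12], completing the eight spectral samples mentioned above.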

The graphical display of analog signals (Fig. 2.2.1) requires no special explanation. For the graphical display of discrete and digital signals, one uses either vertical line segments of the corresponding scale length drawn above the argument axis (Fig. 2.2.6), or an envelope (smooth or broken) drawn through the sample values (the dotted curve in Fig. 2.2.2). Because fields are continuous and digital data are, as a rule, secondary, obtained by sampling and quantizing analog signals, we will take the second method of graphical display as the main one.

The following types of signals are distinguished:

    1. analog
    2. digital
    3. discrete

    Analog signal

An analog signal is natural. It can be picked up by various kinds of sensors, for example environmental sensors (pressure, humidity) or mechanical sensors (acceleration, speed). In mathematics, analog signals are described by continuous functions. A continuously varying electrical voltage, for example, is an analog signal.

    Digital signal

Digital signals are artificial, i.e. they can only be obtained by converting an analog electrical signal.

The process of converting a continuous analog signal into a sequence of values is called discretization. There are two kinds of discretization:

    1. by time
    2. by amplitude

Discretization in time is usually called sampling, while discretization of the signal amplitude is called quantization by level.

Digital signals are mostly light or electrical pulses. A digital signal occupies the entire available frequency band (bandwidth). Physically the signal remains analog; only after conversion is it endowed with numerical properties, so that numerical methods can be applied to it.

    Discrete signal

A discrete signal is likewise a converted analog signal, but it is not necessarily quantized by level.

This is the basic information about the types of signals.

Signals are information codes that people use to convey messages in an information system. A signal can be sent without necessarily being received, whereas a message is a signal (or set of signals) that has been received and decoded by the recipient, whether analog or digital.

One of the first methods of transmitting information without the participation of people or other living beings was the signal fire: when danger arose, fires were lit in sequence from one post to the next. Below we consider the transmission of information using electromagnetic signals and dwell in detail on analog and digital signals.

Any signal can be represented as a function that describes the changes of its characteristics. Such a representation is convenient for studying radio engineering devices and systems. Besides the signal, radio engineering also deals with noise, its counterpart: noise carries no useful information and distorts the signal by interacting with it.

The concept of a signal makes it possible to abstract away from specific physical quantities when considering phenomena related to the encoding and decoding of information. A mathematical model of the signal allows research to rely on the parameters of a time function.

    Signal types

    Signals based on the physical environment of the information carrier are divided into electrical, optical, acoustic and electromagnetic.

By the way it is specified, a signal can be regular or irregular. A regular signal is represented by a deterministic function of time. An irregular signal is represented by a chaotic function of time and is analyzed using a probabilistic approach.

    Signals, depending on the function that describes their parameters, can be analog or discrete. A discrete signal that has been quantized is called a digital signal.

    Signal Processing

Analog and digital signals are processed in order to transmit and receive the information encoded in them. Once the information has been extracted, it can be used for various purposes; in special cases the information is reformatted.

Analog signals are amplified, filtered, modulated and demodulated. Digital data can in addition be compressed, detected, etc.

    Analog signal

Our senses perceive all incoming information in analog form. For example, when we see a car passing by, we see its movement continuously. If our brain received information about its position only once every 10 seconds, people would constantly be run over. We can estimate distance much faster, and that distance is well defined at every moment of time.

Exactly the same applies to other information: we can judge loudness at any moment, feel the pressure our fingers exert on objects, and so on. In other words, almost all information arising in nature is analog. The easiest way to transmit such information is by analog signals, which are continuous and defined at every moment of time.

To picture what an analog electrical signal looks like, imagine a graph with amplitude on the vertical axis and time on the horizontal axis. If, for example, we measure a changing temperature, a continuous line appears on the graph showing its value at each moment in time. To transmit such a signal using electric current, we map the temperature value to a voltage value. Thus, for example, 35.342 degrees Celsius can be encoded as a voltage of 3.5342 V.
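The mapping in this example is a simple scaling; a one-line sketch (the factor of 1/10 is implied by the numbers in the text, not stated there):

```python
def temp_to_volts(t_celsius):
    # Toy analog encoding from the text: 35.342 degrees Celsius maps to 3.5342 V.
    return t_celsius / 10.0

print(temp_to_volts(35.342))
```

Any monotonic, continuous mapping would do equally well; the receiver only needs to know the scale to recover the temperature.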

Analog signals used to be employed in all types of communication. To combat interference, such a signal must be amplified: the higher the noise level, the more the signal must be amplified so that it can be received without distortion. This kind of signal processing spends a great deal of energy and generates heat, and the amplified signal may itself cause interference in other communication channels.

Nowadays analog signals are still used in television and radio, and to convert the input signal in microphones; but in general, this type of signal is being displaced by digital signals everywhere.

    Digital signal

    A digital signal is represented by a sequence of digital values. The most commonly used signals today are binary digital signals, as they are used in binary electronics and are easier to encode.

Unlike the previous signal type, a digital signal has two values, "1" and "0". Returning to our temperature example, the signal is formed differently: whereas in the analog case the supplied voltage corresponds to the measured temperature, in the digital case a certain number of voltage pulses is supplied for each temperature value. A voltage pulse corresponds to "1", and the absence of voltage to "0". The receiving equipment decodes the pulses and restores the original data.

Picturing what a digital signal looks like on a graph, we see that the transition from zero to maximum is abrupt. This feature allows the receiving equipment to "see" the signal more clearly: if interference occurs, it is easier for the receiver to decode the signal than with analog transmission.

However, a digital signal cannot be restored at very high noise levels, whereas information can often still be "extracted" from a heavily distorted analog signal. This is due to the cliff effect: a digital signal can be transmitted over a certain distance and then simply disappears. The problem occurs everywhere and is solved by regenerating the signal: where the signal breaks off, a repeater is inserted, or the length of the communication line is reduced. A repeater does not amplify the signal; it recognizes its original form, produces an exact copy of it, and that copy can be used in the circuit in any way. Such signal regeneration methods are actively used in network technologies.

    Among other things, analog and digital signals also differ in the ability to encode and encrypt information. This is one of the reasons for the transition of mobile communications to digital.

    Analog and digital signal and digital-to-analog conversion

    We need to talk a little more about how analog information is transmitted over digital communication channels. Let's use examples again. As already mentioned, sound is an analog signal.

What happens in mobile phones, which transmit information via digital channels?

Sound entering the microphone undergoes analog-to-digital conversion (ADC). This process consists of three steps. First, individual signal values are taken at equal intervals of time, a process called sampling. According to Kotelnikov's sampling theorem, the rate at which these values are taken must be at least twice the highest signal frequency. That is, if our channel is band-limited to 4 kHz, the sampling rate will be 8 kHz.
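The sampling-rate calculation here is just a doubling; as a sketch:

```python
f_max_hz = 4_000              # channel band limit from the example (4 kHz)
f_sampling_hz = 2 * f_max_hz  # sampling theorem: sample at at least twice f_max
print(f_sampling_hz)          # 8000, i.e. the 8 kHz quoted in the text
```

In practice a small margin above 2·f_max is used, together with an anti-aliasing filter, but 8 kHz is the classic telephony figure.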

Next, all the sampled values are rounded or, in other words, quantized: the more levels there are, the more accurately the signal is reconstructed at the receiver. All values are then converted into binary code, which is transmitted to the base station and on to the other party, the receiver. In the receiver's phone a digital-to-analog conversion (DAC) takes place. This is the reverse procedure, whose goal is to obtain an output signal as close as possible to the original. The analog signal then emerges as sound from the phone speaker.
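The three ADC steps (sampling, quantization, binary coding) and the inverse DAC step can be sketched end to end; the level count and value range here are illustrative assumptions:

```python
def adc(samples, k, lo, hi):
    # ADC sketch: quantize each sample to one of k levels, then encode
    # the level number as a binary codeword.
    bits = (k - 1).bit_length()
    step = (hi - lo) / (k - 1)
    return [format(round((s - lo) / step), "0{}b".format(bits)) for s in samples]

def dac(codes, k, lo, hi):
    # DAC sketch: decode each codeword back to its quantization level.
    step = (hi - lo) / (k - 1)
    return [lo + int(c, 2) * step for c in codes]

codes = adc([0.0, 1.1, 2.5, 4.7], 5, 0.0, 4.8)
print(codes)                      # ['000', '001', '010', '100']
print(dac(codes, 5, 0.0, 4.8))    # [0.0, 1.2, 2.4, 4.8]
```

Note that the DAC returns quantization levels, not the exact original samples: the difference is the quantization error, which shrinks as the number of levels grows.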

    A signal is defined as a voltage or current that can be transmitted as a message or as information. By their nature, all signals are analog, be it DC or AC, digital or pulse. However, it is common to make a distinction between analog and digital signals.

    A digital signal is a signal that has been processed in a certain way and converted into numbers. Usually these digital signals are connected to real analog signals, but sometimes there is no connection between them. An example is data transmission over local area networks (LANs) or other high-speed networks.

    In digital signal processing (DSP), the analog signal is converted into binary form by a device called an analog-to-digital converter (ADC). The ADC output produces a binary representation of the analog signal, which is then processed by an arithmetic digital signal processor (DSP). After processing, the information contained in the signal can be converted back to analog form using a digital-to-analog converter (DAC).

    Another key concept in defining a signal is the fact that a signal always carries some information. This leads us to a key problem in physical analog signal processing: the problem of information retrieval.

    Goals of signal processing.

    The main purpose of signal processing is the need to obtain the information contained in them. This information is typically present in the signal amplitude (absolute or relative), frequency or spectral content, phase, or relative timing of multiple signals.

    Once the desired information has been extracted from the signal, it can be used in a variety of ways. In some cases it is desirable to reformat the information contained in the signal.

    In particular, a change in signal format occurs when transmitting an audio signal in a frequency division multiple access (FDMA) telephone system. In this case, analog techniques are used to place multiple voice channels in the frequency spectrum for transmission via microwave radio relay, coaxial cable, or fiber optic cable.

    In digital communication, analog audio information is first converted to digital using an ADC. Digital information representing individual audio channels is time multiplexed (time division multiple access, TDMA) and transmitted over a serial digital link (as in a PCM system).

    Another reason for signal processing is to compress the signal bandwidth (without significant loss of information) followed by formatting and transmission of information at reduced speeds, which allows the required channel bandwidth to be narrowed. High-speed modems and adaptive pulse code modulation (ADPCM) systems widely use data redundancy elimination (compression) algorithms, as do digital mobile communications systems, MPEG audio recording systems, and high-definition television (HDTV).

    Industrial data acquisition and control systems use information received from sensors to generate appropriate feedback signals, which in turn directly control the process. Please note that these systems require both ADC and DAC, as well as sensors, signal conditioners and DSPs (or microcontrollers).

    In some cases, there is noise in the signal containing information and the main goal is to reconstruct the signal. Techniques like filtering, autocorrelation, convolution, etc. are often used to accomplish this task in both analog and digital domains.

    SIGNAL PROCESSING GOALS
    • Extracting signal information (amplitude, phase, frequency, spectral components, timing relationships)
    • Signal format conversion (FDMA, TDMA, CDMA)
    • Data compression (modems, cell phones, HDTV, MPEG compression)
    • Generating feedback signals (industrial process control)
    • Isolating signal from noise (filtering, autocorrelation, convolution)
    • Isolating and storing the signal in digital form for subsequent processing (FFT)

    Signal Conditioning

In most of the above situations (involving DSP technologies) both an ADC and a DAC are required. In some cases, however, only a DAC is needed, when analog signals can be generated directly by the DSP and a DAC. A good example is raster-scan video displays, in which a digitally generated signal drives the video image via a RAMDAC (pixel-array digital-to-analog converter).

Another example is artificially synthesized music and speech. In reality, generating physical analog signals by purely digital methods relies on information previously obtained from sources of similar physical analog signals. In display systems the displayed data must convey relevant information to the operator. In sound systems the statistical properties of the generated sounds are specified, having previously been determined through extensive use of DSP methods (sound source, microphone, preamplifier, ADC, etc.).

    Signal processing methods and technologies

    Signals can be processed using analog techniques (analog signal processing, or ASP), digital techniques (digital signal processing, or DSP), or a combination of analog and digital techniques (mixed signal processing, or MSP). In some cases the choice of methods is clear, in other cases the choice is not clear and the final decision is based on certain considerations.

    As for DSP, the main difference between it and traditional computer data analysis is the high speed and efficiency of complex digital processing functions such as filtering, analysis and real-time data compression.

The term "mixed signal processing" implies that the system performs both analog and digital processing. Such a system can be implemented as a printed circuit board, a hybrid integrated circuit (IC), or a single chip with integrated elements. ADCs and DACs are themselves considered mixed-signal devices, since each implements both analog and digital functions.

Recent advances in very large scale integration (VLSI) technology enable complex (digital and analog) processing on a single chip. The very nature of DSP means that these functions can be performed in real time.

    Comparison of analog and digital signal processing

    Today's engineer is faced with choosing the appropriate combination of analog and digital techniques to solve a signal processing problem. It is impossible to process physical analog signals using digital methods alone, since all sensors (microphones, thermocouples, piezoelectric crystals, disk drive heads, etc.) are analog devices.

Some types of signals require conditioning circuits before further processing, whether analog or digital. Signal conditioning circuits are analog processors performing functions such as amplification and buffering (in instrumentation and preliminary (buffer) amplifiers), signal detection against a noise background (high-precision common-mode rejection amplifiers, equalizers and linear receivers), dynamic range compression (logarithmic amplifiers, logarithmic DACs and programmable-gain amplifiers), and filtering (passive or active).

Several approaches to implementing signal processing are shown in Figure 1. The top area of the figure shows the purely analog approach; the remaining areas depict DSP implementations. Note that once DSP technology has been chosen, the next decision is where to place the ADC in the signal-processing path.

    ANALOG AND DIGITAL SIGNAL PROCESSING

    Figure 1. Signal processing methods

In general, as the ADC is moved closer to the sensor, more of the analog signal processing is taken over by the ADC itself. Greater ADC capability may mean a higher sampling rate, a wider dynamic range, higher resolution, rejection of input noise, input filtering and programmable gain amplifiers (PGA), on-chip voltage references, and so on. All these additions raise the functional level and simplify the system.

    With modern technologies available to produce DACs and ADCs with high sampling rates and resolutions, significant progress has been made in integrating more and more circuits directly into ADCs/DACs.

In the measurement industry, for example, there are 24-bit ADCs with built-in programmable-gain amplifiers (PGAs) that can digitize full-scale 10 mV bridge signals directly, without further conditioning (e.g. the AD773x series).

At voice and audio frequencies, complete encoding-decoding devices are common: codecs with an analog front end (AFE), whose on-chip analog circuitry minimizes the need for external conditioning components (AD1819B and AD73322).

    There are also video codecs (AFE) for applications such as CCD image processing and others (such as the AD9814, AD9816, and AD984X series).

    Implementation example

    As an example of the use of DSP, compare analog and digital low-pass filters (LPFs), each with a cutoff frequency of 1 kHz.

The digital filter is implemented as a typical digital system, shown in Figure 2. Note that the diagram makes several implicit assumptions. First, to process the signal accurately, the ADC/DAC path is assumed to have sufficient sampling rate, resolution and dynamic range. Second, to finish all its calculations within the sampling interval (1/fs), the DSP must be fast enough. Third, analog filters are still needed at the ADC input and the DAC output to limit and restore the signal spectrum (an anti-aliasing filter and an anti-imaging filter), although the performance requirements on them are low. With these assumptions in place, the digital and analog filters can be compared.



    Figure 2. Block diagram of a digital filter

The required cutoff frequency of both filters is 1 kHz. The analog filter is implemented as a sixth-order Chebyshev filter of the first kind (characterized by ripple in the passband gain and no ripple outside the passband). Its characteristics are presented in Figure 3. In practice this filter can be built from three second-order sections, each using an operational amplifier and several capacitors. With modern computer-aided filter design (CAD) tools, designing a sixth-order filter is fairly easy, but meeting the 0.5 dB flatness specification requires precise component selection.

The 129-coefficient digital FIR filter shown in Figure 3 has a passband flatness of only 0.002 dB, a linear phase response, and a much steeper roll-off. Such characteristics cannot be achieved in practice by analog methods. Another obvious advantage is that the digital filter requires no component selection and is not subject to parameter drift, since the filter's clock frequency is stabilized by a quartz resonator. A filter with 129 coefficients requires 129 multiply-accumulate (MAC) operations to compute one output sample. These calculations must be completed within one sampling interval, 1/fs, to ensure real-time operation. In this example the sampling rate is 10 kHz, so 100 μs is sufficient processing time unless significant additional computation is required. DSPs of the ADSP-21xx family can perform a complete multiply-accumulate (together with the other functions needed to implement the filter) in a single instruction cycle. A 129-coefficient filter therefore requires a speed of more than 129 instructions / 100 μs ≈ 1.3 million instructions per second (MIPS). Existing DSPs are far faster and are thus not the limiting factor for such applications: the 16-bit fixed-point ADSP-218x series delivers up to 75 MIPS. Listing 1 shows assembly code implementing the filter on DSP processors of the ADSP-21xx family. Note that the actual lines of executable code are marked with arrows; the rest are comments.
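The throughput arithmetic in this paragraph is easy to reproduce; a sketch (the one-MAC-per-instruction-cycle assumption follows the ADSP-21xx description above):

```python
taps = 129                       # FIR coefficients -> MAC operations per output sample
fs_hz = 10_000                   # sampling rate
interval_us = 1e6 / fs_hz        # time available per output sample, in microseconds
required_mips = taps * fs_hz / 1e6   # one MAC per instruction cycle assumed

print(interval_us, required_mips)    # 100.0 1.29
```

At 75 MIPS, the processor would spend under 2% of each sampling interval on the 129 MACs, leaving ample headroom for the rest of the filter code.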


    Figure 3. Analog and digital filters

    Of course, in practice there are many other factors considered when comparing analog and digital filters or analog and digital signal processing methods in general. Modern signal processing systems combine analog and digital methods to implement the desired function and take advantage of the best methods, both analog and digital.

    ASSEMBLY PROGRAM:
    FIR FILTER FOR ADSP-21XX (SINGLE PRECISION)

    .MODULE fir_sub;
    { FIR filter subroutine
      Calling parameters:
        I0 --> oldest data value in the delay line
        I4 --> beginning of the filter coefficient table
        L0 = filter length (N)
        L4 = filter length (N)
        M1, M5 = 1
        CNTR = filter length - 1 (N-1)
      Return values:
        MR1 = result of the summation (rounded and saturated)
        I0 --> oldest data value in the delay line
        I4 --> beginning of the filter coefficient table
      Altered registers: MX0, MY0, MR
      Run time: (N - 1) + 6 cycles = N + 5 cycles
      All coefficients are written in 1.15 format. }
    .ENTRY fir;
    fir:         MR=0, MX0=DM(I0,M1), MY0=PM(I4,M5);
                 CNTR = N-1;
                 DO convolution UNTIL CE;
    convolution: MR=MR+MX0*MY0(SS), MX0=DM(I0,M1), MY0=PM(I4,M5);
                 MR=MR+MX0*MY0(RND);
                 IF MV SAT MR;
                 RTS;
    .ENDMOD;

    REAL-TIME SIGNAL PROCESSING

    • Digital signal processing:
      • The bandwidth of the processed signal is limited by the ADC/DAC sampling rate
        • Remember the Nyquist criterion and Kotelnikov's theorem
      • The dynamic range is limited by the ADC/DAC resolution (number of bits)
      • DSP performance limits the amount of signal processing, because:
        • For real-time operation, all calculations performed by the signal processor must be completed within a sampling interval equal to 1/fs
    • Do not forget analog signal processing:
      • high-frequency/RF filtering, modulation, demodulation
      • analog anti-aliasing and reconstruction filters (usually low-pass) for the ADC and DAC
      • wherever common sense and implementation cost dictate


    Analog, discrete and digital signals

    One of the trends in the development of modern communication systems is the widespread use of discrete-analog and digital signal processing (DAO and DSP).

The analog signal Z'(t), used in radio engineering from the outset, can be represented as a continuous graph (Fig. 2.10a). Analog signals include AM, FM and PM signals, telemetry sensor signals, etc. Devices that process analog signals are called analog processing devices; they include frequency converters, various amplifiers, LC filters, etc.

Optimal reception of analog signals, as a rule, involves an optimal linear filtering algorithm, which is especially relevant when complex noise-like signals are used. However, it is precisely in this case that building a matched filter is most difficult. Matched filters based on multi-tap delay lines (magnetostrictive, quartz, etc.) suffer from large attenuation, large dimensions and delay instability. Filters based on surface acoustic waves (SAW) are promising, but the short duration of the signals they can process and the difficulty of adjusting their parameters limit their range of application.

In the 1940s, analog radio-electronic systems began to be replaced by devices for discrete processing of analog input processes. These devices provide discrete-analog processing (DAO) of signals and have great capabilities. Here the signal used is discrete in time and continuous in state. Such a signal Z'(kT) is a sequence of pulses with amplitudes equal to the values of the analog signal Z'(t) at the discrete instants t = kT, where k = 0, 1, 2, ... are integers. The transition from the continuous signal Z'(t) to the pulse sequence Z'(kT) is called time sampling.

    Figure 2.10 Analog, discrete and digital signals

    Figure 2.11 Analog Signal Sampling

An analog signal can be sampled in time by an "AND" coincidence gate (Fig. 2.11), with the analog signal Z'(t) applied to its input. The gate is controlled by the clock voltage UT(t), short pulses of duration τi that follow at intervals T >> τi.

The sampling interval T is chosen in accordance with Kotelnikov's theorem: T = 1/(2Fmax), where Fmax is the highest frequency in the spectrum of the analog signal. The frequency fd = 1/T is called the sampling frequency, and the set of signal values at 0, T, 2T, ... is called a pulse-amplitude-modulated (PAM) signal.



    Until the end of the 50s, PAM signals were used only for converting speech signals. For transmission over a radio-relay communication channel, the PAM signal is converted into a signal with pulse-phase modulation (PPM). In this case the amplitude of the pulses is constant, and the information about the speech message is contained in the deviation (phase) Δt of a pulse relative to some average position. By using short pulses of one signal and placing the pulses of other signals between them, multichannel communication is obtained (but no more than 60 channels).

    Currently, DAO is being intensively developed on the basis of bucket-brigade devices (BBD) and charge-coupled devices (CCD).

    In the early 70s, systems with pulse code modulation (PCM), which used signals in digital form, began to appear on communication networks in various countries and the USSR.

    The PCM process is the conversion of an analog signal into numbers and consists of three operations: time sampling at intervals T (Fig. 2.10b), level quantization (Fig. 2.10c) and coding (Fig. 2.10e). The time sampling operation was discussed above. The level quantization operation consists in replacing the sequence of pulses whose amplitudes correspond to the values of the analog signal at discrete moments of time by a sequence of pulses whose amplitudes can take only a limited number of fixed values. This operation leads to a quantization error (Fig. 2.10d).

    The quantized signal Z’kv(kT) is discrete both in time and in state. The possible values u0, u1, …, uN−1 of the signal Z’kv(kT) are known on the receiving side; therefore it is not the value uk taken by the signal in the interval T that is transmitted, but only its level number k. On the receiving side the value uk is restored from the received number k. In this case sequences of numbers in the binary number system, code words, are transmitted.



    The encoding process consists of converting the quantized signal Z’kv(kT) into a sequence of code words {x(kT)}. Fig. 2.10e shows the code words as a sequence of three-bit binary code combinations.
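The three PCM operations can be sketched as follows; a toy illustration in which the 3-bit word length, the signal range and the sample amplitudes are assumptions made for the example, not values from the text:

```python
# Sketch of the three PCM operations: sampling (the amplitudes below stand in
# for already-taken PAM samples), level quantization and coding.
BITS = 3
LEVELS = 2 ** BITS              # 8 fixed quantization levels
U_MIN, U_MAX = -1.0, 1.0        # assumed dynamic range of the analog signal
STEP = (U_MAX - U_MIN) / (LEVELS - 1)

def quantize(u):
    """Replace an exact sample by the number k of the nearest fixed level."""
    k = round((u - U_MIN) / STEP)
    return max(0, min(LEVELS - 1, k))

def encode(k):
    """Code word: the level number written as a BITS-digit binary combination."""
    return format(k, f"0{BITS}b")

samples = [0.0, 0.62, -0.95, 1.0]          # PAM sample amplitudes (assumed)
codewords = [encode(quantize(u)) for u in samples]
errors = [abs(U_MIN + quantize(u) * STEP - u) for u in samples]  # quantization error
print(codewords)    # ['100', '110', '000', '111']
```

Only the level numbers are transmitted; the quantization error stays within half a step.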

    The considered PCM operations are used in receivers with digital signal processing, and PCM is applied not only to analog signals but also to digital ones.

    Let us show the need for PCM when receiving digital signals over a radio channel. When an element of the digital signal xi(kT) (i = 0, 1), representing the n-th code element, is transmitted in the decameter range, the expected signal at the input of the receiver together with the additive noise ξ(t) can be represented in the form:

    z′i(t) = µxi(kT) + ξ(t),  (2.2)

    at 0 ≤ t ≤ TE,

    where μ is the channel transmission coefficient and TE is the duration of a signal element. From (2.2) it is clear that the noise at the receiver input turns the set of expected signals into an analog oscillation.

    Examples of digital circuits are logical elements, registers, flip-flops, counters, storage devices, etc. Based on the number of nodes on ICs and LSIs, RPUs with DSPs are divided into two groups:

    1. Analog-digital receivers, in which individual components are implemented on ICs: the frequency synthesizer, filters, demodulator, AGC, etc.

    2. Digital radio receivers (DRD), in which the signal is processed after an analog-to-digital converter (ADC).

    Fig. 2.12 shows the elements of the main (information) channel of a decameter-range digital radio receiver: the analog part of the receiving path, the ADC (consisting of a sampler, quantizer and encoder), the digital part of the receiving path, a digital-to-analog converter (DAC) and a low-pass filter (LPF). Double lines indicate the transmission of digital signals (codes), and single lines the transmission of analog and PAM signals.

    Figure 2.12 Elements of the main (information) channel of a decameter-range digital radio receiver

    The analog part of the receiving path performs preliminary frequency selectivity, significant amplification and frequency conversion of the signal Z’(t). The ADC converts the analog signal Z’(t) into the digital signal x(kT) (Fig. 2.10e).

    In the digital part of the receiving path, as a rule, additional frequency conversion, selectivity (the main selectivity, performed in a digital filter) and digital demodulation of analog and discrete messages (frequency, relative-phase and amplitude telegraphy) are performed. At its output we obtain the digital signal y(kT) (Fig. 2.10e). This signal, processed according to a given algorithm, goes from the output of the digital part of the receiving path to the DAC or to a computer storage device (when receiving data).

    In the series-connected DAC and low-pass filter, the digital signal y(kT) is converted first into the signal y(t), continuous in time and discrete in state, and then into yF(t), continuous both in time and in state (Fig. 2.10g, h).

    Of the many methods of digital signal processing in the digital part of the receiving path, the most important are digital filtering and demodulation. Let us consider the algorithms and structure of a digital filter (DF) and a digital demodulator (DD).

    A digital filter is a discrete system (a physical device or a computer program) in which the sequence of numerical samples {x(kT)} of the input signal is converted into the sequence {y(kT)} of the output signal.

    The main DF algorithms are: linear difference equation, discrete convolution equation, operator transfer function in the z-plane and frequency response.

    Equations that describe sequences of numbers (pulses) at the input and output of a digital filter (discrete system with a delay) are called linear difference equations.

    The linear difference equation of a recursive digital filter has the form:

    y(kT) = Σ(l=0…m) al·x[(k−l)T] + Σ(l=1…n) bl·y[(k−l)T],  (2.3)

    where x[(k-m)T] and y[(k-n)T] are the values ​​of the input and output sequences of numerical samples at times (k-m)T and (k-n)T, respectively; m and n – the number of delayed summed previous input and output numerical samples, respectively;

    a0, a1, …, am and b1, b2, …, bn are real weighting coefficients.

    In (2.3), the first sum is the linear difference equation of a non-recursive digital filter. The discrete convolution equation of the digital filter is obtained from the non-recursive linear difference equation by replacing al in it with h(lT):

    y(kT) = Σ(l=0…m) h(lT)·x[(k−l)T],  (2.4)

    where h(lT) is the impulse response of the digital filter, which is the response to a single pulse.
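Both basic algorithms can be sketched directly in code; a minimal illustration with arbitrary coefficients, where the recursion uses the convention y(k) = Σ a·x + Σ b·y (one of the two common sign conventions for the feedback terms):

```python
# Sketch of the two basic DF algorithms with arbitrary illustration
# coefficients: a recursive linear difference equation and the discrete
# convolution of a non-recursive filter.

def recursive_filter(x, a, b):
    """y(k) = sum_m a[m]*x[k-m] + sum_n b[n]*y[k-n], with b indexed from n = 1."""
    y = []
    for k in range(len(x)):
        acc = sum(a[m] * x[k - m] for m in range(len(a)) if k - m >= 0)
        acc += sum(b[n] * y[k - n] for n in range(1, len(b)) if k - n >= 0)
        y.append(acc)
    return y

def nonrecursive_filter(x, h):
    """Discrete convolution y(k) = sum_l h[l]*x[k-l]."""
    return [sum(h[l] * x[k - l] for l in range(len(h)) if k - l >= 0)
            for k in range(len(x))]

unit_pulse = [1.0, 0.0, 0.0, 0.0, 0.0]   # a single pulse probes the impulse response
print(recursive_filter(unit_pulse, a=[1.0], b=[0.0, 0.5]))   # [1.0, 0.5, 0.25, 0.125, 0.0625]
print(nonrecursive_filter(unit_pulse, h=[1.0, 0.5, 0.25]))   # [1.0, 0.5, 0.25, 0.0, 0.0]
```

Feeding a single pulse shows the difference: the non-recursive filter simply reproduces h(lT), while the recursive one has an infinitely long (here geometrically decaying) response.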

    The operator transfer function is the ratio of the Laplace transformed functions at the output and input of the digital filter:

    H(p) = Y(p)/X(p),  (2.5)

    This function is obtained directly from the difference equations using the discrete Laplace transform and the displacement theorem.

    By the discrete Laplace transform of, for example, the sequence {x(kT)}, we mean obtaining an L-image of the form

    X(p) = Σ(k=0…∞) x(kT)·e^−pkT,  (2.6)

    where p = σ + jω is the complex Laplace operator.

    The displacement (shift) theorem in relation to discrete functions can be formulated as follows: a shift of the independent variable of the original in time by ±mT corresponds to multiplication of the L-image by e^±pmT. For example, L{x[(k−m)T]} = e^−pmT·X(p).  (2.7)

    Taking into account the linearity properties of the discrete Laplace transform and the displacement theorem, the output sequence of numbers of the non-recursive digital function will take the form

    Y(p) = X(p)·Σ(l=0…m) al·e^−plT,  (2.8)

    Then the operator transfer function of the non-recursive digital filter:

    H(p) = Y(p)/X(p) = Σ(l=0…m) al·e^−plT,  (2.9)

    Figure 2.13

    Similarly, taking into account formula (2.3), we obtain the operator transfer function of the recursive digital filter:

    H(p)rec = [Σ(l=0…m) al·e^−plT] / [1 − Σ(l=1…n) bl·e^−plT],  (2.10)

    The formulas of the operator transfer functions have a complex form. Therefore great difficulties arise in studying the zeros and poles (the roots of the numerator polynomial and the roots of the denominator polynomial), which in the p-plane have a structure that is periodic in frequency.

    The analysis and synthesis of digital filters are simplified by applying the z-transformation, i.e. by moving to a new complex variable z related to p by z = e^pT or z^−1 = e^−pT. Here the complex plane p = σ + jω is mapped onto another complex plane z = x + jy; for this it is necessary that e^(σ+jω)T = x + jy. Fig. 2.13 shows the complex planes p and z.

    Substituting e^−pT = z^−1 into (2.9) and (2.10), we obtain the transfer functions in the z-plane for the non-recursive and recursive digital filters, respectively:

    H(z) = Σ(l=0…m) al·z^−l,  (2.11)

    H(z)rec = [Σ(l=0…m) al·z^−l] / [1 − Σ(l=1…n) bl·z^−l],  (2.12)

    The transfer function of a non-recursive digital filter has only zeros, so it is absolutely stable. A recursive digital filter will be stable if its poles are located inside the unit circle of the z-plane.
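The stability criterion just stated can be checked numerically; a sketch for a second-order link, assuming the denominator is written as 1 − b1·z^−1 − b2·z^−2 (so the poles are the roots of z² − b1·z − b2 = 0), with the coefficient values chosen only for the illustration:

```python
import cmath

# Sketch: stability of a second-order recursive DF checked by pole location.
# Assumed convention: denominator 1 - b1*z^-1 - b2*z^-2, so the poles are
# the roots of z^2 - b1*z - b2 = 0; the b values are illustration choices.

def poles_second_order(b1, b2):
    d = cmath.sqrt(b1 * b1 / 4.0 + b2)
    return (b1 / 2.0 + d, b1 / 2.0 - d)

def is_stable(b1, b2):
    """Stable when both poles lie strictly inside the unit circle of the z-plane."""
    return all(abs(p) < 1.0 for p in poles_second_order(b1, b2))

print(is_stable(0.0, -0.81))   # poles at +/- j0.9, inside the circle -> True
print(is_stable(0.0, -1.21))   # poles at +/- j1.1, outside -> False
```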

    The transfer function of the digital filter in the form of a polynomial in negative powers of the variable z makes it possible to draw up the block diagram of the filter directly from the form of H(z). The variable z^−1 is called the unit delay operator, and in block diagrams it is a delay element. The highest powers of the numerator and denominator of the transfer function H(z)rec therefore determine the number of delay elements in the non-recursive and recursive parts of the digital filter, respectively.

    The frequency response of the digital filter is obtained directly from its transfer function in the z-plane by replacing z with e^jλ (or z^−1 with e^−jλ) and carrying out the necessary transformations. The frequency response can therefore be written as:

    H(e^jλ) = C(λ)·e^jφ(λ),  (2.13)

    where C(λ) is the amplitude-frequency characteristic and φ(λ) is the phase-frequency characteristic of the digital filter; λ = 2πf′ is the digital frequency; f′ = f/fd is the relative frequency; f is the cyclic frequency.

    The characteristic H(e^jλ) is a periodic function of the digital frequency λ with period 2π (or unity in relative frequencies). Indeed, e^j(λ±2πn) = e^jλ·e^±j2πn = e^jλ, since by Euler’s formula e^j2πn = cos 2πn + j·sin 2πn = 1.

    Figure 2.14 Block diagram of an oscillatory circuit

    In radio engineering, in analog signal processing the simplest frequency filter is the LC oscillatory circuit. Let us show that in digital processing the simplest frequency filter is the second-order recursive link, whose transfer function in the z-plane is

    H(z) = a0 / (1 − b1·z^−1 − b2·z^−2),  (2.14)

    and whose block diagram has the form shown in Fig. 2.14. Here the operator z^−1 is a discrete element delaying the signal by one clock cycle of the digital filter, the lines with arrows denote multiplication by a0, b1 and b2, and the “+” block denotes the adder.

    To simplify the analysis, take a0 = 1 in expression (2.14); representing it in positive powers of z, we obtain

    H(z) = z² / (z² − b1·z − b2),  (2.15)

    The transfer function of a digital resonator, like an oscillatory LC circuit, depends only on the circuit parameters. The role of L, C, R is performed by coefficients b1 and b2.

    From (2.15) it is clear that the transfer function of the second-order recursive link has a zero of second multiplicity in the z-plane (at the point z = 0) and two poles

    z1,2 = b1/2 ± √(b1²/4 + b2).

    We obtain the equation for the frequency response of the second-order recursive link from (2.14) by replacing z^−1 with e^−jλ (with a0 = 1):

    H(e^jλ) = 1 / (1 − b1·e^−jλ − b2·e^−j2λ),  (2.16)

    The amplitude-frequency characteristic is equal to the modulus of (2.16). After elementary transformations, the frequency response of the second-order recursive link takes the form

    C(λ) = 1 / √[(1 − b1·cos λ − b2·cos 2λ)² + (b1·sin λ + b2·sin 2λ)²].  (2.18)

    Figure 2.15 Frequency-response graphs of the second-order recursive link

    Fig. 2.15 shows graphs plotted in accordance with (2.18) for b1 = 0. It is clear from the graphs that the second-order recursive link is a narrow-band selective system, i.e. a digital resonator. Only the working section of the resonator frequency range, f′ < 0.5, is shown; beyond it the characteristics repeat with period fd.

    Research shows that the resonant frequency f0 takes the following values:

    f0 = fd/4 at b1 = 0;

    f0 < fd/4 at b1 > 0;

    f0 > fd/4 at b1 < 0.

    The values of b1 and b2 change both the resonant frequency and the quality factor of the resonator. If b1 is chosen from the condition b1 = 2√(−b2)·cos(2πf0′), where √(−b2) = r is the radius of the poles, then b1 and b2 together will affect only the quality factor (f0′ = const). Tuning the resonator frequency can also be achieved by changing fd.
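A numerical sketch of the digital resonator: the magnitude of the frequency response is evaluated on a grid of relative frequencies and the peak is located. The values b1 = 0 and b2 = −0.81, the denominator convention 1 − b1·z^−1 − b2·z^−2 and the grid step are all assumptions for the illustration; with b1 = 0 the peak is expected at f′ = 0.25, i.e. at fd/4:

```python
import math

# Sketch of the second-order recursive link as a digital resonator,
# H(z) = 1/(1 - b1*z^-1 - b2*z^-2) with a0 = 1; b1, b2 assumed for illustration.

def freq_response_mag(b1, b2, f_rel):
    """C at relative frequency f' = f/fd (digital frequency lam = 2*pi*f')."""
    lam = 2 * math.pi * f_rel
    den = complex(1 - b1 * math.cos(lam) - b2 * math.cos(2 * lam),
                  b1 * math.sin(lam) + b2 * math.sin(2 * lam))
    return 1.0 / abs(den)

b1, b2 = 0.0, -0.81            # b1 = 0 -> resonance expected at f' = 0.25
grid = [k / 1000 for k in range(1, 500)]         # working section f' < 0.5
f0 = max(grid, key=lambda f: freq_response_mag(b1, b2, f))
print("resonant relative frequency ~", f0)
```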

    Digital demodulator

    A digital demodulator in the general theory of communications is considered as a computing device that processes a mixture of signal and noise.

    Let us define the DD algorithms for processing analog AM and FM signals at a high signal-to-noise ratio. To do this, let us present the complex envelope Z′(t) of the narrow-band analog mixture of signal and noise Z’(t) at the output of the analog part of the receiving path in exponential and algebraic form:

    Z′(t) = Z(t)·e^jψ(t) and Z′(t) = ZC(t) + jZS(t),  (2.20)

    where Z(t) = √[ZC²(t) + ZS²(t)] and ψ(t) = arctg[ZS(t)/ZC(t)] are the envelope and total phase of the mixture, and ZC(t) and ZS(t) are the quadrature components.

    From (2.20) it is clear that the signal envelope Z(t) contains complete information about the modulation law. Therefore the digital algorithm for processing an analog AM signal in the DD using the quadrature components XC(kT) and XS(kT) of the digital signal x(kT) has the form:

    Z(kT) = √[XC²(kT) + XS²(kT)].  (2.21)

    It is known that the frequency of a signal is the first derivative of its total phase, i.e.

    ω(t) = dψ(t)/dt.  (2.22)

    Then from (2.20) and (2.22) it follows that

    ω(t) = [ZC(t)·dZS(t)/dt − ZS(t)·dZC(t)/dt] / [ZC²(t) + ZS²(t)],  (2.23)

    Figure 2.16 Block diagram of the digital part of the receiving path

    Using the quadrature components XC(kT) and XS(kT) of the digital signal x(kT) in (2.23) and replacing the derivatives with first differences, we obtain the digital algorithm for processing an analog FM signal in the DD:

    ω(kT) = {XC(kT)·[XS(kT) − XS((k−1)T)] − XS(kT)·[XC(kT) − XC((k−1)T)]} / {T·[XC²(kT) + XS²(kT)]}.  (2.24)

    Fig. 2.16 shows a variant of the block diagram of the digital part of the receiving path for receiving analog AM and FM signals; it consists of a quadrature converter (QC) and a DD.

    In the QC, the quadrature components of the complex digital signal are formed by multiplying the signal x(kT) by the two sequences {cos(2πf1kT)} and {sin(2πf1kT)}, where f1 is the central frequency of the lowest-frequency image of the spectrum of the signal z′(t). At the outputs of the multipliers, digital low-pass filters (DLPFs) suppress the harmonics at frequency 2f1 and isolate the samples of the quadrature components. Here the DLPFs serve as the digital filter providing primary selectivity. The block diagram of the DD corresponds to algorithms (2.21) and (2.24).
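Algorithms (2.21) and (2.24) can be sketched on synthetic quadrature samples; the sampling interval, the test frequency and the constant envelope below are assumed values for the illustration:

```python
import math

# Sketch of the DD algorithms: (2.21) AM (envelope from quadrature components)
# and (2.24) FM (frequency from first differences of the full phase).
# Sampling interval, test frequency and envelope are assumed values.
T = 1e-3                          # sampling interval, s
F_TEST = 50.0                     # instantaneous frequency to recover, Hz
N = 100

xc = [2.0 * math.cos(2 * math.pi * F_TEST * k * T) for k in range(N)]
xs = [2.0 * math.sin(2 * math.pi * F_TEST * k * T) for k in range(N)]

# AM: Z(kT) = sqrt(Xc^2 + Xs^2)
envelope = [math.hypot(c, s) for c, s in zip(xc, xs)]

# FM: full phase, then the first difference divided by 2*pi*T gives hertz
phase = [math.atan2(s, c) for c, s in zip(xc, xs)]
freq = [((phase[k] - phase[k - 1]) % (2 * math.pi)) / (2 * math.pi * T)
        for k in range(1, N)]
print(round(envelope[0], 6), round(freq[0], 6))   # 2.0 50.0
```

The constant envelope and the test frequency are both recovered from the quadrature samples alone, which is exactly what the DD has available after the QC.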

    The considered digital signal processing algorithms can be implemented in hardware (using specialized computers on digital ICs, charge-coupled devices or surface-acoustic-wave devices) and in the form of computer programs.

    When implementing a signal processing algorithm in software, the computer performs arithmetic operations on the coefficients al, bl and variables x(kT), y(kT) stored in it.

    Previously, the disadvantages of computational methods were limited speed, specific errors, the need for re-selection, and high complexity and cost. At present these limitations are being successfully overcome.

    The advantages of digital signal processing devices over analog ones are: advanced algorithms involving training and adaptation, ease of control of the characteristics, high temporal and temperature stability of parameters, high accuracy, and the ability to process several signals simultaneously and independently.

    Simple and complex signals. Signal base

    The characteristics (parameters) of communication systems improved as the types of signals and their methods of reception and processing (separation) were mastered. Each time there was a need to competently distribute the limited frequency resource between operating radio stations. In parallel with this, the issue of reducing the emission bandwidth of signals was addressed. However, there were problems when receiving signals that could not be solved by simply distributing the frequency resource. Only the use of a statistical method of signal processing - correlation analysis - made it possible to solve these problems.

    Simple signals have a signal base

    BS = TS·∆FS ≈ 1,  (2.25)

    where TS is the signal duration; ∆FS – spectrum width of a simple signal.

    Communication systems that operate with simple signals are called narrowband. For complex (composite, noise-like) signals, additional modulation (keying) of frequency or phase takes place during the signal duration TS. Therefore, the following relation holds for the base of a complex signal:

    BSS = TS·∆FSS >> 1,  (2.26)

    where ∆FSS is the spectrum width of the complex signal.

    It is sometimes said that for simple signals ∆FS = 1/ TS is the spectrum of the message. For complex signals, the signal spectrum expands by ∆FSS / ∆FS times. This results in redundancy in the signal spectrum, which determines the useful properties of complex signals. If in a communication system with complex signals the information transmission rate is increased to obtain the duration of the complex signal TS = 1/ ∆FSS, then a simple signal and a narrowband communication system are formed again. The useful properties of the communication system disappear.
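The base relations (2.25) and (2.26) can be illustrated numerically; a trivial sketch in which the symbol duration and the chip count of the spreading sequence are assumed values:

```python
# Numerical sketch of (2.25) and (2.26): spreading each data symbol with an
# N-chip code widens the spectrum N times and raises the base B = T * dF
# from ~1 to ~N. Symbol duration and chip count are assumed values.
T_S = 1e-3                    # duration of one data symbol, s
dF_simple = 1.0 / T_S         # simple signal: spectrum width ~ 1/T
N_CHIPS = 127                 # chips of the spreading sequence per symbol
dF_complex = N_CHIPS / T_S    # chips are N times shorter -> N times wider band

B_simple = T_S * dF_simple
B_complex = T_S * dF_complex
print(round(B_simple), round(B_complex))   # 1 127
```

Shortening TS until one symbol spans a single chip would collapse the base back to ~1, which is the point made above: the redundancy, and with it the useful properties, would disappear.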

    Methods for expanding the signal spectrum

    The discrete and digital signals discussed above are time division signals.

    Let's get acquainted with broadband digital signals and methods of multiple access with code division (in form) of channels.

    Broadband signals were initially used in military and satellite communications because of their useful properties: high immunity to interference and secrecy. A communication system with broadband signals can operate when energy-based interception of the signal is impossible, and even when the signal is received, eavesdropping is impossible without a replica of the signal and special equipment.

    Shannon proposed using segments of white thermal noise as a carrier of information, together with the method of broadband transmission. He introduced the concept of communication channel capacity and showed the connection between the possibility of error-free transmission of information at a given signal-to-noise ratio and the frequency band occupied by the signal.

    The first communication system with complex signals from segments of white thermal noise was proposed by Costas. In the Soviet Union, the use of broadband signals when the code division multiple access method is implemented was proposed by L. E. Varakin.

    For the time-domain representation of any variant of a complex signal one can write the relation

    Zi(t) = Ui(t)·cos[ω0t + φi(t)], 0 ≤ t ≤ TS,  (2.27)

    where Ui(t) and φi(t) are the envelope and initial phase, slowly changing functions compared with cos ω0t, and ω0 is the carrier frequency.

    When the signal is represented in the frequency domain, its generalized spectral form is

    Zi(t) = Σ(k) Aki·φk(t),  (2.28)

    where φk(t) are the coordinate functions and Aki are the expansion coefficients.

    The coordinate functions must satisfy the orthogonality condition

    ∫(0…TS) φk(t)·φl(t)·dt = 0 for k ≠ l,  (2.29)

    and the expansion coefficients are

    Aki = [∫(0…TS) Zi(t)·φk(t)·dt] / [∫(0…TS) φk²(t)·dt].  (2.30)

    For parallel complex signals, trigonometric functions of multiple frequencies were first used as the coordinate functions:

    φk(t): cos kω0t, sin kω0t, k = 1, 2, …,  (2.31)

    when each i-th variant of a complex signal has the form

    Zi(t) = Σ(k=ki1…ki2) [αki·cos kω0t + βki·sin kω0t].  (2.32)

    Then, setting

    Aki = √(αki² + βki²) and φki = −arctg(βki/αki),  (2.33)

    where αki and βki are the coefficients of the trigonometric Fourier-series expansion of the i-th signal, i = 1, 2, 3, …, m, and m is the code base, we get

    Zi(t) = Σ(k=ki1…ki2) Aki·cos(kω0t + φki).  (2.34)

    Here the signal components occupy the frequencies from ki1·ω0/2π = ki1/TS to ki2·ω0/2π = ki2/TS, where ki1 and ki2 are the numbers of the smallest and largest harmonic components that significantly affect the formation of the i-th signal variant, and Ni = ki2 − ki1 + 1 is the number of harmonic components of the complex i-th signal.

    Frequency band occupied by the signal

    ∆FSS = (ki2 - ki1 + 1)ω 0 / 2π = (ki2 - ki1 + 1)/ TS . (2.35)

    The main part of the signal’s energy spectrum is concentrated in it.

    From relation (2.35) it follows that the base of this signal

    BSS = TS ∙ ∆FSS = (ki2 - ki1 + 1) = Ni , (2.36)

    is equal to the number Ni of harmonic components that form the i-th signal variant.

    Figure 2.17


    Figure 2.18 Signal spread spectrum diagram with periodic sequence plot

    Since 1996-1997, Qualcomm has used for commercial purposes a subset {φk(t)} of the complete system of Walsh functions, orthogonal on the signal interval, to generate parallel complex signals on the basis of (2.28). In this case the code division multiple access method is implemented: the CDMA (Code Division Multiple Access) standard.
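The orthogonality of Walsh functions, on which code division rests, can be sketched with a Sylvester-type Hadamard construction; the order 8 is an arbitrary choice for the example:

```python
# Sketch: Walsh functions as rows of a Sylvester-type Hadamard matrix are
# mutually orthogonal on the symbol interval, the property CDMA uses for
# code division. Order 8 is an arbitrary choice for the example.

def hadamard(n):
    """Sylvester construction H(2m) = [[H, H], [H, -H]]; n a power of two."""
    h = [[1]]
    while len(h) < n:
        h = ([row + row for row in h] +
             [row + [-v for v in row] for row in h])
    return h

walsh = hadamard(8)

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Distinct rows are orthogonal; each row's energy equals its length
print(dot(walsh[1], walsh[2]), dot(walsh[3], walsh[3]))   # 0 8
```

Because every pair of distinct rows sums to zero, users spread with different rows can share the same band and still be separated by a correlator.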

    Figure 2.19 Correlation receiver circuit

    Useful properties of wideband (composite) signals

    Figure 2.20

    When communicating with mobile stations (MS), multipath signal propagation occurs. Signal interference is therefore possible, which leads to deep dips (signal fading) in the spatial distribution of the electromagnetic field. Thus, in urban conditions, when there is no direct line of sight, only signals reflected from high-rise buildings, hills, etc. may be present at the receiving point. Therefore two signals with a frequency of 937.5 MHz (λ = 32 cm), arriving with a time shift of 0.5 ns, i.e. with a path difference of 16 cm, add in antiphase.

    The signal level at the receiver input also changes from vehicles passing by the station.

    Narrowband communication systems cannot operate in multipath conditions. Thus, if at the input of such a system there are three beams Si1(t), Si2(t), Si3(t) of the signal of one sending Si(t), which overlap in time because of the difference in path lengths, it is impossible to separate them (Yi1(t), Yi2(t), Yi3(t)) at the output of the bandpass filter.

    Communication systems with complex signals cope with the multipath nature of radio-wave propagation. Thus, by choosing the band ∆FSS so that the duration of the compressed pulse at the output of the correlation detector or matched filter is less than the delay between neighboring beams, one beam can be received; or, by providing appropriate delays {Gi(t)} of the pulses, their energy can be added, which increases the signal-to-noise ratio. The American Rake communication system, like a rake, collected the received rays reflected from the Moon and summed them.
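The beam-separation argument can be sketched as a correlation receiver; the 31-chip random code, the beam delays and the gains below are all assumptions made for the illustration:

```python
import random

# Sketch of a correlation receiver resolving two multipath beams of the same
# complex signal. The 31-chip random code, beam delays and gains are all
# assumptions for illustration.
random.seed(1)
code = [random.choice((-1.0, 1.0)) for _ in range(31)]   # local chip-code replica

# Received signal: two beams of the same sending, delayed by 0 and 40 chips
N = 120
rx = [0.0] * N
for delay, gain in ((0, 1.0), (40, 0.6)):
    for i, c in enumerate(code):
        rx[delay + i] += gain * c

# Correlate the local replica with the input at every trial delay
corr = [sum(code[i] * rx[d + i] for i in range(len(code)))
        for d in range(N - len(code))]
print(round(corr[0], 1), round(corr[40], 1))   # full peaks at the beam delays: 31.0 18.6
```

The correlator compresses each beam into a peak at its own delay, so the beams can be taken separately or, Rake-style, re-aligned and summed.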

    The principle of signal accumulation can significantly improve noise immunity and other signal properties. The idea of ​​signal accumulation is given by simple signal repetition.

    The first element for this purpose was a frequency-selective system (filter).

    Correlation analysis allows you to determine the statistical relationship (dependence) between the received signal and the reference signal located on the receiving side. The concept of a correlation function was introduced by Taylor in 1920. The correlation function is a second-order statistical mean in time, or a spectral mean, or a probabilistic mean.

    If the time functions (continuous sequences) x(t) and y(t) have arithmetic means, the correlation function of their centered values characterizes the statistical relationship between them.


    The periodic function has the form:

    f(t) = f(t+kT), (2.40)

    where T is the period and k is any integer (k = ±1, ±2, …). Periodicity extends along the entire time axis (−∞ < t < +∞), and any time interval of length T gives a complete description of the signal.

    Figure 2.10, a, b, c shows a periodic harmonic signal u1(t) and its spectrum of amplitudes and phases.

    Figure 2.11, a, b, c shows graphs of the periodic signal u2(t) - a sequence of rectangular pulses and its spectrum of amplitudes and phases.

    So, any signal can be represented by a Fourier series over a certain time interval. Let us then express the separation of signals through the signal parameters, i.e. through amplitudes, frequencies and phase shifts:

    a) signals whose series have arbitrary amplitudes, non-overlapping frequencies and arbitrary phases are separated by frequency;

    b) signals whose series have arbitrary amplitudes and overlap in frequency, but whose corresponding components are shifted in phase, are separated by phase (the phase shift here is proportional to frequency);

    c) signals whose series have arbitrary amplitudes, components overlapping in frequency (the frequencies may coincide) and arbitrary phases are separated by shape.

    The high capacity of communication systems with composite signals will be illustrated below.

    Shape separation is code separation, when the transmitting and receiving sides possess complex signals (patterns) specially created from simple signals.

    When receiving a complex signal, it is first subjected to correlation processing, and then the simple signal is processed.

    Frequency resource division with multiple access

    Currently, signals can be transmitted over any medium (over the air, in a wire, in a fiber-optic cable, etc.). To use the frequency spectrum and the transmission lines more efficiently, group channels are formed for transmitting many signals over one communication line. On the receiving side the reverse process, channel separation, takes place. Let us look at the methods used to separate channels:

    Figure 2.21 Frequency Division Multiple Access FDMA

    Figure 2.22 Time Division Multiple Access TDMA.

    Figure 2.23 Code Division Multiple Access CDMA

    Encryption in Wi-Fi networks

    Encryption of data in wireless networks receives so much attention due to the very nature of such networks. Data is transmitted wirelessly using radio waves, generally using omnidirectional antennas. Thus, everyone hears the data - not only the person for whom it is intended, but also the neighbor living behind the wall or the “interested person” staying with a laptop under the window. Of course, the distances over which wireless networks operate (without amplifiers or directional antennas) are small - about 100 meters in ideal conditions. Walls, trees and other obstacles greatly dampen the signal, but this still does not solve the problem.

    Initially, only the SSID (network name) was used for protection. But, generally speaking, this method can be called protection with a big stretch - the SSID is transmitted in clear text and no one is stopping an attacker from eavesdropping on it, and then substituting the desired one in his settings. Not to mention that (this applies to access points) the broadcast mode for the SSID can be enabled, i.e. it will be force-broadcast to everyone listening.

    Therefore, there was a need for data encryption. The first such standard was WEP – Wired Equivalent Privacy. Encryption is carried out using a 40 or 104-bit key (stream encryption using the RC4 algorithm on a static key). And the key itself is a set of ASCII characters with a length of 5 (for a 40-bit) or 13 (for a 104-bit key) characters. The set of these characters is translated into a sequence of hexadecimal digits, which are the key. Drivers from many manufacturers allow you to directly enter hexadecimal values ​​(of the same length) instead of a set of ASCII characters. Please note that the algorithms for converting from ASCII character sequences to hexadecimal key values ​​may vary between different manufacturers. Therefore, if your network uses heterogeneous wireless equipment and you are unable to configure WEP encryption using an ASCII key phrase, try entering the key in hexadecimal format instead.

    But what about manufacturers’ statements about support for 64- and 128-bit encryption, you ask? That's right, marketing plays a role here: 64 is more than 40, and 128 is more than 104. In reality, data encryption occurs using a key length of 40 or 104 bits. But in addition to the ASCII phrase (the static component of the key) there is also such a thing as the Initialization Vector (IV). It serves to randomize the rest of the key. The vector is selected randomly and changes dynamically during operation. In principle this is a reasonable solution, since it allows a random component to be introduced into the key. The vector length is 24 bits, so the total key length ends up being 64 (40+24) or 128 (104+24) bits.
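How the per-packet seed is formed from the IV and the static key can be sketched as follows; the key text and IV bytes are made-up example values, and the RC4 routine is the standard textbook algorithm, shown only to illustrate that an identical IV yields an identical keystream:

```python
# Sketch of WEP per-packet keying: the 24-bit IV is prepended to the static
# 40-bit key (5 ASCII characters) and RC4 is seeded with the result.
# Key and IV values below are made-up example values.

def rc4_keystream(key, n):
    """Standard RC4: key-scheduling (KSA), then n bytes of keystream (PRGA)."""
    s = list(range(256))
    j = 0
    for i in range(256):                      # KSA
        j = (j + s[i] + key[i % len(key)]) % 256
        s[i], s[j] = s[j], s[i]
    out, i, j = [], 0, 0
    for _ in range(n):                        # PRGA
        i = (i + 1) % 256
        j = (j + s[i]) % 256
        s[i], s[j] = s[j], s[i]
        out.append(s[(s[i] + s[j]) % 256])
    return out

static_key = list(b"wepky")                   # 5 ASCII chars = 40-bit static key
iv = [0x01, 0x02, 0x03]                       # 24-bit IV, sent in clear with the frame
keystream = rc4_keystream(iv + static_key, 8)
# The same IV with the same static key reproduces the identical keystream,
# which is exactly the repetition an eavesdropper waits for.
assert keystream == rc4_keystream(iv + static_key, 8)
```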

    Everything would be fine, but the encryption algorithm used (RC4) is not particularly strong these days: given the desire, the key can be found by brute force in a relatively short time. Still, the main vulnerability of WEP is associated precisely with the initialization vector. The IV is only 24 bits long, which gives approximately 16 million combinations, i.e. 16 million different vectors. Although the figure of 16 million sounds quite impressive, everything in the world is relative. In real operation, all possible vector values will be used up within a period from ten minutes to several hours (for a 40-bit key), after which the vectors begin to repeat. An attacker only needs to collect a sufficient number of packets by simply listening to the wireless network traffic and find these repetitions. After that, recovering the static part of the key is a matter of technique.