• A method for modeling a communication channel. Information characteristics of discrete communication channels

    It is useful to recall that within a discrete channel there is always a continuous channel. The modem converts a continuous channel into a discrete one. Therefore, in principle, it is possible to derive a mathematical model of a discrete channel from models of a continuous channel for a given modem. This approach is often fruitful, but it leads to complex models.

    Let us consider simple models of a discrete channel whose construction does not take the properties of the continuous channel and the modem into account. It should, however, be remembered that when designing a communication system, the discrete channel model can be varied within a fairly wide range for a given continuous channel model by changing the modem.

    The model of a discrete channel contains a specification of the set of possible signals at its input and a distribution of the conditional probabilities of the output signal for a given input. Here the input and output signals are sequences of code symbols. Therefore, to define the possible input signals it is enough to indicate the number m of different symbols (the code base) and the duration T of transmission of each symbol. We will assume that T is the same for all symbols, as is the case in most modern channels.

    The value v = 1/T determines the number of symbols transmitted per unit time. As stated in Chap. 1, it is called the technical rate and is measured in baud. Each symbol arriving at the input of the channel causes the appearance of one symbol at the output, so the technical rate at the input and output of the channel is the same.

    In the general case, for any n one must indicate the probability that, when any given sequence of n code symbols is applied to the input of the channel, some realization of a random n-symbol sequence will appear at the output. The code symbols will be denoted by the numbers 0 to m − 1, which allows arithmetic operations to be performed on them. Moreover, all n-sequences (vectors), of which there are m^n, form an n-dimensional finite vector space if "addition" is understood as bitwise summation modulo m and multiplication by a scalar is defined similarly. For the special case m = 2 such a space was considered in Chap. 2.

    Let us introduce another useful definition. We will call the bitwise difference (modulo m, of course) between the received and transmitted vectors the error vector. This means that the passage of a discrete signal through the channel can be considered as the addition of the input vector and the error vector. The error vector plays approximately the same role in a discrete channel as interference does in a continuous channel. Thus, for any discrete channel model we can write, using addition in the vector space (bitwise, modulo m):

    B = A + E,

    where A and B are the random sequences of symbols at the input and output of the channel, and E is a random error vector, which in general depends on A. Different models differ in the probability distribution of the vector E. The meaning of the error vector is especially simple in the case of binary channels, when its components take the values 0 and 1. Every one in the error vector means that the symbol at the corresponding place in the transmitted sequence was received erroneously, and every zero means error-free reception of that symbol. The number of non-zero symbols in an error vector is called its weight. Figuratively speaking, the modem that makes the transition from a continuous channel to a discrete one converts the interference and distortion of the continuous channel into a stream of errors. Let us list the most important and fairly simple models of discrete channels.
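    The error-vector relation described above (output equals input plus error vector, bitwise modulo m) can be checked numerically. A minimal Python sketch; the sequences A and B are arbitrary illustrative values, and m = 2 gives the binary case:

```python
# Sketch: the received vector B is the mod-m sum of the transmitted
# vector A and the error vector E, so E = (B - A) mod m.
m = 2                      # code base (binary channel)
A = [1, 0, 1, 1, 0, 1]     # transmitted sequence (illustrative)
B = [1, 1, 1, 0, 0, 1]     # received sequence (illustrative)

# error vector: bitwise difference modulo m
E = [(b - a) % m for a, b in zip(A, B)]
weight = sum(1 for e in E if e != 0)   # number of non-zero symbols

print(E)        # a 1 marks an erroneous position, a 0 an error-free one
print(weight)   # weight of the error vector

# the channel action can be reproduced as A "plus" E (bitwise, mod m)
assert [(a + e) % m for a, e in zip(A, E)] == B
```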

    A permanent symmetric channel without memory is defined as a discrete channel in which each transmitted code symbol can be received erroneously with a fixed probability p and correctly with probability 1 − p, and in the event of an error any of the other m − 1 symbols may be received instead of the transmitted one, with equal probability. Thus, the probability that symbol b_j is received if a_i was transmitted is

    p(b_j | a_i) = 1 − p for j = i, and p(b_j | a_i) = p/(m − 1) for j ≠ i.

    The term "memoryless" means that the probability of erroneous reception of a symbol does not depend on the previous history, i.e., on which symbols were transmitted before it and how they were received. In what follows, for brevity, instead of "probability of erroneous reception of a symbol" we will say "probability of error".

    Obviously, the probability of any n-dimensional error vector of weight l in such a channel is

    p(E) = [p/(m − 1)]^l (1 − p)^(n − l),

    where l is the number of non-zero symbols in the error vector (its weight). The probability that l errors occur, arbitrarily located over a sequence of length n, is given by Bernoulli's formula:

    P(l) = C(n, l) p^l (1 − p)^(n − l),

    where C(n, l) = n!/[l!(n − l)!] is the binomial coefficient, equal to the number of different arrangements of l errors in a block of length n.
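    Bernoulli's formula and the binomial coefficient can be evaluated directly; a short Python sketch (the values of n and p are illustrative):

```python
from math import comb

# Probability of exactly l errors in a block of length n for a memoryless
# symmetric channel with symbol error probability p (Bernoulli's formula).
def p_errors(n, l, p):
    return comb(n, l) * p**l * (1 - p)**(n - l)

n, p = 7, 0.01
dist = [p_errors(n, l, p) for l in range(n + 1)]
print(dist[0])       # probability of error-free reception, (1-p)^n
print(sum(dist))     # the distribution sums to 1
print(1 - dist[0])   # probability of at least one error, 1 - (1-p)^n
```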

    This model is also called the binomial channel. It satisfactorily describes the channel that arises with a certain choice of modem if there is no fading in the continuous channel and the additive noise is white (or at least quasi-white). It is easy to see that the probability of l errors in a binary code combination of length n (errors of multiplicity l) follows from model (4.53) when m = 2.

    The transition probabilities in a binary symmetric channel are shown schematically as a graph in Fig. 4.3.

    The permanent symmetric channel without memory with erasure differs from the previous one in that the alphabet at the output of the channel contains an additional symbol, often denoted by "?". This symbol appears when the decision circuit (demodulator) cannot reliably identify the transmitted symbol. The probability of such a refusal to decide, i.e., of erasing a symbol, is constant in this model and does not depend on the transmitted symbol. By introducing erasure it is possible to significantly reduce the probability of error; sometimes it is even taken to be zero. Fig. 4.4 schematically shows the transition probabilities in such a model.

    An asymmetric channel without memory is characterized, like the previous models, by the fact that errors occur in it independently of one another, but the error probabilities depend on which symbol is transmitted. Thus, in a binary asymmetric channel the probability of receiving symbol 1 when

    Fig. 4.3. Transition probabilities in a binary symmetric channel

    Fig. 4.4. Transition probabilities in a binary symmetric channel with erasure

    Fig. 4.5. Transition probabilities in a binary asymmetric channel

    transmitting the symbol 0 is not equal to the probability of receiving 0 when transmitting 1 (Fig. 4.5). In this model, the probability of the error vector depends on what sequence of symbols is transmitted.

    Models of discrete channels. A discrete channel is a set of means intended for transmitting discrete signals. Such channels are widely used, for example, in data transmission, telegraphy, and radar.

    Discrete messages, consisting of a sequence of characters from the alphabet of the message source (the primary alphabet), are converted in the encoder into a sequence of symbols. The volume m of the symbol alphabet (the secondary alphabet) is, as a rule, smaller than the volume l of the alphabet of message characters, but they may coincide.

    The material embodiment of a symbol is an elementary signal obtained in the process of manipulation, a discrete change of a certain parameter of the information carrier. The elementary signals are generated taking into account the physical limitations imposed by the particular communication line. As a result of manipulation, each sequence of symbols is associated with a complex signal. The set of complex signals is finite. They differ in the number, composition and relative arrangement of elementary signals.

    The terms “elementary signal” and “symbol”, as well as “complex signal” and “sequence of symbols”, will be used as synonyms in the following.

    The information model of a channel with noise is specified by a set of symbols at its input and output and a description of the probabilistic properties of the transmission of individual symbols. In general, a channel can have many states and transition from one state to another both over time and depending on the sequence of transmitted symbols.

    In each state the channel is characterized by a matrix of conditional probabilities p(ν_j / u_i) that the transmitted symbol u_i will be perceived at the output as the symbol ν_j. The probability values in real channels depend on many factors: the properties of the signals that are the physical carriers of the symbols (energy, type of modulation, etc.), the nature and intensity of the interference affecting the channel, and the method of detecting the signal on the receiving side.

    If the transition probabilities of the channel depend on time, which is typical of almost all real channels, it is called a non-stationary communication channel. If this dependence is insignificant, a model in the form of a stationary channel is used, whose transition probabilities do not depend on time. A non-stationary channel can be represented by a number of stationary channels corresponding to different time intervals.

    A channel is said to have "memory" (aftereffect) if the transition probabilities in a given channel state depend on its previous states. If the transition probabilities are constant, i.e., the channel has only one state, it is called a stationary channel without memory. By a k-ary channel we mean a communication channel in which the number of different symbols at the input and output is the same and equal to k.

    A stationary discrete binary channel without memory is uniquely determined by four conditional probabilities: p(0/0), p(1/0), p(0/1), p(1/1). This channel model is usually depicted as the graph shown in Fig. 4.2, where p(0/0) and p(1/1) are the probabilities of undistorted transmission of the symbols, and p(0/1) and p(1/0) are the probabilities of distortion (transformation) of the symbols 0 and 1, respectively.

    If the probabilities of symbol distortion can be assumed equal, i.e., p(0/1) = p(1/0), then such a channel is called a binary symmetric channel [when p(0/1) ≠ p(1/0) the channel is called asymmetric]. The symbols at its output are received correctly with probability p and incorrectly with probability 1 − p = q. The mathematical model is thereby simplified.

    It was this channel that was studied most intensively, not so much because of its practical significance (many real channels are described by it only very approximately) as because of the simplicity of its mathematical description.

    The most important results obtained for the binary symmetric channel are extended to wider classes of channels.

    One more channel model should be noted, which has recently become increasingly important: the discrete erasure channel. It is characterized by the fact that the alphabet of output symbols differs from the alphabet of input symbols. At the input, as before, the symbols are 0 and 1, while at the output the channel also records states in which the signal can be assigned with equal grounds to either a one or a zero. In place of such a symbol neither a zero nor a one is placed: the state is marked by an additional erasure symbol S. When decoding, it is much easier to correct such symbols than erroneously identified ones.

    Fig. 4.3 shows models of the erasure channel in the absence (Fig. 4.3, a) and in the presence (Fig. 4.3, b) of symbol transformation.
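    A minimal simulation sketch of an erasure channel, assuming the idealized case in which unerased symbols are always received correctly; the erasure probability p_s and the test sequence are illustrative choices:

```python
import random

# Binary erasure channel sketch: input symbols 0/1, outputs 0, 1 or the
# erasure symbol 'S'; with probability p_s the demodulator refuses to decide.
def erasure_channel(bits, p_s, rng=random.Random(1)):
    return ['S' if rng.random() < p_s else b for b in bits]

tx = [0, 1, 1, 0, 1, 0, 0, 1] * 1000
rx = erasure_channel(tx, p_s=0.1)
erased = rx.count('S') / len(rx)
print(round(erased, 2))     # observed erasure rate, close to p_s = 0.1
# in this idealized model, unerased positions are received without error
assert all(r == t for r, t in zip(rx, tx) if r != 'S')
```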

    The speed of information transmission over a discrete channel. When characterizing a discrete communication channel, two concepts of transmission speed are used: technical and information.

    The technical transmission rate V_T, also called the manipulation rate, is the number of elementary signals (symbols) transmitted over the channel per unit time. It depends on the properties of the communication line and on the speed of the channel equipment.

    Taking into account possible differences in symbol durations, the rate is

    V_T = 1/τ_av,

    where τ_av is the average symbol duration.

    If all transmitted symbols have the same duration τ, then τ_av = τ.

    The unit of measurement of the technical rate is the baud: the rate at which one symbol is transmitted per second.

    The information rate, or rate of information transmission, is determined by the average amount of information transmitted over the channel per unit time. It depends both on the characteristics of the given communication channel, such as the volume of the symbol alphabet used, the technical rate of symbol transmission and the statistical properties of the interference in the line, and on the probabilities of the symbols arriving at the input and their statistical interdependence.

    For a known manipulation rate V_T, the information transmission rate over the channel Ī(V,U) is given by the relation

    Ī(V,U) = V_T I(V,U),

    where I(V,U) is the average amount of information carried by one symbol.

    Throughput of a discrete channel without interference. It is important for theory and practice to find out to what extent and by what means the rate of information transmission over a specific communication channel can be increased. The maximum capability of a channel for transmitting information is characterized by its throughput.

    The channel capacity C_d is the maximum rate of information transmission over the given channel that can be achieved with the most advanced methods of transmission and reception:

    C_d = max Ī(V,U).

    For a given alphabet of symbols and fixed main characteristics of the channel (for example, frequency band, average and peak transmitter power), the remaining characteristics must be chosen so as to ensure the highest rate of transmission of elementary signals through it, i.e., the maximum value of V_T. The maximum average amount of information per symbol of the received signal, I(V,U), is determined over the set of probability distributions of the input symbols p(u).

    The channel capacity, as well as the speed of information transmission over the channel, is measured by the number of binary units of information per second (binary units/s).

    Since in the absence of interference there is a one-to-one correspondence between the set of symbols (ν) at the channel output and (u) at its input, I(V,U) = I(U,V) = H(U). The maximum possible amount of information per symbol is log m, where m is the volume of the symbol alphabet; hence the throughput of a discrete channel without interference is

    C_d = V_T log m.
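    For instance, the noiseless-channel throughput V_T log₂ m can be computed directly; the rates below are illustrative:

```python
from math import log2

# Throughput of a noiseless discrete channel: C = V_T * log2(m) bits/s,
# where V_T is the technical (symbol) rate and m the alphabet volume.
def capacity_noiseless(v_t, m):
    return v_t * log2(m)

print(capacity_noiseless(1200, 2))   # binary channel at 1200 baud
print(capacity_noiseless(1200, 4))   # quaternary symbols double the capacity
```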

    Consequently, to increase the rate of information transmission over a discrete channel without interference and bring it closer to the channel capacity, the sequence of message letters must undergo a transformation in the encoder such that the different symbols in its output sequence appear as nearly equiprobable as possible and there are no statistical connections between them. It has been proven (see § 5.4) that this is feasible for any ergodic sequence of letters if encoding is carried out in blocks of such length that the theorem on their asymptotic equiprobability holds.

    Expanding the volume of the symbol alphabet increases the channel capacity (Fig. 4.4), but the complexity of the technical implementation also grows.

    Capacity of a discrete channel with noise. In the presence of interference the correspondence between the sets of symbols at the input and output of the communication channel is no longer one-to-one. The average amount of information I(V,U) transmitted over the channel by one symbol is determined in this case by the relation

    I(V,U) = H(V) − H_U(V). (4.17)

    If there are no statistical connections between the symbols, the entropy of the signal at the output of the communication line is

    H(V) = −Σ_j p(ν_j) log p(ν_j). (4.18)

    If there is a statistical relationship, the entropy is determined using Markov chains. Since the algorithm of such a determination is clear and there is no need to complicate the presentation with cumbersome formulas, we confine ourselves here to the case of no connections.

    The a posteriori entropy H_U(V) characterizes the decrease in the amount of transmitted information due to errors. It depends both on the statistical properties of the symbol sequences arriving at the input of the communication channel and on the set of transition probabilities reflecting the harmful effect of the interference.

    If the volume of the alphabet of input symbols u is m₁ and that of the output symbols ν is m₂, then

    H_U(V) = −Σ_i p(u_i) Σ_j p(ν_j / u_i) log p(ν_j / u_i). (4.19)

    Substituting expressions (4.18) and (4.19) into (4.17) and carrying out simple transformations, we obtain

    I(V,U) = −Σ_j p(ν_j) log p(ν_j) + Σ_i p(u_i) Σ_j p(ν_j / u_i) log p(ν_j / u_i).

    The rate of information transmission over a noisy channel is

    Ī(V,U) = V_T [H(V) − H_U(V)].

    Considering the manipulation rate V_T to be the maximum permissible for the given technical characteristics of the channel, the value of I(V,U) can be maximized by changing the statistical properties of the symbol sequences at the channel input by means of a converter (channel encoder). The resulting limiting value C_D of the rate of information transmission over the channel is called the throughput of a discrete communication channel with interference:

    C_D = max over p(u) of V_T [H(V) − H_U(V)],

    where p(u) is the set of possible probability distributions of input signals.
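    For a fixed input distribution p(u) and transition matrix, the quantity I(V,U) = H(V) − H_U(V) under the maximization can be computed directly; a Python sketch with hypothetical numbers:

```python
from math import log2

# I(V,U) = H(V) - H_U(V) from an input distribution p_u and a transition
# matrix P[i][j] = p(v_j / u_i); the numeric values are hypothetical.
def entropy(dist):
    return -sum(p * log2(p) for p in dist if p > 0)

def mutual_information(p_u, P):
    n_out = len(P[0])
    # output distribution p(v_j) = sum_i p(u_i) * p(v_j / u_i)
    p_v = [sum(p_u[i] * P[i][j] for i in range(len(p_u))) for j in range(n_out)]
    h_v = entropy(p_v)                                            # H(V)
    h_u_v = sum(p_u[i] * entropy(P[i]) for i in range(len(p_u)))  # H_U(V)
    return h_v - h_u_v

p_u = [0.5, 0.5]
P = [[0.9, 0.1],
     [0.1, 0.9]]                   # binary symmetric channel with p = 0.1
print(mutual_information(p_u, P))  # bits per symbol
```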

    It is important to emphasize that in the presence of interference, the channel capacity determines the largest amount of information per unit time that can be transmitted with an arbitrarily low probability of error.

    In Chap. 6 it is shown that the throughput of a communication channel with noise can be approached by encoding the ergodic sequence of letters of the message source in blocks of such length that the theorem on the asymptotic equiprobability of long sequences holds.

    An arbitrarily small error probability is only achievable in the limit where the length of the blocks becomes infinite.

    As the encoded blocks lengthen, the complexity of the technical implementation of the encoding and decoding devices grows, as does the delay in message transmission, owing to the need to accumulate the required number of letters in a block. Within the limits of complexity permissible in practice, coding can pursue one of two goals: either, at a given information transmission rate, to ensure the minimum error, or, at a given fidelity, a transmission rate approaching the channel capacity.

    The maximum capacity of the channel is never fully utilized. The degree of its loading is characterized by the channel utilization factor

    η = Ī / C_D,

    where Ī is the performance of the message source and C_D is the communication channel capacity.

    Since normal functioning of the channel is possible, as shown below, when the source performance varies within the limits 0 ≤ Ī ≤ C_D, η can theoretically vary from 0 to 1.

    Example 4.4. Determine the throughput of a binary symmetric channel (BSC) with manipulation rate V_T, assuming the transmitted symbols to be independent.

    Let us write relation (4.19) in the following form:

    Using the notation of the graph (Fig. 4.5), we can write

    H_U(V) = −p log p − (1 − p) log(1 − p).

    The value of H_U(V) does not depend on the probabilities of the input symbols, which is a consequence of the symmetry of the channel.

    Therefore, the throughput is

    C_D = V_T [max H(V) + p log p + (1 − p) log(1 − p)].

    The maximum of H(V) is achieved when the symbols appear with equal probabilities; it is equal to 1. Hence

    C_D = V_T [1 + p log p + (1 − p) log(1 − p)].

    The dependence of the BSC capacity on p is shown in Fig. 4.6. As the probability of symbol transformation increases from 0 to 1/2, C_D(p) decreases from 1 to 0. If p = 0, there is no noise in the channel and its capacity equals 1. When p = 1/2 the channel is useless, since the values of the symbols on the receiving side could just as well be set by tossing a coin (heads 1, tails 0). The channel capacity in this case is zero.
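    The capacity curve C_D(p) of Example 4.4 can be tabulated numerically; a sketch with V_T normalized to one symbol per second:

```python
from math import log2

# BSC capacity per symbol: C(p) = 1 + p*log2(p) + (1-p)*log2(1-p),
# multiplied by the manipulation rate V_T to obtain bits per second.
def bsc_capacity(p, v_t=1.0):
    h = 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)
    return v_t * (1.0 - h)

print(bsc_capacity(0.0))            # noiseless channel: 1 bit/symbol
print(bsc_capacity(0.5))            # useless channel: capacity 0
print(round(bsc_capacity(0.1), 3))  # intermediate error probability
```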


    1. Discrete channel and its parameters

    A discrete channel is a communication channel used to transmit discrete messages.

    The composition and parameters of the electrical circuits at the input and output of a discrete channel are determined by the relevant standards. Its characteristics can be economic, technological and technical. The main ones are the technical characteristics, which can be external and internal.

    External: informational, technical-economic, and technical-operational.

    There are several definitions for transmission speed.

    The technical rate characterizes the performance of the equipment in the transmitting part:

    where m_i is the code base in the i-th channel.

    The information transmission rate is related to the channel capacity; the concept arose with the advent and rapid development of new technologies. The information rate depends on the technical rate, on the statistical properties of the source, on the type of communication channel, and on the received signals and the interference acting in the channel. Its limiting value is the capacity of the communication channel:

    C = ΔF log₂(1 + P_s/P_n),

    where ΔF is the bandwidth of the communication channel;
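    Assuming the capacity expression here is Shannon's formula C = ΔF log₂(1 + P_s/P_n), it can be evaluated as follows; the bandwidth and signal-to-noise values are illustrative:

```python
from math import log2

# Shannon's formula for the limiting capacity of a continuous channel:
# C = dF * log2(1 + SNR), with dF the bandwidth in Hz and SNR the
# signal-to-noise power ratio (both values below are illustrative).
def shannon_capacity(bandwidth_hz, snr):
    return bandwidth_hz * log2(1 + snr)

c = shannon_capacity(3100, 1000)   # a 3.1 kHz voice channel at 30 dB SNR
print(round(c))                    # capacity in bits per second
```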

    Based on the transmission rate, discrete channels and the corresponding signal conversion devices (UPS) are usually divided into:

    Low-speed (up to 300 bps);

    Medium-speed (600 - 19600 bps);

    High-speed (more than 24,000 bps).

    The effective transmission rate is the number of characters per unit time delivered to the recipient, taking overhead into account (phasing time of the communication system, time allocated to redundant symbols).

    Relative transfer rate:

    Fidelity of information transmission is considered because every channel contains extraneous emitters that distort the signal and complicate determining the type of the transmitted unit element. By the way it combines with the signal, interference can be additive or multiplicative; in form it can be harmonic, impulse or fluctuation.

    Interference leads to errors in the reception of unit elements; the errors are random. Under these conditions fidelity is characterized by the probability of error-free transmission. An estimate of transmission fidelity can be the ratio of the number of erroneous symbols to the total number transmitted.

    Often the achieved fidelity turns out to be lower than required, so measures are taken to increase it: errors are detected and eliminated by including additional devices, which complicates the equipment but reduces the number of errors. Improving fidelity is thus associated with additional material costs.

    Reliability: a discrete channel, like any data transmission system, cannot operate entirely without failures.

    A failure is an event consisting in the full or partial loss of system operability. In relation to a data transmission system, a failure is an event that causes a delay of the received message for a time t_del > t_perm, where t_perm differs from system to system. The property of a communication system of ensuring the normal performance of all its specified functions is called reliability. Reliability is characterized by the mean time between failures T_o, the mean recovery time T_v, and the availability factor

    K_a = T_o / (T_o + T_v).

    The probability of failure-free operation shows the probability with which the system can operate without a single failure.
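    The availability factor can be computed directly from T_o and T_v; the values below are illustrative:

```python
# Availability factor from the mean time between failures T_o and the
# mean recovery time T_v: K_a = T_o / (T_o + T_v).
def availability(t_between_failures, t_recovery):
    return t_between_failures / (t_between_failures + t_recovery)

# e.g. 990 h between failures and 10 h to recover (illustrative numbers)
print(availability(990.0, 10.0))   # the channel is available 99% of the time
```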

2. Model of partial description of a discrete channel

    The dependence of the probability of occurrence of a distorted combination on its length n, and the probability of occurrence of a combination of length n with t errors.

    The dependence of the probability of occurrence of a distorted combination on its length n is characterized as the ratio of the number of distorted combinations to the total number of transmitted code combinations.

    This probability is a non-decreasing function of n. When n = 1, P = p_er (the error probability of a single element); as n → ∞, P → 1.

    In the Purtov model the probability is calculated as

    P(≥1, n) ≈ n^(1−b) p_er,

    where b is the error-grouping exponent and p_er is the error probability of a single element.

    If b = 0, there is no error packetization and the errors can be considered independent.

    If 0.5 < b < 0.7, error packetization of this kind is observed on cable communication lines, since short interruptions lead to the appearance of groups with a high error density.

    If 0.3 < b < 0.5, such error packetization is observed on radio-relay communication lines, where, along with intervals of high error density, there are intervals with rare errors.

    If 0.3 < b < 0.4, it is observed in radiotelegraph channels.
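    A sketch of the Purtov estimate, assuming the commonly used approximate form P(n) ≈ n^(1−b)·p₁, where p₁ is the single-element error probability; the numeric values are illustrative:

```python
# Purtov model sketch (assumed form): probability that a combination of
# length n is distorted, P(n) = n**(1 - b) * p1, where p1 is the
# single-element error probability and b the error-grouping exponent.
def purtov_distorted(n, p1, b):
    return n ** (1 - b) * p1

n, p1 = 31, 1e-3
independent = purtov_distorted(n, p1, 0.0)   # b = 0: independent errors, ~ n*p1
grouped = purtov_distorted(n, p1, 0.6)       # cable-line grouping
print(independent)
print(grouped)   # grouping concentrates errors: fewer distorted blocks
```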

    The distribution of errors in combinations of various lengths also allows one to estimate the probability of occurrence of combinations of length n with t errors.

    Comparison of the calculated probability values using formulas (2) and (3) shows that error grouping increases the number of code combinations affected by errors of higher multiplicity. We may also conclude that error grouping decreases the number of distorted code combinations of a given length n. This is understandable from purely physical considerations as well: with the same number of errors, packetization concentrates them in individual combinations (the multiplicity of errors grows), and the number of distorted code combinations decreases.

    3. Classification of discrete channels

    Classification of discrete channels can be carried out according to various signs or characteristics.

    According to the carrier and the transmitted signal (carrier - signal), the following channels are distinguished:

    Continuous-discrete;

    Discrete-continuous;

    Discrete-discrete.

    There is a distinction between the concepts of discrete information and discrete transmission.

    From a mathematical point of view, a channel can be defined by the alphabet of unit elements at its input and output and by the transition probabilities, whose values depend on the nature of the errors in the discrete channel. If, when the i-th unit element is transmitted, i = j, no error occurred; if an element j different from i is received, an error has occurred.

    Channels in which P(a j /a i) does not depend on time for any i and j are called stationary, otherwise - non-stationary.

    Channels in which the transition probability does not depend on the value of the previously received element are channels without memory.

    If i is not equal to j, P(a j /a i)=const, then the channel is symmetrical, otherwise it is asymmetrical.

    Most channels are symmetric and have memory. Space communication channels are symmetric but have no memory.

4. Channel models

    When analyzing communication channel systems, three basic models are used for analog and discrete systems, and four further models apply only to discrete systems.

    Basic mathematical models of the communication channel:

    Channel with additive noise;

    Linear filtered channel;

    Linear filtered channel with variable parameters.

    Mathematical models for discrete CS:

    Discrete channel without memory;

    Discrete channel with memory;

    Binary symmetric channel;

    Discrete channel of binary sources.

    In the channel-with-additive-noise model, the transmitted signal S(t) is affected by additive noise n(t), which can arise from extraneous electrical interference, electronic components, amplifiers, or interference phenomena. This model is applicable to any communication channel; if attenuation is present, an attenuation coefficient b must be added to the overall response:

    r(t) = bS(t) + n(t). (9)

    The linear filtered channel is applicable to physical channels containing linear filters for limiting the frequency band and eliminating interference phenomena; c(t) is the impulse response of the linear filter.

    A linear filtered channel with variable parameters is characteristic of specific physical channels, such as acoustic channels and ionospheric radio channels, in which the transmitted signal varies over time and is described by variable parameters.

    The discrete memoryless channel model is characterized by an input alphabet (for example, a binary sequence of symbols) and by the set of conditional probabilities of the output symbols for each transmitted symbol.

    In a discrete channel with memory, interference acts on the transmitted data packet or the channel is affected by fading; the conditional probability is then expressed as the joint probability of all elements of the sequence.

    A binary symmetric channel is a special case of the discrete memoryless channel in which the input and output alphabets consist only of 0 and 1; consequently, the probabilities have a symmetric form.

    A discrete channel of binary sources generates an arbitrary sequence of symbols; the final discrete source is determined not only by this sequence and the probabilities of its occurrence, but also by the introduction of such functions as self-information and mathematical expectation.

5. Modulation


    Signals are generated by changing certain parameters of a physical carrier in accordance with the transmitted message. This process (changing the parameters of the carrier) is usually called modulation.

    The general principle of modulation is to change one or more parameters of the carrier oscillation f(t, b, c, ...) in accordance with the transmitted message. Thus, if a harmonic oscillation f(t) = U cos(ω₀t + φ₀) is chosen as the carrier, three types of modulation can be formed: amplitude (AM), frequency (FM) and phase (PM).

    Waveforms of a binary code for various types of discrete modulation

    Amplitude modulation consists in changing the amplitude of the carrier, U_AM = U₀ + ax(t), in proportion to the primary signal x(t). In the simplest case of a harmonic signal x(t) = X cos Ωt the amplitude is U_AM = U₀ + aX cos Ωt.

    As a result we have the AM oscillation:

    u_AM = (U₀ + aX cos Ωt) cos(ω₀t + φ₀). (11)

    Graphs of the oscillations x(t), u and u_AM

    AM spectrum

    Figure 1.5 shows graphs of the oscillations x(t), u and u_AM. The maximum deviation of the amplitude U_AM from U₀ is the amplitude of the envelope U_Ω = aX. The ratio of the amplitude of the envelope to the amplitude of the carrier (unmodulated) oscillation is

    m = U_Ω/U₀ = aX/U₀. (12)

    m is called the modulation coefficient. Usually m ≤ 1. The modulation coefficient expressed as a percentage (m·100%) is called the modulation depth. The modulation coefficient is proportional to the amplitude of the modulating signal.

    Using expression (12), expression (11) is written as

    u_AM = U₀(1 + m cos Ωt) cos(ω₀t + φ₀). (13)

    To determine the spectrum of the AM oscillation, let us open the brackets in expression (13):

    u_AM = U₀ cos(ω₀t + φ₀) + (mU₀/2) cos[(ω₀ + Ω)t + φ₀] + (mU₀/2) cos[(ω₀ − Ω)t + φ₀]. (14)

    According to (14), the AM oscillation is the sum of three high-frequency harmonic oscillations of close frequencies (since Ω << ω₀, or F << f₀):

    oscillation of the carrier frequency f₀ with amplitude U₀;

    oscillation of the upper side frequency f₀ + F;

    oscillation of the lower side frequency f₀ − F.

    The spectrum of the AM oscillation (14) is shown in Figure 1.6. The spectrum width equals twice the modulation frequency: Δf_AM = 2F. The amplitude of the carrier oscillation does not change during modulation, while the amplitudes of the side-frequency oscillations (upper and lower) are proportional to the modulation depth, i.e., to the amplitude X of the modulating signal. When m = 1 the amplitudes of the side-frequency oscillations reach half the carrier amplitude (0.5U₀).
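    The envelope swing and sideband amplitudes stated above can be verified numerically; a Python sketch with illustrative carrier and modulation parameters:

```python
import math

# AM oscillation u(t) = U0*(1 + m*cos(W*t))*cos(w0*t); its envelope swings
# between U0*(1-m) and U0*(1+m), and each side frequency has amplitude m*U0/2.
U0, m = 1.0, 0.5
f0, F = 1000.0, 50.0                 # carrier and modulating frequencies, Hz
w0, W = 2 * math.pi * f0, 2 * math.pi * F

t = [i / 16000.0 for i in range(16000)]          # 1 s sampled at 16 kHz
u = [U0 * (1 + m * math.cos(W * ti)) * math.cos(w0 * ti) for ti in t]

print(max(u))        # peak value U0*(1+m)
print(m * U0 / 2)    # amplitude of each side frequency
# spectrum width is twice the modulating frequency: 2F = 100 Hz
```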

    The carrier oscillation contains no information, and it does not change during modulation. It is therefore possible to transmit only the sidebands, as is implemented in double-sideband (DSB) suppressed-carrier communication systems. Moreover, since each sideband contains complete information about the primary signal, one can manage with the transmission of only one sideband (SSB). Modulation that produces oscillations of one sideband is called single-sideband.

    The obvious advantage of DSB and SSB systems is the possibility of using the transmitter power to transmit only the sidebands (two or one) of the signal, which makes it possible to increase the communication range and reliability. With single-sideband modulation, in addition, the width of the spectrum of the modulated oscillation is halved, which makes it possible to correspondingly increase the number of signals transmitted over the communication line in a given frequency band.

    Phase modulation consists of a change in the phase φ of the carrier u = U0 cos(ω0 t + φ0) proportional to the primary signal x(t).

    The amplitude of the oscillation does not change during phase modulation, so the analytical expression for the PM oscillation is:

    If modulation is carried out by a harmonic signal x(t) = X sin Ωt, then the instantaneous phase is:

    The first two terms of (1.17) determine the phase of the unmodulated oscillation; the third determines the change in the oscillation phase as a result of modulation.

    The phase-modulated oscillation is clearly characterized by the vector diagram of Figure 1.7, constructed on a plane rotating clockwise with angular frequency ω0. The unmodulated oscillation corresponds to the stationary vector U0. Phase modulation consists of a periodic rotation, at the modulation frequency, of the vector U relative to U0 by an angle Δφ(t). The extreme positions of the vector U are designated U′ and U″. The maximum deviation of the phase of the modulated oscillation from the phase of the unmodulated oscillation is:

    where M is the modulation index. The modulation index M is proportional to the amplitude X of the modulating signal.

    Vector diagram of phase modulated oscillation

    Using (18), we rewrite the PM oscillation (16) as

    u = U0 cos(ω0 t + φ0 + M sin Ωt) (19)

    Instantaneous frequency of the PM oscillation:

    ω = ω0 + MΩ cos Ωt (20)

    Thus, at different moments of time the PM oscillation has different instantaneous frequencies, differing from the carrier frequency ω0 by up to Δω = MΩ.

    Frequency modulation consists of a change in the instantaneous frequency of the carrier proportional to the primary signal x(t):

    ω = ω0 + a·x(t) (21)

    where a is the proportionality coefficient.

    Instantaneous phase of the FM oscillation

    The analytical expression of FM oscillations, taking into account the constancy of the amplitude, can be written as:

    Frequency deviation Δω_D is the maximum deviation of the instantaneous frequency from the carrier frequency ω0 caused by modulation:

    The analytical expression of this FM oscillation is:

    The term (Δω_D/Ω)·sin Ωt characterizes the phase change resulting from FM. This allows us to treat the FM oscillation as a PM oscillation with modulation index

    and write it similarly:

    From the above it follows that PM and FM oscillations have much in common. Thus, an oscillation of the form (1.27) can result from both PM and FM by a harmonic primary signal. In addition, PM and FM are characterized by the same parameters (the modulation index M and the frequency deviation Δf_D), related to each other by the same relations (1.21) and (1.24).

    Along with the noted similarity of frequency and phase modulation, there is also a significant difference between them, associated with the different nature of the dependence of M and Δf_D on the frequency F of the primary signal:

    with PM, the modulation index M does not depend on the frequency F, and the frequency deviation is proportional to F;

    with FM, the frequency deviation does not depend on the frequency F, and the modulation index is inversely proportional to F.
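This difference can be illustrated with a short sketch (the index M = 2 rad and deviation Δf_D = 4 kHz are assumed values chosen for illustration, not taken from the text):

```python
def pm_params(M, F):
    """Phase modulation: the index M is set by the modulator and does not
    depend on F; the frequency deviation grows proportionally to F."""
    return M, M * F          # (index, deviation in Hz)

def fm_params(df_dev, F):
    """Frequency modulation: the deviation is set by the modulator and does
    not depend on F; the index falls inversely with F (M = df_dev / F)."""
    return df_dev / F, df_dev  # (index, deviation in Hz)

# PM with M = 2 rad: doubling F doubles the deviation, M stays the same
M1, d1 = pm_params(2.0, 1000.0)
M2, d2 = pm_params(2.0, 2000.0)

# FM with deviation 4 kHz: doubling F halves the index, deviation stays the same
m1, dev1 = fm_params(4000.0, 1000.0)
m2, dev2 = fm_params(4000.0, 2000.0)
```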

    6. Block diagram of a system with ROS

    Transmission in a system with ROS is similar to a telephone conversation in conditions of poor audibility, when one of the interlocutors, having heard a word or phrase poorly, asks the other to repeat it, and when audibility is good either confirms receiving the information or, in any case, does not ask for a repeat.

    The information received via the OS channel is analyzed by the transmitter, and based on the results of the analysis, the transmitter makes a decision to transmit the next code combination or to repeat previously transmitted ones. After this, the transmitter transmits service signals about the decision made, and then the corresponding code combinations. In accordance with the service signals received from the transmitter, the receiver either issues the accumulated code combination to the recipient of the information, or erases it and stores the newly transmitted one.

    Types of systems with ROS: systems with waiting for service signals, systems with continuous transmission and blocking, and systems with address repetition. Numerous operating algorithms for ROS systems are currently known. The most common are systems with ROS with waiting for a feedback signal, with addressless repetition and receiver blocking, and with address repetition.

    Systems with waiting, after transmitting a combination, either wait for the feedback signal or keep transmitting the same code combination, and begin transmitting the next code combination only after receiving confirmation of the previously transmitted one.

    Blocking systems transmit a continuous sequence of code combinations while no feedback signals arrive for the previous S combinations. After errors are detected in the (S+1)-th combination, the system output is blocked for the duration of receiving S combinations, the S previously received combinations are erased in the memory device of the PDS system receiver, and a repeat-request signal is sent. The transmitter repeats the transmission of the S last transmitted code combinations.

    Systems with address repetition are distinguished by the fact that code combinations with errors are marked with conditional numbers, in accordance with which the transmitter retransmits only these combinations.

    Algorithm for protection against overlap and loss of information. ROS systems can either discard or use the information contained in rejected code combinations in order to make a more correct decision. Systems of the first type are called systems without memory, and systems of the second type, systems with memory.

    Figure 1.8 shows a block diagram of a system with ROS-ozh (ROS with waiting). Systems with ROS-ozh operate as follows. The m-element combination of the primary code coming from the information source (AI) is written through the logical OR circuit into the transmitter's storage unit (NK 1). At the same time, check symbols are formed in the encoding device (KU); they constitute the block check sequence (BCS).

    Block diagram of a system with ROS

    The resulting n-element combination is fed to the input of the forward channel (PC). From the output of the PC, the combination is supplied to the inputs of the decision device (RU) and the decoding device (DKU). From the m information symbols received over the forward channel, the DKU forms its own block check sequence. The decision device compares the two BCS (the one received from the PC and the one generated by the DKU) and makes one of two decisions: either the information part of the combination (the m-element primary code) is issued to the information recipient (PI), or it is erased. At the same time, the information part is separated in the DKU and the resulting m-element combination is written into the receiver's storage unit (NK 2).

    Block diagram of the system algorithm with ROS NP

    If there are no errors, or the errors are undetected, a decision is made to issue the information to the PI, and the receiver control device (CU 2) issues a signal that opens element AND 2, which ensures the transfer of the m-element combination from NK 2 to the PI. The feedback signal generation device (UFS) generates a combination reception confirmation signal, which is transmitted via the reverse channel (OK) to the transmitter. If the signal coming from the OK is interpreted by the feedback signal decoding device as a confirmation signal, a corresponding pulse is sent to the input of the transmitter control device (CU 1), by which CU 1 requests the next combination from the AI. Logic circuit AND 1 in this case is closed, and the combination written in NK 1 is erased when a new one arrives.

    If errors are detected, a decision is made to erase the combination recorded in NK 2; CU 2 generates control pulses that lock logic circuit AND 2, and the UFS generates a repeat-request signal. When the feedback signal decoding device interprets the signal arriving at its input as a repeat request, CU 1 generates control pulses with the help of which the combination stored in NK 1 is retransmitted through the AND 1, OR and KU circuits to the PC.


    Ministry of Education and Science of the Republic of Kazakhstan

    Non-profit joint stock company

    "Almaty University of Energy and Communications"

    Department of Infocommunication Technologies

    COURSE WORK

    in the discipline "Digital Communication Technologies"

    Completed:

    Alieva D.A.

    Introduction

    2. System with ROS and continuous transmission of information (ROS - np) and blocking

    3. Determination of n, k, r, at the highest throughput R

    4. Construction of encoder and decoder circuits for the selected g (x) polynomial

    8. Calculations of reliability indicators of the main and bypass channels

    9. Selecting a highway from the map

    Conclusion

    References

    Introduction


    Recently, digital data transmission systems have become increasingly widespread. In this regard, special attention is paid to studying the principles of transmitting discrete messages. The discipline “Digital Communication Technologies” is devoted to the principles and methods of transmitting digital signals and builds on previously studied disciplines: “Theory of Electrical Communications”, “Theory of Electrical Circuits”, “Fundamentals of Construction and CAD of Telecommunication Systems and Networks”, “Digital Devices and Fundamentals of Computer Technology”, etc. As a result of studying this discipline, it is necessary to know the principles of constructing systems for transmitting and processing digital signals, hardware and software methods for increasing the noise immunity and transmission speed of digital communication systems, and methods for increasing the effective use of communication channels. It is also necessary to be able to calculate the main functional units, analyze the influence of external factors on the performance of communication equipment, and have skills in using computer technology for the calculation and design of communication software and hardware.

    Completing the course work helps one gain problem-solving skills and consider sections of the “Digital Communication Technologies” course more thoroughly.

    The purpose of this work is to design a data path between an information source and receiver using a cyclic code and decisive feedback with continuous transmission and receiver blocking. The course work considers the operating principle of the encoding and decoding devices of a cyclic code. Software tools are widely used to model telecommunication systems: using the “System View” package, circuits for a cyclic code encoder and decoder must be assembled in accordance with the given variant.

    1. Models of partial description of a discrete channel

    In real communication channels, errors occur for many reasons. In wired channels, the greatest number of errors is caused by short-term interruptions and impulse noise. In radio channels, fluctuation noise has a noticeable effect. In shortwave radio channels, the majority of errors occur when the signal level changes due to the influence of fading. In all real channels, errors are distributed very unevenly over time, which is why error flows are also uneven.

    There are a large number of mathematical models of a discrete channel. In addition to general schemes and particular models, there are many models that provide a partial description of the channel. Let us dwell on one of them, the model of L.P. Purtov.

    For a discrete channel model with independent errors, the probability of a distorted combination of length n is P(≥1, n) = 1 − (1 − p_osh)^n ≈ n·p_osh.

    In real channels errors are of a bursty (batch) nature, so an error grouping coefficient α is introduced.

    Using this model, it is possible to determine the dependence of the probability of occurrence of a distorted combination on its length n, and the probability of occurrence of combinations of length n with t errors (t < n).

    The probability P(≥1, n) is a non-decreasing function of n. When n = 1, P(≥1, n) = p_osh.

    The probability of occurrence of distortions in a code combination of length n is:

    P(≥1, n) = n^(1−α) · p_osh,

    where α is the error grouping indicator.

    At α = 0 we have the case of independent occurrence of errors, and at α = 1 the occurrence of grouped errors (at α = 1 the probability of code combination distortion does not depend on n, since in each erroneous combination all elements are received with errors). The highest values of α (0.5 to 0.7) are observed on cable lines, since a short-term interruption leads to the appearance of groups with a high error density. In radio relay lines, where along with intervals of high error density there are intervals with rare errors, α lies in the range from 0.3 to 0.5. In HF radio telegraph channels, the error grouping indicator is the smallest (0.3-0.4).

    The distribution of errors in combinations of different lengths evaluates not only the probability of occurrence of distorted combinations (at least one error), but also the probability of combinations of length n with at least t errors, P(≥t, n).

    Consequently, grouping of errors leads to an increase in the number of code combinations affected by errors of higher multiplicity. Analyzing the above, we can conclude that when errors are grouped, the number of distorted code combinations of a given length n decreases. This is also understandable from purely physical considerations: with the same number of errors, packetization concentrates them in individual combinations (the multiplicity of errors increases), and the number of distorted code combinations decreases.
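A minimal sketch of this dependence, assuming the Purtov-model form P(≥1, n) = n^(1−α)·p_osh for the probability of a distorted combination (p_osh = 0.5·10⁻³ and n = 511 are the values used in the calculations later in this work):

```python
def p_distorted(n, p_osh, alpha):
    """Purtov partial-description model: probability that a combination of
    length n is distorted, P(>=1, n) = n**(1 - alpha) * p_osh.
    alpha = 0 -> independent errors (P ~ n * p_osh);
    alpha = 1 -> fully grouped errors (P = p_osh, independent of n)."""
    return n ** (1.0 - alpha) * p_osh

p = 0.5e-3
independent = p_distorted(511, p, 0.0)   # ~ n * p_osh
grouped     = p_distorted(511, p, 1.0)   # = p_osh, does not grow with n
cable       = p_distorted(511, p, 0.5)   # alpha typical of cable lines
```

The stronger the grouping (larger α), the smaller the fraction of distorted combinations of a given length, matching the conclusion above.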

    2. System with ROS and continuous transmission of information (ROS-np) and blocking

    In ROS-np systems, the transmitter transmits a continuous sequence of combinations without waiting for confirmation signals. The receiver erases only those combinations in which the decision device detects errors and sends a repeat-request signal for them. The remaining combinations are issued to the PI as they arrive. When implementing such a system, difficulties arise due to the finite time of transmission and propagation of signals. Suppose that at some moment t′ the reception of a code combination in which an error is detected is completed; by that moment the next code combination is already being transmitted over the forward channel. If the signal propagation time in the channel t_c exceeds the duration of the code combination n·t_0, then by the moment t′ the transmission of one or more combinations following the errored one may already have ended, and more combinations will be transmitted until the moment t″ when the repeat-request signal for the errored combination is received and analyzed.

    Thus, during continuous transmission, in the time between the moment the error is detected (t′) and the arrival of the repeated code combination (t″), h more combinations will be received, where the symbol [x] means the smallest integer greater than or equal to x.

    Since the transmitter repeats only the combinations for which a repeat-request signal is received, repetition with a delay of h combinations means that the order of combinations in the information issued by the system to the PI will differ from the order in which the code combinations enter the system. But the recipient must receive the code combinations in the same order in which they were transmitted. Therefore, to restore the order of combinations, the receiver must have a special device and a buffer storage of significant capacity (at least i·h, where i is the number of repetitions), since multiple repetitions are possible.

    To avoid making receivers more complex and expensive, systems with ROS-np are built mainly in such a way that, after detecting an error, the receiver erases the combination with the error and is blocked for h combinations (i.e., does not accept the h subsequent combinations), while the transmitter, upon a repeat-request signal, repeats the h last combinations (the combination with the error and the h − 1 following it). Such ROS-np systems are called systems with blocking, ROS-npbl. They make it possible to organize continuous transmission of code combinations while preserving their order.
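The behaviour described above can be sketched as a toy go-back-h model (a simplification: only the first transmission of selected combinations is rejected, repeats are assumed error-free, and the function and argument names are illustrative, not part of the system described):

```python
def ros_npbl(messages, h, error_on_first_try):
    """Toy go-back-h model of a ROS-np system with blocking.

    `error_on_first_try` is the set of combination indices whose first
    transmission the decoder rejects; repeats are assumed error-free
    (an assumption of this sketch, not of the real system).
    Returns (combinations delivered to the PI in order, total transmissions).
    """
    delivered, sent = [], 0
    tried = set()          # indices already transmitted at least once
    i = 0                  # next combination to transmit
    while i < len(messages):
        sent += 1
        if i in error_on_first_try and i not in tried:
            tried.add(i)
            # The receiver erases the bad combination and blocks for the
            # h - 1 follower combinations already in flight; count them too.
            sent += min(h - 1, len(messages) - i - 1)
            # The transmitter goes back and repeats starting from the bad one.
        else:
            tried.add(i)
            delivered.append(messages[i])
            i += 1
    return delivered, sent
```

For six combinations with an error in the second one and h = 3, the model delivers all combinations in order at the cost of extra transmissions, mirroring the timing-diagram discussion later in the work.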

    Figure 1 - Block diagram of a system with ROS

    3. Determination of n, k, r at the highest throughput R

    The length of the code combination n must be chosen so as to ensure the greatest throughput of the communication channel. When a correcting code is used, the code combination contains n bits, of which k are information bits and r are check bits:

    Figure 2 - Block diagram of the system algorithm with ROS-NPBL

    If the communication system uses binary signals (signals of type “1” and “0”) and each unit element carries no more than one bit of information, then there is a relationship between the information transmission rate and the modulation rate:

    C = (k/n)*B, (1)

    where C is the information transmission rate, bit/s;

    B - modulation speed, Baud.

    Obviously, the smaller r, the more the ratio k/n approaches 1, the less the difference between C and B, i.e. the higher the capacity of the communication system.
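For example (a sketch using the code parameters n = 511, k = 504 chosen later in this work and B = 1200 Baud), relation (1) gives:

```python
def info_rate(k, n, B):
    """Information transmission rate C = (k/n) * B for binary signalling,
    where k of the n bits in each combination carry information (formula (1))."""
    return k / n * B

C = info_rate(k=504, n=511, B=1200)  # bit/s
```

Here C ≈ 1183.6 bit/s, only slightly below B, since k/n is close to 1.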

    It is also known that for cyclic codes with a minimum code distance d0 = 3 the following relation holds:

    r ≥ log2(n + 1).

    A similar statement holds for larger d0, although there are no exact relations between r and n, only upper and lower bounds.

    From the foregoing we can conclude that, from the point of view of the constant redundancy introduced into the code combination, it is advantageous to choose long code combinations, since as n increases the relative throughput R = k/n = (n − r)/n increases, tending to the limit of 1.
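A sketch of this trend for d0 = 3 codes, taking the minimum number of check bits r = ⌈log2(n + 1)⌉ from the relation above (an assumption of this sketch; the code actually chosen later in the work uses different parameters):

```python
import math

def relative_throughput(n):
    """k/n for a d0 = 3 cyclic (Hamming-type) code of length n, taking the
    minimum number of check bits r = ceil(log2(n + 1))."""
    r = math.ceil(math.log2(n + 1))
    return (n - r) / n

# relative throughput grows toward 1 as the combination length increases
rates = {n: relative_throughput(n) for n in (7, 15, 63, 255, 511, 1023)}
```

k/n grows from 4/7 ≈ 0.57 at n = 7 to about 0.99 at n = 1023.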

    In real communication channels there is interference leading to errors in code combinations. When the decoding device detects an error in systems with ROS, a group of code combinations is requested again. During these repeats, the flow of useful information decreases.

    It can be shown that in this case:

    where P_00 is the probability of the decoder detecting an error (the probability of a repeat request);

    P_pr is the probability of correct (error-free) reception of the code combination;

    M is the transmitter storage capacity in code combinations.

    At low probabilities of error in the communication channel (P_osh < 10^-3) the probability P_00 is also small, so the denominator differs little from 1 and we can assume:

    In case of independent errors in the communication channel, when:

    Storage capacity:

    M = <3 + 2·t_p/t_k>.

    The sign < > means that when calculating M one should take the nearest larger integer value.

    where L is the distance between terminal stations, km;

    v is the speed of signal propagation along the communication channel, km/s;

    B - modulation speed, Baud.

    After simple substitutions we finally have

    It is easy to notice that at P_osh = 0, formula (8) turns into formula (3).

    If there are errors in the communication channel, the value of R is a function of P_osh, n, k, B, L, v. Consequently, there is an optimal n (for given P_osh, B, L, v) at which the relative throughput is maximal.

    Formula (8) becomes even more complicated in the case of dependent errors in the communication channel (when packetizing errors).

    Let us derive this formula for the Purtov error model.

    As shown in the literature, the number of errors t_o in a combination n bits long is determined by formula 7.38. To detect this number of errors, we need a cyclic code with a code distance d0 of at least t_o + 1. Therefore, according to formula 7.38, it is necessary to determine the probability:

    As shown, with some approximation this probability can be related to the probability of the decoder failing to detect an error, P_no, and to the number of check bits in the code combination:

    Substituting this value into (9), with t_o replaced by d0 − 1, we have:

    When calculating on microcalculators, it is more convenient to use decimal logarithms.

    After transformations:

    Returning to formulas (6) and (8) and replacing k with n-r taking into account the value of r, from formula (11) we obtain:

    The second term of formula (8), taking into account the grouping of errors according to relation 7.37, will take the form:

    Let us determine the optimal length of the code combination n that provides the greatest relative throughput R, and the number of check bits r that provides the specified probability of an undetected error P_no.

    Table 1 - Specified probability of an undetected error P_no

    From Table 1 it can be seen that the highest throughput, R = 0.9127649, is provided by a cyclic code with parameters n = 511, r = 7, k = 504.

    We find the generating polynomial of degree r from the table of irreducible polynomials (Appendix A to this MU).

    Let us choose, for r = 7, the polynomial g(x) = x^7 + x^4 + x^3 + x^2 + 1.
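The systematic encoding and syndrome check with this g(x) can be modelled in software (a sketch of the GF(2) polynomial division performed by the LFSR circuits described below; the function names are illustrative):

```python
GEN = 0b10011101  # g(x) = x^7 + x^4 + x^3 + x^2 + 1 as a bit mask
R = 7             # number of check bits = degree of g(x)

def crc_remainder(bits):
    """Remainder of x^R * m(x) divided by g(x) over GF(2)."""
    reg = 0
    for b in bits + [0] * R:      # appending R zeros multiplies m(x) by x^R
        reg = (reg << 1) | b
        if reg & (1 << R):        # leading term present -> subtract g(x)
            reg ^= GEN
    return reg                    # integer below 2**R

def encode(bits):
    """Systematic codeword: information bits followed by the R check bits."""
    rem = crc_remainder(bits)
    return bits + [(rem >> i) & 1 for i in range(R - 1, -1, -1)]

def syndrome(codeword):
    """Divide the received polynomial P(x) by g(x); zero means no detected error."""
    reg = 0
    for b in codeword:
        reg = (reg << 1) | b
        if reg & (1 << R):
            reg ^= GEN
    return reg
```

A codeword produced by encode() yields a zero syndrome; flipping any single bit makes the syndrome non-zero, which in the decoder circuit described below triggers erasure and a repeat request.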

    4. Construction of encoder and decoder circuits for the selected g(x) polynomial

    a) Let's build a cyclic code encoder.

    The operation of the encoder at its output is characterized by the following modes:

    1. Formation of the k elements of the information group and, at the same time, division of the polynomial x^r·m(x), reflecting the information part, by the generating polynomial g(x) in order to obtain the remainder of the division r(x).

    2. Formation of the r check elements by reading them from the cells of the division circuit to the encoder output.

    The block diagram of the encoder is shown in Figure 3.

    The encoder operation cycle for transmitting n = 511 unit elements is n clock cycles. Clock signals are generated by a transmitting distributor, which is not indicated in the diagram.

    The first mode of operation of the encoder lasts k = 504 cycles. From the first clock pulse, the trigger T takes a position in which a “1” signal appears at its direct output and a “0” signal at its inverse output. The “1” signal opens keys (AND logic circuits) 1 and 3; the “0” signal keeps key 2 closed. The trigger and keys remain in this state for k = 504 clock cycles, during which the 504 unit elements of the information group (k = 504) arrive at the encoder output through open key 1.

    At the same time, through open key 3, the information elements enter the device that divides the polynomial x^r·m(x) by g(x).

    The division is carried out by a multi-cycle filter with the number of cells equal to the number of check bits (the degree of the generating polynomial). In our case, the number of cells is r = 7. The number of adders in the device is equal to the number of non-zero terms of g(x) minus one (note on page 307); in our case, four. The adders are installed after the cells corresponding to the non-zero terms of g(x). Since all irreducible polynomials have the term x^0 = 1, the adder corresponding to this term is installed in front of key 3 (AND logic circuit).

    After k=504 cycles, the remainder of the division r(x) will be written in the cells of the division device.

    Upon the (k+1)-th = 505th clock pulse, trigger T changes its state: a “1” signal appears at the inverse output and “0” at the direct output. Keys 1 and 3 close, and key 2 opens. During the remaining r = 7 clock cycles, the elements of the remainder of the division (the check group) pass through key 2 to the encoder output, likewise starting from the most significant digit.

    Figure 3 - Block diagram of the encoder

    b) Let's build a cyclic code decoder.

    The operation of the decoder circuit (Figure 3) is as follows. The received code combination, represented by the polynomial P(x), enters the decoding register and simultaneously the cells of the buffer register, which contains k cells. The buffer register cells are connected through NO logic circuits, which pass a signal only if there is a “1” at the first input and a “0” at the second (this input is marked with a circle). The code combination arrives at the input of the buffer register through the AND 1 circuit. This key is opened from the output of trigger T by the first clock pulse and closed by the (k+1)-th clock pulse (entirely analogous to the operation of trigger T in the encoder circuit). Thus, after k = 504 clock cycles, the elements of the information group will be written into the buffer register. The NO circuits are open in the register-filling mode, because no voltage is supplied to their second inputs from key AND 2.

    At the same time, during all n = 511 clock cycles, the decoding register divides the code combination (the polynomial P(x)) by the generating polynomial g(x). The decoding register circuit is completely analogous to the encoder division circuit discussed in detail above. If the division yields a zero remainder, syndrome S(x) = 0, the subsequent clock pulses read the information elements out to the decoder output.

    If there are errors in the received combination, the syndrome S(x) is not equal to 0. This means that after the n-th (511th) clock cycle, a “1” will be written in at least one cell of the decoding register, and a signal will appear at the output of the OR circuit. Key 2 (circuit AND 2) will operate, the NO circuits of the buffer register will close, and the next clock pulse will set all register cells to the “0” state. The incorrectly received information will be erased. At the same time, the erase signal is used as a command to block the receiver and to send a repeat request.

    5. Determination of the volume of transmitted information W

    Suppose it is necessary to transmit information over a time interval T_per, called the transmission session time. The failure criterion t_otk is the total duration of all faults permissible during the time T_per. If the fault time during T_per exceeds t_otk, the data transmission system is considered to be in a state of failure.

    Therefore, during the time T_per − t_otk it is possible to transmit W bits of useful information. Let us determine W for the previously calculated R = 0.9281713, B = 1200 Baud, T_per = 460 s, t_otk = 60 s.

    W = R·B·(T_per − t_otk) = 0.9281713 · 1200 · (460 − 60) ≈ 445 522 bits
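The arithmetic can be checked directly (a sketch; the variable names are illustrative):

```python
R, B = 0.9281713, 1200        # relative throughput and modulation rate, Baud
T_per, t_otk = 460, 60        # session time and permissible fault time, s

W = R * B * (T_per - t_otk)   # useful volume, bits; ~445522
```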

    6. Construction of cyclic code encoder and decoder circuits in the System View environment

    Figure 4 - Cyclic code encoder

    Figure 5 - Output and input signal of the encoder

    Figure 7 - Decoder input signal, bit error and output syndrome

    7. Finding the storage capacity and constructing a timing diagram

    Let's find the storage capacity:

    M = <3 + 2·t_p/t_k> (13)

    where t_p is the signal propagation time along the communication channel, s;

    t_k is the duration of a code combination of n bits, s.

    These parameters are found from the following formulas:

    t_p = L/v = 4700/80000 = 0.05875 s (14)

    t_k = n·t_0 = 511 · 8.333·10^-4 ≈ 0.4259 s (15)

    h = 1 + <t_ozh/(n·t_0)> (16)

    where t_ozh = 3·t_k + 2·t_p + t_ak + t_az = 0.6388 + 0.1175 + 0.2129 + 0.2129 = 1.1821 s,

    where t_ak, t_az are the analysis times in the receiver, and t_0 is the duration of a unit pulse:

    h = 1 + <1.1821/(511 · 8.333·10^-4)> = 3
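The timing arithmetic of (14)-(16) can be reproduced as follows (a sketch: t_ozh is taken as the value computed in the text, and the angle brackets in (16) are interpreted here as the integer part, which matches the stated result h = 3):

```python
import math

L, v = 4700.0, 80000.0          # route length, km; propagation speed, km/s
B, n = 1200.0, 511              # modulation rate, Baud; code length, bits
t0 = 1.0 / B                    # unit-pulse duration, s (~8.333e-4)

t_p = L / v                     # (14) propagation time: 0.05875 s
t_k = n * t0                    # code-combination duration: ~0.4259 s

t_ozh = 1.1821                  # waiting time from the text, s
h = 1 + math.floor(t_ozh / (n * t0))   # (16): number of blocked combinations
```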

    8. Calculation of reliability indicators of the main and bypass channels

    The probability of an error occurring is known (P_osh = 0.5·10^-3). The total probability is the sum of the following components: p_pr, correct reception; p_no, failure to detect an error; p_ob, the probability of the decoder detecting an error (the probability of a repeat request).

    The dependence of the probability of occurrence of a distorted combination on its length is characterized as the ratio of the number of distorted code combinations N_osh(n) to the total number of transmitted combinations N(n):

    The probability P(≥1, n) is a non-decreasing function of n. For n = 1, P(≥1, n) = p_osh, and as n → ∞ the probability P(≥1, n) → 1:

    P(≥1, n) = (n/(d0 − 1))^(1−α) · p_osh, (17)

    P(≥1, n) = (511/5)^(1−0.5) · 0.5·10^-3 = 5.05·10^-3.

    With independent errors in the communication channel, when n·p_osh << 1:

    p_ob ≈ n·p_osh (18)

    p_ob = 511 · 0.5·10^-3 = 255.5·10^-3

    The sum of the probabilities must equal 1, i.e. we have:

    p_pr + p_no + p_ob = 1 (19)

    p_pr + 5.05·10^-3 + 255.5·10^-3 = 1
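Solving balance (19) for the probability of correct reception (a direct arithmetic check, with p_no and p_ob taken as the text assigns them):

```python
p_no = 5.05e-3     # probability assigned to undetected errors, from (17)
p_ob = 255.5e-3    # repeat-request (detected-error) probability, from (18)

p_pr = 1.0 - p_no - p_ob   # probability of correct reception, ~0.73945
```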

    The timing diagram (Figure 8) illustrates the operation of the ROS-npbl system when an error is detected in the second combination, for the case h = 3. As can be seen from the diagram, combinations from the AI are transmitted continuously until the transmitter receives a repeat-request signal. After this, the transmission of new information from the AI stops for the time of repeating the combinations starting from the second. During this time, the receiver erases the second combination, in which the error was detected (marked with an asterisk), and the 3 subsequent combinations (shaded). Having received the combinations repeated from the storage device (from the second to the fifth inclusive), the receiver issues them to the PI, and the transmitter continues transmitting the sixth and subsequent combinations.

    Figure 8 - Timing diagrams of system operation with ROS-NPBL

    9. Selecting a highway from the map

    Figure 9 - Aktyubinsk - Almaty - Astana highway

    Conclusion

    In the course work, the model of partial description of a discrete channel (L.P. Purtov's model) was considered, as well as a system with decisive feedback, continuous transmission and receiver blocking (ROS-NPbl).

    Based on the given values, the main parameters of the cyclic code were calculated. In accordance with them, the type of generating polynomial was chosen. For this polynomial, encoder and decoder circuits are constructed with an explanation of the principles of their operation. The same schemes were implemented using the System View package. All results of the experiments are presented in the form of drawings confirming the correct operation of the assembled encoder and decoder circuits.

    For the forward and reverse discrete data transmission channels, the main characteristics were calculated: the probability of an undetectable error and one detected by a cyclic code, etc. For the ROS NPBL system, timing diagrams were constructed using the calculated parameters to explain the operating principle of this system.

    Two points were selected from the geographical map of Kazakhstan (Aktyubinsk - Almaty - Astana). The 4,700 km long highway chosen between them was divided into sections 200-700 km long. For a visual representation, a map is presented in the work.

    Analyzing the given error grouping indicator, we can say that the calculations in this work correspond to the design of cable communication lines, since the indicator lies in the range 0.4-0.7 characteristic of such lines.

    References

    1 Sklyar B. Digital Communications: Theoretical Foundations and Practical Application. 2nd ed. / Trans. from English. - M.: Williams, 2003. - 1104 p.

    2 Prokis J. Digital Communication. - M.: Radio and Communications, 2000. - 797 p.

    3 Sergienko A.B. Digital Signal Processing: Textbook for Universities. - M.: 2002.

    4 Company standard. Educational works. General requirements for construction, presentation, design and content. FS RK 10352-1910-U-e-001-2002. - Almaty: AIES, 2002.

    5 Shvartsman V.O., Emelyanov G.A. The Theory of Discrete Information Transmission. - M.: Communication, 1979. - 424 p.

    6 Transmission of Discrete Messages / Ed. V.P. Shuvalov. - M.: Radio and Communications, 1990. - 464 p.

    7 Emelyanov G.A., Shvartsman V.O. Transfer of Discrete Information. - M.: Radio and Communications, 1982. - 240 p.

    8 Purtov L.P. et al. Elements of the Theory of Discrete Information Transmission. - M.: Communication, 1972. - 232 p.

    9 Kolesnik V.D., Mironchikov E.T. Decoding of Cyclic Codes. - M.: Communication, 1968.


    UDC 621.397

    Models of discrete communication channels

    Mikhail Vladimirovich Markov, master's student, mmarkov1986@mail.ru,

    Federal State Educational Institution of Higher Professional Education "Russian State University of Tourism and Service",

    Moscow
    The main models of discrete communication channels used to transmit information in wireless systems of access to information resources are described. The main advantages and disadvantages of various communication channels are considered and their general characteristics are given. The mathematical apparatus necessary to describe the pulsating nature of traffic in real transmission channels is presented. The mathematical calculations used to determine the probability density functions are given. Models of channels with memory, characterized by error packetization under conditions of frequency-selective fading and multipath signal propagation, are considered.

    Keywords: models of communication channels, discrete channels without memory, channels with erasure, asymmetric channels without memory, channels with memory.
    Statement of the problem

    To describe information transmission channels, it is customary to use mathematical models that take into account the characteristics of radio wave propagation in the environment. Among these features, we can, for example, note the presence of frequency-selective fading, leading to the phenomenon of intersymbol interference (ISI). These phenomena significantly affect the quality of received information, since in some cases they lead to packetization of single errors. Many models of memory communication channels have been developed to describe packetization processes. The article describes the main models that have various characteristics, described using polygeometric distributions of the lengths of error-free intervals and error bursts.

    Communication channels are called discrete in time if their input and output signals are available for observation and further processing only at strictly fixed points in time. To define a model of a discrete communication channel, it is enough to describe the random processes occurring in it and to know the error probabilities. For this, one needs the input alphabet A and the output alphabet of transmitted symbols, and a set of transition probabilities p(b | a) must be specified, which depends on the following quantities:
    – a = (a_1, ..., a_n), a random sequence of symbols of the input alphabet, where a_i is the symbol at the channel input at the i-th moment of time;
    – b = (b_1, ..., b_n), the sequence of received symbols taken from the output alphabet, where b_i is the symbol at the channel output at the i-th moment.

    Mathematically, the probability p(b | a) can be defined as the conditional probability of receiving the sequence b provided that the sequence a was transmitted. The number of transition probabilities grows rapidly with the length of the input and output sequences: for a binary code and sequences of length n it equals 2^n · 2^n = 2^(2n). Below, mathematical models of discrete channels with errors are described; with their help, the transition probabilities p(b | a) can be determined quite simply for a given sequence length n.


    Discrete channel without memory

    This type of channel is characterized by the fact that the probability of a symbol appearing at its output is determined only by the symbol at its input at the same moment of time. This statement is true for all pairs of symbols transmitted through the data channel. The most striking example of a memoryless channel is the binary symmetric channel (BSC). The principle of its operation can be described by the graph shown in Fig. 1.

    An arbitrary symbol a_i of the sequence a is supplied to the channel input. On the receiving side it is reproduced correctly with a constant probability q = 1 − p, or incorrectly with error probability p.

    The transition diagram of the binary symmetric channel (BSC) is shown in Fig. 1.

    Fig. 1. Discrete channel without memory
    For the BSC, one can easily determine the probability of receiving any sequence of symbols at the output, provided that some input sequence of fixed length is given. Let us assume that this sequence has length 3.

    For ease of analysis, let us represent the BSC as a channel to which an error generator is connected. The generator produces a random error sequence e = (e_1, ..., e_n). Each of its symbols e_i is added modulo 2 to the symbol transmitted over the binary channel; the addition is performed position by position. Thus, if an error symbol e_i has the value one, the transmitted symbol is changed to the opposite, and the received sequence contains an error in that position.

    The transition probabilities describing a stationary symmetric channel have the form p(b_i | a_i) = 1 − p for b_i = a_i and p(b_i | a_i) = p otherwise.

    From the above it can be seen that the channel can be completely described by the statistics of the error sequence e, where e_i ∈ {0, 1}. Such a sequence of length n is usually called the error vector. The components of this vector take the value one only at positions corresponding to incorrectly received symbols. The number of ones in a vector determines its weight.
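The error-generator view of the BSC just described can be illustrated with a short simulation. The function name and interface are illustrative, not from the source; the mechanics (an error vector added modulo 2, its weight equal to the number of flipped symbols) follow the text.

```python
import random

def bsc_transmit(bits, p, rng=random.Random(1)):
    """Model a binary symmetric channel as modulo-2 addition of an error vector."""
    # Error generator: each position is in error independently with probability p
    errors = [1 if rng.random() < p else 0 for _ in bits]
    # Position-by-position addition modulo 2 (b XOR e)
    received = [(b + e) % 2 for b, e in zip(bits, errors)]
    return received, errors

sent = [0, 1, 1, 0, 1, 0, 0, 1]
received, errors = bsc_transmit(sent, p=0.25)
# The weight of the error vector equals the number of flipped symbols
assert sum(errors) == sum(s != r for s, r in zip(sent, received))
print(received)
```

With p = 0 the channel is error-free and with p = 1 every symbol is inverted, which makes the modulo-2 structure easy to check.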


    Symmetric channel without memory with erasure

    This type of channel is in many ways similar to the memoryless channel, except that the output alphabet contains an additional, (m+1)-th symbol "?". This symbol is used when the detector is not able to reliably recognize the transmitted symbol a_i. The probability of such an event, p_c, is always fixed and does not depend on the transmitted information. The transition probability graph for this model is shown in Fig. 2.

    Fig. 2. Symmetric channel without memory with erasure
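A minimal sketch of the erasure channel described above, assuming a binary input alphabet and a fixed erasure probability p_c that is independent of the data; the function name and interface are ours.

```python
import random

def bec_transmit(bits, p_erase, rng=None):
    """Binary erasure channel: each symbol is replaced by the extra output
    symbol '?' with fixed probability p_erase, independent of the data."""
    rng = rng or random.Random(0)
    return ['?' if rng.random() < p_erase else b for b in bits]

print(bec_transmit([0, 1, 1, 0, 1], p_erase=0.3))
```

Note that erased positions are known to the receiver (it sees "?"), unlike BSC errors, which is why erasures are easier for a decoder to handle.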
    Asymmetric channel without memory

    This communication channel is characterized by the fact that there is no relationship between the probabilities of error occurrence, but the probabilities themselves are determined by the symbols transmitted at the current moment of time; for a binary channel, the probability of error when a 0 is transmitted differs from that when a 1 is transmitted. The transition probabilities describing this model are shown in Fig. 3.


    Fig. 3. Asymmetric channel without memory
    Discrete channel with memory

    This channel can be described by the dependence between the symbols of the input and output sequences. Each received character depends on both the corresponding transmitted one and the previous input and output bits. Most of the actually functioning communication systems contain just such channels. The most significant reason for the presence of memory in a channel is intersymbol interference, which occurs due to limitations imposed on the bandwidth of the communication channel. Each output symbol has a dependency on several consecutive input symbols. The type of this dependence is determined by the impulse response of the communication channel.

    The second, no less important, reason for the “memory” effect is pauses in data transmission to the channel. The duration of such pauses can significantly exceed the duration of one data bit. During a break in transmission, the likelihood of incorrectly receiving information increases sharply, resulting in groups of errors called packets.

    For this reason, many researchers recommend using the concept of "channel state". Each symbol of the received sequence then statistically depends both on the input symbols and on the state of the channel at the current time. The term "channel state" usually refers to the sequence of input and output symbols up to a given point in time; the state of the channel is also strongly influenced by intersymbol interference. Memory in communication channels is divided into two types: input memory and output memory. If there is a dependence between the output symbol b_i and the preceding input symbols a_{i-1}, a_{i-2}, ..., then such a channel has input memory. It can be described by transition probabilities of the form p(b_i | a_i, a_{i-1}, ...). From the point of view of mathematical analysis, the channel memory is infinite; in practice, the number of symbols influencing the probability of correct or incorrect reception of information is finite.

    Channel memory is defined as the number of symbols N starting from which the equality of conditional probabilities (4) holds for all i.

    The sequence of input symbols a_{i-1}, ..., a_{i-N} can be regarded as the state of the channel at the (i−1)-th moment. In this case, the channel can be characterized by a set of transition probabilities conditioned on that state.

    If the received data bit depends on the previous output symbols, the communication channel is usually called a channel with output memory. Its transition probabilities can be represented in a similar form, where the output symbols b_{i-1}, ..., b_{i-N} determine the channel state at the (i−1)-th moment.

    Using transition probabilities to describe channels with memory is very inefficient because of the cumbersome mathematical calculations involved. For example, for a channel with intersymbol interference whose memory is limited to five symbols, the number of possible channel states is 2^5 = 32.

    If the memory of a binary channel over the input only, or over the output only, is limited to N symbols, then the number of states is 2^N; that is, it grows exponentially with the memory length N. In practice, one most often deals with channels whose memory amounts to tens, hundreds, or even thousands of symbols.


    Discrete-continuous channel

    Let us consider a discrete-continuous channel with independent symbols a_i at its input and a continuous signal z(t) at its output. To describe it, we use the transition (conditional) densities w(z | a_i) of the received realization z(t) given that the symbol a_i was transmitted, as well as the a priori probabilities p(a_i) of the transmitted symbols. The transition densities are also commonly called likelihood functions. Alternatively, a discrete-continuous channel can be described by the posterior probabilities p(a_i | z) of symbol transmission given the received waveform z(t). Using Bayes' formula we get

    p(a_i | z) = p(a_i) · w(z | a_i) / w(z),   (6)

    where the density of the received waveform is defined as

    w(z) = Σ_i p(a_i) · w(z | a_i).   (7)

    The continuous-discrete channel is described similarly.


    Discrete channel with memory characterized by correlated fading

    Fading occurs when the amplitude or phase of a signal transmitted through a channel changes randomly. It is clear that fading leads to a significant deterioration in the quality of received information. One of the most significant causes of fading is considered to be multipath propagation of signals.

    Here E and T denote the signal energy and duration, and l_k are integers, l_k > 1 (9).

    On the receiving side, a random process y(t) will be observed. The corresponding expression uses the following parameters: µ, the randomly chosen channel transmission coefficient; a random phase shift; and n(t), white Gaussian noise (AWGN) whose power spectral density equals N_0/2.

    If a certain sequence a is transmitted, the output signal of the coherent demodulator is fed to the input of the decoder. The resulting sequence can be represented as a vector whose components are calculated using expressions (11) and (12), where

    µ_c and µ_s are quadrature components which together give the channel transmission coefficient,

    the remaining random variables are associated with the influence of the white Gaussian noise,

    and the last parameter is the signal-to-noise ratio.

    These expressions are valid only when one of the two symbols is transmitted. If the other symbol is transmitted, the right-hand sides of equalities (11) and (12) change places. The random variables obey a Gaussian distribution with the parameters given in (15).

    Analyzing these expressions, we can conclude that the channel transmission coefficient µ obeys a Rayleigh distribution.

    A fading channel is characterized by the presence of memory between the elements of a sequence of symbols. This memory depends on the nature of the correlation between the members of the sequences µ_c and µ_s.

    Let us assume that relation (18) holds.

    In that case, µ_c and µ_s form independent Markov sequences, and the probability density function w(µ) of the sequence µ for N > 1 is given by expressions (20) and (21).

    In these expressions, the function denoted is a Bessel function of the first kind of zero order. The first parameter equals the average signal-to-noise ratio for the Rayleigh channel. The parameter r characterizes the time dependence of the random channel transmission coefficients and typically lies in the range 0.99-0.999.

    Knowing all of the above parameters, we can determine the conditional probability density function of the received vector given the transmitted symbol; taking into account the above equations, its analytical expression is given by (23).

    Thus, the conditional probability density functions are products of probability density functions corresponding to central and non-central χ²-distributions with two degrees of freedom.

    Gilbert model

    Unfortunately, none of the channel models described above is able to capture the bursty nature of real transmission channels. Gilbert therefore proposed the following model of a channel with errors. The probability of an error in the current state depends on the state the channel was in at the previous moment of time; that is, a correlation between two successive events is implied. In this way, both the memory of the channel and its bursty nature are captured. The Gilbert model is essentially a first-order Markov model with two states, "good" and "bad". If no errors occur in the received data, the channel is in the "good" state; in the "bad" state, the error probability takes some value greater than 0. Fig. 4 shows the Gilbert model.

    Fig. 4. Schematic illustration of the Gilbert model

    Fig. 5. Schematic illustration of the Gilbert-Elliott model
    The probability that the channel is in the "bad" state is given by

    (24),

    and from it the total probability of error follows.

    The Gilbert model is a renewal model, which means that the lengths of error bursts and the lengths of error-free intervals do not depend on previous bursts and error-free intervals. It is a so-called hidden Markov model (HMM): the current state of the model ("good" or "bad") cannot be determined until the model output is observed. In addition, the model parameters (p, q, P(1|B)) cannot be obtained directly during simulation; they can only be estimated indirectly, for example by curve fitting, as proposed in Gilbert's work.

    Because its parameters can be estimated directly, a simplified version of the Gilbert model has most often been used, in which the error probability in the "bad" state is always equal to 1. This model can be represented as a first-order Markov chain with two states. The two parameters of the simplified Gilbert model, p and q, can be calculated directly from measured error traces, using the average length of error bursts

    (26)

    and the average length of the error-free intervals, or the total probability of error.
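The simplified Gilbert model with P(1|B) = 1 can be sketched as a two-state Markov chain. The parameter names p and q follow the text; everything else (function name, interface, the chosen stationary-probability check) is illustrative.

```python
import random

def gilbert_errors(n, p, q, rng=random.Random(42)):
    """Simplified Gilbert model: two-state Markov chain. In the 'good' state
    no errors occur; in the 'bad' state every symbol is in error (P(1|B) = 1).
    p = P(good -> bad), q = P(bad -> good)."""
    state_bad = False
    out = []
    for _ in range(n):
        # Transition first, then emit: 1 in the bad state, 0 in the good state
        state_bad = (rng.random() < p) if not state_bad else (rng.random() >= q)
        out.append(1 if state_bad else 0)
    return out

trace = gilbert_errors(10000, p=0.01, q=0.3)
# The stationary probability of the bad state is p / (p + q), so the
# measured error rate should be close to it.
print(sum(trace) / len(trace), 0.01 / 0.31)
```

In this parameterization the mean burst length is 1/q and the mean error-free gap is 1/p, which is exactly what makes the two parameters directly measurable from error traces.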

    An improvement of Gilbert's model was first described in Elliott's work: there, errors can also occur in the "good" state, as shown in Fig. 5.

    This model, also known as the Gilbert-Elliott channel (GEC), overcomes the Gilbert model's restriction to geometric distributions of error-burst lengths. Besides corresponding to the HMM formulation, it is non-renewal: the lengths of error bursts need not be statistically independent of the lengths of the gaps. This opens new possibilities for modeling a radio channel, but also complicates the parameter estimation procedure. The parameters of the non-renewal HMM and of the GEC model can be estimated using the Baum-Welch algorithm.

    Fig. 6. Partitioned Markov chains
    In the 1960s, the researchers Berger, Mandelbrot, Sussman, and Elliott proposed using renewal processes to model the error characteristics of communication channels. For this purpose, Berger and Mandelbrot used an independent Pareto distribution for the intervals between successive errors.

    Fig. 7. Partitioned Markov chains with two error-free and three error states

    Further improvements to Gilbert's model were published by Fritchman (1967), who proposed partitioning the Markov chain into several groups of error-free and error states (Fig. 6). A restriction was introduced on the transitions allowed between the error states and the error-free states. The parameters of this model can be refined by selective approximation of the polygeometric distributions of the gap lengths and the error-burst lengths. The polygeometric distribution is computed under the following restrictions:

    0 < µ_i < 1 and 0 < λ_i < 1.

    The parameters µ_i and λ_i correspond to the probability of transition to a new state and the probability of transition within a state; K is the number of error-free states, and N is the total number of states.

    The configuration of this model is shown in Fig. 7. It includes two error-free states and three error states. However, there is still a statistical relationship between the current gap and the previous error burst, as well as between the current gap (error burst) and the previous gap (error burst); for a full description of the model, these dependencies also need to be considered. There is, however, a limitation associated with maintaining fixed proportions between the probabilities of transition from one state to another, and with it the model becomes a renewal model. For example, for the 2/3 model configuration, the relationships between the probabilities are p_13 : p_14 : p_15 = p_23 : p_24 : p_25 and p_31 : p_32 = p_41 : p_42 = p_51 : p_52. Thus, the Fritchman model shown in Fig. 8 is a special case of a partitioned Markov chain; the figure shows only one of its error states. The distribution of intervals between errors in this configuration uniquely characterizes the model, and its parameters can be found by approximating the corresponding curve. Each state of the Fritchman model represents a memoryless error model, and therefore the Fritchman model is limited to polygeometric distributions of the lengths of gaps and bursts of errors.

    Fig. 8. Fritchman model
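The polygeometric distribution itself is not reproduced in the text. A commonly used form is a finite mixture of geometric distributions; the sketch below assumes that form, with µ_i as mixture weights and λ_i as per-state "stay" probabilities, which matches the parameter description above but is our reconstruction, not the source's formula.

```python
def polygeometric_pmf(l, mu, lam):
    """Assumed polygeometric form: a mixture of geometric distributions,
    P(L = l) = sum_i mu_i * lam_i**(l - 1) * (1 - lam_i), for l = 1, 2, ...
    mu_i are mixture weights (summing to 1), 0 < lam_i < 1."""
    assert abs(sum(mu) - 1.0) < 1e-9
    return sum(m * (la ** (l - 1)) * (1 - la) for m, la in zip(mu, lam))

# Two-component example: short gaps (lam = 0.2) and long gaps (lam = 0.95)
mu, lam = [0.7, 0.3], [0.2, 0.95]
total = sum(polygeometric_pmf(l, mu, lam) for l in range(1, 2000))
print(round(total, 6))  # close to 1 - a proper probability distribution
```

A single component (K = 1) reduces to the plain geometric distribution, which is why a Fritchman chain with one error-free state cannot reproduce heavy-tailed gap lengths while a mixture can.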

    The article has examined the main models of communication channels used to transmit discrete information and to provide access to shared information resources. For most models, the corresponding mathematical relationships are given, and from their analysis conclusions are drawn about the main advantages and limitations of these models. The work has shown that the models under consideration differ significantly in their error characteristics.
    Literature


    1. Adoul J-P.A., Fritchman B.D., Kanal L.N. A critical statistic for channels with memory // IEEE Trans. on Information Theory. 1972. No. 18.

    2. Aldridge R.P., Ghanbari M. Bursty error model for digital transmission channels // IEEE Letters. 1995. No. 31.

    3. Murthy D.N.P., Xie M., Jiang R. Weibull Models. John Wiley & Sons Ltd., 2007.

    4. Pimentel C., Blake F. Modeling Burst Channels Using Partitioned Fritchman's Markov Models // IEEE Trans. on Vehicular Technology. 1998. No. 47.

    5. McDougall J., Yi Y., Miller S. A Statistical Approach to Developing Channel Models for Network Simulations // Proceedings of the IEEE Wireless Communication and Networking Conference. 2004. Vol. 3. P. 1660-1665.