• Efficient coding of information transmitted over communication channels. Structure of the information transmission channel. Methods of presenting information

    Coding is the transformation of a message into a form (another message or a signal) suitable for transmission.

    Before conveying an idea, the sender must encode it using symbols (words, intonation, gestures). This encoding turns the idea into a message. The sender must also select a channel compatible with the type of symbols used for encoding. Widely known channels include speech and written documents, as well as electronic means of communication: computer networks, e-mail, video recordings and video conferencing. If the channel is not suited to the physical embodiment of the chosen symbols, transmission is impossible. The exchange of information will also be ineffective if the channel does not match the idea being conveyed. It is desirable that the choice of means for transmitting a message not be limited to a single channel. This does complicate the process, since the sender has to establish the sequence in which these means are used and the time intervals between them; nevertheless, conveying information by both oral and written means is usually more effective than, say, written communication alone.

    TRANSMISSION

    In the third stage, the sender uses a channel to deliver a message (an encoded idea or set of ideas) to the recipient. As soon as the transmission of a message or signal has begun, the communication process goes beyond the control of the medium or person who sent it. Once sent, a message cannot be returned.

    From the moment the information is transmitted, the sending stage ends and the stage of receiving the transmitted information and understanding its meaning begins. The channel transmits a message to the receiver. If the message's carriers (code signs) or forms change in the channel, then the reception is considered unsuccessful. The person to whom the message was addressed is called the recipient. This is another key role performed by the participant in interpersonal communication in order for the process to take place. The role of the recipient is not only to record the receipt of the message, but also to decode this message into a meaning that is understandable and acceptable to him.

    DECODING

    Decoding is the translation of the sender's symbols into the recipient's thoughts. It includes the recipient's perception (the fact of receipt) of the message, its interpretation (how it was understood) and its evaluation (how it was accepted). If the symbols chosen by the sender have exactly the same meaning for the recipient, the recipient will know precisely what the sender had in mind when the idea was formulated.

    However, there are reasons why the recipient may give the message a slightly different meaning than the one intended by the sender.

    Elements of the communication process

    There are 4 basic elements:

    · Sender, the person collecting and transmitting information.

    · Message, information encoded using symbols.

    · Channel, means of transmitting information.

    · Recipient, the person to whom the information is addressed and who interprets it.

    The sender and recipient go through several interconnected stages when exchanging information. Their task is to compose a message and transmit it through communication channels in such a way that both parties understand and share the original idea. This is far from easy, since each stage is also a point at which the meaning can be distorted or completely lost.

    There are four common methods for developing cost estimates for any activity in the promotion mix, advertising for example:

    1) Calculation “from available funds,” that is, spending as much as the enterprise’s budget allows (in the view of the chief accountant).

    2) Calculation “as a percentage of sales” or of the product’s selling price (for example, 2% of the sales amount).

    3) Competitive parity method, when a company sets its budget level at the level of its competitors' budgets.

    4) Calculation “based on goals and objectives.” This method requires that promotion budgets be built on: defining specific goals; identifying the tasks that must be solved to achieve these goals; estimating the costs of solving those tasks.

    The sum of all these costs gives an approximate figure for the promotion budget.

    The advantage of this method is that it is based on the relationship between the amount spent, the level of advertising exposure, the intensity of product trial and the regularity of product use.

    The choice of particular means (elements) of the promotion mix is influenced by many factors:

    1. Nature of incentives:

    b) ability to persuade (multiple repetition);

    c) expressiveness - catchiness (although it is precisely this that can distract

    2. Personal selling has three characteristics:

    • personal nature, that is, live communication;
    • the formation of relationships from formal to friendship;
    • encouragement to respond.

    Personal selling is the most expensive means of influence.

    3. Sales promotion is an activity that employs a targeted set of means of influence – coupons, contests, bonuses and the like.

    These products have three characteristic qualities:

    • attractiveness and information content;
    • incentive to make purchases;
    • invitation to make purchases.

    The company resorts to sales promotion means to achieve a stronger and faster reaction from the buyer (events are short-term in nature).

    4. Propaganda (publicity) is built on:

    • credibility;
    • wide coverage of buyers;
    • showiness.

    5. Public relations aims to maintain a reputation for reliability and commitment to all participants in the company's activities.

    Information is transferred from a source to a recipient (receiver) of information. The source of information can be anything: any object or phenomenon of living or inanimate nature. The transfer takes place in some material medium separating the source and the recipient, which is called the information transmission channel. Information is transmitted through the channel as a certain sequence of signals, symbols or signs, which is called a message. The recipient of information is an object that receives the message and, as a result, undergoes certain changes in its state. All of the above is shown schematically in the figure.

    Transfer of information

    A person receives information from everything around him through the senses: hearing, sight, smell, touch and taste. The greatest amount of information comes through hearing and sight. Sound messages – acoustic signals in a continuous medium (most often air) – are perceived by hearing. Sight perceives light signals that carry images of objects.

    Not every message is informative for a person. For example, a message in an unknown language, although transmitted to a person, does not contain information for him and cannot cause adequate changes in his condition.

    An information channel can either be of a natural nature (atmospheric air through which sound waves are transmitted, sunlight reflected from observed objects) or be artificially created. In the latter case we are talking about technical means of communication.

    Technical information transmission systems

    The first technical means of transmitting information over a distance was the telegraph, invented in 1837 by the American Samuel Morse. In 1876 the American A. Bell invented the telephone. Based on the discovery of electromagnetic waves by the German physicist Heinrich Hertz (1886), radio was invented by A. S. Popov in Russia in 1895 and, almost simultaneously, by G. Marconi in Italy in 1896. Television and the Internet appeared in the twentieth century.

    All of the listed technical means of information communication are based on transmitting a physical (electrical or electromagnetic) signal over a distance and obey certain general laws. These laws are studied by communication theory, which originated in the 1920s. The mathematical apparatus of communication theory – the mathematical theory of communication – was developed by the American scientist Claude Shannon.

    Claude Elwood Shannon (1916–2001), USA

    Claude Shannon proposed a model of the process of transmitting information through technical communication channels, represented by a diagram.

    Technical information transmission system

    Coding here means any transformation of the information coming from a source into a form suitable for transmission over a communication channel. Decoding is the reverse transformation of the received signal sequence.

    The operation of such a scheme can be explained using the familiar process of talking on the phone. The source of information is the person speaking. The encoding device is the microphone of the telephone handset, with the help of which sound waves (speech) are converted into electrical signals. The communication channel is the telephone network (wires, switches of telephone nodes through which the signal passes). The decoding device is the handset (earphone) of the listening person - the receiver of information. Here the incoming electrical signal is converted into sound.

    Modern computer information transmission systems - computer networks - work on the same principle. There is an encoding process that converts binary computer code into a physical signal of the type that is transmitted over a communication channel. Decoding involves converting the transmitted signal back into computer code. For example, when using telephone lines in computer networks, encoding-decoding functions are performed by a device called a modem.
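    To make the scheme concrete, here is a minimal sketch in Python of the source → encoder → channel → decoder → receiver chain. The text-to-bits encoding used is purely illustrative and does not reproduce the signalling of any real modem.

```python
# A minimal sketch of the source -> encoder -> channel -> decoder -> receiver chain.
# The encoding used here (UTF-8 text to a string of bits) is purely illustrative.

def encode(message: str) -> str:
    """Source message -> sequence of code symbols (bits)."""
    return ''.join(f'{byte:08b}' for byte in message.encode('utf-8'))

def channel(bits: str) -> str:
    """An ideal (noise-free) channel simply delivers the signal unchanged."""
    return bits

def decode(bits: str) -> str:
    """Sequence of bits -> original message."""
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return data.decode('utf-8')

message = "hello"
received = decode(channel(encode(message)))
assert received == message  # with no noise, the message is recovered exactly
```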

    Channel capacity and information transmission speed

    Developers of technical information transmission systems have to solve two interrelated problems: how to ensure the highest speed of information transfer and how to reduce information loss during transmission. Claude Shannon was the first scientist to take on these problems and create a new science for that time - information theory.

    K. Shannon devised a way to measure the amount of information transmitted over communication channels. He introduced the concept of channel capacity as the maximum possible speed of information transfer. This speed is measured in bits per second (and also in kilobits and megabits per second).

    The capacity of a communication channel depends on its technical implementation. For example, computer networks use the following means of communication:

    • telephone lines,
    • electrical (cable) communication,
    • fiber-optic cable communication,
    • radio communication.

    The capacity of telephone lines is tens to hundreds of kbit/s; the capacity of fiber-optic lines and radio links is measured in tens and hundreds of Mbit/s.
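    To get a feel for these figures, a rough back-of-the-envelope estimate of transfer time (the file size and line speeds below are arbitrary illustrative values):

```python
# Rough transfer-time estimate for an illustrative 5-megabyte file
# over channels of different capacity (values chosen only as examples).
file_size_bits = 5 * 8 * 10**6          # 5 MB expressed in bits (decimal megabytes)

for name, capacity_bps in [("telephone line, 56 kbit/s", 56_000),
                           ("fiber-optic line, 100 Mbit/s", 100_000_000)]:
    seconds = file_size_bits / capacity_bps
    print(f"{name}: about {seconds:.1f} s")
# telephone line: ~714 s (about 12 minutes); fiber-optic line: ~0.4 s
```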

    Noise, noise protection

    The term “noise” refers to various kinds of interference that distort the transmitted signal and lead to loss of information. Such interference arises primarily for technical reasons: poor-quality communication lines, inadequate isolation from one another of different information streams transmitted over the same channels. Sometimes during a telephone conversation we hear noise and crackling that make it hard to understand the other person, or our conversation is overlaid by the conversation of complete strangers.

    The presence of noise leads to loss of transmitted information. In such cases, noise protection is necessary.

    First of all, technical methods are used to protect communication channels from noise: for example, using shielded cable instead of bare wire, or using filters of various kinds that separate the useful signal from the noise.

    Claude Shannon developed a coding theory that provides methods for combating noise. One of the important ideas of this theory is that the code transmitted over the communication line must be redundant; thanks to this, the loss of some part of the information during transmission can be compensated. For example, if you can barely be heard when talking on the phone, repeating each word twice gives you a better chance of being understood correctly.

    However, the redundancy should not be too large. This will lead to delays and higher communication costs. Coding theory allows you to obtain a code that is optimal. In this case, the redundancy of the transmitted information will be the minimum possible, and the reliability of the received information will be maximum.
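    A toy illustration of redundancy as noise protection is a repetition code, in the spirit of the word-repeating example above: each bit is sent several times and the receiver takes a majority vote. This sketch only demonstrates the idea; practical systems use far more efficient error-correcting codes.

```python
import random
from collections import Counter

def encode_repetition(bits, n=3):
    """Repeat every bit n times (redundant code)."""
    return [b for b in bits for _ in range(n)]

def noisy_channel(bits, flip_prob=0.1):
    """Flip each transmitted bit with probability flip_prob."""
    return [b ^ 1 if random.random() < flip_prob else b for b in bits]

def decode_repetition(bits, n=3):
    """Majority vote over each group of n received bits."""
    return [Counter(bits[i:i + n]).most_common(1)[0][0]
            for i in range(0, len(bits), n)]

message = [1, 0, 1, 1, 0, 0, 1, 0]
received = decode_repetition(noisy_channel(encode_repetition(message)))
print(message, received)  # most of the time the message survives the noise
```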

    In modern digital communication systems the following technique is often used to combat loss of information during transmission. The entire message is divided into portions – packets. For each packet a checksum (a sum of binary digits) is calculated and transmitted together with the packet. At the receiving end, the checksum of the received packet is recalculated; if it does not match the transmitted one, transmission of this packet is repeated, and this continues until the checksums at the source and the destination coincide.
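    A minimal sketch of this packet-and-checksum technique; the checksum here is a simple sum of bytes, whereas real protocols use CRCs or stronger checks and limit the number of retransmissions:

```python
import random

def checksum(packet: bytes) -> int:
    """A toy checksum: sum of all bytes modulo 256 (real protocols use CRCs)."""
    return sum(packet) % 256

def unreliable_send(packet: bytes, corrupt_prob=0.3) -> bytes:
    """Simulate a channel that occasionally corrupts one byte of the packet."""
    data = bytearray(packet)
    if data and random.random() < corrupt_prob:
        data[random.randrange(len(data))] ^= 0xFF
    return bytes(data)

def transmit(message: bytes, packet_size=4) -> bytes:
    """Split the message into packets; retransmit each until checksums match."""
    received = b''
    for i in range(0, len(message), packet_size):
        packet = message[i:i + packet_size]
        expected = checksum(packet)            # sent along with the packet
        while True:
            candidate = unreliable_send(packet)
            if checksum(candidate) == expected:  # receiver recomputes and compares
                received += candidate
                break                            # mismatch -> packet is resent
    return received

assert transmit(b"some binary message") == b"some binary message"
```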

    When considering the transfer of information in propaedeutic and basic computer science courses, first of all, this topic should be discussed from the position of a person as a recipient of information. The ability to obtain information from the surrounding world is the most important condition for human existence. Human sense organs are information channels of the human body that communicate between a person and the external environment. Based on this criterion, information is divided into visual, auditory, olfactory, tactile, and gustatory. The rationale for the fact that taste, smell and touch provide information to a person is as follows: we remember the smells of familiar objects, the taste of familiar food, and we recognize familiar objects by touch. And the contents of our memory are stored information.

    Students should be told that in the animal world the informational role of the senses differs from that of humans. The sense of smell performs an important information function for animals. The heightened sense of smell of service dogs is used by law enforcement agencies to search for criminals, detect drugs, etc. The visual and auditory perception of animals differs from that of humans. For example, it is known that bats hear ultrasound, and cats see in the dark (from a human point of view).

    Within the framework of this topic, students should be able to give specific examples of the process of transmitting information, determine for these examples the source, receiver of information, and the channels used for transmitting information.

    When studying computer science in high school, students should be introduced to the basic principles of technical communication theory: the concepts of encoding, decoding, information transmission speed, channel capacity, noise, noise protection. These issues can be considered within the framework of the topic “Technical means of computer networks”.

    The process of transmission of genetic information is described by the so-called central dogma of molecular biology: DNA → RNA → protein. According to modern concepts, the path from gene to protein is very complex and consists of several independent stages.

    At the first stage, the nucleotide sequence of a gene is “rewritten” through the synthesis of a complementary RNA molecule (transcription). Transcription is directed by the enzyme RNA polymerase and leads to the formation of primary RNA transcript (pre-RNA) molecules in the cell nucleus. The pre-RNA molecule is an exact copy of the DNA template of the transcribed gene. The synthesized pre-RNA then undergoes a maturation stage (processing), which includes both modification of the terminal sections of the chain (helping to stabilize the molecule) and removal of the non-coding intron regions from the primary RNA transcript.

    The process of “cutting out” introns, called splicing, is the most important step in the maturation of pre-RNA; it leaves in the RNA only the sense regions, complementary to the exons of the gene, joined to one another in sequence. A key signalling role in splicing is played by certain nucleotide sequences flanking each exon (the so-called splice sites); when mutations occur at splice sites, the fine mechanisms for removing introns from the pre-RNA may be disrupted and, as a result, a peptide with an abnormal structure may be synthesized. The mature RNA formed after excision of the introns is called messenger or template RNA (mRNA); an mRNA is many times shorter than the transcribed gene itself and its primary RNA transcript.

    The next stage of the transfer of genetic information occurs in the cytoplasm. It involves the assembly of protein molecules on ribosomes using the mRNA as a template (the process of translation). Amino acids are delivered to the ribosomes by a special class of molecules – transfer RNAs (tRNAs). Each tRNA is responsible for transporting one strictly defined amino acid, and this specificity is determined by the presence in the tRNA of a unique three-nucleotide sequence called the anticodon. As the ribosome moves along the mRNA molecule, the anticodons of the various tRNAs carrying “their” amino acids are successively recognized by the complementary mRNA codons. As a result, the “required” amino acids are added one after another to the growing polypeptide chain. Translation is initiated by the AUG triplet, which encodes the amino acid methionine.

    Thus, the methionine codon in the mRNA opens the reading frame for the genetic information; as noted, this reading proceeds according to the “one triplet – one amino acid” rule. The signal for the end of translation is one of three special codons (UAA, UAG or UGA), called stop codons (nonsense codons); recognition of a stop codon by the ribosome halts synthesis of the polypeptide chain.
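    To make the “one triplet – one amino acid” reading concrete, the following sketch scans an mRNA string from the AUG start codon and reads triplets until a stop codon is met. The codon table is deliberately truncated to a few entries for illustration only.

```python
# Reading an mRNA sequence triplet by triplet, starting at AUG and
# stopping at a stop codon. The codon table is intentionally incomplete.
CODON_TABLE = {
    'AUG': 'Met', 'UUU': 'Phe', 'GGC': 'Gly', 'GCA': 'Ala', 'AAA': 'Lys',
}
STOP_CODONS = {'UAA', 'UAG', 'UGA'}

def translate(mrna: str) -> list[str]:
    start = mrna.find('AUG')            # translation is initiated by the AUG triplet
    peptide = []
    for i in range(start, len(mrna) - 2, 3):
        codon = mrna[i:i + 3]
        if codon in STOP_CODONS:        # a stop codon terminates synthesis
            break
        peptide.append(CODON_TABLE.get(codon, '?'))
    return peptide

print(translate("GCAUGUUUGGCAAAUAAGC"))  # ['Met', 'Phe', 'Gly', 'Lys']
```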

    Upon completion of translation, the primary polypeptide molecule undergoes certain post-translational modifications, turning into a functionally mature product. “Maturation” of the protein occurs, as a rule, in the corresponding organelles of the cell.

    Efficient coding solves the problem of recording the messages generated by a source more compactly by recoding them. It is used in practically all archivers such as Rar, Zip and the like. The peculiarity of these archivers is that they compress information by a relatively small factor (2–3, at most 4 times), but the compressed information is restored completely, “bit for bit.” If bit-for-bit restoration is not required, other recoding methods are used that achieve compression by factors of tens. They are based on studying the patterns by which the source creates its messages, the properties of the source itself, and how much of the original information the consumer actually needs. For example, when transmitting speech it is not necessary to transmit it “bit for bit”: distortions can be allowed that the recipient of the voice message simply will not notice, because human hearing is insensitive to these changes. At the same time the intelligibility of the speech, the recognizability of the voice and its emotional colouring are preserved; a partial loss of these qualities allows the compression to be increased further. Let us emphasize once again that efficient coding proper is compression with restoration of the information “bit for bit.”
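    A small sketch of such lossless (“bit for bit”) compression using Python's standard zlib module; the compression ratio achieved depends entirely on how redundant the input data are.

```python
import zlib

# Highly redundant data compresses well; decompression restores it bit for bit.
original = b"ABABABABAB" * 1000
compressed = zlib.compress(original, level=9)

print(len(original), "->", len(compressed), "bytes")
assert zlib.decompress(compressed) == original   # exact, lossless recovery
```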

    General definition of coding and code. Coding tasks

    Coding - in the broadest sense of the word - is the representation of messages in a form convenient for transmission over a given channel.

    The inverse operation of encoding is called decoding.

    Let us return again to the consideration of the general scheme of the information transmission system.

    Fig. 3.1. General diagram of the information transmission system

    A message X at the output of the information source (IS) must be matched with a certain signal. Since the number of possible messages tends to infinity as the time of operation grows without limit, it is clearly almost impossible to create a separate signal for every message.

    Since discrete messages are composed of letters, and continuous messages can be represented by a sequence of numbers at each sampling instant, it is in principle possible to make do with a finite number of sample signals corresponding to the individual letters of the source alphabet.

    When the alphabet is large, they resort to representing letters in another alphabet with a smaller number of letters, which we will call symbols.

    To denote this operation, the same term is used - coding, now understood in a narrow sense.

    Since the alphabet of symbols is smaller than the alphabet of letters, each letter corresponds to a certain sequence of symbols, called a code combination. The number of symbols in a code combination is called its length.
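    For instance, an alphabet of 32 letters can be represented in the binary symbol alphabet {0, 1} by code combinations of length 5, since 2⁵ = 32. A minimal sketch (the particular 32-letter alphabet chosen here is arbitrary):

```python
# Fixed-length coding: each letter of a 32-letter alphabet is assigned
# a code combination of 5 binary symbols (2**5 = 32).
import string

letters = string.ascii_uppercase + " .,:;!"          # an illustrative 32-letter alphabet
assert len(letters) == 32

code = {ch: format(i, '05b') for i, ch in enumerate(letters)}

encoded = ''.join(code[ch] for ch in "HELLO")
print(encoded)            # five symbols per letter, 25 binary symbols in total
```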

    The process of converting letters into symbols can serve several purposes:

    1. The first of them is to convert information into such a symbol system (code) so that it ensures the simplicity and reliability of the hardware implementation of information devices, i.e.:

    • simplicity of equipment for distinguishing individual characters;
    • minimum transfer time;
    • minimum storage capacity during storage;
    • ease of performing arithmetic and logical operations in the adopted system.

    In this case the statistical properties of the message source and of the interference in the communication channel are not taken into account.

    The technical implementation of the encoding process in this simplest form, with a continuous input signal, is carried out by analog-to-digital converters.

    2. The second purpose of coding is, based on Shannon’s theorems, to harmonize the properties of the message source with the properties of the communication channel.

    The so-called source encoder (SC) aims to provide coding in which, by eliminating redundancy, the average number of symbols required per message letter is significantly reduced.

    In the absence of interference this directly yields a gain in transmission time or in storage capacity, i.e. it increases the efficiency of the system. Such coding is called efficient coding.

    In the presence of interference in the channel, efficient coding converts the input information into a sequence of symbols best prepared (maximally compressed) for further transformation.
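    As one classical illustration of efficient coding (the text does not fix a particular algorithm here), a variable-length prefix code can assign short code combinations to frequent letters and long ones to rare letters, reducing the average number of symbols per letter down toward the source entropy:

```python
import math

# A source with four letters and unequal probabilities. A fixed-length code
# needs 2 binary symbols per letter; the prefix code below (one possible
# efficient code for these probabilities) does better on average.
probs = {'a': 0.5, 'b': 0.25, 'c': 0.125, 'd': 0.125}
code  = {'a': '0', 'b': '10', 'c': '110', 'd': '111'}   # a prefix code

avg_len = sum(p * len(code[s]) for s, p in probs.items())
entropy = -sum(p * math.log2(p) for p in probs.values())

print(f"average length = {avg_len} symbols/letter")   # 1.75 instead of 2
print(f"source entropy = {entropy} bits/letter")      # 1.75, the theoretical minimum
```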

    The so-called channel encoder (CC) aims to ensure a given reliability when transmitting or storing information by additionally introducing redundancy, but using simple algorithms and taking into account the statistical patterns of interference in the communication channel. This type of coding is called noise-resistant coding.
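    The simplest example of deliberately introduced redundancy of this kind is a parity bit: one extra symbol per code combination that lets the receiver detect (though not correct) any single error. A minimal sketch of the principle, not of any particular channel code:

```python
def add_parity(bits):
    """Append one redundant bit so that the total number of ones is even."""
    return bits + [sum(bits) % 2]

def check_parity(received):
    """Return True if the received combination passes the parity check."""
    return sum(received) % 2 == 0

word = [1, 0, 1, 1, 0, 1, 0]
sent = add_parity(word)

corrupted = sent.copy()
corrupted[2] ^= 1                   # a single-bit error introduced by noise
print(check_parity(sent))           # True  - accepted
print(check_parity(corrupted))      # False - error detected, retransmission requested
```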

    The advisability of first eliminating the redundancy of the message by efficient coding and then recoding it with a noise-resistant code stems from the fact that the redundancy of the message source is in most cases not matched to the statistical patterns of the interference in the communication channel and therefore cannot be fully used to increase the reliability of the received message, whereas a noise-resistant code suited to that interference can be chosen.

    In addition, message redundancy is often a consequence of very complex probabilistic dependencies and allows errors to be detected and corrected only after decoding the entire message, using highly complex algorithms and intuition.

    So, the choice of encoding and decoding devices depends on the statistical properties of the message source, as well as the level and nature of interference in the communication channel.

    If there is virtually no message source redundancy and no interference in the communication channel, then introducing both a source encoder and a channel encoder is impractical.

    When message source redundancy is high and interference is low, it is advisable to introduce only a source encoder.

    When the source redundancy is small and the interference is large, it is advisable to introduce a channel encoder.

    If there is a lot of redundancy and a high level of interference, it is advisable to introduce both additional encoding and decoding devices.

    After the channel encoder (CC), the encoded signal enters the device that encodes symbols into signals – the modulator M. The signal Y at the output of the modulator is prepared for transmission over a specific communication line.

    The device that decodes signals into symbols – the demodulator (DM) – receives from the communication line a signal distorted by interference, denoted Z in the diagram.

    The noise-resistant-code decoder (channel decoder, DC) and the message decoder (source decoder, DI) deliver the decoded message W to the recipient P (a person or a machine).

    Efficient coding of information when transmitted over communication channels

    1.7. Transmission of information over the channel without interference

    If a sequence of discrete messages of total duration T is transmitted through a communication channel without interference, then the rate of information transmission over the channel is the limit of the ratio

    R = lim(T→∞) I(T)/T,

    where I(T) is the amount of information contained in the sequence of messages of duration T. The limiting value of the information transmission rate is called the capacity of the communication channel:

    C = max R.

    As is known, the amount of information in messages is maximal when all states are equally probable. In that case, for a channel transmitting v symbols per second, each of which can take one of m equally probable states, the capacity is

    C = v log₂ m.

    The rate of information transmission generally depends on the statistical properties of the message and on the parameters of the communication channel. Channel capacity, by contrast, is a characteristic of the communication channel that does not depend on the rate of information transmission. Quantitatively, the capacity of a communication channel is expressed as the maximum number of binary units of information that the channel can transmit in one second.

    For the most efficient use of a communication channel, the information transmission rate should be as close as possible to the channel capacity.

    If the rate at which information arrives at the channel input exceeds the channel capacity, not all of the information can be transmitted through the channel; hence the condition R ≤ C must be satisfied.

    This is the main condition for matching the information source with the communication channel. Matching is achieved by appropriate encoding of the messages. It has been proved that if the rate at which the source generates information is sufficiently close to the channel capacity, i.e. equal to C − ε, where ε is an arbitrarily small positive quantity, a coding method can always be found that ensures transmission of all messages generated by the source at an information transfer rate very close to the channel capacity.

    The converse statement is that it is impossible to ensure long-term transmission of all messages if the flow of information generated by the source exceeds the channel capacity.
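    A small numeric sketch of the matching condition R ≤ C; the symbol rate, source probabilities and channel capacity below are arbitrary illustrative values:

```python
import math

def entropy(probs):
    """Entropy of a discrete source, in bits per symbol."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

symbol_rate = 1000                        # symbols per second (illustrative)
source_probs = [0.5, 0.25, 0.125, 0.125]

R = symbol_rate * entropy(source_probs)   # information rate of the source, bit/s
C = 2000                                  # channel capacity, bit/s (illustrative)

print(f"R = {R} bit/s, C = {C} bit/s, matched: {R <= C}")   # 1750 <= 2000 -> True
```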

    If a message source whose entropy per symbol equals the channel capacity (per symbol) is connected to the channel input, the source is said to be matched with the channel. If the source entropy is less than the channel capacity, which happens when the source states are not equally probable, the source is not matched with the communication channel, i.e. the channel is not used to the full.

    Matching in the statistical sense is achieved by means of so-called statistical coding. To understand the principle of statistical coding, consider two sequences of messages representing, for example, the state signal of a two-position controlled object (on or off) recorded at regular intervals:

    The symbol 1 corresponds to the signal “object on”, the symbol 0 to “object off”. We shall assume that the symbols appear independently of one another.

    In the first sequence the symbols 1 and 0 are equally probable; in the second the probabilities of the two symbols are unequal.

    The entropy of the first sequence is 1 bit per symbol, while the entropy of the second is only half as great; hence the amount of information per symbol in the second sequence is half that in the first.

    When these sequences are transmitted through a binary communication channel, the first sequence is matched with the channel, whereas for the second the capacity of the binary channel per symbol is twice the entropy of the source, i.e. the channel is underloaded and, in the statistical sense, is not matched with the source.

    Statistical coding makes it possible to increase the entropy of the transmitted messages, in the limit up to the value obtained when the symbols of the new sequence are equally probable. In this case the number of symbols in the sequence is reduced, and as a result the information source becomes matched with the communication channel. The technique of such coding is described in § 2.9.
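    The example can be checked numerically. For equally probable binary symbols the entropy is 1 bit per symbol, while unequal probabilities give less; the probability 0.11 below is chosen only because it yields an entropy of about 0.5 bit per symbol, matching the example above:

```python
import math

def binary_entropy(p):
    """Entropy of a binary source with symbol probabilities p and 1 - p."""
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

print(binary_entropy(0.5))    # 1.0 bit/symbol  - matched with a binary channel
print(binary_entropy(0.11))   # ~0.5 bit/symbol - the channel is only half loaded;
                              # statistical coding could carry the same information
                              # in roughly half as many equiprobable symbols
```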

