• High-speed integrated DAC and ADC circuits and measurement of their parameters - A. Marcinkevichus

    A digital-to-analog converter (DAC) is a device for converting a digital code into an analog signal whose magnitude is proportional to the value of the code.

    DACs are used to connect digital control systems with devices that are controlled by the level of an analog signal. The DAC is also an integral part of many analog-to-digital devices and converter structures.

    The DAC is characterized by a conversion function, which relates a change in the digital code to a change in the output voltage or current. For a voltage output it can be written as

    U_out = (U_max / N_max) · N_in, where

    U_out – the output voltage corresponding to the digital code N_in applied to the DAC inputs;

    U_max – the maximum output voltage, corresponding to the maximum input code N_max.

    The quantity K_DAC = U_max / N_max is called the digital-to-analog conversion coefficient. Despite the stepwise nature of the characteristic caused by the discrete change of the input value (the digital code), DACs are considered linear converters.

    If the value N_in is represented through the weights of its bits, the conversion function can be expressed as

    U_out = Σ A_i · U_i, summed over i = 1…n, where

    i – bit number of the input code N_in; A_i – value of the i-th bit (zero or one); U_i – weight of the i-th bit; n – number of bits of the input code (the bit width of the DAC).

    The weight of a bit is defined for a specific bit width and is calculated by the formula

    U_i = U_REF · 2^-i, where

    U_REF – the DAC reference voltage.
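
    A rough numeric illustration of this conversion function (a minimal C sketch; the 8-bit width, the 2.56 V reference and the function name are assumptions made only for the example):

    #include <stdio.h>

    /* Ideal DAC conversion function: sum the bit weights U_i = U_REF * 2^(-i)
       for every bit A_i of the input code that equals one (bit i = 1 is the MSB). */
    double dac_output(unsigned code, int n_bits, double u_ref)
    {
        double u_out = 0.0;
        for (int i = 1; i <= n_bits; i++) {
            int a_i = (code >> (n_bits - i)) & 1;   /* A_i: value of the i-th bit */
            if (a_i)
                u_out += u_ref / (1u << i);         /* U_i = U_REF / 2^i          */
        }
        return u_out;
    }

    int main(void)
    {
        /* 8-bit DAC, 2.56 V reference: full code 255 gives U_REF*(1 - 2^-8) = 2.55 V */
        printf("code 128 -> %.4f V\n", dac_output(128, 8, 2.56));
        printf("code 255 -> %.4f V\n", dac_output(255, 8, 2.56));
        return 0;
    }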

    The operating principle of most DACs is the summation of fractions of an analog quantity (the bit weights) selected according to the input code.

    A DAC can be implemented using weighted summation of currents, weighted summation of voltages, or voltage division. In the first two cases the signals of current generators or EMF sources are summed in accordance with the values of the bits of the input code; the last method is a code-controlled voltage divider. The last two methods are not widely used because of the practical difficulties of their implementation.

    Methods for implementing a DAC with weighted summation of currents

    Let's consider the construction of a simple DAC with weighted summation of currents.

    Such a DAC consists of a set of resistors and a set of switches. The number of switches and resistors equals the number of bits n of the input code. The resistor values are chosen according to the binary law: if R = 3 Ohms, then 2R = 6 Ohms, 4R = 12 Ohms, and so on, i.e. each subsequent resistor is twice as large as the previous one. When a voltage source is connected and the switches are closed, a current flows through each resistor; thanks to this choice of resistor values, the currents are also distributed according to the binary law. When an input code N_in is applied, the switches are turned on according to the values of the corresponding bits of the input code: a switch is closed if the corresponding bit equals one. Currents proportional to the weights of these bits are then summed in the output node, and the total current flowing out of the node is proportional to the value of the input code N_in.
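
    To make the current summation concrete, here is a minimal C sketch of such a binary-weighted network (the 10 V source, the 10 kOhm resistor of the most significant bit and the 4-bit width are assumed values):

    #include <stdio.h>

    /* Binary-weighted resistor DAC: bit i is connected through a resistor
       R * 2^(i-1), so its branch current is U / (R * 2^(i-1)).  The branches
       whose switches are closed (bits equal to one) are summed in one node. */
    int main(void)
    {
        const double u    = 10.0;     /* source voltage, V (assumed)               */
        const double r    = 10e3;     /* resistor of the most significant bit, Ohm */
        const int    n    = 4;        /* number of bits                            */
        unsigned     code = 0xB;      /* input code 1011 = 11                      */

        double i_out = 0.0;
        for (int i = 1; i <= n; i++) {
            int bit = (code >> (n - i)) & 1;               /* MSB first            */
            if (bit)
                i_out += u / (r * (1u << (i - 1)));        /* switch closed        */
        }
        printf("output current = %.4f mA\n", i_out * 1e3); /* 1.375 mA for code 11 */
        return 0;
    }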

    The resistance of the matrix resistors is chosen to be quite large (tens of kilo-ohms), so for most practical purposes the DAC acts as a current source for the load. If a voltage output is required, a current-to-voltage converter, for example one based on an operational amplifier, is connected to the DAC output.

    However, when the code at the DAC inputs changes, the current drawn from the reference voltage source also changes. This is the main disadvantage of this DAC structure. It can be used only if the reference voltage source has a low internal resistance. Otherwise, at the moment the input code changes, the current drawn from the source changes, which changes the voltage drop across its internal resistance and, in turn, produces an additional change in the output current not directly related to the code change. A DAC structure with changeover switches eliminates this drawback.

    In such a structure there are two output nodes. Depending on the values of the bits of the input code, each switch connects its resistor either to the node connected to the device output or to the other node, which is most often grounded. Current therefore flows through every matrix resistor at all times, regardless of the switch positions, and the current drawn from the reference voltage source is constant.

    A common disadvantage of both structures is the large ratio between the smallest and the largest resistor values of the matrix. Despite this large spread of resistor values, the same absolute trimming accuracy must be ensured for both the largest and the smallest resistor. In an integrated DAC design with more than 10 bits this is quite difficult to achieve.

    Structures based on resistive R-2R matrices are free from these disadvantages.

    With this construction of the resistive matrix, the current in each subsequent parallel branch is half the current in the previous one. Since the matrix contains only two resistor values, trimming them is fairly easy.
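
    A small numeric sketch of this halving of the branch currents (the 10 V reference, the ladder resistance R and the relation I_total = U_REF / R for this connection of the ladder are assumptions made for illustration only):

    #include <stdio.h>

    /* R-2R ladder: each successive 2R branch carries half the current of the
       previous one, so only two resistor values are needed.  The branch
       currents of the bits that are set are steered to the output node.    */
    int main(void)
    {
        const double u_ref = 10.0;    /* reference voltage, V (assumed)     */
        const double r     = 10e3;    /* ladder resistance R, Ohm (assumed) */
        const int    n     = 8;
        unsigned     code  = 0x9B;    /* 155 decimal                        */

        double i_total = u_ref / r;   /* current drawn from the reference   */
        double i_out   = 0.0;
        for (int i = 1; i <= n; i++) {
            double branch = i_total / (1u << i);     /* halves at every stage */
            if ((code >> (n - i)) & 1)               /* bit steers it to out  */
                i_out += branch;
        }
        printf("I_out = %.5f mA for code %u\n", i_out * 1e3, code);
        return 0;
    }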

    The output current of each of these structures is proportional not only to the value of the input code but also to the value of the reference voltage; it is proportional to the product of these two quantities, which is why such DACs are called multiplying DACs. Any DAC in which the weighted currents corresponding to the bit weights are formed by resistive matrices has this property.

    In addition to their direct purpose, multiplying DACs are used as analog-digital multipliers and as code-controlled resistances and conductances. They are widely used as components of code-controlled (tunable) amplifiers, filters, reference voltage sources, signal conditioners, etc.

    Basic parameters and errors of the DAC

    The main parameters given in data sheets are:

    1. Number of bits – number of bits of the input code.

    2. Conversion coefficient – the ratio of the output signal increment to the input signal increment for a linear conversion function.

    3. Settling time of the output voltage or current – the time interval from the moment of a specified code change at the DAC input until the moment at which the output voltage or current finally enters a zone whose width equals the least significant bit (LSB).

    4. Maximum conversion frequency – the highest frequency of code changes at which the specified parameters still meet the established standards.

    There are other parameters that characterize the performance of the DAC and the features of its operation. These include: low- and high-level input voltages, supply current, and the output voltage or current range.

    The most important parameters for a DAC are those that determine its accuracy characteristics.

    The accuracy characteristics of any DAC are determined, first of all, by its errors, which are normalized in magnitude.

    Errors are divided into dynamic and static. Static errors are those that remain after all transient processes associated with a change of the input code are complete. Dynamic errors are determined by the transient processes at the DAC output that arise as a result of a change of the input code.

    Main types of static DAC errors:

    The absolute conversion error at the end point of the scale is the deviation of the output voltage (current) from the nominal value corresponding to the end point of the conversion-function scale. It is measured in units of the least significant bit.

    The output zero-offset voltage is the DC voltage at the DAC output when the input code corresponds to a zero output voltage. It is measured in LSB units. The conversion coefficient (scale) error is associated with the deviation of the slope of the conversion function from the required one.

    DAC nonlinearity is the deviation of the actual conversion function from the specified straight line. It is the most troublesome error and the hardest to correct.

    Nonlinearity errors are generally divided into two types - integral and differential.

    The integral nonlinearity error is the maximum deviation of the actual characteristic from the ideal one; in effect, the averaged conversion function is considered. This error is specified as a percentage of the full range of the output value.

    Differential nonlinearity is associated with inaccuracy in setting the bit weights, i.e. with errors of the divider elements, spread of the residual parameters of the switch elements, current generators, etc.
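
    As a sketch of how the two nonlinearity errors could be estimated from a measured characteristic (the 3-bit code, the measured values and the use of an end-point straight line as the reference are all illustrative assumptions):

    #include <stdio.h>
    #include <math.h>

    #define N_CODES 8   /* 3-bit example; the "measured" values are made up */

    int main(void)
    {
        /* measured DAC output (in LSB units) for codes 0..7 */
        double meas[N_CODES] = {0.0, 0.9, 2.1, 3.0, 4.3, 5.0, 5.8, 7.0};

        /* reference straight line through the end points */
        double step = (meas[N_CODES - 1] - meas[0]) / (N_CODES - 1);

        double inl = 0.0, dnl = 0.0;
        for (int k = 0; k < N_CODES; k++) {
            double e = meas[k] - (meas[0] + k * step);      /* deviation from line */
            if (fabs(e) > fabs(inl)) inl = e;               /* integral NL         */
            if (k > 0) {
                double d = (meas[k] - meas[k - 1]) - step;  /* step-size error     */
                if (fabs(d) > fabs(dnl)) dnl = d;           /* differential NL     */
            }
        }
        printf("INL = %.2f LSB, DNL = %.2f LSB\n", inl, dnl);
        return 0;
    }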

    Methods for identifying and correcting DAC errors

    It is desirable that error correction be carried out during the manufacture of the converters (technological trimming). However, correction is often needed when a specific LSI sample is used in a particular device. In that case the correction is carried out by introducing additional elements into the device structure in addition to the LSI DAC; such methods are called structural.

    Ensuring linearity is the most difficult task, since the linearity errors are determined by the interrelated parameters of many elements and nodes. Most often only the zero offset and the conversion coefficient are adjusted.

    The accuracy parameters provided by technological methods deteriorate when the converter is exposed to various destabilizing factors, primarily temperature. The aging of elements must also be kept in mind.

    Zero offset error and scale error are easily corrected at the DAC output. To do this, a constant offset compensating for the shift of the converter characteristic is introduced into the output signal. The required conversion scale is established either by adjusting the gain of the amplifier at the converter output, or by adjusting the reference voltage if the DAC is a multiplying one.

    Correction methods with test control consist of identifying the DAC errors over the entire set of permissible input actions and adding corrections, calculated on this basis, to the input or output value in order to compensate for these errors.

    For any correction method with control using a test signal, the following actions are provided:

    1. Measuring the characteristics of the DAC on a set of test influences sufficient to identify errors.

    2. Identification of errors by calculating their deviations from measurement results.

    3. Calculation of corrections for the converted values, or of the required corrective actions on the blocks being corrected.

    4. Carrying out correction.

    Control can be carried out once, before the converter is installed in the device, using special laboratory measuring equipment. It can also be carried out by specialized equipment built into the device; in that case monitoring is, as a rule, performed periodically, whenever the converter is not directly involved in the operation of the device. Such organization of control and correction of converters is possible when the converter operates as part of a microprocessor measuring system.

    The main disadvantage of any test-based correction method is the long testing time, together with the heterogeneity and large volume of the equipment involved.

    The correction values ​​determined in one way or another are stored, as a rule, in digital form. Correction of errors taking into account these corrections can be carried out both in analog and digital form.

    With digital correction, the corrections are added, with their sign taken into account, to the DAC input code. As a result the DAC receives a code that produces the required voltage or current at its output. The simplest implementation of this method places a digital memory in front of the corrected DAC. The input code acts as the address code, and the corresponding memory locations hold code values, precalculated with the corrections taken into account, that are supplied to the corrected DAC.
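
    A minimal sketch of this lookup-table correction (the 8-bit width, the plain array standing in for the memory and the made-up calibration rule are placeholders, not taken from the text):

    #include <stdio.h>
    #include <stdint.h>

    #define N_CODES 256                 /* 8-bit DAC assumed */

    static uint8_t corr_mem[N_CODES];   /* correction memory, filled at calibration */

    /* hypothetical calibration: a small, made-up offset/gain correction */
    static void fill_correction_memory(void)
    {
        for (int code = 0; code < N_CODES; code++) {
            int corrected = code + 2 - code / 64;
            if (corrected < 0)   corrected = 0;
            if (corrected > 255) corrected = 255;
            corr_mem[code] = (uint8_t)corrected;
        }
    }

    /* placeholder for writing the hardware DAC register */
    static void dac_write(uint8_t code) { printf("DAC <- %u\n", code); }

    int main(void)
    {
        fill_correction_memory();
        /* the input code acts as the address; the memory supplies the code
           that actually produces the required output voltage */
        dac_write(corr_mem[100]);
        dac_write(corr_mem[200]);
        return 0;
    }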

    With analog correction, an additional DAC is used alongside the main one. The range of its output signal corresponds to the maximum error of the corrected DAC. The input code is applied simultaneously to the inputs of the corrected DAC and to the address inputs of the correction memory. The correction corresponding to the given input code value is read from the correction memory, converted into a proportional signal, and summed with the output signal of the corrected DAC. Because the required output range of the additional DAC is small compared with the output range of the corrected DAC, its own errors can be neglected.

    In some cases, it becomes necessary to correct the dynamics of the DAC.

    The transient response of the DAC differs for different code changes; in other words, the settling time of the output signal differs. The maximum settling time must therefore be taken into account when a DAC is used. In some cases, however, the behavior of the transfer characteristic can be corrected.

    Features of using LSI DACs

    Successful use of modern LSI DACs requires more than knowing the list of their main characteristics and their basic connection circuits.

    The results of applying an LSI DAC are significantly affected by operational requirements determined by the features of the particular chip. These include not only keeping to the permissible input signals, supply voltages, load capacitance and resistance, but also the switch-on order of the different power supplies, the separation of the connection circuits of the different supplies and of the common bus, the use of filters, etc.

    For precision DACs the output noise voltage is of particular importance. A specific aspect of the noise problem in a DAC is the presence of voltage spikes at its output caused by the switching of the switches inside the converter. The amplitude of these spikes can reach several tens of LSB weights and can create difficulties for the analog signal-processing devices that follow the DAC. The problem of suppressing such spikes is solved by placing a sample-and-hold (S/H) device at the DAC output. The S/H is controlled from the digital part of the system, which generates the new code combinations at the DAC input. Before a new code combination is applied, the S/H is switched to hold mode, interrupting the analog signal path to its output. Thanks to this, the spike in the DAC output voltage does not reach the S/H output; the S/H is then returned to tracking mode and again follows the DAC output.

    When building a DAC on an LSI, special attention must be paid to the choice of the operational amplifier that converts the DAC output current into a voltage. When an input code is applied to the DAC, an error ΔU caused by the op-amp offset voltage appears at the op-amp output; it is equal to

    ΔU = U_os · (1 + R_fb / R_m), where

    U_os – the op-amp offset voltage; R_fb – the resistance in the op-amp feedback circuit; R_m – the resistance of the DAC resistive matrix (the DAC output resistance), which depends on the code applied to the DAC input.

    As the ratio R_fb / R_m varies from 1 to 0, the error caused by U_os varies within (1...2)·U_os. The influence of U_os can be neglected when an op-amp with a sufficiently small offset voltage is used.
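
    A quick numeric check of this error range (the 1 mV offset voltage and the chosen ratios are assumed values):

    #include <stdio.h>

    /* Output error caused by the op-amp offset: dU = U_os * (1 + R_fb / R_m).
       R_m depends on the input code, so the ratio R_fb/R_m moves roughly
       between 1 and 0 and the error stays within (1...2)*U_os.             */
    int main(void)
    {
        const double u_os     = 1e-3;             /* 1 mV offset (assumed) */
        const double ratios[] = {1.0, 0.5, 0.0};  /* R_fb / R_m examples   */

        for (int k = 0; k < 3; k++)
            printf("R_fb/R_m = %.1f -> error = %.2f mV\n",
                   ratios[k], u_os * (1.0 + ratios[k]) * 1e3);
        return 0;
    }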

    Because of the large area of the transistor switches in CMOS LSIs, the output capacitance of an LSI DAC is significant (40...120 pF, depending on the input code). This capacitance strongly affects the time needed for the op-amp output voltage to settle to the required accuracy. To reduce this influence, the feedback resistor R_fb is bypassed with a capacitor C_fb.

    In some cases a bipolar output voltage is required at the DAC output. This can be achieved by introducing an offset of the output voltage range at the output and, for multiplying DACs, by switching the polarity of the reference voltage source.

    Note that if an integrated DAC with more bits than needed is used, the inputs of the unused bits are connected to the ground bus, which fixes a logic-zero level on them. Moreover, to work with the widest possible range of the LSI DAC output signal, the unused bits should be the least significant ones.

    One practical example of DAC use is a generator of signals of various shapes. I made a small model in Proteus: a DAC controlled by an MCU (an ATmega8, although a Tiny would also do) generates signals of various shapes. The program is written in C in CodeVisionAVR. Pressing a button changes the generated signal.

    The DAC0808 LSI DAC from National Semiconductor is an 8-bit, high-speed converter connected in its standard circuit. Since its output is a current, it is converted into a voltage by an inverting amplifier built on an op-amp.
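
    The original CVAVR project is not reproduced here; below is only a minimal AVR-GCC sketch in the same spirit, in which the pin assignment (DAC0808 data inputs on PORTB, a button on PD0), the clock frequency and the timing constants are assumptions:

    #define F_CPU 8000000UL            /* assumed MCU clock */
    #include <avr/io.h>
    #include <util/delay.h>

    static unsigned char saw(unsigned char i)      { return i; }
    static unsigned char triangle(unsigned char i) { return (unsigned char)(i < 128 ? 2 * i : 2 * (255 - i)); }
    static unsigned char square(unsigned char i)   { return i < 128 ? 0 : 255; }

    int main(void)
    {
        unsigned char (*shape[])(unsigned char) = { saw, triangle, square };
        unsigned char sel = 0, phase = 0;

        DDRB   = 0xFF;            /* PORTB drives the DAC0808 code inputs */
        DDRD  &= ~(1 << PD0);     /* PD0 - waveform select button         */
        PORTD |=  (1 << PD0);     /* enable the pull-up                   */

        for (;;) {
            PORTB = shape[sel](phase++);      /* next sample to the DAC    */
            if (!(PIND & (1 << PD0))) {       /* button pressed            */
                sel = (sel + 1) % 3;
                _delay_ms(200);               /* crude debounce            */
            }
            _delay_us(50);                    /* sets the output frequency */
        }
    }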

    In principle, you can even get interesting figures like these; it reminds me of something, doesn't it? With a higher bit depth the waveforms become smoother.



    Digital-to-analog converters have static and dynamic characteristics.

    Static characteristics of the DAC

    The main static characteristics of DACs are:

    · resolution;

    · nonlinearity;

    · differential nonlinearity;

    · monotonicity;

    · conversion factor;

    · absolute full scale error;

    · relative full scale error;

    · zero offset;

    · absolute error.

    Resolution – the increment of U_OUT when adjacent values of the code D_j, i.e. values differing by one least significant bit (LSB), are converted. This increment is the quantization step. For binary conversion codes the nominal value of the quantization step is

    h = U_FS / (2^N – 1),

    where U_FS is the nominal maximum output voltage of the DAC (full-scale voltage) and N is the bit width of the DAC. The higher the bit width of the converter, the higher its resolution.
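
    A quick numeric illustration of this formula (the 10 V full-scale voltage is an assumed value):

    #include <stdio.h>

    /* Quantization step h = U_FS / (2^N - 1) for a binary DAC. */
    int main(void)
    {
        const double u_fs = 10.0;                 /* full-scale voltage, V */
        for (int n = 8; n <= 16; n += 4) {
            double h = u_fs / ((1u << n) - 1);
            printf("N = %2d bits -> h = %.3f mV\n", n, h * 1e3);
        }
        return 0;
    }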

    Full-scale error – the relative difference between the real and ideal values of the conversion scale limit in the absence of a zero offset, i.e.

    δ_FS = (U_FS.real – U_FS.ideal) / U_FS.ideal.

    It is the multiplicative component of the total error. It is sometimes expressed as the corresponding number of LSBs.

    Zero offset error – the value of U_OUT when the DAC input code is zero. It is the additive component of the total error. It is usually stated in millivolts or as a percentage of full scale.

    Nonlinearity – the maximum deviation of the actual conversion characteristic U_OUT(D) from the optimal one (Fig. 5.2, line 2). The optimal characteristic is found empirically so as to minimize the nonlinearity error. Nonlinearity is usually defined in relative units, but in reference data it is also given in LSBs (see Fig. 5.2).

    Differential nonlinearity – the maximum change (taking the sign into account) of the deviation of the actual conversion characteristic U_OUT(D) from the optimal one when moving from one input code value to an adjacent one. It is usually defined in relative units or in LSBs (see Fig. 5.2).

    Monotonicity of the conversion characteristic – an increase (decrease) of the DAC output voltage U_OUT with an increase (decrease) of the input code D. If the differential nonlinearity is greater than the relative quantization step h/U_FS, the converter characteristic is non-monotonic.

    The temperature instability of a DAC is characterized by the temperature coefficients of the full-scale error and the zero offset error.

    Full scale and zero offset errors can be corrected by calibration (tuning). Nonlinearity errors cannot be eliminated by simple means.

    Dynamic characteristics of the DAC

    The dynamic characteristics of a DAC include the settling time and the conversion time.

    When the input digital signal D(t) is increased sequentially from 0 to (2^N – 1) in steps of one LSB, the output signal U_OUT(t) forms a stepped curve. This dependence is usually called the DAC conversion characteristic. In the absence of hardware errors, the midpoints of the steps lie on the ideal straight line 1 (see Fig. 5.2), which corresponds to the ideal conversion characteristic. The actual conversion characteristic may differ significantly from the ideal one in the size and shape of the steps and in their position on the coordinate plane. A number of parameters are used to quantify these differences.

    The dynamic parameters of the DAC are determined by the change in the output signal when the input code changes abruptly, usually from the value “all zeros” to “all ones” (Fig. 5.3).

    Settling time – the time interval from the moment the input code changes (Fig. 5.3, t = 0) until the last moment at which the equality

    |U_OUT – U_FS| = δ/2

    is satisfied, where δ/2 usually corresponds to one LSB.

    Slew rate – the maximum rate of change of U_OUT(t) during the transient process. It is defined as the ratio of the increment ΔU_OUT to the time Δt during which this increment occurred. It is usually given in the technical documentation of DACs with a voltage output; for digital-to-analog converters with a current output this parameter depends largely on the type of the output op-amp.

    For voltage-output multiplying DACs, the unity gain frequency and power bandwidth are often specified, which are largely determined by the properties of the output amplifier.

    Figure 5.4 shows two linearization methods; it follows from it that the linearization method that minimizes Δ_L, shown in Fig. 5.4, b, halves the error Δ_L compared with linearization at the boundary points (Fig. 5.4, a).

    For a digital-to-analog converter with n binary digits, in the ideal case (in the absence of conversion errors) the analog output U_OUT corresponds to the input binary number as follows:

    U_OUT = U_REF · (a_1·2^-1 + a_2·2^-2 + … + a_n·2^-n),

    where U_REF is the reference voltage of the DAC (from the built-in or an external source).

    Since the sum 2^-1 + 2^-2 + … + 2^-n = 1 – 2^-n, with all bits turned on the output voltage of the DAC is

    U_OUT(a_1…a_n) = U_REF·(1 – 2^-n) = (U_REF/2^n)·(2^n – 1) = Δ·(2^n – 1) = U_FS,

    where U_FS is the full-scale voltage.

    Thus, when all bits are turned on, the output voltage of the digital-to-analog converter, which in this case forms U_FS, differs from the reference voltage U_REF by the weight of the least significant bit of the converter, Δ, defined as

    Δ = U_REF / 2^n.

    When only the i-th bit is turned on, the DAC output voltage is

    U_OUT(a_i = 1) = U_REF · 2^-i.
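
    A small numeric check of these relations (the 10-bit width and the 10 V reference are assumed):

    #include <stdio.h>

    /* Check: U_FS = U_REF*(1 - 2^-n) = D*(2^n - 1),  D = U_REF / 2^n,
       and a single i-th bit gives U_OUT = U_REF * 2^-i.              */
    int main(void)
    {
        const int    n     = 10;
        const double u_ref = 10.0;
        const double d     = u_ref / (1u << n);               /* LSB weight D */

        printf("D            = %.6f V\n", d);
        printf("U_FS (form1) = %.6f V\n", u_ref * (1.0 - 1.0 / (1u << n)));
        printf("U_FS (form2) = %.6f V\n", d * ((1u << n) - 1));
        printf("bit 3 alone  = %.6f V\n", u_ref / (1u << 3)); /* U_REF*2^-3   */
        return 0;
    }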

    A digital-to-analog converter converts the digital binary code Q4 Q3 Q2 Q1 into an analog quantity, usually a voltage U_OUT or a current I_OUT. Each bit of the binary code has a certain weight, the weight of the i-th bit being twice that of the (i-1)-th. The operation of the DAC can be described by the following formula:

    U_OUT = e · (Q1·1 + Q2·2 + Q3·4 + Q4·8 + …),

    where e is the voltage corresponding to the weight of the least significant bit and Q_i is the value of the i-th bit of the binary code (0 or 1).

    For example, the code 1001 corresponds to

    U_OUT = e·(1·1 + 0·2 + 0·4 + 1·8) = 9·e,

    and the code 1100 corresponds to

    U_OUT = e·(0·1 + 0·2 + 1·4 + 1·8) = 12·e.

    Significant difficulties arise in reducing the random error when a time-varying quantity is measured. In this case, to obtain the best estimate of the measured value, a filtering procedure is used. Depending on the type of transformations involved, linear and nonlinear filtering are distinguished; the individual procedures can be implemented both in hardware and in software.

    Filtering can be used not only to suppress interference induced in the analog input circuits, but also, if necessary, to limit the spectrum of the input signal and to restore the spectrum of the output signal (as discussed earlier). If necessary, filters with a tunable cutoff frequency can be used.

    The use of automatic correction of systematic errors can be regarded as adaptation of the channel to its own state. The modern element base also makes it possible to implement input circuits that adapt to the characteristics of the input signal, in particular to its dynamic range. Such adaptation requires an input amplifier with controllable gain. If the results of previous measurements show that the dynamic range of the signal is small compared with the range of the ADC input, the amplifier gain is increased until the dynamic range of the signal matches the operating range of the ADC. In this way the signal sampling error can be minimized and, consequently, the accuracy of the measurements increased. The change of the signal gain at the input is taken into account in software when the measurement results are processed by the digital controller.
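
    A rough sketch of such gain adaptation (the thresholds, the available gain steps and the function name are illustrative assumptions, not taken from the text):

    #include <stdio.h>

    /* If recent samples occupy only a small part of the ADC range, raise the
       gain of the programmable input amplifier; if they approach the limits,
       lower it.  The gain change must then be compensated in software.      */
    static const double gains[] = {1.0, 2.0, 4.0, 8.0};
    static int g = 0;                          /* current gain index */

    static void adapt_gain(double peak_fraction)   /* peak |sample| / full scale */
    {
        if (peak_fraction < 0.45 && g < 3)
            g++;                               /* signal small: increase gain  */
        else if (peak_fraction > 0.90 && g > 0)
            g--;                               /* near clipping: decrease gain */
    }

    int main(void)
    {
        double peaks[] = {0.20, 0.30, 0.95, 0.60};
        for (int k = 0; k < 4; k++) {
            adapt_gain(peaks[k]);
            printf("peak %.2f of range -> gain %.0f\n", peaks[k], gains[g]);
        }
        return 0;
    }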

    The criteria for assessing whether the dynamic range of the signal matches the operating range of the ADC will be discussed later, as will methods of adapting the input channel to the frequency properties of the input signal.

    2.4. Sample-and-hold devices

    When information is collected and then converted, it is often necessary to hold the value of an analog signal for a certain period of time. Sample-and-hold (S/H) devices are used for this purpose; another name for them is analog storage devices. They operate in two modes: in the sampling (tracking) mode they must reproduce the input analog signal at their output, and in the hold mode they must store, and output, the last input voltage that preceded the moment the device was switched into this mode.

    In the simplest case, building an S/H requires only a storage capacitor C_s and a switch S (Fig. 2.12, a). While the switch is closed, the voltage on the capacitor and at the S/H output follows the input voltage. When the switch opens, the voltage on the capacitor, equal to the input voltage at the moment of opening, is stored and transferred to the S/H output.


    Fig. 2.12. Functional diagram of the S/H (a) and timing diagrams of its operation (b)

    Obviously, in a practical implementation the voltage on the capacitor in hold mode does not remain constant (Fig. 2.12, b): the capacitor is discharged by the load current and by its own leakage currents. To keep the voltage on the capacitor, and hence at the S/H output, at an acceptable level for as long as possible, a follower built on an op-amp is added (DA1 in Fig. 2.12, a). As is known, a follower has a high input resistance; it "decouples" the capacitor circuit from the load circuit and greatly reduces the discharge of the capacitor through the load. To reduce the capacitor's own leakage currents, a capacitor with a high-quality dielectric should be chosen. And, of course, for the voltage on the capacitor to remain constant as long as possible, the capacitance should be as large as possible.

    When the S/H is switched from hold mode to tracking mode, the voltage on the capacitor does not reach the current input voltage immediately (Fig. 2.12, b). The time this takes is determined by the time needed to charge the capacitor; it is called the acquisition (sampling) time. The capacitor charges faster the larger its charging current. So that this current is not limited by the output resistance of the preceding stage, a follower on an op-amp is also placed at the S/H input (DA2 in Fig. 2.12, a); here the useful property is the follower's low output resistance. The capacitor also charges faster the smaller its capacitance. Thus the conditions for choosing the capacitance for optimal S/H operation in the two modes are contradictory: the capacitance must be chosen in each case from the specific requirements on the duration of the operating modes.

    The input follower drives a capacitive load, so op-amps that are stable at unity gain with a large capacitive load are used to build it.

    When an S/H is used with an ADC, the hold time is, as a rule, not much longer than the conversion time of the ADC. In this case the capacitor value is chosen so as to obtain the best acquisition time, provided that the voltage droop during one conversion does not exceed the value of the least significant bit of the ADC.
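
    A rough numeric sketch of this trade-off (the leakage current, conversion time, source resistance and the "about nine time constants to reach 0.01 %" settling rule are assumed numbers):

    #include <stdio.h>

    /* Hold-capacitor trade-off for a sample-and-hold:
       droop during one conversion   dU    = I_leak * t_conv / C,
       acquisition time (to ~0.01 %) t_acq ~ 9 * R_src * C.         */
    int main(void)
    {
        const double i_leak = 1e-9;    /* total discharge current, A        */
        const double t_conv = 10e-6;   /* ADC conversion time, s            */
        const double r_src  = 100.0;   /* source (follower) resistance, Ohm */
        const double caps[] = {100e-12, 1e-9, 10e-9};

        for (int k = 0; k < 3; k++) {
            double c = caps[k];
            printf("C = %5.1f nF: droop = %6.2f uV, t_acq = %5.2f us\n",
                   c * 1e9, i_leak * t_conv / c * 1e6, 9.0 * r_src * c * 1e6);
        }
        return 0;
    }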

    Since dielectric losses in the storage capacitor are one of the sources of error, it is best to choose capacitors with a polypropylene, polystyrene or Teflon dielectric. Mica and polycarbonate capacitors have noticeably poorer characteristics, and ceramic capacitors should not be used at all.

    The precision characteristics of an S/H include the zero offset voltage, which usually does not exceed 5 mV (if an op-amp with bipolar transistors is used at the input; op-amps with field-effect transistors at the input have a larger offset), and the droop of the held voltage at a given storage capacitance (for different S/H devices it is normalized from 10^-3 to 10^-1 V/s at C_s = 1000 pF). The droop can be reduced by increasing C_s, but this degrades the dynamic characteristics of the circuit.

    The dynamic characteristics of an S/H include the acquisition time, which shows how long, under the most unfavorable conditions, the charging of the storage capacitor to within a given tolerance lasts, and the aperture delay – the interval between the moment the control voltage is removed and the actual turn-off of the switch.

    There are many sample-and-hold integrated circuits with good characteristics. A number of them include an internal storage capacitor and guarantee maximum acquisition times of tens or hundreds of nanoseconds with an accuracy of 0.01% for a 10 V signal. The aperture delay of popular S/H ICs does not exceed 100 ns. If higher performance is required, hybrid and modular S/H devices can be used.

    As an example of a practical S/H design, Fig. 2.13 shows the functional diagram of the K1100SK2 LSI (LF398). The circuit has an overall negative feedback loop covering the entire circuit, from the output of the follower on op-amp DA2 to the input of the follower on amplifier DA1.

    Typical S/H applications include time-referencing the ADC reading when a varying signal is measured, simultaneous data acquisition from several sensors in multichannel measuring systems, and eliminating the high-frequency spikes in the DAC output signal when the code changes. These and other applications of S/H devices will be discussed in more detail later.

    3. DIGITAL TO ANALOG CONVERTERS

    3.1 General implementation methods

    Digital-to-analog converters (DACs) are devices used to convert a digital code into an analog signal whose magnitude is proportional to the value of the code.

    DACs are widely used to connect digital control systems with actuators and mechanisms that are controlled by the level of an analog signal, as components of more complex analog-to-digital devices and converters.

    In practice, DACs are mainly used for converting binary codes, so further discussion will only be about such DACs.

    Any DAC is characterized, first of all, by its conversion function, which relates a change in the input value (the digital code) to a change in the output value (a voltage or current), Fig. 3.1.

    Fig. 3.1. Conversion function (transfer characteristic) of a DAC

    Analytically, the DAC conversion function can be expressed as follows (for the case when the output signal is represented by voltage):

    U_OUT = (U_MAX / N_MAX) · N_IN, where

    U_OUT – the output voltage corresponding to the digital code N_IN applied to the DAC inputs;

    U_MAX – the maximum output voltage, corresponding to the maximum input code N_MAX.

    The quantity K_DAC = U_MAX / N_MAX is called the digital-to-analog conversion coefficient. Its constancy over the entire range of the arguments means that changes of the output analog signal are proportional to the corresponding changes of the input code. That is why, despite the stepwise nature of the characteristic caused by the discrete change of the input value (the digital code), DACs are considered linear converters.

    If the value N_IN is represented through the weights of its bits, the DAC conversion function can be expressed as follows:

    U_OUT = Σ A_i · U_i, summed over i = 1…n, where

    i – bit number of the input code N_IN;

    A_i – value of the i-th bit (zero or one);

    U_i – weight of the i-th bit;

    n – number of bits of the input code (the bit width of the DAC).

    This way of writing the conversion function largely reflects the operating principle of most DACs, which essentially consists in summing fractions of the analog output quantity (summing analog measures), each of which is proportional to the weight of the corresponding bit.

    In general, according to the construction method, DACs are divided into those with weighted summation of currents, those with weighted summation of voltages, and those based on a code-controlled voltage divider.

    When a DAC is built on weighted summation of currents, the signals of current generators are summed in accordance with the values of the bits of the input code N_IN, and the output signal is represented by a current. The construction of a four-bit DAC on this principle is illustrated in Fig. 3.2. The generator currents are chosen proportional to the weights of the bits of the binary code: if the current of the smallest generator, corresponding to the least significant bit of the input code, is I, then each next one must be twice as large - 2I, 4I, 8I. Each i-th bit of the input code N_IN controls the i-th switch S_i. If the i-th bit equals one, the corresponding switch is closed, and the current generator whose current is proportional to the weight of this i-th bit takes part in forming the output current of the converter. Thus the output current of the DAC is proportional to the value of the input code N_IN.

    Fig. 3.2. Construction of a DAC based on weighted summation of currents

    For example, if the value of the input code N_IN is eleven, i.e. 1011 in binary, then switches S1, S2 and S4 in the diagram of Fig. 3.2 will be closed and switch S3 open. The summed currents are then I, 2I and 8I; together they form the output current I_OUT = 11I, i.e. the output current is proportional to the value of the input code N_IN = 11.

    When a DAC is built on weighted summation of voltages, the output signal of the DAC is formed from the values of voltage generators in accordance with the values of the bits of the input code N_IN and is represented by a voltage. The construction of a four-bit DAC on this principle is illustrated in Fig. 3.3. The values of the voltage generators are set according to the binary law, proportional to the weights of the bits of the binary code (E, 2E, 4E and 8E). If the i-th bit of the input code N_IN equals one, the corresponding switch must be open, and the voltage generator whose voltage is proportional to the weight of this i-th bit takes part in forming the output voltage U_OUT of the converter. Thus the output voltage U_OUT of the DAC is proportional to the value of the input code N_IN.

    Fig. 3.3. Construction of a DAC based on weighted summation of voltages

    For example, if the value of the input code N_IN is eleven, i.e. 1011 in binary, then the switches controlled by the corresponding bits, S1, S2 and S4 in the diagram of Fig. 3.3, will be open and switch S3 closed. The summed voltages are then E, 2E and 8E; together they form the output voltage U_OUT = 11E, i.e. the output voltage is proportional to the value of the input code N_IN = 11.

    In the latter case, the DAC is implemented as a code-controlled voltage divider (Fig. 3.4).

    Fig. 3.4. Construction of a DAC based on a code-controlled voltage divider

    The code-controlled divider consists of two arms. If the bit width of the DAC is n, the number of resistors in each arm is 2^n. The resistance of each arm of the divider is changed by the switches S. The switches are controlled by the output unitary code of the decoder DC: the switches of one arm are controlled by it directly, those of the other arm through inverters. The output code of the decoder contains a number of ones equal to the value of the input code N_IN. It is easy to see that the division ratio of the divider is always proportional to the value of the input code N_IN.

    The last two methods are not widely used because of the practical difficulties of their implementation. For a DAC with weighted summation of voltages it is impossible to implement voltage generators that would tolerate a short circuit at the output, as well as switches without residual voltage in the closed state. In a DAC based on a code-controlled divider, each of the two arms of the divider consists of a very large number of resistors (2^n), contains the same number of switches to control them, and needs a large decoder, so with this approach the implementation of the DAC turns out to be very cumbersome. Thus, the structure mainly used in practice is the DAC with weighted summation of currents.

    3.2 DAC with weighted current summation

    Let's consider the construction of a simple DAC with weighted summation of currents. In the simplest case, such a DAC consists of a resistive matrix and a set of switches (Fig. 3.5).

    Fig. 3.5. Implementation of a DAC based on a resistive matrix

    The number of switches and the number of resistors in the matrix equal the number of bits n of the input code N_IN. The resistor values are chosen proportional to the weights of the binary code, i.e. proportional to the terms of the series 2^i, i = 1…n. When a voltage source is connected to the common node of the matrix and the switches are closed, a current flows through each resistor. Thanks to this choice of resistor values, the resistor currents are distributed according to the binary law, i.e. proportionally to the weights of the bits of the binary code. When an input code N_IN is applied, the switches are turned on according to the values of the corresponding bits of the input code: a switch is closed if the corresponding bit equals one. In the current node, currents proportional to the weights of these bits are then summed, and the total current flowing out of the node is proportional to the value of the input code N_IN.

    In such a structure there are two output nodes. Depending on the values of the bits of the input code, each switch connects its resistor either to the node connected to the device output or to the other node, which is most often grounded. Current therefore flows through every matrix resistor at all times, regardless of the switch positions, and the current drawn from the reference voltage source is constant.

    Fig. 3.6. Implementation of a DAC based on a resistive matrix with changeover switches

    A common disadvantage of both structures considered is the large ratio between the smallest and the largest resistor values of the matrix. Despite the large difference in resistor values, the same absolute trimming error must be ensured for both the largest and the smallest resistor, i.e. the relative trimming accuracy of the large resistors must be very high. In an integrated DAC design with more than ten bits this is quite difficult to achieve.

    Structures based on resistive R-2R matrices (Fig. 3.7) are free from all these disadvantages.

    Fig. 3.7. Implementation of a DAC based on an R-2R resistive matrix with changeover switches

    It is easy to verify that with this construction of the resistive matrix the current in each subsequent parallel branch is half that in the previous one, i.e. the currents are distributed according to the binary law. The presence in the matrix of only two resistor values, differing by a factor of two, makes trimming them fairly simple and does not impose high demands on the relative trimming accuracy.

    3.3 DAC parameters and errors

    The system of electrical characteristics of DACs, reflecting the features of their construction and operation, includes more than a dozen parameters. The main ones, recommended for inclusion in the normative and technical documentation as the most common and the most fully descriptive of converter operation in static and dynamic modes, are listed below.

    1. Number of bits – number of bits of the input code.

    2. Conversion coefficient – ​​the ratio of the output signal increment to the input signal increment for a linear conversion function.

    3. Settling time of the output voltage or current – the time interval from the moment of a given code change at the DAC input until the moment at which the output voltage or current finally enters a zone whose width equals the weight of the least significant bit (LSB), located symmetrically around the steady-state value. Fig. 3.8 shows the transient response of the DAC, i.e. the change of the DAC output signal over time after a code change. Besides the settling time it also characterizes some other dynamic parameters of the DAC – the overshoot of the output signal, the damping factor, the angular frequency of the settling process, etc. When a particular DAC is characterized, this response is measured for a code change from zero to a code equal to half of its maximum value.

    4. Maximum conversion frequency – the highest sampling frequency at which the specified parameters comply with the established standards.

    There are other parameters that characterize the performance of the DAC and the features of its operation. These include: low- and high-level input voltages, output leakage current, supply current, output voltage or current range, the power-supply instability influence factor, and others.

    The most important parameters for a DAC are those that determine its accuracy characteristics, which are determined by errors normalized by magnitude.

    Fig. 3.8. Determining the settling time of the DAC output signal

    First of all, it is necessary to clearly distinguish between static and dynamic DAC errors. Static errors are those that remain after all transient processes associated with a change of the input code are complete. Dynamic errors are determined by the transient processes at the output of the DAC, or of its component parts, that arise as a result of a change of the input code.

    The main types of static DAC errors are defined as follows.

    Absolute conversion error at the end point of the scale – the deviation of the output voltage (current) from the nominal value corresponding to the end point of the conversion-function scale. For DACs operating with an external reference voltage source it is determined without taking into account the error introduced by that source. It is measured in units of the least significant bit.

    Zero offset voltage at the output – the voltage at the DAC output with a zero input code. It is measured in LSB units. It shifts the actual conversion function in parallel and does not introduce nonlinearity; it is an additive error.

    Conversion coefficient (scale) error – a multiplicative error associated with the deviation of the slope of the conversion function from the required one.

    DAC nonlinearity – the deviation of the actual conversion function from the specified straight line. The main requirement for a DAC in this respect is strict monotonicity of the characteristic, which guarantees a one-to-one correspondence between the output and input signals of the converter. Formally, the monotonicity requirement means that the derivative of the characteristic keeps a constant sign over the entire working range.

    Nonlinearity errors are generally divided into two types - integral and differential.

    Integral nonlinearity error – the maximum deviation of the actual characteristic from the ideal one; in effect, the averaged conversion function is considered. This error is specified as a percentage of the full range of the output value. Integral nonlinearity arises from various nonlinear effects that act on the converter as a whole. They show up most clearly in integrated converter designs; for example, integral nonlinearity may be caused by the different heating of some nonlinear resistances inside the LSI at different input codes.

    Differential nonlinearity error – the deviation of the actual characteristic from the ideal one for adjacent code values. These errors reflect non-monotonic deviations of the actual characteristic from the ideal one. To characterize the whole conversion function, the local differential nonlinearity with the maximum absolute value is taken. The limits of permissible differential nonlinearity are expressed in units of the weight of the least significant bit.

    Let us consider the causes of differential errors and how they affect the DAC conversion function. Imagine that all the bit weights of the DAC are set perfectly accurately except the weight of the most significant bit.

    If we look at the sequence of all code combinations of a binary code of a given width, the rules of binary code formation imply, among other things, that in the code combinations corresponding to values from zero to half of full scale (from zero to half of the maximum code value) the most significant bit is always zero, while in the code combinations corresponding to values from half of the scale to its full value the most significant bit is always one. Therefore, when codes from the first half of the input code scale are applied to the DAC, the weight of the most significant bit does not take part in forming the output signal, and when codes from the second half are applied, it takes part constantly. But if the weight of this bit is set with an error, this error is also reflected in the output signal, and hence in the DAC conversion function, as shown in Fig. 3.9, a.

    Fig. 3.9. Influence of an error in setting the weight of the most significant bit on the DAC conversion function

    Fig. 3.9, a shows that for the first half of the input code values the real DAC conversion function coincides with the ideal one, while for the second half it differs from the ideal one by the error in setting the weight of the most significant bit. The influence of this error on the DAC conversion function can be minimized by choosing a conversion scale factor that reduces the error at the end point of the conversion scale to zero (Fig. 3.9, b). The differential errors are then distributed symmetrically about the middle of the scale, which is why they are also called symmetrical-type errors. At the same time it is clear that such an error can make the DAC conversion function non-monotonic.
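
    A small sketch of the described effect (the 4-bit width and the +0.4 LSB error of the MSB weight are assumed values):

    #include <stdio.h>

    /* All bit weights ideal except the MSB, whose weight is off by +0.4 LSB:
       the lower half of the scale stays ideal, the upper half is shifted,
       and the step at mid-scale shows the differential nonlinearity.       */
    int main(void)
    {
        const int    n_bits  = 4;
        const double msb_err = 0.4;                   /* LSB, illustrative */

        for (int code = 0; code < (1 << n_bits); code++) {
            double out = code;                        /* ideal weights, in LSB  */
            if (code & (1 << (n_bits - 1)))           /* MSB set: add its error */
                out += msb_err;
            printf("code %2d -> %5.2f LSB (error %+.2f)\n", code, out, out - code);
        }
        return 0;
    }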

    Fig. 3.10, a shows how the real DAC conversion function differs from the ideal one when there are no errors in the weights of any bits except the bit preceding the most significant one. Fig. 3.10, b shows the behavior of the conversion function when the scale component of the total error is removed (reduced to zero).

    It is rational to achieve the required metrological performance in a comprehensive way, combining technological techniques with various structural methods. When ready-made integrated converters are used, structural methods are the only way to further improve the metrological characteristics of the conversion system.

    Zero offset error and scale error are easily corrected at the DAC output. To do this, a constant offset compensating for the shift of the converter characteristic is introduced into the output signal. The required conversion scale is established either by adjusting the gain of the amplifier at the converter output, or by adjusting the reference voltage if the DAC is a multiplying one.


    • CONTENTS
    • Introduction
    • 1. Terms of reference
    • 2. Development and description of a system of measuring channels for determining static and dynamic characteristics
    • 2.1 Development of the principle of selection and standardization of static and dynamic characteristics of measuring channels of measuring instruments
    • 2.2 Development of complexes of standardized metrological characteristics
    • 3. DEVELOPMENT OF METROLOGICAL MEASURING INSTRUMENTS
    • 3.1 Development of metrological reliability of measuring instruments
    • 3.2 Changes in metrological characteristics of measuring instruments during operation
    • 3.3 Development of models for standardizing metrological characteristics
    • 4. CLASSIFICATION OF SIGNALS
    • 5. Channel development
    • 5.1 Development of a channel model
    • 5.2 Development of a measuring channel model
    • LITERATURE

    Introduction

    One of the main forms of state metrological supervision and departmental control aimed at ensuring the uniformity of measurements in the country, as mentioned earlier, is the verification of measuring instruments. Instruments released from production or repair, received from abroad, and those in operation or storage are subject to verification. The basic requirements for the organization and procedure of verification of measuring instruments are established by the GOST standard "GSI. Verification of measuring instruments. Organization and procedure." The term "verification" was introduced by GOST "GSI. Metrology. Terms and definitions" as "the determination by a metrological body of the errors of a measuring instrument and the establishment of its suitability for use." In some cases, during verification, instead of determining the error values, it is checked whether the error lies within acceptable limits. Thus, verification of measuring instruments is carried out to establish their suitability for use. Measuring instruments whose verification confirms their compliance with the metrological and technical requirements for the given instrument are recognized as suitable for use. Measuring instruments are subjected to primary, periodic, extraordinary, inspection and expert verification. Instruments undergo primary verification upon release from production or repair, as well as upon import. Instruments in operation or storage are subject to periodic verification at defined calibration intervals, established so as to ensure the suitability of the instrument for use during the period between verifications. Inspection verification is carried out to determine the suitability of measuring instruments for use when state supervision and departmental metrological control over the condition and use of measuring instruments are exercised. Expert verification is carried out when disputes arise concerning the metrological characteristics (MX), serviceability of measuring instruments and their suitability for use. Metrological certification is a set of activities for studying the metrological characteristics and properties of a measuring instrument in order to decide on its suitability for use as a reference instrument. Usually a special program of work is drawn up for metrological certification, the main stages of which are: experimental determination of the metrological characteristics; analysis of the causes of failures; establishment of the verification interval, etc. Metrological certification of measuring instruments used as reference instruments is carried out before commissioning, after repair and, if necessary, when the category of the reference instrument is changed. The results of metrological certification are documented in the appropriate documents (protocols, certificates, notices of unsuitability of the measuring instrument). The characteristics of the types of measuring instruments used determine the methods of their verification.

    In the practice of calibration laboratories, various methods for calibrating measuring instruments are known, which for unification are reduced to the following:

    * direct comparison using a comparator (i.e. using comparison tools);

    * direct measurement method;

    * method of indirect measurements;

    * method of independent verification (i.e. verification of measuring instruments of relative values, which does not require transfer of unit sizes).

    Verification of measuring systems is carried out by state metrological bodies called the State Metrological Service. The activities of the State Metrological Service are aimed at solving scientific and technical problems of metrology and implementing the necessary legislative and control functions, such as: establishing units of physical quantities approved for use; creation of exemplary measuring instruments, methods and measuring instruments of the highest accuracy; development of all-Union verification schemes; determination of physical constants; development of measurement theory, error estimation methods, etc. The tasks facing the State Metrological Service are solved with the help of the State System for Ensuring the Uniformity of Measurements (GSI). The state system for ensuring the uniformity of measurements is the regulatory and legal basis for metrological support of scientific and practical activities in terms of assessing and ensuring measurement accuracy. It is a set of regulatory and technical documents that establish a unified nomenclature, methods for presenting and assessing the metrological characteristics of measuring instruments, rules for standardization and certification of measurements, registration of their results, requirements for conducting state tests, verification and examination of measuring instruments. The main regulatory and technical documents of the state system for ensuring the uniformity of measurements are state standards. On the basis of these basic standards, regulatory and technical documents are developed that specify the general requirements of the basic standards for various industries, measurement areas and measurement methods.

    1. Technical specifications

    1.1 Development and description of a system of measuring channels to determine static and dynamic characteristics.

    1.2 Materials of scientific and methodological developments of the ISIT department

    1.3 Purpose and objective

    1.3.1 This system is designed to determine the characteristic instrumental components of measurement errors.

    1.3.2 Develop a measuring information system that automatically acquires the necessary information, processes it, and presents it in the required form.

    1.4 System requirements

    1.4.1 The rules for selecting sets of standardized metrological characteristics for measuring instruments and the methods for their standardization are determined by the GOST 8.009-84 standard.

    1.4.2 Sets of standardized metrological characteristics are established for the following groups of measuring instruments:

    1. measures and digital-to-analog converters;

    2. measuring and recording instruments;

    3. analog and analog-to-digital measuring converters.

    1.4.3 Instrumental error according to the first model of normalized metrological characteristics:

    Δ_SI = Δ_0s * Δ_0r * Δ_0H * (Δ_c1 * ... * Δ_cL) * Δ_dyn,

    where Δ_0s is the systematic component; Δ_0r is the random component; Δ_0H is the random component due to hysteresis; Δ_cj are the additional errors; Δ_dyn is the dynamic error.

    1.4.4 Instrumental error according to the second model of normalized metrological characteristics:

    Δ_SI = Δ_0 + Σ Δ_cj + Δ_dyn,

    where Δ_0 is the main SI error without breaking it into components.

    1.4.5 The models of standardized metrological characteristics shall comply with GOST 8.009-84 regarding the formation of complexes of standardized metrological characteristics.

    2. Development and description of a system of measuring channels to determine static and dynamic characteristics

    2.1 Development of the principle of selection and standardization of static and dynamic characteristics of measuring channels of measuring instruments

    When using SI, it is fundamentally important to know the degree of correspondence of the information being measured, contained in the output signal, to its true value. For this purpose, certain metrological characteristics (MX) are introduced and standardized for each SI.

    Metrological characteristics are characteristics of the properties of a measuring instrument that influence the measurement result and its errors. The characteristics established by regulatory and technical documents are called standardized, and those determined experimentally are called valid. The MX nomenclature, the rules for selecting sets of standardized MX for measuring instruments and the methods for their standardization are determined by the GOST 8.009-84 standard "GSI. Standardized metrological characteristics of measuring instruments."

    Metrological characteristics of SI allow:

    determine measurement results and calculate estimates of the characteristics of the instrumental component of the measurement error in real conditions of SI application;

    calculate MX channels of measuring systems consisting of a number of measuring instruments with known MX;

    make an optimal choice of SI that provides the required quality of measurements under known conditions of use;

    compare SI of various types taking into account the conditions of use.

    When developing principles for the selection and standardization of measuring instruments, it is necessary to adhere to a number of provisions outlined below.

    1. The main condition for the possibility of solving all of the listed problems is the presence of an unambiguous connection between normalized MX and instrumental errors. This connection is established through a mathematical model of the instrumental component of the error, in which the normalized MX must be arguments. It is important that the MX nomenclature and methods of expressing them are optimal. Experience in operating various SIs shows that it is advisable to normalize the MX complex, which, on the one hand, should not be very large, and on the other hand, each standardized MX should reflect the specific properties of the SI and, if necessary, can be controlled.

    Standardization of MX measuring instruments should be carried out on the basis of uniform theoretical premises. This is due to the fact that measuring instruments based on different principles can participate in measurement processes.

    Normalized MX must be expressed in such a form that with their help it is possible to reasonably solve almost any measurement problems and at the same time it is quite simple to control the measuring instruments for compliance with these characteristics.

    Normalized MX must provide the possibility of statistical integration and summation of the components of the instrumental measurement error.

    In general, it can be defined as the sum (combination) of the following error components:

    Δ_0(t), due to the difference between the actual conversion function under normal conditions and the nominal one assigned by the relevant documents to this type of SI; this error is called the main error;

    Δ_c(t), caused by the reaction of the SI to changes in external influencing quantities and informative parameters of the input signal relative to their nominal values; this error is called additional;

    Δ_dyn, caused by the reaction of the SI to the rate (frequency) of change of the input signal. This component, called the dynamic error, depends both on the dynamic properties of the measuring instrument and on the frequency spectrum of the input signal;

    Δ_int, caused by the interaction of the measuring instrument with the measurement object or with other measuring instruments connected in series with it in the measuring system. This error depends on the characteristics of the parameters of the SI input circuit and the output circuit of the measurement object.

    Thus, the instrumental component of the SI error can be represented as

    Δ_SI = Δ_0(t) * Δ_c(t) * Δ_dyn * Δ_int,

    where * is a symbol for the statistical combination of the components.

    The first two components represent the static error of the SI, and the third is the dynamic one. Of these, only the main error is determined by the properties of SI. Additional and dynamic errors depend both on the properties of the SI itself and on some other reasons (external conditions, parameters of the measuring signal, etc.).

    The requirements for universality and simplicity of the statistical combination of the components of the instrumental error determine the need for their statistical independence - non-correlation. However, the assumption of the independence of these components is not always true.

    Isolating the dynamic error of the SI as a summed component is permissible only in a particular, but very common case, when the SI can be considered a linear dynamic link and when the error is a very small value compared to the output signal. A dynamic link is considered linear if it is described by linear differential equations with constant coefficients. For SI, which are essentially nonlinear links, separating static and dynamic errors into separately summable components is unacceptable.

    Normalized MX must be invariant to the conditions of use and operating mode of the SI and reflect only its properties.

    The choice of MX must be carried out so that the user has the ability to calculate the SI characteristics from them under real operating conditions.

    Standardized MX, given in the regulatory and technical documentation, reflect the properties not of a single instance of measuring instruments, but of the entire set of measuring instruments of this type, i.e. are nominal. A type is understood as a set of measuring instruments that have the same purpose, layout and design and satisfy the same requirements regulated in the technical specifications.

    The metrological characteristics of an individual SI of a given type can be any within the range of nominal MX values. It follows that the MX of a measuring instrument of a given type should be described as a non-stationary random process. A mathematically rigorous account of this circumstance would require normalizing not only the limits of MX as random variables, but also their time dependence (i.e., their autocorrelation functions). This would lead to an extremely complex standardization system and the practical impossibility of controlling MX, since the control would then have to be carried out at strictly defined intervals. As a result, a simplified standardization system was adopted, providing a reasonable compromise between mathematical rigor and the necessary practical simplicity. In the adopted system, low-frequency changes in the random components of the error, whose period is commensurate with the duration of the verification interval, are not taken into account when normalizing MX. They determine the reliability indicators of measuring instruments, the choice of rational calibration intervals and other similar characteristics. High-frequency changes in the random components of the error, whose correlation intervals are commensurate with the duration of the measurement process, must be taken into account by normalizing, for example, their autocorrelation functions.

    2.2 Development of complexes of standardized metrological characteristics

    The large variety of SI groups makes it impossible to regulate specific MX complexes for each of these groups in one regulatory document. At the same time, all SI cannot be characterized by a single set of normalized MX, even if it is presented in the most general form.

    The main feature of dividing measuring instruments into groups is the commonality of the complex of standardized MXs necessary to determine the characteristic instrumental components of measurement errors. In this case, it is advisable to divide all measuring instruments into three large groups, presented according to the degree of complexity of MX: 1) measures and digital-to-analog converters; 2) measuring and recording instruments; 3) analog and analog-to-digital measuring converters.

    When establishing a set of standardized MX, the following model of the instrumental component of measurement error was adopted:

    Δ_instr = Δ_SI * Δ_int,

    where the symbol « * » indicates the combination of the SI error Δ_SI in real conditions of its use and the error component Δ_int caused by the interaction of the SI with the measurement object. By combining we mean applying some functional to the components, which allows us to calculate the error caused by their joint influence. In each case the functional is determined based on the properties of the particular SI.

    The entire MX population can be divided into two large groups. In the first of them, the instrumental component of the error is determined by statistically combining its individual components. In this case, the confidence interval in which the instrumental error lies is determined with a given confidence probability less than one. For MX of this group, the following error model is adopted for real application conditions (model 1):

    Δ_SI = Δ_0s * Δ_0r * Δ_0H * (Δ_c1 * Δ_c2 * ... * Δ_cL) * Δ_dyn,

    where Δ_0s is the systematic component;

    Δ_0r is the random component;

    Δ_0H is the random component due to hysteresis;

    Δ_c1 ... Δ_cL is the combination of additional errors;

    Δ_dyn is the dynamic error;

    L is the number of additional errors, equal to the number of influencing quantities that significantly affect the error in real conditions.

    Depending on the properties of a given type of SI and the operating conditions of its use, individual components may be missing.

    The first model is selected if it is accepted that the error occasionally exceeds the value calculated from the standardized characteristics. In this case, using the MX complex, it is possible to calculate point and interval characteristics in which the instrumental component of the measurement error is found with any given confidence probability, close to unity, but less than it.

    For the second MX group, statistical aggregation of components is not applied. Such measuring instruments include laboratory means, as well as most standard means, the use of which does not involve repeated observations with averaging of results. The instrumental error in this case is defined as the arithmetic sum of the largest possible values ​​of its components. This estimate gives a confidence interval with a probability equal to one, which is the upper limit estimate of the desired error interval, covering all possible, including very rarely realized, values. This leads to a significant tightening of the requirements for MX, which can only be applied to the most critical measurements, for example those related to the health and life of people, with the possibility of catastrophic consequences of incorrect measurements, etc.

    Arithmetic summation of the largest possible values of the components of the instrumental error leads to the inclusion in the complex of normalized MX of the limits of permissible error, rather than statistical moments. This is also acceptable for measuring instruments that have no more than three components, each of which is determined by a separate standardized MX. In this case, the calculated estimates of the instrumental error obtained by arithmetically combining the largest values of its components and by statistical summation of the characteristics of the components (with a probability, although smaller, but quite close to unity) will practically not differ. For the case under consideration, SI error model 2 is:

    Δ_SI = Δ_0 + Σ(j = 1..L) Δ_cj + Δ_dyn,

    where Δ_0 is the main error of the SI without breaking it into components (unlike model 1).
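    To make the difference between the two combination rules concrete, here is a minimal numeric sketch in Python: it compares the arithmetic sum of the largest possible component values (the upper-bound estimate of model 2) with a root-sum-square combination of independent components, which is one simple way to realize the statistical combination of model 1. The component values are purely illustrative and are not taken from the document.

    import math

    # Illustrative (hypothetical) limits of the error components of an SI,
    # in units of the measured quantity: main, two additional, and dynamic.
    delta_main = 0.10
    delta_add = [0.04, 0.03]
    delta_dyn = 0.05

    # Model 2 style estimate: arithmetic sum of the largest possible values,
    # an upper-bound interval with probability P = 1.
    delta_arithmetic = delta_main + sum(delta_add) + delta_dyn

    # Model 1 style estimate: statistical combination of independent, centred
    # components; their variances add, so a root-sum-square gives the interval
    # for a confidence probability close to (but less than) one.
    delta_statistical = math.sqrt(delta_main**2 + sum(d**2 for d in delta_add) + delta_dyn**2)

    print(f"arithmetic (upper bound) : +/-{delta_arithmetic:.3f}")
    print(f"statistical (RSS)        : +/-{delta_statistical:.3f}")

    As expected, the statistical estimate is noticeably narrower than the arithmetic upper bound, which is exactly the trade-off described above.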

    3. DEVELOPMENT OF METROLOGICAL MEASURING INSTRUMENTS

    3.1 Development of metrological reliability of measuring instruments.

    Model 2 is applicable only for those SIs whose random component is negligible.

    The issues of choosing MX are regulated in sufficient detail in GOST 8.009-84, which shows the characteristics that should be standardized for the above-mentioned SI groups. The above list can be adjusted for a specific measuring instrument, taking into account its features and operating conditions. It is important to note that one should not normalize those MXs that make an insignificant contribution to the instrumental error compared to others. Determination of whether a given error is important or not is made on the basis of the materiality criteria given in GOST 8.009-84.

    During operation, the metrological characteristics and parameters of the measuring instrument undergo changes. These changes are random, monotonous or fluctuating in nature and lead to failures, i.e. to the inability of the SI to perform its functions. Failures are divided into non-metrological and metrological.

    A non-metrological failure is a failure caused by reasons not related to changes in the MX of the measuring instrument. Such failures are mainly of an obvious nature, appear suddenly and can be detected without verification.

    A metrological failure is a failure caused by MX going outside the established permissible limits. As studies have shown, metrological failures occur much more often than non-metrological ones. This necessitates the development of special methods for their prediction and detection. Metrological failures are divided into sudden and gradual.

    A sudden failure is a failure characterized by an abrupt change in one or more MX. Because of their random nature, these failures cannot be predicted. Their consequences (failure of readings, loss of sensitivity, etc.) are easily detected during operation of the device, i.e. by the nature of their manifestation they are obvious. A feature of sudden failures is the constancy of their intensity over time. This makes it possible to apply classical reliability theory to the analysis of such failures. For this reason, failures of this kind are not considered further.

    A gradual failure is a failure characterized by a monotonic change in one or more MX. By the nature of their manifestation, gradual failures are hidden and can only be detected from the results of periodic monitoring of the measuring instrument. In what follows, it is these failures that are considered.

    The concept of metrological serviceability of a measuring instrument is closely related to the concept of "metrological failure". It refers to the state of the SI in which all standardized MX correspond to the established requirements. The ability of an SI to maintain the set values of its metrological characteristics for a given time under certain modes and operating conditions is called metrological reliability. The specificity of the problem of metrological reliability is that the main assumption of classical reliability theory, the constancy of the failure rate over time, does not hold for it. Modern reliability theory is focused on products that have two characteristic states: operational and inoperative. A gradual change in the SI error makes it possible to introduce as many operational states as desired, with different levels of operating efficiency determined by how close the error is to the permissible limit values.

    The concept of metrological failure is to a certain extent conditional, since it is determined by the MX tolerance, which in the general case can vary depending on specific conditions. It is also important that it is impossible to record the exact time of occurrence of a metrological failure due to the hidden nature of its manifestation, while obvious failures, which the classical reliability theory deals with, can be detected at the moment of their occurrence. All this required the development of special methods for analyzing the metrological reliability of SI.

    The reliability of a measuring instrument characterizes its behavior over time and is a generalized concept that includes stability, reliability, durability, maintainability (for recoverable measuring instruments) and storability.

    SI stability is a qualitative characteristic reflecting the invariance of its MX over time. It is described by the time dependences of the parameters of the error distribution law. Metrological reliability and stability are different properties of the same SI aging process. Stability carries more information about the constancy of the metrological properties of a measuring instrument; it is, as it were, an "internal" property. Reliability, on the contrary, is an "external" property, since it depends both on stability and on the accuracy of measurements and the values of the tolerances used.

    Reliability is the property of an SI to continuously maintain an operational state for some time. It is characterized by two states: operational and inoperative. However, for complex measuring systems a larger number of states is possible, since not every failure leads to a complete cessation of their functioning. A failure is a random event associated with disruption or cessation of SI performance. This determines the random nature of reliability indicators, the main one of which is the distribution of the time of trouble-free operation of the SI.

    Durability is the property of an SI to maintain its operational state until a limiting state occurs. An operational state is a state of the SI in which all its MX correspond to the normalized values. A limiting state is a state of SI in which its use is unacceptable.

    After a metrological failure, the SI characteristics can be returned to acceptable ranges through appropriate adjustments. The adjustment process can be more or less lengthy depending on the nature of the metrological failure, the design of the measuring instruments and a number of other reasons. Therefore, the concept of “maintainability” was introduced into the reliability characteristic. Maintainability is a property of a measuring instrument, which consists in its adaptability to preventing and detecting the causes of failures, restoring and maintaining its operational state through maintenance and repair. It is characterized by the expenditure of time and money to restore the measuring instrument after a metrological failure and maintain it in working condition.

    As will be shown below, the process of changing MX is continuous, regardless of whether the SI is in use or stored in a warehouse. The property of an SI to preserve the values of its indicators of reliability, durability and maintainability during and after storage and transportation is called its storability.

    3.2 Changes in metrological characteristics of measuring instruments during operation

    The metrological characteristics of SI may change during operation. In what follows, we will talk about changes in the error Δ(t), implying that any other MX can be considered in a similar way instead.

    It should be noted that not all error components are subject to change over time. For example, methodological errors depend only on the measurement technique used. Among instrumental errors, there are many components that are practically not subject to aging, for example, the size of the quantum in digital devices and the quantization error determined by it.

    The change in the MX of measuring instruments over time is due to aging processes in their components and elements caused by interaction with the external environment. These processes occur mainly at the molecular level and do not depend on whether the measuring instrument is in operation or in storage. Consequently, the main factor determining the aging of measuring instruments is the calendar time that has passed since their manufacture, i.e. their age. The rate of aging depends primarily on the materials and technologies used. Research has shown that the irreversible processes that change the error occur very slowly, and in most cases it is impossible to record these changes during an experiment. In this regard, various mathematical methods are used, on the basis of which models of error change are built and metrological failures are predicted.

    The problem solved when determining the metrological reliability of measuring instruments is to find the initial changes in MX and to construct a mathematical model that extrapolates the results obtained over a large time interval. Since the change in MX over time is a random process, the main tool for constructing mathematical models is the theory of random processes.

    The change of the SI error in time is a non-stationary random process. A set of its realizations is shown in Fig. 1 in the form of curves of the error modulus. At each moment t_i they are characterized by a certain probability density distribution law p(Δ, t_i) (curves 1 and 2 in Fig. 2, a). In the center of the band (the curve Δ_cp(t)) the highest density of errors is observed, which gradually decreases towards the boundaries of the band, theoretically tending to zero at an infinite distance from the center. The upper and lower boundaries of the SI error band can therefore only be presented as quantile boundaries, within which most of the realized errors are contained with confidence probability P. Outside these boundaries, with probability (1 − P)/2, lie the errors most distant from the center of the realizations.

    To apply a quantile description of the boundaries of the error band in each of its sections t_i, it is necessary to know the estimates of the mathematical expectation Δ_cp(t_i) and the standard deviation σ(t_i) of the individual realizations. The error value at the boundaries in each section t_i is equal to

    Δ_r(t_i) = Δ_cp(t_i) ± k·σ(t_i),

    where k is a quantile factor corresponding to a given confidence probability P, the value of which depends significantly on the type of the error distribution law across the sections. It is practically impossible to determine the type of this law when studying SI aging processes, because the distribution laws can undergo significant changes over time.

    A metrological failure occurs when the error curve crosses the permissible limit ±Δ_pr. Failures can occur at various times in the range from t_min to t_max (see Fig. 2, a), these moments being the points where the 5% and 95% quantiles intersect the permissible error line. When the 95% quantile curve of Δ(t) reaches the permissible limit, 5% of the devices have already experienced a metrological failure. The distribution of the moments of occurrence of such failures is characterized by the probability density p_H(t) shown in Fig. 2, b. Thus, as a model of the non-stationary random process of change of the SI error modulus in time, it is advisable to use the time dependence of the 95% quantile of this process.

    Indicators of accuracy, metrological reliability and stability of an SI correspond to various functionals built on the trajectories of change of its MX Δ(t). The accuracy of the SI is characterized by the MX value at the considered moment in time, and for a set of measuring instruments by the distribution of these values, represented by curve 1 for the initial moment and curve 2 for the moment t_i. Metrological reliability is characterized by the distribution of the times of occurrence of metrological failures (see Fig. 2, b). SI stability is characterized by the distribution of MX increments over a given time.
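    As a rough numeric illustration of the quantile description above, the sketch below assumes a hypothetical linear drift of the mean error and a linearly growing spread across instruments, and finds the moment when the 95% quantile boundary reaches an assumed permissible limit. All numbers are illustrative, not data from the document.

    import numpy as np

    # Hypothetical aging model: the mean error drifts linearly with time and
    # the spread across instruments grows linearly as well (time in years).
    t = np.linspace(0.0, 10.0, 1001)
    delta_mean = 0.02 + 0.010 * t      # centre of the error band, delta_cp(t)
    sigma = 0.005 + 0.002 * t          # standard deviation across instruments
    k = 1.645                          # quantile factor for a 95% level
    delta_limit = 0.10                 # permissible error limit (assumed)

    # 95% quantile boundary of the error band: delta_cp(t) + k*sigma(t)
    delta_95 = delta_mean + k * sigma

    # The first moment at which the 95% quantile crosses the permissible limit
    # is taken as the predicted time of metrological failure for 5% of devices.
    crossed = np.nonzero(delta_95 >= delta_limit)[0]
    if crossed.size:
        print(f"predicted failure time: {t[crossed[0]]:.2f} years")
    else:
        print("no metrological failure predicted within the modelled interval")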

    3.3 Development of models for standardization of metrological characteristics

    The MX standardization system is based on the principle of adequacy of the measurement error estimate and its actual value, provided that the estimate actually found is an estimate “from above.” The last condition is explained by the fact that an estimate “from below” is always more dangerous, since it leads to greater damage from unreliable measurement information.

    This approach is quite understandable, given that exact standardization of MX is impossible because of the many influencing factors that are not taken into account (either because they are unknown or because there is no tool to identify them). Therefore, standardization is, to a certain extent, an act of will in reaching a compromise between the desire for a full description of the measurement characteristics and the ability to carry this out in real conditions under the known experimental and theoretical limitations and the requirements of simplicity and clarity of engineering methods. In other words, overly complex methods of describing and normalizing MX are not viable.

    The consumer receives information about typical MX from the technical documentation for the SI and only in extremely rare, exceptional cases independently conducts an experimental study of the individual characteristics of the SI. Therefore, it is very important to know the relationship between the MX of the SI and the instrumental measurement errors. This would allow, knowing the MX complex of the SI, to find the measurement error directly, eliminating one of the most time-consuming and complex tasks: summing up the components of the total measurement error. However, this is hampered by one more circumstance: the difference between the MX of a particular SI and the metrological properties of the set of SI of the same type. For example, the systematic error of a given SI is a deterministic quantity, but for a set of SI it is a random quantity. The NMX complex must be established based on the requirements of the real operating conditions of specific measuring instruments. On this basis, it is advisable to divide all SI into functional groups. For the first and third groups of SI, the characteristics of interaction with devices connected to the input and output of the SI, and the non-informative parameters of the output signal, should be normalized. In addition, for the third group the nominal transformation function f_nom(x) must be normalized (in SI of the second group it is replaced by a scale or other calibrated reading device), as well as the full dynamic characteristics. The indicated characteristics do not make sense for SI of the second group, with the exception of recording instruments, for which it is advisable to normalize complete or partial dynamic characteristics.

    The most common forms of recording the SI accuracy class are (a numeric sketch of forms 1 and 2 is given after this list):

    1) δ = ±[c + d·(x_k/|x| − 1)], where c and d are constant coefficients according to formula (3.6); x_k is the final value of the measurement range; x is the current value;

    2) δ = ±[a + b·(x_k/|x|)], where b = d; a = c − b;

    3) symbolic notation, characteristic of foreign SI, δ_op = ± …
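    The small sketch below, using hypothetical coefficients c and d, shows that the first two notations give the same permissible relative error at every point of the range; the range limit x_k and the test points are assumptions for illustration.

    # Hypothetical accuracy-class coefficients c/d (e.g. class 0.05/0.02):
    c, d = 0.05, 0.02          # per cent
    b, a = d, c - d            # equivalent coefficients of the second form
    x_k = 100.0                # final value of the measurement range (assumed)

    def delta_form1(x):
        """Permissible relative error, %: c + d*(x_k/|x| - 1)."""
        return c + d * (x_k / abs(x) - 1.0)

    def delta_form2(x):
        """The same limit written as a + b*(x_k/|x|)."""
        return a + b * (x_k / abs(x))

    for x in (10.0, 50.0, 100.0):
        print(f"x = {x:6.1f}  form 1: {delta_form1(x):.3f}%  form 2: {delta_form2(x):.3f}%")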

    GOST 8.009-84 provides two main models (M I and M II) for the formation of NMX complexes, corresponding to two models of the occurrence of SI errors based on the statistical combination of these errors.

    Model II is applicable for SI whose random error component can be neglected. This model involves calculating the largest possible values of the SI error components in order to guarantee, with probability P = 1, that the SI error does not go beyond the calculated limits. Model II is used for the most critical measurements related to technical and economic factors, possible catastrophic consequences, threats to human health, etc. When the number of components exceeds three, this model gives a rougher (due to the inclusion of rarely occurring components), but reliable estimate "from above" of the main SI error.

    Model I gives a rational estimate of the main SI error with probability P < 1, because rarely realized error components are neglected.

    Thus, the NMX complex for error models I and II provides for the statistical integration of individual error components, taking into account their significance.

    However, for some SI such statistical combination is impractical. These are precision laboratory and industrial (process) measuring instruments that measure slowly changing quantities under conditions close to normal, as well as reference measuring instruments whose use does not involve repeated observations with averaging of results. For such instruments, the main error, or the arithmetic sum of the largest possible values of the individual error components, can be taken as the instrumental error (model III).

    Arithmetic summation of the largest values ​​of error components is possible if the number of such components is no more than three. In this case, the estimate of the total instrumental error will practically not differ from the statistical summation.

    4. CLASSIFICATION OF SIGNALS

    A signal is a material carrier of information that represents a certain physical process, one of the parameters of which is functionally related to the physical quantity being measured. This parameter is called informative.

    A measuring signal is a signal containing quantitative information about the physical quantity being measured. The basic concepts, terms and definitions in the field of measuring signals are established by GOST 16465-70 "Radio signals. Terms and definitions". Measuring signals are extremely varied. Their classification according to various criteria is shown in Fig. 3.

    Based on the nature of measuring informative and time parameters, measuring signals are divided into analog, discrete and digital.

    An analog signal is a signal described by a continuous or piecewise continuous function Y_a(t); both this function itself and its argument t can take any values within the given intervals Y ∈ (Y_min; Y_max) and t ∈ (t_min; t_max).

    A discrete signal is a signal that varies discretely in time or in level. In the first case it exists only at discrete moments of time nT, where T = const is the sampling interval (period) and n = 0, 1, 2, ... is an integer, and it can take any values Y_d(nT) ∈ (Y_min; Y_max), called samples (readings). Such signals are described by lattice functions. In the second case, the values of the signal Y_a(t) exist at any time t ∈ (t_min; t_max), but they can take only a limited set of values h_i = n·q that are multiples of the quantum q.

    Digital signals are signals Y_c(nT) that are quantized in level and discrete in time. They are described by quantized lattice functions (quantized sequences), which at discrete moments of time nT take only a finite series of discrete values, the quantization levels h_1, h_2, ..., h_n.
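    The distinction between analog, discrete and digital signals can be illustrated with the following sketch, where a sine wave (its frequency, sampling period and quantum are chosen arbitrarily for illustration) is first sampled with period T and then quantized with quantum q:

    import numpy as np

    # A "continuous" signal is represented here on a dense time grid; the
    # discrete signal is its samples taken with period T, and the digital
    # signal is those samples additionally quantized to multiples of q.
    f = 5.0                                    # signal frequency, Hz (assumed)
    t_dense = np.linspace(0.0, 1.0, 10_000)    # stand-in for continuous time
    y_analog = np.sin(2 * np.pi * f * t_dense)

    T = 0.01                                   # sampling period (assumed)
    n = np.arange(0, int(1.0 / T))
    y_discrete = np.sin(2 * np.pi * f * n * T)    # samples Y(nT)

    q = 2.0 / 2**8                             # quantum of an 8-bit converter over +/-1
    y_digital = q * np.round(y_discrete / q)   # level quantization -> digital signal

    print(y_discrete[:5])
    print(y_digital[:5])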

    According to the nature of changes over time, signals are divided into constants, whose values ​​do not change over time, and variables, whose values ​​change over time. Constant signals are the simplest type of measuring signals.

    Variable signals can be continuous in time or pulsed. A signal whose parameters change continuously is called continuous. A pulse signal is a signal of finite energy, significantly different from zero during a limited time interval commensurate with the time of completion of the transient process in the system to which this signal is intended to influence.

    According to the degree of availability of a priori information, variable measuring signals are divided into deterministic, quasi-deterministic and random. A deterministic signal is a signal whose law of change is known, and the model does not contain unknown parameters. The instantaneous values ​​of a deterministic signal are known at any time. The signals at the output of the measures are deterministic (with a certain degree of accuracy). For example, the output signal of a low-frequency sine wave generator is characterized by the amplitude and frequency values ​​​​that are set on its controls. The errors in setting these parameters are determined by the metrological characteristics of the generator.

    Quasi-deterministic signals are signals with a partially known nature of change over time, i.e. with one or more unknown parameters. They are most interesting from a metrological point of view. The vast majority of measurement signals are quasi-deterministic.

    Deterministic and quasi-deterministic signals are divided into elementary, described by simple mathematical formulas, and complex. Elementary signals include constant and harmonic signals, as well as signals described by the unit and delta functions.

    Signals can be periodic or non-periodic. Non-periodic signals are divided into almost periodic and transient. An almost periodic signal is one whose values are approximately repeated when a properly selected number, the almost-period, is added to the time argument. A periodic signal is a special case of such signals. Almost periodic functions are obtained by adding periodic functions with incommensurable periods, for example Y(t) = sin(ωt) − sin(√2·ωt). Transient signals describe transient processes in physical systems.

    A signal is called periodic if its instantaneous values repeat at a constant time interval. The period T of the signal is a parameter equal to the smallest such time interval. The frequency f of a periodic signal is the reciprocal of the period.

    A periodic signal is characterized by its spectrum. There are three types of spectrum (a numeric sketch of computing them is given after this list):

    * complex: a complex-valued function of a discrete argument, defined at frequencies that are integer multiples of the frequency f of the periodic signal Y(t);

    * amplitude: a function of a discrete argument equal to the modulus of the complex spectrum of the periodic signal;

    * phase: a function of a discrete argument equal to the argument of the complex spectrum of the periodic signal.
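    A minimal sketch of computing these spectra for a simple periodic signal via the discrete Fourier transform; the signal composition and period are assumptions made only for illustration.

    import numpy as np

    # One period of a periodic signal sampled at N points; the DFT then gives
    # the complex spectrum at multiples of the fundamental frequency
    # f = 1/T_period, from which the amplitude and phase spectra follow.
    T_period = 0.02                          # period, s (50 Hz fundamental, assumed)
    N = 1000
    t = np.arange(N) * T_period / N
    y = 1.0 * np.sin(2 * np.pi * 50 * t) + 0.3 * np.sin(2 * np.pi * 150 * t + 0.5)

    spectrum = np.fft.rfft(y) / N * 2        # complex spectrum (one-sided, scaled)
    freqs = np.fft.rfftfreq(N, d=T_period / N)
    amplitude = np.abs(spectrum)             # amplitude spectrum
    phase = np.angle(spectrum)               # phase spectrum

    # Print the two strongest spectral lines (the assumed 50 Hz and 150 Hz components).
    for k in np.argsort(amplitude)[-2:][::-1]:
        print(f"f = {freqs[k]:6.1f} Hz  A = {amplitude[k]:.3f}  phi = {phase[k]:+.3f} rad")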

    A measuring system, by definition, is designed to perceive, process and store measurement information in the general case of heterogeneous physical quantities through various measuring channels (IC). Therefore, calculating the error of a measuring system comes down to estimating the errors of its individual channels.

    The resulting relative error of the IC is equal to

    δ(x) = δ_e + δ_b·(x_P/|x| − 1),

    where x is the current value of the measured quantity;

    x_P is the limit of the given channel measurement range, at which the relative error is minimal;

    δ_b, δ_e are the relative errors calculated at the beginning and end of the range, respectively.

    An IC is a chain of various sensing, converting and recording links.
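    A minimal sketch of evaluating the resulting relative error of a channel over its range, assuming the two-term form reconstructed above; the coefficient values and the range limit are hypothetical.

    # Sketch of the resulting relative error of a measuring channel over its
    # range: the error is smallest at the range limit x_P and grows towards
    # the beginning of the range. All numbers are illustrative assumptions.
    x_P = 100.0          # range limit at which the relative error is minimal
    delta_end = 0.5      # relative error at the end of the range, % (assumed)
    delta_begin = 0.2    # coefficient tied to the beginning of the range, % (assumed)

    def channel_relative_error(x):
        """Relative error of the channel, %, for the current value x."""
        return delta_end + delta_begin * (x_P / abs(x) - 1.0)

    for x in (10.0, 50.0, 100.0):
        print(f"x = {x:6.1f}   delta = {channel_relative_error(x):.2f}%")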

    5. Channel development

    5.1 Development of a channel model

    In real data transmission channels, the signal is affected by complex interference and it is almost impossible to give a mathematical description of the received signal. Therefore, when studying signal transmission through channels, idealized models of these channels are used. A data transmission channel model is understood as a description of a channel that allows one to calculate or evaluate its characteristics, on the basis of which one can explore various ways of constructing a communication system without direct experimental data.

    The model of a continuous channel is the so-called Gaussian channel. The noise in it is additive and represents an ergodic normal process with zero mathematical expectation. The Gaussian channel reflects quite well only the channel with fluctuation noise. For multiplicative interference, a channel model with a Rayleigh distribution is used. For impulse noise, a channel with a hyperbolic distribution is used.
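    A minimal sketch of the Gaussian channel model, with an assumed test signal and noise level: zero-mean normal noise is added to the transmitted signal and the resulting signal-to-noise ratio is estimated.

    import numpy as np

    # Gaussian channel sketch: the received signal is the transmitted signal
    # plus additive, zero-mean normal (fluctuation) noise.
    rng = np.random.default_rng(0)

    t = np.linspace(0.0, 1.0, 1000)
    sent = np.sin(2 * np.pi * 5 * t)               # transmitted signal (assumed)
    noise = rng.normal(loc=0.0, scale=0.2, size=t.size)
    received = sent + noise                        # additive Gaussian channel

    snr_db = 10 * np.log10(np.mean(sent**2) / np.mean(noise**2))
    print(f"signal-to-noise ratio: {snr_db:.1f} dB")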

    The discrete channel model coincides with the models of error sources.

    A number of mathematical models of error distribution in real communication channels have been proposed, such as those of Gilbert, Mertz, and Mandelbrot.

    5.2 Development of a measuring channel model

    Previously, measuring equipment was designed and manufactured mainly in the form of separate instruments designed to measure one or several physical quantities. Currently, conducting scientific experiments, automation of complex production processes, control, and diagnostics are unthinkable without the use of measurement information systems (MIS) of various purposes, which make it possible to automatically obtain the necessary information directly from the object being studied, process it and issue it in the required form. Specialized measuring systems are being developed for almost all areas of science and technology.

    When designing an IIS according to given technical and operational characteristics, a task arises related to the choice of a rational structure and a set of technical means for its construction. The structure of the information system is mainly determined by the measurement method on which it is based, and the number and type of technical means by the information process occurring in the system. An assessment of the nature of the information process and types of information transformation can be made based on an analysis of the information system information model, but its construction is a rather labor-intensive process, and the model itself is so complex that it makes it difficult to solve the problem.

    Because in third-generation IIS information processing is carried out mainly by universal computers, which are a structural component of the IIS and are selected from a limited number of serial computers when the IIS is designed, the information model of the IIS can be simplified by reducing it to a model of a measuring channel (IC). All measuring channels of the IIS, which cover the elements of the information processes from receiving information from the object of study or control to its display, processing and storage, contain a certain limited number of types of information transformation. By combining all types of information conversion in one measuring channel and isolating the latter from the IIS, and also keeping in mind that analog signals always act at the input of the measuring system, we obtain two models of measuring channels: with direct (Fig. 4, a) and reverse (Fig. 4, b) transformations of measurement information.

    On the models, in nodes 0 - 4, information is converted. The arrows indicate the direction of information flows, and their letter designations indicate the type of transformation.

    Node 0 is the output of the research or control object, at which analog information A is generated, which determines the state of the object. Information A arrives at node 1, where it is converted to the form A_n for further transformations in the system. In node 1, conversion of a non-electrical information carrier into an electrical one, amplification, scaling, linearization, etc. can be carried out, i.e., normalization of the parameters of the information carrier A.

    In node 2, the normalized information carrier A_n is modulated for transmission over the communication line and is presented in the form of an analog A_n or discrete D_m signal.

    Analog information A_n is demodulated in node 3_1 and sent to node 4_1, where it is measured and displayed.

    Fig.4 Model of the measuring channel of direct (a) and reverse (b) transformations of measuring information

    Discrete information in node 3_2 is either converted into analog information A_n and enters node 4_1, or, after code conversion, is sent to a digital information display device or to a device for processing it.

    In some ICs, the normalized information carrier A_n from node 1 goes immediately to node 4_1 for measurement and display. In other ICs, analog information A, without the normalization operation, enters node 2 directly, where it is sampled.

    Thus, the information model (Fig. 4, a) has six branches through which information flows are transmitted: analog branches 0-1-2-3_1-4_1 and 0-1-4_1, and analog-discrete branches 0-1-2-3_2-4_1, 0-1-2-3_2-4_2 and 0-2-3_2-4_1, 0-2-3_2-4_2. Branch 0-1-4_1 is not used when constructing measuring channels of the IIS, but only in autonomous measuring instruments, and therefore is not shown in Fig. 4, a.

    The model shown in Fig. 4, b differs from the model in Fig. 4, a only in the presence of the branches 3_2-1'-0, 3_1-1'-0, 3_2-1'-1 and 3_1-1'-1, through which reverse transmission of the analog information carrier A_n is carried out. In node 1', the output discrete information carrier is converted into a signal A' homogeneous with the input information carrier A or with the normalized information carrier A_n. Compensation can be made both with respect to A and with respect to A_n.

    Analysis of the information models of the measuring channels of the IIS showed that when they are constructed on the basis of the direct conversion method, only five variants of structures are possible, whereas when measurement methods with reverse (compensating) information conversion are used, 20 variants are possible.

    In most cases (especially when constructing an IIS for remote objects), the generalized information model of the IC of the IIS has the form shown in Fig. 4, a. The most common are the analog-discrete branches 0-1-2-3_2-4_2 and 0-2-3_2-4_2. As can be seen, for the indicated branches the number of levels of information conversion in the IC does not exceed three.

    Since the nodes contain technical means that transform information, taking into account the limited number of transformation levels, they can be combined into three groups. This will allow, when developing an IC IIS, to select the necessary technical means to implement a particular structure. The group of technical means of node 1 includes the entire set of primary measuring transducers, as well as unifying (normalizing) measuring transducers (UMTs) that perform scaling, linearization, power conversion, etc.; blocks of test formation and exemplary measures.

    In node 2, if there are analog-discrete branches, there is another group of measuring instruments: analog-to-digital converters (ADC), switches (CM), which serve to connect the corresponding source of information to the IR or processing device, as well as communication channels (CC).

    The third group (node 3) combines code converters (PCs), digital-to-analog converters (DACs) and delay lines (DLs).

    The given IC structure, which implements the direct measurement method, is shown without the switching element and ADC connections that control the operation. It is standard, and most multi-channel IMS are built on its basis, especially long-range IMS.

    Of interest are the methods for calculating IC for the various information models discussed above. A strict mathematical calculation is impossible, but using simplified methods of approach to determining the components of the resulting error, parameters and distribution laws, specifying the value of the confidence probability and taking into account the correlations between them, it is possible to create and calculate a simplified mathematical model of a real measuring channel. Examples of calculating the error of channels with analog and digital recorders are considered in the works of P.V. Novitsky.
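    In the spirit of such simplified methods, the sketch below combines illustrative relative errors of the serially connected links of a typical channel (primary transducer, normalizing converter, switch, ADC) in two ways: as an arithmetic sum (an upper bound) and as a root-sum-square for independent components. The link names and values are assumptions, not data from the document.

    import math

    # Illustrative relative errors of the links of one measuring channel, %.
    link_errors = {
        "primary transducer":    0.25,
        "normalizing converter": 0.10,
        "switch":                0.05,
        "ADC":                   0.15,
    }

    # Upper-bound estimate: arithmetic sum of the largest possible values.
    upper_bound = sum(link_errors.values())

    # Statistical estimate for independent components: root-sum-square.
    statistical = math.sqrt(sum(v**2 for v in link_errors.values()))

    print(f"arithmetic sum : {upper_bound:.2f}%")
    print(f"root-sum-square: {statistical:.2f}%")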



    Classification of measuring instruments

    Measuring instruments and their characteristics

    The concept of a measuring instrument was presented in paragraph 1.2 as one of the fundamental concepts of metrology. It was noted that a measuring instrument (MI) is a special technical device that stores a unit of quantity, allows one to compare the measured quantity with its unit and has standardized metrological characteristics, i.e. characteristics that influence the results and accuracy of measurements.

    Let us classify SI according to the following criteria:

    § according to the method of implementing the measuring function;

    § by design;

    § for metrological purposes.

    According to the method of implementing the measuring function, all measuring instruments can be divided into two groups:

    § reproducing the value of a given (known) size (for example: weight - mass; ruler - length; normal element - emf, etc.);

    § generating a signal (indication) that carries information about the value of the measured quantity.

    The classification of measuring instruments by design is shown in the diagram in Fig. 4.1.

    Measure – a measuring instrument in the form of a body or device designed to reproduce a physical quantity of one or several sizes, whose values are known with the accuracy necessary for measurement. The measure is the basis of measurement. The fact that in many cases measurements are made using measuring instruments or other devices does not change anything, since many of them include measures, while others are calibrated against measures; their scales can be thought of as a stored set of measures. Finally, there are measuring instruments (for example, pan scales) that can only be used together with measures.


    Fig. 4.1. Classification of measuring instruments by design.

    Measuring device – a measuring instrument designed to generate a signal of measurement information in a form accessible to direct perception by an observer. Depending on the form of information presentation, analog and digital devices are distinguished. Analog instruments are those whose readings are a continuous function of the quantity being measured, for example a pointer instrument, a liquid-in-glass thermometer, etc.

    Figure 4.2 shows a generalized block diagram of a measuring device with a pointer indicating device.

    A mandatory element of the measuring device is the reading device – the part of the design of a measuring instrument intended for reading the value of the measured quantity. The reading device of a digital measuring device is a digital display.


    Fig. 4.2. Block diagram of the measuring device.

    The reading device of an analog measuring instrument usually consists of a pointer and a scale. The scale has an initial and final value, within which the range of readings is located (Fig. 4.3).


    Fig. 4.3. Reading device of an analog indicating device.

    Measuring setup – a set of functionally integrated measuring instruments in which one or more measuring devices are used to convert the measured value into a signal.

    The measuring installation may include measuring instruments, measures, converters, as well as auxiliary devices, regulators, and power supplies.

    Measuring system – a set of measuring instruments and auxiliary devices, interconnected by communication channels, designed to generate measurement information signals in a form convenient for automatic processing, transmission and use in monitoring and control systems.

    Measuring transducer – a measuring instrument designed to convert measurement information signals from one type to another. Depending on the types of input and output signals, measuring transducers are divided into:

    § primary transducers or sensors;

    § secondary converters.

    Primary converter (sensor) – a measuring transducer to whose input the measured physical quantity is supplied. The primary transducer is the first in the measuring chain.

    The output signal of the primary transducer cannot be directly perceived by the observer. To transform it into a form accessible to direct observation, another stage of transformation is necessary. An example of a primary transducer is a resistance thermometer, which converts temperature into electrical resistance of a conductor. Another example of a primary converter is the orifice of variable pressure flow meters, which converts flow into differential pressure.

    Secondary device – a converter whose input is supplied with the output signal of the primary or normalizing converter. The output signal of the secondary device, like the signal of the measuring device, is available for direct perception by the observer. The secondary device closes the measuring circuit.

    Normalizing converter – an intermediate converter installed between the primary converter and the secondary device in case of a mismatch between the output signal of the primary converter and the input signal of the secondary device. An example of a normalizing converter is a normalizing bridge, which converts an information signal of variable resistance into a unified DC signal of 0-5 mA or 0-20 mA.

    The use of such normalizing converters allows the use of unified milliammeters for all measured physical quantities as secondary devices, which improves the ergonomic qualities and design of control panels.
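    A minimal sketch of such a normalizing (scale) conversion, assuming an arbitrary input range and the unified 0-5 mA output mentioned above; the temperature range is a hypothetical example, not taken from the document.

    # Sketch of a normalizing conversion: a primary transducer output in its
    # native units is mapped linearly onto a unified 0-5 mA current signal.
    x_min, x_max = 0.0, 200.0      # measured temperature range, deg C (assumed)
    i_min, i_max = 0.0, 5.0        # unified current signal, mA

    def normalize(x):
        """Linear scaling of the primary signal onto the unified 0-5 mA range."""
        x_clipped = min(max(x, x_min), x_max)
        return i_min + (i_max - i_min) * (x_clipped - x_min) / (x_max - x_min)

    print(normalize(50.0))    # -> 1.25 mA
    print(normalize(200.0))   # -> 5.0 mA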

    Scale converter – a measuring transducer that serves to change, by a certain factor, the value of one of the quantities acting in the circuit of the measuring device without changing its physical nature. Examples are voltage and current measuring transformers, measuring amplifiers, etc.

    According to their metrological purpose, all measuring instruments are divided into standards and working measuring instruments. The classification of measuring instruments by metrological purpose was given in detail in paragraph 2.2. “The procedure for transferring the sizes of units of physical quantities.”


    Fig. 4.4. Static and dynamic characteristics of measuring instruments.

    As noted above, measurements are divided into static and dynamic. Let us consider the metrological properties of measuring instruments that characterize the result of measuring constant and time-varying quantities. Figure 4.4 shows the classification of characteristics that reflect these properties.

    The static characteristic of a measuring instrument is the functional relationship between the output quantity y and the input quantity x in the steady state: y = f(x). This dependence is also called the instrument scale equation or the calibration characteristic of the instrument or converter. The static characteristic can be specified:

    Analytically;

    Graphically;

    In the form of a table.

    In the general case, the static characteristic is described by the dependence:

    y = y_n + S·(x – x_n), (4.1)

    where y_n, x_n are the initial values of the output and input quantities; y, x are the current values of the output and input quantities; S is the sensitivity of the measuring instrument.

    The error of a measuring instrument (Δ) is the difference between the SI reading and the true (actual) value of the measured physical quantity. The error and its various components are the main standardized characteristics of the SI.

    The sensitivity of a measuring instrument (S) is a property that can be quantitatively defined as the limit of the ratio of the increment of the output quantity Δy to the increment of the input quantity Δx:

    S = lim(Δx → 0) Δy/Δx = dy/dx.

    Figure 4.5 shows examples of static characteristics of measuring instruments: a) and b) – linear, c) – nonlinear. The linearity of the static characteristic is an important property of the measuring instrument, ensuring ease of use.

    Nonlinearity of the static characteristic, especially for a technical measuring instrument, is allowed only when it is due to the physical principle of transformation.

    It should be noted that for most measuring instruments, especially for primary transducers, the static characteristic can be considered linear only within the required accuracy of the measuring instrument.

    The linear static characteristic has a constant sensitivity, independent of the value of the measured quantity. In the case of a linear static characteristic, the sensitivity can be determined by the formula:

    S = (y_k – y_n)/(x_k – x_n) = y_d/x_d,

    where y_k, x_k are the final values of the output and input quantities; y_d = y_k – y_n is the range of output signal variation; x_d = x_k – x_n is the range of input signal variation.


    Fig. 4.5. Static characteristics of measuring instruments: a), b) – linear; c) – nonlinear.
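    A small sketch of the linear static characteristic (4.1) and of the sensitivity computed from the input and output ranges; the 4-20 mA output range and the input range are assumptions made only for illustration.

    # Linear static characteristic y = y_n + S*(x - x_n) and sensitivity
    # computed from the ranges of the input and output signals.
    x_n, x_k = 0.0, 100.0       # input range (assumed units)
    y_n, y_k = 4.0, 20.0        # output range, e.g. a 4-20 mA signal (assumed)

    S = (y_k - y_n) / (x_k - x_n)      # sensitivity of a linear characteristic

    def static_characteristic(x):
        """Output of the instrument in the steady state, equation (4.1)."""
        return y_n + S * (x - x_n)

    print(S)                            # 0.16 mA per unit of x
    print(static_characteristic(25.0))  # 8.0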

    Measuring range– range of values ​​of the measured quantity, within which the permissible error limits of measuring instruments are normalized. The measuring range of a meter is always less than or equal to the reading range.

    The concept of transmission coefficient applies to individual elements of measuring systems that perform the functions of directional transmission, scaling or normalization of measuring signals.

    The transfer coefficient (k) is the ratio of the output quantity y to the input quantity x, i.e. k = y/x. The transfer coefficient, as a rule, has a constant value at any point in the converter range, and the listed types of converters (scaling, normalizing) have a linear static characteristic.

    The dynamic characteristic is the functional dependence of the readings of a measuring instrument on the change of the measured quantity at each moment of time, i.e. y(t) = f[x(t)].

    The deviation of the output quantity y(t) from the input quantity x(t) in the dynamic mode is shown in Fig. 4.6 for different laws of variation of the input quantity over time.

    The dynamic error of a measuring instrument is defined as

    Δy(t) = y(t) – k·x(t), (4.4)

    where k·x(t) is the output value of a dynamically "ideal" converter.

    The dynamic mode of a wide class of measuring instruments is described by linear inhomogeneous differential equations with constant coefficients. The dynamic properties of measuring instruments in thermal power engineering are most often modeled by a first-order dynamic link (aperiodic link):

    T·dy(t)/dt + y(t) = k·x(t), (4.5)

    where T is the conversion time constant, which characterizes how long it takes the output signal y(t) to settle to a steady value after a step change in the input quantity x(t).
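
    As an illustration (with assumed values k = 1 and T = 28 s), the Python sketch below evaluates the analytic step response of the aperiodic link (4.5), y(t) = k·X_a·(1 – e^(–t/T)), and the corresponding dynamic error (4.4); it merely shows that at t = T the output has reached about 63 % of its steady value.

        import math

        k, T = 1.0, 28.0      # assumed static gain and time constant, s
        X_a = 500.0           # assumed amplitude of the input step

        def step_response(t):
            """Analytic solution of (4.5) for a step input applied at t = 0."""
            return k * X_a * (1.0 - math.exp(-t / T))

        def dynamic_error(t):
            """Dynamic error (4.4): deviation from the ideal converter output k*X_a."""
            return step_response(t) - k * X_a

        print(step_response(T) / (k * X_a))   # ~0.632: 63 % of the steady value at t = T
        print(dynamic_error(T))               # ~-184: the output still lags the ideal value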

    Fig. 4.6. Deviation of the output quantity from the input quantity in the dynamic mode

    To describe the dynamic properties of measuring instruments, transient characteristics are used. The transient response is the response of a dynamic system to a unit step action. In practice, step inputs of arbitrary magnitude are used:

    x(t) = 0 for t < 0; x(t) = X_a for t ≥ 0. (4.6)

    The step response h(t) is related to the response y(t) of a linear dynamic system to a real, non-unit step action by the relation:

    h(t) = y(t)/X_a. (4.7)

    The transient response describes the inertia of the measurement, which causes delay and distortion of the output signal. The transient response can have an aperiodic or an oscillatory form.

    The dynamic characteristics of a linear measuring instrument do not depend on the value and sign of the step disturbance, and the transient characteristics measured experimentally at different values ​​of the step disturbance must coincide. If experiments with stepwise disturbances of different magnitude and sign lead to unequal quantitative and qualitative results, then this indicates the nonlinearity of the measuring instrument being studied.
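
    As a sketch of the linearity check described above (all numerical values assumed), the responses of the same first-order model to two steps of different amplitude are normalized according to (4.7); for a linear system the resulting transient characteristics h(t) coincide.

        import math

        k, T = 1.0, 28.0                           # assumed link parameters

        def response(t, X_a):
            """First-order link response to a step of amplitude X_a."""
            return k * X_a * (1.0 - math.exp(-t / T))

        for t in (5.0, 28.0, 100.0):
            h_small = response(t, 10.0) / 10.0     # h(t) from a small step, eq. (4.7)
            h_large = response(t, 500.0) / 500.0   # h(t) from a large step
            print(round(h_small, 4), round(h_large, 4))   # identical pairs -> linear behaviour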

    The dynamic characteristics of measuring instruments that describe the response of a measuring instrument to harmonic influences over a wide frequency range are called frequency characteristics; these include the amplitude-frequency and phase-frequency characteristics.

    When frequency characteristics are determined experimentally, harmonic (for example, sinusoidal) oscillations from a generator are applied to the input of the measuring instrument:

    x(t) = A_x·sin(ωt + φ_x). (4.8)

    If the measuring instrument under study is a linear dynamic system, then the oscillations of the output value in steady state will also be sinusoidal (see Fig. 4.6, c):

    y(t) = A_y·sin(ωt + φ_y), (4.9)

    where φ_x, φ_y are the initial phases of the input and output oscillations, rad; ω is the angular frequency, rad/s.

    The amplitude of the output oscillations and the phase shift depend on the properties of the measuring instrument and the frequency of the input oscillations.

    The dependence A(ω), which shows how the ratio of the amplitude of the output oscillations A_y(ω) of a linear dynamic system to the amplitude of the input oscillations A_x(ω) changes with frequency, is called the amplitude-frequency response (AFC) of the system:

    A(ω) = A_y(ω)/A_x(ω). (4.10)

    The frequency dependence of the phase shift between input and output oscillations is called the phase-frequency response (PFC) of the system:

    φ(ω) = φ_y(ω) – φ_x(ω). (4.11)

    Frequency characteristics are determined both experimentally and theoretically, using a differential equation that describes the relationship between the output and input signal (4.5). The procedure for obtaining frequency characteristics from the differential equation of a linear system is described in detail in the literature on the theory of automatic control.

    Figure 4.7 shows typical frequency characteristics of a measuring instrument whose dynamic properties correspond to the first-order linear differential equation (4.5). As the frequency of the input signal increases, such a measuring instrument attenuates the amplitude of the output signal and increases the phase shift of the output signal relative to the input signal, which leads to an increase in the dynamic error.
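
    For the aperiodic link (4.5) the frequency characteristics have the well-known closed form A(ω) = k/√(1 + (ωT)²) and φ(ω) = –arctan(ωT); the short Python sketch below (k and T assumed as before) evaluates them at a few frequencies and confirms the behaviour described above.

        import math

        k, T = 1.0, 28.0                       # assumed gain and time constant, s

        def afc(w):
            """Amplitude-frequency response (4.10) of the first-order link (4.5)."""
            return k / math.sqrt(1.0 + (w * T) ** 2)

        def pfc(w):
            """Phase-frequency response (4.11) of the first-order link, rad."""
            return -math.atan(w * T)

        for w in (0.001, 1.0 / T, 0.1, 1.0):   # angular frequencies, rad/s
            print(w, round(afc(w), 3), round(math.degrees(pfc(w)), 1))
        # at w = 1/T the amplitude ratio has dropped to ~0.707*k and the phase lag is 45 deg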

    Fig. 4.7. Amplitude-frequency (a) and phase-frequency (b) characteristics of a measuring instrument whose dynamic properties correspond to a first-order linear link (aperiodic link).

    Let us show by example how to estimate the dynamic characteristics of measuring instruments whose dynamic properties can be modeled by a first-order linear link.

    Example. Calculation of the time constant T of a thermal receiver.

    Fig. 4.8. Schematic diagram and dynamic characteristics of the thermal receiver

    The thermal inertia of the thermal receiver is due to its slower heating compared with the rapid (stepwise) change in the temperature of the medium, which leads to a lag in the readings of the temperature-measuring device.

    The dynamic error of the thermal receiver is determined from the heat balance

    c·ρ·V·(dt_pr/dτ) = α·S·(t_avg – t_pr),

    where c, ρ, V, S are the specific heat capacity, density, volume and surface area of the thermal receiver; α is the heat transfer coefficient; t_avg and t_pr are the temperatures of the medium and of the temperature sensor.

    The time constant of the thermal receiver is determined from the condition t_pr(T) – t_n = 0.63·(t_avg – t_n) and is equal to

    T = c·ρ·V/(α·S) ≈ c·ρ·δ/α,

    where δ is the wall thickness of the thermal receiver cover (for a thin-walled cover V/S ≈ δ).

    Let it be given: ρ = 7×10³ kg/m³; c = 0.400 kJ/(kg·deg); α = 200 W/(m²·deg); δ = 2.0 mm.

    The estimated time constant is

    T = c·ρ·δ/α = (400 × 7×10³ × 2.0×10⁻³)/200 = 28 s.

    If the temperature of the medium t_avg = 520 °C is measured by an electronic potentiometer with an error of Δ = ±5 °C, then the settling time of the instrument readings t_y is determined from the condition that the dynamic lag of the readings behind t_avg has decreased to the error limit Δ.
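
    A minimal Python sketch of this example follows; the initial sensor temperature t_n = 20 °C is an assumption introduced only to complete the calculation (it is not given above), and the exponential settling model follows from the first-order link (4.5).

        import math

        # Given data of the example
        rho   = 7e3       # density, kg/m^3
        c     = 400.0     # specific heat capacity, J/(kg*deg)
        alpha = 200.0     # heat transfer coefficient, W/(m^2*deg)
        delta = 2.0e-3    # cover wall thickness, m

        T = c * rho * delta / alpha            # time constant, s -> 28.0

        t_avg = 520.0     # medium temperature, deg C
        err   = 5.0       # permissible instrument error, deg C
        t_n   = 20.0      # ASSUMED initial sensor temperature, deg C (not given above)

        # Settling time: the exponential lag (t_avg - t_n)*exp(-t/T) must fall below err
        t_y = T * math.log((t_avg - t_n) / err)
        print(T, round(t_y, 1))                # 28.0 s and ~128.9 s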