What is the Bit Error Rate (BER)?
How is BER measured in practice?
What is Block Error Rate (BLER)?
Network management systems are mandatory elements of modern communication networks. They help with tasks such as network reconfiguration, continuous monitoring of communication system parameters (for example, an inter-network gateway), recording emergency conditions, protective switching, storage and processing of monitoring results, etc. All of these operations are performed, as a rule, automatically using built-in hardware and software.
At the same time, servicing communication networks is often impossible without some manual operations using portable measuring instruments. A classic example is locating and clearing complex faults in metallic communication cables caused by moisture ingress.
The main advantage of digital transmission compared to analog transmission is the absence of accumulating interference along the line. This is achieved by restoring the shape of the transmitted signal at each regeneration section.
All factors defining the length of the section can be divided into internal and external factors.
Line attenuation, intersymbol interference, system clock instability, delay variation, and increased noise levels due to system aging are considered to be the most important internal ones.
Significant external factors usually include transient and impulse noise, external electromagnetic influences, mechanical damage to contacts due to vibration or shock, and deterioration of the properties of the transmitting medium due to temperature changes.
All of them usually predetermine the deterioration of the parameter to which digital transmission is most sensitive: the signal-to-noise ratio. Indeed, a decrease of just 1 dB in this ratio increases the bit error rate (BER), the general quality parameter of digital transmission systems, by at least an order of magnitude.
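As a rough illustration of that sensitivity, here is a minimal sketch that evaluates the theoretical BER at two Eb/N0 values one decibel apart. It assumes binary antipodal signaling (BPSK/NRZ) over an additive white Gaussian noise channel; this is an idealized textbook model, not a claim about any specific transmission system.

```python
import math

def ber_bpsk_awgn(ebn0_db: float) -> float:
    """Theoretical BER for binary antipodal (BPSK/NRZ) signaling in AWGN:
    BER = Q(sqrt(2*Eb/N0)) = 0.5 * erfc(sqrt(Eb/N0))."""
    ebn0 = 10.0 ** (ebn0_db / 10.0)   # convert dB to a linear ratio
    return 0.5 * math.erfc(math.sqrt(ebn0))

# Losing about 1 dB in the steep "waterfall" region costs roughly a decade in BER.
for ebn0_db in (10.5, 9.5):
    print(f"Eb/N0 = {ebn0_db:4.1f} dB  ->  BER ~ {ber_bpsk_awgn(ebn0_db):.1e}")
# Prints approximately 1e-6 at 10.5 dB versus approximately 1e-5 at 9.5 dB.
```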
By definition, BER is the ratio of the number of corrupted bits received to the total number of bits received. Its value statistically fluctuates around the average error rate over a long period of time. The difference between the directly measured error rate and the long-term average value depends on the number of monitored bits and, thus, on the measurement duration.
The time base is formed using two main methods.
In the first method, a fixed number of bits to observe is set at the receiving end, and the number of errored bits among them is recorded.
For example, if the number of corrupted bits received was 20, and the specified total number of bits received was 10^6, the error rate would be 20/10^6 = 20 × 10^-6 = 2 × 10^-5.
The advantage of this approach is the precisely known measurement time, but the disadvantage is the low reliability of measurement with a small number of errors.
According to the second method, the measurement time is determined by a given number of errors: the measurement continues until, for example, 100 errors are recorded, and the error rate is then calculated from the corresponding number of data bits. With this approach the measurement time is not known in advance and, at low error rates, can be very long. In addition, it is quite possible that the data bit counter will overflow and the measurement will stop. Therefore, this method is rarely used.
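A minimal sketch of both ways of forming the time base, using a hypothetical random channel model whose error probability of 2 × 10^-5 is taken from the example above (the function and model names are illustrative only):

```python
import random

def ber_fixed_bits(channel, n_bits):
    """Method 1: observe a fixed number of bits. The measurement time is known,
    but at low error rates very few (or even zero) errors may be counted."""
    errors = sum(channel() for _ in range(n_bits))
    return errors / n_bits

def ber_fixed_errors(channel, n_errors, max_bits=10**9):
    """Method 2: count bits until a given number of errors has been seen.
    The statistical confidence is set in advance, but the duration is not."""
    errors = bits = 0
    while errors < n_errors and bits < max_bits:
        errors += channel()
        bits += 1
    return errors / bits

def channel():
    """Hypothetical channel model: each received bit is corrupted with
    probability 2e-5, matching the 20-errors-in-10^6-bits example above."""
    return 1 if random.random() < 2e-5 else 0

print("fixed bit count estimate  :", ber_fixed_bits(channel, 10**6))
print("fixed error count estimate:", ber_fixed_errors(channel, 100))
```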
At the initial stage of the development of digital transmission systems, they were used mainly for transmitting an analog telephone signal, so the characteristics of this signal determined the requirements for the quality of digital systems.
The reference-quality connection is often taken to be 17,000 mi long, with a BER not exceeding 10^-7.
Errors can be detected by two main methods.
Firstly, during service maintenance of communication lines, measurements are performed with interruptions of communication, which are implemented according to three connection schemes: point-to-point, loop, and transit.
Secondly, measurements are used without interruption of communication to monitor the network, qualitatively assess its condition, and detect and eliminate damage.
Measuring BER without interrupting communication requires precise knowledge of the digital signal structure. Within the frame of the primary digital signal E-1, for example, there is a frame alignment signal occupying 7 bits of time slot zero.
The frame alignment signal is transmitted in every other E-1 frame, and each E-1 frame contains 32 time slots, i.e., 32 × 8 = 256 bits. Thus, the proportion of frame alignment bits in the E-1 signal is 7/(256 × 2) < 1.4%. Therefore, the reliability of a BER estimate based only on the frame alignment signal is very low.
Another well-known method for assessing the quality of digital transmission is the detection of code errors. It is used, for example, in T-1/E-1 digital paths, where the bipolar AMI and HDB-3 line codes (with alternating positive and negative pulses) are employed. However, a code error meter cannot reveal the actual value of the bit error rate. Deviations between code error measurements and conventional error measurements made by bitwise comparison become especially noticeable at error rates greater than 10^-3. In addition, a coding violation often extends over several bits following the corrupted bit. As a result, the content-dependent bias and the error at high error rates make accurate analysis of the error distribution impossible.
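A minimal sketch of code-error counting for a plain AMI signal (the symbol values and helper name are illustrative; HDB-3's deliberate violations, inserted for zero suppression, would have to be excluded by a real monitor and are ignored here):

```python
def count_ami_violations(symbols):
    """Count bipolar violations in an AMI line signal.
    `symbols` is a sequence of line symbols: +1, -1 (pulses) or 0 (space).
    In plain AMI every pulse must invert the polarity of the previous pulse;
    two consecutive pulses of the same polarity are counted as a code error."""
    violations = 0
    last_pulse = 0
    for s in symbols:
        if s == 0:
            continue                  # spaces carry no polarity information
        if s == last_pulse:           # same polarity as previous pulse: violation
            violations += 1
        last_pulse = s
    return violations

# Example: the second "+1" in a row repeats the polarity of the previous pulse.
print(count_ami_violations([+1, 0, -1, +1, 0, +1, -1]))   # -> 1
```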
So, in practice, BER can be assessed only in a measurement mode with communication interrupted and reference test signals transmitted. When measuring BER, the test signal should simulate the real one as closely as possible, i.e., be random. A pseudo-random bit sequence with a given structure close to the real information signal is usually used as the test signal. Such sequences are generated by clocked feedback shift registers.
The digital test signal replaces the usually transmitted information signal. It is evaluated at the receiving end by an error meter.
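A minimal sketch of both sides of such a measurement: a pseudo-random sequence produced by a 15-stage feedback shift register (the polynomial x^15 + x^14 + 1 corresponds to the common PRBS-15 test pattern; the exact output convention of real testers, such as the inversion specified in ITU-T O.150, is not reproduced) and a simple bitwise error meter that regenerates the same pattern locally.

```python
def prbs15(n_bits, seed=0x7FFF):
    """Pseudo-random bit sequence from a 15-stage feedback shift register
    with generator polynomial x^15 + x^14 + 1. Yields n_bits bits."""
    state = seed & 0x7FFF                                # 15-bit non-zero state
    for _ in range(n_bits):
        new_bit = ((state >> 14) ^ (state >> 13)) & 1    # taps: stages 15 and 14
        state = ((state << 1) | new_bit) & 0x7FFF
        yield new_bit

def ber_meter(reference, received):
    """Bitwise comparison of the received stream against the locally
    regenerated reference pattern."""
    errors = total = 0
    for ref, rx in zip(reference, received):
        errors += ref ^ rx
        total += 1
    return errors / total

# Hypothetical end-to-end check: corrupt every 50,000th bit of the test pattern.
n = 10**6
tx = list(prbs15(n))
rx = [b ^ (1 if i % 50_000 == 0 else 0) for i, b in enumerate(tx)]
print("measured BER:", ber_meter(prbs15(n), rx))         # -> 2e-05
```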
Thus, the continuous, in-service monitoring of digital transmission errors required under normal operating conditions is practically impossible with the BER method.
Therefore, the method of measuring block errors (Block Error Rate, BLER) is used to assess the quality of digital transmission systems under operational conditions. As you might guess, its main advantage is that it is based on using the information signal itself and is performed without interrupting communication.
All methods for measuring block errors involve introducing redundancy into the information signal, processing this auxiliary signal according to a specific algorithm, and transferring the processing result to the receiving side, where the received signal is processed according to the same algorithm as during transmission. The result is compared with the processing result received from the transmitting side. If they differ, the transmitted block is considered erroneous.
There are several ways to detect block errors. Block parity and checksum methods do not reveal all types of errors, which limits their practical applicability. Perhaps the only universal way to measure errors without interrupting communication is the Cyclic Redundancy Check (CRC).
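As an illustration of the general transmit, recompute, and compare flow described above, here is a sketch of a 4-bit CRC with the generator polynomial x^4 + x + 1, the polynomial used by the CRC-4 procedure of E-1 framing (ITU-T G.704). The real G.704 procedure spreads the check bits over a multiframe; that detail, and the block contents shown, are simplified for illustration only.

```python
def crc4(bits, poly=0b10011):
    """Bit-serial CRC over a block of bits, leaving a 4-bit remainder.
    poly = 0b10011 corresponds to the generator polynomial x^4 + x + 1."""
    reg = 0
    for bit in bits:
        reg = (reg << 1) | bit
        if reg & 0x10:               # degree-4 term set: subtract the polynomial
            reg ^= poly
    for _ in range(4):               # flush so every message bit passes through
        reg <<= 1
        if reg & 0x10:
            reg ^= poly
    return reg & 0xF

# Transmit side: compute the check bits over the block and send them along.
block = [1, 0, 1, 1, 0, 0, 1, 0] * 32        # hypothetical 256-bit block
sent_crc = crc4(block)

# Receive side: recompute over the received block and compare with the sent CRC.
received = block.copy()
received[100] ^= 1                           # a single bit error in transit
print("block in error:", crc4(received) != sent_crc)   # -> True
```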