7.2 The source of PAR
PAR (peak-to-average ratio) is usually characterized by a statistical function such as the CCDF (complementary cumulative distribution function), whose curve maps signal power (amplitude) levels to their probability of occurrence. For example, if a signal's average power is 10 dBm and the statistical probability of its power exceeding 15 dBm is 0.01%, we can take its PAR to be 5 dB (at the 0.01% probability point).
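To make the CCDF reading concrete, here is a minimal sketch in Python, assuming a noise-like complex baseband signal as a stand-in for real modulated data; the thresholds and the 0.01% read-out point are just illustrative:

```python
# A minimal sketch of reading PAR off a CCDF, using a noise-like complex
# baseband signal as a stand-in for real modulated data.
import numpy as np

rng = np.random.default_rng(0)
x = (rng.standard_normal(100_000) + 1j * rng.standard_normal(100_000)) / np.sqrt(2)

p_inst = np.abs(x) ** 2                  # instantaneous power
p_avg = p_inst.mean()                    # average power
par_db = 10 * np.log10(p_inst / p_avg)   # instantaneous power above average, in dB

# CCDF: probability that instantaneous power exceeds average + threshold dB
for threshold_db in (3, 5, 8):
    ccdf = np.mean(par_db > threshold_db)
    print(f"P(power > avg + {threshold_db} dB) = {ccdf:.4%}")

# Reading PAR "at the 0.01% point", as in the 10 dBm / 15 dBm example above:
par_at_0p01 = np.percentile(par_db, 99.99)
print(f"PAR at the 0.01% probability point: {par_at_0p01:.2f} dB")
```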
PAR is an important factor behind transmitter spectral regrowth (ACLR/ACPR/modulation spectrum) in modern communication systems. Peak power pushes the amplifier into its nonlinear region and produces distortion; the higher the peak power, the stronger the nonlinearity.
In the GSM era, thanks to the constant-envelope property of GMSK modulation, PAR = 0 dB. When designing GSM power amplifiers we often pushed the PA to P1dB to get maximum efficiency. After EDGE was introduced, 8PSK modulation was no longer constant-envelope, so we tended to back the PA's average output power off to about 3 dB below P1dB, because the PAR of the 8PSK signal is 3.21 dB.
In the UMTS era, whether WCDMA or CDMA2000, the peak-to-average ratio is much larger than that of EDGE. The reason is the correlation between signals in a code-division system: when the signals of multiple code channels superimpose in the time domain, they can align in phase, and the instantaneous power then shows a peak.
The peak-to-average ratio of LTE derives from the burstiness of the resource block (RB). OFDM divides multi-user/multi-service data into blocks in both the time and frequency domains, so high power can appear in a given "time block". The LTE uplink instead uses SC-FDMA: a DFT first spreads the time-domain signal across the frequency domain, which is equivalent to "smoothing" the burstiness in the time domain and thereby reduces PAR.
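A rough sketch of why DFT spreading lowers PAR, comparing plain OFDMA subcarrier mapping with DFT-precoded (SC-FDMA-style) mapping. The FFT size, subcarrier count, and QPSK payload below are arbitrary choices, not any LTE numerology:

```python
# Rough comparison of PAPR: plain OFDMA vs DFT-spread OFDM (SC-FDMA).
import numpy as np

rng = np.random.default_rng(1)
n_sc, n_fft, n_sym = 120, 1024, 2000     # occupied subcarriers, FFT size, symbols

def papr_db(time_sig):
    p = np.abs(time_sig) ** 2
    return 10 * np.log10(p.max() / p.mean())

qpsk = (rng.choice([-1, 1], (n_sym, n_sc))
        + 1j * rng.choice([-1, 1], (n_sym, n_sc))) / np.sqrt(2)

papr_ofdm, papr_scfdma = [], []
for syms in qpsk:
    grid = np.zeros(n_fft, complex)
    grid[:n_sc] = syms                               # OFDMA: map symbols directly
    papr_ofdm.append(papr_db(np.fft.ifft(grid)))

    grid[:n_sc] = np.fft.fft(syms) / np.sqrt(n_sc)   # SC-FDMA: DFT-precode first
    papr_scfdma.append(papr_db(np.fft.ifft(grid)))

print(f"OFDMA   99th-percentile PAPR: {np.percentile(papr_ofdm, 99):.1f} dB")
print(f"SC-FDMA 99th-percentile PAPR: {np.percentile(papr_scfdma, 99):.1f} dB")
```

Running this shows the DFT-precoded signal sitting a few dB below plain OFDMA at the same percentile, which is exactly the "smoothing" effect described above.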
8. Summary of interference indicators
The "interference index" here refers to the sensitivity test under various interferences other than the static sensitivity of the receiver. In fact, it is very interesting to study the origin of these test items.
Our common interference indicators include Blocking, Desense, Channel Selectivity, etc.
8.1 Blocking
Blocking is actually a very old RF indicator, dating back to the invention of radar. The principle: a large signal pours into the receiver (the first LNA usually suffers most), driving the amplifier into its nonlinear region or even saturation. On the one hand the amplifier's gain drops abruptly; on the other hand it produces extremely strong nonlinearity, so it can no longer amplify the useful signal normally.
Another possible form of blocking acts through the receiver's AGC: a large signal enters the receive chain and the AGC reduces gain to preserve dynamic range; but the useful signal entering the receiver is at a very low level, so with the reduced gain its amplitude at the demodulator is insufficient.
Blocking indicators are divided into in-band and out-of-band, mainly because the RF front end generally has a band filter that suppresses out-of-band blockers. In either case, the blocking signal is generally an unmodulated single tone (CW). Completely unmodulated single-tone signals are actually rare in the real world; in engineering, the single tone is simply used as an approximate stand-in for all kinds of narrowband interferers.
Solving blocking is mainly an RF effort: bluntly, raise the receiver's IIP3 and extend its dynamic range. For out-of-band blocking, the rejection of the filter is also very important.
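As a toy illustration of the first blocking mechanism (gain compression), the sketch below pushes a weak tone plus a growing blocker through a memoryless tanh-shaped amplifier; the 18 dB gain, frequencies, and levels are all invented:

```python
# Toy blocking model: a memoryless soft-saturating amplifier, a weak useful
# tone, and a strong off-frequency blocker that compresses the gain.
import numpy as np

def amp(v, gain=10 ** (18 / 20), v_sat=1.0):
    # linear gain ~18 dB for small inputs, saturating for large ones
    return v_sat * np.tanh(gain * v / v_sat)

t = np.arange(20_000) / 20_000
small = 1e-3 * np.cos(2 * np.pi * 100 * t)          # weak useful tone

for blocker_amp in (0.0, 0.05, 0.10, 0.15):
    blocker = blocker_amp * np.cos(2 * np.pi * 537 * t)
    out = amp(small + blocker)
    # measure the gain seen by the useful tone via correlation at its frequency
    ref = np.exp(-2j * np.pi * 100 * t)
    tone = 2 * np.abs(np.mean(out * ref))
    print(f"blocker amplitude {blocker_amp:.2f}: useful-tone gain "
          f"{20 * np.log10(tone / 1e-3):.2f} dB")
```

As the blocker grows, the gain seen by the small tone falls below the 18 dB linear value, which is the desensitization the test probes.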
8.2 AM Suppression
AM Suppression is an indicator unique to the GSM system. From the test description, the interferer is a TDMA signal similar to a GSM signal, synchronized with the useful signal and carrying a fixed delay.
This scenario simulates neighbor-cell signals in a GSM network. Given that the interferer's frequency offset is greater than 6 MHz (GSM channel bandwidth is 200 kHz), this is a very typical neighbor-cell configuration, so AM Suppression can be read as a measure of the receiver's tolerance to neighbor-cell interference in real GSM operation.
8.3 Adjacent (Alternate) Channel Suppression (Selectivity)
Here we refer to these collectively as "adjacent channel selectivity". In a cellular network, besides co-channel cells, network planning must also consider adjacent-channel cells. The reason can be found in the transmitter metrics ACLR/ACPR/modulation spectrum discussed earlier: because of transmitter spectral regrowth, strong signal content falls into the adjacent channels (generally, the farther the frequency offset, the lower the level, so the first adjacent channel is affected most). This regrown spectrum is correlated with the transmitted signal, which means a receiver of the same standard may mistake it for a useful signal and attempt to demodulate it.
For example: suppose two neighboring cells A and B happen to be adjacent-channel cells (such network plans are generally avoided; this is just a limit scenario). A terminal registered in cell A roams to the boundary between the two cells, but neither cell's signal strength has reached the handover threshold, so the terminal stays connected to cell A. The ACPR of cell B's base-station transmitter is relatively high, so a strong B-cell ACPR component falls inside the terminal's receive channel, overlapping cell A's useful signal in frequency. Because the terminal is far from cell A's base station, the received A-cell signal is also very weak; the B-cell ACPR component entering the terminal's receiver then acts as co-channel interference on the useful signal.
If we look closely at how the frequency offsets for adjacent channel selectivity are defined, we find the distinction between Adjacent and Alternate, corresponding to the first and second adjacent channels of ACLR/ACPR. In the communication protocols, "transmitter spectrum leakage (regrowth)" and "receiver adjacent channel selectivity" are in fact defined in pairs.
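This pairing is often captured by the ACIR (adjacent channel interference ratio), which combines the transmitter's ACLR and the receiver's ACS in linear terms; a small sketch with illustrative dB values:

```python
# How Tx ACLR and Rx ACS combine into ACIR; the dB figures are illustrative.
import numpy as np

def db_to_lin(db):
    return 10 ** (db / 10)

aclr_db, acs_db = 45.0, 33.0             # example transmitter / receiver figures
acir_lin = 1 / (1 / db_to_lin(aclr_db) + 1 / db_to_lin(acs_db))
print(f"ACIR = {10 * np.log10(acir_lin):.1f} dB")   # dominated by the worse side
```

The result (about 32.7 dB here) sits just below the worse of the two numbers, which is why the protocol defines the transmitter and receiver requirements as a matched pair.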
8.4 Co-Channel Suppression (Selectivity)
This describes pure co-channel interference, generally the interference pattern between two co-channel cells.
According to the network-planning principles described earlier, two co-channel cells should be placed as far apart as possible, but however far apart they are, some signal always leaks from one to the other; only the strength differs. To the terminal, the signals from both cells look like "valid useful signals" (of course, the protocol layer has a set of access rules to prevent such false access); whether the terminal's receiver can keep the wanted cell's signal from being drowned out by the co-channel interferer is what its co-channel selectivity measures.
8.5 Summary
Blocking is "big signal interferes with small signal", RF still has room for maneuver; and the above indicators such as AM Suppression, Adjacent (Co/Alternative) Channel Suppression (Selectivity) are "small signal interferes with large signal", the working meaning of pure RF Not big, but rely on physical layer algorithms.
Single-tone desense is an indicator unique to the CDMA system. Its characteristic: the single-tone interferer is an in-band signal sitting very close in frequency to the useful signal. Two mechanisms can then drop energy into the receive band. The first is the LO's near-in phase noise: mixing the LO with the useful signal produces the wanted baseband signal, while mixing the LO's phase noise with the interferer produces a signal that also falls inside the receiver's baseband filter; the former is the useful signal and the latter is interference (reciprocal mixing). The second is nonlinearity in the receiver: the useful signal (which has finite bandwidth, e.g. the 1.2288 MHz CDMA signal) can intermodulate with the interferer in a nonlinear device, and the intermodulation products may also fall in the receive band and become interference.
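A back-of-the-envelope sketch of the first mechanism (reciprocal mixing); the blocker level, phase-noise figure, and bandwidth below are assumptions, not values from any specification:

```python
# Reciprocal mixing: LO phase noise at the blocker's offset is transferred
# onto the blocker and integrates across the channel bandwidth.
import numpy as np

p_blocker_dbm = -30.0        # assumed single-tone interferer at the mixer input
pn_dbc_hz = -130.0           # assumed LO phase noise at the tone's offset
bw_hz = 1.2288e6             # CDMA channel bandwidth

p_recip_dbm = p_blocker_dbm + pn_dbc_hz + 10 * np.log10(bw_hz)
print(f"reciprocal-mixing noise in channel: {p_recip_dbm:.1f} dBm")
# -30 - 130 + 60.9 = -99.1 dBm, well above the ~-113 dBm thermal floor
# (kTB in 1.2288 MHz) plus NF -- enough to desensitize the receiver.
```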
Single-tone desense originates from North America's launch of CDMA in the same band already occupied by the analog system AMPS, with the two networks coexisting for a long time. As the latecomer, the CDMA system had to tolerate interference from AMPS (an AMPS carrier is only 30 kHz wide, essentially a single tone at the scale of a 1.2288 MHz CDMA channel).
This reminds me of PHS, jokingly called "the mobile phone that only works if you don't move". Because PHS occupied the 1900~1920 MHz band for so long, China's TD-SCDMA/TD-LTE Band 39 deployments were confined to the lower segment of B39, 1880~1900 MHz, until PHS was finally retired.
The textbook explanation of blocking is relatively simple: the large signal entering the receiver amplifier drives the amplifier into its nonlinear region, and the actual gain (for the useful signal) becomes smaller.
But this explanation struggles with two scenarios:
Scenario 1: the first-stage LNA has a linear gain of 18 dB. A large injected signal drives it to P1dB, so the gain becomes 17 dB. If nothing else changes (assume by default the LNA's NF etc. stay the same), the impact on the cascaded noise figure is actually very limited: in the total-NF calculation, the denominator under the later stages' contributions just becomes slightly smaller, which has little effect on the sensitivity of the whole system.
Scenario 2: the first-stage LNA has a very high IIP3 and is unaffected, while a second-stage gain block is affected (the interferer approaches its P1dB). In this case the impact on the cascaded NF is even smaller.
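A quick Friis-formula check of Scenario 1, with invented gain/NF values for the later stages, shows how little a 1 dB gain compression in the LNA moves the cascaded NF:

```python
# Friis' cascade formula: F_total = F1 + (F2-1)/G1 + (F3-1)/(G1*G2) + ...
import numpy as np

def lin(db):
    return 10 ** (db / 10)

def cascade_nf_db(stages):
    # stages: list of (gain_db, nf_db), first element is the LNA
    f_total, g_running = 0.0, 1.0
    for i, (g_db, nf_db) in enumerate(stages):
        f = lin(nf_db)
        f_total += f if i == 0 else (f - 1) / g_running
        g_running *= lin(g_db)
    return 10 * np.log10(f_total)

rest = [(15, 8), (20, 10)]               # assumed later stages: (gain, NF) in dB
for lna_gain in (18, 17):
    nf = cascade_nf_db([(lna_gain, 0.9)] + rest)
    print(f"LNA gain {lna_gain} dB -> cascade NF {nf:.2f} dB")
```

With these numbers the cascaded NF moves by well under 0.1 dB, which is exactly why gain compression alone cannot explain the desensitization seen in blocking tests.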
Let me open a discussion and propose a view: the effect of blocking may consist of two parts. One is the gain compression described in textbooks; the other is that once the amplifier enters the nonlinear region, the useful signal is itself distorted there. This distortion may in turn have two components: spectral regrowth of the useful signal (harmonic components) caused purely by the amplifier nonlinearity, and cross modulation, where the large signal modulates the small one.
Hence another idea: if we want to simplify the blocking test (3GPP requires a frequency sweep, which is very time-consuming), we might be able to select only those frequencies at which a blocker distorts the useful signal the most.
Intuitively, these frequencies might be f0/N and f0*N (where f0 is the useful signal frequency and N is a natural number). The former, because the N-th harmonic that the large signal generates in the nonlinear region lands exactly on the useful frequency f0 and interferes with it directly; the latter, because it lands on the N-th harmonic of the useful signal f0 and affects the time-domain waveform of the output. To explain: by Fourier decomposition, a time-domain waveform is the sum of its fundamental and its harmonics, so when the power of the N-th harmonic changes in the frequency domain, the corresponding change in the time domain is a change in the signal's envelope (i.e., distortion).
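A tiny numerical illustration of the envelope argument: adding power at the third harmonic of f0 visibly changes the peak and peak-to-peak excursion of the summed waveform (amplitudes are arbitrary):

```python
# Adding 3rd-harmonic power to a fundamental changes the time-domain envelope.
import numpy as np

t = np.linspace(0, 1, 1000, endpoint=False)
f0 = 5
fundamental = np.cos(2 * np.pi * f0 * t)

for h3_amp in (0.0, 0.1, 0.3):
    wave = fundamental + h3_amp * np.cos(2 * np.pi * 3 * f0 * t)
    print(f"3rd-harmonic amplitude {h3_amp:.1f}: "
          f"peak {wave.max():.3f}, peak-to-peak {np.ptp(wave):.3f}")
```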
9. Dynamic range, temperature compensation and power control
Dynamic range, temperature compensation, and power control are in many cases "invisible" indicators: their influence shows only in certain limit tests, yet they reflect the most delicate parts of RF design.
9.1 Transmitter dynamic range
The transmitter's dynamic range characterizes the maximum and minimum transmit power at which the transmitter operates "without damaging its other transmit metrics".
"Does not damage other emission indicators" appears to be very broad. If you look at the main effects, it can be understood that the linearity of the transmitter is not damaged at the maximum transmission power, and the signal-to-noise ratio of the output signal is maintained at the minimum transmission power.
At maximum transmit power, the transmitter output is often close to the nonlinear region of the active devices at every stage (especially the final amplifier), so nonlinear effects tend to appear: spectral leakage and regrowth (ACLR/ACPR/SEM) and modulation errors (Phase Error/EVM). What suffers worst here is basically the transmitter's linearity; this part is relatively easy to understand.
At minimum transmit power, the useful signal at the transmitter output approaches the transmitter's noise floor and may even be "submerged" in the transmitter noise. What must be guaranteed then is the signal-to-noise ratio (SNR) of the output signal; in other words, the lower the transmitter's noise floor at minimum transmit power, the better.
Something that once happened in our lab: an engineer testing ACLR found that ACLR got worse as power was reduced (the usual understanding is that ACLR should improve as output power drops). The first reaction was that the instrument was faulty, but a different instrument gave the same result. Our guidance was to test EVM at low output power, and the EVM turned out to be very poor; we judged that the noise floor at the input of the RF chain was very high, so the corresponding SNR was obviously very poor, and the main component of the measured ACLR was no longer the amplifier's spectral regrowth but the baseband noise amplified through the amplifier chain.
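A toy model of this observation: treat the adjacent-channel power as the sum of a third-order-like regrowth term and a fixed amplified-noise floor. All levels below are invented, but the trend reproduces the lab result (ACLR improves as power drops, then degrades once the noise floor dominates):

```python
# ACLR = carrier / (regrowth + fixed noise floor), all in a crude toy model.
import numpy as np

noise_floor_dbm = -45.0                  # assumed amplified baseband noise
                                         # falling in the adjacent channel

for p_out_dbm in (24, 10, 0, -10, -20):
    regrowth_dbm = 3 * p_out_dbm - 85    # crude 3rd-order-like scaling
    adj_dbm = 10 * np.log10(10 ** (regrowth_dbm / 10)
                            + 10 ** (noise_floor_dbm / 10))
    print(f"Pout {p_out_dbm:+3d} dBm: ACLR = {p_out_dbm - adj_dbm:.1f} dB")
```

In the regrowth-limited region ACLR improves about 2 dB per dB of back-off; once the fixed noise floor takes over, ACLR falls 1 dB for every dB of power reduction, which is what the engineer saw.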
9.2 Receiver dynamic range
The receiver's dynamic range is actually tied to two indicators we discussed before: the first is reference sensitivity, and the second is receiver IIP3 (mentioned many times in the interference discussion).
Reference sensitivity characterizes the minimum signal strength the receiver can recognize, so we won't repeat it here. We mainly discuss the receiver's maximum receive level.
The maximum receive level is the largest signal the receiver can receive without distortion. The distortion may occur at any stage of the receiver, from the first LNA to the receiver ADC. For the first-stage LNA, the only thing we can do is raise IIP3 as much as possible so it withstands higher input power; for the stages after it, the receiver uses AGC (automatic gain control) to keep the useful signal within each device's input dynamic range. Simply put, there is a negative feedback loop: detect the received signal strength (too low/too high), adjust the amplifier gain (up/down), and keep the amplifier output within the input dynamic range of the next-stage device.
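A minimal sketch of such a loop (the target window, step size, and gain limits are all made up):

```python
# One-knob AGC loop: measure level, compare to a target window, step the gain.
import numpy as np

def agc_step(detected_dbm, gain_db, target_dbm=-20.0, window_db=2.0,
             step_db=1.0, gain_min=0.0, gain_max=60.0):
    """One iteration: nudge gain so the detected level enters the window."""
    if detected_dbm > target_dbm + window_db:
        gain_db -= step_db               # too hot: back the gain off
    elif detected_dbm < target_dbm - window_db:
        gain_db += step_db               # too weak: add gain
    return float(np.clip(gain_db, gain_min, gain_max))

gain, p_in = 40.0, -75.0                 # dB / dBm: an arbitrary starting point
for _ in range(30):
    gain = agc_step(p_in + gain, gain)
print(f"settled gain: {gain:.0f} dB, level at next stage: {p_in + gain:.0f} dBm")
```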
One exception is worth mentioning: the first-stage LNA of most mobile-phone receivers has an AGC function of its own. If you study their datasheets carefully, you will find that the LNA provides several switchable gain steps, each with its corresponding noise figure; generally, the higher the gain, the lower the noise figure. This is a simplified design. The design idea is that the goal of the receiver RF chain is to keep the useful signal at the receiver ADC input within its dynamic range while keeping the SNR above the demodulation threshold (the SNR need not be as high as possible, "just good enough" is fine, which is a very smart approach). So when the input signal is large, the first-stage LNA steps its gain down, sacrificing NF while raising IIP3; when the input signal is small, it steps its gain up, lowering NF while IIP3 also drops.
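A sketch of this mode-selection logic with a made-up gain-step table; the headroom rule below is just one plausible policy, not any vendor's algorithm:

```python
# Made-up LNA mode table (gain dB, NF dB, IIP3 dBm) and a simple selector
# that takes the lowest-NF (highest-gain) mode that linearity allows.
modes = [
    ("G3 high",  18, 0.9, -5),
    ("G2 mid",   12, 1.8, +2),
    ("G1 low",    6, 4.0, +8),
    ("G0 bypass", 0, 7.0, +15),
]

def pick_mode(p_in_dbm, backoff_db=10.0):
    """Highest gain whose IIP3 keeps ~backoff_db of headroom over the input."""
    for name, gain, nf, iip3 in modes:   # ordered high gain -> low gain
        if p_in_dbm <= iip3 - backoff_db:
            return name, gain, nf
    name, gain, nf, _ = modes[-1]        # strongest inputs: bypass
    return name, gain, nf

for p_in in (-90, -30, -10, 2):
    name, gain, nf = pick_mode(p_in)
    print(f"input {p_in:+4d} dBm -> {name}: gain {gain} dB, NF {nf} dB")
```

Weak inputs land in the high-gain/low-NF mode, strong inputs fall through to the low-gain/high-IIP3 modes, mirroring the "just good enough SNR" trade-off described above.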