This article follows the earlier one about why the world is changing to digital TV broadcasting. It explains how the reduction of spectrum usage for terrestrial services is possible and also covers the basics of how digital TV broadcasts (the DVB-T system) work.
Current analogue transmissions (in N.Z) use 8MHz channels (7MHz for VHF). A large number of the 8MHz channels are allocated to TV because one channel can carry only one standard-definition programme. See the diagram below:
This shows how one 8MHz channel block carries one analogue programme. The main carrier is the vision carrier, which is always located 1.25MHz up from the lower channel edge. By convention, frequencies within the channel are referred to the vision carrier frequency. Therefore, the lower edge of the channel is -1.25MHz, the upper edge is +6.75MHz, the colour subcarrier is at +4.43MHz (PAL system), the mono sound carrier is at +5.5MHz and the NICAM stereo carrier is at +5.85MHz. The amplitude-modulated sidebands of the vision carrier extend from -0.75MHz to +5MHz for PAL B/G, which limits the horizontal picture resolution to about 400 lines maximum. (Not to be confused with the vertical scanning resolution of 576 lines.)
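The channel layout above can be summarised in a few lines. This is an illustrative sketch only (the names are mine, the offsets are the figures quoted in the text):

```python
# Illustrative sketch: PAL B/G carrier offsets (MHz, relative to the
# vision carrier) for an 8MHz UHF channel, as described above.
CHANNEL_LOWER_EDGE = -1.25                      # MHz below vision carrier
CHANNEL_UPPER_EDGE = CHANNEL_LOWER_EDGE + 8.0   # = +6.75 MHz

carriers_mhz = {
    "vision": 0.0,
    "colour subcarrier (PAL)": 4.43,
    "mono sound": 5.5,
    "NICAM stereo": 5.85,
}

# Every carrier must sit inside the 8MHz channel block.
for offset in carriers_mhz.values():
    assert CHANNEL_LOWER_EDGE <= offset <= CHANNEL_UPPER_EDGE

# The vision sidebands occupy -0.75 to +5MHz.
vision_sideband_mhz = 5.0 - (-0.75)   # 5.75 MHz occupied
```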
For compatibility reasons, New Zealand kept the 8MHz channel spacings on UHF that were used in the U.K and Europe, even though we have the PAL-G system. That leaves a small amount of 'spare' space above the NICAM carrier and below the channel upper edge. That space does not exist on the VHF bands, where we use 7MHz wide channels. This space on UHF channels is useful as a guard-band between adjacent channels and has also been used for unlicensed radio-microphones. On VHF, the NICAM carrier would fall outside the original channel limit, but the VHF channel definition was amended to accommodate NICAM when it commenced in N.Z in 1987.
Domestic digital TV was only made possible when the MPEG (Moving Picture Experts Group) defined MPEG-2 and subsequent standards. These standards permit compression of raw digitised video down to much lower bit-rates. The first domestic commercial product was DVD. DVD video uses a variable bit-rate depending on the picture content, with a maximum rate of 9.8Mbits per second; the MPEG-2 algorithms compress the raw video down from 270Mbits per second. Without that degree of compression, it would not have been possible to put enough data on a DVD for a regular feature film. HD (high-definition) pictures start out at 1.485Gbits per second. MPEG-2 compresses that down to a maximum of around 40Mbits/sec on Blu-ray discs and less than half that for HD broadcasts. Further improvements in compression techniques arrived with the MPEG-4 standards, notably H.264/AVC. New Zealand adopted H.264/AVC for its DVB-T based terrestrial broadcast system under the umbrella of Freeview, which commenced broadcasting in 2007. Now SD pictures can be coded at 2 to 3Mbits/sec and high-definition pictures at around 6Mbits per second. These data rates (and therefore picture quality) are under the control of individual broadcasters. It is fair to say that this degree of compression makes for more picture defects (artefacts) than are present on DVD or Blu-ray, but it is an acceptable compromise (to the broadcasters and probably to most viewers).
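The compression factors implied by those bit-rates are worth making explicit. A quick check of the arithmetic, using only the rates quoted above:

```python
# Compression factors implied by the bit-rates quoted above.  The raw
# rates are the uncompressed studio rates mentioned in the text.
SD_RAW_MBPS = 270.0     # uncompressed standard definition
HD_RAW_MBPS = 1485.0    # uncompressed high definition

dvd_ratio = SD_RAW_MBPS / 9.8           # MPEG-2 on DVD at its peak rate
bluray_ratio = HD_RAW_MBPS / 40.0       # MPEG-2 on Blu-ray, approx peak
freeview_sd_ratio = SD_RAW_MBPS / 2.5   # H.264 SD broadcast, ~2.5 Mbit/s
freeview_hd_ratio = HD_RAW_MBPS / 6.0   # H.264 HD broadcast, ~6 Mbit/s
```

DVD works out to roughly 28:1, while a 2.5Mbit/s broadcast SD service is compressed over 100:1, which is why broadcast artefacts are more visible than DVD artefacts.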
The H.264-coded pictures form the 'payload' for digital TV transmissions using the DVB-T standard, which we use here in New Zealand as well as in some other places around the world. Most countries operating a DVB-T network still use MPEG-2 but are planning to adopt H.264/AVC for future upgrades.
The following graphic depicts how DVB-T occupies a single 8MHz wide RF channel. Compare it with the analogue occupancy above.
One DVB-T ensemble fits into an 8MHz channel block, just as analogue TV does. DVB-T permits many variations of parameters to suit local conditions. New Zealand adopted the '8k' variant with transmission parameters that result in a total payload capacity of 26.35Mbits/second. That capacity means a number of H.264/AVC coded programmes can be carried within one ensemble. Typical figures quoted are as many as nine standard-definition programmes, or three HD programmes plus one SD, at the same time; or any combination of SD and HD programmes, plus some audio-only services (radio broadcasts), that collectively does not exceed the total capacity.
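Those programme mixes can be checked against the payload capacity in a few lines. The per-programme rates here are illustrative values taken from the text; broadcasters set their own:

```python
# Sketch: checking the quoted programme mixes against the 26.35 Mbit/s
# payload.  Per-programme rates are illustrative assumptions only.
CAPACITY_MBPS = 26.35
SD_MBPS, HD_MBPS = 2.9, 6.0   # assumed per-programme rates

def mix_fits(n_sd, n_hd):
    """True if n_sd SD plus n_hd HD programmes fit in one ensemble."""
    return n_sd * SD_MBPS + n_hd * HD_MBPS <= CAPACITY_MBPS
```

With these assumed rates, nine SD programmes total 26.1Mbits/sec and three HD plus one SD total 20.9Mbits/sec, both inside the 26.35Mbits/sec capacity.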
So the spectrum taken up for the same number of transmitted programmes is much less than for analogue. 9 separate SD programmes would use 72MHz of spectrum using analogue; the same 9 programmes on DVB-T now only use 8MHz of spectrum. This makes it possible for regulatory authorities to re-allocate some broadcast spectrum for other high-value uses such as 4G mobiles and DVB-H. That can only occur when analogue broadcasts are finally switched off.
This is an overview of the RF aspects of DVB-T.
The basis of DVB-T is the modulation scheme called COFDM which is the abbreviation for 'Coded Orthogonal Frequency Division Multiplexing'. The last part, frequency division multiplexing simply means that the total data is divided up amongst several carriers to be transmitted. Orthogonal means that the spacing between all the carriers is set so that the modulation of each carrier does not interfere with adjacent carriers. Coded relates to the forward error correction applied to the data.
The two major variants of DVB-T are the '2k' system and the '8k' system. 2k was used when DVB-T first commenced internationally because the processing power necessary to demodulate 8k was not cheaply available. By the time New Zealand commenced digital terrestrial TV, 8k was routine, and it was chosen for its ability to use a longer guard interval in order to facilitate single-frequency networks. In other words, 8k gives better immunity to 'ghosts', which is important for improving margins in hilly topography. Referring to the diagram above, the 8k 'ensemble' does not, as might be assumed, comprise 8000 carriers; rather, it has a total of 6817 carriers which take up 7.607MHz of the 8MHz channel. Guard bands of about 200kHz between the first/last carriers and the channel edges make it possible to use adjacent channels. 6817 carriers spanning 7.607MHz means they are spaced 1.116kHz apart.
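The spacing figure follows from the useful symbol period (896us for the 8k mode, discussed under Guard Interval below): orthogonality requires the carriers to be spaced at the reciprocal of that period. A quick Python check of the arithmetic:

```python
# A quick check of the 8k carrier arithmetic above.  Orthogonality sets
# the carrier spacing to 1 / (useful symbol period).
T_USEFUL_S = 896e-6    # useful symbol period, seconds (8k mode)
N_CARRIERS = 6817

spacing_hz = 1.0 / T_USEFUL_S                  # ~1116 Hz = 1.116 kHz
occupied_hz = (N_CARRIERS - 1) * spacing_hz    # ~7.607 MHz end to end
guard_each_side_hz = (8e6 - occupied_hz) / 2   # ~196 kHz (quoted ~200 kHz)
```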
Not all of the 6817 carriers contain programme data, but they are all vital for the system to work. The non-data carriers are pilots (continual and scattered, used by the receiver for synchronisation and channel estimation) and TPS (Transmission Parameter Signalling) carriers, which tell the receiver which mode, constellation and code rate are in use.
That leaves exactly 6048 carriers for the data 'payload', meaning the encapsulated programme streams. Each of the 6048 data carriers is phase and amplitude modulated (QAM). DVB-T allows for 4-QAM (also known as QPSK), 16-QAM or 64-QAM. The first option, QPSK, is the most robust, but 64-QAM has the greatest payload capacity. New Zealand adopted 64-QAM for the Freeview platform. See the graphic below:
Every one of the 6048 data carriers takes up one of the 64 possible positions for the duration of a data symbol. The period of a symbol is 952 microseconds in the N.Z case, which includes a 56-microsecond guard interval (see below). Because 2^6 equals 64, each carrier position signals a 6-bit binary word, and the specific binary word for each position is fixed in the standard. So, how many bits per second of total capacity does that give us?
6 bits x 6048 carriers x 1/952us equals 38.1176Mbits/second.
This is more than the 26.35Mb/s that was stated above, so what's the rest of the story?
Well, the original transport stream has 16 bytes of Reed-Solomon error protection added to each 188 byte data packet, so that reduces the 38Mb/s by the factor of 188/204. Then more error protection is added in the form of convolutional (inner) coding, which the receiver decodes with a Viterbi decoder. The proportion of bits assigned to this is configurable within the DVB-T standard, but N.Z used a code-rate of 3/4, meaning that fully 1/4 of the capacity at that stage is assigned to error protection. So the actual payload is restricted to 38.1176 x 3/4 x 188/204, which equals the stated 26.35Mbits/second. (I have glossed over the coding, formatting and packetising processes because they are not in the scope of this paper.)
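The whole payload derivation can be verified in a few lines:

```python
# The payload arithmetic above, step by step.
BITS_PER_CARRIER = 6          # 64-QAM: 2**6 = 64 constellation points
DATA_CARRIERS = 6048
SYMBOL_PERIOD_S = 952e-6      # gross symbol period incl. guard interval

gross_bps = BITS_PER_CARRIER * DATA_CARRIERS / SYMBOL_PERIOD_S
net_bps = gross_bps * (3 / 4) * (188 / 204)   # inner code rate, RS overhead

gross_mbps = gross_bps / 1e6   # ~38.12 Mbit/s
net_mbps = net_bps / 1e6       # ~26.35 Mbit/s
```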
Guard Interval: This is a short period added to the start of the symbol data period, effectively extending the total time each symbol remains in one of the 64 possible carrier positions. The guard interval is configurable within the standard, specified as either 1/4, 1/8, 1/16 or 1/32 of the data period. This is one of the key aspects of DVB-T because it permits the receiver to ignore reflections which would otherwise reduce decoding margins. Effectively, the receiver ignores the symbol until after the guard interval, at which time it begins determining the amplitude and phase of each carrier. A guard interval of 1/4 is not common, but it allows for the ultimate in immunity from reflections. Here, we adopted a 1/16 guard interval. Since the raw symbol period for 8k is 896us, the 1/16 guard interval is 56us, which makes the gross symbol period 952us. Any reflections, whatever their cause, that are delayed by less than 56us will be ignored by the receiver. Radio signals travel 300 metres in a microsecond, so any reflection arriving at the receiver over a path up to 16.8km (10.44 miles) longer than the direct signal's path will be ignored. The guard interval also facilitates the use of single-frequency networks, whereby multiple stations within a geographically common coverage area can operate on the same frequency. The co-channel signal from an adjacent station will be ignored as long as it arrives at the receiver inside the guard interval. This frequency re-use capability makes for efficient use of spectrum. Stations within a single-frequency network must be on precisely the same frequency as each other and must transmit exactly the same data bits at the same time.
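The guard-interval arithmetic above can be sketched as:

```python
# Guard-interval arithmetic from the paragraph above.
T_USEFUL_US = 896        # raw (useful) symbol period for 8k, microseconds
GUARD_FRACTION = 1 / 16  # the NZ choice; 1/4, 1/8, 1/16 or 1/32 allowed

guard_us = T_USEFUL_US * GUARD_FRACTION   # 56 us
gross_symbol_us = T_USEFUL_US + guard_us  # 952 us

# Radio travels about 300 m per microsecond, so the longest echo path
# (relative to the direct path) still landing inside the guard interval:
max_excess_path_km = guard_us * 300 / 1000   # 16.8 km
```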
The carrier/noise ratio necessary to successfully decode DVB-T depends on the transmission parameters in use. Successful (or quasi-error-free) decoding is obtained where the bit-error ratio at the output of the receiver's Viterbi correction process is 2x10^-4 or better. If one used QPSK in an 8MHz channel with a code rate of 1/2, a carrier/noise ratio of only 3dB would be required, assuming Gaussian noise only. In the N.Z case of 64QAM and a code rate of 3/4, a C/N ratio of 18dB (Gaussian channel), 19.1dB (Ricean channel) or 21.7dB (Rayleigh channel) is necessary. A Gaussian channel exists if random noise is the only impairment, a Rayleigh channel is where reflections are dominant, and a Ricean channel has certain combinations of noise and reflections. It is safest to base planning standards on the worst situation: 21.7dB in our case.
Assuming a receiver has a noise figure of 7dB at the frequency of operation, the necessary C/N ratio (using UHF 8MHz; 64QAM; CR=3/4) will occur with -106dBW input to the receiver (-76dBm or 33dBuV in 75 ohms). I have confirmed by measuring several receivers that -76dBm is the minimum signal power. Some have worked down to -81dBm, meaning their noise performance is superior to the assumed value. Only one or two receivers have needed up to 8dB more signal, implying their front-ends are not optimal. In practice, one needs to assure a greater signal than the minimum at the receiver to allow for degradations, because DVB-T does not degrade gracefully with increasing noise as analogue TV does. It will be perfect up to the point where it stops altogether, although there is sometimes a point where the picture first becomes highly pixellated before stopping. Planning authorities allow sufficient signal strength to cope with some terrain losses, a little building penetration loss and inefficient antennas, but viewers are always advised to use proper outdoor antennas.
As an example, assume we want a 10dB decode margin at the receiver; then we must have at least -66dBm (43dBuV/75) signal power at the receiver terminals (same parameters as above). We have good quality RG6 coax 30m long to the antenna. That much RG6 has 6dB loss at UHF, so the antenna power must be more than -60dBm (49dBuV in 75 ohms). This level is at least 18dB lower than would be required to produce a noise-free analogue picture under the same circumstances. Freeview DVB-T transmitters in N.Z are typically 1/5 of the power of their analogue equivalents to achieve similar coverage objectives, but that is not a hard and fast rule. DVB-T transmitter power is defined as the averaged rms value of all carriers, whereas for analogue it is defined as the peak sync power of the vision transmitter, so power comparisons between digital and analogue are not straightforward.
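The link budget can be sketched numerically. This is only a check of the arithmetic, using the assumed values from the text (7dB noise figure, 21.7dB required C/N, 7.61MHz occupied bandwidth, 6dB of coax loss); the 108.75dB offset is the standard dBm-to-dBuV conversion across 75 ohms:

```python
import math

# A numeric sketch of the link budget above, using assumed values from
# the text: 7dB noise figure, 21.7dB required C/N (64QAM, CR=3/4,
# Rayleigh channel), 7.61MHz occupied bandwidth.
noise_floor_dbm = -174 + 10 * math.log10(7.61e6) + 7.0   # ~ -98.2 dBm
min_signal_dbm = noise_floor_dbm + 21.7                  # ~ -76.5 dBm

def dbm_to_dbuv_75(dbm):
    """Convert dBm to dBuV across 75 ohms (offset is ~108.75 dB)."""
    return dbm + 108.75

# The worked example: -66dBm wanted at the receiver, plus 6dB of RG6
# loss means the antenna must deliver -60dBm (about 49 dBuV).
at_antenna_dbm = -66 + 6
at_antenna_dbuv = dbm_to_dbuv_75(at_antenna_dbm)
```

The computed minimum of about -76.5dBm agrees with the -76dBm figure confirmed by measurement.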
The power of a DVB-T channel is the rms sum of all the carriers within the 8MHz block. Each one of the 6048 data carriers varies dynamically in power depending on which of the 64 QAM positions it has taken during any one symbol period. But over all carriers and over enough time for many symbols, the power in the channel is constant. Below is a screenshot of an actual transmitted DVB-T spectrum.
DVB-T 8MHz transmitted spectrum example: The spectrum is perfectly flat. The analyser is averaging the rms value over a sufficient period to ensure enough symbols are captured. Here the horizontal divisions are 2MHz and the vertical divisions 10dB. Resolution bandwidth is 10kHz. VBW is 100kHz to permit accurate operation of rms detector.
The spectrum of a DVB-T channel at a receiving location may not be flat. Reflections cause ripples, and these may be quite severe in difficult terrain. Specialised field-strength meters are available for DVB-T. An analogue TV field-strength meter cannot be expected to produce the correct answer, although it can still aid in peaking the signal. A modern spectrum analyser may also have a 'channel power' measurement feature. It is necessary to ensure that the centre frequency is set to the centre of the DVB-T channel and that the channel width is correctly set to 8MHz. The analyser should have an rms detector mode, or at least a 'sampling' mode; use of peak detection will give an incorrect answer. If you have an older-style spectrum analyser without the fancy measurement routines, the measurement can be done as below:
If the received spectrum is fairly flat (left picture) then three separate measurements are enough. Set the analyser first to the centre frequency, RBW=30kHz, detector=rms(or sample), sweep-time >200ms, averaging 5 sweeps. Record the level in dBm. Repeat this at two more positions near the edges of the channel as shown by the red arrows. Now take the average of the 3 results. Next, correct that average figure for the full bandwidth of the spectrum.
Corrected level = averaged reading (dBm) + [10 x log(7610/30)]. Or in other words, add 24dB to the reading obtained using 30kHz resolution bandwidth on the analyser. If you use other settings for RBW, the correction value will change. Some people prefer 100kHz RBW. In that case, the correction value to add is 19dB.
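The correction formula above is easily generalised to any resolution bandwidth:

```python
import math

# The resolution-bandwidth correction above: scale the spot reading up
# to the full 7.61MHz (7610kHz) occupied bandwidth of the ensemble.
OCCUPIED_KHZ = 7610

def rbw_correction_db(rbw_khz):
    """dB to add to an analyser reading taken with the given RBW."""
    return 10 * math.log10(OCCUPIED_KHZ / rbw_khz)

corr_30k = rbw_correction_db(30)     # ~24 dB, as stated
corr_100k = rbw_correction_db(100)   # ~18.8 dB, quoted as 19 dB
```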
If the received spectrum is full of peaks and dips like the right picture, you should take 5 readings at the arrowed positions and then follow the same procedure as above. The result will be reasonably accurate to within say 1dB.
There are cautions to observe when measuring DVB-T signals, because some of the 6048 carriers will be higher than the averaged rms value of all of them, and some will be lower. The peak-to-average ratio of the broadcast can be over 12dB in practice, so always ensure that your analyser is not overloaded by these peaks. The best way to check for linear operation is to look for regenerated 'shoulders' to the left and right of the 8MHz channel. Shoulders are 3rd-order and 5th-order intermodulation products produced by amplitude non-linearity. These unwanted products also exist right through the channel, but those you can't see. Look at the screenshot of the transmitted spectrum above and you can see a shoulder on the right-hand side. This shoulder is an indicator of non-linearity in the transmitter. That shoulder and its partner on the left would extend at least another 8MHz on each side were it not for the high-selectivity bandpass filter which reduces them. However, the fact that the shoulders have been filtered does not change the linearity of the system. In that screenshot, the shoulder is about 40dB down with respect to the total power, which is quite good in this context. If you observe shoulders that are only 30dB down or worse, you have a non-linear device in the path. If you get poor shoulders on your spectrum measurement, increase the 'RF attenuator' setting by 5 or 10dB; that should solve the problem. Do not put in too much RF attenuation, however, or you will add too much noise to the signal and compromise the accuracy of the measurement.
That concludes this article on DVB-T RF basics. Any comments or questions please use the form on the CONTACT page.
Sources: ETSI standard EN 300 744
Rohde & Schwarz documents
Axino-tech Consulting & Services , August 2010.