No sooner had the ink dried on the
baseline IEEE 802.11 Wireless Ethernet standard than we began to see a
plethora of proprietary rate-extending enhancements to the basic
standard, pushing its achievable bit rate further and further beyond
the modest 1-2 Megabits/s it started at. In this month's feature Carlo
Kopp explores one of the latest developments in the IEEE 802.11 saga -
the very high speed variants being developed around the COFDM modulation
scheme.
At this time wireless Ethernet is a rapidly growing
market, as it has proven to be an excellent connectivity tool for
laptops, notebooks and various other bits of portable/wearable
technogadgetry. Given the boom in mobile phones, it is inevitable that
wireless connectivity will become an increasingly popular part of the
portable and mobile computing game.
After an initial period during which proprietary
products dominated the market, the 802.11 standard was adopted and has
become the baseline for this ilk of mobile connectivity (an excellent
web primer by Jean Tourrilhes of HP is posted at http://www.hpl.hp.com/personal/Jean_Tourrilhes/Linux/).
While the 900 MHz band achieved some success, most current products are
firmly centred in the 2.45 GHz ISM (Industrial-Scientific-Medical) band,
for which there is no licensing requirement if the transmitter is
appropriately limited in power output (1 Watt, although the definition
is really related to the antenna Equivalent Isotropic Radiated Power or
EIRP) and band coverage.
The baseline 802.11 offering comes in two forms. The
first is the Direct Sequence or Direct Spreading (DS) scheme at 1 or 2
Mbits/s, in which each bit in the binary stream to be transmitted is
encoded into an 11-bit Barker code, which is then phase modulated upon a
carrier wave. With four mutually orthogonal forms of the 11-bit code,
802.11-DS can have four channels sharing the same bandwidth, in a manner
not unlike CDMA telephony.
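For readers who like to see the mechanics, the following toy Python
sketch shows the spreading and despreading idea, assuming the 11-chip
Barker sequence usually quoted for 802.11 DS; it illustrates the
principle only, and is not a compliant PHY implementation.

    import numpy as np

    # The 11-chip Barker sequence usually quoted for 802.11 DSSS
    # (an assumption here; this is a sketch, not a compliant PHY).
    BARKER_11 = np.array([+1, -1, +1, +1, -1, +1, +1, +1, -1, -1, -1])

    def spread(bits):
        # Map each data bit to -1/+1 and multiply by the chips, so
        # every bit becomes an 11-chip wide sequence on the air.
        symbols = 2 * np.asarray(bits) - 1
        return np.concatenate([b * BARKER_11 for b in symbols])

    def despread(chips):
        # Correlate each 11-chip block against the code; the strong
        # autocorrelation peak of the Barker sequence recovers the bit.
        blocks = np.asarray(chips).reshape(-1, 11)
        return (blocks @ BARKER_11 > 0).astype(int)

    bits = np.array([1, 0, 1, 1, 0])
    assert np.array_equal(despread(spread(bits)), bits)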
The second form is the Frequency Hopping (FH) scheme at
1 Mbits/s, in which the band is chopped up into channels, and the
carrier is pseudo-randomly hopped between channels, thus spreading the
energy of the signal evenly across the band.
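A sketch of the hopping idea, with the channel plan treated purely as
an assumption (79 channels above 2.402 GHz is the figure commonly
quoted for 802.11 FH in the US): both ends seed the same pseudo-random
generator, so they hop in step.

    import random

    NUM_CHANNELS = 79      # assumed channel count, for illustration
    BASE_MHZ = 2402.0      # assumed bottom of the band, for illustration

    def hop_sequence(seed, hops):
        # Both stations derive the same pseudo-random channel list
        # from a shared seed, so they retune in lock-step.
        rng = random.Random(seed)
        return [BASE_MHZ + rng.randrange(NUM_CHANNELS) for _ in range(hops)]

    # Transmitter and receiver agree on the sequence.
    assert hop_sequence(42, 8) == hop_sequence(42, 8)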
The aim of using spread spectrum techniques in 802.11
was twofold: to allow low power operation at a useful bit rate, and to
allow operation in a free-for-all unlicensed portion of the
spectrum. In general, wideband spread spectrum systems have
significantly better resilience to narrowband interfering signals than
conventional narrowband modulations.
Inevitably, users wanted more channel throughput, and
thus the race was on to find ways of extending the standard to get
higher bit rates, or put simply, squeeze more blood out of the stone.
After much haggling between vendors on the committee, an agreement was
reached which led to the adoption of the IEEE 802.11-b or 802.11 HR
standard, which is an extension of the direct spreading modulation
scheme to provide either 5.5 Mbits/s or 11 Mbits/s by imposing an
additional modulation on the signal, with some inevitable loss in signal
robustness as Shannon predicts. There can be no doubt that 11 Mbits/s
in a wireless Ethernet is a useful throughput, and the standard has
proven to be very popular in the market, especially for environments
needing only small footprints.
Not surprisingly, navigating the 802.11 marketplace is
not a game for the faint-hearted or technologically challenged, since we
now have three different derivatives bundled in the same standard, of
which only two have a one-way backward compatibility relationship.
The use of spread spectrum techniques for 802.11 was a
radical departure from the established wireless communications game, but
given the very low spreading ratios resulting from short pseudo-random
coding, it has not delivered the level of robustness many users may
have expected, after hearing tales of near jam-proof military spread
spectrum communications. This was an inevitability, insofar as the
robustness of any spread spectrum transmission scheme depends
critically upon the spreading ratio between the baseband signal and the
modulation on the carrier. In practice, the 10 dB or so offered by
802.11 is not much of a defence against the neighbour's leaky microwave,
or ISM-barely-compliant spread spectrum telephone handset.
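As a rough sanity check on that figure: the processing gain of a
direct spreading system is about 10 x log10 of the spreading ratio,
so the 11-chip Barker code yields roughly 10 x log10(11), or about
10.4 dB.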
Interference rejection proved to be one weakness of the
established 802.11 protocols, but it is not the only one, as was soon
discovered once the product arrived in the market en masse. Multipath,
the curse of analogue television, FM radio in cars and mobile telephony,
proved to be just as much of a curse for wireless LANs.
To those who have a strong background in radio-frequency
communications, multipath is an inevitable and well understood fact of
life in terrestrial transmission. Radio signals propagating along the
surface will bounce off obstructions such as buildings or hills, and what
a receiver sees at its antenna is a jumble of variously time delayed and
variously weakened copies of the signal which left the transmitting
antenna. In engineering speak, a vector sum of signals.
What this looks like at the receiver end depends very
much on the geometry and relative strengths of the signal received
directly, and its delayed copies. The worst case situation is where the
delayed signal is just as strong, but delayed by half a carrier cycle,
or, in technical terms, exactly out of phase. When this happens, the two signals cancel
each other out and the signal fades to nothing, causing a dropout. The
term fading, used to describe the deleterious effects of multipath, is
aptly chosen.
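The worst case is easy to reproduce numerically. The little Python
sketch below (all numbers illustrative) adds a direct ray to an
equally strong copy delayed by half a carrier cycle, and the vector
sum collapses to essentially nothing.

    import numpy as np

    f = 2.45e9                        # carrier frequency, Hz (illustrative)
    t = np.linspace(0, 4 / f, 1000)   # four carrier cycles

    direct = np.cos(2 * np.pi * f * t)
    delayed = np.cos(2 * np.pi * f * (t - 0.5 / f))   # half-cycle delay

    # A half-cycle delay flips the sign of the carrier, so the vector
    # sum at the antenna cancels almost exactly: a deep fade.
    combined = direct + delayed
    print(np.max(np.abs(combined)))   # ~0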
How well a receiver copes with fading depends on the
technology in use, and the severity of the multipath effects. Direct
spreading spread spectrum receivers have traditionally performed quite
well under modestly severe fading conditions, by using rake receivers,
in effect a battery of parallel receivers each of which locks on to one
of the multipath delayed copies of the carrier. But even this technique
fails if the multipath fading is serious enough, since mutually
cancelling or almost cancelling carriers leave very little indeed for a
rake receiver to lock on to.
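A toy illustration of the rake idea, reusing the Barker code from
earlier (the delays, path strengths and two-finger structure are all
invented for the example):

    import numpy as np

    code = np.array([+1, -1, +1, +1, -1, +1, +1, +1, -1, -1, -1])
    bit = +1

    # One spread bit arrives via a direct path and a weaker copy
    # delayed by three chips.
    tx = bit * code
    rx = np.zeros(20)
    rx[0:11] += 1.0 * tx
    rx[3:14] += 0.6 * tx

    # Each rake "finger" correlates at one known path delay; summing
    # the fingers pools the energy of both copies before the bit
    # decision (a real rake would also weight each finger by path
    # strength and track the carrier phase).
    fingers = [0, 3]
    energy = sum(np.dot(rx[d:d + 11], code) for d in fingers)
    print(+1 if energy > 0 else -1)   # recovered bit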
Other alternatives do exist, some of which are very
effective. Military GPS receivers use smart adaptive beamforming and
nulling antennas, which cleverly point beams at satellites, and antenna
nulls at interfering sources, or jammers. Needless to say it doesn't
take a genius engineer to figure out that a very nice way to jam GPS is
to simply retransmit a time delayed copy of the GPS signal to
artificially introduce a serious fading problem. Adaptive beamforming
and nulling can pretty much nullify such mischief. The snag? The antenna
and its supporting beamforming electronics can cost tens of thousands
of dollars, weigh several kilograms, and are definitely not a near
term candidate for an IEEE 802.11 laptop user, no matter how profligate
he or she may be with the departmental IT budget!
Fading has been a thorn in the side of every radio
engineer throughout the history of broadcast communications, and
inevitably became a subject for much practical and academic research.
Some urgency arose with the growth in mobile communications, since there
was no simple way of dealing with the problem when using a cheap whip
antenna, or other simple omni-directional antenna type.
One of the interesting facts which follows from a little
thought on the subject is that fading is frequency dependent, as well
as spatially dependent. If the propagation path of the interfering copy
of the radio wave has some fixed length, retuning the carrier wave will
see the fading periodically increase and decrease, as the two carrier
waves move in and out of phase with one another. So just as one can
drive in and out of an area of severe fading when talking on one's
mobile, one could achieve a similar effect by having the transmitters
and receivers retune themselves to suppress fading.
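To put a number on this, consider two paths whose lengths differ by a
fixed 30 metres (an invented figure): the two copies drift a full
cycle in and out of phase for every c/30 m = 10 MHz of retuning,
which the sketch below reproduces.

    import numpy as np

    c = 3e8          # speed of light, m/s
    delta_d = 30.0   # path length difference, m (illustrative)

    # Sweep the carrier and form the vector sum of a unit direct ray
    # and an equal-strength delayed ray; nulls recur every c/delta_d.
    freqs = np.linspace(2.400e9, 2.410e9, 11)
    amplitude = np.abs(1 + np.exp(-2j * np.pi * freqs * delta_d / c))
    for f, a in zip(freqs, amplitude):
        print(f"{f / 1e9:.3f} GHz -> relative amplitude {a:.2f}")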
Attempting such a scheme is, however, not a very practical
idea, for both engineering and regulatory reasons. So the big
question remains of how to exploit this aspect of multipath
propagation physics to advantage. A very elegant solution does indeed
exist, and we can expect to see much more of it in the foreseeable
future - COFDM.
Coherent Orthogonal Frequency Division Multiplexing (COFDM)
COFDM is the basis of the new European (and soon
Australian) High Definition TV broadcast standard, the European Digital
Audio Broadcasting (DAB) standard, and the new 802.11-a 5 GHz band
wireless LAN standard. The push in the direction of COFDM, particularly its
early adoption in Europe, has much to do with Europe's endemic multipath
problems resulting from very high population density and a lot of very
hilly terrain. What is an annoyance in urban Australia and the US, and
largely a non-issue in most rural areas, is a do-or-die problem for the
Europeans.
To best appreciate how COFDM achieves good rejection of
multipath fading, it is useful to do a little experiment. Let us
contrive a radio link which allows us to retune both ends concurrently,
and then let us find a place in the neighbourhood known for horrendous
fading problems. What next? We pick a slice of the radio spectrum we
intend to work in, and tune the carrier across our frequency range of
interest. What we will find is that for a reasonably wide slice of the
spectrum, typical for a wideband data signal, we are likely to get some
particular patches where multipath bites a lot, and many others where
the phase differences cause no pain at all.
Can we think about this effect differently? Let's assume
our carrier is modulated in amplitude with a simple high speed binary
data stream. It will produce a spectrum with sidebands, and as a result
of the multipath fading being frequency dependent, parts of these
sidebands will be chopped out or damaged. What happens when we view the
recovered data stream? We will see nasty shape distortion, causing some
bits to run into their neighbours, not unlike the ugliness we encounter
on a cable which distorts signals by suppressing higher frequency
components. Different physics, but related consequences.
How can we exploit this behaviour to advantage? If we
transmit the data modulation at a much slower bit rate, the distortion
will become increasingly less significant. Why? The sidebands contract
and fewer of them fall into the part of the spectrum where the multipath
causes them to fade out.
This inevitably leads to the basic idea behind COFDM.
Rather than transmitting a very fast digital modulation with redundant
data bits on a single carrier, which is vulnerable to fading because the
modulation sidebands become damaged, we transmit a very large number of
redundant subcarriers each with a very slow modulation. As a result, if
fading knocks out one or more subcarriers, we can still recover the
data safely, which is not necessarily true of the single carrier/fast
modulation scheme. In concept, this is not unlike the comparison
between a redundant parallel bus with many wires, against a serial bus
with one wire. Parallel busses have always been easier to build for a
given throughput, because their N wires can each be clocked N times
slower than the single wire of the serial equivalent.
This analogy isn't quite as silly as it may seem, on
closer examination, because the problems which kill bus performance
arise also from pulse distortion in the transmission medium, in this
case a cable. The physics via which the distortion arises may be quite
different, but they produce much the same effect at a system level.
A trivial multiple subcarrier FDM system could be simply
built by stacking a large number of low speed data modems in parallel,
each tuned to its subcarrier frequency, and then suitably multiplexing
and demultiplexing the digital data stream going into the system.
However, if we want several hundred subcarriers to properly exploit the
available benefit, we end up with something which is prohibitively
expensive to build and much too bulky for ordinary users.
Modern COFDM systems are affordable and compact, as they
exploit some clever idiosyncrasies in the mathematics of the problem.
The origins of COFDM go back to 1971, when a pair of
very clever research engineers, Paul Ebert and S. Weinstein, both
working for Bell Labs in New Jersey, discovered a curious relationship
between the Fourier transform, beloved by engineers, and the behaviour
of coherent FDM systems using large numbers of subcarriers. The FDM
signal, made up of a large number of coherent (ie having a fixed
frequency relationship) subcarriers, could be shown to be the Fourier
transform of the digital data stream, and that the behaviour of the
stack of coherent demodulators could be described by the inverse Fourier
transform. Since Fourier transforms can be crunched on computers, Ebert
and Weinstein suggested a new modulator design for this purpose, a
completely digital modem built around a special purpose computer
performing the fast Fourier transform (FFT) algorithm (the author is
indebted to Dr Chintha Tellambura of Monash Uni for providing a copy of
this 1971 paper).
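The relationship is easy to verify numerically. In the sketch below, a
bank of N coherent subcarriers, each weighted by one data symbol,
produces sample-for-sample the inverse discrete Fourier transform of
the symbol vector, which is exactly why one FFT engine can replace an
entire stack of modulators.

    import numpy as np

    N = 8
    rng = np.random.default_rng(0)
    symbols = rng.choice(np.array([1 + 0j, -1 + 0j]), size=N)  # BPSK, one per subcarrier

    # Explicit subcarrier bank: subcarrier k is exp(2j*pi*k*n/N).
    n = np.arange(N)
    bank = sum(symbols[k] * np.exp(2j * np.pi * k * n / N) for k in range(N))

    # The same samples via the inverse FFT (numpy's ifft divides by N).
    assert np.allclose(bank, N * np.fft.ifft(symbols))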
Given the speed and cost of computers during that
period, the Ebert-Weinstein modem had to wait nearly three decades before it
could be produced economically.
Before practical COFDM systems could be produced, other
theoretical refinements had to be developed. In the Ebert-Weinstein
model, the data could not be transmitted continuously, since the
sidebands of the subcarriers interfered with one another. They dealt
with this problem by leaving gaps in the transmission, which was
inefficient. The solution to the problem was found in 1980 by NEC
research scientist Botaro Hirosaki, who discovered that making the
subcarriers mathematically orthogonal allowed transmission without
interference between their sidebands. The condition for
orthogonality was found to be very simple - the spacing of the
subcarrier frequencies had to be the inverse of the data symbol period
(ie bit cell duration on each subcarrier). If this condition was
satisfied, then the system could transmit data on every subcarrier with
no intervening gaps.
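The condition is simple to check numerically: two complex subcarriers
spaced at exactly 1/T have a zero inner product over one symbol period
T, so neither leaks into the other's demodulator.

    import numpy as np

    T = 1e-3                    # symbol period, s (illustrative)
    fs = 1e6                    # sample rate, Hz
    t = np.arange(0, T, 1 / fs)

    # Two subcarriers spaced by exactly 1/T = 1 kHz.
    s1 = np.exp(2j * np.pi * 5000.0 * t)
    s2 = np.exp(2j * np.pi * (5000.0 + 1 / T) * t)

    # Their inner product over one symbol period is (numerically) zero.
    print(abs(np.vdot(s1, s2)) / len(t))   # ~0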
Thus was born COFDM. Before practical commercial systems
could be built, advances had to come in FFT processing. By the 1990s,
FFT processor chips reached the cost and performance level where mass
production COFDM modems became feasible.
Modern COFDM transceivers rely fundamentally on the
availability of high speed signal processing chips, FFT processor chips
and analogue/digital and digital/analogue converter chips.
A typical design will see the digital data stream
converted into a parallel set of N bits for N subcarriers. It is then
used to produce, in software, the phase keyed modulation values for
each subcarrier (for readers with an engineering background, this
amounts to calculating the respective amplitudes of the real and imaginary
components of each subcarrier, to get the phase angle required to
encode the bit value). These are then fed into an FFT chip which
performs an inverse FFT into the time domain. These samples are then fed
into a digital/analogue converter to produce the modulation envelope for
the signal. At the receiver end, the signal is digitised, and the
samples fed into an FFT chip to perform the forward FFT into
the frequency domain, to recover the subcarriers and their respective
phase shifts. Once the phase values are established, the bits are
recovered and the data stream can be produced.
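Stripped of the radio, the whole chain fits in a few lines. The
loopback sketch below uses simple binary phase keying on 64
subcarriers and omits everything a real modem needs around it (guard
intervals, channel estimation, error coding), but it follows the
signal path just described.

    import numpy as np

    N = 64   # subcarriers

    def modulate(bits):
        # One phase value per subcarrier (0/1 -> -1/+1), then an
        # inverse FFT to time-domain samples bound for the DAC.
        symbols = 2.0 * np.asarray(bits) - 1.0
        return np.fft.ifft(symbols)

    def demodulate(samples):
        # Forward FFT recovers the subcarrier phasors; a phase
        # decision per subcarrier recovers the bits.
        symbols = np.fft.fft(samples)
        return (symbols.real > 0).astype(int)

    bits = np.random.default_rng(1).integers(0, 2, N)
    assert np.array_equal(demodulate(modulate(bits)), bits)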
Since fading may knock out some of the subcarriers, a
block coding scheme with some redundancy will be used to recover the
data free of errors. This is usually backed up with further data
redundancy and error control measures in the data stream.
There can be little doubt that COFDM is the most complex
modulation scheme yet to penetrate into the mass production
techno-commodity market.
The 802.11-a COFDM Wireless Ethernet
The 5 GHz band 802.11-a standard will provide
unprecedented speed for a wireless application - no less than 54
Mbits/s. This is almost five times the throughput of the current 802.11
HR standard, which tops out at 11 Mbits/s under optimal transmission
conditions. Since COFDM is used, it is expected that the 802.11-a
standard will deliver significantly better robustness in fading
environments.
An 802.11-a link requires a modest 20 MHz bandwidth, and
encodes 64 subcarriers, with a Quadrature Amplitude Modulation envelope
carrying the data, using a pair of 64-point FFT chips and a
convolutional code with Viterbi decoding. A typical 802.11-a COFDM
modem chipset such as the
Radiata R-M11a uses 10-bit resolution analogue/digital and
digital/analogue conversion at 80 Megasamples/s.
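Working from these figures, a 20 MHz channel divided across a 64-point
FFT gives a subcarrier spacing of 20 MHz / 64 = 312.5 kHz, and the
orthogonality rule described earlier then fixes the useful symbol
period at 1/312.5 kHz = 3.2 microseconds per subcarrier.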
Not to break the well-established trend in this market,
some manufacturers are already selling designs with proprietary
enhancements. The Atheros AR5110 radio-on-a-chip is designed to operate
in 802.11-a compliant 54 Mbits/s mode, but also in a proprietary turbo
mode of up to 72 Mbits/s, out to a range of 100 feet.
What conclusions can we draw at this stage? The market
is still in its infancy, but the throughput advantages of 802.11-a COFDM
are so dramatic against the basic 802.11 and enhanced 802.11-b (HR)
standards, that we can expect to see a mad scramble by OEMs, WLAN board
manufacturers and mainstream computer manufacturers to incorporate the
COFDM product at the earliest possible date. Since 802.11-a operates
outside the established 2.45 GHz band, virtually all in-service WLAN
hardware will be effectively obsoleted. Very few 2.45 GHz antennas and
cables will perform well at 5 GHz, so we are likely to see a lot of
surplus 2.45 GHz equipment appearing on the market over the next 2-3
years.
To yet again paraphrase Larry Niven, it is another case
of evolution in action.