Mobile Satellite Communications - Part 2
A Technical Critique

Originally published April, 1997
by Carlo Kopp
© 1997, 2005 Carlo Kopp
In last issue's feature
we surveyed the new generation of mobile communications satellites, and
briefly reviewed some of the basic technical issues surrounding this new
alternative in communications. In this follow-up feature, we will take
a closer look at some of the technical issues and fundamental
limitations of the existing schemes.
The best starting point for any such discussion is to articulate the
basic properties which users of long distance computer communications
expect of a transmission medium. Using these as a baseline, we will then
look at a number of existing schemes and determine what their strengths
and weaknesses are in relation to the ideal model.
What Would the Ideal Mobile Communications Scheme Offer?
The trivial answer to this question is infinite, error-free bandwidth with zero latency, at zero cost to consumers, and with global all-weather coverage. As nice as this sounds, fundamental physics and information theory suggest that this cannot be, so the consumer will have to accept
some significant compromises. Even so, producing and implementing a
scheme which delivers good performance for computer users, as compared
to voice users, is not an easy task. To better understand why, it helps
to review the most important criteria:
- Bandwidth is the
critical factor in modern computer comms, and the ever increasing demand
for multimedia in commercial applications, and imagery transmission in
military and other government applications, suggests that bandwidth
will remain a critical issue. Whereas traditional circuit switched
voice applications can easily merge multiple streams and thus smooth
out traffic loads with increasing traffic volumes, computer traffic has
the nasty property of being fractal in its statistical properties. If
you merge multiple streams of bursty computer traffic, the merged
stream will exhibit similar burstiness properties to the component
streams (a future feature will discuss the implications of the Bellcore paper in more detail; this merging behaviour is sketched below). What this means in turn is that individual users
will require multiple Mbit/s class bandwidths to their satellite
terminals, if they are to be provided with crisp and responsive
interactive performance. Moreover, the shared transmission medium will
have to have significant headroom in traffic carrying capacity if it is
to accommodate the statistical behaviour of computer traffic without
suffering congestion problems. Imagine the consequences of a major
congestion collapse on an orbiting global network of routers. Provision
of bandwidth is complicated by a number of factors. The foremost of
these is spectral congestion, which is a major problem in the US and
Europe and is becoming an issue in this country as well. At this time
the only bands which are not saturated or approaching saturation with
existing traffic are the microwave bands above 15 GHz. Unfortunately,
fundamental physics make the exploitation of frequencies above 15 GHz
quite difficult. This is because the lower layers of the atmosphere are
dense and this density results in significant absorption of microwave
signals. For instance, water molecules resonate at 22 GHz and oxygen at
60 GHz, making even clear air transmission extremely difficult if not
impossible in the immediate vicinity of these frequencies. Moisture
laden clouds and rain exacerbate this problem, as they are excellent
microwave absorbers in their own right. Above 20 GHz heavy rain will
knock out transmission with increasing effectiveness with increasing
frequency. As a result of these effects, the upper microwave bands are
best exploitable in two "windows", one below 22 GHz and the other
between 25 and 45 GHz. Above 60 GHz, quantum physical absorption makes
long haul links impractical. Needless to say infrared or optical
frequencies suffer the same problems to an even greater degree.
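To make the burstiness point raised earlier in this item concrete, the toy simulation below (a sketch only, not the Bellcore analysis) merges a number of ON/OFF sources whose burst lengths are heavy-tailed, and measures how bursty the aggregate remains as the averaging window grows. The source count, Pareto shape and window sizes are arbitrary illustrative choices.

```python
# A toy sketch (not the Bellcore analysis itself): merge several ON/OFF
# sources with heavy-tailed (Pareto) burst lengths and see how bursty the
# aggregate remains as the averaging window grows. Source count, Pareto
# shape and window sizes are illustrative assumptions only.
import random

random.seed(0)

def pareto_on_off_source(n_slots, shape=1.4):
    """Return a 0/1 activity series with heavy-tailed ON and OFF periods."""
    series, state = [], 1
    while len(series) < n_slots:
        burst = int(random.paretovariate(shape))  # >= 1 slot per burst
        series.extend([state] * burst)
        state ^= 1                                # toggle ON <-> OFF
    return series[:n_slots]

def burstiness(series, window):
    """Coefficient of variation of traffic summed over fixed windows."""
    sums = [sum(series[i:i + window]) for i in range(0, len(series), window)]
    mean = sum(sums) / len(sums)
    var = sum((s - mean) ** 2 for s in sums) / len(sums)
    return (var ** 0.5) / mean if mean else 0.0

n_slots, n_sources = 100_000, 20
merged = [0] * n_slots
for _ in range(n_sources):
    merged = [m + s for m, s in zip(merged, pareto_on_off_source(n_slots))]

for w in (10, 100, 1000):
    print(f"window={w:5d}  CoV of merged stream = {burstiness(merged, w):.3f}")
# For self-similar traffic the CoV falls off far more slowly with window
# size than the 1/sqrt(window) decay expected of Poisson-like voice traffic.
```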
- Quality of Service
(QoS) is measured by link availability and link error rates. As just
noted, the use of the millimetric band is quite problematic, primarily because poor weather conditions will compromise QoS very rapidly. Achieving good bit error rates (e.g. < 10^-9) is generally not a problem with modern receivers, be they microwave or optical; moreover, clever use of
redundancy in coding schemes can provide excellent resistance to bit
errors produced by receiver noise or interference. However, having your
link drop out altogether as a cloud passes overhead would be most
annoying. An important issue in this context is the ability to cope with
interference from natural or man made sources (the latter including
intentional jamming in military situations). Conventional modulation
schemes do not cope particularly well with interference, and the only
defence is to trade bandwidth for redundancy in order to maintain QoS.
Fortuitously, the answer to this problem, as well as the problem of
spectral congestion, has existed for several decades. It is the
technique of Spread Spectrum communications, in which the intended
message modulation is spread over a significantly wider bandwidth, by
additional modulation with a pseudo-random binary code. The resulting
signal appears as noise to a conventional receiver, and is likewise ignored by a spread spectrum receiver using a different code. A future feature will
discuss this in more detail.
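As a rough illustration of the spreading and despreading idea, the fragment below is a minimal sketch, not any particular system's air interface: each data bit is multiplied by a pseudo-random chip sequence and recovered by correlation. The chip count, the crude constant-bias interferer and the noise level are arbitrary assumptions.

```python
# A minimal direct-sequence spread spectrum sketch; illustrative only, not
# any particular system's air interface. Each +/-1 data bit is multiplied
# by a pseudo-random +/-1 chip sequence; the matched receiver recovers it
# by correlating against the same sequence, while the transmission looks
# like noise to a receiver that does not know the code.
import random

random.seed(1)
CHIPS_PER_BIT = 64
code = [random.choice((-1, 1)) for _ in range(CHIPS_PER_BIT)]  # spreading code

def spread(bits):
    return [b * c for b in bits for c in code]

def despread(chips):
    bits = []
    for i in range(0, len(chips), CHIPS_PER_BIT):
        corr = sum(x * c for x, c in zip(chips[i:i + CHIPS_PER_BIT], code))
        bits.append(1 if corr >= 0 else -1)
    return bits

data = [random.choice((-1, 1)) for _ in range(16)]
tx = spread(data)
# Crude channel model (assumed): a constant-bias interferer plus Gaussian
# receiver noise on every chip.
rx = [x + 1.5 + random.gauss(0.0, 1.0) for x in tx]
errors = sum(a != b for a, b in zip(despread(rx), data))
print(f"bit errors after despreading: {errors} of {len(data)}")
# With a processing gain of 64 chips per bit the correlation typically
# overwhelms both the interferer and the noise, so errors are normally zero.
```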
- Latency is the
propagation delay incurred by the message as it propagates between
sender and receiver. Because radio and light waves travel at the speed
of light, which is approximately 3 × 10^8 m/s in free space, appreciable
latencies can be incurred in long distance transmission. This is
particularly the case with GEO satellite links. Terrestrial links and
LEO satellite schemes therefore have an inherent advantage in the
latency contest, simply because they have much shorter distances to cover. However, additional latency will be incurred at every repeater or router along the way. Therefore schemes which have an advantage in propagation delay may lose much ground if many repeaters or
routers are used.
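A back-of-the-envelope comparison of the propagation component alone can be sketched as below, assuming free-space propagation at roughly the speed of light, a straight up-and-down path, the GEO altitude of 35,786 km and an assumed LEO altitude of about 780 km.

```python
# A back-of-the-envelope latency sketch. Assumptions: free-space propagation
# at roughly the speed of light, a straight up-and-down path (no slant), a
# GEO altitude of 35,786 km and an assumed LEO altitude of about 780 km.
C = 3.0e8  # approximate speed of light in free space, m/s

def bounce_delay_ms(altitude_km):
    """One up-and-down hop: ground -> satellite -> ground."""
    return 2 * altitude_km * 1e3 / C * 1e3

print(f"GEO bounce: ~{bounce_delay_ms(35_786):.0f} ms")  # roughly a quarter second
print(f"LEO bounce: ~{bounce_delay_ms(780):.1f} ms")     # a few milliseconds
# Every additional crosslink hop or on-board router adds its own path and
# queuing delay, which can erode the LEO advantage on long multi-hop routes.
```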
- Coverage is a
measure of the footprint of a satellite or constellation of satellites.
The footprint of each sat is in turn determined by the type of antennas
used, the power transmitted and the sensitivity of the receivers used.
In mobile comms or computing applications, the user terminal will
require a compact lightweight antenna, which limits the achievable antenna gain and thus terminal sensitivity. This in turn demands higher power output and better antenna and receiver performance from the satellite, which bites into complexity and thus cost. An LEO constellation will have a much smaller footprint per satellite,
compared to say MEO or GEO schemes, with adjacent satellite footprints
overlapping one another at their respective boundaries. While the LEO
schemes can therefore pack more bandwidth per square kilometre of
coverage, they must in turn accommodate a smooth cutover between
satellites as one moves out of coverage and another into coverage. This
will add complexity to receivers, and thus cost. An issue in the
context of coverage is "granularity", or the relative throughput per
receiver. Mobile systems mounted on vehicles, ships or aircraft can use
a single large high performance antenna/transmitter/receiver which
feeds individual users through an onboard LAN.
If you wish to feed individual users with mobile laptops or portables,
you immediately incur penalties in cost and complexity, particularly at
the satellite end. Satellite schemes intended to support individual
users will run into a major issue, which is that of population density
in the satellites' footprints. Consider an LEO scheme where each
satellite has a circular footprint 300 kilometres in diameter.
While this satellite is over Tahiti, Bougainville, Baluchistan, the
Kalahari or the Simpson desert, it will probably take a handful of
connections from geologists, missionaries, the odd tourist and possibly
a local government. Consider, however, the load upon such a satellite over Tokyo, New York, LA, London or Singapore. If we assume that 5% of the population will each want a 2 Mbit/s connection, then we immediately
run up an aggregate bandwidth requirement of the order of hundreds of
Gigabits/sec for that satellite alone, and the overheads to manage the
state of hundreds of thousands of connections. If we assume that only
0.5% of the population wants a connection, the numbers are still very problematic (a rough version of this arithmetic is sketched below). This is indeed the Achilles heel of most of the mobile
satellite schemes proposed to date.
They will indeed provide an unprecedented service for users out in
Woop-woop, but are likely to suffer significant difficulties once
confronted with the high population densities of the First World. Since
most of the world's computer comms users live in the First World, which
also produces most of the world's GDP, we must ask a fundamental
question - why should shareholders of global communications schemes want
to provide a low cost worldwide service when most of the best revenue
sources will be unable to extract a truly high quality high speed
service from the system and thus are unlikely to subscribe in viable
numbers?
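The footprint-loading arithmetic above can be sketched as follows. The urban population assumed to sit under a single 300 km footprint is an illustrative figure, a few million being conservative for the conurbations named, while the 5% and 0.5% take-up rates and the 2 Mbit/s per-user rate follow the example in the text.

```python
# A minimal sketch of the footprint-loading arithmetic above. The urban
# population assumed to sit under one 300 km footprint is an illustrative
# figure; the 5% and 0.5% take-up rates and the 2 Mbit/s per-user rate
# follow the example in the text.
population_in_footprint = 5e6   # assumed: conservative for the cities named
per_user_rate_mbps = 2          # per-user access rate from the text

for take_up in (0.05, 0.005):
    users = population_in_footprint * take_up
    aggregate_gbps = users * per_user_rate_mbps / 1e3
    print(f"take-up {take_up:.1%}: {users:,.0f} connections, "
          f"~{aggregate_gbps:,.0f} Gbit/s aggregate for one satellite")
```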
Let us however assume that some clever engineering tricks are played
and a satellite can be built to carry many Gigabits/s of traffic within
its footprint. We are then confronted with the issue of carrying this
traffic to adjacent satellite borne routers, and forwarding it to its
destination. Should we adopt conventional shortest path routing
algorithms, we could simply trace great big lines across the globe
between the First World's major population centres, and expect that
satellites along these lines will be extremely busy simply carrying
traffic between their neighbours. Again, we are likely to confront
similar problems in saturation of routers, and thus performance problems
due to queuing delays. So to avoid saturating satellites along paths
between Europe, the US and the Far East, we adopt a routing scheme
which channels the traffic through less geometrically advantageous
satellites which are lightly loaded with traffic, as they are passing
over Third World countries.
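This routing trade-off can be sketched with a toy shortest-path computation; the sketch below is not any proposed system's algorithm, and the constellation topology, link distances and load figures are invented purely for illustration. The cost function simply adds a tunable penalty for each heavily loaded satellite on the path.

```python
# A sketch (not any proposed system's algorithm) of the routing trade-off
# described above: pure shortest-path routing versus a cost that also
# penalises heavily loaded satellites, pushing traffic onto longer but
# lightly loaded detours. Topology and load figures are invented.
import heapq

def route(graph, src, dst, load, load_weight):
    """Dijkstra over cost = link distance + load_weight * load of next node."""
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, km in graph[u]:
            cost = d + km + load_weight * load.get(v, 0.0)
            if cost < dist.get(v, float("inf")):
                dist[v], prev[v] = cost, u
                heapq.heappush(heap, (cost, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path))

# Tiny illustrative constellation: a direct chain over busy satellites and a
# longer detour over lightly loaded ones.
graph = {
    "LA":    [("S1", 2000), ("S4", 2600)],
    "S1":    [("S2", 2000)], "S2": [("S3", 2000)], "S3": [("Tokyo", 2000)],
    "S4":    [("S5", 2600)], "S5": [("S6", 2600)], "S6": [("Tokyo", 2600)],
    "Tokyo": [],
}
load = {"S1": 900, "S2": 900, "S3": 900, "S4": 50, "S5": 50, "S6": 50}

print("distance only :", route(graph, "LA", "Tokyo", load, load_weight=0.0))
print("load penalised:", route(graph, "LA", "Tokyo", load, load_weight=2.0))
```

With the load penalty switched off the route runs straight through the busy chain; with it switched on the traffic detours over the lightly loaded satellites, at the price of a longer path.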
We then begin to incur latency delays through having to hop across more
satellites, and traverse greater distances. In any event, we still end
up with ever increasing traffic density as we approach the First World.
This raises serious questions about the technical and commercial
viability of a number of mobile satellite schemes, particularly in
relation to the carriage of high speed computer traffic. Indeed, the
only viable near term consumer of high bandwidth digital satellite comms
will be the military in the First World. The US DoD Milstar
constellation, with four GEO satellites using 60 GHz crosslinks, provides a T1 service for a limited number of channels, and is limited to 2,400 bit/s for its standard high volume, low data rate
service. The Milstar I/II is both large and expensive to build and
deploy.
Conclusions
The conclusion we can draw from a basic analysis is that the current
generation of proposed mobile satellite communication schemes suffers significant technical limitations in the carriage of computer traffic, which will in turn reduce their utility in the highest density and thus
best revenue generating parts of the world. They do however provide
useful if limited connectivity to parts of the world's geography which
are not provided with viable terrestrial links. Whether provision of service to such areas will generate sufficient revenue for a follow-on generation of satellites remains to be seen.