
Assessing the Impact of Exponential Growth Laws on Future Combat Aircraft Design

Air Power Australia Analysis 2010-04
  31st December 2010

A Monograph by
Dr Carlo Kopp, SMAIAA, SMIEEE, PEng
©  2010, Carlo Kopp

F-22A Raptor, F-86H Sabre, P-38J Lightning and F-15E Strike Eagle. These four aircraft are characteristic of upper tier air superiority fighters built to kinematically dominate opponents in close combat (U.S. Air Force image).

Abstract


Exponential growth laws are well established predictors of performance in digital computing hardware, data storage hardware, computer networks and, demonstrably, focal plane array imaging devices. These technologies are becoming pervasive and as a result can now be found in most combat aircraft designs. This study explores how these growth laws can be expected to impact the evolution of combat aircraft designs over the coming two decades. Exponential growth laws in the basic technologies of computing, optical imaging, mass storage and data communications are surveyed and their constraints explored. Recent technological advances in radar are surveyed and related to these growth laws. The divergence in growth rates between kinematic and information domain basic technologies is explored, and a range of conclusions drawn. These in turn are related to combat aircraft capabilities, from the perspectives of survivability, lethality and the dynamics of air combat. Finally, this study summarises the direct and indirect impact which the exponential growth laws are likely to produce in the evolution of existing combat aircraft and the development of future combat aircraft.



Introduction



The history of combat aircraft design and evolution through operational service life now spans almost a century, including two World Wars, the extended Cold War period, and two decades of the post-Cold War era. As a result, a vast number of detailed and often exceptionally well documented case studies exist, illustrating both successes and failures in design, and designs which were able to evolve over time. Many designs were not able to evolve and rapidly became extinct, through the cessation of manufacture, attrition in combat operations, or both.


Evolutionary Considerations in Combat Aircraft



The “technological evolution” of combat aircraft over the last century shows that technological achievement has always reflected a confluence of available basic technology and operational imperatives in intended capability. While the former has enabled specific capabilities, the latter has acted to effectively attrit less competitive technologies over time1.

Perhaps the best case study of this effect has been the transition from reciprocating piston engine and propeller propulsion to the use of gas turbines. While reciprocating engines and propellers remain in use over a century since their introduction, they have been completely displaced in mainstream combat aircraft designs. Hybrids, such as gas turbine powered propeller and propfan systems, are employed in applications where low fuel burn at low and medium operating altitudes is the pivotal imperative. Turbofan engines have become the mainstay in mainstream applications, with specific design optimisations in bypass ratio and other key design parameters characterising specific design niches.

In the language of evolutionary theory, the introduction of new designs by variation in existing basic technology or the application of new basic technology amounts to a mechanism not unlike “mutation” in biological evolution, but with the important caveat that an entity separate from the evolving entity provides reproduction through manufacture. Another important difference from biological systems is that operational aircraft can be “mutated” through modification throughout their operational life, such modifications encompassing changes to hardware and more recently, software.

In biological evolution, “fitness” is some measure which describes the potential of a species to reproduce and thus propagate through time. In combat aircraft, reproduction amounts to continued manufacture or continued operation with ongoing modification to adapt the design to a changing environment. The analogy to biological extinction is withdrawal from service, as a result of attrition in combat or other causes which make continued production or operation of the design non-viable.

Contemporary military aircraft are complicated systems, combining an airframe, propulsion system, control system, sensors, navigation and communication systems, weapon systems, and a range of supporting systems ranging from cooling to crew life support systems.

The choice of specific technologies reflects a number of imperatives, some of which could be described as “short term” or “near term” imperatives, and some of which are “long term” imperatives. Short term imperatives may include the availability or popularity of certain components, materials or design strategies, while long term imperatives usually include qualities in combat and life cycle costs.

In most combat aircraft the dominant short and long term imperative continues to be “combat effectiveness”, most often defined as some combined result of “lethality” and “survivability”.

Broadly, the term “combat effectiveness” has been and continues to be used in many contexts, and thus cannot be said to be exactly defined, as definitions vary widely with context and application. Moreover many attempts to quantify combat effectiveness are subject to the cognitive bias of the observer or assessor attempting to do so2.

The accepted practice in the definition of combat effectiveness is the conflation of combat effects and survivability, as seen in the common use of Loss-Exchange-Rates as a Measure of Effectiveness in air to air combat. This practice is not unique; consider, for instance, the model introduced by Wang and Li, in which combat effectiveness E is defined as a linear combination of terms describing effectiveness in air to air and air to ground combat, with each term conflating measures of combat effect and survivability3.

Conflating lethality and survivability is useful in many situations, but incurs some risks as it can often obscure specific limitations in a design which may have a dominant effect.



Figure 1. Mitsubishi A6M2 Zeke Model 22. A pivotal weakness of most Japanese fighters during the 1940s was the absence of design measures to reduce vulnerability, such as armour plating and fuel system protection. When the nature of combat operations shifted from offensive strategic manoeuvre to sustained attrition warfare in the Solomons campaign, catastrophic attrition of aircraft and pilots ensued. The Japanese design philosophy of neglecting vulnerability to maximise lethality and minimise susceptibility proved unsuccessful for the type of conflict these aircraft were employed in (U.S. Air Force image).



Figure 2. F-22A Raptor (2010) and P-47D Thunderbolt (1944). Both incorporate the best survivability measures available during their respective periods of development (U.S. Air Force image).

An excellent historical case study of the decoupling of combat effect and survivability was the practice by Japanese combat aircraft designers, during the 1930s and 1940s, of sacrificing survivability measures such as armour and fire suppression to minimise weight and thus maximise lethality through best possible aerodynamic performance. This practice yielded good effectiveness in the early phase of the Second World War, where Japanese forces fought a high tempo offensive strategic manoeuvre campaign exploiting superior numbers, surprise and strategic mobility. Once the character of the conflict shifted to sustained attrition warfare, the survivability limitations of these designs resulted in catastrophic attrition losses in aircraft and personnel. Refer Figure 1.



Figure 3. Designed to survive defences by high speed flight at low altitudes and toss delivery of nuclear bombs, the F-105 suffered heavy losses when employed in dive bomb deliveries, using dumb bombs, during the South East Asian conflict (U.S. Air Force image).

Another interesting case study of mismatched design optimisation and combat environment is the operational use of the Republic F-105 Thunderchief in the South East Asian conflict during the 1960s and early 1970s. Initially developed to deliver tactical nuclear weapons by toss bombing, through dense air defences, the design was optimised to evade hostile fighters and Surface-to-Air Missiles by exploiting very high speed and low altitude flight. In South East Asia the aircraft was employed to deliver unguided conventional bombs, and suffered heavy losses primarily due to damage inflicted by low calibre low altitude Anti-Aircraft Artillery (AAA) fire, mostly during low altitude dive weapon delivery4. Refer Figure 3.

In the absence of a widely accepted definition of “combat effectiveness” the following definition will be applied:

Combat Effectiveness = (the ability of the aircraft to produce the intended combat effect in a specific combat environment) (1)

Combat effectiveness reflects in some fashion the effects of “survivability” and “lethality”.

Survivability research is mature and survivability as a quantitative measure is well defined5.

The definitions by Ball and Atkinson are most widely employed and provide a robust basis for both qualitative and quantitative analysis6:

“Aircraft combat survivability (ACS) is defined as the capability of an aircraft to avoid or withstand a man-made hostile environment. It can be measured by the probability the aircraft survives an encounter (combat) with the environment, P_S.”

Combat survivability in turn reflects three important measures, these being “susceptibility”, “vulnerability” and “killability”:

“Susceptibility is the inability of an aircraft to avoid (the guns, approaching missiles, exploding warheads, air interceptors, radars, and all of the other elements of an enemy's air defense that make up) the man-made hostile mission environment. The more likely an aircraft on a mission is hit by one or more damage-causing mechanisms generated by the warhead on a threat weapon (e.g. warhead fragments, blast, and incendiary particles), the more susceptible is the aircraft. Susceptibility can be measured by the probability the aircraft is hit by one or more damage mechanisms, P_H. Thus,
Susceptibility = P_H
Vulnerability is the inability of an aircraft to withstand (the hits by the damage-causing mechanisms created by) the man-made hostile environment. The more likely an aircraft is killed by the hits by the damage mechanisms from the warhead on a threat weapon, the more vulnerable is the aircraft. Vulnerability can be measured by the conditional probability the aircraft is killed given that it is hit, P_K|H. Thus,
Vulnerability = P_K|H
Killability is the inability of the aircraft to both avoid and withstand the man-made hostile environment. Thus, killability is the ease with which the aircraft is killed by the enemy air defense. Killability can be measured by the probability the aircraft is killed, P_K. Killability is given by the joint probability the aircraft is hit (its susceptibility) and it is killed given the hit (its vulnerability). Thus,
P_K = P_H • P_K|H

Killability = Susceptibility • Vulnerability
If the threat weapon contains a high explosive (HE) warhead with proximity fuzing, the subscript H for a hit is replaced with an F for warhead fuzing.”

The relationships between these measures are defined thus5:
P_S = 1 - P_K = 1 - P_H • P_K|H
Survivability = 1 - Killability = 1 - Susceptibility • Vulnerability
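As a simple illustration of these relationships, the short Python sketch below combines hypothetical values of susceptibility and vulnerability into killability and survivability; the probability values are arbitrary and chosen purely for illustration:

# Minimal sketch of the Ball/Atkinson survivability relationships.
# The probability values below are hypothetical, chosen only to
# illustrate how susceptibility and vulnerability combine.

def killability(p_hit, p_kill_given_hit):
    """P_K = P_H * P_K|H, the joint probability of being hit and killed given the hit."""
    return p_hit * p_kill_given_hit

def survivability(p_hit, p_kill_given_hit):
    """P_S = 1 - P_K = 1 - P_H * P_K|H."""
    return 1.0 - killability(p_hit, p_kill_given_hit)

if __name__ == "__main__":
    # Hypothetical sortie: 20% chance of being hit, 30% chance a hit is lethal.
    p_h, p_k_h = 0.20, 0.30
    print(f"Killability   P_K = {killability(p_h, p_k_h):.3f}")    # 0.060
    print(f"Survivability P_S = {survivability(p_h, p_k_h):.3f}")  # 0.940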
Measures to assess or quantify lethality in combat are much less well defined. While the measure of Probability of Kill, usually represented as P_K or P_KILL, is frequently employed in the context of weapons deployed against specific targets, the measure itself is a conflation of probabilities reflecting the design of various weapon components as well as the design of the delivery aircraft and its operational application.

Other measures of lethality have been employed. The measure of “throw weight”, defined as the product of some normalised weapon payload and the range of the delivery system, has been widely employed for strategic weapon systems, including bomber aircraft. It is a good measure of combat utility for a bomber, and for a fleet of bombers, permitting cardinal comparisons between fleets. Its limitation is that it does not encompass a measure of effectiveness in killing targets, as the latter is effectively nulled by the normalisation of the weapon payload.

Loss-Exchange-Rates (LER) are a relative measure of fighter aircraft lethality, but as they conflate survivability effects, they are not a true lethality measure.

In assessing lethality, several important considerations apply:

  1. Different target types require different delivery techniques and different weapons to achieve high kill probabilities. An aircraft suitable and indeed highly effective for one category of target may be quite unsuitable for another;
  2. The reliability of hardware, and more recently, software is a critical determinant of aircraft operational availability, and can seriously impact both lethality and survivability if poor;
  3. Basic aerodynamic performance in range/payload or persistence is now important for both air combat and strike roles, and impacts both lethality and survivability;
  4. Sustained and short duration kinematic performance (speed, acceleration, climb rate, turn rate) is important for both lethality and survivability;
  5. Sensor capabilities, passive and active, are important for both lethality and survivability;
  6. Observables performance, of the airframe, propulsion and active sensors, is important for both lethality and survivability.

These considerations expose an important duality in combat aircraft airframe designs and sensor fits, which is that most often key functional capabilities and sensor suite components make important contributions to both lethality and survivability. Refer Figure 4.



Figure 4.

If we place airframe performance into a domain encompassing “kinematic domain capabilities” and sensor performance into a domain encompassing “information domain capabilities”, then both lethality and survivability are complex functions of the aircraft's kinematic and information domain capabilities7.



Figure 5. Messerschmitt Me-262A Schwalbe. The Me-262A was the first operational jet fighter, powered by a pair of Jumo 004 axial turbojets. This aircraft kinematically defeated all Allied piston engine fighters and was highly effective in combat. The immaturity of its engines resulted in frequent hot end failures, with this reliability problem resulting in frequent combat losses (U.S. Air Force image).

A good example can be found in systems such as geolocating radio-frequency receivers, which are a potent defensive aid permitting early detection and evasion of threats. The very same sensor can be employed for precision targeting of guided weapons against targets producing radio-frequency emissions.

An analogous example is an Infrared Search and Track (IR&ST) system designed to acquire airborne targets, but potentially a valuable defensive sensor if it can detect missile motor ignition flares in air combat.

Persistence at high speeds, subsonic or supersonic (i.e. supercruise), is another example, providing valuable capabilities in closing with targets or separating from threats, in both air-to-air and air-to-ground combat.

The overlap or duality property in both lethality and survivability across aircraft capabilities in the kinematic and information domains has been and remains a source of confusion to many observers.

Another important source of confusion in assessing aircraft lethality is the implicit dependency of lethality upon survivability. If the aircraft cannot survive long enough to engage its target, its lethality will be zero. In a complex warfighting situation, aircraft with poor survivability will be lost more frequently than highly survivable designs, as a result of which the lethality of less survivable aircraft will be effectively reduced.

What this means in quantitative terms, is that the probability of kill against an intended target is actually a conditional probability, thus:

P_kill_opposed = P_kill_unopposed | P_S = P_kill_unopposed | (1 - P_H • P_K|H)

Where P_kill_opposed is the probability that a kill can be achieved in an actual opposed threat environment, P_kill_unopposed is the probability that a kill can be achieved in an unopposed threat environment, both assuming probabilities of kill for some combination of aircraft and weapon, and finally the term (1 - P_H • P_K|H) represents susceptibility and vulnerability, as per the previous definitions by Ball and Atkinson.

This representation does not consider the correlation between weapon success rates at release and any defensive play the aircraft may need to employ in an opposed environment, where such a play may impair weapon effectiveness. For instance, bomb deliveries and missile launches may be compromised by suboptimal kinematics at release.

Therefore, a more accurate representation of the problem is thus:

P_kill_opposed = P_kill_unopposed | P_shot_unimpaired | (1 - P_H • P_K|H)

Where the term P_shot_unimpaired represents the probability that the weapon release(s) is(are) not impaired in an opposed environment.

What this says in plain language is that an aircraft with poor survivability cannot be highly lethal in an opposed environment, even if it is highly lethal in an unopposed environment, such as a test range.
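The following Python sketch illustrates this argument, treating the expression above as a product of an unopposed kill probability, an unimpaired release probability, and survivability; the independence assumption and all numerical values are assumptions made purely for illustration:

# Sketch of the opposed-lethality relationship described above, treating the
# author's expression as a product of independent probabilities. All numbers
# are hypothetical and serve only to show how poor survivability suppresses
# effective lethality.

def p_kill_opposed(p_kill_unopposed, p_shot_unimpaired, p_hit, p_kill_given_hit):
    p_survive = 1.0 - p_hit * p_kill_given_hit
    return p_kill_unopposed * p_shot_unimpaired * p_survive

# A weapon that is 90% effective on the test range, degraded by a 20% chance
# of an impaired release and a survivability of 0.94 in the opposed case.
print(p_kill_opposed(0.90, 0.80, 0.20, 0.30))  # ~0.677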

Mills has represented these relationships in tabular form8:

                       LETHALITY LOW                            LETHALITY HIGH
SURVIVABILITY HIGH     Marginally effective, low productivity   Highly effective, high productivity
SURVIVABILITY LOW      Ineffective, no productivity             Marginally effective, low productivity

Table 1: Combat effectiveness related to combat productivity.

In this representation, “productivity” is a measure of attrition inflicted upon an opponent, as a result of lethality and survivability. Highly survivable aircraft with good lethality produce much greater effect in combat and are thus more “productive” in attrition warfare.

Another important observation is that selection mechanisms observed to attrit combat aircraft can vary widely between periods of peace and times of conflict.

In conflicts or sustained strategic competition involving peer competitor nations or near peer competitor nations, the survival imperative results in rapid technological evolution which invariably selects for combat effectiveness against opposing capabilities. This selection mechanism has been observed repeatedly between the late 1930s and early 1990s.

Periods in which there was no overt conflict between peer competitor nations or near peer competitor nations, and where there was a widely held belief that no strategic competition was under way, have been characterised by often wholly arbitrary choices in what aircraft are manufactured or maintained in operation. There is no evidence which substantiates the commonly held belief that acquisition or life cycle cost minimisation is the determining “peacetime selection mechanism” in combat aircraft under such conditions9.

Patterns of Evolution in Combat Aircraft


Technological evolution in combat aircraft has displayed two distinct patterns over the last century. These have previously been labelled “linear evolution” and “lateral evolution”10.

Linear evolution is characteristically a process born of head to head contests between 'like' capabilities. The better performing of these like capabilities prevails, and the process is centred on established designs improving their performance in their core design optimisations.

Lateral evolution is characteristically a process of a rapid adaptation to a rapidly changing environment. The demand for a new capability arises unpredictably, and the winner in the contest, all else being equal, is the player who can field the new capability quickest and thus secure a decisive advantage.

Platforms with strong potential for lateral evolution can be adapted quickly and cheaply thus facilitating the process of rapid adaptation of the force structure to gain a decisive military advantage.

A good parallel is a comparison against warfighting strategies. Linear evolution parallels a brute force attrition campaign, whereas lateral evolution parallels an agile manoeuvre campaign. More often than not, lateral evolution appears to be the more successful strategy as it pits a rapidly evolved strength against an opponent's basic weakness, not unlike a manoeuvre force punching through weak enemy defences.

A careful historical study of combat aircraft yields numerous good case studies. Periods of war or intense military competition, such as arms races, are most illustrative due to the significant survival pressures driving rapid growth in military capabilities.


Linear Evolution







Figure A. Messerschmitt Bf-109G-10 Gustav. The Bf-109 was employed initially during the Spanish Civil War and continued in operational use well past 1945. The aircraft is an excellent example of linear evolution, with the earliest variants powered by ~700 SHP engines and the final Luftwaffe Bf-109G-10 and Bf-109K variants powered by 2,000 SHP class engines, almost tripling installed engine power (U.S. Air Force image).



Figure B. Above: Supermarine Spitfire PR.XI (Merlin); below: Supermarine Spitfire PR.XIX (Griffon). Like the competing German Bf-109, the Spitfire entered service before the Second World War. Early variants were fitted with a 1,030 SHP Merlin engine, while the last variants were fitted with a 2,035 SHP Griffon engine, effectively doubling installed power (U.S. Air Force images).







The Second World War period presents some excellent examples, due to the large diversity of types built and operated by all participants. The RAF's Supermarine Spitfire and Luftwaffe's Messerschmitt Bf-109 were both in production at the outset and the end of the war. Both exhibited great capacity for linear evolution, and the 1945 variants of both types bore only a very basic resemblance to the 1939 models, with dramatic gains seen in performance.

Of more interest however are case studies of lateral evolution. Three aircraft stand out in this sense, these being the RAF's DeHavilland Mosquito, the Luftwaffe's Junkers Ju-88 and the USAAC's Lockheed P-38. All were initially architected for single roles, the Mosquito and Ju-88 as fast bombers, the P-38 as an interceptor. By 1945 all three types had spawned a multiplicity of specialised derivatives. These types flew roles such as reconnaissance, night fighting, pathfinding strike, close air support, and anti-shipping strike. The Ju-88 even evolved into the Mistel “piggyback” cruise missile11.

The protracted Cold War period also presents numerous interesting case studies. The F-111, devised as a supersonic low level nuclear bomber and naval interceptor, evolved into a range of conventional strike variants, a penetrating radar and communications jamming platform designated the EF-111A Raven, a reconnaissance aircraft in the RF-111C and a maritime strike aircraft. It was also employed as a platform for a sidelooking ground surveillance radar in the Pave Mover program. The F-4 Phantom II, developed initially as a naval interceptor, evolved into a fighter-bomber and spawned both a photoreconnaissance variant, the RF-4C, and a defence suppression variant, the F-4G Wild Weasel IV. The Soviet Tu-16 Badger was initially developed as a medium bomber, and spawned a wide range of derivatives including support jammers, chaff bombers, cruise missile carriers, electronic and photographic intelligence collection platforms, and radar reconnaissance platforms, with Chinese cruise missile carrier variants remaining in production at this time. Numerous other examples exist12.


Lateral Evolution







Figure C. De Havilland Mosquito B.Mk.35. The Mosquito presents an exceptional example of lateral evolution, yielding dedicated bomber variants, photoreconnaissance variants, night fighters, day fighter-bombers, and heavy cannon equipped anti-ship variants (U.S. Air Force image).


Figure D. P-38L-5-LO (F-5G-6-LO). The P-38 was initially developed as an interceptor, but used primarily as an air superiority and escort fighter. Variants included a night fighter, multiple photoreconnaissance subtypes, optical bombsight and radar equipped pathfinders, also used for early radio controlled guided bomb control. The aircraft displayed significant linear and lateral evolution (U.S. Air Force image).



Figure E. Above: BMW 801 radial powered Ju-88G-1 night fighter; below: Ju-88 Mistel cruise missile. Around 15,000 Ju-88s were built, in more diverse variants than any other German combat aircraft. Variants included level/dive bombers, reconnaissance aircraft, night intruders, long range escort fighters, heavy fighters, night fighters, and a specialised tank killer. The Mistel radio controlled cruise missile saw the Ju-88A-4 cockpit replaced by a two tonne shaped charge warhead.







Figure F. Initially developed as a shipboard interceptor, the F-4 Phantom II evolved  laterally into a range of naval and land based variants, including photoreconnaissance subtypes and the specialised Wild Weasel IV defence suppression aircraft (U.S. Air Force image).



Figure G. The Tu-16 was initially developed as a medium bomber. It has spawned over a dozen specialised variants, and remains in production more than a half century later as a cruise missile carrier (Chinese Internet).





Of interest are the factors which are common to designs which have exhibited strong lateral evolution. These clearly include highly competitive aerodynamic performance, structural design capable of adaptation and strong enough to absorb growth in weight and engine performance, load carrying ability and internal fuel capacity, and the internal volume to accommodate specialised mission systems, especially sensors.

Distilling this down further, overall size relative to competing types appears to be the most common denominator in enabling lateral evolution.



Figure 6. The F-86 Sabre was the dominant fighter in the Korean War, the P-38 was a pivotal type during the Second World War, while the F-4 Phantom II was dominant during the Vietnam conflict. All four types were larger and in many respects kinematically superior to their opponents (U.S. Air Force image).



Figure 7. Differential evolutionary rates in basic design features.

Rate of Evolution


Aircraft comprise a range of components and subsystems, many of which are constructed using fundamentally different basic technologies. These technologies mostly evolve at different rates, reflecting basic differences in the physics and mathematics which determine the properties of each.



Figure 8. Evolution of subsonic engine TSFC performance (NASA).

Airframe aerodynamics, structures, engines, observables shaping and materials, power, cooling and life support subsystems, control systems and mission avionics all exhibit different rates of evolution, and a wealth of case studies exist over the past century.

The greatest dichotomy in rates of evolution continues to be observed between basic technologies providing aircraft functions in the kinematic domain, versus basic technologies providing aircraft functions in the information domain. Refer Figure 7.

A simple example can be found in comparing two fundamental measures of performance in key technologies falling into either domain.

The Dry Thrust Specific Fuel Consumption (TSFC) is a key measure of efficiency in gas turbine engines. Over five decades of evolution this figure has improved by a factor of around 2:1. Over the same period the computational performance of computers, measured in Millions of Instructions Per Second (MIPS) has improved by many orders of magnitude. Refer Figure 8.
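A back-of-envelope comparison, sketched in Python below, makes the scale of this divergence explicit; the 2:1 TSFC improvement is taken from the text above, while the 18 month doubling period is the nominal Moore's Law rate discussed later:

# Back-of-envelope comparison of kinematic versus information domain growth
# rates over five decades. The 2:1 TSFC figure is quoted in the text; the
# 18 month doubling period is the nominal Moore's Law rate.

years = 50
tsfc_improvement = 2.0                       # roughly 2:1 over five decades
moore_doublings = years * 12 / 18            # one doubling every 18 months
compute_improvement = 2 ** moore_doublings   # roughly 33 doublings

print(f"TSFC improvement over {years} years:    ~{tsfc_improvement:.0f}x")
print(f"Compute improvement over {years} years: ~{compute_improvement:.1e}x "
      f"({moore_doublings:.0f} doublings)")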

This pattern is repeated across a great many other key technologies, with the principal divide being whether these technologies provide functions in either the kinematic domain or information domain.

It cannot be any other way, as the basic technologies used to provide functions in the kinematic domain are constrained by Newtonian physics, thermodynamics, fluid dynamics and chemistry. Basic technologies used to provide functions in the information domain are constrained by quantum physics, relativity and electromagnetism, and limited by the speed of light.


Exponential Growth Laws



Basic technologies employed in the information domain not only display, empirically, high rates of evolution, but in most instances display “geometric” or “exponential” growth properties over time13.

Exponential growth behaviour will therefore persist in such technologies until one of two constraints is encountered:
  1. The value of the technology diminishes to the point where investment in research and development ceases; or
  2. Some hard limit in physics is encountered which prevents further technological evolution.
Exponential growth in electronic and opto-electronic semiconductor devices is well studied, and the results are the definitions of Moore's, Kryder's, Nielsen's and Edholm's Laws.

These “laws” reflect empirical observations of growth in several key areas of basic technology, and therefore merit closer study.


Figure 9. Moore's law for density (above) and clock frequency (below) between 1970 and 2001 (Author).



Moore's Law


Gordon Moore's empirical formula relating the density of electronic circuits to time was defined during the 1960s, and is now a widely accepted measure of technology growth. With five decades of empirical data available, the growth law is well validated14,15.

The essential thesis of Moore's Law is that “the number of transistors which can be manufactured on a single die will double every 18 months.” The starting point for this exponential growth curve is the period during which the first Silicon planar epitaxial transistors were designed and tested, around 1959 to 1962.

Moore's Law can be generally applied to all devices using planar monolithic fabrication technologies, which encompass general purpose Central Processing Unit (CPU) chips, specialised processing chips such as Graphics Processing Units (GPU) and signal processor chips, as well as Static and Dynamic Random Access Memory (SRAM / DRAM / SDRAM), Non-Volatile Random Access Memory (NVRAM), and electrically erasable or Flash Memory chips.

While Moore's Law provides a direct measure of capability in memory chips, as size is the primary measure of worth in such devices, it is only an indirect measure of performance in processing chips. This behaviour arises for two important reasons.

The switching speed of transistors in high density chips is critical to circuit performance. Mead observes that electronic switching circuit clock frequencies scale with the ratio of geometry sizes, as compared to transistor counts on a chip which scale with the square of the ratio of geometry sizes. So clock frequency in a processor chip can be related to density, and in turn to time16.
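A minimal Python sketch of this scaling argument follows, assuming an idealised process shrink by a linear factor k and ignoring the wiring and architectural effects discussed below; the baseline figures are hypothetical:

# Sketch of the scaling relationship attributed to Mead above: for a process
# shrink by a linear factor k, transistor count scales roughly with k**2
# while achievable clock frequency scales roughly with k. Values are
# illustrative only; real designs are also limited by wiring and architecture.

def scale(transistors, clock_hz, k):
    return transistors * k**2, clock_hz * k

# Hypothetical baseline: 10 million transistors at 500 MHz, shrunk by k = 2.
t, f = scale(10e6, 500e6, 2.0)
print(f"{t:.0f} transistors at {f/1e6:.0f} MHz")  # 40 million at 1000 MHz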

The first caveat is that Moore's Law for clock frequencies must include a scaling factor, and importantly, clock frequency for a complex chip design also depends on other factors, especially the wiring design on the chip and the internal architecture of the processor17.

The second caveat for the use of Moore's Law in estimating performance gains in CPUs, GPUs and signal processing chips is that actual computational throughput depends strongly on the internal architectural design of the processor, not simply the frequency at which it can be clocked.

For “like” internal architectures, Moore's Law does indeed provide a direct measure of performance gain. A Pentium chip with a given internal die (chip) design, if fabricated with smaller transistor geometry, will produce an improvement in computational throughput which is directly proportional to the improvement in clock frequency, for computational applications which are “compute bound”, where processor performance dominates over memory bandwidth or mass storage bandwidth.

Conversely, comparisons between chips which differ significantly in both internal architectures and densities or clock frequencies, can yield misleading results. Producing a variant of a 1970s microprocessor which could be clocked at 3 GigaHertz will still yield inferior computing performance to a contemporary 3 GigaHertz clock frequency chip, due to the inferior internal architecture of the older chip design.

What is frequently overlooked, is that major changes in internal architecture can produce important performance gains, at an unchanged clock frequency.

As a result, the actual performance growth observed in processing chips over time has typically been much greater than that conferred by clock frequency gains alone. Higher transistor counts allow for more elaborate internal architectures, thereby coupling performance gains to the exponential growth in transistor counts, in a manner which does not scale as simply as clock frequency, and is more often than not difficult to predict exactly through modelling or analytical means.

A worthwhile observation is that contemporary commodity microprocessor chips are, in terms of their internal architectures, much closer in design to 1970s mainframe computers and supercomputers, than microprocessors of that period. Some chip architectures, such as GPUs, are fundamentally different to their historical predecessors.

Over the last several decades there have been repeated claims that Moore's Law would not persist longer term. The material reality is that quantum physics will eventually impose hard limits on how small switching devices can be made. To date the limitations of photolithographic technology, and the electrical performance of on-chip wiring connections have had a stronger impact on achievable density and especially clock frequencies.

The most recent industry trend, where difficulties have arisen with achievable clock frequencies in processors, has been to employ parallel processing technology, usually marketed as “multicore” processors.



Figure 10. Amdahl's law for a multiprocessing computer system. Even a very small serial component significantly impairs achievable parallelism (Author).

Amdahl's Law


Parallel processing techniques were first employed in mainframe computers and supercomputers during the 1960s, as a means of overcoming clock speed limitations arising from transistor sizes in period monolithic integrated circuit chips.

Contemporary “multicore” commodity processing chips are parallel processors, in which each “core” is a general purpose CPU, and two, four or six such CPUs are fabricated on a single Silicon die, sharing common hardware such as cache memory, memory management, and bus interfaces. Computational load in such a machine is shared in some fashion between these multiple processors.

The most widely used estimator of parallel processing performance gains is Amdahl's Law, defined in 1967:

Speedup = (s + p ) / (s + p / N ) = 1 / (s + p / N )

where s and p are the serial and parallel time fractions, respectively (s + p = 1), and N is the number of processors18.
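A short Python sketch of Amdahl's Law follows, assuming the serial and parallel fractions sum to one; it reproduces the behaviour shown in Figure 10, where even a small serial fraction caps the achievable speedup regardless of processor count:

# Amdahl's Law as given above, assuming the serial and parallel fractions
# sum to one. Even a small serial fraction caps the achievable speedup
# regardless of how many processors are added.

def amdahl_speedup(serial_fraction, n_processors):
    s = serial_fraction
    p = 1.0 - s
    return 1.0 / (s + p / n_processors)

for s in (0.0, 0.01, 0.05, 0.10):
    print(f"s = {s:4.2f}: "
          + ", ".join(f"N={n}: {amdahl_speedup(s, n):7.1f}"
                      for n in (2, 16, 256, 4096)))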

The important conclusion which falls out of Amdahl's work is that performance in parallel systems depends critically upon the problem being computed, as much as the manner in which the parallel processor is constructed. Problems in which computations will stall when waiting for the results of other computations, can perform very poorly on parallel processors.

The bounds of performance in parallel systems are the best case of “linear speedup”, whereby no internal computational dependencies exist between parts of the problem and the performance of the parallel processing system scales directly with the number of added CPUs, and the worst case, where so many dependencies exist in the algorithm being computed that only a single CPU is ever active. Real world problems span the full continuum between these bounds.

Contemporary computing practice is seeing, increasingly, the use of parallel processing environments, whether appropriate or not. At one extreme we observe commodity notebook, desktop and server systems shifting to “multicore” chips, at the other extreme we see increasing use of large scale clusters, grids and clouds for commercial and scientific / engineering computing applications. The latter are all forms of distributed parallel processing systems, where large numbers of commodity processors are connected using a high speed network of some type, more often than not the Internet.

In combat aircraft, the first genuine parallel processing scheme to be architected into a design was the Pave Pillar avionic scheme, developed for the Advanced Tactical Fighter (ATF) program and reflected subsequently in the early F-22A Raptor central processing system. Parallel processing technology has since appeared in radar signal and data processing systems, and will become increasingly common in future designs, mostly using commodity or “COTS” processor technology19.

The most recent trend in parallel processing has been the use of multiple GPU chips to provide low cost high volume floating point arithmetic. Such systems may employ many hundreds or thousands of parallel execution units, providing orders of magnitude higher floating point computational performance when compared to a commodity “multicore” processor. As the internal architectures of such chips are primarily optimised for graphics computations, significant effort in software is mostly required to exploit the potential performance of the hardware. As with all parallel processing, computational dependencies impose hard constraints on achievable performance20.

The contemporary and likely future trend is thus to attempt to overcome performance limitations in individual processors by aggregating large numbers of processors, often with little regard for whether this is the most efficient means of increasing performance. As a result, with some qualification, the exponential growth curve in processing performance is most likely to persist for the foreseeable future.


Parallelism in Avionic Architectures






Figures H, I: The first effort to define an avionic architecture to exploit large scale parallel processing was the 1986 Pave Pillar effort, performed in support of the Advanced Tactical Fighter program. While contemporary architectures differ in the use of COTS bussing and CPU technology, the concept of a shared high speed interconnect and multiple CPUs, often of different architectures, is now commonly employed (Oostgaard et al, 1986, WPAFB).









Figure 11. Evolution of rotating mass storage access times (above) and storage densities (below). The exponential growth seen in density is not paralleled by mechanically constrained access times (IBM).




Kryder's Law and Mass Storage Technology

Kryder's Law was defined to estimate exponential growth in the density of rotating magnetic storage devices or “hard disks”, in a manner analogous to Moore's Law. It is sometimes labelled “Moore's Law for hard disks”, and usually defined as a “doubling of capacity per dollar” over an 18 month to 24 month period21.
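For illustration, the Python sketch below projects the “capacity per dollar” growth implied by the two doubling periods quoted above over a ten year horizon; the starting capacity is an arbitrary normalised value:

# Illustration of the "doubling of capacity per dollar" framing of Kryder's
# Law quoted above, comparing the 18 and 24 month doubling periods over a
# decade. The starting capacity is an arbitrary normalised value.

def projected_capacity(start, years, doubling_months):
    return start * 2 ** (years * 12 / doubling_months)

for months in (18, 24):
    growth = projected_capacity(1.0, 10, months)
    print(f"{months} month doubling over 10 years: ~{growth:.0f}x capacity per dollar")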

Like Moore's law, Kryder's law represents the progressive evolutionary improvement in density, produced by technological improvements in magnetic materials and disk drive heads. Mark Kryder of Seagate defined the law in 2005.

While Moore's law has exhibited modest short term perturbations, Kryder's Law has tended to show larger perturbations due to the stronger short term impact of technologies such as Giant Magneto-Resistive (GMR) heads. Whereas in monolithic chip fabrication modest increments in density can be achieved by small improvements in technology, in magnetic mass storage a new materials or construction technology may be needed to achieve growth.

No differently from Moore's Law, Kryder's Law does not encompass the complete gamut of disk storage performance. It has provided a good predictor in recent years of disk storage capacity growth, and by extension of data transfer rates to and from disks, both of which are determined by storage density.

What Kryder's law does not predict is the improvement in disk access times, which are determined by the rotational speed of the disk and movement velocities of disk heads. These have improved little over the last decade, reflecting the reality that access performance is determined by mechanical design, rather than electronics and magnetic materials. In effect, the access mechanism of such disks is a kinematic domain technology. Refer Figure 11.

The trend to parallelism observed in processing is emulated by the use of multiple platter disks to improve capacity per drive, but also in the use of RAID (Redundant Array of Inexpensive Disks) technology, where multiple disk drives are operated in parallel to emulate a much larger disk, with a much higher aggregate data transfer rate. Access times in RAID systems reflect the mechanically limited performance of the disk drives in the array.

For the foreseeable future, again with qualification, Kryder's law is most likely to hold.

It is important to observe that Kryder's law is specific to rotating magnetic storage device technology. Solid state non-volatile mass storage devices which employ electrically erasable Flash Memory technology, sometimes marketed as “Solid State Drives”, obey Moore's law, although recently the growth rate has been well in excess of that observed in CPUs. While considerably more expensive in cost per Gigabyte of storage, Flash Memory is an attractive alternative to rotating storage in embedded avionic applications, as it is typically faster to access, significantly more robust in high vibration environments, and better able to survive cyclic thermal loads.

Alternative solid state mass storage technologies are now emerging, and where these are based on monolithic semiconductor processes, will also obey Moore's law22.



Figure 12. Performance modelling of a high data rate communications link using a representative fighter class X-band AESA, for two stations with phase centres at the tropopause. The capacity measure is based on Shannon's criterion. Useful range is strongly dependent upon weather conditions, with four models compared (Author, 1999).


Figure 13. Performance modelling of a communications link using an AESA, for two stations with phase centres at the tropopause, and operating frequencies between the X-band and Q-band. The capacity measure is based on Shannon's criterion. Useful range is strongly dependent upon operating frequency, due to  tropospheric water vapour loss peaking at 22.235 GHz, and the onset of Oxygen resonance loss at ~60 GHz. Clear sky conditions are assumed (Author, 1999).

Bandwidth Laws


Nielsen's Law of Internet Bandwidth, defined in 1998, states that “a high-end user's connection speed grows by 50% per year”. Edholm's Law of Bandwidth, defined for wireless, nomadic, and wireline Internet connections, asserts that “the three telecommunications categories march almost in lock step: their data rates increase on similar exponential curves, the slower rates trailing the faster ones by a predictable time lag”23.

It has been empirically observed that telecommunications bandwidth follows the Moore's Law nominal 18 month doubling period24.

The bandwidth laws reflect evolutionary growth across a range of different technologies, but particularly optical fibre technology and Gallium Arsenide Monolithic Microwave Integrated Circuits (MMIC)25.

As with Moore's Law and Kryder's Law, the bandwidth laws are constrained by basic physics and Shannon's information theory.

This is especially important when considering the impact of the bandwidth laws in assessing growth in radiofrequency wireless channels, whether these are commodity consumer wireless networking schemes like WiFi and WiMax, or more specialised military radio datalinks like JTIDS/MIDS or JTRS.

Two critical constraints apply to growth in wireless network bandwidth, and neither can be dismissed or easily avoided.

The first of these is the “power-aperture product” problem, whereby the achievable range and data rate of a radio link is limited by the combined effects of transmitter power, antenna gain, receiver sensitivity and the inverse square law of radio propagation loss. This sets hard limits on how many bits per second can be sent between two devices at some distance. This is further exacerbated by radio propagation effects such as absorption, refraction and especially fading26.

Shannon's capacity theorem states that the achievable data rate through a channel can be manipulated by trading channel bandwidth and transmitted power, all else being equal. This is indeed the mathematical basis underpinning modern spread spectrum techniques. Unfortunately, congestion of the radiofrequency spectrum is increasingly a severe operational constraint27.
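In its simplest form the theorem can be written as C = B·log2(1 + S/N); the Python sketch below uses hypothetical bandwidth and signal-to-noise values to show the trade between the two:

# Shannon's capacity theorem as referred to above: C = B * log2(1 + S/N).
# The bandwidth and signal-to-noise values are hypothetical, chosen to show
# the bandwidth/power trade the text describes.

import math

def shannon_capacity_bps(bandwidth_hz, snr_linear):
    return bandwidth_hz * math.log2(1.0 + snr_linear)

# The same nominal capacity can be reached with wide bandwidth and low SNR,
# or narrow bandwidth and high SNR -- the basis of spread spectrum techniques.
print(f"{shannon_capacity_bps(100e6, 1.0)/1e6:.0f} Mbit/s")    # 100 MHz at 0 dB SNR
print(f"{shannon_capacity_bps(10e6, 1023.0)/1e6:.0f} Mbit/s")  # 10 MHz at ~30 dB SNR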

An additional consideration is that in military radio-frequency datalinks, Low Probability of Intercept (LPI) characteristics are of increasing importance. LPI characteristics are typically achieved by the use of low power emissions, and link capacity is usually sacrificed to make the signal difficult to detect and demodulate by unwanted parties.

Significant pressures will arise as a result of the exponentially growing gap between the internal bandwidth of sensor systems, which is exhibiting exponential growth properties due to the increasing use of exponentially growing computing and optical fibre technology, and constrained growth of radiofrequency datalink technology. Good examples include high resolution Synthetic Aperture Radar (SAR) systems and very high definition optical imaging systems, both of which at this time can collect data at rates in excess of 100 Megabytes/s, while the fastest operational datalinks such as CDL and TCDL are limited to 274 Megabits/s and 1,096 Megabits/s respectively28.

A promising technological strategy which can increase available capacity for datalinks is the use of high power-aperture Active Electronically Steered Array (AESA) antennas, discussed further below. Switched beam AESA technology is already employed in the low power Ku-band Multifunction Airborne Data Link (MADL), to provide steerable pencil beam links between aircraft, the intent being to minimise geometrical opportunities for intercept.

A high power-aperture AESA such as an X-band multimode radar antenna offers the same advantage of precise directional beam control, but with mainlobe widths typically between 1.5° and 5°, and significantly more power and aperture gain compared to specialised datalink antennas. Exploration of the fundamental performance bounds of such AESAs performed in 1999, using the measure of Shannon channel capacity, indicates that exceptional data transmission rates and ranges are achievable using representative AESA parameters, including data rates in excess of 2 Gigabits/s, refer Figures 12 and 13. Subsequent effort in 2006 by L-3 Communications, Lockheed Martin and Northrop Grumman using a modified AN/APG-77 radar demonstrated 1.096 Gigabits/s data rates. While the use of LPI waveforms and lower power ratings would reduce AESA link capacity, this yet to be exploited technology still presents an opportunity to advance well beyond the limitations of non-directional antennas, but importantly is also still constrained by the physics of apertures and radio-frequency propagation29,30.
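A highly simplified free-space link budget, combined with the Shannon bound, is sketched in Python below in the spirit of the modelling described above; all parameters (frequency, power, antenna gain, bandwidth, noise figure) are assumed representative values rather than those of any actual radar, and the atmospheric and weather losses which the 1999 model shows to dominate at longer ranges are ignored:

# A very simplified free-space link budget for a directional X-band link,
# combined with the Shannon bound. All parameters are assumed, representative
# values only; real links are further degraded by atmospheric absorption,
# weather and LPI waveform constraints, as discussed in the text.

import math

C = 3.0e8                      # speed of light, m/s
FREQ_HZ = 10.0e9               # X-band carrier (assumed)
TX_POWER_W = 1.0e3             # transmit power (assumed)
ANT_GAIN_DBI = 35.0            # per-antenna gain of a fighter class AESA (assumed)
BANDWIDTH_HZ = 1.0e9           # receiver bandwidth (assumed)
NOISE_FLOOR_W = 1.38e-23 * 290 * BANDWIDTH_HZ * 10**(5/10)  # kTB plus 5 dB noise figure

def received_power_w(range_m):
    """Friis free-space link: Pr = Pt * Gt * Gr * (lambda / (4*pi*R))**2."""
    lam = C / FREQ_HZ
    gain = 10 ** (ANT_GAIN_DBI / 10)
    return TX_POWER_W * gain * gain * (lam / (4 * math.pi * range_m)) ** 2

def shannon_bound_bps(range_m):
    snr = received_power_w(range_m) / NOISE_FLOOR_W
    return BANDWIDTH_HZ * math.log2(1.0 + snr)

for km in (50, 150, 400):
    print(f"{km:4d} km: ~{shannon_bound_bps(km * 1e3)/1e9:.1f} Gbit/s (free space bound)")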

In summary, while the Bandwidth Laws reflect the same exponential growth relationships in well behaved transmission media like optical fibre cables, they break down rapidly in radiofrequency wireless systems, where antenna power-aperture, radio propagation effects, and spectral congestion dominate achievable bandwidth. The result of this is an exponentially growing gap between achievable internal bandwidth in sensors and avionic suites, when compared to achievable bandwidth in radio-frequency datalinks employed for networking.



Figure 14. Focal Plane Array imaging devices display exponential growth, although doubling rates are more sedate than in other planar technology semiconductor devices, of the order of ~4.5 years (Author, 2010).


Moore's Law in Focal Plane Arrays


Focal Plane Array chips used for optical imaging, such as CCD (Charged Coupled Device) or CMOS (Complementary Metal Oxide Semiconductor) chips used in still and motion camera equipment, bandgap and QWIP (Quantum Well Imaging Photodetector) thermal imaging devices, webcams, or cellular telephones, are fabricated using the same basic processes as other commodity chips. Prima facie, it could be argued that these devices should obey Moore's Law.

Empirical study (Figure 14.) of pixel counts, a measure of photosite density in commercial camera imaging sensors, shows a doubling period between four and five years, much slower than that observed in CPUs, GPUs, DRAMs and other semiconductor devices fabricated using much the same processes.

Unlike CPUs, GPUs, DRAMs and other commodity computing devices, imaging chips are constrained in design by factors other than photolithographically achievable photosite density, as their imaging performance depends on the number of photons each photosite can capture with acceptable signal to noise ratio performance. Another important limitation is the readout time, which increases at a given clock speed in proportion to the number of imaging photosites on the chip. While multiple readout interfaces are frequently used, this remains a major obstacle to photosite count growth.

Further constraints such as compatibility with existing film camera lenses are a major factor driving the commercial market, to the extent that many imaging chips have been specifically designed to match the frame sizes of legacy wet film media, such as the 120/220 or 35 mm standards. The resulting large die sizes impact production yields, in turn impacting production costs and volumes, and as a result the doubling period is much greater than for CMOS or NMOS products which are more conventional in design.

Regardless of market constraints, the exponential growth trend is well established, and many of these chips now match or outperform legacy military imaging chips previously developed for military photoreconnaissance framing cameras. Some commercially developed imaging chips have been integrated into military equipment.

Military infrared band thermal imaging chips have displayed a much slower rate of growth in resolution, reflecting fundamental limits in diffraction at the sensor plane, as well as slow learning curves in low volume manufacture, using often exotic materials and processes. Imperatives such as providing concurrent imaging in multiple infrared bands have often displaced pixel count as the driver of development investment effort31,32.

An interesting recent development has been the design of large scale parallel imaging sensors, which aggregate dozens or hundreds of CCD imaging devices into a single system, intended to provide high density imaging coverage of large areas. In a very basic sense this reflects the trend to parallelism observed in CPU/GPU technology, where parallelism is employed to overcome the performance limitations of individual devices. The Argus-IS system presents an interesting case study33.

Hyperspectral sensors are another very useful area of imaging sensor technology which will benefit strongly from exponential growth in focal plane arrays, be these visible band or infrared. To date the cost of hyperspectral imaging sensors, both pushbroom and scanning types, has been high due to the demand for low production volume sensors, and the prodigious demand for sensor bandwidth, fast mass storage, and numerically intensive processing required. Exponential growth in the basic technologies used to construct hyperspectral sensors and provide processing will be an enabler for near term future growth.

As a case study, the exponential growth in imaging chips is valuable, as it is the only major sensor category where Moore's Law effects collide directly with the physics of sensor apertures, and thus it represents a special case, compared to other sensor technologies.

System Integration Law


Defined a half decade ago by researchers at Georgia Tech in the United States, the “System Integration Law” is intended to reflect higher than Moore's Law growth arising from advances in high density electronic component packaging rather than density increases in the monolithic electronic components being packaged. The law was framed in the context of the “More than Moore's Law” movement34.

Extremely high density electronics packaging presents formidable engineering challenges, not only in providing desired electrical performance, but especially in thermal management, as heat dissipation remains a major challenge in all high density electronics.

Moore's Law in Monolithic Microwave Integrated Circuits


Much of the exceptional technology growth observed in military and commercial radio-frequency electronic hardware over the last two decades is a direct result of the development of repeatable manufacturing processes for Monolithic Microwave Integrated Circuits (MMICs) using GaAs centred technologies. Silicon, the mainstay of computing chip manufacturing, lacks the electron mobility necessary to construct high performance radio frequency transistors for operation in the mid and upper microwave frequency bands.

Integrated circuits on a chip using GaAs were a “holy grail” technology three decades ago, and formidable challenges had to be overcome to bring the technology to maturity. Like the Internet, GaAs fabrication research was heavily funded by DARPA, yet resulted in massive commercial exploitation once viable. Within the first decade of GaAs MMIC manufacture, military products accounted for only a small percentage of the global market for such devices25.

Tables 2, 3: Gallium Nitride characteristics (Brookner, 2008).

Density growth over time in MMICs faces challenges often very different to those seen in commodity and specialised computing chips, as radio-frequency design rules apply to active and passive components, and especially to wiring interconnections within the MMIC. Electrical impedance matching and bandwidth requirements of interconnections between the MMIC and external electrical circuits can and often do consume large portions of the real estate on the chip. More often than not, only a small fraction of the MMIC area can exploit Moore's Law driven gains in transistor and passive component density. The inferior heat conduction performance of GaAs compared to Silicon has also impacted achievable densities.

The limitations of basic GaAs materials have resulted in a sustained and intensive industry effort to develop alternative materials capable of operating at higher frequencies and high thermal loads, as well as a parallel effort to adapt CMOS Silicon technology for some radio-frequency applications35.

Gallium Nitride (GaN) and Silicon Carbide (SiC) materials offer the potential to fabricate transistors which have up to two orders of magnitude better power handling capability than GaAs, permitting in turn corresponding gains in component performance for applications such as radar or radio-frequency networking36.

While the exceptional density growth observed in Silicon computing chips may never be observed in radio-frequency MMICs, the density growth relative to legacy discrete component technologies will be exponential.


Evolutionary Growth in Apertures



Apertures, comprising antennas in radio-frequency systems, and lenses and mirrors in optical systems, are critical components of military avionic suites, whether employed for sensors, communications and networking, or more recently, for Directed Energy Weapon (DEW) applications.

The challenges which arise from the need to control radio-frequency and optical band signatures to meet design objectives in stealthiness, as well as the traditional imperatives of minimising weight, volume and power dissipation in avionics, have seen a slow but inevitable trend to “multifunction apertures”, where for instance a radar antenna is employed for its host radar, but also as an aperture for bidirectional high speed digital data transmission, passive interferometric detection of radio-frequency emitters, in-band jamming, and disruptive radio-frequency DEW applications37.

In optical systems a parallel trend to share an aperture for multiple uses is well established. Many electro-optical targeting systems share common windows or stabilised mirror systems for thermal imager, visible band imager, laser rangefinder and passive laser spot tracker subsystems. Laser DEWs frequently employ a common primary aperture for the power delivery beam and the offset wavelength beam employed to control power delivery beam wavefront shaping38.

While shared multifunction apertures are desirable from a systems integration perspective, both in terms of meeting observables performance objectives and minimising weight and volume, the design of such apertures can present difficult challenges as a result of strongly divergent design objectives for the various functions or subsystems which share the aperture. Structural mode RCS, aperture bandwidth and power handling performance are exactly such constraints, which present as common problems in both radio-frequency and optical band apertures.



Table 4: AESA technology breakthroughs (Brookner, 2008).

Active Electronically Steered Arrays


Electronically Steered Array (ESA) or “phased-array” antenna technology entered operational use in 1940 when the Luftwaffe deployed its first GEMA Mammut VHF-band acquisition radars. Until a decade ago, ESA technology was mostly confined to surface based acquisition, early warning and Surface-Air-Missile engagement radars, primarily due to weight and volume39.

The first Passive ESA (PESA) fighter radar to be mass produced was the Soviet low X-band Tikhomirov NIIP N007 Zaslon or Flash Dance, which employed a 1.1 metre array with 1700 X-band phase elements, with an embedded 64 element L-band IFF array. This radar was so large it was only ever fitted to the 100,000 lb gross weight class MiG-31 Foxhound interceptor, built from the mid 1980s40.

The attraction in all ESA radars is agility in beam steering, beam shaping, and the potential for forming nulls or multiple beams, all at update rates of kilohertz order. This permits enormous functional flexibility, both in terms of dynamically adapting antenna characteristics during the operation of different modes, and in terms of interleaving different modes to in effect multiplex the radar between different tasks, creating the illusion of concurrency to the user. In a sense an ESA radar provides a form of parallelism, albeit by multiplexing a single electronically steered antenna.

The Active Electronically Steered Array (AESA) is now the dominant technology in new build fighter, strike and multimode radars of United States and European origin, and in Russian designs about to enter production. An AESA or “active phased array” embeds a passive radiating and matching element, a low noise receiver, a solid state transmitter of several Watts or tens of Watts power rating, a phase or time delay beam steering control element, a gain control element, and support electronics, into each and every Transmit-Receive (TR) Module forming the array. The backplane of the AESA provides RF signal distribution, and often passive feed networks for monopulse and MTI operating modes. Beam steering software for antenna control is often embedded in the radar data processing system. The reliability of current AESA based antennas is one to two orders of magnitude better than the reliability of the TWT transmitter and planar array antenna technology being replaced.

The GaAs MMIC was the enabling basic technology for airborne X-band and Ku-band AESA radars. The principal design challenges in current AESA designs remain in achievable bandwidth, TR module radio-frequency power output, and array cooling. Liquid cooling remains the preferred technique.

AESAs could be described without overstatement as a revolutionary technology in fighter radar, overcoming limitations in beamsteering and beamforming agility, antenna bandwidth, antenna sidelobe performance, receive sensitivity and operational reliability in legacy radar designs. Moreover their compact fixed design permits installation in much smaller volumes than required for mechanically steered antennas, with eventual potential for conformal installations. The flexibility inherent in AESAs makes them the ideal technology for multifunction radio-frequency band apertures.

The combined effect of Moore's Law driven growth in computing power and MMIC based AESA technology has been exponential growth in the number of different modes and functions an AESA radar can perform, over the last decade.

That exponential growth is not paralleled in detection range performance, which is constrained by radar power-aperture product, AESA power conversion efficiency, airframe power generation and especially cooling capacity, the inverse fourth power law, and tropospheric radiofrequency propagation effects.
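
The weak payoff from added transmitter power or processing follows directly from the radar range equation, in which detection range scales with only the fourth root of the power-aperture product. The following minimal Python sketch illustrates that scaling; all parameter values are illustrative assumptions and do not describe any specific radar.

# Illustrative fourth-root scaling of radar detection range with power-aperture product.
# All parameter values are arbitrary assumptions for illustration only.
import math

def detection_range(p_avg_w, gain, a_e_m2, rcs_m2, t_int_s,
                    t_sys_k=500.0, snr_min=20.0, losses=4.0):
    """Monostatic radar range equation solved for maximum detection range (metres)."""
    k_boltzmann = 1.380649e-23
    numerator = p_avg_w * gain * a_e_m2 * rcs_m2 * t_int_s
    denominator = (4.0 * math.pi) ** 2 * k_boltzmann * t_sys_k * snr_min * losses
    return (numerator / denominator) ** 0.25

# Doubling average power (or aperture area) improves range by only 2**0.25, about 19 percent.
base = detection_range(5.0e3, 1.0e4, 0.8, 1.0, 0.01)
doubled = detection_range(1.0e4, 1.0e4, 0.8, 1.0, 0.01)
print(f"baseline {base/1e3:.0f} km, doubled power {doubled/1e3:.0f} km, ratio {doubled/base:.2f}")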

Bandwidth is another area where exponential growth has not occurred, and where hard limits arise due to fixed aperture sizes, and genuine challenges in the analogue design of wideband radiating elements and AESA backplane analogue feed networks.

Brookner's excellent AESA technology survey, published in 2008, provides a robust roadmap of basic technology developments which will impact radio-frequency aperture designs over the coming two decades35.

Digital Beamforming and Adaptive Arrays

Analogue beamforming technology currently dominates AESA designs, and has the principal limitation that at any time the antenna can form only one beam, the shape and direction of which is determined by the phase or delay setting applied to the single phase or delay control element in each TR module. While the addition of multiple parallel phase or delay control elements would permit multiple beams to be formed concurrently, volumetric and density limits in TR module construction, and density limits in feed networks, would impose hard constraints on the level of concurrency which is achievable.

Digital BeamForming (DBF) involves a fundamentally different architecture to established AESAs, in that individual array elements (TR modules) or subarrays of elements each include Intermediate Frequency receiver chains and Analogue to Digital converters. Rather than combining analogue radio-frequency signals in a feed network, with gain and phase/delay weightings manipulated at radio frequencies, a DBF design transfers digital outputs from each element or subarray into a signal and data processing system which performs the weighted summations in software. The level of concurrency achievable is then limited only by the computational performance of the processor system. Multiple concurrent beams may thus be formed, each with unique characteristics for the intended application.

An important advantage of the DBF approach is that the bottleneck in bandwidth which arises in the antenna backplane is removed. The feed network is replaced by digital bussing, which while challenging to design well, is not as critical in analogue performance demands as a radio-frequency feed network. Currently DBF techniques are confined to the receiver path, but this does not preclude a digital transmit path up to a Digital to Analogue converter in the TR module, thus eliminating analogue feed technology altogether.
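
The weighted summation at the heart of DBF is computationally simple per beam, and forming additional concurrent beams merely repeats it with different weight vectors. The sketch below is a minimal illustration of that principle for an assumed narrowband uniform linear array; the array size, frequency and signal levels are arbitrary assumptions, not a description of any fielded design.

# Minimal digital beamforming sketch: several concurrent receive beams formed in
# software from the same digitised element samples of a uniform linear array.
# Illustrative only; real AESA DBF operates on subarray IF/digital outputs.
import numpy as np

c = 3.0e8
f = 10.0e9                       # assumed X-band carrier (Hz)
lam = c / f
n_elem = 32                      # digitised elements (or subarrays)
d = lam / 2.0                    # half-wavelength spacing
elem_pos = np.arange(n_elem) * d

def steering_vector(angle_deg):
    """Phase progression across the array for a plane wave arriving off boresight."""
    return np.exp(1j * 2.0 * np.pi * elem_pos * np.sin(np.radians(angle_deg)) / lam)

# Simulated element samples: one target at +20 degrees plus receiver noise.
rng = np.random.default_rng(0)
n_snapshots = 256
signal = steering_vector(20.0)[:, None] * np.exp(1j * 2 * np.pi * rng.random(n_snapshots))
noise = (rng.standard_normal((n_elem, n_snapshots)) +
         1j * rng.standard_normal((n_elem, n_snapshots))) * (0.1 / np.sqrt(2))
x = signal + noise               # n_elem x n_snapshots matrix of digitised samples

# Three concurrent beams formed by applying three different weight vectors in software.
for look_angle in (-30.0, 0.0, 20.0):
    w = steering_vector(look_angle) / n_elem              # conventional (uniform) weights
    beam_output = w.conj() @ x                            # weighted summation per snapshot
    print(f"beam at {look_angle:+5.1f} deg: mean output power {np.mean(np.abs(beam_output)**2):.3f}")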

Adaptive arrays are an outgrowth from 1960s SideLobe Canceller (SLC) technology, where a separate antenna was employed to capture surface clutter backscatter and/or jammer signals, which were then subtracted from mainlobe signals to effect clutter and jammer rejection. In an adaptive array, a mainlobe is formed with embedded nulls which are pointed at unwanted signal sources, thus rejecting these within the antenna mainlobe itself35.

DBF and adaptive array technology were first employed in mass production jam resistant GPS receiver antennas, operating in the L-band, with very modest element counts, a necessity given the -160 dBm signal levels employed. This technology was adopted since it was the only technique available which permitted concurrent beamforming to place multiple mainlobes on multiple satellites, while placing multiple antenna nulls on multiple hostile jammers41.

Brookner details the advantages of DBF thus29: “Using DBF eliminates the analog combining hardware, analog down-converting and all the errors associated with them. This in turn will lead to ultra-low side lobes. It will allow the implementation of multiple [receive] beams pointing in different directions. It will enable the adaptive use of different parts of the antenna for different applications at the same time.” “At the same time, the search angle accuracy is improved by about 40 percent. DBF will also permit better adaptive-array processing.”42

Moore's Law driven computing performance growth and MMIC technology are the enablers for DBF and adaptive array techniques in AESA apertures. They will also be determinants of what capability in DBF and adaptive array techniques can be incorporated into a radar design at any given time in the foreseeable future.

The gains in radar performance detailed by Brookner are important advances in their own right, but the concurrency provided by DBF and adaptive array techniques yield important gains where the aperture is being shared between radar functions and functions such as datalinking, radio-frequency passive surveillance and geolocation, and jamming of emitters.

Space Time Adaptive Processing


Space Time Adaptive Processing (STAP) techniques are employed to adaptively reject surface clutter by placing a null on the clutter source. The technique is used on the United States E-2D AN/APY-9 radar and the Russian 1L119 Nebo SVU and Nebo M radars35, 43, 44.

The primary enabling technology for STAP is Moore's law driven computing power.
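
The computational burden arises in estimating the interference-plus-noise covariance and solving for adaptive weights of the form w = R^-1 s on every processing interval. The following sketch is a deliberately simplified, spatial-only illustration of that weight computation, which underlies both adaptive arrays and STAP; full STAP stacks pulse (Doppler) samples together with element samples, multiplying the problem dimension and hence the processing load. All parameters are assumed for illustration.

# Spatial-only illustration of the adaptive weight computation underlying STAP and
# adaptive arrays: w = inv(R) @ s, with R the sample covariance of training data and
# s the steering vector of the wanted look direction. Full STAP adds the pulse dimension.
import numpy as np

n_elem = 16
lam = 0.03                              # assumed 10 GHz wavelength, metres
pos = np.arange(n_elem) * (lam / 2.0)

def steer(angle_deg):
    return np.exp(1j * 2 * np.pi * pos * np.sin(np.radians(angle_deg)) / lam)

rng = np.random.default_rng(1)
n_train = 200
# Training snapshots: strong interference (clutter/jammer) from -25 degrees plus noise.
interf = 30.0 * steer(-25.0)[:, None] * np.exp(1j * 2 * np.pi * rng.random(n_train))
noise = (rng.standard_normal((n_elem, n_train)) +
         1j * rng.standard_normal((n_elem, n_train))) / np.sqrt(2)
x_train = interf + noise

R = (x_train @ x_train.conj().T) / n_train          # sample covariance estimate
s = steer(10.0)                                     # wanted look direction
w = np.linalg.solve(R, s)                           # adaptive weights, w = inv(R) @ s
w = w / (w.conj() @ s)                              # unity gain on the look direction

# The adapted pattern retains gain at +10 degrees while placing a deep null at -25 degrees.
for test_angle in (10.0, -25.0):
    gain_db = 20 * np.log10(np.abs(w.conj() @ steer(test_angle)))
    print(f"response at {test_angle:+5.1f} deg: {gain_db:6.1f} dB")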

Sensor Fusion


Sensor fusion techniques are enabled primarily by Moore's law driven computing power, and the availability of high capacity datalinks and networks which permit realtime or near-realtime collection of sensor output data to permit fusion to be performed.

Sensor fusion can be performed using like sensors, such as multiple radars of like or different configuration, or by using dissimilar sensors, such as passive radio-frequency sensors, radar and optical sensors. This may be performed on a single platform equipped with multiple sensors, or by fusing sensor outputs from multiple platforms.

The benefit arising from well performing sensor fusion is that it can overcome the specific limitations of particular sensors, thus improving confidence in target identification but also track quality. This is especially important in an environment where effort is being made to control or significantly reduce platform signatures in multiple bands.
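
As a concrete illustration of one elementary fusion step, two independent estimates of the same target state can be combined by inverse-covariance weighting, yielding a fused estimate with lower uncertainty than either contributor. The sketch below is a generic illustration under assumed Gaussian errors and does not represent any fielded fusion system.

# Minimal sensor fusion step: combining two independent Gaussian estimates of the same
# target position (for example a radar track and an angle-derived optical track) by
# inverse-covariance weighting. Generic illustration with assumed figures.
import numpy as np

def fuse(x1, p1, x2, p2):
    """Fuse two independent (mean, covariance) estimates of the same state."""
    p1_inv = np.linalg.inv(p1)
    p2_inv = np.linalg.inv(p2)
    p_fused = np.linalg.inv(p1_inv + p2_inv)
    x_fused = p_fused @ (p1_inv @ x1 + p2_inv @ x2)
    return x_fused, p_fused

# Sensor A: good in range, poor in cross-range; sensor B the opposite.
x_a = np.array([10_000.0, 250.0])                 # metres (range, cross-range)
p_a = np.diag([50.0**2, 400.0**2])
x_b = np.array([10_300.0, 210.0])
p_b = np.diag([600.0**2, 60.0**2])

x_f, p_f = fuse(x_a, p_a, x_b, p_b)
print("fused estimate:", x_f.round(1))
print("fused standard deviations:", np.sqrt(np.diag(p_f)).round(1))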

Examples of sensor fusion systems include the US Navy Cooperative Engagement Capability (CEC) systems, or Russian designs such as the Poima or NNIIRT Nebo M system45.

With the continuing proliferation of digital networking technology and exponential growth in computing power, the trend will inevitably be that of increasing capability and use of sensor fusion techniques.

Augmented Cognition Systems


The best known example of an Augmented Cognition System (ACS) is the 1980s demonstration by the AFRL of the “Pilot's Associate” system, initially intended for the Advanced Tactical Fighter (ATF) program. To cite the published program summary46:

“The Pilot's Associate program, a joint effort of the Defense Advanced Research Projects Agency (DARPA) and the US Air Force to build a cooperative, knowledge-based system to help pilots make decisions is described, and the lessons learned are examined. The Pilot's Associate concept developed as a set of cooperating, knowledge-based subsystems: two assessor and two planning subsystems, and a pilot interface. The two assessors, situation assessment and system status, determine the state of the outside world and the aircraft systems, respectively. The two planners, tactics planner and mission planner, react to the dynamic environment by responding to immediate threats and their effects on the prebriefed mission plan. The pilot-vehicle interface subsystem provides the critical connection between the pilot and the rest of the system. The focus is on the air-to-air subsystems.”

An ACS system could thus be described as an adjunct to a sensor suite, the intent of which is to accelerate interpretation of the outputs of onboard and offboard sensors, and thus accelerate the Observation-Orientation phases in Boyd's OODA Loop47.

The principal challenge in implementing ACS techniques is that they can be very computationally intensive, which presented a major obstacle two decades ago in embedded applications such as combat aircraft. With exponential growth in computing performance this obstacle is progressively disappearing, and as a result, Moore's law driven computing growth is an enabler for ACS technologies.

Optical Sensors


The impact of Moore's law in optical imaging array technology detailed previously is that optical sensors will see increasing use in combat aircraft, as the cost of Focal Plane Array devices progressively declines, and imaging site counts increase.

A good trend indicator is the adoption of spherical Missile Approach Warning System (MAWS) coverage using the AAR-56 MLD in the F-22A Raptor, and the much more ambitious AAQ-37 EO DAS developed for the problematic F-35 program.

The full potential of Focal Plane Array technology remains to be realised, due to limitations in array imaging site counts, but also due to the cost demands of cooling for signal to noise ratio control, the cost of sensor angular stabilisation, limited readout rates and the immaturity of algorithms for fusing outputs from multiple imaging sensors.

A problem which persists could be described as the “f-number tradeoff”, in that good long range detection performance requires large optics to gather light and a narrow instantaneous Field Of View (FOV), which is at odds with the concurrent need for wide angle coverage with multiple sensors. This is in a sense not unlike the conflicting radar antenna mainlobe needs of search regimes, against tracking regimes.
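
The geometry of this tradeoff is easily illustrated: for a fixed imaging array, the per-pixel instantaneous field of view shrinks in proportion to focal length, improving resolution at range while narrowing total coverage. The following arithmetic sketch uses assumed, illustrative sensor parameters only.

# Arithmetic sketch of the field-of-view versus resolution tradeoff for a staring
# focal plane array sensor. All figures are illustrative assumptions.
import math

def sensor_geometry(pixel_pitch_um, n_pixels, focal_length_mm, range_km):
    ifov_rad = (pixel_pitch_um * 1e-6) / (focal_length_mm * 1e-3)   # per-pixel angle
    fov_deg = math.degrees(n_pixels * ifov_rad)                     # full field of view
    footprint_m = ifov_rad * range_km * 1e3                         # per-pixel footprint
    return fov_deg, footprint_m

# The same assumed 1024 x 1024, 15 micron pitch array behind short and long focal lengths.
for f_mm in (25.0, 250.0):
    fov, footprint = sensor_geometry(15.0, 1024, f_mm, range_km=20.0)
    print(f"f = {f_mm:5.1f} mm: FOV ~ {fov:5.1f} deg, pixel footprint at 20 km ~ {footprint:5.2f} m")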

Exponential growth in Focal Plane Array sensors and supporting computing capabilities will enable significantly better capabilities over the coming decade, while the driving imperative for this growth will be that of overcoming the effects of radio-frequency signature reduction in opposing platforms.

Exponential Growth Laws vs Apertures



Figure 15.

As the preceding discussion shows, exponential growth across a range of basic technologies has yielded very important gains in the performance and capabilities of information domain technologies reliant upon radio-frequency and optical apertures.

What is also apparent is that many fundamental performance bounds are set by the physics of the aperture proper, and the properties of electromagnetic wave propagation in the atmosphere, especially the troposphere. Refer Figure 15.

This is an important consideration, in that exponential growth is not a feature of the cardinal performance parameters in apertures, whether these are employed for sensors, communications, or both. The expectation that this might be the case, sometimes observed in media and lay discussion of technological evolution in the information domain, is contrary to nature and thus quite unrealistic.


Evolutionary Growth in Stealth Techniques





Figure 16. Prototype of new Chengdu developed Very Low Observable fighter aircraft photographed in December, 2010. This design displays advanced shaping, following design rules employed in the F-22A Raptor (Chinese Internet).

Stealth designers have two principal technologies available for reducing the radar signature of an aircraft. These are shaping of airframe features, and materials technology applied in coatings or absorbent structures48.

Typically, the first 100- to 1,000-fold reduction in signature is produced by shaping, with further 10- to 30-fold reductions produced by materials. The smart application of these techniques reduces the signature of a B-52-sized B-2A Spirit down to that of a small bird, from key aspects.

Radar signatures are typically quantified as Radar Cross Section, which is specific for a given aspect and operating wavelength.

The effectiveness of both shaping and materials technologies varies strongly with the wavelength or frequency of the threat radar in question. Shaping features must be physically larger than the wavelength of the radar to be truly effective. A shaping feature with a negligible signature in the centimeter X-band or Ku-band may have a signature that is tenfold or greater in the much lower decimeter and meter radar bands49.

Materials are also characteristically less effective as radar wavelength is increased, due not only to the physics of energy loss, but also to the “skin effect” whereby the electromagnetic waves impinging on the surface of an aircraft penetrate into or through the coating materials. Known techniques include the application of absorbent surface coatings, or the embedding of absorbent material layers in composite material aircraft skin panels.

A material that is highly effective in the centimeter X-band or Ku-band may have a tenfold or less useful effect in the lower decimeter and meter radar bands50.

Neither airframe shaping techniques nor materials exhibit exponential growth properties over time. This is because the RCS properties of shapes are determined by basic electromagnetism, and the RCS reduction produced by materials is determined by quantum mechanical and electromagnetic properties of materials.

However, the development of both shaping techniques and materials can benefit from abundant computational power. This is especially true of computational RCS modelling techniques which lend themselves to parametric computation in large parallel computing systems. Computational modelling of airframe and component RCS for given shapes and materials can be accelerated very significantly through the application of computational clusters or grids, thereby accelerating the design cycle.
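
Such parametric studies are embarrassingly parallel: each combination of shape parameter, material, frequency and aspect can be solved independently. The sketch below illustrates only that structure; the "solver" is a crude flat-plate physical-optics expression standing in as a placeholder for a real method-of-moments or physical-optics code, and all parameters are assumptions for illustration.

# Sketch of a parametric RCS sweep farmed out across processor cores. The solver is a
# placeholder (broadside flat-plate physical optics, sigma = 4*pi*A^2/lambda^2); the
# point is the embarrassingly parallel sweep structure, not the electromagnetics.
import math
from multiprocessing import Pool

def plate_rcs_dbsm(params):
    """Placeholder solver: broadside RCS of a square flat plate, returned in dBsm."""
    side_m, freq_hz = params
    lam = 3.0e8 / freq_hz
    sigma = 4.0 * math.pi * (side_m ** 2) ** 2 / lam ** 2
    return side_m, freq_hz, 10.0 * math.log10(sigma)

if __name__ == "__main__":
    # Cartesian product of shape parameter and threat frequency, one independent task per point.
    cases = [(side, freq)
             for side in (0.5, 1.0, 2.0)
             for freq in (1.0e9, 3.0e9, 10.0e9)]
    with Pool() as pool:
        for side, freq, rcs_db in pool.map(plate_rcs_dbsm, cases):
            print(f"side {side:.1f} m, {freq/1e9:4.1f} GHz: {rcs_db:6.1f} dBsm")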

Minimising the structural mode RCS of sensor apertures has been and remains a major design challenge, therefore faster computational modelling techniques can produce a beneficial impact in development cycle timelines.



Figure 17. The PAK-FA first prototype displays refined stealth shaping technology (Sukhoi image).


Software



Software has been and will continue to be a major challenge in the development of combat aircraft, and in their progressive evolution through their operational life cycle. Many of these difficulties reflect the direct impact of Moore's law and very short computer hardware lifecycles, which result in early hardware component obsolescence and replacement, with the concomitant need for software modification. More often than not, however, the difficulties reflect poor design in software architecture, poor choices in technology, and an inadequate understanding of the system for which the software is being architected51.

The most fundamental technological challenge is that contemporary and future combat aircraft systems are from a computational perspective, “hard realtime” systems in which highly parallel computing hardware must be employed52.

While the theory of hard realtime scheduling on single processor systems is mature and well understood, the theoretical area of hard realtime scheduling on multiprocessing systems is immature and remains an area of active academic research53.

The principal difficulty which arises in large and highly parallel “hard realtime” systems is that not all computational tasks will necessarily complete in a fixed time duration, which immensely complicates the scheduling problem in a “hard realtime” environment.

Algorithms can be broadly divided into those which are “deterministic”, in the sense that for some given input data they always take the same time to compute on a given processor type, and those which are “non-deterministic”, in that computational times vary given the same input data. A deterministic algorithm which is presented with the same input over and over again will repeatedly compute in the same length of time. This is characteristic of many algorithms used in sensor signal processing, navigation systems, flight control systems and fire control systems. Scheduling such algorithms on a large multiprocessing system may often be difficult, but the unchanging execution time makes such tasks feasible in a “hard realtime” environment.

A much more serious problem arises with many algorithms employed in data processing, sensor fusion, image extraction/analysis and artificial intelligence, where execution time varies widely, presenting often intractable challenges in meeting “hard realtime” timing deadlines, and thus in scheduling these tasks. Some such algorithms are simply “non-deterministic”, introducing inherent unpredictability in execution times; others are “deterministic” yet vary unpredictably in execution time because their data inputs vary unpredictably.

A good case study of the latter problem is the tracking of large numbers of targets, each target being individually tracked by a Kalman filter. While Kalman filters are mature, efficient, well behaved, and computed using a series of matrix operations, each additional target requires that an additional Kalman filter be executed. As a result, a task which is intended to update the positions of multiple targets will require an execution time mostly proportional to the number of targets. Numerous other instances exist presenting much the same challenges in meeting a “hard realtime” timing requirement.
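
A minimal sketch of this scaling behaviour follows, with one constant-velocity Kalman filter instance per track; filter dimensions, noise figures and track counts are arbitrary assumptions chosen only to show that per-frame work grows with the number of tracks.

# Why multi-target tracking load scales with target count: one constant-velocity Kalman
# filter per track, so each update cycle performs a fixed amount of matrix arithmetic
# per target. Illustrative sketch only, not a fielded tracker.
import numpy as np

dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])          # constant-velocity state transition
H = np.array([[1.0, 0.0]])                     # position-only measurement
Q = np.diag([1e-3, 1e-2])                      # assumed process noise
R = np.array([[25.0]])                         # assumed measurement noise (5 m std dev)

def kalman_update(x, P, z):
    """One predict/update cycle for a single track; fixed, deterministic cost."""
    x = F @ x
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x, P

# One filter instance per target: total work per radar frame grows linearly with tracks.
rng = np.random.default_rng(2)
tracks = [(np.array([1000.0 * i, 50.0]), np.eye(2) * 100.0) for i in range(8)]
for frame in range(3):
    tracks = [kalman_update(x, P, np.array([x[0] + rng.normal(0, 5.0)]))
              for (x, P) in tracks]
print(f"{len(tracks)} tracks updated per frame; cost per frame ~ O(number of tracks)")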

Operation                      | Algorithm            | Time Complexity      | Applications
N x N matrix multiplication    | Trivial              | O(N^3)               | Navigation, Graphics, Radar, Sensor Fusion
N x N matrix multiplication    | Strassen             | O(N^2.807)           |
N x N matrix multiplication    | Coppersmith-Winograd | O(N^2.376)           |
N x N matrix inversion         | Gauss-Jordan         | O(N^3)               | Navigation, Graphics, Radar, Sensor Fusion
N x N matrix inversion         | Strassen             | O(N^2.807)           |
N x N matrix inversion         | Coppersmith-Winograd | O(N^2.376)           |
Fast Fourier Transform (FFT)   | Cooley-Tukey         | O(N log N)           | Signal Processing
SAR Imaging                    | Naive algorithm      | O(N^2)               | Radar
SAR Imaging                    | DFMPY Butterfly      | O(N log N log(1/ε))  | Radar

Table 5: Time Complexity of Algorithms Commonly Used in Sensors and Systems

Large multiprocessing computational systems, in which hundreds or even thousands of like or dissimilar processors are employed will present additional difficulties in achieving predictable timing behaviour in the operating system software and machine hardware, due to queueing effects in buffers and hardware bussing interconnects in such systems, or queueing effects in interprocess communications software, avionic multiplex busses, and external networks. As caching techniques are frequently employed internally in processor hardware to improve performance, these can also introduce non-deterministic timing behaviour into the computational system. Other “hidden” causes of unpredictable timing behaviour may include runtime environments, which often include garbage collection algorithms for in-process memory management, or other runtime environment management mechanisms with highly variable timing behaviour.

The complexity of such systems reflects the realities of attempting to execute hundreds or even thousands of concurrent computational tasks, of which a large proportion will be subject to hard realtime performance constraints, and rigid mutual synchronisation requirements. Partitioning computational activities across multiple processors, where a single processor cannot meet a hard realtime performance constraint, remains a challenging problem, and for some algorithms is simply infeasible.

To these challenges must be added the human factor in software development. Numerous case studies show that programmer errors continue to be common, and understanding the behaviour of subsystems for which software is being developed can often be inadequate. A common problem observed is that personnel who understand hard realtime design well are scarce, as this topical area is no longer taught in most universities providing computing and electrical engineering education, and is inherently difficult intellectually.

It is no overstatement to observe that software technology and technique suitable for large heterogeneous multiprocessing computer systems with hard realtime performance constraints is lagging well behind the exponentially growing performance of hardware.


Sensors versus Airframes – Kinematic versus Information Domains



Study of the evolutionary behaviour of kinematic domain and information domain technologies demonstrates that in the foreseeable future, kinematic domain basic technologies will exhibit only incremental growth, while the established pattern of exponential growth in information domain basic technologies will continue. These basic technologies will act as enablers in the design of combat aircraft, setting hard bounds on what can be designed and constructed at any point in time.

The availability of a basic technology does not necessarily mean that it must find its way into a combat aircraft design. For this to occur, there must be a driving imperative, or in the language of evolution, a “fitness advantage” in incorporating this technology.

The driving imperative for information domain technologies will be the evolutionary arms race between sensors used to collect information, and competing technologies such as stealth and jamming, intended to prevent the collection of information. This arms race commenced during the 1940s and has continued unabated for seven decades. As almost all of the basic technologies required to construct sensors and jamming equipment are de facto universally available in a globalised market for commodity consumer Information Communications Technology (ICT) products, the only constraints on what arbitrary nations can develop and deploy will be the laws of physics, and the availability of engineers, scientists and budgets for research, development and production.

Until recently the United States held a genuine monopoly on the technology of stealth shaping, advanced stealth materials, infrared Focal Plane Arrays, and high performance MMICs for radar and other sensor applications. This is no longer true, as demonstrated by recent developments in Russian combat aircraft, stealth and radar54.

In the very near future the only advantage the United States and its allies will have in sensor, processing and stealth technologies will be incremental, the result of having climbed the technological learning curves for these technologies earlier. As noted previously, while processing capability grows exponentially, sensor detection range performance and stealth performance typically do not, so the gap in capabilities between Western stealth and sensor capabilities, and those of competitors will not display the exponentially growing gap characteristic of two exponential growth curves staggered in time.

This pattern of differential technological growth between kinematic domain and information domain basic technologies will be largely reflected in other areas of military aerospace technology, especially guided missiles carried by aircraft or launched by air defence missile batteries. Future missile seekers and guidance systems will have incrementally better detection range performance, and better resistance to jamming and evasive manoeuvre.

This assessment will hold until such time as one or more fundamental and unanticipated technological breakthroughs occur. In general, empirical experience suggests that predictions of future technology are robust for about a decade, although past experience reflects a period of much greater research investment in science and technology55.

A major consideration in assessing the stability of current predictions is that the contemporary Western academic research funding and management culture is deeply risk averse, and generally penalises researchers who choose to take risks by exploring new areas. The result of this stagnating research culture is that other nations or commercial entities, prepared to take risks, are more likely to produce breakthroughs.

Recent advances in the basic technologies of laser and radio-frequency DEW are frequently claimed to be such a breakthrough in military technology. While the ability to produce a damage effect upon a target at the speed of light is highly advantageous, these weapons are subject to the same “power-aperture” and propagation physics constraints which apply to infrared optical and radio-frequency sensor technologies, limiting effective range and constraining the conditions under which these weapons can be used. Their arrival in the inventories of military forces will be reflected in opposing forces applying laser and radio-frequency hardening measures to their platforms and systems56.

A mature DEW technology may eventually replace short range weapons such as guns and close combat missiles, but basic physics precludes the replacement of longer ranging guided missiles in aerial combat.

In an environment where all strategic players have access to comparable basic technologies in the information domain, gaining a decisive technological advantage in the design of such systems will be difficult, and ultimately will, as noted, reflect the scale of research, development and production investment by any such player.

This leaves no significant remaining degrees of freedom which a designer may exploit to gain a major technological and thus strategic military advantage in the information domain.

The three decades of advantage resulting from the United States' monopoly in stealth technology are unusual and reflect in part the significant United States' investment during the 1980s in this area of basic technology, and the inability of an economically crippled Russia and underdeveloped China to match that investment during the subsequent fifteen years.

The Asymmetry Between Kinematic and Information Domain Effects

Survivability and lethality both reflect the combined effects of kinematic domain and information domain basic technologies employed in the design of combat aircraft.

The relationship between kinematic domain and information domain basic technologies does not necessarily yield symmetry in combat effect or effectiveness, certainly not in an environment where weapons fall into the kinematic domain, and kill their targets by impact, explosive blast, incendiary or high velocity fragment damage.



Figure 18. Beyond Visual Range versus Within Visual Range combat environment (Author).

An aircraft which can defeat sensors at long ranges will be able to mostly survive by evading attackers, but if it is found and engaged at short range, its ability to survive will be determined mostly by whatever advantage it may have in kinematic performance. If no such advantage exists, it will most often not survive. This is a specific criticism which has been in the past directed at the F-117A Nighthawk and B-2A Spirit stealth aircraft, and can now be directed at the F-35 Joint Strike Fighter.

The empirical reality is that in close combat, at short ranges, the kinematic domain qualities of aircraft and weapons have historically dominated combat effectiveness and thus Loss-Exchange-Rates, all else being equal. At “eyeball range”, sophisticated sensor suites have yet to demonstrate any decisive effect, or indeed decisive advantage over human sensory or cognitive capabilities. Boyd's dictates on Energy-Manoeuvrability remain valid and are the primary determinants of success in close combat47.

Engagements prosecuted under Beyond Visual Range (BVR) conditions present more complex dependencies between capabilities in the kinematic and information domains. In such engagements, sensors are employed to locate and track an opponent, upon which a guided missile attack may be prosecuted. The effectiveness of such an attack depends on a range of variables, including:
  1. Aircraft sensor performance, accuracy, and ability to overcome target stealth capabilities and defensive jamming;
  2. Missile seeker performance, accuracy, and ability to overcome target stealth capabilities and defensive jamming;
  3. Missile airframe kinematic performance, during flyout to the target, and during the terminal endgame manoeuvre employed to effect a kill;
  4. Target aircraft kinematic performance, in its ability to deny the missile closure;
  5. Target aircraft kinematic performance, in its ability to spoil the missile's endgame manoeuvre;
  6. Launch aircraft kinematic performance, in its ability to maintain a midcourse guidance track for the missile;
  7. Launch aircraft kinematic performance, in its ability to impart the largest possible kinetic and potential energy to the missile at launch, to maximise missile endgame energy.
While existing technology is immeasurably better than that of the late Cold War period, the combat effectiveness observed in BVR missile engagements since that period, when measured as a nett probability of kill, Pkill, remains unspectacular. The US AIM-120 AMRAAM has demonstrated statistically only a 50 – 60% Pkill against unchallenging targets57.
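
The practical consequence of modest single-shot kill probabilities is a reliance on multiple shots per target, which in turn drives missile load-out requirements. The following arithmetic sketch computes cumulative kill probability for a salvo under the simplifying, and optimistic, assumption that shots are statistically independent; the single-shot figures used are simply the range cited above.

# Cumulative kill probability for a salvo of shots, assuming statistical independence.
def salvo_pk(single_shot_pk, shots):
    return 1.0 - (1.0 - single_shot_pk) ** shots

for p in (0.5, 0.6):
    for n in (1, 2, 3):
        print(f"single-shot Pk {p:.2f}, {n} missile(s): cumulative Pk {salvo_pk(p, n):.2f}")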

The poor kill probabilities achieved historically in BVR combat reflect the reality that long range missile combat is and will always be technologically challenging due to the evolution of defensive measures by all players, and the inherent difficulties in engineering long range missiles.



Figure 19. Beyond Visual Range missile combat presents many technological challenges, reflected in a statistically poor success rate since the 1960s. While the Pkill has improved more than fivefold since operations over North Vietnam, it remains inadequate given assumptions about force structure effectiveness. Depicted is an AIM-120C launch from an F-22A Raptor (U.S. Air Force image).

Prima facie, the poor kill probabilities observed in long range missile combat would suggest it to be the least effective regime of air combat between fighters. This raises the important question as to why all major air forces continue to make major investments in technologies and systems required for this style of combat.

The answer is at the most fundamental level no different from the answer found to the analogous question of why land armies continue to make major investments in longer ranging indirect fire weapons, despite the statistical reality of land conflict over many centuries, in that most attrition is inflicted in close combat by direct and indirect fire weapons.

Whether we are considering fighter aircraft positioning to enter a merge, or land manoeuvre forces about to close, in both instances there are two imperatives. One is that of reducing the numbers in the opposing force, and the other is that of disrupting the opponent's positioning and coordination when entering the engagement. Success in either or both yields an advantage once the opposing forces close and enter visual range close combat.

An alternate perspective on this problem can be found by applying Lanchester's models of strategic kinematics, specifically the “square law” and the “linear law”. In close combat, Loss-Exchange-Rates tend to reflect the steeper square law, whereas in long range combat, they tend to reflect the shallower linear law. Lanchester's laws show that an advantage in numbers yields a higher payoff in close combat than in long range combat, so any measures which either attrit the opponent's numbers or disadvantage the opponent's coordination of force elements when entering a close combat engagement yield an advantage58.
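
A simple numerical integration of the two Lanchester models makes the asymmetry concrete. The coefficients and force sizes in the sketch below are arbitrary assumptions; the point is only that, for equal per-unit effectiveness, a numerical advantage yields a much larger surviving force under the square law than under the linear law.

# Numerical sketch of Lanchester's square and linear laws, integrated with a simple
# Euler step. Coefficients and force sizes are arbitrary illustrative assumptions.
def lanchester(a0, b0, alpha, beta, square=True, dt=0.01, steps=100_000):
    a, b = float(a0), float(b0)
    for _ in range(steps):
        if a <= 0.0 or b <= 0.0:
            break
        if square:                      # square law: losses proportional to opposing shooters
            da, db = -beta * b, -alpha * a
        else:                           # linear law: losses proportional to both force sizes
            da, db = -beta * a * b, -alpha * a * b
        a, b = max(a + da * dt, 0.0), max(b + db * dt, 0.0)
    return a, b

# Blue outnumbers Red 1.5 : 1 with equal per-unit effectiveness.
for square in (True, False):
    blue, red = lanchester(30, 20, alpha=0.05, beta=0.05, square=square)
    law = "square" if square else "linear"
    print(f"{law:6s} law: Blue survivors {blue:5.1f}, Red survivors {red:5.1f}")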

The forensic analysis of a large number of computer simulations involving many-vs-many engagements of similar and dissimilar fighter aircraft types displayed exactly this dynamic. The side which was able to best produce early attrition or formation disruption to an opponent, using BVR missile shots, gained an advantage when the two opposing forces merged into close combat. The advantage is neither linear nor incremental, reflecting the steep hyperbolic solution to Lanchester's square law differential equations, with an increasing advantage to the stronger player developing through the duration of the engagement, as the weaker player loses further aircraft in close combat. Missile endgame kinematic performance, seeker jam resistance, seeker diversity and missile numbers carried by each fighter produced the best combat effect in long range engagements59.

Air combat between fighter aircraft thus presents some interesting problems in determining the best balance between kinematic performance, sensor capabilities, and stealthiness.

Boyd's theory provides a good indication of needs in close combat, where aircraft kinematic performance, missile kinematic performance, and the ability to rapidly exploit firing opportunities dominate combat effectiveness. In close combat, pilot ability and tactics to exploit aircraft and missile qualities are a strong factor.

Dominance in Beyond Visual Range missile combat requires an advantage in missile effectiveness, whether by virtue of numbers carried or missile lethality, and as detailed earlier, advantages in aircraft kinematics, sensors and measures to defeat opposing sensors, be they in opposing aircraft or missile seekers.

Future combat aircraft designed with the intent to prevail in fighter versus fighter combat will reflect these demands. Fighters which do not reflect these demands will not be effective, and in evolutionary terms, will be rendered extinct through attrition in air combat.



Figure 20. Range and combat persistence remain an important kinematic performance parameter in fighter aircraft. The depicted P-51D and F-15E both provided considerably better combat persistence than their contemporary foreign competitors (U.S. Air Force image).


The Impact of Exponential Growth Laws on Future Combat Aircraft Design



The exponential growth laws will produce sustained and ongoing growth, albeit incremental in terms of sensor performance, in the capabilities of sensors and systems of sensors used in air combat applications. This will be true of sensors carried by aircraft, missiles, and offboard sensors supporting these fighters, carried by other platforms. The detection range, bandwidth, jam resistance and false alarm rates of sensors will continue to progressively improve as long as development effort is invested and the exponential growth laws hold.

This growth curve will be inevitably paralleled by a growth curve in stealth technology intended to defeat sensors by concealment, and technologies intended to actively defeat sensors by jamming. Stealth will remain the best strategy for defeating sensors at long range, while jamming techniques will be essential to the defeat of sensors at shorter ranges, where signatures are large enough to be detected and tracked by such sensors.

Growth in radio-frequency and optical jamming technologies will closely track growth in sensor technologies, reflecting the reality that both are implemented using the very same basic technologies.

Future growth in stealth technologies will have to reflect growth in opposing sensors. Future stealth shaping techniques and materials will have to provide RCS improvements across greater bandwidths, and do so across a larger range of aspects, to remain effective.

As distance is a major factor in the relative effectiveness of sensors, jammers and stealth capabilities, the ability to rapidly increase the range between an aircraft and opposing sensor or system of sensors will be extremely valuable. Kinematically superior aircraft, especially in terms of sustained speed and acceleration performance, will have an important advantage over kinematically inferior aircraft. This will be true, whether the aircraft is intended to survive engagements with opposing fighters, or opposing air defence missile batteries. Kinematic superiority will mostly be reflected in a demand for higher fuel fractions than observed in most current fighter aircraft, and engines capable of supersonic persistence.

This is the evolutionary pattern which fighter developers and manufacturers will have to follow if they intend to produce aircraft which are competitive.

The exponential growth laws will see over time progressive reductions in the manufacturing costs of many types of sensors, especially in the processing component of the sensor.

A feature of Moore's law driven growth in the computer industry has been that of two concurrent yet divergent trends resulting from exponential growth. These are known as the “constant cost / increasing performance” and “constant performance / decreasing cost” curves. It is very likely that this exact dynamic will arise in information domain military technologies, built using exponentially growing basic technologies.

Upper tier combat aircraft will most likely follow the “constant cost / increasing performance” dynamic, with the caveat that overall system cost will show an increasing proportion of development and life-cycle costs consumed by software.

The exponential growth in information domain basic technologies opens up numerous technological options for upgrades and evolution in existing combat aircraft, and future, yet to be developed, combat aircraft.

As AESA and digital processing technology matures, weight, volume, efficiency and bandwidth will improve. These improvements will act as enablers for increasing use of AESAs as “multifunction apertures”, where the AESA is concurrently employed as an active radar sensor, passive radio-frequency sensor, bi-directional or uni-directional datalink aperture, and active jammer aperture. Some current AESA radars already perform all or most of these functions.

Declining AESA cost, weight and volume will permit the placement of multiple AESAs on combat aircraft, to expand angular coverage. This trend has already been established by the provision for APG-77 cheek arrays on the F-22A Raptor, and Russian experimentation with aft facing ESAs in the tailcone of Flanker demonstrators.

The provision of full spherical coverage using six or more arrays will become feasible in the near future, even if such an arrangement uses higher power-aperture AESAs in the nose and tail, and lower power-aperture AESAs in other positions. This may be a preferable design strategy to that of installing a specialised dorsal AESA for satellite communications, a specialised ventral AESA for datalinking to surface platforms and guided weapons, and supplementing the primary nose AESA with additional side and aft looking AESAs for “traditional” usage.

Spherical coverage using six fixed optical sensors is an established trend, as noted earlier in the AAR-56 MLD and AAQ-37 EO DAS, even if these existing sensors provide limited capabilities, and perform best as close quarters MAWS. Increasing imaging site counts, multiple band spectral coverage, and decreasing imaging chip costs will permit more elaborate and capable designs. As with the provision of spherical AESA coverage, there will be strong tactical imperatives to provide additional nose and tail position sensors with higher sensitivity and steerable telescopic optics.

New airframe designs and evolutionary redesigns of existing airframes will both have to consider the installation of multifunction radio-frequency and optical apertures for best coverage and lowest signature impact, from the very outset. Historically this has only ever been reflected in the design of nose mounted radar apertures.

Sensor fusion and augmented cognition technologies will likely evolve to exploit growth in raw computational performance.

A number of factors will limit or impair technology growth along these paths.

Increasing power output in radio-frequency transistors will permit higher AESA power density and thus higher power-aperture performance. With transistor design placing limits on power conversion efficiency, more powerful AESAs will force the need for greater cooling capacity. Inadequate avionic cooling capacity has been a feature of most fighter designs since the early 1970s, and has presented as a major design problem in the F/A-18E/F, F-22A, and the developmental F-35 series. While the pressure to increase fuel fractions arising from kinematic performance growth demands would aid in increasing airframe cooling capacities, fuel fraction growth faces hard physical constraints which exponentially growing avionics do not.

The development of software capable of meeting needs in both realtime performance and reliability will set bounds on growth in digital sensors. Algorithms which do not parallelise well will also present problems in highly parallel onboard processing systems. Heterogeneous processor hardware architectures will present numerous obstacles to growth, as the programming paradigms used for general purpose CPUs are in many respects different from those used for specialised “number cruncher” processors developed for signal processing and graphics applications. The inability to architect and implement software with desired operational qualities may well become the single greatest obstacle to information domain capability growth in combat aircraft.
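
Amdahl's classic argument18 quantifies the bound on such parallelisation: if a fraction of a workload cannot be parallelised, achievable speedup saturates regardless of how many processors exponential growth delivers. The sketch below is a worked illustration of that relationship for a few assumed parallel fractions.

# Amdahl's law: speedup from N processors when a fraction of the workload is serial.
# Illustrates why poorly parallelisable algorithms cap the benefit of growing core counts.
def amdahl_speedup(parallel_fraction, n_processors):
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_processors)

for frac in (0.5, 0.9, 0.99):
    line = ", ".join(f"{n} cores -> {amdahl_speedup(frac, n):.1f}x" for n in (4, 64, 1024))
    print(f"parallel fraction {frac:.2f}: {line}")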


Conclusions



This study has explored how exponential growth laws can be expected to impact the evolution of combat aircraft designs over the coming two decades. Exponential growth laws in computing, optical imaging, mass storage and data communications basic technologies are surveyed and constraints explored. Recent technological advances in radar technologies are surveyed and related to exponential growth laws.

The divergence in growth rates between kinematic and information domain basic technologies is explored, and a range of conclusions drawn. These in turn are related to combat aircraft capabilities, from the perspectives of survivability, lethality and the dynamics of air combat.

Finally, this study summarises the direct and indirect impact which the exponential growth laws are likely to produce in the evolution of existing combat aircraft and development of future combat aircraft.


Endnotes/References



1 This paper will discuss “technological evolution” in its most fundamental sense, and is not concerned with more general social and philosophical issues labelled by this term and explored by authors such as Richta or Bloomfield. While technological evolution of systems shares some properties with Dawkins' memetics, the transmission of technological ideas through documentation and capture of other parties' equipment is often exact, with low or absent transmission errors. In fact the transmission of ideas in reverse engineering is often so exact, that manufacturing defects have been replicated. The best case study of the latter is the early Tu-4 Bull and Boeing B-29.
2 Dupuy T.N. and Hammerman G.M., Soldier Capability – Army Combat Effectiveness (SCACE), Vol.III, ACN 64024, UNITED STATES ARMY TRAINING AND DOCTRINE COMMAND ARMY SOLDIER SUPPORT CENTER, Technical Report, December 1980, URI: http://www.dtic.mil/, accessed December, 2010.
3 Wang Heping, Li Leiji, RESEARCH ON THE SYNTHESIS OF AIRCRAFT CONFIGURATION PARAMETERS AND COMBAT EFFECTIVENESS, Proceedings of the 22nd INTERNATIONAL CONGRESS OF AERONAUTICAL SCIENCES, 27 August - 1 September 2000, Harrogate International Conference Centre, United Kingdom, URI: http://www.icas-proceedings.net/ICAS2000/PAPERS/ICA0152.PDF, accessed December, 2010.
4 Streets G.B., Gabbert R.D., Capt, USAF, A Comparative Analysis of USAF Fixed-Wing Aircraft Losses in South East Asia Combat, Technical Report AFFDL-TR-77-115, Air Force Systems Command, December, 1977; also Pratt J.C, Maj, USAF, Tactics Against NVN Air Ground Defences December 1966-1 November 1968, HQ PACAF, Directorate Tactical Evaluation, CHECO Division, 1969, Secret [Declassified 15/08/2006].
5 Ball R.E. and Atkinson D.B., Designing for Survivability, Aerospace America, pp32 – 36, November, 2005, American Institute of Aeronautics and Astronautics; also Ball R.E., The Fundamentals of Aircraft Combat Survivability Analysis and Design, Second Edition, American Institute of Aeronautics and Astronautics, Reston, Virginia, USA, 2003.
6 Ibid, cited at URI: http://aircraft-survivability.com/Pages/Definitions.html, accessed December, 2010.
7 This paper employs the term “kinematic” rather than “kinetic” as the latter is well established in use and would be thus open to other interpretations.
8 Unpublished correspondence with WGCDR C.L. Mills, December, 2010.
9 Case studies include the bureaucratic drive to prematurely retire Australia's F-111 fleet, replacing it with much inferior and far more expensive to procure and maintain alternatives; the pursuit by the United States' Office of the Secretary of Defence of the more expensive and much inferior F-35 Joint Strike Fighter over the F-22 Raptor; the European rejection of the F-15E series in favour of less capable and more expensive Eurocanard alternatives. In all of these instances bureaucratic dysfunction appears to be the determinant of what systems are accepted or rejected, while conventional military considerations have no observable impact of any kind; Refer URIs: http://www.ausairpower.net/pig.html; http://www.ausairpower.net/jsf.html; http://www.ausairpower.net/raptor.html; http://www.ausairpower.net/Analysis-Typhoon.html.
10 Kopp C., Reflections on Information Age Air Warfare, Journal of Information Warfare, Edith Cowan University, Perth, WA, Australia, ISSN: 1445-3312, Vol 3, Issue 3, pp 11-28.
11 Kopp C. and Jordan C.C., Der Gabelschwanz Teufel - Assessing the Lockheed P-38 Lightning, Technical Report APA-TR-2010-1201, URI: http://www.ausairpower.net/P-38-Analysis.html.
12 Kopp C., Soviet Maritime Reconnaissance, Targeting, Strike and Electronic Combat Aircraft, Technical Report APA-TR-2007-0704, July, 2007, Updated August, 2009, URI: http://www.ausairpower.net/APA-Sov-ASuW.html; Kopp C., 111 (Profile), Australian Aviation, June, 1984, URI: http://www.ausairpower.net/Profile-F-111.html; Kopp C., F-4G: Anatomy of a Wild Weasel, Australian Aviation, July, 1986, URI: http://www.ausairpower.net/TE-Weasel.html
13 The term “Malthusian growth” has also been employed in the social sciences and economics. Refer: http://arnoldkling.com/econ/mathgrow.html.
14 Moore, G.E., Cramming more components onto integrated circuits, Electronics, Volume 38, Number 8, April 19, 1965, URI: ftp://download.intel.com/museum/Moores_Law/Articles-Press_Releases/Gordon_Moore_1965_Article.pdf.
15 Kopp, C., Moore's Law and its Implications for Information Warfare, Invited Paper, The 3rd International Association of Old Crows (AOC) Electronic Warfare Conference, Conference Paper, Conference Proceedings, Zurich, May 20-25 2000; URI: www.ausairpower.net/PDF-A/moore-iw.pdf.
16 Mead C.A., Conway L.A. Introduction to VLSI Systems, Addison Wesley, Reading, Massachusetts, 1980.
17 Ibid.
18 Amdahl, G.M., Validity of the single processor approach to achieving large scale computing capabilities, AFIPS spring joint computer conference, 1967, URI: http://www-inst.eecs.berkeley.edu/~n252/paper/Amdahl.pdf.
19 Ostgaard, J, et al, Architecture Specification for PAVE PILLAR Avionics, Final technical Report Sep 1985-Oct 1986, AIR FORCE WRIGHT AERONAUTICAL LABS WRIGHT-PATTERSON AFB OH, URI: http://oai.dtic.mil/oai/oai?verb=getRecord&metadataPrefix=html&identifier=ADA188722
20 What is GPU Computing?, NVIDIA Corporation, URI: http://www.nvidia.com/object/GPU_Computing.html
21 Walter, C., Kryder's Law, Scientific American Magazine, August, 2005, URI: http://www.scientificamerican.com/article.cfm?id=kryders-law, accessed June, 2010.
22 The most interesting of these is MagRAM technology, which combines monolithic Silicon planar technology with magnetic materials to yield a non-volatile memory which is competitive in access times with the fastest Static RAM technology.
23 Nielsen J, Nielsen's Law of Internet Bandwidth, Jakob Nielsen's Alertbox for April 5, 1998, URI: http://www.useit.com/alertbox/980405.html, accessed June, 2010.
24 Cherry S., Edholm's Law of Bandwidth, Telecommunications data rates are as predictable as Moore's Law, IEEE Spectrum, July, 2004, URI: http://spectrum.ieee.org/telecom/wireless/edholms-law-of-bandwidth.
25 Kopp C., Microwave monoliths bring Moore's Law to RF, article published in Comms World, July, 2000, Auscom Publishing Pty Ltd, Sydney, NSW, pp 69-74, URI: http://www.ausairpower.net/AC-0700.html.
26 Kopp C., Fifteen constraints on the capability of high-capacity mobile military networked systems, Journal of Battlefield Technology, vol 10, ed 2, Argos Press Pty Ltd, Australia, pp. 15-20.
27 Kopp C., Radio frequency spectrum congestion – Emerging headache for NCW, Defence Today, December, 2007; URI: http://www.ausairpower.net/SP/DT-RF-Congestion-2007.pdf.
28 Common Data Link [CDL], GlobalSecurity.org, URI: http://www.globalsecurity.org/intell/systems/cdl.htm.
29 Kopp C., The Properties of High Capacity Microwave Airborne Ad Hoc Networks, PhD Thesis, Monash University, 2000.
30 Skinner A., US team achieves breakthrough with AESA radar, Janes Defence Weekly, 18th January, 2006.

31 Beletic J.W. et al, Teledyne Imaging Sensors: Infrared imaging technologies for Astronomy & Civil Space, SPIE 2008, Conference Paper, URI: http://www.rockwellscientific.com/infrared_visible_fpas/index.html
32 Soibel A. et al, A super-pixel QWIP focal plane array for imaging multiple waveband temperature sensor, Infrared Physics & Technology, Volume 52, Issue 6, November 2009, Pages 403-407; Proceedings of the International Conference on Quantum Structure Infrared Photodetectors (QSIP) 2009.
33 Leininger B., Autonomous Real-time Ground Ubiquitous Surveillance - Imaging System (ARGUS-IS), DARPA IPTO, URI: http://www.darpa.mil/ipto/programs/argus/argus.asp .
34 Tummalla R.R., Moore's Law Meets Its Match, IEEE Spectrum, June 2006, URI: http://spectrum.ieee.org/computing/hardware/moores-law-meets-its-match/0.
35 Eli Brookner, Now: Phased-array Radars: Past, Astounding Breakthroughs and Future Trends, Microwave Journal, January 2008, URI: http://www.mwjournal.com/Journal/Print.asp?Id=AR_5352.
36 Ibid.
37 Lynch,D. , Jr and Kopp, C., Multifunctional Radar Systems for Fighter Aircraft, in Radar Handbook, Third Edition, Ed. Merrill I Skolnik, McGraw Hill Companies, 2008, Columbus OH, USA.
38 Kopp C., High Energy Laser Directed Energy Weapons, Technical Report APA-TR-2008-0501, May, 2008, URI: http://www.ausairpower.net/APA-DEW-HEL-Analysis.html; also Sniper Advanced Targeting Pod, AN/AAQ-33, Product Description, Lockheed-Martin, URI: http://www.lockheedmartin.com/products/Sniper/index.html.
39 The GEMA Mammut employed no less than 192 electronically phase shifted dipoles to effect beam steering in azimuth and elevation. Refer Kopp C., Early Air Defence Radar, Defence Today, March/April 2008, Strike Publications, URI: http://www.ausairpower.net/SP/DT-MS-WW2-Radar.pdf.
40 Kopp C., Foxbat and Foxhound: Russia's Cold War Warriors, Australian Aviation, November, 1992, Updated July, 2007, URI: http://www.ausairpower.net/TE-Foxbat-Foxhound-92.html.
41 Kopp C., GPS Aided Guided Munitions - Parts I-V, Australian Aviation, August through December, 1996, Updated August, 2008, URI: http://www.ausairpower.net/TE-GPS-Guided-Weps.html.
42 In a superhet receiver analogue downconversion must be performed at some point in the receiver chain. In a DBF design, this is performed in the TR-module or at subarray level, rather than downstream of the antenna subsystem. While there is a penalty in complexity and power dissipation, exponential growth in hardware density will reduce its impact over time.
43 Kopp C., NNIIRT 1L119 Nebo SVU / RLM-M Nebo M; Assessing Russia's First Mobile VHF AESAs, Technical Report APA-TR-2008-0402, April, 2008, URI: http://www.ausairpower.net/APA-Nebo-SVU-Analysis.html.
44 Yuri I. Abramovich, ed., Military Application of Space-Time Adaptive Processing, RTO–EN–027, Research and Technology Organisation/North Atlantic Treaty Organization, April 2003, Ottawa, URI: http://www.rta.nato.int/Pubs/RDP.asp?RDP=RTO-EN-027.
45 William D. O’Neil, The Cooperative Engagement Capability (CEC): Transforming Naval Antiair Warfare, Case Studies in National Security Transformation No. 11, Washington, DC: Center for Technology and National Security Policy, August 2007, URI: http://www.ndu.edu/CTNSP/Case%20Studies/Case%2011%20%20CEC%20Transforming%20Naval%20Anti-Warfare.pdf.
46 Banks S.B. and Lizza C.S., Pilot's Associate: A Cooperative, Knowledge-Based System Application, IEEE Expert: Intelligent Systems and Their Applications, Volume 6 Issue 3, June 1991; see also Pohlmann, L.D. and Payne, J.R., Aerospace and Electronic Systems Magazine, IEEE, August 1988, Volume: 3 Issue: 8.
47 Colonel John R. Boyd, USAF, Fast Transients, URI: http://www.ausairpower.net/APA-Boyd-Papers.html.
48 Eugene F. Knott, John F. Schaeffer, and Michael T. Tuley, Radar Cross Section, 1st Ed., Artech House, 1986, chapter 1; and Eugene F. Knott, John F. Schaeffer, and Michael T. Tuley, Radar Cross Section, 2nd Ed. Artech House, 1993.
49 Ibid., 2E, table 14.1.
50 Ibid., 2E, chapter 8 contains numerous examples.
51 Kopp C., System Reliability and Metrics of Reliability, Lecture series developed for Melbourne University, PHA Pty Ltd, 1996, URI: http://www.csse.monash.edu.au/~carlo/SYSTEMS/PDF-A/Reliability-PHA.pdf.
52 A good definition of hard realtime is provided in Jensen E.D., Hard and Soft Real-Time, Real Time for the Real World, website, URI: http://www.real-time.org/hardandsoftrealtime.htm.
53 The author taught CSE3141 Realtime System Design at Monash University for several years and has several years of realtime software development experience in industry. A good survey of research in this area can be found in Fisher N.W., The Multiprocessor Real-Time Scheduling of General Task Systems, Ch.1 and 2, PhD Thesis, University of North Carolina at Chapel Hill, 2007.
54 Kopp C. and Goon P.A., Assessing the Sukhoi PAK-FA, APA Analyses, Vol. VII APA-2010-01, February, 2010, URI: http://www.ausairpower.net/APA-2010-01.html; and Kopp C., Evolving Technological Strategy in Advanced Air Defense Systems, Joint Forces Quarterly, Issue 57, 2nd Quarter, April 2010, URI: http://www.ndu.edu/press/jfq_pages/editions/i57/kopp.pdf.
55 Bell C.G., The Folly/Laws of Predictions 1.0, The Next 50 Years, ACM97 Conference Talks, Association for Computing Machinery, 1997; Birnbaum J., Computing Alternatives, The Next 50 Years, ACM97 Conference Talks, Association for Computing Machinery, 1997; and Mead C.A., Semiconductors, The Next 50 Years, ACM97 Conference Talks, Association for Computing Machinery, 1997.
56 Kopp C., Considerations on the use of airborne X-band radar as a microwave Directed-Energy Weapon, Journal of Battlefield Technology, Vol 10, Issue 3, Argos Press Pty Ltd, Australia, pp. 19-25.
57 Stillion J. and Perdue S., Air Combat Past, Present and Future, Briefing Slides, August 2008, RAND Project Air Force, RAND Corporation.
58 A good discussion of Lanchester's laws and related modelling problems can be found in Davis P.K., Aggregation, Disaggregation, and the 3:1 Rules in Ground Combat, Monograph MR-638-AF/A/OSD, RAND Corporation, 1995, URI: http://www.rand.org/pubs/monograph_reports/MR638.html; a much more detailed treatment is in Chapter 4 of Kimball G.E. and Morse P.M., Methods of Operations Research, March 1981, Peninsula Publishing, Los Altos Hills, CA, reprinted from a 1951 edition.

59 The author is indebted to WGCDR C.L. Mills for sharing the results of these simulations between 2008 and 2010.


Acknowledgements


The author is indebted to all parties in Australia and overseas who reviewed the draft of this paper, for their cogent comments and valuable thoughts.

