
Wednesday, January 10, 2024

Asymmetric

Asymmetric (pronounced a-sim-et-rick)

(1) Not identical on both sides of a central line; unsymmetrical; lacking symmetry.

(2) An asymmetric shape.

(3) In logic or mathematics, holding true of members of a class in one order but not in the opposite order, as in the relation “being an ancestor of”.

(4) In chemistry, having an unsymmetrical arrangement of atoms in a molecule.

(5) In chemistry, noting a carbon atom bonded to four different atoms or groups.

(6) In chemistry (of a polymer), noting an atom or group that is within a polymer chain and is bonded to two different atoms or groups that are external to the chain.

(7) In electrical engineering, of conductors having different conductivities depending on the direction of current flow, as of diodes.

(8) In aeronautics, having unequal thrust, as caused by an inoperative engine in a twin-engined aircraft.

(9) In military theory, a conflict where the parties are vastly different in terms of military capacity.  This situation is not in all circumstances disadvantageous to the nominally inferior party.

(10) In gameplay, where different players have different experiences.

(11) In cryptography, not involving a mutual exchange of keys between sender and receiver.

(12) In set theory, of a relation R on a set S: having the property that for any two elements of S (not necessarily distinct), at least one is not related to the other via R.
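Definition (12) can be stated as a mechanical test: for every ordered pair in the relation, the reversed pair must be absent (the "not necessarily distinct" clause also rules out reflexive pairs).  A minimal Python sketch of that test, with illustrative sets (the names and example relations are this writer's, not part of the definition):

```python
from itertools import product

def is_asymmetric(S, R):
    """True if relation R (a set of ordered pairs) on set S is asymmetric:
    for any a, b in S (not necessarily distinct), at least one of
    (a, b) and (b, a) is absent from R."""
    return all(not ((a, b) in R and (b, a) in R)
               for a, b in product(S, repeat=2))

S = {1, 2, 3}
strict_less = {(a, b) for a, b in product(S, repeat=2) if a < b}
less_or_equal = {(a, b) for a, b in product(S, repeat=2) if a <= b}

print(is_asymmetric(S, strict_less))    # True: a < b never holds both ways
print(is_asymmetric(S, less_or_equal))  # False: (1, 1) is related to itself
```

Note that "being an ancestor of" in definition (3) passes the same test, which is why logicians group the two senses together.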

1870–1875: The construct was a- + symmetric.  The a- prefix was from the Ancient Greek ἀ- (a-) (ἀν- (an-) if immediately preceding a vowel) and was added to stems to create the sense of "not, without, opposite of".  The prefix is referred to as an alpha privative and is used with stems beginning with consonants (except sometimes “h”); “an-” is synonymous and is used in front of words that start with vowels and sometimes “h”.  Symmetric was from the Latin symmetria, from the Ancient Greek συμμετρία (summetría).  Symmetry was from the 1560s in the sense of "relation of parts, proportion", from the sixteenth century French symmétrie and directly from the Latin symmetria, from the Greek symmetria (agreement in dimensions, due proportion, arrangement), from symmetros (having a common measure, even, proportionate), an assimilated form of syn- (together) + metron (measure), from the primitive Indo-European me- (to measure).  The meaning "harmonic arrangement of parts" dates from the 1590s.  The suffix -ic was from the Middle English -ik, from the Old French -ique, from the Latin -icus, from the primitive Indo-European -kos & -os, formed with the i-stem suffix -i- and the adjectival suffix -kos & -os.  The form existed also in Ancient Greek as -ικός (-ikós), in Sanskrit as -इक (-ika) and in the Old Church Slavonic as -ъкъ (-ŭkŭ); it is a doublet of -y.  In European languages, adding -kos to noun stems carried the meaning "characteristic of, like, typical, pertaining to" while on adjectival stems it acted emphatically.  In English it's always been used to form adjectives from nouns with the meaning “of or pertaining to”.  A precise technical use exists in physical chemistry where it's used to denote certain chemical compounds in which a specified chemical element has a higher oxidation number than in the equivalent compound whose name ends in the suffix -ous (eg sulphuric acid (H₂SO₄) has more oxygen atoms per molecule than sulphurous acid (H₂SO₃)).
Asymmetric & asymmetrical are adjectives, asymmetricity, asymmetricality, asymmetricalness & asymmetry are nouns and asymmetrically is an adverb; the noun plural is asymmetries.

The usually symmetrically attired Lindsay Lohan demonstrates the possibilities of asymmetry.

1975 Kawasaki 750 H2 Mach IV.

Manufacturers of triple-cylinder motorcycles traditionally used single (3 into 1) or symmetrical (3 into 2) exhaust systems (although, during the 1970s, Suzuki offered some of their "Ram-Air" models with a bizarre 3 into 4 setup, the centre cylinder’s header bifurcated) but in 1969 Kawasaki adopted an asymmetric arrangement for one of the memorable machines of the time.  The Kawasaki 500 H1 Mach III had two outlets to the right, one to the left and was a fast, lethally unstable thing which was soon dubbed the "widow maker".  Improvements to the Mach III made it a little more manageable and its successor, the 750 H2 Mach IV, was claimed to be better behaved but was faster still and best enjoyed by experts, preferably in a straight line although, with a narrow power band which peaked with a sudden rush, even that could be a challenge.  The Kawasaki triples remain the most charismatic of the Japanese motorcycles.

1973 Triumph X-75 Hurricane.

Available only during 1972-1973 and produced in small numbers, the Triumph X-75 Hurricane was typical of the motorcycles being produced by the British manufacturers which had neglected development and re-investment and consequently were unable adequately to respond to the offerings of the Japanese, which had done both aplenty.  Whatever their charms, models like the X-75 were being rendered obsolescent, some of the underlying technology dating back decades yet, without the capital to invest, this was as good as it got and some of the fudges of the era were worse.  The X-75 was however ahead of its time in one way: it was a “factory special”, a design influenced by what custom shops in the US had been doing as one-offs for customers and in the years ahead, many manufacturers would be attracted by the concept and its healthy profit margins.  The X-75 is remembered also for the distinctive asymmetric stack of three exhaust pipes on the right-hand side.

1986 Ferrari Testarossa (1984-1991) with monospecchio.

Some of Ferrari's early-production Testarossas were fitted with a single high-mounted external mirror, on the left or right depending on the market into which the car was sold and although the preferred term was the Italian “monospecchio” (one mirror), in the English-speaking world it was quickly dubbed the “flying mirror" (rendered sometimes in Italian as “specchio volante” (an ordinary wing mirror being a “specchietto laterale esterno”), proving everything sounds better in Italian).  The unusual placement and blatant asymmetry annoyed some and delighted others, the unhappy more disgruntled still if they noticed the vent on the right of the front spoiler not being matched by one to the left.  It was there to feed the air-conditioning’s radiator and while such offset singularities are not unusual in cars, many manufacturers create a matching fake as an aesthetic device: Ferrari did not.  The mirror’s curious placement was an unintended consequence of a European Union regulation (and it is doubtful many institutions have in so relatively short a time created as many regulations of such collective length as the EU) regarding the devices, which was interpreted by the designers as demanding 100% rearward visibility.  Because of the sheer size of the rear bodywork necessitated by the twin radiators which sat behind the side-strakes (another distinctive Testarossa feature), the elevation was the only way this could be done but it later transpired the interpretation of the law was wrong, a perhaps forgivable mistake given the turgidity of EU legalese.

The Blohm & Voss BV 141

Focke-Wulf Fw 189 Uhu (Owl)

In aircraft, designs have for very good reason (aerodynamics, weight distribution, flying characteristics, ease of manufacture et al) tended to be symmetrical, sometimes as an engineering necessity such as the use of contra-rotating propellers on some twin-engined airframes, a trick to offset the destabilizing effects of the torque when very potent power-plants are fitted.  There has though been the odd bizarre venture into structural asymmetry, one of the most intriguing being the Blohm & Voss BV 141, the most distinctive feature of which was an offset crew-capsule.  The BV 141 was a tactical reconnaissance aircraft built in small numbers and used in a desultory manner by the Luftwaffe (the German air force) during World War II (1939-1945) and although it was studied by engineers from many countries, none seem to have been inspired to repeat the experiment.  The origin of the curious craft lay in a specification issued in 1937 by the Reichsluftfahrtministerium (RLM; the German Air Ministry) which called for a single-engine reconnaissance aircraft, optimized for visual observation and, in response, Focke-Wulf offered their Fw 189 Uhu (Owl) which, because of the then still novel twin-boomed layout, encountered some resistance from the RLM bureaucrats but it found much favor with the Luftwaffe and, over the course of the war, some nine-hundred entered service, used almost exclusively as the Germans' standard battlefield reconnaissance aircraft.  In fact, so successful did it prove in this role that the other configurations it was designed to accommodate, those of liaison and close-support ground-attack, were never pursued.  Although its performance was modest, it was a fine airframe with superb flying qualities and an ability to absorb punishment which, on the Russian front where it was extensively deployed, became famous and captured examples provided Russian aeronautical engineers with ideas which would for years influence their designs.

The RLM had also invited Arado to tender but their Ar 198, although featuring an unusual under-slung and elongated cupola which afforded the observer a uniquely panoramic view, proved unsatisfactory in test-flights and development ceased.  Blohm & Voss hadn't been included in the RLM's invitation but anyway chose to offer a design which was radically different even by the standards of the innovative Fw 189.  The asymmetric BV 141 design was eye-catching, the crew housed in an extensively glazed capsule offset to starboard of the centre-line with a boom offset to port which housed the single engine in front and the tail to the rear.  Prototypes were built as early as 1938 and the Luftwaffe conducted operational trials over both the UK and USSR between 1939-1941 but, despite being satisfactory in most respects, the BV 141 was hampered by poor performance, a consequence of using an under-powered engine.  A re-design of the structure to accommodate more powerful units was begun but delays in development and the urgent need for the up-rated engines for machines already in production doomed the project and the BV 141 was in 1943 abandoned.

Blohm & Voss BV 141 prototype with full-width rear elevators & stabilizers.

Production Blohm & Voss BV 141 with port-only rear elevator & stabilizer.

Despite the ungainly appearance, test-pilots reported the BV 141 was a nicely balanced airframe, the seemingly strange weight distribution well compensated by (1) component placement, (2) the specific lift characteristics of the wing design and (3) the choice of opposite rotational direction for crankshaft and propeller, the torque generated used as a counter-balance.  Nor, despite the expectations of some, were there difficulties in handling whatever behavior was induced by the thrust-versus-drag asymmetry, pilots all indicating some intuitive trimming was all that was needed to compensate for any induced yaw.  The asymmetry extended even to the tail-plane, the starboard elevator and horizontal stabilizer removed (to afford the tail-gunner a wider field of fire) after the first three prototypes were built; surprisingly, this was said barely to affect the flying characteristics.  Blohm & Voss pursued the concept, a number of design-studies (including a piston & turbojet-engine hybrid) initiated but none progressed beyond the drawing-board.

Asymmetric warfare

In the twenty-first century, the term “asymmetric warfare” became widely used.  The concept describes conflicts in which there are significant disparities in power, capability and strategy between opposing forces and although the phrase has become recently fashionable, the idea is ancient, based often on the successes which could be achieved by small, mobile and agile (often irregular) forces against larger, conventionally assembled formations.  Reports of such tactics are found in accounts of conflicts in Asia, Africa, the Middle East and Europe from as early as reliable written records exist.  The classic example is what came later to be called “guerrilla warfare”: hit-and-run tactics which probe and attack weak spots as they are detected, the ancestor of insurgencies, “conventional” modern terrorism and cyber-attacks.  However, even between conventional national militaries there have long been examples of the asymmetric, such as the use of small, cheap weapons like torpedo boats and mines which early in the twentieth century proved effective against the big, ruinously expensive Dreadnoughts.  To some extent, the spike in use of the phrase in the post-Cold War era happened because it captured such a contrast: the nuclear weapon states, although having a capacity to destroy entire countries without one soldier setting foot on their territory, found themselves vulnerable to low-tech, cleverly planned attacks.

Although the term “asymmetric warfare” encompasses a wide vista, one increasingly consistent thread is that it can be difficult for "conventional" military formations to counter insurgencies conducted by irregular combatants who, in many places and for much of the time, are visually indistinguishable from the civilian population.  The difficulty lies not in achieving the desired result (destruction of the enemy) but in managing to do so without causing an “excessive” number of civilian casualties; although public disapproval has meant the awful phrase “collateral damage” is now rarely heard, civilians (many of them women & children) continue greatly to suffer in such conflicts, the death toll high.  Thus the critique of the retaliatory strategy of the Israel Defence Forces (IDF) in response to the attack by Hamas on 7 October 2023, Palestinian deaths now claimed to exceed 20,000; that number is unverified and will include an unknown number of Hamas combatants but there is no doubt the percentage of civilian deaths will be high, the total casualty count estimated early in January 2024 at some 60,000.  What the IDF appears to have done is settle on the strategy adopted by Ulysses S Grant (1822–1885; US president 1869-1877) in 1864 when appointed head of the Union armies: the total destruction of the opposing forces.  That decision was a reaction to the realization the previous approach (skirmishes and the temporary taking of enemy territory which was soon re-taken) was ineffectual and war would continue as long as the other side retained even a defensive military capacity.  Grant’s strategy was, in effect: destroy the secessionist army and the secessionist cause dies out.

In the US Civil War (1861-1865) that approach worked, though at an appalling cost, the 1860s a period when ballistics had advanced to the point horrific injuries could be inflicted at scale but battlefield medical tools and techniques were barely advanced from Napoleonic times.  The bodies were piled high.  Grant’s success was influential on the development of the US military, which eventually evolved into an organization that came to see problems as something not to be solved but overwhelmed by the massive application of force, an attitude which, although now refined, permeates from the Pentagon down to platoon level.  As the US proved more than once, the strategy works as long as there’s little concern about “collateral damage”, an example of this approach being when the Sri Lankan military rejected the argument there was “no military solution” to the long-running civil war (1983-2009) waged by the Tamil Tigers (the Liberation Tigers of Tamil Eelam (LTTE)).  What “no military solution” means is that a war cannot be won if the rules of war are followed, so the government took the decision that if war crimes and crimes against humanity were what was required to win, they would be committed.

In the 1990s, a number of political and military theorists actually advanced the doctrine “give war a chance”, the rationale being that however awful conflicts may be, if allowed to continue to the point where one side gains an unambiguous victory, the dispute is at least resolved and peace can ensue, sometimes for generations.  For most of human history, such was the usual path of war but after the formation of the United Nations (UN) in 1945 things changed, the Security Council the tool of the great powers, all of which (despite their publicity) viewed wars as a part of whatever agenda they were at the time pursuing and, depending on this and that, their interests sometimes lay in ending conflicts and sometimes in prolonging them.  In isolation, such an arrangement probably could have worked (albeit with much “collateral damage”) but over the years, a roll-call of nations run by politicians appalled by the consequences of war began to become involved, intervening with peace plans, offering mediation and urging the UN to deploy “peacekeeping” forces, something which became an international growth industry.  Added to that, for a number of reasons, a proliferation of non-government organizations (NGOs) were formed, many of which concerned themselves with relief programmes in conflict zones and while these benefited many civilians, they also had the effect of allowing combatant forces to re-group and re-arm, meaning wars could drag on for a decade or more.

In the dreadful events in Gaza, war is certainly being given a chance and the public position of both the IDF and the Israeli government is that the strategy being pursued is one designed totally “to destroy” not merely the military capacity of Hamas but the organization itself.  Such an idea worked for Grant in the 1860s and, as the Sri Lankan military predicted they would, end-game there was achieved in 2009 on the basis of “total destruction”.  However, Gaza (and the wider Middle East) is a different time & place and even if the IDF succeeds in “neutralizing” the opposing fighters and destroying the now famous network of tunnels and ad-hoc weapons manufacturing centres, it can’t be predicted that Hamas in some form won’t survive and in that case, what seems most likely is that while the asymmetry of nominal capacity between the two sides will be more extreme than before, Hamas is more likely to hone the tactics than shift the objective.  The IDF high command are of course realists and understand there is nothing to suggest “the Hamas problem” can be solved and being practical military types, they know if a problem can’t be solved it must be managed.  In the awful calculations of asymmetric conflict, this means the IDF calculate that while future attacks will happen, the more destructive the response now, the longer will be the interval before the next event.

Thursday, October 26, 2023

Nail

Nail (pronounced neyl)

(1) A slender, typically rod-shaped rigid piece of metal, usually in many lengths and thicknesses, having (usually) one end pointed and the other (usually) enlarged or flattened, and used for hammering into or through wood, concrete or other materials; in the building trades the most common use is to fasten or join together separate pieces (of timber etc).

(2) In anatomy, a thin, horny plate, consisting of modified epidermis, growing on the upper side of the end of a finger or toe; the toughened protective protein keratin (known as alpha-keratin, also found in hair) at the end of an animal digit, such as a fingernail.

(3) In zoology, the basal thickened portion of the anterior wings of certain hemiptera; the terminal horny plate on the beak of ducks, and other allied birds; the claw of a mammal, bird, or reptile.

(4) Historically, in England, a round pedestal on which merchants once carried out their business.

(5) A measure of length for cloth, equal to 2¼ inches (57 mm) or 1⁄20 of an ell; 1⁄16 of a yard (archaic); it’s assumed the origin lies in the use to mark that length on the end of a yardstick.

(6) To fasten with a nail or nails; to hammer in a nail.

(7) To enclose or confine (something) by nailing (often followed by up or down).

(8) To make fast or keep firmly in one place or position (also used figuratively).

(9) Perfectly to accomplish something (usually as “nailed it”).

(10) In vulgar slang, of a male, to engage in sexual intercourse with (as “I nailed her” or (according to Urban Dictionary) “I nailed the bitch”).

(11) In law enforcement, to catch a suspect or find them in possession of contraband or engaged in some unlawful conduct (usually as “nailed them”).

(12) In Christianity, as “the nails”, the relics used in the crucifixion, nailing Christ to the cross at Golgotha.

(13) As the nail (unit), an archaic multiplier equal to one sixteenth of a base unit.

(14) In drug slang, a hypodermic needle, used for injecting drugs.

(15) To detect and expose (a lie, scandal, etc).

(16) In slang, to hit someone.

(17) In slang, intently to focus on someone or something.

(18) To stud with or as if with nails.
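The arithmetic behind the cloth-measure sense hangs together neatly: 2¼ inches is exactly 1⁄16 of a 36-inch yard and 1⁄20 of the 45-inch English ell.  A quick Python check of those figures (the 45-inch ell is an assumption; ells varied by country):

```python
# Verify the cloth-nail arithmetic quoted in the definitions above.
YARD_INCHES = 36      # inches per yard
ELL_INCHES = 45       # assumed: the 45-inch English ell
NAIL_INCHES = 2.25    # the nail: 2¼ inches

print(NAIL_INCHES / YARD_INCHES)   # 0.0625 -> 1/16 of a yard
print(NAIL_INCHES / ELL_INCHES)    # 0.05   -> 1/20 of an ell
print(NAIL_INCHES * 25.4)          # 57.15  -> the "57 mm" of the entry, rounded
```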

Pre 900: From the Middle English noun nail & nayl, from the Old English nægl and cognate with the Old Frisian neil, the Old Saxon & Old High German nagal, the Dutch nagel, the German Nagel, the Old Norse nagl (fingernail), all of which were from the unattested Germanic naglaz.  As a derivative, it was akin to the Lithuanian nãgas & nagà (hoof), the Old Prussian nage (foot), the Old Church Slavonic noga (leg, foot), (the Serbo-Croatian nòga, the Czech noha, the Polish noga and the Russian nogá, all of which were probably originally a jocular reference to the foot as “a hoof”), the Old Church Slavonic nogŭtĭ, the Tocharian A maku & Tocharian B mekwa (fingernail, claw), all from the unattested North European Indo-European ənogwh-.  It was further akin to the Old Irish ingen, the Welsh ewin and the Breton ivin, from the unattested Celtic gwhīnā; the Latin unguis (fingernail, claw), from the unattested Italo-Celtic əngwhi-; the Greek ónyx (stem onych-); the Sanskrit ághri- (foot), from the unattested ághli-; and the Armenian ełungn, from the unattested onogwh-.  The Middle English verbs naile, nail & nayle were from the Old English næglian, cognate with the Old Saxon neglian, the Old High German negilen and the Old Norse negla, from the unattested Germanic nagl-janan (the Gothic was ganagljan).  The ultimate source was the primitive Indo-European h₃nog- (nail) and the use to describe the metal fastener was from the Middle English naylen, from the Old English næġlan & nægl (fingernail (handnægl)) & negel (tapering metal pin), from the Proto-Germanic naglaz (source also of the Old Norse nagl (fingernail) & nagli (metal nail)).  Nail is a noun & verb, nailer is a noun, nailless & naillike are adjectives, renail is a verb, nailing is a noun & verb and nailed is a verb & adjective; the noun plural is nails.

Nail is modified or used as a modifier in literally dozens of examples including finger-nail, toe-nail, nail-brush, nail-file, rusty-nail, garden-nail, nail-fungus, nail-gun & frost-nail.  In idiomatic use, a “nail in one's coffin” is an experience or event that tends to shorten life or hasten the end of something (applied retrospectively (ie post-mortem) it’s usually in the form “final nail in the coffin”).  To be “hard as nails” is either to be “in a robust physical state” or “lacking in human feelings or without sentiment”.  To “nail one's colors to the mast” is to declare one’s position on something.  Something described as “better than a poke in the eye with a rusty nail” is a thing which, while not ideal, is not wholly undesirable or without charm.  In financial matters (of payments), to be “on the nail” is to “pay at once”, often in the form “pay on the nail”.  To “nail something down” is to finalize it.  To have “nailed it” is “to perfectly have accomplished something” while “nailed her” indicates “having enjoyed sexual intercourse with her”.  The “right” in the phrase “hit the nail right on the head” is a more recent addition, all known instances of use prior to 1700 being “hit the nail on the head” and the elegant original is much preferred.  It’s used to mean “correctly identify something or exactly to arrive at the correct answer”.  Interestingly, the Oxford English Dictionary (OED) notes there is no documentary evidence that the phrase comes from “nail” in the sense of the thing hit by a hammer.

Double-headed nails are used for temporary structures like fencing.  When the shaft is hammered in to the point where the surface of the lower head is flat against the surface of that into which it's being hammered, it leaves the upper head standing proud with just enough of the shaft exposed to allow a claw-hammer to be used to extract the nail.  There is a story that as part of an environmental protest against the building or demolition of some structure (the tales vary), activists early one morning went to the temporary fencing around the contested site and hammered in all the double-headed nails.  This is believed to be an urban myth.

The sense of “fingernail” appears to be the original, which makes sense given there were fingernails before there were spikes (of metal or any other material) used to build stuff.  The verb nail was from the Old English næglian (to fix or fasten (something) onto (something else) with nails), from the Proto-Germanic ganaglijan (the source also of the Old Saxon neglian, the Old Norse negla, the Old High German negilen, the German nageln and the Gothic ganagljan (to nail)), all developed from the root of the nouns.  The colloquial meaning “secure, succeed in catching or getting hold of (someone or something)” was in use by at least the 1760s; hence the law enforcement slang meaning “to effect an arrest”, noted since the 1930s.  The meaning “to succeed in hitting” dates from 1886 while the phrase “to nail down” (to fix in place with nails) was first recorded in the 1660s.

As a noun, “nail-biter” (worrisome or suspenseful event), perhaps surprisingly, seems not to have been in common use until 1999 and it’s applied to things from life-threatening situations to watching close sporting contests.  The idea of nail-biting as a sign of anxiety has appeared in various forms of literature since the 1570s, the noun nail-biting noted since 1805, and from the mid-nineteenth century it was applied to those individuals who “habitually or compulsively bit their fingernails” although this seems to have been purely literal rather than something figurative of a mental state.  Now, a “nail-biter” is one who is “habitually worried or apprehensive” and they’re often said to be “chewing the ends of their fingernails”; in political use, a “nail biter” is a criticism somewhat less cutting than “bed-wetter”.  The condition of compulsive nail-biting is the noun onychophagia, the construct being onycho- (a creation of the international scientific vocabulary), reflecting a New Latin combining form, from the Ancient Greek ὄνυξ (ónux) (claw, nail, hoof, talon) + -phagia (eating, biting or swallowing), from the Ancient Greek -φαγία (-phagía).  A related form was -φάγος (-phágos) (eater), the suffix corresponding to φαγεῖν (phageîn) (to eat), the infinitive of ἔφαγον (éphagon) (I ate), which serves as the aorist (essentially a compensator for sense-shifts) of the defective verb ἐσθίω (esthíō) (I eat).  Bitter-tasting nail-polish is available for those who wish to cure themselves.  Nail-polish as a product dates from the 1880s and was originally literally a clear substance designed to give the finger or toe-nails a varnish-like finish upon being buffed.  By 1884, it was being sold as “liquid nail varnish”, including shades of black, pink and red, although surviving depictions in art suggest men and women in various cultures have for thousands of years been coloring their nails.
Nail-files (small, flat, single-cut file for trimming the fingernails) seem first to have been sold in 1819 and nail-clippers (hand-tool used to trim the fingernails and toenails) in 1890.

Pope Francis (b 1936; pope since 2013) at the funeral of Cardinal George Pell (1941-2023), St Peter’s Basilica, the Vatican, January 2023.

The expression "nail down the lid" is a reference to the lid of a coffin (casket), the implication being one wants to make doubly certain anyone within can't possibly "return from the dead".  The noun doornail (also door-nail) (large-headed nail used for studding batten doors for strength or ornament) emerged in the late fourteenth century and was often used of many large, thick nails with a large head, not necessarily those used only in doors.  The figurative expression “dead as a doornail” seems to be as old as the piece of hardware and use soon extended to “dumb as a doornail” and “deaf as a doornail”.  The noun hangnail (also hang-nail) is as awful as it sounds and describes a “sore strip of partially detached flesh at the side of a nail of the finger or toe”; it appears in seventeenth century texts although few etymologists appear to doubt it’s considerably older and probably a folk etymology and sense alteration of the Middle English agnail & angnail (corn on the foot), from the Old English angnægl.  The origin is likely to have been literally the “painful spike” in the flesh when suffering the condition.  The first element was the Proto-Germanic ang- (compressed, hard, painful), from the primitive Indo-European root angh- (tight, painfully constricted, painful); the second the Old English nægl (spike), one of the influences on “nail”.  The noun hobnail was a “short, thick nail with a large head” which dates from the 1590s, the first element probably identical with hob (rounded peg or pin used as a mark or target in games (noted since the 1580s)), of unknown origin.  Because hobnails were hammered into the leather soles of heavy boots and shoes, “hobnail” came in the seventeenth century to be used of “a rustic person” though it was thought less offensive than forms like “yokel”.

Colors: Lindsay Lohan with nails unadorned and painted.

In the 1930s, the straight-8 became a favorite for manufacturers of luxury cars, attracted by its ease of manufacture (components and assembly-line tooling able to be shared with straight-sixes), the mechanical smoothness inherent in the layout and the ease of maintenance afforded by the long, narrow configuration.  However, the limitations were the relatively slow engine speeds imposed by the need to restrict “crankshaft flex” and the height of the units, a product of the long strokes used to gain the required displacement.  By the 1950s, it was clear the future lay in big-bore, overhead valve V8s although the Mercedes-Benz engineers, unable to forget the glory days of the 1930s when the straight-eight W125s built for the Grand Prix circuits generated power and speed Formula One wouldn’t again see until the late 1970s, noted the relatively small 2.5 litre (153 cubic inch) displacement limit for 1954 and conjured up a quixotic final fling for the layout.  Used in both Formula One as the W196R and in sports car racing as the W196S (better remembered as the 300 SLR), the new 2.5 & 3.0 litre (183 cubic inch) straight-8s, unlike their pre-war predecessors, solved the issue of crankshaft flex by locating the power take-off at the centre, adding mechanical fuel-injection and a desmodromic valve train to make the things an exotic cocktail of ancient & modern.  Dominant during 1954-1955 in both Formula One & the Sports Car Championship, they were the last of the straight-8s.

Schematic of Buick “Nailhead” V8, 1953-1966.

Across the Atlantic, the US manufacturers also abandoned their straight-8s.  Buick introduced their overhead valve (OHV) V8 in 1953 but, being much wider than before, the new engine had to be slimmed somewhere to fit between the fenders; it would not be until later that the platform was widened.  To achieve this, the engineers narrowed the cylinder heads, compelling both a conical (the so-called “pent-roof”) combustion chamber and an arrangement in which the sixteen valves pointed directly upwards on the intake side, something which not only demanded an unusual pushrod & rocker mechanism but also limited the size of the valves.  So, the valves had to be tall and narrow and, with some resemblance to nails, they picked up the nickname “nail valves”, morphing eventually to “nailhead” as a description of the whole engine.  The valve placement and angle certainly benefited the intake side but the geometry compromised the flow of exhaust gases which were compelled through their anyway small ports to make a turn of almost 180° on their way to the tailpipe.

It wasn't the last time the head design of a Detroit V8 would be dictated by considerations of width.  When Chrysler in 1964 introduced the 273 cubic inch (4.5 litre) V8 as the first of its LA-Series (which would beget the later 318, 340 & 360 as well as the V10 made famous in the Dodge Viper), the most obvious visual difference from the earlier A-Series V8s was the noticeably smaller cylinder heads.  The A engines used a skew-type valve arrangement in which the exhaust valve was parallel to the bore with the intake valve tipped toward the intake manifold (the classic polyspherical chamber).  For the LA, Chrysler tipped all the valves toward the intake manifold and placed them in-line (as viewed from the front), the industry’s standard approach to a wedge combustion chamber.  The reason for the change was that the decision had been taken to offer the compact Valiant with a V8 but it was a car which had been designed to accommodate only a straight-six and the wide-shouldered polyspheric head A-Series V8s simply wouldn’t fit.  So, essentially, wedge-heads were bolted atop the old A-Series block but the “L” in LA stood for light and the engineers wanted something genuinely lighter for the compact (in contemporary US terms) Valiant.  Accordingly, in addition to the reduced size of the heads and intake manifold, a new casting process was developed for the block (the biggest, heaviest part of an engine) which made possible thinner walls.

322 cubic inch Nailhead in 1953 Buick Skylark convertible (left) and 425 cubic inch Nailhead in 1966 Buick Riviera GS (with dual-quad MZ package) (right).  Note the “Wildcat 465” label on the air cleaner, a reference to the claimed torque rating, something most unusual; most manufacturers used the space to advertise horsepower or cubic inch displacement (cid).

The Nailhead wasn’t ideal for producing ultimate power but it did lend itself to prodigious low-end torque, something much appreciated by Buick's previous generation of buyers who had enjoyed the low-speed responsiveness of the famously smooth straight-8.  However, like everybody else, Buick hadn’t anticipated that as the 1950s unfolded, the industry would engage in a “power race”, something to which the free-breathing Cadillacs and Chrysler’s Hemis were well-suited.  The somewhat strangulated Buick Nailhead was not at all suited and to gain power the engineers were compelled to add high-lift, long-duration camshafts which enabled the then magic 300 horsepower number to be achieved, but at the expense of smoothness, and tales of Buick buyers returning to the dealer to fix the “rumpity-rump” idle became legion.  Still, the Nailhead was robust, relatively light and offered what was then a generous displacement, and the ever inventive hot-rod community soon worked out the path to power was to use forced induction and reverse the valve use, the supercharger blowing the fuel-air mix through the exhaust ports and the exhaust gases out through the larger intake ports.  Thus for a while the Nailhead enjoyed a career as a niche player although the arrival in the mid 1950s of the much more tunable Chevrolet V8s ended the vogue for all but a few devotees who continued its use well into the 1960s.  Buick acknowledged reality and, unusually, instead of following the industry trend and drawing attention to cubic inch displacement and horsepower, publicized their torque output, confusing some (though probably not Buick buyers, who were a loyal crew).

Lockheed SR-71 Blackbird (1964-1999).

Not confused was the United States Air Force (USAF) which was much interested in power for its aircraft but also had a special need for torque on the tarmac and that briefly meant another small niche for the Nailhead.  The Lockheed SR-71 Blackbird (1964-1999) was a long-range, high-altitude supersonic (Mach 3+) aircraft used by the USAF for reconnaissance between 1966 and 1998 and by the National Aeronautics & Space Administration (NASA) for observation missions as late as 1999.  Something of a high-water mark among the extraordinary advances made in aeronautics and materials construction during the 1950s & 1960s, the SR-71 used the Pratt & Whitney J58 turbojet engine which used an innovative, secondary air-injection system to the afterburner, permitting additional thrust at high speed.  The SR-71 still holds a number of altitude and speed records and Lockheed’s SR-72, a hypersonic unmanned aerial vehicle (UAV), is said to be in an “advanced stage” of design and construction although whether any test flights will be conducted before 2030 remains unclear, the challenges of sustaining velocities as high as Mach 6+ in the atmosphere being onerous given the heat generated.

Drawing from user manual for AG330 starter cart (left) and AG330 starter cart with dual Buick Nailhead V8s (right).

At the time, the SR-71 was the most exotic aircraft on the planet but during testing and early in its career, to fly, it relied on a pair of even then technologically bankrupt Buick Nailhead V8s.  These were mounted in a towed cart and were effectively the turbojet’s starter motor, a concept developed in the 1930s as a work-around for the technology gap which emerged as aero-engines became too big to start by hand but no on-board electrical systems were available to trigger ignition.  The two Nailheads were connected by gears to a single, vertical drive shaft which ran the jet up to the critical speed at which ignition became self-sustaining.  The engineers chose the Nailheads after comparing them to other large displacement V8s, the aspect of the Buicks which most appealed being the torque generated at relatively low engine speeds, a characteristic ideal for driving an output shaft.  After the Nailhead was retired in 1966, later carts used Chevrolet big-block V8s but in 1969 a pneumatic start system was added to the infrastructure of the USAF bases from which the SR-71s most frequently operated, the sixteen-cylinder carts relegated to secondary bases the planes rarely used.

Saturday, July 22, 2023

Fastback

Fastback (pronounced fast-bak or fahst-bak)

(1) A form of rearward coachwork for an automobile body consisting classically of a single, unbroken convex curve from the top to the rear bumper line (there are variations of this also called fastbacks).

(2) A car using such styling (also used as a model name by both car and motorcycle manufacturers).

(3) A type of pig developed from the landrace or large white and bred for lean meat.

(4) In computing, a product-name sometimes used for backup software.

1960–1965: The construct was fast + back.  Fast was from the Middle English fast & fest, from the Old English fæst (firmly fixed, steadfast, constant; secure; enclosed, watertight; strong, fortified), from the Proto-West Germanic fast, from the Proto-Germanic fastu & fastuz (firm) (which was the source also of the Old Frisian fest, the Old Norse fastr, the Dutch vast and the German fest), from the primitive Indo-European root past- (firm, solid), the source for the Sanskrit pastyam (dwelling place).  The original meaning of course persists but the sense development to “rapid, speedy” dates from the 1550s and appears to have happened first in the adverb and then transferred to the adjective.  The original sense of “secure; firm” is now restricted to uses such as “hard & fast” description of track conditions in horse racing but the derived form “fasten” (attach to; make secure) remains common.  Back was from the Middle English bak, from the Old English bæc, from the Proto-West Germanic bak, from the Proto-Germanic bakam & baką which may be related to the primitive Indo-European beg- (to bend).  In other European languages there was also the Middle Low German bak (back), from the Old Saxon bak, the West Frisian bekling (chair back), the Old High German bah and the Swedish and Norwegian bak; there are no documented connections outside the Germanic and in other modern Germanic languages the cognates mostly have been ousted in this sense by words akin to Modern English ridge such as Danish ryg and the German Rücken.  At one time, many Indo-European languages may have distinguished the horizontal back of an animal or geographic formation such as a mountain range from the upright back of a human while in some cases a modern word for "back" may come from a word related to “spine” such as the Italian schiena or Russian spina or “shoulder”, the examples including the Spanish espalda & Polish plecy.  Fastback is a noun; the noun plural is fastbacks.

1935 Chrysler Imperial C2 Airflow (top left), 1936 Cadillac V16 streamliner (top centre), 1936 Mercedes Benz 540K Autobahnkurier (Motorway Cruiser) (top right), 1948 Pontiac Streamliner (bottom left), 1948 Cadillac Series 62 (bottom centre) and 1952 Bentley Continental R (bottom right).

Although it was in the 1960s the fastback became a marketing term as the range of models proliferated, it was then nothing new, the lines appearing on vehicles even before 1920, some of which even used the teardrop shape which wind tunnels would confirm was close to optimal, at least in terms of reducing drag although it would be decades before the science evolved to the point where the importance of the trade-off between drag and down-force was completely understood.  To some extent this was explained by (1) so many of the early examples being drawn from aviation where shapes were rendered to optimize the twin goals of reducing drag & increasing lift and (2) road vehicles generally not being capable of achieving the velocities at which the lack of down-force induced instability to a dangerous extent.  Rapidly that would change but there was quite a death toll as the lessons were learned.  By the 1930s, streamlining had become one of the motifs of the high-performance machinery of the era, something coincidently suited to the art deco moment through which the world was passing and in both Europe and the US there were some remarkable, sleek creations.  There was also market resistance.  Chrysler’s engineers actually built one of their sedans to operate backwards and ran tests which confirmed that in real-world conditions the results reflected exactly what the wind-tunnel had suggested: it was quicker, faster and more economical if driven with the rear bodywork facing the front.  Those findings resulted in the release of the Airflow range (1934-1937) and while the benefits promised were realized, the frontal styling proved to be too radical for the time and commercial failure ensued.  People however seemed to like the fastback approach (then often called “torpedo style”) and manufacturers added many to their ranges during the 1940s and 1950s.
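The drag those streamliners were chasing grows with the square of speed (F = ½ρv²CdA), which is why the gains mattered more as cars became faster; a minimal sketch, with assumed and purely illustrative figures (the drag coefficients and frontal area below are not from the text):

```python
# Aerodynamic drag: F = 0.5 * rho * v^2 * Cd * A
# All figures below are assumed for illustration only.
RHO = 1.225   # sea-level air density, kg/m^3
AREA = 2.2    # assumed frontal area, m^2

def drag_newtons(cd: float, kmh: float, area: float = AREA, rho: float = RHO) -> float:
    """Aerodynamic drag force in newtons at a given road speed."""
    v = kmh / 3.6  # km/h -> m/s
    return 0.5 * rho * v ** 2 * cd * area

# A boxy 1930s sedan (Cd ~ 0.6) versus a streamliner (Cd ~ 0.35) at 160 km/h:
print(round(drag_newtons(0.6, 160)))   # ~1597 N
print(round(drag_newtons(0.35, 160)))  # ~932 N
```

Doubling the speed quadruples the drag force (and multiplies the power absorbed by eight), which is why streamlining paid off first on racing machinery and the autobahn cruisers.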

Ford Galaxies, Daytona, 1963 (top), 1966 Dodge Charger (bottom left), 1968 Plymouth Barracuda (bottom centre) and 1971 Ford Torino (bottom right).

Ford in 1962 inadvertently provided a case study in the relative efficiencies of rooflines.  The sleek Starliner roof on the 1961 Galaxies used in NASCAR racing sliced gracefully through the air and while sales were initially strong, demand soon slowed and the marketing department compelled a switch to the “formal roofline” introduced on the Thunderbird; it was a success in the showroom but less than stellar on the circuits, the buffeting induced by the steep rear windows reducing both stability and speed.  Not deterred, Ford resorted to the long NASCAR tradition of cheating, fabricating a handful of fibreglass hard-tops which would (for racing purposes) turn a convertible Galaxie into a Starliner.  Unfortunately, to be homologated for competition, such parts had to be produced in at least the hundreds and be available for general sale.  Not fooled by Ford’s mock-up brochure, NASCAR banned the plastic roof and not until 1963 when a “fastback” roofline was added was the car’s competitiveness restored.  Actually, it wasn’t really a fastback at all because full-sized cars like the Galaxie had become so long that even a partial sweep from the windscreen to the rear bumper would create absurd proportions but the simple expedient of a sharply raked rear window turned out to work about as well.  Even on intermediates like the Dodge Charger and Ford Torino the pure fastback didn’t really work, the result just too slab-sided.  The classic implementation was when it was used for the shorter pony cars such as the Plymouth Barracuda and Ford Mustang.

1968 Ford Mustang GT 390 Coupé (top left) & 1967 Shelby Mustang GT500 (top right); 1971 Ford Mustang 351 Coupé (bottom left) & 1971 Ford Mustang Mach 1 429 Super CobraJet SportsRoof (bottom right).

The fastback for a while even influenced roofs not fast.  The original Mustang coupé (1964) was a classic “notchback” but such was the impact in the market that later in the year a fastback was added, joining the convertible to make a three body-style range.  The fastback’s popularity was bolstered by Carroll Shelby (1923–2012) choosing that style for his Shelby Mustangs which over the course of half a decade would evolve (or devolve depending on one’s view) from racing cars with number plates to Mustangs with bling but it would also influence the shape of the coupé.  By 1971 the fastback Mustangs (by then called “SportsRoofs”) had adopted an even more severe angle at the rear which was dramatic to look at but hard to look through from inside, the almost horizontal rear window restricting visibility, which made the more upright coupé (marketed as “Hardtop”) a more practical (and safer) choice.  However, such was the appeal of the fastback look that even the coupé's profile was rendered fastbackesque, achieved by the use of small trailing buttresses which made their own contribution to restricting rearward visibility although not to the extent of some, like Ferrari’s Dino 246 which in some jurisdictions was banned from sale for just that reason.

1965 Rambler Marlin (top left), 1967 AMC Marlin (top centre), 1968 AMC Javelin (top right), 1969 AMC AMX (bottom left), 1974 AMC Javelin (bottom centre) and Lindsay Lohan in 1974 AMC Javelin (bottom right).

American Motors Corporation was, until the arrival of Tesla, the “last of the independents” (ie not part of General Motors (GM), Ford or Chrysler) and at its most successful when filling utilitarian niches the majors neglected, their problem being their successes were noticed and competition soon flooded the segments they’d profitably created.  As a result, they were compelled to compete across a wider range and while always a struggle, they did for decades survive by being imaginative and offering packages which, on a cost breakdown, could be compelling (at one point they joined Rolls-Royce as the only company to offer sedans with air-conditioning fitted as standard equipment).  Sometimes though they got it wrong, and that they did with the Marlin, introduced in 1965 as a fastback based on their intermediate Rambler Classic.  Although the fastback was all about style, AMC couldn’t forget their history of putting a premium on practicality and accordingly, the roof-line grafted onto the Classic also ensured comfortable headroom for the rear-seat passengers, resulting in a most ungainly shape.  Sales were dismal for two seasons but AMC persisted, in 1967 switching the fastback to the full-sized Ambassador line which all conceded was better though that was damning with faint praise.  More successful was the Javelin (1968), AMC’s venture into the then lucrative pony-car business which the Mustang had first defined and then dominated.  The early Javelins were an accomplished design, almost Italianate in the delicacy of their lines and the fastback was nicely balanced.  Less balanced but more intriguing was the AMX, a two seat “sports car” created in the cheapest way possible: shorten the Javelin’s wheelbase by 12 inches (300 mm) and remove the rear seat.  
That certainly solved the problem of rear seat headroom and over three seasons the AMX received a generally positive response from the press but sales never reached expectations, even a pink one being chosen as the car presented to Playboy magazine's 1968 Playmate of the Year not enough to ensure survival and when the Javelin was restyled for 1971, the two seat variant wasn’t continued although AMX was retained as a name for certain models.  The new Javelins lacked the subtlety of line of the original and the fastback part was probably the best part of the package, much of the rest rather overwrought.  The pony car ecosystem declined in the early 1970s and Javelin production ceased in 1974 although it did by a few months outlive what was technically the first pony-car of them all, the Plymouth Barracuda.

1969 Norton Commando Fastback.

The Norton Commando was produced between 1968-1977.  All Commandos initially used the distinctive tail section which, like the fuel tank, was made of fibreglass and the slope of the molding instantly attracted the nickname “fastback”, an allusion to the body-style then becoming popular for sports cars.  It was the first British motorcycle of “modern” appearance built in volume but, apart from the odd clever improvisation, much of the engineering was antiquated and a generation or more behind the coming Japanese onslaught which would doom the local industry.  In 1969, as other models were added to the Commando range, all of which used more conventional rear styling, the factory formally adopted Fastback as a model name for the originals, which remained in production and were upgraded in 1970 (as the Fastback Mark II), fitted with much admired upswept exhausts.  After only four months it was replaced by the Mark III which, with minor changes, served until 1972 when the Mark IV was released, the most notable change being the fitting of a front disc brake.

1970 Norton Commando Fastback (with retro-fitted disc brake).

One interesting variant was the Fastback Long Range (LR); although in production for almost two years during 1971-1972, only around 400 were built, most apparently exported to Australia where the distance between gas (petrol) stations was often greater than in Europe or the US.  Although there were other detail differences, the main distinguishing feature of the LR was the larger capacity petrol tank (in the style of the earlier Norton Atlas), a harbinger of the “Commando Interstate” which became a regular production model in 1972 and lasted until Commando production ceased in 1977, by which time it constituted the bulk of sales.  Fastback production ended in 1973 and although some were fitted with the doomed 750 “Combat” engine, none ever received the enlarged unit introduced that year in the Commando 850.

1965 Ford GT40 Mark 1 (road specification) (left), 1967 Ford GT40 Mark IV (J-Car prototype) (centre) and 1967 Ford GT40 Mark IV, Sebring, 1967 (right).

Impressed by Ferrari’s “breadvan”, Ford, this time with the help of a wind-tunnel, adopted the concept when seeking to improve the aerodynamics of the GT40.  Testing the J-Car proved the design delivered increased speed but the resultant lack of down-force proved lethal so the by then conventional fastback body was used instead and it proved successful in the single season it was allowed to run before rule changes outlawed the big engines.

1966 Fiat 850 Coupé (top left), 1970 Daf 55 Coupé (top centre), 1974 Skoda 110 R (top right), 1972 Morris Marina Coupé (bottom left), 1972 Ford Granada Fastback (later re-named Coupé) (bottom centre) and 1973 Coleman-Milne Granada Limousine (bottom right).

The Europeans took to the fastback style, not only for Ferraris & Maseratis but also to add some flair (and profit margin) to low-cost economy vehicles.  It produced some rather stubby cars but generally they were aesthetically successful and the Skoda 110 R (from Czechoslovakia and thus the Warsaw Pact’s contribution to the fastback school of thought) lasted from 1973-1980 and, as the highly modified 130 RS, gained an improbable victory in the 1981 European Touring Car Championship against a star-studded field which included BMW 635s, Ford’s RS Capris & Escorts, Audi GTEs, Chevrolet Camaros and Alfa Romeo GTVs.  It was a shame comrade Stalin didn’t live to see it.  Generally, the Europeans were good at fastbacks but the British had some unfortunate moments.  In fastback form, the appearance of the Morris Marina was from the start compromised by the use of the sedan’s front doors which meant the thing was fundamentally ill-proportioned, something which might have been forgiven if it had offered the practicality of a hatchback instead of a conventional trunk (boot).  A dull and uninspiring machine (albeit one which sold well), the Marina actually looked best as a station wagon, an opinion many hold also of its corporate companion the Austin Allegro although the two frequently contest the title of Britain’s worst car of the 1970s (and it's a crowded field).  Even Ford of England which at the time was selling the well-styled fastback Capri had a misstep when it offered the ungainly fastback Granada, many made to look worse still by the addition of the then fashionable vinyl roof, the mistake not repeated when the range was revised without a fastback model.  Compounding the error on an even grander scale however was coach-builder Coleman-Milne which, bizarrely, grafted the fastback’s rear onto a stretched Granada sedan to create what was at the time the world’s only fastback limousine.  
Although not entirely accurate, there are reasons the 1970s came to be called “the decade style forgot”.