Vulpine (pronounced vuhl-pahyn or vuhl-pin)
Etymology of words with examples of use illustrated by Lindsay Lohan, cars of the Cold War era, comrade Stalin, crooked Hillary Clinton et al.
Monday, June 30, 2025
Bunker
Bunker (pronounced buhng-ker)
(1) A large
bin or receptacle; a fixed chest or box.
(2) In
military use, historically a fortification set mostly below the surface of the
ground with overhead protection provided by logs and earth or by concrete and
fitted with above-ground embrasures through which guns may be fired.
(3) A
fortification set mostly below the surface of the ground and used for a variety
of purposes.
(4) In golf,
an obstacle, classically a sand trap but sometimes a mound of dirt,
constituting a hazard.
(5) In
nautical use, to provide fuel for a vessel.
(6) In
nautical use, to convey bulk cargo (except grain) from a vessel to an adjacent
storehouse.
(7) In
golf, to hit a ball into a bunker.
(8) To
equip with or as if with bunkers.
(9) In
military use, to place personnel or materiel in a bunker or bunkers (sometimes
as “bunker down”).
1755–1760:
From the Scottish bonkar (box, chest,
also “seat” in the sense of “bench”), of obscure origin, but etymologists
conclude the use related to furniture hints at a relationship with banker (bench). Alternatively, it may be from a Scandinavian
source such as the Old Swedish bunke (boards
used to protect the cargo of a ship). The
meaning “receptacle for coal aboard a ship” was in use by at least 1839
(coal-burning steamships coming into general use in the 1820s). The use to describe the obstacles on golf
courses is documented from 1824 (probably from the extended sense “earthen seat”
which dates from 1805) but perhaps surprisingly, the familiar sense from
military use (dug-out fortification) seems not to have appeared before World
War I (1914-1918) although the structures so described had for millennia existed. “Bunkermate” was army slang for the
individual with whom one shares a bunker while the now obsolete “bunkerman”
(plural “bunkermen”) referred to someone (often the man in charge) who
worked at an industrial coal storage bunker.
Bunker & bunkerage are nouns, bunkering is a noun & verb,
bunkered is a verb and bunkerish, bunkeresque, bunkerless & bunkerlike are adjectives;
the noun plural is bunkers.
Just as
ships called “coalers” were used to transport coal to and from shore-based
“coal stations”, it was “oilers” which took oil to storage tanks or out to sea
to refuel ships (a common naval procedure) and these STS (ship-to-ship)
transfers were called “bunkering” as the black stuff was pumped,
bunker-to-bunker. That the coal used by
steamships was stored on-board in compartments called “coal bunkers” led
ultimately to another derived term: “bunker oil”. When in the late nineteenth century ships
began the transition from being fuelled by coal to burning oil, the receptacles
of course became “oil bunkers” (among sailors nearly always clipped to
“bunker”) and as refining processes evolved, the fuel specifically produced for
oceangoing ships came to be called “bunker oil”.
Bunker oil is
“dirty stuff”, a highly viscous, heavy fuel oil which is essentially the
residue of crude oil refining; it’s that which remains after the more
refined and volatile products (gasoline (petrol), kerosene, diesel etc) have
been extracted. Until late in the
twentieth century, the orthodox view of economists was its use in big ships was
a good thing because it was a product for which industry had little other use
and, as essentially a by-product, it was relatively cheap. It came in three flavours: (1) Bunker A: Light
fuel oil (similar to a heavy diesel), (2) Bunker B: An oil of intermediate
viscosity used in engines larger than marine diesels but smaller than those
used in the big ships and (3) Bunker C: Heavy fuel oil used in container
ships and such which use VLD (very large displacement) slow-running engines with a huge reciprocating
mass. Because of its composition, Bunker
C especially produced much pollution and although much of this happened at sea
(unseen by most but with obvious implications), when ships reached harbor to dock,
all the smoke and soot became obvious.
Over the years, the worst of the pollution from the burning of bunker
oil greatly has been reduced (the work underway even before the Greta Thunberg
(b 2003) era), sometimes by the simple expedient of spraying a mist of water
through the smoke.
Floor-plans of the upper (Vorbunker) and lower (Führerbunker) levels of the structure now commonly referred to collectively as the Führerbunker.
History’s most
infamous bunker remains the Berlin Führerbunker
in which Adolf Hitler (1889-1945; Führer
(leader) and German head of government 1933-1945 & head of state 1934-1945)
spent much of the last few months of his life.
In the architectural sense there were a number of Führerbunkers built, one at each of the semi-permanent Führerhauptquartiere (Führer Headquarters) created for the German
military campaigns and several others built where required but it’s the one in Berlin
which is remembered as “the Führerbunker”. Before 1944, when the air raids
by the RAF (Royal Air Force) and USAAF (US Army Air Force) intensified, the term Führerbunker seems rarely to have been
used other than by the architects and others involved in their construction and
it wasn’t a designation like Führerhauptquartiere
which the military and other institutions of state shifted between locations
(rather as “Air Force One” is attached not to a specific airframe but whatever
aircraft in which the US president is travelling). In subsequent historical writing, the term Führerbunker tends often to be applied
to the whole, two-level complex in Berlin and although it was only the lower
layer which officially was designated as that, for most purposes the
distinction is not significant. In military
documents, after January, 1945 the Führerbunker
was referred to as Führerhauptquartiere.
Führerbunker tourist information board, Berlin, Germany.
Only an
information board at the intersection of den
Ministergärten and Gertrud-Kolmar-Straße, erected by the German Government
in 2006 prior to that year's FIFA (Fédération
Internationale de Football Association (International Federation of
Association Football)) World Cup now marks the place on Berlin's Wilhelmstrasse
77 where once the Führerbunker was located.
The Soviet occupation forces razed the new Reich Chancellery and
demolished all the bunker's above-ground structures but the subsequent GDR (Deutsche Demokratische Republik (German
Democratic Republic; the old East Germany) 1949-1990) abandoned attempts
completely to destroy what lay beneath.
Until after the fall of the Berlin Wall (1961-1989) the site remained
unused and neglected, “re-discovered” only during excavations by
property developers, the government insisting on the destruction of whatever
was uncovered and, sensitive still to the spectre of “Neo-Nazi shrines”, for years the bunker’s location was never divulged, even as unremarkable buildings
(an unfortunate aspect of post-unification Berlin) began to appear on the
site. Most of what would have covered
the Führerbunker’s footprint is now a
supermarket car park.
The first
part of the complex to be built was the Vorbunker
(upper bunker or forward bunker), an underground facility of reinforced concrete
intended only as a temporary air-raid shelter for Hitler and his entourage in
the old Reich Chancellery. Substantially
completed during 1936-1937, it was until 1943 listed in documents as the Luftschutzbunker der Reichskanzlei (Reich
Chancellery Air-Raid Shelter), the Vorbunker
label applied only in 1944 when the lower level (the Führerbunker proper) was appended.
In mid January, 1945, Hitler moved into the Führerbunker and, as the military
situation deteriorated, his appearances above ground became less frequent until
by late March he rarely saw the sky.
Finally, on 30 April, he committed suicide.
Bunker
Busters
The use in
June 2025 by the USAF (US Air Force) of fourteen of its Boeing GBU-57 (Guided Bomb
Unit-57) Massive Ordnance Penetrator (MOP) bombs against underground targets in
Iran (twelve on the Fordow Uranium Enrichment Plant and two on the Natanz nuclear
facility) meant “Bunker Buster” hit the headlines. Carried by the Northrop B-2 Spirit heavy
bomber (built between 1989-2000), the GBU-57 is a 14,000 kg (30,000 lb) bomb with
a casing designed to withstand the stress of penetrating through layers of
reinforced concrete or thick rock.
“Bunker buster” bombs have been around for a while, the ancestors of
today’s devices first built for the German military early in World War II (1939-1945)
and the principle remains unchanged to this day: up-scaled armor-piercing
shells. The initial purpose was to
produce a weapon with a casing strong enough to withstand the forces imposed
when impacting reinforced concrete structures, the idea simple in that what was
needed was a delivery system which could “bust through” whatever protective
layers surrounded a target, allowing the explosive charge to do damage where
needed rather than wastefully being expended on an outer skin.
The German weapons proved effective but inevitably triggered an “arms
race” in that as the war progressed, the concrete layers became thicker, walls over
2 metres (6.6 feet) and ceilings of 5 m (16 feet) being constructed by 1943. Technological development continued and the
idea extended to rocket propelled bombs optimized both for armor-piercing and
aerodynamic efficiency, velocity a significant “mass multiplier” which made the
weapons still more effective.
USAF test-flight footage of Northrop B-2 Spirit dropping two GBU-57 "Bunker Buster" bombs.
Concurrent with this, the British developed the first true “bunker busters”, building on the idea of the naval torpedo, one aspect of which was that in exploding a short distance from its target it was highly damaging because it was able to take advantage of one of the properties of water (quite strange stuff according to those who study it), which is that it doesn’t compress. What that meant was it was often the “shock wave” of the water rather than the blast itself which could breach a hull, the same principle used for the famous “bouncing bombs” used for the RAF’s “Dambuster” (Operation Chastise, 17 May 1943) raids on German dams. Because of the way water behaved, it wasn’t necessary to score the “direct hit” which had been the ideal in the early days of aerial warfare.
RAF Bomber Command archive photograph of Avro Lancaster (built between 1941-1946) in flight with Grand Slam mounted (left) and a comparison of the Tallboy & Grand Slam (right), illustrating how the latter was in most respects a scaled-up version of the former. To carry the big Grand Slams, 32 “B1 Special” Lancasters were in 1945 built with up-rated Rolls-Royce Merlin V12 engines, the removal of the bomb doors (the Grand Slam carried externally, its dimensions exceeding internal capacity), deleted front and mid-upper gun turrets, no radar equipment and a strengthened undercarriage. Such was the concern with weight (especially for take-off) that just about anything non-essential was removed from the B1 Specials, even three of the four fire axes and its crew door ladder. In the US, Boeing went through a similar exercise to produce the run of “Silverplate” B-29 Superfortresses able to carry the first A-bombs used in August, 1945.
Best known of the British devices were the so called “earthquake bombs”, the Tallboy (12,000 lb; 5.4 ton) & Grand Slam (22,000 lb, 10 ton) which, despite the impressive bulk, were classified by the War Office as “medium capacity”. The terms “Medium Capacity” (MC) & “High Capacity” referenced not the gross weight or physical dimensions but the ratio of explosive filler to the total weight of the construction (ie how much was explosive compared to the casing and ancillary components). Because both had thick casings to ensure penetration deep into hardened targets (bunkers and other structures encased in rock or reinforced concrete) before exploding, the internal dimensions accordingly were reduced compared with the ratio typical of contemporary ordnance. A High Capacity (HC) bomb (a typical “general-purpose” bomb) had a thinner casing and a much higher proportion of explosive (sometimes over 70% of total weight). These were intended for area bombing (known also as “carpet bombing”) and caused wide blast damage whereas the Tallboy & Grand Slam were penetrative with casings optimized for aerodynamic efficiency, their supersonic travel working as a mass-multiplier. The Tallboy’s 5,200 lb (2.3 ton) explosive load was some 43% of its gross weight while the Grand Slam’s 9,100 lb (4 ton) absorbed 41%; this may be compared with the “big” 4,000 lb (1.8 ton) HC “Blockbuster” which allocated 75% of the gross weight to its 3,000 lb (1.4 ton) charge. Like many things in engineering (not just in military matters) the ratio represented a trade-off, the MC design prioritizing penetrative power and structural destruction over blast radius.
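The MC/HC distinction is just arithmetic (charge weight divided by gross weight) and the percentages quoted above can be checked with a few lines of Python; the weights are those given in the text and the results are rounded to the nearest whole number:

```python
# Charge-to-weight ratios for the bombs discussed above,
# using the weights quoted in the text (all in lb).
bombs = {
    "Tallboy (MC)": (5_200, 12_000),      # (explosive charge, gross weight)
    "Grand Slam (MC)": (9_100, 22_000),
    "Blockbuster (HC)": (3_000, 4_000),
}

for name, (charge, gross) in bombs.items():
    ratio = charge / gross * 100
    print(f"{name}: {ratio:.0f}% of gross weight is explosive")

# Tallboy (MC): 43% of gross weight is explosive
# Grand Slam (MC): 41% of gross weight is explosive
# Blockbuster (HC): 75% of gross weight is explosive
```

The spread (roughly 40% for the penetrators against 75% for the general-purpose bomb) is the trade-off the text describes: casing strength bought at the cost of blast.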
The novelty of the Tallboy & Grand Slam was that as earthquake bombs, their destructive potential was able to be unleashed not necessarily by achieving a direct hit on a target but by entering the ground nearby, the explosion (1) creating an underground cavity (a camouflet) and (2) transmitting a shock-wave through the target’s foundations, leading to the structure collapsing into the newly created lacuna.
The
etymology of camouflet has an interesting history in both French and military
mining. Originally it meant “a whiff of
smoke in the face” (from a fire or pipe) and in figurative use it was a
reference to a snub or slight insult (something unpleasant delivered directly
to someone); the origin is murky but it may have been related to
the earlier French verb camoufler (to
disguise; to mask) which evolved also into “camouflage”. In the specialized military jargon of siege
warfare or mining (sapping), over the seventeenth to nineteenth centuries “camouflet”
referred to “an underground explosion that does not break the surface, but
collapses enemy tunnels or fortifications by creating a subterranean void or
shockwave”. The use of this tactic is
best remembered from the Western Front in World War I,
some of the huge craters now tourist attractions.
Since aerial
bombing began to be used as a strategic weapon, of great interest has been the
debate over the BDA (battle damage assessment) and this issue emerged almost as
soon as the bunker buster attack on Iran was announced, focused on the extent
to which the MOPs had damaged the targets, the deepest of which were concealed deep
inside a mountain. BDA is a constantly
evolving science and while satellites have made analysis of surface damage
highly refined, it’s more difficult to understand what has happened deep
underground. Indeed, it wasn’t until the
USSBS (United States Strategic Bombing Survey) teams toured Germany and Japan
in 1945-1946, conducting interviews, economic analysis and site surveys that a
useful (and substantially accurate) understanding emerged of the effectiveness of
bombing although what technological advances have allowed for those with the
resources is that the so-called “panacea targets” (ie critical infrastructure
and such once dismissed by planners because the required precision was for many
reasons rarely attainable) can now accurately be targeted, the USAF able to
drop a bomb within a few feet of the aiming point. As the phrase is used by the military, the Fordow
Uranium Enrichment Plant is a classic “panacea target” but whether even a technically
successful strike will achieve the desired political outcome remains to be
seen.
Donald Trump (b 1946; US president
2017-2021 and since 2025) claimed “total obliteration” of the targets while Grand
Ayatollah Khamenei admitted only that there had been “some damage”; which is closer to the truth
may one day be revealed. Even modelling
of the effects has probably been inconclusive because the deeper one goes
underground, the greater the number of variables in the natural structure and
the nature of the internal built environment will also influence blast
behaviour. All experts seem to agree much
damage will have been done but what can’t yet be determined is what has been
suffered by the facilities which sit as deep as 80 m (260 feet) inside the
mountain although, as the name implies, “bunker busters” are designed for buried
targets and it’s not always required for the blast directly to reach the target. Because the shock-wave can travel through earth
& rock, the effect is something like that of an earthquake and if the structure
sufficiently is affected, the area may be rendered geologically too
unstable to be used again for its original purpose.
Within minutes of the bombing having been announced, legal academics were being interviewed (though not by Fox News) to explain why the attacks were unlawful under international law and in a sign of the times, the White House didn't bother to discuss fine legal points like the distinction between "preventive & pre-emptive strikes", preferring (like Fox News) to focus on the damage done. However, whatever the murkiness surrounding the BDA, many analysts have concluded that even if before the attacks the Iranian authorities had not approved the creation of a nuclear weapon, this attack will have persuaded them one is essential for “regime survival”, thus the interest in both Tel Aviv and (despite denials) Washington DC in “regime change”. The consensus seems to be Grand Ayatollah Khamenei had, prior to the strike, not ordered the creation of a nuclear weapon but that all energies were directed towards completing the preliminary steps, thus the enriching of uranium to ten times the level required for use in power generation; the ayatollah liked to keep his options open. So, the fear of some is the attacks, even if they have (by weeks, months or years) delayed the Islamic Republic’s work on nuclear development, may prove counter-productive in that they convince the ayatollah to concur with the reasoning of every state which since 1945 has adopted an independent nuclear deterrent (IND). That reasoning was not complex and hasn’t changed since first a prehistoric man picked up a stout stick to wave as a pre-lingual message to potential adversaries, warning them there would be consequences for aggression. Although a theocracy, those who command power in the Islamic Republic are part of an opaque political institution and in the struggle which has for some time been conducted in anticipation of the death of the aged (and reportedly ailing) Supreme Leader, the matter of “an Iranian IND” is one of the central dynamics.
Many will be following what unfolds in Tehran and the observers will not be only in Tel Aviv and Washington DC because in the region and beyond, few things focus the mind like the thought of ayatollahs with A-Bombs.
Of the word "bust"
The Great Bust: The Depression of the Thirties (1962) by Jack Lang (left), highly qualified content provider Busty Buffy (b 1996, who has never been accused of misleading advertising, centre) and The people's champion, Mr Lang, bust of Jack Lang, painted cast plaster by an unknown artist, circa 1927, National Portrait Gallery, Canberra, Australia (right). Remembered for a few things, Jack Lang (1876–1975; premier of the Australian state of New South Wales (NSW) 1925-1927 & 1930-1932) remains best known for having in 1932 been the first head of government in the British Empire to have been sacked by the Crown since William IV (1765–1837; King of the UK 1830-1837) in 1834 dismissed Lord Melbourne (1779–1848; prime minister of the UK 1834 & 1835-1841).
Those
learning English must think it at least careless that things can both be (1) “razed
to the ground” (totally to destroy something (typically a structure), usually
by demolition or incineration) and (2) “raised to the sky” (physically lifted upwards). The etymologies of “raze” and “raise” differ
but they’re pronounced the same so it’s fortunate the spellings vary but in
other troublesome examples of unrelated meanings, spelling and pronunciation
can align, as in “bust”. When used in
ways most directly related to human anatomy: (1) “a sculptural portrayal of a
person's head and shoulders” & (2) “the circumference of a woman's chest
around her breasts” there is an etymological link but these uses wholly are unconnected
with bust’s other senses.
Bust in the sense of “a sculpture of upper torso and head” dates from the 1690s and was from the sixteenth century French buste, from the Italian busto (upper body; torso), from the Latin bustum (funeral monument, tomb (although the original sense was “funeral pyre, place where corpses are burned”)) and it may have emerged (as a shortened form) from ambustum, neuter of ambustus (burned around), past participle of amburere (burn around, scorch), the construct being ambi- (around) + urere (to burn). The alternative etymology traces a link to the Old Latin boro, the early form of the Classical Latin uro (to burn) and it’s thought the development in Italian was influenced by the Etruscan custom of keeping the ashes of the dead in an urn shaped like the person when alive. Thus the use, common by the 1720s, of bust (a clipping from the French buste) being “a carving of the trunk of the human body from the chest up”. From this came the meaning “dimension of the bosom; the measurement around a woman's body at the level of her breasts” and that evolved on the basis of a comparison with the sculptures, the base of which was described as the “bust-line”, the term still used in dress-making (and for other comparative purposes as one of the three “vital statistics” by which women are judged (bust, waist, hips), each circumference having an “ideal range”). It’s not known when “bust” and “bust-line” came into oral use among dress-makers and related professions but it’s documented since the 1880s. Derived forms (sometimes hyphenated) include busty (tending to bustiness, thus Busty Buffy's choice of stage-name), overbust & underbust (technical terms in women's fashion referencing specific measurements) and bustier (a tight-fitting women's top which covers (most or all of) the bust).
The bust was carved by Swiss sculptor Ernest Durig (1894–1962) who gained posthumous notoriety when his career as a forger was revealed with the publication of his drawings which he’d represented as being from the hand of the French sculptor Auguste Rodin (1840-1917) under whom he claimed to have studied. Mussolini appears here in one of the subsequently much caricatured poses which were a part of his personality cult. More than one of the Duce's counterparts in other nations was known to have made fun of some of the more outré poses and affectations, the outstretched chin, right hand braced against the hip and straddle-legged stance among the popular motifs.
In sculpture, what had been known as the “portrait statue” came after the 1690s to be known as the “portrait bust” although both terms meant “sculpture of upper torso and head” and these proved a popular choice for military figures because the aspect enabled the inclusion of bling such as epaulettes, medals and other decorations and being depictions of the human figure, busts came to be vested with special significance by the superstitious. In early 1939, during construction of the new Reich Chancellery in Berlin, workmen dropped one of the busts of Otto von Bismarck by Reinhold Begas, breaking it at the neck. For decades, the bust had sat in the old Chancellery and the building’s project manager, Albert Speer (1905–1981; Nazi court architect 1934-1942; Nazi minister of armaments and war production 1942-1945), knowing Adolf Hitler (1889-1945; Führer (leader) and German head of government 1933-1945 & head of state 1934-1945) believed the Reich Eagle toppling from the post-office building right at the beginning of World War I had been a harbinger of doom for the nation, kept the accident secret, hurriedly issuing a commission to the German sculptor Arno Breker (1900–1991) who carved an exact copy. To give the fake the necessary patina, it was soaked for a time in strong, black tea, the porous quality of marble enabling the fluid to induce some accelerated aging. Interestingly, in his (sometimes reliable) memoir (Erinnerungen (Memories or Reminiscences) and published in English as Inside the Third Reich (1969)), even the technocratic Speer admitted of the accident: “I felt this as an evil omen”.
The other senses of bust (as a noun, verb & adjective) are diverse (and sometimes diametric opposites) and include: “to break or fail”; “to be caught doing something unlawful / illicit / disgusting etc”; “to debunk”; “dramatically or unexpectedly to succeed”; “to go broke”; “to break in” (horses, girlfriends etc); “to assault”; the downward portion of an economic cycle (ie “boom & bust”); “the act of effecting an arrest” and “someone (especially in professional sport) who failed to perform to expectation”. That’s quite a range and that has meant the creation of dozens of idiomatic forms, the best known of which include: “boom & bust”, “busted flush”, “dambuster”, “bunker buster”, “busted arse country”, “drug bust”, “cloud bust”, belly-busting, bust one's ass (or butt), bust a gut, bust a move, bust a nut, bust-down, bust loose, bust off, bust one's balls, bust-out, sod buster, bust the dust, myth-busting and trend-busting. In the sense of “breaking through”, bust was from the Middle English busten, a variant of bursten & bresten (to burst) and may be compared with the Low German basten & barsten (to burst). Bust in the sense of “break”, “smash”, “fail”, “arrest” etc was a creation of mid-nineteenth century US English and is of uncertain inspiration but most etymologists seem to concur it was likely a modification of “burst” effected with a phonetic alteration but it’s not impossible it came directly as an imperfect echoic of Germanic speech. The apparent contradiction of bust meaning both “fail” and “dramatically succeed” happened because the former was an allusion to “being busted” (ie broken) while the latter meaning used the notion of “busting through”.
Thursday, March 27, 2025
Dwarf
Dwarf (pronounced dwawrf)
(1) A
person of abnormally small stature owing to a pathological condition,
especially one suffering from cretinism or some other disease that produces
disproportion or deformation of features and limbs. In human pathology, dwarfism is usually
defined, inter-alia, as an adult height less than 1.47 m (4 ft 10 in).
(2) In
zoology & botany, an animal or plant much smaller than the average of its
kind or species.
(3) In European
folklore, a being in the form of a small, often misshapen and ugly, man,
usually having magic powers.
(4) In
Norse mythology, any member of a race of beings from (especially Scandinavian
and other Germanic) folklore, usually depicted as having some sort of
supernatural powers and being skilled in crafting and metalworking, often as
short with long beards, and sometimes as clashing with elves.
(5) In
astronomy, a small version of a celestial body (planet, moon, galaxy, star etc).
(6) Of unusually
small stature or size; diminutive; to become stunted or smaller.
Pre 900:
From the Middle English dwerf, dwergh,
dwerw & dwerȝ, from
the Old English dweorh & dweorg (dwarf),
replacing the Middle English dwerg and
ultimately from the Proto-Germanic dwergaz. It was
cognate with the Scots dwerch,
the Old High German twerg & twerc (German Zwerg), the Old Norse dvergr
(Swedish dvärg), the Old Frisian dwirg (West Frisian dwerch), the Middle
Low German dwerch, dwarch & twerg (German & Low German Dwarg & Dwarch) and the Middle Dutch dwerch
& dworch (Dutch dwerg).
The Modern English noun has undergone complex phonetic changes. The form
dwarf is the regular continuation of Old English dweorg, but the plural dweorgas
gave rise to dwarrows and the oblique
stem dweorge which led to dwery, forms sometimes found as the
nominative singular in Middle English texts and in English dialects. Dwarf is a noun and verb, dwarfness & dwarfishness
are nouns, dwarfish & dwarflike are adjectives and dwarfishly is an adverb. The plural forms are dwarves and dwarfs. Dwarfs was long the common plural in English
but after JRR Tolkien (1892-1973) used dwarves, his influence was enough for it to
become the standard plural form for mythological beings. For purposes non-mythological, dwarfs remains
the preferred form.
The M
Word
Dwarf seems still to be an acceptable term to describe those with dwarfism
and Little People of America (LPA), the world’s oldest and largest dwarfism
support organization (which maintains an international, membership-based
organization for people with dwarfism and their families) has long campaigned
to abolish the use of the word “midget” in the context of short humans. The objection to midget is associative. It was never part of the language of medicine
and it was never adopted as an official term to identify people with dwarfism, but
was used to label those of short stature who were on public display for
curiosity and sport, most notoriously in the so-called “freak shows”. Calling people “midgets” is thus regarded as derogatory. Midget remains an apparently acceptable word to
use in a historic context (midget submarine, MG Midget etc) or to describe machinery
(midget car racing; the Midget Mustang aerobatic sports airplane) but no new adoptions
have been registered in recent years. The
LPA is also reporting some supportive gestures, noting with approval the
decision of the Agricultural Marketing Service (AMS) of the US Department of
Agriculture’s (USDA) to revise the nomenclature used in the US standards for grades
of processed raisins by removing five references to the term “midget”. Although obviously a historically benign use
of the word, its removal was a welcome display of cultural sensitivity.
An interesting outlier however is midget wrestling, a field in which the participants are said enthusiastically to support the label, citing its long traditions and the marketing value of the brand. Although midget wrestling’s popularity diminished in the late twentieth century, in the last decade there’s been a resurgence of interest and the sport is now a noted content provider for the streaming platforms which run live and recorded footage. Since the 1970s, midget wrestling has included styles other than the purely technical form with routines extending from choreographed parody and slapstick performances to simulated sexual assault. These innovations have attracted criticism and the suggestion it’s a return to the freak shows of earlier centuries but audiences in the target demographic seem appreciative and, noting the success of a number of tours and operators, Major League Wrestling in 2022 announced the creation of a midget division.
The MG Midget
Where it began: 1930 MG M-Type Midget Roadster.
The earliest cars to wear the MG badge (the name originally “Morris Garages”, an operation which had the same relationship to Morris as AMG does to Mercedes-Benz (ie high-performance variants)) were tuned (and often re-bodied) editions of existing Morris models but in 1928 the 8/33 M.G. Midget Sports Series M (truncated usually to “M-Type”) was displayed at the 1928 London Olympia Motor Show, series production commencing the next season. The first of a long line of tiny roadsters, 3,232 would be made between 1929-1932 and the one in the photographs above is fitted with coachwork typical of the era: an open two-seater in the fashionable “boat-tail” style, constructed by Carbodies of Coventry using a construction technique which began in aviation, the panels a mix of steel and fabric-covered plywood over an ash frame. The fabric soft-top was stored under the rear deck along with its frame, tools and a spare wheel. In the spirit of the age, a rakish two-piece windshield was fitted and there was no provision for a heater. Despite the minimalist accommodation, the engine was surprisingly advanced, the four-cylinder unit using a bevel-gear-driven single overhead cam turning off the vertically mounted generator, 27 horsepower at a then impressive 5400 rpm generated from a displacement of 847 cm3 (51.68 cubic inches). A footnote in the Midget’s history is that the first exported to the US was in 1930 bought by Edsel Ford (1893–1943; president of the Ford Motor Company (FoMoCo) 1919–1943), then titular head of his father’s (Henry Ford (1863-1947)) eponymous motor company which, by the million, built larger vehicles.
1960 Austin-Healey Sprite Mark I (top left), 1966 MG Midget Mark II (top right), 1973 MG Midget Mark III (RWA, bottom left) and 1979 MG Midget Mark IV (bottom right).
For a new generation (1961-1979) of diminutive roadsters, MG revived the Midget name last used on the M-Type. MG was by the 1950s part of the British Motor Corporation (BMC, 1952-1967, later absorbed by the doomed British Leyland (1968–1990)) and a corporate companion marque was Austin-Healey which between 1958-1961 produced the Sprite (known variously as the “bugeye” or “frogeye”), a small sports car built on the familiar template of economy car underpinnings with a stylish body. After the release of the MGA (1955-1962), MG no longer had a competitor in the low-price segment so BMC took the decision that the two companies would share the model, yet another example of the “badge engineering” which BMC pragmatically (and for a while lucratively) would exploit until the process descended into self-defeating absurdity. When the Mark II Sprite was released in 1961 (without the distinctive headlights which were the source of the nicknames), simultaneously there was the debut of the Midget, the latter slightly more expensive and better equipped, although both remained basic roadsters in the old tradition, lacking fittings such as side windows and external door-handles. The Sprite would continue in three further versions (Mark II; 1961-1964, Mark III; 1964-1966, Mark IV; 1966-1971) before, following the end of BMC’s contractual arrangement with Donald Healey (1898–1988), briefly it was sold as the Austin Sprite (1971-1972); the name was then retired and the segment left to MG. In the decade they’d been companion models, the pair had significantly been improved, gaining power, refinement and creature comforts (the overdue door handles and side windows part of the Mark II upgrades) but what never changed were the dimensions, the things always small, something the balanced styling tended to disguise; the compactness was best appreciated when one was seen parked next to a more typically sized vehicle, the Sprite and Midget being dwarfed.
Almost 130,000 Sprites were built while Midget production (which lasted until 1980) totalled some 225,000, the most numerous being the later models (Mk III; 1966–1974: 100,246 units & 1500; 1974–1979: 81,916 units). A decade before production ended it was already outdated but such was the charm (and lack of competition) that demand remained strong almost to the end. The most fancied Midgets are the so-called RWA (round wheel arch) models produced between 1972-1974; these adopted the design used on the rear of the bugeyes and are considered the best looking (as well as making the use of wider rear tyres easier) but in 1974 MG had to revert to the squared-off look because the strength gained from the additional metal was necessary to support the large “rubber” bumpers added to conform with US regulations; the RWA bodywork was found to be prone to damage when the rear-impact tests were conducted. Even before the huge bumpers unhappily had been grafted on, US market cars had for some months had large rubber “buffers” bolted to the chrome bumpers, known in the US as “Dagmars” and in the UK as “Sabrinas”, both names tributes to the hardly vague anatomical similarity to the two pop culture figures. Along with the big bumpers, to comply with minimum headlight height regulations in the US, the suspension height was raised by about an inch (25 mm), something which raised the centre of gravity and thus affected the handling characteristics, an effect adjustments to the anti-roll bars only partially ameliorated. Visually, the increased height was disguised by lowering the curve of the front wheel arch.
Triumph Spitfire, also a midget-sized roadster
A midget (with a small “m”) dwarfed by two behemoths: a 1977 Triumph Spitfire between two Ford F-450 Super Duty heavy pick-up trucks. At their intended purpose (carrying or towing heavy payloads) Ford’s Super Duty heavy pick-up trucks perform well but such is the consumer appeal they’re a not uncommon sight used as passenger vehicles, even in cities; they can thus be both a personal and a political statement, owners delighted Ford has made pick-ups great again (MPUGA).
Although adopted for the current range in 1999, the “Super Duty” label had previously been used by Ford between 1958-1981 on three large displacement (401, 477 & 534 cubic inch (6.6, 7.8 & 8.8 litre)) gas (petrol) V8s, the family one of a remarkable variety of V8s the corporation produced during the 1950s & 1960s. Big, heavy and low-revving, the Super Duty V8s were legendarily robust and famed for their longevity but were doomed ultimately by their prodigious thirst. They were intended only for heavy-duty, industrial use and in that were very different from the Pontiac Super Duty (SD) V8s which were high-performance units, the early versions in the 1960s optimized for drag racing while the revival the next decade was the final fling of the original muscle car era (1964-1974). The 389 & 421 cubic inch (6.4 & 6.9 litre) versions were offered between 1960-1963 while the 455 (7.5) appeared in 1973-1974 and had it not been for the 455 SD Pontiac Firebirds of those years, the muscle car era would have been regarded as having ended in 1972. The Watergate-era 455 SD is also a footnote in the history of environmental law because Pontiac (in a preview of Volkswagen’s later “Dieselgate”) used a device to “cheat” on the emission testing undertaken as part of the certification process. Caught red-handed, Pontiac, guilty as sin, was compelled to remove the “cheat gear” and re-submit a vehicle for testing; that’s the reason the 1973-1974 455 SD was rated at 290 horsepower (HP) rather than the 310 of the original (and more toxic) engine.
1967 Triumph Spitfire Mark II (left) and 1972 Triumph Spitfire Mark IV (with after-market exhaust tips, right).
The Triumph Spitfire had the same relationship to the larger TR sports cars (1952-1976) as the Midget did to the MGB. Produced in five distinct generations between 1962-1980, like the Sprite & Midget, the Spitfire featured a stylish body atop the platform of a high-volume model and for the coachwork Triumph out-sourced the job to Italy, Giovanni Michelotti (1921–1980) producing a shape which owed nothing to the little Herald (1959-1971) on which it was based. In continuous production in five versions (Mark I; 1962–1964, Mark II; 1965–1967, Mark III; 1967–1970, Mark IV; 1970–1974 & 1500; 1974–1980), almost 315,000 were built with the later models the most popular, some 96,000 of the 1500s being sold. Like the Midget, the Spitfire was over the years improved although progress stagnated with the post-1974 US models which became heavier, slower and uglier, although in the 1970s that was a general industry trend. Although soon under the same corporate umbrella, the Midget & Spitfire were competitors (in the showroom and on the circuits) for almost two decades and when Road & Track magazine in its September 1967 edition published a comparison test, the editors couldn't decide which was best, concluding: "...whichever one the buyer chooses, he is assured of many miles of motoring pleasure in the great sports car tradition. They're good cars, both of them. You can't go wrong." For the readers that may not have been a great deal of help and the phrasing must have been force of habit because the two little roadsters had always enjoyed some popularity among women.
The photograph run in 1959 with the caption “Hark the Herald’s axle’s swing” (left) and a Mark I Spitfire's swing axles displaying the same behavior (right).
The Spitfires of the 1960s were a bit more lively but that description wasn’t always a compliment because, based on the Herald, what was inherited was the swing-axle rear suspension and swing the axles certainly could, leading to a “lively rear”. When the British motoring press first tested the Herald they noted the behaviour of the swing axles under extreme load and had a photographer appropriately positioned: the caption “Hark the Herald’s axle’s swing” became famous. None of that deterred Triumph which in 1962 introduced a more powerful Herald-based version (the Vitesse), powered by a 1.6 litre (97 cubic inch) straight six. That meant a faster car which meant the behaviour of the swing axles could be experienced at a higher speed (with all that implies) but the car sold well, which was encouraging, so Triumph in 1966 fitted a 2.0 litre (122 cubic inch) six. It was not until 1968 that the rear suspension was revised and this cure tamed the errant characteristics to a degree which impressed even the usually sceptical motoring journalists; sales remained strong until production ended in 1971. For the Spitfire, offered only in four-cylinder form, the revisions to the rear suspension were less complex but when tested on the Mark IV in 1970, the improvement was apparent and from this point criticism of the road-holding at the limit ceased.
1967 Triumph GT6 Mark I (also with after-market exhaust tips, left) and 1979 Triumph Spitfire 1500 (right). With production ending in 1973, the GT6 was spared being disfigured by the battering-ram-like bumpers imposed on the Spitfire, those on the last of the line (1979-1980) the biggest.
While the roadster never gained six-cylinder power, Triumph from 1966 offered a coupé version (with a convenient hatchback, à la the Jaguar E-Type (XKE, 1961-1974)) called the GT6. Mechanically it followed the Vitesse except it was only ever fitted with the 2.0 litre engine and didn’t receive the suspension fix until the Mark II in 1969; that transformed things although, being relatively complex, the fix must have been deemed too expensive to justify on what proved a low-volume model and with the release of the Mark III in 1970, a version of that used on the Spitfire was substituted and it proved just as effective. Sales of the GT6 never matched the company’s expectations and the market preferred the MGB GT (1965-1980) which used the same concept for the body. Noting the costs which would have been incurred to make the GT6 compliant with the US regulations to take effect in 1974, production ended in late 1973. Because the considerably more powerful (especially in the fuel-injected versions sold outside the US) 2.5 litre six Triumph used in the TR5 (which, with twin carburettors, was in North America sold as the TR-250), TR6, 2.5 PI & 2500 is a relatively easy swap, quite a few GT6s have been so upgraded although some attention does need to be paid to the chassis to achieve a completely satisfactory road car.
The short stature of Victor Emmanuel III (1869–1947; King of Italy 1900-1946) illustrated (left to right) with Aimone of Savoy, King of Croatia (Rome, 1943), with Albert I, King of the Belgians (France, 1915), with his wife, Princess Elena of Montenegro (Rome, 1937) & with Adolf Hitler (1889-1945; Führer (leader) and German head of government 1933-1945 & head of state 1934-1945), observing the Italian navy conduct manoeuvres, Gulf of Naples, 1938. Note the King of Italy's sometimes DPRKesque hats.
Technically, Victor Emmanuel didn’t fit the definition of dwarfism which sets a threshold of adult height at 4 feet 10 inches (1.47 m), the king about 2 inches (50 mm) taller (or less short), and it’s thought the inbreeding not uncommon among European royalty might have been a factor, both his parents and grandparents being first cousins. However, not technically being a dwarf didn’t stop his detractors in Italy’s fascist government calling him (behind his back) il nano (the dwarf), a habit soon picked up by the Nazis as der Zwerg (the dwarf) (although Hermann Göring (1893–1946; leading Nazi 1922-1945, Hitler's designated successor & Reichsmarschall 1940-1945) was said to have preferred der Pygmäe (the pygmy)). In court circles he was known also (apparently affectionately) as la piccola sciabola (the little sabre), a nickname actually literal in origin because the royal swordsmith had to forge a ceremonial sabre with an unusually short blade for the diminutive sovereign to wear with his many military uniforms. His French-speaking wife (Princess Elena of Montenegro (1873–1952; Queen of Italy 1900-1946)) stood a statuesque six feet (1.8 m) tall and always called him mon petit roi (my little king). It was a long and happy marriage and genetically helpful too, his son and successor (who enjoyed only a brief reign) very much taller although his was to be a tortured existence. Still, in his unhappiness the scion stood tall and that would have been appreciated by the late Prince Philip, Duke of Edinburgh (1921–2021) who initially approved of the marriage of Lady Diana Spencer (1960-1997) to the Prince of Wales (b 1948) on the basis that she “would breed some height into the line”.
In cosmology, the word dwarf is applied to especially small versions of celestial bodies. A dwarf galaxy is a small galaxy of between several hundred and several billion stars (the Milky Way is estimated to contain several hundred billion) and astronomers have identified many sub-types of dwarf galaxies, based on shape and composition. A dwarf planet is a small, planetary-mass object in direct orbit of a star, smaller than any of the eight classical planets but still a world in its own right. The best-known dwarf planet is Pluto which used to be a planet proper but was in 2006 unfortunately down-graded by the humorless types at the International Astronomical Union (IAU) who are in charge of such things. It’s hoped one day this decision will be reversed so Pluto will again be classified a planet. Dwarf planets are of interest to planetary geologists because despite their size, they may be geologically active bodies. The term dwarf star was coined when it was realized the reddest stars could be classified as brighter or dimmer than our sun, creating the categories “giant star” (brighter) and “dwarf star” (dimmer).
As observational astronomy improved and infrared astronomy developed, there were refinements to the model to include (1) the dwarf star (the “generic” main-sequence star), (2) the red dwarf (a low-mass main-sequence star), (3) the yellow dwarf (a main-sequence star with a mass comparable to that of the Sun), (4) the orange dwarf (between the red dwarfs and the yellow/white stars), (5) the controversial blue dwarf, a hypothesized class of very-low-mass stars which increase in temperature as they near the end of their main-sequence lifetime, (6) the white dwarf, the remains of a dead star, composed of electron-degenerate matter and thought to be the final stage in the evolution of stars not massive enough to collapse into a neutron star or black hole, (7) the black dwarf, theorized as a white dwarf that has cooled to the point it no longer emits visible light (it’s thought the universe is not old enough for any white dwarf yet to have cooled to black) & (8) the brown dwarf, a sub-stellar object not massive enough ever to fuse hydrogen into helium, but still massive enough to fuse deuterium.
Coolest dwarf of all is (9) the ultra-cool dwarf (first defined in 1997), somewhat deceptively named for non-cosmologists given the effective temperature can be as high as 2,700 K (2,430°C; 4,400°F); in space, everything is relative. Because of their slow hydrogen fusion compared to other types of low-mass stars, their life spans are estimated at several hundred billion years, with the smallest lasting for about 12 trillion years. As the age of the universe is thought to be only 13.8 billion years, all ultra-cool dwarf stars are relatively young and models predict that at the ends of their lives the smallest of these stars will become blue dwarfs instead of expanding into red giants.
Disney's seven dwarfs; they're now cancelled.
The events towards the conclusion of the nineteenth century German fairy tale Snow White and the Seven Dwarfs make ideal reading for young children. Her evil step-mother has apparently killed poor Snow White so the seven disappointed dwarfs lay her body in a glass coffin. The very next day, a handsome prince happens upon the dwarfs’ house in the forest and is so captivated by her beauty he asks to take her body back to his castle. To this the dwarfs agree but while on the journey, a slight jolt makes Snow White come to life and the prince, hopelessly in love, proposes; Snow White accepts. Back at the palace, the prince invites to the wedding all in the land except Snow White's evil stepmother.
Snow White and the Seven Dwarfs, even Happy looking sad.
The step-mother however crashes the wedding and discovers the beautiful Snow White is the bride. Enraged, she again attempts murder but the prince protects her and, learning the truth from his bride, forces the step-mother to wear a pair of red-hot iron slippers and dance in them until she dies; that doesn’t take long and once she has the decency to drop dead, the nuptials resume. In the way things happen in fairy tales, the prince and Snow White live happily ever after.
DEI (diversity, equity and inclusion)
The condition achondroplasiaphobia describes those with a “fear of little people”. The construct is achondroplasia (the Latin a- (not) + the Ancient Greek chondro- (cartilage) + the New Latin -plasia (growth); the genetic disorder that causes dwarfism) + phobia (from the New Latin, from the Classical Latin, from the Ancient Greek -φοβία (-phobía), from φόβος (phóbos) (fear)). The condition, at least to the extent of being clinically significant, is thought rare and like many of the especially irrational phobias is induced either by (1) a traumatic experience, (2) depictions in popular culture or (3) reasons unknown. Achondroplasiaphobia has never appeared in the American Psychiatric Association’s (APA) Diagnostic and Statistical Manual of Mental Disorders (DSM). In 2006, it was reported that while dining at the Chateau Marmont hotel in Los Angeles, after noticing two people of short stature had entered the restaurant, Lindsay Lohan suffered an "anxiety attack" and hyperventilated to the extent she had to take "an anti-anxiety pill" to calm down. To her companions she repeatedly said "I’m so scared of them!" A spokesperson for the LPA (Little People of America) responded by suggesting Ms Lohan should "...treat her fear the same as she would a fear of any other protected minority population. If that fails, she might find diversity training to be useful." Almost immediately after the story appeared, it was debunked by a representative for Ms Lohan who issued a statement saying she is not achondroplasiaphobic and not in any way scared of little people, adding "Lindsay loves all people."
Among critics and industry analysts, the consensus seems to be that in late 2019 when the project was approved, for Disney to allocate a budget of US$200 million (it ended up being booked at around US$250 million) to a remake of Snow White and the Seven Dwarfs (1937) probably was a good idea. Based on the German fairy tale Sneewittchen which first appeared in print early in the nineteenth century, Disney’s 1937 production was the first animated, full-length feature film made in the US and it was both critically acclaimed and a great commercial success, becoming the highest-grossing film of 1938; adjusted for inflation, its success then and since has made it one of the most profitable films ever made.
The elements in its success were (1) the quality of the studio’s work, (2) advances in the technology delivering sight and sound which made the audience's experience so vivid and (3) the threads of the story which are fairy tale classics: a wicked queen, jealous of her stepdaughter’s beauty, orders her murder, only to discover she’s hiding out in a cottage with seven dwarfs, so she poisons her with an adulterated apple, inducing a deep sleep from which she eventually is awoken by the kiss of a handsome prince. In 1937, had the word “problematic” then been in use, it wouldn't have been applied to anything in the plot but by the early 2020s, things had changed. In the pre-Trump 2.0 era, when DEI (diversity, equity & inclusion) was compulsory, Snow White gaining her name because her skin was as “white as snow” and the very existence of dwarfs were both definitely “problematic” so the challenge was to keep the “Snow White” in the title while changing troublesome content as required. That's been done before and had the 2024 US presidential election elected someone (probably anyone) else, Snow White could have appeared in cinemas to lukewarm reviews but a solid box office based on 7-11 year old girls still impressed at Meghan Markle (Meghan, Duchess of Sussex; b 1981) having proved it's not only in fairy tales that princes rescue middle-class girls from dreary lives. Only Fox News would much have bothered with a condemnation.
So for Disney, the timing of events was unfortunate but the race and cultural controversies which swirled around the earlier remakes of Mulan (2020) and The Little Mermaid (2023) should have been a warning. Most jarring perhaps was the absence of “dwarfs” (in the historic sense of the word). While Snow White is of course the protagonist, in casting terms there was only one of her and seven of them so the substitution of the heptad with “magical creatures” was always going to attract a critique of its own. According to the studio, it consulted members of the dwarfism community (the so-called “little people”) “to avoid reinforcing stereotypes” before the re-casting but, given the production was, according to many, replete with cultural, sexist and chauvinist tropes, the cancelled dwarfs received less attention than might have been expected. With reviewers using phrases like “exhaustingly awful reboot” and “tiresome pseudo-progressive additions”, expectations of success for Snow White have been lowered.