
Thursday, August 14, 2025

Quadraphonic

Quadraphonic (pronounced kwod-ruh-fon-ik)

(1) Of, noting, or pertaining to the recording and reproduction of sound over four separate transmission or direct reproduction channels instead of the customary two of the stereo system.

(2) A quadraphonic recording.

(3) A class of enhanced stereophonic music equipment developed in the 1960s.

1969: An irregular formation of quadra, a variant (like quadru) of the older Latin form quadri- (four) + phonic from the Ancient Greek phonē (sound, voice).  All the Latin forms were related to quattuor (four), from the primitive Indo-European kwetwer (four).  Phonē was from the primitive Indo-European bha (to speak, tell, say) which was the source also of the Latin fari (to speak) and fama (talk, report).  Phonic, as an adjective in the sense of “pertaining to sound; acoustics”, was used in English as early as 1793.  Those for whom linguistic hygiene is a thing approved not at all of quadraphonic because it was a hybrid built from Latin and Greek.  They preferred either the generic surround sound which emerged later or the pure Latin lineage of quadrasonic (sonic from sonō (make a noise, sound)) which appeared as early as 1970, although it seems to have been invented as a marketing term rather than by disgruntled pedants.  Quadraphonic, quadrasonic and surround sound all refer to essentially the same thing: the reproduction of front-to-back sound distribution in addition to side-to-side stereo.  In live performances, this had been done for centuries and four-channel recording, though not mainstream, was by the 1950s not uncommon.  Quadraphonic is an adjective but has been used as a noun; the (equally irregular) noun plural is quadraphonics.

Surround sound

Quadraphonic was an early attempt to mass-market surround sound.  It used four sound channels with four physical speakers intended to be positioned at the four corners of the listening space and each channel could reproduce a signal, in whole or in part, independent of the others.  It was briefly popular with manufacturers during the early 1970s, many of which attempted to position it as the successor to stereo as the default standard, but consumers were never convinced and quadraphonic was a commercial failure, both because of technical issues and because of the multitude of implementations and incompatibilities between systems; many manufacturers built equipment to their own specifications and no standard was defined, a mistake not repeated a generation later with the CD (compact disc).  Nor was quadraphonic a bolt-on to existing equipment; it required new, more expensive hardware.

Quadraphonic audio reproduction from vinyl was patchy and manufacturers used different systems to work around the problems but few were successful and the physical wear of vinyl tended always to diminish the quality.  Tape systems also existed, capable of playing four or eight discrete channels and released in reel-to-reel and 8-track cartridge formats, the former more robust but never suited to the needs of mass-market consumers.  The rise of home theatre products in the late 1990s resurrected interest in multi-channel audio, now called “surround sound” and most often implemented in the six-speaker 5.1 standard.  Modern electronics and the elimination of vinyl and tape as storage media allowed engineers to solve the problems which beset quadraphonic but there remain audiophiles who insist that, under perfect conditions, quadraphonic remains the superior form of audio transmission for the human ear.

Highway Hi-Fi record player in 1956 Dodge.

First commercially available in 1965, the eight-track cartridge format (which would later become the evil henchman of quadraphonic) convinced manufacturers it was the next big thing and they rushed to mass-production; one genuine reason for the appeal was that the 8-track cartridge was the first device which was practical for use as in-car entertainment.  During the 1950s, the US car industry had offered the option of record players, neatly integrated into the dashboard and, in the relatively compact space of a vehicle's interior, the sound quality could be surprisingly high.  Although not obviously designed with acoustic properties optimized for music, the combination of parallel flat surfaces, a low ceiling and much soft, sound-absorbing material did much to compensate for the small size and range offered by the speakers.  However, although they worked well when sitting still in a showroom or in certain vehicles, on the road things could be different.  The records (the same size as the classic 7 inch (180 mm) 45 rpm "singles") played by means of a stylus (usually called "the needle") which physically traced the grooves etched into the plastic disks rotating at 16.66 rpm which, combined with an etching technique called "ultra micro-grooving", meant that some 45 minutes of music were available, a considerable advance on the 4-5 minutes of the standard single.  The pressings were also thicker than other records, the better to resist the high temperatures caused by heat-soak from the engine and the environment although, in places like Arizona, warping was soon reported.  To keep the stylus in the track, the units were fitted with a shock-absorbing spring enclosure and a counterweighted needle arm.  Improbably, in testing, the system performed faultlessly even under the most adverse road conditions so the designers presented the product for corporate approval.  At that point there was a delay because the designers worked for the Columbia Broadcasting System (CBS) which had affiliations with thousands of radio stations all over the country and no wish to cannibalize their own markets; if people could play records in their cars, the huge income stream CBS gained from advertising would be threatened as drivers tuned out.  The proposal was rejected.
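As a rough back-of-the-envelope check (a sketch only, using the figures quoted above and assuming playing time scales simply with rotational speed and groove density), the slower platter accounts for only part of the gain; the finer “ultra micro-grooving” must supply the rest:

\[
\frac{45\ \text{rpm}}{16.66\ \text{rpm}} \approx 2.7,\qquad
\frac{45\ \text{min}}{4.5\ \text{min}} \approx 10,\qquad
\frac{10}{2.7} \approx 3.7
\]

That is, of the roughly ten-fold gain in playing time over a standard single, the reduced speed contributes a factor of about 2.7 and the denser groove spacing the remaining factor of roughly 3.7.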

Highway Hi-Fi record player in 1956 Plymouth.

Discouraged but not deterred, the engineers went to Detroit and demonstrated the players to Chrysler, which had their test-drivers subject the test vehicles to pot-holes, railway tracks and rolling undulations.  The players again performed faultlessly and Chrysler, always looking for some novelty, placed an order for 18,000, a lucrative lure which convinced even CBS to authorize production, their enthusiasm made all the greater by the proprietary format of the disks which meant CBS would be the exclusive source.  So, late in 1956, Chrysler announced the option of "Highway Hi-Fi", a factory-installed record player mounted under the car's dashboard at a cost of US$200 (some US$1750 in 2023 terms).  Highway Hi-Fi came with six disks, the content of which reflected the reactionary tastes of CBS executives and their desire to ensure people still got their popular music from radio stations, but the market response was positive, Chrysler selling almost 4000 of the things in their first year, the early adopters adopting with their usual alacrity.

The second generation of players used standard 45 rpm singles: Austin A55 Farina (left) and George Harrison's (1943–2001) Jaguar E-Type S1 (right); all four Beatles had the players fitted in their cars and lead guitarist Harrison is pictured here stocking his 14-stack array.  The lady on the left presumably listened to different music than the Beatle on the right (although their in-car hardware was identical) but tastes can't always be predicted according to stereotype; although he disapproved of most modern music, Rudolf Hess (1894–1987; Nazi deputy Führer 1933-1941) told the governor of Spandau prison (where he spent 40 of his 46 years in captivity) he enjoyed The Beatles because their tunes "were melodic".  

At that point, problems surfaced.  Tested exclusively in softly-sprung, luxury cars on CBS's and Chrysler's executive fleets, the Highway Hi-Fi had to some extent been isolated from the vicissitudes of the road but when fitted to cheaper models with nothing like the same degree of isolation, the styluses indeed jumped around and complaints flowed, something not helped by dealers and mechanics not being trained in their maintenance; even to audio shops the unique mechanism was a mystery.  Word spread, sales collapsed and the option was quietly withdrawn in 1957.  The idea however didn't die and by the early 1960s, others had entered the field and solved most of the problems: the disks were now played upside-down, which made maintaining contact simpler, and standard 45 rpm records could now be used, meaning unlimited content, while the inherent limitation of the 4 minute playing time was overcome with the use of a 14-disk stacker, anticipating the approach taken with CDs three decades later.  Chrysler tried again but the market was now wary and the option was again soon dropped.

1966 Ford Mustang with factory-fitted 8-track player.

Clearly though, there was demand for in-car entertainment, the content of which was not dictated by radio station programme directors and for many there were the additional attractions of not having to endure listening either to advertising or DJs, as inane then as now.  It was obvious to all that tape offered possibilities but although magnetic tape recorders had appeared as early as the 1930s, they were bulky, fragile, complicated and expensive, all factors which militated against their use as a consumer product fitted to a car.  Attention was thus devoted to reducing size and complexity so the tape could be installed in a removable cartridge and by 1963, a consortium including, inter alia, Lear, RCA, Ford & Ampex had perfected 8-track tape which was small, simple, durable and able to store over an hour of music.  Indeed, so good was the standard of reproduction that to take advantage of it, it had to be connected to high quality speakers with wiring just as good, something which limited the initial adoption to manufacturers such as Rolls-Royce and Cadillac or the more expensive ranges of others although Ford's supporting gesture late in 1965 of offering the option on all models was soon emulated.  Economies of scale soon worked their usual wonders and the 8-track player became an industry standard, available even in cheaper models and as an after-market accessory, some speculating the format might replace LP records in the home.

Lindsay Lohan's A Little More Personal (Raw) as it would have appeared if released in the 8-Track format.

That never happened although the home units were widely available and by the late 1960s, the 8-track was a big seller for all purposes where portability was needed.  It maintained this position until the early 1970s when, with remarkable suddenness, it was supplanted by the cassette, a design dating from 1962 which had been smaller and cheaper but also inferior in sound delivery and without the broad content offered by the 8-track supply system.  That all changed by 1970 and from that point the 8-track was in decline, reduced to a niche by late in the decade, the CD in the 1980s the final nail in the coffin although it did for a while retain an allure, Jensen specifying an expensive Lear 8-track for the Interceptor SP in 1971, despite consumer reports at the time confirming cassettes were now a better choice.  The market preferring the cheaper and conveniently smaller cassette tapes meant warehouses were soon full of 8-track players and buyers were scarce.  In Australia, GMH (General Motors-Holden) by 1975 had nearly a thousand in the inventory which also bulged with 600-odd Monaro body-shells, neither of which were attracting customers.  Fortunately, GMH was well-acquainted with the concept of the "parts-bin special" whereby old, unsaleable items are bundled together and sold at what appears a discount, based for advertising purposes on a book-value retail price there’s no longer any chance of realizing.

1976 Holden HX LE

Thus created was the high-priced, limited edition Holden LE (not badged as a Monaro although it so obviously looked like one that they've never been known as anything else), in "LE Red" (metallic crimson) with gold pin-striping, Polycast "Honeycomb" wheels, fake (plastic) burl walnut trim, deep cut-pile (polyester) carpet and crushed velour (polyester) upholstery with plaid inserts over vinyl surrounds in matching shades; in the 1970s, this was tasteful.  Not exactly suited to the image of luxury were the front and rear spoilers but they too were sitting unloved in the warehouse so they became part of the package and, this being the 1970s, rear-seat occupants got their own cigar lighter, conveniently located above the central ashtray.  Not designed for the purpose, the eight-track cartridge player was crudely bolted to the console but the audio quality was good and five-hundred and eighty LEs were made, GMH pleasantly surprised at how quickly they sold.  When new, they listed at Aus$11,500, a pleasingly profitable premium of some 35% above the unwanted vehicle on which it was based.  These days, examples are advertised for sale for (Aus$) six-figure sums but those who now buy an LE do so for reasons other than specific performance.  Although of compact size (in US terms) and fitted with a 308 cubic inch (5.0 litre) V8, it could achieve barely 110 mph (175 km/h), acceleration was lethargic by earlier and (much) later standards yet fuel consumption was very high; slow and thirsty the price to be paid for the early implementations of the emission control devices bolted to engines designed during more toxic times.

1976 Holden HX LE Polycast "Honeycomb" wheel (14 x 7").

The Polycast process used a conventional steel wheel with a decorative face of molded polyurethane, attached with mechanical fasteners or bonded using adhesives (in some, both methods were applied) and although some snobs still call them "fake alloy" wheels, legitimately, they're a category of their own.  Because the rubbery, molded plastic fulfilled no structural purpose, designers were able to create intricate shapes which would then have been too delicate or complex to render (at an acceptable cost) in any sort of metal.  By consensus, some of the Polycast wheels were among the best looking of the decade and, unstressed, they were strong, durable and long-lasting while the manufacturers liked them because the tooling and production costs were much lower than for aluminium or magnesium-alloy.  Another benefit was, being purely decorative (essentially a permanently attached wheelcover), their use faced no regulatory barriers; US safety rules were even then strict and Citroën at the time didn't bother seeking approval for the more exotic "resin" wheels offered in Europe on the SM.

Aftermath of the pace car crash, Indianapolis 500, 29 May 1971; dozens were injured but there were no fatalities, despite impact with the well-populated camera stand being estimated at 60 mph (100 km/h).

The Holden LE's wheels came straight from the Pontiac parts bin in the US where they'd first appeared on the 1971 Firebird Trans-Am.  The concept proved popular with manufacturers and a set of Motor Wheel's "Exiter" (14" x 7", part number 36830 and advertised also as "Exciter") was fitted to the Dodge Challenger pace car which crashed during the 1971 Indianapolis 500.  The crash was unrelated to the wheels, the driver (one of the Dodge dealers providing the pace car fleet) blaming the incident on somebody moving the traffic cone he'd used in practice as his pit-lane braking marker.  Motor Wheel's advertising copy: “What wheel can survive this beating?” and “...the new wheel too tough for the 'mean machine'” predated the crash at Indianapolis and was intended to emphasise the strength of the method of construction.

Twenty years on, the “parts bin special” idea was a part of local story-telling.  Although most doubt the tale, it's commonly recounted the 85 HSV VS GTS-R Commodores Holden built in 1996 were all finished in the same shade of yellow because of a cancelled order for that number of cars in "taxi spec", the Victorian government having mandated that color for the state's cabs.  While a pleasing industry myth, most suspect it's one of those coincidences and the government's announcement came after the bodies for the GTS-R had already been painted.  Being "taxi yellow" doesn't appear to have deterred demand and examples now sell for well into six figures (in Aus$).      

1971 Holden HQ Monaro LS 350

The overwrought and bling-laden Holden LE typified the tendency during the 1970s of US manufacturers and their colonial off-shoots to take an elegant design and, with a heavy-handed re-style, distort it into something ugly.  A preview of the later “malaise era” (so named in the US for many reasons), it was rare for a facelift to improve the original.  The HQ Holden (1971-1974) was admired for a delicacy of line and fine detailing; what followed over three subsequent generations lacked that restraint although to be fair, while the last of the series (HZ, 1977-1980) aesthetically wasn't as pleasing as the first, dynamically, it was much-improved.

1973 Ford Falcon XA GT Hardtop (RPO83).

In the era of the LE, Ford Australia had its own problem with unwanted two-door bodyshells.  Released too late to take advantage of what proved a market fad, Ford’s Falcon Hardtops (XA; 1972-1973, XB; 1973-1976 & XC; 1976-1979) never enjoyed the success of Holden’s Monaro (1968-1976), Chrysler’s Valiant Charger (1971-1978) or even that of Ford’s own, earlier Falcon Hardtop (XM; 1964-1965 & XP; 1965-1966).  The public’s increasing and unpredicted lack of interest in the style meant that by 1976, like Holden, Ford had hundreds of body-shells for the big (in Australian terms although in the US they would have been classed “compacts”) coupés languishing unwanted on their hands.  When released in 1972, Ford’s expectation was it would sell more than 10,000 Hardtops every year but that proved wildly optimistic and not even discounting and some “special editions” did much to stimulate demand.  By 1977 sales had dropped to a depressing 913 and with over 500 bodies in stock, the projection that no more than 100 would attract buyers meant a surplus of 400; an embarrassing mistake.

Edsel Ford II with Falcon Cobra #001, publicity shot, Ford Australia's Head Office, Campbellfield, Victoria.  The badge below the Cobra decal reads 5.8; Australia switched to the metric system in 1973 but because of the nature of the machines, almost always the V8s are described either as 302 (4.9) or 351 (5.8), cubic inches being a muscle car motif. 

Scrapping them all had been discussed but in Australia at the time was Edsel Ford II (b 1948), great-grandson of Henry Ford (1863-1947), grandson of Edsel Ford (1893–1943) and the only son of Henry Ford II (1917–1987).  The scion had been sent to the southern outpost to learn the family business and been appointed assistant managing director of Ford Australia; his solution profitably to shift the surplus hardtops was hardly original but, like many sequels, it worked.  What Edsel Ford suggested was to use the same approach which in 1976 had been such a success when applied in the US to the Mustang II (1973-1978): create a dress-up package with the motifs of the original Shelby Mustangs (1965-1968), the most distinctive of which were the pair of broad, blue stripes running the vehicle’s full length.  In truth, the stripes had been merely an option on the early Shelby Mustangs but so emblematic of the breed did they become it’s now rare to see one un-striped.  The blinged-up Mustang IIs had been dubbed “Cobra II” and although mechanically unchanged, proved very popular.  One (unverified) story which is part of industry folklore claims the American’s suggestion was initially rejected by local management and discarded before a letter arrived from Ford’s Detroit head office telling the colonials that if Edsel Ford II wanted a Falcon Cobra with stripes, it must be done.  As Edsel's father once told Lee Iacocca (1924–2019), who seemed to be getting ideas above his station: "Don't forget my name is on the building".

Falcon Cobra #31.  The rear-facing bonnet (hood) scoop was the most obvious visual clue identifying the Option 97 (#002-031) cars although the aftermarket responded and it became possible to buy replica scoops as well as the decals and plaques for those who wanted their own "Cobra look".

The Australian cars thus came to be the “Cobra” and as well as providing a path to monetizing what had come to be seen as dead stock, the cars would also be a platform with which Ford could homologate some parts for use in racing.  The latter task was easy because in November 1977 Ford had built 13 “special order” XC Hardtops which conformed with the “evolution” rules of the Confederation of Australian Motor Sport (CAMS, then the regulatory body) for homologating parts for Group C touring car events.  Cognizant of the furore which had erupted in 1972 when high-output engines were homologated in road cars, the changes were mostly about durability and included enlarged rear wheel wells to accommodate wider wheels and tyres, a reverse hood (bonnet) scoop which drew desirable cool air from the low-pressure area at the base of the windscreen, twin electric fans (switchable from the cockpit) which replaced the power-sapping engine-driven fan, a front tower brace (K-brace) which stiffened the body structure, an idler arm brace and front and rear spoilers.

Falcon Cobra #094, one of the "fully optioned" cars of the Option 96 build (#081-200, including the 351 V8, air-conditioning, power steering & power windows).

A prototype Falcon Cobra was built in April 1978 with production beginning the following July.  Unusually, all were originally painted Bold Blue; the areas which would become the stripes and the sill & wheel-arch highlights were then masked and a coating of Sno White was painted over the top (thin Olympic (Blaze) Blue accent stripes separated the colors and “Cobra” decals were fitted to the sides and rear).  Each of the 400 built was fitted with a sequentially numbered plaque (001 to 400) on the dash and the production breakdown was:

#001: Created for promotional use, it was allocated for the photo-sessions from which came the images used in the first brochures (351 automatic).

#002-031: The Option 97 run which contained the parts and modifications intended for competition and produced in conformity with CAMS’s “evolution” rules (351 manual).

#032-041: 351 manual with air-conditioning (A/C) & power steering (P/S).

#042-080: 351 manual with A/C, P/S & power windows (P/W).

#081-200: 351 automatic with A/C, P/S & P/W.

#201-300: 302 manual.

#301-360 (except 351): 302 automatic with A/C & P/S.

#351: 351 manual.

#361-400: 302 automatic with A/C, P/S & P/W.

Moffat Ford Dealers team cars in the Hardie-Ferodo 1000 at Bathurst, finishing 1-2 in 1977 (left) and on the opening lap in 1978 (right).  In 1978, the cars (actually 1976 XB models modified to resemble XCs) matched their 1977 qualifying pace by starting second & third on the grid but in the race both recorded a DNF (did not finish). 

The Option 97 run (#002-031) included the modifications fitted to the 13 cars built in November 1977 but also included were engine & transmission oil coolers, a tramp rod (fitted only to the left side because most racing in Australia is on anti-clockwise circuits and most turns thus are to the left) and a special front spoiler which directed cooling air to the front brakes.  Visually, the Option 97 run was differentiated from the rest by the (functional) bonnet scoop and a pair of Scheel front bucket seats (part number KBA90018) in black corduroy cloth.  Collectively, the 370 Option 96 and 30 Option 97 made up the 400 SVP (Special Value Pack) that was the Falcon Cobra.  The Cobra’s blue & white livery appeared on the race tracks in 1978 but the best known (the pair run by Allan Moffat's (b 1939) “Moffat Ford Dealers” team) were actually modified XB Hardtops built in 1976 and the same vehicles which had completed the photogenic 1-2 at Bathurst in 1977.

Thursday, July 24, 2025

Kamikaze

Kamikaze (pronounced kah-mi-kah-zee or kah-muh-kah-zee)

(1) A member of a World War II era special corps in the Japanese air force charged with the suicidal mission of crashing an aircraft laden with explosives into an enemy target, especially Allied Naval vessels.

(2) In later use, one of the (adapted or specifically built) airplanes used for this purpose.

(3) By extension, a person or thing that behaves in a wildly reckless or destructive manner; as a modifier, something extremely foolhardy and possibly self-defeating.

(4) Of, pertaining to, undertaken by, or characteristic of a kamikaze; a kamikaze pilot; a kamikaze attack.

(5) A cocktail made with equal parts vodka, triple sec and lime juice.

(6) In slang, disastrously to fail.

(7) In surfing, a deliberate wipeout.

1945: From the Japanese 神風 (かみかぜ) (kamikaze) (suicide flyer), the construct being kami(y) (god (the earlier form was kamui)) + kaze (wind (the earlier form was kanzai)), usually translated as “divine wind” (“spirit wind” appearing in some early translations), a reference to the winds which, according to Japanese folklore, destroyed Kublai Khan's Mongol invasion fleet in 1281.  In Japanese military parlance, the official designation was 神風特別攻撃隊 (Shinpū Tokubetsu Kōgekitai (Divine Wind Special Attack Unit)).  Kamikaze is a noun, verb & adjective and kamikazeing & kamikazed are verbs; the noun plural is kamikazes.  When used in the original sense, an initial capital is used.

HESA Shahed 136 UAV.

The use of kamikaze to describe the Iranian delta-winged UAVs (unmanned aerial vehicles, popularly known as “drones”) being used by Russia against Ukraine reflects the use of the word which developed almost as soon as the existence of Japan’s wartime suicide bomber programme became known.  Kamikaze was the name of the aviators and their units but it was soon also applied to the aircraft used, some re-purposed from existing stocks and some rocket-powered units designed for the purpose.  In 1944-1945 they were too little, too late but they proved the effectiveness of precision targeting although not all military cultures would accept the loss-rate the Kamikaze sustained.  In the war in Ukraine, the Iranian HESA Shahed 136 (شاهد ۱۳۶, literally "Witness-136" and designated Geran-2 (Герань-2, literally "Geranium-2") by the Russians) kamikaze drones have proved extraordinarily effective, being cheap enough to deploy en masse and capable of precision targeting.  They’re thus a realization of the century-old dream of the strategic bombing theorists to hit “panacea targets” at low cost while sustaining no casualties.  Early in World War II, the notion of panacea targets had been dismissed, not because as a strategy it was wrong but because the means of finding and bombing such targets didn’t exist, thus “carpet bombing” (bombing for several square miles around any target) was adopted because it was at the time the best option.  Later in the war, as techniques improved and air superiority was gained, panacea targets returned to the mission lists but the method was merely to reduce the size of the carpet.  The kamikaze drones however can be pre-programmed or remotely directed to hit a target within the tight parameters of a GPS signal.  The Russians know what to target because so many blueprints of Ukrainian infrastructure sit in Moscow’s archives and the success rate is high because, being so cheap, the drones are deployed in swarms; the old phrase from the 1930s can be updated for the UAV age: “The drone will always get through”.

Imperial Japan’s Kamikazes

By 1944, it was understood by the Japanese high command that the strategic gamble simultaneously to attack the US Pacific Fleet at anchor in Pearl Harbor and the Asian territories of the European powers had failed.  Such was the wealth and industrial might of the US that within three years of the Pearl Harbor raid, the preponderance of Allied warships and military aircraft in the Pacific was overwhelming and Japan’s defeat was a matter only of time.  That couldn’t be avoided but within the high command it was thought that if the Americans understood how high would be the casualty rate if they attempted an invasion of the Japanese home islands, an invasion and the specter of occupation might be avoided and some sort of "negotiated settlement" might be possible, the notion of the demanded "unconditional surrender" unthinkable.

HMS Sussex hit by Kamikaze (Mitsubishi Ki-51 (Sonia)), 26 July 1945 (left) and USS New Mexico (BB-40) hit by Kamikaze off Okinawa, 12 May 1945 (right).

Although on paper, late in the war, Japan had over 15,000 aircraft available for service, a lack of development meant most were at least obsolescent and shortages of fuel increasingly limited the extent to which they could be used in conventional operations.  From this analysis came the estimate that if used as “piloted bombs” on suicide missions, it might be possible to sink as many as 900 enemy warships and inflict perhaps 22,000 casualties and in the event of an invasion, when used at shorter range against landing craft or beachheads, it was thought an invading force would sustain over 50,000 casualties by suicide attacks alone.  Although the Kamikaze attacks didn't achieve their strategic objective, they managed to sink dozens of ships and kill some 5000 Allied personnel.  All the ships lost were smaller vessels (the largest an escort carrier) but significant damage was done to fleet carriers and cruisers and, like the (also often dismissed as strategically insignificant) German V1 & V2 attacks in Europe, resources had to be diverted from the battle plan and re-tasked to strike the Kamikaze air-fields.  Most importantly however, so vast by 1944 was the US military machine that it was able easily to repair or replace as required.  Brought up in a different tradition, US Navy personnel, the target of the Kamikaze, dubbed the attacking pilots Baka (Japanese for “Idiot”).

A captured Japanese Yokosuka MXY-7 Ohka (Model 11), Yontan Airfield, April 1945.

Although it’s uncertain, the first Kamikaze mission may have been an attack on the carrier USS Franklin by Rear Admiral Arima (1895-1944) flying a Yokosuka D4Y Suisei (Allied codename Judy) and the early flights were undertaken using whatever airframes were available and regarded, like the pilots, as expendable.  Best remembered however, although only 850-odd were built, were the rockets designed for the purpose.  The Yokosuka MXY-7 Ohka (櫻花, Ōka, “cherry blossom”) was a purpose-built, rocket-powered attack aircraft which was essentially a powered bomb with wings, conceptually similar to a modern “smart bomb” except that instead of the guidance being provided by on-board computers and associated electronics which were sacrificed in the attack, there was a similarly expendable human pilot.  Shockingly single-purpose in its design parameters, the version most produced could attain 406 mph (648 km/h) in level flight at relatively low altitude and 526 mph (927 km/h) while in an attack dive but the greatest operational limitation was the range was limited to 23 miles (37 km), forcing the Japanese military to use lumbering Mitsubishi G4M (Betty) bombers as “carriers” (the Ohka the so-called "parasite aircraft") with the rockets released from under-slung assemblies when within range.  As the Ohka was originally conceived (with a range of 80 miles (130 km)), as a delivery system that may have worked but such was the demand on the designers to provide the highest explosive payload, the fuel load was reduced, restricting the maximum speed to 276 mph (445 km/h), making the barely maneuverable little rockets easy prey for fighters and even surface fire.

Yokosuka MXY-7 Ohka.

During the war, Japan produced more Mitsubishi G4Ms than any other bomber and its then remarkable range (3130 miles (5037 km)) made it a highly effective weapon early in the conflict but as the US carriers and fighters were deployed in large numbers, its vulnerabilities were exposed: the performance was no match for fighters and it was completely un-armored without even self-sealing fuel tanks, hence the nick-name “flying lighter” gained from flight crews.  However, by 1945 Japan had no more suitable aircraft available for the purpose so the G4M was used as a carrier and the losses were considerable, an inevitable consequence of having to come within twenty-odd miles of the US battle-fleets protected by swarms of fighters.  It had been planned to develop a variant of the much more capable Yokosuka P1Y (Ginga) (as the P1Y3) to perform the carrier role but late in the war, Japan’s industrial and technical resources were stretched and P1Y development was switched to night-fighter production, desperately needed to repel the US bombers attacking the home islands.  Thus the G4M (specifically the G4M2e-24J) continued to be used.

Watched by Luftwaffe chief Hermann Göring (1893–1946; leading Nazi 1922-1945, Hitler's designated successor & Reichsmarschall 1940-1945), Adolf Hitler (1889-1945; Führer (leader) and German head of government 1933-1945 & head of state 1934-1945) presents test pilot Hanna Reitsch (1912-1979) with the Iron Cross (2nd class), Berlin, March, 1941 (left); she was later (uniquely for a woman), awarded the 1st-class distinction.  Conceptual sketch of the modified V1 flying bomb (single cockpit version, right).

The idea of suicide missions also appealed to some Nazis, predictably most popular among those never likely to find themselves at the controls, non-combatants often among the most blood-thirsty of politicians.  The idea had been discussed earlier as a means of destroying the electricity power-plants clustered around Moscow but early in 1944, the intrepid test pilot Hanna Reitsch suggested to Adolf Hitler (1889-1945; German head of government 1933-1945 & of state 1934-1945) a suicide programme as the most likely means of hitting strategic targets.  Ultimately, she settled on using a V1 flying bomb (the Fieseler Fi 103R, an early cruise missile) to which a cockpit had been added, test-flying it herself and even mastering the landing, a reasonable feat given the high landing speed.  As a weapon, assuming a sufficient supply of barely-trained pilots, it would probably have been effective but Hitler declined to proceed, feeling things were not yet sufficiently desperate.  The historic moment passed although in the skies above Germany, in 1945 there were dozens of what appeared to be "suicide attacks" by fighter pilots ramming their aircraft into US bombers.  The Luftwaffe was by this time so short of fuel that training had been cut to the point new recruits were being sent into combat with only a few hours of solo flying experience so it's believed some incidents may have been "work accidents" but the ad-hoc Kamikaze phenomenon was real.

According to statistics compiled by the WHO (World Health Organization) in 2021, globally, there were an estimated 727,000 suicides and within the total: (1) among 15–29-year-olds, suicide was the third leading cause of death; (2) for 15–19-year-olds, it was the fourth leading cause and (3) for girls aged 15–19, suicide ranked as the third leading cause.  What was striking was that in middle & high income nations, suicide is the leading cause of death in the young (typically defined as those aged 15-29 or 15-34).  Because such nations are less affected by infectious disease, armed conflicts and accident mortality than lower income countries, it appeared there was a “mental health crisis”, one manifestation of which was the clustering of self-harm and attempted suicides, a significant number of the latter successful.  As a result of the interplay of the economic and social factors reducing mortality from other causes, intentional self-harm stands out statistically, even though suicide rates usually are not, in absolute terms, “extremely” high.  Examples quoted by the WHO included:

Republic of Korea (ROK; South Korea): Among people aged 10–39, suicide is consistently the leading cause of death and that’s one of the highest youth suicide rates in the OECD (Organisation for Economic Co-operation & Development), sometimes called the “rich countries club” although changes in patterns of development have compressed relativities and that tag is not as appropriate as once it was.

Japan (no longer styled the “Empire of Japan” although the head of state remains an emperor): Suicide is the leading cause of death among those aged 15-39 and while there was a marked decline in the total numbers after the government in the mid 1990s initiated a public health campaign, the numbers did increase in the post-COVID pandemic period.  Japan is an interesting example to study because its history has meant cultural attitudes to suicide differ from those in the West.

New Zealand (Aotearoa): New Zealand has one of the highest youth suicide rates in the developed world, especially among Māori youth and although the numbers can bounce around, for those aged 15–24, suicide is often the leading or second leading cause of death.

Finland:  For those aged 15-24, suicide is always among the leading causes of mortality and in some reporting periods the leading one.  Because in Finland there are extended times when the hours of darkness are long and the temperatures low, there have been theories these conditions may contribute to the high suicide rate (building on research into rates of depression) but the studies have been inconclusive.

Australia: Suicide is the leading cause of death for those in the cohorts 15–24 and 25–44 and a particular concern is the disproportionately high rate among indigenous youth, the incidents sometimes happening while they’re in custody.  In recent years, suicide has overtaken road accidents and cancer as the leading cause in these age groups.

Norway & Sweden: In these countries, suicide is often one of the top three causes of death among young adults and in years when mortality from disease and injury is especially low it typically will rise to the top.

Kamikaze Energy Cans in all six flavors (left) and potential Kamikaze Energy Can customer Lindsay Lohan (right).

Ms Lohan was pictured here with a broken wrist (fractured in two places in an unfortunate fall at Milk Studios during New York Fashion Week) and a 355 ml (12 fluid oz) can of Rehab energy drink, Los Angeles, September 2006.  Some recovering from injuries find energy drinks a helpful addition to the diet.  The car is a 2005 Mercedes-Benz SL 65 (R230; 2004-2011) which earlier had featured in the tabloids after a low-speed crash.  The R230 range (2001-2011) was unusual because of the quirk of the SL 550 (2006-2011), a designation used exclusively in the North American market, the RoW (rest of the world) cars retaining the SL 500 badge even though both used the 5.5 litre (333 cubic inch) V8 (M273).

Given the concerns about suicide among the young, attention has in the West been devoted to the way the topic is handled on social media and the rise in the use of novel applications for AI (artificial intelligence) has flagged new problems, one of the “AI companions” now wildly popular among youth (the group most prone to attempting suicide) recently recommending their creator take his own life.  That would have been an unintended consequence of (1) the instructions given to the bot and (2) the bot’s own “learning process”, the latter something which the software developers would have neither anticipated nor expected.  Given the sensitivities to the way suicide is handled in the media, on the internet or in popular culture, it’s perhaps surprising there’s an “energy drink” called “Kamikaze”.  Like AI companions, the prime target for the energy drink suppliers is males aged 15-39 which happens to be the group most at risk of suicidal thoughts and most likely to attempt suicide.  Despite that, the product’s name seems not to have attracted much criticism and the manufacturer promises: “With your Kamikaze Energy Can, you'll enjoy a two-hour energy surge with no crash.”  Presumably the word “crash” was chosen with some care although, given the decline in the teaching of history at school & university level, it may be a sizeable number of youth have no idea about the origin of “Kamikaze”.  Anyway, containing “200mg L-Citrulline, 160mg Caffeine Energy, 1000mg Beta Alanine, vitamin B3, B6 & B12, zero carbohydrates and zero sugar”, the cans are available in six flavours: Apple Fizz, Blue Raspberry, Creamy Soda, Hawaiian Splice, Mango Slushy & Rainbow Gummy.

Monday, June 30, 2025

Bunker

Bunker (pronounced buhng-ker)

(1) A large bin or receptacle; a fixed chest or box.

(2) In military use, historically a fortification set mostly below the surface of the ground with overhead protection provided by logs and earth or by concrete and fitted with above-ground embrasures through which guns may be fired.

(3) A fortification set mostly below the surface of the ground and used for a variety of purposes.

(4) In golf, an obstacle, classically a sand trap but sometimes a mound of dirt, constituting a hazard.

(5) In nautical use, to provide fuel for a vessel.

(6) In nautical use, to convey bulk cargo (except grain) from a vessel to an adjacent storehouse.

(7) In golf, to hit a ball into a bunker.

(8) To equip with or as if with bunkers.

(9) In military use, to place personnel or materiel in a bunker or bunkers (sometimes as “bunker down”).

1755–1760: From the Scottish bonkar (box, chest (also “seat” in the sense of “bench”)), of obscure origin but etymologists conclude the use related to furniture hints at a relationship with banker (bench).  Alternatively, it may be from a Scandinavian source such as the Old Swedish bunke (boards used to protect the cargo of a ship).  The meaning “receptacle for coal aboard a ship” was in use by at least 1839 (coal-burning steamships coming into general use in the 1820s).  The use to describe the obstacles on golf courses is documented from 1824 (probably from the extended sense “earthen seat” which dates from 1805) but perhaps surprisingly, the familiar sense from military use (dug-out fortification) seems not to have appeared before World War I (1914-1918) although the structures so described had for millennia existed.  “Bunkermate” was army slang for the individual with whom one shares a bunker while the now obsolete “bunkerman” (“bunkermen” the plural) referred to someone (often the man in charge) who worked at an industrial coal storage bunker.  Bunker & bunkerage are nouns, bunkering is a noun & verb, bunkered is a verb and bunkerish, bunkeresque, bunkerless & bunkerlike are adjectives; the noun plural is bunkers.

Just as ships called “coalers” were used to transport coal to and from shore-based “coal stations”, it was “oilers” which took oil to storage tanks or out to sea to refuel ships (a common naval procedure) and these STS (ship-to-ship) transfers were called “bunkering” as the black stuff was pumped, bunker-to-bunker.  That the coal used by steamships was stored on-board in compartments called “coal bunkers” led ultimately to another derived term: “bunker oil”.  When in the late nineteenth century ships began the transition from being fuelled by coal to burning oil, the receptacles of course became “oil bunkers” (among sailors nearly always clipped to “bunker”) and as refining processes evolved, the fuel specifically produced for oceangoing ships came to be called “bunker oil”.

Bunker oil is “dirty stuff”, a highly viscous, heavy fuel oil which is essentially the residue of crude oil refining; it’s that which remains after the more refined and volatile products (gasoline (petrol), kerosene, diesel etc) have been extracted.  Until late in the twentieth century, the orthodox view of economists was its use in big ships was a good thing because it was a product for which industry had little other use and, as essentially a by-product, it was relatively cheap.  It came in three flavours: (1) Bunker A: Light fuel oil (similar to a heavy diesel), (2) Bunker B: An oil of intermediate viscosity used in engines larger than marine diesels but smaller than those used in the big ships and (3) Bunker C: Heavy fuel oil used in container ships and such which use VLD (very large displacement), slow-running engines with a huge reciprocating mass.  Because of its composition, Bunker C especially produced much pollution and although much of this happened at sea (unseen by most but with obvious implications), when ships reached harbor to dock, all the smoke and soot became obvious.  Over the years, the worst of the pollution from the burning of bunker oil greatly has been reduced (the work underway even before the Greta Thunberg (b 2003) era), sometimes by the simple expedient of spraying a mist of water through the smoke.

Floor-plans of the upper (Vorbunker) and lower (Führerbunker) levels of the structure now commonly referred to collectively as the Führerbunker.

History’s most infamous bunker remains the Berlin Führerbunker in which Adolf Hitler (1889-1945; Führer (leader) and German head of government 1933-1945 & head of state 1934-1945) spent much of the last few months of his life.  In the architectural sense there were a number of Führerbunkers built, one at each of the semi-permanent Führerhauptquartiere (Führer Headquarters) created for the German military campaigns and several others built where required but it’s the one in Berlin which is remembered as “the Führerbunker”.  Before 1944, when the air raids by the RAF (Royal Air Force) and USAAF (US Army Air Force) intensified, the term Führerbunker seems rarely to have been used other than by the architects and others involved in their construction and it wasn’t a designation like Führerhauptquartiere which the military and other institutions of state shifted between locations (rather as “Air Force One” is attached not to a specific airframe but whatever aircraft in which the US president is travelling).  In subsequent historical writing, the term Führerbunker tends often to be applied to the whole, two-level complex in Berlin and although it was only the lower layer which officially was designated as that, for most purposes the distinction is not significant.  In military documents, after January 1945 the Führerbunker was referred to as Führerhauptquartiere.

Führerbunker tourist information board, Berlin, Germany.

Only an information board at the intersection of den Ministergärten and Gertrud-Kolmar-Straße, erected by the German Government in 2006 prior to that year's FIFA (Fédération Internationale de Football Association (International Federation of Association Football)) World Cup, now marks the place on Berlin's Wilhelmstrasse 77 where once the Führerbunker was located.  The Soviet occupation forces razed the new Reich Chancellery and demolished all the bunker's above-ground structures but the subsequent GDR (Deutsche Demokratische Republik (German Democratic Republic; the old East Germany) 1949-1990) abandoned attempts completely to destroy what lay beneath.  Until after the fall of the Berlin Wall (1961-1989) the site remained unused and neglected, “re-discovered” only during excavations by property developers, the government insisting on the destruction of whatever was uncovered and, sensitive still to the spectre of “Neo-Nazi shrines”, for years the bunker’s location was never divulged, even as unremarkable buildings (an unfortunate aspect of post-unification Berlin) began to appear on the site.  Most of what would have covered the Führerbunker’s footprint is now a supermarket car park.

The first part of the complex to be built was the Vorbunker (upper bunker or forward bunker), an underground facility of reinforced concrete intended only as a temporary air-raid shelter for Hitler and his entourage in the old Reich Chancellery.  Substantially completed during 1936-1937, it was until 1943 listed in documents as the Luftschutzbunker der Reichskanzlei (Reich Chancellery Air-Raid Shelter), the Vorbunker label applied only in 1944 when the lower level (the Führerbunker proper) was appended.  In mid January, 1945, Hitler moved into the Führerbunker and, as the military situation deteriorated, his appearances above ground became less frequent until by late March he rarely saw the sky.  Finally, on 30 April, he committed suicide.

Bunker Busters

Northrop Grumman publicity shot of B2-Spirit from below, showing the twin bomb-bay doors through which the GBU-57 are released.

Awful as they are, there's an undeniable beauty in the engineering of some weapons and it's unfortunate humankind never collectively has resolved exclusively to devote such ingenuity to stuff other than us blowing up each other.  That’s not a new sentiment, being one philosophers and others have for millennia expressed in various ways although since the advent of nuclear weapons, concerns understandably become heightened.  Like every form of military technology ever deployed, once the “genie is out of the bottle” the problem is there to be managed and at the dawn of the atomic age, delivering a lecture in 1936, the British chemist and physicist Francis Aston (1877–1945) (who created the mass spectrograph, winning the 1922 Nobel Prize in Chemistry for his use of it to discover and identify the isotopes in many non-radioactive elements and for his enunciation of the whole number rule) observed:

There are those about us who say that such research should be stopped by law, alleging that man's destructive powers are already large enough.  So, no doubt, the more elderly and ape-like of our ancestors objected to the innovation of cooked food and pointed out the great dangers attending the use of the newly discovered agency, fire.  Personally, I think there is no doubt that sub-atomic energy is available all around us and that one day man will release and control its almost infinite power.  We cannot prevent him from doing so and can only hope that he will not use it exclusively in blowing up his next door neighbor.

The use in June 2025 by the USAF (US Air Force) of fourteen of its Boeing GBU-57 (Guided Bomb Unit-57) Massive Ordnance Penetrator (MOP) bombs against underground targets in Iran (twelve on the Fordow Uranium Enrichment Plant and two on the Natanz nuclear facility) meant “Bunker Buster” hit the headlines.  Carried by the Northrop B-2 Spirit heavy bomber (built between 1989-2000), the GBU-57 is a 14,000 kg (30,000 lb) bomb with a casing designed to withstand the stress of penetrating through layers of reinforced concrete or thick rock.  “Bunker buster” bombs have been around for a while, the ancestors of today’s devices first built for the German military early in World War II (1939-1945) and the principle remains unchanged to this day: up-scaled armor-piercing shells.  The initial purpose was to produce a weapon with a casing strong enough to withstand the forces imposed when impacting reinforced concrete structures, the idea simple in that what was needed was a delivery system which could “bust through” whatever protective layers surrounded a target, allowing the explosive charge to do damage where needed rather than wastefully being expended on an outer skin.  The German weapons proved effective but inevitably triggered an “arms race” in that as the war progressed, the concrete layers became thicker, walls over 2 metres (6.6 feet) and ceilings of 5 metres (16 feet) being constructed by 1943.  Technological development continued and the idea extended to rocket-propelled bombs optimized both for armor-piercing and aerodynamic efficiency, velocity a significant “mass multiplier” which made the weapons still more effective.

USAF test-flight footage of Northrop B2-Spirit dropping two GBU-57 "Bunker Buster" bombs.

Concurrent with this, the British developed the first true “bunker busters”, building on the idea of the naval torpedo, one aspect of which was that, exploding a short distance from its target, it was highly damaging because it was able to take advantage of one of the properties of water (quite strange stuff according to those who study it) which is that it doesn’t compress.  What that meant was it was often the “shock wave” of the water rather than the blast itself which could breach a hull, the same principle used for the famous “bouncing bombs” used for the RAF’s “Dambuster” (Operation Chastise, 17 May 1943) raids on German dams.  Because of the way water behaved, it wasn’t necessary to score the “direct hit” which had been the ideal in the early days of aerial warfare.

RAF Bomber Command archive photograph of Avro Lancaster (built between 1941-1946) in flight with Grand Slam mounted (left) and a comparison of the Tallboy & Grand Slam (right), illustrating how the latter was in most respects a scaled-up version of the former.  To carry the big Grand Slams, 32 “B1 Special” Lancasters were in 1945 built with up-rated Rolls-Royce Merlin V12 engines, the removal of the bomb doors (the Grand Slam carried externally, its dimensions exceeding internal capacity), deleted front and mid-upper gun turrets, no radar equipment and a strengthened undercarriage.  Such was the concern with weight (especially for take-off) that just about anything non-essential was removed from the B1 Specials, even three of the four fire axes and its crew door ladder.  In the US, Boeing went through a similar exercise to produce the run of “Silverplate” B-29 Superfortresses able to carry the first A-bombs used in August, 1945. 

Best known of the British devices were the so-called “earthquake bombs”, the Tallboy (12,000 lb; 5.4 ton) & Grand Slam (22,000 lb, 10 ton) which, despite the impressive bulk, were classified by the War Office as “medium capacity”.  The terms “Medium Capacity” (MC) & “High Capacity” (HC) referenced not the gross weight or physical dimensions but the ratio of explosive filler to the total weight of the construction (ie how much was explosive compared to the casing and ancillary components).  Because both had thick casings to ensure penetration deep into hardened targets (bunkers and other structures encased in rock or reinforced concrete) before exploding, the internal dimensions accordingly were reduced compared with the ratio typical of contemporary ordnance.  A High Capacity (HC) bomb (a typical “general-purpose” bomb) had a thinner casing and a much higher proportion of explosive (sometimes over 70% of total weight).  These were intended for area bombing (known also as “carpet bombing”) and caused wide blast damage whereas the Tallboy & Grand Slam were penetrative with casings optimized for aerodynamic efficiency, their supersonic travel working as a mass-multiplier.  The Tallboy’s 5,200 lb (2.3 ton) explosive load was some 43% of its gross weight while the Grand Slam’s 9,100 lb (4 ton) absorbed 41%; this may be compared with the “big” 4,000 lb (1.8 ton) HC “Blockbuster” which allocated 75% of the gross weight to its 3,000 lb (1.4 ton) charge.  Like many things in engineering (not just in military matters) the ratio represented a trade-off, the MC design prioritizing penetrative power and structural destruction over blast radius.  The novelty of the Tallboy & Grand Slam was that as earthquake bombs, their destructive potential was able to be unleashed not necessarily by achieving a direct hit on a target but by entering the ground nearby, the explosion (1) creating an underground cavity (a camouflet) and (2) transmitting a shock-wave through the target’s foundations, leading to the structure collapsing into the newly created lacuna.
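The percentages quoted above follow directly from the figures given; a quick check (a sketch only, using the nominal weights in the text rather than any official specification), the ratio being explosive filler divided by gross weight:

\[
\text{Tallboy: } \frac{5{,}200}{12{,}000} \approx 0.43,\qquad
\text{Grand Slam: } \frac{9{,}100}{22{,}000} \approx 0.41,\qquad
\text{Blockbuster (HC): } \frac{3{,}000}{4{,}000} = 0.75
\]

It is this filler-to-gross-weight ratio, not absolute size, which separated the MC from the HC classification.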

The etymology of camouflet has an interesting history in both French and military mining.  Originally it meant “a whiff of smoke in the face” (from a fire or pipe) and in figurative use it was a reference to a snub or slight insult (something unpleasant delivered directly to someone); although the origin is murky, it may have been related to the earlier French verb camoufler (to disguise; to mask) which evolved also into “camouflage”.  In the specialized military jargon of siege warfare or mining (sapping), over the seventeenth to nineteenth centuries “camouflet” referred to “an underground explosion that does not break the surface, but collapses enemy tunnels or fortifications by creating a subterranean void or shockwave”.  The use of this tactic is best remembered from the Western Front in World War I, some of the huge craters now tourist attractions.

Under watchful eyes: Grand Ayatollah Ali Khamenei (b 1939; Supreme Leader, Islamic Republic of Iran since 1989) delivering a speech, sitting in front of the official portrait of the republic’s ever-unsmiling founder, Grand Ayatollah Ruhollah Khomeini (1900-1989; Supreme Leader, Islamic Republic of Iran, 1979-1989).  Ayatollah Khamenei seemed in 1989 an improbable choice as Supreme Leader because others were better credentialed but though cautious and uncharismatic, he has proved a great survivor in a troubled region.

Since aerial bombing began to be used as a strategic weapon, of great interest has been the debate over BDA (battle damage assessment) and the issue emerged almost as soon as the bunker buster attack on Iran was announced, focused on the extent to which the MOPs had damaged the targets, the deepest of which were concealed deep inside a mountain.  BDA is a constantly evolving science and while satellites have made analysis of surface damage highly refined, it’s more difficult to understand what has happened deep underground.  Indeed, it wasn’t until the USSBS (United States Strategic Bombing Survey) teams toured Germany and Japan in 1945-1946, conducting interviews, economic analysis and site surveys, that a useful (and substantially accurate) understanding emerged of the effectiveness of bombing.  What technological advances have since allowed for those with the resources is that the so-called “panacea targets” (ie critical infrastructure and such, once dismissed by planners because the required precision was for many reasons rarely attainable) can now accurately be targeted, the USAF able to drop a bomb within a few feet of the aiming point.  As the phrase is used by the military, the Fordow Uranium Enrichment Plant is a classic “panacea target” but whether even a technically successful strike will achieve the desired political outcome remains to be seen.

Mr Trump, in a moment of exasperation, posted on Truth Social of Iran & Israel: “We basically have two countries that have been fighting so long and so hard that they don't know what the fuck they're doing.”  Actually, both know exactly WTF they're doing; it's just that Mr Trump (and many others) would prefer they didn't do it.

Donald Trump (b 1946; US president 2017-2021 and since 2025) claimed “total obliteration” of the targets while Grand Ayatollah Khamenei admitted only there had been “some damage”; which of the two is closer to the truth may one day be revealed.  Even modelling of the effects has probably been inconclusive because the deeper one goes underground, the greater the number of variables in the natural structure, and the nature of the internal built environment will also influence blast behaviour.  All experts seem to agree much damage will have been done but what can’t yet be determined is what has been suffered by the facilities which sit as deep as 80 m (260 feet) inside the mountain although, as the name implies, “bunker busters” are designed for buried targets and it’s not always required for the blast directly to reach the target.  Because the shock-wave can travel through earth & rock, the effect is something like that of an earthquake and if the structure sufficiently is affected, the area may be rendered geologically too unstable again to be used for its original purpose.

Within minutes of the bombing having been announced, legal academics were being interviewed (though not by Fox News) to explain why the attacks were unlawful under international law and in a sign of the times, the White House didn't bother to discuss fine legal points like the distinction between "preventive & pre-emptive strikes", preferring (like Fox News) to focus on the damage done.  However, whatever the murkiness surrounding the BDA, many analysts have concluded that even if before the attacks the Iranian authorities had not approved the creation of a nuclear weapon, this attack will have persuaded them one is essential for “regime survival”, thus the interest in both Tel Aviv and (despite denials) Washington DC in “regime change”.  The consensus seems to be Grand Ayatollah Khamenei had, prior to the strike, not ordered the creation of a nuclear weapon but that all energies were directed towards completing the preliminary steps, thus the enriching of uranium to ten times the level required for use in power generation; the ayatollah liked to keep his options open.  So, the fear of some is the attacks, even if they have (by weeks, months or years) delayed the Islamic Republic’s work on nuclear development, may prove counter-productive in that they convince the ayatollah to concur with the reasoning of every state which since 1945 has adopted an independent nuclear deterrent (IND).  That reasoning was not complex and hasn’t changed since a prehistoric man first picked up a stout stick to wave as a pre-lingual message to potential adversaries, warning them there would be consequences for aggression.  Although the state is a theocracy, those who command power in the Islamic Republic are part of an opaque political institution and in the struggle which has for some time been conducted in anticipation of the death of the aged (and reportedly ailing) Supreme Leader, the matter of “an Iranian IND” is one of the central dynamics.  Many will be following what unfolds in Tehran and the observers will not be only in Tel Aviv and Washington DC because in the region and beyond, few things focus the mind like the thought of ayatollahs with A-Bombs.

Of the word "bust"

The Great Bust: The Depression of the Thirties (1962) by Jack Lang (left), highly qualified content provider Busty Buffy (b 1996, who has never been accused of misleading advertising, centre) and The people's champion, Mr Lang, bust of Jack Lang, painted cast plaster by an unknown artist, circa 1927, National Portrait Gallery, Canberra, Australia (right).  Remembered for a few things, Jack Lang (1876–1975; premier of the Australian state of New South Wales (NSW) 1925-1927 & 1930-1932) remains best known for having in 1932 been the first head of government in the British Empire to have been sacked by the Crown since William IV (1765–1837; King of the UK 1830-1837) in 1834 dismissed Lord Melbourne (1779–1848; prime minister of the UK 1834 & 1835-1841).

Those learning English must think it at least careless that things can both be (1) “razed to the ground” (totally to destroy something (typically a structure), usually by demolition or incineration) and (2) “raised to the sky” (physically lifted upwards).  The etymologies of “raze” and “raise” differ but they’re pronounced the same so it’s fortunate the spellings vary; in other troublesome examples of unrelated meanings, spelling and pronunciation can align, as in “bust”.  When used in the ways most directly related to human anatomy: (1) “a sculptural portrayal of a person's head and shoulders” & (2) “the circumference of a woman's chest around her breasts”, there is an etymological link but these uses wholly are unconnected with bust’s other senses.

Bust of Lindsay Lohan in white marble by Stable Diffusion.  Sculptures of just the neck and head came also to be called “busts”, the emphasis on the technique rather than the original definition.

Bust in the sense of “a sculpture of upper torso and head” dates from the 1690s and was from the sixteenth century French buste, from the Italian busto (upper body; torso), from the Latin bustum (funeral monument, tomb (although the original sense was “funeral pyre, place where corpses are burned”)) and it may have emerged (as a shortened form) from ambustum, neuter of ambustus (burned around), past participle of amburere (burn around, scorch), the construct being ambi- (around) + urere (to burn).  The alternative etymology traces a link to the Old Latin boro, the early form of the Classical Latin uro (to burn) and it’s thought the development in Italian was influenced by the Etruscan custom of keeping the ashes of the dead in an urn shaped like the person when alive.  Thus the use, common by the 1720s, of bust (a clipping from the French buste) to mean “a carving of the trunk of the human body from the chest up”.  From this came the meaning “dimension of the bosom; the measurement around a woman's body at the level of her breasts” and that evolved on the basis of a comparison with the sculptures, the base of which was described as the “bust-line”, the term still used in dress-making (and for other comparative purposes as one of the three “vital statistics” by which women are judged (bust, waist, hips), each circumference having an “ideal range”).  It’s not known when “bust” and “bust-line” came into oral use among dress-makers and related professions but the terms have been documented since the 1880s.  Derived forms (sometimes hyphenated) include busty (tending to bustiness, thus Busty Buffy's choice of stage-name), overbust & underbust (technical terms in women's fashion referencing specific measurements) and bustier (a tight-fitting women's top which covers (most or all of) the bust).

Benito Mussolini (1883-1945; Duce (leader) & Prime-Minister of Italy 1922-1943) standing beside his “portrait bust” (1926).

The bust was carved by Swiss sculptor Ernest Durig (1894–1962) who gained posthumous notoriety when his career as a forger was revealed with the publication of his drawings which he’d represented as being from the hand of the French sculptor Auguste Rodin (1840-1917) under whom he claimed to have studied.  Mussolini appears here in one of the subsequently much caricatured poses which were a part of his personality cult.  More than one of the Duce's counterparts in other nations was known to have made fun of some of the more outré poses and affectations, the outstretched chin, right hand braced against the hip and straddle-legged stance among the popular motifs. 

“Portrait bust” in marble (circa 1895) of Otto von Bismarck (1815-1898; chancellor of the German Empire (the "Second Reich") 1871-1890) by the German sculptor Reinhold Begas (1831-1911).

In sculpture, what had been known as the “portrait statue” came after the 1690s to be known as the “portrait bust” although both terms meant “sculpture of upper torso and head”; these proved a popular choice for military figures because the aspect enabled the inclusion of bling such as epaulettes, medals and other decorations and, being depictions of the human figure, busts came to be vested with special significance by the superstitious.  In early 1939, during construction of the new Reich Chancellery in Berlin, workmen dropped one of the busts of Otto von Bismarck by Reinhold Begas, breaking it at the neck.  For decades, the bust had sat in the old Chancellery and the building’s project manager, Albert Speer (1905–1981; Nazi court architect 1934-1942; Nazi minister of armaments and war production 1942-1945), knowing Adolf Hitler (1889-1945; Führer (leader) and German head of government 1933-1945 & head of state 1934-1945) believed the Reich Eagle toppling from the post-office building right at the beginning of World War I had been a harbinger of doom for the nation, kept the accident secret, hurriedly issuing a commission to the German sculptor Arno Breker (1900–1991) who carved an exact copy.  To give the fake the necessary patina, it was soaked for a time in strong, black tea, the porous quality of marble enabling the fluid to induce some accelerated aging.  Interestingly, in his (sometimes reliable) memoir (Erinnerungen (Memories or Reminiscences), published in English as Inside the Third Reich (1969)), even the technocratic Speer admitted of the accident: “I felt this as an evil omen”.

The other senses of bust (as a noun, verb & adjective) are diverse (and sometimes diametric opposites) and include: “to break or fail”; “to be caught doing something unlawful / illicit / disgusting etc”; “to debunk”; “dramatically or unexpectedly to succeed”; “to go broke”; “to break in” (horses, girlfriends etc); “to assault”; the downward portion of an economic cycle (ie “boom & bust”); “the act of effecting an arrest” and “someone (especially in professional sport) who failed to perform to expectation”.  That’s quite a range and it has meant the creation of dozens of idiomatic forms, the best known of which include: “boom & bust”, “busted flush”, “dambuster”, “bunker buster”, “busted arse country”, “drug bust”, “cloud bust”, “belly-busting”, “bust one's ass (or butt)”, “bust a gut”, “bust a move”, “bust a nut”, “bust-down”, “bust loose”, “bust off”, “bust one's balls”, “bust-out”, “sod buster”, “bust the dust”, “myth-busting” and “trend-busting”.  In the sense of “breaking through”, bust was from the Middle English busten, a variant of bursten & bresten (to burst) and may be compared with the Low German basten & barsten (to burst).  Bust in the sense of “break”, “smash”, “fail”, “arrest” etc was a creation of mid-nineteenth century US English and is of uncertain inspiration but most etymologists seem to concur it was likely a modification of “burst” effected with a phonetic alteration, though it’s not impossible it came directly as an imperfect echoic of Germanic speech.  The apparent contradiction of bust meaning both “fail” and “dramatically succeed” happened because the former was an allusion to “being busted” (ie broken) while the latter meaning used the notion of “busting through”.