Monday, August 12, 2024

Dreadnought

Dreadnought (pronounced dred-nawt)

(1) A type of battleship armed with heavy-calibre guns in turrets: so called from the British battleship HMS Dreadnought (1906); a name used by the Royal Navy for many ships and submarines.

(2) A garment made of thick woolen cloth that can defend against storm and cold.

(3) A thick cloth with a long pile (known also as fearnought).

(4) Slang: a boxer in the heavyweight class.

(5) By extension, something that is the largest or heaviest of its kind in a given field.

(6) A person who fears nothing; something that assures against fear.

(7) A type of acoustic guitar with a very large body and a waist less pronounced than in other designs, producing a deep, "bold" sound.

1800-1810: The construct was dread + nought.  Dread was from the Middle English dreden, from Old English drǣdan (to fear, dread), aphetic form of ondrǣdan (to fear, dread), from and- + rǣdan (from which English picked up read); corresponding to an aphesis of the earlier adread.  The Old Saxon was antdrādan & andrādan (to fear, dread), the Old High German was intrātan (to fear) and the Middle High German entrāten (to fear, dread, frighten).  Nought was from the Middle English nought & noght (noȝt), from the Old English nōwiht & nāwiht (the construct being nay + a + wight), which in turn came from ne-ā-wiht, a phrase used as an emphatic "no", in the sense of "not a thing".  In the transition to Modern English, the word reduced gradually to nought, nawt and finally not; a doublet of naught.  The alternative spelling (though never used by the Admiralty) is Dreadnaught.  Dreadnought is a noun; the noun plural is dreadnoughts.

The dreadnoughts

HMS Dreadnought, 1906.

Launched in 1906, HMS Dreadnought is often said to have revolutionized naval power, her design so significant it proved the final step in what had, by the late nineteenth century, evolved into the battleship.  Subsequent vessels would be larger, faster, increasingly electronic and more heavily armed but the concept remained the same.  HMS Dreadnought rendered instantly obsolete every other battleship in the world (including the rest of the Royal Navy) and all other battleships then afloat were immediately re-classified as pre-dreadnoughts.  In naval architecture, so epoch-making was the ship that it changed the nomenclature in navies world-wide: after 1906 there would be pre-dreadnoughts, semi-dreadnoughts, demi-dreadnoughts & super-dreadnoughts (hyphenated and not).  The adjective dreadnoughtish was non-standard but was used to describe ships of a design beyond that of the orthodox battleship of the late nineteenth century but with only some of a dreadnought's distinguishing characteristics.  Presumably someone in the Admiralty would have coined dreadnoughtesque but no document seems to have survived as proof.

HMS Dreadnought.

Her main design features were speed, armor, steam turbine propulsion and, especially, firepower drawn almost exclusively from weapons of the largest caliber.  In the decades after her launch, British, German, American, Japanese and other navies would build larger and heavier dreadnoughts until, during World War II, their utility was finally seen to have been eclipsed by both aircraft carriers and submarines.  The last dreadnought, HMS Vanguard, launched in 1946, was scrapped in 1958, although the US Navy maintained at least one of the four battleships it retained from World War II (1939-1945) on either the active or reserve list until 2004, when the last was decommissioned.

HMS Dreadnought, 1908.

That it was the Royal Navy which first launched a dreadnought doesn’t mean the British Admiralty was alone in pursuing the concept.  Naval strategists in several nations had noted the course of battle between the Russian and Japanese fleets in 1905 and concluded the immediate future of naval warfare lay in the maximum possible deployment of big guns, able to launch attacks from the longest possible range, smaller-caliber subsidiary weapons being seen even as a disadvantage in battle.  That the Royal Navy was the first with such a ship afloat was a testament to the efficiency of British designers and shipbuilders, not the uniqueness of its plans.

Much read in palaces, chancelleries and admiralties around the world was a book released in 1890 called The Influence of Sea Power Upon History, 1660–1783 by US naval officer and theorist Captain Alfred Mahan (1840-1914).  Published in what, in retrospect, was a historical sweet-spot (technologically and politically) for the views it espoused, it brought Mahan great fame and exerted an extraordinary influence on diplomacy, military planning and the politics of the era.  The book alone was not the cause of the naval arms-race in the decade before World War I (1914-1918) but it was at least a sharp nudge, push or shove, depending on one’s view.  Curiously, although it was a work primarily about naval strategy and many of the maritime powers seemed convinced by Mahan’s arguments about the importance of sea power in geopolitics, not all admiralties adopted the strategic template.  What all agreed on, however, was that they needed more ships.

Pax Britannica ("British Peace", echoing the Pax Romana of the Roman Empire) describes the century of relative great-power stability between the Congress of Vienna (1814-1815) and the outbreak of war in 1914, and encompasses the idea of the British Empire as the global hegemon, a role possible only because the Royal Navy enjoyed an unchallenged ability to patrol and protect the key maritime trade routes.  The effective control of these transport corridors not only guaranteed the security of the British Empire but meant also the British effectively controlled maritime access to much of Asia, the Americas, Oceania and the south Pacific, although one factor in the success was that London ran things essentially in accordance with US foreign policy, assisting Washington in enforcing the Monroe Doctrine which upheld the US preponderance of interest in the Americas.  It can be argued the roots of the so-called "special relationship" took hold here.

The British Empire, in terms of the impression created by a map of the world on which its colonies and dependencies were colored usually in some shade of red, was deceptive, the remit of the local administrators sometimes extending little beyond the coastal enclaves, even the transport links between towns not always entirely secure.  Never did the Empire possess the military resources to defend such vast, remote and disparate territories but it was the control of the sea, uniquely in history, which allowed the British for centuries to maintain what was, with no disparagement intended, a confidence trick.  The empire could be maintained not because of control of big colonies but because of all the little islands dotted around the oceans, which enabled the navy to operate outposts housing the ports and coaling stations from which ships could make repairs or provision with fuel, food and water.  All those little dots on the map were the "keys to the world".  Mahan’s book had drawn its influential conclusions from his study of the role of sea power during the seventeenth and eighteenth centuries; what the British did was take advantage of the circumstances of the nineteenth century and deploy their sea power globally, in competition when necessary, in cooperation when possible and in conflict when required.  The practical expression of all this was British naval policy: that the Royal Navy must be of sufficient strength simultaneously to prevail in war against the combined strength of the next two biggest navies, whether met in separate theatres or as a massed fleet.

By the early twentieth century, economic and geopolitical forces had combined to render the policy impossible to maintain, Britain no longer able to operate in “splendid isolation” (another somewhat misleading phrase of the era) and needing alliances to spread the load of imperial defense.  It wasn’t just the rapid growth of the German fleet which had changed the balance of power but that alone was enough for the British and the French to reach an accommodation, remembered as the Entente Cordiale (Cordial Agreement) of 1904, which may or may not have been an alliance but was enough of one for the admiralties in Paris and London cooperatively to organize the allocations of their fleets.  It certainly illustrated Lord Palmerston's (1784–1865) doctrine that the country had neither eternal allies nor perpetual enemies but only permanent interests, for despite the centuries of enmity between Britain and France, the self-interest of both dictated the need to align against the German threat.

Royal Navy battlecruiser HMS New Zealand, 1911.

It was in this atmosphere the great naval arms race took place, plans for which were laid before the Wright brothers had flown their hundred-odd feet (barely off the ground), when torpedoes were in their infancy and submarines were little threat more than a few miles from the coast.  The measure of a fleet was its battleships and their big guns, and whichever side could put to sea the most firepower was winning the race.  It intrigued the navalists, strategists and theorists who knew from history that such a race, if left to run, could end only in war, the great, decisive set-piece battle of which would be the clash of massed fleets of battleships on the high seas, trading shell-fire at a range of twenty miles (32 km) before closing for the kill as the battle climaxed.  Dreadnought was one strand of the theorists’ imagination but there were others.  There was a school of thought which favored an emphasis on radio communications and a greater attention to the possibilities offered by the torpedo and, most influentially, what seems now the curious notion of a complementary range of faster capital ships, essentially battleships with the big guns but little armor, the loss of protection off-set by the few knots in speed gained; these ships were called battlecruisers.  The argument was they would fight at such range that nothing but a battleship could threaten them, and those, because of their greater speed, the battlecruisers could outrun.  It seemed, to many, a good idea at the time.

Super-Dreadnought: HMS Iron Duke, Port Said, 1921.

But it was the dreadnoughts which captured the imagination and defined the era.  Impressive though she was, HMS Dreadnought was not long unique as navies around the world launched their own and, as happens in arms races, the original was quickly out-classed and the next generation of ships, bigger and more heavily gunned still, came to be known as super-dreadnoughts.  War did come but the grand battle on the high seas which the navalists had, for a quarter century, been planning never happened.  There were smaller clashes of squadrons but the imperative of the Royal Navy was more practical and traditionally British: avoid defeat.  As Winston Churchill (1875-1965; UK prime-minister 1940-1945 & 1951-1955), then First Lord of the Admiralty (minister for the navy), emphasized to the First Sea Lord (the navy’s senior admiral), against a continental empire like Germany the Royal Navy couldn’t in a year win the war but, because Britain’s empire was maritime, they could lose it in one afternoon.  Accordingly, the Royal Navy made no sustained attempts to induce a massed battle, focusing instead on a blockade, keeping the German fleet confined to its ports.  It was the German admirals who attempted to force the British to a set-piece battle, venturing into the North Sea in May 1916 with a fleet of nearly a hundred ships, including sixteen dreadnoughts and five battlecruisers.  Against this, the British assembled a hundred and fifty-odd with twenty-eight dreadnoughts and nine battlecruisers.  The action came to be known as the Battle of Jutland.

Imperial German Navy battlecruiser SMS Goeben, 1914.

On paper, although the result is usually described as inconclusive, the battle was a tactical success for the Germans but strategically the British achieved their goal.  The dreadnoughts barely engaged, most of the action confined to the battlecruisers and, unlike the smaller Battle of Tsushima (May 1905) in the Far East, fought a decade earlier by the pre-dreadnoughts of the Japanese and Russian fleets, there was no winner in the traditional sense of naval warfare.  The Germans' tactical success was in retrospect something of a Dunkirk moment but the strategic implications were profound.  British losses were heavier but their numeric advantage was such they could absorb the loss and they had the financial and industrial capacity to restore the fleet’s strength.  Damage to the German fleet was less but the Germans lacked the time and capacity to build their navy to the point it could be used as a strategic weapon and it remained confined to its ports.  Both sides learned well the inherent limitations of the battlecruiser.

WWI era German U-Boot (Unterseeboot (under-sea-boat)), anglicized as U-Boat.

After Jutland, the German admirals concluded that to venture again against the British Home Fleet would either be an inconclusive waste or lead to the inevitable, decisive defeat.  They accordingly prevailed on the politicians and eventually gained approval to use the only genuinely effective weapon in their hands, the submarine.  It was the consequences of unrestricted submarine warfare which would bring the United States into the war in 1917 as a belligerent and without that intervention, the war would certainly have followed a different course and reached perhaps a different conclusion.

Although HMS Dreadnought lent her name to an era and remains one of the most significant warships ever built, she's remembered for the geopolitical reverberations in the wake of her launching rather than any achievement at sea, missing even the anti-climactic Battle of Jutland (1916) because of a scheduled re-fit.  Indeed, her only achievement of note in combat was the ramming and sinking of the German U-Boat SM U-29 on 18 March 1915 although that does remain a unique footnote in naval history, it being the only time a battleship deliberately sank an enemy submarine.  Dreadnought was decommissioned in 1920 and scrapped the next year.  Later, under the terms of the 1922 Washington Naval Treaty which sought to prevent another naval arms race, most of the surviving dreadnoughts were scrapped or scuttled but many of the super-dreadnoughts remained in the fleets, some not scrapped until after World War II.  The name has a strong resonance in the halls of the Admiralty (now the Navy Command in the UK's Ministry of Defence) and has been chosen for the class of vessels to replace the existing Vanguard class ballistic-missile submarines.  Now under construction, the first of the nuclear-powered Dreadnought class boats is expected to enter service early in the 2030s.

Dreadnought coats

The term “dreadnought coat” was adopted by the UK’s garment industry in 1908 to refer to a heavy, durable and water-resistant overcoat.  It was an opportunistic “borrowing” that verged on what would now be called “ambush marketing” and took advantage of the extensive publicity the name attracted during the so-called “naval scare” of that decade, the attraction being that the arms-race had done the hard work of “brand-name recognition”.  The reference point of the design was the heavy “pea coat” (the construct being the Dutch pij (cowl) + the English coat) issued to Royal Navy sailors (although similar garments were worn in many navies).  Typically, naval pea coats were made from a thick wool yarn, designed to protect against the harsh maritime weather encountered in coastal environments as well as on the high-seas.  Pea coats were of rugged construction, almost always double-breasted, and featured large lapels (for extra warmth around the neck, often turned up in cold weather) and deep pockets.

A dreadnought pea coat by Triple Aught ("Dreadnaught Peacoat" the spelling used) (left), Lindsay Lohan in dreadnought coat (London, June, 2014, centre) and in trench coat (London, October 2015, right).

To facilitate ease of movement and avoid becoming entangled in the ropes and chains which are a feature of a ship’s deck, the classic naval pea coat was hip-length, unlike the ankle-length great coats used by armies.  When the double-breasted design was extended to the civilian market, the pea coat was almost unchanged (although many were of lighter construction) and navy blue remained the most popular color.  When the style of a pea coat is extended to something calf or ankle-length, it becomes a “dreadnought coat”, which should not be confused with a “trench coat”, which is of lighter construction, traditionally beige and belted and, as all fashionistas know, the belt is always tied, never buckled.

Sunday, August 11, 2024

Crapper

Crapper (pronounced krap-er)

(1) A proprietary trade name for a brand of loo; toilet; lavatory etc.

(2) A slang term for the loo; toilet; lavatory etc.

1920s: The construct was crap + -er.  Dating from 1375-1425, crap was from the Middle English crappe (which at various times existed in the plural as crappen, crappies and craps) (chaff; buckwheat), from the Old French crappe & crapin (chaff; siftings, waste or rejected matter).  In the Medieval Latin there were the plural forms crappa & crapinum, apparently from the Old Dutch krappen (to cut off, pluck off) from which Middle Dutch gained crappe & crap (a chop, cutlet) and Modern Dutch krip (a steak); the most obvious modern relative is crop.  The Middle English agent suffix -er was from the Old English -ere, from the Proto-Germanic -ārijaz and generally thought to have been borrowed from the Latin -ārius.  The English forms were cognate with the Dutch -er & -aar, the German -er, the Swedish -are, the Icelandic -ari and the Gothic -areis.  Related are the Ancient Greek -ήριος (-ḗrios) and the Old Church Slavonic -арь (-arĭ).  Although unrelated, the development of -er was reinforced by the synonymous Old French -or & -eor and the Anglo-Norman variant -our, all derived from the Latin -(ā)tor, the ultimate root being the primitive Indo-European -tōr.  Dating from 1846, crap was the English slang derived from the proper term crapping ken, which is crap’s first documented application to bodily waste although etymologists suspect it had been in widespread use for some time prior.  In this context, crap was used in the earlier English and French sense of “siftings, waste or rejected matter” and ken was an existing term for a small building or house.

The urban myth is part-truth, part-crap

The brand-name Crapper was first applied to a toilet designed and built by plumber Thomas Crapper (1836-1910) and manufactured by the company he founded, Thomas Crapper & Co, Licenced Plumbers & Sanitary Engineers.  In 1884, the Prince of Wales (later Edward VII (1841–1910; King of the UK & Emperor of India 1901-1910)) purchased Sandringham House and asked Mr Crapper to supply the plumbing, including thirty flushing loos with cedarwood seats and enclosures.  Impressed with the quality, the prince granted the company their first Royal Warrant.  The occupational surname Crapper is a dialectal variant of cropper (harvester of crops, farmer).

It’s a linguistic coincidence that a Mr Crapper chose to become a plumber and begin manufacturing loos bearing his name, which bore such similarity to both crap and crapping, terms which had earlier been used to describe bodily and other waste.  Despite the coincidence, decades before the internet spread fake news, the urban myth was well-established that the words crap and crapper, in their scatological sense, all derive from the efforts and products of Mr Crapper.  The myth is often fleshed-out with reference to US soldiers stationed in England during World War One popularizing the phrase "I'm going to the crapper" after seeing the name on barracks’ cisterns.  In the way army slang does, it was taken home when the servicemen returned to the US.  Despite this, most dictionaries cite the origin of the slang term to the 1920s, with popular use becoming widespread by the mid 1930s.  It spread with the empire and was noted in the era to be in use in the Indian Army although, after 1947, the troops came often to prefer "I am going to Pakistan".


Selfie with crapper backdrop: Lindsay Lohan on the set of HBO's Eastbound & Down (2013), brushing teeth while smoking.  It's an unusual combination but might work OK if one smokes a menthol cigarette and uses a nurdle of mint toothpaste.  Other combinations might clash.

By one's name, one shall be remembered.

The long-standing urban myth that Mr Crapper actually invented the flushing loo seems to lie in the 1969 book Flushed with Pride: The Story of Thomas Crapper by New Zealand-born humorist Wallace Reyburn (1913–2001), which purported to be a legitimate history.  Reyburn later wrote a "biography" of an influential inventor credited with another product without which modern life (for half the population) would also be possible but less comfortable.  His 1971 volume Bust-Up: The Uplifting Tale of Otto Titzling and the Development of the Bra detailed the life of the putative inventor of the brassiere, Otto Titzling.  Unlike Mr Crapper, Herr Titzling (Reyburn helpfully mansplaining that the correct pronunciation was "tit-sling") never existed.  In truth, the flushing loo has probably existed in a recognizably modern form since the 1400s but, although the designs were gradually improved, they remained expensive and it was not until the nineteenth century they achieved any real popularity; it was well into the next century, with the advent of distributed sanitation systems, that they became expected, everyday installations.  To mark the day of his death in 1910, 27 January is designated International Thomas Crapper Day.  Each year, on that day, at the right moment, briefly, all should pause, reflect and then, with gratitude, proceed.


Lindsay Lohan mug shots on the doors of the crappers at the Aqua Shard restaurant.  Located on the 31st floor of The Shard in London, the view is panoramic.

Saturday, August 10, 2024

Traumatic

Traumatic (pronounced traw-mat-ik (U), truh-mat-ik or trou-mat-ik (both non-U))

(1) In clinical medicine, of, relating to, or produced by a trauma or injury (listed by some dictionaries as dated but still in general use).

(2) In medicine, adapted to the cure of wounds; vulnerary (archaic).

(3) A psychologically painful or disturbing reaction to an event.

1650–1660: From the French traumatique, from the Late Latin traumaticum, from traumaticus, from the Ancient Greek τραυματικός (traumatikós) (of or pertaining to wounds), the construct being traumat- (the stem of τραῦμα (traûma) (wound, damage)) + -ikos (-ic) (the suffix used to form adjectives from nouns).  Now familiar in the diagnoses post traumatic stress disorder (PTSD) & post traumatic stress syndrome (PTSS), it was first used in a psychological sense in 1889.  Traumatic is an adjective & noun and traumatically is an adverb; the noun plural is traumatics.

PTSD, PTSS and the DSM

Exposure to trauma is part of an experience which long pre-dates the evolution of humans and has thus always been part of the human condition, the archeological record, the literature of many traditions and the medical record all replete with examples, Shakespeare's Henry IV often cited by the profession as one who would fulfill the diagnostic criteria of post traumatic stress disorder (PTSD).  Long understood and discussed under a variety of labels (famously as shell-shock during World War I (1914-1918)), it was in 1980 that the American Psychiatric Association (APA) added PTSD to the third edition of its Diagnostic and Statistical Manual of Mental Disorders (DSM-III).  The entry was expected but wasn’t at the time without controversy; it’s now part of the diagnostic orthodoxy (though perhaps over-used and even something of a fashionable term among the general population) and the consensus seems to be that PTSD filled a gap in psychiatric theory and practice.  In a sense that acceptance has been revolutionary in that the most significant innovation in 1980 was the criterion that the causative agent (the traumatic event) lay outside the individual rather than there being an inherent individual weakness (a traumatic neurosis).

However, in the DSM-III the bar was set higher than in today’s understanding and a traumatic event was conceptualized as something catastrophic which was beyond the usual range of human experience and thus able to be extremely stressful.  The original diagnostic criteria envisaged events such as war, torture, rape, natural disasters, explosions, airplane crashes and automobile accidents as being able to induce PTSD whereas reactions to the habitual vicissitudes of life (relationship breakdowns, rejection, illness, financial losses etc) were mere "ordinary stressors" and would be characterized as adjustment disorders.  The inference to draw from the DSM-III clearly was that most individuals have the ability to cope with “ordinary stress” and their capacities would be overcome only when confronted by an extraordinarily traumatic stressor.  The DSM-III diagnostic criteria were revised in the DSM-III-R (1987), DSM-IV (1994) and DSM-IV-TR (2000), at least partly in response to the emerging evidence that the condition is relatively common even in stable societies while in post-conflict regions it needed to be regarded as endemic.  The DSM-IV diagnostic criteria included a history of exposure to a traumatic event and symptoms from each of three symptom clusters: intrusive recollections, avoidant/numbing symptoms and hyper-arousal symptoms; also added were the DSM’s usual definitional parameters which stipulated (1) the duration of symptoms and (2) that the symptoms must cause significant distress or functional impairment.

#freckles: Freckles can be a traumatic experience.

The changes in the DSM-5 (2013) reflected the wealth of research and case studies published since 1980, correcting the earlier impression that PTSD could be thought a fear-based anxiety disorder: PTSD ceased to be categorized as an anxiety disorder and was instead listed in the new category of Trauma- and Stressor-Related Disorders, the critical definitional point of which is that the onset of every disorder has been preceded by exposure to a traumatic or otherwise adverse environmental event.  It required (1) exposure to a catastrophic event involving actual or threatened death or injury, (2) a threat to the physical integrity of one’s self or others (including sexual violence) or (3) some indirect exposure, including learning about the violent or accidental death of, or perpetration of sexual violence to, a loved one (reflecting the understanding in the law of personal injury torts and concepts such as nervous shock).  Something more remote, such as the depiction of events in imagery or description, was not considered a traumatic event although the repeated, indirect exposure (typically by first responders to disasters) to gruesome and horrific sights can be considered traumatic.  Another clinically significant change in the DSM-5 was that symptoms must have their onset (or a noticeable exacerbation) associated with the traumatic event.  Sub-types were also created: established were (1) the dissociative sub-type, which includes individuals who meet the PTSD criteria but also exhibit either depersonalization or derealization (respectively, alterations in the perception of one's self and the world) and (2) the pre-school sub-type (children of six years and younger), which has fewer symptoms and a less demanding form of interviewing along with lower symptom thresholds to meet full PTSD criteria.

When the revised DSM-5-TR was released early in 2022, despite earlier speculation, the condition referred to as complex post-traumatic stress disorder (CPTSD) wasn’t included as a separate item, the explanation essentially being that the existing diagnostic criteria and treatment regimes for PTSD were still appropriate in almost all cases treated by some as CPTSD, the implication presumably that this remains an instance of a spectrum condition.  That didn’t please all clinicians and even before the DSM-5-TR was released papers had been published which focused especially on instances of CPTSD being associated with events of childhood (children often having no control over the adverse conditions and experiences of their lives) and there was also the observation that PTSD is still conceptualized as a fear-based disorder, whereas CPTSD is conceptualized as a broader clinical disorder characterizing the impact of trauma on emotion regulation, identity and interpersonal domains.

Still, the DSM is never a static document and the committee has much to consider.  There is now the notion of post-traumatic stress syndrome (PTSS), which occurs within the thirty-day technical threshold the DSM establishes for PTSD, clinicians noting PTSS often goes unrecognized until a diagnosis of PTSD is made.  There is also the notion of generational trauma, said to afflict children exposed repeatedly to the gloomy future promised under climate change, and of inter-generational trauma.  Screening tools such as the PTSS-14 have proven reliable in identifying people with PTSS who are at risk of developing PTSD; through early recognition, providers may be able to intervene, thus alleviating or reducing the effects of a traumatic experience.  Long discussed also has been the effect on mental health induced by a disconnection from nature but there was no name for the malaise until Professor Glenn Albrecht (b 1953; one-time Professor of Sustainability at Murdoch University (Western Australia) and now honorary fellow in the School of Geosciences of the University of Sydney) coined psychoterratic, part of his lexicon which includes ecoagnosy (environmental ignorance or indifference to ecology) and solastalgia (the psychic pain of climate change and of missing a home transforming before one’s eyes).  The committee may find its agenda growing.

Saved by a “traumatic” transmission

In the 1960s, “the ocean was wide and Detroit far away” from Melbourne, which is why Holden was authorized to design and build its own V8 rather than follow the more obviously logical approach of manufacturing a version of Chevrolet’s fully-developed small-block V8.  The argument was the Chevrolet unit wouldn’t fit under the hood of Holden's new (HK) range, which was sort of true in that there wasn’t room for both engine and all the ancillaries like air-conditioning, power brakes and power steering, although it would have been easier and cheaper to redesign the ancillaries than embark on a whole new engine programme; but this was the 1960s and General Motors (GM) was in a position to be indulgent.  As it was, Holden’s V8 wasn’t ready in time for the release of the HK in 1968 so the company was forced in the interim to use 307 cubic inch (5.0 litre) and 327 (5.3) Chevrolet V8s, buyers able to enjoy things like power steering or disk brakes but not both.

The "Tasman Bridge" 1974 Holden Monaro GTS (308 V8 Tri-matic).  The HQ coupé was Holden's finest design. 

Also under development was a new three-speed automatic transmission to replace the legendarily robust but outdated two-speed Powerglide.  It was based on a unit designed by GM’s European operation in Strasbourg and known usually as the Turbo-Hydramatic 180 (TH180; later re-named 3L30-C & 3L30-E) although, despite the name, it lacked the Powerglide-like robustness which made the earlier (1964) Turbo-Hydramatic 400 (TH400) famous.  Holden called its version the Tri-matic (marketed eventually without the hyphen) and, like the early versions of the TH180 used in Europe, there were reliability problems, although in Australia things were worse because the six and eight cylinder engines used there subjected the components to higher torque loadings than were typical in Europe, where smaller displacement units were used.  Before long, the Tri-matic picked up the nickname “traumatic” and in the darkest days it wasn’t unknown for cars to receive more than one replacement transmission; some owners even availed themselves of their dealer’s offer to retrofit the faithful Powerglide.  The Tri-matic’s problems were eventually resolved and it became a reliable unit, even behind the 308 cubic inch (5.0 litre) Holden V8 (although no attempt was ever made to mate it with the 350 cubic inch (5.7 litre) Chevrolet V8 Holden offered as an option until 1974).  As a footnote, even today the old Powerglide retains a niche because it's well suited to drag-racing, the single gear change saving precious fractions of a second during ¼ mile (402 metre) sprints.

Whatever its troubled history, the “traumatic” did on one occasion prove a lifesaver.  In the early evening of 5 January 1975, the bulk carrier Lake Illawarra, while heading up Hobart's Derwent River, collided with the pylons of the Tasman Bridge, causing a 420 foot (128 m) section of the roadway to collapse onto the ship and into the river, killing twelve (seven of the ship's crew and five occupants of the four cars which tumbled 130 feet (40 m) into the water).  Two cars were left dangling precariously at the end of the severed structure and it emerged later that the 1974 Holden Monaro was saved from the edge only because it was fitted with a Tri-matic gearbox.  Because the casing sat lower than that of the manual gearbox, it dug into the road surface, the frictional effect enough to halt progress.

The tragedy had a strange political coda the next day when, at a press conference in The Hague in the Netherlands, the Australian prime-minister (Gough Whitlam, 1916-2014; Australian prime-minister 1972-1975) was asked about the event and instead of responding with an expression of sympathy answered:

I sent a cable to Mr Reece, the Premier of Tasmania, I suppose twelve hours ago and I received a message of thanks from him.  Now you have the text I think.  I expect there will be an inquiry into how such a ludicrous happening took place.  It's beyond my imagination how any competent person could steer a ship into the pylons of a bridge.  But I have to restrain myself because I would expect the person responsible for such an act would find himself before a criminal jury. There is no possibility of a government guarding against mad or incompetent captains of ships or pilots of aircraft.

Mr Whitlam’s government had at the time been suffering in the polls, the economy was slowing and ten days earlier Cyclone Tracy had devastated the city of Darwin.  The matter didn’t go to trial but a court of marine inquiry found the captain had not handled the ship in a proper and seamanlike manner, ordering his certificate be suspended for six months.

Aftermath:  Hobart clinical psychologist Sabina Lane has for decades treated patients still traumatized by the bridge’s collapse in 1975.  Their condition is gephyrophobia (pronounced jeff-i-ro-fo-bia), from the Ancient Greek γέφυρα (géphura) (bridge) + -phobia (fear of a specific thing; hate, dislike, or repression of a specific thing), from the New Latin, from the Classical Latin, from the Ancient Greek -φοβία (-phobía), used to form nouns meaning fear of a specific thing (the idea of a hatred came later); it describes those with an intense fear of driving over a bridge (which in the most severe cases can manifest at the mere thought or anticipation of it), sometimes inducing panic attacks.  Ms Lane said she had in the last quarter century treated some seven patients who suffered from gephyrophobia triggered by the trauma associated with the tragedy, their symptoms ranging from “...someone who gets anxious about it all the way to someone who would turn into complete hysterics."  Some, she added, were unable “…even to look at a photo of the Tasman Bridge.”  She noted the collapse remains “still quite clear in everybody's mind, and that's perhaps heightened by the fact that we stop traffic when we have a large boat passing beneath it."  Her treatment regime attempts to break the fear into manageable steps, having patients sketch the bridge or study photographs before approaching the structure and finally driving over it.

Friday, August 9, 2024

Capsule

Capsule (pronounced kap-suhl (U), kap-sool (non-U) or kap-syool (non U))

(1) In pharmacology, a gelatinous case enclosing a dose of medicine.

(2) In biology and anatomy, a membranous sac or integument; a cartilaginous, fibrous, or membranous envelope surrounding any of certain organs or parts, especially (1) the broad band of white fibres (internal capsule) near the thalamus in each cerebral hemisphere and (2) the membrane surrounding the eyeball.

(3) Either of two strata of white matter in the cerebrum.

(4) The sporangium of various spore-producing organisms, as ferns, mosses, algae, and fungi.

(5) In botany, a dry dehiscent fruit (one that liberates its seeds by splitting, as in the violet, or through pores, as in the poppy), composed of two or more carpels.

(6) A small case, envelope, or covering.

(7) In aerospace, a sealed cabin, container, or vehicle in which a person or animal can ride in flight in space or at very high altitudes within the earth's atmosphere (also called space-capsule).

(8) In aviation, a similar cabin in a military aircraft, which can be ejected from the aircraft in an emergency, complete with crew and instruments etc; an outgrowth of the original escape device, the ejector-seat.  The concept is used also by some sea-going vessels and structures such as oil-rigs where they’re essentially enclosed life-boats equipped for extended duration life-support.

(9) A thin cap or seal (made historically from lead or tin but now usually of plastic), covering for the mouth of a corked (ie sealed with some sort of stopper) bottle.

(10) A concise report; brief outline.

(11) To furnish with or enclose in or as if in a capsule; to encapsulate; to capsulize.

(12) In bacteriology, a gelatinous layer of polysaccharide or protein surrounding the cell wall of some bacteria and thought to be responsible for the virulence in pathogens.  The outer layer of viscous polysaccharide or polypeptide slime of the capsules with which some bacteria cover their cell walls is thought to provide defense against phagocytes and prevent the bacteria from drying out.

(13) In the fashion industry (as a modifier), a sub-set of a collection containing the most important or representative items (a capsule-collection).

(14) In chemistry, a small clay saucer for roasting or melting samples of ores etc, known also as a scorifier (archaic); a small, shallow evaporating dish, usually of porcelain.

(15) In ballistics, a small cup or shell, often of metal, for a percussion cap, cartridge etc.

1645–1655: From the Middle English capsula (small case, natural or artificial), from the French capsule (a membranous sac) or directly from the Latin capsula (small box or chest), the construct being caps(a) (box; chest; case) + -ula (the diminutive suffix).  The medicinal sense dates from 1875, the shortened form being that adopted in 1942 by British army quartermasters in their inventory and supply lists (eg Cap, ASA, 5 Gr (ie a 5 grain capsule of aspirin)).  The use to describe the part of a spacecraft containing the crew is from 1954, thought influenced by the number of military personnel involved during the industry’s early years, the sense from the jargon of ballistics meaning "shell of a metallic cartridge" dating from 1864 (although the word in this context had earlier been used in science fiction (SciFi or SF)).  Capsule has been applied as an adjective since 1938.  The verb encapsulate (enclose in a capsule) is from 1842 and was in figurative use by 1939 whereas the noun encapsulation didn’t appear until 1859, its figurative use dating from 1934.  Capsule is a noun & verb, capsuler, capsulization & encapsulation are nouns, encapsule, capsulizing, encapsulated & encapsulating are verbs, and capsulated, capsuliferous & capsuligenous are adjectives; the noun plural is capsules.  In medicine, the adjective capsuloligamentous is used in anatomical science to mean "relating to a capsule and a ligament".

Science (especially zoology, botany, medicine & anatomy) has found many uses for capsule (because in nature capsule-like formations occur with such frequency) as a descriptor, including the nouns capsulotomy (incision into a capsule, especially into the lens of the eye when removing cataracts), capsulogenesis (the generation and development of a capsule), capsulorhexis (the removal of the lens capsule during cataract surgery) & capsulectomy (the removal of a capsule, especially one that surrounds an implant) and the adjective capsuloligamentous (of or relating to a capsule and a ligament).  Science also applied modifiers as required, thus forms such as intercapsule, pseudocapsule, microcapsule, macrocapsule & subcapsule.  Industry found a use, the noun capsuler describing a machine for applying the capsule to the cork of a wine bottle, and the first "space capsules" (the part of spaceships with the life-support systems able to sustain life and thus used as the crew compartment) appeared in SF long before any were built or launched.  The derived forms most frequently used are encapsulate and its variations encapsulation and encapsulated.

The Capsule in Asymmetric Engineering

Focke-Wulf Fw 189 Eurl (Owl).

Unusual but far from unique in its structural asymmetry and offset crew-capsule, the Blohm & Voss BV 141 was a tactical reconnaissance aircraft built in small numbers and used in a desultory manner by the Luftwaffe during WWII.  A specification issued in 1937 by the Reichsluftfahrtministerium (RLM; the German Air Ministry) had called for a single-engine reconnaissance aircraft, optimized for visual observation, and in response Focke-Wulf offered their Fw 189 Eurl (Owl) which, because of the twin-engined, twin-boomed layout, encountered some resistance from the RLM bureaucrats but found much favor with the Luftwaffe and, over the course of the war, some nine-hundred entered service; it was used almost exclusively as the Germans' standard battlefield reconnaissance aircraft.  In fact, so successful did it prove in this role that the other configurations it was designed to accommodate, those of liaison and close-support ground-attack, were never pursued.  Although its performance was modest, it was a fine airframe with superb flying qualities and an ability to absorb punishment which, on the Russian front where it was extensively deployed, became famous; captured examples provided Russian aeronautical engineers with ideas which would for years influence their designs.

Arado Ar 198.

The RLM had also invited Arado to tender but their Ar 198, although featuring an unusual under-slung and elongated cupola which afforded the observer a uniquely panoramic view, proved unsatisfactory in test-flights and development ceased.  Blohm & Voss hadn't been included in the RLM's invitation but chose anyway to offer a design which was radically different even by the standards of the innovative Fw 189.  The asymmetric BV 141 design was intriguing, with the crew housed in an extensively glazed capsule offset to starboard of the centre-line and a boom, offset to port, housing the single engine in front and the tail to the rear.  Prototypes were built as early as 1938 and the Luftwaffe conducted operational trials over both the UK and USSR between 1939-1941 but, despite being satisfactory in most respects, the BV 141 was hampered by poor performance, a consequence of using an under-powered engine.  A re-design of the structure to accommodate more powerful units was begun but delays in development and the urgent need for the up-rated engines for machines already in production doomed the project and the BV 141 was abandoned in 1943.

Blohm & Voss BV 141 prototype.

Blohm & Voss BV 141.

Despite the ungainly appearance, the test-pilots reported the BV 141 was a nicely balanced airframe, the seemingly strange weight distribution well compensated by (1) component placement, (2) the specific lift characteristics of the wing design and (3) the choice of rotational direction of both crankshaft and propeller, the torque generated used as a counter-balance.  Nor, despite the expectation of some, were there difficulties in handling whatever behavior was induced by the thrust versus drag asymmetry, pilots all indicating some intuitive trimming was all that was needed to compensate for any induced yaw.  The asymmetry extended even to the tail-plane, the starboard elevator and horizontal stabilizer removed (to afford the tail-gunner a wider field of fire) after the first three prototypes were built; surprisingly, this was said barely to affect the flying characteristics.  Blohm & Voss pursued the concept, a number of design-studies (including a piston & turbojet-engine hybrid) initiated but none progressed beyond the drawing-board.
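The balance the pilots described can be reduced to a back-of-envelope calculation: in steady flight thrust equals total drag, so the yawing moment vanishes when the lateral "centre of drag" sits on the thrust line, anything left over being handled by trim.  What follows is a minimal sketch in Python in which every figure is a hypothetical assumption invented for the arithmetic (no Blohm & Voss data survives in this form), not a reconstruction of the actual aircraft.

    # Minimal yaw-balance sketch for an asymmetric layout (all numbers hypothetical).
    # In steady flight thrust equals total drag, so the yawing moment vanishes when
    # the lateral "centre of drag" lies on the thrust line; any residue is trimmed out.

    drag_sources = {             # name: (drag in newtons, lateral offset in metres; +port, -starboard)
        "engine boom": (2600.0, +0.40),
        "crew capsule": (1400.0, -0.85),
    }
    thrust_line_y = +0.40        # the propeller sits on the boom (assumed offset)

    total_drag = sum(d for d, _ in drag_sources.values())
    drag_centroid_y = sum(d * y for d, y in drag_sources.values()) / total_drag

    # Residual yawing moment about the thrust line (positive = nose yaws to port).
    residual = total_drag * (drag_centroid_y - thrust_line_y)
    print(f"drag centroid {drag_centroid_y:+.2f} m, residual moment {residual:+.0f} N·m to trim")

With these assumed figures the capsule pulls the centre of drag to starboard of the thrust line, leaving a small residual nose-to-starboard moment, which is exactly the sort of induced yaw the pilots said a little intuitive trimming removed.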

Lindsay Lohan's promotion of Los Angeles-based Civil Clothing's capsule collection, November 2014.  The pieces were an ensemble in black & white, named "My Addiction".

The capsule on the circuits

Bisiluro Damolnar, Le Mans, 1955.

The concept of the asymmetric capsule made little impact in aviation but it certainly made an impression on “Smokey” Yunick (Henry Yunick 1923–2001).  Smokey Yunick was an American mechanic and self-taught designer who was for years one of the most innovative and imaginative builders in motorsport.  A dominant force in the early years of NASCAR where his team won two championships and dozens of races, he continued his involvement there and in other arenas for over two decades including the Indianapolis 500, his car winning the 1960 event.  During WWII, Yunick had piloted a Boeing B-17 Flying Fortress for the 97th Bombardment Group (Heavy), flying some fifty missions out of Amendola Field, Italy and on one run he’d seen in the skies over Germany a Blohm & Voss BV 141, intrigued by the outrigger capsule in which sat the crew and immediately trying to imagine how such a layout would affect the flying characteristics.  The image of the strange aircraft stayed with him and a decade later he noted the Bisiluro Damolnar which ran at Le Mans in 1955, the year of the horrific accident in which eighty-four died.  He must have been encouraged by the impressive pace of the Bisiluro Damolnar rather than its high-speed stability (it was blown (literally) off the track by a passing Jaguar D-Type) and, to contest the 1964 Indianapolis 500, he created a capsule-car.

Hurst Floor Shifter Special, Indianapolis, 1964.

Like many of the machines Yunick built, the capsule-car was designed with the rule-book in one hand and a bucket of the sponsor’s money in the other, the Hurst Corporation in 1964 paying US$40,000 (equal to circa US$335,000 in 2021) for the naming rights.  Taking advantage of the rules of the USAC (the Indianapolis 500’s sanctioning body) which permitted the cars to carry as much as 75 gallons (284 litres) of fuel, some did, the placement of the tanks being an important factor in the carefully calculated weight-distribution.  The drawback of a heavy fuel load was greater weight which, early on, decreased speed and increased tyre wear but did offer the lure of less time spent re-fueling, so what Yunick did was take a novel approach to the "fuel as ballast" principle, balancing the mass by placing the driver and fuel towards the front and the engine to the rear, the desired leftward bias (the Indianapolis 500 being run anti-clockwise) achieved by specific placement.  His great innovation was that by using a separate, left-side capsule for the driver, he created three different weight masses (front, rear and left-centre) which, in theory, would both improve aerodynamic efficiency and optimize weight distribution.
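The three-mass idea lends itself to a simple centre of gravity calculation and the sketch below (Python; every mass and position is a hypothetical figure chosen only to show the arithmetic, the car's real specifications never having been published in this detail) shows how placing fuel forward, engine aft and the driver's capsule to the left produces the desired leftward bias.

    # Sketch of the three-mass layout: combined centre of gravity of front (fuel),
    # rear (engine) and left-centre (driver capsule) masses.  All figures are
    # hypothetical assumptions, not specifications of the actual capsule-car.

    masses = {                    # name: (mass kg, x metres fore-aft, y metres left of centre-line)
        "front fuel": (270.0, +1.8, +0.3),
        "rear engine": (230.0, -1.5, 0.0),
        "driver capsule": (120.0, 0.0, +0.7),
    }

    total = sum(m for m, _, _ in masses.values())
    cg_x = sum(m * x for m, x, _ in masses.values()) / total
    cg_y = sum(m * y for m, _, y in masses.values()) / total

    print(f"total mass {total:.0f} kg, CG at x={cg_x:+.2f} m, y={cg_y:+.2f} m (leftward bias)")

Because the fuel is one of the three masses, the combined centre of gravity migrates as the load burns off during the race, which is why the tank placement had to be part of the calculation rather than an afterthought.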

Hurst Floor Shifter Special, Indianapolis, 1964.

Despite the appearance, the capsule-car was more conventional than intended.  The initial plan had been to use a turbine engine (as Lotus later would, almost successfully) and a single throttle/brake control but, for various reasons, it ended up using the ubiquitous Offenhauser power-plant and a conventional, two-pedal setup.  Upon arrival at the track, it made quite an impression and many understood the theories which had inspired the design.  Expectations were high.  Unfortunately, the theories didn’t work in practice and the car struggled to reach competitive speeds, an attempt at a qualifying lap delayed until the last available day.  Going into turn one at speed, a problem with the troublesome brakes caused a loss of control and the car hit the wall, the damage severe enough to preclude any chance of repairs being made in time for the race.

Hurst Floor Shifter Special, Indianapolis, 1964.

Yunick wasn’t discouraged and remained confident a year was enough time to develop the concept and solve the problems the shakedown on the circuit had revealed but the capsule-car would never race again, rule changes imposed after a horrific crash early in the 1964 race meaning it would have been impossible for it to conform yet remain competitive.  Effectively rendered illegal, the capsule-car was handed to the Indianapolis Motor Speedway Museum, where it's sometimes displayed.

Japanese Hotels: The Pod and the Capsule

The term "capsule hotel" is a calque of the Japanese カプセルホテル (kapuseru hoteru).  The capsule hotel is a hotel with very small accommodation units which certainly can’t be called “rooms” in any conventiona sense of the word although the property management software (PMS) the operators use to manage the places is essentially the same (though simplified because there’s no need to handle things such as mini-bars, rollaway beds etc).  Although not exclusive to Japan, it’s Japanese cities with which the concept is most associated, the first opened in Osaka in 1979 and they were an obvious place for the idea to emerge because of the high cost of real estate.  Although the market has softened since the “property bubble” which in 1989 peaked with Tokyo commercial space alone reputedly (at least as extrapolated by the theorists) worth more than the continental United States, the cost per m2 remains high by international standards.  Because one typical hotel room can absorb as many m3 as a dozen or more capsules, the optimized space efficiency made the economic model compelling, even as a niche market.

Anna in Capsule 620.

Many use the terms “pod hotel” (pod used here in the individual and not the collective sense) & “capsule hotel” interchangeably to describe accommodation units which offer compact sleeping spaces with minimal additional facilities but in Japan the industry does note there are nuances of difference between the two.  Both are similar in that structurally the design is one of an array of small, pod-like sleeping units stacked side by side and/or atop each other in a communal space.  In a capsule hotel, the amenities are limited usually to a bed, a small television and some (limited) provision of personal storage space, with bathroom facilities shared and located in the communal area.  The target market traditionally has been budget travellers (the business as well as the leisure market) but there was for a while the phenomenon of those booking a night or two just to post the images as something exotic on Instagram and other platforms.  Interestingly, "female only" capsule hotels are a thing, which must be indicative of something.

Entrance to the world of your capsule, 9h nine hours Suidobashi, Tokyo.

The “pod hotel” came later and tended to be (slightly) larger, some 10-20% more expensive and positioned deliberately as “upmarket”, obviously a relative term and best thought of as vaguely analogous with the “premium economy” seats offered by airlines.  Compared with a capsule, a pod might have adjustable lighting, a built-in entertainment system supporting BYOD (bring your own device) and somewhat more opulent bedding.  Demand clearly existed and a few pod hotels emerged with even a private bathroom and additional storage space although the sleeping area tended to remain the same.  It’s part of Japanese urban folklore that these more self-contained pods are often used by the famous “salarymen” who find them an attractive alternative to finding their way home after an evening of karaoke, strong drink, the attention of hostesses and such.  That aspect of the salaryman lifestyle predated the 1980s; capsules and pods were just a more economic way of doing things.  Not predicted, however, in a country which had since the mid-1950s become accustomed to prosperity, full-employment and growth, were the recessions and consequent increases in unemployment which became part of the economy after the bubble burst in 1990.  In this environment, the capsules and especially the pods became low-cost alternative accommodation for the under-employed & unemployed and, while estimates vary according to the city and district, it may be that at times as many as 20% of the units were rented on a weekly or monthly basis by those for whom the cost of a house or apartment had become prohibitive.