Friday, July 15, 2022

Decapitate

Decapitate (pronounced dih-kap-i-teyt)

(1) To cut off the head; to behead.

(2) Figuratively, to oust or destroy the leadership or ruling body of a government, military formation, criminal organization etc.

1605–1615: From the fourteenth century French décapiter, from the Late Latin dēcapitātus, past participle of dēcapitāre, the construct being dē- + capit- (stem of caput (head), genitive capitis), from the Proto-Italic kaput, from the Proto-Indo-European káput- (head) + -ātus.  The Latin prefix dē- (off) was from the preposition dē (of, from); the Old English æf- was a similar prefix.  The Latin suffix -ātus was from the Proto-Italic -ātos, from the primitive Indo-European -ehtos.  It’s regarded as a "pseudo-participle" and perhaps related to -tus although similar formations in other Indo-European languages indicate it was distinct from it already in early Indo-European times.  It was cognate with the Proto-Slavic -atъ and the Proto-Germanic -ōdaz (the English form being -ed (having)).  The feminine form was -āta, the neuter -ātum and it was used to form adjectives from nouns indicating the possession of a thing or a quality.  The English suffix -ate was a word-forming element used in forming nouns from Latin words ending in -ātus, -āta, & -ātum (such as estate, primate & senate).  Those that came to English via French often began with -at, but an -e was added in the fifteenth century or later to indicate the long vowel.  It can also mark adjectives formed from the Latin perfect passive participle suffixes of first conjugation verbs -ātus, -āta, & -ātum (such as desolate, moderate & separate).  Again, often they were adopted in Middle English with an -at suffix, the -e appended after circa 1400; a doublet of -ee.  Decapitate, decapitated & decapitating are verbs, decapitation & decapitator are nouns.

Lindsay Lohan gardening with a lopper in her gloved hands, decapitation a less demanding path to destruction than deracination, New York City, May, 2015.  She appears to be relishing the task.

As a military strategy, the idea of decapitation is as old as warfare and based on the adage “cut the head off the snake”.  The technique of decapitation is to identify the leadership (command and control) of whatever structure or formation is hostile and focus available resources on that target.  Once the leadership has been eliminated, the effectiveness of the rest of the structure should be reduced and the idea is applied also in cyber warfare although in that field, target identification can be more difficult.  The military’s decapitation strategy is used by many, including law enforcement bodies, and can to some extent be applied in just about any form of interaction which involves conflicting interests.  The common English synonym is behead and that word may seem strange because it means “to take off the head” where the English word bejewel means “to put on the jewels”; it’s because of the strange and shifting prefix "be-".  Behead was from the Middle English beheden, bihefden & biheveden, from the Old English behēafdian (to behead).  The prefix be- however evolved from its use in Old English.  In modern use it’s from the Middle English be- & bi-, from the Old English be- (off, away), from the Proto-Germanic bi- (be-), from the Proto-Germanic bi (near, by), the ultimate root being the primitive Indo-European hepi (at, near) and it was cognate with the be- in the Saterland Frisian, the West Frisian, the Dutch, the German & Low German and the Swedish.  When the ancestors of behead were formed, the prefix be- was appended to create the sense of “off; away” but over the centuries it’s also invested the meanings “around; about” (eg bestir), “about, regarding, concerning” (eg bemoan), “on, upon, at, to, in contact with something” (eg behold), “as an intensifier” (eg besotted), “forming verbs derived from nouns or adjectives, usually with the sense of ‘to make, become, or cause to be’” (eg befriend) & “adorned with something” (eg bejewel).

A less common synonym is decollate, from the Latin decollare (to behead) and there’s also the curious adjective decapitable (literally “able or fit to be decapitated”) which presumably is synonymous with “someone whose head has not been cut off” though not necessarily with someone alive, some corpses during the French Revolution being carted off to be guillotined, the symbolism of the seemingly superfluous act said to have been greeted by the mob "with a cheer".  Just as pleasing though less bloody were the Citroën cabriolets crafted between 1958-1974 by French coachbuilder Henri Chapron (1886-1978).

1971 Citroën DS21 Décapotable Usine.

Produced between 1955-1975, the sleek Citroën DS must have seemed like something from science fiction to those accustomed to what was then plying the roads but although it soon came to be regarded as something quintessentially French, the DS was actually designed by an Italian.  In this it was similar to French fries (invented in Belgium) and Nicolas Sarközy (b 1955; President of France 2007-2012), who first appeared on the planet the same year as the shapely DS and was actually from here and there.  It was offered as the DS and the lower priced ID, the names a play on words, DS in French pronounced déesse (goddess) and ID idée (idea).  The goddess nickname caught on though idea never did.

Citroën Cabriolet d'Usine production count, 1960-1971.

Henri Chapron had attended the Paris Auto Salon when the DS made its debut and while Citroën had planned to offer a cabriolet, little had been done beyond some conceptual drawings and development resources were instead devoted to higher-volume variants, the ID (a less powerful DS with simplified mechanicals and less elaborate interior appointments) which would be released in 1957 and the Break (a station wagon marketed variously as the Safari, Break, Familiale or Wagon), announced the next year.  Chapron claimed it took him only a glance at the DS on display instantly to visualise the form his cabriolet would take but creating one proved difficult because such was the demand Citroën declined to supply a partially complete platform, compelling the coach-builder to secure a complete car from a dealer willing (on an undisclosed basis) to “bump” his name up the waiting list while he worked on the blueprints.  It wasn’t until 1958 Carrosserie Chapron presented their first DS cabriolet, dubbed La Croisette, named after the emblematic coastal boulevard of Cannes and while initially it wasn’t approved by the factory (compelling Chapron to continue buying complete cars from dealers), it was obvious to Citroën’s engineers that they’d been presented with a shortcut to production.  Accordingly, Chapron designed a DS cabriolet suited to series production (as opposed to his bespoke creations) and that meant using the longer wheelbase platform of the Break, chosen because it was structurally enhanced to cope with the loads station wagons carry.  Beginning in 1960, these (in ID & DS versions) were the approved Cabriolets d'Usine, distributed until 1971 through Citroën’s dealer network, complete with a factory warranty.

1964 Citroën DW19 Décapotable Usine.

The DS and ID are well documented in the model's history but there was also the more obscure DW, built at Citroën's UK manufacturing plant in the Berkshire town of Slough which sits in the Thames Valley, some 20 miles west of London.  The facility was opened in February 1926 as part of the Slough Trading Estate (opened just after World War I (1914-1918)) which was an early example of an industrial park, the place having the advantage of the required infrastructure, constructed by the government for wartime production and maintenance activities.  Citroën was one of the first companies to be established on the site, overseas assembly prompted by the UK government's imposition of tariffs (33.3% on imported vehicles, excluding commercial vehicles) and the move had the added advantage of the right-hand-drive (RHD) cars being able to be exported throughout the British Empire under the “Commonwealth Preference” arrangements then in place.  Unlike similar operations, which in decades to come would appear world-wide, the Slough Citroëns were not assembled from CKD (completely knocked down) kits which needed only local labor to put them together but used a mix of imported parts and locally produced components.  The import tariff was avoided if the “local content” (labor and domestically produced parts (although those sourced from elsewhere in the empire could qualify)) reached a certain threshold (measured by the total value in local currency); it was an approach many governments would follow and elements of it exist even today as a means of encouraging (and protecting) local industries and creating employment.  People able to find jobs in places like Slough would have been pleased but for those whose background meant they were less concerned with something as tiresome as paid-employment, the noise and dirt of factories seemed just a scar upon the “green and pleasant land” of William Blake (1757–1827).  In his poem Slough (1937), Sir John Betjeman (1906–1984; Poet Laureate 1972-1984), perhaps recalling Stanley Baldwin's (1867–1947; UK prime-minister 1923-1924, 1924-1929 & 1935-1937) “The bomber will always get through” speech (1932), welcomed the thought, writing: “Come friendly bombs and fall on Slough! / It isn’t fit for humans now”.  Within half a decade, the Luftwaffe would grant his wish.

1964 Citroën DS19 Décapotable Usine.

During World War II (1939-1945), the Slough plant was repurposed for military use and some 23,000 CMP (Canadian Military Pattern) trucks were built, civilian production resuming in 1946.  After 1955, Slough built both the ID and DS, the latter including the traditionally English leather trim and a wooden dashboard, a touch some critics claimed was jarring amid the otherwise modernist ambiance but the appeal was real because some French distributors imported the Slough dashboard parts for owners who liked the look.  The UK-built cars also used 12 volt Lucas electrics until 1963 and it was in that year the unique DW model was slotted in between the ID and DS.  Available only with a manual transmission and a simplified version of the timber veneer, the DW was configured with the ID's foot-operated clutch but used the more powerful DS engine, power steering and power brakes.  When exported, the DW was called the DS19M and the "DW" label was applied simply because it was Citroën's internal code to distinguish (RHD) models built in the UK from the standard left-hand-drive (LHD) models produced in France.  Citroën assembly in Slough ended in February 1965 and although the company initially retained the plant as a marketing, service & distribution centre, in 1974 these operations were moved to other premises and the buildings were taken over by Mars Confectionery.  Today no trace remains of the Citroën works in Slough.

1963 Citroën Le Dandy & 1964 Citroën Palm Beach by Carrosserie Chapron.

Citroën DS by Carrosserie Chapron production count. 

Demand was higher at a lower price-point, as Citroën's 1325 cabriolets indicate, but until 1974 Carrosserie Chapron maintained output of its more exclusive and expensive lines although by the late 1960s, output, never prolific, had slowed to a trickle.  Chapron’s originals varied in detail and the most distinguishing difference between the flavors was in the rear coachwork, the more intricate being those with the "squared-off" (sometimes called "finned" or "fin-tailed") look, a trick Mercedes-Benz had in 1957 adopted to modernize the 300d (W189, 1957-1963), the so-called "Adenauer Mercedes", named after Konrad Adenauer (1876–1967; chancellor of the FRG (Federal Republic of Germany, the old West Germany) 1949-1963) who used several of the 300s (the W186 300, 300b & 300c, 1951-1957, and the W189) as his official state cars.  Almost all Chapron's customized DS models were built to special order under the model names La Croisette, Le Paris, Le Caddy, Le Dandy, Concorde, Palm Beach, Le Léman, Majesty & Lorraine; all together, 287 of these were delivered and reputedly, no two were exactly alike.

Citroën Concorde coupés by Chapron: 1962 DS 19 (left) and 1965 DS 21 (right).  The DS 21 is one of six second series cars, distinguished by their “squared-off” rear wing treatment, and includes almost all the luxury options on Chapron's list: electric windows, leather trim, the Jaeger instrument cluster, a Radiomatic FM radio with automatic Hirschmann antenna, the Robergel wire wheel covers and the Marchal auxiliary headlights.

Alongside the higher-volume Cabriolets d'Usine, Carrosserie Chapron continued to produce much more expensive décapotables (the Le Caddy and Palm Beach cabriolets) as well as limousines (the Majesty) and coupés, the most numerous of the latter being Le Dandy, some 50 of which were completed between 1960-1968.  More exclusive still was another variation of the coupé coachwork, the Concorde, with a more spacious cabin, notable for the greater headroom it afforded the rear passengers.  Only 38 were built over five years and at the time they cost as much as the most expensive Cadillac 75 Limousine.

The Citroën SM, a few of which were decapitated 

1972 Citroën SM (left) & 1971 Citroën SM Mylord by Carrosserie Chapron (right).  The wheels are the Michelin RR (roues en résine or résine renforcée (reinforced resin)) composites, cast using a patented technology invented by NASA for the original moon buggy.  The Michelin wheel was one-piece and barely a third the weight of the equivalent steel wheel but the idea never caught on, doubts existing about their long-term durability and susceptibility to extreme heat (the SM had inboard brakes).  

Upon release in 1970, the Citroën SM immediately was recognized as among the planet's most intricate and intriguing cars.  A descendant of the DS which in 1955 had been even more of a sensation, it took Citroën not only up-market but into a niche the SM had created, nothing quite like it previously existing, the combination of a large (in European terms), front-wheel-drive (FWD) luxury coupé with hydro-pneumatic suspension, self-centring (Vari-Power) steering, high-pressure braking and a four-cam V6 engine, a mix unique in the world.  The engine had been developed by Maserati, one of Citroën’s recent acquisitions, and the name acknowledged the Italian debt, SM standing for Système Maserati.  Although, given the size and weight of the SM, the V6 was of modest displacement to attract lower taxes (initially 2.7 litres (163 cubic inch)) and power was limited (181 HP (133 kW)) compared to the competition, such was the slipperiness of the body's aerodynamics that in terms of top speed, it was at least a match for most.

1973 Citroën SM with reproduction RR wheels in aluminium.

However, lacking the high-performance pedigree enjoyed by some of that competition, a rallying campaign had been planned as a promotional tool.  Although obviously unsuited to circuit racing, the big, heavy SM didn’t immediately commend itself as a rally car; early tests indicated some potential but there was a need radically to reduce weight.  One obvious candidate was the steel wheels but attempts to use lightweight aluminum units proved abortive, cracking encountered when tested under rally conditions.  Michelin immediately offered to develop glass-fibre reinforced resin wheels, the company familiar with the material which had proved durable when tested under extreme loads.  Called the Michelin RR (roues en résine (resin wheels)), the new wheels were created as a one-piece mold, made entirely of resin except for some embedded steel reinforcements at the stud holes to distribute the stresses.  At around 9.4 lb (4¼ kg) apiece, they were less than half the weight of a steel wheel and in testing proved as strong and reliable as Michelin had promised.  Thus satisfied, Citroën went rallying.

Citroën SM, Morocco Rally, 1971.

The improbable rally car proved a success, winning first time out in the 1971 Morocco Rally and further success followed.  Strangely, the 1970s proved an era of heavy cruisers doing well in the sport, Mercedes-Benz winning long-distance events with their 450 SLC 5.0 which was both the first V8 and the first car with an automatic transmission to win a European rally.  Stranger still, Ford in Australia re-purposed one of the Falcon GTHO Phase IV race cars which had become redundant when the programme was cancelled in 1972 and the thing proved surprisingly competitive during the brief periods it was mobile although the lack of suitable tyres meant repeatedly the sidewalls would fail; the car was written off after a serious crash.  The SM, GTHO & SLC proved a quixotic tilt and the sport went in a different direction.  On the SM however, the resin wheels had proved their durability, not one failing during the whole campaign and, encouraged by customer requests, Citroën in 1972 offered the wheels as a factory option although only in Europe; apparently the thought of asking the US federal safety regulators to approve plastic wheels (as they’d already been dubbed by the motoring press) seemed to the French so absurd they never bothered to submit an application.

1974 prototype Citroën SM with 4.0 V8.

Ambitious as it was, circumstances combined in a curious way that might have made the SM more remarkable still.  By 1973, sales of the SM, after an encouraging start, had for two years been in decline, reports of unreliability already tarnishing its reputation, but the first oil shock dealt what appeared to be a fatal blow; from selling almost 5000 in 1971, by 1974 production numbered not even 300.  The market for fast, thirsty cars had shrunk and most of the trans-Atlantic hybrids (combining elegant European coachwork with large, powerful and cheap US V8s), which had for more than a decade done good business as alternatives to the highly strung British and Italian thoroughbreds, had been driven extinct.  Counter-intuitively, Citroën’s solution was to develop an even thirstier V8 SM and that actually made some sense because, in an attempt to amortize costs, the SM’s platform had been used as the basis for the new Maserati Quattroporte but, the car bigger and heavier still, performance was sub-standard and the theory was a V8 version would transform both and appeal to the US market, then the hope of many struggling European manufacturers.

Recreation of 1974 Citroën SM V8 prototype.

Citroën didn’t have a V8; Maserati did but it was big and heavy, a relic with origins in racing and while its (never wholly tamed) raucous qualities suited the character of the sports cars and saloons Maserati offered in the 1960s, it couldn’t be used in something like the SM.  However, the SM’s V6 was a 90° unit and thus inherently better suited to an eight-cylinder configuration.  In 1974 therefore, a four litre (244 cubic inch) V8 based on the V6 (by then 3.0 litres (181 cubic inch)) was quickly built and installed in an SM which was subjected to the usual battery of tests over a reported 20,000 km (12,000 miles) during which it was said to have performed faultlessly.  Bankruptcy (to which the SM, along with some of the company's other ventures, notably the GS Birotor Wankel programme, contributed) however was the death knell for both the SM and the V8, the prototype car scrapped while the unique engine was removed and stored, later used to create a replica of the 1974 test mule.

Evidence does however suggest a V8 SM would likely have been a failure, just compounding the existing error on an even grander scale.  It’s true that Oldsmobile and Cadillac had offered big FWD coupés with great success since the mid 1960s (the Cadillac at one point fitted with a 500 cubic inch (8.2 litre) V8 rated at 400 HP (300 kW)) but they were very different machines to the SM and appealed to a different market.  Probably the first car to explore what demand might have existed for a V8 SM was the hardly successful 1986 Lancia Thema 8•32 which used the Ferrari 2.9 litre (179 cubic inch) V8 in a FWD platform.  Although well-executed within the limitations the configuration imposed, it was about as daft an idea as it sounds.  Even had the V8 SM been all-wheel-drive (AWD) it would probably still have been a failure but it would now be remembered as a revolution ahead of its time.  As it is, the whole SM story is just another cul-de-sac, albeit one which has become a (mostly) fondly-regarded cult.

State Citroëns by Carrosserie Chapron: 1968 Citroën DS state limousine (left) and 1972 Citroën SM Présidentielle (right).

In the summer of 1971, after years of slowing sales, Citroën announced the end of the décapotable usine and Chapron’s business model suffered, the market for specialized coach-building, in decline since the 1940s, now all but evaporated.  Chapron developed a convertible version of Citroën’s new SM called the Mylord but, very expensive, it was little more successful than the car on which it was based; although engineered to Chapron’s high standard, fewer than ten were built.  Government contracts did for a while seem to offer hope.  Charles de Gaulle (1890–1970; President of France 1958-1969) had been aghast at the notion the state car of France might be bought from Germany or the US (it’s not known which idea he thought the more appalling and apparently nobody bothered to suggest buying British) so, at his instigation, Chapron (apparently without great enthusiasm) built a long wheelbase DS Presidential model.

Size matters: Citroën DS Le Presidentielle (left) and LBJ era stretched Lincoln Continental by Lehmann-Peterson of Chicago (right).

Begun in 1965, the project took three years, legend having it that de Gaulle himself stipulated little more than that it be longer than the stretched Lincoln Continentals then used by the White House (John Kennedy (JFK, 1917–1963; US president 1961-1963) was assassinated in the Continental X-100, modified by Hess & Eisenhardt) and this was achieved, despite the requirement the turning circle had to be tight enough to enter the Elysée Palace’s courtyard from the rue du Faubourg Saint-Honoré and then pull up at the steps in a single maneuver.  Delivered just in time for the troubles of 1968, the car's slinky lines were much admired in the Élysée and in 1972, Chapron was given a contract to supply two really big four-door convertible (Le Presidentielle) SMs as the state limousines for Le Général’s successor, Georges Pompidou (1911–1974; President of France 1969-1974).  First used for the 1972 state visit of Elizabeth II (1926-2022; Queen of the UK and other places, 1952-2022), they remained in regular service until the inauguration of Jacques Chirac (1932–2019; President of France 1995-2007) in 1995, seen again on the Champs-Élysées in 2004 during Her Majesty’s three-day state visit marking the centenary of the Entente Cordiale.

1972 Citroën SM Opera by Carrosserie Chapron (left) & 1973 Maserati Quattroporte II (right).

However, state contracts for the odd limousine, while individually lucrative, were not a model to sustain a coach building business and a year after the Mylord was first displayed, Chapron inverted his traditional practice and developed from a coupé a four-door SM called the Opera.  On a longer wheelbase, stylistically it was well executed but it was heavy and both performance and fuel consumption suffered, the additional bulk also meaning some agility was lost.  Citroën was never much devoted to the project because they had in the works what was essentially their own take on a four-door SM, sold as the Maserati Quattroporte II (the Italian house having earlier been absorbed) but as things transpired in those difficult years, neither proved a success, only eight Operas and a scarcely more impressive thirteen Quattroporte IIs ever built.  The French machine deserved more, the Italian knock-off, probably not.  In 1974, Citroën entered bankruptcy, dragged down in part by the debacle which the ambitious SM had proved to be, although there had been other debacles worse still.  Four years later, Henri Chapron died in Paris, his much down-sized company lingering on for some years under the direction of his industrious widow, the bulk of its work now customizing Citroën CXs.  Operations ceased in 1985 but the legacy is much admired and the décapotables remain a favorite of collectors and film-makers searching for something with which to evoke the verisimilitude of 1960s France.

Judith and the decapitation of Holofernes

In the Bible, the deuterocanonical books (literally “belonging to the second canon”) are those books and passages traditionally regarded as the canonical texts of the Old Testament, some of which long pre-date Christianity, some composed during the “century of overlap” before the separation between the Christian church and Judaism became institutionalized.  As the Hebrew canon evolved, the seven deuterocanonical books were excluded and on this basis were not included in the Protestant Old Testament, those denominations regarding them as apocrypha and they’ve been characterized as such since.  Canonical or not, the relationship of the texts to the New Testament has long interested biblical scholars, none denying that links exist but there’s wide difference in interpretation, some finding (admittedly while giving the definition of "allusion" wide latitude) a continuity of thread, others only fragmentary references and even then, some paraphrasing is dismissed as having merely a literary rather than historical or theological purpose.

Le Retour de Judith à Béthulie (The Return of Judith to Bethulia) (1470) by Botticelli (circa 1444-1510).

The Book of Judith thus exists in the Roman Catholic and Eastern Orthodox Old Testaments but is assigned (relegated, some of the hard-liners might say) by Protestants to the apocrypha.  It is the tale of Judith (יְהוּדִית in the Hebrew and the feminine of Judah), a legendarily beautiful Jewish widow who uses her charms to lure the Assyrian general Holofernes to his gruesome death (decapitated by her own hand) so her people may be saved.  As a text, the Book of Judith is interesting in that it’s a genuine literary innovation, a lengthy and structured thematic narrative evolving from the one idea, something different from the old episodic tradition of loosely linked stories.  That certainly reflects the influence of Hellenistic literary techniques and the Book of Judith may be thought a precursor of the historical novel: a framework of certain agreed facts upon a known geography on which an emblematic protagonist (Judith, the feminine form of the national hero Judah) performs.  The atmosphere of crisis and undercurrent of belligerence lends the work a modern feel while theologically, it’s used to teach the importance of fidelity to the Lord and His commandments, a trust in God and how one must always be combative in defending His word.  It’s not a work of history, something made clear in the first paragraph; this is a parable.

Judit decapitando a Holofernes (Judith Beheading Holofernes) (circa 1600) by Caravaggio (Michelangelo Merisi da Caravaggio, 1571–1610).

The facts of the climactic moment in the decapitation of General Holofernes are not in dispute, Judith at the appropriate moment drawing the general’s own sword, beheading him as he lay recumbent, passed out from too much drink.  Deed done, the assassin dropped the separated head in a leather basket and stole away.  The dramatic tale for centuries has attracted painters and sculptors, the most famous works created during the high Renaissance and Baroque periods and artists have tended to depict either Judith acting alone or in the company of her aged maid, a difference incidental to the murder but of some significance in the interpretation of preceding events.

Judit si presenta a Holofernes (Judith Presenting Herself to Holofernes) (circa 1724) by Antonio Gionima (1697–1732).

All agree the picturesque widow was able to gain access to the tent of Holofernes because of the general’s carnal desires but in the early centuries of Christianity, there’s little hint that Judith resorted to the role of seductress, only that she lured him to temptation, plied him with drink and struck.  The sexualization of the moment came later and scarcely less controversial was the unavoidable juxtaposition of the masculine aggression of the blade-wielding killer with her feminine charms.  Given the premise of the tale and its moral imperative, the combination can hardly be avoided but it was for centuries disturbing to (male) theologians and priests, rarely at ease with bolshie women.  It was during the high Renaissance that artists began to vest Judith with an assertive sexuality (“from Mary to Eve” in the words of one critic), her features becoming blatantly beautiful, the clothing more revealing.  The Judith of the Renaissance and the Baroque appears one more likely to surrender her chastity to the cause where once she would have relied on guile and wine.

Judith (1928) by Franz von Stuck (1863–1928).

It was in the Baroque period that the representations more explicitly made possible the mixing of sex and violence in the minds of viewers, a combination that across media platforms remains today as popular as ever.  For centuries “Judith beheading Holofernes” was one of the set pieces of Western art and there were those who explored the idea with references to David & Goliath (another example of the apparently weak decapitating the strong) or alluding to Salome, showing Judith or her maid carrying off the head in a basket.  The inventiveness proved not merely artistic because, in the wake of the ruptures caused by the emergent Protestant heresies, the parable was re-imagined in commissions issued by the Holy See as part of the Counter-Reformation's counter-attack, Judith’s blade defeating not only Assyrian oppression but all unbelievers, heretical Protestants just the most recently vanquished.  Twentieth century artists too have used Judith as a platform, predictably perhaps sometimes to show her as the nemesis of toxic masculinity and some have obviously enjoyed the idea of an almost depraved sexuality but there have been some quite accomplished versions.

Thursday, July 14, 2022

Bedint

Bedint (pronounced buh-dent (U) or bed-ent (non-U))

(1) Something which suggests a bourgeois aspiration to the tastes or habits of the upper classes.

(2) A generalized expression of disapproval of anyone or anything not in accord with the social standards or expectation of the upper classes.

(3) Any behavior thought inappropriate (ie something of which one for whatever reason disapproves).

1920s:  A coining attributed variously to (1) English writer and diplomat Harold Nicolson (1886–1968), (2) his wife, the writer Vita Sackville-West (1892–1962) or (3) speculatively, Vita Sackville-West’s family.  The word is of Germanic origin and although there are variants, a common source is the Middle Dutch bedienen, the construct being be- + dienen.  The Middle Dutch be- was from the Old Dutch bi- & be-, from the Middle High German be-, from the Old High German bi-, from the Proto-Germanic bi-, from the primitive Indo-European hepi and was used to indicate a verb is acting on a direct object.  Dienen was from the Middle Dutch dienen, from the Old Dutch thienon, from the Proto-Germanic þewanōną and meant “to be of assistance to, to serve; to serve (at a tavern or restaurant); to operate (a device)”.  In the rituals of the Roman Catholic Church, it has the specific technical meaning of “to administer the last sacraments (the last rites)”.  A bedient (the second- & third-person singular present indicative of bedienen) was thus a servant, a waiter etc.  To fret over whether the acceptable pronunciation is buh-dent, bed-int or be-dit is the depth of bedintism.

The idea thus is exemplified by a maître d'hôtel (the head waiter in a good restaurant) who, well dressed and well mannered, appears superficially not dissimilar to someone from the upper classes but of course is someone from a lower class, adopting, for professional reasons, some of their characteristics (dress, manner, speech (and sometimes snobbery) etc).  Whoever coined the word, it was certainly popularized by Harold Nicolson and Vita Sackville-West.  It seems initially to have been their shared code for discussing such things but soon became common currency amongst the smart set in which they moved and from there, eventually entered the language although not all dictionaries acknowledge its existence.  It’s one of those words which need not be taken too seriously and is most fun to use if played with a bit (bedintish, bedintesque, bedintingly, bedinted, bedintism, bedintology et al).  As a word, although from day one weaponized, bedint was subject to some mission-creep to the point where it meant whatever the user wished, as Lewis Carroll’s (1832–1898) Humpty Dumpty explained to Alice in Through the Looking-Glass (1871): "When I use a word," Humpty Dumpty said, in rather a scornful tone, "it means just what I choose it to mean—neither more nor less."

Harold Nicolson & Vita Sackville-West, London, 1913.

As originally used by Nicolson & Sackville-West, bedint, one of the many linguistic tools of exclusion and snobbery (and these devices exist among all social classes, some classified as “inverted snobbery” when part of “working-class consciousness” or similar constructs), was used to refer to anyone not from the upper class (royalty, the aristocracy, the gentry) in some way aping the behavior or manners of “their betters”; the behavior need not be gauche or inappropriate, just that of someone “not one of us”.  Nicolson didn’t exclude himself from his own critique and, as one who “married up” into the socially superior Sackville family, was his whole life acutely aware of which of his own behaviors might be thought bedint, self-labelling as he thought he deserved.  His marriage he never thought at all bedint although many of those he condemned as bedint would have found it scandalously odd, however happy the diaries of both parties suggest it was for almost fifty years.

Harold Nicolson & Vita Sackville-West, Sissinghurst Castle Garden, Kent, 1932.

Bedint as a word proved so useful however that it came to be applied to members of the upper classes (even royalty) were they thought guilty of some transgression (like dullness) or of hobbies thought insufficiently aristocratic.  The idea of some behavior not befitting one’s social status was thus still a thread but by the post-war years, when bedint had entered the vocabulary of the middle-class (a bedint thing in itself, one presumes, Nicolson and Sackville-West would have thought), it was sometimes little more than a synonym for bad behavior (poor form as they might have said), just an expression of disapproval.

Harold Nicolson & Vita Sackville-West, Sissinghurst Castle Garden, Kent, 1960.

The biographical work on Nicolson reveals a not especially likable snob but, in common with many fine and sharp-eyed diarists, he seems to have been good company though perhaps best enjoyed in small doses.  One of those figures (with which English political life is studded) remembered principally for having been almost a successful politician, almost a great writer or almost a viceroy, he even managed to be almost a lord but despite switching party allegiances to curry favor with the Labour government (1945-1951), the longed-for peerage was never offered and he was compelled to accept a knighthood.  His KCVO (Knight Commander of the Royal Victorian Order, an honor in the personal gift of the sovereign) was granted in 1953 in thanks for his generous (and well-reviewed and received) biography of King George V (1865-1936; King of England 1910-1936), although those who could read between the lines found it not hard to work out which of the rather dull monarch’s activities the author thought bedint.  As it was, Nicolson took his KCVO, several steps down the ladder of the Order of Precedence from a peerage, accepting it only "faute de mieux" (in the absence of anything better) and describing it as “a bedint knighthood”, wondering if, given the shame, he should resign from his clubs.

Wedding day: Duff Cooper & Lady Diana Manners, St Margaret's Church, London, 2 June 1919.

So a knighthood, a thing which many have craved, can be bedint if it's not the right knighthood.  When the Tory politician Duff Cooper (1890–1954) ended his term (1944-1948) as the UK's ambassador to France, the Labour government (which had kept him on) granted him a GCMG (Knight Grand Cross of the Order of St Michael & St George) and although he thought his years as a cabinet minister might have warranted a peerage, he accepted while wryly noting in his diary it was hardly something for which he should be congratulated because: "No ambassador in Paris has ever failed to acquire it since the order was invented and the Foreign Office has shown how much importance they attach to it by conferring it simultaneously on my successor Oliver Harvey (1893-1968), who is, I suppose, the least distinguished man who has ever been appointed to the post".  Still, Cooper took his "bedint" GCMG and when a Tory government returned to office, he was raised to the peerage shortly before his death, choosing to be styled Viscount Norwich of Aldwick.  His wife (Lady Diana Cooper (1892–1986)) didn't fancy becoming "Lady Norwich" because she thought it "sounded like porridge" and took the precaution of placing notices in The Times and Daily Telegraph telling all who mattered she would continue to be styled "Lady Diana Cooper".  They had a "modern marriage" so differences between them were not unusual.

Wednesday, July 13, 2022

Conjecture

Conjecture (pronounced kuhn-jek-cher)

(1) The formation or expression of an opinion or theory without sufficient evidence for proof; an opinion or theory so formed or expressed; guess; speculation; to conclude or suppose from grounds or evidence insufficient to ensure reliability.

(2) The interpretation of signs or omens (obsolete though still used in some superstitious circles and a common phrase among occultists).

(3) In mathematics and philology, a technical term for a statement which, based on available evidence, is likely to be true but for which there’s no formal proof.

1350–1400: From the Middle English conjecturen (infer, predict, form (an opinion or notion) upon probabilities or slight evidence), from the Old & Middle French conjecture, from the Latin conjectūra (a guess; inferring, an assembling of facts; reasoning), the construct being conject(us) (past participle of conjicere (to throw together; to form a conclusion)) + -ūra (the Latin suffix used to form nouns of quality).  The late Middle English verb conjecturen was a direct borrowing from the Middle French, from the Late Latin conjecturāre, derivative of the noun.  The Latin conjicere was a combining of con- (together) + jacere (to throw).  The Latin coniectūra is derived from coniectus, perfect passive participle of cōniciō (throw or cast together; guess), the construct being con- (together) + iaciō (throw, hurl).  In Middle English, there were also peacefully co-existing forms, the noun conjecte & the verb conjecten.

Derived forms include the adjective conjecturable, the adverb conjecturably, the noun conjecturer and the verbs (used with or without the object) conjectured and conjecturing.  The verbs misconjecture & misconjectured and the noun misconjecturing are valid words but so rare that some dictionaries list them as obscure.  Indeed, given the meaning of the root, it can be argued there’s little difference between conjecture and misconjecture although the latter could be useful in describing things in retrospect.  For those times when conjecture seems not quite right, there’s surmise, inference, supposition, theory, hypothesis, suppose, presume, guesswork, hunch, presumption, guess, fancy, opinion, conclusion, notion, guesstimate, gather, figure, conclude, feel, deem & expect.

The Oesterlé–Masser Conjecture

The Oesterlé–Masser conjecture, a problem in number theory, is named after the mathematicians Joseph Oesterlé (b 1954) and David Masser (b 1948) who first published their speculation in the 1980s; it's popularly known as the abc conjecture, after the equation a + b = c which underlies it all.  The conjecture postulates that if a lot of small prime numbers divide two numbers (a) and (b), then only a few large ones divide their sum (c); basically, if a and b are each built from many small prime factors, their sum c tends to be divisible by only a few, large, primes.  Mathematicians concur that intuitively this seems likely because of the nature of prime numbers but a proof has proved elusive.  It’s of interest to the profession because it might resolve some of the fundamental problems in Diophantine geometry, a typically arcane fork of number theory but beyond the implications for mathematics, given the importance of prime numbers in commerce, ICT and diplomacy (primes underpin encryption), other fields may be significantly affected.
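For those who want the formal version, one standard statement (there are several equivalent formulations) uses the radical rad(n), the product of the distinct prime factors of n (so rad(72) = 2 × 3 = 6): for every ε > 0, there exist only finitely many triples (a, b, c) of coprime positive integers with

\[
a + b = c \qquad \text{and} \qquad c > \operatorname{rad}(abc)^{1+\varepsilon}.
\]

The coprime triple 1 + 8 = 9 shows the flavor of a “hit”: rad(1 × 8 × 9) = rad(72) = 6 and 9 > 6, so c exceeds the bare radical; the conjecture asserts that once the exponent is lifted above 1, such excesses can occur only finitely often.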

Japanese mathematician Shinichi Mochizuki san (b 1969) has been working on the problem for some thirty years and, over the decades, has circulated within the community many un-published papers, none of which garnered much support.  Not discouraged, Mochizuki san persisted and in 2012 posted on his website four papers totalling some 500 pages, claiming they contained the definitive proof (including a new theory called inter-universal Teichmüller theory (IUTT)).  While some of his peers actively disagreed with his methods or conclusions, most either ignored his work or said it couldn’t be understood, one recently commenting his experience was something like “reading a paper from the future, or from outer space”.

Several years later, despite conferences staged to explain Mochizuki san’s work to other mathematicians, there is no consensus and he has been accused of not doing enough to communicate (in the sense of explaining) his ideas.  While there are some who claim to have both read his work (that alone an achievement) and understood it (more admirable still given how much that depends on knowledge of other work he has developed over decades), they're a small sub-set of number theorists, most of whom remain sceptical or dismissive.  Interest was stirred in 2018 when two noted German mathematicians, Peter Scholze (b 1987) and Jakob Stix (b 1974), published a paper in which they asserted a critical part of Mochizuki san’s work (said to be central to the proof) was wrong.  Unusually in this matter, their work was based not only on analysis but also on a face-to-face meeting with Mochizuki san.  The discussion however concluded with neither side able to persuade the other, something like three pocket calculators sitting on a table, unable to agree on the best method of determining a number without knowing that number.

In April 2020, it was announced the claimed proof would be published in the Japanese journal Publications of the Research Institute for Mathematical Sciences (RIMS).  Although Mochizuki san was RIMS's chief editor, the institution noted he was “…not involved in the review” or the decision to publish.  There was scepticism but in 2021, the material appeared in RIMS and the number theory community awaits with interest to see if there are defections from the tiny “proven” faction or the more populated “unproven”.

It's not just number theorists who have engaged with Mochizuki san.  Ted Nelson (b 1937), a US sociologist who as long ago as 1963 invented the term hypertext, thinks the controversial Japanese professor may be the inventor of Bitcoin.  Dr Nelson noted that Bitcoin creator "Satoshi Nakamoto san" appears to have existed long enough only to (1) introduce Bitcoin, (2) stimulate excitement and (3) disappear and thought this behavior similar to that of Mochizuki san who has some history of making mathematical discoveries and posting them to the internet to be found, rather than publishing.  Not many share the suspicion, noting that while a grasp of high-level mathematics would have been essential to build the blockchain, Mochizuki san is not known to have any background in software development although, given Bitcoin may have been developed by a team, that may not be significant.  Dr Nelson remained convinced and in 2013 offered to donate one Bitcoin (then trading at $US123) to charity were Mochizuki san to deny being the inventor.  It's not known if Dr Nelson revised the terms of his offer as the Bitcoin price moved.

Tuesday, July 12, 2022

Googly

Googly (pronounced goo-glee)

(1) In cricket, a bowled ball that swerves in one direction and breaks in the other; an off-break bowled with a leg break action.  The delivery was once also known as the bosie or bosie-ball and is now commonly referred to as the wrong'un.

(2) Something protruding; exophthalmic (rare).

(3) A slang term for bulging eyes (archaic).

Circa 1880s: Apparently an invention of modern English but the origin is uncertain.  It may have been from the mid-nineteenth century use of goggle (to stare at admiringly or amorously) although google was during the same era used to describe the Adam's apple, derived presumably from the sense of eyes which are thought similarly protruding, either literally or figuratively, a meaning alluded to by a popular hero in a contemporary comic strip being named “Goo” (suggesting “sentimental, amorous”).  Googly (and a number of variant spellings) does appear around the turn of the twentieth century to have been in common use as an adjective applied to the eyes.  Googly is a noun & adjective; the noun plural is googlies.

Bernard Bosanquet sending down a googly.

In cricket, a googly is a type of delivery bowled by a right-arm leg spin bowler.  It differs from the classic leg spin delivery in that the ball is propelled from the bowler's hand in a way that, upon hitting the pitch, it deviates in a direction opposite from that the batter expects (ie towards the leg rather than the off stump).  Usually now called the wrong'un, it was once also called the bosie, the eponym referring to English cricketer Bernard Bosanquet (1877-1936) who is believed to have invented the action.  That the googly is Bosanquet’s creation is probably true in the sense that he was the one who practiced the delivery, learning to control and disguise it so it could be deployed when most useful.  However, cricket being what it is, it’s certain that prior to Bosanquet, many bowlers would occasionally (and perhaps unintentionally) have bowled deliveries that behaved as googlies but, being something difficult to perfect, few would have been able to produce it on demand.  What Bosanquet, uniquely at the time, did was add it to his stock repertoire, which inspired other bowlers to practice.

The “googly problem” also exists in “twistor theory”, one of the many esoteric branches of theoretical physics understood only by a chosen few.  In twistor theory, the “googly problem” is nerd shorthand for what’s properly known as “the gravitational problem”, an allusion to certain observed behavior differing from that which would be predicted by the mysterious twistor theory, rather the same experience suffered by the batter in cricket who finds his leg stump disturbed by a ball he had every reasonable expectation would harmlessly go through to the keeper down the off side.  As one might expect of a work which involves a "googly problem", the author was an English mathematician, the Nobel laureate in physics, Sir Roger Penrose (b 1931).  It's presumed one of his pleasures has been explaining the googly reference to physicists from places not well acquainted with the charms of cricket.

Bosanquet appears to have perfected his googly between 1897-1901 and, as a noun, the word has been attached to it since shortly afterwards, documented in publications in England, Australia and New Zealand from circa 1903.  However, that was just the point at which a certain delivery, so distinctive as to demand an identifier, came to be regarded as the definitive googly, a word which had long been used to describe cricket balls which behaved erratically off the pitch, a use perhaps based on the long use of “google-eyed” to refer to a surprised expression (ie those of the bamboozled batter).  Some etymologists have pondered whether the construct might thus be the phonetic goo + guile but few have ever supported this.  Googly was by the late-nineteenth century a well-known word used adjectively to describe spin-bowling which proved effective but there’s no suggestion it implied a particular type of delivery.  Googly and the odd variant seem to have been a way of referring to balls which relied on a slow delivery and spin through the air to turn off the pitch as opposed to those bowled fast or at medium pace, using the seam on the ball to achieve any movement.  All the evidence suggests the word was used to convey some unusual behavior.

Match report, day three of the second test of the 1891-1892 Ashes series, published in the Australian Star (Sydney), 1 February 1892.

Here, in the one match report are deliveries described both as being googly (ie used as an adjective) and the googler (a noun) but there’s nothing here or anywhere else to suggest either is anything more specific than a reference to beguiling slow bowling.  Everything suggests the existence of both the noun and adjective was deliberate rather than some sloppy antipodean sub-editing.  Whatever the nuances of Briggs' bowling however, it must have been effective because in the same match he took what was then only the third hat-trick (a bowler taking wickets with three successive balls in the one innings) in Test cricket.  There has been the suggestion the adjectival use (apparently an Australian invention) as "googly ones" was an allusion to the idea of how a cricket ball might behave if egg-shaped, this based on the then widely-used local slang "googie" used for eggs.  Googie was from the Irish and Scottish Gaelic gugaí, gogaí & gogaidh (a nursery word for an egg).  Although wholly speculative, the notion has received support but more popular is the idea it was based on the use of googly in the manner of "googly-eyed", slang for eyes which bulged or were in some way irregular.

Match report of a club game, the Australian Star (Sydney), 1 February 1892.

The report of the test match in 1892 isn’t the first known reference to the googly, the word appearing (again adjectively) in The Leader (Melbourne) on 19 December 1885 in a match report of a club game although, despite noting the bowler’s success in taking two wickets, the writer seemed less than impressed with the bowling.  Although googly is now less common ("wrong'un" now often preferred), it is at least gender-neutral and therefore unlikely to fall foul of the language police and be banned; the fate of batsman, fieldsman and all the positional variations of third man (though "silly point" and other "sillies" are still with us).  Nor is there any hint of the ethnic insensitivity which doomed the “chinaman” (a left-arm unorthodox spin), now dumped in the bin of words linked with colonial oppression.

Monday, July 11, 2022

Bottomage

Bottomage (pronounced bot-uh-m-ree)

In the marine insurance division of Admiralty law, a contract, of the nature of a mortgage, by which the owner of a ship borrows money to make a voyage, pledging the title of the ship as security.

1615-1625: From Middle English as an addition to Admiralty law, modelled on the Dutch bodemerij, equivalent to bodem (bottom; hull of a ship) + -erij (-ry).  Bottom is from the Middle English botme & bottom (ground, soil, foundation, lowest or deepest part of anything), from the Old English botm & bodan (bottom, foundation; ground, abyss), from the Proto-Germanic buthm, butmaz & budmaz, from the primitive Indo-European bhudhno (bottom).  It was cognate with the Old Frisian boden (soil), the Old Norse botn, the Dutch bodem, the Old High German & German Boden (ground, earth, soil), the Icelandic botn and the Danish bund and was related also to the Irish bonn (sole (of foot)), the Ancient Greek πυθμήν (puthmēn) (bottom of a cup or jar), the Sanskrit बुध्न (budhna) (bottom), the Avestan buna, the Persian بن‎ (bon) (bottom) and the Latin fundus (bottom, piece of land, farm), from which, via French, English gained “fund”.  The suffix -age was from the Middle English -age, from the Old French -age, from the Latin -āticum.  Cognates include the French -age, the Italian -aggio, the Portuguese -agem, the Spanish -aje & the Romanian -aj.  It was used to form nouns (1) with the sense of collection or appurtenance, (2) indicating a process, action, or a result, (3) of a state or relationship, (4) indicating a place, (5) indicating a charge, toll, or fee, (6) indicating a rate & (7) of a unit of measure.  Bottomage is a noun; the noun plural is bottomages.

The sense of bottom as “posterior of a person (the sitting part)” is from 1794; the verb “to reach the bottom of” from 1808 and the expression “bottom dollar” (the last dollar one has) is from 1857.  The meaning "fundamental character or essence" is from the 1570s and the variation “to get to the bottom of some matter” is from 1773; “bottoms up” as the call to finish one's drink is from 1875 while to do or feel something from “the bottom of (one's) heart” is from the 1540s.  The bottom-feeder, originally a technical term in the classification of fishes, dates from 1866, the figurative sense ("one of the lowest status or rank" or an "opportunist who seeks quick profit usually at the expense of others or from their misfortune") noted from 1919.  Bottomage also sometimes appears in Australia as an alternative spelling of "bottom-age" (used in age-based sporting competitions to list the oldest age permitted to participate).

On the bottom.

Bottomage (sometimes referred to as bottomry), is a financing arrangement in maritime law whereby the owner or master of a ship borrows money “upon the bottom (or keel) of it” with an agreement to forfeit the ship itself to the creditor if the loan and interest is not paid at the time nominated, after the ship's safe return.  The contracts tended to be executed when a ship in a foreign port needed emergency repairs and it wasn’t possible to arrange funds in other ways.  Now rare because developments in maritime law discounted the bottomage bond's priority as against other liens and improvements in communications made international money transfers more efficient.  Hardly used since the nineteenth century and now of only historic interest.

It was an unusual, hybrid form of financing and one almost wholly peculiar to the pre-modern sea-trade.  It wasn’t a conventional loan because the lender accepted part of the risk, ships sinking not infrequently.  Nor was it insurance because there was nothing which explicitly secured the risk to the merchant's goods.  Bottomage can instead be thought of as a type of futures contract in that the lender has in effect purchased an option on the venture's final profit.  The risk being greater, a bottomage bond giving no remedy to the lender against the owners of the ship or cargo personally, rates were always much higher than the historic trading average of around 12%.
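To sketch the arithmetic (the probability of loss here an illustrative figure assumed for the example, not one taken from any surviving bond): a lender advancing a sum P at bottomage rate r is repaid only if the ship arrives safely (probability 1 − p), so to break even against an ordinary loan at rate i, the rate must satisfy

\[
(1 - p)\,P\,(1 + r) = P\,(1 + i) \quad \Longrightarrow \quad r = \frac{1 + i}{1 - p} - 1.
\]

With i at the historic trading average of 12% and a one-in-ten chance of the ship being lost, r works out at about 24.4% before any margin for profit or for the lender's lack of personal remedy against owner or cargo, which is why maritime interest always ran so far above ordinary rates.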

Doctors' Commons (1808), the High Court of Admiralty in session, designed and etched by Thomas Rowlandson (1757–1827) & Auguste Charles Pugin (1768–1832), aquatint by John Bluck (1791–1832), Lambeth Palace Library collection, London.

The Admiralty Court in England dates from the mid-fourteenth century and its creation appears linked to the victory of Edward III’s (1312–1377; King of England 1327-1377) fleet in the Battle of Sluys in 1340, one of the opening engagements of the Hundred Years' War (1337-1453).  A specialist tribunal which appears to have been charged with keeping peace at sea and dealing with piracy, as the High Court of Admiralty it developed its own distinct procedures and practices and was attended by a specialist group of solicitors (called proctors), its advocates educated in civil rather than common law; those trained only in common law were not permitted to appear.

Bottomage re-imagined, Lindsay Lohan at the beach.

The advocates of the Admiralty Court were all Doctors of Law and were variously described as belonging to the College of Advocates, the College of Civilians or the Society of Doctors' Commons and specialized in ecclesiastical and civil law.  They were admitted to practice by the Dean of Arches who served the Archbishop of Canterbury and, practicing from Doctors’ Commons, a cluster of buildings on Knightrider Street between St Paul’s cathedral and the north bank of the Thames, they were most concerned with Admiralty and Church law although the advocates also verified and stored documents such as wills and marriage and divorce certificates.  The Doctors’ Commons was unusual in that while it resembled a modern Inn of Court in that it housed a library, a dining hall and rooms from which lawyers practiced, it also contained a court-room where the Admiralty Judge sat.  The arrangement persisted until the reforms of the Victorian Judicature Acts (1873-1875), the College of Advocates abolished in 1865 and the High Court of Admiralty transferred in 1875 to become part of the unified High Court although the tradition of a specialist Admiralty Judge and a specialist Admiralty Bar continues to this day.  In the US, one unique quirk of admiralty courts seemed to one lawyer to offer a possibility, the argument being a judgement should be set aside because the flag hanging in the courtroom didn't have the traditional fringe and the court thus was not properly constituted.  This the judge rejected and no attempt was made to seek leave to appeal.

The symbol of the Admiralty Court is the Admiralty Oar, traditionally displayed in court when a trial is in progress.

After the passage of the Judicature Acts, Admiralty jurisdiction moved to the newly created Probate, Divorce and Admiralty Division, referred to within the profession as the 3W (wills, wives & wrecks) and this lasted until the Administration of Justice Act (1970) which shifted divorce to the Family Division and probate to Chancery.  The Admiralty Court became part of the Queen’s Bench Division and claims are now dealt with by one of its two judges: the Admiralty Judge and the Admiralty Registrar, the arrest and release of ships handled by the Admiralty Marshal.

Sunday, July 10, 2022

Lede

Lede (pronounced leed)

(1) In hot-metal typesetting, a deliberate misspelling of “lead”, created to avoid the risk of the instruction “lead” being set as text rather than being understood as an instruction.

(2) In journalism, a short summary serving as an introduction to a news story, article or other copy.

(3) In journalism, the main story and thus the news item thought most important.

(4) A man; a person (and many related meanings, all obsolete).

Mid twentieth century:  An altered spelling of lead (in the sense used in journalism: “short introductory summary”), used in the printing trades to distinguish it from the homograph lead (the “thin strip of type metal for increasing the space between lines of type”, made of the metal lead and pronounced led (rhyming with “dead”)); use in this context has always been rare outside the US.  The historic plural form (Middle English) was lede but in modern use in journalism it’s always ledes.

The historic use has been obsolete since the late sixteenth century.  The Middle English lede & leode (variously: man; human being, person; lord, prince; God; sir; group, kind; race; a people, nation; human race; land, real property, the subjects of a lord or sovereign; persons collectively (and other forms)) were connected to three closely related words: (1) the Old English lēod (man; chief, leader; (poetic) prince; a people, people group; nation), (2) the Old English lēoda (man; person; native of a country (and related to lēod)) and (3) the Old English lēode (men; people; the people of a country (which was originally the plural of lēod)).  Lēod was derived from the Proto-West Germanic liud & liudi, from the Proto-Germanic liudiz (man; person; men; people), from the primitive Indo-European hléwdis (man, people), from hlewd- (to grow; people).  The bafflingly large array of meanings of lede in the Middle English is perhaps best encapsulated in the phrase “in all lede” (all the world).  It was related also to the Northumbrian dialectal form lioda (men, people) and was cognate with the German Leute (nation, people) and the Old High German liut (person, people), from the primitive Indo-European root leudh- (people), the source also of the Old Church Slavonic ljudu and the Lithuanian liaudis (nation, people).  Care must be taken if using lede in Sweden.  In the Swedish, lede was from the nominal use (masculine inflection) of the adjective led (evil), used in the synonym den lede frestaren (the evil tempter), understood as “the evil one, the loathsome or disgusting one; the devil, Satan”.

In modern use, lede was a deliberate misspelling of lead, one of a number of words created so they could be used in the instructions given to printers while avoiding the risk they might appear in the text being set.  Lede could thus be used to indicate which paragraphs constitute the lede while avoiding any confusion with the word “lead” which might be part of the text to appear in an article.  The other created forms were “dek” (subhead or sub-heading (modified from “deck”)), “hed” (headline (modified from “head”)) and “kum” (used as “lede to kum” or “lede 2 kum”, a placeholder for “lead to come”).  The technical terms were of some significance when the hot-metal typesetting process was used for printing.

It’s not clear when lede came into use in the printing trades, the dictionaries which maintain a listing are not consistent and the origin is suggested to date from anywhere between 1950 and 1976.  The difficulty in determining the date of origin is that the documents on which the words (lede, dek, hed, kum) appeared were ephemeral, created to be used on the day and discarded.  Lede moved from the print factory floor to the newsroom to become part of the jargon of journalism, used to describe the main idea in the first few lines of a story, a device to entice a reader to persist (the idea of click-bait nothing new).  In much of the English-speaking world, the word in this context is spelled “lead” but lede remains common in the US.  In either form, it’s entered the jargon, the phrase “to bury the lede” meaning “unwittingly to neglect to emphasize the most important part of the story” (the subject of an editor’s most damning critique) and “if it bleeds it ledes” means a dramatic story, preferably with explicit images, will be on the front page or be the first item of a news bulletin.

Beyond journalism, lede is sometimes used.  An auction site might for example have a “lede photograph”, the one thought the most tempting and thus likely to attract the biggest audience (ie more click-bait).  In general use, it’s probably not helpful and likely only to confuse.  Henry Fowler (1858-1933) would doubtless have included its general use among what in his Dictionary of Modern English Usage (1926) he derided as a “pride of knowledge”: “...a very un-amiable characteristic and the display of it should sedulously be avoided” and to illustrate his point he said such vanities included “didacticism, French words, Gallicisms, irrelevant allusions, literary critics’ words, novelty hunting, popularized technicalities, quotations, superiority & word patronage.”  One suspects that outside the print-shop, Henry Fowler would have had little tolerance for lede.

Others agree, suggesting the old sense from Middle English is now used only as a literary or poetic device and is otherwise the sort of conscious archaism of which Henry Fowler disapproved, a thing used only to impress others; beyond that, the modern form should never travel beyond wherever, as a form of jargon, it makes sense.  Still, there were word nerds who thought it might have a future.  The US author William Safire (1929–2009) seemed to find hope in its prospects and even coined “misledeing” which might be tempting for editors searching for novel words with which to berate neophyte writers.

Saturday, July 9, 2022

Inflation

Inflation (pronounced in-fley-shuhn)

(1) In economics, a persistent, substantial rise in the general level of prices, often related to an increase in the money supply, resulting in the loss of value of currency.

(2) Of or pertaining to the act of inflating or the state of being inflated.

(3) In clinical medicine, the act of distending an organ or body part with a fluid or gas.

(4) In the study of the metrics of educational standards, an undue improvement in academic grades, unjustified by or unrelated to merit.

(5) In theoretical cosmology, an extremely rapid expansion in the size of the universe, said to have happened almost immediately after the big bang.

1300-1350: From the Middle English inflacioun & inflacion, from the Old French inflation (swelling), from the Latin inflationem (nominative īnflātiō) (expansion; a puffing up, a blowing into; flatulence), noun of action from the past participle stem of inflare (blow into, puff up) and thus related to īnflātus, the perfect passive participle of īnflō (blow into, expand).  The construct of the figurative sense (inspire, encourage) was in- (into) (from the primitive Indo-European root en (in)) + flare (to blow) (from the primitive Indo-European root bhle- (to blow)).  The meaning "action of inflating with air or gas" dates from circa 1600 while the monetary sense of "a sustained increase in prices" replaced the original meaning (an increase in the amount of money in circulation), first recorded in US use in 1838.  The derived noun hyperinflation dates from 1925 when it was first used to describe the period of high inflation in Weimar Germany; earlier, surgeons had used the word when describing certain aspects of lung diseases.  The adjective inflationary was first used in 1916 as a historic reference to the factors which caused a rapid or sustained increase in prices.

The early meaning related to flatulence, the sense of a “swelling caused by a gathering of ‘wind’ in the body” before being adopted as a technical term by clinicians treating lung conditions.  The figurative use as in “outbursts of pride” was drawn directly from the Latin inflationem, nominative inflatio, a noun of action from the past participle stem of inflare (blow into; puff up).  The now most common use beyond the tyre business, that of economists to describe statistically significant movement in prices, is derived from an earlier adoption by state treasuries to measure the volume of money in circulation, first recorded in 1838 in the US; the money supply is now counted with a number of definitions (M1, M3 etc).  Cosmological inflation theory was developed in 1979 by Cornell theoretical physicist Alan Guth (b 1947), the first papers published in 1981.
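
In the economists' modern sense, the headline figure is nothing more mysterious than the percentage change in a price index (or a money supply aggregate) between two periods.  A minimal sketch in Python, the index readings invented purely for illustration:

def inflation_rate(index_old: float, index_new: float) -> float:
    """Percentage change in a price index between two periods."""
    return (index_new - index_old) / index_old

# Hypothetical consumer price index readings taken a year apart:
cpi_then, cpi_now = 118.5, 127.3
print(f"{inflation_rate(cpi_then, cpi_now):.1%}")  # 7.4%

The complications which keep statisticians employed lie not in this arithmetic but in deciding what goes into the index and with what weightings.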

Cosmic Inflation

Cosmic inflation is a theory of exponential expansion of space in the early universe.  This inflationary period is speculated to have begun an indescribably short time after the start of the big bang and to have been about as brief.  Even now, space continues to expand, but at less rapid rates so the big bang is not just a past event but, after fourteen billion-odd years, still happening.
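
In the standard textbook simplification (nothing peculiar to this account), “exponential expansion” means the scale factor a(t), a measure of the relative size of a patch of space, grows during inflation approximately as

a(t) \approx a_0 \, e^{Ht}

where H is the Hubble parameter, treated as nearly constant during the inflationary epoch; space thus doubles in size every ln 2 / H, which is what permits so enormous an expansion in so brief an interval.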

Definitely not to scale.

One implication of the scale of the expansion of space is the speculation that things, some of which may have been matter, may have travelled faster than the speed of light, suggesting the speed of light came into existence neither prior to nor at the start of the big bang but after, possibly within a fraction of a second although, within the discipline, other models have been built.  The breaking of the Einsteinian speed limit may suggest conditions in that first fraction of a second of existence were so extreme the laws of physics may not merely have been different but may not have existed or even have been possible.  If that's true, it may be nonsensical to describe them as laws.  Matter, energy and time also may have come into existence later than the start of the big bang.

The theory has produced derivatives.  One notion is that, even though it’s always possible to imagine an equation which can express any duration, time may not be divisible beyond a certain point; another that there can never exist a present, only a past or a future.  Perhaps most weird is the idea the (often labeled chaotic but actually unknown) conditions of the very early big bang could have progressed instead to expanding space but without matter, energy or time.  Among nihilists, there’s discussion about whether such a universe could be said to contain nothing, although an even more interesting question is whether a genuine state (non-state?) of nothing is possible even in theory.

Price Inflation

In economics, inflation in the West is suddenly of interest because the rate has spiked.  The memories are bad because the inflation associated with the 1970s & 1980s was finally suppressed by central banks and some political realists good at managing expectations, the two combining to engineer recessions and the consequent unemployment.  After that, in advanced economies, as inflation faded from memory to history, there tended to be more academic interest in the possibility deflation might emerge as a problem.  As the Bank of Japan discovered, high inflation was a nasty thing but experience and the textbooks at least provided case-studies of how it could be tamed whereas deflation, once established and remaining subject to the conditions which led to its existence, could verge on the insoluble.

In most of the West however, deflationary pressures tended to be sectoral components of the whole, the re-invented means of production and distribution in the Far East exporting unprecedented efficiencies to the West, the falling prices serving only to stimulate demand because they happened in isolation from other forces.  However, the neo-liberal model which began to prevail after the political and economic construct of the post-World War II settlement began to unravel was based on a contradictory implementation of supply-side economics: restricting the money supply while simultaneously driving up asset prices.  That was always going to have consequences (and there were a few), one of which was the GFC (global financial crisis (2008-circa 2011)) which happened essentially because the rich had run out of customers with the capacity to service loans and had begun lending money to those who were never going to be able to pay it back.  Such lending has always happened but at scale, it can threaten entire financial infrastructures.  Whether that was actually the case in 2008 remains a matter of debate but such was the uncertainty at the time (much based on a widespread unwillingness of many to reveal their true positions) that everyone’s worst-case scenarios became their default assumption and the dynamics which have always driven markets in crisis (fear and stampede) spread.

What was clear in the wake of the failure of Lehman Brothers (1847-2008) was that much money had simply ceased to exist, a phenomenon discussed by a most interested Karl Marx (1818-1883) in Das Kapital (1867–1883) and while losses were widespread, of particular significance were those suffered by the rich because it was these which were restored (and more) by what came to be called quantitative easing (QE), actually a number of mechanisms but essentially an increase in the money supply.  The textbooks had always mentioned the inflationary consequences of this but that had been based on the assumption the supply would spread wide.  The reason the central bankers had little fear of inducing inflation (as measured by the equations which have been honed carefully since the 1970s so as not to frighten the horses) was that the money created was given almost exclusively to the rich, a device under which not only were the GFC losses made good but the QE system (by popular demand) was maintained, the wealth of the rich increasing extraordinarily.  It proved trickle-down economics did work (at least as intended, a trickle being a measure of a very small flow), the inequalities of wealth in society now existing to an extent not seen in more than a century.

Salvator Mundi (circa 1500) by Leonardo da Vinci.  Claimed to be the artist's last known painting, in 2017 it sold at auction for US$450.3 million, still a record and more than double that achieved by the next most expensive, Picasso’s Les femmes d’Alger (Version ‘O’), which made US$179.4 million in 2015.

Post-GFC, inflation did happen but it was sectorally specific, mansions and vintage Ferraris which once changed hands for a few million suddenly selling for tens of millions and a Leonardo of not entirely certain provenance managing not far from half a billion.  The generalized inflationary effect in the broad economy was subdued because (1) the share of the money supply held by the non-rich had been subject only to modest increases and (2) the pre-existing deflationary pressures which had for so long been helpful continued to operate.  By contrast, what governments were compelled (for their own survival) to do as the measures taken during the COVID-19 pandemic so affected economic activity had the effect of increasing the money supply in the hands of those not rich and, combined with (1) low interest rates which set the cost of money at close to zero, (2) pandemic-induced stresses in labour markets and supply and distribution chains and (3) the effects of Russia’s invasion of Ukraine, created what is now called a “perfect storm”.  The inflation rate was already trending up even before the invasion but the war has proved an accelerant.  In these circumstances, all that can be predicted is that the text-book reaction of central banks (raising interest rates) will be (1) a probably unavoidable over-reaction to deal with those factors which can be influenced by monetary policy and (2) of no effect on the geopolitical factors which are the vectors through which inflation is being exported to the West.  Central banks really have no choice other than to use the tools at their disposal and see what happens but the problem remains that while those tools are effective (if brutish) devices for dealing with demand-inflation, their utility in handling supply-inflation is limited.

First world problem: It’s now hard to find a Ferrari 250 GTO for less than US$70 million.