Saturday, July 9, 2022

Assassin

Assassin (pronounced uh-sas-in)

A murderer, especially one who kills a politically prominent person for reasons of fanaticism or profit.

One of an order of devout Muslims, active in Persia and Syria circa 1090-1272, whose prime object was to assassinate Crusaders (used with an initial capital letter).

1525–1535: An English borrowing via French and Italian, from the Medieval Latin assassīnus (plural assassīnī), from the Arabic حشّاشين (ḥashshāshīn) (eaters of hashish; also transliterated Hashishin or Hashashiyyin).  It shares its etymological roots with hashish (from the Arabic حشيش (ḥashīsh)) and in the region is most associated with a group of Nizari Shia Persians who worked against various Arab and Persian targets.

The Hashishiyyin were an Ismaili Muslim sect at the time of the Crusades, under the leadership of Hasan ibn al-Sabbah (known as shaykh al-jabal or "Old Man of the Mountains"), although the name was widely applied to a number of secret sects operating in Persia and Syria circa 1090-1272.  The word was known in Anglo-Latin from the mid-thirteenth century and variations in spelling were not unusual, although hashishiyy (hashishiyyin in the plural) appears to be the most frequently used.  The plural suffix “-in” was retained in error by Medieval translators who assumed it to be part of the word itself.

Whether in personal, political or family relations, assassination is one of the oldest and, done properly, one of the most effective tools known to man.  The earliest known use of the verb "to assassinate" in printed English was by Matthew Sutcliffe (circa 1548-1629) in A Briefe Replie to a Certaine Odious and Slanderous Libel, Lately Published by a Seditious Jesuite (1600), borrowed by William Shakespeare (circa 1564-1616) for Macbeth (1605).  Among realists, it has long been advocated, Sun Tzu in the still-read The Art of War (circa 500 BC) arguing the utilitarian principle that a single assassination could be both more effective and less destructive than other methods of dispute resolution, something with which Niccolò Machiavelli (1469–1527), in his political treatise Il Principe (The Prince, written circa 1513 & published 1532), concurred.  As a purely military matter, it's long been understood that the well-targeted assassination of a single leader can be much more effective than a battlefield encounter, whatever the extent of the victory; the “cut the head off the snake” principle.

Modern history

The assassination in July 2022 of Abe Shinzō san (安倍 晋三) (Shinzo Abe, 1954-2022; prime minister of Japan 2006-2007 & 2012-2020) came as a surprise because, as a part of political conflict, assassination had all but vanished from Japan.  That’s not something which can be said of many countries in the modern era, the death toll in Asia, Africa, the Middle East and South & Central America long, the methods of dispatch sometimes gruesome.  Russia’s annals too are blood-soaked although it’s of note perhaps that an extraordinary number of the killings were ordered by one head of government.  The toll of US presidents is famous and some two-dozen planned or attempted assassinations are also documented.  Only one (as far as is known) prime minister of the UK has been assassinated, Spencer Perceval (1762–1812; prime minister of the UK 1809-1812) shot dead (apparently by a deranged lone assassin) on 11 May 1812, his other claim to fame being that, uniquely among British premiers, he also served as solicitor-general and attorney-general.  Conspiracy theorists note also the death of Pope John-Paul I (1912–1978; pope Aug-Sep 1978).

Ultranationalist activist Otoya Yamaguchi (1943-1960), about to stab Socialist Party leader Inejiro Asanuma san (1898-1960) with his yoroi-dōshi (a short sword, fashioned with particularly thick metal, suitable for piercing armor and for use in close combat), Hibiya Public Hall, Tokyo, 12 October 1960.  The assassin committed suicide while in custody.

Historically however, political assassinations in Japan were not unknown, documented since the fifth century, the toll including two emperors.  In the centuries which unfolded until the modern era, assassinations were, by European standards, not common but the traditions of the Samurai, a military caste which underpinned a feudal society organized as a succession of shogunates (a hereditary military dictatorship (1192–1867)), meant that violence was seen sometimes as the only honorable solution when many political disputes had their origin in inter- and intra-family conflict.  Tellingly, even after firearms came into use, most assassinations continued to be committed with swords or other bladed weapons, a tradition carried on when the politician Asanuma Inejirō san was killed on live television in 1960.

Most remembered however is the cluster of deaths suffered by political figures in Japan during the dark decade of the 1930s.  It was a troubled time and although Hara Takashi san (1856-1921; Prime Minister of Japan 1918-1921) had in 1921 been murdered by a right-wing malcontent (who received a sentence of only three years), it had seemed at the time an aberration and few expected the next decade to assume the direction it followed.  However, in an era in which the most fundamental aspects of the nation came to be contested by the politicians, the imperial courtiers, the navy and the army (two institutions with different priorities and intentions), all claiming to be acting in the name of the emperor, conflict was inevitable; the only uncertainty was how things would be resolved.

Hamaguchi Osachi san (1870–1931; Prime Minister of Japan 1929-1931) was so devoted to the nation that when appointed head of the government’s Tobacco Monopoly Bureau, he took up smoking despite his doctors’ warnings it would harm his fragile health.  His devotion was praised but he was overtaken by events, the Depression crushing the economy, while his advocacy of peace and adherence to the naval treaty which limited Japan’s ability to project power made him a target for the resurgent nationalists.  In November 1930 he was shot while in Tokyo Railway Station, surviving a few months before succumbing, an act which inspired others.  In 1932 the nation learned of the Ketsumeidan Jiken (the "League of Blood" or "Blood-Pledge Corps Incident"), a nationalist conspiracy to assassinate liberal politicians and the wealthy donors who supported them.  A list of twenty-two intended victims was later discovered but the group succeeded only in killing one former politician and one businessman.

The death of Inukai Tsuyoshi san (1855–1932; Prime Minister of Japan 1931-1932) was an indication of what was to follow.  A skilled politician and something of a technocrat, he’d stabilized the economy but he abhorred war as a ghastly business and opposed the army’s ideas of adventures in China, something increasingly out of step with those gathering around his government.  In May 1932, after visiting the Yasukuni Shrine to pay homage to the Meiji era’s first minister of war (assassinated in 1869), nine navy officers went to the prime minister’s office and shot him dead.  Deed done, the nine handed themselves in to the police.  At their trial, there was much sympathy and they received only light sentences (later commuted), although some fellow officers, fearing they might be harshly treated, sent to the government a package containing their nine amputated fingers with an offer to take the place of the accused were they sentenced to death.  In the way the Japanese remember such things, it came to be known as “the May 15 incident”.

Nor was the military spared.  Yoshinori Shirakawa san (1869–1932) and Tetsuzan Nagata san (1884–1935), both generals in the Imperial Japanese Army, were assassinated, the latter one of the better-known victims of the Aizawa Incident of August 1935, a messy business in which two of the three army factions then existing resolved their dispute with murder.  Such was the scandal that the minister of the army was also a casualty but he got off lightly; he was ordered to resign “until the fuss dies down”, returning briefly to serve as prime minister in 1937 before dying of natural causes some four years later.

All of the pressures which had been building to create the political hothouse that was mid-1930s Japan were realized in the Ni Ni-Roku Jiken (the February 26 incident), an attempted military coup d'état in which fanatical young officers attempted to purge the government and military high command of factional rivals and ideological opponents (along with, as is inevitable in these things, settling a few personal scores).  Two victims were Viscount Takahashi Korekiyo san (1854–1936; Prime Minister 1921-1922) and Viscount Saitō Makoto san (1858–1936; admiral in the Imperial Japanese Navy & prime minister 1932-1934 (and the last former Japanese prime minister to be assassinated until Shinzo Abe san in 2022)).  As a coup, it was a well-drilled operation, separate squads sent out at 2am to execute their designated victims although, in Japanese tradition, they tried not to offend, one assassin recorded as apologizing to terrified household staff for “the annoyance I have caused”.  Of the seven targets the rebels identified, only three were killed but the coup failed not because not enough blood was spilled but because the conspirators made the same mistake as the Valkyrie plotters (who sought in 1944 to overthrow Germany’s Nazi regime); they didn’t secure control of the institutions which were the vital organs of state and, notably, did not seize the Imperial Palace and thus place themselves between the Emperor and his troops, something they could have learned from Hernán Cortés (1485–1547) who made clear to his Spanish Conquistadors that the capture of Moctezuma (Montezuma, circa 1466-1520; Emperor of the Aztec Empire circa 1502-1520) was their object.  As it was, the commander in chief ordered the army to suppress the rebellion and within hours it was over.

However, the coup had profound consequences.  If Japan’s path to war had not been guaranteed before the insurrection, after it the impetus assumed its own inertia and the dynamic shifted from one of militarists against pacifists to agonizing appraisals of whether the first thrust of any attack would be to the north against the USSR or to the south into the Pacific.  The emperor had displayed a decisiveness he’d not re-discover until two atomic bombs had been dropped on his country but, seemingly convinced there was no guarantee the army would put down a second coup, his policy became one of conciliating the military which was anyway the great beneficiary of the February 26 incident; unified after the rebels were purged, it quickly asserted control over the government, weakened as that was by the death of its prominent liberals and the reluctance of others to challenge the army, assassination a salutary lesson.

Assassins both:  David Low’s (1891-1963) Rendezvous, Evening Standard, 20 September 1939. 

The Molotov–Ribbentrop Pact (usually styled as the Nazi-Soviet Pact) was a treaty of non-aggression between the USSR and Nazi Germany, signed in Moscow on 23 August 1939.  A political sensation when it was announced, it wouldn't be until the first Nuremberg Trial (1945-1946) that the Western powers became aware of the details of the suspected secret protocol under which the signatories partitioned Poland between them.  Low's cartoon was published shortly after Stalin (on 17 September) invaded from the east, having delayed military action until German success was clear.

It satirizes the cynicism of the Molotov-Ribbentrop Pact, Hitler and Stalin bowing politely, the words revealing their true feelings.  After returning to Berlin from the signing ceremony, von Ribbentrop reported the happy atmosphere to Hitler as "…like being among comrades" but if he was fooled, comrade Stalin remained the realist.  When Ribbentrop proposed a rather effusive communiqué of friendship and a 25-year pact, the Soviet leader suggested that after so many years of "...us tipping buckets of shit over each-other", a ten-year agreement announced in more business-like terms might seem rather more plausible to the peoples of both nations.  It was one of the few occasions on which comrade Stalin implicitly admitted even a dictator needs to take note of public opinion.  His realism served him less well when he assumed no rational man fighting a war on two fronts against a formidable enemy would by choice open another front of 3000-odd kilometres (1850 miles) against an army which could raise 500 divisions.  Other realists would later use the same sort of cold calculation and conclude that however loud the clatter from the sabre rattling, Mr Putin would never invade Ukraine.

Inflation

Inflation (pronounced in-fley-shuhn)

(1) In economics, a persistent, substantial rise in the general level of prices, often related to an increase in the money supply, resulting in the loss of value of currency.

(2) Of or pertaining to the act of inflating or the state of being inflated.

(3) In clinical medicine, the act of distending an organ or body part with a fluid or gas.

(4) In the study of the metrics of educational standards, an undue improvement in academic grades, unjustified by or unrelated to merit.

(5) In theoretical cosmology, an extremely rapid expansion in the size of the universe, said to have happened almost immediately after the big bang.

1300-1350: From the Middle English inflacioun & inflacion, from the Old French inflation (swelling), from the Latin inflationem (nominative īnflātiō) (expansion; a puffing up, a blowing into; flatulence), noun of action from the past participle stem of inflare (blow into, puff up) and thus related to īnflātus, the perfect passive participle of īnflō (blow into, expand).  The construct of the figurative sense (inspire, encourage) was in- (into) (from the primitive Indo-European root en (in)) + flare (to blow) (from the primitive Indo-European root bhle- (to blow)).  The meaning "action of inflating with air or gas" dates from circa 1600 while the monetary sense of "a sustained increase in prices" replaced the original meaning (an increase in the amount of money in circulation), first recorded in US use in 1838.  The derived noun hyperinflation dates from 1925 when it was first used to describe the period of high inflation in Weimar Germany; earlier, surgeons had used the word when describing certain aspects of lung diseases.  The adjective inflationary was first used in 1916 as a historic reference to the factors which caused a rapid or sustained increase in prices.

The early meaning related to flatulence, the sense being of a "swelling caused by a gathering of wind in the body", before being adopted as a technical term by clinicians treating lung conditions.  The figurative use, as in "outbursts of pride", was drawn directly from the Latin inflationem (nominative inflatio), a noun of action from the past participle stem of inflare (blow into; puff up).  The now most common use beyond the tyre business, that of economists to describe statistically significant movement in prices, is derived from an earlier adoption by state treasuries to measure the volume of money in circulation, first recorded in 1838 in the US; the money supply is now counted with a number of definitions (M1, M3 etc).  Cosmological inflation theory was first developed in 1979 by Cornell theoretical physicist Alan Guth (b 1947).
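The economists' sense lends itself to worked arithmetic.  As a minimal sketch (in Python; the index values are invented for illustration and any real calculation would use a published CPI series), inflation is just the percentage change in a general price index between two periods:

# Inflation as the percentage change in a price index between two
# periods; the CPI numbers below are hypothetical.
def inflation_rate(index_old: float, index_new: float) -> float:
    """Return period-over-period inflation as a percentage."""
    return (index_new - index_old) / index_old * 100

cpi_year_1 = 118.2  # hypothetical consumer price index, first year
cpi_year_2 = 127.0  # hypothetical consumer price index, second year
print(f"Annual inflation: {inflation_rate(cpi_year_1, cpi_year_2):.1f}%")
# prints: Annual inflation: 7.4%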

Cosmic Inflation

Cosmic inflation is a theory of exponential expansion of space in the early universe.  This inflationary period is speculated to have begun an indescribably short time after the start of the big bang and to have been about as brief.  Even now, space continues to expand, but at less rapid rates so the big bang is not just a past event but, after fourteen billion-odd years, still happening.
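For those who want the "exponential" made explicit, a hedged sketch (the simplest, de Sitter-like description rather than any particular model): during inflation the scale factor of the universe grows as

$$a(t) = a_0\,e^{Ht}, \qquad N = \ln\frac{a_{\text{end}}}{a_{\text{start}}} \approx H\,\Delta t$$

where H is the (approximately constant) Hubble parameter and N counts the "e-folds" of expansion; something like N ≳ 60 is usually invoked to resolve the horizon and flatness problems.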

Definitely not to scale.

One implication of the scale of the expansion of space is the speculation that things, some of which may have been matter, may have travelled faster than the speed of light, suggesting the speed of light came into existence neither prior to nor at the start of the big bang but after, possibly within a fraction of a second although, within the discipline, other models have been built.  The breaking of the Einsteinian speed limit may suggest conditions in that first fraction of a second of existence were so extreme the laws of physics may not merely have been different but may not have existed or even have been possible.  If that's true, it may be nonsensical to describe them as laws.  Matter, energy and time may also have come into existence later than the start of the big bang.

The theory has produced derivatives.  One notion is that, even though it’s always possible to imagine an equation which can express any duration, time may not be divisible beyond a certain point; another that there can never exist a present, only a past or a future.  Perhaps most weird is the idea that the (often labeled chaotic but actually unknown) conditions of the very early big bang could have progressed instead to expanding space but without matter, energy or time.  Among nihilists, there’s discussion about whether such a universe could be said to contain nothing, although an even more interesting question is whether a genuine state (non-state?) of nothing is possible even in theory.

Price Inflation

In economics, inflation is suddenly of interest in the West because the rate has spiked.  The memories are bad because the inflation associated with the 1970s & 1980s was finally suppressed by central banks and some political realists good at managing expectations, combining to engineer recessions and the consequent unemployment.  After that, in advanced economies, as inflation faded from memory to history, there tended to be more academic interest in the possibility deflation might emerge as a problem.  As the Bank of Japan discovered, high inflation was a nasty thing but experience and the textbooks at least provided case-studies of how it could be tamed whereas deflation, once established and remaining subject to the conditions which led to its existence, could verge on the insoluble.

In most of the West however, deflationary pressures tended to be sectoral components of the whole, the re-invented means of production and distribution in the Far East exporting unprecedented efficiencies to the West, the falling prices serving only to stimulate demand because they happened in isolation from other forces.  However, the neo-liberal model which began to prevail after the political and economic construct of the post-World War II settlement began to unravel was based on a contradictory implementation of supply-side economics: restricting the money supply while simultaneously driving up asset prices.  That was always going to have consequences (and there were a few), one of which was the GFC (global financial crisis (2008-circa 2011)) which happened essentially because the rich had run out of customers with the capacity to service loans and had begun lending money to those who were never going to be able to pay it back.  Such lending has always happened but at scale, it can threaten entire financial infrastructures.  Whether that was actually the case in 2008 remains a matter of debate but such was the uncertainty at the time (much of it based on a widespread unwillingness of many to reveal their true positions) that everyone’s worst-case scenarios became their default assumption and the dynamics which have always driven markets in crisis (fear and stampede) spread.

What was clear in the wake of the failure of Lehman Brothers (1847-2008) was that much money had simply ceased to exist, a phenomenon discussed by a most interested Karl Marx (1818-1883) in Das Kapital (1867–1883) and while losses were widespread, of particular significance were those suffered by the rich because it was these which were restored (and more) by what came to be called quantitative easing (QE), actually a number of mechanisms but essentially an increase in the money supply.  The textbooks had always mentioned the inflationary consequences of this but that had been based on the assumption that the supply would spread wide.  The reason the central bankers had little fear of inducing inflation (as measured by the equations which have been honed carefully since the 1970s so as not to frighten the horses) was that the money created was given almost exclusively to the rich, a device under which not only were the GFC losses made good but the QE system (by popular demand) was maintained, the wealth of the rich increasing extraordinarily.  It proved trickle-down economics did work (at least as intended, a trickle being a measure of a very small flow), the inequalities of wealth in society now existing to an extent not seen in more than a century.

Salvator Mundi (circa 1500) by Leonardo da Vinci.  Claimed to be the artist's last known painting, in 2017 it sold at auction for US$450.3 million, still a record and more than double that achieved by the next most expensive, Picasso’s Les femmes d’Alger (Version ‘O’), which made US$179.4 million in 2015.

Post-GFC inflation did happen but it was sectorally specific, mansions and vintage Ferraris which once changed hands for a few million suddenly selling for tens of millions and a Leonardo of not entirely certain provenance managing not far from half a billion.  The generalized inflationary effect in the broad economy was subdued because (1) the share of the money supply held by the non-rich had been subject only to modest increases and (2) the pre-existing deflationary pressures which had for so long been helpful continued to operate.  By contrast, what governments were compelled (for their own survival) to do as the measures taken during the COVID-19 pandemic so affected economic activity had the effect of increasing the money supply in the hands of those not rich and, combined with (1) low interest rates which set the cost of money at close to zero, (2) pandemic-induced stresses in labour markets and supply and distribution chains and (3) the effects of Russia’s invasion of Ukraine, created what is now called a “perfect storm”.  The inflation rate was already trending up even before the invasion but the war has proved an accelerant.  In these circumstances, all that can be predicted is that the text-book reaction of central banks (raising interest rates) will be (1) a probably unavoidable over-reaction to deal with those factors which can be influenced by monetary policy and (2) ineffective against the geopolitical factors which are the vectors through which inflation is being exported to the West.  Central banks really have no choice other than to use the tools at their disposal and see what happens but the problem remains that while those tools are effective (if brutish) devices for dealing with demand-inflation, their utility in handling supply-inflation is limited.

First world problem: It’s now hard to find a Ferrari 250 GTO for less than US$70 million.

Friday, July 8, 2022

Heckblende

Heckblende (pronounced hek-blend or hek-blend-ah (German))

A moulded piece of reflective plastic permanently mounted between a car’s tail lamp (or tail light) assemblies and designed to make them appear a contiguous entity.

1980s: A German compound noun, the construct being Heck (rear; back) + Blende (cover).  As a surname, Heck (most common in southern Germany and the Rhineland) came from the Middle High German hecke or hegge (hedge), the origin probably as a topographic name for someone who lived near a hedge.  The link with hedges as a means of dividing properties led in the Middle Low German to heck meaning “wooden fencing” under the influence of the Old Saxon hekki, from the Proto-West Germanic hakkju.  In nautical slang "heck" came to refer to the “back of a ship” because the position of the helmsman in the stern was enclosed by such a fence and from here it evolved in modern German generally to refer to "back or rear".  The Modern German Blende was from blenden (deceive), from the Middle High German blenden, from the Old High German blenten, from the Proto-Germanic blandijaną, from the primitive Indo-European blend- and was cognate with the Dutch blenden and the Old English blendan.  Because all German nouns are capitalized, Heckblende is correct but in English, heckblende is the usual spelling.

The German Blende translates as “cover” so the construct Heck + Blende (one of their shorter compounds) happily deconstructs as “back cover” and that obviously describes the plastic mouldings used to cover the space between a car’s left and right-side tail lamps.  Blenden however can (as a transitive or intransitive verb) translate as (1) “to dazzle; to blind” in the sense of confusing someone’s sight by means of excessive brightness, (2) (figuratively and usually as an intransitive) “to show off; to pose” (try to make an impression on someone by behaving affectedly or overstating one’s achievements) and (3) “to dazzle” in the sense of deception, from the 1680s German Blende (an ore of zinc and other metals), a back-formation from blenden (in the sense of "to blind, to deceive"), so called because the substance resembles lead but yields none.  It should not be confused with the English construct hornblende (using the English “blende” in the sense of “mix”), a dark-green to black mineral of the amphibole group (calcium magnesium iron and hydroxyl aluminosilicate).

A heckblende thus (1) literally is a cover and (2) is there to deceive a viewer by purporting to be part of the rear lighting rather than something merely decorative (sic).  If a similar looking assembly is illuminated and thus part of the lighting system, then it's not a heckblende but part of a full-width tail lamp. 

1934 Auburn Boat-tail Speedster.

On cars, the design of tail lamps started modestly enough and few were in use before 1914, often a small, oil-lit single lens the only fitting.  Electric lamps were standardized by the 1920s and early legislation passed in many jurisdictions specified the need for red illumination to the rear (later also to indicate braking) but about the only detail specified was a minimum luminosity; shape, size and placement were left to manufacturers.  Before the late 1940s, most early tail lamps were purely functional with little attempt to make them design motifs although during the art deco era there were some notably elegant flourishes but despite that, they remained generally an afterthought and on lower priced models, a second tail lamp was sometimes optional, the standard of a left and right-side unit not universal until the 1950s.

A tale of the tails of two economies:  1959 MGA Twin-Cam FHC & 1959 Daimler Majestic (upper) and 1959 Chevrolet Impala (batwing) flattop & 1959 DeSoto Adventurer convertible (lower).

It was in the 1950s the shape of tail lamps became increasingly stylized.  With modern plastics freeing designers from the constraints the use of glass had imposed and the experience gained during the Second World War in the mass-production of molded Perspex, new possibilities were explored.  In the UK and Europe, there was little extravagance, manufacturers content usually to take advantage of new materials and techniques mostly to fashion what were little more than larger, more rounded versions of what had gone before, the amber lenses adopted as turn indicators to replace the mechanically operated semaphore signals often little more than a duplication of the red lamp or an unimaginatively added appendage.

1961 Chrysler Turboflite show car.

Across the Atlantic, US designers were more ambitious but one idea which seems not to have been pursued was the full-width tail lamp and that must have been by choice because it would have presented no challenges in engineering.  Instead, as the jet age became the space age, the dominant themes were aeronautical or recalled the mechanisms of rocketry, tail lamps styled to resemble the exhausts of jet-engines or space ships, the inspiration as often from SF (science fiction) as the runway.  Pursuing that theme, much of the industry succumbed to the famous fin fetish, the tails of their macropterous creations emphasizing the vertical more than the horizontal.  Surprisingly though, despite the industry having produced literally dozens of one-off “concept” and “dream” cars over the decade, it seems it wasn’t until 1961, when Chrysler sent the Turboflite around the show circuit, that something with a genuine full-width tail lamp was shown.

1936 Tatra T87 (left), 1961 Tatra T603A prototype (centre) & 1963 Tatra T-603-X5 (right).

That same year, in Czechoslovakia, the Warsaw Pact’s improbable Bohemian home of the avant garde, Tatra’s engineers considered full-width tail lamps for their revised 603A.  As indicated by the specification used since before the war (rear-engined with an air-cooled, 2.5 litre (155 cubic inch) all-aluminum V8), Tatra paid little attention to overseas trends and were influenced more by dynamometers and wind tunnels.  However, the tail lamps didn’t make it to volume production although the 603A prototype did survive to be displayed in Tatra’s Prague museum.  Tatra’s designs, monuments to mid-century modernism, remain intriguing.

1967 Imperial LeBaron four door Hardtop.

If the idea didn’t impress behind the iron curtain, it certainly caught on in the West, full-width assemblies used by many US manufacturers over the decades including Mercury, Imperial, Dodge, Shelby, Ford, Chrysler & Lincoln.  Some genuinely were full-width lamps in that the entire panel was illuminated, a few from the Ford corporation even with the novelty of sequential turn-signals (outlawed in the early 1970s, bureaucrats seemingly always on the search for something to ban).  Most however were what would come to be called heckblendes, intended only to create an illusion.

Clockwise from top left: 1974 ZG Fairlane (AU), 1977 Thunderbird (US), 1966 Zodiac Mark IV (UK), 1970 Thunderbird (US), 1973 Landau (AU) & 1970 Torino (US).

Whether heckblendes or actually wired assemblies, Ford became especially fond of the idea which in 1966 made an Atlantic crossing, appearing on the Mark IV Zodiac, a car packed with advanced ideas but so badly executed it tarnished the name and when it (and the lower-priced Zephyr which made do without the heckblende) was replaced, the Zephyr & Zodiac names were banished from Europe, never to return.  Ford’s southern hemisphere colonial outpost picked up the style (typically, several years later), Ford Australia using heckblendes on the ZF & ZG Fairlanes (1972-1976) and the P5 LTD & Landau (1973-1976).  The Fairlane’s heckblendes weren’t reprised when the restyled ZH (1976-1979) model was released but, presumably having spent so much of the budget on new tail lamps, the problem of needing a new front end was solved simply by adapting that of the 1968 Mercury Marquis (the name shamelessly borrowed too), colonies often run with hand-me-downs.


1968 HK Holdens left to right: Belmont, Kingswood, Premier & Monaro GTS.  By their heckblende (or its absence), they shall be known.

In Australia, the local subsidiary of General Motors (GM) applied a double fake.  The "heckblende" on the HK Monaro GTS (1968-1969), as a piece of cost-cutting, was actually red-painted metal rather than reflective plastic and unfortunately prone to deterioration under the harsh southern sun; it was a fake version of a fake tail lamp.  Cleverly though, the fake apparatus was used as an indicator of one's place in the hierarchy, the basic Belmont with just tail lamps, the (slightly) better-appointed Kingswood with extensions, the up-market Premier with extended extensions and the Monaro GTS with the full-width part.  Probably the Belmont and Premier were aesthetically the most successful.  Exactly the same idea was recycled for the VH Commodore (1981-1984), the SL/E (effectively the Premier's replacement) model's tail lamp assemblies gaining stubby extensions.




Left to right, 1967 HR Premier, 1969 HT Brougham & 1971 HQ Premier.  

The idea of a full-width decorative panel wasn’t new, Holden having used such a fitting on earlier Premiers.  Known as the “boot appliqué strip”, it began small on the EJ (1962-1963), EH (1963-1965) & HD (1965-1966) before becoming large and garish on the HR (1966-1968) but (although not then known as bling) that must have been thought a bit much because it was toned down and halved in height when applied to the elongated and tarted-up Brougham (1968-1971 and intended to appeal to the bourgeoisie) and was barely perceptible when used on the HQ Premier (1971-1974).  Holden didn’t however forget the heckblende and a quite large slab appeared on the VT Commodore (1997-2000) although it wasn’t retained on the revised VX (2000-2002); whether in this the substantial rise in the oil price (and thus the cost of plastic) was a factor isn’t known.

Left to right: 1973 Porsche 914 2.0, 1983 BMW 323i (E30) & 1988 Mercedes-Benz 300E (W124).

Although, beginning with the 914 in 1973, Porsche was an early European adopter of the heckblende and has used it frequently since, it was the 1980s which were the halcyon days of after-market plastic, owners of smaller BMWs and Mercedes-Benz seemingly the most easily tempted.  The additions were always unnecessary and the only useful way they can be catalogued is to say some were worse than others.  The fad predictably spread to the east (near, middle & far) and the results there were just as ghastly although the popularity of the things must have been helpful as a form of economic stimulus, such was the volume in which they were churned out.  Among males aged 17-39, few things have proved as enduringly infectious as a love of gluing or bolting to cars pieces of plastic which convey their owner's appalling taste.

2019 Mercedes-Benz EQC 400 with taillight bar.

Fewer manufacturers now use heckblendes as original equipment and when they did, the terminology varied, the nomenclature including "decor panels", "valances" and "tail section appliqués".  However, although the heckblende may (hopefully) be headed for extinction, full-width tail lamps still entice stylists and modern techniques of design and production, combined with what LEDs & OLEDs have made possible, mean it’s again a popular feature, the preferred term now “taillight bar”.

Scrunchie

Scrunchie (pronounced skruhn-chee)

An elastic band covered with gathered fabric, used to fasten the hair, as in a ponytail (usually as scrunchie).

1987: The construct was scrunch + -ie.  Scrunch has existed since the early nineteenth century, either an intensive form of crunch, ultimately derived from the onomatopoeia of a crumpling sound, or a blend of squeeze + crunch.  The suffix -ie was a variant spelling of -ee, -ey & -y and was used to form diminutive or affectionate forms of nouns or names.  It was used also (sometimes in a derogatory sense) to form colloquial nouns signifying the person associated with the suffixed noun or verb (eg bike: bikie, surf: surfie, hood: hoodie etc).  Scrunchie is now used almost exclusively to describe the hair accessory.  Historically, the older form (scrunchy) was used for every other purpose but since the emergence of the new spelling there’s now some overlap.  Never fond of adopting an English coining, the French use the term élastique à cheveux (hair elastic).  The alternative spelling is scrunchy (used both in error and in commerce).  Scrunchie is a noun; the noun plural is scrunchies.  The adjectives scrunchier & scrunchiest are respectively the comparative & superlative forms of scrunchy; as applied to scrunchies, they would be used to describe a scrunchie's relative degree of scrunchiness (a non-standard noun).

Mean Girls (2004) themed scrunchies are available on-line.

It's not unlikely that in times past women took elastic bands (or something with similar properties) and added a decorative covering to create their own stylized hair-ties but as a defined product, the scrunchie appears first to have been offered commercially circa 1963-1964, the design not patented until 1987 when night club singer Rommy Hunt Revson (1944–2022) named the hair accessory the Scunci (after her pet toy poodle), the name scrunchie an organic evolution because the fabric "scrunched up".  They were very popular in the 1990s although some factions in the fashion industry disparaged them, especially offended (some people do take this stuff seriously) if seeing them worn on the wrist or ankle.  The scrunchie is a classic case-study in the way such products wax and wane in popularity, something wholly unrelated to their functionality or movements in price; while the elasticity of scrunchies has remained constant, the elasticity of demand is determined not by supply or price but by perceptions of fashionability.  When such a low-cost item becomes suddenly fashionable, influenced by appearances in pop-culture, use extends to a lower-income demographic which devalues the appeal, the fashion critics declare the thing "a fashion faux pas" and use declines among all but those oblivious of or indifferent to such rulings.  In the way of such things however, products often don't go away but lurk on the edges of respectability, the comebacks variously ironic, part of a retro trend or something engineered by industry, the tactics now including the use of the millions of influencers available for hire.
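Because the passage above leans on the economists' term, a minimal sketch may help (in Python; the midpoint method is one of several conventions and the scrunchie prices and quantities are invented purely for illustration):

# Price elasticity of demand: percentage change in quantity demanded
# divided by percentage change in price, using midpoint averages.
def arc_elasticity(p1: float, p2: float, q1: float, q2: float) -> float:
    pct_q = (q2 - q1) / ((q1 + q2) / 2)
    pct_p = (p2 - p1) / ((p1 + p2) / 2)
    return pct_q / pct_p

# Hypothetical: price falls from $4 to $3, weekly sales rise 1000 -> 1400.
print(f"Elasticity: {arc_elasticity(4, 3, 1000, 1400):.2f}")
# prints: Elasticity: -1.17 (|E| > 1: demand here is price-elastic)

The point of the joke is that for a fad item the quantity term moves with perceived fashionability while the price barely changes, so the measured elasticity says little about the product itself.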

The Dinner Scrunchie

A more recent evolution is the enlarged version called the Dinner Scrunchie, so named, the brand suggests, because it's "elevated enough to wear to dinner".  They're available from MyKitsch in black and designer colors and, covered with semi-sheer chiffon, they're eight inches (200mm) in diameter, about the size of a small dinner plate.  Breakfast and lunch scrunchies seem not to be a thing but those gaps in the market are catered to by the Brunch Scrunchie which, while larger than most, is smaller and cheaper than the dinner version; it appears to be about the size of a bread & butter plate.

Rita Ora (b 1990) in fluoro scrunchie, New York City, May 2014.

The most obvious novelty of the bigger scrunchies is of course the large size and because that translates into greater surface area, in the minds of many, thoughts turn immediately to advertising space.  There are possibilities but because of the inherent scrunchiness, they're really not suitable for text displays except perhaps for something simple like X (formerly known as Twitter) although Elon Musk (b 1971) probably thinks whatever else X may require, it's not brand awareness or recognition.  Where a message can be conveyed with something as simple as a color combination (such as the LGBTQQIAAOP's rainbow flag), the scrunchie can be a good billboard.

Thursday, July 7, 2022

Proscenium

Proscenium (pronounced proh-see-nee-uhm or pruh-see-nee-uhm)

(1) In a modern theatre, the stage area between the curtain and the orchestra or the arch that separates a stage from the auditorium together with the area immediately in front of the arch (also called the proscenium arch).

(2) In the theatre of antiquity, the stage area immediately in front of the scene building (probably a medieval misunderstanding).

(3) In the theatre of antiquity, the row of columns at the front the scene building, at first directly behind the circular orchestra but later upon a stage.

1608: From the Latin proscēnium and proscaenium (in front of the scenery), from the Ancient Greek προσκήνιον (proskḗnion) (entrance to a tent, porch, stage) which, in late Classical Greek, had come to mean “tent; booth; stage curtain”.  The construct in Greek was πρό (pró-) (before) + σκηνή (skēnḗ) (scene; building) + -ion (the neuter noun suffix).  The noun plural is proscenia; the relative rarity of the base word means the alternative plural prosceniums is seen less frequently still but both are acceptable.  The standard abbreviation in the industry and among architects is pros.  For purists, the alternative spelling is proscænium and other European forms include the French proscénium and the Italian proscenio, other languages borrowing these spellings.

The occasionally cited literal translation of the Greek, "the space in front of the scenery", appears to be another of the medieval-era errors created by either a mistranslation or a misunderstanding.  The modern sense of "space between the curtain and the orchestra" is attested from 1807 although it had been used figuratively to suggest “foreground or front” since the 1640s.

Architectural variations

Emerson Colonial Theatre, Boston, Massachusetts.

Although the term is not always applied correctly, technically, a proscenium stage must have an architectural frame (known to architects as the “proscenium arch” although these are not always in the shape of an arch).  Their stages tend to be deep (the scale of the arch usually dictating the extent) and to aid visibility, are sometimes raked, the surface rising in a gentle slope away from the audience.  Especially in more recent constructions, the front of the stage can extend beyond the proscenium into the auditorium; this is called an apron or forestage.  Theatres with proscenium stages are known as “proscenium arch theatres” and often include an orchestra pit and a fly tower with one or more catwalks to facilitate the movement of scenery and the lighting apparatus.


Thrust stage, Shakespeare Festival Theatre, Stratford, Ontario.

There are other architectural designs for theatres.  The thrust stage projects (ie “thrusts”) the performance into the auditorium with the audience sitting on three sides in what’s called the “U” shape.  In diagrams and conceptual sketches, the thrust stage area is often represented as a square but they’ve been built in rectangles, as semi-circles, half-polygons, multi-pointed stars and a variety of other geometric shapes.  Architects can tailor a thrust stage to suit the dimensions of the available space but the usual rationale is to create an intimacy between actors and audience.


In the round: Circle in the Square Theatre, New York City.

The term theatre-in-the-round can be misleading because the arrangement of the performance area, while central, is rarely executed as an actual circle, the reference instead being to the audience being seated “all around”.  Built typically in a square or polygonal formation, except in some one-act performances, the actors enter through aisles or vomitories between the seating and directors have them move as necessitated by the need to relate to an audience viewing from anywhere in the 360° sweep, the scenery minimal and positioned to avoid obstructions.  Because theatre-in-the-round deconstructs the inherently two-dimensional nature of the classical stage, it was long a favorite of the avant-garde (there was a time when such a thing could be said to exist).  The arena theatre is theatre-in-the-round writ large, a big auditorium with a central stage and, like the sports stadia they resemble, typically rectangular and often a multi-purpose venue.  There’s a fine distinction between arena theatres and hippodromes which more recall circuses, with a central circular (or oval) performance space surrounded by concentric tiered seating, deep pits or low screens often separating audience and performers.

Winter Talent Show stage, Mean Girls (2004).

The black-box (or studio or ad hoc) theatre is a flexible performance space.  At its most basic it can be a single empty room, painted black, the floor of the stage at the same level as the first audience row from which there’s no separation.  To maximize the flexibility, some black-box theatres have no permanent fixtures and allow for the temporary setup of seating to suit the dynamics of the piece; the spaces have even been configured with no seating for an audience, the positional choices made by patrons influencing the performance.  The platform stage is the simplest setup, often not permanent and suited to multi-purpose venues.  Flexible thus, but the lack of structure does tend to preclude more elaborate productions, the stage a raised and usually rectangular platform at one end of a room; the platform may be level or raked according to the size and shape of the space.  The audience will sit in rows and such is the simplicity that platform stages are often used without curtains, the industry term being “open stage” or “end stage”, the latter perhaps unfortunate but then actors are used to “breaking a leg” and “dying on stage”.

Open Air Theatre Festival, Paris.

The phrase open air theatre refers more to the performance than the physical setting.  It means simply something performed not under a roof (although sometimes parts of the stage or audience seating will be covered).  The attraction for a director is that stages so exposed can make use of natural light as it changes with the hour, sunsets and stars especially offering dramatic possibilities; rain can be a problem.  Open air theatres are also an example of site-specific theatre (of which street theatre is probably the best-known), a term with quite a bit of overlap with other descriptors although it’s applied usually to theatre performed in non-traditional environments such as pubs, old prisons or warehouses, often reflecting the history of the place.  Promenade theatre (sometimes called peripatetic theatre) involves either the actors or the audience moving from place to place as the performance dictates.  Interactive theatre is rarely performed (at least by intent); it involves the actors interacting with the audience and is supposed to be substantially un-scripted but, like reality television, some of what’s presented as interactive theatre has been essentially fake.

Borrowed from antiquity, the proscenium arch theatre was for centuries a part of what defined the classical tradition of Western dramatic art but in the twentieth century playwrights and directors came to argue that modern audiences were longing for more intimate experiences although there’s scant evidence this view was the product of demand rather than supply.  That said, the novelty of immersive, site-specific performances gained much popularity and modern production techniques stimulated a revival of interest in older forms like theatre-in-the-round.

There were playwrights and directors however (some, at whatever age, self-styled enfants terribles), who preferred austerity, decrying the proscenium arch as a theatre based on a lavish illusion for which we either no longer had the taste or needed to have it beaten out of us.  It was thought to embody petit bourgeois social and cultural behaviors which normalized not only the style and content of theatre but also the rules of how theatre was to be watched: sitting quietly while well dressed, deferentially laughing or applauding at the right moments.  An interesting observation also was that the proscenium arch created a passive experience little different from television, a critique taken up more recently by those who thought long performances, typically with no more than one intermission (now dismissed as anyway existing only to serve wine and cheese), unsuitable for audiences with short attention spans and accustomed to interactivity.

Quite how true any of that was except in the minds of those who thought social realist theatre should be compulsory re-education for all is a mystery but the binge generation seems able easily to sustain their attention for epic-length sessions of the most lavishly illusionary stuff which can fit on a screen so there’s that.  The criticisms of the proscenium arch were more a condemnation of those who were thought its devoted adherents than any indication the form was unsuitable for anything but the most traditional delivery of drama.  Neither threatening other platforms nor rendered redundant by them, the style of theatre Plato metaphorically called “the cave” will continue, as it long has, peacefully to co-exist.

Interahamwe

Interahamwe (pronounced in-ter-ah-ham-way or in-tra-ham-way)

A Hutu paramilitary organization.

1992: A constructed proper noun, described variously as (1) borrowed from a Kinyarwanda (a dialect of Rwanda-Rundi) term or (2) a creation to describe the paramilitary formation.  The literal translation is "those who work together" and it is thus a euphemism, one based on the link to the Interahamwe’s preferred choice of weapons: farm tools and the machete.  The construct is intera (from the verb gutera (to work)) + hamwe (together), which is related to rimwe (one).

After the genocide

Flag of the Interahamwe.

Although most associated with the Rwandan genocide of 1994, the Interahamwe began as the innocuous youth wing of the National Republican Movement for Democracy and Development (MRND), then the ruling Hutu party of Rwanda.  However, like some other political youth movements (the Taliban in Pakistan; the Mandela United Football Club in South Africa et al), the circumstances of the times led to mission creep.

The Rwandan genocide had its origin in the Hutu-Tutsi civil war of 1990-1992.  As violence escalated, use of the word “Interahamwe” changed from a description of the youth group into a broad term applied to almost anyone engaged in the mass-murder of Tutsis, regardless of their age or membership of the MRND.  The translation as “those who work together” became a euphemism for “those who kill together”.  Sardonic forms are not rare in military and paramilitary jargon; the IDF (Israeli Defense Force) category for suicide-bombers prematurely blown-up by their own malfunctioning devices is “work accident”.

Although their numbers are now much reduced, the Interahamwe retain the ambition to destabilize Rwanda and still operate from the Democratic Republic of the Congo (DRC), the place to which they fled in late 1994.  From there and neighboring countries, along with other splinter groups such as the Democratic Forces for the Liberation of Rwanda (FDLR), they conduct an insurgency against Rwanda although recent operations suggest they're as much concerned with the various criminal activities undertaken to ensure their survival as with any political agenda.