
Wednesday, December 18, 2024

Consecutive

Consecutive (pronounced kuhn-sek-yuh-tiv)

(1) Following one another in uninterrupted succession or order; successive without interruption.

(2) Marked or characterized by logical sequence (such as chronological, alphabetical or numerical sequence).

(3) In grammar & linguistics, as “consecutive clause”, a linguistic form that implies or describes an event that follows temporally from another (expressing consequence or result).

(4) In musical composition, a sequence of notes or chords which results from repeated shifts in pitch of the same interval (an alternative term for “parallel”).

1605-1615: From the sixteenth century French consécutif, from the Medieval Latin cōnsecūtīvus, from the Latin cōnsecūtus (follow up; having followed), from cōnsequī, infinitive of cōnsequor (to follow after, pursue).  The construct was consecut(ion) + -ive.  Consecution dates from the early fifteenth century and by the 1530s was used in the sense of “proceeding in argument from one proposition to another in logical sequence”.  It was from the Middle English consecucioun (attainment), from the Latin consecutionem (nominative cōnsecūtiō), noun of action from the past-participle stem of consequi (to follow after), from an assimilated form of com (in the sense of “with, together”) + sequi (to follow), from the primitive Indo-European root sekw- (to follow).  The meaning “any succession or sequence” emerged by the 1650s.  The Latin noun cōnsecūtiō (a following after; consequence) was formed from the past participle of cōnsequor (to follow, result, reach).  The -ive suffix was from the Anglo-Norman -if (feminine -ive), from the Latin -ivus.  Until the fourteenth century, all Middle English loanwords from the Anglo-Norman ended in -if (actif, natif, sensitif, pensif et al) and, under the influence of literary Neolatin, both languages introduced the form -ive.  Those forms that were not so replaced were subsequently changed to end in -y (hasty from hastif, jolly from jolif etc).  The antonyms are inconsecutive & unconsecutive but (except in some specialized fields of mathematics) “non-sequential” usually conveys the same meaning.  Like the Latin suffix -io (genitive -ionis), the Latin suffix -ivus is appended to the perfect passive participle to form an adjective of action.  Consecutive is a noun & adjective, consecutiveness is a noun and consecutively is an adverb; the noun plural is consecutives.

In sport, the most celebrated consecutive sequence seems to be things in threes and that appears first to have been institutionalized in cricket, where a bowler taking three wickets with three consecutive deliveries in the same match was first described in 1879 as a “hat trick”.  Because of the rules of cricket, there could even be days between these deliveries because a bowler might take a wicket with the last ball he delivered in the first innings and with the first two he sent down in the second.  A hat trick however can happen only within a match; two in one match and one in another, even if consecutive, doesn’t count.  Why the rare feat came to be called “hat trick” isn’t certain, the alternative explanations being (1) an allusion to the magician’s popular stage trick of “pulling three rabbits out of the hat” (there had earlier also been a different trick involving three actions and a hat) or (2) the practice of awarding the successful bowler a hat as a prize; hats in the nineteenth century were an almost essential part of the male wardrobe and thus a welcome gift.  The “hat trick” terminology extended to other sports including rugby (a player scoring three tries in a match), football (soccer) & ice hockey (a player scoring three goals in a match) and motor racing (a driver securing pole position, setting the fastest lap time and winning a race).  It has become common in sport (and even politics (a kind of sport)) to use “hat trick” of anything in an uninterrupted sequence of three (winning championships, winning against the same opponent over three seasons etc) although “threepeat” (the construct being three + (re)peat) has become popular and, to mark winning three long-established premium events (not always in the same season), there are “triple crowns”.  Rugby’s triple crown is awarded to whichever of the “home countries” (England, Ireland, Scotland & Wales) wins all three matches that season; US horse racing’s triple crown events are the Kentucky Derby, the Preakness Stakes and the Belmont Stakes.

Graham Hill (1929–1975) in BRM P57 with the famous (but fragile) open-stack exhausts, Monaco Grand Prix, 3 June 1962.  Hill is the only driver to have claimed motor-racing's classic Triple Crown.

The term is widely used in motorsport but the classic version is the earliest and consists of the Indianapolis 500, the 24 Hours of Le Mans and the Formula One (F1) World Drivers' Championship (only one driver ever having won all three) and there’s never been any requirement of “consecutiveness”; indeed, now that F1 drivers rarely appear in other series while contracted, it’s less likely to happen.

Donald Trump, a third term and the 22nd Amendment

Steve Bannon (left) and Donald Trump (right).

Although the MAGA (Make America Great Again) team studiously avoided raising the matter during the 2024 presidential election campaign, now Donald Trump (b 1946; US president 2017-2021) is president-elect awaiting inauguration, Steve Bannon (b 1957 and one of the most prominent MAGAs) suggested there’s a legal theory (that term may be generous) which could be relevant in allowing him to run again in 2028, by-passing the “two-term limit” in the US Constitution.  Speaking on December 15 at the annual gala dinner of New York’s Young Republican Club (the breeding ground of the state’s right-wing fanatics), Mr Bannon tantalized the guests by saying “…maybe we do it again in 28?”, his notion of the possibility of a third Trump term based on advice received from Mike Davis (b 1978, a lawyer who describes himself as Mr Trump’s “viceroy” and was spoken of in some circles as a potential contender for attorney general in a Trump administration).  Although the 22nd Amendment to the constitution states: “No person shall be elected to the office of the President more than twice”, Mr Davis had noted it was at least arguable this applied only to “consecutive” terms so, as Mr Bannon confirmed, there was hope.  Warming to the topic, Mr Bannon went on to say: “Donald John Trump is going to raise his hand on the King James Bible and take the oath of office, his third victory and his second term.” (the MAGA orthodoxy being he really “won” the 2020 election which was “stolen” from him by the corrupt “deep state”).

Legal scholars in the US have dismissed the idea the simple, unambiguous phrase in the 22nd Amendment could be interpreted in the way Mr Bannon & Mr Davis have suggested.  In the common law world, the classic case in the matter of how words in acts or statutes should be understood by courts is Bank of England v Vagliano Brothers [1891] AC 107, a bills of exchange case decided by the House of Lords, then the UK’s final court of appeal.  Bank of England v Vagliano Brothers was a landmark case in the law relating to negotiable instruments but of interest here is the way the Law Lords addressed significant principles regarding the interpretation of words in statutes, the conclusion being the primary goal of statutory interpretation is to ascertain the intention of Parliament as expressed in the statute, and that intention must be derived from the language of the statute, interpreted in its natural and ordinary sense, unless the context or subject matter indicates otherwise.  What the judgment did was clarify that a statute may deliberately depart from or modify the common law and courts should not assume a statute is merely a restatement of common law principles unless the statute's language makes this clear.  The leading opinion was written by Lord Herschell (Farrer Herschell, 1837–1899; Lord High Chancellor of Great Britain 1886 & 1892-1895) who held that if the language of the statute is clear and unambiguous, it should be interpreted as it stands, without assuming it is subject to implicit common law principles; only if the language is ambiguous may courts look elsewhere for context and guidance.

So the guiding principle for courts is that the words of a statute should be understood with what might be called their “plain, simple meaning” unless they’re not clear and unambiguous.  While the US Supreme Court recently has demonstrated it does not regard itself as bound even by its own precedents, and certainly not those of a now extinct UK court, few believe even the five most imaginative of the nine judges could somehow read down a constitutional amendment created for the explicit purpose of limiting presidents to two terms to the extent of “…more than twice…” being devalued to “…more than twice in a row…”.  Still, it was a juicy chunk of bleeding raw meat for Mr Bannon to throw to his hungry audience.

The ratification numbers: Ultimately, the legislatures of 41 of the then 48 states ratified the amendment with only Massachusetts and Oklahoma choosing to reject.  

What the 22nd Amendment did was limit the number of times someone could be elected president.  Proposed on 21 March 1947, the ratification process wasn’t completed until 27 February 1951, a time span of 3 years, 343 days which is longer than all but one of the other 26; only the 27th (delaying laws affecting Congressional salary from taking effect until after the next election of representatives) took longer, a remarkable 202 years, 223 days elapsing between the proposal on 25 September 1789 and the conclusion on 7 May 1992; by contrast, the speediest was the 26th which lowered the voting age to 18, its journey absorbing only 100 days between 23 March-1 July 1971.  While not too much should be read into it, it’s of interest that the 18th Amendment (prohibiting the manufacture or sale of alcohol within the US) required 1 year, 29 days (18 December 1917-16 January 1919) whereas the 21st (repealing the 18th) was done in 288 days; proposed on 20 February 1933, the process was completed on 5 December the same year.

The path to the 22nd Amendment began when George Washington (1732–1799; first president of the United States, 1789-1797) chose not to seek a third term, his reasons including (1) a commitment to republican principles which required the presidency not be perceived as a life-long or vaguely monarchical position, (2) the importance of a peaceful transition of power to demonstrate the presidency was a temporary public service, not a permanent entitlement and (3) a desire not to see any excessive concentration of power in one individual or office.  Historians have noted Washington’s decision not to seek a third term was a deliberate effort to establish a tradition of limited presidential tenure, reflecting his belief this would safeguard the republic from tyranny and ensure no individual indefinitely could dominate government.

AI (Artificial Intelligence) generated images by Stable Diffusion of Lindsay Lohan and Donald Trump having coffee in the Mar-a-Lago Coffee Shop.

For more than a century, what Washington did (or declined to do) was regarded as a constitutional convention and no president sought more than two terms.  Theodore Roosevelt (TR, 1858–1919; US president 1901-1909), celebrating his re-election in 1904, appeared to be moved by the moment when, unprompted, he announced: “Under no circumstances will I be a candidate for or accept another nomination” and he stuck to the pledge, arranging for William Howard Taft (1857–1930; president of the United States 1909-1913 & chief justice of the United States 1921-1930) to be his successor, confident he’d continue to pursue a progressive programme.  Taft however proved disappointingly conservative and Roosevelt decided in 1912 to seek a third term.  To critics who quoted at him his earlier pledge, he explained that “…when a man at breakfast declines the third cup of coffee his wife has offered, it doesn’t mean he’ll never in his life have another cup.”  Throughout the 1912 campaign, comedians could get an easy laugh out of the line: “Have another cup of coffee?” and to those who objected to his violating Washington’s convention, he replied that what he was doing was “constitutional” which of course it was.

Puck magazine in 1908 (left) and 1912 (right) wasn't about to let Theodore Roosevelt forget what he'd promised in 1904.  The cartoon on the left was an example of accismus (an expression of feigned uninterest in something one actually desires).  Accismus was from the Latin accismus, from the Ancient Greek ακκισμός (akkismós) (prudery).  Puck Magazine (1876-1918) was a weekly publication which combined humor with news & political satire; in its use of cartoons and caricatures it was something in the style of today's New Yorker but without quite the same tone of seriousness.

Roosevelt didn’t win the Republican nomination because the party bosses stitched things up for Taft so he ran instead as a third-party candidate, splitting the GOP vote and thereby delivering the White House to the Democrats but he gained more than a quarter of the vote, out-polling Taft, and remains the most successful third-party candidate ever, so there was that.  His distant cousin Franklin Delano Roosevelt (FDR, 1882–1945, US president 1933-1945) was the one to prove the convention could be ignored and he gained not only a third term in 1940 but also a fourth in 1944.  FDR was not only a Democrat but also a most subversive one and when Lord Halifax (Edward Wood, 1881–1959; British Ambassador to the United States 1940-1946) arrived in Washington DC to serve as ambassador, he was surprised when one of a group of Republican senators with whom he was having dinner opened proceedings with: “Before you speak, Mr Ambassador, I want you to know that everyone in this room regards Mr Roosevelt as a bigger dictator than Hitler or Mussolini.  We believe he is taking this country to hell as quickly as he can.”  As a sentiment, it sounds very much like the discourse of the 2024 campaign.

"The Trump Dynasty has begun" four term coffee mugs (currently unavailable) created for the 2020 presidential campaign. 

The Republicans truly were appalled by Roosevelt’s third and fourth terms and as soon as they gained control of both houses of Congress began the process of adding an amendment to the constitution which would codify in that document the two-term limit Washington had made a convention.  It took longer than usual but the process was completed in 1951 when the 22nd Amendment became part of the constitution and were Mr Trump to want to run again in 2028, it would have to be repealed, no easy task because such a thing requires not only the concurrence of two thirds of both the House of Representatives & Senate but also three quarters of the legislatures of the fifty states.  In other countries where presidential term limits have appeared tiresome to those who have no intention of leaving office the “work-arounds” are usually easier and Mr Trump may cast the odd envious eye overseas.  In Moscow, Mr Putin (Vladimir Vladimirovich Putin; b 1952; president or prime minister of Russia since 1999) solved the problem by deciding he and his prime minister temporarily should swap jobs (though not authority) while he arranged a referendum to effect the necessary changes to the Russian Constitution.  The point about referendums in Russia was explained by Comrade Stalin (1878-1953; Soviet leader 1924-1953) who observed: “it matters not who votes, what matters is who gets to count the votes.”  Barring accidents or the visitation of the angel of death, Mr Putin is now set to remain as president until at least the mid-2030s.

Some mutual matters of interest: Donald Trump (left) and Vladimir Putin (right).

There have been many African presidents who have "arranged" for constitutional term limits to be "revised" but the most elegant in the handling of this was Pierre Nkurunziza (1964–2020; president of Burundi 2005-2020) who simply ignored the tiresome clause and announced he would be standing for a third term, tidying up loose ends by having Burundi's Constitutional Court declare the president was acting in accordance with the law.  It would seem the principle of statutory interpretation in Bank of England v Vagliano Brothers wasn't brought before the court (formerly part of the empire of Imperial Germany and later a Belgian-administered territory under a League of Nations mandate, Burundi follows the civil law tradition rather than the common law inheritance from the old British Empire) and shortly before the verdict was handed down, one judge fled into exile, claiming the government had applied "pressure" on the court to deliver a ruling favorable to the president. 

Sunday, October 6, 2024

Monolith

Monolith (pronounced mon-uh-lith)

(1) A column, large statue etc, formed originally from a single block of stone but latter-day use applies the term to structures formed from any material and not of necessity a single piece (although technically such a thing should be described using the antonym “polylith”).

(2) Used loosely, a synonym of “obelisk”.

(3) A single block or piece of stone, especially when used in architecture or sculpture and applied most frequently to large structures.

(4) Something (or an idea or concept) having a uniform, massive, redoubtable, or inflexible quality or character.

(5) In architecture, a large hollow foundation piece sunk as a caisson and having a number of compartments that are filled with concrete when it has reached its correct position.

(6) An unincorporated community in Kern County, California, United States (initial capital).

(7) In chemistry, a substrate having many tiny channels that is cast as a single piece, which is used as a stationary phase for chromatography, as a catalytic surface etc.

(8) In arboreal use, a dead tree whose height and size have been reduced by breaking off or cutting its branches (rare except in UK horticultural use).

1829: The construct was mono- + lith.  Mono was from the Ancient Greek, a combining form of μόνος (monos) (alone, only, sole, single), from the Proto-Hellenic mónwos, from the primitive Indo-European mey- (little; small).  It was related to the Armenian մանր (manr) (slender, small), the Ancient Greek μανός (manós) (sparse, rare), the Middle Low German mone & möne, the West Frisian meun, the Dutch meun and the Old High German muniwa, munuwa & munewa (from which German gained Münne (minnow)).  As a prefix, mono- is often found in chemical names to indicate a substance containing just one of a specified atom or group (eg a monoxide such as carbon monoxide; carbon attached to a single atom of oxygen).

In English, the noun monolith was from the French monolithe (object made from a single block of stone), from the Middle French monolythe (made from a single block of stone) and their etymon the Latin monolithus (made from a single block of stone), from the Ancient Greek μονόλιθος (monólithos) (made from a single block of stone), the construct being μονο- (mono-) (the prefix appended to convey the meaning “alone; single”), from μόνος (monos) + λίθος (líthos) (a stone; stone as a substance).  The English form was cognate with the German Monolith (made from a single block of stone).  The verb was derived from the noun.  Monolith is a noun & verb, monolithism, monolithicness & monolithicity are nouns, monolithic is an adjective and monolithically is an adverb; the noun plural is monoliths.  The adjective monolithal is listed as "an archaic form of monolithic".

Monolith also begat two related formations in the technical jargon of archaeology: A “microlith” is (1) a small stone tool (sometimes called a “microlite”) and (2) the microscopic acicular components of rocks.  A “megalith” is (1) a large stone slab making up a prehistoric monument, or part of such a monument, (2) a prehistoric monument made up of one or more large stones and (3) by extension, a large stone or block of stone used in the construction of a modern structure.  The terms seem not to be in use outside the technical literature of the profession.  The transferred and figurative use in reference to a thing or person noted for indivisible unity is from 1934 and is now widely used in IT, political science and opinion polling.  The adjective monolithic (formed of a single block of stone) was in use by the early nineteenth century and within decades was used to mean “of or pertaining to a monolith”, the figurative sense noted since the 1920s.  The adjective prevailed over monolithal which seems first to have appeared in a scientific paper in 1813.  The antonym in the context of structures rendered from a single substance is “polylith” but use is rare and multi-component constructions are often described as “monoliths”.  The antonym in the context of “anything massive, uniform, and unmovable, especially a towering and impersonal cultural, political, or social organization or structure” is listed by many sources as “chimera” but terms like “diverse”, “fragmented” etc are usually more illustrative for most purposes.  In general use, there certainly has been something of a meaning-shift.  While "monolith" began as meaning "made of a single substance", it's now probably most used to convey the idea of "something big & tall" regardless of the materials used.

One of the Monoliths as depicted in the film 2001: A Space Odyssey (1968). 

The mysterious black structures in Sir Arthur C Clarke's (1917–2008) Space Odyssey series (1968-1997) became well known after the release in 1968 of Stanley Kubrick's (1928–1999) film of the first novel in the series, 2001: A Space Odyssey.  Although sometimes described as “obelisks”, the author noted they were really “monoliths”.  In recent years, enthusiasts, mischief makers and click-bait hunters have been erecting similar monoliths in remote parts of planet Earth, leaving them to be discovered and publicized.  With typical alacrity, modern commerce noted the interest and soon replicas were being offered for sale, a gap in the market for Christmas gifts between US$10,000-45,000 apparently identified.

The terms “obelisk” and “monolith” are sometimes used interchangeably and while in the case of many large stone structures this can be appropriate, the two terms have distinct meanings.  Classically, an obelisk is a tall, four-sided, narrow pillar that tapers to a pyramid-like point at the top.  Obelisks often are carved from a single piece of stone (and are thus monolithic) but can also be constructed in sections and archaeologists have discovered some of the multi-part structures exist by virtue of necessity; intended originally to be a single piece of stone, the design was changed after cracks were detected.  A monolith is a large single block of stone which can be naturally occurring (such as a large rock formation) or artificially shaped; monoliths take many forms, including obelisks, statues and even buildings.  Thus, while an obelisk can be a monolith, not all monoliths are obelisks.

Highly qualified German content provider Chloe Vevrier (b 1968) standing in front of the Luxor Obelisk, Paris 2010.

The Luxor Obelisk sits in the centre of the Place de la Concorde, one of the world’s most photographed public squares.  Of red granite, 22.5 metres (74 feet) in height and weighing an estimated 227 tonnes (250 short (US) tons), it is one of a pair, the other still standing in front of the first pylon of the Luxor Temple on the east bank of the Nile River, Egypt.  The obelisk arrived in France in May 1833 and less than six months later was raised in the presence of Louis Philippe I (1773–1850; King of the French 1830-1848).  The square hadn’t always been a happy place for kings to stand; in 1789 (then known as the Place de Louis XV) it was one of the gathering points for the mobs staging what became the French Revolution and after the storming of the Bastille (one of history’s less dramatic events despite the legends), the square was renamed Place de la Révolution, living up to the name by being the place where Louis XVI (1754–1793; King of France 1774-1792), Marie Antoinette (1755–1793; Queen Consort of France 1774-1792) and a goodly number of others were guillotined.  Things were calmer by 1833 when the obelisk was erected.

The structure was a gift to France from Pasha Mehmet Ali (1769–1849, Ottoman Albanian viceroy and governor of Egypt 1805-1848) and in return Paris sent a large mechanical clock which to this day remains in place in the clock tower of the mosque at the summit of the Citadel of Cairo; of the 28 surviving ancient Egyptian obelisks, six remain in Egypt with the rest in various displays around the world.  Some 3000 years old, in its original location the Obelisk contributed to scientific history when in circa 250 BC the Greek geographer & astronomer Eratosthenes of Cyrene (circa 276 BC–circa 195 BC) used the shadow it cast to calculate the circumference of the Earth.  By comparing the shadow at a certain time with one in Alexandria, he concluded that the angular difference between Alexandria and Aswan was seven degrees and 14 minutes and from this he could work out the Earth’s circumference.
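
The arithmetic is simple enough to show: the angle between the shadows has the same proportion to a full circle as the distance between the two cities has to the Earth’s circumference.  A worked version (assuming, as most accounts do, a distance of about 5,000 stadia between the cities, a figure not given above):

\[
\frac{\theta}{360^{\circ}} = \frac{d}{C}
\quad\Rightarrow\quad
C = d \cdot \frac{360^{\circ}}{\theta} \approx 5000 \text{ stadia} \times \frac{360}{7.23} \approx 250{,}000 \text{ stadia}
\]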

Monolithic drivers

In IT, the term “monolithic driver” was used to refer to a software driver designed to handle multiple hardware components or functionalities within a single, large, and cohesive codebase.  In this it differed from the earlier (and later) modular or layered approaches, in which the functionality is split into separate, smaller drivers or modules, each of which handled specific tasks or addressed only certain hardware components.  Monolithic drivers became generally available in the late 1980s, a period when both computer architecture and operating systems were becoming more sophisticated in an attempt to overcome the structural limitations imposed by the earlier designs.  It was in this era many of the fundamental concepts which continue to underpin modern systems were conceived although the general adoption of some lay a decade or more away.

During the 1970s & 1980s, many systems were built with a tight integration between software and hardware and some operating systems (OS) were really little more than “file loaders” with a few “add-ons”, and the limitations imposed were “worked-around” by some programmers who more-or-less ignored the operating system and addressed the hardware directly using assembly language (the human-readable form of machine code).  That approach made for fast software but at the cost of interoperability and compatibility, such creations being hardware-specific rather than using an OS as what came to be known as the HAL (hardware abstraction layer) but at the time, few OSs were like UNIX with its monolithic kernel in which the core OS services (file system management, device drivers etc) were all integrated into a single large codebase.  As the market expanded, it was obvious the multi-fork approach was commercially unattractive except for the odd niche.

After its release in 1981, use of the IBM personal computer (PC) proliferated and because of its open (licence-free) architecture, an ecosystem of third-party suppliers arose, producing a remarkable array of devices which either “hung off” or “plugged into” a PC; the need for hardware drivers grew.  Most drivers at the time came from the hardware manufacturers themselves and typically were monolithic (though not yet usually described as such), written usually for specific hardware, and issues were rife, a change to an OS or even to other (apparently unrelated) hardware or software sometimes inducing instability or worse.  As operating systems evolved to support more modularity, the term “monolithic driver” came into use to distinguish these large, single-block drivers from the more modular or layered approaches that were beginning to emerge.
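
The structural difference is easy to sketch.  What follows is a minimal illustration in Python (the device names and classes are hypothetical; real drivers are written in C against a specific OS driver model, so this shows only the shape of the two designs):

# Monolithic style: knowledge of every device lives in one large codebase;
# changing support for one device means rebuilding (and risking) the lot.
class MonolithicDriver:
    def handle(self, device: str, request: str) -> str:
        if device == "disk":
            return f"disk: {request} done"
        if device == "printer":
            return f"printer: {request} done"
        if device == "network":
            return f"network: {request} done"
        raise ValueError(f"unsupported device: {device}")

# Modular/layered style: each device gets its own small driver behind a
# common interface; the OS composes them and one can be replaced or removed
# without touching the others.
class DiskDriver:
    def handle(self, request: str) -> str:
        return f"disk: {request} done"

class PrinterDriver:
    def handle(self, request: str) -> str:
        return f"printer: {request} done"

registry = {"disk": DiskDriver(), "printer": PrinterDriver()}

def dispatch(device: str, request: str) -> str:
    # The dispatcher knows nothing about the hardware, only the interface.
    return registry[device].handle(request)

print(MonolithicDriver().handle("disk", "read"))   # disk: read done
print(dispatch("printer", "spool"))                # printer: spool done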

It was the dominance of Novell’s Netware (1983-2009) on PC networks which compelled Microsoft to develop Windows NT (“New Technology”, 1993) and it featured a modular kernel architecture, something which made the distinction between monolithic and modular drivers better understood as developers increasingly embraced the modular, layered approach which better handled maintainability and scalability.  Once neutral, the term “monolithic driver” became something of a slur in IT circles, notably among system administrators (“sysadmins” or “syscons”, the latter based on the “system console”, the terminal on a mainframe hard-wired to the central processor) who accepted ongoing failures of this and that as inherent to the business but wanted to avoid a SPoF (single point of failure).

In political science, the term “monolithic” is used to describe a system, organization, or entity perceived as being unified, indivisible, and operating with a high degree of internal uniformity, often with centralized control.  When something is labeled as monolithic, it implies that it lacks diversity or internal differentiation and presents a singular, rigid structure or ideology.  Tellingly, the most common use of the term is probably in the analysis of electoral behavior, demonstrating that groups, societies or sub-sets of either, although often depicted in the media as “monolithic” in their views, voting patterns or political behavior, are anything but and there’s usually some diversity.  In political science, such divergences within defined groups are known as “cross-cutting cleavages”.

It’s used also of political systems in which a regime is structured (or run) with power highly concentrated, typically in a single dictator or ruling party.  In such systems, usually there is little effective opposition and dissent is suppressed (although some of the more subtle informally tolerate a number of “approved dissenters” who operate within understood limits of self-censorship).  The old Soviet Union (the USSR (Union of Soviet Socialist Republics) 1922-1991), the Islamic Republic of Iran (1979-), the People’s Republic of China (run by the Chinese Communist Party (CCP)) (1949-) and the DPRK (Democratic People’s Republic of Korea (North Korea) 1948-) are classic examples of monolithic systems; while the differences between them were innumerable, structurally all were (or are) politically monolithic.  The word is used also as a critique in the social sciences, Time magazine in April 2014 writing of the treatment of “Africa” as a construct in Mean Girls (2004):  “Like the original Ulysses, Cady is recently returned from her own series of adventures in Africa, where her parents worked as research zoologists. It is this prior “region of supernatural wonder” that offers the basis for the mythological reading of the film. While the notion of the African continent as a place of magic is a dated, rather offensive trope, the film firmly establishes this impression among the students at North Shore High School. To them, Africa is a monolithic place about which they know almost nothing. In their first encounter, Karen inquires of Cady: “So, if you’re from Africa, why are you white?” Shortly thereafter, Regina warns Aaron that Cady plans to “do some kind of African voodoo” on a used Kleenex of his to make him like her—in fact, the very boon that Cady will come to bestow under the monomyth mode.”  It remains a sensitive issue and one of the consequences of European colonial practices on the African continent (something which included what would now be regarded as "crimes against humanity") so the casual use of "Africa" as a monolithic construct is proscribed in a way a similar use of "Europe" is not.

The term’s utility has its limits and it should be treated with caution: while there are “monolithic” aspects or features to constructs such as “the Third World”, “the West” or “the Global South”, the label does over-simplify the diversity of cultures, political systems, and ideologies within these broad categories.  Even something which is to some degree “structurally monolithic” like the United States (US) or the European Union (EU) can be highly diverse in terms of actual behavior.  In the West (and the modern-day US is the most discussed example), the recent trend towards polarization of views has become a popular topic of study and the coalesced factions are sometimes treated as “monolithic” despite in many cases being themselves intrinsically factionalized.

Monday, July 1, 2024

Discreet & Discrete

Discreet (pronounced dih-skreet)

(1) Judicious in conduct or speech, especially with regard to respecting privacy or maintaining silence about delicate matters; prudent; circumspect.

(2) Showing prudence and circumspection; decorous.

(3) Modestly unobtrusive; unostentatious.

1325–1375: From the Middle English discret, from the Anglo-French & Old French discret (prudent, discerning), from the Medieval Latin discrētus (separated), past participle of discernere (to discern), the construct being dis- + crē- (separate, distinguish (variant stem of cernere)) + -tus, the Latin past participle suffix.  The dis- prefix was from the Middle English dis-, from the Old French des-, from the Latin dis-, from the Proto-Italic dwis, from the primitive Indo-European dwís and cognate with the Ancient Greek δίς (dís) and the Sanskrit द्विस् (dvis).  It was applied variously as an intensifier of words with negative valence and to render the senses “incorrect”, “to fail (to)”, “not” & “against”.  In Modern English, the rules applying to the dis- prefix vary and when attached to a verbal root, prefixes often change the first vowel (whether initial or preceded by a consonant/consonant cluster) of that verb.  These phonological changes took place in Latin and usually do not apply to words created (as in Modern Latin) from Latin components since the language was classified as “dead”.  The combination of prefix and following vowel did not always yield the same change and these changes in vowels are not necessarily particular to being prefixed with dis- (ie other prefixes sometimes cause the same vowel change (con-; ex-)).  The Latin suffix -tus was from the Proto-Italic -tos, from the primitive Indo-European -tós (the suffix creating verbal adjectives) and may be compared to the Proto-Slavic -tъ and the Proto-Germanic -daz & -taz.  It was used to form the past participle of verbs and adjectives having the sense "provided with".  Latin scholars caution the correct use of the -tus suffix is technically demanding with a myriad of rules to be followed and, in use, even the pronunciation used in Ecclesiastical Latin could vary.  Discreet, discreeter, discreetest & discretionary are adjectives, discreetness & discretion are nouns and discreetly is an adverb; the noun plural is discretions.  Such is the human condition, the derived form "indiscretion" is in frequent use.

Discrete (pronounced dih-skreet)

(1) Apart or detached from others; separate; non-continuous; distinct; that which can be perceived individually and not as connected to, or part of something else.

(2) Consisting of or characterized by distinct or individual parts; discontinuous; that which can be perceived individually, not as connected to, or part of, something else.

(3) In mathematics, of a topology or topological space, having the property that every subset is an open set; defined only for an isolated set of points; using only arithmetic and algebra; not involving calculus.

(4) In mathematics, consisting of or permitting only distinct values drawn from a finite, countable set.

(5) In statistics (of a variable), having consecutive values which are not infinitesimally close, so that its analysis requires summation rather than integration.

(6) In electrical engineering, having separate electronic components (diodes, transistors, resistors etc) as opposed to integrated circuitry (IC).

(7) In audio engineering, having separate and independent channels of audio, as opposed to multiplexed stereo, quadraphonic (also as quadrasonic) or other multi-channel sound.

(8) In linguistics, disjunctive, containing a disjunctive or discretive clause.

(9) In angelology, the technical description of the hierarchies and orders of angels.

1350–1400: Middle English, from the Latin discrētus (separated; set apart), past participle of discernō (divide), the construct being dis- + cernō (sift); a doublet of discreet.  The Middle English adoption came via the Old French discret.  The common antonym is indiscrete (never hyphenated) but nondiscrete (also non-discrete), while synonymous in general use, is often used with specific meanings in mathematics & statistics.  Discrete is an adjective, discreteness is a noun and discretely is an adverb.

Strange words

An etymological tangle, it was the influence of the Middle French discret (prudent, discerning) which saw discreet evolve to mean “wise person” in Anglo-French.  The Latin source was discrētus (past participle of the verb discernere (to discern; to separate, distinguish, mark off, show differences between)) and in post-Classical Latin discrētus also acquired the sense “prudent, wise”, possibly arising from association with the noun discrētiō, which shows a similar semantic development: from physical separation, to discernment, to the capacity to discern, the notion being of a "discreet person" able to "pick" their way, setting "apart" the good from the bad (dis- being "apart" & cerno "pick").

Discrete (apart or detached from others; separate; distinct) was originally a spelling doublet of discreet, sharing meanings, both derived from the same Latin source.  The spelling discrete is closer in form to the Latin discrētus and was probably a deliberate attempt to differentiate "discreet" from "discrete" (a courtesy to users English doesn't always extend) and one has always been more prolific than the other, dictionaries for centuries tending to offer some five times the citations for “prudent, circumspect” compared with the sense “separate or distinct” although the history of the latter is long in statistics, angelology, astronomy, and mathematics.  It wasn’t until the late sixteenth century that discrete became restricted to the now familiar meanings, leaving the spelling discreet to predominate in its own use.  In a way not uncommon in English, pre-modern spellings proliferated: discreyt, discrite, discreit, discreete and others existed but, by the late sixteenth century, the standard meanings became discrete in the sense of “individual” and discreet in the sense of “tactful”.  Had the usual convention been followed it would have been the other way around because in English the Latin ending -etus usually becomes -ete.  Even into the mid-twentieth century, there were style & usage guides which recommended different pronunciations for discrete & discreet, the former accented on the first syllable (dĭs′-krēt) rather than the second (dĭs-krēt′), the rationale being it was both “natural in English accentuation” (the example of the adjectival use of “concrete” cited) and helpful in distinguishing the word from “discreet”.  The modern practice however is to use the same pronunciation for both, leaving the labor of differentiation to context.

Artistic angelology: The Assumption of the Virgin (1475-1476), by Francesco Botticini (1446–1498), tempera on wood panel, National Gallery, London.  Commissioned as the altarpiece for a Florentine church, it portrays Mary's assumption and shows the discrete three hierarchies and nine orders of angels.

The noun discretion means (1) the power or right to decide or act according to one's own judgment; freedom of judgment or choice and (2) the quality of being discreet, especially with reference to one's own actions or speech; prudence or decorum.  Discretion dates from 1250–1300 and was from the Middle English discrecioun, from the Anglo-French & Old French discrecion, from the Late Latin discrētiōnem (stem of discrētiō (separation)).  The special use in English law as the “age of discretion” began in the mid-fourteenth century as dyscrecyounne (ability to perceive and understand) meaning one was deemed to have attained “moral discernment, ability to distinguish right from wrong”.  It thus implied “prudence, sagacity regarding one's conduct”.  The meanings of the later forms came from the Medieval Latin (discernment, power to make distinctions), which evolved from the use in Classical Latin (separation, distinction).

The Age of Discretion

The familiar phrase “at one’s discretion” seems not to have been in use until the 1570s although “in one's discretion” was documented by the late fourteenth century.  The use in English common law meaning “power to decide or judge; the power of acting according to one's own judgment” was reflected in the legal principle of the “age of discretion”, part of the law since the late fourteenth century when the age was deemed to be fourteen years, the age William Shakespeare (1564–1616) chose for the star-cross'd lovers in Romeo and Juliet (1597).

Historically, the “age of discretion” referred to the age at which a child was considered to be capable of making certain decisions and understanding the consequences of their actions.  Typically, this was around seven years old, the point at which a child was deemed to have enough understanding to be responsible for certain actions, such as committing a crime or making religious decisions.  Gradually, the age crept up, especially as it applied to doli incapax (the age under which a child was presumed incapable of committing a crime) until it became established law that a child between seven and fourteen was presumed not to have criminal intent unless it could be proven otherwise, the evidential onus of proof resting wholly with the prosecutor (almost always the Crown (ie some agent of the state)).  The generalized idea of an “age of discretion” influenced later developments in law such as the age of criminal responsibility, the age at which one could enter into legally enforceable contracts, enjoy a testamentary capacity or (lawfully) have sex.  Between jurisdictions the relevant age for this and that does vary and changes are not always without controversy: under the Raj, when Lord Lansdowne (1845–1927; Viceroy of India 1888-1894) raised the age of sexual consent for girls from ten to twelve, the objections from men united the castes like few other issues.

Lindsay Lohan and her lawyer in court, Los Angeles, December 2011.

For their purposes, the Church preferred seven and habitually declared children of this age were capable of making their own decisions regarding religious practices, such as confession and communion, and the phrase “give me the child until the age of seven and I will give you the man” is attributed usually to the Spanish priest Saint Ignatius of Loyola (1491-1556) who founded the religious order of the Society of Jesus (the Jesuits).  It’s no longer thought wise to leave children alone with priests but the social media platforms well understood the importance of gaining young converts and for years did nothing to try to enforce their minimum age requirements for account creation.  The consequences of this have of late become understood and the debate about the wisdom of “giving children access to the internet” is now being framed as the more ominous “giving the internet access to children”.

Discreet Allure: “Discreet” is here used in the sense of “modestly unobtrusive; unostentatious” and was in reference to the displayed clothing lines which were designed to be acceptable (halal (حلال)) under the Sharia (شَرِيعَة).  Lindsay Lohan at London Modern Fashion Week, February 2018.

Friday, October 27, 2023

Eminence

Eminence (pronounced em-uh-nuhns)

(1) A position of superiority; high station, rank, or repute.

(2) The quality or state of being eminent; prominence in a particular order or accumulation; esteem.

(3) In topography, a high place or part; a hill or elevation; height.

(4) As a color, a dark or deep shade of purple.

(5) In anatomy, a protuberance.

(6) In the hierarchy of the Roman Catholic Church, a title used to address or refer to a cardinal (in the form “eminence”, “your eminence”, “his eminence” or “their eminences”).

(7) As “gray eminence” (the usual spelling of éminence grise), a “power behind the throne”.

1375–1425: From the late Middle English eminence (projection, protuberance (and by the early fifteenth century a “high or exalted position”)), from the Anglo-French, from the Old French eminence, from the Latin ēminēntia (prominence, protuberance; eminence, excellence; a standing out, a distinctive feature, most conspicuous part), the construct being equivalent to ēmin- (base of ēminēre (to stand out)) + -entia (-ence) (the noun suffix), from eminentem (nominative eminens) (standing out, projecting (and figuratively “prominent, distinctive”)), from an assimilated form of the construct ex- (out) + -minere (related to mons (hill)), from the primitive Indo-European root men- (to project).  The adjective eminent dates from the early fifteenth century and was used in the sense of “standing or rising above other places; exceeding other things in quality or degree” and was from the thirteenth century Old French éminent (prominent) or directly from the Latin eminentem.  From the 1610s, it came to be used of those “distinguished in character or attainments”.  The noun pre-eminence (also as preeminence) was known as early as the twelfth century and then meant “surpassing eminence; superiority, distinction; precedence, a place of rank or distinction”.  It was from the Late Latin praeeminentia (distinction, superiority), from the Classical Latin praeeminentem (nominative praeeminens), the present participle of praeeminere (transcend, excel (literally “project forward, rise above”)), the construct being prae (before) + eminere (stand out, project).  The alternative form eminency is listed usually as archaic or obsolete.  Synonyms include conspicuousness, distinction, prominence, renown, celebrity, note & fame in the context of status, and elevation or prominence when applied to topography.  Eminence & eminency are nouns, eminently is an adverb and eminent is an adjective (and a non-standard noun); the noun plural is eminences or eminencies.

The use in anatomy is to describe certain protuberances including (1) the hypothenar eminence (plural hypothenar eminences) (the ulnar side of the human hand; the edge of the hand between the pinky and the outer side of the wrist), (2) the ileocecal eminence (plural ileocecal eminences) (the ileocecal valve), (3) the median eminence (plural median eminences) (part of the inferior boundary of the hypothalamus in the human brain) and (4) the frontal eminence (plural frontal eminences) (either of two rounded elevations on the frontal bone of the skull (known also as the “tuber frontale”)).

Extract from xona.com's color list.

As a name for a deep or dark shade of purple, eminence has been in regular use since the nineteenth century and there have always been variations in the shades so described; on the color charts of different manufacturers, this continues.  In digital use however, eminence as a shade of purple has been (more or less) standardized since 2001 when xona.com promulgated their influential color list.  Although “eminence” is the form of address for a cardinal in the Roman Catholic Church, presumably this has no relationship with the color eminence because cardinals wear red and it’s the monsignors who don a purple which does look like the shade typically described as eminence.  As far as is known, the name “monsignor” has never been applied to any shade.  Monsignor is one of the honorary titles popes for centuries granted to priests within their Papal Court and there were many degrees of these, conferred usually on priests who worked closely with the Holy Father in Rome.  Over time, the use of monsignor was expanded and could be granted to priests beyond Rome on the recommendation of a bishop.  Recently, Pope Francis (b 1936; pope since 2013) has restricted this, returning to the older ways, and this will have pleased some bishops, not all of whom were anxious to see too much purple in their diocese.  The monsignor’s purple (which most would probably call a magenta) was connected to the tradition in the Roman empire of vesting new dignitaries with a purple toga and in medieval heraldry the color symbolized justice, regal majesty and sovereignty although not too much should be made of this in the context of the Vatican’s choices in ecclesiastical fashion: originally, it was never envisaged monsignors would wander far from the Holy See.

Pope Francis passes the coffin (casket) at the funeral of Cardinal George Pell (1941-2023), St Peter’s Basilica, the Vatican, January 2023.  Within the Roman Curia (a place of Masonic-like plotting & intrigue and much low skulduggery), Cardinal Pell's nickname was “Pell Pot”, an allusion to Pol Pot (1925–1998, dictator of communist Cambodia 1976-1979) who announced the start of his regime was “Year Zero” and all existing culture and tradition must completely be destroyed and replaced.  

Until the sixteenth century bishops wore green and this use persists on the traditional coat of arms each bishop chooses when elected.  In the 1500s, the switch was made to “amaranth red”, named after the amaranth flower although, despite the name, the actual hue is more like fuchsia but, being similar to a purple, church historians maintain there’s some symbolic value in the link with the bishop being charged to govern his local diocese.  Technically, the Holy See describes the color worn by cardinals as “scarlet” and their eminences are described as “princes” of the church although part of the mystique of the place is that the red symbolizes the blood they’re all supposed to be prepared to spill to defend the pope.  When the Pope places the biretta (the hat with 3 or 4 stiffened corners worn as part of liturgical dress) on top of the cardinal’s head, he says: “(This is) scarlet as a sign of the dignity of the cardinalate, signifying your readiness to act with courage, even to the shedding of your blood, for the increase of the Christian faith, for the peace and tranquility of the people of God and for the freedom and growth of Holy Roman Church.”  As a title of honor within the church, eminence was in use as early as the 1650s although apparently since the 1720s, the honorific has been exclusive to cardinals.

Cardinal Richelieu (1636), oil on canvas by Philippe de Champaigne (1602–1674) (left) and engraving of François Leclerc du Tremblay (circa 1630) by an unknown artist (right).

The term gray eminence was from the French éminence grise (plural éminences grises) (literally “grey eminence”, and the French spelling is sometimes used in the English-speaking world).  It was applied originally to François Leclerc du Tremblay (1577–1638), also known as Père Joseph, a French Capuchin friar who was the confidant and agent of Cardinal Richelieu (1585–1642), the chief minister of France under Louis XIII (1601–1643; King of France 1610-1643).  The term refers to du Tremblay’s influence over the Cardinal (who bore the honorific of Eminence) and the color of his habit (he wore gray).  Aldous Huxley (1894–1963) sub-titled his biography of Leclerc (L'Éminence Grise (1941)): A Study in Religion and Politics.  Huxley discussed the nature of both religion & politics, his purpose being to explore the relationship between the two, and his work was a kind of warning to those of faith who are led astray by proximity to power.

Use of the term éminence grise suggests a shadowy, backroom operator who avoids publicity, operating in secret if possible yet exercising great influence over decisions, even to the point of being “the power behind the throne”.  Where a gray eminence differs from a king-maker or a svengali is that those designations are applied typically to those who operate in the public view, even flaunting their power and authority.  Probably the closest synonym of the gray eminence is the “puppetmaster” because of the implication of remaining hidden; although the puppetmaster is never seen, the strings they pull are visible if one looks closely enough.  The svengali was named for the hypnotist character Svengali in George du Maurier’s (1834–1896) novel Trilby (1894).  Svengali seduced, dominated and manipulated Trilby, a young, half-Irish girl, transforming her into a great singer but in doing so he made her utterly dependent on him and this ruthlessly he exploited.

The brown eminence

Adolf Hitler (1889-1945; German head of government 1933-1945 & head of state 1934-1945) followed by his "brown eminence", Martin Bormann (1900–1945).

Bormann attached himself to the Nazi Party in the 1920s and proved diligent and industrious, rewarded in 1933 by being appointed chief of staff in the office of Rudolf Hess (1894–1987; Nazi Deputy Führer 1933-1941) where he first built his power base.  After Hess bizarrely flew to Scotland in 1941, Hitler abolished the post of Deputy Führer, assigning his offices to Bormann and styling him Head of the Parteikanzlei (Party Chancellery), a position of extraordinary influence, strengthened further when in 1943 he was appointed Personal Secretary to the Führer, a title he exploited to allow him to act as a kind of viceroy, exercising power in Hitler’s name.  Known within the party as der braune Schatten (the brown shadow) which was translated usually as “Brown Eminence” (an allusion to an éminence grise), he maintained his authority by controlling access to Hitler, to whom his efficiency and dutifulness proved invaluable.  The "brown" refers to the Nazis' brown uniforms, a color adopted not by choice but because when the cash-strapped party in the 1920s needed uniforms for its Sturmabteilung (the SA, literally "Storm Division" (or Storm Troopers) and known as the "brownshirts"), what was available cheaply and in bulk was the stock of brown army clothing intended for use in the tropical territories the Germans would have occupied had they won World War I (1914-1918).  Bormann committed suicide while trying to escape from Berlin in 1945 although this wasn't confirmed until 1973.

Lindsay Lohan's inner eminence on film.


Lindsay Lohan (2011) by Richard Phillips & Taylor Steele.

Screened in conjunction with the 54th international exhibition of the Venice Biennale (June 2011), Lindsay Lohan was a short film the director said represented a “new kind of portraiture.”  Filmed in Malibu, California, the piece was included in the Commercial Break series, presented by Venice’s Garage Center for Contemporary Culture and although the promotional notes indicated it would include footage of the ankle monitor she helped make famous, the device doesn't appear in the final cut.

At the festival, co-director Richard Phillips (b 1962) was interviewed by V Magazine and explained: “Lindsay has an incredible emotional and physical presence on screen.  [She] holds an existential vulnerability, while harnessing the power of the transcendental — the moment in transition. She is able to connect with us past all of our memory and projection, expressing our own inner eminence.”

Directed by: Richard Phillips & Taylor Steele
Director of Photography: Todd Heater
Creative Director: Dominic Sidhu
Art Director: Kyra Griffin
Editor: Haines Hall
Color mastering: Pascal Dangin for Boxmotion
Music: Tamaryn & Rex John Shelverton
Costume Designer: Ellen Mirojnick