
Saturday, December 21, 2024

Continuity

Continuity (pronounced kon-tn-oo-i-tee or kon-tn-yoo-i-tee)

(1) The state or quality of being continuous; logical sequence, cohesion or connection; lack of interruption.

(2) A continuous or connected whole.

(3) In political science, as “continuity theory”, an approach to twentieth century German historiography which focuses on structural and sociological continuities between eras (including pre-twentieth century influences and traditions).

(4) In narratology, a narrative device in episodic fiction where previous and/or future events in a series of stories are accounted for in present stories.

(5) As bicontinuity (the state of being bicontinuous), (1) in topology: homeomorphic (a continuous bijection from one topological space to another, with continuous inverse) and (2) in physics & chemistry (of a liquid mixture), being a continuous phase composed of two immiscible liquids interacting through rapidly changing hydrogen bonds.

(6) In film production, as “continuity girl” (the now archaic title, since replaced by “continuity supervisor” or “script supervisor”), the person responsible for ensuring the details in each scene conform to the continuity of the narrative.

(7) In film production, the scenario (in industry jargon a synonym of “continuity”): the script, scenes, camera angles, details of verisimilitude etc, in the sequence in which they should appear in the final cut.

(8) In fiction (especially in television series but also in film and literature), as “continuity nod”, a reference to part of the plot of a previous series, volume, episode etc.

(9) In audio & visual production (radio, podcasts, television, internet et al), the spoken part of a script that provides introductory, transitional or concluding material in non-dramatic (documentaries and such) programmes (some production houses include in their staff establishment the position “continuity announcer”).

(10) In film projection, the continuous projection of a film, using automatic rewind.

(11) In mathematics, a characteristic property of a continuous function.

(12) In mathematics, as semicontinuity (of a function), the state of being semicontinuous (that it is continuous almost everywhere, except at certain points at which it is either upper semi-continuous or lower semi-continuous).

(13) In mathematics, as equicontinuity, (of a family of functions), the state of being equicontinuous (such that all members are continuous, with equal variation in a given neighborhood).  The Lipschitz continuity was named after German mathematician Rudolf Lipschitz (1832–1903); the Scott continuity was named after US logician Dana Scott (b 1932).

(14) In mathematics, as hemicontinuity, the state of being hemicontinuous (having the property that if a sequence of points in the domain of a function converges to a point L, then either the sequence of sets that are the images of those points contains a sequence that converges to a point that is in the image of L, or, alternatively, for every element in the image of L, there will be a sub-sequence in the domain whose image contains a sequence convergent to that element).

(15) In marketing, in the plural, as “continuities”, sets of merchandise, given away for free or sold cheaply as a promotional tool (the idea being the continuity of the customers returning).
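For the mathematical senses (11)–(14) above, the standard formal statements may be helpful; the following is a conventional sketch (the notation is standard textbook usage, not drawn from any particular source):

```latex
% Continuity of f at a point a (the epsilon-delta definition):
% for every epsilon > 0 there exists delta > 0 such that
% points within delta of a map to within epsilon of f(a).
\forall \varepsilon > 0 \;\exists \delta > 0 :
  |x - a| < \delta \implies |f(x) - f(a)| < \varepsilon

% Lipschitz continuity (the stronger condition named after Rudolf Lipschitz):
% a single constant K bounds how rapidly f can vary anywhere in its domain.
\exists K \ge 0 :
  |f(x) - f(y)| \le K\,|x - y| \quad \text{for all } x, y
```

Every Lipschitz-continuous function is continuous, but not conversely; the square-root function on [0, 1], for instance, is continuous yet admits no such constant K near zero.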

1375–1425: From the late Middle English continuite (uninterrupted connection of parts in space or time), from the Old & Middle French continuité, from the Latin continuitatem (nominative continuitās) (a connected series), the construct being continu(us) (continuous) + -itās (equivalent to the English continu(e) + -ity), from continuus (joining, connecting with something; following one after another), from the intransitive verb continere (to be uninterrupted (literally “to hang together”)).  The –ity suffix was from the French -ité, from the Middle French -ité, from the Old French –ete & -eteit (-ity), from the Latin -itātem, from -itās, from the primitive Indo-European suffix –it.  It was cognate with the Gothic –iþa (-th), the Old High German -ida (-th) and the Old English -þo & -þu (-th).  It was used to form nouns from adjectives (especially abstract nouns), thus most often associated with nouns referring to the state, property, or quality of conforming to the adjective's description.  Continuity is a noun, continuance & continuousness are nouns, continue is a verb, continuous & continual are adjectives and continually is an adverb; the noun plural is continuities.

The adjective continuous (characterized by continuity, not affected by disconnection or interruption) dates from the 1640s and was from either the French continueus or directly from the Latin continuus.  The verb continue was in use by at least the mid-fourteenth century in the form contynuen (maintain, sustain, preserve), which by the late 1300s had assumed the meaning “go forward or onward; persevere in”.  It was from the thirteenth century Old French continuer and directly from the Latin continuare (join together in uninterrupted succession, make or be continuous, do successively one after another), from continuus.  The sense of “to carry on from the point of suspension” emerged early in the fifteenth century while the meaning “to remain in a state, place, or office” dates from the early 1400s; the transitive sense of “to extend from one point to another” was first documented in the 1660s.  The word entered the legal lexicon with the meaning “to postpone a hearing or trial” in the mid-fifteenth century.

The noun continuation (act or fact of continuing or prolonging; extension in time or space) dates from the late 1300s, from the thirteenth century Old French continuation and directly from the Latin continuationem (nominative continuatio) (a following of one thing after another), a noun of action from the past-participle stem of continuare.  The adjective continual was from the early fourteenth century continuell (proceeding without interruption or cessation; often repeated, very frequent), from the twelfth century Old French continuel and directly from the Latin continuus.  The noun continuance (perseverance, a keeping up, a going on) dates from the mid-fourteenth century, from the thirteenth century Old French continuance, from continuer.  Continuance seems to have been the first of the family to appear in the terminology of legal proceedings, used since the late fourteenth century in the sense of “a holding on or remaining in a particular state”; in courts, by the early fifteenth century this had extended to “the deferring of a trial or hearing to a future date” and in some jurisdictions lawyers to this day still file an “application for continuation”.  The now widely used discontinuation (of legal proceedings; of a product range etc) has existed since at least the 1610s in the sense of “interruption of continuity, separation of parts which form a connected series” and was from the fourteenth century French discontinuation, from the Medieval Latin discontinuationem (nominative discontinuatio), a noun of action from the past-participle stem of discontinuare.

Page 1 of IMDb's (Internet Movie Database) listing of discontinuities in Mean Girls (2004).

A discontinuity: In Mean Girls, a donut (doughnut) appeared with a large bite taken from it while a few seconds later it had endured just a nibble.

In film production, the job title “continuity girl” seems to have been retired in favor of “continuity supervisor” or “script supervisor”, one of the terms culled in the process of gender neutrality which also claimed most of the “best boys” (they’re now styled with titles such as “assistant chief lighting technician” or “second lighting technician”).  Whether myth or not, the industry legend is the “best boy” job title really did begin with the request “give me your best boy”, although that wasn’t something as ominous as it may now sound.  The first known reference to a continuity girl in a film’s credits was in the US in 1918 and the job involved ensuring the “continuity” (in the industry, “scenario” is synonymous) of the final cut appeared as a seamless narrative.  The job was required because although a single scene in a film might appear to be a contiguous few minutes, the parts assembled in the editing process to produce it may be made up of takes shot days, weeks or even months apart, and possibly in different places.  Among a myriad of tasks, what a continuity girl had to do was maintain a database with the details of each piece of film (vital for the editing process) and ensure the details of each shot (clothes, haircuts, props (including their exact placement) and environment (climate, time of day etc)) were in accordance with the previous footage.  The detail can be as simple as the time displayed on a wall clock and it matters because there’s a minor industry of film buffs who go through things frame-by-frame looking for discontinuities, all of which they’ll gleefully catalogue on various internet sites.

Three covers used for Leah McLaren’s The Continuity Girl (2007, left); not all Chick lit titles used vibrant or pastel shades in the cover art.  The Continuity Girl (2018, right) by Dr Patrick Kincaid (b 1966) is an unrelated title.

The Continuity Girl (2007) was the debut novel of Canadian journalist Leah McLaren (b 1975), the protagonist being a continuity girl named Meredith Moore.  A classic piece of Chick lit (the construct being chick (slang for “a young woman”) + lit(erature), a now unfashionable term describing novels focused on women and their feelings), the plotline involves Ms Moore’s biological clock tick-tocking to the psychological moment on her 35th birthday: she wakes up feeling a sudden acute yearning for a baby.  In a Chick lit sort of way, her solution is to leave her predictably pleasant Canadian life and head for London where she plans to select a man on the basis of her assessment of his genetic suitability for breeding, seduce him and, in the way these things happen, fall pregnant.  Things of course don’t work out quite that effortlessly but, being Chick lit, there’s much self-realization, self-discovery and self-expression on the path to a happy ending.

In political science, “continuity theory” is an approach (in two aspects) to twentieth century German historiography which focuses on structural and sociological continuities between eras (including pre-twentieth century influences and traditions).  The first aspect was the notion that there existed “continuity” in the persistent influence of long-term social, political, cultural, and institutional developments in German history which, dating at least from the time of Martin Luther (1483–1546), contributed to the particular nature of Imperial Germany (1871-1918), the failure of the Weimar Republic (1918-1933) and the Führerprinzip (leader principle) which, structurally, was the distinguishing feature of the Third Reich (1933-1945).  This idea has underpinned a number of major historical studies but has always been contested by another faction (which has at times included a significant proportion of the German population) which argues that Nazism was uniquely radical and an aberration in the nation’s history.  Most controversially, some proponents of continuity theory extend the application to the post-war years, examining how former Nazis, neo-Nazis and their ideologies persisted (and at times have flourished) both in the FRG (Federal Republic of Germany; the old West Germany (1949-1990)) and the unified state formed in 1990 after the FRG absorbed the GDR (German Democratic Republic; the old East Germany).

Adolf Hitler (left) looking at Ernst Röhm (right), Nürnberg, 3 September 1933.  Some nine months later, Hitler would order Röhm's discontinuation (murder).  Photograph from the Bundesarchiv (Federal Archives), Bild (picture) 146-1982-159-22A.

The theory’s other aspect was structural and was essentially an analysis of the extent to which the Nazi state operated under the constitutional and administrative arrangements inherited from the Weimar Republic, the state which Adolf Hitler (1889-1945; Führer (leader) and German head of government 1933-1945 & head of state 1934-1945) claimed “his” National Socialist revolution had overthrown.  The indisputable fact that the Nazi dictatorship was fundamentally different from Weimar at the time obscured the continuity, but maintaining the Weimar constitution was hardly unique.  Hitler chose also to adapt the existing mixed economic model, something which upset some of the more idealistic souls in his movement who had taken seriously the “socialist” bit in “National Socialism” and led to the infamous Nacht der langen Messer (Night of the Long Knives), also called Unternehmen Kolibri (Operation Hummingbird), a purge executed between 30 June and 2 July 1934, when the regime carried out a number of extrajudicial executions, ostensibly to crush what was referred to as “the Röhm Putsch” (Ernst Röhm (1887–1934), chief of the four-million-strong Sturmabteilung (the stormtroopers, the SA), had certainly in the past hinted at one but there’s no doubt no such thing was imminent).

The USGS’s (US Geological Survey (1879)) depiction of the Mohorovičić discontinuity (the Moho).

The Mohorovičić discontinuity (which geologists tend to call “the Moho”) is the boundary between the Earth's crust and mantle, the extent defined by the distinct change in velocity of seismic waves as they pass through changing densities of rock.  The phenomenon is named after Croatian geophysicist Andrija Mohorovičić (1857–1936; one of the seminal figures in modern seismology), who first published his findings (based on seismographic observations of shallow-focus earthquakes) in 1909.

Wednesday, December 7, 2022

Caliginous

Caliginous (pronounced kuh-lij-uh-nuhs)

Misty; dim; dark; gloomy, murky (archaic).

1540-1550: From the Middle English caliginous (dim, obscure, dark), from either the Middle French caligineux (misty; obscure) or directly from its Latin etymon cālīginōsus (misty; dark, obscure), from caliginem (nominative caligo) (mistiness, darkness, fog, gloom), of uncertain origin.  The construct of cālīginōsus was cālīgin- (stem of cālīgō or cālīginis (mist; darkness)) + -ōsus (whence the English -ous, the suffix meaning “full of, prone to”, used to form adjectives from nouns).  The origin of caliginem has attracted speculation, one etymologist pondering links with the Greek kēlas (mottled; windy (of clouds)) & kēlis (stain, spot), the Sanskrit kala- (black) or the Latin calidus (with a white mark on the forehead).  Caliginous is an adjective, caliginousness is a noun and caliginously is an adverb.

Procession in the Fog (1828) by Ernst Ferdinand Oehme (1797-1855), oil on canvas, Galerie Neue Meister, Staatliche Kunstsammlungen, Dresden, Germany.

Lindsay Lohan in Among the Shadows (2019).  In film, using a dark and murky environment can help create an ambiance of gloom and doom, something helpful for several genres, most obviously horror.  Directed by Tiago Mesquita with a screenplay by Mark Morgan, Among the Shadows is a thriller which straddles the genres, elements of horror and the supernatural spliced in as required.  Although in production since 2015, with the shooting in London and Rome not completed until the next year, it wasn’t until 2018 that, at the European Film Market (held in conjunction with the Berlin International Film Festival), Tombstone Distribution listed it, the distribution rights acquired by VMI Worldwide, Momentum and Entertainment One.  In 2019, it was released progressively on DVD and video on demand (VOD), firstly in European markets, the UK release delayed until mid-2020.  In some markets, for reasons unknown, it was released with the title The Shadow Within.

Not highly regarded as an example of the film-maker’s art, Among the Shadows is of some interest to students of the technique of editing and continuity.  As spliced in as some of the elements may have been, just as obviously interpolated was much of the footage involving Ms Lohan and while the editing has been done quite well, there are limitations to the extent to which this can disguise discontinuities.  In this case the caliginous atmospherics probably did help the editing process, the foggy dimness providing its own ongoing visual continuity.

Daytime in London during the Great Smog of 1952.

Ghastly things had been seen in the London air before the Great Smog of 1952.  In the high summer of 1858, there had been the Great Stink, caused by an extended spell of untypically hot and windless weather, conditions which exacerbated the awfulness of the smell of the untreated human waste and industrial effluent flowing in the Thames, great globs of the stuff accumulating on the banks, the consequence of a sewerage system which had been outpaced by population growth, the muck still discharged, untreated, straight into the waterway.

The weather played a part too in the caliginous shroud which for almost a week engulfed the capital early in December 1952.  That year, mid-winter proved unusually cold and windless, resulting in an anti-cyclonic system (which usually would have passed over the British Isles) remaining static, trapping airborne pollutants and forming a thick layer of smog over the city.  The conditions lasted for several days and cleared only when the winter winds returned.  What made things especially bad was that in the early post-war years, most of the UK’s high quality coal was exported to gain foreign exchange.  Despite having been on the winning side in World War II, the cost of the struggle had essentially bankrupted the country and the mantra to industry quickly became “export or die”; thus the coal allocated for domestic consumption was “dirty” and of poor quality.  The official reports at the time indicated a death-toll of some 4000 directly attributed to the Great Smog (respiratory conditions, car accidents, trips & falls etc) with another 10,000-odd suffering illness of some severity.  However, more recent statistical analysis, using the same methods of determining “surplus deaths” as were applied to the COVID-19 numbers, suggested there may have been as many as 12,000 fatalities.  It was the public disquiet over the Great Smog of 1952 which ultimately would trigger the Clean Air Act (1956), which, although not the UK’s first environmental legislation, did until the 1980s prove the most far-reaching.

Friday, July 5, 2024

Interregnum

Interregnum (pronounced inn-ter-reg-numb)

(1) (a) An interval of time between the close of a sovereign's reign and the accession of his or her normal or legitimate successor.  (b) A period when normal government is suspended, especially between successive reigns or regimes.  (c) Any period during which a state has no ruler or only a temporary executive.

(2) The period in English history from the execution of Charles I in 1649 to the Restoration of Charles II in 1660.

(3) An interval in the Church of England dioceses between the periods of office of two bishops.

(4) In casual use, any pause or interruption in continuity.

1570-1580: From the Latin interregnum (an interval between two reigns (literally “between-reign”)), the construct being inter (between; amid) + rēgnum (kingship, dominion, reign, rule, realm), related to regere (to rule, to direct, keep straight, guide), from the primitive Indo-European root reg- (move in a straight line), with derivatives meaning “to direct in a straight line”, thus “to lead, rule”.  To illustrate that linguistic pragmatism is nothing new, in the Roman republic the word was preserved to refer to a vacancy in the consulate.  The word is now generally applied to just about any situation where an organization is between leaders and this seems an accepted modern use.  The earlier English noun was interreign (1530s), from the fourteenth century French interrègne.  Interregnum & interregent are nouns and interregnal is an adjective; the noun plural is interregnums or interregna.

The classic interregnum.  One existed between 1204 and 1261 in the Byzantine Empire.  Following the Sack of Constantinople during the Fourth Crusade, the Byzantine Empire was dissolved, to be replaced by several Crusader states and several Byzantine successor states.  It was re-established by the Nicaean general Alexios Strategopoulos, who placed Michael VIII Palaiologos back on the throne of a united Byzantine Empire.

The retrospective interregnum.  The Interregnum (1649–1660) was a republican period in the three kingdoms of England, Ireland and Scotland.  Government was carried out by the Commonwealth and the Protectorate of Oliver Cromwell after the execution of Charles I and before the restoration of Charles II; it became an interregnum only because of the restoration.  Were, for example, a Romanov again to be crowned as Tsar, the period between 1917 and the restoration would become the second Russian interregnum, the first being the brief but messy business of 1825, induced by a disputed succession following the death of the Emperor Alexander I on 1 December.  The squabble lasted less than a month but in those few weeks was conducted the bloody Decembrist revolt, which ended when Grand Duke Konstantin Pavlovich renounced his claim to the throne and Nicholas I declared himself Tsar.

The constitutional interregnum.  In the UK, under normal conditions, there is no interregnum; upon the death of one sovereign, the crown is automatically assumed by the next in the line of succession: the King is dead, long live the King.  The famous phrase signifies the continuity of sovereignty, attached to a personal form of power named auctoritas.  Auctoritas is from the Old French autorité & auctorité (authority, prestige, right, permission, dignity, gravity; the Scriptures), from the Latin auctoritatem (nominative auctoritas) (invention, advice, opinion, influence, command), from auctor (master, leader, author).  From the fourteenth century, it conveyed the sense of “legal validity” or “authoritative doctrine”, as opposed to reason or experience, and conferred a “right to rule or command, power to enforce obedience, power or right to command or act”.  It’s a thing which underpins the legal theory of the mechanics of the seamless transition in the UK of one sovereign to the next, coronations merely ceremonial and proclamations procedural.  Other countries are different.  When a King of Thailand dies, there isn’t a successor monarch until one is proclaimed, a regent being appointed to carry out the necessary constitutional (though not ceremonial) duties.  A number of monarchies adopt this approach, including Belgium and the Holy See.  The papal interregnum is known technically as sede vacante (literally “when the seat is vacant”) and ends upon the election of a new pope by the College of Cardinals.

The interregnum by analogy.  The term has been applied to the period of time between the election of a new President of the United States and his (or her!) inauguration, during which the outgoing president remains in power, but as a lame duck in the sense that, except in extraordinary circumstances, there is attention only to procedural and ceremonial matters.  So, while the US can sometimes appear to be in a state with some similarities to an interregnum between the election in November and the inauguration in January, the term is merely a casual one without a literal meaning.  The addition in 1967 of the twenty-fifth amendment (A25) to the US Constitution, which dealt with the mechanics of the line of succession in the event of a presidential vacancy, disability or inability to fulfil the duties of the office, removed any doubt and established there is never a point at which the country is without someone functioning as head of state & commander-in-chief.

Many turned, probably for the first time, to A25 after watching 2024’s first presidential debate between sleazy old Donald and senile old Joe.  Among historians, comparisons were made between some revealing clips of Ronald Reagan (1911-2004; US president 1981-1989) late in his second term and reports of the appearance and evident mental state of Franklin Delano Roosevelt (FDR, 1882–1945, US president 1933-1945) during the Yalta conference (February 1945).  In 1994, Reagan’s diagnosis of Alzheimer's disease was revealed and within two months of Yalta, FDR would be dead.  Regarding the matter of presidential incapacity or inability, the relevant sections of A25 are:

Section 3: Presidential Declaration of Inability: If the President submits a written declaration to the President pro tempore of the Senate and the Speaker of the House of Representatives that he is unable to discharge the powers and duties of his office, the Vice President becomes Acting President until the President submits another declaration stating that he is able to resume his duties.

Section 4: Vice Presidential and Cabinet Declaration of Presidential Inability: If the Vice President and a majority of the principal officers of the executive departments (or another body as Congress may by law provide) submit a written declaration to the President pro tempore of the Senate and the Speaker of the House of Representatives that the President is unable to discharge the powers and duties of his office, the Vice President immediately assumes the powers and duties of the office as Acting President.

If the President then submits a declaration that no inability exists, he resumes the powers and duties of his office unless the Vice President and a majority of the principal officers (or another body as Congress may by law provide) submit a second declaration within four days that the President is unable to discharge the powers and duties of his office. In this case, Congress must decide the issue, convening within 48 hours if not in session. If two-thirds of both Houses vote that the President is unable to discharge the powers and duties of his office, the Vice President continues as Acting President; otherwise, the President resumes his powers and duties.

Quite what the mechanism would be for a vice president and the requisite number of the cabinet to issue such a certificate is not codified.  Every president in the last century-odd has been attended by a doctor with the title “Physician to the President” (both John Kennedy (JFK, 1917–1963; US president 1961-1963) and Bill Clinton (b 1946; US president 1993-2001), uniquely, appointed women) and presumably they would be asked for an opinion although, even though FDR’s decline was apparent to all, nobody seems to have suggested Vice Admiral Ross McIntire (1889–1959) would have been likely to find the threshold incapacity in a president he’d known since 1917 and served as physician since 1933.  Vice presidents and troubled cabinet members may need to seek a second opinion.

Fashions change: The dour Charles I (left), the puritanical Oliver Cromwell (centre) and the merry Charles II (right).

The famous interregnum in England, Scotland, and Ireland began with the execution of Charles I (1600-1649) and ended with the restoration to the thrones of the three realms of his son Charles II (1630-1685) in 1660.  Immediately after the execution, a body known as the English Council of State (later re-named the Protector's Privy Council) was created by the Rump Parliament.  Because of the implication of auctoritas, the king's beheading was delayed half a day so the members of parliament could pass legislation declaring themselves the sole representatives of the people and the House of Commons the repository of all power.  Making it a capital offence to proclaim a new king, the laws abolished both the monarchy and the House of Lords.  For most of the interregnum, the British Isles were ruled by Oliver Cromwell (1599–1658) an English general and statesman who combined the roles of head of state and head of government of the republican commonwealth.

When Queen Elizabeth II (1926-2022; Queen of England and other places variously 1952-2022) took her last breath, Charles (b 1948) in that moment became King Charles III; the unbroken line summed up in the phrase "The King is dead.  Long Live the King".  In the British constitution there is no interregnum and a coronation (which may happen weeks, months or even years after the succession) is, in secular legal terms, purely ceremonial although there have been those who argued it remains substantive in relation to the monarch's role as supreme governor of the established Church of England, a view now regarded by most with some scepticism.  As a spectacle however it's of some interest (as the worldwide television ratings confirmed) and given the history, there was this time some interest in the wording used in reference to the queen consort.  However, constitutional experts confirmed that had any legal loose ends been detected or created at or after the moment of the succession they would have been "tidied up" at a meeting of the Accession Council, comprised of a number of worthies who assemble upon the death of a monarch and issue a formal proclamation of accession, usually in the presence of the successor who swears oaths relating to both church (England & Scotland) and state.  What receives the seal of the council is the ultimate repository of monarchical authority (on which the laws and mechanisms of the state ultimately depend) and dynastic legitimacy, rather than the coronation ceremony.

Some fashions did survive the interregnum: Charles II in his coronation regalia (left) and Lindsay Lohan (right) demonstrate why tights will never go out of style.

Tuesday, October 12, 2021

Gap

Gap (pronounced gap)

(1) A break or opening, as in a fence, wall, or military line; breach; an opening that implies a breach or defect (vacancy, deficit, absence, or lack).

(2) An empty space or interval; interruption in continuity; hiatus.

(3) A wide divergence or difference; disparity

(4) A difference or disparity in attitudes, perceptions, character, or development, or a lack of confidence or understanding, perceived as creating a problem.

(5) A deep, sloping ravine or cleft through a mountain ridge.

(6) In regional use (in most of the English-speaking world and especially prominent in the US), a mountain pass, gorge, ravine, valley or similar geographical feature (also in some places used of a sheltered area of coast between two cliffs and often applied in locality names).

(7) In aeronautics, the distance between one supporting surface of an airplane and another above or below it.

(8) In electronics, a break in a magnetic circuit that increases the inductance and saturation point of the circuit.

(9) In various field sports (baseball, cricket, the football codes etc), those spaces between players which afford some opportunity to the opposition.

(10) In genetics, an un-sequenced region in a sequence alignment.

(11) In slang (New Zealand), to depart suddenly.

(12) To make a gap, opening, or breach in.

(13) To come open or apart; form or show a gap.

1350–1400: From the Middle English gap & gappe (an opening in a wall or hedge; a break, a breach), from the Old Norse gap (gap, empty space, chasm), akin to the Old Norse gapa (to open the mouth wide; to gape; to scream), from the Proto-Germanic gapōną, from the primitive Indo-European root ghieh (to open wide; to yawn, gape, be wide open) and related to the Middle Dutch & Dutch gapen, the German gaffen (to gape, stare), the Danish gab (an expanse, space, gap; open mouth, opening), the Swedish gap & gapa and the Old English ġeap (open space, expanse).  Synonyms for gap can include pause, interstice, break, interlude, lull but probably not lacuna (which is associated specifically with holes).  Gap is a noun & verb, gapped & gapping are verbs, gapless & gappy are adjectives; the noun plural is gaps.

Lindsay Lohan demonstrates a startled gape, MTV Movie Awards, Gibson Amphitheatre, Universal City, California, June 2010.

The use to describe natural geographical formations (“a break or opening between mountains”, which later extended to “an unfilled space or interval, any hiatus or interruption”) emerged in the late fifteenth century and became prevalent in the US, used of deep breaks or passes in a long mountain chain (especially one through which a waterway flows) and often used in locality names.  The use as a transitive verb (to make gaps; to gap) evolved from the noun and became common in the early nineteenth century as such phrases became part of the jargon of mechanical engineering and metalworking (although in oral use the forms may long have existed).  The intransitive verb (to have gaps) is documented only since 1948.  The verb gape dates from the early thirteenth century and may be from the Old English ġeap (open space, expanse) but most etymologists seem to prefer a link with the Old Norse gapa (to open the mouth wide; to gape; to scream); it was long a favorite way of alluding to the expressions thought stereotypical of the “idle curiosity, listlessness, or ignorant wonder of bumpkins and other rustics” and is synonymous with “slack-jawed yokels”.  The adjective gappy (full of gaps; inclined to be susceptible to gaps opening) dates from 1846.  The adjectival use gap-toothed (having teeth set wide apart) has been in use since at least the 1570s, but earlier, Geoffrey Chaucer (circa 1344-1400) had used “gat-toothed” for the same purpose, gat from the Middle English noun gat (opening, passage) from the Old Norse gat and cognate with gate.

Lindsay Lohan demonstrates her admirable thigh gap, November 2013.

The “thigh gap” seems first to have been documented in 2012 but gained critical mass on the internet in 2014 when it became one of those short-lived social phenomena which produce a minor moral panic.  “Thigh gap” described the empty space between the inner thighs of a woman when standing upright with feet touching; a gap was said to be good and the lack of a gap bad.  Feminist criticism noted it was not an attribute enjoyed by a majority of mature human females and it thus constituted just another of the “beauty standards” imposed on women which were an unrealizable goal for the majority.  The pro-ana community ignored this critique and thinspiration (thinspo) bloggers quickly added annotated images and made the thigh gap an essential aspect of female physical attractiveness.

A walking, talking credibility gap: crooked Hillary Clinton (b 1947; US secretary of state 2009-2013).

In English, gap has been prolific in the creation of phrases & expressions.  The “generation gap” sounds modern and as a phrase it came into wide use only in the 1960s in reaction to the twin constructs of “teenagers” and the “counter-culture” but the concept has been documented since antiquity and refers to a disconnect between youth and those older, based on different standards of behavior, dress, artistic taste and social mores.  The term “technology gap” was created in the early 1960s and was from economics, describing the various implications of a nation’s economy gaining a competitive advantage over others by the creation or adoption of certain technologies.  However, the concept was familiar to militaries which had long sought to quantify and rectify any specific disadvantage in personnel, planning or materiel they might suffer compared to their adversaries; these instances are described in terms like “missile gap”, “air gap”, “bomber gap”, “megaton gap” et al (and when used of materiel the general term “technology deficit” is also used).  Rearmament is the usual approach but there can also be “stop gap” solutions which are temporary (often called “quick & dirty” (Q&D)) fixes which address an immediate crisis without curing the structural problem.  For a permanent (something often illusory in military matters) remedy for a deficiency, one is said to “bridge the gap”, “gap-fill” or “close the gap”.  The phrase “stop gap” in the sense of “that which fills a hiatus, an expedient in an emergency” appears to date from the 1680s and may have been first a military term referring to a need urgently to “plug a gap” in a defensive line, “gap” used by armies in this sense since the 1540s.  The use as an adjective dates from the same time in the sense of “filling a gap or pause”.  
A “credibility gap” is a discrepancy between what’s presented as reality and a perception of what reality actually is; it’s applied especially to the statements of those in authority (politicians like crooked Hillary Clinton being the classic but not the only examples).  “Pay gap” & “gender gap” are companion terms used most often in labor-market economics to describe the differences in aggregate or sectoral participation and income levels between a baseline group (usually white men) and others who appear disadvantaged.

“Gap theorists” (known also as “gap creationists”) are those who claim the account of the Earth and all who inhabit the place being created in six 24-hour days (as described in the Book of Genesis in the Bible’s Old Testament) literally is true but that there was a gap of time between the two distinct creations in the first and the second verses of Genesis.  What this allows is a rationalization of modern scientific observation and analysis of physical materials which have determined the age of the planet.  This hypothesis can also be used to illustrate the use of the phrase “credibility gap”.  In Australia, gap is often used to refer to the (increasingly large) shortfall between the amount health insurance funds will pay compared with what the health industry actually charges; the difference, paid by the consumer (doctors still insist on calling them patients), is the gap (also called the “gap fee”).  In Australia, the term “the gap” has become embedded in the political lexicon to refer to the disparity in outcomes between the indigenous and non-indigenous communities in fields such as life expectancy, education, health, employment, incarceration rates etc.  By convention, it can be used only to refer to the metrics which show institutional disadvantage but not other measures where the differences are also striking (smoking rates, crime rates, prevalence of domestic violence, drug & alcohol abuse etc) and it’s thus inherently political.  Programmes have been designed and implemented with the object of “closing the gap”; the results have been mixed.

Opinion remains divided on the use of platinum-tipped spark plugs in the Mercedes-Benz M100 (6.3 & 6.9) V8.

A “spark gap” is the space between two conducting electrodes, filled usually with air (or in specialized applications some other gas) and designed to allow an electric spark to pass between the two.  One of the best known spark gaps is that in the spark (or sparking) plug which provides the point of ignition for the fuel-air mixture in internal combustion engines (ICE).  Advances in technology mean fewer today are familiar with the intricacies of spark plugs, once a familiar (and often an unwelcome) sight to many.  The gap in a spark plug is the distance between the center and ground electrode (at the tip) and the size of the gap is crucial in the efficient operation of an ICE.  The gap size, although the differences would be imperceptible to most, is not arbitrary and is determined by the interplay of the specifications of the engine and the ignition system including (1) the compression ratio (low compression units often need a larger gap to ensure a larger spark is generated), (2) the ignition system, high-energy systems usually working better with a larger gap, (3) the materials used in the plug’s construction (the most critical variable being their heat tolerance); because copper, platinum, and iridium are used variously, different gaps are specified to reflect the variations in thermal conductivity and the temperature range able to be endured and (4) application, high performance engines or those used in competition involving sustained high-speed operation often using larger gaps to ensure a stronger and larger spark.

Kennedy, Khrushchev and the missile gap

The “missile gap” was one of the most discussed threads in the campaign run by the Democratic Party’s John Kennedy (JFK, 1917–1963; US president 1961-1963) in the 1960 US presidential election in which his opponent was the Republican Richard Nixon (1913-1994; US president 1969-1974).  The idea there was a “missile gap” was based on a combination of Soviet misinformation, a precautionary attitude by military analysts in which the statistical technique of extrapolation was applied on the basis of a “worst case scenario” and blatant empire building by the US military, notably the air force (USAF), anxious not to surrender to the navy their pre-eminence in the hierarchy of nuclear weapons delivery systems.  It’s true there was at the time a missile gap but it was massively in favor of the US which possessed several dozen inter-continental ballistic missiles (ICBM) while the USSR had either four or six, depending on the definition used.  President Dwight Eisenhower (1890-1969; US president 1953-1961), a five-star general well acquainted with the intrigues of the military top brass, was always sceptical about the claims and had arranged the spy flights which confirmed the real count but was constrained from making the information public because of the need to conceal his source of intelligence.  Kennedy may actually have known his claim was incorrect but, finding it resonated with the electorate, continued to include it in his campaigning, knowing the plausibility was enhanced in a country where people were still shocked by the USSR having in 1957 launched Sputnik I, the first ever earth-orbiting satellite.  Sputnik had appeared to expose a vast gap between the scientific capabilities of the two countries, especially in the matter of big missiles. 

President Kennedy & comrade Khrushchev at their unproductive summit meeting, Vienna, June 1961.

Fake gaps in such matters were actually nothing new.  Some years earlier, before there were ICBMs, so in any nuclear war the two sides would have had to use aircraft to drop bombs on each other (à la Hiroshima & Nagasaki in 1945), there’d been a political furore about the claim the US suffered a “bomber gap” and would thus be unable adequately to respond to any attack.  In truth, by a simple sleight of hand little different to that used by Nazi Germany in 1935 to convince worried British politicians that the Luftwaffe (the German air force) was already as strong as the Royal Air Force (RAF), Moscow had greatly inflated the numbers and stated capability of their strategic bombers, a perception concerned US politicians were anxious to believe.  The USAF would of course be the recipient of the funds needed to build the hundreds (the US would end up building thousands) of bombers needed to equip all those squadrons and their projections of Soviet strength were higher still.  If all of this building stuff to plug non-existent gaps had happened in isolation it would have been wasteful of money and natural resources, which was bad enough, but this hardware made up the building blocks of nuclear strategy; the Cold War was not an abstract exercise where on both sides technicians with clipboards walked from silo to silo counting warheads.

Instead, the variety of weapons, their different modes of delivery (from land, sea, undersea and air), their degrees of accuracy and their vulnerability to counter-measures were constantly calculated to assess their utility as (1) deterrents to an attack, (2) counter-offensive weapons to respond to an attack or (3) first-strike weapons with which to stage a pre-emptive or preventative attack.  In the Pentagon, the various high commands and the burgeoning world of the think tanks, this analysis was quite an industry and it also had to factor in the impossible: working out how the Kremlin would react.  In other words, what the planners needed to do was create a nuclear force which was strong enough to deter an attack yet not seem to be such a threat that it would encourage an attack, and that only scratched the surface of the possibilities; each review (and there were many) would produce detailed study documents several inches thick.

US Navy low-level spy photograph of San Cristobal medium-range ballistic missile (MRBM) site #1, Cuba, 23 October 1962.

In October 1962, during the Cuban Missile Crisis, the somewhat slimmer nuclear war manuals synthesized from those studies were being read with more interest than usual.  It was a tense situation and had Kennedy and comrade Nikita Khrushchev (1894–1971; Soviet leader 1953-1964) not agreed to a back-channel deal, the US would probably have attacked Cuba in some manner, not knowing three divisions of the Red Army were stationed there to protect the Soviet missiles, and that would have been a state of armed conflict which could have turned into some sort of war.  As it was, under the deal, Khrushchev withdrew the missiles from Cuba in exchange for Kennedy’s commitment not to invade Cuba and a promise to withdraw 15 obsolescent nuclear missiles from Turkey, the stipulation being the Turkish component must be kept secret.  That secrecy colored for years the understanding of the Cuban Missile Crisis and the role the US nuclear arsenal played in influencing the Kremlin.  The story was that the US stayed resolute, rattled the nuclear sabre and that was enough to force the Soviet withdrawal.  One not told the truth was Lyndon Johnson (LBJ, 1908–1973; US president 1963-1969) who became president after Kennedy was assassinated in 1963 and historians have attributed his attitude to negotiation during the Vietnam War to not wishing to be unfavorably compared to his predecessor who, as Dean Rusk (1909–1994; US secretary of state 1961-1969) put it, stood “eyeball to eyeball” with Khrushchev and “made him blink first”.  The existence of all those doomsday missiles would distort Soviet and US foreign policy for years to come.

Wednesday, November 8, 2023

Rimbellisher

Rimbellisher (pronounced rim-bell-ish-er)

A decorative ring attached to the rim of a car's wheel.

1940s: A portmanteau word, the construct being rim + (em)bellish + -er and originally a trademarked brand of the Ace company.  Rim was from the tenth century Middle English rim, rym & rime, from the Old English rima (rim, edge, border, bank, coast), from the Proto-Germanic rimô & rembô (edge, border), from the primitive Indo-European rem- & remə- (to rest, support, be based).   It was cognate with the Saterland Frisian Rim (plank, wooden cross, trellis), the Old Saxon rimi (edge; border; trim) and the Old Norse rimi (raised strip of land, ridge).  Rim generally means “an edge around something, especially when circular” and is used in fields as different as engineering and vulcanology.  The use in political geography is an extension of the idea, something like “PacRim” (Pac(ific) + rim) used to group the nations with coastlines along the Pacific Ocean.  The use in print journalism referred to “a semicircular copydesk”.   The special use in metallurgy described the outer layer of metal in an ingot where the composition was different from that of the centre.  The word rim is an especially frustrating one for golfers to hear because it describes the ball rolling around the rim of the hole but declining to go in.

Embellish dates from the early fourteenth century and was from the Middle English embelisshen, from the Anglo-French, from the Middle French embeliss- (stem of embelir), the construct being em- (the form taken by en- before the labial consonants “b” & “p”, as it assimilates place of articulation) + bel-, from the Latin bellus (pretty), + -ish.  The en- prefix was from the Middle English en- & in-.  In the Old French it existed as en- & an-, from the Latin in- (in, into); it was also from an alteration of in-, from the Middle English in-, from the Old English in- (in, into), from the Proto-Germanic in (in).  Both the Latin and Germanic forms were from the primitive Indo-European en (in, into) and the frequency of use in the Old French is because of the confluence with the Frankish an- intensive prefix, related to the Old English on-.  The –ish suffix was from the Middle English –ish & -isch, from the Old English –isċ, from the Proto-West Germanic -isk, from the Proto-Germanic –iskaz, from the primitive Indo-European -iskos.  It was cognate with the Dutch -s; the German -isch (from which Dutch gained -isch), the Norwegian, Danish, and Swedish -isk & -sk, the Lithuanian –iškas, the Russian -ский (-skij) and the Ancient Greek diminutive suffix -ίσκος (-ískos); a doublet of -esque and -ski.  There exists a welter of synonyms and companion phrases such as decorate, grace, prettify, bedeck, dress up, exaggerate, gild, overstate, festoon, embroider, adorn, spiff up, trim, magnify, deck, color, enrich, elaborate, ornament, beautify, enhance, array & garnish.  Embellish is a verb, embellishing is a noun & verb, embellished is a verb & adjective and embellisher & embellishment are nouns; the noun plural is embellishments.

The –er suffix was from the Middle English –er & -ere, from the Old English -ere, from the Proto-Germanic -ārijaz, thought most likely to have been borrowed from the Latin –ārius where, as a suffix, it was used to form adjectives from nouns or numerals.  In English, the –er suffix, when added to a verb, created an agent noun: the person or thing doing the action indicated by the root verb.   The use in English was reinforced by the synonymous but unrelated Old French –or & -eor (the Anglo-Norman variant -our), from the Latin -ātor & -tor, from the primitive Indo-European -tōr.  When appended to a noun, it created the noun denoting an occupation or describing the person whose occupation is the noun.  Rimbellisher is a noun and the noun plural is rimbellishers.  All other forms are non-standard but a wheel to which a rimbellisher has been fitted could be said to be rimbellished (adjective) while the person doing the fitting would be a rimbellisher (noun), the process an act of rimbellishing (verb) and the result a rimbellishment (noun).

Jaguar XK120 with wire wheels (left), with hubcaps (centre) and with hubcaps and rimbellishers (right).

The Jaguar XK range (1948-1961) was available either with solid or wire wheels and while the choice was usually on aesthetic grounds, those using the things in competition sometimes had to assess the trade-offs.  The wire wheels were lighter and provided better cooling of the brakes (especially those connected to the rear wheels, which were covered with fender skirts (spats) when the steel wheels were fitted).  In many forms of motor sport that was of course a great advantage but the spokes and the deletion of the skirts came at an aerodynamic cost, the additional drag induced by the combination reducing top speed by up to 5 mph (8 km/h) and increasing fuel consumption.  It was thus a question of working out what was most valued and in the early days, where regulations permitted, some drivers used wire-wheels at the front and retained the skirts at the rear, attempting to get as much as possible of the best of both worlds (the protrusion of the hubs used on the wire wheels precluded them from fitting behind the skirts).  Jaguar XK owners would never refer to their wheels as “rims” although there may be some who have added “rims” to their modern (post Tata ownership) Jaguars.

Hofit Golan (b 1985) and Lindsay Lohan (b 1986) attending Summer Tour Maserati in Porto Cervo, Sardinia, July 2016.  The Maserati Quattroporte is a 1964 Series I, fitted with steel wheels and rimbellishers.

Among certain classes, it’s now common to refer to wheels as rims, and the flashier the product, the more likely it is to be called a “rim”.  Good taste is of course subjective but as a general rule, the greater the propensity to being described as a rim, the more likely it is to be something in poor taste.  That’s unless it actually is a rim, some wheels being of multi-part construction where the rim is a separate piece (and composed sometimes from a different metal).  In the early days of motoring this was the almost universal method of construction and it persisted in trucks until relatively recently (although still used in heavy, earth-moving equipment and such).  However, those dealing with the high-priced, multi-pieced wheels seem still to call them wheels and use the term “rim” only when discussing the actual rim component.

1937 Rolls-Royce Phantom III four-door cabriolet with coachwork by German house Voll & Ruhrbeck, fitted with the standard factory wire wheels (left) and 1937 Rolls-Royce Phantom III fixed head coupé (FHC) with coachwork by Belgian house Vesters et Neirinck, fitted with the “Ace Deluxe” wheel discs which fitted over the standard factory wire wheels (right).  The coupé, fabricated in Brussels, was unusual in pre-war coachbuilding in that there was no B-pillar, the style which would become popular in the US between the 1950s-1970s where in two & four-door form it would be described as a “hardtop”, the nomenclature which would over the years be sometimes confused with the “hard-tops” sometimes supplied with convertibles as a more secure alternative to the usual “soft top”.

According to the Oxford English Dictionary (OED), the first known instance of rimbellisher in print was in The Motor (1903-1988) magazine in 1949 although they seem first to have been so-described when on sale in England in 1948.  The rimbellishers were a new product for the Ace Company which in the 1930s had specialized in producing disk-like covers for wire-wheels.  That might seem strange because wire wheels are now much admired but in the 1930s many regarded them as old-fashioned although their light-weight construction meant they were often still used.  What Ace’s aluminium covers provided was the advantage of the lighter weight combined with a shiny, modernist look and they were also easy to keep clean, unlike wire wheels which could demand hours each month to maintain.

The Ace company's publicity material from the 1950s.

In the post-war years the rimbellishers became popular because they were a detail which added a “finished” look.  Each was a chromed ring which attached inside the rim of the wheel, providing a contrasting band between the tyre and the centre of the wheel, partially covered usually by a hubcap.  Ace’s original Rimbellishers were secured using a worm-drive type of fastening which ensured the metal of the wheel suffered no damage but as other manufacturers entered the market, the trend became to use a cheaper method of construction using nothing more than multiple sprung tags and with the devices push-fitted into the well of the wheel, some scraping of the paint being inevitable.  Rimbellisher (always with an initial capital) was a registered trademark of the Ace company but the word quickly became generic and was in the 1950s & 1960s used to describe any similar device.  Interestingly, by the mid-1950s, Ace ceased to use “rimbellisher” in its advertising copy and described the two ranges as “wheel discs” and “wheel trims”.

The early versions did nothing more than produce a visual effect but the stylists couldn’t resist the opportunities and some rimbellishers grew to the extent they completely blocked the flow of air through the vents in the wheels and that adversely affected the cooling of the brakes, the use of some of the new generation of full-wheel covers also having this consequence.  The solution was to ensure there was some airflow but to maintain as much as possible of the visual effect, what were often added were little fins and for these to work properly, they needed to catch the airflow so there were left and right-side versions, an idea used to this day in the alloy wheels of some high-price machinery.

1969 Pontiac GTO with standard trim rings (left) and 1969 Pontiac GTO Judge supplied without trim rings (right).

Ace in the early 1950s had distributors in the US and both their rimbellishers and full-wheel covers were offered.  They took advantage of the design which enabled the same basic units to be used, made marque-specific only by the substitution of a centre emblem varied to suit different manufacturers.  Ace’s market penetration for domestic vehicles didn’t last because Detroit soon began producing their own and within a short time they were elaborate and often garish, something which would last into the twenty-first century.  The Americans soon forgot about the rimbellisher name and started calling them trim-rings and they became a feature of the steel “sports wheels” manufacturers offered on their high-performance ranges in the years before aluminium wheels became mainstream products.  The trim-rings of course had a manufacturing cost and this was built into the price when the wheels were listed as an option.  The cost of production wouldn’t have been great but interestingly, when General Motors’ Pontiac division developed the “Judge” option for its GTO to compete with the low-cost Plymouth Road Runner, the trim-rings were among the items deleted.  However, the Judge package evolved to the point where it became an extra-cost option for the GTO with the missing trim-rings about the only visible concession to economy.

Mercedes-Benz W111s: 1959 220 SE with 8-slot rimbellishers (left), 1967 250 SE with the briefly used solid rimbellishers (centre) and 1971 280 SE 3.5 with the later 12-slot rimbellishers which lacked the elegance of the 8-slots.

Like many companies, Mercedes-Benz used wheel covers as a class identifier.  When the W111 saloons (1959-1968) were released in 1959, the entry-level 220 was fitted with just hubcaps while the up-market twin-carburetor 220 S and the fuel-injected 220 SE had rimbellishers (made by the factory, not Ace).  Within a few years, the use of rimbellishers was expanded but by the mid-1960s, the elegant 8-slot units mostly had been replaced with a less-pleasing solid metal pressing (albeit one which provided a gap for brake cooling).  That didn’t last and, phased-in between 1967-1968, the company switched from the hubcap / rimbellisher combination to a one-piece wheel cover which included the emulation of a 12-slot rimbellisher.  There were no objections to the adoption of one-piece construction but few found the new design as attractive as the earlier 8-slot.

MG publicity photograph (left) showing MGA and Magnettes, the former fitted with Cornercroft's Ace Rimbellishers which were a factory option.  The MGA (right) uses a different third-party rimbellisher which was physically bigger and overlapped the edge of the rim to a greater degree.  The factory option is preferred by most because it better suits the MGA's delicate lines.

1958 Jaguar 3.4 (top) and 1960 MGA Coupé (bottom) with the relevant Ace-Mercury wheel discs.

As well as being an after-market product sold through retail outlets and offered as a dealer-fitted accessory, the Ace-Mercury wheel discs were at various times a factory option, MG sometimes listing them for the MGA (1955-1962), ZA-ZB Magnettes (1953-1958) and early versions of both the MGB (1962-1980) & Midget (1961-1979).  For Coventry-based Cornercroft (manufacturer of the Ace Mercury range), the attraction was that the (more or less) standardized shape of steel wheels meant it was possible to use the one basic design in a variety of diameters (13, 14 & 15 inch), able to be marketed for use with vehicles from different makers simply by changing the centre-cap to a fitting with the name of the relevant marque (others would also use the same technique).  Cornercroft also offered a (fake) eared spinner in the style of a knockoff nut but these seem never to have been factory options.  The Ace-Mercury was made from bright anodized aluminium and thus was both lightweight and corrosion-resistant but somewhat fragile if subjected to stresses with which steel would easily cope.  Additionally, like many “big” wheel-covers, they could be prone to “popping-off” during hard cornering, a phenomenon familiar to students of the car chases in Hollywood movies between the 1960s and 1990s, the film pedants (of which there seem to be many) documenting instances where the “continuity girl” either failed to notice or ignored a wheel-cover inexplicably re-attached mid-chase.  All of this meant the survival rate of Ace-Mercurys is low and many were anyway discarded as subsequent owners preferred the sexier look of wire, alloy or styled-steel wheels.  That makes them now a valuable period-piece and it’s not unusual to see an MG, Riley or Jaguar driven to a show with bare wheels, the Ace-Mercurys put on only for the duration of the exhibition.

Cornercroft advertisement, 1957.

One difference from the usual practice was that unlike most hub-caps or wheel covers which tended to be the same for all four wheels, the Ace-Mercury’s small louvers operated as air scoops when the wheels were in rotation, ducting cooling air through the ventilation holes in the wheel to assist in cooling the brakes.  A set of four was thus supplied in left & right-hand pairs and needed to be installed with the louvers’ open edge “facing the breeze”.  That wasn’t unique but was untypical and the concept was sometimes made more intricate still, such as when the fourth-generation Chevrolet Corvette (C4, 1983-1996) was introduced with alloy wheels of a different width front & rear, meaning that for the cooling ducts to work there were four different wheels.  Another quirk of the Ace-Mercury was that although the visual similarities make them all barely distinguishable except for the diameter, in January 1959, MG changed the design of the MGA’s disc wheels, compelling Cornercroft to re-design the internal structure (the MGA version’s left/right part numbers changing from 97H675/97H676 to BHA4165/BHA4166).  Details like this litter the car restoration business which is why replicating exactly what was done decades ago can be both challenging and expensive.

1966 Jaguar Mark X with factory rimbellishers.