Monday, January 29, 2024

Fecund & Fertile

Fecund (pronounced fuh-khunt, fee-kuhnd or fek-uhnd)

(1) Producing or capable of producing offspring, fruit, vegetation, etc in abundance; prolific; fruitful.

(2) Figuratively, highly productive or creative intellectually; innovative.

Circa 1525: From the mid-fifteenth century Middle English fecounde, from the Middle French fecund, from the Old French fecund & fecont (fruitful), from the Latin fēcundus (fruitful, fertile, productive; rich, abundant) (related to the Latin fētus (offspring) and fēmina (“woman”)), from fe-kwondo-, an adjectival suffixed form of the primitive Indo-European root dhei or dhe- (to suck, suckle), other derivatives meaning also “produce” & “yield”.  The fē- in this case wasn’t a prefix but a link to fētus whereas -cundus was the adjectival suffix.  The spelling fecund replaced the late Middle English fecounde and was one of the “Latinizing” revisions to spelling which was part of the framework of early Modern English, (more or less) standardizing use and replacing the Middle English forms fecond, fecound & fecounde.  The Latin root itself proved fecund; from it came also felare (to suck), femina (woman (literally “she who suckles”)), felix (happy, auspicious, fruitful), fetus (offspring, pregnancy), fenum (hay (which seems literally to have meant “produce”)) and probably filia (daughter) & filius (son), assimilated from felios (originally “a suckling”).  The noun fecundity emerged in the early fifteenth century and was from the Latin fecunditatem (nominative fecunditas) (fruitfulness, fertility), from fecundus (fruitful, fertile).  The old spelling fœcund is obsolete.  Fecund is an adjective and fecundity & fecundation are nouns; the noun plural is fecundities.

In his A Dictionary of Modern English Usage (1926), Henry Fowler (1858–1933) noted without comment the shift in popular pronunciation but took the opportunity to cite the phrase of a literary critic (not a breed of which he much approved) who compared the words of HG Wells (1866-1946) & Horace Walpole (1717–1797): “The fecund Walpole and the facund Wells”.  The critic, Fowler noted, had “fished up the archaic facund for the sake of the play on words”.  Never much impressed by flashy displays of what he called a “pride of knowledge”, his objection here was that there was nothing in the sentence to give readers any idea of the change in meaning caused by the substituted vowel.  Both words were from Latin adjectives, fēcundus (prolific) and facundus (eloquent).

Fertile (pronounced fur-tl or fur-tahyl (mostly UK RP))

(1) Of land, bearing, producing, or capable of producing vegetation, crops etc, abundantly; prolific.

(2) Of living creatures, bearing or capable of bearing offspring; capable of growth or development.

(3) Abundantly productive.

(4) Conducive to productiveness.

(5) In biology, fertilized, as an egg or ovum; fecundated; capable of developing past the egg stage.

(6) In botany, capable of producing sexual reproductive structures; capable of causing fertilization, as an anther with fully developed pollen; having spore-bearing organs, as a frond.

(7) In physics (of a nuclide) capable of being transmuted into a fissile nuclide by irradiation with neutrons (Uranium 238 and thorium 232 are fertile nuclides); (a substance not itself fissile, but able to be converted into a fissile material by irradiation in a reactor).

(8) Figuratively, of the imagination, energy etc, active, productive, prolific.

1425–1475: From the Late Middle English fertil (bearing or producing abundantly), from the Old French fertile or the Latin fertilis (bearing in abundance, fruitful, productive), from ferō (I bear, carry), akin to ferre (to bear), from the primitive Indo-European root bher (to carry (also “to bear children”)).  The verb fertilize dates from the 1640s in the sense of “make fertile” although the use in biology meaning “unite with an egg cell” seems not to have been used until 1859 and use didn’t become widespread for another fifteen years.  The noun fertility emerged in the mid-fifteenth century, from the earlier fertilite, from the Old French fertilité, from the Latin fertilitatem (nominative fertilitas) (fruitfulness, fertility), from fertilis (fruitful, productive).  Dating from the 1660s, the noun fertilizer was initially specific to the technical literature associated with agriculture in the sense of “something that fertilizes (land)”, and was an agent noun from the verb fertilize.  In polite society, fertilizer was adopted as a euphemism for “manure” (and certainly “shit”), use documented since 1846.  The noun fertilization is attested since 1857 and was a noun of action from fertilize; it was either a creation of the English-speaking world or a borrowing of the Modern French fertilisation.  The common antonyms are barren, infertile and sterile.  Fertile is an adjective, fertility, fertilization & fertileness are nouns, and fertilize, fertilized & fertilizing are verbs.  Technical terms like sub-fertile, non-fertile etc are coined as required.

The term “Fertile Crescent” was coined in 1914 by US-born University of Chicago archaeologist James Breasted (1865-1935); it referred to the strip of fertile land (in the shape of an irregular crescent) stretching from present-day Iraq through eastern Turkey and down the Syrian and Israeli coasts.  The significance of the area in human history was that it was here, more than ten-thousand years ago, that settlements began the practice of structured, seasonal agriculture.  The Middle English synonym childing is long obsolete but the more modern term “at risk” (of falling pregnant) survives for certain statistical purposes and was once part of the construct of a “legal fiction” in which the age at which women were presumed to be able to conceive was set as high as 65; advances in medical technology have affected this.

The difference

So often are “fecund” & “fertile” used interchangeably that there may be a case to be made that in general use they are practically synonyms.  However, the use is slanted because fertile is a common word and fecund is rare; it’s the use of fertile when, strictly speaking, fecund is correct which is the frequent practice.  Technically, the two have distinct meanings although there is some overlap and agriculture is a fine case-study: Fertile specifically refers to soil rich in nutrients and able to support the growth of plants.  Fecund can refer to soil capable of supporting plant growth but it has the additional layer of describing something capable of producing an abundance of offspring or new growth.  This can refer to animals, humans, bacteria or (figuratively) ideas.  Used interchangeably, except between specialists who need to differentiate, this linguistic swapping probably doesn’t cause many misunderstandings because the context of conversations will tend to make the meaning clear and for most of us, the distinction between a soil capable of growing plants and one doing so prolifically is tiresomely technical.  Still, as a rule of thumb, fertile can be thought of as meaning “able to support the growth of offspring or produce” while fecund implies “producing either in healthy volumes”.

Ultimate fecundity: Fast breeding

Although there are differences in meaning, fertile and fecund tend to be used interchangeably, especially in agriculture.  As adjectives, the difference is that fecund means highly fertile whereas fertile is the positive side of the fertile/infertile binary; capable of producing crops or offspring.  Fecundity may thus be thought of as a measure of the extent to which fertility is realised.  In nuclear physics, fertile material is that which, while not itself fissile (ie fissionable by thermal neutrons), is able to be converted into fissile material by irradiation in a reactor.  Three basic fertile materials exist: thorium-232, uranium-234 & uranium-238 and when these materials capture neutrons, respectively they are converted into uranium-233, uranium-235 & fissile plutonium-239.  Artificial isotopes formed in the reactor which can be converted into fissile material by one neutron capture include plutonium-238 and plutonium-240 which convert respectively into plutonium-239 & plutonium-241.
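For those who like such things rendered as code, the fertile-to-fissile conversions just listed amount to a simple lookup; a minimal Python sketch follows (the names and structure are illustrative only, not drawn from any nuclear data library):

# Fertile nuclides mapped to the fissile nuclides they become after
# a single neutron capture (the intermediate beta decays are omitted).
FERTILE_TO_FISSILE = {
    "thorium-232": "uranium-233",
    "uranium-234": "uranium-235",
    "uranium-238": "plutonium-239",
    # artificial isotopes needing one further capture:
    "plutonium-238": "plutonium-239",
    "plutonium-240": "plutonium-241",
}

def breed(nuclide: str) -> str:
    """Return the fissile nuclide bred from a fertile one, if known."""
    return FERTILE_TO_FISSILE.get(nuclide, "not fertile (in one capture)")

print(breed("uranium-238"))  # plutonium-239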

Obviously fertile and recently fecund.  In July 2023 Lindsay Lohan announced the birth of her first child.

Further along the scale are the actinides which demand more than one neutron capture before arriving at an isotope which is both fissile and long-lived enough to capture another neutron and undergo fission instead of decaying.  These chains include (1) plutonium-242 to americium-243 to curium-244 to curium-245, (2) uranium-236 to neptunium-237 to plutonium-238 to plutonium-239 and (3) americium-241 to curium-242 to curium-243 (or, more likely, curium-242 decays to plutonium-238, which also requires one additional neutron to reach a fissile nuclide).  Since these require a total of three or four thermal neutrons eventually to fission, and a thermal neutron fission generates typically only two to three neutrons, these nuclides represent a net loss of neutrons although, in a fast reactor, they may require fewer neutrons to achieve fission, as well as producing more neutrons when they do.
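The net-neutron arithmetic in that paragraph can be made explicit; a toy Python calculation under the assumptions stated above (the figures are the text's illustrative ones, not evaluated nuclear data):

def net_neutrons(neutrons_consumed: int, fission_yield: float) -> float:
    # Net balance over the whole chain: neutrons released by the
    # eventual fission minus all the thermal neutrons absorbed along
    # the way (the captures plus the neutron inducing fission).
    return fission_yield - neutrons_consumed

print(net_neutrons(4, 2.5))  # -1.5: a net loss in a thermal reactor
print(net_neutrons(3, 3.0))  #  0.0: a fast spectrum narrows the gap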

Fast breeder reactors (which are fission, not fusion, devices) have existed in labs for decades but, because of the demanding engineering (including the need sustainably to contain very high temperatures), the challenge has always been to build something which (1) produces more fissile material than it consumes and (2) does so indefinitely and economically.  On paper (and physicists admit the design is now so well understood a conceptual diagram can be sketched on a sheet in minutes) the science and engineering works so all that stands in the way is economics.  The lure of the fast breeder reactor is that, theoretically endlessly, it can produce more fissile material than it consumes (they're constructed using fertile material either wrapped around the core or encased in fuel rods).  Because plutonium-238, plutonium-240 and plutonium-242 are fertile, their accumulation is more manageable than that produced in conventional thermal reactors.  On planet Earth, the economics remain un-compelling, practical application of the technology having been thirty years off since the mid-1950s.  One proposal however transcends economics because it solves an otherwise insoluble problem.  If a facility for the manufacture of fissile material for spacecraft nuclear propulsion could be located at a point beyond the gravitational pull of Earth, it would be safe both to transport fertile materials to the facility and there manufacture the fissile material which could provide the energy required for space exploration.

Sunday, January 28, 2024

Adiaphoron

Adiaphoron (pronounced add-e-ah-for-on or eh-dee-ah-for-on)

(1) A matter of indifference.

(2) In philosophy, a matter held to be morally neutral.

(3) In Christian theology, something neither forbidden nor commanded by scripture and thus neither prescribed nor proscribed in church law.

(4) In Christian theology, the position that certain religious doctrines, rituals or ceremonies (even if non-standard) are not matters of concern and may be practiced or not, according to local preference.

1630s: From the Latin adjective adiaphoron, an inflection of adiaphoros (indifferent, non-essential, morally neither right nor wrong), neuter of Ancient Greek ἀδιάφορος (adiáphoros) (not different; indifferent), the construct being from a- (used in the sense of “not”) + diaphoros (different).  The Greek ἀδιάφορον (not different or differentiable) was thus the negation of διαφορά (diaphora) (difference).  The noun adiaphoria (a failure to respond to stimulation after a series of previously applied stimuli) is unrelated in meaning, the construct being a- (not) +‎ dia- (through) +‎ -phor (bearer) +‎ -ia (the suffix used to form abstract nouns).  Adiaphoron is a noun & adjective, adiaphorist & adiaphorism are nouns, adiaphorous, adiaphoristic & adiaphoric are adjectives; the noun plural is adiaphora.

In the philosophy of the Ancient Greeks, adiaphorism was an aspect in more than one school of thought.  To the Cynics it was used in the sense of “indifference” to both unfortunate events and the “stuff” which, then as now, functioned as the markers of success in society: power, fame & money.  The ancestor of the anti-materialists of the modern age, Cynicism understandably had more admirers than adherents.  The Stoics were more deterministic, dividing all the concerns of humanity into (1) good, (2) bad and (3) indifferent (adiaphora).  What they listed as good & bad was both predictable and (mostly) uncontroversial, something like a form of utilitarianism but without that creed’s essential component of distributive justice.  The implication, which retains much appeal to modern libertarians, was that for anything to be thought a matter of ethical concern, it needed to be defined as “good” or “bad”, the adiaphora being outside the scope of morality.  Acknowledged or not, this is what all but the most despotic legal and social systems can be reduced to although, being culturally and historically specific, the results can vary greatly.  In Athenian thought, the word also had a technical meaning wholly removed from morality.  To the Pyrrhonists (the most uncompromising of the philosophical sceptics) who essentially discarded all forms of imposed values in favor of defining everything by objective truth alone, the significance of the adiaphora was that these were things which, as a technical point, could not logically be differentiated.

Lindsay Lohan and her lawyer in court, Los Angeles, December 2011.

In Christianity, the adiaphora are those matters which, while they might be a significant or traditional part of worship either universally or sectionally, are not regarded as essential components of belief but may be practiced where the preference exists.  Within the schismatic world of Christianity, views differ and what is essential doctrinal orthodoxy in some denominations can be mere adiaphora in others.  Historically, the matter of what is and is not adiaphoric has been a matter of dispute and was a significant factor in the sixteenth century Protestant Reformation, a movement much concerned with the appropriateness of non-biblical ritual, rites, decorations and “the other detritus of Popery”.  It took some time to work out but what emerged was a political compromise which defined adiaphora essentially as those traditions “neither commanded nor forbidden in the Word of God”, thus permitting the ongoing observation of the “bells & whistles” of worship which had evolved over centuries and despite the entreaties of the iconoclasts, continued to be clung to by congregations.  The lesson of this compromise to accommodate “harmless regionalisms” was well learned by some later leaders, religious and secular.

Between the Christian denominations, the same thing can variously be dogma, heresy or mere adiaphora and an illustrative example of disagreement lies in the cult of Mary (Mariology to the theologians).  In the Roman Catholic Church, the cult of Mary is based on dogma worked out over centuries: (1) that Mary was a pure virgin, before, during and after giving birth to Christ, (2) that Mary was the “Mother of God”, (3) that Mary, at her conception was preserved immaculate from Original Sin and (4) that at the conclusion of her earthly existence, Mary was assumed, body and soul into heaven (it has never been made explicit whether Mary died on earth although this does seem long to have been theological orthodoxy, the essential point being the physical assumption (from the Latin assūmptiō (taking up)) meant her body did not remain to be corrupted).

In the intricate interplay of theology and church politics, what really appealed to nineteenth century popes was linked to Gnosticism, the notion of “the dual realms of darkness and light beyond the mere veil of appearances, where reside the Godhead, the Virgin Mary, Michael, and all the angels and the saints, opposed by the powers of the Prince of Darkness and his fallen angels who wander through the world for the ruin of souls” as Leo XIII (1810–1903; pope 1878-1903) wrote in a prayer to be recited at the end of every Mass.  In other words, whatever happens depends on Mary’s intercession with her Christ child “to so curb the power of Satan that war and discord will be vanquished”.  In turn, this depends on Marian revelations sanctioned as authentic by the pope, whose power is thus parallel to Mary’s.  It’s something which has been criticized as “opportunistic constructed symbiosis”.

Assumption of the Virgin Mary (circa 1637) by Peter Paul Rubens (1577-1640), Liechtenstein Museum, Vienna.

Modern popes, if they hold such a view, no longer dwell on it but it remains church dogma and because it was in the 1950s proclaimed with the only (formal) invocation of papal infallibility since the First Vatican Council (Vatican I; 1869-1870), any change would be something extraordinary.  In some other denominations Mary is more a historical figure than a cult and in the Anglican Church the doctrine of the Assumption ceased to be part of orthodoxy in the sixteenth century; while the Protestant Reformation wasn’t a project of rationalism, it was certainly about simplicity and a rejection of some of the mysticism upon which the whole clerical class depended for their authority.  Despite that, in Anglicanism, the Assumption of Mary seems never to have been proscribed and in the twentieth century it re-appeared in the traditions of the so-called “Anglo-Catholics” who adore the “Romish ways”.  For most of the Anglican communion however, it seems to be thought of as adiaphora, one of those details of religious life important to some but which seems neither to add much nor threaten anything.

Saturday, January 27, 2024

Synecdoche

Synecdoche (pronounced si-nek-duh-kee)

In the study of rhetoric, a figure of speech in which a part is used for the whole or the whole for a part, the special for the general or the general for the special; a member of the figurative language set, a group which includes metaphors, similes and personification; it describes using part of a whole to represent the whole.

Late 1400s: As a "figure of speech in which a part is taken for the whole or vice versa," synecdoche is a late fifteenth century correction of the late fourteenth century synodoches, from the Medieval Latin synodoche, an alteration of the Late Latin synecdochē, from the Ancient Greek συνεκδοχή (sunekdokhḗ) (the putting of a whole for a part; an understanding one with another (and literally "a receiving together or jointly" (ekdokhē the root of interpretation)) from synekdekhesthai (supply a thought or word; take with something else, join in receiving).  The construct was syn- (with) + ek (out) + dekhesthai (to receive), related to dokein (seem good) from the primitive Indo-European root dek- (to take, accept).  The construct of the Greek form was σύν (sún) (with) + ἐκ (ek) (out of) + δέχεσθαι (dékhesthai) (to accept), this final element related to δοκέω (dokéō) (to think, suppose, seem).  The alternative spellings syndoche & synechdoche are rare.  Synecdoche, synecdochization & synecdochy are nouns, synecdochic & synecdochical are adjectives, synecdochize is a verb and synecdochically is an adverb; the noun plural is synecdoches.  

Synecdoche vs. Metonymy

It’s one of those places in English where rules or descriptions overlap and it’s easy to confuse synecdoche and metonymy because they both use a word or phrase to represent something else (and there are authorities which classify synecdoche as merely a type of metonymy although this appals the more fastidious).  Technically, while a synecdoche takes an element of a word or phrase and uses it to refer to the whole, a metonymy replaces the word or phrase entirely with a related concept.  Synecdoche and metonymy have much in common and there are grey areas: synecdoche refers to parts and wholes of a thing, metonymy to a related term.  The intent of synecdoche is usually either (1) to deviate from a literal term in order to spice up everyday language or (2) a form of verbal shorthand.  In the discipline of structural linguistics, it’s noted the distinction is between using a part to represent the whole (pars pro toto, from the Latin, the construct being pars (part) + prō (for) + tōtō, the ablative singular of tōtus (whole, entire)) or using the whole to represent a part (totum pro parte, from the Latin, the construct being tōtum (whole) + prō (for) + parte, the ablative singular of pars (part)).

The Pentagon, Arlington County, Virginia, USA.  Advances in technology had made the site vulnerable to long-range attack as early as the 1950s and many critical parts of the military's administration are now located elsewhere.  After construction ended in 1943, for some 80 years the Pentagon was (in terms of floor area) the world's largest office building.  Its place in this architectural pecking order has since been taken by the Surat Diamond Bourse in Gujarat, India, opened in 2023.

Forms of Synecdoche

(1) A part to represent a whole: The word "head" can refer to counting cattle or people; hands for people on a specific job or members of a crew etc.

(2) A whole to represent a part: The word "Pentagon", while literally a very big building, often refers to the few decision-making generals who comprise the Joint Chiefs of Staff or, more generally, the senior ranks of the US military.  However, the use of "the White House" (a smaller building) operates synecdochically to refer to "the administration" rather than "the president" and while it should be reasonable to assume some interchangeability, under both Donald Trump (b 1946; US president 2017-2021) and Joe Biden (b 1942; US president since 2021), it's been not uncommon to hear "the White House" being quoted "clarifying" (ie correcting) something said by the president.

(3) A synecdoche may use a word or phrase as a class to express more or less than the word or phrase actually means: The USA is often referred to as “America” although this is a term from geography while "USA" is from political geography.  The word "crown" is often used to refer to a monarch or the monarchy as a whole but in some systems (notably the UK and Commonwealth nations which retain the UK's monarch as their head of state) the term "The Crown" is a synecdoche for "executive government".  

(4) Material representing an object: Cutlery and flatware are often (and often erroneously) referred to as "silver" or "silverware" even though there may be no silver content in the metal although, "silver" being also a term referencing a color, the use is thought acceptable.

(5) A single (acceptable) word to suggest to the listener or reader another (unacceptable) word; commonly used as a linguistic work-around of NSFW (not suitable for work) rules on corporate eMail or other systems: “crock” or “cluster” are examples, pointing respectively to “crock of shit” and “cluster-fuck”.

Lindsay Lohan and her lawyer in court, Los Angeles, December 2011.

Friday, January 26, 2024

Brand

Brand (pronounced brand)

(1) The kind, grade, or make of a product or service, as indicated by a stamp, trademark, or such.

(2) A mark made by burning or otherwise, to indicate kind, grade, make, ownership (of both objects and certain animals) etc.

(3) A mark formerly put upon slaves or criminals, made on the skin with a hot iron.

(4) Any mark of disgrace; stigma.

(5) A kind or variety of something distinguished by some distinctive characteristic.

(6) A set of distinctive characteristics that establish a recognizable image or identity for a person or thing.

(7) A conflagration; a flame.  A burning or partly burned piece of wood (now rare except regionally although the idea of brand as “a flaming torch” still exists as a poetic device).  In the north of England & Scotland, a brand is a torch used for signalling. 

(8) A sword (archaic except as a literary or poetic device).

(9) In botany, a fungal disease of garden plants characterized by brown spots on the leaves, caused by the rust fungus Puccinia arenariae.

(10) A male given name (the feminine name Brenda was of Scottish origin and was from the Old Norse brandr (literally “sword” or “torch”)).

(11) To label or mark with or as if with a brand.

(12) To mark with disgrace or infamy; to stigmatize.

(13) Indelibly to impress (usually in the form “branded upon one’s mind”).

(14) To give a brand name to (in commerce including the recent “personal brand”).

Pre 950: From the Middle English, from the Old English brond & brand (fire, flame, destruction by fire; firebrand, piece of burning wood, torch (and poetically “sword”, “long blade”)), from the Old High German brant, the ultimate source the primitive Indo-European bhrenu- (to bubble forth; brew; spew forth; burn).  It was cognate with the Scots brand, the Dutch & German Brand, the Old Norse brandr, the Swedish brand (blaze, fire), the Icelandic brandur and the French brand of Germanic origin.  The Proto-Slavic gorěti (to burn) was a distant relation.  Brand is a noun & verb, brander is a noun, brandless is an adjective, branded is a verb and branding is a noun & verb; the noun plural is brands.  Forms (hyphenated and not) like de-brand, non-brand, mis-brand & re-brand are created as required and unusually for English, the form brander seems never to have been accompanied by the expected companion “brandee”.

Some work tirelessly on their “personal brand”, a term which has proliferated since social media gained critical mass.  Lindsay Lohan’s existence at some point probably transcended the notion of a personal brand and became an institution; the details no longer matter.

The verb brand dates from the turn of the fifteenth century in the sense of “to impress or burn a mark upon with a hot iron, cauterize; stigmatize” and originally described the marks imposed on criminals or cauterized wounds, the use developed from the noun.  The figurative use (often derogatory) of “fix a character of infamy upon” emerged in the mid-fifteenth century, based on the notion of the association with criminality.  The use to refer to a physical branding as a mark of ownership or quality dates from the 1580s and from this developed the familiar modern commercial (including “personal brands”) sense of “brand identity”, “brand recognition”, “brand-name” etc.  Property rights can also attach to brands, the idea of “brand-equity”.

Although it’s unknown just when the term “branding iron” (the (almost always) iron instrument which when heated burned brands into timber, animal hides etc) was first used (it was an ancient device), the earliest known citation dates only from 1828.  The “mark made by a hot iron” was older and in use since at least the 1550s, noted especially of casks and barrels, the marks indicating variously the maker, the type of contents, the date (of laying down etc) or the claimed quality.  By the early-mid nineteenth century the meaning had broadened to emphasise “a particular make of goods”, divorced from a particular single item and the term “brand-name” appears first to have been used in 1889, something significant in the development of the valuable commodity of “brand-loyalty” although that seems not to have been an acknowledged concept in marketing until 1961.  The idea of “brand new” is based on the (not always accurate) notion a brand was the last thing to be applied to a product before it left the factory.

BMC ADO16 brands, clockwise from top left: Wolseley 1300, Riley Kestrel 1300, MG 1300, Austin 1300 GT, Morris 1100 and Vanden Plas Princess 1300.  The British Motor Corporation's (BMC) ADO16 (Austin Drawing Office design 16) was produced between 1962-1974 and was a great success domestically and in many export markets, more than two million sold in 1.1 & 1.3 litre form.  The Austin & Morris brands made up the bulk of the production but Wolseley, Riley, MG & Vanden Plas versions were at various times available.  All were almost identical mechanically, with the brand differentiation restricted to the interior trim and the frontal panels.  This was the high (or low) point of the UK industry's “badge engineering”.  The abbreviation ADO is still sometimes said to stand for “Amalgamated Drawing Office”, a reference to the 1952 creation of BMC when the Austin & Morris design & engineering resources were pooled.  Like many such events subsequently, the amalgamation was more a “takeover” than a “merger” and the adoption of “Austin Drawing Office” reflected the priorities and loyalties of Leonard Lord (later Lord Lambury, 1896–1967), the former chairman of Austin who was appointed to head the conglomerate.  The term “Amalgamated Drawing Office” appears to be a creation of the internet age, the mistake still circulating.

Since the beginnings of mass-production made possible by powered industrial processes and the ability to distribute manufactured stuff world-wide, brand-names have become (1) more prevalent and (2) not of necessity as distinctive as once they were.  Historically, in commerce, a brand was an indication of something unique but as corporations became conglomerates they tended to accumulate brands (sometimes with no other purpose than ceasing production in order to eliminate competition) and over time, it was often tempting to reduce costs by ceasing separate development and simply applying a brand to an existing line, hoping the brand loyalty would be sufficient to overlook the cynicism.  The British car manufacturers in the 1950s used the idea to maintain brand presence without the expense of developing unique products and while originally some brand identity was maintained with the use of unique mechanical components or coachwork while using a common platform, by the late 1960s the system had descended to what came to be called “badge engineering”, essentially identical products sold under various brand-names, the differences restricted to minor variations in trim and, of course, the badge.

Australia Day vs Invasion Day: The case for a re-brand

Although it came to be known as “Australia’s national day” and in some form or other had been celebrated or at least marked since the early nineteenth century, as a large-scale celebration (with much flag waving) it has been a thing only since the 1988 bi-centennial of white settlement.  What the day commemorated was the arrival in 1788 in what is now Sydney of the so-called “First Fleet” of British settlers, the raising of the Union Flag the first event of legal significance in what ultimately became the claiming of the continental land-mass by the British crown.  Had that land been uninhabited, things good and bad would anyway have happened but in 1788, what became the Commonwealth of Australia was home to the descendants of peoples who had been in continuous occupation since first arriving up to 50,000 years earlier (claims the history extends a further 10,000 years remain unsupported by archaeological evidence); conflict was inevitable and conflict there was, the colonial project a violent and bloody business, something the contemporary records make clear was well understood at the time but which really entered modern consciousness only in recent decades.

What the colonial authorities did was invoke the legal principle of terra nullius (from the Latin terra nūllīus (literally “nobody's land”)) which does not mean “land inhabited by nobody” but “land not owned by anyone”.  The rationale for that was the view the local population had no concept of land “ownership” and certainly no “records” or “title deeds” as they would be understood in English law.  Given that, not only did the various tribes not own the land but they had no system under which they could own land; thus the place could be declared terra nullius.  Of late, some have devoted much energy to justifying all that on the basis of “prevailing standards” and “accepted law” but even at the time there were those in London who were appalled at what was clearly theft on a grand scale, understanding that even if the indigenous population didn’t understand their connection to the land and seas as “ownership” as the concept was understood in the West, what was undeniable by the 1830s when the doctrine of terra nullius was formally interpolated into colonial law was that those tribes understood what “belonged” to them and what “belonged” to other tribes.  That’s not to suggest it was a wholly peaceful culture, just that borders existed and were understood, even if sometimes transgressed.  Thus the notion that 26 January should better be understood as “Invasion Day” and that, rather than a celebration of a blood-soaked expropriation of a continent, what is more appropriate is a treaty between the colonial power (and few doubt that is now the Australian government) and the descendants of the conquered tribes, now classified as “first nations”.  Although the High Court of Australia in 1992 overturned the doctrine of terra nullius when it was recognized that in certain circumstances the indigenous peoples could enjoy concurrent property rights to land with which they could demonstrate a continuing connection, this did not dilute national sovereignty nor in any way construct the legal framework for a treaty (or treaties).

The recognition that white settlement was an inherently racist project based on theft is said by some to be a recent revelation but there are documents of the colonial era (in Australia and elsewhere in the European colonial empires) which suggest there were many who operated on a “we stole it fair and square” basis and many at the time probably would not have demurred from the view 26 January 1788 was “Invasion Day” and that while it took a long time, ultimately that invasion succeeded.  Of course, elsewhere in the British Empire, other invasions also proved (militarily) successful but usually these conflicts culminated in a treaty, however imperfect may have been the process and certainly the consequences.  In Australia, it does seem there is now a recognition that wrong was done and a treaty is the way to offer redress.  That of course is a challenging path because, (1) as the term “first nations” implies, there may need to be dozens (or even hundreds according to the count of some anthropologists) of treaties and (2) the result will need to preserve the indivisible sovereignty of the Commonwealth of Australia, something which will be unpalatable to the most uncompromising of the activists because it means that whatever the outcome, it will still be mapped onto the colonial model.

As the recent, decisive defeat of a referendum (which would have created a constitutionally entrenched Indigenous advisory body) confirmed, anything involving these matters is contentious and while there are a number of model frameworks which could be the basis for negotiating treaties, the negotiating positions which will emerge as “the problems” are those of the most extreme 1% (or some small number) of activists whose political positions (and often incomes) necessitate an uncompromising stance.  Indeed, whatever the outcome, it’s probably illusory to imagine anything can be solved because there are careers which depend on there being no solution and it’s hard to envisage any government will be prepared to stake scarce political capital on a venture which threatens much punishment and promises little reward.  More likely is a strategy of kicking the can down the road while pretending to be making progress; many committees and boards of enquiry are likely to be in our future and, this being a colonial problem, the most likely diversion on that road will be a colonial fix.

One obvious colonial fix would be a double re-branding exercise.  The New Year’s Day public holiday could be shifted from 1 January to December 31 and re-branded “New Year’s Eve Holiday”, about the only practical change being that instead of the drinking starting in the evening it can begin early in the day (which for many it doubtless anyway does).  Australia Day could then be marked on 1 January and could be re-branded to “Constitution Day” although given the history that too might be found objectionable.  Still, the date is appropriate because it was on 1 January 1901 the country and constitution came into existence as a consequence of an act of the Imperial Parliament, subsequently validated by the parliament of the Commonwealth of Australia (an institution created by the London statute).  It’s the obvious date to choose because that was the point of origin of the sovereign state although in the narrow technical sense, true sovereignty was attained only in steps (such as the Statute of Westminster (1931)), the process not complete until simultaneously both parliaments passed their respective Australia Acts (1986).  The second re-branding would be to call 26 January “Treaty Day” although the actual date is less important than the symbolism of the name and Treaty Day could be nominated as the day on which a treaty between the First Nations and the Commonwealth could be signed.  The trick would be only to name 26 January as the date of the signing, the year a function of whenever the treaty negotiations are complete.  The charm of this approach is the can can be kicked down the road for the foreseeable future.  Any colonial administrator under the Raj would have recognized this fix.

Thursday, January 25, 2024

Alexithymia

Alexithymia (pronounced ey-lek-suh-thahy-mee-uh)

In psychiatry, a range of behaviors associated with certain conditions which manifests as a difficulty in experiencing, processing, expressing and describing emotional responses.

1973: The construct was the Ancient Greek a- (not) + λέξις (léxis) (speaking) + θυμός (thumós) (heart (in the sense of “soul”)) which deconstructs as a- + lexi + -thymia (in a medical context a suffix meaning “one’s state of mind”), alexithymia thus understood as “without words for emotions”.  Alexithymia is a noun and alexithymic & alexithymiac are nouns & adjectives; the noun plural of alexithymia is also alexithymia but alexithymics, the plural of alexithymic, is the more common form.

The word alexithymia was coined in 1973 by US-based psychiatrists John Nemiah (1918–2009) and Peter Sifneos (1920-2008) to describe a psychological state as well known to the general population as to the profession, the former preferring terms like “emotionless”, “taciturn”, “feelingless” or “impassive” although alexithymia has meanings which are more specific.  Translated literally as “no words for emotions”, in practice it’s a spectrum condition which references individual degrees of difficulty in recognizing, processing or expressing emotional states or experiences.  Although it appears in both the American Psychiatric Association's (APA) Diagnostic and Statistical Manual of Mental Disorders (DSM) and the World Health Organization’s (WHO) International Classification of Diseases (ICD), neither class it as either a diagnosable mental disorder or a symptom.  Instead, it should be regarded as a dimensional construct and one distributed normally in the general population.  In other words it’s a personality trait and like all spectrum conditions, it varies in frequency and intensity between individuals.

Alexithymia was first described as a psychological construct characterized by difficulties in identifying, describing, and interpreting one's emotions but it was soon realized individuals less able to recognize and express their own feelings would often have a diminished ability to understand the emotional experiences of others.  Clinically, alexithymia is classified in two sub-groups: (1) Primary (or Trait) Alexithymia is considered more stable and enduring and the evidence suggests there is often a genetic or developmental basis, those with primary alexithymia displaying indications from an early age.  (2) Secondary (or State) Alexithymia is something usually temporary and often associated with specific psychological or medical conditions, noted especially in patients suffering post-traumatic stress disorder (PTSD) and depressive illnesses.

Available for both Android and iOS, there are Alexithymia apps and it's possible there are those who wish to increase the extent of at least some aspects of the condition in their own lives, the apps presumably a helpful tool in monitoring progress in either direction.  There must be emos who would like to be more alexithymic. 

The characteristics common to alexithymia include (1) a limited imaginative capacity and “fantasy life”, (2) a difficulty in identifying and describing emotions, (3) thought processes which focus predominantly on external events rather than internal emotional experience, (4) a difficulty in distinguishing between emotions and bodily sensations and (5) challenges in understanding (or even recognizing) the emotions of others.  As a spectrum condition, alexithymia can vary greatly in severity, and not all with alexithymia will experience the same symptoms, with a high incidence reported among patients with psychiatric and psychosomatic disorders.  Additionally, it does seem a common feature of neurological disease with most evidence available for patients with traumatic brain injury, stroke, and epilepsy although the numbers may be slanted because of the greater volume of study of those affected and it remains unclear how independent it is from affective disorders such as depression and anxiety, both common in neurological conditions.

A sample from the validation study of the Toronto Alexithymia Scale (TAS-26) (in the Croatian population).

Clinicians have available a number of questionnaires which can be used to assess a patient’s state of alexithymia and these can do more than provide a metric; the limitation of drawing a conclusion from observation alone is that with such an approach it can genuinely be impossible to distinguish between the truly alexithymic and those who have no difficulties in experiencing, processing, expressing and describing emotional responses but for some reason choose not to.  Such behavior can of course induce problems in inter-personal relationships but it’s something distinct from alexithymia and importantly too, it is clinically distinct from psychiatric personality disorders, such as antisocial personality disorder.  However, as a structural view of the DSM over its seventy-odd years would indicate, within psychiatry, mission creep has been a growing phenomenon and the definitional nets tend to be cast wide and wider and it’s not impossible that alexithymia may in some future edition be re-classified as a diagnostic criterion or at least recognized formally as a symptom.  It has for some time been acknowledged the DSM has over those decades documented the reassessment of some aspects of the human condition as mental disorders but what is less discussed is the relationship between cause and effect and there will be examples of both: it would be interesting to try to work out if there’s a pattern in the nature of (1) the changes the DSM has driven compared with (2) those which retrospectively have been codified.

Lindsay Lohan and her lawyer in court, Los Angeles, December 2011.

There may be movement because alexithymia has many of the qualities and attributes which appeal to both academia and the pharmaceutical industry.  The orthodoxy is that it occurs in some 10% of the general population but is disproportionately seen in patients suffering certain mental conditions, notably neuro-developmental disorders; the prevalence among those with autism spectrum disorder (ASD) estimated at anything up to 90%.  What will probably surprise few is that within any sub-group, it is males who are by far the most represented population and there is even the condition normative male alexithymia (NMA) although that describes the behavior and not the catchment, NMA identified also in females.

Wednesday, January 24, 2024

Digit

Digit (pronounced dij-it)

(1) In anatomy and zoology, a jointed body part at the end of the limbs of many vertebrates. The limbs of primates end in five digits, while the limbs of horses end in a single digit that terminates in a hoof.  In humans, digit is an alternative name for a finger or toe; dactyl.

(2) In zoology, a similar or similar-looking structures in other animals.

(3) As a historical unit of lineal measure, a unit of length notionally based upon the width of an adult human finger, standardized differently in various places and times (and still used as a measure in certain alcoholic spirits and among those fitting bras who recommend the finger as the gauge of the space between skin & fabric).  The most frequently cited is the English digit of 1/16 of a foot (about 19mm).  Prior to standardization, digit was used as a synonym of inch (the synonyms including “finger”, “fingerbreadth” & “fingersbreadth”).

(4) In modern mathematics, the whole numbers from 0 (zero) to 9 and the Arabic numerals representing them, which are combined to represent base-ten numbers; a position in a sequence of numerals representing a place value in a positional number system (ie any of the symbols of other number systems).

(5) In astronomy, the twelfth part of the sun's or moon's diameter; used often to express the magnitude of an eclipse.

(6) In geometry, a synonym for degree (1/360 of a circle) (obsolete).

(7) An index (obsolete).

1350–1400: From the Middle English digit, from the Latin digitus (a fingerbreadth; a number); a doublet of digitus.  The Latin was from the primitive Indo-European deyǵ- (to show, point out, pronounce solemnly), a variant of the root dey- & deik (to show; pronounce solemnly) from which Latin also gained dīcō (I say, speak, talk) & dicere (to say, speak) and English picked up toe.  Fingers were thus “pointers & indicators” and digit gained the meanings related to mathematics and numbers; fingers were used for counting up to ten (and, with recycling, beyond).  The finger or toe sense in English is documented from the 1640s but the date of origin is speculative.  Indo-European cognates include the Sanskrit दिशति (diśáti) (to show, point out), the Ancient Greek δείκνυμι (deíknumi) (to show) & δίκη (díkē) (manner, custom), the Old English tǣċan (to show, point out (source of the English teach)) and tācen (the English token).  Digit is a noun & verb, digitize is a verb and digital & digitigrade are nouns & adjectives; the noun plural is digits.

Great moments in digits

The phalanx of the ten digits of two human hands is presumed to have been the integers of the original hand-held calculator and in this use it would have predated formal structures of language, the concepts of “one” and “two” the origin of mathematics.  All humans naturally having ten digits, the decimal (Base-10) numeral system emerged (apparently independently) in many ancient cultures although there was some intellectual transfer, the Greeks gaining the system from Egypt although neither the Greeks nor the Romans exclusively used Base-10, some industry-specific methods of calculation based on the capacity of the containers in traditional use.  In China, there’s evidence of use from the first century BC.  The familiar numerals (0, 1, 2, 3, 4, 5, 6, 7, 8 & 9) which underpin all mathematics were developed by Arabic and Indian scholars although the elusive “0” wasn’t in widespread use until the ninth century.

Zero though had a long history and texts from circa 300 BC detailing Babylonian mathematics display a placeholder symbol for zero in their positional numeral system (without which the representation of big numbers would practically have been impossible) although there is no evidence of the concept of zero existing as a stand-alone number.  Around the fifth century AD, Indian mathematicians documented why zero was so essential although it was a big country and there was no standardization in the symbolic representation of the value; the math however remains recognizably identical as one would expect.  Whether through the exchange of texts or (as many suspect is more likely) through the trade routes, the zero travelled west to the Islamic world where both Persian and Arabic mathematicians published works explaining the implications of the still novel digit.  In the Medieval West, translations of the texts appeared but zero’s path to acceptance in Europe was slow and resisted, both by merchants and the Church, institutions with their own systems, mastery of which was in the hands of an educated few.  However, so compelling were the advantages offered by adoption that by the thirteenth century, it was clear zero was here to stay.

Ten digit human hands might have been (more or less) universal but historically, Base-10 was not.  The Maya civilization used a vigesimal system (Base-20) and vigesimal components were in the counting systems of the Aztecs and some African cultures, the latter presumably an independent development.  The assumption of anthropologists is that Base-20 is a “fingers & toes” system and it does seem to be something restricted to warm climates where the removal of footwear doesn’t risk frostbite.  Nor were the hands always dealt with in multiples of five, the Yuki language of what is modern California using Octal (Base-8), counting the spaces between the fingers rather than the digits.  The ancient Mesopotamians (most famously the Babylonians & Sumerians) had a Sexagesimal (Base-60) system and that endures to this day in the measurement of time (60 seconds in a minute, 60 minutes in an hour) although there was an attempt to change that during the French Revolution (1789), the new republic introducing decimal time in 1793, seen as an act of democratic modernization which would include a programme to decimalize all units of measurement; the day became 10 hours long, an hour was 100 minutes and a minute 100 seconds.  However, the experiment did not prove a success, the critical mass of the old ways too embedded in the culture, and the idea was abandoned in 1795 although the metric system did debut in 1799 and thrived, eventually world-wide (except in the US and a couple of quixotic hold-outs).  The Duodecimal (Base-12) system was used in ancient Egypt and Mesopotamia and it too persists in commerce in measures like the dozen (12) and the gross (12 dozen (144)).  Binary (Base-2) of course runs the modern world because that is how (non-quantum) computers work, “0” & “1” being “on” & “off” respectively, most of what a computer does able ultimately to be reduced to a rapid succession of on/off transactions.  Nerds like Hexadecimal (Base-16) which uses the digits 0-9 and the letters A-F, representing values from 0 to 15.  Not the most intuitive of systems, but developers use hexadecimal numbers because in certain circumstances they offer an easier way to represent binary-coded values.
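For the curious, the positional idea common to all these systems is easily demonstrated; a minimal Python sketch follows (the helper name to_base is mine, chosen for illustration):

def to_base(n: int, base: int, digits: str = "0123456789ABCDEFGHIJ") -> str:
    """Render a non-negative integer in any base from 2 to 20."""
    if n == 0:
        return digits[0]
    out = []
    while n:
        n, r = divmod(n, base)   # peel off the least significant digit
        out.append(digits[r])
    return "".join(reversed(out))

print(to_base(255, 2))   # 11111111 (binary: on/off, as computers hold it)
print(to_base(255, 16))  # FF (hexadecimal, the nerds' favourite)
print(to_base(255, 20))  # CF (vigesimal, as the Maya counted)

# Sexagesimal survives in timekeeping: 3661 seconds is 1h 1m 1s.
h, rem = divmod(3661, 3600)
m, s = divmod(rem, 60)
print(h, m, s)  # 1 1 1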

During an Aegean cruise in October 2016, Lindsay Lohan suffered a finger injury.  In this dreadful nautical incident, the tip of one digit was severed by the boat's anchor chain but details of the circumstances are sketchy although there was speculation that upon hearing the captain give the command “weigh anchor”, she decided to help but, lacking any background in admiralty jargon, misunderstood the instruction.

The detached chunk of the ring-finger's distal phalanx was salvaged from the deck and expertly re-attached by a micro-surgeon ashore, digit and the rest of the patient both said to have made full recoveries.  Despite the injury to the ring-finger, Ms Lohan still managed to find a husband so all's well that ends well.

Tuesday, January 23, 2024

Nuncio

Nuncio (pronounced nuhn-shee-oh, nuhn-see-oh or noo-see-oh)

(1) In the Roman Catholic Church, the ecclesiastic title of a permanent diplomatic representative of the Holy See to a foreign court, capital or international organization, ranking above an internuncio and accorded a rank equivalent to an accredited ambassador.

(2) By extension, one who bears a message; a messenger.

(3) Any member of any Sejm of the Kingdom of Poland, Polish–Lithuanian Commonwealth, Galicia (of the Austrian Partition), Duchy of Warsaw, Congress Poland, or Grand Duchy of Posen (historic reference only).

1520–1530: From the older Italian nuncio (now nunzio) from the Classical Latin nūncius & nūntius (messenger) of uncertain origin.  It may be from the primitive Indo-European root neu- (to shout) or new (to nod), same source as the Latin nuō, the Ancient Greek νεύω (neúō) (to beckon, nod) and the Old Irish noid (make known).  The alternative view is it was contracted from noventius, from an obsolete noveō, from novus.  Nuncio, nunciature & nuncioship are nouns and nunciotist is an adjective; the noun plural is nuncios but according to the text trawlers, the more frequently used plural is nunciature ((1) the status or rank of a nuncio, (2) the building & staff of a nuncio and (3) the term of service of a nuncio) which seems strange and may reflect the selection of documents scanned.  Nunciatory & nunciate are unrelated (directly) and are forms of the Latin nuncius & nuntius (messenger, message).

In diplomatic service

An apostolic nuncio (also known as a papal nuncio or nuncio) is an ecclesiastical diplomat, serving as envoy or permanent diplomatic representative of the Holy See to a state or international organization and is head of the Apostolic Nunciature, the equivalent of an embassy or high-commission.  The Holy See is legally distinct from the Vatican City, an important theological distinction for the Vatican although one without practical significance for the states to which they’re accredited.  Most nuncios have been bishops or Archbishops and, by convention, in historically Catholic countries, the nuncio usually enjoys seniority in precedence, appointed ex officio as dean of the diplomatic corps.  Between 1965 and 1991, the term pro-nuncio was applied to a representative of full ambassadorial rank accredited to a country that did not accord precedence and de jure deanship of the diplomatic corps and in countries with which Holy See does not have diplomatic ties, an apostolic delegate may be sent to act as liaison with the local church.  Apostolic delegates have the same ecclesiastical rank as nuncios, but no diplomatic status except those which the country may choose to extend.

Der Apostolische Nuntius (the Apostolic Nuncio) to Germany leaving the presidential palace of Generalfeldmarschall Paul von Hindenburg (1847-1934), Reichspräsident (1925-1934) of the Weimar Republic (1918-1933): Archbishop Eugenio Pacelli (1876–1958, later Pope Pius XII 1939-1958), October 1927 (left) and Archbishop Cesare Orsenigo (1873–1946), May 1930 (right).

The above photograph of Archbishop Pacelli was central to what proved a fleeting literary scandal.  In 1999, journalist John Cornwell (b 1940) published Hitler's Pope, a study of the actions of Pacelli from the decades before the coming to power of the Nazis in 1933 until the end of the Third Reich in 1945.  As a coda, the final years of the pontificate of Pius XII (1939-1958) were also examined.  Cornwell’s thesis was that in his pursuit of establishing a centralized power structure with which the rule of the Holy See could be enforced over the entire church around the world, Pacelli so enfeebled the Roman Catholic Church in Germany that the last significant opposition to absolute Nazi rule was destroyed, leaving Adolf Hitler (1889-1945; Führer (leader) and German head of government 1933-1945 & head of state 1934-1945) able to pursue his goals which included military conquest and ultimately, what proved to be the attempted genocide of the Jews of Europe.  For a historian that would be an indictment damning enough but Cornwell went further, citing documentary sources which he claimed established Pacelli’s anti-Semitism.  More controversially still, the author was critical of Pius' conduct during the war, arguing that he did little to protect the Jews and did not even loudly protest against the Holocaust.

Critical response to Hitler’s Pope was, as one might imagine, varied and understandably did focus on the most incendiary of the claims: the lifetime of anti-Semitism and the almost lineal path the book tracked from Pacelli’s diplomacy (which few deny did smooth Hitler’s path to power) to Auschwitz.  The consensus of professional historians was that the case really wasn’t made and that by 1933 Pacelli viewed Hitler as (1) a staunch anti-communist and (2) likely to provide Germany with the sort of rule Benito Mussolini (1883-1945; Duce (leader) & prime-minister of Italy 1922-1943) had delivered in Italy, then the only model of a fascist regime and one with which the Holy See had successfully negotiated a concordat (a convention or treaty) which resolved issues which had festered between the papacy and the Italian state since 1870.  Pacelli was hardly the only notable figure to misjudge Hitler and few in 1933 anticipated anything like the events which would unfold in Europe over the next dozen years.  The critics however were legion and in the years after publication Cornwell did concede that in the particular circumstances of wartime Italy the “scope” for a pope to act was limited and he needed carefully to consider what might be the repercussions for others were his words to be careless; he was at the time playing for high stakes.  Cornwell though did not retreat from his criticism of the pope’s post-war reticence to discuss the era and appeared still to regard the documents he’d quoted and the events he described as evidence of anti-Semitism.

An example of how the book enraged Pius XII’s Praetorian Guard was the brief controversy about the cover, the allegation being there had been a “constructive manipulation” of the image used on the hardback copies of the US edition, the argument being the juxtaposition of the title “Hitler’s Pope” with the photograph of him leaving the presidential palace in Berlin implied the image dated from March 1939, the month Pacelli was elected Pope.  To add to the deception, it was noted the photograph (actually from 1927) had been cropped to remove (1) one soldier of the guard obviously not in a Nazi-era uniform and (2) the details identifying an automobile as obviously from the 1920s.  Whether any reader deduced from the cropped image that the pope and Führer (the two never met) had just been scheming and plotting together isn’t known but the correct details of the photograph were printed on the back flap of the jacket, as is common in publishing.

Pius XII giving a blessing, the Vatican, 1952.  The outstretched arms became his signature gesture after his visit to South America in 1934.  Pius XI (1857–1939; pope 1922-1939), even then grooming his successor, appointed him papal legate to the International Eucharistic Congress in Buenos Aires and his itinerary included Rio de Janeiro where he saw the Redēmptōre statue (Christ the Redeemer) which had been dedicated three years earlier.

That storm in a tea cup quickly subsided and people were left to draw their own conclusions on substantive matters but it was unfortunate the sensational stuff drew attention from what was a genuinely interesting aspect explored in the book: Pacelli’s critical role in the (re-)creation of the papacy and the Roman Curia as a centralized institution with absolute authority over the whole Church.  This was something which had been evolving since Pius IX (1792–1878; pope 1846-1878) convened the First Vatican Council (Vatican I; 1869-1870) and under subsequent pontificates the process had continued but it was the publication of Pacelli’s codification of canon law in 1917 which made this administratively (and legally) possible.  Of course, any pope could at any time have ordered a codification but it was only in the late nineteenth century that modern communications made it possible for instructions issued from the Vatican to arrive within days, hours or even minutes, just about anywhere on the planet.  Previously, when a letter could take months to be delivered, a central authority simply would not function effectively.  It was the 1917 codification of canon law which realised the implications of the hierarchical theocracy which the Roman church had often appeared to be but never quite was because until the twentieth century such things were not possible and (as amended), it remains the document to which the curia cling in their battles.  Although, conscious of the mystique of their two-thousand year history, the Holy See likes people to imagine things about which they care have been unchanged for centuries, it has for example been only since the codification that the appointment of bishops is vested exclusively in the pope, that battle with the Chinese Communist Party (CCP) still in an uneasy state of truce.