
Saturday, December 27, 2025

Curious

Curious (pronounced kyoor-ee-uhs)

(1) Eager to learn or know; inquisitive; interested; inquiring.

(2) Prying; meddlesome, overly inquisitive.

(3) Arousing or exciting speculation, interest, or attention through being inexplicable or highly unusual; odd; strange.

(4) Made or prepared skilfully (archaic).

(5) Done with painstaking accuracy or attention to detail (archaic).

(6) Careful; fastidious (archaic).

(7) Marked by intricacy or subtlety (archaic).

(8) In inorganic chemistry, containing or pertaining to trivalent curium (rare).

1275–1325: From the Middle English curious, from the Old French curius (solicitous, anxious, inquisitive; odd, strange (which endures in Modern French as curieux)), from the Latin cūriōsus (careful, diligent; inquiring eagerly, meddlesome, inquisitive), the construct being cūri- (a combining form of cūra (care)) + -ōsus.  The –ōsus suffix (familiar in English as –ous) was from the Classical Latin, from -ōnt-to-s, from -o-wont-to-s, the latter form a combination of two primitive Indo-European suffixes: -went & -wont.  Related to these were –entus and the Ancient Greek -εις (-eis) and all were used to form adjectives from nouns.  In Latin, -ōsus was added to a noun to form an adjective indicating an abundance of that noun.  The English word was cognate with the Italian curioso, the Occitan curios, the Portuguese curioso and the Spanish curioso.  The original sense in the early fourteenth century appears to have been “subtle, sophisticated” but by the late 1300s this had been augmented by “eager to know, inquisitive, desirous of seeing” (often in a bad (ie “busybody”) sense) and also “wrought with or requiring care and art”, all these meanings reflecting the Latin original.  The objective sense of “exciting curiosity” was in use by at least 1715 but in booksellers' catalogues of the mid-nineteenth century, the word was a euphemism for “erotic, pornographic”, such material called curiosa, the Latin neuter plural of cūriōsus.  That was not however what was in the mind of Charles Dickens (1812–1870) when he wrote The Old Curiosity Shop (1840-1841).

The derived forms include noncurious, overcurious, supercurious, uncurious & incurious.  Both uncurious and incurious are rare and between them there is a difference in meaning and usage, but it is much weaker and less consistently observed than the distinction drawn (though not always observed) between disinterest and uninterest.  Incurious means “lacking curiosity; not inclined to inquire or wonder” and often carries a critical or evaluative tone, implying intellectual complacency or narrow-mindedness; it can be applied to individuals but seems more often used of groups.  Uncurious means “usually not curious” and tends to be descriptive rather than judgmental.  Because both are rarely used and obscure in exactly what they denote, some style guides list them as awkward and best avoided, recommending being explicit about what is meant.  Curious is an adjective, curiousness & curiosity are nouns and curiously is an adverb; the noun plural is curiosities.  The comparative is more curious (or curiouser) and the superlative most curious (or curiousest).

The proverb “curiosity killed the cat” means “one should not be curious about things that don’t concern one”.  The phrase “curiouser and curiouser” comes from Alice’s Adventures in Wonderland (1865) by the English author Lewis Carroll (pen name of Charles Lutwidge Dodgson (1832–1898)).  As a modern, idiomatic form, it’s used to describe or react to an increasingly mysterious or peculiar situation (though usually not one thought threatening).  Alice made her famous exclamation after experiencing increasingly bizarre transformations and other strange events in Wonderland; later, what was described would be thought surrealistic.  The phrase has endured and it appears often in literature and popular culture, London’s Victoria and Albert Museum even holding the Alice: Curiouser and Curiouser event.  The author’s use of “bad English” was deliberate, a device to convey the child’s sense of bewildered confusion.  In standard English, the comparative of “curious” is “more curious”, the –er suffix usually being appended only to words of one or two syllables.  The word “curiouser” thus inhabits a special niche in that although mainstream dictionaries usually list it as “informal” or “non-standard” (ie “wrong”), unlike most “mistakes”, because it’s a literary reference, it’s a “respectable” word (if used in the phrase).  In that, it’s something like “it ain’t necessarily so”.

Depiction of the mad hatter’s tea party by Sir John Tenniel (1820-1914) in an edition called The Nursery “Alice” (1890), an abridged version of Alice's Adventures in Wonderland intended for children under five (the original drawing now held by the British Museum).  The book contained 20 illustrations by Sir John who also provided the artwork for the full-length publication.  A fine craftsman, Sir John was noted also for his moustache which “out-Nietzsched” Friedrich Nietzsche (1844–1900).  Despite much later speculation, no evidence has ever emerged to suggest Lewis Carroll was under the influence of drugs when writing the “Alice” books.

Special derived adjectival uses of curious include the portmanteau word “epicurious” (curious about food, especially wishing to try new dishes and cuisines), the construct being epicu(rean) +‎ (cu)rious.  Although the notion of Epicureans (those who are followers of Epicureanism) being focused on food is overstated, that’s the way the word usually appears in popular use.  “Indy-curious” is from UK politics and refers to those interested in the possibility of independence for Scotland or Wales, without necessarily being supporters of the proposal.  Those who are “veg-curious” are interested in or contemplating a vegetarian or vegan diet.

The word “curious” became an element in the punch-lines of some “gay jokes” (a now extinct species outside the gay community) but survived in derived forms in sexology, presumably because they can be used neutrally.  The constructs include (1) “pancurious” (exhibiting a state of uncertainty about one's pansexual or panromantic status), (2) “bi-curious” (interested in having relationships with both men and women; curious about one's potential bisexuality; considering a first sexual experience with a member of the same sex (used especially of heterosexuals)), (3) “gay-curious” (curious about one's homosexuality; curious to try homosexuality), (4) “homocurious” (questioning whether one is homosexual), (5) “polycurious” (curious about or open to polyamory; potentially interested in having relationships with multiple partners) and (6) “trans-curious” (interested in one's potential transness or the experience of a sexual encounter with a trans person).  None of these forms seems to be in frequent use and some may have been created to “cover the field”, so there may be some overlap (such as between pancurious and polycurious).  That at least some describe a spectrum seems implicit in the way dictionaries list comparative and superlative forms (eg more bi-curious; most bi-curious).

The synonyms include enquiring, inquiring, exquisitive, investigative and the now rare peery, the latter a use of curious in the vein of the “meddling priest” (ie a “busybody” tending to ask questions or wishing to explore or investigate matters not of their concern).  Such a person could be labelled a quidnunc (gossip-monger, one who is curious to know everything that happens), a word (originally as quid nunc) from the early 1700s, the construct being the Latin quid (what? (neuter of the interrogative pronoun quis (who?), from the primitive Indo-European root kwo-, stem of relative and interrogative pronouns)) + nunc (now); the idea was of someone habitually asking “What's the news?” and that phrase was one with which for decades the press baron Lord Beaverbrook (Maxwell Aitken, 1879-1964) would pester his editors.  The other group of synonyms reference the word in its “funny-peculiar” sense and include queer, weird, odd, strange & bizarre.  Such an individual, concept or object can be called “a curiosity” and that’s reflected in the noun “curio” which dates from 1851 and meant originally “piece of bric-a-brac from the Far East”; it was a short form of curiosity in the mid-seventeenth century sense of “object of interest” and by the 1890s it was in use to refer to rare or interesting bric-a-brac (or just about anything otherwise unclassified) from anywhere.  The related curioso was in use by the 1650s and for some two centuries was a word describing “one who is curious” (of science, art, metaphysics and such) or “one who admires or collects curiosities”; it was from the Italian curioso (a curious soul (person)).

1971 Plymouths in Curious Yellow (code GY3): 'Cuda 340 (left) and GTX (right). 

Although buyers of Ferraris, Porsches, Lamborghinis and such still often order cars in bright colors, most of the world’s fleet has for some years been restricted mostly to white, black and variants of silver & gray; it’s a phase the world is going through and it can’t be predicted how long this visually sober era will last.  In the US in the late 1960s it was different and, like other manufacturers, Chrysler had some history in the coining of fanciful names for the “High Impact” colors dating from the psychedelic era.  Emerging from their marketing departments came Plum Crazy, In-Violet, Tor Red, Limelight, Sub Lime, Sassy Grass, Panther Pink, Moulin Rouge, Top Banana, Lemon Twist & Citron Yella.  That the most lurid colors vanished during the 1970s was not because of changing tastes but in response to environmental & public health legislation which banned the use of lead in automotive paints; without the additive, production of the bright colors was prohibitively expensive.  Advances in chemistry meant that by the twenty-first century brightness could be achieved without the addition of lead so Dodge revived psychedelia for a new generation, although Sub Lime became Sublime.

Criterion's re-issues of I Am Curious (Yellow) and I Am Curious (Blue) with edited (colorized) artwork.  The original posters were monochrome.

Two years into the first administration of Richard Nixon (1913-1994; US VPOTUS 1953-1961 & POTUS 1969-1974), and shortly before his declaration of a “War on Drugs”, it was obvious the psychedelic era was over but bright colors were still popular so some were carried over, although the advertising became noticeably “less druggy”.  Although it may be an industry myth, the story told is that Plum Crazy & In-Violet (lurid shades of purple) were in 1969 late additions because the killjoy board refused to sign-off on Statutory Grape but, despite that, Plymouth for 1971 decided to change the name of their vibrant hue of yellow from “Citron Yella” to “Curious Yellow” (code GY3), the name apparently borrowed from the controversial 1967 Swedish erotic film I Am Curious (Yellow), directed by Vilgot Sjöman (1924-2006); it was followed in 1968 by I Am Curious (Blue), the two intended originally as a single 3½ hour epic.  At the time, the films were advertised as “I Am Curious: A Film in Yellow” and “I Am Curious: A Film in Blue”, the mention of the colors an allusion to the Swedish flag.

Lindsay Lohan does her bit to revive Chrysler’s 1971 Curious Yellow, the New York Post’s Alexa magazine, 5 December 2024.

A footnote to the earlier film is an uncredited appearance by Olof Palme (1927–1986; Prime Minister of Sweden 1969-1976 & 1982-1986) whose assassination remains unsolved.  The films are very much period pieces of a time when on-screen depictions of sex were for the first time in some places liberated from most censorship and while this produced an entire genre of blends of eroticism and pornography, some directors couldn’t resist interpolating political commentary (of the left and right); at the time, just about everything (sex included) could be sociological.  Critics and audiences mostly were unconvinced but films like the “Curious” brace and Michelangelo Antonioni’s (1912–2007) Zabriskie Point (1970) later gained a cult following.  Problems encountered during production resulted in the release of Zabriskie Point being delayed until 1970 but in retrospect this was a blessing because if anyone doubted the spirit of the 1960s had died, the film was there to remove all doubt.  A commercial failure, it remains visually a feast for students of pre-digital cinematography and some maintain the best way to enjoy subsequent viewings is to mute the sound and play the soundtrack on repeat; unsynchronized with the scenes, it’s an experience rewarding in its own way.

Saturday, December 20, 2025

Enthrone

Enthrone (pronounced en-throhn)

(1) To put on the throne in a formal installation ceremony (sometimes called an enthronement) which variously could be synonymous with (or simultaneously performed with) a coronation or other ceremonies of investiture.

(2) Figuratively in this context, to help a candidate to the succession of a monarchy or, by extension, to the leadership of any other major organisation (ie the role of “kingmakers”, literal and otherwise).

(3) To invest with sovereign or episcopal authority (ie a legal instrument separate from any ceremony).

(4) To honour or exalt (now rare except in literary or poetic use).

(5) Figuratively, to assign authority to or vest authority in.

Circa 1600: The construct was en- + throne and the original meaning was “to place on a throne, exalt to the seat of royalty”.  For this purpose it replaced the late fourteenth century enthronize, from the thirteenth century Old French introniser, from the Late Latin inthronizare, from the Greek enthronizein.  In the late fourteenth century the verb throne (directly from the noun) was used in the same sense.  Throne (the chair or seat occupied by a sovereign, bishop or other exalted personage on ceremonial occasions) dates from the late twelfth century and was from the Middle English trone, from the Old French trone, from the Latin thronus, from the Ancient Greek θρόνος (thrónos) (chair, high-set seat, throne).  It replaced the earlier Middle English seld (seat, throne).  In facetious use, as early as the 1920s, throne could mean “a toilet” (used usually in the phrase “on the throne”) and in theology had the special use (in the plural and capitalized) describing the third (a member of an order of angels ranked above dominions and below cherubim) of the nine orders into which the angels traditionally were divided in medieval angelology.  The en- prefix was from the Middle English en- (en-, in-), from the Old French en- (also an-), from the Latin in- (in, into).  It was also an alteration of in-, from the Middle English in-, from the Old English in- (in, into), from the Proto-Germanic in (in).  Both the Latin & Germanic forms were from the primitive Indo-European en (in, into).  The intensive use of the Old French en- & an- was due to confluence with the Frankish intensive prefix an- which was related to the Old English intensive prefix on-.  It formed a transitive verb whose meaning is to make the attached adjective (1) in, into, (2) on, onto or (3) covered.  It was used also to denote “caused” or as an intensifier.  The prefix em- was (and still is) used before certain consonants, notably the labials b and p.  Enthrone, dethrone, enthronest & enthronize are verbs, enthronement, enthronization & enthroner are nouns, enthroning is a noun & verb, enthroned is a verb & adjective; the noun plural is enthronements.  The adjective enthronable is non-standard.  The derived forms include the verbs unenthrone, reenthrone & disenthrone and although there have been many enthroners, the form enthronee has never existed.

Alhaji Ibrahim Wogorie (b 1967) being enskinned as North Sisala community chief, Ghana, July 2023.

In colonial-era West Africa the coined forms were “enskin” (thus enskinment, enskinning, enskinned) and “enstool” (thus enstoolment, enstooling, enstooled).  These words were used to refer to the ceremonies in which a tribal chief was installed in his role, the meanings thus essentially the same as that enjoyed in the West by “enthrone”.  The constructs reflected a mix of indigenous political culture and English morphological adaptation during the colonial period, the elements explained by (1) the animal skins (the distinctive cheetah often mentioned in the reports of contemporary anthropologists although in some Islamic and Sahelian-influenced chieftaincies (including the Dagomba, Mamprusi and Hausa emirates), a cow or lion skin often was the symbol of authority) which often surrounded the new chief and (2) the tradition in Africa of a chief sitting on a stool.  Sometimes, the unfortunate animal’s skin would be laid over the stool (and almost always, one seems to have been laid at the chief’s feet) but in some traditions (notably in northern Ghana and parts of Nigeria) it was a mark of honor for the chief to sit on a skin spread on the ground.

Dr Mahamudu Bawumia (b 1963), enstooled as Nana Ntentankesehene (Chief of the Internet/Web), Ghana, August 2024.  Note the cheetah skin used to trim the chair.

The stool was the central symbol of chieftaincy and kingship among Akan-speaking peoples (still in present-day Ghana where “to enskin” is used generally to mean “to install as a leader of a group” and the constitution (1992) explicitly protects the institution of chieftaincy, judicial decisions routinely using “enstool” or “enskin” (depending on region)).  In Akan political culture, the most famous use was the Sika Dwa Kofi (the Golden Stool) of the Asante and it represented the embodiment of the polity and ancestors, not merely a seat (used rather like the synecdoches “the Pentagon” (for the US Department of Defense (which appears now to be headed by a cabinet officer who simultaneously is both Secretary of Defense & Secretary of War)) or “Downing Street” (for the UK prime-minister or the government generally)).  Thus, to be “enstooled” is ritually to be placed into office as chief, inheriting the authority vested in the stool.  Enskin & enstool (both of which seem first to have appeared in the records of the Colonial Office in the 1880s and thus were products of the consolidation of British indirect rule in West Africa, rather than being survivals from earlier missionary English which also coined its own terms) were examples of semantic calquing (the English vocabulary reshaped to encode indigenous concepts) and, as it was under the Raj in India, it was administrative pragmatism, colonial officials needing precise (and standardized) terms that distinguished between different systems of authority.  In truth, they were also often part of classic colonial “fixes” in which the British would take existing ceremonies and add layers of ritual to afforce the idea of a chief as “their ruler” and within a couple of generations, sometimes the local population would talk of the newly elaborate ceremony as something dating back centuries; the “fix” was a form of constructed double-legitimization.

A classic colonial fix was the Bose Levu Vakaturaga (Great Council of Chiefs) in Fiji which the British administrators created in 1878.  While it's true that prior to European contact, there had been meetings between turaga (tribal chiefs) to settle disputes and for other purposes, all the evidence suggests they were ad-hoc gatherings with little of the formality, pomp and circumstance the British introduced.  Still, it was a successful institution which the chiefs embraced, apparently with some enthusiasm because the cloaks and other accoutrements they adopted for the occasion became increasingly elaborate and it was a generally harmonious form of indigenous governance which enabled the British to conduct matters of administration and policy-making almost exclusively through the chiefs.  The council survived even after Fiji gained independence from Britain in 1970 until it was in 2012 abolished by the military government of Commodore Frank Bainimarama (b 1954; prime minister of Fiji 2007-2022), as part of a reform programme said to be an attempt to reduce ethnic divisions and promote a unified national identity.  The commodore's political future would be more assured had he learned lessons from the Raj.

There was of course an element of racial hierarchy in all this: “enskin” & “enstool” denoted a “tribal chief” under British rule whereas “enthrone” might have been thought to imply some form of sovereignty because that was the linkage in Europe, and that would never do.  What the colonial authorities wanted was to maintain the idea of “the stool” as a corporate symbol, the office the repository of the authority, not the individual.  The danger with using a term like “enthronement” was the population might be infected by the European notion of monarchy as a hereditary kingship with personal sovereignty; what the Europeans wanted was “a stool” and they would decide who would be enstooled, destooled or restooled.

Prince Mangosuthu Buthelezi, Moses Mabhida Stadium, Durban, South Africa, October 2022.

English words and their connotations did continue to matter in the post-colonial world because although the colonizers might have departed, often the legacy of language remained, sometimes as an “official” language of government and administration.  In the 1990s, the office of South Africa’s Prince Mangosuthu Buthelezi (1928–2023) sent a series of letters to the world’s media outlets advising he should be styled as “Prince” and not “Chief”, on the basis of being the grandson of one Zulu king and the nephew of another.  The Zulus were once described as a “tribe” and while that reflected the use in ethnography, the appeal in the West was really that it represented a rung on the racist hierarchy of civilization, the preferred model being: white people have nations or states, Africans cluster in tribes or clans.  The colonial administrators recognized these groups had leaders and typically they used the style “chief” (from the Middle English cheef & chef, from the Old French chef & chief (leader), from the Vulgar Latin capus, from the Classical Latin caput (head), from the Proto-Italic kaput, from the primitive Indo-European káput).  As the colonial records make clear, there were “good” chiefs and “troublesome” chiefs, thus the need sometimes to arrange a replacement enstooling.

Unlike in the West where styles of address and orders of precedence were codified (indeed, somewhat fetishized), the traditions in Africa seem to have been more fluid and Mangosuthu Buthelezi didn’t rely on statute or even documented convention when requesting the change.  Instead, he explained that “prince”, reflecting his Zulu royal lineage, not only was appropriate (he may have cast an envious eye at the many Nigerian princes) but was also commonly used as his style by South African media, some organs of government and certainly his own Zulu-based political party, the IFP (Inkatha Freedom Party; IQembu leNkatha yeNkululeko).  He had in 1953 assumed the Inkosi (chieftainship) of the Buthelezi clan, something officially recognized four years later by Pretoria although not until the early 1980s (when it was thought he might be useful as a wedge to drive into the ANC (African National Congress)) does the apartheid-era government seem to have started referring to him as “prince”.  Despite that cynical semi-concession, there was never a formal re-designation.

Enthroned & installed: Lindsay Lohan in acrylic & rhinestone tiara during “prom queen scene” in Mean Girls (2004).

In the matter of prom queens and such, it’s correct to say there has been “an enthronement” because even in the absence of a physical throne (in the sense of “a chair”), the accession is marked by the announcement and the placing of the crown or tiara.  This differs from something like the “enthroning” of a king or queen in the UK because, constitutionally, there is no interregnum, the new sovereign assuming the title as the old took their last breath, “enthronement” being a term applied casually to the coronation.  Since the early twentieth century, the palace and government have contrived to make of the coronation an elaborate “made for television” ceremony although it has constitutional significance beyond the rituals related to the sovereign’s role as Supreme Governor of the Church of England.

Dame Sarah Mullally in the regalia of Bishop of London; in January 2026, she will take office as Archbishop of Canterbury, the formal installation in March.  No longer one of the world's more desirable jobs (essentially because it can't be done), all wish her the best of British luck.

In October 2025, the matter of enthronement (or, more correctly, non-enthronement) in the Church of England made a brief splash in some of the less explored corners of social media after it was announced the ceremony marking the accession of the next Archbishop of Canterbury would be conducted in Canterbury Cathedral in March 2026.  The announcement was unexceptional in that it was expected and for centuries Archbishops of Canterbury have come and gone (although the last one was declared gone rather sooner than expected) but what attracted some comment was the new appointee was to be “installed” rather than the once traditional “enthroned”.  The conclusion some drew was this apparent relegation was related to the next archbishop being Dame Sarah Mullally (née Bowser; b 1962) the first woman to hold the once desirable job, the previous 105 prelates having been men, the first, Saint Augustine of Canterbury in 597.

However, there is in the church no substantive legal or theological significance in the use of “installed” rather than “enthroned” and the choice reflects modern ecclesiastical practice rather than having any doctrinal or canonical effect.  A person becomes Archbishop of Canterbury through a sequence of juridical acts and these constitute the decisive legal instruments; ceremonial rites have a symbolic value but nothing more, the power of the office vested from the point at which the legal mechanisms have correctly been executed (in that, things align with the procedures used for the nation’s monarchs).  So the difference is one of tone rather than substance and the “modern” church has for decades sought to distance itself from perceptions it may harbor quasi-regal aspirations or the perpetuation of clerical grandeur and separateness; at least from Lambeth Palace, the preferred model long has been pastoral: most Church of England bishops have for some time been “installed” in their cathedrals (despite “enthronement” surviving in some press reports, a product likely either of nostalgia or “cut & paste journalism”).  That said, some Anglican provinces outside England still “enthrone” (apparently on the basis “it’s always been done that way” rather than the making of a theological or secular point).

Lambeth Palace, the Archbishop of Canterbury's official London residence.

Interestingly, Archbishops of York (“the church in the north”) have continued to be enthroned while those at Canterbury became installations.  Under canon law, the wording makes literally no difference and historians have concluded the older form is clung to for no reason other than “product differentiation”, York Minster often emphasizing their continuity with medieval ceremonial forms; it’s thus a mere cultural artefact, the two ceremonies performing the same liturgical action: seating the archbishop in the cathedra (the chair (throne) of the archbishop).  Because it’s the Archbishop of Canterbury and not York who sits as the “spiritual head of the worldwide Anglican Communion”, in York there’s probably not the same sensitivity to criticism of continuing with “Romish ways” with the whiff of “popery”.

In an indication of how little the wording matters, it’s not clear who was the last Archbishop of Canterbury who could be said to have been “enthroned” because there was never any differentiation of form in the ceremonies and the documents suggest the terms were used casually and even interchangeably.  What can be said is that Geoffrey Fisher (1887–1972; AoC-99: 1945-1961) was installed at a ceremony widely described (in the official programme, ecclesiastical commentaries and other church & secular publications) as an “enthronement” and that was the term used in the government Gazette; that’s as official an endorsement of the term as seems possible because, being an established church, bishops are appointed by the Crown on the advice of the prime minister although the procedure has at least since 2007 been a “legal fiction” because the church’s CNC (Crown Nominations Commission) sends the names to the prime minister who acts as a “postbox”, forwarding them to the palace for the issuing of letters patent confirming the appointment.  When Michael Ramsey (1904–1988; AoC-100: 1961-1974) was appointed, although the term “enthrone” did appear in press reports, the church’s documents almost wholly seem to have used “install” and since then, in Canterbury, it’s been installations all the way.

Pope Pius XII in triple tiara at his coronation, The Vatican, March 1939.

So, by the early 1960s the church was responding, if cautiously, to the growing anti-monarchical sentiment in post-war ecclesiology although this does seem to have been a sentiment of greater moment to intellectuals and theologians than parishioners.  About these matters there was however a kind of ecumenical sensitivity emerging and the conciliar theology later was crystallised (if not exactly codified) in the papers of the Second Vatican Council (Vatican II, 1962-1965, published 1970).  The comparison with the practice in Rome is interesting because there are more similarities than differences although that is obscured by words like “enthronement” and “coronation” being seemingly embedded in the popular (and journalistic) imagination.  That’s perhaps understandable because for two millennia as many as 275 popes (officially the count is 267 but it’s not certain how many there have been because there have been “anti-popes” and allegedly even one woman (although that’s now largely discounted)) have sat “on the throne of Saint Peter” (retrospectively the first pope) so the tradition is long.  In Roman Catholic canon law, “enthronement” is not a juridical term; the universal term is capio sedem (taking possession of the see (ie “installation”)) and, as in England, an appointment is formalized once the legal instruments are complete, the subsequent ceremony, while an important part of the institution’s mystique, existing for the same reason as it does for the Church of England or the House of Windsor: it’s the circuses part of panem et circenses (bread and circuses).  Unlike popes who once had coronations, archbishops of Canterbury never did because they made no claim to temporal sovereignty.

Pope Paul VI in triple tiara at his coronation, The Vatican, June 1963.  It was the last papal coronation.

So, technically, modern popes are “installed as Bishop of Rome” and in recent decades the Holy See has adjusted the use of accoutrements to dispel any implication of an “enthronement”; the last papal coronation (at which the pope was crowned with the triple tiara) was that of Paul VI (1897-1978; pope 1963-1978) but in “an act of humility” he removed it, placing it on the altar where (figuratively) it has since sat.  Actually, Paul VI setting aside the triple tiara as a symbolic renunciation of temporal and monarchical authority was a bit overdue because the Papal States had been lost to the Holy See with the unification of Italy in 1870 though the Church refused to acknowledge that reality; in protest, no pope for decades set foot outside the Vatican.  However, in the form of the Lateran Treaty (1929), the Holy See entered into a concordat with the Italian state whereby (1) the Vatican was recognized as a sovereign state and (2) the church was recognized as Italy’s state religion, in exchange for which the territorial and political reality was recognized.  Despite that, until 1963 the triple tiara (one tier of which was said to symbolize the pope’s temporal authority over the papal states) appeared in the coronations of Pius XII (1876-1958; pope 1939-1958), John XXIII (1881-1963; pope 1958-1963) and Paul VI, who didn’t formally abolish the rite of papal coronation (removing it from the Ordo Rituum pro Ministerii Petrini Initio Romae Episcopi (Order of Rites for the Beginning of the Petrine Ministry of the Bishop of Rome, the liturgical book detailing the ceremonies for a pope's installation)) until 1975.

The Chair of St Augustine.  In church circles, archbishops of Canterbury are sometimes said to "occupy the Chair of St Augustine".

The Chair of St Augustine sits in Canterbury Cathedral but technically, an AoC is “twice installed”: once on the Diocesan throne as the Bishop of the see of Canterbury and also on the Chair of St Augustine as Primate of All England (the nation's first bishop) and spiritual leader of the worldwide Anglican Communion. So, there’s nothing unusual in Sarah Mullally being “installed” rather than “enthroned” as would have been the universal terminology between the reformation and the early twentieth century.  Linguistically, legally and theologically, the choice of words is a non-event and anyone who wishes to describe Dame Sarah as “enthroned” may do so without fear of condemnation, excommunication or a burning at the stake.  What is most likely is that of those few who notice, fewer still are likely to care.

Wednesday, December 17, 2025

Inkhorn

Inkhorn (pronounced ingk-hawrn)

A small container of horn or other material (the early version would literally have been hollowed-out horns from animals), formerly used to hold writing ink.

1350-1400: From the Middle English ynkhorn & inkehorn (small portable vessel, originally made of horn, used to hold ink), the construct being ink +‎ horn.  It displaced the Old English blæchorn, which had the same literal meaning but used the native term for “ink”.  It was used attributively from the 1540s as an adjective for things (especially vocabulary) supposed to be beloved by scribblers, pedants, bookworms and the “excessively educated”.  Inkhorn, inkhornery & inkhornism are nouns, inkhornish & inkhornesque are adjectives and inkhornize is a verb; the noun plural is inkhorns.

Ink was from the Middle English ynke, from the Old French enque, from the Latin encaustum (purple ink used by Roman emperors to sign documents), from the Ancient Greek ἔγκαυστον (énkauston) (burned-in), the construct being ἐν (en) (in) + καίω (kaíō) (burn).  In this sense, the word displaced the native Old English blæc (ink, literally “black”, because while not all inks were black, most tended to be).  Ink came ultimately from a Greek form meaning “branding iron”, one of the devices which should make us grateful for modern medicine: in addition to using the kauterion to cauterize (seal wounds with heat), essentially the same process was used to seal fast the colors used in paintings.  Then, the standard method was to use wax colors fixed with heat (encauston (burned in)) and in Latin this became encaustum which came to be used to describe the purple ink with which Roman emperors would sign official documents.  In the Old French, encaustum became enque which English picked up as enke & inke which, via ynk & ynke, became the modern “ink”.  Horn was from the Middle English horn & horne, from the Old English horn, from the Proto-West Germanic horn, from the Proto-Germanic hurną; it was related to the West Frisian hoarn, the Dutch hoorn, the Low German Hoorn, horn, the German, Danish & Swedish horn and the Gothic haurn.  It was ultimately from the primitive Indo-European ḱr̥h₂-nó-m, from ḱerh₂- (head, horn) and should be compared with the Breton kern (horn), the Latin cornū, the Ancient Greek κέρας (kéras), the Proto-Slavic sьrna, the Old Church Slavonic сьрна (sĭrna) (roedeer), the Hittite surna (horn), the Persian سر (sar) and the Sanskrit शृङ्ग (śṛṅga) (horn).

Inkhorn terms & inkhorn words

The phrase “inkhorn term” dates from the 1530s and was used to criticize the use of language in a way obscure or difficult for most to understand, usually by an affected or ostentatiously erudite borrowing from another language, especially Latin or Greek.  The companion term “inkhorn word” was used of such individual words and in modern linguistics the whole field is covered by such phrases as “lexiphanic term”, “pedantic term” & “scholarly term”, all presumably necessary now inkhorns are rarely seen.  Etymologists are divided on the original idea behind the meaning of “inkhorn term” & “inkhorn word”.  One faction holds that because the offending words tended to be long or at least multi-syllabic, a scribe would need more than once to dip their nib into the horn in order completely to write things down, while the alternative view is that because the inkhorn users were, by definition, literate, they were viewed sometimes with scepticism, one suspicion being that they used obscure or foreign words to confuse or deceive the less educated.  The derived forms are among the more delightful in English and include inkhornism, inkhornish, inkhornery, inkhornesque & inkhornize.  The companion word is sesquipedalianism (a marginal propensity to use humongous words).

Lindsay Lohan and her lawyer in court, Los Angeles, December 2011.

Inkhorn words were in the fourteenth & fifteenth centuries known also as “gallipot words”, derived from the use of such words on apothecaries' jars, the construct being galli(s) + pot.  Gallis was from the Latin gallus (rooster or cock (male chicken)), from the Proto-Italic galsos, an enlargement of gl̥s-o-, zero-grade of the primitive Indo-European gols-o-, from gelh- (to call); it can be compared with the Proto-Balto-Slavic galsas (voice), the Proto-Germanic kalzōną (to call), the Albanian gjuhë (tongue; language), and (although this is contested) the Welsh galw (call).  Appearing usually in the plural, a gallipot word was something long, hard to pronounce, obscure or otherwise mysterious, the implication being it was being deployed gratuitously to convey the impression of being learned.  The companion insult was “you talk like an apothecary” and “apothecary's Latin” was a version of the tongue spoken badly or brutishly (synonymous with “bog Latin” or “dog Latin” but different from “schoolboy Latin” & “barracks Latin”, the latter two being humorous constructions, the creators proud of their deliberate errors).  The curious route which led to “gallipot” referencing big words was via the rooster being the symbol used by apothecaries in medieval and Renaissance Europe, appearing on their shop signs, jars & pots.  That was adopted by the profession because the rooster symbolized vigilance, crowing (hopefully) at dawn, signaling the beginning of the day and thus the need for attentiveness and care.  Apothecaries, responsible for preparing and dispensing medicinal remedies, were expected to be vigilant and attentive to detail in their work to ensure the health and well-being of the patients who relied on their skill to provide the potions to “get them up every morning” in sound health.  Not all historians are impressed by the tale and say a more convincing link is that in Greek mythology, the rooster was sacred to Asclepius (Aesculapius in the Latin), the god of medicine, and was often depicted in association with him.  In some tales, Asclepius had what was, even by the standards of the myths of Antiquity, a difficult birth and troubled childhood.

The quest for the use of “plain English” is not new.  The English diplomat and judge Thomas Wilson (1524–1581) wrote The Arte of Rhetorique (1553), remembered as “the first complete work on logic and rhetoric in English” and in it he observed the first lesson to be learned was never to affect “any straunge ynkhorne termes, but to speak as is commonly received”.  Writing a decade earlier, the English bishop John Bale (1495–1563) had already lent an ecclesiastical imprimatur to the task, condemning one needlessly elaborate text with: “Soche are your Ynkehorne termes” and that may be the first appearance of the term in writing.  A religious reformer of some note, he was nicknamed “bilious Bale”, a moniker which politicians must since have been tempted to apply to many reverend & right-reverend gentlemen.  A half millennium on, the goal of persuading all to use “plain English” is not yet achieved and a fine practitioner of the inkhorn art was Dr Kevin Rudd (b 1957; Australian prime-minister 2007-2010 & 2013): from no one else would one be likely to hear the phrase “detailed programmatic specificity” and to really impress he made sure he spoke it to an audience largely of those for whom English was not a first language.

An inkhorn attributed to Qes Felege, a scribe and craftsman.

Animal horns were for millennia re-purposed for all sorts of uses including as drinking vessels, gunpowder stores & loaders, musical instruments and military decoration and in that last role they’ve evolved into a political fashion statement, Jacob Chansley (b 1988; the “QAnon Shaman”) remembered for the horned headdress worn during the attack on the United States Capitol building in Washington DC on 6 January 2021.  Inkhorns tended variously to be made from the horns of sheep or oxen, storing the ink when not in use and ideal as a receptacle into which the nib of a quill or pen could be dipped.  Given the impurities likely then to exist, a small stick or nail was left in the horn to stir away any surface film which might disrupt a nib’s ability to take in free-flowing ink; most inks were not pre-packaged products but mixed by the user from a small solid “cake” of the base substance in the desired color, put into the horn with a measure of starchy water and left overnight to dissolve.  The sharp point of a horn allowed it to be driven into the ground because many scribes were not desk-bound and actually travelled from place to place to do their writing, quill and inkhorn their tools of trade.

A mid-Victorian (1837-1901) silver plated three-vat inkwell by George Richards Elkington (1801–1865) of Birmingham, England.  The cast frame is of a rounded rectangular form with outset corners, leaf and cabochons, leaf scroll handle and conforming pen rest.  The dealer offering this piece described the vats as being of "Vaseline" glass with fruit cast lids and in the Elkington factory archives, this is registered: "8 Victoria Chap 17. No. 899, 1 November 1841".

“Vaseline glass” is a term describing certain glasses in a transparent yellow to yellow-green color attained by virtue of a uranium content.  It's an often used descriptor in the antique business because some find the word “uranium” off-putting although inherently the substance is safe, the only danger coming from being scratched by a broken shard.  Also, some of the most vivid shades of green are achieved by the addition of a colorant (usually iron) and these the cognoscenti insist should be styled “Depression Glass”, a term which has little appeal to antique dealers.  The term “Vaseline glass” wasn’t used prior to the 1950s (after the detonation of the first A-bombs in 1945, there emerged an aversion to being close to uranium) and what's used in this inkwell may actually be custard glass or Burmese glass which is opaque whereas Vaseline glass is transparent.  Canary glass was first used in the 1840s as the trade name for what is now called Vaseline glass, the latter a term which would have been unknown to George Richards Elkington.

English silver plate horn and dolphin inkwell (circa 1909) with bell, double inkwell on wood base with plaque dated 1909.  This is an inkwell made using horns; it is not an inkhorn.

So inkhorns were for those on the move while the vessels which sat on desks were called “ink wells” or “ink pots” and these could range from simple “pots” to elaborate constructions in silver or gold.  There are many ink wells which use horns as part of their construction but they are not inkhorns, the dead animal parts there serving just as decorative elements of the structure.

Dr Rudolf Steiner’s biodynamic cow horn fertilizer.

Horns are also a part of the “biodynamic” approach to agriculture founded by the Austrian occultist & mystic Rudolf Steiner (1861-1925), an interesting figure regarded variously as a “visionary”, a “nutcase” and much between.  The technique involves filling cow horns with cow manure, the horns buried during the six coldest months so the mixture will ferment; upon being dug up, it will be a sort of humus which has lost the foul smell of the manure and taken on a scent of undergrowth.  Diluted with water and sprayed over the ground, it may then be used to increase the yield generated from the soil.  Dr Steiner believed the forces penetrating the digestive organ of cows through the horn influence the composition of their manure and that when the manure is returned to the environment, it is enriched with spiritual forces which make the soil more fertile.  As he explained: “The cow has horns to send within itself the etheric-astral productive forces, which, by pressing inward, have the purpose of penetrating directly into the digestive organ. It is precisely through the radiation from horns and hooves that a lot of work develops within the digestive organ itself.  So in the horns, we have something well-adapted, by its nature, to radiate the vital and astral properties in the inner life.”  Now we know.

Saturday, December 13, 2025

Narratology

Narratology (pronounced nar-uh-tol-uh-jee)

The study of narrative & narrative structure and the ways these affect human perception (with some mission creep over the years).

1967: The construct was narrate +‎ -ology, an Anglicization of the French narratologie, coined by the Bulgarian-French historian, philosopher & structuralist literary critic Tzvetan Todorov (1939–2017); it first appeared in his book Grammaire du Décaméron (1967), a structural analysis of The Decameron (1348-1353) by the Italian writer Giovanni Boccaccio (1313–1375).  Although once thought an arcane appendage to literature and a mere academic abstraction, structuralism and narratology in the 1970s and 1980s became a very popular (and controversial) field and while postmodernism’s historic movement may have passed, the tools are an important part of the “learning” process used by generative AI (artificial intelligence) to produce meaning from LLMs (large language models).

Title page from a 1620 printing of Decameron.

Boccaccio’s Decameron (literally “ten days”) was a collection of short stories, structured into a hundred tales of seven young women and three young men who had secluded themselves in a villa outside Florence, seeking to avoid the Black Death pandemic (1346-1353) then sweeping Europe.  Although not too much should be made of this comparison, the work in some aspects is not dissimilar to reality television, being a mash-up of erotic scenes, set-piece jokes, suspense and unrequited love.  Todorov’s Grammaire du Décaméron was a literary analysis of the work but “grammaire” must be understood as meaning “grammar” in the sense of structural or narratological principles rather than as it’s used in its “everyday” sense.  Historians and literary scholars have for centuries regarded Decameron as a valuable document because, written in the Florentine vernacular of the era, although fictional, it’s a kind of “snapshot” of life in what was one of Europe’s many troubled times.  It was Boccaccio who dubbed Dante’s (Dante Alighieri (circa 1265–1321)) Divina Commedia (Divine Comedy (circa 1310-1321)) “divine” (in the sense of “very good” rather than “holy”).

Narrate (to relate a story or series of events (historically in speech or writing)) may for years (or even decades) have been in oral use in English before the first known use in print in 1656, etymologists noting that until the nineteenth century it was stigmatized as “Scottish” (long a slur among the more fastidious) although it’s thought it was derived from the “respectable” narration.  Narrative ((1) a story or account of events or (2) the art, process or technique of telling the story) was in use by the 1440s and was from the Middle French noun & adjective narrative, from the Late Latin narrātīvus (narration (noun) & suitable for narration (adjective)), the construct being narrāt(us) (related, told), past participle of narrāre (to relate, tell, say) + -īvus (the adjectival suffix).  Again, like “narrate”, narrative was once used exclusively of speech or writing but in recent decades the terms have been more widely applied and not restricted to describing the efforts of humans.

Since the nineteenth century, “-ologies” have proliferated.

The suffix -ology was formed from -o- (as an interconsonantal vowel) +‎ -logy.  The origin in English of the -logy suffix lies with loanwords from the Ancient Greek, usually via Latin and French, where the suffix (-λογία) is an integral part of the word loaned (eg astrology from astrologia) since the sixteenth century.  French picked up -logie from the Latin -logia, from the Ancient Greek -λογία (-logía).  Within Greek, the suffix is an -ία (-ía) abstract from λόγος (lógos) (account, explanation, narrative), and that a verbal noun from λέγω (légō) (I say, speak, converse, tell a story).  In English the suffix became extraordinarily productive, used notably to form names of sciences or disciplines of study, analogous to the names traditionally borrowed from the Latin (eg astrology from astrologia; geology from geologia) and by the late eighteenth century, the practice (despite the disapproval of the pedants) extended to terms with no connection to Greek or Latin such as those building on French or German bases (eg insectology (1766) after the French insectologie; terminology (1801) after the German Terminologie).  Within a few decades of the intrusion of modern languages, combinations emerged using English terms (eg undergroundology (1820); hatology (1837)).  In this evolution, the development may be thought similar to the latter-day proliferation of “-isms” (fascism; feminism et al).

A narrative is a story and it can run to thousands of pages or appear in a few words on a restaurant menu describing their fish & chips: “Ethically sourced, line-caught Atlantic cod, liberated from the frigid depths, encased in a whisper-light, effervescent golden shroud of our signature micro-foamed artisanal lager batter and served with hand-sliced, elongated potato batons fried to a crisp perfection in sustainably produced vegetable oil.”  In the age of every customer being able to post from their phone a rating and review of a restaurant, wisely, some institutions include a footnote along the lines: “These narratives are a guide and because natural products vary greatly, there will be variation.”  That’s an aspect of narratology, a process which is not the reading and interpretation of individual texts but an attempt to study the nature of “story” itself, as a concept and as a cultural practice or construct.

Crooked Hillary Clinton's book tour (2017).

Narratologists know that what to a narrator can be a narrative, a narratee will receive as spin.  In What Happened (2017), a work of a few dozen pages somehow padded out to a two-inch thick wad of over 500 using the “how to write an Amazon best-seller” template, crooked Hillary Clinton (b 1947; US secretary of state 2009-2013) explained who was to blame for her loss in the 2016 US presidential election (spoiler alert: it was everybody except her).

Presumably not comparing what they’re doing with making “fish & chips” sound like something expensive, politicians and their operatives will often describe something they offer as a “narrative” although were much the same stuff to come from their opponents it might be dismissed as “spin”.  A political narrative functions as a cognitive schema intended to simplify complexity, motivate support and legitimize particular courses of action.  The concept has a long history but in recent decades the emphasis has been on “simplicity”, something illustrated by comparing a narrative like The Federalist Papers (1787-1788; a collection of several dozen essays advocating the ratification of the Constitution of the United States) with how things are now done (mostly fleshed-out, three-word slogans endlessly repeated).  That descent doesn’t mean both are not narratives in that both are crafted interpretive frames rather than objective descriptions, although the extent of the deception obviously has tended to change.  Political spin can also be a narrative and should be thought a parallel stream rather than a tributary; variations on a theme as it were, although the purpose may differ (a narrative is a storyline intended to set and define an agenda whereas spin is a “damage control” story designed to re-shape perceptions).  Given that, a narrative can be thought of as “macro-management” and spin as “micro-management”, both providing fine case-studies for narratologists.

Narratology is a noun; the noun plural is narratologies.  The derived forms are the noun antenarratology (the study of antenarratives and their interplay with narratives and stories), the noun antenarrative (the process by which a retrospective narrative is linked to a living story (the word unrelated to the noun antinarrative (a narrative, as of a play or novel, that deliberately avoids the typical conventions of the narrative, such as a coherent plot and resolution))), the noun econarratology (an approach to literary criticism combining aspects of ecocriticism (the interdisciplinary study of literature and ecology) and narratology), the noun narratologist (one who (1) studies or (2) practices narratology), the adjective narratological (of or pertaining to narratology) and the adverb narratologically (in terms of narratology).  Remarkably (given the literary theory industry), the adjective narratologistic seems never to have appeared; it can be only a matter of time.

Tzvetan Todorov on the rooftop of Casa Milà (La Pedrera), Barcelona, Spain, November 2014.

Although not a lineal descendent, what Todorov did in Grammaire du Décaméron was in the tradition of Aristotle’s (384-322 BC) work, especially Περὶ ποιητικῆς (Peri poietikês; De Poetica in the Latin and traditionally rendered in English as Poetics).  Poetics is notable as the earliest known study of the structure of Greek drama and remains the oldest known text written exclusively in the form of what now would be called literary theory.  To a modern audience the word “poetics” can mislead because the author’s focus was ποιητική (literally “the poetic art”, from ποιητής (poet, author, writer)) and his scope encompassed verse drama (comedy, tragedy, and the satyr play), lyric poetry, and the epic.  For centuries, Poetics loomed over the Western understanding of Greek theatre; it was revered by scholars of the late Medieval period and especially the Renaissance and its influence endured.  As far as is known, the Greeks were the first of the tragedians and it’s through the surviving texts of Aristotle that later understandings were filtered, but all of his conclusions were based only on the tragedies and such was his historic and intellectual authority that for centuries those theories came to be misapplied and misused, either by mapping them on to all forms of tragedy or using them as exclusionary, dismissing from the canon those works which couldn’t be made to fit his descriptions.  However, as well as being an invaluable historic text explaining how Greek theatre handled mimesis (imitation of life, fiction, allegory etc), Poetics genuinely can be read as proto-critical theory and in it lies a framework for structuralism.

Paintings of Claude Lévi-Strauss: Portrait de Claude Lévi-Strauss, 1991 (1991), oil on panel by Bengt Lindström (1925-2008) (left) and Claude Lévi-Strauss (undated), oil, by Cal Lekie (b 1999) (right).

Narratology as a distinct fork of structuralism does pre-date Todorov’s use of the word in 1967, the seminal work in setting the parameters of the discipline being by the Russian folklorist & literary historian of the formalist school Vladimir Propp (1895-1970) who doubtlessly never anticipated “formalism” would come to be weaponized by comrade Stalin (1878-1953; Soviet leader 1924-1953).  Indeed, by the late 1920s the school of formalism had become unfashionable (something which in the Soviet Union could be dangerous for authors) and their works essentially “disappeared” until being re-discovered by structuralists in the 1950s.  In the West, the idea of narratology as the “theory, discourse or critique of narrative or narration” owes a debt to the Belgian-born French anthropologist & ethnologist Claude Lévi-Strauss (1908–2009) who defined the structural analysis of narrative as it’s now understood.  His landmark text Anthropologie structurale (Structural Anthropology (1958)) suggested myths are variations on basic themes and that in their totality (which runs to thousands) their narratives contain certain constant, basic and universal structures by which any one myth can be explained.  In that way, myths (collectively) exist as a kind of “language” which can be deconstructed into units or “mythemes” (by analogy with phonemes (an indivisible unit of sound in a given language)).  Although he didn’t pursue the notion of the comparison with mathematics, others did and that (inherently more segmented) field perhaps better illustrates “structural roles” within language in elements which, although individually standing as minimal contrastive units, can be combined or manipulated according to rules to produce meaningful expressions.  As in formal language theory, in mathematical logic, the smallest units are the primitive symbols of a language which can be quantifiers, variables, logical connectives, relation symbols, function symbols or punctuation.  Broken into the individual parts, these need have no (or only minimal) semantic meaning but gain much meaning when assembled or otherwise handled through syntactic combination governed by a recognized grammar (ie although conceptual primitives rather than “building blocks”, complex meaning can be attained by applying axioms and rules).
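To make the formal-language point concrete, below is a toy sketch (in Python, and purely illustrative: the grammar, its rule names and the “mytheme” tokens are all invented for this example rather than drawn from Lévi-Strauss or Propp) of how individually meaningless primitive symbols yield well-formed, meaningful sequences only when combined according to a grammar’s rules:

# A toy context-free grammar: rule names map to lists of possible
# expansions; anything not named in RULES is a terminal "mytheme".
RULES = {
    "TALE": [["LACK", "QUEST", "RESOLUTION"]],               # a tale = three ordered units
    "LACK": [["hero_lacks_object"]],
    "QUEST": [["hero_departs"], ["hero_departs", "QUEST"]],  # departures may repeat
    "RESOLUTION": [["object_recovered"]],
}

def derive(symbol, tokens):
    """Return the possible remainders after `symbol` consumes a prefix of `tokens`."""
    if symbol not in RULES:  # terminal: must match the next token exactly
        return [tokens[1:]] if tokens and tokens[0] == symbol else []
    remainders = []
    for expansion in RULES[symbol]:
        partial = [tokens]
        for part in expansion:  # thread what remains through each part in turn
            partial = [rest for t in partial for rest in derive(part, t)]
        remainders.extend(partial)
    return remainders

def is_well_formed(tokens):
    """A sequence is a "tale" only if TALE consumes it completely."""
    return any(rest == [] for rest in derive("TALE", tokens))

print(is_well_formed(["hero_lacks_object", "hero_departs", "object_recovered"]))  # True
print(is_well_formed(["object_recovered", "hero_lacks_object"]))                  # False

The tokens mean nothing on their own; it is the rule-governed combination which makes the first sequence a well-formed “tale” and the second mere noise.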

Azerbaijani folk art, following Layla and Majnun (1188), a narrative poem by the Persian poet Nizami Ganjavi (circa 1141–1209), printed in Morphology of the Folk Tale (1928) by Vladimir Propp.  In something of a Russian tradition, there are no known photographs of Propp smiling.

Lévi-Strauss’s contribution was that myths can be read in relation to each other rather than as reflecting a particular version, hence his concept of a kind of “grammar” (the set of relations lying beneath the narrative’s surface) and the general principle of the “collective existence of myths”, independent of individual thought.  That was of course interesting but the startling aspect was the implication that myths relate to other myths rather than to truth and reality; they are, in a sense, “outside”, decentred, and possess their own truth and logic which, when contemplated in a “traditional” way, may be judged neither truthful nor logical.  In that, Lévi-Strauss applied something of the method of Propp who, in Morphology of the Folk Tale (1928), “reduced” all folk tales to seven “spheres of action” and 31 fixed elements or “functions” of narrative.  In Propp’s scheme, the function was the basic unit of the narrative “language” and denoted the actions which constitute the narrative, the functions tending to follow a logical sequence.  The concept would have been familiar to engineers and shipbuilders but genuinely there was some novelty when applied to literature.
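By way of illustration, a minimal sketch in Python (ours, using only a hypothetical subset of Propp’s 31 function labels, not his full inventory) of how a tale might be encoded as a chain of functions and checked against the logical sequence Propp described:

# An illustrative (and much abbreviated) ordering of Propp's functions:
PROPP_SEQUENCE = ["absentation", "interdiction", "violation",
                  "villainy", "departure", "struggle", "victory", "return"]

def follows_sequence(tale):
    """True if the tale's functions appear in the canonical order."""
    positions = [PROPP_SEQUENCE.index(f) for f in tale]
    return positions == sorted(positions)

tale = ["interdiction", "violation", "villainy", "departure", "victory"]
print(follows_sequence(tale))  # True: the functions follow the logical sequence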

Lithuanian semiotician A.J. “Julien” Greimas (1917–1992) was among the many academics working in France who found Propp’s reductionism compelling and in Sémantique structurale: Recherche de méthode (Structural Semantics: An Attempt at a Method (1966)) he further atomized things, apparently seeking something like a “universal macro language”, a grammar of narrative which could be derived from a semantic analysis of sentence structure.  That was as ambitious as it sounds and to replace Propp’s “spheres of action” he suggested the “actant” (or role): a structural unit which is neither character nor narrative.  To handle the mechanics of this approach he posited three pairs of binary oppositions comprising six actants: subject/object; sender/receiver; helper/opponent.  The interactions of these binary oppositions served to account for or describe the three basic patterns found in narrative: (1) desire, search or aim (subject/object), (2) communication (sender/receiver) and (3) auxiliary support or hindrance (helper/opponent).
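As a sketch of how the actantial model might be represented (the casting below is hypothetical, not an example Greimas used), the three binary oppositions reduce to a simple Python data structure:

from dataclasses import dataclass

@dataclass
class ActantialModel:
    subject: str    # desire, search or aim ...
    obj: str        # ... directed at the object ("object" shadows a built-in, hence "obj")
    sender: str     # communication ...
    receiver: str   # ... from sender to receiver
    helper: str     # auxiliary support ...
    opponent: str   # ... or hindrance

# A hypothetical quest narrative cast into the six actants:
quest = ActantialModel(subject="hero", obj="grail", sender="king",
                       receiver="kingdom", helper="guide", opponent="rival")
print(f"{quest.subject} seeks {quest.obj}, hindered by {quest.opponent}")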

An eleven-volume first edition of Marcel Proust’s À la recherche du temps perdu (published originally in seven volumes, 1913-1927); in the original French it contained some 1.267 million words.  By comparison, Leo Tolstoy's (1828-1910) War and Peace (1869) ran (depending on the edition) to 560,000-590,000 words.

While Greimas didn’t explicitly claim his model successfully could be mapped on to “any and every” narrative, he does appear to have built it as a general theory and while not all critics were convinced, it seems generally to have been acknowledged his toolbox would work on a much wider range than Propp’s, which did break down as narrative complexity increased.  Another French literary theorist associated with the structuralist movement was Gérard Genette (1930–2018) and as a case study for the model he described in Discours du récit: essai de méthode (Narrative Discourse: An Essay in Method (1972)) he selected Marcel Proust’s (1871-1922) À la recherche du temps perdu (1913–1927) (translated into English originally as “Remembrance of Things Past” and of late as “In Search of Lost Time”), which spans many volumes and narrative streams.  This time the critics seemed more convinced and seem to have concluded Genette’s approach was “more accessible” (these things are relative).  Noting the distinctions made in Russian Formalism between fabula (story) & syuzhet (plot), Genette distinguished between histoire (the sequence in which the events actually occurred), récit (the order in which the narrative presents those events) and narration (the act of narrating itself); atop that framework, he built a complex discussion.  Being a French structuralist, he of course added to the field some new jargon to delight the academy, concluding there were three basic kinds of narrator: (1) the heterodiegetic (where the narrator is absent from his own narrative), (2) the homodiegetic (the narrator is inside his narrative, as in a story told in the first person) and (3) the autodiegetic (the narrator is inside the narrative and also the main character).  Genette’s approach was thus relational, envisaging narrative as a product or consequence of the interplay of its different components, meaning narrative and all its aspects can be seen as dependent units (or, debatably, layers).
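A minimal sketch in Python (our labels for Genette’s categories) showing how the three kinds of narrator reduce to two features of the narrating voice:

from enum import Enum

class Narrator(Enum):
    HETERODIEGETIC = "absent from the narrative"
    HOMODIEGETIC = "inside the narrative"
    AUTODIEGETIC = "inside the narrative and also its main character"

def classify(inside_story: bool, is_protagonist: bool) -> Narrator:
    """Classify a narrating voice by Genette's two distinguishing features."""
    if not inside_story:
        return Narrator.HETERODIEGETIC
    return Narrator.AUTODIEGETIC if is_protagonist else Narrator.HOMODIEGETIC

# A first-person narrator who is also the protagonist:
print(classify(inside_story=True, is_protagonist=True))  # Narrator.AUTODIEGETIC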

Narrator & protagonist: Lindsay Lohan as Cady Heron in Mean Girls (2004).  What in literary theory is known as homodiegetic narration is in film production usually called “subjective narration” or “first-person narration”, usually realized as a voice-over narration by the protagonist.

In formulating his three categories Genette nodded to Aristotle and Plato (circa 427-348 BC), the ancient worthies distinguishing three basic kinds of narrator: (1) the speaker or writer using their own voice, (2) one who assumes the voice of another or others and (3) one who uses both their own voice and that of others.  These categories need not be exclusive, for a story may begin in the voice of a narrator who may then introduce another narrator who proceeds to tell the story of characters who usually have their own voices, and one or more of them may turn to narration.  Structurally (and even logically), there’s no reason why such a progression (or regression) cannot be infinite.  Although it’s obvious the term “narratee” denotes the person to whom a narrative is addressed, just because there is a narrative, it need not be axiomatic a narratee is present or ever existed; T.S. Eliot (1888–1965) in The Three Voices of Poetry (1953-1954) discerned three modes (voices) of poetic expression: (1) the poet speaking to himself, a personal, often obscure meditation, (2) the poet addressing an audience, aiming to teach, persuade, or amuse and (3) the poet creating a dramatic character, as in verse drama, something demanding complex communication between imagined characters.  Eliot argued that “good” poetry often was a blend of these voices and distinguishing them helps in understanding a poem's social and artistic purpose, beyond its mere self-expression.  However, Eliot did note that in “talking to himself”, the writer could also be “talking to nobody”.  He was at pains also to point out that when speaking in the third voice, the poet is saying not what he would say in his own person, but only what he can say within the limits of one imaginary character addressing another imaginary character.  More than many, Eliot knew narrative was not always reliable but the techniques of narratology (and structuralism generally) exist for purposes other than determining truth.

Roland Barthes (2015), oil and acrylic on canvas by Benoit Erwann Boucherot (b 1983).

Layers in narrative structure were identified by the French philosopher & literary theorist Roland Barthes (1915–1980) and his work had great appeal; something of an academic cult once surrounded him and, almost half a century after his death, he retains a following.  In Introduction à l'analyse structurale des récits (Introduction to the Structural Analysis of Narrative (1966)), Barthes presumed a hierarchy of levels existed within narrative, suggesting that, up to a point, they can be discussed separately.  Narrative (at least for this purpose) he conceived as a “long sentence”, just as every constative sentence (in linguistics, an utterance relaying information and liable to be regarded as true or false) can be the “rough outline” of a short narrative.  Barthes’ model was more building block-like in that he selected basic units of narrative (such as “function” & “index”), functions constituting a chain of acts while indices are a kind of metadata containing information about characters.
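As a sketch of the distinction (the sample narrative and its metadata are invented for illustration, not taken from Barthes), functions and indices might be modelled in Python like this:

# Functions: a chain of acts; indices: metadata about character or atmosphere.
functions = ["the telephone rang", "she answered", "she left the house"]
indices = {"she": {"mood": "uneasy", "habit": "smokes Gitanes"}}

def narrate(functions, indices):
    """Weave the chain of acts together with what the indices supply."""
    for act in functions:
        print(act)
    for character, traits in indices.items():
        print(f"{character!r} is indexed as: {traits}")

narrate(functions, indices)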

François Mitterrand (1984), acrylic on canvas by Bryan Organ (b 1935).

On X (formerly known as Twitter), one tweeter analysed the images of Barthes to be found on the indexed web, finding that in 72% he was smoking a cigarette or cigar.  The statistical risks associated with routinely inhaling a known carcinogen have for decades been well-known but Barthes didn’t live long enough to enter the age of “peak statistical risk”.  In February 1980, having just taken lunch with François Mitterrand (1916–1996; President of France 1981-1995) in a restaurant on Paris’s Rue des Blancs-Manteaux, Barthes was using a zebra crossing on the Rue des Ecoles when knocked down by a laundry van; never recovering from his injuries, he died a month later.  The van’s driver was one Yvan Delahov, a Bulgarian national who tested positive for alcohol, but his reading of 0.6 fell below the legal maximum of 0.8; admitting he was late delivering his shirts, he claimed he’d not exceeded 60 km/h (37.3 mph).  At the time, Barthes was carrying no identity documents but was identified by his colleague, the philosopher Michel Foucault (1926–1984).

Northrop Frye, Anatomy of Criticism (first edition, 1957).

Finally must be acknowledged the contribution of the Canadian literary critic & literary theorist Northrop Frye (1912–1991), whose Anatomy of Criticism (1957) is regarded still as one of the more “remarkable and original” (in the words of the English historian and critic J.A. Cuddon (1928-1996)) works of literary theory in the English-speaking world.  In the narrow technical sense, Frye's theory is not structuralist (something which doubtless burnished its reputation among many) but it certainly contains strands which can be seen as structuralist.  Frye positioned literature as an “autonomous verbal structure” unrelated to anything beyond itself, a world which contains “life and reality in a system of verbal relationships”.  In this “self-contained literary universe”, there were four radical “mythoi” (plot forms and basic organizing structural principles) which corresponded to the four seasons of the natural order and constitute the four main genres of comedy, romance, tragedy and satire.  For those non-postmodernists who still long for l'art pour l'art (art for art's sake), Frye’s mythoi are there to be used and he proved their utility in a wide range of texts, including the Bible.