Monday, April 13, 2026

Nail

Nail (pronounced neyl)

(1) A slender, typically rod-shaped rigid piece of metal, made in many lengths and thicknesses, usually having one end pointed and the other enlarged or flattened, and used for hammering into or through wood, concrete or other materials; in the building trades the most common use is to fasten or join together separate pieces (of timber etc).

(2) In anatomy, a thin, horny plate, consisting of modified epidermis, growing on the upper side of the end of a finger or toe; the toughened protective protein-keratin (known as alpha-keratin, also found in hair) at the end of an animal digit, such as a fingernail.

(3) In zoology, the basal thickened portion of the anterior wings of certain hemiptera; the terminal horny plate on the beak of ducks, and other allied birds; the claw of a mammal, bird, or reptile.

(4) Historically, in England, a round pedestal on which merchants once carried out their business.

(5) A measure of length for cloth, equal to 2¼ inches (57 mm), 1⁄20 of an ell or 1⁄16 of a yard (archaic); it’s assumed the origin lies in the use of a nail to mark that length on the end of a yardstick (the arithmetic is worked through below the definitions).

(6) To fasten with a nail or nails; to hammer in a nail.

(7) To enclose or confine (something) by nailing (often followed by up or down).

(8) To make fast or keep firmly in one place or position (also used figuratively).

(9) Perfectly to accomplish something (usually as “nailed it”).

(10) In vulgar slang, of a male, to engage in sexual intercourse with (as “I nailed her” or, according to Urban Dictionary, “I nailed the bitch”).

(11) In law enforcement, to catch a suspect or find them in possession of contraband or engaged in some unlawful conduct (usually as “nailed them”).

(12) In Christianity, as “the nails”, the relics used in the crucifixion, nailing Christ to the cross at Golgotha.

(13) As “the nail” (unit), an archaic multiplier equal to one sixteenth of a base unit.

(14) In drug slang, a hypodermic needle, used for injecting drugs.

(15) To detect and expose (a lie, scandal, etc).

(16) In slang, to hit someone.

(17) In slang, intently to focus on someone or something.

(18) To stud with or as if with nails.
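As a quick check of the cloth measure in the definitions above, the arithmetic works out, assuming the conventional 36-inch yard and 45-inch English ell (the definition itself gives only the fractions):

\[
\frac{36\ \text{in}}{16} = 2\tfrac{1}{4}\ \text{in} \approx 57\ \text{mm}, \qquad \frac{45\ \text{in}}{20} = 2\tfrac{1}{4}\ \text{in}
\]

The two fractions thus describe the same length, which is why the nail could serve interchangeably as 1⁄16 of a yard or 1⁄20 of an ell.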

Pre 900: From the Middle English noun nail & nayl, from the Old English nægl and cognate with the Old Frisian neil, the Old Saxon & Old High German nagal, the Dutch nagel, the German Nagel, the Old Norse nagl (fingernail), all of which were from the unattested Germanic naglaz.  As a derivative, it was akin to the Lithuanian nãgas & nagà (hoof), the Old Prussian nage (foot), the Old Church Slavonic noga (leg, foot), (the Serbo-Croatian nòga, the Czech noha, the Polish noga and the Russian nogá, all of which were probably originally a jocular reference to the foot as “a hoof”), the Old Church Slavonic nogŭtĭ, the Tocharian A maku & Tocharian B mekwa (fingernail, claw), all from the unattested North European Indo-European ənogwh-.  It was further akin to the Old Irish ingen, the Welsh ewin and the Breton ivin, from the unattested Celtic gwhīnā, the Latin unguis (fingernail, claw), from the unattested Italo-Celtic əngwhi-; the Greek ónyx (stem onych-), the Sanskrit ághri- (foot), from the unattested ághli-; and the Armenian ełungn, from the unattested onogwh-.  The Middle English verbs naile, nail & nayle were from the Old English næglian and cognate with the Old Saxon neglian, the Old High German negilen and the Old Norse negla, from the unattested Germanic nagl-janan (the Gothic was ganagljan).  The ultimate source was the primitive Indo-European h₃nog- (nail) and the use to describe the metal fastener was from the Middle English naylen, from the Old English næġlan & nægl (fingernail (handnægl)) & negel (tapering metal pin), from the Proto-Germanic naglaz (source also of the Old Norse nagl (fingernail) & nagli (metal nail)).  Nail is a noun & verb, nailer is a noun, nailless & naillike are adjectives, renail is a verb, nailing is a noun & verb and nailed is a verb & adjective; the noun plural is nails.

Nail is modified or used as a modifier in literally dozens of examples including finger-nail, toe-nail, nail-brush, nail-file, rusty-nail, garden-nail, nail-fungus, nail-gun & frost-nail.  In idiomatic use, a “nail in one's coffin” is an experience or event that tends to shorten life or hasten the end of something (applied retrospectively (ie post-mortem) it’s usually in the form “final nail in the coffin”).  To be “hard as nails” is either to be “in a robust physical state” or “lacking in human feelings or without sentiment”.  To “nail one's colors to the mast” is to declare one’s position on something.  Something described as “better than a poke in the eye with a rusty nail” is a thing which, while not ideal, is not wholly undesirable or without charm.  In financial matters (of payments), to be “on the nail” is to “pay at once”, often in the form “pay on the nail”.  To “nail something down” is to finalize it.  To have “nailed it” is “to perfectly have accomplished something” while “nailed her” indicates “having enjoyed sexual intercourse with her”.  The “right” in the phrase “hit the nail right on the head” is a more recent addition, all known instances of use prior to 1700 being “hit the nail on the head” and the elegant original is much preferred.  It’s used to mean “correctly identify something or exactly to arrive at the correct answer”.  Interestingly, the Oxford English Dictionary (OED) notes there is no documentary evidence that the phrase comes from “nail” in the sense of the thing hit by a hammer.

Double-headed nails.

Double-headed nails are used for temporary structures like fencing.  When the shaft is hammered in to the point where the surface of the lower head is flat against the surface of that into which it's being hammered, it leaves the upper head standing proud with just enough of the shaft exposed to allow a claw-hammer to be used to extract the nail.  There is a story that as part of an environmental protest against the building or demolition of some structure (the tales vary), activists early one morning went to the temporary fencing around the contested site and hammered all the double-headed nails fully home, rendering them irremovable.  This is believed to be an urban myth.

The sense of “fingernail” appears to be the original which makes sense given there were fingernails before there were spikes (of metal or any other material) used to build stuff.  The verb nail was from the Old English næglian (to fix or fasten (something) onto (something else) with nails), from the Proto-Germanic ganaglijan (the source also of the Old Saxon neglian, the Old Norse negla, the Old High German negilen, the German nageln and the Gothic ganagljan (to nail), all developed from the root of the nouns).  The colloquial meaning “secure, succeed in catching or getting hold of (someone or something)” was in use by at least the 1760s; hence the law enforcement slang meaning “to effect an arrest”, noted since the 1930s.  The meaning “to succeed in hitting” dates from 1886 while the phrase “to nail down” (to fix in place with nails) was first recorded in the 1660s.

Colors: Lindsay Lohan with nails unadorned and painted.

As a noun, “nail-biter” (worrisome or suspenseful event), perhaps surprisingly, seems not to have been in common use until 1999 and it’s applied to things from life-threatening situations to watching close sporting contests.  The idea of nail-biting as a sign of anxiety has been in various forms of literature since the 1570s, the noun nail-biting noted since 1805, and from the mid-nineteenth century “nail-biter” was applied to those individuals who “habitually or compulsively bit their fingernails” although this seems to have been purely literal rather than something figurative of a mental state.  Now, a “nail-biter” is one who is “habitually worried or apprehensive” and they’re often said to be “chewing the ends of their fingernails” and in political use, a “nail biter” is a criticism somewhat less cutting than “bed-wetter”.  The condition of compulsive nail-biting is the noun onychophagia, the construct being onycho- (a creation of the international scientific vocabulary, reflecting a New Latin combining form, from the Ancient Greek ὄνυξ (ónux) (claw, nail, hoof, talon)) + -phagia (eating, biting or swallowing), from the Ancient Greek -φαγία (-phagía).  A related form was -φάγος (-phagos) (eater), the suffix corresponding to φαγεῖν (phageîn) (to eat), the infinitive of ἔφαγον (éphagon) (I ate), which serves as the aorist (essentially a compensator for sense-shifts) for the defective verb ἐσθίω (esthíō) (I eat).  Bitter-tasting nail-polish is available for those who wish to cure themselves.  Nail-polish as a product dates from the 1880s and was originally literally a clear substance designed to give the finger or toe-nails a varnish-like finish upon being buffed.  By 1884, it was being sold as “liquid nail varnish” including shades of black, pink and red although surviving depictions in art suggest men and women in various cultures have for thousands of years been coloring their nails.  Nail-files (small, flat, single-cut files for trimming the fingernails) seem first to have been sold in 1819 and nail-clippers (hand-tools used to trim the fingernails and toenails) in 1890.

Francis (1936-2025; pope 2013-2025) at the funeral of Cardinal George Pell (1941-2023), St Peter’s Basilica, the Vatican, January 2023.

The expression "nail down the lid" is a reference to the lid of a coffin (casket), the implication being one wants to make doubly certain anyone within can't possible "return from the dead".  The noun doornail (also door-nail) (large-headed nail used for studding batten doors for strength or ornament) emerged in the late fourteenth century and was often used of many large, thick nails with a large head, not necessarily those used only in doors.  The figurative expression “dead as a doornail” seems to be as old as the piece of hardware and use soon extended to “dumb as a doornail” and “deaf as a doornail).  The noun hangnail (also hang-nail) is a awful as it sounds and describes a “sore strip of partially detached flesh at the side of a nail of the finger or toe” and appears in seventeenth century texts although few etymologists appear to doubt it’s considerably older and probably a folk etymology and sense alteration of the Middle English agnail & angnail (corn on the foot), from the Old English agnail & angnail.  The origin is likely to have been literally the “painful spike” in the flesh when suffering the condition.  The first element was the Proto-Germanic ang- (compressed, hard, painful), from the primitive Indo-European root angh- (tight, painfully constricted, painful); the second the Old English nægl (spike), one of the influences on “nail”.  The noun hobnail was a “short, thick nail with a large head” which dates from the 1590s, the first element probably identical with hob (rounded peg or pin used as a mark or target in games (noted since the 1580s)) of unknown origin.  Because hobnails were hammered into the leather soles of heavy boots and shoes, “hobnail” came in the seventeenth century to be used of “a rustic person” though it was though less offensive than forms like “yokel”.

Nails and pins

Mug shot “pin” from TeePublic featuring Lindsay Lohan (b 1986, left), Donald Trump (b 1946; POTUS 2017-2021 and since 2025) and Paris Hilton (b 1981, right).  In this context, although the product really is “the badge”, the name derives from the built-in pin supplied to secure the object to clothing.

As designs, a nail and a pin are similar, obviously differing only in scale, but the function of each is different.  A nail’s primary purpose is to function as a structural fastener joining materials (most typically two or more pieces of wood) although there are specialized nails driven into substrate by impact (variously with hammers or nail guns (sometimes called “pin-nailers”, some of which are built to fire “panel pins” (very slender nails) or small “headless nails”)).  A nail relies on friction and compression in the surrounding material for its holding strength.  Pins look like scaled-down nails but mostly are used for alignment, retention or pivoting, rather than structural load-bearing.  Because of their more delicate construction, pins often are inserted through specific-purpose, pre-existing holes and in many cases are intended to be temporary and are thus removable.  Visually, both nails and pins have heads (round, flat, clipped etc) and a tapered shank with a tip pointed for penetration (“snub-nosed” nails do exist but are rare) and both are designed slightly to deform the surrounding material when driven.  The most obvious difference is that a pin’s head is very small and some are spherical and made from plastic; they’re designed only to be pushed with finger-pressure rather than being hit with a hammer.  Although the term “pin” is used for some specialized devices used in building and engineering (dowel pin, pivot pin, gudgeon pin (also as wrist pin), roll pin, cotter pin etc), the word is most associated with the tailor’s pin (used mostly in textiles and usually clipped to “pin”).  In jewelry design and textiles there are also variants including the “lapel pin” and the fashion industry’s device of last resort, the “safety pin”.

Pinhead in publicity shot for Hellraiser III: Hell on Earth (1992).

Clive Barker's (b 1952) supernatural horror movie Hellraiser (1987) was based on his novella The Hellbound Heart (1986) and was a surprise hit, making it a franchise which has thus far spawned nine sequels, the most recent released in 2022.  The plot involved a mysterious puzzle box that, when opened, summoned the Cenobites, a group of extra-dimensional, sadomasochistic beings unable to differentiate between pain and pleasure.  It was a good premise for a horror movie but the character who really captured the audience's imagination was the unnamed figure viewers dubbed “Pinhead”.  Although Pinhead appeared in the original film for fewer than ten minutes, the character became the franchise’s focal point and has since dominated the publicity material for subsequent releases.  The popularity of Hellraiser has been maintained and it’s hoped that for the next release the producers will offer the part to Peter Dutton (b 1970; leader of the Liberal Party of Australia 2022-2025).

Peter Dutton captured by a photographer during a happy moment (left), Pinhead with the box able to summon the Cenobites (centre) and an artist's depiction of Mr Dutton in “Pinhead mode” (digitally altered image, right).

No longer burdened with tiresome parliamentary duties since losing his seat in the 2025 Australian general election, Mr Dutton has time for a third career and he should be good at playing an unsmiling character who speaks in a relentless monotone; really, all he need do is act naturally.  It’s suspected also he’ll be good at learning a script given the decades he spent parroting “talking points” and TWS (three word slogans).  While it’s an urban myth Mr Dutton wasn’t offered the part of Lord Voldemort in the Harry Potter movie franchise because he was deemed “too scary”, as Pinhead he’d be “just scary enough”.  While the LNP (Liberal National Party) state government in Queensland recently has appointed Mr Dutton to the board of the QIC (Queensland Investment Corporation, the investment manager of the state’s Aus$135 billion in assets), it’s understood his duties in the Aus$130,000 per annum role will be neither onerous nor time-consuming so there’ll be ample opportunity for film-shoots.  Although when in opposition the LNP had decried the ALP (Australian Labor Party) government’s frequent appointment of ALP figures to lucrative sinecures, once in office the LNP continued the “jobs for the boys” tradition.  In the modern era, the two most striking characteristics of right-wing fanatics are (1) a fondness for sitting safely in a bunker while advocating for (and sometimes sending) other people's children to go and fight a war somewhere and (2) after a career spent extolling the virtues of “private enterprise” and criticizing “government waste”, being anxious to get back on the public payroll as soon as their political careers end.  Reassuringly for taxpayers who may have been worried Mr Dutton would not be able to continue to enjoy the lifestyle to which their taxes made him accustomed (“entitled” as he might have put it), it’s believed his director’s fees from QIC will not affect his parliamentary pension (understood to be between Aus$260,000-Aus$280,000 per annum).

The Buick Nailhead

In the 1930s, the straight-8 became a favorite for manufacturers of luxury cars, attracted by its ease of manufacture (components and assembly-line tooling able to be shared with those used to produce a straight-6), the mechanical smoothness inherent in the layout and the ease of maintenance afforded by the long, narrow configuration, ancillary components readily accessible.  However, the limitations were the relatively slow engine speeds imposed by the need to restrict the “crankshaft flex” and the height of the units, a product of the long strokes used to gain the required displacement.  By the 1950s, it was clear the future lay in big-bore, overhead valve V8s although the Mercedes-Benz engineers, unable to forget the glory days of the 1930s when the straight-eight W125s built for the Grand Prix circuits generated power and speed Formula One wouldn’t see until the 1980s, noted the relatively small 2.5 litre (153 cubic inch) displacement limit for 1954 and conjured up a final fling for the layout.  Used in both Formula One as the W196R and in sports car racing as the W196S (better remembered as the 300 SLR), the new 2.5 & 3.0 litre (153 & 183 cubic inch) straight-8s, unlike their pre-war predecessors, solved the issue of crankshaft flex (the W196's redline was 9500 rpm compared with the W125's 5800) by locating the power take-off at the centre, adding mechanical fuel-injection and a desmodromic valve train to make the things an exotic cocktail of ancient & modern (on smooth racetracks and in the hands of skilled drivers, the swing axles at the back not the liability they might sound).  Dominant during 1954-1955 in both Formula One & the World Sports Car Championship, they were the last of the straight-8s in top-line competition.

Schematic of Buick “Nailhead” V8, 1953-1966.

Across the Atlantic, the US manufacturers also abandoned their straight-8s.  Buick introduced their overhead valve (OHV) V8 in 1953 but, being much wider than before, the new engine had to be slimmed somewhere to fit between the existing inner-fenders (it would not be until later the platform was widened).  To achieve this, the engineers narrowed the cylinder heads, compelling both a conical (the so-called “pent-roof”) combustion chamber and an arrangement in which the sixteen valves pointed directly upwards on the intake side, something which not only demanded an unusual pushrod & rocker mechanism but also limited the size of the valves.  So, the valves had to be tall and narrow and, with some resemblance to nails, they picked up the nickname “nail valves”, morphing eventually to “nailhead” as a description of the whole engine.  The valve placement and angle certainly benefited the intake side but the geometry compromised the flow of exhaust gases which were compelled by their anyway small ports to make a turn of almost 180° on their way to the tailpipe.  As an indication of the heat-soak generated by that 180° turn, the surrounding water passages were very wide.

It wasn't the last time the head design of a Detroit V8 would be dictated by considerations of width.  When Chrysler in 1964 introduced the 273 cubic inch (4.5 litre) V8 as the first of its LA-Series (which would beget the later 318 (5.2), 340 (5.5) & 360 (5.9) as well as the V10 made famous in the Dodge Viper), the most obvious visual difference from the earlier A-Series V8s was the noticeably smaller cylinder heads.  The A engines used a skew-type valve arrangement in which the exhaust valve was parallel to the bore with the intake valve tipped toward the intake manifold (the classic polyspherical chamber).  For the LA, Chrysler rendered all the valves tipped to the intake manifold and in-line (as viewed from the front), the industry’s standard approach to a wedge combustion chamber.  The reason for the change was that the decision had been taken to offer the compact Valiant with a V8 but it was a car which had been designed to accommodate only a straight-six and the wide-shouldered polyspheric head A-Series V8s simply wouldn’t fit.  So, essentially, wedge-heads were bolted atop the old A-Series block but the “L” in LA stood for light and the engineers wanted something genuinely lighter for the compact (in contemporary US terms) Valiant.  Accordingly, in addition to the reduced size of the heads and intake manifold, a new casting process was developed for the block (the biggest, heaviest part of an engine) which made possible thinner walls.  "Light" is however a relative term and the LA series was notably larger and heavier than Ford's "Windsor" V8 (1961-2000) which was the exemplar of the "thin-wall" technique.  This was confirmed in 1967 when, after taking control of Rootes Group, Chrysler had intended to continue production of the Sunbeam Tiger, by then powered by the Ford Windsor 289 (4.7 litre) but with Chrysler’s 273 LA V8 substituted.  Unfortunately, while 4.7 Ford litres filled the engine bay to the brim, 4.5 Chrysler litres overflowed; the Windsor truly was compact.  Allowing the Tiger to remain in production until the stock of already purchased Ford engines had been exhausted, Chrysler instead changed the advertising from emphasizing the “…mighty Ford V8 power plant” to the vaguely ambiguous “…an American V-8 power train”.

322 cubic inch Nailhead in 1953 Buick Skylark convertible (left) and 425 cubic inch Nailhead in 1966 Buick Riviera GS (with dual-quad MZ package, right).  Note the “Wildcat 465” label on the air cleaner, a reference to the claimed torque rating, something most unusual, most manufacturers using the space to advertise horsepower or cubic inch displacement (CID).

The nailhead wasn’t ideal for producing top-end power but the design did generate prodigious low-end torque, something much appreciated by Buick's previous generation of buyers who had much relished the low-speed responsiveness of the famously smooth straight-8.  However, like everybody else, Buick hadn’t anticipated that as the 1950s unfolded, the industry would engage in a “power race”, something to which the free-breathing Cadillac V8s and Chrysler’s Hemis were well-suited.  For that, the somewhat strangulated Buick Nailhead was not at all suited and to gain power the engineers were compelled to add high-lift, long-duration camshafts which enabled the then magic 300 HP (horsepower) number to be achieved but at the expense of smoothness; tales of Buick buyers (long accustomed to straight-8s that ran so smoothly at idle it could be hard to tell if the things were running) returning to the dealer to fix the “rumpity-rump” became legion.  Still, the Nailhead was robust, relatively light and offered what was then a generous displacement and the ever inventive hot-rod community soon worked out the path to power was to use forced induction and invert the valve use, the supercharger blowing the fuel-air mix into the combustion chambers through the exhaust ports while the exhaust gases were evacuated through the larger intake ports.  Thus, for a while, the Nailhead enjoyed a role as a niche player although the arrival in the mid 1950s of the much more tuneable Chevrolet V8s ended the vogue for all but a few devotees who continued the practice well into the 1960s.  Buick acknowledged reality and, unusually, instead of following the industry trend and drawing attention to displacement & power, publicized their big torque numbers, confusing some (though probably not Buick buyers, a loyal crew who sometimes would look down on more expensive Cadillacs because they were "flashy").  The unique appearance of the old Nailhead retains some nostalgic appeal for the modern hot-rod community and they do sometimes appear, a welcome change from the more typical small-block Fords or Chevrolets.

Lockheed SR-71 Blackbird (1964-1999).

Not confused about numbers was the USAF (United States Air Force) which was much interested in power for its aircraft but also had a special need for torque on the tarmac and briefly that meant another quirky niche for the Nailhead.  The Lockheed SR-71 Blackbird (1964-1999) was a long-range, high-altitude supersonic (Mach 3+) aircraft used by the USAF for reconnaissance between 1966-1998 and by NASA (National Aeronautics & Space Administration) for observation missions as late as 1999.  Something of a high-water mark among the extraordinary advances made in aeronautics and materials construction during the Cold War, the SR-71 used Pratt & Whitney J58 turbojet engines which featured an innovative, secondary air-injection system for the afterburner, permitting additional thrust at high speed.  The SR-71 still holds a number of altitude and speed records and Lockheed’s SR-72, a hypersonic unmanned aerial vehicle (UAV), is said to be in an “advanced stage” of design and construction although whether any test flights will be conducted before 2030 remains unclear, the challenges of sustaining in the atmosphere velocities as high as Mach 6+ onerous given the heat generated and stresses imposed by the fluid dynamics of air at high speed.

Drawing from user manual for AG330 starter cart (left) and AG330 starter cart with dual Buick Nailhead V8s (right).

At the time, the SR-71 was the most exotic aircraft on the planet but during testing and early in its career, just for the engines to start it relied on a pair of even then technologically bankrupt Buick Nailhead V8s.  These were mounted in a towed cart and were effectively the turbojet’s starter motor, a concept developed in the 1930s as a work-around for the technology gap which emerged as the V12 aero-engines became too big to start by hand but lacked on-board electrical systems to trigger ignition.  The two Nailheads were connected by gears to a single, vertical drive shaft which ran the jet up to the critical speed at which ignition became self-sustaining.  The engineers chose the Nailheads after comparing them to other large displacement V8s, the aspect of the Buicks which most appealed being the torque generated at relatively low engine speeds, a characteristic ideal for driving an output shaft, torque best visualized as a "twisting" force.  After the Nailhead was retired in 1966, later carts used Chevrolet big-block V8s but in 1969 a pneumatic start system was added to the infrastructure of the USAF bases from which the SR-71s most frequently operated, the sixteen-cylinder carts relegated to secondary fields the planes rarely used.
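To see why torque rather than peak horsepower was the deciding metric, the general relation between power, torque and shaft speed is enough (this is the textbook identity, not a figure from the AG330's documentation): power transmitted through a rotating shaft is torque multiplied by angular velocity.

\[
P = \tau\,\omega, \qquad \omega = \frac{2\pi n}{60}\ \ (n\ \text{in rpm}) \quad\Longrightarrow\quad \tau = \frac{60\,P}{2\pi n}
\]

For any given power demand at the vertical drive shaft, the torque required rises as shaft speed falls, so an engine which (like the Nailhead) develops its torque low in the rev range can turn a heavy turbojet rotor from rest without first having to be revved toward its power peak.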

Saturday, January 10, 2026

Aphantasia

Aphantasia (pronounced ay-fan-tay-zhuh)

The inability voluntarily to recall or form mental images.

2015: The word (not the diagnosis) was coined by UK neurologist Dr Adam Zeman (b 1957), neuropsychologist Dr Michaela Dewar (b 1976) and Italian neurologist Sergio Della Sala (b 1955), first appearing in the paper Lives without imagery.  The construct was a- (from the Ancient Greek ἀ- (a-), used as a privative prefix meaning “not”, “without” or “lacking”) + phantasía (from the Greek φαντασία (“appearance”, “imagination”, “mental image” or “power of imagination”), from φαίνω (phaínō) (“to show”, “to make visible” or “to bring to light”)).  Literally, aphantasia can be analysed as meaning “an absence of imagination” or “an absence of mental imagery” and in modern medicine it’s defined as “the inability voluntarily to recall or form mental images”.  Even in Antiquity, there was some meaning shift in phantasía, Plato (circa 427-348 BC) using the word to refer generally to representations and appearances whereas Aristotle (384-322 BC) added a technical layer, his sense being a faculty mediating between perception (aisthēsis) and thought (noēsis).  It’s the Aristotelian adaptation (the mind’s capacity to form internal representations) which flavoured the use in modern neurology.  Aphantasia is a noun and aphantasic is a noun & adjective; the noun plural is aphantasics.

Scuola di Atene (The School of Athens, circa 1511), fresco by Raphael (Raffaello Sanzio da Urbino, 1483–1520), Apostolic Palace, Vatican Museums, Vatican City, Rome.  Plato and Aristotle are the figures featured in the centre.

In popular use, the word “aphantasia” can be misunderstood because of the paths taken in English by “phantasy”, “fantasy” and “phantasm”, all derived from the Ancient Greek φαντασία (phantasía) meaning “appearance, mental image, imagination”.  In English, this root was picked up via Latin and French but the multiple forms each evolved in distinct semantic trajectories.  The fourteenth century phantasm came to mean “apparition, ghost, illusion” so was used of “something deceptive or unreal”, the connotation being “the supernatural; spectral”.  This appears to be the origin of the association of “phantas-” with unreality or hallucination rather than normal cognition.  In the fifteenth & sixteenth centuries, the spellings phantasy & fantasy were for a time interchangeable although divergence came with phantasy used in its technical senses of “mental imagery”; “faculty of imagination”; “internal representation”, this a nod to Aristotle’s phantasía.  Fantasy is the familiar modern form, used to suggest “a fictional invention; daydream; escapism; wish-fulfilment”, the connotation being “imaginative constructions (in fiction); imaginative excess (in the sense of “unreality” or the “dissociative”); indulgence (as in “speculative or wishful thoughts”)”.

While the word “aphantasia” didn’t exist until 2015, in the editions of the American Psychiatric Association's (APA) Diagnostic and Statistical Manual of Mental Disorders (DSM) published between 1952 (DSM-I) and 2013 (DSM-5), there was no discussion (or even mention) of a condition anything like “an inability voluntarily to recall or form mental images”.  That’s because despite being “a mental condition” induced by something happening (or not happening) in the brain, the phenomenon has never been classified as “a mental disorder”.  Instead it’s a cognitive trait or variation in the human condition and technically is a spectrum condition, the “pure” aphantasic being one end of the spectrum, the hyperphantasic (highly vivid, lifelike mental imagery, sometimes called a “photorealistic mind's eye”) the other.  That would of course imply the comparative adjective would be “more aphantasic” and the superlative “most aphantasic” but neither are standard forms.

The rationale for the “omission” was the DSM’s inclusion criteria, which include the requirement of some evidence of clinically significant distress or functional impairment attributable to a condition.  Aphantasia, in isolation, does not reliably meet this threshold in that many individuals have for decades functioned entirely “normally” without being aware they’re aphantasic while others presumably had died of old age in similar ignorance.  That does of course raise the intriguing prospect the mental health of some patients may have been adversely affected by the syndrome only by a clinician informing them of their status, thus making them realize what they were missing.  This, the latest edition of the DSM (DSM-5-TR (2022)) does not discuss.  The DSM does discuss imagery and perceptual phenomena in the context of other conditions (PTSD (post-traumatic stress disorder), psychotic disorders, dissociative disorders etc), but these references are to abnormal experiences, not the lifelong absence of imagery.  To the DSM’s editors, aphantasia remains a recognized phenomenon, not a diagnosis.

Given that aphantasia concerns aspects of (1) cognition, (2) inner experience and (3) mental representation, it wouldn’t seem unreasonable to expect the condition now described as aphantasia would have appeared in the DSM, even if only in passing or in a footnote.  However, in the seventy years between 1952-2022, over nine editions, there is no mention, even in DSM-5-TR (2022), the first volume released since the word was in 2015 coined.  That apparently curious omission is explained by the DSM never having been a general taxonomy of mental phenomena.  Instead, it’s (an ever-shifting) codification of the classification of mental disorders, defined by (1) clinically significant distress and/or (2) functional impairment and/or (3) a predictable course, prognosis and treatment relevance.  As a general principle the mere existence of an aphantasic state meets none of these criteria.

Crooked Hillary Clinton in orange Nina McLemore pantsuit, 2010.  If in response to the prompt "Imagine a truthful person" one sees an image of crooked Hillary Clinton (b 1947; US secretary of state 2009-2013), that is obviously wrong but it is not an instance of aphantasia because the image imagined need not be correct, it needs just to exist.

The early editions (DSM-I (1952) & DSM-II (1968)) heavily were slanted to the psychoanalytic, focusing on psychoses, neuroses and personality disorders with no systematic treatment of cognition as a modular function; the matter of mental imagery (even as abstract thought separated from an imagined image), let alone its absence, wholly is ignored.  Intriguingly, given what was to come in the field, there was no discussion of the cognitive phenomenology beyond gross disturbances (ie delusions & hallucinations).  Even with the publication of the DSM-III (1980) & DSM-III-R (1987), advances in scanning and surgical techniques, cognitive psychology and neuroscience seem to have made little contribution to what the DSM’s editorial board decided to include and although DSM-III introduced operationalized diagnostic criteria (as a part of a more “medicalised” and descriptive psychiatry), the entries still were dominated by a focus on dysfunctions impairing performance, the argument presumably that it was possible (indeed, probably typical) for those with the condition to lead full, happy lives; the absence of imagery ability thus not considered a diagnostically relevant variable.  Even in sections on (1) amnestic disorders (a class of memory loss in which patients have difficulty forming new memories (anterograde) or recalling past ones (retrograde), not caused by dementia or delirium but a consequence of brain injury, stroke, substance abuse, infections or trauma), with treatment focusing on the underlying cause and rehabilitation, (2) organic mental syndromes or (3) neuro-cognitive disturbance, there was no reference to voluntary imagery loss as a phenomenon in its own right.

Although substantial advances in cognitive neuroscience meant by the 1990s neuropsychological deficits were better recognised, both the DSM-IV (1994) and DSM-IV-TR (2000) continued to be restricted to syndromes with behavioural or functional consequences.  In a way that was understandable because the DSM still was seen by the editors as a manual for working clinicians who were most concerned with helping those afflicted by conditions with clinical salience; the DSM has never wandered far into subjects which might be matters of interesting academic research and mental imagery continued to be mentioned only indirectly, hallucinations (percepts without stimuli) and memory deficits (encoding and retrieval) both discussed only in terms of their effect on a patient, not as phenomena.  The first edition for the new century was DSM-5 (2013) and what was discernible was that discussions of major and mild neuro-cognitive disorders were included, reflecting the publication’s enhanced alignment with neurology but even then, imagery ability was not assessed or scaled: not possessing the power of imagery was not listed as a symptom, specifier, or associated feature.  So there has never in the DSM been a category for benign cognitive variation and that is a product of a deliberate editorial stance rather than an omission, many known phenomena not psychiatrised unless in some way “troublesome”.

The term “aphantasia” was coined to describe individuals who lack voluntary visual mental imagery, often discovered incidentally and not necessarily associated with brain injury or psychological distress.  In 2015 the word was novel but the condition had been documented for more than a century, Sir Francis Galton (1822–1911) in a paper published in 1880 describing what would come to be called aphantasia.  That work was a statistical study on mental imagery which doubtless was academically solid but Sir Francis’s reputation later suffered because he was one of the leading lights in what was in Victorian times (1837-1901) the respectable discipline of eugenics.  Eugenics rightly became discredited so Sir Francis was to some extent retrospectively “cancelled” (something like the Stalinist concept of “un-personing”) and these days his seminal contribution to the study of behavioural genetics is acknowledged only grudgingly.

Galton in 1880 noted a wide variation in “visual imagination” (ie it was understood as a “spectrum condition”) and in the same era, in psychology publications the preferred term seems to have been “imageless thought”.  In neurology (and trauma medicine generally) there were many reports of patients losing the power of imagery after a brain injury but no agreed name was ever applied because the interest was more in the injury.  The unawareness that some people simply lacked the faculty presumably was general because as Galton wrote: “To my astonishment, I found that the great majority of the men of science to whom I first applied, protested that mental imagery was unknown to them, and they looked on me as fanciful and fantastic in supposing that the words “mental imagery” really expressed what I believed everybody supposed them to mean. They had no more notion of its true nature than a colour-blind man who has not discerned his defect has of the nature of colour.”

His paper must have stimulated interest because one psychologist reported some subjects possessing what he called a “typographic visual type” imagination in which ideas (which most would visualize as an image of some sort) would manifest as “printed text”.  That was intriguing because, in the same way a computer in some aspects doesn’t distinguish between an image file (jpeg, TIFF, webp, avif etc) which is a picture of (1) someone and (2) their name in printed form, it would seem to imply at least some who are somewhere on the aphantasia spectrum retain the ability to visualize printed text, just not the object referenced.  Professor Zeman says he first became aware of the condition in 2005 when a patient reported having lost the ability to visualize following minor surgery and after the case was in 2010 documented in the medical literature in the usual way, it provoked a number of responses in which multiple people informed Zeman they had never in their lifetime been able to visualize objects.  This was the origin of Zeman and his collaborators coining “congenital aphantasia”, describing individuals who never enjoyed the ability to generate voluntary mental images.  Because it was something which came to general attention in the age of social media, great interest was triggered in the phenomenon and a number of “on-line tests” were posted, the best-known of which was the request for readers to “imagine a red apple” and rate their “mind's eye” depiction of it on a scale from 1 (photorealistic visualisation) through to 5 (no visualisation at all).  For many, this was variously (1) one’s first realization they were aphantasic or (2) an appreciation one’s own ability or inability to visualise objects was not universal.

How visualization can manifest: Lindsay Lohan and her lawyer in court, Los Angeles, December 2011.  If an aphantasic person doesn't know about aphantasia and doesn't know other people can imagine images, their life is probably little different from anyone else's; it's just their minds have adapted to handle concepts in another way.

Top right: What’s thought “normal” visualization (thought to be possessed by most of the population) refers to the ability to imagine something like a photograph of what’s being imagined.  This too is a spectrum condition in that some will be able to imagine an accurate “picture”, something like an HD (high definition) photograph, while others will “see” something less detailed, sketchy or even wholly inaccurate.  However, even if when asked to visualize “an apple” one instead “sees a banana”, that is not an indication of aphantasia, a condition which describes only an absence of an image.  Getting it that wrong is an indication of something amiss but it’s not aphantasia.

Bottom left: “Seeing” text in response to being prompted to visualize something was the result Galton in 1880 reported as such a surprise.  It means the brain understands the concept of what is being described; it just can’t be imagined as an image.  This is one manifestation of aphantasia but it’s not related to the “everything is text” school of post-modernism.  Jacques Derrida’s (1930-2004) fragment “Il n'y a pas de hors-texte” (literally “there is no outside-text”) is one of the frequently misunderstood phrases from the murky field of deconstruction but it has nothing to do with aphantasia (although dedicated post-modernists probably could prove a relationship).

Bottom right: The absence of any image (understood as a “blankness” which does not necessarily imply “whiteness” or “blackness” although this is the simple way to illustrate the concept), whether text or to some degree photorealistic, is classic aphantasia.  The absence does not mean the subject doesn’t understand the relevant object or concept; it means only that their mental processing does not involve imagery and for as long as humans have existed, many must have functioned in this way, their brains adapted to the imaginative range available to them.  What this must have meant was many became aware of what they were missing only when the publicity about the condition appeared on the internet, an interesting example of “diagnostic determinism”.

WebMD's classic Aphantasia test.

The eyes are an out-growth of the brain and WebMD explains aphantasia is caused by the brain’s visual cortex (the part of the brain that processes visual information from the eyes) “working differently than expected”, noting the often quoted estimate of it affecting 2-4% of the population may be understated because many may be unaware they are “afflicted”.  It’s a condition worthy of more study because aphantasics handle the characteristic by processing information differently from those who rely on visual images.  There may be a genetic element in aphantasia and there’s interest too among those researching “Long Covid” because the symptom of “brain fog” can manifest much as does aphantasia.

Aphantasia may have something to do with consciousness because aphantasics can have dreams (including nightmares) which can to varying degrees be visually rich.  There’s no obvious explanation for this but while aphantasia is the inability voluntarily to generate visual mental imagery while awake, dreaming is an involuntary perceptual experience generated during sleep; while both are mediated by neural mechanisms, these clearly are not identical but presumably must overlap.  The conclusions from research at this stage remain tentative but the current neuro-cognitive interpretation seems to suggest voluntary (conscious) imagery relies on top-down activation of the visual association cortex while (unconscious) dream imagery relies more on bottom-up and internally driven activation during REM (rapid eye movement) sleep.  What that would seem to imply is that in aphantasia, the former pathway is impaired (or at least inaccessible), while the latter may remain intact (or accessible).

The University of Queensland’s illustration of the phantasia spectrum.

The opposite syndrome is hyperphantasia (having extremely vivid, detailed, and lifelike mental imagery) which can be a wonderful asset but can also be a curse, rather as hyperthymesia (known also as HSAM (Highly Superior Autobiographical Memory) and colloquially as “total recall”) can be disturbing.  Although it seems not to exist in the sense of “remembering everything, second-by-second”, there are certainly those who have an extraordinary recall of “events” in their life and this can have adverse consequences for mental health because one of the mind’s “defensive mechanisms” is forgetting or at least suppressing memories which are unwanted.  Like aphantasia & hyperphantasia, hyperthymesia is not listed by the DSM as a mental disorder; it is considered a rare cognitive trait or neurological phenomenon although like the imagery conditions it can have adverse consequences and these include disturbing “flashbacks”, increased rumination and increased rates of anxiety or obsessive tendencies.

Saturday, December 20, 2025

Enthrone

Enthrone (pronounced en-throhn)

(1) To put on the throne in a formal installation ceremony (sometimes called an enthronement) which variously could be synonymous with (or simultaneously performed with) a coronation or other ceremonies of investiture.

(2) Figuratively in this context, to help a candidate to the succession of a monarchy or by extension in any other major organisation (ie the role of “kingmakers”, literal and otherwise).

(3) To invest with sovereign or episcopal authority (ie a legal instrument separate from any ceremony).

(4) To honour or exalt (now rare except in literary or poetic use).

(5) Figuratively, to assign authority to or vest authority in.

Circa 1600: The construct was en- + throne and the original meaning was “to place on a throne, exalt to the seat of royalty”.  For this purpose it replaced the late fourteenth century enthronize, from the thirteenth century Old French introniser, from the Late Latin inthronizare, from the Greek enthronizein.  In the late fourteenth century the verb throne (directly from the noun) was used in the same sense.  Throne (the chair or seat occupied by a sovereign, bishop or other exalted personage on ceremonial occasions) dates from the late twelfth century and was from the Middle English trone, from the Old French trone, from the Latin thronus, from the Ancient Greek θρόνος (thrónos) (chair, high-set seat, throne).  It replaced the earlier Middle English seld (seat, throne).  In facetious use, as early as the 1920s, throne could mean “a toilet” (used usually in the phrase “on the throne”) and in theology had the special use (in the plural and capitalized) describing the third (a member of an order of angels ranked above dominions and below cherubim) of the nine orders into which the angels traditionally were divided in medieval angelology.  The en- prefix was from the Middle English en- (en-, in-), from the Old French en- (also an-), from the Latin in- (in, into).  It was also an alteration of in-, from the Middle English in-, from the Old English in- (in, into), from the Proto-Germanic in (in).  Both the Latin & Germanic forms were from the primitive Indo-European en (in, into).  The intensive use of the Old French en- & an- was due to confluence with the Frankish intensive prefix an- which was related to the Old English intensive prefix -on.  It formed a transitive verb whose meaning is to make the attached adjective (1) in, into, (2) on, onto or (3) covered.  It was used also to denote “caused” or as an intensifier.  The prefix em- was (and still is) used before certain consonants, notably the labials b and p.  Enthrone, dethrone, enthrones & enthronize are verbs, enthronement, enthronization & enthroner are nouns, enthroning is a noun & verb, enthroned is a verb & adjective; the noun plural is enthronements.  The adjective enthronable is non-standard.  The derived forms include the verbs unenthrone, reenthrone & disenthrone and although there have been many enthroners, the form enthronee has never existed.

Alhaji Ibrahim Wogorie (b 1967) being enskinned as North Sisala community chief, Ghana, July 2023.

In colonial-era West Africa the coined forms were “enskin” (thus enskinment, enskinning, enskinned) and “enstool” (thus enstoolment, enstooling, enstooled).  These words were used to refer to the ceremonies in which a tribal chief was installed in his role; the meanings thus essentially the same as enjoyed in the West by “enthrone”.  The constructs reflected a mix of indigenous political culture and English morphological adaptation during the colonial period, the elements explained by (1) the animal skins (the distinctive cheetah often mentioned in the reports of contemporary anthropologists although in some Islamic and Sahelian-influenced chieftaincies (including the Dagomba, Mamprusi, Hausa emirates), a cow or lion skin often was the symbol of authority) which often surrounded the new chief and (2) the tradition in Africa of a chief sitting on a stool.  Sometimes, the unfortunate animal’s skin would be laid over the stool (and almost always, one seems to have been laid at the chief’s feet) but in some traditions (notably in northern Ghana and parts of Nigeria) it was a mark of honor for the chief to sit on a skin spread on the ground.

Dr Mahamudu Bawumia (b 1963), enstooled as Nana Ntentankesehene (Chief of the Internet/Web), Ghana, August 2024.  Note the cheetah skin used to trim the chair.

The stool was the central symbol of chieftaincy and kingship among Akan-speaking peoples (still in present-day Ghana where “to enskin” is used generally to mean “to install as a leader of a group” and the constitution (1992) explicitly protects the institution of chieftaincy and judicial decisions routinely use “enstool” or “enskin” (depending on region)).  In Akan political culture, the most famous use was the Sika Dwa Kofi (the Golden Stool) of the Asante and it represented the embodiment of the polity and ancestors, not merely a seat (used rather like the synecdoches “the Pentagon” (for the US Department of Defense (which appears now to be headed by a cabinet officer who simultaneously is both Secretary of Defense & Secretary of War)) or “Downing Street” (for the UK prime-minister or the government generally)).  Thus, to be “enstooled” is ritually to be placed into office as chief, inheriting the authority vested in the stool.  Enskin & enstool (both of which seem first to have appeared in the records of the Colonial Office in the 1880s and thus were products of the consolidation of British indirect rule in West Africa, rather than being survivals from earlier missionary English which also coined its own terms) were examples of semantic calquing (the English vocabulary reshaped to encode indigenous concepts) and, as it was under the Raj in India, it was administrative pragmatism, colonial officials needing precise (and standardized) terms that distinguished between different systems of authority.  In truth, they were also often part of classic colonial “fixes” in which the British would take existing ceremonies and add layers of ritual to afforce the idea of a chief as “their ruler” and within a couple of generations, sometimes the local population would talk of the newly elaborate ceremony as something dating back centuries; the “fix” was a form of constructed double-legitimization.

A classic colonial fix was the Bose Levu Vakaturaga (Great Council of Chiefs) in Fiji which the British administrators created in 1878.  While it's true that prior to European contact, there had been meetings between turaga (tribal chiefs) to settle disputes and for other purposes, all the evidence suggests they were ad-hoc affairs with little of the formality, pomp and circumstance the British introduced.  Still, it was a successful institution which the chiefs embraced, apparently with some enthusiasm because the cloaks and other accoutrements they adopted for the occasion became increasingly elaborate and it was a generally harmonious form of indigenous governance which enabled the British to conduct matters of administration and policy-making almost exclusively through the chiefs.  The council survived even after Fiji gained independence from Britain in 1970 until it was in 2012 abolished by the military government of Commodore Frank Bainimarama (b 1954; prime minister of Fiji 2007-2022), as part of a reform programme said to be an attempt to reduce ethnic divisions and promote a unified national identity.  The commodore's political future would be more assured had he learned lessons from the Raj.

There was of course an element of racial hierarchy in all this and “enskin” & “enstool” denoted a “tribal chief” under British rule whereas “enthrone” might have been thought to imply some form of sovereignty because that was the linkage in Europe and that would never do.  What the colonial authorities wanted was to maintain the idea of “the stool” as a corporate symbol, the office the repository of the authority, not the individual.  The danger with using a term like “enthronement” was the population might be infected by the European notion of monarchy as a hereditary kingship with personal sovereignty; what the Europeans wanted was “a stool” and they would decide who would be enstooled, destooled or restooled. 

Prince Mangosuthu Buthelezi, Moses Mabhida Stadium, Durban, South Africa, October 2022.

English words and their connotations did continue to matter in the post-colonial world because although the colonizers might have departed, often the legacy of language remained, sometimes as an “official” language of government and administration.  In the 1990s, the office of South Africa’s Prince Mangosuthu Buthelezi (1928–2023) sent a series of letters to the world’s media outlets advising he should be styled as “Prince” and not “Chief”, on the basis of being the grandson of one Zulu king and the nephew of another.  The Zulus were once described as a “tribe” and while that reflected the use in ethnography, the appeal in the West was really that it represented a rung on the racist hierarchy of civilization, the preferred model being: white people have nations or states, Africans cluster in tribes or clans.  The colonial administrators recognized these groups had leaders and typically they used the style “chief” (from the Middle English cheef & chef, from the Old French chef & chief (leader), from the Vulgar Latin capus, from the Classical Latin caput (head), from the Proto-Italic kaput, from the primitive Indo-European káput).  As the colonial records make clear, there were “good” chiefs and “troublesome” chiefs, thus the need sometimes to arrange a replacement enstooling.

Unlike in the West where styles of address and orders of precedence were codified (indeed, somewhat fetishized), the traditions in Africa seem to have been more fluid and Mangosuthu Buthelezi didn’t rely on statute or even documented convention when requesting the change.  Instead, he explained that “prince”, reflecting his Zulu royal lineage, not only was appropriate (he may have cast an envious eye at the many Nigerian princes) but was also commonly used as his style by South African media, some organs of government and certainly his own Zulu-based political party (IQembu leNkatha yeNkululeko (the IFP; Inkatha Freedom Party)).  He had in 1953 assumed the Inkosi (chieftainship) of the Buthelezi clan, something officially recognized four years later by Pretoria although not until the early 1980s (when it was thought he might be useful as a wedge to drive into the ANC (African National Congress)) does the Apartheid-era government seem to have started referring to him as “prince”.  Despite that cynical semi-concession, there was never a formal re-designation.

Enthroned & installed: Lindsay Lohan in acrylic & rhinestone tiara during “prom queen scene” in Mean Girls (2004).

In the matter of prom queens and such, it’s correct to say there has been “an enthronement” because even in the absence of a physical throne (in the sense of “a chair”), the accession is marked by the announcement and the placing of the crown or tiara.  This differs from something like the “enthroning” of a king or queen in the UK because, constitutionally, there is no interregnum, the new sovereign assuming the title as the old takes their last breath, and “enthronement” is a term reserved casually to apply to the coronation.  Since the early twentieth century, the palace and government have contrived to make the coronation an elaborate “made for television” ceremony although it has constitutional significance beyond the rituals related to the sovereign’s role as Supreme Governor of the Church of England.

Dame Sarah Mullally in the regalia of Bishop of London; in January 2026, she will take office as Archbishop of Canterbury, with the formal installation to follow in March.  No longer one of the world's more desirable jobs (essentially because it can't be done), all wish her the best of British luck.

In October 2025, the matter of enthronement (or, more correctly, non-enthronement) in the Church of England made a brief splash in some of the less explored corners of social media after it was announced the ceremony marking the accession of the next Archbishop of Canterbury would be conducted in Canterbury Cathedral in March 2026.  The announcement was unexceptional in that it was expected and for centuries Archbishops of Canterbury have come and gone (although the last one was declared gone rather sooner than expected) but what attracted some comment was that the new appointee was to be “installed” rather than the once traditional “enthroned”.  The conclusion some drew was this apparent relegation was related to the next archbishop being Dame Sarah Mullally (née Bowser; b 1962), the first woman to hold the once desirable job, the previous 105 prelates having been men, the first, Saint Augustine of Canterbury (died circa 604), appointed in 597 (not to be confused with the still influential Saint Augustine of Hippo (354–430)).

Despite suspicions the event was in some way being "devalued" because a woman got the job, there is in the church no substantive legal or theological significance in the use of “installed” rather than “enthroned” and the choice reflects modern ecclesiastical practice rather than having any doctrinal or canonical effect.  A person becomes Archbishop of Canterbury through a sequence of juridical acts and these constitute the decisive legal instruments; ceremonial rites have a symbolic value but nothing more, the power of the office vesting from the point at which the legal mechanisms have correctly been executed (in that, things align with the procedures used for the nation’s monarchs).  So the difference is one of tone rather than substance and the “modern” church has for decades sought to distance itself from perceptions it may harbor quasi-regal aspirations or the perpetuation of clerical grandeur and separateness.  At least in Lambeth Palace, the preferred model has long been pastoral; most Church of England bishops have for some time been “installed” in their cathedrals (despite “enthronement” surviving in some press reports, a product likely either of nostalgia or “cut & paste journalism”).  That said, some Anglican provinces outside England still “enthrone”, apparently on the basis “it’s always been done that way” rather than the making of a theological or secular point.

Lambeth Palace, the Archbishop of Canterbury's official London residence.

Interestingly, Archbishops of York (“the church in the north”) continued to enjoy ceremonies of enthronement even after those at Canterbury underwent installations.  Under canon law, the wording literally makes no difference and historians have concluded the older form is clung to for no reason other than “product differentiation”, York Minster often emphasizing its continuity with medieval ceremonial forms; it’s thus a mere cultural artefact, the two ceremonies performing the same liturgical action: seating the archbishop in the cathedra (the chair (throne) of the archbishop).  Because it’s the Archbishop of Canterbury and not York who sits as the “spiritual head of the worldwide Anglican Communion”, in York there’s probably no lingering sensitivity to criticism of continuing with “Romish ways”.  It's not that northern noses are less troubled by the “whiff of popery”, it's just that few now care.

In an indication of how little the wording matters, it’s not clear who was the last Archbishop of Canterbury who could be said to have been “enthroned” because there was never any differentiation of form in the ceremonies and the documents suggest the terms were used casually and even interchangeably.  What can be said is that Geoffrey Fisher (1887–1972; AoC-99: 1945-1961) was installed at a ceremony widely described (in the official programme, ecclesiastical commentaries and other church & secular publications) as an “enthronement” and that was the term used in the government Gazette; that’s as official an endorsement of the term as seems possible because, being an established church, bishops are appointed by the Crown on the advice of the prime minister, although the procedure has long been a “legal fiction” (formalized as such in 2007) because the church’s CNC (Crown Nominations Commission) sends the names to the prime minister who acts as a “postbox”, forwarding them to the palace for the issuing of letters patent confirming the appointment.  When Michael Ramsey (1904–1988; AoC-100: 1961-1974) was appointed, although the term “enthrone” did appear in press reports, the church’s documents seem almost wholly to have used “install” and since then, in Canterbury, it’s been installations all the way.

Pope Pius XII in triple tiara at his coronation, The Vatican, March 1939.

So, by the early 1960s the church was responding, if cautiously, to the growing anti-monarchical sentiment in post-war ecclesiology although this does seem to have been a sentiment of greater moment to intellectuals and theologians than parishioners.  About these matters there was however a kind of ecumenical sensitivity emerging and the conciliar theology later was crystallized (if not exactly codified) in the papers of the Second Vatican Council (Vatican II, 1962-1965, published 1970).  The comparison with the practice in Rome is interesting because there are more similarities than differences although that is obscured by words like “enthronement” and “coronation” being seemingly embedded in the popular (and journalistic) imagination.  That’s perhaps understandable because for two millennia as many as 275 popes (officially the count is 267 but it’s not certain how many there have been because there have been “anti-popes” and allegedly even one woman (although that’s now largely discounted)) have sat “on the throne of Saint Peter” (Peter retrospectively reckoned the first pope) so the tradition is long.  In Roman Catholic canon law, “enthronement” is not a juridical term; the universal term is capio sedem (taking possession of the cathedral (ie “installation”)) and, as in England, an appointment is formalized once the legal instruments are complete, the subsequent ceremony, while an important part of the institution’s mystique, existing for the same reason as it does for the Church of England or the House of Windsor: it’s the circuses part of panem et circenses (bread and circuses).  Unlike popes, who once had coronations, archbishops of Canterbury never did because they made no claim to temporal sovereignty.

Pope Paul VI in triple tiara at his coronation, The Vatican, June 1963.  It was the last papal coronation.

So, technically, modern popes are “installed as Bishop of Rome” and in recent decades the Holy See has adjusted the use of accoutrements to dispel any implication of an “enthronement”.  The last papal coronation at which a pope was crowned with the triple tiara was that of Paul VI (1897-1978; pope 1963-1978) but, in “an act of humility”, he removed it, placing it on the altar where, figuratively, it has since sat.  Actually, Paul VI setting aside the triple tiara as a symbolic renunciation of temporal and monarchical authority was a bit overdue because the Papal States had been lost to the Holy See with the unification of Italy in 1870, though the Church refused to acknowledge that reality; in protest, no pope for decades set foot outside the Vatican.  However, in the form of the Lateran Treaty (1929), the Holy See entered into a concordat with the Italian state whereby (1) the Vatican was recognized as a sovereign state and (2) the church was recognized as Italy’s state religion, in exchange for which the Holy See acknowledged the territorial and political reality.  Despite that, until 1963 the triple tiara (one tier of which was said to symbolize the pope’s temporal authority over the Papal States) appeared in the coronations of Pius XII (1876-1958; pope 1939-1958), John XXIII (1881-1963; pope 1958-1963) and Paul VI (who didn’t formally remove the rite of papal coronation from the Ordo Rituum pro Ministerii Petrini Initio Romae Episcopi (Order of Rites for the Beginning of the Petrine Ministry of the Bishop of Rome (the liturgical book detailing the ceremonies for a pope's installation)) until 1975).

The Chair of St Augustine.  In church circles, archbishops of Canterbury are sometimes said to "occupy the Chair of St Augustine".

The Chair of St Augustine sits in Canterbury Cathedral but technically, an AoC is “twice installed”: once on the diocesan throne as Bishop of the see of Canterbury and once on the Chair of St Augustine as Primate of All England (the nation's first bishop) and spiritual leader of the worldwide Anglican Communion.  So, there’s nothing unusual in Sarah Mullally being “installed” rather than “enthroned”, as was the universal terminology between the Reformation and the early twentieth century.  Linguistically, legally and theologically, the choice of words is a non-event and anyone who wishes to describe Dame Sarah as “enthroned” may do so without fear of condemnation, excommunication or a burning at the stake.  What is most likely is that of those few who notice, fewer still are likely to care.