
Saturday, January 10, 2026

Aphantasia

Aphantasia (pronounced ay-fan-tay-zhuh)

The inability voluntarily to recall or form mental images.

2015: The word (not the diagnosis) was coined by UK neurologist Dr Adam Zeman (b 1957), neuropsychologist Dr Michaela Dewar (b 1976) and Italian neurologist Sergio Della Sala (b 1955), first appearing in the paper Lives without imagery.  The construct was a- (from the Ancient Greek ἀ- (a-), used as a privative prefix meaning “not”, “without” or “lacking”) + phantasía (from the Greek φαντασία (“appearance”, “imagination”, “mental image” or “power of imagination”), from φαίνω (phaínō) (“to show”, “to make visible” or “to bring to light”)).  Literally, aphantasia can be analysed as meaning “an absence of imagination” or “an absence of mental imagery” and in modern medicine it’s defined as “the inability voluntarily to recall or form mental images”.  Even in Antiquity, there was some meaning shift in phantasía: Plato (circa 427-348 BC) used the word to refer generally to representations and appearances whereas Aristotle (384-322 BC) added a technical layer, his sense being a faculty mediating between perception (aisthēsis) and thought (noēsis).  It’s the Aristotelian adaptation (the mind’s capacity to form internal representations) which flavoured the use in modern neurology.  Aphantasia is a noun and aphantasic is a noun & adjective; the noun plural is aphantasics.

Scuola di Atene (The School of Athens, circa 1511), fresco by Raphael (Raffaello Sanzio da Urbino, 1483–1520), Apostolic Palace, Vatican Museums, Vatican City, Rome.  Plato and Aristotle are the figures featured in the centre.

In popular use, the word “aphantasia” can be misunderstood because of the paths taken in English by “phantasy”, “fantasy” and “phantasm”, all derived from the Ancient Greek φαντασία (phantasía) meaning “appearance, mental image, imagination”.  In English, this root was picked up via Latin and French but the multiple forms each evolved along distinct semantic trajectories.  The fourteenth-century phantasm came to mean “apparition, ghost, illusion” so was used of “something deceptive or unreal”, the connotation being “the supernatural; spectral”.  This appears to be the origin of the association of “phantas-” with unreality or hallucination rather than normal cognition.  In the fifteenth & sixteenth centuries, the spellings phantasy & fantasy were for a time interchangeable although divergence came with phantasy used in its technical senses of “mental imagery”; “faculty of imagination”; “internal representation”, this a nod to Aristotle’s phantasía.  Fantasy is the familiar modern form, used to suggest “a fictional invention; daydream; escapism; wish-fulfilment”, the connotation being “imaginative constructions (in fiction); imaginative excess (in the sense of “unreality” or the “dissociative”); indulgence (as in “speculative or wishful thoughts”)”.

While the word “aphantasia” didn’t exist until 2015, in the editions of the American Psychiatric Association's (APA) Diagnostic and Statistical Manual of Mental Disorders (DSM) published between 1952 (DSM-I) and 2013 (DSM-5), there was no discussion (or even mention) of a condition anything like “an inability voluntarily to recall or form mental images”.  That’s because despite being “a mental condition” induced by something happening (or not happening) in the brain, the phenomenon has never been classified as “a mental disorder”.  Instead it’s a cognitive trait or variation in the human condition and technically is a spectrum condition, the “pure” aphantasic being one end of the spectrum, the hyperphantasic (highly vivid, lifelike mental imagery, sometimes called a “photorealistic mind's eye”) the other.  That would of course imply the comparative adjective would be “more aphantasic” and the superlative “most aphantasic” but neither are standard forms.

The rationale for the “omission” was the DSM’s inclusion criteria, which require some evidence of clinically significant distress or functional impairment attributable to a condition.  Aphantasia, in isolation, does not reliably meet this threshold in that many individuals have for decades functioned entirely “normally” without being aware they’re aphantasic while others presumably died of old age in similar ignorance.  That does of course raise the intriguing prospect the mental health of some patients may have been adversely affected by the syndrome only by a clinician informing them of their status, thus making them realize what they were missing.  This, the latest edition of the DSM (DSM-5-TR (2022)) does not discuss.  The DSM does discuss imagery and perceptual phenomena in the context of other conditions (PTSD (post-traumatic stress disorder), psychotic disorders, dissociative disorders etc), but these references are to abnormal experiences, not the lifelong absence of imagery.  To the DSM’s editors, aphantasia remains a recognized phenomenon, not a diagnosis.

Given that aphantasia concerns aspects of (1) cognition, (2) inner experience and (3) mental representation, it wouldn’t seem unreasonable to expect the condition now described as aphantasia would have appeared in the DSM, even if only in passing or in a footnote.  However, in the seventy years between 1952 and 2022, across every edition and revision, there is no mention, even in DSM-5-TR (2022), the first volume released since the word was in 2015 coined.  That apparently curious omission is explained by the DSM never having been a general taxonomy of mental phenomena.  Instead, it’s (an ever-shifting) codification of the classification of mental disorders, defined by (1) clinically significant distress and/or (2) functional impairment and/or (3) a predictable course, prognosis and treatment relevance.  As a general principle the mere existence of an aphantasic state meets none of these criteria.

Crooked Hillary Clinton in orange Nina McLemore pantsuit, 2010.  If in response to the prompt "Imagine a truthful person" one sees an image of crooked Hillary Clinton (b 1947; US secretary of state 2009-2013), that is obviously wrong but it is not an instance of aphantasia because the image imagined need not be correct, it needs just to exist.

The early editions (DSM-I (1952) & DSM-II (1968)) were slanted heavily to the psychoanalytic, focusing on psychoses, neuroses and personality disorders with no systematic treatment of cognition as a modular function; the matter of mental imagery (even as abstract thought separated from an imagined image), let alone its absence, was wholly ignored.  Intriguingly, given what was to come in the field, there was no discussion of cognitive phenomenology beyond gross disturbances (ie delusions & hallucinations).  Even with the publication of the DSM-III (1980) & DSM-III-R (1987), advances in scanning and surgical techniques, cognitive psychology and neuroscience seem to have made little contribution to what the DSM’s editorial board decided to include and although DSM-III introduced operationalized diagnostic criteria (as a part of a more “medicalised” and descriptive psychiatry), the entries still were dominated by a focus on dysfunctions impairing performance, the argument presumably that it was possible (indeed, probably typical) for those with the condition to lead full, happy lives; the absence of imagery ability thus was not considered a diagnostically relevant variable.  Even in sections on (1) amnestic disorders (a class of memory loss in which patients have difficulty forming new memories (anterograde) or recalling past ones (retrograde), not caused by dementia or delirium but a consequence of brain injury, stroke, substance abuse, infections or trauma), with treatment focusing on the underlying cause and rehabilitation, (2) organic mental syndromes or (3) neuro-cognitive disturbance, there was no reference to voluntary imagery loss as a phenomenon in its own right.

Although substantial advances in cognitive neuroscience meant by the 1990s neuropsychological deficits were better recognised, both the DSM-IV (1994) and DSM-IV-TR (2000) continued to be restricted to syndromes with behavioural or functional consequences.  In a way that was understandable because the DSM still was seen by the editors as a manual for working clinicians who were most concerned with helping those afflicted by conditions with clinical salience; the DSM has never wandered far into subjects which might be matters of interesting academic research and mental imagery continued to be mentioned only indirectly, hallucinations (percepts without stimuli) and memory deficits (encoding and retrieval) both discussed only in terms of their effect on a patient, not as phenomena in their own right.  The first edition for the new century was DSM-5 (2013) and what was discernible was that discussions of major and mild neuro-cognitive disorders were included, reflecting the publication’s enhanced alignment with neurology but even then, imagery ability was not assessed or scaled: not possessing the power of imagery was not listed as a symptom, specifier, or associated feature.  So there has never in the DSM been a category for benign cognitive variation and that is a product of a deliberate editorial stance rather than an omission, many known phenomena not psychiatrised unless in some way “troublesome”.

The term “aphantasia” was coined to describe individuals who lack voluntary visual mental imagery, often discovered incidentally and not necessarily associated with brain injury or psychological distress.  In 2015 the word was novel but the condition had been documented for more than a century, Sir Francis Galton (1822–1911) in a paper published in 1880 describing what would come to be called aphantasia.  That work was a statistical study on mental imagery which doubtless was academically solid but Sir Francis’s reputation later suffered because he was one of the leading lights in what was in Victorian times (1837-1901) the respectable discipline of eugenics.  Eugenics rightly became discredited so Sir Francis was to some extent retrospectively “cancelled” (something like the Stalinist concept of “un-personing”) and these days his seminal contribution to the study of behavioural genetics is acknowledged only grudgingly.

Galton in 1880 noted a wide variation in “visual imagination” (ie it was understood as a “spectrum condition”) and in the same era, in psychology publications the preferred term seems to have been “imageless thought”.  In neurology (and trauma medicine generally) there were many reports of patients losing the power of imagery after a brain injury but no agreed name was ever applied because the interest was more in the injury.  The unawareness that some people simply lacked the faculty presumably extended to the general population because, as Galton wrote: “To my astonishment, I found that the great majority of the men of science to whom I first applied, protested that mental imagery was unknown to them, and they looked on me as fanciful and fantastic in supposing that the words ‘mental imagery’ really expressed what I believed everybody supposed them to mean. They had no more notion of its true nature than a colour-blind man who has not discerned his defect has of the nature of colour.”

His paper must have stimulated interest because one psychologist reported some subjects possessing what he called a “typographic visual type” imagination in which ideas (which most would visualize as an image of some sort) would manifest as “printed text”.  That is intriguing because, in the same way a computer in some respects doesn’t distinguish between an image file (jpeg, TIFF, webp, avif etc) which is a picture of (1) someone and one which is a picture of (2) their name in printed form, it would seem to imply at least some who are somewhere on the aphantasia spectrum retain the ability to visualize printed text, just not the object referenced.  Professor Zeman says he first became aware of the condition in 2005 when a patient reported having lost the ability to visualize following minor surgery and after the case was in 2010 documented in the medical literature in the usual way, it provoked a number of responses in which multiple people informed Zeman they had never in their lifetime been able to visualize objects.  This was the origin of Zeman and his collaborators coining “congenital aphantasia”, describing individuals who never enjoyed the ability to generate voluntary mental images.  Because it was something which came to general attention in the age of social media, great interest was triggered in the phenomenon and a number of “on-line tests” were posted, the best-known of which was the request for readers to “imagine a red apple” and rate their “mind's eye” depiction of it on a scale from 1 (photorealistic visualisation) through to 5 (no visualisation at all).  For many, this was variously (1) one’s first realization one was aphantasic or (2) an appreciation one’s own ability or inability to visualise objects was not universal.
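As a toy illustration of the viral “red apple” self-rating (using the 1-5 scale described above, where 1 is photorealistic visualisation and 5 is no visualisation at all), a minimal sketch in Python might look like the following; the descriptive band labels are inventions for this example, not clinical categories or any published instrument.

# Toy sketch of the "imagine a red apple" self-rating described above.
# The 1-5 scale follows the description in the text (1 = photorealistic, 5 = no image at all);
# the descriptive bands are illustrative assumptions, not clinical categories.

def describe_apple_rating(rating: int) -> str:
    """Map a 1-5 self-rating of the imagined apple to a rough descriptive band."""
    bands = {
        1: "vivid, photorealistic imagery (toward the hyperphantasic end)",
        2: "reasonably clear imagery",
        3: "dim or sketchy imagery",
        4: "a vague impression with little visual detail",
        5: "no visual image at all (consistent with aphantasia)",
    }
    if rating not in bands:
        raise ValueError("rating must be an integer from 1 to 5")
    return bands[rating]

print(describe_apple_rating(5))  # no visual image at all (consistent with aphantasia)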

How visualization can manifest: Lindsay Lohan and her lawyer in court, Los Angeles, December 2011.  If an aphantasic person doesn't know about aphantasia and doesn't know other people can imagine images, their lives are probably little different from anyone else's; it's just that their minds have adapted to handle concepts in another way.

Top right: “Normal” visualization (thought to be possessed by most of the population) refers to the ability to imagine something like a photograph of what’s being imagined.  This too is a spectrum condition in that some will be able to imagine an accurate “picture”, something like an HD (high definition) photograph, while others will “see” something less detailed, sketchy or even wholly inaccurate.  However, even if when asked to visualize “an apple” one instead “sees a banana”, that is not an indication of aphantasia, a condition which describes only an absence of an image.  Getting it that wrong is an indication of something amiss but it’s not aphantasia.

Bottom left: “Seeing” text in response to being prompted to visualize something was the result Galton in 1880 reported as such a surprise.  It means the brain understands the concept of what is being described; it just can’t be imagined as an image.  This is one manifestation of aphantasia but it’s not related to the “everything is text” school of post-modernism.  Jacques Derrida’s (1930-2004) fragment “Il n'y a pas de hors-texte” (literally “there is no outside-text”) is one of the frequently misunderstood phrases from the murky field of deconstruction but it has nothing to do with aphantasia (although dedicated post-modernists probably could prove a relationship).

Bottom right: The absence of any image (understood as a “blankness” which does not necessarily imply “whiteness” or “blackness”, although this is the simple way to illustrate the concept), whether of text or of something to some degree photorealistic, is classic aphantasia.  The absence does not mean the subject doesn’t understand the relevant object or concept; it means only that their mental processing does not involve imagery and for as long as humans have existed, many must have functioned in this way, their brains adapted to the imaginative range available to them.  What this must have meant was many became aware of what they were missing only when the publicity about the condition appeared on the internet, an interesting example of “diagnostic determinism”.

WebMD's classic Aphantasia test.

The eyes are an out-growth of the brain and WebMD explains aphantasia is caused by the brain’s visual cortex (the part of the brain that processes visual information from the eyes) “working differently than expected”, noting the often quoted estimate of it affecting 2-4% of the population may be understated because many may be unaware they are “afflicted”.  It’s a condition worthy of more study because aphantasics handle the characteristic by processing information differently from those who rely on visual images.  There may be a genetic element in aphantasia and there’s interest too among those researching “Long Covid” because the symptom of “brain fog” can manifest much as does aphantasia.

Aphantasia may have something to do with consciousness because aphantasics can have dreams (including nightmares) which can to varying degrees be visually rich.  There’s no obvious explanation for this but while aphantasia is the inability voluntarily to generate visual mental imagery while awake, dreaming is an involuntary perceptual experience generated during sleep; while both are mediated by neural mechanisms, these clearly are not identical but presumably must overlap.  The conclusions from research at this stage remain tentative but the current neuro-cognitive interpretation seems to suggest voluntary (conscious) imagery relies on top-down activation of the visual association cortex while dream (unconscious) imagery relies more on bottom-up and internally driven activation during REM (rapid eye movement) sleep.  What that would seem to imply is that in aphantasia, the former pathway is impaired (or at least inaccessible), while the latter may remain intact (or accessible).

The University of Queensland’s illustration of the phantasia spectrum.

The opposite syndrome is hyperphantasia (having extremely vivid, detailed, and lifelike mental imagery) which can be a wonderful asset but can also be a curse, rather as hyperthymesia (known also as HSAM (Highly Superior Autobiographical Memory) and colloquially as “total recall”) can be disturbing.  Although it seems not to exist in the sense of “remembering everything, second-by-second”, there are certainly those who have an extraordinary recall of “events” in their life and this can have adverse consequences for mental health because one of the mind’s “defensive mechanisms” is forgetting or at least suppressing memories which are unwanted.  Like aphantasia & hyperphantasia, hyperthymesia is not listed by the DSM as a mental disorder; it is considered a rare cognitive trait or neurological phenomenon although like the imaging conditions it can have adverse consequences and these include disturbing “flashbacks”, increased rumination and increased rates of anxiety or obsessive tendencies.

Tuesday, January 6, 2026

Inamorata

Inamorata (pronounced in-am-uh-rah-tuh)

A woman with whom one is in love; a female lover.

1645-1655: From the Italian innamorata (mistress, sweetheart), noun use of the feminine form of innamorato (lover, boyfriend) (the noun plural innamoratos or innamorati), past participle of innamorare (to inflame with love), the construct being in- (in) + amore (love), from the Latin amor.  A familiar modern variation is enamor.  Inamorata is a noun; the noun plural is inamoratas.

Words like inamorata litter English and endure in their niches, not just because poets find them helpful but because they can be used to convey subtle nuances in a way a word which appears synonymous might obscure.  One might think the matter of one’s female lover might be linguistically (and sequentially) covered by (1) girlfriend, (2) fiancé, (3) wife and (4) mistress but to limit things to those is to miss splitting a few hairs.  A man’s girlfriend is a romantic partner though not of necessity a sexual one because some religions expressly prohibit such things without benefit of marriage and there are the faithful who follow these teachings.  One can have as many girlfriends as one can manage but the expectation is they should be enjoyed one at a time.  Women can have girlfriends too but (usually) they are “friends who are female” rather than anything more, except of course among lesbians where the relationship is the same as with men.  Gay men too have girlfriends who are “female friends”, some of whom may be “fag hags”, a term which now is generally a homophobic slur unless used within the LGB factions of the LGBTQQIAAOP community where it can be jocular or affectionate.

A fiancé is a woman to whom one is engaged to be married, in many jurisdictions once a matter of legal significance because an offer of marriage could be enforced under the rules of contract law.  While common law courts didn’t go as far as ordering “specific performance of the contract”, they would award damages on the basis of a “breach of promise”, provided it could be adduced that three of the four essential elements of a contract existed: (1) offer, (2) certainty of terms and (3) acceptance.  The fourth component: (4) consideration (ie payment), wasn’t mentioned because it was assumed to be implicit in the nature of the exchange; a kind of deferred payment as it were.  It was one of those rarities in common law where things operated almost wholly in favor of women in that they could sue a man who changed his mind while they were free to break off an engagement without fear of legal consequences, though there could be social and familial disapprobation.  Throughout the English-speaking world, the breach of promise tort in marriage matters has almost wholly been abolished, remaining on the books in a couple of US states (not all of which lie south of the Mason-Dixon Line) but even where it exists it’s now a rare action and one likely to succeed only in exceptional circumstances or where a particularly fragrant plaintiff manages to charm a particularly sympathetic judge.

The spelling fiancé (often as fiance) is now common for all purposes.  English borrowed both the masculine (fiancé) and feminine (fiancée) from the French verb fiancer (to get engaged) in the mid nineteenth century and that both spellings were used is an indication it was one of those forms which was, as an affectation, kept deliberately foreign, because English typically doesn’t use gendered endings.  Both the French forms were ultimately from the Latin fīdere (to trust), a root familiar in law and finance in the word fiduciary, from the Latin fīdūciārius (held in trust), from fīdūcia (trust) which, as a noun & adjective, describes relationships between individuals and entities which rely on good faith and accountability.  Pronunciation of fiancé and fiancée is identical so the use of the differentiated forms faded by the late twentieth century and even publications like Country Life and Tatler which like writing with class-identifiers seem to have updated.  Anyway, because English doesn’t have word endings that connote gender, differentiating between the male and the female betrothed would seem unfashionable in the age of gender fluidity but identities exist as they’re asserted and one form or the other might be deployed as a political statement by all sides in the gender wars.

Model Emily Ratajkowski's (b 1991) clothing label is called Inamorata, a clever allusion to her blended nickname EmRata.  This is Ms Ratajkowski showing Inamorata’s polka-dot line in three aspects.

Wife was from the Middle English wyf & wif, from the Old English wīf (woman, wife), from the Proto-West Germanic wīb, from the Proto-Germanic wībą (woman, wife) and similar forms existed as cognates in many European languages.  The wife was the woman one had married and by the early twentieth century, in almost all common law jurisdictions (except those where systems of tribal law co-existed) it was (more or less) demanded one may have but one at a time.  Modern variations include the “common-law wife” and the “de facto wife”.  The common-law marriage (also known as the "sui iuris (from the Latin and literally “of one's own right”) marriage", the “informal marriage” and the “non-ceremonial marriage”) is a kind of legal quasi-fiction whereby certain circumstances can be treated as a marriage for many purposes even though no formal documents have been registered, all cases assessed on their merits.  Although most Christian churches don’t long dwell on the matter, this is essentially what marriage in many cases was before the institutional church carved out its role.  In popular culture the term is used loosely to refer to just about any un-married co-habitants regardless of whether or not the status has been acknowledged by a court.  De facto was from the Latin de facto, the construct being dē (from, by) + the ablative of factum (fact, deed, act).  It translates as “in practice, what actually is regardless of official or legal status” and is thus differentiated from de jure, the construct being dē (from) + iūre (law), which describes something’s legal status.  In general use, a common-law wife and a de facto wife are often thought the same thing but the latter differs in that in some jurisdictions the parameters which define the status are codified in statute whereas a common-law wife can be one declared by a court on the basis of evidence adduced.

Mistress dates from 1275–1325 and was from the Middle English maistresse, from the Old & Middle French maistresse (in Modern French maîtresse), feminine of maistre (master), the construct being maistre (master) + -esse or –ess (the suffix which denotes a female form of otherwise male nouns denoting beings or persons), the now rare derived forms including the adjective mistressed and the noun mistressship.  In an example of the patriarchal domination of language, when a woman had acquired complete knowledge of or skill in something, she was said to have “mastered” the topic.  A mistress (in this context) was a woman who had a continuing, extramarital sexual relationship with one man, especially a man who, in return for an exclusive and continuing liaison, provides her with financial support.  The term (like many) has become controversial and critics (not all of them feminists) have labeled it “archaic and sexist”, suggesting the alternatives “companion” or “lover” but neither convey exactly the state of the relationship so mistress continues to endure.  The critics have a point in that mistress is both “loaded” and “gendered” given there’s no similarly opprobrious term for adulterous men but the word is not archaic; archaic words are those now rare to the point of being no longer in general use and “mistress” has hardly suffered that fate, thought-crime hard to stamp out.

This is Ms Ratajkowski showing Inamorata’s polka-dot line in another three aspects.

Inamorata was useful because while it had a whiff of the illicit, that wasn’t always true; what it did always denote was a relationship of genuine love whatever the basis, so one’s inamorata could also be one’s girlfriend, fiancé or mistress though perhaps not one’s wife, however fond one might be of her.  An inamorata would be a particular flavor of mistress in a way paramour or leman didn't imply.  Paramour was from the Middle English paramour, paramoure, peramour & paramur, from the Old French par amor (literally “for love's sake”), the modern pronunciation apparently an early Modern English re-adaptation of the French and a paramour was a mistress, the choice between the two perhaps influenced by the former tending to the euphemistic.  The archaic leman is now so obscure that it tends to be used only by the learned as a term of disparagement against women, in the same way a suggestion of mendaciousness is thought a genteel way to call someone a liar.  Dating from 1175-1225, it was from the Middle English lemman, a variant of leofman, from the Old English lēofmann (lover; sweetheart (and attested also as a personal name)), the construct being lief + man (beloved person).  Lief was from the Middle English leef, leve & lef, from the Old English lēof (dear), from the Proto-Germanic leubaz and was cognate with the Saterland Frisian ljo & ljoo, the West Frisian leaf, the Dutch lief, the Low German leev, the German lieb, the Swedish and Norwegian Nynorsk ljuv, the Gothic liufs, the Russian любо́вь (ljubóv) and the Polish luby.  Man is from the Middle English man, from the Old English mann (human being, person, man), from the Proto-Germanic mann (human being, man) and probably ultimately from the primitive Indo-European mon (man).  A linguistic relic, leman applied originally either to men or women and had something of a romantic range.  It could mean someone of whom one was very fond or something more, although usage meant the meaning quickly drifted to the latter: someone's sweetheart or paramour.  In the narrow technical sense it could still be applied to men although it has for so long been a deliberately archaic device limited to women that to do so would now just confuse.

About the concubine, while there was a tangled history, there has never been much confusion.  Dating from 1250-1300, concubine was from the Middle English concubine (a paramour, a woman who cohabits with a man without being married to him) from the Anglo-Norman concubine, from the Latin concubīna, derived from cubare (to lie down), the construct being concub- (variant stem of concumbere & concumbō (to lie together)) + -ina (the feminine suffix).  The status (a woman who cohabits with a man without benefit of marriage) existed in Hebrew, Greek, Roman and other civilizations, the position sometimes recognized in law as "wife of inferior condition, secondary wife" and there’s much evidence of long periods of tolerance by religious authorities, extended both to priests and the laity.  The concubine of a priest was sometimes called a priestess although this title was wholly honorary and of no religious significance although presumably, as a vicar's wife might fulfil some role in the parish, they might have been delegated to do this and that.

Once were inamoratas: Lindsay Lohan with former special friend Samantha Ronson, barefoot in Los Cabos, Mexico, 2008.

Under Roman civil law, the parties were the concubina (feminine) and the concubinus (masculine).  Usually, the concubine was of a lower social order but the institution, though ranking below matrimonium (marriage), was a cut above adulterium (adultery) and certainly more respectable than stuprum (illicit sexual intercourse, literally "disgrace", from stupere (to be stunned, stupefied)) and not criminally sanctioned like rapere (“sexually to violate”, from raptus, past participle of rapere, which when used as a noun meant "a seizure, plundering, abduction").  In Medieval Latin it meant also "forcible violation" & "kidnapping" and a misunderstanding of the context in which the word was then used has caused problems in translation ever since.  Concubinage is, in the West, a term largely of historic interest.  It describes a relationship in which a woman engages in an ongoing conjugal relationship with a man to whom she is not or cannot be married to the full extent of the local meaning of marriage.  This may be due to differences in social rank, an existing marriage, religious prohibitions, professional restrictions, or a lack of recognition by the relevant authorities.  Historically, concubinage was often entered into voluntarily because of an economic imperative.  In the modern vernacular, wives use many words to describe their husbands’ mistress(es).  They rarely use concubine.  They might however be tempted to use courtesan, which was from the French courtisane, from the Italian cortigiana, feminine of cortigiano (courtier), from corte (court), from the Latin cohors.  A courtesan was a prostitute but a high-priced one who attended only to rich or influential clients and the origin of the term was its use of the mistresses of kings or the nobles in the court, the word mistress too vulgar to be used in such circles.

Thursday, January 1, 2026

Acersecomic

Acersecomic (pronounced a-sir-suh-kome-ick)

A person whose hair has never been cut.

1623: From the Classical Latin acersecomēs (a long-haired youth), the word borrowed from the earlier Ancient Greek form ἀκερσεκόμης (akersekómēs) (with unshorn hair), the construct being the prefix a- (not; without) + keirein (to cut short) + komē (the hair of the head (the source of the –comic)).  The Latin acersecomēs wasn’t a term of derision or disapprobation, merely descriptive, it being common for Roman and Greek youth to wear their hair long until manhood.  Acersecomic appeared in English dictionaries as early as 1656, the second instance noted some 30 years later.  Although of dubious linguistic utility even in seventeenth century English, such entries weren’t uncommon in early English dictionaries as editors trawled through lists of words from antiquity to conjure up something, there being some marketing advantage in being the edition with the most words.  It exists now in a lexicographical twilight zone, its only apparent purpose being to appear as an example of a useless word.  The -comic element of the word is interesting.  It’s from the Ancient Greek komē in one of the senses of coma: a diffuse cloud of gas and dust that surrounds the nucleus of a comet.  From antiquity thus comes the sense of long, flowing hair summoning an image of the comet’s trail in the sky.  The same -comic ending turns up in two terms that are probably more obscure even than acersecomic: acrocomic (having hair at the tip, as in a goat’s beard (acro- translates as “tip”)) and xanthocomic (a person with yellow hair), from the Greek xanthos (yellow).  Acersecomic & acersecomism are nouns and acersecomically is an adverb; the noun plural is acersecomics.

Lindsay Lohan as Rapunzel, The Real Housewives of Disney, Saturday Night Live (SNL), 2012.

Intriguingly, even if someone is acersecomic, that does not of necessity mean they will have really long hair.  As explained by Healthline, there are four stages in hair-growth: (1) Growing phase, (2) Transition phase, (3) Resting phase and (4) Shedding phase; the first three phases (anagen, catagen & telogen) encompass the growth & maturation of hair and the activity of the hair follicles that produce individual hairs while during the final (exogen), the “old” hair sheds and, usually, a new hair is getting ready to take its place.  Each phase has its own dynamics but the behavior can be affected by age, nutrition and health conditions.

A possible acersecomic, although there is some evidence of at least the odd trim.  This is one of the less confronting images at People of Walmart, which documents certain aspects of the American socio-economic experience in the social media age.  Users seem divided whether People of Walmart is a celebration of DEI (diversity, equity and inclusion), a chronicle of decadence or a condemnation of deviance.

The anagen phase has the longest duration but is variable depending on the location of the follicles; the hair on one’s scalp has the longest anagen and it can last anywhere between 2-8 years.  During the anagen, the follicles “push out” hairs that will continue to grow until they’re cut or reach the end of their life span and fall out.  Over the population, typically, at any moment, as many as 90% of the hairs on the scalp will be in the anagen phase.  Trichologists (those who study the hair or scalp) list the catagen as the “transitional stage” because it lasts only some two weeks, during which follicles shrink and hair growth slows; it’s in this process the hair separates from the bottom of the hair follicle yet remains in place during the final days of growth.  At any point, no more than 3% of the hairs on the scalp will be in the catagen.  The telogen, lasting 2-3 months, is called the “resting stage” and gains the description from the affected hairs (some 10%) neither growing nor tending to fall out; it’s at this point new hairs begin to form in follicles that have just released hairs during the catagen.  Historically, the exogen (shedding stage) was regarded as the later element of the telogen but the modern practice in trichology is to list it as the fourth stanza in the cycle.  Didactically, that does make sense although technically, the exogen is an extension of the telogen, being the point at which hair is shed from the scalp, the volume affected by washing, brushing and even the wearing of tightly fitted headwear.  Losing as many as 100 hairs per day is typical and the exogen can last several months, new hairs growing in the follicles as old ones fall away.
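Because terminal length is effectively the product of growth rate and the duration of the anagen phase, a little back-of-envelope arithmetic shows why the 2-8 year range quoted above matters so much.  The sketch below (in Python) assumes a scalp growth rate of about 1.25 cm per month, a commonly cited approximation rather than a figure from this text.

# Back-of-envelope arithmetic: the maximum attainable hair length is roughly the
# growth rate multiplied by the duration of the anagen (growing) phase.
# The ~1.25 cm/month rate is an assumed, commonly cited approximation;
# the 2-8 year anagen range is the one quoted above.

def max_hair_length_cm(anagen_years: float, growth_cm_per_month: float = 1.25) -> float:
    """Estimate the longest a scalp hair can grow before its follicle leaves the anagen phase."""
    return anagen_years * 12 * growth_cm_per_month

for years in (2, 4, 8):
    print(f"anagen of {years} years -> roughly {max_hair_length_cm(years):.0f} cm")

# A 2-year anagen caps hair at about 30 cm (around shoulder length) while an
# 8-year anagen allows about 120 cm, which is why some can grow hair to the waist and beyond.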

Genuinely acersecomic, 15-year-old Skye Merchant had her first haircut in July 2021, part of her fund-raising efforts for cancer research.  The trimmed locks were donated to perruquiers (wigmakers) making wigs for cancer patients who'd lost their hair as a result of undergoing chemotherapy.

What all that means is that whether or not acersecomic, the maximum length one’s hair can attain is determined wholly by one’s genetics; in other words, it’s determined well before birth and while it’s possible to increase the rate of growth by attention to nutrition and maintaining a “healthy lifestyle”, nothing can (yet) change one’s DNA and that means some can grow hair to their ankles while for others it will never extend beyond the shoulders.  While, all else being equal, the state of one’s hair depends on genetics and hormone levels (mechanisms largely locked in before birth), trichologists recommend (1) maintaining protein intake (hair being composed largely of protein), (2) ensuring nutrient intake is at the recommended daily level (vitamin D, vitamin C, vitamin B12, zinc & folic acid being those most associated with hair growth although iron is especially important for women) and (3) reducing physical and mental stress, something sometimes easier said than done.  There are also a variety of medical conditions which can affect hair including a misbehaving immune system but in mental health the two most documented are trichotillomania (an irresistible urge to pull hairs from the follicles) and trichophagia (the compulsion to eat hair, wool and other fibres), a form of pica (a disorder characterized by craving and appetite for non-edible substances such as ice, clay, chalk, dirt or sand and named for the jay or magpie (pīca in Latin), based on the idea the birds will eat almost anything).  A noted feature of the fifth edition of the American Psychiatric Association's (APA) Diagnostic and Statistical Manual of Mental Disorders (DSM-5 (2013)) was the more systematic approach taken to eating disorders, definitional criteria being set out for the range of behaviours within that general rubric.

Suspected acersecomic, US suffragist and women's rights activist Maud Wood Park (1871–1955), photographed circa 1896 (the subject thus in her mid-twenties) in the studio of Frank W. Legg, at 18 Montvale Avenue, Woburn, Massachusetts.

These prints in sepia were mounted in a cardboard surround called a “cabinet card”, a popular format for commercial photographers which had first gone on sale in the mid-nineteenth century.  Because the cardboard was effective in protecting the photograph from damage, many cabinet cards have survived in museums or private collections and they’re an interesting part of the historic record, representing the way the middle-class wished to be presented.  In the Victorian era (1837-1901), long, luxuriant hair was valued as a symbol of feminine beauty and not until the 1920s did shorter styles become truly popular.  These images are untypical of the genre because the hair is unbound whereas most subjects were photographed with their tresses restrained in the way stereotypically it’s imagined Victorian women were compelled to adopt; she was after all a proto-feminist and, as it would be for decades afterwards, hair could be a symbol of defiance against social convention.  Many of the surviving cabinet cards are the work of the obviously prolific Mr Legg and the site of his studio in Woburn, Massachusetts is now the Woburn Bowladrome which, off and on, has operated since 1940 although there’s now a large “JESUS” painted on the roof, presumably a recent addition by an owner or perhaps the hand of God.  Now again under new management, the Woburn Bowladrome hosts candlepin, a variant of ten-pin bowling most popular in the Canadian maritime provinces and the north-east of the US.  The game uses tall, narrow pins and a small, palm-sized ball with a scoring system allowing players three chances per frame to knock down all ten pins, with the fallen pins remaining on the lane to be used in subsequent shots within the active frame.

In recent interviews, Russian model and singer Olga Naumova didn't make clear if she was truly an acersecomic but did reveal that in infancy her hair was so thin her parents covered her head, usually with a "babushka" headscarf (ie the style typically associated with Russian grandmothers).  It's obviously since flourished and her luxuriant locks are now 62 inches (1.57 m) long, a distinctive feature she says attracts (1) requests for selfies, (2) compliments, (3) propositions decent & otherwise, (4) public applause (in Thailand), (5) requests for technical advice (usually from women asking about shampoo, conditioner & other product) while (6) on-line, men sometimes suggest marriage, often by the expedient of elopement.

Olga Naumova and hair in motion.

Perhaps surprisingly, the Moscow-based model says she doesn't do "anything extraordinary" to maintain her mane beyond shampoo, conditioner and the odd oil treatment, adding the impressive length and volume she attributes wholly to the roll of the genetic dice.  Her plaits and braids are an impressive sight and their creation can take over two hours, depending on their number and intricacy.  She did admit she wears the "snatched high ponytail" made famous by the singer Ariana Grande (b 1993) only briefly for photo-shoots because the weight of her hair makes it "too painful" to long endure.

Greta Thunberg: BB (before-bob) and AB (after-bob).

What's not clear is whether, in the age of global warming, acersecomism will remain socially acceptable and Greta Thunberg (b 2003), something of a benchmark for environmental consciousness, in 2025 opted for a bob (one straddling chin & shoulder-length).  Having gained fame as a weather forecaster, the switch to shorter hair appears to have coincided with her branching out from environmental activism to political direct action in the Middle East.  While there's no doubt she means well, it’s something that will end badly because while the matter of greenhouse gases in the atmosphere can (over centuries) be fixed, some problems are insoluble and the road to the Middle East is paved six-feet deep with good intentions.  Ms Thunberg seems not to have discussed why she got a bob (and how she made her daily choice of "one braid or two" also remained mysterious) but her braids were very long and she may have thought them excessive and contributing to climate change.  While the effect individually would be slight, over the entire population there would be environmental benefits if all those with long hair got a bob because: (1) use of shampoo & conditioner would be lowered (reduced production of chemicals & plastics), (2) a reduction in water use (washing the hair and rinsing out all that product uses much), (3) reduced electricity use (hair dryers, styling wands & straighteners would be employed for a shorter duration) and (4) carbon emissions would drop because fewer containers of shampoo & conditioner would be shipped or otherwise transported.

Wednesday, December 17, 2025

Inkhorn

Inkhorn (pronounced ingk-hawrn)

A small container of horn or other material (the early version would literally have been hollowed-out horns from animals), formerly used to hold writing ink.

1350-1400: From the Middle English ynkhorn & inkehorn (small portable vessel, originally made of horn, used to hold ink), the construct being ink + horn.  It displaced the Old English blæchorn, which had the same literal meaning but used the native term for “ink”.  It was used attributively from the 1540s as an adjective for things (especially vocabulary) supposed to be beloved by scribblers, pedants, bookworms and the “excessively educated”.  Inkhorn, inkhornery & inkhornism are nouns, inkhornish & inkhornesque are adjectives and inkhornize is a verb; the noun plural is inkhorns.

Ink was from the Middle English ynke, from the Old French enque, from the Latin encaustum (purple ink used by Roman emperors to sign documents), from the Ancient Greek ἔγκαυστον (énkauston) (“burned in”), the construct being ἐν (en) (in) + καίω (kaíō) (burn).  In this sense, the word displaced the native Old English blæc (ink), literally “black” (because while not all inks were black, most tended to be).  Ink came ultimately from a Greek form meaning “branding iron”, one of the devices which should make us grateful for modern medicine because, in addition to using the kauterion to cauterize (seal wounds with heat), essentially the same process was used to seal fast the colors used in paintings.  Then, the standard method was to use wax colors fixed with heat (encauston (burned in)) and in Latin this became encaustum which came to be used to describe the purple ink with which Roman emperors would sign official documents.  In the Old French, encaustum became enque which English picked up as enke & inke which, via ynk & ynke, became the modern “ink”.  Horn was from the Middle English horn & horne, from the Old English horn, from the Proto-West Germanic horn, from the Proto-Germanic hurną; it was related to the West Frisian hoarn, the Dutch hoorn, the Low German Hoorn & horn, the German, Danish & Swedish horn and the Gothic haurn.  It was ultimately from the primitive Indo-European ḱr̥h₂nóm, from ḱerh₂- (head, horn) and should be compared with the Breton kern (horn), the Latin cornū, the Ancient Greek κέρας (kéras), the Proto-Slavic sьrna, the Old Church Slavonic сьрна (sĭrna) (roedeer), the Hittite surna (horn), the Persian سر (sar) and the Sanskrit शृङ्ग (śṛṅga) (horn).

Inkhorn terms & inkhorn words

The phrase “inkhorn term” dates from the 1530s and was used to criticize the use of language in a way obscure or difficult for most to understand, usually by an affected or ostentatiously erudite borrowing from another language, especially Latin or Greek.  The companion term “inkhorn word” was used of such individual words and in modern linguistics the whole field is covered by such phrases as “lexiphanic term”, “pedantic term” & “scholarly term”, all presumably necessary now inkhorns are rarely seen.  Etymologists are divided on the original idea behind the meaning of “inkhorn term” & “inkhorn word”.  One faction holds that because the offending words tended to be long or at least multi-syllabic, a scribe would need more than once to dip their nib into the horn in order completely to write things down, while the alternative view is that because the inkhorn users were, by definition, literate, they were viewed sometimes with scepticism, one suspicion being they used obscure or foreign words to confuse or deceive the less educated.  The derived forms are among the more delightful in English and include inkhornism, inkhornish, inkhornery, inkhornesque & inkhornize.  The companion word is sesquipedalianism (a marginal propensity to use humongous words).

Lindsay Lohan and her lawyer in court, Los Angeles, December 2011.

Inkhorn words were in the fourteenth & fifteenth centuries known also as “gallipot words”, derived from the use of such words on apothecaries' jars, the construct being galli(s) + pot.  Gallis was from the Latin gallus (rooster or cock (male chicken)), from the Proto-Italic galsos, an enlargement of gl̥s-o-, zero-grade of the primitive Indo-European gols-o-, from gelh- (to call); it can be compared with the Proto-Balto-Slavic galsas (voice), the Proto-Germanic kalzōną (to call), the Albanian gjuhë (tongue; language), and (although this is contested) the Welsh galw (call).  Appearing usually in the plural, a gallipot word was something long, hard to pronounce, obscure or otherwise mysterious, the implication being it was being deployed gratuitously to convey the impression of being learned.  The companion insult was “you talk like an apothecary” and “apothecary's Latin” was a version of the tongue spoken badly or brutishly (synonymous with “bog Latin” or “dog Latin” but different from “schoolboy Latin” & “barracks Latin”, the latter two being humorous constructions, the creators proud of their deliberate errors).  The curious route which led to “gallipot” referencing big words was via the rooster being the symbol used by apothecaries in medieval and Renaissance Europe, appearing on their shop signs, jars & pots.  That was adopted by the profession because the rooster symbolized vigilance, crowing (hopefully) at dawn, signaling the beginning of the day and thus the need for attentiveness and care.  Apothecaries, responsible for preparing and dispensing medicinal remedies, were expected to be vigilant and attentive to detail in their work to ensure the health and well-being of their patients who relied on their skill to provide them the potions to “get them up every morning” in sound health.  Not all historians are impressed by the tale and say a more convincing link is that in Greek mythology, the rooster was sacred to Asclepius (Aesculapius in the Latin), the god of medicine, and was often depicted in association with him.  In some tales, Asclepius had what was, even by the standards of the myths of Antiquity, a difficult birth and troubled childhood.

The quest for the use of “plain English” is not new.  The English diplomat and judge Thomas Wilson (1524–1581) wrote The Arte of Rhetorique (1553), remembered as “the first complete work on logic and rhetoric in English” and in it he observed the first lesson to be learned was never to affect “any straunge ynkhorne termes, but to speak as is commonly received”.  Writing a decade earlier, the English bishop John Bale (1495–1563) had already lent an ecclesiastical imprimatur to the task, condemning one needlessly elaborate text with: “Soche are your Ynkehorne termes” and that may be the first appearance of the term in writing.  A religious reformer of some note, he was nicknamed “bilious Bale”, a moniker which politicians must since have been tempted to apply to many reverend & right-reverend gentlemen.  A half millennium on, the goal of persuading all to use “plain English” is not yet achieved and a fine practitioner of the art was Dr Kevin Rudd (b 1957; Australian prime-minister 2007-2010 & 2013): from no one else would one be likely to hear the phrase “detailed programmatic specificity” and to really impress he made sure he spoke it to an audience largely of those for whom English was not a first language.

An inkhorn attributed to Qes Felege, a scribe and craftsman.

Animal horns were for millennia re-purposed for all sorts of uses including as drinking vessels, gunpowder stores & loaders, musical instruments and military decoration and in that last role they’ve evolved into a political fashion statement, Jacob Chansley (b 1988; the “QAnon Shaman”) remembered for the horned headdress worn during the attack on the United States Capitol building in Washington DC on 6 January 2021.  Inkhorns tended variously to be made from the horns of sheep or oxen, storing the ink when not in use and ideal as a receptacle into which the nib of a quill or pen could be dipped.  Given the impurities likely then to exist, a small stick or nail was left in the horn to stir away any surface film which might disrupt a nib’s ability to take in free-flowing ink; most inks were not pre-packaged products but mixed by the user from a small solid “cake” of the base substance in the desired color, put into the horn with a measure of starchy water and left overnight to dissolve.  The sharp point of a horn allowed it to be driven into the ground because many scribes were not desk-bound and actually travelled from place to place to do their writing, quill and inkhorn their tools of trade.

A mid-Victorian (1837-1901) silver plated three-vat inkwell by George Richards Elkington (1801–1865) of Birmingham, England.  The cast frame is of a rounded rectangular form with outset corners, leaf and cabochon decoration, leaf scroll handle and conforming pen rest.  The dealer offering this piece described the vats as being of "Vaseline" glass with fruit cast lids and in the Elkington factory archives, this is registered: "8 Victoria Chap 17. No. 899, 1 November 1841".

“Vaseline glass” is a term describing certain glasses in a transparent yellow to yellow-green color attained by virtue of a uranium content.  It's an often-used descriptor in the antique business because some find the word “uranium” off-putting although inherently the substance is safe, the only danger coming from being scratched by a broken shard.  Also, some of the most vivid shades of green are achieved by the addition of a colorant (usually iron) and these the cognoscenti insist should be styled “Depression Glass”, a term which has little appeal to antique dealers.  The term “Vaseline glass” wasn’t used prior to the 1950s (after the detonation of the first A-bombs in 1945, there emerged an aversion to being close to uranium) and what's used in this inkwell may actually be custard glass or Burmese glass, both of which are opaque whereas Vaseline glass is transparent.  Canary glass was first used in the 1840s as the trade name for Vaseline glass, a term which would have been unknown to George Richards Elkington.

English silver plate horn and dolphin inkwell (circa 1909) with bell, double inkwell on wood base with plaque dated 1909.  This is an inkwell made using horns; it is not an inkhorn.

So inkhorns were for those on the move while those which sat on desks were called “ink wells” or “ink pots” and these could range from simple “pots” to elaborate constructions in silver or gold.  There are many ink wells which use horns as part of their construction but they are not inkhorns, the dead animal parts serving merely as decorative or structural elements.

Dr Rudolf Steiner’s biodynamic cow horn fertilizer.

Horns are also a part of the “biodynamic” approach to agriculture founded by the Austrian occultist & mystic Rudolf Steiner (1861-1925), an interesting figure regarded variously as a “visionary”, a “nutcase” and much between.  The technique involves filling cow horns with cow manure; the horns are buried during the six coldest months so the mixture will ferment and, upon being dug up, it will be a sort of humus which has lost the foul smell of the manure and taken on a scent of undergrowth.  It may then be used to increase the yield generated from the soil, diluted with water and sprayed over the ground.  Dr Steiner believed the forces penetrating the digestive organ of cows through the horn influence the composition of their manure and when returned to the environment, it is enriched with spiritual forces that make the soil more fertile and positively affect it.  As he explained: “The cow has horns to send within itself the etheric-astral productive forces, which, by pressing inward, have the purpose of penetrating directly into the digestive organ. It is precisely through the radiation from horns and hooves that a lot of work develops within the digestive organ itself.  So in the horns, we have something well-adapted, by its nature, to radiate the vital and astral properties in the inner life.”  Now we know.