
Sunday, February 8, 2026

Heptadecaphobia

Heptadecaphobia (pronounced hepp-tah-dek-ah-foh-bee-uh)

Fear of the number 17.

1700s: The construct was the Ancient Greek δεκαεπτά (dekaepta) (seventeen) + φόβος (phobos).  The alternative form is septadecaphobia, troubling some purists because they regard it as a Greek-Latin mongrel, the construct being the Latin septem (seven) + deca, from the Latin decas (ten), from the Ancient Greek δεκάς (dekás) (ten) + the Ancient Greek φόβος (phobos) (fear).  Heptadecaphobia deconstructs as hepta- (seven) + deca (ten) + phobos.  The suffix -phobia (fear of a specific thing; hate, dislike, or repression of a specific thing) was from the New Latin, from the Classical Latin, from the Ancient Greek -φοβία (-phobía) and was used to form nouns meaning fear of a specific thing (the idea of a hatred came later).  Heptadecaphobia, heptadecaphobist, heptadecaphobism, heptadecaphobiac and heptadecaphobe are nouns, heptadecaphobic is a noun & adjective and heptadecaphobically is an adverb; the common (sic) noun plural is heptadecaphobes and they should number 59 million-odd (the population of Italy).

Morphologically, “heptadecaphiliac” is possible but is clumsy and unnecessary, the standard agent noun (-phile) rendering it redundant and although used, not all approve of the suffix -philiac because it’s a later hybrid formation from modern English and thus judged “less elegant”.  The opposite condition (a great fondness for 17) is the noun heptadecaphilia, those with the condition being heptadecaphiles, the derived words following the conventions used with heptadecaphobia.  It’s unlikely any of the derived forms have much (or ever) been used beyond lists asserting they exist (which, except as abstractions, may be dubious) but concerned Italians should note the noun heptadecaphobist would seem to imply doctrinal adherence rather than suffering the fear.  Still, it’s there if the need exists for precision in one’s behavioural descriptors.  Modern English constructions (like heptadecaphobia) built from Greek morphemes are “neo-classical” compounds rather than “proper” words from the Ancient Greek and while some amuse or appal the classicists, in practice, variations in suffix-use have long been tolerated.

In Classical Greek, the cardinal number 17 was ἑπτακαίδεκα (heptakaídeka; literally “seven-and-ten”) but the Ancients were as adept as us at clipping for convenience and the variant ἑπταδέκα (heptadéka; literally “seven-ten”) also exists in surviving texts.  The shorter element embedded in heptadecaphobia corresponds to heptadeca- (from ἑπταδέκα) and that genuinely is Classical Greek although, on the basis of the count from what documents are extant, it was less common than ἑπτακαίδεκα.  The latter-day hybridization was inevitable because, as far as is known, “seventeen” had not before been used as a combining stem in compounds.  In English, the convention in neoclassical formation tends to follow the sequence: (1) take the cardinal form, (2) drop the inflection and (3) treat it as a stem, thus the construct heptadeca + phobia, familiar to structuralists in the more common triskaidekaphobia which uses the Greek tris-kai-deka (“three and ten”), despite compounds in genuine Greek morphology not usually being formed directly from ἑπταδέκα as a bound stem.  It’s better to follow modern practice than try to conjure something “classically pure” because although one could argue heptakaidekaphobia (closer to ἑπτακαίδεκα) is a better tribute to Antiquity, as well as being historically unattested, it’s phonetically cumbersome, which seems a worse linguistic sin.
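For those who like their morphology mechanised, the three-step recipe can be sketched in a few lines of Python.  This is a toy illustration only: the stem table is a small, hypothetical sample limited to numbers mentioned in this post, not a general Greek number grammar.

# A toy illustration of the neoclassical recipe: take the Greek cardinal,
# strip the inflection, treat the remainder as a bound stem, append -phobia.
# The stem table is a hypothetical sample, limited to numbers cited above.
STEMS = {
    4: "tetra",
    7: "hepta",
    13: "triskaideka",   # "three-and-ten"
    17: "heptadeca",     # the clipped form, from heptadeka
}

def phobia_word(n: int) -> str:
    """Form a number-phobia coinage from a Romanized Greek stem."""
    try:
        return STEMS[n] + "phobia"
    except KeyError:
        raise ValueError(f"no stem recorded for {n}") from None

print(phobia_word(17))  # heptadecaphobia
print(phobia_word(13))  # triskaidekaphobia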

Just because a “fear of a number” is listed somewhere as a “phobia” doesn’t mean the condition has much of a clinical history or even that a single case is to be found in the literature; many may have been coined just for linguistic fun and students in classics departments have been set assessment questions like “In Greek, construct the word meaning ‘fear of the number 71’” (the correct answer being “hebdomekontahenophobia”).  Some are well documented, such as tetraphobia (fear of 4) which is so prevalent in East Asia it compelled BMW to revise the release strategy of the “4 Series” cars and triskaidekaphobia (fear of 13) which has such a history in the West it’s common still for hotels not to have a 13th floor or rooms which include “13”, something which in the pre-digital age was a charming quirk but which, when things were computerized, added a needless complication.  The use of the actual number is important because in such a hotel the “14th” floor is (in the architectural sense) of course the 13th but there’s little to suggest there’s ever been resistance from guests being allocated room 1414.

Some number phobias are quite specific: Rooted in the folklore of Australian cricket is a supposed association of the number 87 with something bad (typically a batter (DEI (diversity, equity & inclusion) means they're no longer "batsmen") being dismissed) although it seems purely anecdotal and more than one statistical analysis (cricket is all about numbers) has concluded there's nothing “of statistical significance” to be found and there’s little to suggest players take the matter seriously.  One English umpire famously had “a routine” associated with the score reaching a “repunit” (a portmanteau (or blended) word, the construct being rep(eated) + unit) (eg 111, 222, 333 etc) but that was more fetish than phobia.

No fear of 17: Lindsay Lohan appeared on the covers of a number of issues of Seventeen magazine.  Targeted at the female market (age range 12-18), the US edition of Seventeen is now predominantly an on-line publication, printed only as irregular "special, stand-alone issues" but a number of editions in India and the Far East continue in the traditional format.

Other illustrative number phobias include oudenophobia (fear of 0) (trypophobia (fear of holes) said sometimes to be the companion condition), henophobia (fear of 1) (which compels sufferers to avoid being associated with “doing something once”, being the “first in the group” etc), heptaphobia (fear of 7) (cross-culturally, a number also with many positive associations), eikosiheptaphobia (fear of 27) (a pop-culture thing which arose in the early 1970s when a number of rock stars, at 27, died messy, drug-related deaths), tessarakontadyophobia (fear of 42) (which may have spiked in patients after the publication of Douglas Adams’ (1952–2001) Hitchhiker's Guide to the Galaxy (1979-1992)), enenekontenneaphobia (fear of 99) (though not related to the Get Smart TV series of the 1960s), tetrakosioeikosiphobia (fear of 420) (the syndrome once restricted to weed-smokers in the US but long internationalized), the well-documented hexakosioihexekontahexaphobia (fear of 666) and heftakosioitessarakontaheptaphobia (fear of 747) (though with the withdrawal from passenger service of the tough, reliable (four engines and made of metal) Boeing 747 and its replacement with twin-engined machines made increasingly with composites and packed with lithium-ion batteries, a more common fear may be “not flying on a 747”, most common among heftakosioitessarakontaheptaphiles).  Enniakosioihendecaphobia (fear of 911) was, in the US, probably a co-morbidity with tetrakosioeikosiphobia but it may also have afflicted those with a bad experience of a pre-modern Porsche 911 (1963-) which, in inexpert hands, could behave as one would expect of a very powerful Volkswagen Beetle, the most acute cases manifesting as enniakosiotriakontaphobia (fear of 930, that number being the internal designation for the original 911 Turbo (1974-1989), the fastest of the breed, soon dubbed the "widow-maker").

Nongentiseptuagintatrestrillionsescentiquinquagintanovemmiliacentumtredecimdeciesoctingentivigintiquattuormiliatrecentiphobia (fear of 973,659,113,824,315) describes a definitely rare condition and it's assumed the word was coined by someone determined to prove it could be done.  There’s also compustitusnumerophobia (fear of composite numbers), meganumerophobia (fear of large numbers), imparnumerophobia (fear of odd numbers), omalonumerophobia (fear of even numbers), piphobia (fear of pi), phiphobia (fear of the golden ratio), primonumerophobia (fear of prime numbers), paranumerophobia (fear of irrational numbers), neganumerophobia (fear of negative numbers) and decadisophobia (fear of decimals).  All such types are unrelated to arithmophobia (or numerophobia) which is the "fear of numbers, calculations & math", a syndrome common among students who "just don't get it" and there are many because those "good at math" and those not really do seem two separate populations; it's rare to be able to transform the latter into the former, a better solution being to send them to law school where many flourish, needing to master the arithmetic only of billing their time in six-minute increments (1/10th of an hour).  Having ten fingers and thumbs, most manage the calculations.  The marvellous Wiki Fandom site and The Phobia List are among the internet’s best-curated collections of phobias.

The only one which debatably can’t exist is neonumerophobia (fear of new numbers) because, given the nature of infinity, there can be no “new numbers” although, subjectively, a number could be “new” to an individual so there may be a need for the word.  Sceptical though mathematicians are likely to be, the notion of the “new number” ("zero" debatably the last) has (in various ways) been explored in fiction, including by science fiction (SF or SciFi) author & engineer Robert A Heinlein (1907–1988) in The Number of the Beast (1980), written during his “later period”.  More challenging was Flatland: A Romance of Many Dimensions by English schoolmaster & Anglican priest Edwin Abbott Abbott (1838–1926) which was published under the pseudonym “A Square”, the layer of irony in that choice revealed as the protagonist begins to explore dimensions beyond his two-dimensional world (in Victorian England).  Feminists note also Ursula K Le Guin’s (1929–2018) The Left Hand of Darkness (1969) in which was created an entirely new numerical system of “genderless” numbers.  That would induce fear in a few.

Lindsay Lohan's cover of the song Edge of Seventeen appeared on the album A Little More Personal (Raw) (2005).  Written by Stevie Nicks (b 1948), it appeared originally on her debut solo studio album Bella Donna (1981).

In entomology, there are insects with no fear of the number 17.  In the US, the so-called “periodical cicadas” (like those of the genus Magicicada) exist in a 17-year life cycle, something thought to confer a number of evolutionary advantages, all tied directly to the unique timing of their mass emergence: (1) The predator satiation strategy: The creatures emerge in massive numbers (in the billions), their sheer volume meaning it’s physically impossible for predators (both small mammals & birds) to eat enough of them to threaten the survival of the species. (2) Prime number cycles: Insects are presumed unaware of the nature of prime numbers but 17 is a prime number and there are also periodical cicadas with a 13 (also a prime) year cycle.  The 13 (Brood XIX) & 17-year (Brood X) periodical cicadas do sometimes emerge in the same season but, the cycles being prime numbers, it’s a rare event, the numbers' LCM (least common multiple) being 221 years; the last time the two cicadas emerged together was in 1868 and the next such event is thus expected in 2089.  The infrequency of overlap helps maintain the effectiveness of the predator avoidance strategies, the predators typically having shorter (2-year, 5-year etc) cycles which don’t synchronize with the cicadas' emergence, reducing the chances a predator will evolve to specialize in feeding on periodical cicadas. (3) Avoidance of climate variability: By remaining underground for 17 years, historically, periodical cicadas avoided frequent climate changes or short-term ecological disasters like droughts or forest fires.  The long underground nymph stage also allows them to feed consistently over many years and emerge when the environment is more favorable for reproduction.  Entomologists and biological statisticians are modelling scenarios of accelerated climate change to try to understand how the periodical cicadas (which evolved under “natural” climate change) may be affected. (4) Genetic isolation: Historically, the unusually extended period between emergences has isolated different broods of cicadas, reducing interbreeding and promoting genetic diversity over time, helping to maintain healthy populations over multiple life-cycles.
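The arithmetic of the overlap is easily checked; a minimal Python sketch using the cycle lengths and the 1868 joint emergence cited above (the function name is an illustrative assumption, not anything from the entomological literature):

from math import gcd

def next_co_emergence(cycle_a: int, cycle_b: int, last_joint_year: int, after: int) -> int:
    """Return the first year after `after` in which two broods emerge together,
    given their cycle lengths and the last recorded joint emergence."""
    period = cycle_a * cycle_b // gcd(cycle_a, cycle_b)  # the LCM; 221 for 13 & 17
    year = last_joint_year
    while year <= after:
        year += period
    return year

# 13-year (Brood XIX) & 17-year (Brood X) cicadas, last joint emergence 1868
print(next_co_emergence(13, 17, 1868, 2026))  # -> 2089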

No 17th row: Alitalia B747-243B I-DEMP, Johannesburg International Airport, South Africa, 2001.

There are a variety of theories to account for the Italian superstition which rendered 17 the national “unlucky number” but it does seem to be due primarily to a linguistic and symbolic association from ancient Rome.  The most accepted explanation is that in Roman numerals 17 is XVII which, anagrammatically, can be rearranged as VIXI (Latin for “I have lived” (the first-person singular perfect active indicative of vīvō (to live; to be alive)), understood in the vernacular as “my life is over” or, more brutally: “I am dead”.  It was something which appeared often on Roman tombstones, making an enduring record which ensured the superstition didn’t have to rely on collective memory or an oral tradition for inter-generational transfer.  That would have been ominous enough but Romans noted also that Osiris, the Egyptian god of, inter alia, life, death, the afterlife and resurrection, had died on the 17th day of the month, 17 thus obviously a “death number” to the logical Roman mind and the worst 17th days of the month were those which coincided with a full moon.  The cosmic coincidence was an intensifier in the same sense that in the English-speaking world the conjunction leading to a Friday falling on the 13th makes the day seem threatening.  Thus, just as in some places hotels have neither a 13th floor nor rooms containing “13”, in Italy it’s “17” which is avoided although not having a row 17 in its airliners didn’t save Alitalia (Società Aerea Italiana, the now-defunct national carrier) from its COVID-era demise.  Of course not labelling a row or floor “13” or “17” doesn’t mean a 13th or 17th something doesn’t exist, just that it’s called “14” or “18” so it’s the symbolic association which matters, not the physical reality.  Mashing up the numerical superstitions, that 17 is an “unlucky number” shouldn’t be surprising because it’s the sum of 13 + 4, the latter being the most dreaded number in much of East Asia, based on the pronunciation resembling “death” in both Chinese and Japanese.
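The XVII/VIXI claim is mechanically verifiable; a minimal Python sketch (the parser is a simple subtractive-notation reader written for illustration, not a general Roman-numeral validator):

def roman_to_int(numeral: str) -> int:
    """Parse a Roman numeral using the standard subtractive rule:
    a symbol smaller than its right-hand neighbour is subtracted."""
    values = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}
    total = 0
    for i, ch in enumerate(numeral):
        v = values[ch]
        total += -v if i + 1 < len(numeral) and v < values[numeral[i + 1]] else v
    return total

assert roman_to_int("XVII") == 17
assert sorted("XVII") == sorted("VIXI")  # same four letters, rearranged
print("XVII is 17 and an anagram of VIXI")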

In automotive manufacturing, there was nothing unusual about unique models being produced for the Italian domestic market, the most common trick being versions with engines displacing less than 2.0 litres to take advantage of the substantially lower tax regime imposed below that mark.  Thus Ferrari (1975-1981) and Lamborghini (1974-1977) made available 2.0 litre V8s (sold in RoW (rest of the world) markets variously in 2.5 & 3.0 litre displacements), Maserati a 2.0 V6 (usually a 3.0 in the Maserati Merak (1972-1983) although it appeared in 2.7 & 3.0 litre form in the intriguing but doomed Citroën SM (1970-1975)) and Mercedes-Benz created a number of 2.0 litre models in the W124 range (1984-1995) exclusive to the Italian domestic market (although an unrelated series of 2.0 litre cars was also sold in India).  Others followed the trend although, the more expensive they were, the less appeal seemed to exist despite, in absolute terms, the saving increasing as the price rose.  Maserati offered a twin-turbo 2.0 in the aptly named Biturbo, BMW did a one-off 320is and Alfa Romeo produced a run of 2.0 V6s.

Lindsay Lohan, aged 17, Teen Choice Awards, Universal Amphitheatre, Universal City, California, 2 August 2003.

From an engineering point of view, the most audacious doubtless was the 2.0 litre version of TVR's V8S (1991-1994).  Supplied usually with a 4.0 litre version of the versatile Rover V8, the capacity of the version for the Italian market was halved by de-stroking, the bore of 88.9 mm and stroke of 40.25 mm creating an outrageously oversquare stroke/bore ratio of 45.28% but, with the assistance of a supercharger, the quirky engine almost matched in power and torque the naturally aspirated original with twice the displacement; it was a classic example of the effectiveness of forced-aspiration although it did demand of drivers a different technique.  By comparison, the Formula One BRM H16’s (1966-1967) bore & stroke was 69.8 x 49.9 mm and it was so oversquare to reduce the frictional losses which would have been induced had a longer stroke been used with that many cylinders; its stroke/bore ratio was 71.49% compared with the almost square BRM V16 designed in the 1940s, the latter able to be in that configuration because (1) it was supercharged and (2) being only 1.5 litres, the stroke was anyway physically short in absolute terms.  The 2.4 litre V8s used in Formula One between 2006-2013 had to have a maximum bore of 98 mm and stroke of 40 mm (stroke/bore ratio 40.82%) and that’s an indication of the characteristics the 2.0 litre TVR V8S offered.  Disappointingly, it was an experience few Italians sought and only seven were built.
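Those figures can be reproduced from the standard swept-volume formula (V = π/4 × bore² × stroke × cylinders); a minimal Python sketch using the dimensions quoted above (the function names are illustrative assumptions):

import math

def displacement_cc(bore_mm: float, stroke_mm: float, cylinders: int) -> float:
    """Swept volume in cc: pi/4 x bore^2 x stroke per cylinder (dimensions converted to cm)."""
    return math.pi / 4 * (bore_mm / 10) ** 2 * (stroke_mm / 10) * cylinders

def stroke_bore_pct(bore_mm: float, stroke_mm: float) -> float:
    """Stroke/bore ratio as a percentage; the further under 100, the more oversquare."""
    return 100 * stroke_mm / bore_mm

# De-stroked TVR V8S for the Italian market: eight cylinders, 88.9 x 40.25 mm
print(round(displacement_cc(88.9, 40.25, 8)))   # ~1999 cc, ie "2.0 litres"
print(round(stroke_bore_pct(88.9, 40.25), 2))   # 45.28
print(round(stroke_bore_pct(69.8, 49.9), 2))    # BRM H16: 71.49
print(round(stroke_bore_pct(98.0, 40.0), 2))    # 2006-2013 F1 V8: 40.82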

It was Suzuki which had more success with work-arounds to Rome’s tiresome regulations.  Their two-stroke, triple-cylinder GT380 (1972-1980) motorcycle was for most of its existence made with an actual displacement of 371 cm³ but in 1975, the Italian government passed a law banning the importation of motorcycles displacing less than 380 cm³ and weighing less than 170 kg.  Accordingly, the Japanese produced a “big bore” 380 exclusively for the Italian market displacing an actual 384 cm³.  The portly triple would never have run afoul of the weight limit but just to avoid any unpleasantness, the data plate riveted to the frame recorded a verified mass of 171 kg.  Honor apparently satisfied on both sides, the GT380 remained available in some places until 1980, outliving Suzuki’s other two-stroke triples by three seasons.

US advertisement for the Renault 17 (1974), the name Gordini adopted as a "re-brand" of the top-of-the-range 17TS.  Gordini was a French sports car producer and tuning house, absorbed by Renault in 1968, the name from time-to-time used for high-performance variants of various Renault models.

One special change for the Italian market was a nod to the national heptadecaphobia, the car known in the rest of the world (RoW) as the Renault 17 (1971-1979) sold in Italy as the R177.  For the 17, Renault took the approach which had delivered great profits: use the underpinnings of mundane mass-produced family cars with a sexy new body draped atop.  Thus in the US the Ford Falcon (1959-1969) begat the Mustang (1964-) and in Europe Ford made the Capri (1968-1976) from the Cortina (1962-1982).  Opel’s swoopy GT (1968-1973) was (most improbably) underneath just the modest Kadett.  It wasn’t only the mass-market operators which used the technique because in the mid 1950s, Mercedes-Benz understood the appeal of the style of the 300 SL (W198, 1954-1957) was limited by the high price, a product of the exotic engineering (the space-frame, gullwing doors, dry sump and the then novel MFI (mechanical fuel-injection)), the solution being to re-purpose the platform of the W120, the small, austere sedan which helped the company restore its fortunes in the post-war years before the Wirtschaftswunder (economic miracle) was celebrated in 1959 with the exuberance of the Heckflosse (tailfin) cars (1959-1968).  On the W120 platform was built the 190 SL (W121, 1955-1963), an elegant (if not especially rapid) little roadster which quickly became a trans-Atlantic favourite, particularly among what used to be called the “women’s market”.

Only in Italy: The Renault 177, exclusively for heptadecaphobes.

Using the same formula, the Renault 17 was built on the underpinnings of the Renault 12, a remarkably durable platform, introduced in 1969 and, in one form or another, manufactured or assembled in more than a dozen countries, the last examples produced as late as 2006.  Like the Anglo-German Ford Capri, the 17 was relatively cheap to develop because so much was merely re-purposed but for a variety of reasons, it never managed to come close to matching the sales of the wildly successful Ford, FWD (front wheel drive) not then accepted as something “sporty” and Renault's implementation on the 17 was never adaptable to the new understanding of the concept validated by FWD machines such as Volkswagen’s Golf GTi which would define the “hot hatch”.  Like most of the world, the Italians never warmed to the 17 but presumably the reception would have been even more muted had not, in deference to the national superstition about the number 17, the name been changed to “Renault 177”, the cheaper companion model continuing to use the RoW label: Renault 15.

Saturday, January 10, 2026

Aphantasia

Aphantasia (pronounced ay-fan-tay-zhuh)

The inability voluntarily to recall or form mental images.

2015: The word (not the diagnosis) was coined by UK neurologist Dr Adam Zeman (b 1957), neuropsychologist Dr Michaela Dewar (b 1976) and Italian neurologist Sergio Della Sala (b 1955), first appearing in the paper Lives without imagery.  The construct was a- (from the Ancient Greek ἀ- (a-), used as a privative prefix meaning “not”, “without” or “lacking”) + phantasía (from the Greek φαντασία (“appearance”, “imagination”, “mental image” or “power of imagination”), from φαίνω (phaínō) (“to show”, “to make visible” or “to bring to light”)).  Literally, aphantasia can be analysed as meaning “an absence of imagination” or “an absence of mental imagery” and in modern medicine it’s defined as “the inability voluntarily to recall or form mental images”.  Even in Antiquity, there was some meaning shift in phantasía, Plato (circa 427-348 BC) using the word to refer generally to representations and appearances whereas Aristotle (384-322 BC) added a technical layer, his sense being a faculty mediating between perception (aisthēsis) and thought (noēsis).  It’s the Aristotelian adaptation (the mind’s capacity to form internal representations) which flavoured the use in modern neurology.  Aphantasia is a noun and aphantasic is a noun & adjective; the noun plural is aphantasics.

Scuola di Atene (The School of Athens, circa 1511), fresco by Raphael (Raffaello Sanzio da Urbino, 1483–1520), Apostolic Palace, Vatican Museums, Vatican City, Rome.  Plato and Aristotle are the figures featured in the centre.

In popular use, the word “aphantasia” can be misunderstood because of the paths taken in English by “phantasy”, “fantasy” and “phantasm”, all derived from the Ancient Greek φαντασία (phantasía) meaning “appearance, mental image, imagination”.  In English, this root was picked up via Latin and French but the multiple forms each evolved in distinct semantic trajectories.  The fourteenth century phantasm came to mean “apparition, ghost, illusion” so was used of “something deceptive or unreal”, the connotation being “the supernatural; spectral”.  This appears to be the origin of the association of “phantas-” with unreality or hallucination rather than normal cognition.  In the fifteenth & sixteenth centuries, the spellings phantasy & fantasy were for a time interchangeable although divergence came with phantasy used in its technical senses of “mental imagery”; “faculty of imagination”; “internal representation”, this a nod to Aristotle’s phantasía.  Fantasy is the familiar modern form, used to suggest “a fictional invention; daydream; escapism; wish-fulfilment”, the connotations being imaginative constructions (in fiction), imaginative excess (in the sense of “unreality” or the “dissociative”) and indulgence (as in “speculative or wishful thoughts”).

While the word “aphantasia” didn’t exist until 2015, in the editions of the American Psychiatric Association's (APA) Diagnostic and Statistical Manual of Mental Disorders (DSM) published between 1952 (DSM-I) and 2013 (DSM-5), there was no discussion (or even mention) of a condition anything like “an inability voluntarily to recall or form mental images”.  That’s because despite being “a mental condition” induced by something happening (or not happening) in the brain, the phenomenon has never been classified as “a mental disorder”.  Instead it’s a cognitive trait or variation in the human condition and technically is a spectrum condition, the “pure” aphantasic being one end of the spectrum, the hyperphantasic (highly vivid, lifelike mental imagery, sometimes called a “photorealistic mind's eye”) the other.  That would of course imply the comparative adjective would be “more aphantasic” and the superlative “most aphantasic” but neither are standard forms.

The rationale for the “omission” was the DSM’s inclusion criteria, which require some evidence of clinically significant distress or functional impairment attributable to a condition.  Aphantasia, in isolation, does not reliably meet this threshold in that many individuals have for decades functioned entirely “normally” without being aware they’re aphantasic while others presumably had died of old age in similar ignorance.  That does of course raise the intriguing prospect the mental health of some patients may have been adversely affected by the syndrome only by a clinician informing them of their status, thus making them realize what they were missing.  This, the latest edition of the DSM (DSM-5-TR (2022)) does not discuss.  The DSM does discuss imagery and perceptual phenomena in the context of other conditions (PTSD (post-traumatic stress disorder), psychotic disorders, dissociative disorders etc), but these references are to abnormal experiences, not the lifelong absence of imagery.  To the DSM’s editors, aphantasia remains a recognized phenomenon, not a diagnosis.

Given that aphantasia concerns aspects of (1) cognition, (2) inner experience and (3) mental representation, it wouldn’t seem unreasonable to expect the condition now described as aphantasia would have appeared in the DSM, even if only in passing or in a footnote.  However, in the seventy years between 1952-2022, across successive editions, there is no mention, even in DSM-5-TR (2022), the first volume released since the word was coined in 2015.  That apparently curious omission is explained by the DSM never having been a general taxonomy of mental phenomena.  Instead, it’s (an ever-shifting) codification of the classification of mental disorders, defined by (1) clinically significant distress and/or (2) functional impairment and/or (3) a predictable course, prognosis and treatment relevance.  As a general principle the mere existence of an aphantasic state meets none of these criteria.

Crooked Hillary Clinton in orange Nina McLemore pantsuit, 2010.  If in response to the prompt "Imagine a truthful person" one sees an image of crooked Hillary Clinton (b 1947; US secretary of state 2009-2013), that obviously is wrong but is not an instance of aphantasia because the image imagined need not be correct, it needs just to exist.

The early editions (DSM-I (1952) & DSM-II (1968)) heavily were slanted to the psychoanalytic, focusing on psychoses, neuroses and personality disorders with no systematic treatment of cognition as a modular function; the matter of mental imagery (even as abstract thought separated from an imagined image), let alone its absence, wholly is ignored.  Intriguingly, given what was to come in the field, there was no discussion of cognitive phenomenology beyond gross disturbances (ie delusions & hallucinations).  Even with the publication of the DSM-III (1980) & DSM-III-R (1987), advances in scanning and surgical techniques, cognitive psychology and neuroscience seem to have made little contribution to what the DSM’s editorial board decided to include and although DSM-III introduced operationalized diagnostic criteria (as a part of a more “medicalised” and descriptive psychiatry), the entries still were dominated by a focus on dysfunctions impairing performance, the argument presumably that it was possible (indeed, probably typical) for those with the condition to lead full, happy lives, the absence of imagery ability thus not considered a diagnostically relevant variable.  Even in sections on (1) amnestic disorders (a class of memory loss in which patients have difficulty forming new memories (anterograde) or recalling past ones (retrograde), not caused by dementia or delirium but a consequence of brain injury, stroke, substance abuse, infections or trauma), with treatment focusing on the underlying cause and rehabilitation, (2) organic mental syndromes or (3) neuro-cognitive disturbance, there was no reference to voluntary imagery loss as a phenomenon in its own right.

Although substantial advances in cognitive neuroscience meant by the 1990s neuropsychological deficits were better recognised, both the DSM-IV (1994) and DSM-IV-TR (2000) continued to be restricted to syndromes with behavioural or functional consequences.  In a way that was understandable because the DSM still was seen by the editors as a manual for working clinicians who were most concerned with helping those afflicted by conditions with clinical salience; the DSM has never wandered far into subjects which might be matters of interesting academic research and mental imagery continued to be mentioned only indirectly, hallucinations (percepts without stimuli) and memory deficits (encoding and retrieval) both discussed only in consequence of their effect on a patient, not as phenomena.  The first edition for the new century was DSM-5 (2013) and what was discernible was that discussions of major and mild neuro-cognitive disorders were included, reflecting the publication’s enhanced alignment with neurology but even then, imagery ability was not assessed or scaled: not possessing the power of imagery was not listed as a symptom, specifier, or associated feature.  So there has never in the DSM been a category for benign cognitive variation and that is a product of a deliberate editorial stance rather than an omission, many known phenomena not psychiatrised unless in some way “troublesome”.

The term “aphantasia” was coined to describe individuals who lack voluntary visual mental imagery, often discovered incidentally and not necessarily associated with brain injury or psychological distress.  In 2015 the word was novel but the condition had been documented for more than a century, Sir Francis Galton (1822–1911) in a paper published in 1880 describing what would come to be called aphantasia.  That work was a statistical study on mental imagery which doubtless was academically solid but Sir Francis’s reputation later suffered because he was one of the leading lights in what was in Victorian times (1837-1901) the respectable discipline of eugenics.  Eugenics rightly became discredited so Sir Francis was to some extent retrospectively “cancelled” (something like the Stalinist concept of “un-personing”) and these days his seminal contribution to the study of behavioural genetics is acknowledged only grudgingly.

Galton in 1880 noted a wide variation in “visual imagination” (ie it was understood as a “spectrum condition”) and in the same era, in psychology publications the preferred term seems to have been “imageless thought”.  In neurology (and trauma medicine generally) there were many reports of patients losing the power of imagery after a brain injury but no agreed name was ever applied because the interest was more in the injury.  Unawareness that some people simply lacked the facility must have been general because, as Galton wrote: “To my astonishment, I found that the great majority of the men of science to whom I first applied, protested that mental imagery was unknown to them, and they looked on me as fanciful and fantastic in supposing that the words “mental imagery” really expressed what I believed everybody supposed them to mean. They had no more notion of its true nature than a colour-blind man who has not discerned his defect has of the nature of colour.”

His paper must have stimulated interest because one psychologist reported some subjects possessing what he called a “typographic visual type” imagination in which ideas (which most would visualize as an image of some sort) would manifest as “printed text”.  That was intriguing because, in the same way a computer in some respects doesn’t distinguish between an image file (jpeg, TIFF, webp, avif etc) which is a picture of (1) someone and (2) one which is a picture of their name in printed form, it would seem to imply at least some who are somewhere on the aphantasia spectrum retain the ability to visualize printed text, just not the object referenced.  Professor Zeman says he first became aware of the condition in 2005 when a patient reported having lost the ability to visualize following minor surgery and after the case was in 2010 documented in the medical literature in the usual way, it provoked a number of responses in which multiple people informed Zeman they had never in their lifetime been able to visualize objects.  This was the origin of Zeman and his collaborators coining “congenital aphantasia”, describing individuals who never enjoyed the ability to generate voluntary mental images.  Because it was something which came to general attention in the age of social media, great interest was triggered in the phenomenon and a number of “on-line tests” were posted, the best-known of which was the request for readers to “imagine a red apple” and rate their “mind's eye” depiction of it on a scale from 1 (photorealistic visualisation) through to 5 (no visualisation at all).  For many, this was variously (1) one’s first realization they were aphantasic or (2) an appreciation one’s own ability or inability to visualise objects was not universal.

How visualization can manifest: Lindsay Lohan and her lawyer in court, Los Angeles, December 2011.  If an aphantasic person doesn't know about aphantasia and doesn't know other people can imagine images, their lives are probably little different from anyone else's; it's just their minds have adapted to handle concepts in another way.

Top right: What’s thought “normal” visualization (possessed, it’s believed, by most of the population) refers to the ability to imagine something like a photograph of what’s being imagined.  This too is a spectrum condition in that some will be able to imagine an accurate “picture”, something like an HD (high definition) photograph, while others will “see” something less detailed, sketchy or even wholly inaccurate.  However, even if when asked to visualize “an apple” one instead “sees” a banana, that is not an indication of aphantasia, a condition which describes only an absence of an image.  Getting it that wrong is an indication of something amiss but it’s not aphantasia.

Bottom left: “Seeing” text in response to being prompted to visualize something was the result Galton in 1880 reported as such a surprise.  It means the brain understands the concept of what is being described; it just can’t be imagined as an image.  This is one manifestation of aphantasia but it’s not related to the “everything is text” school of post-modernism.  Jacques Derrida’s (1930-2004) fragment “Il n'y a pas de hors-texte” (literally “there is no outside-text”) is one of the frequently misunderstood phrases from the murky field of deconstruction but it has nothing to do with aphantasia (although dedicated post-modernists probably could prove a relationship).

Bottom right: The absence of any image (understood as a “blankness” which does not necessarily imply “whiteness” or “blackness” although this is the simple way to illustrate the concept), whether of text or of something to some degree photorealistic, is classic aphantasia.  The absence does not mean the subject doesn’t understand the relevant object or concept; it means only that their mental processing does not involve imagery and for as long as humans have existed, many must have functioned in this way, their brains adapted to the imaginative range available to them.  What this must have meant was many became aware of what they were missing only when the publicity about the condition appeared on the internet, an interesting example of “diagnostic determinism”.

WebMD's classic Aphantasia test.

The eyes are an out-growth of the brain and WebMD explains aphantasia is caused by the brain’s visual cortex (the part of the brain that processes visual information from the eyes) “working differently than expected”, noting the often quoted estimate of it affecting 2-4% of the population may be understated because many may be unaware they are “afflicted”.  It’s a condition worthy of more study because aphantasics handle the characteristic by processing information differently from those who rely on visual images.  There may be a genetic element in aphantasia and there’s interest too among those researching “Long Covid” because the symptom of “brain fog” can manifest much as does aphantasia.

Aphantasia may have something to do with consciousness because aphantasics can have dreams (including nightmares) which can to varying degrees be visually rich.  There’s no obvious explanation for this: while aphantasia is the inability voluntarily to generate visual mental imagery while awake, dreaming is an involuntary perceptual experience generated during sleep and while both are mediated by neural mechanisms, these clearly are not identical but presumably must overlap.  The conclusions from research at this stage remain tentative but the current neuro-cognitive interpretation seems to suggest voluntary (conscious) imagery relies on top-down activation of the visual association cortex while (unconscious) dream imagery relies more on bottom-up and internally driven activation during REM (rapid eye movement) sleep.  What that would seem to imply is that in aphantasia, the former pathway is impaired (or at least inaccessible), while the latter may remain intact (or accessible).

The University of Queensland’s illustration of the phantasia spectrum.

The opposite syndrome is hyperphantasia (having extremely vivid, detailed, and lifelike mental imagery) which can be a wonderful asset but can also be a curse, rather as hyperthymesia (known also as HSAM (Highly Superior Autobiographical Memory) and colloquially as “total recall”) can be disturbing.  Although it seems not to exist in the sense of “remembering everything, second-by-second”, there are certainly those who have an extraordinary recall of “events” in their life and this can have adverse consequences for mental health because one of the mind’s “defensive mechanisms” is forgetting or at least suppressing memories which are unwanted.  Like aphantasia & hyperphantasia, hyperthymesia is not listed by the DSM as a mental disorder; it is considered a rare cognitive trait or neurological phenomenon although like the imaging conditions it can have adverse consequences and these include disturbing “flashbacks”, increased rumination and increased rates of anxiety or obsessive tendencies.

Sunday, December 28, 2025

Osphresiolagnia

Osphresiolagnia (pronounced aus-free-see-oh-lag-nee-ah)

A paraphilia characterized by recurrent sexually arousing fantasies, sexual urges, or behaviour involving smells.

Early-mid twentieth century: A coining in clinical psychiatry, the construct being osphres(is) + lagnia.  Osphresis was from the Ancient Greek ὀσφρῆσις (osphrēsis) (sense of smell; olfaction).  Lagnia was from the Ancient Greek λαγνεία (lagneía) (lust; sexual desire), from λᾰγνός (lagnos) (lustful; sexually aroused).  Osphresiolagnia thus translated literally as “lust or sexual arousal related to or induced by one’s sense of smell”.  Osphresiolagnia & osphresiolagnism are nouns and osphresiolagnic is a noun & adjective; the noun plural is osphresiolagnias.

The synonym is olfactophilia (sexual arousal caused by smells or odors, especially from the human body) and in modern clinical use, that seems now the accepted form.  Although now rare, in clinical use a renifleur was a paraphiliac who derived sexual pleasure from certain smells.  Renifleur was from the French noun renifleur (the feminine renifleuse, the plural renifleurs), the construct being renifler + -eur.  The construct of the verb renifler was re- (used in the sense of “to do; to perform the function”) + nifler (to sniff); it was from the same Germanic root as the Italian niffo & niffa (snout) and related to the Low German Niff (nose, mouth, bill), the Dutch neb (nose, beak) and the English neb (nose, beak, face).  The French suffix -eur was from the Middle French, from the Old French -eor or -or, from the Latin -ātōrem & -tor and a doublet of -ateur.  It was used to form masculine agent nouns from verbs (some of which were used also as adjectives).

Pioneering Austrian psychoanalyst Sigmund Freud (1856-1939) never elaborated his hypothesis of osphresiolagnia into a fully-developed theory and in his papers it’s mentioned only as an aspect of the psychoanalytic exploration of human sexuality, specifically focusing on the role of olfactory stimuli (sense of smell) in sexual arousal.  It was part of a body of work in which he explored his concept of fetishism and infantile sexuality.  In psychoanalysis, osphresiolagnia described the condition (“the state” might now be thought a better way of putting it) where certain smells become associated with sexual pleasure or arousal and to Freud these naturally were those related to bodily functions, such as sweat, skin, or other natural odors because he believed different sensory experiences, including smell, could become a focus of sexual fixation, particularly if something in early psychosexual development caused this association.  The tie-in with fetishism was that an obsessive focus on the sense of smell can form as a way of displacing or substituting more normative sexual interests.  Freud spoke also of the significance of the senses (including smell) in early childhood development and linked them to psychosexual stages, where early experiences with stimuli can influence later adult sexuality and while he didn’t use the word, he believed a smell associated with some significant childhood experience could, even decades later, act as a “trigger”.  Although it’s been in the literature for more than a century, osphresiolagnia (also now sometimes called “olfactory stimulation”) seems to have aroused more clinical and academic interest in the last fifteen years and while the psychological and physiological responses to certain smells have been well-documented, it was usually in the context of revulsion and the way this response could influence decision-making processes.  However, positive responses can also be influential, thus the renewed interest.

In medicine and the study of human and animal sexuality, the significance of “olfactory attraction” has been researched and appears to be well understood.  At its most basic, the idea of olfactory attraction is that animals (including humans) can be attracted to someone based on scent; in the patients seen by psychiatrists, they can also be attracted to objects based on their smell, either because of their inherent quality or by their association with someone (either someone specific or “anyone”).  The best known aspect of the science is the study of pheromones (in biology, a chemical secreted by an animal which acts to affect the development or behavior of other members of the same species, functioning often as a means of attracting a member of the opposite sex).  Human pheromones have been synthesised and are available commercially in convenient spray-packs for those who wish to enhance their desirability with a chemical additive.  More generally, there is also the notion of “fragrance attraction” which describes the allure another’s smell (either natural or the scent they wear) exerts and this can manifest in “objective transference” (keeping close during periods of absence a lover’s article of clothing or inhaling from the bottle of perfume they wear).

The opposite of being attracted to a smell is finding one repellent.  What is known in the profession technically as ORS (olfactory reference syndrome) has never been classified as a separate disorder in either the American Psychiatric Association's (APA) Diagnostic and Statistical Manual of Mental Disorders (DSM) or the World Health Organization’s (WHO) International Classification of Diseases (ICD).  The DSM-III-R (1987) did mention ORS in the context of “aversion”, noting “convictions that the person emits a foul odor…are one of the most common types of delusional disorder, somatic type”, the idea extended in DSM-IV (1994) which referred to the concept as a type of delusional disorder, somatic type, although the term “olfactory reference syndrome” was not mentioned.

In October 2024, it was reported by Greek news services that a court in Thessaloniki (the capital of the Macedonia region and Greece's second city) in the north of the country had imposed a suspended one-month prison sentence on a man convicted of “…disturbing his neighbors by repeatedly sneaking into their properties to smell their shoes.”  According to the AP (Associated Press), the 28-year-old man was unable to explain his behaviour although he did tell the court he was “embarrassed by it”, adding that he had “…no intention of breaking the law or harming anybody…” and his neighbours did testify he never displayed any signs of aggression during his nocturnal visits to the shoes, left outside the door to air.  The offences were committed in the village of Sindos, some 15 kilometres (9 miles) west of Thessaloniki and the police were called only after the man had ignored requests sent to his family that his conduct stop.  According to the neighbours, there had in the last six months been at least three prior instances of shoe sniffing.  In addition to the suspended sentence, the defendant was ordered to attend therapy sessions.

The postman always sniffs twice: Balnagask Circle, Torry, Aberdeen, Scotland, August 2024.  Helpfully, the video clip was posted by the Daily Mail and, dead now a hundred-odd years, old Lord Northcliffe (Alfred Harmsworth, 1865–1922) would from his grave be delighted.

Osphresiolagnia is however not culturally specific and in August 2024, a postman delivering mail to an address on Balnagask Circle in the Torry area of South Aberdeen, Scotland was captured on a doorbell camera, pausing “to sniff a girl's shoes”.  All appeared normal until the osphresiolagnic servant of the Royal Mail had put the letters in the slot but then he turned and, after a brief glance at the shoe rack, bent down and picked up a white trainer which he sniffed before leaving to resume his round (and possibly his sniffing).  The mother of the girl whose shoes fell victim to the postman posted the video on social media, tagging the entry: “I would just like to let everyone know just to watch out for this postman; he sniffed my daughter's shoes; what an absolute creep.”  The clip came to the attention of the Scottish police which issued a statement: “We received a report of a man acting suspiciously in the Balnagask Circle area of Aberdeen.  Enquiries were carried out and no criminality was established.  Suitable advice was given.”  It wasn’t made clear what that advice was or to whom it was delivered but presumably the constabulary’s attitude was: no shoe being harmed during this sniffing, all’s well that ends well.

Shoe-sniffing should not be confused with podophilia (a paraphilia describing the sexualized objectification of feet (and sometimes footwear), commonly called foot fetishism although the correct clinical description is now “foot partialism”).  The construct was podo- + -philia.  Podo- (pertaining to a foot or a foot-like part) was from the Ancient Greek πούς (poús), from the primitive Indo-European pṓds.  It was cognate with the Mycenaean Greek po, the Latin pēs, the Sanskrit पद् (pad), the Old Armenian ոտն (otn) & հետ (het), the Gothic fōtus and the Old English fōt (from which Modern English gained “foot”).  It was Sigmund Freud who admitted that, lawfulness aside, as animals, the only truly aberrant sexual behavior in humans could be said to be its absence (something which the modern asexual movement re-defines rather than disproves).  It seemed to be in that spirit the DSM-5 (2013) was revised to treat podophilia and many other “harmless” behaviors as “normal” and thus within the purview of the manual only to the extent of being described, clinical intervention no longer required.  Whether all clinicians agree with the new permissiveness isn’t known but there's nothing in the DSM-5-TR (2022) to suggest podophiles will soon again be labeled deviants.

Point of vulnerability to osphresiolagnism: Lindsay Lohan taking off her shoes and putting them on the shoe rack.  The photo shoot featured Ms Lohan as a nueva embajadora de Allbirds (new Allbirds ambassador), in a promotion for Allbirds (Comfortable, Sustainable Shoes & Apparel) and the shoes are the Tree Flyer in Lux Pink which include “no plastics” in their construction.  The photo session may have been shot on a Wednesday.

Shoe sniffing is different and clinicians define it as an instance of “intimacy by proxy”, in a similar class to the thefts by those who steal women’s underwear from their clothes lines; an attempt in some way to be associated with the wearer (or just "women" generally).  This differs from those with an interest in the shoes or garments as objects because, conveniently & lawfully, they can fulfil their desires by buying what they want from a shop.  How prevalent such proclivities are isn’t known because, the fetish being pursued in a lawful (and in most cases presumably secret) manner, unless self-reported, clinicians would never become aware of the activity.

Wednesday, December 3, 2025

Crunning & Cromiting

Crunning (pronounced khrun-ing)

In high-performance sports training, simultaneously running and crying.

Circa 2020: the construct was cr(y) + (r)unning.

Cromiting (pronounced krom-et-ing)

In high-performance sports training, simultaneously running, crying & vomiting.

Circa 2020: the construct was cr(y) + (v)omit + (runn)ing.

The verb cry was from the thirteenth century Middle English crien, from the Old French crier (to announce publicly, proclaim, scream, shout) (from which Medieval Latin gained crīdō (to cry out, shout, publish, proclaim)).  The noun is from the Middle English crie, from the Old French cri & crïee.  The origin of the Old French & Medieval Latin word is uncertain.  It may be of Germanic origin, from the Frankish krītan (to cry, cry out, publish), from the Proto-Germanic krītaną (to cry out, shout), from the primitive Indo-European greyd- (to shout) and thus cognate with the Saterland Frisian kriete (to cry), the Dutch krijten (to cry) & krijsen (to shriek), the Low German krieten (to cry, call out, shriek), the German kreißen (to cry loudly, wail, groan) and the Gothic kreitan (to cry, scream, call out) and related to the Latin gingrītus (the cackling of geese), the Middle Irish grith (a cry), the Welsh gryd (a scream), the Persian گریه (gerye) (to cry) and the Sanskrit क्रन्दन (krandana) (cry, lamentation).  Some etymologists however suggest a connection with the Medieval Latin quiritō (to wail, shriek), also of uncertain origin, possibly from the Latin queror (to complain) although the phonetic and semantic developments have proved elusive; the alternative Latin source is thought to be a variant of quirritare (to squeal like a pig), from quis, an onomatopoeic rendition of squeaking.  An ancient folk etymology understood it as “to call for the help of the Quirites” (the Roman policemen).  In the thirteenth century, the meaning extended to encompass “shed tears”, previously described as “weeping”, “to weep” etc and by the sixteenth century cry had displaced weep in the conversational vernacular, under the influence of the notion of “utter a loud, vehement, inarticulate sound”.  The phrase “to cry (one's) eyes out” (weep inordinately) is documented since 1704 but weep, wept etc remained a favorite of poets and writers.

Vomit as a verb (the early fifteenth century Middle English vomiten) was an adoption from the Latin vomitus (past participle of vomitāre) and was developed from the fourteenth century noun vomit (act of expelling contents of the stomach through the mouth), from the Anglo-French vomit, from the Old French vomite, from the Latin vomitus, from vomō & vomitāre (to vomit often), frequentative of vomere (to puke, spew forth, discharge), from the primitive Indo-European root wemh₁- & weme- (to spit, vomit), source also of the Ancient Greek emein (to vomit) & emetikos (provoking sickness), the Sanskrit vamati (he vomits), the Avestan vam- (to spit), the Lithuanian vemti (to vomit) and the Old Norse væma (seasickness).  It was cognate with the Old Norse váma (nausea, malaise) and the Old English wemman (to defile).  The use of the noun to describe the matter disgorged during vomiting dates from the late fourteenth century and is in common use in the English-speaking world although Nancy Mitford (1904–1973; the oldest of the Mitford sisters) in the slim volume Noblesse Oblige: an Enquiry into the Identifiable Characteristics of the English Aristocracy (1956) noted “vomit” was “non-U” and the “U” word was “sick”, something perhaps to bear in mind after, if not during, vomiting.

Run was from the Middle English runnen & rennen (to run), an alteration (influenced by the past participle runne, runnen & yronne) of the Middle English rinnen (to run), from the Old English rinnan & iernan (to run) and the Old Norse rinna (to run), both from the Proto-Germanic rinnaną (to run) and related to rannijaną (to make run), from the primitive Indo-European hreyh- (to boil, churn).  It was cognate with the Scots rin (to run), the West Frisian rinne (to walk, march), the Dutch rennen (to run, race), the Alemannic German ränne (to run), the German rennen (to run, race) & rinnen (to flow), the Danish rende (to run), the Swedish ränna (to run) and the Icelandic renna (to flow).  The non-Germanic cognates include the Albanian rend (to run, run after).  In Old English the verbal noun was ærning (act of one who or that which runs, rapid motion on foot) and that endured as a literary form until the seventeenth century.  The adjective running (that runs, capable of moving quickly) was from the fourteenth century and was from rennynge; as the present-participle adjective from the verb run, it replaced the earlier erninde, from the Old English eornende, from ærning.  The meaning "rapid, hasty, done on the run" dates from circa 1300 while the sense of "continuous, carried on continually" was from the late fifteenth century.  The language is replete with phrases including “run” & “running” and run has had a most productive history: according to one source the verb alone has 645 meanings and while that definitional net may be widely cast, all agree the count is well into three figures.  The suffix -ing was from the Middle English -ing, from the Old English -ing & -ung (in the sense of the modern -ing, as a suffix forming nouns from verbs), from the Proto-West Germanic -ingu & -ungu, from the Proto-Germanic -ingō & -ungō.  It was cognate with the Saterland Frisian -enge, the West Frisian -ing, the Dutch -ing, the Low German -ing & -ink, the German -ung, the Swedish -ing and the Icelandic -ing; all the cognate forms were used for the same purpose as the English -ing.

Lilly Dick (b 1999) of the Australian Women’s Rugby Sevens.

The portmanteau words crunning (simultaneously running and crying) & cromiting (simultaneously running, crying & vomiting) are techniques used in strength and conditioning training by athletes seeking to improve endurance.  The basis of the idea is that at points where the mind usually persuades a runner or other athlete to pause or stop, the body is still capable of continuing and thus signals like crying or vomiting should be ignored in the manner of the phrase “passing through the pain barrier”.  The idea is “just keep going no matter what” and that is potentially dangerous so such extreme approaches should be pursued only under professional supervision.  Earlier (circa 2015), crunning was a blend of crawl + running, a type of physical training which was certainly self-descriptive and presumably best practiced on other than hard surfaces; it seems not to have caught on.  Crunning & cromiting came to wider attention when discussed by members of the Australian Women’s Rugby Sevens team which won gold at the Commonwealth Games (Birmingham, UK, July-August 2022).  When interviewed, a squad member admitted crunning & cromiting were “brutal” methods of training but maintained both were a vital part of the process by which they achieved the level of strength & fitness (mental & physical) which allowed them to succeed.

The perils of weed.

Although visually similar (spelling & symptoms), crunning & cromiting should not be confused with "scromiting" (a portmanteau of “screaming” and “vomiting”), a word coined in the early twenty-first century as verbal shorthand for cannabinoid hyperemesis syndrome (CHS).  Hyperemesis is extreme, persistent nausea and vomiting, best known in the form suffered during pregnancy (a kind of acute morning sickness) and CHS presents in much the same way.  The recreational use of cannabis was hardly new but CHS was novel and the medical community initially speculated the reaction (induced only in some users) may be caused either by specific genetic differences or something added to or bred into certain strains of weed although the condition appeared to be both rare and geographically distributed.  The long-term effects are unknown except for damage to tooth enamel caused by the stomach acid in the vomit.  In October 2025, a new layer of institutional respectability was gained by the concept of scromiting when the WHO (World Health Organization) announced it had added CHS to its diagnostic manual, the first time the disorder had been granted a dedicated code.  In the US, the existence of the code meant easily it could be adopted by the US CDC (Centers for Disease Control and Prevention) and interpolated into their reporting databases, meaning physicians nationwide could identify, track and study the condition rather than listing it in the broader vomiting or gastrointestinal categories.  A dangerous syndrome which for generations has been suffered by a sub-set of (mostly chronic) cannabis users, CHS causes severe nausea, repeated vomiting, abdominal pain, dehydration, weight loss and (in rare cases) heart rhythm problems, seizures, kidney failure and death, yet it was only after use of the drug was made lawful in many places that increasing incidences were noted.  The data suggests in the US CHS-related visits to hospital ERs (Emergency Rooms) have spiked by an impressive 650% since 2016 although it’s not known to what extent this reflects the extent of the increase in use or a willingness for patients to present now there is no potential legal jeopardy.

One theory is that since “legalization” (the term somewhat misleading because on a strict constitutional interpretation the substance remains proscribed) commercial growers (some of which operate on an industrial scale) have been “improving the breed” to gain market share and historically high levels of THC (tetrahydrocannabinol, the cannabinoid which is the most active of the psychoactive constituents) are now common in “over the counter” weed, this increasing both the incidence and severity of scromiting.  Intriguingly, studies of the available ER data suggested a sharp elevation in cases of CHS during the COVID-19 pandemic and that seems to have established a new baseline, visits remaining high since.  The working assumption among clinicians is the combination of stress (induced by isolation and other factors) and the access to high-potency weed (THC levels well over 20% now often detected, compared with the 5% typical during the 1990s) may have contributed to the rise.  That however remains speculative and the alternative theory is heavy, long-term cannabis use overstimulates the body's cannabinoid system, triggering the opposite of the drug’s usual anti-nausea effect.  Ceasing use is the obvious cure (strictly speaking a preventative) but one as yet not understood amelioration is a long, hot shower and although it’s wholly anecdotal, there does seem to be a link with warming the body’s surface area because those who have experimented with “breathing in steam” report no helpful effect.

Male role model: The legendary Corey Bellemore.

An athletic pursuit probably sometimes not dissimilar to the exacting business of crunning & cromiting is the Beer Mile, conducted usually on a standard 400 m (¼ mile) track as a 1 mile (1.6 km) contest of both running & drinking speed.  Each of the four laps begins with the competitor drinking one can (12 fl oz (US) (355 ml)) of beer, followed by a full lap, the process repeated three times.  The rules have been defined by the governing body which also publishes the results, including the aggregates of miles covered and beers drunk.  Now a sporting institution, it has encouraged imitators and there are a number of variations, each with its own rules.  The holder of this most illustrious world record is Canadian Corey Bellemore (b 1994), a five-time champion, who, at the Beer Mile World Classic in Portugal in July 2025, broke his own world record, re-setting the mark to 4:27.1.  That may be compared with the absolute world record for the mile, held by Morocco’s Hicham El Guerrouj (b 1974) who in 1999 ran the distance in 3:43.13, his additional pace made possible by not being delayed by having to down four beers.
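For the curious, the “drinking overhead” implied by the two marks can be computed; a minimal Python sketch (the function name and the parsing format are assumptions for illustration):

def to_seconds(mark: str) -> float:
    """Convert a 'm:ss.t' running mark to seconds."""
    minutes, seconds = mark.split(":")
    return int(minutes) * 60 + float(seconds)

beer_mile = to_seconds("4:27.1")    # Corey Bellemore, 2025
plain_mile = to_seconds("3:43.13")  # Hicham El Guerrouj, 1999
print(f"Drinking overhead: {beer_mile - plain_mile:.2f} s")  # ~43.97 s over four beers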

University of Otago Medical School.

Some variations of the beer mile simply increase the volume or strength of the beer consumed and a few of these are dubbed the Chunder Mile (“chunder” being circa 1950s Australia & New Zealand slang for vomiting and of disputed origin) on the basis that vomiting is more likely the more alcohol is consumed.  For some however, even this wasn’t sufficiently debauched and there were events which demanded a (cold) meat pie be enjoyed with a jug of (un-chilled) beer (a jug typically 1140 ml (38.5 fl oz (US))) at the start of each of the four laps.  Predictably, these events were most associated with orientation weeks at universities, a number still conducted as late as the 1970s and the best documented seem to have been those at the University of Otago in Dunedin, New Zealand.  Helpfully, at this time, it was the site of the country’s medical school, thereby providing students with practical experience of both symptoms and treatments for the inevitable consequences.  Whether the event was invented in Dunedin isn’t known but, given the nature of males aged 17-21 probably hasn’t much changed over the millennia, it wouldn’t be surprising to learn similar competitions, localized to suit culinary tastes, have been contested by the drunken youth of many places in centuries past.  As it was, even in Dunedin, times were changing and in 1972, the Chunder Mile was banned “…because of the dangers of asphyxiation and ruptured esophaguses.”