
Monday, July 7, 2025

Blazon

Blazon (pronounced bley-zuhn)

(1) In heraldry, an escutcheon or coat of arms or a banner depicting a coat of arms.

(2) In heraldry, a description (verbal or written or in an image) of a coat of arms.

(3) In heraldry, a formalized language for describing a coat of arms (the heraldic description of armorial bearings).

(4) An ostentatious display, verbal or otherwise.

(5) A description or recording (especially of the good qualities of a person or thing).

(6) In literature, verses which dwelt upon and described various parts of a woman's body (usually in admiration). 

(7) Conspicuously or publicly to set forth; display; proclaim.

(8) To adorn or embellish, especially brilliantly or showily.

(9) To depict (heraldic arms or the like) in proper form and color.

(10) To describe a coat of arms.

1275-1300: From the late thirteenth century Middle English blazon (armorial bearings, coat of arms), from the twelfth century Old French blason (shield, blazon (also “collar bone”)).  Of the words in the Romance languages (the Spanish blasón, the Italian blasone, the Portuguese brasão & the Provençal blezo), the first two are said to be French loan-words and the origins of all remain uncertain.  According to the OED (Oxford English Dictionary), the suggestion by nineteenth century French etymologists of connections with Germanic words related to the English blaze is dubious because of the sense disparities.  The verb blazon (to depict or paint (armorial bearings)) dates from the mid sixteenth century and was from the noun, the French blasonner (from the French noun), or both.  In English, it had earlier in the 1500s been used to mean “to set forth descriptively”, especially (by at least the 1530s) specifically “to vaunt or boast”, and in that sense it was probably at least influenced by the English blaze.  Blazon & blazoning are nouns & verbs, blazoner, blazonry & blazonment are nouns and blazoned & blazonable are adjectives; the noun plural is blazons.

A coat of arms, possibly of dubious provenance. 

The now more familiar verb emblazon (inscribe conspicuously) seems first to have been used around the 1590s in the sense of “extol” and the still common related forms (emblazoning; emblazoned) emerged almost simultaneously.  The construct of emblazon was en- +‎ blazon (from the Old French blason in its primary sense of “shield”).  The en- prefix was from the Middle English en- (en-, in-), from the Old French en- (also an-), from the Latin in- (in, into).  It was also an alteration of in-, from the Middle English in-, from the Old English in- (in, into), from the Proto-Germanic in (in).  Both the Latin & Germanic forms were from the primitive Indo-European en (in, into).  The intensive use of the Old French en- & an- was due to confluence with the Frankish intensive prefix an-, which was related to the Old English intensive prefix on-.  It formed a transitive verb whose meaning is to make the attached adjective (1) in, into, (2) on, onto or (3) covered.  It was used also to denote “caused” or as an intensifier.  The prefix em- was (and still is) used before certain consonants, notably the labials “b” & “p”.

Google ngram: It shouldn’t be surprising there seems to have been a decline in the use of “blazon” while “emblazoned” has, by comparison, in recent decades flourished.  That would reflect matters of heraldry declining in significance, their appearance in printed materials correspondingly reduced in volume.  However, because of the way Google harvests data for their ngrams, they’re not literally a tracking of the use of a word in society but can be usefully indicative of certain trends (although one is never quite sure which), especially over decades.  As a record of actual aggregate use, ngrams are not wholly reliable because: (1) the sub-set of texts Google uses is slanted towards the scientific & academic and (2) the technical limitations imposed by the use of OCR (optical character recognition) when handling older texts of sometimes dubious legibility (a process AI should improve).  Where numbers bounce around, this may reflect either: (1) peaks and troughs in use for some reason or (2) some quirk in the data harvested; decade averages (as in the sketch below) damp some of that noise.
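
For anyone wanting to probe such trends themselves, frequencies exported from the ngram viewer can be averaged by decade to damp the year-to-year bounce; a minimal sketch in Python (the file ngram_blazon.csv and its year, blazon & emblazoned columns are hypothetical):

import csv
from collections import defaultdict

# decade -> [sum of "blazon" frequencies, sum of "emblazoned" frequencies, year count]
totals = defaultdict(lambda: [0.0, 0.0, 0])

with open("ngram_blazon.csv", newline="") as f:
    for row in csv.DictReader(f):
        decade = (int(row["year"]) // 10) * 10
        bucket = totals[decade]
        bucket[0] += float(row["blazon"])
        bucket[1] += float(row["emblazoned"])
        bucket[2] += 1

# Decade averages are less twitchy than single-year figures, which can
# bounce around with quirks in the harvested corpus.
for decade in sorted(totals):
    b, e, n = totals[decade]
    print(f"{decade}s: blazon {b / n:.3e}  emblazoned {e / n:.3e}")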

Self-referential emblazoning: Lindsay Lohan's selfie of her modeling a sweater by Ashish, her visage emblazoned in sequins, London, November 2014.

Impressionistic though this assumption is, few would doubt “blazon” is now rare while “emblazoned” is far from uncommon.  While “emblazon” began with the meaning “that which the emblazoner does” (ie (1) to adorn prominently, (2) to inscribe upon and (3) to draw a coat of arms), it evolved by the mid-nineteenth century with the familiar modern sense of “having left in the mind a vivid impression” (often in the form “emblazoned on one’s memory”).  In English, there’s nothing unusual in a derived or modified form of a word becoming more common than its original root, even to the point where the original is rendered rare, unfamiliar or even obsolete, a phenomenon due to changes in usage patterns, altered conventions in pronunciation or shifts in meaning that make the derived form more practical or culturally resonant.  That’s just how English evolves.

Other examples include (1) ruthless vs ruth (ruth (pity; compassion) was once a common noun in Middle English but has long been extinct while ruthless, there being many who demand the description, remains popular), (2) unkempt vs kempt (kempt (neatly kept) would have been listed as extinct were it not for it finding a niche as a literary and poetic form; it has also been used humorously or ironically), (3) disheveled vs sheveled (sheveled was from the Old French chevelé (having hair) and was part of mainstream vocabulary as late as the eighteenth century but, except in jocular use, is effectively non-existent in modern English) and (4) redolent vs olent (redolent (evocative of; fragrant) was from the Latin redolēre (to give off a scent), the construct being re(d)- + olēre (to smell); the unprefixed olent never established itself in English and redolent both outlived and enjoyed a meaning-shift from its root).

Etymologists think of these as part of the linguistic fossil record, noting there’s no single reason for the phenomenon beyond what survives being better adapted to cultural or conversational needs.  In that, these examples differ from the playful fork of back-formation which has produced (1) combobulate (a back-formation from discombobulate (to confuse or disconcert; to throw into a state of confusion), which was a humorous mock-Latin creation in mid-nineteenth century US English), (2) couth (a nineteenth century back-formation from uncouth and used as a humorous form meaning “refined”), (3) gruntled (a twentieth century back-formation meaning “happy or contented; satisfied”, the source being disgruntled (unhappy; malcontented); most sources indicate it first appeared in print in 1926 but the most celebrated example comes from PG Wodehouse (1881–1975) who in The Code of the Woosters (1938) penned: “He spoke with a certain what-is-it in his voice, and I could see that, if not actually disgruntled, he was far from being gruntled.”  Long a linguistic joke, gruntled is now taken seriously by some but the OED remains thus far unmoved) and (4) ept (a back-formation from inept (not proficient; incompetent or not competent (there is a functional difference between those two)), which was from the Middle French inepte, from the Latin ineptus).

Literary use

In literary use, “blazon” was a technical term used by the Petrarchists (devotees of Francis Petrarch (1304-1374), a scholar & poet of the early Italian Renaissance renowned for his love poems & sonnets and regarded also as one of the earliest humanists).  Blazon in this context (a subset of what literary theorists call “catalogue verse”) was adopted because, like the structured and defined elements of heraldic symbolism, Petrarch’s poems contained what might be thought an “inventory” of verses which dwelt upon and detailed the various parts of a woman's body; a sort of catalogue of her physical attributes.  Petrarch’s approach wasn’t new because as a convention in lyric poetry it was well-known by the mid thirteenth century, most critics crediting the tradition to the writings of Geoffrey of Vinsauf, a figure about whom little is known although it’s believed he was born in Normandy.  In England the Elizabethan sonneteers honed the technique as a devotional device, often, in imaginative ways, describing the bits of their mistresses they found most pleasing, a classic example being a fragment from Amoretti and Epithalamion (1595), a wedding day ode by the English poet Edmund Spenser (circa 1552-1599) to his bride (Elizabeth Boyle) in 1594:

Her goodly eyes like sapphires shining bright.
Her forehead ivory white,
Her cheeks like apples which the sun hath rudded,
Her lips like cherries charming men to bite,
Her breast like to a bowl of cream uncrudded,
Her paps like lilies budded,
Her snowy neck like to a marble tower,
And all her body like a palace fair.



Two bowls of cream uncrudded.

So objectification of the female form is nothing new and the poets saw little wrong with plagiarism, most of the imagery summoned being salvaged from the works of Antiquity by elegiac Roman and Alexandrian Greek poets.  Most relied for their effect on brevity, almost always a single, punchy line, and none seem ever to have attempted the scale of the “epic simile”.  As can be imagined, the novelty of the revival didn’t last and the lines soon were treated by readers (some of whom were fellow poets) as clichés to be parodied (a class which came to be called the “contrablazon”), the London-based courtier Sir Philip Sidney (1554–1586) borrowing from the Italian poet Francesco Berni (1497–1535) the trick of using terms in the style of Petrarch but “mixing them up”, thus creating an early form of body dysmorphia: Mopsa's forehead being “jacinth-like”, cheeks of “opal”, twinkling eyes “bedeckt with pearl” and lips of “sapphire blue”.

William Shakespeare (1564–1616) however saw other possibilities in the blazon and in Sonnet 130 (1609) turned the idea on its head, listing the imperfections in his mistress's body parts and characteristics yet concluding, despite all that, he adored her like no other (here rendered in a more accessible English):

My mistress' eyes are nothing like the sun;
Coral is far more red than her lips' red;
If snow be white, why then her breasts are dun;
If hairs be wires, black wires grow on her head.
I have seen roses damasked, red and white,
But no such roses see I in her cheeks;
And in some perfumes is there more delight
Than in the breath that from my mistress reeks.
I love to hear her speak, yet well I know
That music hath a far more pleasing sound;
I grant I never saw a goddess go;
My mistress, when she walks, treads on the ground.
   And yet, by heaven, I think my love as rare
   As any she belied with false compare.

Monday, June 16, 2025

Semaphore

Semaphore (pronounced sem-uh-fawr or sem-uh-fohr)

(1) A “line-of-sight” apparatus (mechanical, hand-held or activated and now even electronic) for conveying information by means of visual signals (typically flags or lights, the positions of which are changed as required).

(2) Any of various devices for signaling by changing the position of a light, flag or other identifiable indicator.  Historically, a common use of “semaphore” was as a noun adjunct (also called a noun modifier or attributive noun) including “semaphore flag”, “semaphore chart”, “semaphore operator” et al.

(3) A codified system of signaling, especially a system by which a special flag is held in each hand and various positions of the arms denoting specific letters, numbers etc.  It remains part of Admiralty signals training.

(4) In biochemistry (as semaphorin), any of a class of proteins that assist growing axons to find an appropriate target and to form synapses.

(5) In biology (as semaphoront), an organism as seen in a specific time during its ontogeny or life cycle, as the object of identification or basis for systematics.

(6) In botany (as semaphore plant), a synonym for the telegraph plant (Codariocalyx motorius), a tropical Asian shrub, one of the few plants capable of rapid movement and so named because the jerking motions of the leaves recalled to observers the actions of the arms of Admiralty signallers; the name dates from the Raj.

(7) In programming, a bit, token, fragment of code, or some other mechanism used to restrict access to a shared function or device to a single process or thread at a time, or to synchronize and coordinate events in different processes; a thread decrements (acquires) the semaphore before entering the critical section and increments (releases) it on leaving, which is what prevents other threads from entering at the same time (see the sketch after this list).

(8) In figurative use (in human and animal behavior), certain non-verbal communications, used consciously and unconsciously, the concept often explored as a literary device.

(9) To signal (information) by means of semaphore.
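
Sense (7) is the easiest to see in action.  A minimal sketch using Python's standard threading.Semaphore (here given a single permit, so it behaves as a mutex; the worker names and counts are illustrative only):

import threading

shared_total = 0
sem = threading.Semaphore(1)  # one permit: only one thread holds it at a time

def worker(amount: int) -> None:
    global shared_total
    with sem:                      # acquire() decrements the counter on entry,
        shared_total += amount     # release() increments it on exit; the update
                                   # in between never interleaves with another thread's

threads = [threading.Thread(target=worker, args=(1,)) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(shared_total)  # always 8, because the critical section is serialized

Sized above one permit, the same object becomes a counting semaphore, admitting that many threads at once.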

1814: From the French sémaphore, the construct being the Ancient Greek σῆμα (sêma) (mark, sign, token) + the French -phore (from the Ancient Greek -φόρος (-phóros), the suffix indicating a bearer or carrier) and thus understood as “a bearer of signals”.  The Greek -phóros was from pherein (to carry), from the primitive Indo-European root bher- (to carry).  The verb was derived from the noun.  Semaphore is a noun & verb, semaphorist, semaphoront & semaphorin are nouns, semaphored is a verb, semaphoring is a verb & adjective, semaphoric & semaphorical are adjectives and semaphorically is an adverb; the noun plural is semaphores.  The noun semaphorism is non-standard but is used in behavioral linguistics to describe patterns of language used to convey meaning in a “coded” form which can be deconstructed for meaning only by sender and receiver.  The form semaphoreology seems not to exist but if anyone ever makes a discipline of the study of semaphore (academic careers have been built from some improbable origins), presumably there will be semaphoreologists.

Chart of the standard semaphore alphabet (top left), a pair of semaphore flags (bottom left) and Lindsay Lohan practicing her semaphore signaling (just in case the need arises and this is the letter “U”), 32nd birthday party, Mykonos, Greece, July, 2018 (right).

Semaphore flags are not always red and yellow, but the colors are close to a universal standard, especially in naval and international signalling.  There was no intrinsic meaning denoted by the use of red & yellow, the hues chosen for their contrast and visual clarity, something important in maritime environments or other outdoor locations where light could often be less than ideal although, importantly, the contrast was sustained even in bright sunshine.  Because semaphore often was used for ship-to-ship signalling, the colors had to be not only easily distinguishable at a distance but also not subject to “melting” or “blending”, a critical factor when used on moving vessels in often pitching conditions, the operator’s moving arms adding to the difficulties.  In naval and maritime semaphore systems, the ICS (International Code of Signals) standardized full-solid red and yellow for the flags but variants do exist (red, white, blue & black seem popular) and these can be created for specific conditions, for a particular cultural context or even as promotional items.

L-I-N-D-S-A-Y-space-L-O-H-A-N spelled-out in ICS (International Code of Signals) semaphore.  One can never tell when this knowledge will come in handy.

Early automobiles were sometimes fitted with mechanical semaphore signals to indicate a driver’s intention to change direction; these the British called “trafficators” (“flippers” in casual use) and they were still being fitted in the late 1950s, by which time they’d long been illuminated to glow a solid amber.  What the mechanical semaphores did was use the model of the extended human arm, used by riders or drivers in the horse-drawn age to signal their intentions to others and although obviously vulnerable to damage, the devices were at the time a good solution although the plastics used from the 1930s were prone to fading, diminishing the brightness.  When electronics advanced to the point where sequentially flashing turn indicators (“flashers”) cheaply could be mass-produced, the age of the semaphore signal ended although they did for a while persist on trucks, where they were attached to the exterior of the driver’s door and hand-activated.

Hand-operated semaphore signal on driver's door of RHD (right-hand-drive) truck (left), an Austin A30 with electrically-activated semaphore indicating impending leftward change of direction (centre) and electrically-activated right-side semaphore on 1937 Rolls-Royce Phantom III Gurney Nutting Touring Limousine (right).

The A30 (1952-1956) was powered by an 803 cm3 (49 cubic inch) four cylinder engine while the Phantom III (1936-1939) was fitted with a 7338 cm3 (447 cubic inch) V12 (noted diarist Sir Henry “Chips” Channon (1897–1958) owned one) so the driving experience was very different but both used the same Lucas semaphore assembly.  Note the "BEWARE, TRAFFICATORS IN USE" notice in A30's rear window.  Because drivers are no longer attuned to look for the now archaic semaphores, some jurisdictions (while still allowing their operation), will permit road registration only if supplementary flashing indicators (now usually amber) are fitted.  In the 1960s many trafficator-equipped cars were modernized with flashers and it's now only collectors or restorers who prize the originality of the obsolete.

Left & right semaphore signals (trafficators): Lucas part number SF80 for one’s Austin A30, Morris Minor or Rolls-Royce Silver Wraith.  In the 1950s, the price may have varied between resellers.

Although the grim realities of post-war economics meant standardization began to intrude, even in the 1950s Rolls-Royce made much of things being “bespoke” and while that was still true of some of the coach-work, what lay beneath the finely finished surface was often from the industry parts-bin and the semaphore turn signals the company fitted to the Silver Wraith (1946-1958) and Silver Dawn (1949-1955) were Lucas part number SF80, exactly the same component used by the humble Austin A30 and Morris Minor (1948-1971), where the functionality was identical.  Presumably, were one to buy the part from Rolls-Royce one would have been charged more (perhaps it was wrapped in more elaborate packaging) and that’s a well-understood industry phenomenon.  The internet has made it easier to trace such commonalities but in the 1980s there was a most useful publication which listed shared part-numbers which differed only in the prices charged, a switch for a Lamborghini which might retail for hundreds available from the Fiat parts counter (a busy place, folklore suggests) for $12 while those aghast at the price quoted for a small linkage in a Triumph Stag’s induction system were pleased the same thing could be bought from a Ford dealer for a fraction of the cost.  Rolls-Royce fitted their last trafficator in 1958 and when Austin updated the A30 as the A35 (1956-1968) flashers were standard equipment, metal covering the apertures where once the semaphores had protruded while internally there was a panel concealing what had once been an access point for servicing.  The Morris Minor, the last of which was assembled (in CKD (completely knocked down) form) in New Zealand as late as 1974(!), switched from trafficators to flashers in 1961, the exterior and interior gaps concealed à la the A35.

Left-side semaphore on 1951 Volkswagen Type 1 (Beetle).

The Latin sēmaphorum (the alternative form was sēmaphoru) is thought to be a calque of the Italian semaforo (traffic light), again borrowed from the French sémaphore in the literal sense of “signaling system”.  The modern Italian for “traffic light” is semaforo although (usually for humorous effect) sēmaphorum is sometimes used as Contemporary Latin.  Traffic lights have for over a century regulated the flow of vehicles in urban areas but the first semaphore signal predated motorized transport, installed in London in 1868.  It was introduced not because it would perform the task better than the policemen then allocated but because it was cheaper and was an example of the by then common phenomenon of machines displacing human labor.  The early mechanical devices were pre-programmed and thus didn’t respond to the dynamics of the environment being controlled and that applied also to the early versions of the now familiar red-amber-green “traffic lights” which began to proliferate in the 1920s but by the 1950s there were sometimes sensors (weight-sensitive points in the road) which could “trigger” a green light if the pre-set timing was creating a needless delay.  Even before the emergence of AI (artificial intelligence) in the modern sense of the term, implementations of AI had been refining the way traffic light systems regulated vehicular flow and in major cities (China apparently the most advanced), cameras, sensors, face and number plate recognition all interact to make traffic lights control the flow with an efficiency no human(s) could match.

ASMR semaphore porn: 1955 Austin A30.  ASMR (Autonomous Sensory Meridian Response) describes the physical & psychological pleasure derived from specific stimuli (usually a sound).  For some, this can be the sight & sound of South Korean girls on TikTok eating noodles while for those fond of machines it can come from hearing semaphore turn-signals being raised and lowered.

Whether it was the early semaphore signals or the soon to be ubiquitous illuminated red-amber-green lights, what the system relied on was compliance; inherently, lacking physical agency, a piece of colored glass can’t stop a car but that almost always is the effect of a “red light”.  In behaviorism, this was described as a discriminative stimulus (SD) in that the red light culturally is understood as a universal cue signalling a punishment might follow any transgression (ie “running the red light”), thus the incentive to obey the signal and avoid negative consequences (crashing or being fined).  What SD does is control behavior through learned association.  The use of red comes from semiotics and the color is culturally assigned to “stop” as green is to “go”, these allocated by virtue of historical associations which long pre-date the technology in the same way semiotics are used (as red & blue) to denote “hot” & “cold” water when taps are labelled, meaning for travellers no knowledge of a local language is needed to work out which is which.  In the jargon, the red light is a “signifier” and the “signified” is stop.

Modern Mechanix magazine, January 1933.

Sir William Morris (1877-1963; later Lord Nuffield) held a number of troubling and even at the time unfashionable views and he’d been sceptical about producing the Morris Minor, describing the prototype as looking “like a poached egg”; in that he was right but the Minor proved a highly profitable success.  In the 1930s however, he did have the imaginative idea of adapting the by then familiar traffic light (in miniature form) to the automobile itself.  The concept was sound, Sir William’s proposed placement even anticipating the “eye level brake lights” of the 1980s and the inclusion of green in the code was interesting but the “mini traffic light” wasn’t taken up; the lesson which should have been learned is that in the absence of legislation compelling change, the industry always will be most reluctant to invest and not until the 1960s would such mandates (for better and worse) begin to be imposed.

1947 Volvo P444 (1947-1958, left) and 2022 Volvo XC 40 (introduced 2017, right).  Volvo abandoned the semaphores years before the British but the designers clearly haven’t forgotten, the rear reflectors on the XC 40 using the shape.  Volvo also adopted the conventional flasher but not before the modernist Swedes had tried the odd inventive solution.

In idiomatic use, semaphore’s deployment tends to be metaphorical or humorous, the former used as a literary device, borrowed from behavioral psychology.  “To semaphore” can mean “wildly or exaggeratedly to gesture” but can also convey the idea of a communication effected without explicitly stating something and that can either be a form of “unspoken code” understood only between the interlocutors or something unconscious (often called body-language).  “Semaphoring a message” can thus be either a form of secret communication or something inferred from non-verbal clues.  Authors and poets are sometimes tempted to use “semaphore” metaphorically to describe emotional cues, especially across physical or emotional distance and one can imagine the dubious attraction for some of having “her sensuous lips silently semaphoring desire” or “her hungry eyes semaphored the truth”.  Among critics, the notion of “semaphoring” as one of the motifs of modernist literature was identified and TS Eliot’s (1888–1965) style in The Waste Land (1922) included coded fragments, often as disconnected voices and symbols, called by some an “emotional semaphore” while Samuel Beckett (1906-1989 and another Nobel laureate) was noted for having his characters exchange their feelings with repetitive gestures, signals and, critically, silences, described variously as “gestural semaphore” or the “semaphoring of despair”.

Saturday, June 14, 2025

Snack

Snack (pronounced snak)

(1) A small portion of food or drink or a light meal, especially one eaten between regular meals.

(2) In the phrase “go snack”, to share profits or returns (mostly archaic).

(3) In slang, someone physically attractive and sexually desirable (regionally limited).

(4) To have a snack or light meal, especially between regular meals.

1300–1350: From the Middle English verb snacchen, snacche, snache & snak & noun snacche, snak & snakee (to snap at, bite, seize (as of dogs)), cognate with the Middle Dutch snacken (to snap (as of dogs), from snakken, a variant of snappen (to snap)) and the Norwegian dialect snaka (to snatch (as of animals)).  In many European languages, snack is used in the same sense though in Swedish technically it’s deverbal of snacka (to chat, to talk).  The pleasing recent noun snackette is either (1) a small shop or kiosk selling snacks or (2) a smaller than usual snack (the word often used by dieters to distinguish their snacks from the more indulgent choices of others).  The synonyms include morsel, refreshment, bite, eats, goodies, nibble, pickings & tidbit (often misused as "titbit").  Specific classes of snack include the "halal snack" (one which would be approved by an ayatollah, mufti, mullah etc as conforming to the strictures of Islam) and the "kosher snack" (one which would be approved by a rabbi (or other rabbinical authority) as conforming to the dietary rules in Judaism).  Snack is a noun, adjective & verb, snackability, snackette & snackery are nouns, snackable is a noun & adjective, snacking & snacked are verbs and snacky, snackish & snacklike are adjectives; the noun plural is snacks.

Cadbury Snack.

The original Middle English verb (to bite or snap (as of dogs)) probably came either from the Middle Dutch or Flemish snacken (to snatch, snap; chatter), the source of which is uncertain although one etymologist traces it to a hypothetical Germanic imitative root snu- used to form words relating to the snout or nose.  The sense of "having a bite to eat; a morsel or light meal” dates from 1807.  The noun snack (a snatch or snap (especially that of a dog)) developed from the verb and emerged circa 1400.  The meaning extended to "a snappish remark" by the 1550s and "a share, portion, part" by the 1680s (hence the now archaic expression “go snacks” which meant "share, divide; have a share in").  The familiar modern meaning "a small morsel to eat hastily" was first noted in 1757.  The first snack bar (a place selling snacks) seems to have opened in 1923 and the similar (often smaller, kiosk-type operations) snackettes were a creation of US commerce in the 1940s.  Snack bars could be either stand-alone businesses or something operating within a stadium, theatre, cinema etc.  The commercial plural form "snax" was coined in 1942 for the vending machine trade and the term “snack table” has been in use since circa 1950.

Nestlé Salted Caramel Munchies.

Functionally (though not etymologically) related was munchies (food or snack) from 1959, the plural of the 1917 munchie (snack eaten to satisfy hunger) from the 1816 verb munch (to eat; to chew).  The familiar (to some) phrase “got the munchies” in the sense of "craving for food after smoking weed (marijuana)" was US stoner slang first documented in 1971 but the Nestlé corporation’s Munchies weren’t an opportunistic attempt to grab the attention of weed smokers.  The chocolate Munchies pre-date the slang use of the word by over a decade, introduced in 1957 by the Mackintosh company, Nestlé acquiring the brand in 1988 when it bought Rowntree Mackintosh and it’s not known if the slang use can be attributed to some stoner coming back from the shop with a bag-full of the snacks and telling his grateful and ravenous companions “I’ve got the Munchies” but it's such a good explanation it should be accepted as verified fact; etymologists who disagree have no soul.  Munchies were originally milk chocolates with a caramel and biscuit centre but the range has in recent years proliferated to include centres of mint fondant, chocolate fudge, cookie dough and salted caramel.  The latest variation has been to use a white chocolate shell; this described as a “limited-edition” but it’s presumed if demand exists, it will become a standard line.

Lindsay Lohan stocking up her snack stash, London, 2008.

This is use of the word "snack" in the most modern sense: pre-packaged items designed usually for one or for a small group to share.  Although most associated with "treats and indulgences" (chocolate bars the classic example), not all snacks can be classified as "junk food" and there's a whole sub-section of the industry dedicated to the production (and, perhaps more to the point, marketing) of "healthy snacks".  Critics however caution that unless it's simply a convenient packaging of a "whole food" (such as nuts which have been processed only to the extent of being shelled), the label should be studied because even food regarded in its natural state as a "healthy choice" can be less so when processed.  The markers to assess include the obvious (fat, salt, sugar) as well as chemicals and other additives, some with names only an industrial chemist would recognize.

Peter Dutton (b 1970; leader of the Australian Liberal Party 2022-2025) enjoying a “Dagwood Dog”, Brisbane Ekka (Exhibition), August 2022.  Because of the context (event, location, not sitting at a table, dish, time of day), this he would probably have regarded as “a snack” rather than “a meal”.  The “Dagwood Dog” was a local variant of the “hot dog” or “corn dog” and Mr Dutton never denied being a Freemason.

A “snack” is by definition both (1) of a lesser quantity than a “meal” and (2) eaten at a different time than the meal (as conventionally defined: breakfast, lunch, dinner) but there are nuances.  For some, the infamous “midnight snack” (a late-night or early-morning trip to the fridge for those who awake with hunger pangs or who can’t sleep because they are so hungry) sometimes evolves, ad-hoc, into what others would call “a meal” while the curious “supper” can be anything from a “light snack” to a synonym for “dinner”.  Additionally, it’s variable by individual: what a Sumo wrestler calls a “snack” might well for a week feed a ballerina.  So there’s nothing which exactly defines the point at which a “snack” should properly be called a “meal” because it’s something geographically, culturally and individualistically deterministic.  A hot dog presented on a plate might be called “a meal” whereas one eaten while wandering around the Minnesota State Fair might be thought “a snack”.  It’s tempting to imagine (at least in Western culture) that if utensils (knife, fork, chopsticks et al) are used it must be a meal and snacks are inherently finger food but the list of exceptions to that will be long.

Snack-shaming: A specific sub-genre of "fat-shaming", the modern convention is that when seen with shopping carts laden with processed snacks, fat people may be photographed and posted on social media, provided their identity adequately is concealed.

A snack for one can also be something like an apple or banana (the latter pre-packaged by nature with its own bio-degradable wrapping) and "snack" was used to describe such quick and easy "bites to eat" by the early eighteenth century, building on the slightly earlier use meaning "a quickly prepared meal" (as opposed to an elaborate dish), and the term became popular to describe meals carried by workers (the sandwich the exemplar) to eat on their break.  Prior to that, "to snack" was to suggest one was having just part of the whole (such as a "slice of cake") and that use was from the traditional use of the word to mean "a portion" of just about anything (land, money, food etc).  As English evolved, the word came to be associated almost exclusively with food and the now rare slang use in the finance industry is the only survivor of earlier use.  It has though become an idiomatic form: (1) A person with an obviously high BMI (body mass index (ie looks fat)) can be "snack-shamed" if (1a) observed eating unhealthy snacks or (1b) seen with a supermarket cart loaded with them; (2) A "snack-slut" is one who can't resist snacking, used as a self-descriptor (socially acceptable and usually amusing if the subject has a low BMI); (3) A "snaccident" (a portmanteau word, the blend being snac(k) + (ac)cident) refers to a snack eaten "by accident" and the validity of such excuses must be assessed on a case-by-case basis (again, tends to be BMI-dependent); (4) A "snackery" is (4a) a place where one buys one's snacks or (4b) an informal term used to describe the place where dead fat people are sent (on the model of "knackery" (a slaughterhouse where animal carcasses unfit for human consumption or other purposes are rendered down to produce useful materials such as adhesives)); (5) A "snackette" is variously (5a) an especially small snack, (5b) a small outlet selling snacks (on the model of "luncheonette" (a small restaurant with a limited range of dishes)) or (5c) a (usually one-off) sexual partner about whom one has no future plans.

Wednesday, June 11, 2025

Hardwired

Hardwired (pronounced hahrd-whyid)

(1) In electronics, built into the hardware.

(2) In mainframe computing, a terminal connected to the CPU(s) by a direct cable rather than through a switching network.

(3) In the behavioral sciences, a cluster of theories pertaining to or describing intrinsic and relatively un-modifiable patterns of behavior by both humans and animals.  Published work describes genetically determined, instinctive behavior, as opposed to learned behavior.

(4) In computer programming, a kludge temporarily or quickly to fix a problem, done historically by bypassing the operating system and directly addressing the hardware (assembly language).

(5) Casual term for anything designed to perform a specific task (see the sketch following this list).
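
In modern programming slang, senses (4) & (5) survive mostly in the looser use of “hardwired” (or “hardcoded”) for any value or behavior fixed in the code itself rather than made configurable.  A minimal sketch of the distinction in Python (the URL and function names here are hypothetical):

import urllib.request

# "Hardwired": the data source is fixed in the code itself; changing it
# means editing and redeploying the program.
def fetch_report_hardwired() -> bytes:
    with urllib.request.urlopen("https://reports.example.com/daily") as r:
        return r.read()

# The configurable alternative: identical behavior, but the source is a
# parameter, so nothing need be "rewired" to point the code elsewhere.
def fetch_report(url: str) -> bytes:
    with urllib.request.urlopen(url) as r:
        return r.read()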

1969:  A compound word: hard + wired.  Hard was from the Middle English hard, from the Old English heard, from the Proto-Germanic harduz, derived ultimately from the primitive Indo-European kort-ús, from kret- (strong, powerful).  Cognate with the German hart, the Swedish hård, the Ancient Greek κρατύς (kratús), the Sanskrit क्रतु (krátu) and the Avestan xratu.  Wire was from the Middle English wir & wyr, from the Old English wīr (wire, metal thread, wire-ornament), from the Proto-Germanic wīraz (wire), from the primitive Indo-European wehiros (a twist, thread, cord, wire), from wehy- (to turn, twist, weave, plait).  The suffix -ed was used to form past tenses of (regular) verbs and in linguistics is used for the base form of any past form.  It was from the Middle English -ede & -eden, from the Old English -ode & -odon (a weak past ending), from the Proto-Germanic -ōd & -ōdēdun.  Cognate with the Saterland Frisian -ede (first person singular past indicative ending), the Swedish -ade and the Icelandic -aði.  The earliest known citation is from 1969 although there are suggestions the word or its variants had been used earlier, both in electronics and forms of mechanical production, the word migrating to zoology, genetics and human behavioral studies in 1971.  The spellings hardwired, hard wired and hard-wired are used interchangeably and no rules or conventions of use have ever emerged.

Lindsay Lohan in leather, hardwired to impressively chunky headphones, visiting New York’s Meatpacking District for a photo-shoot, Soho, November 2013.

The coming of wireless hardware devices really pleased many women who, for whatever reason, often showed an aversion to the sight of cables, whether lying across the floor or cluttering up their desks, noting their curious way of attracting dust and, adding insult to injury, an apparently insoluble tendency to tangle.  There are though still genuine advantages to using a cabled connection and although wireless headphones have long been the preferred choice of most, there remains a niche in which the old ways still are the best.  The advantages include (1) typically superior sound quality (which obviously can be subjective but there are metrics confirming the higher fidelity), (2) no batteries required, (3) inherently lower latency (thus ideal for gaming and audio or video editing because of the precision in synchronization), (4) simplified internal construction which should mean lower weight for equivalent dimensions and improved reliability and (5) close to universal compatibility with any device with a headphone jack or adapter.  The drawbacks include (1) one’s physical movement can be limited by the tethering (thus not ideal for workouts), (2) cables can be prone to damage, (3) cables can be prone to snags & tangles and (4) compatibility emerging as an issue on mobile devices, an increasing number lacking headphone jacks or demanding adaptors.  Of course for some the existence of Bluetooth pairing will be a compelling reason to go wireless and it has to be admitted the modern devices are now of such quality that even lower cost units are good enough to please even demanding audiophiles.

SysCon

IBM explains by example.

In the pre-modern world of the mainframes, there might be a dozen or thousands of terminals (a monitor & keyboard) attached to a system but there was always one special terminal, SysCon (system console), hardwired to the central processor (something not wholly synonymous with the now familiar CPU (central processing unit) in PCs).  Unlike other terminals which connected, sometimes over long distances, through repeaters and telephone lines, SysCon, often used by system administrators (who sometimes dubbed themselves "SysCon", the really nerdy ones not using capitals), plugged directly into the core CPU.  When Novell released NetWare in 1983, they reprised SysCon as the name of the software layer which was the core administration tool.

Google ngram: The pre-twentieth century use of "hardwired" would have been unrelated to the modern senses.  Because of the way Google harvests data for their ngrams, they’re not literally a tracking of the use of a word in society but can be usefully indicative of certain trends (although one is never quite sure which), especially over decades.  As a record of actual aggregate use, ngrams are not wholly reliable because: (1) the sub-set of texts Google uses is slanted towards the scientific & academic and (2) the technical limitations imposed by the use of OCR (optical character recognition) when handling older texts of sometimes dubious legibility (a process AI should improve).  Where numbers bounce around, this may reflect either: (1) peaks and troughs in use for some reason or (2) some quirk in the data harvested.

In recent decades, the word “hardwired” has become a popular form, used figuratively to describe traits, behaviors, or tendencies believed to be innate, automatic, or deeply ingrained, the idea being things “permanently programmed into a human or animal”, on the model of the fixed circuitry in an electronic device.  Although probably over-used and sometimes with less than admirable precision, the term has come to be well-understood as referring to things (1) biologically pre-determined (instincts, reflexes), (2) psychologically ingrained (personality traits, cognitive biases) or (3) culturally conditioned but so deeply entrenched they appear intrinsic.  Even in professions such as medicine, psychiatry & psychology, all noted for their lexicons of technical terms with meanings often (in context) understood only by those with the training, it has become a popular metaphor in colloquial use.  It seems also to be an established element in academic writing because it’s such convenient verbal shorthand.  In that sense, it’s an acceptable metaphor in a way the phrase “it’s in the DNA” is not because that can be literal in a way “it's hardwired” cannot: living organisms have no wires.  DNA (deoxyribonucleic acid) is the famous double helix of polymers which constitute the so-called “building blocks” of life and sometimes the expression “it’s in the DNA” simply is incorrect because what’s being discussed is not connected with the double helix and it would be better to say “it’s hardwired” because the latter is vague enough to convey the idea without being so specific as to mislead.  The best use of the metaphoric “hardwired” is probably in neuroscience because the brain’s neural circuits may directly be compared with electronic circuitry.  The difficulty with using “hardwired” in the behavioural sciences is that very vagueness: it’s not helpful in suggesting where the lines exist between what’s determined by evolution and what are an individual’s temperamental traits.  That said, it remains a useful word but, used carelessly, it can overstate biological determinism.

Friday, May 30, 2025

Tatterdemalion

Tatterdemalion (pronounced tat-er-di-meyl-yuhn or tat-er-di-mal-yuhn)

(1) A person in tattered clothing; a shabby person.

(2) Ragged; unkempt or dilapidated.

(3) In fashion, (typically as “a tatterdemalion dress” etc), garments styled deliberately frayed or with constructed tears etc (also described as “distressed” or “destroyed”).

(4) A beggar (archaic).

1600–1610: The original spelling was tatter-de-mallian (the “demalion” rhymed with “Italian” in English pronunciation), the construct thus tatter + -demalion, of uncertain origin although the nineteenth century English lexicographer Ebenezer Cobham Brewer (1810-1897) (remembered still for his marvelous Dictionary of Phrase and Fable (1894)) suggested it might be from de maillot (shirt), which does seem compelling.  Rather than the source, tatter is thought to have been a back-formation from tattered, from the Middle English tatered & tatird, from the Old Norse tǫturr.  Originally, it was derived from the noun, but it was later re-analysed as a past participle (the construct being tatter + -ed) and from this came the verb.  As a noun a tatter was "a shred of torn cloth or an individual item of torn and ragged clothing" while the verb implied both (as a transitive) "to destroy an article of clothing by shredding" & (as an intransitive) "to fall into tatters".  Tatterdemalion is a noun & adjective and tatterdemalionism is a noun; the noun plural is tatterdemalions.

In parallel, there was also "tat", borrowed under the Raj from the Hindi टाट (ṭāṭ) (thick canvas) and in English it assumed a variety of meanings including as a clipping of tattoo, as an onomatopoeia referencing the sound made by dice when rolled on a table (and it came to be used especially of a loaded die) and as an expression of disapprobation meaning “cheap and vulgar”, either in the context of low-quality goods or sleazy conduct.  The link with "tatty" in the sense of “shabby or ragged clothing” however apparently comes from tat as a clipping of the tatty, a woven mat or screen of gunny cloth made from the fibre of the Corchorus olitorius (jute plant) and noted for its loose, scruffy-looking weave.  The historic synonyms were shoddy, battered, broken, dilapidated, frayed, frazzled, moth-eaten, ragged, raggedy, ripped, ramshackle, rugged, scraggy, seedy, shabby, shaggy, threadbare, torn & unkempt and in the context of the modern fashion industry, distressed & destroyed.  An individual could also be described as a tramp, a ragamuffin, a vagabond, a vagrant, a gypsy or even a slum, some of those terms reflecting class and ethnic prejudice or stereotypes.  Historically, tatterdemalion was also a name for a beggar.

A similar word in Yiddish was שמאַטע‎ (shmate or shmatte and spelled variously as schmatte, schmata, schmatta, schmate, schmutter & shmatta), from the Polish szmata, of uncertain origin but possibly from szmat (a fair amount).  In the Yiddish (and as adopted in Yinglish) it meant (1) a rag, (2) a piece of old clothing & (3) in the slang of the clothing trade, any item of clothing.  That was much more specific than the Polish szmata which meant literally "rag or old, ripped piece of cloth" but was used also figuratively to mean "publication of low journalistic standard" (ie analogous the English slang use of "rag") and in slang to refer to a woman of loose virtue (used as skank, slut et al might be used in English), a sense which transferred to colloquial use in sport to mean "simple shot", "easy goal" etc.

Designer distress: Lindsay Lohan illustrates the look.

Tatterdemalion is certainly a spectrum condition (the comparative “more tatterdemalion”; the superlative “most tatterdemalion”) and this is well illustrated by the adoption of the concept by fashionistas, modern capitalism soon there to supply the demand.  In the fashion business, tatterdemalion needs to walk a fine line because tattiness was historically associated with poverty while designers need to provide garments which convey a message of wealth.  The general term for such garments is “distressed” although “destroyed” is (rather misleadingly) also used.

Highly qualified porn star Busty Buffy (b 1996) in “cut-off” denim shorts with leather braces while beltless.

The ancestor of designer tatterdemalion was a pair of “cut off” denim shorts, improvised not as a fashion statement but as a form of economy, gaining a little more life from a pair of jeans which had deteriorated beyond the point where mending was viable.  Until the counter-culture movements of the 1960s (which really began the previous decade but didn’t until the 1960s assume an expression in mass-market fashion trends), wearing cut-off jeans or clothing obviously patched and repaired generally was a marker of poverty although common in rural areas and among the industrial working class where it was just part of life.  It was only in the 1960s, when an anti-consumerist, anti-materialist vibe attracted the large cohort of youth created by the post-war “baby boom”, that obviously frayed or torn clothing came to be an expression of disregard or even disdain for the prevailing standards of neatness (although paradoxically they were the richest “young generation” ever).  It was the punk movement in the 1970s which took this to whatever extremes seemed possible, the distinctive look of garments with rips and tears secured with safety pins so emblematic of (often confected) rebellion that in certain circles it remains to this day part of the “uniform”.  The fashion industry of course noted the trend and what would later be called “distressed” denim appeared in the lines of many mainstream manufacturers as early as the 1980s, often paired with the acid-washing and stone-washing which previously had been used to make a pair of jeans appear “older”, sometimes a desired look.

Dolce & Gabbana Distressed Jeans (part number FTCGGDG8ET8S9001), US$1150.

That it started with denim makes sense because it's the ultimate "classless" fabric in that it's worn by both rich and poor and while that has advantages for manufacturers, it does mean some are compelled to find ways to ensure buyers are able (blatantly or with some subtlety) to advertise that what they are wearing is expensive; while no fashion house seems yet to have put the RRP (recommended retail price) on a leather patch, it may be only a matter of time.  The marketing of jeans which even when new gave the appearance of having been “broken in” by the wearer was by the 1970s a defined niche, the quasi-vintage look of “fade & age” achieved with processes such as stone washing, enzyme washing, acid washing, sandblasting, emerizing and micro-sanding but this was just to create an effect, the fabrics not ripped or torn.  Distressed jeans represented the next step in the normal process of wear, fraying hems and seams, irregular fading and rips & tears now part of the aesthetic.  As an industrial process that’s not difficult to do but if done in the wrong way it won’t resemble exactly a pair of jeans subject to gradual degradation because different legs would have worn the denim in different places.  In the 2010s, the look spread to T-shirts and (predictably) hoodies, some manufacturers going beyond mere verisimilitude to (sort of) genuine authenticity, achieving the desired decorative effect by shooting shirts with bullets, managing a look which presumably the usual tricks of “nibbling & slashing” couldn’t quite emulate.  Warming to the idea, the Japanese label Zoo released jeans made from material torn by lions and tigers, the company anxious to mention the big cats in Tokyo Zoo seemed to "enjoy the fun" and to anyone who has seen a kitten with a skein of wool, that will sound plausible.  Others emulated the working-class look, the “caked-on muddy coating” and “oil and grease smears” another variant although one apparently short-lived; appearing dirty apparently never a fashionable choice.  All these looks had of course been seen for centuries, worn mostly by the poor with little choice but to eke a little more wear from their shabby clothes but in the late twentieth century, as wealth overtook Western society, the look was adopted by many with disposable income; firstly the bohemians, hippies and other anti-materialists before the punk movement which needed motifs with some capacity to shock, something harder to achieve than had once been the case.

Distressed top and bottom.  Gigi Hadid (b 1995) in distressed T-shirt and "boyfriend" jeans.

For poets and punks, improvising the look from the stocks of thrift shops, that was fine but for designer labels selling scruffy-looking jeans for four-figure sums, it was more of a challenge, especially as the social media generation had discovered that above all they liked authenticity and faux authenticity would not do, nobody wanting to look as if they were trying too hard.  That might have seemed a problem, given the look was inherently fake, but the aesthetic didn’t matter for its own sake; all that had to be denoted was “conspicuous consumption” (the excessive spending on wasteful goods as proof of wealth) and the juxtaposition of thousand-dollar distressed jeans with the odd expensive accessory achieved that and more, the discontinuities offering irony as a look.  The labels, the prominence of which remained a focus, were enough for the message to work although one does wonder if any of the majors have been tempted to print a QR code on the back pocket, linked to the RRP because what people are really trying to say is “My jeans cost US$1200”.

1962 AC Shelby American Cobra (CSX 2000), interior detail, 2016.

The value of selective scruffiness is well known in other fields.  When selling a car, usually a tatty interior greatly will depress the price (sometimes by more even than the cost of rectification).  However, if the tattiness is of some historic significance, it can add to a car’s value, the best example being if the deterioration is part of a vehicle's provenance and proof of originality, a prized attribute to the segment of the collector market known as the “originality police”.  In 2016, what is recognized as the very first Shelby American AC Cobra (CSX 2000) sold for US$13.75 million, becoming the highest price realized at auction for what is classified as an "American car".  Built in 1962, it was an AC Ace shipped to California without an engine (and apparently not AC's original "proof-of-concept" test bed, which was fitted with one of the short-lived 221 cubic inch (3.6 litre) versions of Ford's new "thin-wall" Windsor V8) where the Shelby operation installed a 260 cubic inch (4.2 litre) Windsor and the rest is history.  The tatterdemalion state of the interior was advertised as one of the features of the car, confirming its status as “an untouched survivor”.  Among Cobra collectors, patina caused by Carroll Shelby's (1923–2012) butt is a most valuable tatterdemalion.

Patina plus and beyond buffing out: Juan Manuel Fangio, Mercedes-Benz W196R Stromlinienwagen (Streamliner), British Grand Prix, Silverstone, 17 July 1954.

Also recommended to be repaired before sale are dents, anything battered unlikely to attract a premium.  However, if a dent was put there by a Formula One (F1) world champion, it becomes a historic artefact.  In 1954, Mercedes-Benz astounded all when their new grand prix car (the W196R) appeared with all-enveloping bodywork, allowed because of a since closed loophole in the rule-book.  The sensuous shape made the rest of the field look antiquated although underneath it was a curious mix of old and new, the fuel-injection and desmodromic valve train representing cutting edge technology while the swing axles and drum brakes spoke to the past and present, the engineers’ beloved straight-eight configuration (its last appearance in F1) definitely the end of an era.  On fast tracks like Monza, the aerodynamic bodywork delivered great speed and stability but the limitations were exposed when the team ran the Stromlinienwagen at tighter circuits and in the 1954 British Grand Prix at Silverstone, Juan Manuel Fangio (1911–1995; winner of five F1 world-championship drivers' titles) managed to clout a couple of oil-drums (those and bales of hay how track safety was then done) because it was so much harder to determine the extremities without being able to see the front wheels.  Quickly, the factory concocted a functional (though visually unremarkable) open-wheel version and the sleek original was thereafter used only on the circuits where the highest speeds were achieved.  In 1954, the factory was unconcerned with the historic potential of the dents and repaired the tatterdemalion W196R so an artefact of the era was lost.  That apart, as used cars the W196s have held their value well, an open-wheel version selling at auction in 2013 for US$29.7 million while in 2025 a Stromlinienwagen realized US$53.9 million.

1966 Ferrari 330 GTC (1966-1968) restored by Bell Sport & Classic.  Many restored Ferraris of the pre-1973 era are finished to a much higher standard than when they left the showroom.  Despite this, genuine, original "survivors" (warts and all) are much-sought in some circles.

In the collector car industry, tatterdemalion definitely is a spectrum condition and for decades the matter of patina versus perfection has been debated.  There was once the idea that in Europe the preference was for a vehicle to appear naturally aged (well-maintained but showing the wear of decades of use) while the US market leaned towards cars restored to the point of being as good (or better) than they were on the showroom floor.  Social anthropologists might have some fun exploring that perception of difference and it was certainly never a universal rule but the debate continues, as does the argument about “improving” on the original.  Some of the most fancied machinery of the 1950s and 1960s (notably Jaguars, Ferraris and Maseratis) is now a staple of the restoration business but, although when new the machines looked gorgeous, it wasn’t necessary to dig too deep to find often shoddy standards of finish, the practice at the time something like sweeping the dirt “under the rug”.  When "restored", many of these cars are re-built to a higher standard, what was often left rough because it sat unseen somewhere now smoothed to perfection.  That’s what some customers want and the best restoration shops can do either though there are questions about whether what might be described as “fake patina” is quite the done thing.  Mechanics and engineers who were part of building Ferraris in the 1960s, upon looking at some immaculately “restored” cars, have been known wryly to remark: “that wasn't how we built them then.”

Gucci offered Distressed Tights at US$190 (for a pair, so quite good value).  Rapidly, they sold out.

The fake patina business however goes back quite a way.  Among antique dealers, it’s now a definite niche but from the point at which the industrial revolution began to create a new moneyed class of mine and factory owners, there was a subset of the new money (and there are cynics who suggest it was mostly at the prodding of their wives) who wished to seem more like old money and a trend began of seeking out “aged” furniture with which a man might deck out his (newly acquired) house to look as if things had been in the family for generations.  The notoriously snobbish (and amusing) diarist Alan Clark (1928–1999) once referred to someone as looking like “they had to buy their own chairs”, prompting one aristocrat to respond: “That’s a bit much from someone whose father (the art historian and life peer Kenneth Clark (1903–1983)) had to buy his own castle.”  The old money were of course snooty about such folk and David Lloyd George (1863–1945; UK prime-minister 1916-1922) would lament that many of the “jumped-up grocers” in his Liberal Party were more troublesome and less sympathetic to the troubles of the downtrodden than the "backwoodsmen" gentry in their inherited country houses.