
Friday, February 10, 2023

IIII

IIII (pronounced fawr (U) or fohr (non-U))

A translingual form, an alternative form of IV: the Roman numeral representing four (4), the other known forms being iv, iiii & iiij

Circa 500 BC: The Roman numeral system spread as Roman conquest expanded and remained widely used in Europe until, from circa 1300, it was replaced (for most purposes) by the more adaptable Hindu-Arabic system (including the revolutionary zero (0)) which remains in use to this day.

IIII as a representation has long been restricted to the value four (4).  To avoid numbers becoming too cumbersome, the Roman system came to use subtraction when a smaller numeral precedes a larger one, so the number 14 would be represented as XIV rather than XIIII.  The convention which emerged was that a numeral can precede only another numeral whose value is no more than ten times its own, so I can precede only (and thus be subtracted from) V (five) & X (ten).  However, these “rules” didn’t exist during Antiquity and weren’t (more or less) standardized until well into the medieval period; it’s thus not unusual to find old documents where 9 is represented as VIIII instead of IX.  The practical Romans, unlike the Greeks for whom abstraction was a calling, were little concerned with the concepts of pure mathematics, such as number theory or geometric proofs, and other abstract ideas, devoted instead to utilitarian purposes such as financial accounting, keeping military records and building things.

The numeral system had to be manageable for simple calculations like addition and subtraction, so it was attractive to keep the text strings conveniently short: 44 as XLIV is obviously preferable to XXXXIIII.  Although its limitations seem obvious to modern eyes, given the demands of the times, the system worked remarkably well for almost two millennia despite the largest numeral being M (1000).  It was silly to contemplate writing a string of a thousand Ms to indicate a million (presumably not a value then often used) so the Romans concocted a bar (the vinculum) which, when it appeared above a numeral, denoted a multiplier of 1000: MMMMMM (6000) could thus appear as V̄Ī and a million as M̄.  Compared with the Hindu-Arabic system, it was a fudge but one which for centuries proved serviceable.
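The two conventions discussed above can be sketched in code.  This is a minimal illustration (the table contents are the standard symbol values; the function and variable names are our own, not from any particular library), contrasting the older additive style with the later subtractive style:

```python
# Additive style: only plain symbols, repeated as needed (44 -> XXXXIIII).
ADDITIVE = [(1000, "M"), (500, "D"), (100, "C"), (50, "L"),
            (10, "X"), (5, "V"), (1, "I")]

# Subtractive style: adds the two-letter pairs (CM, XL, IV etc),
# so a smaller numeral before a larger one subtracts (44 -> XLIV).
SUBTRACTIVE = [(1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
               (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
               (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")]

def to_roman(n, table):
    """Greedily emit the largest numeral in `table` that still fits."""
    out = []
    for value, symbol in table:
        while n >= value:
            out.append(symbol)
            n -= value
    return "".join(out)

print(to_roman(44, ADDITIVE))     # XXXXIIII
print(to_roman(44, SUBTRACTIVE))  # XLIV
print(to_roman(4, ADDITIVE))      # IIII, as on the watch dials
```

The same greedy loop serves both styles; only the table changes, which is a neat way of seeing that IIII versus IV is a matter of convention, not arithmetic.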

Where Roman numbers are occasionally still used (book prefaces & introductions, some aeroplanes & automobiles and, charmingly, some software), the number four is almost always represented by IV rather than IIII.  One exception to this however is watch & clock faces, where the use of IIII outnumbers IV, regardless of the cost of the device.  Watchmakers have provided many explanations for the historical origin of this practice, the most popular of which dates from Antiquity: Because “I” also stood for “J” and “V” for “U”, IV could be read as JU and thus Jupiter, an especially venerated Roman god, Jupiter Optimus Maximus being the king of all gods, chief of the pantheon and protector of ancient Rome.  The suggestion is that invoking the name of Jupiter for such a banal purpose would be thought offensive if not actually blasphemous.  Thus IIII it became.

Lindsay Lohan wearing 19mm (¾ inch) Cartier Tank Americaine in 18 karat white gold with a quartz movement and a silver guilloche dial with Roman numerals including the traditional IIII.  The Cartier part-number is B7018L1.

There’s the notion too that the convention arose just because of one of those haphazard moments in time by which history sometimes is made.  The appearance of IIII was said to be the personal preference of Louis XIV (1638–1715; le Roi Soleil (the Sun King), King of France 1643-1715), who apparently issued an instruction (though there’s no evidence it was ever a formal decree) that IIII was the only appropriate way to write the number four, watchmakers ever since tending to comply.  Whether Louis XIV wished to retain some exclusivity in the IV which was part of “his” XIV isn’t known and it may be he simply preferred the look of IIII.  Despite the belief of some, it’s anyway wrong to suggest IIII is wrong and IV right.  The design of IIII was based upon four outstretched fingers, which surely had for millennia been the manner in which the value 4 was conveyed in conversation, and V denoted 5 in tribute to the shape the hand forms when the thumb is added.  The IV notation came later and, because it better conformed with the conventions used for writing bigger numbers, came in medieval times to be thought correct; it was thus adopted by the Church, becoming the “educated” form and that was that.

Not all agree with those romantic tales however, the German Watch Museum noting that in scholarly, ecclesiastical and daily use, IIII was widely used for millennia, well into the nineteenth century, while the more efficient IV didn’t appear with any great frequency until circa 1500.  The museum argues that the watch- and clock-makers’ concerns may have been readability and aesthetics rather than any devotion to historic practice, IIII having display advantages in an outward-facing arrangement relative to the centre of the dial (ie partially upside down, such as on wall, tower or cuckoo clocks), any confusion between IV (4) & VI (6) being eliminated.  Also, a watch, while a functional timepiece, is also decorative and even a piece of jewellery, so aesthetics matter, the use of IIII rendering the dial symmetrically balanced because 14 individual characters exist on each side of the dial and the IIII counterbalances the opposite VIII in the manner IX squares off against III.  So there’s no right or wrong about IIII & IV but there are reasons for the apparent anomaly of the more elegant IV appearing rarely on the dials of luxury watches.

Friday, November 13, 2020

Homonym

Homonym (pronounced hom-uh-nim)

(1) In phonetics, a word pronounced the same as another but differing in meaning, whether spelled the same way or not, as heir and air; a homophone.

(2) In phonetics, a word of the same written form as another but of different meaning and usually origin, whether pronounced the same way or not; a homograph.

(3) In phonetics, a word that is both a homophone and a homograph, that is, exactly the same as another in sound and spelling but different in meaning.

(4) A namesake (a person with the same name as another) (obsolete).

(5) In taxonomy, a name given to a species or genus (that should be unique) that has already been assigned to a different species or genus and that is thus rejected.

1635–1645: The construct was homo- + -onym.  From the French homonyme and directly from the Latin homōnymum, from the Greek homōnymon, neuter of homōnymos (homonymous) (of the same name).  Homo- was from the Ancient Greek ὁμός (homós) (same).  The –onym suffix was a creation for the international scientific vocabulary, a combining form from the New Latin, from the Ancient Greek ὄνυμα (ónuma), the Doric and Aeolic dialectal form of ὄνομα (ónoma) (name), from the primitive Indo-European root no-men- (name); the related form –onymy is also widely used.

For a word which some insist has a narrow definition, it’s used by many to mean quite different things, the related forms being (1) homograph, a word that has the same spelling as another word but a different sound and a different meaning (such as bass, which can be either “a low, deep sound” or “a type of fish”) & (2) homophone, a word that has the same sound as another word but is spelled differently and has a different meaning (such as to, two & too).  Homograph and homophone are uncontested but homonym is used variously to mean (1) a word that is spelled like another but has a different sound and meaning (a homograph), (2) a word that sounds like another but has a different spelling and meaning (a homophone) or (3) a word that is spelled and pronounced like another but has a different meaning (a homograph & homophone).  According to the purists, a homonym must be both a homograph and a homophone and prescriptive dictionaries still tend in this direction but the descriptive volumes (usually while noting the strict construction) acknowledge that, as used in modern English, a homonym can be a homograph or a homophone.  The sage advice seems to be (1) to stick to the classics and use all three words in their strict sense, (2) maintain consistency in use and (3) don’t correct the more permissive (on the Christian basis of “forgive them for they know not what they do”).

Crooked Hillary Clinton and the crooked spire of the Church of St Mary and All Saints, Chesterfield, Derbyshire, England.  Crooked has two meanings and two pronunciations.  Crooked (pronounced krookt) is the past tense of the verb crook (bend or curve out of shape), from the Old English crōcian (to crook, to bend) which was cognate with the Danish kroget (crooked; bent), whereas crooked (pronounced krook-id) is an adjective meaning “bent or not straight” and may be used literally or figuratively to describe someone untrustworthy or dishonest.  Crooked is thus also an example of a heteronym (same spelling with different pronunciations and meanings).

Adding to the murkiness, Henry Fowler (1858-1933) noted in Modern English Usage (1926) that some confusion has long clouded homonym and synonym, something he blamed on the “loose” meaning of the latter, explaining that homonyms are “separate words which happen to be identical in form” while synonyms are “separate words which happen to mean the same thing”.  However, at this point an etymological layer intrudes, Fowler noting “pole” in the sense of “a stake or shaft” is a native English word whereas, when used to mean “the terminal point of an axis”, the origins lie in the Greek.  Rather than one, “pole” is thus two separate words which, being identical in form, are thought homonyms.  By contrast, “cat” the feline and “cat” as a clipping of the Admiralty’s flogging device “cat o' nine tails”, although identical in form and meaning different things, are not separate words but the one word used in two senses and thus not homonymic.

Lindsay Lohan on the couch, sofa, chesterfield or settee, depending on one’s view.

Layers attach also to synonyms, a word used anyway with notorious sloppiness: true synonyms (separate words identical in meaning in the context in which they’re applied) are actually rare compared with the pairs or sets frequently cited, many of which enjoy only a partial equivalence of meaning.  The imprecise use isn’t necessarily bad and often is essential for poetic or literary reasons but technically, synonyms should be separate words identical in denotation (what they reference) and connotation (what they mean); pure synonyms may thus be interchanged with no effect but such pairs or sets are rare, although in technical fields (IT & various flavors of engineering) they have in recent decades become more numerous.  However, even when words satisfy Henry Fowler’s standards, nuances drawn from beyond etymology and phonetics can lend a layer of meaning which detracts from the purity of the synonymousness.  Sofa & couch for example are often used interchangeably and regarded by most as synonymous but to a student of the history of furniture, because couch is from the French noun couche (a piece of furniture with no arms used for lying), from the verb meaning “to lie down”, it differs from a sofa (a long, upholstered seat usually with arms and a back).  That’s fine but “sofa” is used by some as a class-identifier, being the “U” (upper-class) form while couch, settee and such are “non-U”.

Saturday, July 4, 2020

Knickers

Knickers (pronounced nik-erz)

(1) Loose-fitting short trousers gathered in at the knees.

(2) A bloomers-like undergarment worn by women.

(3) A general term for the panties worn by women.

(4) In product ranges, a descriptor of certain styles of panties, usually the short-legged underpants worn by women or girls.

(5) As the slang “to get one's knickers in a twist”, to become flustered or agitated (mostly UK, Australia & New Zealand).

(6) In slang, a mild expression of annoyance (archaic).

1866: A clipping of knickerbockers (the plural and a special use of knickerbocker).  The use is derived from the short breeches worn by Diedrich Knickerbocker in George Cruikshank's illustrations for Washington Irving's (1783-1859) A History of New York (1809), published under the pen-name Diedrich Knickerbocker.  The surname Knickerbocker (also spelled Knikkerbakker, Knikkerbacker, and Knickerbacker) is an American creation, based on the names of early Dutch settlers of New Netherland, probably derived from the Dutch immigrant Harmen Jansen van Bommel(l), who went variously by the names van Wy(y)e, van Wyekycback(e), Kinnekerbacker, Knickelbacker, Knickerbacker, Kinckerbacker, Nyckbacker, and Kynckbacker.  The precise etymology is a mystery, speculations including a corruption of the Dutch Wyekycback, the Dutch knacker (cracker) + the German Bäcker (or the Dutch bakker (baker)), or the Dutch knicker (marble (toy)) + the German Bäcker (or the Dutch bakker).  Aside from the obvious application (of or relating to knickerbockers), it was in the US used attributively as a modifier, referencing the social class with which the garment was traditionally associated; this use is now listed as archaic.

Men in knickerbockers.

Washington Irving was a US writer, historian and diplomat, most remembered today as the author of Rip Van Winkle (1819).  Although the bulk of his work was that of a conventional historian, his early writing was satirical, many of his barbs aimed at New York’s high society and it was Irving who in 1807 first gave NYC the nickname "Gotham" (from the Anglo-Saxon, literally “homestead where goats are kept”, the construct being the Old English gāt (goat) + hām (home)).  The name Diedrich Knickerbocker he introduced in 1809 in A History of New York (the original title A History of New-York from the Beginning of the World to the End of the Dutch Dynasty).  A satire of local politics and personalities, it was also an elaborate literary hoax, Irving through rumor and missing-person advertisements creating the impression Mr Knickerbocker had vanished from his hotel, leaving behind nothing but a completed manuscript.  The story captured the public imagination and, under the Knickerbocker pseudonym, Irving published A History of New York to critical and commercial success.  The name Diedrich Knickerbocker became a nickname for the Manhattan upper-class (later extended to New Yorkers in general) and was adopted by the New York Knickerbockers baseball team (1845-1873), the name revived in 1946 for the basketball team now part of the US National Basketball Association although their name usually appears as the New York Knicks.  The figurative use to describe New Yorkers of whatever status faded from use early in the twentieth century.  Knickerbocker was of course a real name, one of note being the US foreign correspondent HR Knickerbocker (1898–1949) who in 1936 was a journalist for the Hearst Press, accredited to cover the Spanish Civil War (1936-1939).
Like that of many foreign reporters, his work was made difficult by the military censors who, after many disputes, early in 1937 deported him after he’d tried to report the retreat of one of the brigades supplied by Benito Mussolini (1883-1945; Duce (leader) & prime-minister of Italy 1922-1943) with the words “The Italians fled, lock, stock and barrel-organ”.

Kiki de Montparnasse lace knickers, US$190 at FarFetch.

It was in the Knickerbocker tale of 1809 that Irving made the first known reference in print to the doughnut (after the 1940s often “donut” in North American use although that spelling was noted as early as the mid-nineteenth century), although the “small, spongy cake made of dough and fried in lard” was probably best described as “a lump” because there seems to be no suggestion the size and exact shape of the things were in any way standardized beyond being vaguely roundish.  It’s not clear when the holes became common, the first mention of them apparently in 1861, at which time one writer recorded that in New York City (the old New Amsterdam) they were known also as olycokes (from the Dutch oliekoek (oily cake)) and some food guides of the era listed doughnuts and crullers as “types of olycoke”.

For designers, conventional knickers can be an impediment so are sometimes discarded: Polish model Anja Rubik (b 1983), Met Gala, New York City, May, 2012.  Note JBF hair-style and commendable hip-bone definition.

Knickers dates from 1866, in reference to loose-fitting pants for men worn buckled or buttoned at the waist and knees, a clipping of knickerbockers, used since 1859 and so called because of their resemblance to the trousers of old-time Dutchmen in George Cruikshank's (1792-1878) illustrations for the History of New York.  A now extinct derivation was the Scottish nicky-tam (garter worn over trousers), dating from 1911, a shortened, colloquial form, the construct being knickers + the Scottish & northern English dialect taum, from the Old Norse taumr (cord, rein, line), cognate with the Old English team, the root sense of which appears to be "that which draws".  It was originally a string tied by Scottish farmers around rolled-up trousers to keep the legs of them out of the dirt (in the style of the plus-fours once associated with golf, so named because they were breeches with four inches of excess material which could hang in a fold below the fastening beneath the knee, the plus-four a very similar style to the classic knickerbocker).  The word “draws” survives in Scots-English to refer to trousers in general.  It also had a technical use in haberdashery, describing a linsey-woolsey fabric with a rough knotted surface on the right side which was once a popular fabric for women's dresses.

Cami-knickers, 1926, Marshalls & Snelgrove, Oxford Street, London.

The New York garment industry in 1882 adopted knickers to describe a "short, loose-fitting undergarment for women", apparently because of the appeal of the name.  By 1884, the word had crossed the Atlantic and in both France and the UK was used to advertise the flimsier of women’s “unmentionables” and there have long been many variations (although there’s not always a consistency of style between manufacturers) including camiknickers, French knickers, the intriguingly-named witches' knickers & (the somewhat misleading) no knickers.  From the very start, women’s knickers were, as individual items, sold as “a pair” and there’s no “knicker”, whereas the singular form knickerbocker, unlike the plural, may refer only to a single garment.  In the matter of English constructed plurals, the history matters rather than any rule.  Shoes and socks are obviously both a pair because that’s how they come but a pair of trousers seems strange because it’s a single item.  That’s because modern "trousers" evolved from the Old Scots Trews, Truis & Triubhas and the Middle English trouzes & trouse which were separate items (per leg) and thus supplied in pairs, the two coverings joined by a breechcloth or a codpiece.  A pair of spectacles (glasses) is similar in that lenses were originally separate (à la the monocle), things which could be purchased individually or as a pair.  The idea of a pair of knickers was natural because it was an adaptation of earlier use for the men’s garments, sold as “pairs of knickerbockers” or “pairs of knickers”.

Advertisement for French lingerie, 1958.  Now owned by Munich-based Triumph International GmbH, Valisère was in the early twentieth century founded as a glove manufacturer by the Perrin family in Grenoble, Isère (thus the name).  It made fabric gloves exclusively until 1922, when it expanded to produce fine lingerie and was instantly successful, in the coming years opening factories in Brazil and then Morocco.

In English, euphemisms for underwear (especially those of women) have come and gone.  In that, the churn-rate is an example of the linguistic treadmill: Terms created as “polite forms” become as associated with the items they describe as the word they replaced and thus also come to be thought “common”, “rude” or “vulgar” etc, thus necessitating replacement.  Even the now common “lingerie” (in use in English by at least 1831) had its moments of controversy in the US where, in the mid-nineteenth century, on the basis of being so obviously “foreign” and thus perhaps suggestive of things not desirable, decent folk avoided it.  It was different in England where it was used by manufacturers and retailers to hint at “continental elegance” and imported lacy, frilly or silk underwear for women would often be advertised as “Italian lingerie” or “French lingerie”.  That was commercial opportunism because lingerie was from the French lingerie (linen closet) and thus deconstructs in English use as “linen underwear” but any sense of the exclusive use of “linen” was soon lost and the association with “luxury” stuck, lingerie coming to be understood as those undergarments which were delicate or expensive; what most wore as “everyday” wear wouldn’t be so described.

Although apparently first seen in 1866 and by the early 1880s in general commercial use to describe “underpants” (a word dating from 1871) for women or girls, “knickers” was not the last word on the topic, “undies” (1906), “panties” (1908) and “briefs” (1934) following.  However, for those with delicate sensibilities, mention of “knickers” (one’s own or another’s) could be avoided because there evolved a long list of euphemisms, including “inexpressibles”; “unmentionables” (1806); “indispensables” (1820); “ineffables” (1823); “unutterables” (1826); “innominables” (1827); “inexplicables” (1829); “unimaginables” (1833) and “unprintables” (1860).  In modern use, “unmentionables” is still heard although use is now exclusively ironic, but the treadmill is still running because, as the indispensable Online Etymology Dictionary noted when compiling that list, “intimates” seems (in the context of knickers and such) to have come into use as recently as 1988; it’s short for “intimate apparel”, first used 99 years earlier.

Lindsay Lohan in cage bra and knickers, Complex Magazine photo-shoot, 2011.  In the technical sense, were the distinctive elements of a cage bra truly to be structural, the essential components would be the underwire and gore.

The bra, like a pair of knickers, is designed obviously to accommodate a pair yet is described in the singular for reasons different again.  Its predecessor, the bodice, was often supplied in two pieces (and was thus historically referred to as “a pair of bodies” (and later “a pair of bodices”)) and laced together but that’s unrelated to the way a bra is described: It’s a clipping of the French brassière and that is singular.  Brassière entered English in the late nineteenth century although the French original often more closely resembled a chemise or camisole, the adoption in English perhaps influenced by the French term for something like the modern bra being soutien-gorge (literally, "throat-supporter") which perhaps had less appeal, although it may be no worse than the more robust rehausseur de poitrine (chest uplifter) which seems more accurate still.  Being English, "brassiere" was soon clipped to "bra" and a vast supporting industry evolved, with global annual sales estimated to exceed US$60 billion in 2025 although, since Donald Trump's (b 1946; US president 2017-2021 and since 2025) imposition of increased tariffs, just about all projections in the world economy must be thought "rubbery".

Friday, September 10, 2021

Random

Random (pronounced ran-duhm)

(1) Proceeding, made, or occurring without definite aim, reason, or pattern; lacking any definite plan or prearranged order; haphazard.

(2) In statistics, of or characterizing a process of selection in which each item of a set has an equal probability of being chosen (the random sample); having a value which cannot be determined but only described probabilistically.

(3) Of materials used in building and related constructions, lacking uniformity in size or shape.

(4) Of ashlar (stonework), laid without continuous courses and applied without regularity.

(5) In slang (also clipped to “rando” and some on-line sources insist “randy” is also used), something or someone unknown, unidentified, unexpected or out of place; anything odd or unpredictable (not necessarily a pejorative term and used as both noun & adjective).

(6) In slang, someone unimportant; a person of no consequence (always a pejorative).

(7) In printing, the sloping work surface at the top of a compositor's workbench on which type is composed (also called a bank and use now almost exclusive to the UK).

(8) In mining, the direction of a rake-vein.

(9) Speed, full speed; impetuosity, force (obsolete).

(10) In ballistics, the full range of a bullet or other projectile and thus the angle at which a weapon is tilted to gain maximum range (obsolete).

(11) In computing (as pseudorandom), mimicking the result of random selection.

1650s: From the earlier randon, from the Middle English randoun & raundon, from the Old French randon, a derivative of randir (to run; to gallop) of Germanic origin, related to the Old High German rinnan (to run) (from which Modern French gained randonnée (long walk, hike)), from either the Frankish rant (a running) & randiju (a run, race) or the Old Norse rend (a run, race), both from the Proto-Germanic randijō, from rinnaną (run), from the primitive Indo-European r̥-nw- (to flow, move, run).  It was cognate with the Middle Low German uprinden (to jump up) and the Danish rende (to run).  The development of the adjective to mean “having no definite aim or purpose, haphazard, not sent in a special direction” evolved in the 1650s from the mid-sixteenth century phrase “at random” (at great speed) which picked up the fourteenth century sense from the Middle English noun randon & randoun (impetuosity; speed).  In English, the meaning closely mirrored that in the Old French randon (rush, disorder, force, impetuosity), gained from Frankish or other Germanic sources.  The spelling shift in Modern English from -n to –m was not unusual (seldom, ransom etc).  Random is a noun & adjective, randomness & randomosity are nouns, randomize is a verb and randomly is an adverb; the noun plural is randoms.

A “random person” is one variously unknown, unidentified, unexpected or out of place.

In general use, the meanings related to speed (full speed; force, trajectory of delivery etc) faded from use between the fourteenth & seventeenth centuries but persisted in the field of ballistics where “random” described the limit of the range of a bullet or other projectile (and thus the angle at which a weapon was tilted to gain the maximum range).  Even that was largely obsolete by the early twentieth century but the idea of the angle being “a random” persists still in pockets in the UK to describe a sloping work surface on which printers compose pages (although few now use physical metal type).  The now familiar twenty-first century slang use can be either pejorative (someone unimportant; a person of no consequence) or neutral tending to the amused (something or someone unknown, unidentified, unexpected or out of place; anything odd or unpredictable).  The modern adoption appears to have its origin in 1980s US college student slang when “a person who does not belong on our dormitory floor” was so described; from this the hint of “inferior, undesirable” was perhaps inevitable.  “Rando” seems to be the standard abbreviation but some on-line sources also list “randy” which would seem to risk confusion or worse.

School lunch social engineering: Some sources recommend parents cut their children’s sandwiches in random ways.  The theory is it helps train their minds to accept change and helps them learn to adapt.

In computing, random access memory (RAM) has since the 1980s been familiar as one of a handful of the critical specifications of a computer (CPU, RAM, drive space) and the origin of the term dates from IBM’s labs in the early 1950s when it was used to describe a new form of memory which could be read non-sequentially.  The modern RAM used by personal computers, servers, smart phones etc is an evolution from the original memory model; in the world of the early mainframes there was simply “storage” which fulfilled the functions now performed by both RAM and media like hard disks & solid state drives.  RAM is now a well-known commodity but the companion ROM (Read-Only Memory) is understood only by nerds and only an obsessional few of them give it much thought.  RAM is volatile in that its contents are inherently temporary, lost when the device is powered-down or re-started; loosely, it can be thought of as holding data as transient electrical charge.  That characteristic means it’s fast, affording the most rapid access by the CPU (Central Processing Unit), so it is used to hold whatever data is at the time most in demand, which can be parts of the operating system, applications or documents.  ROM is non-volatile and whatever is written to ROM remains even if a device is switched off; it’s thus used for essential information like firmware and hardware information.

In mathematics and statistics, random does have precise definitions but in general use it’s also used as a vague synonym for “typical or average”.  To a statistician, the word implies “having unpredictable outcomes to the extent all outcomes are equally probable and, if any statistical correlation is found to exist, it will be wholly coincidental”.  Thus, although all dictionaries list the comparative as more random and the superlative as most random, a statistician will insist these are as absurd as “very unique” although even among mathematicians phrases like “increasingly random” or “tending to randomness” are probably not unknown.  For others, the forms are useful and the colloquial use to mean “apropos of nothing; lacking context; unexpected; having apparent lack of plan, cause or reason” is widely applied to events, even those which to a specialist may not be at all random and may even be predictable.  For most of us, any sub-set of numbers which appears to have no pattern will appear random but mathematicians need to be more precise.  In the strict, technical sense, a true random number set exists only when two conditions are satisfied: (1) the values are uniformly distributed over a defined interval or set and (2) it is impossible to predict future values based on past or present ones.  In the pre-computer age, creating random number lists was challenging and subsequent analysis has found some of the sets created by manual or mechanical means were not truly random, although those which were sufficiently large probably were functional for the purposes to which they were put.
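The first of the two conditions above (uniform distribution) can be checked mechanically; this is a minimal sketch using the chi-square statistic over digit frequencies (the function name is our own).  Note that passing such a test says nothing about the second condition: the perfectly predictable sequence 0, 1, 2, ..., 9, 0, 1, ... is perfectly uniform, which is why unpredictability must be tested separately.

```python
import random
from collections import Counter

def chi_square_uniform(digits, categories=10):
    """Chi-square statistic of `digits` against a uniform distribution."""
    expected = len(digits) / categories
    counts = Counter(digits)
    return sum((counts.get(c, 0) - expected) ** 2 / expected
               for c in range(categories))

rng = random.Random(0)                      # seeded for repeatability
sample = [rng.randrange(10) for _ in range(10_000)]
stat = chi_square_uniform(sample)
# For 9 degrees of freedom, a statistic far above about 21.7 (the 1%
# critical value) would suggest the digits are not uniformly distributed.
print(f"chi-square statistic: {stat:.2f}")
```

A statistic of exactly zero means the counts matched the expectation perfectly, which in a genuinely random sample would itself be suspicious.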

“Random news” is something strange, unexpected and often amusing.    

Now, random number generators (RNG) are used; they can exist either in hardware or software and there are two types: (1) pseudorandom number generators (PRNG) and (2) true random number generators (TRNG).  A software algorithm, a PRNG emulates a TRNG by mimicking the selection of a value to approximate true randomness, the limitation being that the algorithm is based on a distribution (the origin of the term pseudorandom) which can only produce something ultimately deterministic and predictable (although determining the pattern can demand much computational power).  A PRNG relies on a seed number and, if that can be isolated, subsequent numbers can be predicted although, if the subset is large, for many purposes what PRNGs generate is functional.  TRNGs don’t use an algorithm (although their processes can be represented by one) but are instead based on an unpredictable physical variable such as the radioactive decay of isotopes, airwave static, or the behaviour of subatomic particles, the latter now favoured for their utterly unpredictable movements, now called “pure randomness”.  So random is the behaviour of subatomic particles that their observation appears to be immune to the measurement biases which can (at least in theory) afflict other methods.
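Why a PRNG is “pseudo” can be shown in a few lines: given the seed, every subsequent value is fully determined.  This sketch uses the classic linear congruential generator (LCG) recurrence with one well-known set of constants; real libraries use stronger algorithms (Python's random module, for instance, uses the Mersenne Twister) but the principle, state in, state out, is the same:

```python
def lcg(seed, a=1664525, c=1013904223, m=2**32):
    """Yield an endless stream of pseudorandom values from `seed`
    via the recurrence state = (a * state + c) mod m."""
    state = seed
    while True:
        state = (a * state + c) % m
        yield state

gen1 = lcg(seed=42)
gen2 = lcg(seed=42)
first = [next(gen1) for _ in range(5)]
second = [next(gen2) for _ in range(5)]
print(first == second)  # True: same seed, identical "random" sequence
```

Reproducibility from a seed is a feature for simulations (the same experiment can be re-run exactly) and a fatal flaw for cryptography, where an attacker who recovers the seed recovers everything.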

Random numbers are important in a number of fields including (1) statistical sampling and experimentation where it’s essential to select a random sample to ensure that the results are representative of the entire population, (2) cryptography where random numbers are used to generate the encryption keys which ensure the security of data and communications, (3) simulation and modelling where there’s a need to replicate real-world scenarios, (4) gaming & gambling where the need exists to create unpredictable outcomes and (5) randomized controlled trials (RCT), notably in medical and scientific research where true randomness is needed to assist in the assessment of the effectiveness of treatments, interventions, or policies.
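The cryptographic case (2) is why programming languages distinguish between their general-purpose PRNG and a source fed by operating-system entropy; in Python, for example, the `secrets` module exists for key material while the seeded `random` module does not (a minimal sketch):

```python
import secrets

# Key material for cryptographic use should come from an OS entropy
# source, never from a seeded, predictable PRNG.
key = secrets.token_bytes(32)      # 256-bit key
print(len(key))                    # 32 bytes

# A hex form is convenient for storage or transmission:
print(len(secrets.token_hex(32)))  # 64 hex characters
```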

Saturday, July 3, 2021

Canthus

Canthus (pronounced kan-thuhs)

The angle or corner on each side of the eye, formed by the natural junction of the upper and lower lids; there are two canthi on each eye: the medial canthus (closer to the nose) and the lateral canthus (closer to the ear).

1640–1650: From the Ancient Greek κανθός (kanthós) (corner of the eye) (and also an alternative spelling of cantus (in music, sung, recited, sounded, blew, chanted etc)), which became conflated with the New Latin canthus, from the Classical Latin cantus (the (iron) rim of a wheel).  The term describing the “iron rim of a wheel” was ultimately of Gaulish origin, from the Proto-Celtic kantos (corner, rim) and related to the Breton kant (circle), the Old Irish cétad (round seat) and the Welsh cant (rim, edge).  The Greek form was borrowed by Latin as canthus and with that spelling it entered English.  In the medieval way of such things, canthus and cantus became conflated, possibly under the influence of regional variations in pronunciation, but some etymologists have noted there was a tendency among some scribes and scholars to favor longer Latin forms, more letters for whatever reason being thought better than fewer.  The most familiar descendant in music is the canto (a description of a form of division in composition with a surprisingly wide range of application).  Canthus is a noun and canthal is an adjective; the noun plural is canthi (pronounced kan-thahy).

One word in English which has long puzzled etymologists is the late fourteenth century cant (slope, slant) which appeared first in Scottish texts, apparently with the sense “edge, brink”.  All dictionaries list it as being of uncertain origin and the Oxford English Dictionary (OED) notes words identical in form and corresponding in sense are found in many languages including those from Teutonic, Slavonic, Romanic & Celtic traditions.  Rare in English prior to the early seventeenth century, the meaning “slope, slanting or tilting position” had been adopted by at least 1847 and may long have been in oral use.  The speculation about the origin has included (1) the Old North French cant (corner) which may be related to the Middle Low German kante or the Middle Dutch kant, (2) the New Latin canthus, from the Classical Latin cantus (the (iron) rim of a wheel), (3) the Russian kutu (corner) and (4) the Ancient Greek κανθός (kanthós) (corner of the eye).  To all of these there are objections and the source thus remains uncertain.

The metrics of the attractiveness of women

PinkMirror is a web app which helps users optimize their facial aesthetics, using an artificial intelligence (AI) engine to deconstruct the individual components an observer’s brain interprets as a whole.  Because a face is for these purposes a collection of dimensions & curves with certain critical angles determined by describing an arc between two points, it means things can be reduced to metrics, and the interaction of these numbers can be used to create a measure of attractiveness.  Helpfully, PinkMirror's site is interactive and users can upload a selfie for an analysis which will reveal if one is ugly or beautiful.  That's good because people have a right to know.

Positive, (left), neutral (centre) & negative (right) eye canthal tilt.

Perhaps the most interesting example of the components is the eye canthal tilt, a positive tilt being regarded as more attractive than a negative.  The eye canthal tilt is the angle between the internal corner of the eyes (medial canthus) and the external corner of the eyes (lateral canthus) and is a critical measure of periorbital (of, pertaining to all which exists in the space surrounding the orbit of the eyes, including skin, eyelashes & eyebrows) aesthetics.  The eye canthal tilt can be negative, neutral, or positive and is defined thus:

Positive: Medial canthus tilt between +5 and +8° below the lateral canthus.

Neutral: Medial canthus and lateral canthus are in a horizontal line.

Negative: Medial canthus tilt between -5 and -8° (the medial canthus sitting above the lateral canthus).
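Given coordinates for the two canthi, the tilt is simply the signed angle of the line joining them; the sketch below (hypothetical measurements, assuming a y-axis that increases upward so a lateral canthus higher than the medial yields a positive angle) shows how the three cases fall out:

```python
import math

def canthal_tilt(medial, lateral):
    """Signed tilt in degrees of the medial-to-lateral canthal axis.

    Points are (x, y) with y increasing upward; a positive result
    means the lateral canthus sits above the medial canthus.
    """
    dx = lateral[0] - medial[0]
    dy = lateral[1] - medial[1]
    return math.degrees(math.atan2(dy, dx))

# Hypothetical measurements (arbitrary units):
print(round(canthal_tilt((0.0, 0.0), (30.0, 3.7)), 1))   # 7.0: positive tilt
print(round(canthal_tilt((0.0, 0.0), (30.0, 0.0)), 1))   # 0.0: neutral
print(round(canthal_tilt((0.0, 0.0), (30.0, -3.7)), 1))  # -7.0: negative tilt
```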

Pinkmirror cites academic research which confirms a positive canthal tilt is a “power cue” for female facial attractiveness and, while it’s speculative, a possible explanation offered by the researchers was linked to (1) palpebral (of, pertaining to, or located on or near the eyelids) fissure inclination being steeper in children than adults (thus classifying it as a neonatal feature) and (2) it developing into something steeper still in females than males after puberty (thus becoming a sexually dimorphic feature).  Pinkmirror notes also that natural selection seems to be operating to support the idea, data from Johns Hopkins Hospital finding that in women the intercanthal axis averages +4.1 mm (.16 of an inch) or +4°, the supposition being that women with the advantage of a positive medial canthus tilt are found more attractive and so attract more mates, leading to a higher degree of procreation, this fecundity meaning the genetic trait producing the characteristic feature is more frequently seen in the population.  Cosmetic surgeons add another layer to the understanding, explaining the canthal tilt is one of the markers of aging, a positive tilt exuding youth, health and exuberance whereas a line tending beyond the negative is associated with aging, this quite literally a product of natural processes, the soft tissue gradually descending under the effect of gravity, an aspect of Vogue magazine’s definition of the aging process: “Everything gets bigger, hairier & lower”.

With people, medial canthus tilt is thus an interaction of (1) the roll of the genetic dice and (2) the cosmetic surgeon’s scalpel.  With manufactured items however, designers have some scope to anthropomorphize objects and few visages are as obviously related to a human’s eyes as the headlamps on a car.

The positive, neutral & negative: 1965 Gordon-Keeble GK-1 (left), 1958 Edsel Corsair Hardtop (centre) and 1970 Maserati Ghibli Spyder (right).

When headlamps were almost universally separate circular devices, the creation of a medial canthus tilt really became possible in the mid-1950s after dual units were first made lawful in the US and then rapidly became fashionable.  Overwhelmingly, the designers seemed to prefer the neutral and, where a positive tilt was used, it was exaggerated well beyond that found in humans.  Instances of the negative were rare, which would seem to support the findings of attractiveness in humans, but they were sometimes seen when hidden headlamps were used, there necessitated by the form of the leading edge under which they sat.  The suspicion is that designers found a negative slant acceptable if usually it was hidden from view.

Retractable headlights: 1972 Ferrari 365 GTC/4 (top left), 1968 Lamborghini Islero (top right), 1967 Maserati Ghibli Spyder (bottom left) and 1970 Plymouth Superbird (bottom right).

With a retractable mechanism, usually they were hidden from view.  Although sometimes the diagonal placement of headlights was a deliberate choice by the stylist, it could be something dictated by the body's shape and this was the case when quad units were used in conjunction with retractable housings.  On most cars the diagonal motif appeared with the outboard lights mounted noticeably higher than those inboard but, because of the slope, when retractable lights were used the inner lights could sit higher, the visual effect sometimes exaggerated because the angle the housing (following the horizontal nose-line) assumed when erected made the inboard lights seem higher still.  It was a product of shape and not something inherent to the “pop-up” retractable technique: the 1969 Dodge Daytona and 1970 Plymouth Superbird (both homologation exercises for use on the NASCAR (National Association for Stock Car Auto Racing) ovals & tracks) both had their four headlights aligned in the horizontal.

2005 Porsche 911 Turbo S (996) (left), 2016 Ford (Australia) Falcon XR8 (FG) (centre) & 2000 Ferrari 550 Maranello (right).

As the interest in aerodynamics grew and there were advances in shaping glass and plastic economically to render compound shapes, headlights ceased to be almost always circular (except in the US where, until the 1970s, protectionist legislation demanded exactly that although, before its imposition, during the 1930s the US industry had flirted with other than the round).  The German-built Ford Taunus P3 (1960-1964) and the French Citroën Ami (1961–1978) both used lights which were variants of the filleted rectangle (a rectangle with curved corners) although the effect was more exaggerated on the Taunus, which tended to be called "lozenge-shaped" while the Citroën's was merely "rectangular".  Separately, the technology had been developed by the lighting manufacturers Hella and Cibié respectively.  The demands of aesthetics however didn’t change and designers still tended to neutral or positive tilts.  Care still needed to be taken however, the derided “poached egg” shape on the 996 generation of the Porsche 911 (1997-2006) not popular with the obsessives who buy the things, their view being each update should remain as devoted to the original (1963) lines as they are themselves.  One of the closest to a recent negative tilt showed up on the Ferrari 550 Maranello (1996-2001) and the factory hasn’t repeated the experiment.

Deconstructing Lindsay Lohan

The Pinkmirror app exists to quantify one’s degree of attractiveness.  It’s wholly based on specific dimensions and thus, as a piece of math, is not influenced by skin tone although presumably its parameters are defined by the (white) western model of what constitutes attractiveness.  Users should therefore work within those limitations but the model would be adaptable, presumably not to the point of being truly cross-cultural but specific forks could certainly be created to suit any dimensional differences between ethnicities.  Using an industry standard known as the Photographic Canthal Index (PCI), one’s place on Pinkmirror’s index of attractiveness is determined by the interplay of (1) nose width, (2) bi-temporal to bi-zygomatic ratio, (3) chin length, (4) chin angle, (5) lower-lip height & (6) eye height.
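The scoring itself presumably reduces to some weighted combination of those six measurements; nothing about PinkMirror's internals is published, so the sketch below is purely hypothetical — the feature names follow the six inputs listed above but the ideal values and weights are invented for illustration only, to show the shape such a metric might take:

```python
# Purely hypothetical sketch of a weighted facial-metric score.
# Ideal values and weights are invented; only the six feature
# names come from the list of PCI inputs.
IDEALS = {
    "nose_width_ratio": 0.25,
    "bitemporal_bizygomatic_ratio": 0.85,
    "chin_length_ratio": 0.20,
    "chin_angle_deg": 110.0,
    "lower_lip_height_ratio": 0.10,
    "eye_height_ratio": 0.12,
}
WEIGHTS = {k: 1.0 for k in IDEALS}  # equal weights, for the sketch

def score(measurements):
    """Map six measurements to a 0-10 score: 10 = every feature ideal."""
    total = 0.0
    for k, ideal in IDEALS.items():
        # Relative deviation from the ideal, clipped to [0, 1].
        dev = min(abs(measurements[k] - ideal) / ideal, 1.0)
        total += WEIGHTS[k] * (1.0 - dev)
    return 10.0 * total / sum(WEIGHTS.values())

print(score(dict(IDEALS)))  # 10.0: a face hitting every ideal exactly
```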

Lindsay Lohan scored an 8.5 (out of 10), was rated as “beautiful” and found to be “very feminine, with great features of sexual dimorphism”, scoring highly in all facets except lower lip height and eye height.  Her face shape is the heart, distinguished by a broad forehead and cheekbones, the lines narrowing down to the jaw-line and culminating in a cute pointy chin.  Pinkmirror says the most attractive face shape for women has been found to be the triangle, scoring about the same as the oval, while the heart, round, diamond, rectangle and square are also attractive to a lesser degree.  Within the app, pears and oblongs are described as “not typically seen as attractive” and while the word “ugly” isn’t used, for the unfortunate pears and oblongs that would seem the implication.

Wednesday, February 17, 2021

Acephalous

Acephalous (pronounced ey-sef-uh-luhs)

(1) In zoology, a creature without a head or lacking a distinct head (applied to bivalve mollusks).

(2) In the social sciences, political science & sociology, a system of organisation in a society with no centralized authority (without a leader or ruler), where power is in some way distributed among all or some of the members of the community.

(3) In medicine, as (1) acephalia, a birth defect in which the head is wholly or substantially missing & (2), the congenital lack of a head (especially in a parasitic twin).

(4) In engineering, an internal combustion piston engine without a cylinder head.

(5) In botany, a plant having the style spring from the base, instead of from the apex (as is the case in certain ovaries).

(6) In information & communications technology (ICT), a class of hardware and software (variously headless browser, headless computer, headless server etc) assembled lacking some feature or object analogous with a “head” or “high-level” component.

(7) In prosody, deficient in the beginning, as a line of poetry that is missing its expected opening syllable.

(8) In literature, a manuscript lacking the first portion of the text.

1725-1735: From the French acéphale (the construct being acéphal(e) + -ous), from the Medieval Latin acephalus, from the Ancient Greek ἀκέφαλος (aképhalos) (headless), the construct being ἀ- (a-) (not) + κεφαλή (kephalḗ) (head), thus synchronically: a- + -cephalous.  The translingual prefix a- was from the Ancient Greek ἀ- (a-) (not, without) and in English was used to form taxonomic names indicating a lack of some feature that might be expected.  The a- prefix (with differing etymologies) was also used to form words imparting various senses.  Acephalous & acephalic are adjectives, acephalousness, acephalia & acephaly are nouns and acephalously is an adverb; the noun plural is acephali.

In biology (although often literally synonymous with “headless”), it was also used to refer to organisms where the head(s) existed only partially, thus the special use of the comparative "more acephalous" and the superlative "most acephalous", the latter also potentially misleading because it referred to extreme malformation rather than absence (which would be something wholly acephalous).  In biology, the companion terms are anencephalous (without a brain), bicephalous (having two heads), monocephalous (used in botany to describe single-headed, un-branched composite plants) & polycephalous (many-headed).

Acephalous: Lindsay Lohan “headless woman” Halloween costume.

The word’s origins were in botany and zoology, the use in political discussion in the sense of “without a leader” dating from 1751.  The Acephali (plural of acephalus) were a people, said to live in Africa, who were the product of the imagination of the writers of Antiquity, said by both the Greek historian Herodotus (circa 487-circa 425 BC) and the Romano-Jewish historian Flavius Josephus (circa 37–circa 100) to have no heads (or sometimes removable heads), and Medieval historians picked up the notion in ecclesiastical histories, describing thus (1) the Eutychians (a Christian sect in the year 482 without a leader), (2) those bishops and certain clergymen not under regular diocesan control and (3) later a class of levelers in the time of Henry I (circa 1068–1135; King 1100-1135).  The word still sometimes appears when discussing religious orders, denominations (or even entire churches) which reject the notion of a separate priesthood or a hierarchical order including ranks such as bishops, the ultimate evolution of which is popery.

Acephalousness in its age of mass production: Marie Antoinette (1755–1793; Queen Consort of France 1774-1792), kneeling next to her confessor, contemplating the guillotine on the day of her execution, 16 October 1793.  Colorized version of a line engraving with etching, 1815.

In political science, acephalous refers to societies without a leader or ruler in the Western sense of the word but it does not of necessity imply an absence of leadership or structure, just that the arrangements don’t revolve around the one ruler.  Among the best documented examples were the desert-dwelling tribes of West Africa (notably those inhabiting the Northern Territories of the Gold Coast (now Ghana)), the arrangements of which required the British colonial administrators (accustomed to the ways of India under the Raj with its Maharajas and institutionalized caste system) to adjust their methods somewhat to deal with notions such as distributed authority and collective decision making.  That said, acephalous has sometimes been used too freely.  It is inevitably misapplied when speaking of anarchist societies (except in idealized theoretical models) and often misleading if used of some notionally collectivist models which are often conventional leadership models in disguise or variations of the “dictatorship of the secretariat” typified by the early structure of Stalinism.

The Acephalous Commer TS3

A curious cul-de-sac in engineering, Commer’s acephalous TS3 diesel engine (1954-1972) was a six-piston, two-stroke design, its three cylinders in a horizontal layout, each holding two pistons with their crowns facing each other, the layout obviating any need for a cylinder head.  The base of each piston was attached to a connecting rod and a series of rockers which then attached to another connecting rod, joined to the single, centrally located crankshaft at the bottom of the block, a departure from other “opposed piston” designs, almost all of which used twin crankshafts.  The TS3 was compact, powerful and light, the power-to-weight ratio exceptional because without components such as cylinder heads, camshafts or valve gear, internal friction was low and thermal efficiency commendably high, the low fuel consumption especially notable.  In other companies, engineers were attracted to the design but accountants were sceptical and there were doubts reliability could be maintained were capacity significantly increased (the TS3 was 3.3 litres (200 cubic inch)) and when Chrysler purchased Commer in 1967, development ceased although an eight-piston prototype had performed faultlessly in extensive testing.  Production thus ceased in 1972 but although used mostly in trucks, there was also a marine version, many examples of which are still running, the operators maintaining them in service because of the reliability, power and economy (although the exhaust emissions are at the shockingly toxic levels common in the 1960s).

Acephalous information & communications technology (ICT)

A headless computer (often a headless server) is a device designed to function without the usual “head” components (monitor, mouse, keyboard) being attached.  Headless systems are usually administered remotely, typically over a network connection although some still use serial links, especially those emulating legacy systems.  Deployed to save both space and money, numerous headless computers and servers still exist although the availability of KVM (and related) hardware, which can permit even dozens of machines to be hard-wired to the one keyboard/mouse/monitor combination, has curbed their proliferation.

A headless browser is a web browser without a graphical user interface (GUI) and can thus be controlled only from a command-line interface or with a (usually automated) script, often deployed in a network environment.  Obviously not intended for consumer use, they’re ideal for distributed test environments or for automating tasks which rely on interaction between web pages.  Until methods of detection improved, headless browsers were a popular way of executing ploys such as credential stuffing, page-view building or automated clicking but there is now little to suggest they’re any more frequently used as a vector for nefarious activity than conventional browsers with a GUI attached.

Browsing for nerds: Google’s acephalous Headless Chrome.

Headless software is analogous with, but goes beyond, the concept of a headless computer in that it’s designed specifically to function without not just a GUI or monitor but even the hardware necessary to support such things (notably the video card or port).  Whereas some software will fail to load if no video support is detected, headless software proceeds regardless, either because it’s written without such parameter checking or because it includes responses which pass “false positives”, emulating the existence of the absent hardware.  Headless software operates in a specialized niche (horizontal in terms of the industries supplied but vertical in that the stuff usually exists in roles such as back-to-front-end comms on distributed servers), the advantage being that the two ends can remain static (as some do for years) while the bridge between the two remains the more maintenance-intensive application programming interface (API), an architecture affording great flexibility in the software stack.