Saturday, June 3, 2023

Biomimic

Biomimic (pronounced bahy-oh-mim-ik)

(1) A synthetic substance, material or device which mimics the formation, function, or structure of biologically produced substances & materials, biological mechanisms or processes.

(2) The act or processes involved in the creation of such substances, materials or devices.

1969: The construct was bio(logy) + mimic.  The bio- prefix was from the Ancient Greek βίο- (bío-), a combining form and stem of βίος (bíos) (life) used widely to construct forms in some way (even if in emulation) related to organic life (ie biological organisms in general).  Mimic was from the Latin mīmicus, from the Ancient Greek μιμικός (mīmikós) (belonging to mimes), from μῖμος (mîmos) (imitator, actor), the source also of the modern mime.  It was used variously to mean (1) to imitate (applied especially to acts intended to ridicule), (2) to take on the appearance of another, for protection or camouflage (originally from zoology and other biological sciences but later more widely applied) and (3) in IT systems for a range of purposes.  The alternative spelling was mimick which persisted into the nineteenth century.  Biomimic is a noun & verb, biomimicry & biomimesis are nouns, biomimetic is an adjective and biomimetically is an adverb; the noun plural is biomimics.

1955 D-Type (XKD510) with tailfin used on the tracks with unusually long straights (left), image of a great white shark (centre) and 1948 Tatra T87 II with stabilizing fin (right).

Jaguar’s experience in 1954 running the D-Type on the long Mulsanne Straight at Le Mans had proved the effectiveness of the re-designed bodywork, the cars more than 10 mph (16 km/h) faster in a straight line than the winning Ferrari, but all the drivers reported that at speeds above 160 mph (257 km/h) straight-line stability had suffered and, in the cars not fitted with a tailfin, the lateral movement could sometimes be measured in feet.  Aerodynamics at the time was still in its infancy; most attention had been devoted to reducing drag in the pursuit of speed and much of the available data was from aviation, where lift was a virtue.  It wouldn’t be until the next decade, with the advent of more widely available wind tunnels, that designers began to understand how a compromise between slipperiness and down-force could be attained and even then, the increases in speed for years outpaced the test facilities.  Jaguar’s solution was a tailfin, something which fulfilled essentially the same function as a shark’s dorsal fin; the fish’s tailfin is used for propulsion and directional change but in a car, those dynamics are handled by other means.  The purpose of a dorsal fin is to stabilize, preventing the rolling action which would otherwise be induced by movement through the water, and Jaguar’s device likewise provided stability.  The fin was enlarged in 1955 and better integrated with the bodywork.

The Czech Tatra 87 (1936-1950) is regarded as a mid-century modernist masterpiece (at least visually; its configuration proved a cul-de-sac) and one thing which always attracts attention is the tailfin, something Tatra first put on a car in 1934.  What the fin did was split and equalize the air pressure on both sides at the rear, something designed to ameliorate the behavior induced by physics, the T87 enjoying the unfortunate combination of swing-axles and a rear-mounted V8 engine.  That configuration delivered some specific advantages but also a tendency for the back end of the car to “wander a bit”.  At speed, the fin helped but didn’t eliminate the problem and if corners were approached with too much enthusiasm, the swing axles certainly swung and it wasn’t uncommon for the cars to slide off the road or even overturn.  The instability the fin was intended to tame can be emulated by a car towing a trailer at speed.  If a heavy load is placed in the front of the trailer, stability is usually good but if moved to the rear, there’s the danger of fishtailing which, if left uncorrected, can result in both car and trailer overturning.

The legend exists that such was the Tatra accident rate after the country was occupied in 1938-1939 that Germans there as part of the imposed administration were forbidden from driving the things.  A car must be truly evil for use by the SS to be declared verboten but historians have never unearthed the smoking gun of a documented order and declare it probably apocryphal although words of caution doubtless were spread.  Some versions of the story claim the order came from the Führer himself but, whatever his tendency to micromanage, that definitely seems fanciful although he was well acquainted with Tatra’s designs and their influence on the Volkswagen, the so-called “people’s car” intended to bring to Germany the mass-market automobile which the Ford Model T (1908-1927) had delivered to US society.

Biomimicry: Lindsay Lohan in leopard-print.

Humans were practicing biomimicry long before the emergence of any form of culture recognizable as a civilization; the use of animal skins or fur for warmth was an early example of what would later evolve into a technology.  Presumably, at least some of those who fashioned the early canoes and boats were influenced by the appearance of fish when choosing the shape a hull was to assume.  In architecture too nature seems to have provided inspiration and evidence exists of prehistoric structures which seem to owe something to both beehives and termite mounds although there’s obviously no extant documentation to verify the speculation.  Later architects and engineers did leave notes and natural structures including eggshells & mushrooms served as models of how strength and the volume of internal space could be optimized.  However, probably the best known of the early studies of biomimicry was the observation of birds undertaken in the age-old quest for human flight, many of Leonardo da Vinci’s (1452–1519) sketches of the physiology of both men and birds part of the research for his designs of “flying machines”.  For centuries, others would look to birds for inspiration although it wasn’t until the 1950s that the word “biomimic” began to evolve and that happened not among engineers or architects but in the biology labs; at the time, what was called “bionics” was conceived as a practical application, a synthetic emulation of natural systems, then usually referred to as “biophysics”.  In the following decade, “biomimetic” came to be preferred because it exactly represented the concept and thus the discipline of “biomimetics” was formalized: the engineering of a device, substance or material which mimics those found in the natural environment.

Northrop Grumman B-2 Spirit (Stealth Bomber) and the Peregrine Falcon.

Popular culture played a part in the evolution too.  The word “bionic” fell from academic favor because in the 1970s it was used in science fiction (SF) of sometimes dubious quality and in television programmes which were distant from what was scientifically possible.  The term biomimicry however flourished as products (such as Velcro) which owed much to models observed in the natural environment appeared with increasing frequency and the techniques came to be described as “reverse engineering”, a term later co-opted in IT to refer to the process of deconstructing a piece of compiled software in order to understand the source code which underlay the program.  Biomimicry was also of interest in the social sciences.  Although there had for more than a century been studies of the organization of animal societies including bees, ants and primates, the simultaneous rise of the economist and the power of computers to construct big models meant it came to be understood there might be financial value in such observations, beyond the academic interest of the behaviorists and psychologists.

Three models: Pop artists have often been attracted to similarities between various animals and the human form, either static or in motion, but Japanese painter & sculptor Showichi Kaneda san (b 1970) was much taken with the structural alignment between the hammerhead shark and the modern open-wheel racing car, of which the Formula One machines are the highest evolution (even if, in their present highly regulated form, about the most boring yet).

Enigma

Enigma (pronounced uh-nig-muh)

(1) A puzzling or inexplicable occurrence or situation; mysterious.

(2) A person of puzzling or contradictory character.

(3) A saying, question, picture, etc., containing a hidden meaning; riddle.

(4) A German-built enciphering machine developed for commercial use in the early 1920s and later adapted and appropriated by German and other Axis powers for military use through World War II (initial capital letter).

(5) In music, an orchestral work in fourteen parts, Variations on an Original Theme, Opus 36 (popularly known as the Enigma Variations) by Edward Elgar.

1530–1540: From the Late Latin aenigmaticus, from aenigmat-, stem of aenigma (riddle), from the Ancient Greek verbal noun αἴνιγμα (aínigma) (dark saying; speaking in riddles), the construct being ainik- (stem of ainíssesthai (to speak in riddles), derivative of aînos (fable)) + -ma, the noun suffix of result.  The sense of a "statement which conceals a hidden meaning or known thing under obscure words or forms" emerged in the 1530s although enigmate had been in use since the mid 1400s, under the influence of the Latin aenigma (riddle), the ultimate root of all being the aînos (tale, story; saying, proverb), a poetic and Ionic word, of unknown origin.  The modern sense of "anything inexplicable to an observer" is from circa 1600, the meaning also absorbing the earlier (1570s) enigmatical & enigmatically.  The derived forms are the adjectives enigmatic & enigmatical and the adverb enigmatically; enigmatic is the most frequently used.  In modern English, the plural is almost always enigmas although some writers in technical publications continue to use enigmata; the once common alternative spelling ænigma is now so rare as to be probably archaic.  An enigma is something or someone puzzling, mysterious or inexplicable although use with the older meaning (a riddle) is still seen; indeed in some contexts the words are used interchangeably.  In idiomatic use in Spain, the character of an enigmatic soul is illustrated by suggesting he’s the sort of fellow who “were one to meet him on a staircase, one wouldn’t be sure if he was going up or coming down”.  Enigma is a noun, enigmatic is an adjective and enigmatically is an adverb; the noun plural is enigmas.

Elgar’s Enigma Variations

English composer Sir Edward Elgar (1857-1934) wrote Variations on an Original Theme, Opus 36 during 1898-1899.  An orchestral work in fourteen parts, it’s almost always referred to as the Enigma Variations, the enigma being the identity of the piece of music to which the theme is linked.  Elgar famously wrote a dedication for the work "to my friends pictured within", each of the variations a sketch in musical form of some friend or acquaintance, including himself.  An enigma it remained, Elgar always secretive about the mysterious theme, and the work has always defied the attempts of musicologists and other composers to deconstruct things to the point where thematic agreement ensued although there have been theories and suggestions.

Lindsay Lohan in Enigma Magazine.

Mozart’s ‘Prague’ Symphony was one, the idea attractive because the slow movement fluctuates between G minor and G major, as does Enigma’s theme.  There were those who thought it might reference Auld Lang Syne as a veiled farewell to the nineteenth century, the variations completed in 1899.  The list went on: Twinkle, Twinkle Little Star; God Save The Queen; Martin Luther’s hymn tune Ein Feste Burg; Home, Sweet Home; Rule Britannia; the theme of the slow movement of Beethoven’s ‘Pathétique’ Sonata; various passages of scripture; Pop Goes The Weasel; a Shakespeare sonnet and, most recently added, Pergolesi’s Stabat Mater.  To add more mystery, the title "Enigma" didn’t appear on Elgar’s original score, added only after the papers had been delivered to the publisher, and despite enquiries, the nature of the enigma he declined to discuss, saying only it was a "dark saying" which “must be left un-guessed”.  His reticence didn’t discourage further questions but his answers, if not cryptic, added little and the conclusion remained the theme was a counterpoint on some well-known melody which is never heard.

A fine recording is by the London Symphony Orchestra under Adrian Boult (1889-1983), (1970; Warner Classics 764 0152).

For over a century, just which tune it might be has drawn the interest of musicians, mathematicians & madmen, for Elgar died without revealing the truth.  It’s been suggested artificial intelligence might be used to find the answer but there’s also the suspicion Elgar preferred the enigma to remain one and even if someone during his lifetime had cracked the code, he may have been disinclined to kill the mystique attached to the piece.  He had good reason to be fond of the fourteen variations.  It was the work which cemented his reputation internationally as a first-rate composer and even today, some of the popularity probably lies in the impenetrability of the riddle.

Friday, June 2, 2023

Cobalt

Cobalt (pronounced koh-bawlt)

(1) A brittle, hard, lustrous, silvery-white-gray element (a ferromagnetic metal) which is found principally in cobaltite and smaltite and is widely used (1) in the rendering of both heat-resistant and magnetic alloys, (2) in clinical oncology and (3) as a blue pigment to color ceramics, glass and other materials.

(2) As cobalt blue, a deep blue pigment derived from cobalt; zaffre.

(3) As cobalt therapy (known colloquially as the “cobalt ray”), a gamma ray treatment first used in the early 1950s in clinical oncology executed with external beam radiotherapy (teletherapy) machines using the radioisotope cobalt-60 with a half-life of 5.3 years.

1675–1685: From the German Kobalt & Kobold (a variant of kobold), from the Middle High German kobolt (household goblin), the name derived from the belief held by silver miners in the Harz Mountains that malicious goblins placed it in the silver ore, the rocks laced with arsenic and sulfur which degraded the ore and caused illness.  The construct was the Middle High German kobe (hut, shed) + holt (goblin) from hold (gracious, friendly), a euphemistic word for a troublesome being, designed to avoid offending the creature and thus inviting retribution.  It thus became part of German folk culture as an earth-elemental or nature spirit.  The metallic element, which closely resembles nickel (although it is much rarer), was extracted from this rock.  It was mentioned in the alchemy notes of Paracelsus (the Swiss physician, alchemist, lay theologian, and philosopher of the German Renaissance Theophrastus von Hohenheim (circa 1493-1541)), but as an element its discovery is credited to the Swedish chemist and mineralogist Georg Brandt (1694–1768) who in 1733 gave it the name.  Although it has since the mid-sixteenth century been used as a coloring agent for glass and ceramics, “cobalt blue” didn’t come into formal use until 1835.  There is also cobalt green (a variety of green inorganic pigment obtained by doping cobalt oxide into colorless host oxides).  Cobalt & cobaltite are nouns and cobaltic, cobaltous & cobaltesque are adjectives; the noun plural is cobalts.

Cobalt ore.

Chemical symbol: Co.
Atomic number: 27.
Atomic weight: 58.93320.
Valency: 2 or 3.
Relative density (specific gravity): 8.9.
Melting point: 1495°C (2723°F).
Boiling point: 2928°C (5302.4°F).
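For those inclined to check the arithmetic, the Fahrenheit figures above follow from the standard conversion; a minimal sketch in Python (using only the standard library):

```python
def c_to_f(celsius: float) -> float:
    """Convert degrees Celsius to degrees Fahrenheit."""
    return celsius * 9 / 5 + 32

# Cobalt's melting and boiling points, as listed above
print(c_to_f(1495))  # → 2723.0
print(c_to_f(2928))  # → 5302.4
```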


Currently, most of the world's cobalt is supplied by mines in the Democratic Republic of Congo (the DRC, the old Republic of Zaire (1971-1997)) which account for some 60% of annual production.  Because of (1) industry economics and (2) the natural geological occurrence of the minerals, cobalt typically is extracted as a by-product of copper or nickel mining operations.  Smaller-scale mining is also undertaken in Canada, Australia, Russia and the Philippines.


Bugatti Type 35 (1924-1930) in Bleu de France (Blue of France, at the time often called Bleu Racing Français (French Racing Blue)) (left) and Bugatti Veyron 16.4 Super Sport Vitesse (2012–2015) in bleu cobalt over Bleu de France (right).  The factory still offers a variety of blues including bleu cobalt.

In the early days of motorsport, cars were painted in accord with their country of origin (the corporate liveries reflecting the source of the sponsorship didn’t reach all categories until the late 1960s) and the French chose blue.  Originally it was the exact shade used on the tricolore (the national flag) but teams soon adopted various shades.  The British were allocated green which became famous as the dark shade used on the Bentleys which raced at Le Mans in the 1920s but it too was never exactly defined and over the decades lighter and darker hues were seen.  The Italians of course raced in the red best represented by Ferrari’s Rosso Corsa (Racing Red) although in the era red at least once appeared on the bodywork of the car of another nation.  The winner of the 1924 Targa Florio in Sicily was a bright red Mercedes Tipo Indy and, being German, should have been painted in their racing color of white but, noting the rocks and other items the Italian crowd was inclined to throw at any machine not finished in Rosso Corsa, the team decided subterfuge was justified and the use of white by German entrants anyway didn’t last even a decade after the victory.

In 1934, with the Mercedes-Benz and Auto Union factory teams supported by the Nazi state as a propaganda project, the Mercedes-Benz W25s appeared in silver, the bare aluminum polished rather than painted.  For decades, the story told was that after a practice session, upon being weighed, the cars were found to be a kilogram-odd over the 750 KG limit for the event and the team had to work overnight to scrape off all the carefully applied, thick white paint, the weigh-in on the morning of the race yielding a compliant 749.9.  It was a romantic tale but has since been debunked, the race in question not being run under the 750 KG rule and in the 1990s, a trove of photographs was uncovered in an archive showing the cars arriving at the track unpainted, already in bare silver.  The authorities did request the Mercedes-Benz and Auto Union teams revert to white but already motorsport’s prime directive of the 1930s was operative: "Give way to the Germans".  That race in 1934 was the debut of the “silver arrows” but it happened not quite as the legend suggested.  Even the factory now refers to the tale as "the legend".

The International Organization for Standardization (ISO) has issued a standard (ISO 11664-3:2019) which defines the technical terms and the colorimetric equations necessary for colorimetry and in that, cobalt blue has been defined as Hex triplet #0047AB; sRGB (r: 0; g: 71; b: 171) & HSV (h: 215°; s: 100%; v: 67%).  However, among manufacturers it’s often just a vague descriptor on the color chart and like many colors is treated as a spectrum with hues varying in shade and tone.  In the fashion industry there’s no attempt whatever at standardization or even consistency and the same house has been known to describe the fabric used in one range as “cobalt blue” while in another line it might be “ultramarine”, “Prussian blue”, “royal blue” or anything else which seems to suit.
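That the three notations (hex triplet, sRGB and HSV) describe the same color can be verified with Python’s standard-library colorsys module; a sketch (the rounding to whole degrees and percentages is an assumption about how the listed figures were derived):

```python
import colorsys

# Cobalt blue as listed: hex triplet #0047AB
r, g, b = (0x00, 0x47, 0xAB)          # sRGB components: (0, 71, 171)

# colorsys works on 0-1 floats and returns h, s, v on the same scale
h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)

print(round(h * 360), round(s * 100), round(v * 100))  # → 215 100 67
```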

Lindsay Lohan in cobalt blue dress at Nylon Magazine's launch of the Young Hollywood Issue, Tenjune, New York, May 2007.

The cobalt bomb is a speculative nuclear weapon, first suggested in 1950 by one of the leading physicists associated with the Manhattan Project which during World War II (1939-1945) developed the world’s first atomic bombs.  It was the implications of the cobalt bomb which first gave rise to the doomsday notion that it might be possible to build weapons which could kill all people on earth.  The device would be constructed as a thermo-nuclear weapon consisting of a hydrogen (fusion) bomb encased in cobalt which upon detonation releases large quantities of radioactive cobalt-60 into the atmosphere, from where it would be dispersed worldwide from the site of the explosion by atmospheric processes.  Because of its half-life, were the volume of the release to be sufficient, the entire planet could be affected well before radioactive decay reached the point where human (and almost all animal) life could again be sustained.  It’s believed no full-scale cobalt bomb was ever built but the British did test the concept on a tiny scale and few doubt the major nuclear weapons powers have all simulated cobalt bombs in their big computers and, awesome or awful depending on one’s world view, the thing has long been a staple in science fiction and the genre called “nuclear war porn”.
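The persistence implied by cobalt-60’s 5.3-year half-life can be put in numbers with the standard exponential-decay formula; a sketch in Python:

```python
def fraction_remaining(years: float, half_life: float = 5.3) -> float:
    """Fraction of a cobalt-60 sample still undecayed after a given time,
    per the standard decay law N(t)/N0 = 0.5 ** (t / half_life)."""
    return 0.5 ** (years / half_life)

# After one half-life, half remains; after a decade, roughly a quarter
print(fraction_remaining(5.3))              # → 0.5
print(round(fraction_remaining(10), 2))     # → 0.27
```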

The descendant of the idea was the neutron bomb which, like the cobalt device, relied for its utility on radiation rather than the initial destructive blast.  The Pentagon-funded work on the first neutron bomb was conducted under the project name “Dove” (which seems a nice touch) and the rationale was that for use in Europe, what was needed was a weapon with a relatively low blast but which produced a nasty but relatively short-lived burst of radiation, the idea being that there would be a high death-rate among an invading army but little physical damage to valuable real estate and infrastructure.

Yalta

Yalta (pronounced yawl-tuh or yahl-tuh (Russian))

(1) A seaport in the Crimea, southern Ukraine, on the Black Sea (in 2014, Moscow annexed Crimea).

(2) The second (code-name Argonaut) of the three wartime conferences between the heads of government of the UK, USA and USSR.

(3) A variant of chess played by three on a six-sided board.

From the Crimean Tatar Yalta (Я́лта (Russian & Ukrainian)), the name of the resort city on the south coast of the Crimean Peninsula, surrounded by the Black Sea.  Origin of the name is undocumented but most etymologists think it’s likely derived from the Ancient Greek yalos (safe shore), the (plausible) legend being it was named by Greek sailors looking for safe harbour in a storm.  Although inhabited since antiquity, it was called Jalita as late as the twelfth century, later becoming part of a network of Genoese trading colonies when it was known as Etalita or Galita.  The Crimea was annexed by the Russian Empire in 1783, sparking the Russo-Turkish War, 1787-1792. Prior to the annexation of the Crimea, the Crimean Greeks were moved to Mariupol in 1778; one of the villages they established nearby is also called Yalta.  Apparently unrelated are the Jewish family names Yalta & Yaltah, both said to be of Aramaic origin meaning hind or gazelle (ayala).

Yalta Chess

Yalta Conference, 1945.

Yalta chess is a three player variant of chess, inspired by the Yalta Conference (4-11 February 1945), the second of the three (Tehran; Yalta; Potsdam) summit meetings of the heads of government of the UK, US, and USSR.  The Yalta agenda included the military operations against Germany, the war in the far-east and plans for Europe's post-war reorganization.  The outcomes of the conference, which essentially defined the borders of the cold war, were controversial even at the time, critics regarding it as a demonstration of the cynical world-view of the power-realists and their system of spheres of influence.  In the seventy-five years since, a more sympathetic understanding of what was agreed, given the circumstances of the time, has emerged.

Yalta chess reflects the dynamics of the tripartite conference; three sides, allied for immediate military purposes but with very different histories, ideologies and political objectives, working sometimes in unison and forming ad-hoc table-alliances which might shift as the topics of discussion changed.  The whole proceedings of the conference are an illustration of a practical aspect of realpolitik mentioned by Lord Palmerston (1784–1865; UK Prime Minister, 1855–1858, 1859–1865) in the House of Commons on 1 March 1848: "We have no eternal allies, and we have no perpetual enemies.  Our interests are eternal and perpetual, and those interests it is our duty to follow."  

One of many chess variants (including a variety of three-player forms, circular boards and a four-player form which was once claimed to be the original chess), Yalta chess shouldn’t be confused with three-dimensional chess, a two-player game played over three orthodox boards.  In Yalta Chess, the moves are the same as orthodox chess, except:

(1) The pawns, bishops and queens have a choice of path when they are passing the centre (the pawns only if they are capturing).

(2) The queen must be put to the left of the king.

(3) The knights always move to a square of another color.

(4) All disagreements about the rules are resolved by a majority vote of the players.  It’s not possible to abstain; at the start of the match it must be agreed between the players whether a non-vote is treated as yes or no.

(5) If a player puts the player to the right in check, the player to the left may try to help him.

(6) If a player checkmates another, he may use the checkmated player’s pieces as his own (after removing the king) but a second move is not granted.

(7) If all three players are simultaneously in check, the player forcing the first check is granted checkmate.
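Rule (4)’s dispute mechanism can be sketched in a few lines of Python (the function and player names are hypothetical, purely for illustration; a non-vote is treated per the convention the players agreed before the match):

```python
def resolve_dispute(votes, nonvote_counts_as):
    """Resolve a rules dispute by majority vote of the three players (rule 4).

    votes maps each player to True (yes), False (no) or None (did not vote);
    a non-vote counts as whatever the players agreed before play began.
    """
    cast = [nonvote_counts_as if v is None else v for v in votes.values()]
    return sum(cast) >= 2  # a majority of the three players

# Two players in favor carry the question ...
print(resolve_dispute({"UK": True, "US": True, "USSR": False},
                      nonvote_counts_as=False))   # → True
# ... while a non-vote here follows the agreed convention (treated as no)
print(resolve_dispute({"UK": True, "US": None, "USSR": False},
                      nonvote_counts_as=False))   # → False
```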



Thursday, June 1, 2023

Carburetor

Carburetor (pronounced kahr-buh-rey-ter or kahr-byuh-rey-ter)

(1) A device for mixing vaporized fuel with air to produce a combustible or explosive mixture for use in the cylinder(s) or chambers of an internal-combustion engine.

(2) In the slang of drug users, a water pipe or bong; a device for mixing air with burning cannabis or cocaine (rare since the 1970s and then usually in the form “carb” or “carby”).

1866: From the verb carburate, from the Italian carburate (“to mix (air) with hydrocarbons”), an inflection of carburare & the feminine plural of carburato.  As a transitive verb carburet was used to mean “to react with carbon”.  Strangely, the exact origin of the word is uncertain but it was likely a portmanteau of carbon (in the sense of a clipping of hydrocarbon) + burette (a device for dispensing accurately measured quantities of liquid).  The construct was carb (a combined form of carbon) + -uret (an archaic suffix from the Modern Latin -uretum, used to parallel French words in -ure).  The earlier compound carburet (compound of carbon and another substance; now displaced by carbide) was from 1795 and it was used as a verb (to combine with carbon) after 1802.  The use with reference to the fuel systems used in the internal combustion engines of vehicles dates from 1896.  Carburator, carbureter and carburetter were the now obsolete earlier forms and the standard spelling in the UK, Australia & New Zealand is carburettor.  Carb & carby (carbs & carbies the plurals) are the universally used informal terms (gasifier was rare) and although most sources note the shortened forms weren’t recorded until 1942 it’s assumed by most they’d long been in oral use.  Outside of a few (declining) circles, “carb” is probably now more generally recognized as the clipping of carbohydrate.  Carburetor & carburetion are nouns; the noun plural is carburetors.

One carburetor: 1931 Supercharged Duesenberg SJ with 1 x updraft Stromberg (left; the exhaust manifold the rare 8-into-1 monel "sewer-pipe"), 1966 Ford GT40 (Mark II, 427) with 1 x downdraft Holley (centre; the exhaust headers were referred to as the "bundle of snakes") and 1960 Austin Seven (later re-named Mini 850) with 1 x sidedraft SU (right).

Except for some niches in aviation, small engines (lawnmowers, garden equipment etc) and for machines where originality is required (historic competition and restorations), carburetors are now obsolete and have been replaced by fuel-injection.  There is the odd soul who misses the challenge of tinkering with a carburetor, especially those with the rare skill to hand-tune multiple systems like the six downdraft Webers found on some pre-modern Ferraris, but modern fuel injection systems are more precise, more reliable and unaffected by the G-forces which could lead to fuel starvation.  Fuel injection also made possible the tuning of induction systems to produce lower emissions and reduced fuel consumption, the latter something which also extended engine life because all the excess petrol which used to end up contaminating the lubrication system stayed instead in the fuel tank.

Two carburetors: 1970 Triumph Stag with 2 x sidedraft Strombergs (left), 1960 Chrysler 300F with 2 x Carter downdrafts on Sonoramic cross-ram (long) manifold (centre) and 1969 Ford Boss 429 with 2 x Holley downdrafts on hi-riser manifold (right).

Until the 1920s, all but a handful of specialized devices were simple, gravity-fed units and that was because the engines they supplied were a far cry from the high-speed, high-compression things which would follow.  In the 1920s, influenced by improvements in military aviation pioneered during World War I (1914-1918), the first recognizably “modern” carburetors began to appear; the conjunction of adjustable jet metering and vacuum controls which replaced the primitive air valves and pressurized fuel supply mechanisms allowed engineers to use a more efficient “downdraft” design, replacing the “updraft” principle necessitated by the use of the gravity-feed.  Between them, the “downdraft” and “sidedraft” (a favorite of European manufacturers) would constitute the bulk of carburetor production.  The next major advance was the “duplexing” of the carburetor’s internals, doubling the number of barrels (known now variously as chokes, throats or venturi).  Although such designs could be (and sometimes were) implemented to double the capacity (analogous with the dual-core CPUs (central processing units) introduced in 2005), the greatest benefit was that they worked in conjunction with what was known as the “180° intake manifold”, essentially a bifurcation of the internals which allowed each barrel to operate independently through the segregated passages, making the delivery more efficient to the most distant cylinders, something of real significance with straight-eight engines.  Few relatively simple advances have delivered such immediate and dramatic increases in performance: when in 1934 the system was applied to the then relatively new Ford V8 (the “Flathead”), power increased by over 25%.

Three carburetors: 1967 Jaguar E-Type (XKE) 4.2 with 3 x sidedraft SUs (left), 1967 Ferrari 275 GTB/C with 3 x downdraft Webers (centre) and 1965 Pontiac GTO with 3 x downdraft Rochesters (right).

Advances however meant the demand for more fuel continued and the first solution was the most obvious: new manifolds which could accommodate two or even three carburetors depending on the configuration of the engine.  Sometimes, the multiple devices would function always in unison and sometimes a secondary unit would cut-in only on demand as engine speed rose and more fuel was needed, an idea manufacturers would perfect during the 1960s.  World War II (1939-1945) of course saw enormous advances in just about every aspect of the design of internal combustion engines (ICE) and carburetors too were improved but in a sense, the concept had plateaued and it was fuel-injection to which most attention was directed, that being something which offered real advantages in flight given it was unaffected by G-forces, atmospheric pressure or acrobatics, working as well in inverted as in level flight, something no carburetor could match.

Four carburetors: 1973 Jaguar XJ12 (S1) with 4 x sidedraft Zenith-Strombergs (left; the Jaguar V12 was unusual in that the carburetors sat outside the Vee), 1976 Aston Martin V8 with 4 x downdraft Webers (centre; Aston Martin-Lagonda originally fitted the V8 with fuel injection but it proved troublesome) and 1965 Ford GT40 (X1 Roadster 1, 289) with 4 x downdraft Webers (right, again with the "bundle of snakes" exhaust headers).

After the war, like the chip manufacturers with their multi-core CPUs in the early 2000s, the carburetor makers developed four-barrel devices.  In Europe, the preference remained for multiple single- or two-barrel units (though they tended to call the barrels “chokes”) but in the US, by the early 1950s just beginning the power race which would rage for almost two decades, the four-barrel was ideal for the increasingly large V8s, although sometimes even the largest available wasn’t enough and the most powerful engines were fitted with two four-barrels or three two-barrels.  It was in the 1950s too that fuel-injection reached road cars, appearing first in a marvelously intricate mechanical guise on the 1954 Mercedes-Benz 300 SL (W198) Gullwing.  Others understood the advantages and developed their own fuel-injection systems, both mechanical and electronic, but while both worked well, the early electronics were too fragile to be used in such a harsh environment and these attempts were quickly abandoned, not revisited until the revolution in integrated circuits (ICs) later in the century.  Mechanical fuel-injection, while it worked well, was expensive and never suitable for the mass market; even Mercedes-Benz reserved it for their more expensive models, most of the range relying on one or two carburetors.  In the US, Chevrolet persisted with mechanical fuel injection but availability dwindled until only the Corvette offered the option and in 1965, when the big-block engines which offered more power at half the cost became available, demand collapsed and the system was discontinued, the big engines fed either by three two-barrels or one very large four-barrel.

Six carburetors: 1979 Honda CBX with 6 x sidedraft Keihins (left), 1965 Lamborghini P400 Miura (prototype chassis) with 6 x downdraft Webers (centre) and 1970 Ferrari 365GTB/4 (Daytona) with 6 x downdraft Webers (right).

It was the development of these big four barrels which in the US reduced the place of the multiple systems to a niche reserved for some specialist machines and even the engineers admitted that for what most people did, most of the time, the multiple setups offered no advantage.  The research did however indicate they were still a selling point and because people were still prepared to pay, they stayed on the option list.  There were a handful of engines which actually needed the additional equipment to deliver maximum power but they were rare, racing derived units and constituted not even 1% of Detroit’s annual production.  Paradoxically, the main advantage of the multiple setups was economy, a six-barrel (ie 3 x two-barrel) engine running only on its central carburetor unless the throttle was pushed open.  As it was, the last of Detroit’s three-carb setups was sold in 1971, the configuration unable easily to be engineered to meet the increasingly onerous exhaust emission rules.

Eight carburetors: 1955 Moto Guzzi 500 cm³ Ottocilindri V8 Grand Prix motorcycle with 8 x Dell'Ortos.  One carburetor per cylinder was long common practice in motorcycle design and the 1959 Daimler V8, designed along the lines of a motorcycle power-plant, was originally intended to be air-cooled and run 8 carburetors.  The production version was water-cooled and used 2 x sidedraft SUs.

Lindsay Lohan admiring Herbie’s carburetors (Herbie: Fully Loaded (2005)).

Nudiustertian

Nudiustertian (pronounced noo-dee-uhs-tur-shuhn or nyoo-dee-uhs-tur-shuhn)

Of or relating to the day before yesterday (obsolete).

1647: From the Latin nudius tertius, formed from the phrase nunc dies tertius est (literally “today is the third day”).  It was coined by the author Nathaniel Ward (1578–1652) and used in his book The Simple Cobler of Aggawam in America (1647).  Nudiustertian is an adjective and no other forms seem to have evolved although the noun nudiustertianist would presumably be a slang term for one who "lives in the past" and the noun nudiustertianism would be the movement which advocates that lifestyle choice.

Words long and short

Depending on the extent of one’s pedantry, English contains probably between a quarter and three-quarters of a million words but only a few thousand could be said to be in common use.  English speakers have been so fickle that whether a word survives, even if only as a rare and obscure thing, or becomes obsolete, seems random.  Constructions can of course be specific to a time, place or personality and words like Lohanic (something of or pertaining to Lindsay Lohan), Lohanistic & Lohanesque (something in the style of Lindsay Lohan) or Lohannery (a behavior ascribed to or associated with Lindsay Lohan) may end up stranded in their era whereas Orwellian (pertaining to the ideas discussed in certain novels by George Orwell (1903-1950)) will probably endure because the concepts involved transcend the people or events associated with their coining.

Puritan lawyer and clergyman Nathaniel Ward (1578–1652), author, inter alia, of the first (1641) constitution in North America, coined nudiustertian and it quickly went extinct, despite being a handy five-syllable substitute for the phrase “the day before yesterday” which demanded a brace of extra syllables.  He also invented the evidently self-referential nugiperous (given to inventing useless things, from the Latin nugae (nonsense or foolishness)) which suffered the same fate.  Yet penultimate (from the Latin paenultimus, the construct being paene (almost) + ultimus (last)) survived and flourished despite needing an additional syllable compared with the punchy “second last”.  Even the once more popular “last but one” was more economical, as was the more modern creation “next to last”, but penultimate kept its niche.  People must like the way it rolls off the tongue.

Penultimate must then have occupied a linguistic sweet-spot because antepenultimate (last but two), preantepenultimate (last but three) and propreantepenultimate (last but four) are essentially unknown.  Also long extinct are hesternal (from the Latin hesternus (of or pertaining to yesterday)), hodiernal (from the Latin hodiernus (today, present)), ereyesterday (from the Old English ere (before) + yesterday) and overmorrow (from the Middle English overmorwe, from the Old English ofermorgen (on the day after tomorrow)).

Wednesday, May 31, 2023

Context

Context (pronounced kon-tekst)

(1) In structural linguistics, the factors which may define or help disclose the meaning or effect of a written or spoken statement including (1) the words preceding or following a specific word or passage, (2) the position of the author, (3) the identity of the author, (4) the intended audience, (5) the time and place in which the words were delivered and (6) such other circumstances as may be relevant.

(2) The surroundings, circumstances, environment, background or settings that might determine, specify or clarify the meaning of an event or other occurrence.

(3) In mycology, the fleshy fibrous body (trama) of the pileus in mushrooms.

(4) In Novell’s Netware network operating system, an element of Directory Services (the hierarchical structure used to organize and manage network resources), one’s context being a specific level within the directory tree.

(5) To knit or closely bind; to interweave (obsolete).

(6) In archaeology and anthropology, the surroundings and environment in which an artifact is found and which may provide important clues about the artifact's function, age, purpose, cultural meaning etc.

(7) In formal logic (for a formula), a finite set of variables, which set contains all the free variables in the given formula.

1375–1425: From the late Middle English context (a composition, a chronicle, the entire text of a writing), from (and originally the past participle of) the Latin contextus (a joining together, scheme, structure), the construct being contex(ere) (to join by weaving; to interweave) + -tus (the suffix of a verb of action).  The construct of contexere was con- + texere (to plait or braid, to weave), from the primitive Indo-European root teks (to weave; to build; to fabricate).  The prefix con- was from the Middle English con-, from the Latin con-, from the preposition cum (with), from the Old Latin com, from the Proto-Italic kom, from the primitive Indo-European ḱóm (next to, at, with, along).  It was cognate with the Proto-Germanic ga- (co-), the Proto-Slavic sъ(n) (with) and the Proto-Germanic hansō.  It was used with certain words to add a notion similar to those conveyed by with, together, or joint or with certain words to intensify their meaning.  The verb contex (to weave together) was known as early as the 1540s and was also from the Latin contexere; it was obsolete by the early eighteenth century.

The meaning "the parts of a writing or discourse which precede or follow, and are directly connected with, some other part referred to or quoted" developed in the mid-late sixteenth century.  The adjective contextual (pertaining to, dealing with the context) dates from 1822, on the model of textual, and the phrase “contextual definition” appeared first in works of philosophy in 1873.  Contextualization from 1930 & contextualize from 1934 were both products of academic writing.  Many of the derivations (acontextual, contextual criticism, contextual inquiry, contextualist, contextuality, contextualize, metacontextual, non-contextual, sub-contextual) are associated with academic disciplines such as linguistics and anthropology but, predictably, the verb decontextualize (study or treat something in isolation from its context) emerged in 1971 and came from postmodernism where it found a home, along with the inevitable decontextualized, decontextualizing & decontextualization.  Context is a noun, verb & adjective, contextual & contextualistic are adjectives, contextualism, contextuality & contextualization are nouns, contexture is a noun & verb, contextualist is a noun & adjective, contextualize, contextualizing & contextualized are verbs and contextualistically & contextually are adverbs; the noun plural is contexts.

Contextual truth

In defamation law, “contextual truth” describes one of the defences available to a defendant (ie the party accused of defaming the applicant).  It’s an unusual aspect of defamation law (and there are others) in that it acknowledges certain statements may literally be false yet still convey a broader truth or accurate meaning when considered in the context in which they were made, or in the context of other statements (dealing usually with more serious matters) which were part of the case.  Although there have been reforms in many jurisdictions, as a general principle, defamation happens if statements found to be false have harmed the reputation of an individual or entity (although in some places, including some with respectable legal systems, it’s possible to defame with the truth).  Typically though, successfully to establish a claim of defamation, a plaintiff needs to prove (1) a statement was false, (2) that it was published or communicated to a third party and (3) that the plaintiff suffered harm as a consequence.  The defence of contextual truth essentially “runs on top” of the traditional rules in that while some (or, in legal theory, even all) of the specific details of a statement may be factually incorrect, when considered in context they can be found to convey an underlying truth.

For example, if someone publishes an article stating that a public figure was involved in a scandalous incident, and it later emerges that some of the specific details in the article were incorrect, the defendant might argue contextual truth.  They may claim that while the specific details were inaccurate, the overall implication of wrongdoing or impropriety by the public figure was true or substantially true.  Successfully to invoke the defence, a defendant must demonstrate the impression conveyed by the statement was substantially accurate, even if specific details were incorrect, and this often takes the form of arguing the statement alleged to be defamatory was not intended as a recounting of specific facts but rather as a representation of a larger truth.  Despite the terminology, the defences of justification and partial justification really don’t sit on a continuum with contextual truth, which demands that at least one of the imputations complained of be substantially true and that, in light of the substantial truth of those imputations, the remainder of the imputations complained of do no further harm to the plaintiff’s reputation.  Like justification, contextual truth can be a complete defence to a claim and is often invoked where other statements being considered allege conduct much more likely to damage a reputation.

Pronunciation can of course be political so therefore can be contextual.  Depending on what one’s trying to achieve, how one chooses to pronounce words can vary according to time, place, platform or audience.  Some still not wholly explained variations in Lindsay Lohan’s accent were noted circa 2016 and the newest addition to the planet’s tongues (Lohanese or Lilohan) was thought by most to lie somewhere between Moscow and the Mediterranean, possibly via Prague.  It had a notable inflection range and the speed of delivery varied with the moment.  Psychologist Wojciech Kulesza of SWPS University of Social Sciences and Humanities in Poland identified context as the crucial element.  Dr Kulesza studies the social motives behind various forms of verbal mimicry (including accent, rhythm & tone) and he called the phenomenon the “echo effect”, the tendency, habit or technique of emulating the vocal patterns of one’s conversational partners.  He analysed clips of Lilohan and noted a correlation between the nuances of the accent adopted and those of the person with whom Ms Lohan was speaking.  Psychologists explain the various instances of imitative behaviour (conscious or not) as one of the building blocks of “social capital”, a means of bonding with others, something which seems to be inherent in human nature.  It’s known also as the “chameleon effect”, the instinctive tendency to mirror behaviors perceived in others and it’s observed also in politicians although their motives are entirely those of cynical self-interest, crooked Hillary Clinton’s adoption of a “southern drawl” when speaking in a church south of the Mason-Dixon Line a notorious example.

Memo: Team Douglas Productions, 29 July 2004.

Also of interest is the pronunciation of “Lohan” although this seems to be decided by something more random than context, though it’s not clear what.  Early in 2022, marking her first post to TikTok, she pronounced her name lo-en (ie rhyming with “Bowen”) but to a generation brought up on lo-han it must have been a syllable too far because it didn’t catch on and by early 2023, she was back to lo-han with the hard “h”.  It’s an Irish name and according to the most popular genealogy sites, in Ireland, universally it’s lo-han so hopefully that’s the last word.  However, the brief flirtation with phonetic H-lessness did have a precedent: when Herbie: Fully Loaded (2005) was being filmed in 2004, the production company circulated a memo to the crew informing all that Lohan was pronounced “Lo-en like Co-en” with a silent “h”.