Wednesday, June 16, 2021

That Time a Farmer was Given Ultimate Power Twice and Changed the World Forever By Walking Away Both Times

The question of what a political leader in a democracy does after his term has ended, and the merits of gracefully stepping away from power, has been in the news recently.

Enter the subject of today’s story, which takes place in ancient Rome at the dawn of the Republic. The person in question was Cincinnatus, whose political ethics not only shaped the political life of generations to come but became so linked with the essence of democratic thinking that the founders of the American nation dubbed Washington a new Cincinnatus. So who was Cincinnatus, and what made him so unusual compared to the vast majority of political leaders throughout history?

Lucius Quinctius Cincinnatus was born into the noble house of the Quinctii, possibly around 519 BC, during the last years of the Kingdom of Rome. This means he belonged to the first generation raised within the just recently established grand experiment that was the Roman Republic.

In the 460s BC, Rome was in turmoil, with the main issue being the representation in government of the plebeians – those of its citizens not born to noble families. In one of the violent clashes, one of the two serving consuls, Publius Valerius Publicola, was killed. Cincinnatus rose to his position as replacement via a system vaguely similar to how a vice president can replace the president in the United States.

Cincinnatus therefore served a term in the highest political office in Rome. Ultimately, however, rather than trying to cling to power like so many others, he chose to return to his private life. This was, to say the least, unusual for various reasons. For one thing, he did not step down because he was fed up with politics. Far from it: he was highly opinionated on the issues of his day, with a strong stance against the plebeian demands for constitutional changes that would allow them to circumscribe the decisions of the consuls.

Furthermore, he was in a very difficult financial situation on account of his son Caeso, who – after causing political turmoil and violence – fled the city before the court had reached a verdict. In the end, Cincinnatus had to pay a rather large fine in his stead, forcing him to sell his estate and live instead on a small farm across the Tiber (possibly around today’s Trastevere district of Rome). Thus, by stepping away he not only gave up incredible power, but also returned not to the life of the wealthy noble he had been before his term in office, but to that of a simple farmer.

While none of this advanced his personal fortunes, his choice not to use his term as consul as a means to broaden his political career, repair his finances, or even recall the son whom the Republic had condemned earned him the respect of his fellow Romans.

But the story of Cincinnatus was just beginning. Two years later, around 458 BC, Rome was once more in peril, as the army of the neighbouring Aequi pushed towards Rome, trapping one consular army while the other was far from the action.

To respond to this imminent threat, the senate decided to appoint a dictator, at that time a title granted to a person who would hold king-like powers for a fixed term of six months, after which power would be returned to the senate. This enabled the appointed dictator to act swiftly, without asking for permission or waiting for the conclusion of further – and often extended – senatorial debates.

Naturally the person chosen for this role had to be not only eminently capable, but also trusted to actually step away when the term was finished. Thus, for this role, the senate chose Cincinnatus.

The historian Livy illustrates the scene. A group of senators approached the farm where Cincinnatus was working. He greeted them and asked if everything was in order. “It might turn out well for both you and your country,” they replied, and asked him to wear his senatorial toga before they spoke further. After he donned the garb of the office, they informed him of the senate’s mandate, hailed him as dictator and took him with them back to Rome.

Cincinnatus then got right to work mobilising the army, defeated the enemy at the Battle of Mount Algidus and returned victorious to Rome – all this in the span of about two weeks.

After this huge success, every political opportunity could have been open to him, especially as he was constitutionally allowed to stay in power for five and a half more months. Despite this, upon his return he immediately abdicated and returned to his farm. The task at hand was complete, and so he saw no reason why power should not be returned to the senate.

Twice he could have used his position for his own gain, and twice he had not only chosen not to, but stepped away when his work was complete. But this isn’t the end of Cincinnatus’ tale.

Nineteen years later, in 439 BC, Cincinnatus was around 80 years old when he was once again asked to become dictator, this time to deal with internal political intrigue, as a certain Maelius was using his wealth to try to make himself king – the ultimate threat to any republic. The episode ended with the death of the would-be king, and again, his work done, Cincinnatus resigned, this time after having served less than a month as dictator.

As you might expect from all of this, these practically unprecedented actions by a leader granted ultimate power made his name synonymous with civic virtue, humility, and modesty, and they serve as an enduring example of putting the greater good first.

 

To understand the importance of these actions one needs to zoom out and evaluate the time period in which they happened.

At the time, the republic was a novelty in world history – to outsiders, not necessarily different from a strange type of oligarchy. Furthermore, except for some initial pushback from the Etruscans directly after the founding of the Republic, the system, under which the city governed itself, had not really been put to the test. It would have been completely understandable if, given the first opportunity, the city had turned back to a typical king-like government. A charismatic leader like the incredibly popular Cincinnatus could easily have been the catalyst for a return to the era of kings, had he been inclined to take the power. Yet he chose not to, even after being granted ultimate authority twice.

This was crucial, as these events happened during the second generation of the Republic, and it was the deeds of the second and third generations after its founding that truly solidified the belief in, and generational tradition of, a system which would come to be one of the most influential in human history. One can easily see how, had Cincinnatus chosen to exploit his position and popularity as the vast majority of world leaders throughout history have done, history as we know it might have been vastly different.


Bonus Facts:

Cincinnatus as a role model had many imitators throughout time – some more successful than others.

Sulla

Continuing with Rome, during the Late Republic the general and politician Sulla was, let’s say… controversial. You know that retired authoritarian Navy SEAL commander from any given action movie? Well, multiply him by ten, add some crazy slaughtering frenzies, and there you have it. However, in 79 BC, after imposing order on the Roman state, and having been dictator since 81 BC, he resigned.

His supporters would like to compare this to Cincinnatus, but it was a rather different situation, seeing as he did not step down to resume a simple life, but rather to write his memoirs in a fancy resort. Plutarch states that he retired to a life of luxury, where “he consorted with actresses, harpists, and theatrical people, drinking with them on couches all day long.” So rather than a return to a simple life, this was more of a retirement package filled with partying and bliss, without the cares, intrigue, and dangers that come with being dictator of Rome.

In a further contrast, his reforms did not ultimately make the impact he had hoped for, and their results were completely overturned after his death, with the Empire being founded just a few decades later.

Diocletian

Another controversial Roman leader – this time from the not-so-brand-new imperial era – is Diocletian. Ruling as emperor from 284 to 305 AD, Diocletian achieved what few did during the so-called ‘Crisis of the Third Century’: he not only survived long enough to establish political reforms, but actually managed to stabilize the empire for the time being. In 305, he did what no Roman emperor had done before: he abdicated voluntarily and retreated to his palace on the Dalmatian coast – now the historic core of Split in modern-day Croatia – where he famously tended to his vegetable gardens.

Diocletian’s tetrarchy – the splitting of the empire among four rulers – did not even last until his death in 311 AD, collapsing into renewed chaos, and in 308 he was asked to return to power to help fix it. To this, he replied, “If you could show the cabbage that I planted with my own hands to your emperor, he definitely wouldn’t dare suggest that I replace the peace and happiness of this place with the storms of a never-satisfied greed.”

While at first this may seem like the perfect comparison to Cincinnatus, it should also be noted that the reason for his retirement was first and foremost Diocletian’s failing health and his wish to live out his last days peacefully rather than dealing with the political intrigue of the day. In fact, in contrast to Cincinnatus, Diocletian’s attitude can be seen more as abandoning the empire in a time of great need – something even the 80-year-old Cincinnatus was unwilling to do.

George Washington

Skipping ahead hundreds of years and a vast number of changes of government in the Old World, the American nation burst onto the world scene. One of its most peculiar characteristics was the idea of blending republic and democracy, with a small hint of dictatorship thrown in, all carefully balanced to try to produce a system of government combining the best of human governing systems while mitigating their downsides. Today it might seem trivial, but at the time, with very few exceptions – like, say, the Netherlands – Western countries had a king as figurehead, with varying degrees of authority, even in cases where parliamentarism had a long tradition, as was the case in England.

For many, this experiment of reviving a political system based on ancient Rome seemed weird, even eccentric. One of the many concerns was the stability of the system. Would Washington – Commander-in-Chief of the Continental Army and vastly popular with the general public and politicians alike – step down after victory?

Well, no. No, of course he wouldn’t, he would become a king or something amounting to the same position, just using a different title and… what? He… he actually left office? But wasn’t he very popular?

Yes. Yes, he was. And paralleling Cincinnatus, he left office because he respected the constitution and the experiment that was this new form of government, a fact that demonstrated – among other qualities – civic virtue and modesty of character.

In a final appearance in uniform he gave a statement to Congress: “I consider it an indispensable duty to close this last solemn act of my official life, by commending the interests of our dearest country to the protection of Almighty God, and those who have the superintendence of them, to his holy keeping.”

It is difficult to imagine today, but stepping down after his presidential term was a sensation. See the counterexample of, say, Napoleon crowning himself emperor, or of other personalities who would do anything to remain in power. Washington’s resignation was acclaimed at home and abroad, and showed a skeptical world that the new republic might just not degenerate into chaos or into something completely different and more familiar to the world at the time.

The parallels with Cincinnatus are obvious and were drawn even then. After the fact, a society of veterans of the American Revolutionary War, the Society of the Cincinnati, was founded, with the motto Omnia relinquit servare rempublicam (“He relinquished everything to save the republic”). The first major city to be founded after the war was then aptly named Cincinnati, which is the genitive form of Cincinnatus, meaning ‘belonging to / that of Cincinnatus’.

References

https://en.wikipedia.org/wiki/Lucius_Quinctius_Cincinnatus

https://en.wikipedia.org/wiki/Sulla

https://ift.tt/2SEf63D

https://ift.tt/3xvsGF4

Chernow, Ron (2010). Washington: A Life

Bringmann, Klaus (2007). A History of the Roman Republic

The History of Rome podcast, part 7, https://thehistoryofrome.typepad.com/the_history_of_rome/page/5/





Is it pronounced “Jif” or “Gif”?

It is the single most profound question of the 21st Century, a debate which has dominated intellectual discourse for more than three decades. Some of the greatest minds and institutions in the world have weighed in on the issue, from top linguists and tech giants to the Oxford English Dictionary and even the President of the United States. Yet despite 30 years of fierce debate, controversy, and division, we are still no closer to a definitive answer: is it pronounced “gif” or “jif’?

At its face, the answer might seem rather straightforward. After all, the acronym G-I-F stands for Graphics Interchange Format. “Graphics” has a hard G, so G-I-F must be pronounced “ghif.” Case closed, right? Well, not quite. As is often the case, things aren’t nearly as simple as they might appear.

The Graphics Interchange Format was first introduced in June of 1987 by programmer Steve Wilhite of the online service provider CompuServe. The format’s ability to support short, looping animations made it extremely popular on the early internet, and this popularity would only grow over the next two decades, with the Oxford English Dictionary declaring it their ‘Word of the Year’ in 2012.

As its creator, Wilhite should be the first and final authority on the word’s pronunciation. So how does he think we should say it?

“Jif.”

Yes, that’s right: despite all arguments to the contrary, the creator of everyone’s favourite embeddable animation format insists that it is pronounced with a soft G. According to Wilhite, the word is a deliberate reference to the popular peanut butter brand Jif; indeed, he and his colleagues were often heard quipping “choosy developers choose JIF” – a riff on the brand’s famous slogan “choosy mothers choose JIF.” And he has stuck to his guns ever since. When presented with a Lifetime Achievement Award at the 2013 Webby Awards, Wilhite used his 5-word acceptance speech – presented, naturally, in the form of an animation – to declare: “It’s pronounced ‘jif,’ not ‘gif.’”

In a subsequent interview with the New York Times, Wilhite reiterated his stance: “The Oxford English Dictionary accepts both pronunciations. They are wrong. It is a soft ‘G,’ pronounced ‘jif.’ End of story.”

While the debate should have ended there, language is a strange and fickle thing, and despite Wilhite’s assertions a large segment of the population continues to insist that the hard “G” pronunciation is, in fact, the correct one. In 2020 the programmer forum StackExchange conducted a survey of more than 64,000 developers in 200 countries, asking how they pronounce the acronym. A full 65% backed the hard G and 26% the soft G, with the remainder spelling out each individual letter – “G-I-F.” This seems to agree with a smaller survey of 1,000 Americans conducted by eBay Deals in 2014, in which the hard G beat the soft G 54% to 41%. However, as The Economist points out, people often base their pronunciation of new or unfamiliar words on that of similar existing words, and the prevalence of the hard or soft G varies widely from language to language. For example, Spanish and Finnish have almost no native soft G words, while Arabic almost exclusively uses soft Gs. Those in countries that predominantly use hard Gs make up around 45% of the world’s population and around 79% of the StackExchange survey respondents. Nonetheless, even when these differences are corrected for, the hard G still narrowly beats out the soft G, 44% to 32%.

In the wake of Wilhite’s Webby Award acceptance speech, many prominent figures and organizations have publicly come out in favour of the hard-G pronunciation. In April 2013 the White House launched its Tumblr account with a graphic boldly announcing that its content would include “Animated GIFs (Hard G),” while during a 2014 meeting with Tumblr CEO David Karp, U.S. President Barack Obama threw his hat into the ring, declaring: “[It’s pronounced GIF.] I’m all on top of it. That is my official position. I’ve pondered it a long time.”

Many backers of the hard-G pronunciation, like web designer Dan Cederholm, focus on the pronunciation of the acronym’s component words, with Cederholm tweeting in 2013: “Graphics Interchange Format. Graphics. Not Jraphics. #GIF #hardg”

However, this argument ignores the many other instances in which the pronunciation of an acronym does not line up with that of its component words. For example, while the A in “ATM” and “NATO” stands for “Automatic” and “Atlantic,” respectively, we do not pronounce them as “Awe-TM” or “Nah-tow.” Many also point out that there already exist words such as “jiffy” in which the same sound is produced using a J, but this too ignores exceptions such as the now-archaic spelling G-A-O-L for “jail.”

So if common sense and everyday usage can’t settle the debate, then how about the rules of the English language? As noted by the good folks at Daily Writing Tips, words in which the G is followed by an e, i, or y – like giant, gem, or gym – are more often than not pronounced with a soft G, while all others are pronounced with a hard G. According to this rule, then, “G-I-F” should be pronounced the way Steve Wilhite originally intended: as “jif.” However, there are many, many exceptions to this rule, such as gift, give, anger, or margarine. In an attempt to clear up the matter, in 2020 linguist Michael Dow of the University of Montreal conducted a survey of all English words containing the letters “G-I,” grouping them according to pronunciation. The results seemed to indicate that the soft G is indeed more common, as many claim, with about 65% of such words using this pronunciation rather than the hard G. However, one thing this argument misses is that many of these soft-G words, like elegiac, flibbertigibbet, and excogitate, are rarely used in everyday communication. When the actual frequency of a word’s use is corrected for, the numbers of commonly used hard and soft-G words become about equal.
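To make that “corrected for frequency” step concrete, here is a toy sketch of the idea. The words are real, but the per-million usage figures are invented stand-ins rather than Dow’s actual data; the point is simply that counting word types can give one answer while weighting each word by how often it is actually used can give another.

# Toy illustration of frequency weighting; the usage figures below are hypothetical.
gi_words = {
    # word: (pronunciation of the G, rough uses per million words -- invented numbers)
    "give":       ("hard", 300),
    "begin":      ("hard", 200),
    "gift":       ("hard", 60),
    "engine":     ("soft", 220),
    "region":     ("soft", 200),
    "magic":      ("soft", 90),
    "giant":      ("soft", 40),
    "elegiac":    ("soft", 0.1),
    "excogitate": ("soft", 0.01),
}

type_counts = {"hard": 0, "soft": 0}
usage_weighted = {"hard": 0.0, "soft": 0.0}
for pronunciation, per_million in gi_words.values():
    type_counts[pronunciation] += 1          # every word counts once
    usage_weighted[pronunciation] += per_million  # common words count more

print("counting word types:", type_counts)      # soft G leads 6 to 3
print("weighting by usage: ", usage_weighted)   # roughly even once rare words stop counting extra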

The fundamental problem with such rules-based approaches is that, unlike many other languages, English evolved rather chaotically, without the guidance of a central regulatory authority like the Académie Française. Consequently, English has little in the way of a consistent set of pronunciation rules, and the pronunciation of any given word depends largely on its specific etymology, common usage, or even the geographic region where it is spoken. Thus, as far as the gif/jif debate is concerned, the linguistic jury is still very much out.

But of course, it wouldn’t be America without a major corporation weighing in on the issue. On May 22, 2013, shortly after Steve Wilhite received his Webby Award, Jif brand peanut butter took to Twitter with a post reading simply: “It’s pronounced Jif® .”

Seven years later, the brand teamed up with gif website GIPHY to release a limited-edition peanut-butter jar labeled “GIF” instead of “JIF.” In an interview with Business Insider, Christine Hoffman explained: “We think now is the time to declare, once and for all, that the word of Jif should be used exclusively in reference to our delicious peanut butter, and the clever, funny animated GIFs we all use and love should be pronounced with a hard ‘G.’”

Alex Chung, founder and CEO of Giphy, agreed, stating in a press release: “At Giphy, we know there’s only one ‘Jif’ and it’s peanut butter. If you’re a soft G, please visit Jif.com. If you’re a hard G, thank you, we know you’re right.”

 Yet despite such efforts to force a consensus, the debate continues to rage and shows no signs of stopping anytime soon. While deferring to Steve Wilhite’s originally-intended pronunciation might seem like the most logical solution, that just isn’t how language works – as John Simpson, Chief Editor of the Oxford English Dictionary, explains: “The pronunciation with a hard g is now very widespread and readily understood. A coiner effectively loses control of a word once it’s out there.”

 As evidence, Simpson cites the example of “quark,” a type of subatomic particle. The word, derived from a passage in James Joyce’s 1939 novel Finnegans Wake, was coined in 1963 by physicist Murray Gell-Mann and originally rhymed with “Mark.” Over the years, however, the word evolved and is today pronounced more like “cork.”

Closer to the web, the creator of the world’s first wiki, WikiWikiWeb, Howard G. “Ward” Cunningham, also pronounced that word differently than most people do today. As for the inspiration for the name, during a trip to Hawaii, Cunningham was informed by an airport employee that he needed to take the “wiki wiki” bus between the airport’s terminals. Not understanding what the person was telling him, he inquired further and found out that “wiki” means “quick” in Hawaiian; repeating the word adds emphasis, so “wiki wiki” means “very quick”.

Later, Cunningham was looking for a suitable name for his new web platform. He wanted something unique, as he wasn’t copying any existing medium, so something simple like the way email was named after “mail” wouldn’t work. He eventually settled on wanting to call it something to the effect of “quick web”, modelled after Microsoft’s QuickBASIC. But he didn’t like the sound of that, so he substituted the Hawaiian “wiki wiki” for “quick”, using the doubled form as it seemed to fit; as he stated, “…doublings in my application are formatting clues: double carriage return = new paragraph; double single quote = italic; double capitalized word = hyperlink.”  The program was also extremely quick, so the “very quick” doubling worked in that sense as well.
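For the curious, here is a minimal sketch of how those three doubling conventions might be applied as formatting rules. It is purely illustrative – this is not Cunningham’s original cgi script, and the exact pattern matching and HTML output are assumptions made for the example.

import re

def wikiwiki_format(text: str) -> str:
    """Toy sketch of the three 'doubling' conventions described above (not the real script)."""
    # Double capitalized word (CamelCase) = hyperlink
    text = re.sub(r"\b([A-Z][a-z]+(?:[A-Z][a-z]+)+)\b", r'<a href="/wiki/\1">\1</a>', text)
    # Double single quote = italic
    text = re.sub(r"''(.+?)''", r"<i>\1</i>", text)
    # Double carriage return (blank line) = new paragraph
    paragraphs = re.split(r"\n\s*\n", text)
    return "\n".join(f"<p>{p.strip()}</p>" for p in paragraphs if p.strip())

print(wikiwiki_format("The WikiWikiWeb was ''very quick''.\n\nIt ran as a single cgi script."))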

The shorter version of the name – calling a wiki just “wiki” instead of “Wiki Wiki” – came about because Cunningham’s first implementation of WikiWikiWeb named the original cgi script “wiki”, all lowercase and abbreviated in the standard Unix fashion. Thus, the first wiki URL was http://c2.com/cgi/wiki. People latched on to this and simply called it a “wiki” instead of a “Wiki Wiki”.

So how was wiki originally pronounced? “We-key”, rather than the way most people pronounce it today, “wick-ee”. However, given the popularity of the mispronunciation – much as “gif” is now popularly pronounced differently than its creator intended – Cunningham and others have long since stopped trying to correct people on how to say it.

Going back to gif vs. jif: in the end, the choice is entirely a matter of personal preference. As with all language, and as many a linguist will tell you, how you use a word ultimately doesn’t matter as long as you are understood – and few are going to get confused on this one. But if you’d like to pronounce it the way its creator intended, go with “jif”; if you’d rather follow the crowd like sheep, go with “gif”.


References

Locker, Melissa, Here’s a Timeline of the Debate About How to Pronounce GIF, Time Magazine, February 26, 2020, https://ift.tt/2VpDmW0

Biron, Bethany, Jif is Rolling Out a Limited-Edition Peanut Butter to Settle the Debate Over the Pronunciation of ‘GIF’ Once and For All, Business Insider, February 25, 2020, https://www.businessinsider.com/jif-campaign-settle-debate-pronunciation-of-gif-2020-2

Gross, Doug, It’s Settled! Creator Tells Us How to Pronounce ‘GIF,’ CNN Business, May 22, 2013, https://www.cnn.com/2013/05/22/tech/web/pronounce-gif/index.html

GIF Pronunciation: Why Hard (G) Logic Doesn’t Rule, Jemully Media, https://jemully.com/gif-pronunciation-hard-g-logic-doesnt-rule/

Nicks, Denver, WATCH: Obama Takes a Stand in the Great GIF Wars, Time, June 13, 2014, https://time.com/2871272/obama-tumblr-gif-wars/

McCulloch, Gretchen, Why the Pronunciation of GIF Really Can Go Either Way, WIRED, October 5, 2015,

Belanger, Lydia, How Do You Pronounce GIF? It Depends on Where You Live, Entrepreneur, June 20, 2017, https://www.entrepreneur.com/article/296674

Webb, Tiger, Is it Pronounced GIF or JIF? And Why Do We Care? ABC Radio National, August 9, 2018, https://www.abc.net.au/news/2018-08-10/is-it-pronounced-gif-or-jif/10102374





Scamming Pan Am

Being an early adopter can be a risky proposition, especially for a large company. On the one hand, no company wants to fall behind as its competitors take full advantage of a new game-changing technology. On the other hand, many seemingly revolutionary developments ultimately turn out to be nothing but overhyped fads, leaving early adopters saddled with expensive white elephants. This was the dilemma facing America’s airlines and aircraft manufacturers in the early 1950s as they debated whether to embrace the futuristic new technology of jet propulsion. While jets promised unheard-of speed, passenger comfort, and reliability, there were many reasons to be skeptical. For one thing, aircraft manufacturers were still tooled up to produce the same propeller-powered aircraft they had built during WWII and were reluctant to retool their factories. Jets also required longer runways, new airports, and new air traffic control systems to handle them, and the technology was so unproven that the development costs for a jet airliner were likely to be enormous. All these factors created a chicken-and-egg problem whereby no manufacturer was willing to invest in jets unless enough airlines would buy them – and vice versa. And worse still, the jet age had already suffered its own tragic false start.

On May 2, 1952, the world’s first jet airliner, a De Havilland DH106 Comet, made its inaugural commercial flight from London to Johannesburg. It was an event that stunned the world; American manufacturers had nothing to compete with the Comet, and it seemed as though Britain – and not America – would rule the post-war skies. But the Comet’s reign was tragically short-lived. On January 10, 1954, a BOAC Comet departing from Rome broke up in mid-air over the Mediterranean, while three months later, on April 8, a South African Airways Comet crashed near Naples. These disasters led to the entire Comet fleet being grounded until the cause of the crashes could be determined. A lengthy investigation eventually concluded that the crashes were due to metal fatigue caused by the Comet’s square windows, whose corners caused stress to build up in the aircraft’s skin every time the cabin was pressurized. While the Comet was redesigned with round windows and returned to service in 1958, it was already too late: Britain had lost its early lead in the jetliner field, largely thanks to one man – Juan Trippe.

Trippe, the legendary founder and CEO of Pan American Airways, was known for being the first to jump on any new development in aviation technology, forcing all his competitors to play follow-the-leader. So it was that shortly after the Comet’s first flight, Trippe placed an order for three for Pan Am. While Trippe was criticized for ordering a foreign aircraft, there was method to his madness, for Trippe knew that this move would goad American manufacturers into developing their own jets. His instincts paid off, and in 1954 Seattle manufacturer Boeing unveiled its model 707, America’s first jet airliner. In response, Douglas Aircraft, which had stubbornly continued to produce propeller airliners, was forced to release their own competitor, the DC-8. While it may seem that Douglas was late to the party, they actually managed to use this late start to their advantage. Figuring that after spending nearly $15 million on development Boeing would be unwilling to make any major changes to its new aircraft, Douglas took the opportunity to improve upon the major shortcomings of the 707 design, such as its short range and small passenger capacity. The 707 might have been the first, but it would not be the best.

The gamble paid off, and on October 13, 1955, Pan Am placed an order for 25 DC-8s. It was a staggering turn of events,  given that at this point the DC-8 only existed on paper and as wooden mockups while the 707 prototype had been flying for over a year. As more and more airlines jumped on the bandwagon and ordered DC-8s, Boeing was forced to bite the bullet and produce a larger, longer-range version of the 707 called the Intercontinental. Turning the tables again on Douglas, Boeing secured the sale of 17 Intercontinental 707s to Pan Am and quickly began raking in orders from other carriers. By shrewdly playing America’s aircraft manufacturers off one another, Juan Trippe had dragged the industry kicking and screaming into the jet age – and on Pan Am’s own terms.

But Juan Trippe was not the only master of the freewheeling antics that characterized the early jet age, and he himself would soon fall for one of the most inspired ploys in civil aviation history. Enter the mastermind behind National Airlines, George Baker.

The American airline industry in the 1950s was very different from today’s, with international travel dominated by the giants Pan Am and TWA while domestic routes were divided among dozens of small regional airlines that fought fiercely for territorial dominance. Among these were rivals National Airlines, headed by George Baker, and Eastern Airlines, headed by WWI flying ace Eddie Rickenbacker. In 1958 both airlines were competing for control of the lucrative New York-to-Miami route and keen to get their hands on the new jet airliners before the other. But Boeing and Douglas were full up with orders from the big airlines, and it would be years before any aircraft became available for the smaller operators. Desperate to get a leg up on Eastern, George Baker turned to possibly the one man he hated more than Eddie Rickenbacker: Pan Am’s Juan Trippe.

Baker knew that Pan Am suffered a slump in ticket sales every winter – the same period when National did its best business. He thus proposed leasing two of Pan Am’s new Boeing 707s for the 1958-1959 winter season. And to make the deal even more enticing, he made Juan Trippe an offer he couldn’t refuse: 400,000 shares of National stock with an option for another 250,000. On hearing this offer Trippe’s jaw must have hit the floor, for exercising that option would have given Pan Am a controlling interest in National Airlines. For years Trippe had wanted to establish domestic U.S. routes, but the U.S. Government, wary that Pan Am would use its political clout to monopolize the domestic market, had stood in his way. Now, out of the blue, his dream was seemingly being handed to him on a silver platter. It seemed too good to be true, but Trippe nonetheless accepted Baker’s offer and handed over the planes.

Trading controlling interest in one’s company for a temporary advantage over a rival might seem about the most idiotic thing a CEO can do, but George Baker was no fool. He was counting on the intervention of a certain Government body, the Civil Aeronautics Board, to swing the deal entirely in his favour. From 1939 until 1985, the CAB regulated all aspects of civil aviation within the United States, and was responsible for barring Pan Am from flying domestic routes. Baker knew that the CAB would likely block National’s deal with Pan Am, but gambled on the fact that such a large, slow-moving Government bureaucracy would take months to do so – giving National just enough time to beat Eastern in the race for jets and fly the New York-Miami route all winter.

Amazingly, everything worked out exactly as Baker had planned: the CAB blocked the deal, Baker got to pull a fast one on Pan Am, and National Airlines gained the distinction of being the first American airline to fly jets on a domestic route – beating its rival Eastern Airlines by two years.

But as is often the case in business, nothing lasts forever. Though National enjoyed great success over the next decades – even expanding internationally in the 1970s – following a series of takeover attempts by Texas International Airlines and old rival Eastern Airlines, in 1980 National was finally acquired by Pan Am, at last giving the airline giant the domestic routes it had been seeking for nearly five decades. However, it’s hard to tell who had the last laugh, as the acquisition of National proved to be a disaster for Pan Am, contributing to its ultimate demise in 1991. That year also saw the collapse of Eastern Airlines, which, despite soldiering on for 65 years, was finally done in by a combination of high oil prices, competition due to airline deregulation, and labour unrest. Just like that, three of the pioneers of American civil aviation had suddenly ceased to exist. Yet despite these ignoble ends, it cannot be forgotten that it was these airlines and others like them that gave us the world of safe, reliable, and affordable jet travel we enjoy today, and it is hard not to admire the sheer wiliness, creativity, and chutzpah it took to get us there.


Bonus Facts:

#1: On August 7, 1955, Boeing held a demonstration flight of the Dash-80, the prototype for the 707 jetliner, over Lake Washington just outside Seattle. At the controls was legendary test pilot Alvin M. “Tex” Johnston, who was determined to give the spectators below a show they would never forget. As the assembled delegation of airline representatives looked on in amazement, Johnston pulled the 42-ton jetliner, 39-meter wingspan and all, into a full barrel roll. Furious, then-head of Boeing, Bill Allen, called Johnston to his office and angrily asked him just what he thought he was doing. According to legend, Johnston simply replied: “I was selling airplanes.”

#2: While the De Havilland Comet was the first jet airliner to fly, Britain was very nearly beaten to the punch by an unlikely competitor: Canada. On August 10, 1949 – only two weeks after the Comet – the prototype C102 Jetliner, designed and built by A.V. Roe Canada Limited, took to the skies for the first time above Malton Airport in Ontario. While the Jetliner was smaller and had a shorter range than the Comet, it was designed not for international travel but for shorter domestic routes such as those between Toronto, Montreal, and New York, which it could fly 20% more cheaply than competing propeller-driven airliners. In April 1950 the Jetliner became the first jet to carry mail between Toronto and New York, a trip it completed in only 58 minutes – half the previous record. So momentous was this achievement that the crew was treated to a ticker-tape parade in Manhattan, and Avro was certain that the orders would start pouring in.

Unfortunately, the Jetliner appeared doomed from the start. Following a change of management, in 1947 Trans Canada Airlines, which had originally contracted Avro to build the Jetliner, changed its mind and pulled out of the agreement. This left Avro without a buyer, and the prototype Jetliner was only completed thanks to a cash injection from the Canadian Government. But it would be this same Government that would finally seal the Jetliner’s fate. In the early 1950s Avro was engaged in building CF-100 Canuck all-weather interceptors for the Royal Canadian Air Force, and the Government came to see the Jetliner as an unnecessary distraction from this strategically vital task. So, in December 1951, Avro received a shocking order from Minister of Supply C.D. Howe: scrap the Jetliner.

Desperate to save their advanced new aircraft, Avro turned to an unusual ally: eccentric billionaire Howard Hughes. On April 7, 1952, a delegation from Avro flew the Jetliner to Hughes’ facility in Culver City, California. Over the next week Hughes flew the aircraft several times and stayed up late with the Avro engineers poring over the blueprints. By the end of the week, Hughes was fully converted and agreed to buy 30 Jetliners for his airline, TWA. He even made an agreement with American aircraft manufacturer Convair to produce the Jetliner under license, freeing up Avro’s production capacity. Deal in hand, Avro’s executives returned to Canada and pleaded with the Government for a few months to hammer out the details. But it was not to be. The Government held firm on its order, and on December 13, 1956, the only flying Jetliner and its half-completed sister were cut up for scrap. All that remains of the Jetliner today is the cockpit section, stored at the Canada Aviation and Space Museum in Ottawa – a sad reminder of a time when Canada almost ruled the skies.

References

Serling, Robert J, The Jet Age, The Epic of Flight Series, Time-Life Books, VA, 1986

Floyd, Jim, The Avro Canada C102 Jetliner, Boston Mills Press, ON, 1986





Is Stockholm Syndrome Actually a Thing?

At 10 AM on August 23, 1973, Jan-Erik Olsson, a convict on leave from prison, walked into the Kreditbanken branch on Norrmalmstorg square in Stockholm, Sweden. Dressed in a wig with his face painted black, he entered the bank lobby, pulled a submachine gun from under his coat, fired into the air, and yelled out: “The party has just begun!” Thus began the most infamous bank robbery in Swedish history, an event which kept Swedes glued to their television sets for six tense days and gave us the name of a well-known but controversial psychological condition: Stockholm syndrome.

Within minutes of Olsson’s arrival the police surrounded the bank, with Criminal Inspector Ingemar Warpefeldt being the first to enter. But he was immediately shot in the arm by Olsson, who at gunpoint ordered him to sit in a chair and “sing something.” As Warpefeldt sang “Lonesome Cowboy” by Elvis Presley, the police sent in another officer, Inspector Morgan Rylander, to act as a go-between between Olsson and the authorities. It was then that Olsson made his demands, asking for three million Kronor in cash, two pistols and bulletproof vests, a fast getaway car, and free passage out of Stockholm. He also demanded that his friend and fellow bank robber Clark Olofsson be released from prison and brought to the bank. To ensure that the police complied with his demands, Olsson rounded up four bank employees – Birgitta Lundblad, Elisabeth Oldgren, Kristin Ehnmark, and Sven Safstrom – and held them hostage in the bank vault.

What followed was a tense and surreal six-day standoff as the police scrambled to find a way to take down Olsson while still appearing to comply with his demands. Late on the first day they delivered the money along with Olofsson and the car, but when they forbade the robbers from leaving with the hostages Olsson and Olofsson chose to remain barricaded in the vault. Meanwhile ordinary Swedish citizens, enthralled by the spectacle playing out on live television, called the police and suggested all sorts of harebrained rescue schemes, from bringing in a Salvation Army choir to sing religious songs to filling the vault with tennis balls to immobilize the robbers to releasing a swarm of bees into the bank. On the third day the police managed to drill through the roof of the vault and snap a photo of the robbers and hostages inside, but were swiftly repelled when Olofsson shot an officer through the hole. Finally, on the night of August 28, six days into the crisis, the police pumped tear gas into the vault and forced the robbers to surrender.

Then, something strange happened. When the police called for the hostages to exit the vault first, they refused, with Kristin Ehnmark crying out: “No, Jan and Clark go first—you’ll gun them down if we do!” 

Once outside the vault, the robbers and hostages embraced, kissed and shook hands, and as the police dragged Olsson and Olofsson away, Enmark pleaded: “Don’t hurt them—they didn’t harm us.”

In the days that followed it became increasingly clear that the hostages had formed a strangely close personal bond with their captors. Despite telling the police many times that they would execute them, Olsson and Olofsson treated the hostages with remarkable kindness. Olsson gave Kristin Ehnmark his coat when she began to shiver, consoled her when she had a bad dream, and even gave her a bullet as a keepsake; and when Elisabeth Oldgren complained of claustrophobia he allowed her to walk around the bank lobby tied to a 30-foot rope. Such acts of kindness endeared the captors to their hostages, and within a day everyone was on a first-name basis. As hostage Sven Safstrom later recalled: “When he treated us well, we could think of him as an emergency God.”

Indeed, according to Ehnmark, the hostages soon came to fear and hate the police and government more than their captors, accusing them of gambling with their lives by drawing out the siege: “We [were] more afraid of the policemen than these two boys. We [were] discussing, and, believe it or not, having a rather good time [there]. Why can’t they let the boys drive off with us in the car?”

Ehnmark even phoned Swedish Prime Minister Olof Palme, imploring him to let the robbers take the hostages with them in the getaway car: “I think you are sitting there playing checkers with our lives. I fully trust Clark and the robber. I am not desperate. They haven’t done a thing to us. On the contrary, they have been very nice. But, you know, Olof, what I am scared of is that the police will attack and cause us to die.”

In another incredible show of compassion for her captors, when Olsson threatened to shoot Sven Safstrom in the leg to shake up the police, Ehnmark actually urged her colleague to take the shot.

The authorities suspected early on that something strange was going on when the Police Commissioner was allowed into the vault to check on the hostages’ health, only to find them hostile to him but relaxed and jovial with the robbers. Microphones placed in the hole in the vault roof also picked up the sounds of hostages and captors joking and laughing together. Indeed, it was this which convinced the police that tear gas could be used without fear of the robbers harming the hostages.

In the wake of the robbery, criminal psychiatrist Nils Bejerot, who had advised the police during the crisis, interviewed the hostages, many of whom continued to visit their captors in prison for years afterward. Bejerot coined the term Norrmalmstorgssyndromet, or “Norrmalmstorg syndrome,” to describe this apparently contradictory phenomenon. Outside Sweden, this soon became known as “Stockholm syndrome.”

But while the term was coined in 1973, it would not gain widespread use until three years later. On February 4, 1974, Patty Hearst, the 19-year-old heiress to the Hearst publishing fortune, was kidnapped from her Berkeley apartment by the Symbionese Liberation Army, or SLA, a left-wing urban guerrilla group. After ransom negotiations broke down, the SLA kept Hearst bound and blindfolded in a closet for months, forcing her to memorize left-wing literature on pain of death. As Hearst later testified: “[Donald] DeFreeze told me that the war council had decided or was thinking about killing me or me staying with them, and that I better start thinking about that as a possibility. I accommodated my thoughts to coincide with theirs.”

On April 15, two months after the kidnapping, Hearst suddenly reappeared while carrying out the armed robbery of the Sunset District Hibernia Bank in San Francisco, identifying herself as “Tania.” Over the next year and a half Hearst participated in a number of SLA actions, including another bank robbery and the attempted murder of two police officers, before being arrested on September 18, 1975. While being booked, Hearst gave her occupation as “urban guerrilla.”

Hearst’s trial, which began on January 15, was a landmark case in criminal liability, with her attorney, F. Lee Bailey, arguing that she had been brainwashed by the SLA and was suffering from Stockholm syndrome – bringing the newly-invented term to the public consciousness for the first time. According to US criminal law, in the absence of a diagnosed mental illness a person is considered fully responsible for any criminal action not committed under duress. Security footage of the Hibernia Bank robbery showed no sign of Hearst acting against her will, and while a post-arrest psychiatric evaluation uncovered signs of extreme mental trauma including a significant IQ drop, nightmares, and memory loss, she did not appear to be suffering from any discernible mental illness. Thus for her to be acquitted on grounds of brainwashing would have been unprecedented in US legal history.

Unfortunately, by demonstrating numerous instances where Hearst could easily have contacted the authorities and escaped the SLA, the prosecution managed to convince the jury that she had joined the group willingly, and Hearst was convicted of armed robbery and sentenced to 35 years in prison. She served 22 months before her sentence was commuted by President Jimmy Carter, and was later granted a full pardon by Bill Clinton in 2001.

Another famous case involving Stockholm syndrome is that of Natascha Kampusch, an Austrian girl who was kidnapped in 1998 at the age of 10 by Wolfgang Priklopil and held in a cellar for eight years. On the day Kampusch escaped, Priklopil, knowing the police were after him, committed suicide by jumping in front of a train. When Kampusch learned that her captor had died, she reportedly wept inconsolably and later lit a candle for him as his body lay in the morgue.

According to psychiatrist Dr. Frank Ochberg, who helped define the phenomenon for the FBI and Scotland Yard in the 1970s, Stockholm syndrome develops as part of a coping strategy that helps captives adapt to a highly stressful situation: “First people experience something terrifying that just comes at them out of the blue. They are certain they are going to die.

Then they experience a type of infantilization – where, like a child, they are unable to eat, speak or go to the toilet without permission. Small acts of kindness prompt a primitive gratitude for the gift of life.

The hostages experience a powerful, primitive positive feeling towards their captor. They are in denial that this is the person who put them in that situation. In their mind, they think this is the person who is going to let them live.”

This process is similar to the techniques allegedly used by China and North Korea to “brainwash” captured American servicemen during the Korean War. According to survivor testimony, prisoners were first tortured and deprived of sleep and food in order to break their will. They were then made to perform small tasks for their captors such as delivering mail or food, building a relationship of trust between captive and captor. These tasks grew progressively antithetical to the prisoner’s own worldview, such as writing or broadcasting anti-American propaganda, until the prisoner came to sympathize with his captor’s cause. As in the case of Patty Hearst, the prisoners adapted their thinking in order to survive.

However, despite its ubiquity in popular culture, actual instances of Stockholm syndrome are rare, and many psychiatrists don’t accept that it exists at all. According to Hugh McGowan, a hostage negotiator for the NYPD for 35 years: “I would be hard pressed to say that it exists. Sometimes in the field of psychology people are looking for cause and effect when it isn’t there. Stockholm was a unique situation. It occurred at around the time when we were starting to see more hostage situations and maybe people didn’t want to take away something that we might see again.”

Indeed, Stockholm syndrome is not an official psychiatric diagnosis and does not appear in the American Psychiatric Association’s Diagnostic and Statistical Manual (DSM), the International Statistical Classification of Diseases and Related Health Problems (ICD), or other commonly used diagnostic texts. According to Oxford University psychologist Jennifer Wild, what is commonly referred to as Stockholm syndrome may in fact be an amalgamation of other, more common psychological phenomena that present in extreme circumstances: “A classic example is domestic violence, when someone – typically a woman – has a sense of dependency on her partner and stays with him. She might feel empathy rather than anger. Child abuse is another one – when parents emotionally or physically abuse their children, but the child is protective towards them and either doesn’t speak about it or lies about it.”

Others argue that the whole concept of Stockholm syndrome is inherently sexist, as nearly all reported sufferers are women. The implication of the label, they argue, is that women are less resilient than men and that empathizing with a kidnapper is a sign of inherent weakness. But according to American journalist Daniel Lang, who interviewed the participants of the Norrmalmstorg robbery for the New Yorker, this view ignores a vital dimension of hostage-captor relations:

“I learned that the psychiatrists I interviewed had left out something: victims might identify with aggressors as the doctors claimed, but things weren’t all one way. Olsson spoke harshly. ‘It was the hostages’ fault,’ he said. ‘They did everything I told them to do. If they hadn’t, I might not be here now. Why didn’t any of them attack me? They made it hard to kill. They made us go on living together day after day, like goats, in that filth. There was nothing to do but get to know each other.’”

Many alleged sufferers also reject the label, including Natascha Kampusch, who stated in a 2010 interview: “I find it very natural that you would adapt yourself to identify with your kidnapper. Especially if you spend a great deal of time with that person. It’s about empathy, communication. Looking for normality within the framework of a crime is not a syndrome. It is a survival strategy.”


References

Bejerot, Nils, The Six Day War in Sweden, http://www.nilsbejerot.se/sexdagar_eng.htm

Ország, Juraj, Norrmalmstorg Robbery Which Defined the Stockholm Syndrome, Trevl, February 16, 2020, https://trevl.eu/article/norrmalmstorg-robbery-which-defined-the-stockholm-syndrome

Westcott, Kathryn, What is Stockholm syndrome? BBC, August 22, 2013, https://ift.tt/2Md6CdF

Escher, Kat, The Six-Day Hostage Standoff That Gave Rise to ‘Stockholm Syndrome,’ Smithsonian Magazine, August 23, 2017, https://ift.tt/2RJCfRg

Klein, Christopher, Stockholm Syndrome: The True Story of Hostages Loyal to Their Captor, History, April 9, 2019, https://www.history.com/news/stockholm-syndrome

Boissoneault, Lorraine, The True Story of Brainwashing and How it Shaped America, Smithsonian Magazine, May 22, 2017, https://www.smithsonianmag.com/history/true-story-brainwashing-and-how-it-shaped-america-180963400/

Patty Hearst, The Famous Pictures Collection, May 14, 2013, http://www.famouspictures.org/patty-hearst/

Crime: The Hearst Nightmare, TIME, Monday, April 29, 1974, https://ift.tt/3xdNeCU





The Curious Case of the Cat that was Turned Into a Living Telephone FOR SCIENCE!!!

The domestication of Felis catus, AKA the common house cat, began around 10,000 years ago, when their skill at hunting rats, mice, and other vermin seemingly first came to be appreciated. Since then cats have enjoyed a rather comfortable relationship with humanity, even being revered by the Ancient Egyptians as representatives of the goddess Bastet. But history hasn’t all been catnip and MeowMix; in the Middle Ages, it was believed that cats were the familiars of witches and responsible for spreading the Black Death, leading to millions of them being slaughtered. Indeed, the superstition that black cats are unlucky has followed them all the way to the present day, with black cats being statistically less likely to be adopted from animal shelters. And then there was the time a pair of Princeton researchers turned a cat into a living telephone – FOR SCIENCE!

This bizarre experiment was the brainchild of Professor Ernest Glen Wever, a major pioneer in the field of audiology. Born in 1902 in Benton, Illinois, Wever obtained his PhD in Experimental Psychology from Harvard in 1926 before joining the faculty of UC Berkeley and then the Department of Psychology at Princeton. Interestingly, Wever’s graduate studies were not in auditory perception but in visual perception, his work focusing on how the brain distinguishes between figure and ground in images. In the course of this research, however, Wever became interested in how the auditory centres of the brain distinguish between useful signals and background noise, and began a partnership with Bell Telephone Laboratories that would soon set him down a highly successful career path.

In 1929, Wever and his research assistant Charles William Bray were investigating how the ear and the brain encode audio signals into nerve impulses. The dominant theory at the time held that the frequency of impulses in the auditory nerve was proportional to the intensity of the auditory stimulus. Wever and Bray, however, believed that the nerve impulse frequency closely replicated the frequency of the stimulus, much like the signal in a telephone line; hence this view became known as the “telephone theory” of audio encoding. In order to test this theory, Wever and Bray decided to take the unorthodox approach of turning a cat into a living telephone. After sedating the unfortunate animal, they removed a section of its skull and wrapped an electrode around one of its auditory nerves. The signal from the electrode was passed through a vacuum-tube amplifier and a shielded cable to a telephone receiver in a soundproof room 50 feet away. As Bray spoke into the cat’s ear, Wever listened at the receiver and noted the sounds picked up by the electrode.

To the pair’s astonishment, the signals detected from the cat’s auditory nerve were remarkably clear, almost as if they had been produced by a mechanical telephone transmitter. As Wever later noted: “Speech was transmitted with great fidelity. Simple commands, counting and the like were easily received. Indeed, under good condition the system was employed as a means of communication between operating and sound-proof rooms.”

To ensure that no other mechanism was responsible for generating these signals, Wever and Bray varied the conditions of the experiment, placing the electrode in tissues adjacent to the auditory nerve and restricting blood flow to the cat’s brain. In every case transmission abruptly stopped, confirming that the signals were indeed coming from the auditory nerve. The pair even took the extreme step of killing the cat, with Wever clinically noting: “After the death of the animal the response first diminished in intensity, and then ceased.”

Wever and Bray published their findings in a paper titled The Nature of the Acoustic Response: The Relation Between Sound Frequency and Frequency of Impulses in the Auditory Nerve, which appeared in the Journal of Experimental Psychology in 1930. Their unorthodox study provided compelling evidence for the telephone theory of audio encoding, and led to the pair being awarded the Howard Crosby Warren Medal by the Society of Experimental Psychologists in 1936.

Wever and Bray would go on to enjoy long, illustrious careers, with Bray becoming an Associate Professor of Psychology at Princeton, an Associate Research Director for U.S. Air Force Human Resources Research, and a leading civilian psychological researcher for the National Defence Research Committee and the U.S. Navy. He died in 1982 at the age of 78.

Meanwhile, Ernest Wever served as a consultant for the National Defence Research Committee on anti-submarine warfare during WWII, where he discovered that men with musical training made superior sonar operators regardless of which instrument they played. After the War he became one of the leading authorities on human hearing, publishing the definitive monograph on the subject, The Theory of Hearing, in 1949. Among his greatest contributions to the field was the development of the volley theory of frequency coding. This was based on an observation made during the telephone cat experiments that the frequency of many perceivable sounds exceeds the maximum known firing frequency of nerve cells. Wever postulated that at these frequencies the entire signal is not carried by single nerve fibres but rather by clusters of nerves firing in “volleys.” Later in his career Wever became interested in the evolution of vertebrate hearing, and in 1967 established a research zoo at Princeton featuring various species of birds, fish, reptiles, amphibians, and even dolphins airlifted from Florida. This research yielded two more definitive texts: The Reptile Ear in 1978 and The Amphibian Ear in 1985. After a productive career spanning more than six decades, Ernest Wever died in 1991 at the age of 89.
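To get a feel for the arithmetic behind volley theory, here is a minimal Python sketch. It is not taken from Wever’s work, and the 500-spikes-per-second ceiling per fibre is an assumed, illustrative figure; the point is simply that a small pool of fibres, each phase-locked to a different cycle of a tone, can collectively mark every cycle of a frequency that no single fibre could follow on its own.

```python
import math

def volley_sketch(tone_hz=3000.0, max_fiber_rate_hz=500.0, duration_s=0.01):
    """Illustrate volley coding: a pool of fibres takes turns firing, one
    fibre per cycle, so the *combined* spike train matches the tone.

    The 500 spikes/s per-fibre ceiling is an assumed, illustrative figure.
    """
    # Minimum pool size so that no fibre has to fire faster than its ceiling.
    n_fibers = math.ceil(tone_hz / max_fiber_rate_hz)

    # Times of the tone's cycles (one potential spike per cycle).
    cycle_times = [n / tone_hz for n in range(int(duration_s * tone_hz))]

    # Round-robin assignment: cycle n is "taken" by fibre n % n_fibers.
    fibers = [[] for _ in range(n_fibers)]
    for n, t in enumerate(cycle_times):
        fibers[n % n_fibers].append(t)

    # Each individual fibre now fires at tone_hz / n_fibers <= its ceiling...
    per_fiber_rate = tone_hz / n_fibers
    # ...but the pooled spike train still has one spike per cycle of the tone.
    pooled = sorted(t for spikes in fibers for t in spikes)
    pooled_rate = len(pooled) / duration_s

    print(f"{n_fibers} fibres at {per_fiber_rate:.0f} spikes/s each; "
          f"pooled rate ≈ {pooled_rate:.0f} spikes/s (tone = {tone_hz:.0f} Hz)")

volley_sketch()
```

No single fibre ever exceeds its ceiling, yet the pooled spike train still carries the tone’s full periodicity – which is essentially Wever’s insight.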

Wever and Bray’s most infamous experiment, however, would go on to have a legacy all its own. Shortly after the publication of their 1930 paper on the cat telephone experiments, it was discovered that several of their major conclusions were in fact mistaken. A 1932 experiment by Hallowell Davis revealed that the signals Wever and Bray had picked up were generated not by the auditory nerve but rather by the cochlea, a spiral-shaped structure in the inner ear. In fact, the signals in the auditory nerve are so compressed and distorted that decoding them into comprehensible speech would have been all but impossible. Nonetheless, the discovery of the so-called “cochlear microphonic”, whose frequency closely mirrors that of the sound being perceived, was to have major repercussions in the field of audiology, eventually leading to the creation of the cochlear implant by French inventors André Djourno and Charles Eyries in 1957. Cochlear implants work by using a microphone to generate an electrical signal which directly stimulates the cochlear nerve, bypassing damaged or diseased parts of the ear and allowing people with certain kinds of auditory impairment to regain part of their hearing. As of this writing, more than 180,000 people worldwide have benefited from this technology; but whether this was worth turning a cat into a telephone…well, we’ll let you decide.
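As a rough illustration of the principle modern implants build on – and only an illustration; this is neither the 1957 Djourno–Eyriès device nor any manufacturer’s actual algorithm – the short Python sketch below splits a frame of audio into a handful of frequency bands and maps each band’s energy to a stimulation level for one hypothetical electrode, mimicking the way an implant parcels sound out along the cochlea.

```python
import numpy as np

def toy_implant_frame(samples, fs=16_000,
                      band_edges=(200, 500, 1000, 2000, 4000, 8000)):
    """Map one frame of audio to per-electrode stimulation levels.

    Illustrative only: real processors use filter banks, envelope
    extraction and patient-specific current maps, not a bare FFT.
    """
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / fs)

    levels = []
    for lo, hi in zip(band_edges[:-1], band_edges[1:]):
        # Sum the spectral energy falling in this band.
        levels.append(spectrum[(freqs >= lo) & (freqs < hi)].sum())

    # Normalise to a 0..1 "stimulation level" per hypothetical electrode.
    levels = np.array(levels)
    if levels.max() > 0:
        levels = levels / levels.max()
    return levels

# Example: a 1.2 kHz test tone should light up mostly the 1-2 kHz "electrode".
fs = 16_000
t = np.arange(0, 0.02, 1 / fs)        # one 20 ms frame
tone = np.sin(2 * np.pi * 1200 * t)
print(toy_implant_frame(tone, fs))
```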


Bonus Fact

If being turned into a living telephone isn’t enough of an affront to our cat-loving viewers, then how about being turned into a living surveillance device? This was the aim of a bizarre project called ‘Acoustic Kitty,’ dreamed up by the CIA’s Directorate of Science & Technology in the mid-1960s.

According to Robert Wallace, former Director of the CIA’s Office of Technical Services, the project was inspired by a surveillance operation targeting an unnamed Asian head of state, during which cats were observed to wander in and out of the target’s meeting area unnoticed. It was thus decided that cats would make an ideal platform for carrying hidden microphones. To this end, several unfortunate animals were subjected to a gruesome surgery in which a microphone was implanted in their ear canal, a miniature radio transmitter and battery pack in their ribcage, and a fine antenna wire carefully woven into their fur from head-to-tail.

Almost immediately, however, the program ran into difficulties. As any cat owner can tell you, cats are independent-minded creatures and not easily trained. Despite claims by Bob Bailey, the CIA’s main animal trainer, that “we never found an animal we could not train,” convincing the cats to seek out and linger around secret conversations proved harder than anticipated, as the animals were easily distracted by hunger and other natural impulses. The CIA was thus forced to perform additional surgeries to eliminate the cats’ sensation of hunger, and according to some sources achieved some success in using primitive brain stimulation devices to guide the cats via remote control.

The first Acoustic Kitty mission was carried out against two diplomats sitting in a park outside the Soviet embassy in Washington D.C. Sources vary on what happened next, with most versions of the story claiming that upon being released from the surveillance van, the cat was immediately run over by a taxi and killed. Robert Wallace disputes this, however, claiming that the handlers were simply unable to get the cat to reach the target, and that: “…the equipment was taken out of the cat; the cat was re-sewn for a second time, and lived a long and happy life afterwards.”

Further tests confirmed the concept to be unworkable, and Acoustic Kitty was cancelled in 1967 at a total cost of $20 million. The heavily-redacted report on the project, titled Views on Trained Cats, concluded that while: “… cats can indeed be trained to move short distances…the environmental and security factors in using this technique in a real foreign situation force us to conclude that for our purposes, it would not be practical.”

References

Kim, Arthur, The Cat Telephone, Mudd Manuscript Library Blog, April 26, 2017, https://blogs.princeton.edu/mudd/2017/04/the-cat-telephone/

Ernest Wever and Charles Bray, http://www.ling.fju.edu.tw/hearing/historical%20review1930-1.htm

Sterne, Jonathan, The Cat Telephone, Research Gate, January 2009, https://www.researchgate.net/publication/236758802_The_Cat_Telephone

O’Brien, Elle, Ernest Glen Wever – Historical Biographies in Acoustics, https://acousticstoday.org/7408-2/

Vanderbilt, Tom, The CIA’s Most Highly-Trained Spies Weren’t Even Human, Smithsonian Magazine, October 2013, https://www.smithsonianmag.com/history/the-cias-most-highly-trained-spies-werent-even-human-20149/

Views on Trained Cats, https://ift.tt/2E2je1l

Escher, Kat, The CIA Experimented on Animals in the 1960s Too. Just Ask ‘Acoustic Kitty,’ Smithsonian Magazine, August 8, 2017, https://ift.tt/2VAP53I




from Today I Found Out
by Gilles Messier - June 16, 2021 at 12:02AM

Tuesday, June 15, 2021

How WWII Made Everybody Think Carrots Were Good for Eyes and Why It Didn’t Really Have to Do With Tricking the Germans

“Carrots help you see in the dark.” Most of us were taught this fact from an early age, presumably in a vain attempt to convince us to eat our vegetables. And on the surface, this makes sense: after all, carrots are rich in beta-carotene, a precursor of Vitamin A, which is essential to maintaining the health of our retinas and corneas. But while carrots do have real health benefits, the idea that eating a lot of them will magically grant you night vision superpowers is a myth, one with its origins in WWII British propaganda.

In September 1940 the German Luftwaffe launched the ‘Blitz,’ an aerial bombing campaign against London and other British cities with the aim of demoralizing the British into ending the war. While the Luftwaffe initially bombed by day, ferocious resistance by RAF Fighter Command forced them to switch to night bombing. And while this should have made the bombers almost impossible to track and shoot down, the British had a secret weapon up their sleeve: radar.

The invention of radar came about almost by accident. In 1934, British intelligence began receiving reports that the Germans were developing some kind of “death ray” which could use powerful radio waves to stall an enemy bomber’s engines or even set it alight at a great distance. A special body known as the Tizard Committee was established to investigate these claims, and while they found that the power consumption of such a death ray would make it impractical, they also came to another, surprising conclusion: that a far more reasonably-sized transmitter could be used to detect and track enemy aircraft long before they reached their target. The task of evaluating this possibility fell to radio expert Robert Watson-Watt and his assistant Arnold Wilkins, who on February 26, 1935 used a prototype radio transmitter to successfully detect an RAF bomber flying over Daventry. The British initially called the system RDF, for “radio direction finding,” though it would soon become better known by the American acronym RADAR – “RAdio Detection And Ranging” – and just prior to the war a line of radar towers known as Chain Home was erected along the Channel coast. These proved invaluable during the Battle of Britain, allowing incoming German bombers to be detected far enough in advance for RAF fighters to be scrambled to intercept them. However, these units were extremely large and operated at relatively long wavelengths, making them all but useless for shooting down aircraft at night. Then, in 1940 John Randall and Harry Boot at the University of Birmingham developed the cavity magnetron, a device which allowed the construction of radars light and compact enough to be mounted in aircraft. The first of these units, the Airborne Interception or AI radar Mk. IV, entered service in early 1941, and allowed British night fighter pilots to find and shoot down German bombers in large numbers. The greatest of these aces, Squadron Leader John ‘Cat’s Eyes’ Cunningham, racked up 20 victories before the war’s end – 19 of them at night.
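The “ranging” half of that acronym boils down to simple arithmetic: a radio pulse travels out at the speed of light, bounces off the target, and returns, so the range is half the round-trip time multiplied by c. A minimal Python sketch of that calculation, using an assumed 400-microsecond echo delay purely as an example:

```python
C = 299_792_458  # speed of light in m/s

def range_from_echo(delay_s: float) -> float:
    """Target range from a radar echo's round-trip delay: R = c * t / 2."""
    return C * delay_s / 2

# Assumed example: an echo arriving 400 microseconds after the pulse
# corresponds to a target roughly 60 km away.
delay = 400e-6
print(f"{range_from_echo(delay) / 1000:.0f} km")   # ~60 km
```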

In order to conceal the secret of radar, the British Ministry of Information launched a campaign of misinformation attributing the success of pilots like Cunningham to their superior eyesight, gained through eating large amounts of carrots. According to John Stolarzcyk of the World Carrot Museum – and yes, there is such a thing: “Somewhere on the journey the message that carrots are good for your eyes became disfigured into improving eyesight.”

But this propaganda wasn’t just aimed at the Germans. When the Luftwaffe began bombing by night, the British imposed a total blackout, requiring citizens to keep their lights off or cover their windows with heavy blackout curtains to deny German pilots a means of locating their targets. Inevitably, blackout conditions led to a sharp increase in traffic collisions, with nearly 1,130 Britons dying in nighttime road accidents in the first year of the war alone. Soon, Ministry of Information posters appeared claiming: “Carrots keep you healthy and help you see in the blackout.” Other government departments also got in on the campaign, with the Ministry of Agriculture releasing a statement in December 1940 claiming: “If we included a sufficient quantity of carrots in our diet, we should overcome the fairly prevalent malady of blackout blindness.”

Lord Woolton, head of the Ministry of Food, even coined a catchy saying: “A carrot a day keeps the blackout at bay.”

This campaign was itself part of a larger effort to change the eating habits of the British public. Being a small island nation, Britain depended on foreign imports for much of its food supply, so at the outbreak of the war the German navy launched a campaign of unrestricted submarine warfare against merchant shipping in order to starve the British into submission. In response, the British government imposed a system of rationing on food, fuel, and other everyday products. With many staple foods like butter, meat, and sugar now in short supply, the Government encouraged citizens to switch their diet to one richer in vegetables, which they could grow themselves in home “Victory Gardens.” The Ministry of Food’s “Dig for Victory” campaign cast home gardening as a patriotic, war-winning duty, with Lord Woolton stating: “This is a food war. Every extra row of vegetables in allotments saves shipping. The battle on the kitchen front cannot be won without help from the kitchen garden. Isn’t an hour in the garden better than an hour in the queue?”

This push for food sustainability led to a greater emphasis being put on the consumption of carrots, which were easy to grow, highly-nutritious, and could be used as a mild sweetener in the absence of real sugar. Cartoon characters called “Dr. Carrot” and “Potato Pete” were used to promote vegetable consumption, while Government pamphlets and radio programmes like “The Kitchen Front” touted a wide assortment of creative – and sometimes questionable – carrot-based recipes such as carrot pudding, carrot cake, carrot marmalade, carrot flan, carrot fudge, “Woolton Pie” and even a drink called “carrolade.” Children were even sold carrots-on-a-stick in place of lollipops. And in case defeating the Nazis wasn’t motivation enough to eat your carrots, the Government continued to push the notion that a carrot-rich diet would prevent you from being hit by a truck during the blackout.

Such was the British obsession with carrots that it even made the news in the United States, with Raymond Daniell, the New York Times’ London Bureau Chief, sarcastically remarking: “Lord Woolton, who is trying to wean the British away from cabbage and Brussels sprouts, is plugging carrots. To hear him talk, they contain enough Vitamin A to make moles see in a coal mine.”

But following its entry into the war, America would also jump onto the carrot bandwagon, with the New York Times printing British Ministry of Food recipes and articles stating: “England grows a lot of carrots, and it’s on them that she largely relies to prevent her people from bumping into lampposts, automobiles, and each other.”

Even Walt Disney got in on the game, with Disney cartoonist Hank Porter designing a family of cartoon characters based on the British Dr. Carrot – including Carroty George, Pop Carrot, and Clara Carrot – to promote vegetable consumption to the American public. Warner Brothers also made their own contribution in the form of a wisecracking, carrot-munching rabbit called Bugs Bunny.

These campaigns were so successful that by 1942 Britain found itself with a 100,000-ton surplus of carrots. But what of the original goal of the campaign, to deflect German attention away from the existence of airborne radar? While there are apocryphal stories of the Germans feeding their pilots larger amounts of carrots, most historians agree that they were not fooled by the British propaganda. According to Bryan Legate, assistant curator of the RAF Museum in London:

“I would say that whilst the Air Ministry were happy to go along with the story, they never set out to use it to fool the Germans. The German intelligence service were well aware of our ground-based radar installation and would not be surprised by the existence of radar in aircraft. In fact, the RAF were able to confirm the existence of German airborne radar simply by fitting commercial radios into a bomber and flying over France listening to the various radio frequencies.”

The campaign was far more successful with Allied civilians, however, among whom the myth that carrots improve night vision stubbornly persists to this day.

This is not to say, however, that carrots have no effect on eye health. Far from it. Mild Vitamin A deficiency can lead to night blindness – the inability to see in low light – while severe deficiency can lead to a serious degeneration of the retina and cornea. Every year an estimated quarter to a half-million children go blind from Vitamin A deficiency – mostly in poorer countries like Nepal and regions of India – and supplements of beta-carotene and Vitamin A have proven effective in reversing much of the damage. However, Vitamin A can only restore and maintain eyesight to regular healthy levels; it cannot magically grant you night-vision superpowers. In excessive amounts Vitamin A can actually be toxic, though getting it from carrots and other vegetable sources poses little risk, as your body simply won’t convert the beta-carotene into Vitamin A if it doesn’t need it. The same cannot be said of animal sources like liver or animal-based Vitamin A supplements, where the vitamin already comes in its active form. Furthermore, studies have shown that carrots are actually a relatively inefficient source of Vitamin A: beta-carotene is converted into Vitamin A by enzymes in the cells of the intestinal wall, and it takes around 12-21 molecules of beta-carotene to produce just one molecule of Vitamin A.
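To put that inefficiency into rough numbers, here is a back-of-the-envelope Python sketch. It uses the ~12:1 conversion figure above together with two assumed values – roughly 5,000 µg of beta-carotene in a medium carrot and a daily Vitamin A target of about 900 µg of retinol equivalents – so treat the output as illustrative rather than dietary advice.

```python
# Back-of-the-envelope: how many carrots to meet a daily Vitamin A target?
# Assumed figures (illustrative only):
BETA_CAROTENE_PER_CARROT_UG = 5000   # ~ug of beta-carotene in a medium carrot
CONVERSION_RATIO = 12                # ~12 ug beta-carotene -> 1 ug retinol equivalent
DAILY_TARGET_UG_RAE = 900            # rough adult daily Vitamin A target

retinol_per_carrot = BETA_CAROTENE_PER_CARROT_UG / CONVERSION_RATIO
carrots_needed = DAILY_TARGET_UG_RAE / retinol_per_carrot

print(f"One carrot ~ {retinol_per_carrot:.0f} ug retinol equivalents")
print(f"Daily target of {DAILY_TARGET_UG_RAE} ug ~ {carrots_needed:.1f} carrots")
```

Even at the least favourable 21:1 ratio, a few carrots a day comfortably covers such a target – plenty for healthy eyes, but no amount of them will push your vision beyond normal.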

So go ahead: eat as many carrots as you want. But if you really want to see in the dark – and we won’t ask why – then may we suggest a pair of night-vision goggles?


References

Smith, Annabelle, A WWII Propaganda Campaign Popularized the Myth that Carrots Help You See in the Dark, Smithsonian Magazine, August 13, 2013, https://www.smithsonianmag.com/arts-culture/a-wwii-propaganda-campaign-popularized-the-myth-that-carrots-help-you-see-in-the-dark-28812484/

Ewebank, Anne, Why Wartime England Thought Carrots Could Give You Night Vision, Gastro Obscura, October 25, 2017, https://www.atlasobscura.com/articles/carrots-eyesight-world-war-ii-propaganda-england

Maron, Dina, Fact or Fiction?: Carrots Improve Your Vision, Scientific American, June 23, 2014, https://www.scientificamerican.com/article/fact-or-fiction-carrots-improve-your-vision/

Johnson, Brian, The Secret War, Hutchinson Publishing, London, 1978




from Today I Found Out
by Gilles Messier - June 15, 2021 at 11:44PM

Altoids Sour Tangerines are BACK!



You miss them, we miss them. THEY'RE BACK (kinda)

from Candy Gurus
by Jonny - June 15, 2021 at 11:33AM

Monday, June 14, 2021

The World’s Most Dangerous Tree

In 1999, British radiologist Nicola Strickland went on holiday with a friend to the Caribbean island of Tobago. While exploring a deserted beach looking for seashells, the pair came upon a number of small, round, yellow-green fruits scattered among the fallen coconuts and mangoes. Intrigued, they decided to try the fruits and found them to be pleasantly sweet. But that pleasure was not to last. In a 2000 article in the British Medical Journal, Strickland describes what happened next:

“Moments later we noticed a strange peppery feeling in our mouths, which gradually progressed to a burning, tearing sensation and tightness of the throat. The symptoms worsened over a couple of hours until we could barely swallow solid food because of the excruciating pain and the feeling of a huge obstructing pharyngeal lump.

Over the next eight hours our oral symptoms slowly began to subside, but our cervical lymph nodes became very tender and easily palpable. Recounting our experience to the locals elicited frank horror and incredulity, such was the fruit’s poisonous reputation.”

Indeed, Strickland and her friend were extraordinarily lucky to survive their ordeal, for the tree the innocuous-looking fruit had fallen from was none other than the Manchineel, a plant so extraordinarily toxic that one cannot touch it, shelter beneath it, or even breathe the air around it without entering a world of hurt. It is widely considered to be the most dangerous tree in the world.

Hippomane Mancinella, also known as the “Beach Apple” or, in Spanish, Manzanilla de la Muerte – the “little apple of death” – is a small shrub-like evergreen tree native to southern Florida, the Caribbean, Mexico, Central America, and northern South America. Reaching up to 15 metres in height, it is mostly found on beaches or in brackish swamps, where it often grows between mangrove trees. Manchineel is a member of the Spurges, a large family of plants that includes the holiday poinsettia. But while, contrary to popular belief, eating a poinsettia will not hurt you or your pets in the slightest, the Manchineel packs an altogether nastier punch. Every part of the tree, from the roots to the leaves, is filled with a milky, latex-like sap containing a deadly cocktail of toxins including phorbol, hippomanin, mancinellin, apogenin, phloracetophenone, and physostigmine. Of these, perhaps the nastiest is phorbol, a highly caustic chemical which, on contact with the skin, inflicts large, painful blisters and, if splashed in the eyes, induces temporary blindness. Even breathing the air close to the tree is enough to cause slight lung damage. Phorbol is also highly soluble in water, meaning that anyone foolish enough to shelter under a Manchineel tree during a rainstorm is likely to get soaked head to toe in the botanical equivalent of WWI mustard gas. In fact, phorbol is so corrosive it has even been known to peel the paint off of cars.

If ingested, the other toxins in the Manchineel’s sap and fruit can induce severe throat pain and swelling, vomiting, excruciating intestinal pain, psychological disturbances, and even death. Indeed, the tree’s scientific name, Hippomane Mancinella, literally translates to “the little apple that drives horses mad.” Among the many toxins found in the tree, one, physostigmine, is also found in the Calabar bean, which for centuries was used by the Efik people of south-east Nigeria as an ordeal poison. According to Efik custom, a person accused of witchcraft would be made to drink a mixture of crushed-up Calabar bean and water; if they died, they were guilty, but if they survived – usually by immediately vomiting up the poison – they would be declared innocent and released.

If by now your reaction to the big pile of “nope” that is the Manchineel tree is to yell “kill it with fire!”, unfortunately you are once again out of luck, as the smoke from burning the tree can inflict severe damage to the eyes and lungs. Point: Manchineel tree…

The toxic properties of the Manchineel have been known for centuries, the sap being used as a weapon by many Caribbean tribes such as the Arawak, Taino, Carib, and Calusa. Indeed, it was a Calusa arrow tipped with Manchineel sap which reportedly killed Spanish Conquistador Juan Ponce de Leon during a skirmish in Florida in 1521. There are also reports of tribes tying their enemies to the trees as a form of torture. But not all the tree’s uses were so violent; the dried sap and fruit, for example, are used in traditional medicine to treat edema and urinary issues. And incredibly, despite its dangerous reputation Manchineel wood has also been used for centuries by Caribbean carvers and cabinetmakers. As cutting the trunk with an axe is too dangerous, the tree must instead be burned at the base – with the collector, one assumes, standing far, far away – and the wood dried in the sun for several days to destroy the toxins in the sap.

Among the first Europeans to encounter the Manchineel tree was Christopher Columbus, who gave it its traditional name of “little apple of death” and described its effects on sailors who accidentally ate its fruit or cut down the tree for firewood. The tree was also commonly encountered during the Golden Age of Piracy and appears in the memoirs of many 17th and 18th Century buccaneers such as Basil Ringrose and William Stephens as well as in the diary of William Ellis, the surgeon on Captain James Cook’s last voyage.

At this point you may be wondering: how on earth did the Manchineel evolve to be so horrifically toxic? After all, most fruit-bearing trees depend on their fruit being eaten by animals in order to spread their seeds. But with the sole exception of the Black-Spined Iguana, which is even known to live among the tree’s branches with no ill effects, the Manchineel is toxic to nearly every known animal. As it turns out, the Manchineel has no need of animals as, by virtue of growing near water, its buoyant fruit are easily dispersed by ocean currents in the same manner as coconuts. Thus the tree’s extreme toxicity likely persists because it poses no obstacle to its reproduction, while ensuring that any potentially destructive animals keep far, far away.

Today the Manchineel is an endangered species, but rather than being exterminated like the demon spawn it is, the tree is protected as its roots help to stabilize the soil and protect shorelines from erosion. Consequently, Manchineel trees in areas accessible to the public are often clearly marked with red paint, small fences, or explicit warning signs to make sure nobody goes anywhere near them. While no deaths from eating Manchineel fruit have been confirmed in modern times, dozens of cases of burns and blindness due to contact with its sap are reported every year. So if you are ever on a Caribbean holiday and come across a small tree with reddish bark, spear-shaped leaves, and small yellow-green fruit, don’t even think about it. Just walk away.


Bonus Fact:

A contender for the most dangerous tree in the world is one known colloquially as “The Murder Tree”.

Cerbera odollam is a small hardwood tree (see our video How Do They Differentiate Between Hardwood and Softwood Trees? – really, that title sounds boring and obvious, but it’s actually not what you think and is super fascinating; I promise…) that can, under favourable conditions, grow to around 10 metres in height and is endemic to India and south-east Asia. Despite its unassuming appearance, the tree hides a deadly secret inside the husk that contains its seeds. These seeds contain a cardiac glycoside called cerberin. Cerberin is incredibly toxic in relatively low dosages, often killing its victims within a few hours, during which time they may suffer crippling stomach pain, diarrhea, irregular heart rhythm, vomiting, and sometimes a splitting headache. Eventually, once enough of it accumulates in your system, the cerberin will succeed in completely inhibiting the cellular “sodium/potassium pump” enzyme (Na+/K+-ATPase), screwing with the heart’s electrical system and ultimately stopping it dead – very similar to how lethal injections in the United States work. And for reference, a single cerbera odollam seed contains a lethal dose of cerberin for a typical adult human.

While accidental ingestion of the inner seed is not completely unheard of – the fruit produced by the cerbera odollam is edible, if a little bitter – the seeds are commonly used for murder and suicide in Indian coastal towns which border the sort of marshy swampland the tree likes to grow in. Exactly how many people are killed each year by having their food intentionally spiked with cerbera odollam seeds isn’t clear, because the poison doesn’t show up on normal toxicology reports and is relatively unknown in many regions of the world. This has led some experts, such as French toxicologist Yvan Gaillard, who published the results of a decade-long study on this very topic in the Journal of Ethnopharmacology, to describe the plant as being “perfect” for murder. You see, most toxicologists, even if they’ve heard of the plant, will only test for cerberin poisoning if there’s a strong suspicion the victim consumed something containing it prior to their death, because testing for cerberin poisoning is rather costly and requires the use of “high-performance liquid chromatography coupled with mass spectrometry” to detect with any degree of certainty – something that is not an option in many regions anyway. Of course, those toxicologists who haven’t heard of it would never know to check.

Because of this, the number of deaths caused by cerbera odollam poisoning is uncertain. That said, based on the documented instances that are known (which likely make up a small percentage of the actual total), the plant is responsible for at least one death per week in the South Indian state of Kerala alone, where it grows wild and in abundance and accounts for an estimated 50% of plant-poisoning cases in the region annually.

You might wonder why a person being poisoned with such a seed wouldn’t taste it, given its bitter flavour, but the seeds can easily be masked by putting them into a dish containing something like the chilies that are prevalent in Indian cuisine. The taste can also be masked with sugar, and the vast majority of suicides committed via ingestion of the seeds are carried out by removing the seed from the outer husk, then crushing it and mixing it with raw cane sugar. “Just a spoonful of sugar helps the medicine go down” and all that. Not the most pleasant way to die, but given that the seeds are freely available in places like Kerala and the outcome is relatively certain, the method remains extremely popular.

Interestingly, with the limited data we do have (including suicides and murders), it’s noted that 75% of people who die via ingesting cerbera odollam seeds are women. Researchers at the Laboratory of Analytical Toxicology in La Voulte-sur-Rhône speculate that this massive gender discrepancy is because the plant is being used to poison newly married wives “who do not meet the exacting standards of some Indian families”. However, beyond the murders, young women are also statistically much more likely to use cerbera odollam seeds to commit suicide; such was the case in May of 2015, when four young girls consumed the seeds as part of a suicide pact after being abused at an athletic training camp. That said, it should be noted that in the Western world, while about four men commit suicide for every one woman, nearly three times as many women as men attempt to kill themselves. If the popular method of suicide in the Western world were something as reliably lethal as cerbera odollam seeds, those tragic numbers could potentially skew significantly, bringing them more in line with the Laboratory of Analytical Toxicology’s data for Kerala, India.

Whatever the case, the prominent use of this plant in suicides and murders has led to local governments trying to find new uses for this natural resource. Particularly in impoverished areas, if the seeds have marketable uses, it’s literally money growing on trees for anyone who wants it, making the seeds less readily available as they’re harvested en masse – so less death and more money in the local economy. Towards this end, in recent years the seeds have begun being harvested for use in various products like bio-insecticides and rat poison, among other things. In Kerala, where the plant is responsible for more deaths than anywhere else on Earth, locals can earn a decent living dehusking the plant with their bare hands in various processing yards across the state. We bet lunch hour at those places is really tense…

References

MacInnis, Peter, Poisons: from Hemlock to Botox and the Killer Bean of Calabar, Allen & Unwin, 2004

Nosowitz, Dan, Do Not Eat, Touch, Or Even Inhale the Air Around the Manchineel Tree, Atlas Obscura, May 19, 2016, https://www.atlasobscura.com/articles/whatever-you-do-do-not-eat-touch-or-even-inhale-the-air-around-the-manchineel-tree

Pitts, J.F et al, Manchineel Keratoconjunctivitis, British Journal of Ophthalmology, May 1993, https://ift.tt/2VVH7lp

McLendon, Russell, Why Manchineel Might be Earth’s Most Dangerous Tree, Tree Hugger, May 14, 2020, https://www.treehugger.com/why-manchineel-might-be-earths-most-dangerous-tree-4868796

Strickland, Nicola, Eating a Manchineel “Beach Apple,” British Medical Journal, August 12, 2000, https://ift.tt/3vu9HcB




from Today I Found Out
by Gilles Messier - June 14, 2021 at 08:51PM