Curio Cabinet / Person, Place, or Thing
US History
".... .- .--. .--. -.-- / -... .. .-. - .... -.. .- -.--" That's Morse code for happy birthday! The inventor of the electric telegraph and Morse code, Samuel Morse, was born on this day in 1791. Tinkering and inventing were just two of Morse’s varied interests, but the story of how he invented the telegram involved both genius and tragedy.
Morse was born in Charlestown, Massachusetts, and attended Yale as a young man. As a student, he had a passing interest in electricity, but his real passion was for painting. He especially enjoyed painting miniature portraits, much to the chagrin of his parents, who wanted him to start working as a bookseller's apprentice. Despite the pressure, he remained steadfast in his pursuit of the arts, and eventually made his way to London to train properly as a painter. His painting The Dying Hercules became a critical success, and by 1815 he returned to the U.S. to open his own studio. A decade later, in 1825, Morse was struck by a sudden and unexpected tragedy. While he was away from home working on a portrait of the Marquis de Lafayette, a pivotal figure in the American Revolution, Morse received word that his wife had fallen critically ill. He rushed home, but he was too late—his wife had passed away days before his arrival. Morse was understandably distraught, and the tragedy marked the beginning of his renewed interest in electricity. Specifically, he believed that the key to instant communication lay in electromagnetism.
Although Morse's name would come to be forever associated with the telegraph, he was by no means the first to invent it. In 1833, Germans Carl Friedrich Gauss and Wilhelm Weber built an early working electromagnetic telegraph. Meanwhile, William Cooke and Charles Wheatstone in England were working on a telegraph system of their own, but their version was limited in range. Morse himself only managed to create a prototype by 1834, yet by 1838—and with the help of machinist Alfred Vail—he was able to create a telegraph system that could relay a message up to two miles. The message sent during this demonstration, "A patient waiter is no loser," was transmitted in the newly developed Morse code, which Morse devised with Vail's help. Morse secured a patent for his telegraph in 1840, and in 1844, a line connecting Baltimore, Maryland, to Washington, D.C., was established. Famously, the first message sent on this line was "What hath God wrought!" Although Morse's original code was adequate for communicating in English, it wasn't particularly accommodating of other languages. So, in 1851, a number of European countries worked together to develop a simpler variant called International Morse Code. This version was eventually adopted in the U.S. as well and remains the more widely used of the two.
Both the telegraph and Morse code remained the quickest way to communicate over long distances for many years, until the advent of the radio and other mass communication technologies rendered them obsolete during the 20th century. The death blow to the telegraph—and subsequently, Morse code—as the dominant form of long-distance communication came after WWII, when aging telegraph lines became too great an expense to justify in the age of radio. Morse code still had its uses with radio as the medium instead of the telegraph, but its heyday was long over. Today, telegraphs and Morse code have been relegated to niche uses, but it's undeniable that they helped shape the age of instant communication that we currently occupy. Word travels fast, but Morse made it faster.
[Image description: A photo of Samuel Morse, a man with white hair and a beard, wearing a uniform with many medals.] Credit & copyright: The Metropolitan Museum of Art, Samuel F. B. Morse, Attributed to Mathew B. Brady, ca. 1870. Gilman Collection, Gift of The Howard Gilman Foundation, 2005. Public Domain.
Political Science
Tariffs, duties, customs—no matter what you call them, they can be a volatile tool capable of protecting weak industries or bleeding an economy dry. As much as tariffs have been in the news lately, they can be difficult to understand. Luckily, one can always turn to history to see how they’ve been used in the past. In the early days of the U.S., tariffs helped domestic industries stay competitive. However, they can easily turn harmful if they’re implemented without due consideration.
A tariff is a tax applied to goods that are imported or exported, though export tariffs are rarely used nowadays. They were sometimes used to keep limited resources from leaving the country, but when the word "tariff" comes up, it almost always means a tax applied to imports. Tariffs are paid by the entity importing the goods, and they can be either "ad valorem" or "specific" tariffs. Ad valorem tariffs are calculated as a percentage of the value of the goods being taxed, while specific tariffs are fixed amounts per unit, regardless of the total value. It's easy to think that tariffs are categorically detrimental, but there's sometimes a good reason for them. Making certain types of imported goods more expensive can help domestic producers of those goods stay competitive, since their prices no longer have to match those of cheaper imports. On the other hand, poorly conceived tariffs can end up raising the prices of goods across the board, putting economic pressure on consumers without helping domestic industries. Tariffs used to be much more common around the world, but as international trade grew throughout the 20th century, they became less and less so. In the U.S., at least, many factors led to tariffs' decline.
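To make the "ad valorem versus specific" distinction concrete, here is a small illustrative sketch (the shipment size, rate, and per-unit amount are hypothetical numbers, not figures from the article):

def ad_valorem_duty(declared_value: float, rate: float) -> float:
    # Duty charged as a percentage of the shipment's declared value.
    return declared_value * rate

def specific_duty(quantity: float, amount_per_unit: float) -> float:
    # Fixed duty per unit, regardless of the shipment's value.
    return quantity * amount_per_unit

# Hypothetical shipment: 1,000 units declared at $50,000 total.
print(ad_valorem_duty(50_000, rate=0.10))           # 10% of value -> 5000.0
print(specific_duty(1_000, amount_per_unit=2.00))   # $2 per unit  -> 2000.0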
In 1789, the Tariff Act was one of the first major pieces of legislation passed by Congress, and it created a massive source of revenue for the fledgling nation. Tariffs helped domestic industries gain their footing by leveling the playing field against their better-established foreign competitors, particularly in Britain. By the beginning of the Civil War, tariffs accounted for around 90 percent of the U.S. government's revenue. As Americans took up arms against each other, however, there was a sudden, dire need for other sources of government funding. Other taxes were introduced, and tariffs became less significant. Still, even immediately after the war, tariffs accounted for around 50 percent of the nation's revenue. During the Great Depression, tariffs caused more problems than they solved. The Smoot-Hawley Tariff Act of 1930 was intended to bolster domestic industries, but it provoked retaliatory tariffs abroad, making it harder for those same industries to export their goods and hindering their overall business. By the start of World War II, the U.S. government simply could not rely on tariffs as a significant source of revenue any longer. Social Security, the New Deal, and rapidly growing military expenditures, among other things, created mountains of expenses far too large for tariffs to cover. Thus, tariffs became less popular and less relevant over the decades.
Today, tariffs are typically used as negotiation tools between countries engaged in trade. Generally, tariffs are applied on specific industries or goods. For example, tariffs on steel have been used a number of times in recent history to aid American producers. However, the tariffs making the news as of late are unusual. Instead of targeting specific industries, tariffs are being applied across the board against entire countries, even on goods from established trade partners like Canada and Mexico. Only time will tell how this will impact U.S. consumers and U.S. industries. It’ll be historic no matter what…but that doesn’t always mean it will be smooth sailing.
[Image description: A U.S. flag with a wooden pole.] Credit & copyright: Crefollet, Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication.
US History
You could say that he was a man of many words. While England is the motherland of English, every Anglophone country has its own unique twist on the language. American English wasn't always held in the highest esteem, but Noah Webster helped formalize the American vernacular and helped it stand on its own with his American Dictionary of the English Language, published this month in 1828.
Webster was born on October 16, 1758, in West Hartford, Connecticut, to a family of modest means. His father was a farmer and weaver, while his mother was a homemaker. Though most working-class people didn't attend college at the time, Webster's parents encouraged his studies as a young man, and he began attending Yale at the age of 16. During this time, he briefly served in the local militia and even met George Washington on one occasion as the Revolutionary War raged on. Unfortunately, Webster's financial hardships kept him from pursuing law, his original passion. Instead, he chose to become a teacher. In this role, he began to see the shortcomings of existing textbooks on the English language, all of which came from Britain. These books not only failed to reflect the language as spoken by Americans, but also included a pledge of allegiance to King George. As a grammarian, educator, and proud American, Webster believed that American students should be taught American English, which he dubbed "Federal English."
Thus, Webster set out to formalize the English spoken by Americans. He published the first of his seminal works, A Grammatical Institute of the English Language, in 1783. Also called the "American Spelling Book" or the "Blue-Backed Speller" for the color of its binding, the book codified the spelling of English words as written by Americans. When writing the book, Webster followed three rules: he divided each word into syllables, described how each word was pronounced, and gave the proper way to spell each word. Webster also simplified the spelling of many words, but not all of his spellings caught on. For instance, he wanted Americans to spell "tongue" as "tung." Still, he continued his efforts to simplify spelling in A Compendious Dictionary of the English Language, published in 1806. The book contained around 37,000 words, and many of the spellings within are still used today. For example, "colour" was simplified to "color," "musick" became "music," and many words that ended in "-re" were changed to end in "-er." The book even added words not included in British textbooks or dictionaries. Webster's magnum opus, American Dictionary of the English Language, greatly expanded on his first dictionary by including over 65,000 words. The dictionary was a comprehensive reflection of Webster's own views on American English and its usage, and was largely defined by its "Americanisms," which included nonliterary words and technical words from the arts and sciences. It reflected Webster's belief that spoken language should shape the English language in both the definition of words and their pronunciation.
Today, Webster is remembered through the continuously revised editions of the dictionary that bears his name, and his views on language continue to influence lexicographers and linguists. In a way, Webster was his own sort of revolutionary rebel. Instead of fighting with muskets on the battlefield, he fought for his country's identity with books in classrooms, going against the grain both culturally and academically. Who knew grammar and spelling could be part of a war effort?
[Image description: A portrait of a white-haired man with a white shirt and black jacket sitting in a green chair.] Credit & copyright: National Portrait Gallery, Smithsonian Institution; gift of William A. Ellis. Portrait of Noah Webster, James Herring, 12 Jan 1794 - 8 Oct 1867. Public Domain, CC0.
US History
Sometimes, you’re in the right place at the wrong time. The Pony Express was a mail delivery service that defied the perils of the wilderness to connect the Eastern and Western sides of the U.S. The riders who traversed the dangerous trail earned themselves a lasting reputation, but the famed service wasn’t destined to last.
Prior to the establishment of the Pony Express on April 3, 1860, there was only one reliable way for someone on the East Coast to send letters or parcels to the West: steamships. These ships traveled by sea from the East Coast down to Panama, where their cargo was unloaded and carried overland to the Pacific side of the isthmus. There, the mail was loaded onto yet another ship and taken up to San Francisco, where it could finally be split up and sent off to various addresses. The only other option was for a ship to travel around the southern tip of South America, which could be treacherous. In addition to the inherent risks of going by sea, these routes were costly and time-consuming. Mail delivery by ship took months, costing the U.S. government more money than it earned in postage. Going directly overland from East to West was also prohibitively dangerous due to a lack of established trails and challenging terrain. Various officials proposed some type of overland mail delivery system using horses, but for years none came to fruition.
Although the early history and conception of the Pony Express is disputed, most historians credit William H. Russell with the concept. He was one of the owners of Russell, Majors and Waddell, a freight, mail, and passenger transportation company. Russell and his partners later founded the Central Overland California & Pike’s Peak Express Company to serve as the parent company to the Pony Express. Simply put, the Pony Express was an express mail delivery service that used a system of relay stations to switch out riders and horses as needed. This wasn’t a unique concept by itself, as similar systems were already in use, but the Pony Express was set apart by its speed and the distance it covered. Operating out of St. Joseph, Missouri, the company guaranteed delivery of mail to and from San Francisco in 10 days. To accomplish this, riders carried up to 20 pounds of mail on horseback and rode California mustangs (feral horses trained to accept riders) 10 to 15 miles at a time between relay stations. Using this system, riders were able to cover over 1,900 miles in the promised 10 days. Riders traveled in any weather through all types of terrain on poorly established trails, both day and night.
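As a rough back-of-the-envelope check on those figures (an illustrative sketch using the distances quoted above, with the leg length assumed to be the midpoint of the 10-to-15-mile range rather than a historical record):

route_miles = 1_900      # approximate length of the route
promised_days = 10       # the guaranteed delivery window
leg_miles = 12.5         # assumed average distance between relay stations

miles_per_day = route_miles / promised_days   # 190 miles every 24 hours
average_mph = miles_per_day / 24              # roughly 7.9 mph, day and night
relay_legs = route_miles / leg_miles          # roughly 152 changes of horse

print(f"{miles_per_day:.0f} mi/day, {average_mph:.1f} mph, ~{relay_legs:.0f} relay legs")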
Bridging the nearly 2,000-mile gap between U.S. coasts was no easy feat, and the Pony Express quickly established itself as a reliable service. However, just 18 months after operations began, the Pony Express became largely obsolete thanks to the establishment of a telegraph line connecting New York City and San Francisco. In October of 1861, the company stopped accepting new mail, and its last shipment was delivered in November. Despite its short-lived success, the Pony Express holds a near-mythical place in American popular history. Its riders were seen as adventurers who braved the elements in untamed wilderness, and they are considered daring symbols of the Old West. It might not have been as reliable as the modern postal service, but it was a lot easier to romanticize.
[Image description: A stone and concrete pillar-style monument near the site of Rockwell's Station along the Pony Express route in Utah.] Credit & copyright: Beneathtimp, Wikimedia Commons.
US History
Rags to riches is an understatement. Madam C.J. Walker, the daughter of two former slaves, worked her way up the ladder of a prejudiced society to earn enormous riches as an entrepreneur. Today we're celebrating her birthday with a look back at her remarkable career.
As a young black woman living in St. Louis in the 1890s, Walker didn't start out looking for the "next big idea." She was eking out a living for herself and her daughter as a washerwoman. It wasn't until she found a job as a sales agent with a haircare company that things started taking off. The role was personal for her, as she suffered from scalp rashes and balding. Plus, her brothers worked in the hair business as barbers.
Walker was successful selling other people's hair products, but employment was getting in the way of her dream. Literally: a man who visited her in a dream inspired her to start her own company, selling hair and beauty products geared toward black women. The Madam C.J. Walker Manufacturing Company, of which Walker was the sole shareholder, made its fortune on sales of Madam Walker's Wonderful Hair Grower. Nineteenth-century hygiene standards called for only infrequent hair washing, which led to scalp infections, bacterial growth, lice, and—most commonly—balding. Walker's Hair Grower combatted balding and was backed by Walker's own testimony that it had fixed her hair issues. A marketing strategy focused on black women, a neglected but growing portion of consumers, was a key ingredient for success.
As the business grew, Walker revealed bigger ambitions. “I am not merely satisfied in making money for myself," she said. "I am endeavoring to provide employment for hundreds of women of my race." Her company employed some 40,000 “Walker Agents” to teach women about proper hair care. Walker stepped beyond the boundaries of her business as a social activist and philanthropist. She donated thousands to the NAACP and put her voice behind causes like preserving Frederick Douglass's home and fighting for the rights of black World War I veterans.
It's often claimed that Walker was America's first black female self-made millionaire. But when she passed away in 1919, assessors found that her estate totaled around $600,000. Not that the number matters at all, really; Walker's legacy is priceless. We're guessing the businessmen and women she inspired could more than make up the difference.
US History
This governor has the gift of gab. On this day in 1775, about a month before the American Revolution began in earnest, orator Patrick Henry uttered one of the most famous sentences in American history: “Give me liberty or give me death.” Or did he? There’s still some scholarly debate as to whether Henry actually said those iconic words, but there’s no doubt that his speeches stirred American imaginations.
Born on May 29, 1736, in Hanover County in the Colony of Virginia, Henry had a childhood that put him on a good path toward becoming an orator. His father, a Scottish immigrant, had been educated at King's College, while his mother came from a wealthy local family. Since his family's wealth would pass to Henry's older brother rather than to him, he couldn't afford a life of leisure. He was educated at home by his father, and in his late teens tried to open and run a store with his brother, though it quickly failed. For a time he helped his father-in-law run a tavern in Hanover before beginning at-home studies to become a lawyer. Henry already understood the power of words and the persuasive force of good oration. He'd grown up watching passionate preachers during the religious revival known as the Great Awakening, which helped drive him toward his new profession.
After Henry earned his law license in 1760, his wit and speaking ability made him a quick success. His most important legal victory came in 1763, in the damages phase of a case known as the Parson's Cause. Since tobacco was a major cash crop in Virginia, many Virginian officials received their annual pay in tobacco. When a series of droughts in the 1750s caused tobacco prices to rise from two cents per pound to three times that much, the Virginia legislature stepped in to stabilize things. They passed the Two-Penny Act, which set the price of tobacco used to pay contracts at the usual two cents per pound. Clergy in the Anglican Church, which was sponsored by the British government, didn't want their revenue limited by the Two-Penny Act. So, they appealed to authorities in England, who sided with them, overruling the Two-Penny Act. With the power of England behind him, Reverend James Maury of Hanover County, Virginia, sued his own parish for backpay and won. All that was left was to decide exactly how much backpay Maury was owed. That's where Patrick Henry came in. Arguing on behalf of the parish vestry, Henry gave a passionate speech about what he saw as the greed of church officials and the overreach of Britain. By overturning the Two-Penny Act, he argued, the British government was exerting tyrannical power over the people of Virginia. Though some in the courtroom accused Henry of treason, the jury sided with him, awarding Maury just one penny of backpay. The case made Henry so popular that it gained him 164 new clients within a year.
Now famous for his fiery speeches and resistance to British power, Henry was elected to the Virginia legislature’s lower chamber, the House of Burgesses, in 1765. In 1774, Henry became a delegate to the First Continental Congress. His most famous speech came the following year, at the Second Virginia Convention, where members debated whether to add language to Virginian governing documents stating that the British king could veto colonial legislation. Henry instead proposed amendments about raising an independent militia, since he believed that war with England was imminent. On March 23, he delivered a fiery address saying, “Gentlemen may cry, Peace, Peace but there is no peace. The war is actually begun!” After arguing in favor of his amendments in more detail, he ended with the famous line, “I know not what course others may take; but as for me, give me liberty or give me death!”
In truth, we’ll never know if Henry actually uttered that famous sentence. His speech was never transcribed during his lifetime, but was pieced together from recollections of those present more than 10 years after his death. Regardless, we do know that Henry went on to become Virginia’s first governor in 1776, after the United States declared independence from England, and that he served until 1779. He was elected again in 1785, and served for two years. Though Henry is best remembered for a single speech, he made plenty of others, won plenty of legal cases, and served his newly-formed state and country in both peace and wartime. No one can say that he was all talk!
[Image description: A black-and-white illustration of Patrick Henry delivering his famous speech to other men at the Virginia Assembly. He has one hand raised, as do many of the audience members. On the ground is a paper reading “Proceedings of the Virginia Assembly.”] Credit & copyright: Published by Currier & Ives, c. 1876. Library of Congress. Public Domain.
Science
If a comet is named after you, does that make you a star among the stars? German astronomer Caroline Herschel, born on this day in 1750, would probably say so. The 35P/Herschel–Rigollet comet bears her name, and she discovered plenty of other comets throughout her long career, which was mostly spent working alongside her brother, William Herschel. That’s not to say that she toiled in her sibling’s shadow, though. Herschel made a name for herself as the first woman in England to hold a government position and the first known woman in the world to receive a salary as a scientist.
Born on March 16, 1750, in Hanover, Germany, Herschel got off to a rough start in life. She was the eighth child and fourth daughter in her family, but two of her sisters died in childhood, and the eldest married and left home when Herschel was just five years old, leaving her to eventually become the family's main housekeeper. At just 10 years old, Herschel contracted typhus and nearly died. The infection blinded her in her left eye and severely stunted her growth, leaving her with an adult height of four feet, three inches. Though her father wanted her to be educated, her mother insisted that, since she likely wouldn't marry due to her disabilities, she should be trained as a housekeeping servant. Though her father educated her as best he could, Herschel ultimately learned little more than basic reading, arithmetic, and some sewing throughout her teenage years.
Things changed in Herschel’s early 20s, when she received an invitation from her two brothers, William and Alexander, to join them in Bath, England. William was becoming a fairly successful singer, and the brothers proposed that Herschel sing with him during some performances. While learning to sing in Bath, Herschel was finally able to become educated in other subjects too. After a few years of running William’s household and participating in his music career, she offered him support when his interests turned from music to astronomy.
Soon, William was making his own telescope lenses, which proved to be more powerful than conventional ones. In 1781, William discovered the planet Uranus, though he mistook it for a comet. As the siblings worked together, Herschel began scanning the sky each night for interesting objects, and meticulously recording their positions in a record book, along with any discoveries that she and William made. She also compared their observations with the Messier Catalog, a book of astronomy by French astronomer Charles Messier, which at the time was considered the most comprehensive catalog of astronomical objects. On February 26, 1783, Herschel made her first two independent discoveries when she noticed a nebula that didn't appear in the Messier Catalog and a small galaxy that later came to be known as Messier 110, a satellite of the Andromeda Galaxy.
In 1798, Herschel presented her astronomical catalog to the Royal Society in England, to be used as an update to English astronomer John Flamsteed’s observations. Her catalog was meticulously detailed, and was organized by north polar distance rather than by constellation. Using telescopes built by William, Herschel went on to discover eight comets. As she and William published papers with the Royal Society, both of them began earning a wage for their work. Herschel was paid £50 a year, making her the first known woman to earn a wage as a scientist.
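For context on that organizing scheme: north polar distance is simply the complement of an object's declination (NPD = 90 degrees minus declination), so a catalog sorted by NPD sweeps from the celestial pole outward toward the equator. Here is a tiny illustrative sketch, with placeholder objects rather than Herschel's actual entries:

def north_polar_distance(declination_deg: float) -> float:
    # Angular distance from the north celestial pole.
    return 90.0 - declination_deg

# Hypothetical entries (name, declination in degrees), sorted the way
# Herschel organized her catalog: by north polar distance, not constellation.
objects = [("Object A", 41.3), ("Object B", 89.3), ("Object C", -5.4)]
for name, dec in sorted(objects, key=lambda o: north_polar_distance(o[1])):
    print(name, round(north_polar_distance(dec), 1))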
By 1799, Herschel’s work was so well known that she was independently invited to spend a week with the royal family. Three years later, the Royal Society published the most detailed version of her work yet, though they did so under William’s name. After her brother’s death, Herschel created yet another astronomical catalog, this one for William’s son, John, who had also shown a great interest in astronomy. This catalog eventually became the New General Catalogue, which gave us the NGC numbers by which many astronomical objects are still identified.
Despite her childhood hardships and growing up during a time when women weren't encouraged to practice science, Caroline Herschel made some of the 18th and 19th centuries' most important contributions to astronomy. Her determination to "mind the heavens," as she put it, has impacted centuries of astronomical study. Happy Women's History Month!
[Image description: A black-and-white portrait of Caroline Herschel wearing a bonnet and high-collared, lacy blouse.] Credit & copyright: Portrait by M. F. Tielemann, 1829. From page 114-115 of Agnes Clerke's The Herschels and Modern Astronomy (1895).
US History
Gangway! This Civil War battle didn’t take place on horseback, but on ships. While naval battles usually come to mind in relation to the World Wars, they were also part of the Revolutionary War and the American Civil War. In fact, the Battle of Hampton Roads, which ended on this day in 1862, was the first American battle involving ironclad warships.
Just a few days after the outbreak of the Civil War on April 12, 1861, President Lincoln ordered a blockade of all major ports in states that had seceded from the Union, including those around Norfolk, Virginia. While Union leaders were maintaining the blockade from the Gosport Navy Yard in Portsmouth, Virginia, they got word that a large Confederate force was on its way to claim control of the area. The Union thus burned parts of the navy yard and several of its own warships to prevent them from falling into Confederate hands. Among them was the USS Merrimack, a type of steam-powered warship known as a steam frigate. The ship was also a screw frigate, as it was powered by screw propellers, making it quite agile for its time. When the ship was set ablaze, it only burned to the waterline. The bottom half of the Merrimack, which included its intact steam engines, sank beneath the water at the navy yard. Union troops in the area then retreated, and the Confederacy took over the area.
The Confederacy now controlled the south side of an area called Hampton Roads. This was a roadstead, or place where boats could be safely anchored, positioned where the Elizabeth, Nansemond, and James rivers met before flowing into Chesapeake Bay. Determined to destroy the Union blockade that had cut them off from trade, the Confederates began pulling up remnants of recently burned Union ships, including the Merrimack. Since the blockade included some of the Union's most powerful ships, the Confederacy rebuilt the Merrimack as an ironclad warship, fitting an iron ram onto her prow and rebuilding her formerly wooden upper deck with an iron-covered citadel that could mount ten guns. This new ship was named the CSS Virginia.
Word of the CSS Virginia caused something of a panic amongst Union officers, and they quickly got permission from Congress to begin construction of their own ironclad warship. The vessel was the brainchild of Swedish engineer John Ericsson, and included novel elements like a rotating turret with two large guns, rather than many small ones. They named their ship the USS Monitor.
The Battle of Hampton Roads began on the morning of March 8, 1862, when the CSS Virginia made a run for the Union's blockade. Although several Union ships fired on the advancing Virginia, most of their gunfire bounced off her armor. The Virginia quickly rammed and sank the Cumberland, one of the five main ships in the blockade, though doing so broke off Virginia's iron ram. Virginia then forced the surrender of another Union ship, the Congress, before firing upon it with red-hot cannonballs, lighting it on fire. Already, more than 200 Union troops had been killed, while the Virginia had lost only two crewmen. As night fell and visibility waned, the ship retreated to wait for daylight.
The Union quickly dispatched the Monitor to meet Virginia the next day. When the Confederates headed for the Minnesota, a grounded Union ship, Monitor rushed in to block her path. The two ironclads fired at one another, and continued to do so for most of the day, each finding it difficult to pierce the other’s armor. At one point, Virginia ran aground, but was able to get back into deeper water just in time to avoid being destroyed. At another point, Monitor’s captain, Lieutenant John L. Worden, was temporarily blinded when his ship’s pilot house was struck by a shell. Monitor was thus forced to withdraw, but neither ship was damaged badly enough to be rendered incapable of fighting, so the battle ended inconclusively. Both sides claimed victory, but with the Union blockade still intact, the Confederacy hadn’t gained much ground. Eventually, the Confederacy was forced to destroy their own ship when they abandoned Norfolk, to prevent Virginia from falling into enemy hands. The Monitor sank in late 1862 when she encountered high waves while attempting to make her way to North Carolina. A pretty unimpressive end for such inventive ships.
[Image description: A painting depicting the Battle of Hampton Roads. Soldiers on horses look down a hill over a naval battle with ships on fire.] Credit & copyright: Kurz & Allison Art Publishers, 1889. Library of Congress Prints and Photographs Division Washington, D.C. 20540 USA. Public Domain. -
FREEPhysics PP&T CurioFree1 CQ
Sure, they’re stuck to your fridge, but they also power high-speed trains and even help you pay for things. Magnets have been an endless source of fascination since the dawn of humanity, but they've also played a subtle role in some of the most transformative parts of human history. Without them, we might never have explored the planet or attained the ability to mass-produce things.
Like gravity and electricity, magnetism is a natural force that existed long before humans appeared on Earth. In fact, the planet itself has its own magnetic field. Magnetism is caused by the movement of electrons, tiny particles that constantly spin inside atoms. The electrons inside any given atom always generate magnetic fields, but what we commonly think of as magnetism only occurs in certain materials, like iron, where electrons align in one direction to generate an observable magnetic force. This magnetism will attract or repel other objects in a way that can be seen and felt. While it’s impossible to know who first discovered magnetism in nature, it probably happened in prehistoric times, before the first true city had even been built. At least 2,500 years ago, magnetism was known to people in India and China. By the time of the ancient Roman Empire, magnetism was already being written and theorized about. In fact, the word “magnetism” probably comes from Magnesia, an area in modern-day Turkey where lodestones (magnetized pieces of magnetite) were common.
It’s no exaggeration to say that the invention of the compass changed the course of human history. People had been using stars to navigate for centuries before then, even on the open ocean, but it was a dangerous undertaking, as foul weather could unexpectedly obscure the sky, leaving boats stranded far from land. The first compass was invented in China around 206 B.C.E., but it was originally used for religious purposes, not navigation. Spoon-shaped lodestones were placed on plates that had certain words and characters etched into them, and the lodestones would spin to point at certain ones, supposedly divining the future. As this technology improved, people naturally noticed that spinning lodestones always ended up pointing north. They didn’t realize that this was because the stones were aligning with Earth’s magnetic field, but they did quickly learn that it was useful for navigation. By the 11th century, Chinese soldiers were using compasses for navigational purposes, and over the following two centuries, the practice spread all over the world. Suddenly, people could more easily cross vast oceans, discover new continents, and establish connections with other civilizations—all because of magnets.
In 1600, English physician and natural philosopher William Gilbert published De Magnete, Magneticisque Corporibus, et de Magno Magnete Tellure (On the Magnet and Magnetic Bodies, and on the Great Magnet the Earth). Gilbert theorized that the Earth itself was magnetic with an iron core, and that there was a relationship of some kind between magnetism and electricity. In fact, Gilbert’s use of the New Latin term “electricus” (derived from the Latin word for amber, whose static-attracting properties had long been known) eventually led to our modern word "electricity." The Industrial Revolution of the mid-1700s might have come much later if not for Gilbert's work, since an understanding of electromagnetism—the relationship between magnets and electricity—was key to the creation of motors and generators, which allowed for mass production in factories.
Magnets are integral to almost every part of life in the digital age—in fact, the digital age could never have happened without them. Many computers store information on hard drives, whose spinning metal platters hold data as tiny magnetized regions; the direction of each region’s magnetic field is read as a one or a zero of computer code, which is then turned into data. The average smartphone alone contains between 5 and 14 magnets. There are magnets in the tiny motors that make smartphones vibrate, and magnets are essential for phone speakers: the interaction between a magnet’s field and the electrical current flowing through the speaker’s voice coil is what makes the speaker cone vibrate and produce sound waves. Just as they store information in a hard drive, magnets also store information on the surface of credit and debit cards in the form of magnetic stripes. These stripes contain magnetized patterns that are decoded by card readers any time the card is swiped. Of course, magnets can also be used to stick your grocery list to the fridge…but that seems a little low-tech in comparison.
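To picture how magnetization can stand in for data, here’s a short, purely illustrative sketch in Python. It’s a toy under stated assumptions, not how real hard drives or card stripes actually encode information: the decode_track function and the N/S-to-bit mapping are invented for this example.

# Toy model: treat each magnetized region as a bit, with north-up ("N") = 1
# and south-up ("S") = 0, then group the bits into bytes and read them as text.
def decode_track(magnetization):
    bits = ''.join('1' if region == 'N' else '0' for region in magnetization)
    # Take 8 bits at a time and convert each group to its character code.
    return ''.join(chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits), 8))

track = "SNSSNSSS" + "SNNSNSSN"  # the bit patterns 01001000 and 01101001
print(decode_track(track))       # prints "Hi"

Real drives pack vast numbers of such regions onto each platter and layer error-correcting codes on top, but the underlying idea, physical magnetization standing in for ones and zeroes, is the same.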
[Image description: A red-and-silver horseshoe-shaped magnet] Credit & copyright: Zureks, Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication. -
FREEUS History PP&T CurioFree1 CQ
You may remember the Alamo, but how much do you actually know about it? On this day in 1836, the Battle of the Alamo began. Many Americans (especially Texans) think of the battle as a heroic last stand fought by brave patriots. As with many violent historical conflicts, though, things weren’t quite as simple or morally black-and-white as movies and folk songs might lead us to believe.
The Battle of the Alamo took place during the Texas Revolution, a war for Texan independence from Mexico. While Mexican officials obviously took issue with Texas attempting to break away, they were also upset by Texans’ use of slaves, since Mexico was cracking down on slavery. Texans argued that their economy depended on slavery, as many of their wealthiest residents were cotton planters. At first, as the staunchly abolitionist Mexican government tried to outlaw slavery in its territory, some of the colonists left. By 1835, however, tensions had grown into armed conflict. By the time of the battle, Texans were emboldened by their previous victories against the Mexican army, especially after they’d managed to drive the Mexicans south of the Rio Grande. However, when Mexico retaliated by sending General Antonio López de Santa Anna north with thousands of soldiers, many of the Texan rebels quickly abandoned the ground they had gained. One of the few remaining garrisons was located in a former Spanish mission called the Alamo. The adobe structure was not suitable for defending against an attack, and few rebels remained. Nevertheless, the commanders of the fort, William Travis and James Bowie, stayed behind, hoping that reinforcements would be sent their way. In the weeks leading up to General Santa Anna’s arrival at the fort, the two commanders sent impassioned letters to the Texas legislature asking for reinforcements, to no avail. The Texas government was brand new and not organized enough to mobilize a large fighting force. Even if it had been, it would have been extremely difficult to get enough troops to the Alamo before the Mexican army’s arrival.
Travis and Bowie were both well aware of the approaching army, but refused to flee. Neither of them had much military experience, and with the fort poorly equipped and poorly laid out for a defense, the odds were truly against them. On February 23, 1836, General Santa Anna arrived with his troops, and a 13-day siege began. Among the Texan defenders was former Tennessee congressman Davy Crockett, and the families of the garrisoned soldiers were also inside the fort. Some popular, modern imaginings of the battle, such as 1960’s The Alamo starring John Wayne, depict it as a desperate last stand. Historical accounts paint a different picture, though. Rather than a fight to the last man, about half of the Texan rebels fled before the battle’s end, and most of them were skewered by Mexican cavalry. Crockett himself surrendered and was executed, while the families of the soldiers were allowed to leave unharmed. The battle also did little to delay General Santa Anna and his troops on their way to their larger mission: capturing San Antonio. He had promised the Mexican government that he would take the city by March 2, and it was captured on the 6th.
Today, the Battle of the Alamo and the purported courage of the men who died there are an integral part of Texas state history, but that history is often not told in full. Most accounts of the Alamo make little mention of slavery, even though Texans’ desire to keep slaves was one of their main reasons for wanting to break with Mexico. Most modern adaptations of the story also leave out the Tejanos, the settlers of Mexican descent who fought alongside the white Texans. History might seem like a thing of the past, but it’s always relevant to the present.
[Image description: A watercolor drawing of the ruins of the Alamo.] Credit & copyright: Ruins of the Church of the Alamo, San Antonio de Béxar, Edward Everett (1818-1903). Amon Carter Museum of American Art, Fort Worth, Texas, Gift of Anne Burnett Tandy in memory of her father Thomas Lloyd Burnett, 1870-1938. Public Domain. -
FREEWorld History PP&T CurioFree1 CQ
Did Aphrodite smile on you this Valentine’s Day? As the most romantic weekend of the year draws to a close, it seems only right to learn a bit about Aphrodite, the Greek goddess of love. Also known by her Roman name, Venus, Aphrodite is remembered today as a beautiful, romantic figure…yet she wasn’t actually the goddess of romantic love. Rather, her dominion was over physical desire and lust. This made her a surprisingly dangerous figure in Greek mythology, as she was a character driven by jealousy and prone to driving mortals mad. Even the story of her birth is surprisingly violent.
Like all Greek gods and goddesses, Aphrodite has two origins: the historical and the mythical. The worship of Aphrodite might have been introduced to ancient Greece from the Middle East, and, indeed, there are similar figures in other pantheons of antiquity. In particular, she is considered to have many similarities to Ishtar of Mesopotamia and Hathor of ancient Egypt. Regardless of how she came to be introduced to the Greeks, her mythological origins are a little more fantastical, to say the least. Unlike most of the other Greek gods, Aphrodite wasn’t descended from the king of the gods, Zeus, nor any of his siblings. Instead, her birth came about through an act of divine violence against a father. When the titan Cronus overthrew his father, Uranus, he castrated him and threw the severed remains into the sea. There, from the blood and sea foam of Uranus, Aphrodite was born. Due to the circumstances of her birth, she was strongly associated with water and was sometimes worshipped as a sea goddess.
Aphrodite was most commonly depicted as a goddess of beauty, fertility, sexuality, and, of course, love. But the Greeks strongly distinguished between erotic love and romantic love, and Aphrodite was the goddess of the former. Romantic love was seen in a positive light, while erotic love was seen as a sort of madness. With this in mind, Aphrodite’s temperament in mythology makes much more sense. She was frequently depicted as fickle, jealous, and short-tempered. In fact, despite being the goddess of love, she wasn’t particularly loyal to her own husband, Hephaestus. Various stories feature her affairs with Ares, Adonis, and other figures, often ending in humiliation or tragedy. She was also jealous when it came to her son, Eros, who shared his mother’s affinity for love and sexuality. When he fell in love with the mortal Psyche, Aphrodite conspired to break the couple apart by forcing Psyche to complete a set of seemingly impossible trials. Aphrodite’s meddling once resulted in the ruin of an entire kingdom. The goddess, along with Hera and Athena, forced Paris of Troy, a mortal man, to judge which of them was the most beautiful. After Paris chose Aphrodite, she blessed him by having Eros strike Helen, one of the most beautiful mortal women in the world, with one of his enchanted arrows, making her fall in love with Paris. Unfortunately, Helen was already married to Menelaus, the king of Sparta. After Paris and Helen eloped to Paris’s home city of Troy, Menelaus rallied the rest of the Greeks to wage war against it. Following a 10-year conflict, Troy fell to the Greeks, resulting in Paris’s death.
Today, Aphrodite is often depicted as much more benevolent, which makes sense given the lessened distinction between erotic and romantic love, and a more positive view of sexuality. Depictions of Aphrodite also change with evolving beauty standards, reflecting the ideal female form at various points in history. Perhaps the most famous portrayal of the goddess is the painting The Birth of Venus by Sandro Botticelli, which takes inspiration from the story of her birth at sea. If you’ve never seen the painting, don’t worry; it’s much less graphic than the myth it’s based on!
[Image description: The Birth of Venus by Sandro Botticelli (1445–1510), a painting depicting Venus rising naked out of the ocean on a giant shell while flying winged figures blow wind toward her and woman on shore approaches with a blanket.] Credit & copyright: Sandro Botticelli (1445–1510), Uffizi Gallery. Public Domain. -
FREEUS History PP&T CurioFree1 CQ
In the face of tyranny, sometimes it pays to be a Paine in the neck. British-American political writer and propagandist Thomas Paine was born on this day in 1737. Paine is known for being one of the most influential voices during the American Revolution, but he was also a strong supporter of the French Revolution.
Paine was born in Norfolk, England, to a Quaker father and an Anglican mother, and had limited access to education in his early life. While his ability to read, write, and perform basic arithmetic allowed him to work several jobs, he had few opportunities for economic advancement. Paine also seemed to struggle with every trade he attempted. One of his earliest jobs was as an officer of the excise, which involved chasing smugglers to collect excise taxes on tobacco and alcohol. The job paid little, and Paine was dismissed from the position after he published a pamphlet arguing that higher pay for excise officers would reduce corruption. Paine’s fortunes changed in 1774, when he had a chance meeting with Benjamin Franklin. Franklin urged Paine to move to America, and he arrived in Philadelphia, Pennsylvania, on November 30, 1774. There, he cut his teeth working at the Pennsylvania Magazine, owned by printer Robert Aitken. During his tenure working with Aitken, Paine published a number of articles under his own name and under pseudonyms. A steadfast abolitionist, Paine published African Slavery in America, an article that criticized and condemned the African slave trade.
Paine really began to make a name for himself when anti-British sentiment began to grow in the American colonies, along with calls for independence. In January 1776, Paine anonymously published his most famous pamphlet, Common Sense, largely aimed at American colonists who were still undecided on the matter of independence. More than calling for sympathy, the pamphlet encouraged colonists to revolt against the British and to sever ties with the empire completely. Paine’s limited educational background might have actually contributed to his success. While his arguments were coherent and compelling, they appealed to a wider audience because of his tendency to speak and write in a plain, straightforward manner, forgoing the Latin terms and philosophical allusions popular with more educated writers of the time. Upon being published, Common Sense sold 500,000 copies in a matter of months, and it was popular to read it aloud at public gatherings. Another popular pamphlet by Paine, The American Crisis, was published the same year, and it was promoted amongst American officers by George Washington himself. This pamphlet was aimed at bolstering the morale of the colonists as conflicts began to escalate in the American Revolution. It contained the now-famous words, “These are the times that try men's souls: The summer soldier and the sunshine patriot will, in this crisis, shrink from the service of his country; but he that stands it now, deserves the love and thanks of man and woman.”
Despite his staunch support for the American Revolution, Paine had few friends left by the end of it. When he returned to Britain in 1787 and wrote The Rights of Man in favor of the French Revolution, he was tried for treason, forcing him to flee to France, where he was imprisoned, ironically, for opposing the execution of King Louis XVI. Paine was released thanks to the American ambassador to France, but he later openly criticized George Washington for failing to support him when he had claimed American citizenship to avoid prison. He eventually returned to the former American colonies, now called the United States of America, at the invitation of President Thomas Jefferson. Paine died in 1809, and only six mourners attended his funeral. For much of the following century, Paine was remembered as an instigator and poorly regarded. Today, though, he’s remembered as a leading thinker and writer who helped embolden everyday Americans. No Paine, no American Revolution!
[Image description: A painting of Thomas Paine wearing a black suit and white neckcloth sitting in a green chair.] Credit & copyright: National Portrait Gallery, Smithsonian Institution. Laurent Dabos, 1761 - 1835. Public Domain, CC0. -
FREEWorld History PP&T CurioFree1 CQ
Some historical events are like explosions—they happen in an instant. But others, even some of the most impactful, happen like a series of falling dominoes. With events like these, it’s easier to see their true impact in hindsight. This month in 1933, one such event happened in Germany, when Adolf Hitler was appointed Chancellor by President Paul von Hindenburg. The political occurrences leading up to Hitler’s appointment, as well as those directly following it, fell perfectly into place to allow the Nazis to seize control of the country.
The end of WWI saw the end of the German Empire, and its successor, the Weimar Republic, was forced to sign the Treaty of Versailles in June of 1919. The fledgling nation was made to accept responsibility for the war and pay reparations to the other parties involved. Between the reparation payments and the debt accrued during the war, the Weimar Republic was in dire economic straits by the early 1920s. There was also plenty of social unrest in the wake of WWI, which gave rise to political extremism on both ends of the spectrum. On one side was the German Communist Party, which quickly gained popularity, while another emergent group was the National Socialist German Workers’ Party, soon to be referred to as the “Nazis.” At first, the Nazis struggled to gain political ground or capture the public’s attention, though they had the support of their own paramilitary group, the Sturmabteilung, or SA (“storm troopers”), consisting mostly of WWI veterans. Then, in 1923, Adolf Hitler, a rising figure in the Nazi Party, along with WWI general Erich Ludendorff, made a failed attempt to overthrow the government in what would come to be known as the Beer Hall Putsch. Ludendorff was already a renowned war hero popular with many Germans, but the failed coup was Hitler’s first step toward political fame. Using the sudden burst of notoriety as a springboard, Hitler wrote his autobiography, Mein Kampf (My Struggle), by dictation while spending a year in prison.
When the Great Depression hit global markets at the end of the 1920s, the Nazi Party capitalized on the severe economic hardships facing everyday Germans. They blamed an ineffectual government, the communist movement, Jewish financiers, and modernist cultural movements for the decline of Germany. The party promoted the idea that minority groups, including Jews, immigrants, disabled people, and LGBTQ+ people, were draining the nation’s wealth. By 1933, the Nazi Party had the largest single share of votes in parliamentary elections, and they began to throw sand into the government’s gears, stymying any efforts by parliament to pass meaningful legislation. At the same time, they decried the passivity of parliament and the inefficiencies of democracy.
Then, in 1933, in a vain attempt to court Nazi cooperation, President Paul von Hindenburg appointed Hitler chancellor of Germany. Hindenburg hoped that the appointment would lead to the Nazi Party’s cooperation in governance, but no such cooperation emerged. When Hindenburg died the following year, Hitler declared himself Führer (leader) of Germany and began the systematic dismantling of the country’s democratic apparatus. He then cemented his power by attacking or imprisoning his critics and rivals, including the leadership of the Sturmabteilung, which he had begun to consider a liability due to its violent activities in the streets. The systematic purge of Hitler’s enemies, including his former supporters, from June 30 to July 2, 1934, came to be known as the Nacht der langen Messer (Night of the Long Knives), and it was met with widespread support by the greater German populace. Throughout the rest of the 1930s, Hitler and the Nazi Party expanded their military in direct violation of the Treaty of Versailles and began claiming neighboring territories based on the supposed populations of ethnic Germans living there. To avoid conflict, European leaders opted for a policy of appeasement in 1938, allowing Germany to claim Czechoslovakia’s Sudetenland in exchange for a pledge not to seek further territory.
Of course, Nazi Germany didn’t stop at Czechoslovakia, which they invaded the following year. Soon came Poland, and before long, Nazi crosshairs were aimed at the rest of Europe. It took the Nazi Party almost 20 years, but they eventually came to hold absolute power by undermining the principles of democracy and eroding the safeguards that held it in place. While most see the Nazis’ rise to power as a cautionary tale, some modern dictators have used it as a playbook to be copied, making Nazi ideology a threat to this day. -
FREEGames PP&T CurioFree1 CQ
You can’t try to tilt things in your favor when it comes to pinball! Once a popular arcade mainstay, pinball is seeing a resurgence in popularity. But while pinball machines are largely seen as harmless, wholesome fun nowadays, there was a time when they were public enemy number one. Anti-pinball sentiment was so high, in fact, that this month in 1942, New York City banned the game outright.
With their colorful designs and sound effects, it’s hard to imagine pinball machines as symbols of the seedy underground. Yet for much of pinball’s history, that’s exactly how many people saw it. The first coin-operated pinball machine was made in 1931, and throughout the Great Depression, they grew in popularity. Early pinball machines were similar to modern ones, minus one crucial detail: the flippers. Without flippers to fling the ball back up, pinball was almost entirely a game of chance. Proprietors of bars, bowling alleys, and candy shops set up machines in hopes that eager players would sink countless nickels and dimes into them. If their pinball managed to go into a specific hole, players could win a prize, ranging from a piece of candy to expensive jewelry. Adding to the game’s less-than-favorable reputation, the pinball manufacturing industry had ties to organized crime in Chicago, and pinball machines were seen by many parents as a way for gangsters to make money off of kids. In New York City, Mayor Fiorello LaGuardia went on a crusade against the arcade icon, and it reached a fever pitch after the Japanese attack against the U.S. at Pearl Harbor.
After the U.S. joined WWII following the attack, pinball machines were seen as a waste of precious resources, like metal and springs, that could go toward the war effort. Suddenly, anti-pinball sentiment wasn’t just about morality, it was about patriotism. On January 21, 1942, LaGuardia got his wish when the city council voted to make pinball machines illegal in New York City. Several other cities soon followed suit. Passing the law proved much easier than actually enforcing it, though. As enthusiastic as they were, LaGuardia and the police never quite stamped out New York’s pinball scourge. Many business owners were arrested for keeping machines on their premises, and their machines were seized and smashed with sledgehammers in front of the press, but the industry continued to thrive. Even after flippers were introduced in 1947 to make pinball a game of skill, many people still opposed it.
It wasn’t until 1974, when the California Supreme Court ruled against a similar pinball ban, that the crusade started to lose steam. A financially struggling New York City soon came to see pinball as a financial opportunity: operators would be required to pay for a license, raising money for the city. However, proponents of pinball still had to prove that it wasn’t gambling. To that end, the Amusement and Music Operators Association hired Roger Sharpe, one of the top players in the country, to demonstrate to the city council that pinball was a game of skill, not chance. To do this, he stood in front of them and called a shot, pulling the plunger back just enough to make the pinball land exactly where he said it would. Satisfied with the demonstration, the city lifted the ban in 1976.
Though pinball is considered a bit retro today, there are still hundreds of tournaments around the U.S. alone, some with cash prizes reaching up to $1 million. Pinball’s reputation has also had a complete turnaround. Once a sign of rebellious youth and the criminal underworld, pinball is now more likely to be found at a family-friendly arcade than a seedy bar on the wrong side of town. No need to watch your back—just keep your eyes on the ball.
[Image description: A close-up photo of dials and knobs in a pinball machine.] Credit & copyright: Cottonbro studio, Pexels -
FREEBiology PP&T CurioFree1 CQ
You really shouldn’t spray paint at church—especially not on the grave of the world’s most famous biologist. Two climate activists recently made headlines for spray painting a message on Charles Darwin’s grave in London’s Westminster Abbey. They hoped to draw attention to the fact that Earth’s global temperature was more than 1.5 degrees Celsius (2.7 degrees Fahrenheit) higher than pre-industrial levels for the first time in 2024. While there’s no way to know how Darwin would feel about our modern climate crisis, during his lifetime he wasn’t focused on global temperatures. Rather, he wanted to learn how living things adapted to their environments. His theory of natural selection was groundbreaking…though, contrary to popular belief, Darwin was far from the first scientist to notice that organisms changed over time.
Born on February 12, 1809, in Shrewsbury, England, Charles Darwin was already interested in nature and an avid collector of plants and insects by the time he was a teen. Still, he didn’t set out to study the natural world at first. Instead, he apprenticed with his father, a doctor, then enrolled at the University of Edinburgh’s medical school in 1825. Alas, Darwin wasn’t cut out to be a doctor. Not only was he bored by medical lectures, he was deeply (and understandably) upset by medical practices of the time. This was especially true of a surgery he witnessed in which doctors operated on a child without anesthetics—because they hadn’t been invented yet. After leaving medical school, Darwin didn’t have a clear direction in life. He studied taxidermy for a time and later enrolled at Cambridge University to study theology. Yet again, Darwin found himself drawn away from his schooling, finally spurning theology to join the five-year voyage of the HMS Beagle as its naturalist. The Beagle was set to circumnavigate the globe and survey the coastline of South America, among other things, allowing Darwin to travel to remote locations rarely visited by anyone.
During the voyage, Darwin did just what he’d done as a child, collecting specimens of insects, plants, animals, and fossils. He didn’t quite have the same “leave only footprints” mantra as modern scientists, though. In fact, Darwin not only documented the various lifeforms he encountered on his journey, he dined on them too. This was actually a habit dating back to his days at Cambridge, where he was a founding member of the Gourmet Club (also known as the Glutton Club). The goal of the club had been to feast on “birds and beasts which were before unknown to human palate,” and Darwin certainly made good on that motto during his time aboard the Beagle. According to his notes, Darwin ate iguanas, giant tortoises, armadillos, and even a puma, which he said was "remarkably like veal in taste." His most important contribution as a naturalist, though, was his theory of natural selection.
Darwin came up with his most famous idea after observing 13 different species of finches on the Galápagos Islands. Examining their behavior in the wild and studying their anatomy from captured specimens, Darwin found that the finches all had differently shaped beaks for different purposes. Some were better suited for eating seeds, while others ate insects. Despite these differences, Darwin concluded that they were all descended from a common ancestor, sharing many characteristics while developing specialized traits over time. Darwin wasn’t the first person to posit the possibility of evolution, though. 18th-century naturalist Jean-Baptiste Lamarck believed that animals changed their bodies throughout their lives based on their environment, while Darwin’s contemporary Alfred Russel Wallace independently arrived at the same theory of natural selection. In fact, the two published a joint statement and gave a presentation at the Linnean Society in London in 1858. Darwin didn’t actually coin the phrase “survival of the fittest,” either. English philosopher Herbert Spencer came up with it in 1864 while comparing his economic and sociological theories to Darwin’s theory of evolution.
Despite Darwin’s confidence in his theory and praise from his peers in the scientific world, he actually waited 20 years to publish his findings. He was fearful of how his theory would be received by the religious community in England, since it contradicted much of what was written in the Bible. However, despite some public criticism, Darwin was mostly celebrated upon his theory’s publication. When he died in 1882, he was laid to rest in London’s Westminster Abbey, alongside England’s greatest heroes. It seems he didn’t have much to fear if his countrymen were willing to bury him in a church!
[Image description: A black-and-white photograph of Charles Darwin with a white beard.] Credit & copyright: Library of Congress, Prints & Photographs Division, LC-DIG-ggbain-03485, George Grantham Bain Collection. No known restrictions on publication. -
FREEUS History PP&T CurioFree1 CQ
For most people today, winter is either a time for fun activities like sledding, ice skating, and skiing, or a time of inconvenience, when streets are slippery, commutes are longer, and windshields need scraping. But not so long ago, winter was a truly dangerous time for average people, especially if they were traveling. No story illustrates that point quite as well as the tragic tale of the Donner Party, a group of pioneers migrating from the Midwest to California in 1846. Their attempt to survive a brutal winter in the Sierra Nevada is considered one of the darkest chapters from the time of westward expansion in America.
Before the completion of the first Transcontinental Railroad in 1869, traversing the U.S. was a dangerous, harrowing task. Journeys were made largely on foot with provisions and other supplies carried on wagons. There weren’t always well-established roads or reliable maps, making long-distance travel a particularly haphazard endeavor. Nevertheless, the allure of fertile farmland drew thousands to the West Coast, including brothers George and Jacob Donner, as well as James Reed, a successful businessman from Springfield, Illinois. The Donner brothers and Reed formed a party of around 31 people and set off for Independence, Missouri, on April 14, 1846. On May 12, they joined a wagon train (a group of individual parties that traveled together for mutual protection) and headed west toward Fort Laramie, 650 miles away. For the first portion of the trip, they stayed on the Oregon Trail, which ended near present-day Portland, Oregon. The Donners and Reeds, however, were traveling to California, and intended to take the California Trail, which diverged from the Oregon Trail at two points between Fort Bridger and Fort Hall. Yet instead of taking either of those better-established routes, the Donners and Reed opted for what they believed was a shortcut on the advice of a guide they were traveling with named Lansford Hastings. This supposed shortcut, called Hastings Cutoff, was purported to cut 300 miles from the trip, which would have gotten the travelers to their destination weeks earlier than anticipated. Hastings Cutoff was heavily promoted by Hastings in his book, The Emigrants' Guide to Oregon and California, which contained advice and trail maps to the West Coast.
What the Donners and Reed didn’t know was that Hastings had never actually traveled his namesake shortcut himself. Contrary to his assertions, the shortcut actually added 125 miles to the trip. Hastings also didn’t join the Donners and Reeds, who parted ways from him at Fort Bridger. Electing George Donner as their leader, the Donners, the Reeds, and dozens more joined together to tackle Hastings Cutoff. The Donner party reached it on July 31, and initially made good time. But since Hastings Cutoff took them through largely untraveled wilderness, they faced severe delays, preventing them from crossing the Sierra Nevada before winter. On October 31, the party established a camp to survive the winter in the area now known as Donner Pass. By then, Reed had been banished from the party for killing another man during a quarrel; he rode ahead alone while his wife and children remained with the group. As winter set in, the Donner party built cabins for shelter, but they had little in the way of supplies, having lost most of their food during their previous delays. By December, they were trapped by heavy snow, and on the 16th of that month, 15 members of the party set out to find help. Most of the remaining survivors at camp were children.
The aftermath of the disastrous venture made headlines around the country. Only seven of the party members who set out for help survived, and of the original 89 members of the Donner party, 42 starved or froze to death. Sensational claims of cannibalism became the focus of the story after it was discovered that about half of the survivors had consumed the flesh of the dead after depleting their meager supply of food, livestock, dogs, and whatever leather they could boil. Among the dead were the Donner brothers and most of their immediate family. Today, the doomed expedition is memorialized through museum exhibits and in the name of Donner Pass itself, where the party spent that harrowing winter. The next time you curse yourself for taking the wrong exit on a road trip, thank your lucky stars for GPS.
[Image description: Snow falling against a black background.] Credit & copyright: Dillon Kydd, Pexels -
FREEPlay PP&T CurioFree1 CQ
This family business really took off around the globe…by making globes! Snow globes are popular souvenirs and holiday decorations the world over. While these whimsical decorations seem like a simple concept—a diorama inside a glass globe with some water and fake snow thrown in—they have a surprisingly scientific origin.
Erwin Perzy I, an Austrian tradesman and tinkerer, didn’t set out to invent the snow globe. Rather, he was in the business of selling medical instruments to local surgeons. In 1900, many physicians were looking to improve the lighting in their operating rooms, which at the time were often small, dim, and hard to work in. So, Perzy went to work, experimenting with a lightbulb placed near a water-filled glass globe. In order to amplify the brightness, Perzy tried adding different materials to the water to reflect the light. His invention never caught on with surgeons, but it did give Perzy an idea. He was already making miniature pewter replicas of the nearby Mariazell Basilica to sell to tourists and pilgrims who visited the site in droves. The souvenir was already popular, so he decided to bump it up a notch by taking some of the tiny buildings and placing them inside the globes. Filled with water and a proprietary blend of wax to mimic snow, the souvenir was sold as a diorama of the Mariazell Basilica in winter, and it was an instant success.
Some historians have pointed out that snow globes may have existed, at least in some form, before Perzy's invention. During the 1878 Exposition Universelle in Paris, a French glassware company sold domed paperweights containing a model of a man holding an umbrella. The dome was also filled with water and imitation snow, but this version never caught on. Either way, Perzy’s patent for his snow globe was the first of its kind, and by 1905, business was booming.
At first, snow globes remained a regional craze. In 1908, Emperor Franz Joseph of Austria honored Perzy with an award for his novel contributions to toymaking, helping to boost the snow globe’s popularity. For the first decades of the 20th century, snow globes spread steadily across Europe, but sales fell during World War I, World War II, and the intervening period of economic depression. After World War II, business took off again and began to spread to the U.S. By then, Perzy’s son, Erwin Perzy II, was in charge of the family business and made the decision to market snow globes as a Christmas item. The first Christmas snow globe featured a Christmas tree inside, and proved to be a great success. With the post-war baby boom and a rising economy, snow globe sales skyrocketed. Beginning in the 1970s, Erwin Perzy III took over the family business and started selling snow globes to Japan, but by the end of the 1980s, there was a problem. The patent filed by the first Perzy expired, forcing the family to pivot and market their products as the real deal, naming themselves the Original Viennese Snow Globes.
Today, the company is still owned and operated by the Perzy family, and while plenty of other companies sell snow globes, they’re still recognized as the original. In the years since their rebranding, they’ve been commissioned to make custom snow globes for a number of U.S. presidents, and in 2020, they even made one with a model toilet paper roll inside to poke fun at the shortages during the COVID pandemic. In addition to being the original, the company still uses a proprietary blend of wax and plastic for their snow, which they claim floats longer than their competitors’. That’s one way to keep shaking up the industry after all these years.
[Image description: A snowglobe with two figures inside.] Credit & copyright: Merve Sultan, Pexels -
FREEPP&T CurioFree1 CQ
This is one dispute between neighbors that got way out of hand. On this day in 1845, the U.S. Congress approved the annexation of the Republic of Texas, leading to the Mexican-American War. The conflict lasted for two brutal years and claimed the lives of nearly 40,000 soldiers.
Contrary to popular belief, Texas was not actually part of Mexico at the time of its annexation. Rather, it was a breakaway state—a republic of its own that had gained independence from Mexico during the fittingly named Texas Revolution. When the U.S. decided to annex it, the Republic had existed for around 10 years. For most of its existence, the U.S. recognized the Republic of Texas as an independent nation, while Mexico did not. Mexico considered it a rebellious state, and was eager to quash the Republic’s independent economic dealings with other nations. At the same time, they threatened war if the U.S. ever tried to annex the Republic of Texas.
Mexico had plenty of reasons to worry, since the Republic of Texas itself was in favor of being annexed. In 1836, the Republic voted to become part of the U.S., as Texans were eager to procure the protection of the U.S. military and gain a stronger economic standing. However, it wasn’t until 1845 that a joint resolution of annexation, championed by President John Tyler with the help of President-elect James K. Polk, passed both houses of Congress and officially made Texas part of the United States. This increase in U.S. territory followed a trend of westward expansion at the time.
Mexico wasn’t happy, but they didn’t make good on their threat to declare war over the annexation. Rather, they took issue with Texas’ new borders. Mexico believed that the border should only extend as far as the Nueces River, but Texas claimed that their border extended all the way to the Rio Grande and included portions of modern-day New Mexico and Colorado. In November 1845, the U.S. sent Congressman John Slidell to negotiate a purchase agreement with Mexico for the disputed areas of land. At the same time, the U.S. Army began to take up stations within the disputed territory, infuriating Mexican military leaders and leading to open skirmishes between Mexican and U.S. troops. President Polk had run on a platform of westward U.S. expansion, so he wasn’t about to cede any land to Mexico, and Mexico wouldn’t allow it to be purchased. So, Polk urged Congress to declare war on Mexico, which they did on May 13, 1846.
From the start, Mexico faced serious disadvantages. Their armaments were outdated compared to those of U.S. troops, as most Mexican soldiers used surplus British muskets while U.S. soldiers had access to rifles and revolvers. Most difficult for Mexico to overcome were its own severe political divisions. Centralistas, who supported a centralized Mexican government, were bitter rivals of federalists, who wanted a decentralized government structure. These two groups often failed to work together within military ranks, and sometimes even turned their weapons on one another. Even General Antonio López de Santa Anna, Mexico’s most famous military leader, struggled to get his nation’s divided political factions to fight together.
These obstacles quickly proved insurmountable for the Mexican military. After a three-day battle, the U.S. handily captured the major city of Monterrey, Mexico, on September 24, 1846. Not long after, the U.S. advanced into central Mexico and the bloody Battle of Buena Vista ended ambiguously, with both sides claiming victory. However, Mexico never decisively won a single battle in the war, and on September 14, 1847, the U.S. Army captured Mexico City, ending the fighting.
It wasn’t exactly smooth sailing from that point on. The Mexican government had to reform enough to be able to negotiate the war’s ending. This took time, since most of the Mexican government had fled Mexico City in advance of its downfall. It wasn’t until February 2, 1848, that the Treaty of Guadalupe Hidalgo was signed, and the war officially ended. The treaty granted the U.S. all of the formerly contested territory, which eventually became the states of New Mexico, Utah, Arizona, Nevada, Colorado, California, and, of course, Texas. In return, Mexico got $15 million—far less than the U.S. had originally offered to purchase the territory for. It might not have been a great deal to begin with—but Mexico likely ended up wishing they'd taken it.
[Image description: An illustration of soldiers in blue uniforms on horseback, one holding a sword aloft. Other soldiers are on the ground in disarray as others march up a distant hill amid clouds of smoke.] Credit & copyright: Storming of Independence Hill at the Battle of Monterey Kelloggs & Thayer, c. 1850-1900. Library of Congress Prints and Photographs Division Washington, D.C. 20540 USA. Control number: 93507890. Public Domain. -
FREEWorld History PP&T CurioFree1 CQ
Guys, I don’t think that’s Santa! In recent years, a monster-like figure known as Krampus has taken the modern world by storm, popping up in memes and even starring in his own movie. But this folkloric figure is far from a modern invention. In fact, his fame as a Christmas figure began in the 17th century (though his origins stretch back even further, to the 12th century) and he was actually portrayed as Santa’s helper.
The name Krampus is thought to come from the German word for claw, “Krampen.” Krampus certainly does have fearsome claws, along with exaggerated, goat-like features (horns, legs, hooves, and a tail) on a mostly humanoid body with a long tongue and shaggy, black fur. Krampus is also associated with Norse mythology; in one of his earliest iterations, he was thought to be the son of Hel, the goddess of the underworld. Regardless of exactly where he came from, Krampus came to have just one job during Christmas, according to traditions in many European countries: punishing children who misbehaved during the year. Unlike Santa, who merely rewards good children, the Krampus takes punitive measures like beating children with sticks and sometimes even kidnapping them. Santa isn’t unaware of Krampus’s deeds, either. According to folklore, since Santa is a saint, he can’t punish children…which is why Krampus does it for him. Both St. Nicholas and Krampus are said to arrive on Krampusnacht, or Krampus Night (December 5), to dole out each child’s reward or punishment, respectively. The next morning, children are supposed to be either basking in their presents or crying over their injuries from the night before. Compared to that, some coal in the stocking might be preferable.
This bizarre goat-monster probably came to be associated with Christmas because he was already associated with Winter Solstice and the pagan traditions surrounding it. Once Christianity began to spread in once-pagan regions, the two traditions became mingled, creating an unlikely crossover of a Turkish saint and a Norse demon. However, Krampusnacht might have taken more from the pagans than the Christians. Krampusnacht usually involves revelers handing out alcohol and a parade where people dressed like the Krampus run around chasing children. No surprise, then, that since the Krampus started to become intertwined with Christmas, the Catholic Church attempted to abolish the figure several times, to no avail. One particularly large, long-running festival takes place in Lienz, Austria, with a parade called Perchtenlauf, where cowbells ring to signal the arrival of Krampus.
Krampus’s popularity really began to take off in the early 20th century, when the figure was featured on holiday cards that ranged from comical to spooky. At first, Krampus cards were mostly confined to Germany and Austria, but the figure soon spread around Europe and even across the Atlantic. In the U.S., the Krampus has become the go-to figure for those who wish to forego the typical Christmas sentimentality and embrace a more horror-centric and ironic approach to the holidays.
Today, many of the older traditions around the Krampus are still practiced, but the figure is also something of a pop-culture icon. 2015 saw the debut of Krampus, a horror movie that casts the monster as the main antagonist. Other films have followed suit, often incorporating elements from real folklore. Krampus might have also gained traction in the U.S. partly as a novel way to protest the increasing commercialization of Christmas. But that might have been in vain, since merchandise featuring Krampus is becoming ever more popular. How long until we get a Christmas carol about the guy?
[Image description: Krampus, a furry, black monster with horns and a long tongue, puts a child in a sack while another child kneels by a bowl of fruit.] Credit & copyright: c. 1900, Wikimedia Commons. This work is in the public domain in its country of origin and other countries and areas where the copyright term is the author's life plus 100 years or fewer. -
FREEWorld History PP&T CurioFree1 CQ
It’s really not as scary as it sounds. The Black Forest region of Germany is known for its picturesque landscape and traditional crafts. During the holiday season, German Christmas markets (or Christkindlmarkts) around the world are filled with hand-carved wooden toys and figurines from the region, and Black Forest ham is a beloved culinary delight throughout the year. However, there’s more to this historic, wooded area than just toys and food. The people living there have proudly retained distinct cultural practices that make the region unique.
Located in the southwestern state of Baden-Württemberg, the Black Forest is called Schwarzwald in German, though it went by other names in the past. The ancient Romans once associated the area with Abnoba Mons, a mountain range named after a Celtic deity. The earliest written record of the Black Forest also comes from the Romans, in the form of the Tabula Peutingeriana, a medieval copy of a Roman map that detailed the empire’s public road system. In it, the Black Forest is called Silva Marciana, which means “border forest,” in reference to the Marcomanni ("border people") who lived near Roman settlements in the area. The Black Forest today consists of 2,320 square miles of heavily forested land that stretches around 100 miles long and up to 25 miles wide. It contains the sources of both the Danube and Neckar rivers, and the area was historically known for its rich pastureland. Of course, the true stars of the Black Forest are the trees that define the region. The forests of Schwarzwald are mainly known for their oak, beech, and fir trees, the latter of which gives the region its name. Unsurprisingly, lumber production was historically a large part of the Black Forest’s economy, along with mining.
The Black Forest’s history of woodworking and woodcraft goes back centuries. Arguably the most famous craft to come out of the forest is the cuckoo clock, which was invented sometime in the 17th century. As their name implies, cuckoo clocks typically feature a small, carved bird that emerges from above the clock face to mark the arrival of each hour with a call or song. More elaborate clocks sometimes have a set of dancers that circle in and out of a balcony in time to the sound. Most cuckoo clocks are carved out of wood to resemble houses, cabins, beer halls, or other traditional structures, often surrounded by a scene of domestic or village life. While many modern cuckoo clocks use an electronic movement to keep time, mechanical versions using weights and pendulums are still being made. The weights that power the movement are often made to resemble pine cones, and users need only pull down on them periodically to keep the clock ticking. There is a limitless variety of cuckoo clock designs, and there are still traditional craftsmen making them by hand. The Black Forest is also known for carved wooden figurines and sculptures, many of which served as children’s toys. Wood carving as an industry first gained traction in the 19th century, when drought and famine forced locals to seek alternative sources of income, but it is now a cherished part of the region’s culture.
Today, the Black Forest is still home to many woodworkers. The region is also a popular destination for outdoor enthusiasts, thanks to its many hiking trails and immense natural beauty. Towns in and around the Black Forest feature traditional, pastoral architecture and growing art scenes, where artists take inspiration from local traditions and landscapes. All those clocks, and they still manage to stay timeless.
[Image description: A section of the northern Black Forest with thin pine trees.] Credit & copyright: Leonhard Lenz, Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication.