Curio Cabinet / Person, Place, or Thing
Travel PP&T Curio
It may be the smallest country in the European Union, but it has one of the most interesting histories. The archipelago nation of Malta has been continuously inhabited for over 7,000 years. In that time, it has seen many factions come and go. Its geography and strategic location have made it a coveted settlement since ancient times, but they also present unique challenges for its modern-day residents.
The story of Malta begins in prehistory, with the first humans who settled the islands of Malta and Gozo (the two largest islands in the Maltese archipelago) around 5000 B.C.E. Early island inhabitants developed their own religion and built temples out of limestone between 3600 and 2500 B.C.E., predating the Great Pyramids of Giza and Stonehenge. By the time the Phoenicians arrived in 800 B.C.E., however, the temples’ builders were long gone. The Phoenicians were the first to realize the utility of the islands’ location and used Malta as a port to resupply their ships. Since Malta is located just south of Sicily and roughly equidistant from much of the North African shore, it was a valuable stepping stone for the early seafarers until they were ousted by the Carthaginians. The Carthaginians in turn were expelled by the Romans in 218 B.C.E., after which the islands were developed rapidly. The Romans were succeeded by the Byzantines at the end of the 4th century C.E., who ruled the islands until the Arabs arrived in the 9th century. Normans arrived in the 11th century and were replaced by the Crown of Aragon in the 13th century, which held the islands until Charles V handed them over to the Knights of the Order of Saint John of Jerusalem (A.K.A. the Knights Hospitaller), who came to be known as the Knights of Malta. When Napoleon captured Malta in 1798, residents resisted and sought aid from Great Britain, which ousted the French and took control of the islands. Malta became part of the British Empire in 1814 and remained so until it gained independence in 1964.
Malta became a republic 10 years after its independence, remained a part of the Commonwealth, and joined the E.U. in 2004. It is currently the smallest member of the E.U. by size, with a total area of only 122 square miles. Despite its size, Malta has a relatively high population of around 569,900 residents, making it a densely packed archipelago at around 4,900 residents per square mile. Much of the country’s economy is based on its scenic beaches, but it has struggled to grow due to its small size, limited natural resources, and a shrinking population. Where Malta truly sets itself apart is in its unique culture, shaped by the various groups that have occupied the islands throughout its history. The Maltese language is derived from a mix of Arabic and a Sicilian dialect of Italian, while much of the population speaks English as well. Malta’s modern-day culture is similarly a fusion of Arabic and Italian sensibilities, though the population is mostly Roman Catholic.
Today, Malta is facing an alarming crisis. As a small archipelago, the country struggles with water scarcity. Much of the potable water for its residents is produced by desalination. This energy-intensive process turns seawater into drinkable freshwater, but it’s expensive and harmful to the environment due to carbon emissions and brine discharge. People come from all over the world to visit Malta’s beaches, and high tourism numbers only add to the country’s water problem. Limiting tourism isn’t a quick fix either, since Malta’s economy is heavily dependent on it. On the bright side, these challenges have made Malta a pioneer in water recycling and conservation, even if its problems are far from solved. In the future, the rest of the world might very well be in need of Maltese advice when it comes to water.
[Image description: A photo of the Maltese countryside with some fields, white buildings, and part of a stone wall visible.] Credit & copyright: Syced, Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication.
Art Appreciation PP&T Curio
Sew beautiful and sew practical. Quilts don’t just make for warm blankets—they can tell a story. Used as a way to pass the time, explore geometric patterns, preserve mementos, or illustrate life events, quilting can be a painstaking process. This often-overlooked art form has existed for centuries all over the world, and every culture has its own unique take on the craft—including the U.S.
A quilt, in its simplest form, is three layers of fabric sewn together to create a blanket or other textile product. Since quilters must sew across the surface of the fabrics to connect the three pieces as one, the result is a pattern formed by the stitch lines themselves. Owing to its broad definition, it’s hard to pinpoint where, exactly, quilting got its start. One of the oldest depictions of quilts dates back to the first Egyptian dynasty, which existed around 5,000 years ago. It’s not a sample of the quilt itself that survived but an ivory carving depicting a pharaoh clad in a quilted mantle. In the U.S., the earliest quilters were English and Dutch settlers. The quilts they made weren’t just for their beds, but were also hung over windows to insulate their homes. Quilts usually had to be made with whatever materials were available, so quilters frequently reused or repurposed old blankets and scraps of fabric. These early American quilts were meant to be purely utilitarian with little to no artistic intent. That changed, though, as commercially produced fabrics became more commonplace and affordable. What was once done out of necessity became an outlet of creative expression, with early American quilters sewing intricate patterns by hand.
This uniquely American style of quilting came not just from European settlers, but from enslaved Black women. These women were often skilled sewists who made bedding for their enslavers but weren’t always provided bedding of their own. So, they used leftover material that would have otherwise been thrown away to make quilts. For most enslaved women, quilting was one of the only ways they could express a sense of identity and community through a tangible medium, and as quilts passed down through the generations, they also served as a record of their families’ histories. Harriet Powers, a woman born into slavery in 1837, is remembered as one of America’s finest folk artists. Her unique quilts utilized applique to tell biblical stories, and she was even able to exhibit some of her work after the Civil War.
Since quilting was almost exclusively done by women, it turned into a kind of matrilineal tradition for both free and enslaved Americans. The knowledge of how to quilt was passed on from mother to daughter and quilts were treasured family heirlooms. Quilting was also a way for women to form bonds within their respective communities. Even after the advent of sewing machines, most quilts were sewn by hand, which was a time-consuming process. As such, women would often come together in “quilting bees,” where they would sew a blanket or other form of quilt together for a member of the community.
In the U.S., quilting became less common from the 20th century onward, as commercially-produced bedding and other textile goods became easier to find, even in rural places. Throughout the 1970s and 1980s, there was a resurgence of interest in quilting, and it’s mostly done today as a hobby. Quilts are also the subject of newfound academic interest, as they're now rightfully seen as important works of folk art. Though they weren’t exactly intended to be, quilts have become the lasting, historical legacies of the women who made them. What they once sewed together to keep their families warm are now historical artifacts that preserve their voices and serve as proof of their struggles. History isn’t always written; sometimes it’s sewn.
[Image description: A portion of a quilt from the Metropolitan Museum of Art. It features colorful, eight-pointed stars against a cream-colored background. There is an orange border with cream-colored diamonds.] Credit & copyright: Star of Lemoyne Quilt, Rebecca Davis, c. 1846. Gift of Mrs. Andrew Galbraith Carey, 1980. The Metropolitan Museum of Art. Public Domain.
Music Appreciation PP&T Curio
The Prince of Darkness has left the mortal plane. Ozzy Osbourne was a celebrity of contradictions. Beloved and reviled, the British musician helped metal grow into its own distinct genre of rock. He was equally groundbreaking in the world of television, as he and his family became some of the world’s first reality TV stars. Though Ozzy’s life was rife with controversial stories, his work raising money for Parkinson’s research and his clear love for his family endeared him to millions, even outside the world of music.
Born in Birmingham, England, on December 3, 1948, Osbourne dropped out of school at the age of 15. He spent a few years working odd jobs and even had a stint in jail at 17 before he entered the local music scene. Eventually, in the late 1960s, Osbourne formed a blues band named Polka Tulk with his friends Terry “Geezer” Butler, Tony Iommi, and Bill Ward. They soon renamed themselves Earth, but due to another band sharing the same name, they changed theirs to Black Sabbath, after a 1963 horror anthology film. By then, they had also evolved from their blues roots to the genre that they would pioneer. Their early music consisted of aggressive vocals, heavy drumbeats and, of course, the sound of a distorted guitar. This new sound, which would come to be called heavy metal, also took inspiration from occult and fantasy imagery for shock value. Heavy metal enraged conservative critics, but the music began developing a dedicated fan base. Rock changed forever on Friday the 13th, 1970, when Black Sabbath released their eponymous debut album. Though critics dismissed it, the album sold over a million copies, launching Black Sabbath to international fame. They went on to sell over 75 million albums throughout their decades-long career, though Osbourne wasn’t there for all of it. He left the band in the late 1970s to pursue a solo career, returning to Black Sabbath at different points throughout the ’80s and ’90s.
Osbourne had been a controversial figure as Black Sabbath’s frontman, and things were no different when he went solo. One of his most infamous incidents occurred in 1982, when a fan threw a dead bat on stage while he was performing in Des Moines, Iowa. Osbourne claimed in his autobiography that he believed it to be a rubber bat and, living up to his wild image, he bit its head off. Had it happened to anyone else, the public might have believed that it was a mistake, but Osbourne had long cultivated a darkly outrageous persona on stage. To this day, the incident is hotly debated, with some saying that Osbourne knew it was a real, dead bat, or that it might have even been alive. The confusion was aided by Osbourne himself, who told several different versions of the tale.
Even offstage, Osbourne attracted controversy. Part of his reason for splitting from the band he’d helped build was that his drinking and drug use had spiraled out of control, in his bandmates’ opinions. Then there was the controversy about his dark, foreboding music itself, and the occult imagery associated with his persona as the Prince of Darkness. Osbourne was sued in 1986 by the parents of a young man who had committed suicide while listening to Blizzard of Ozz, and again in 1988 for the same reason by another set of parents. Both suits were dismissed, but the controversy still shaped some portion of Osbourne’s public perception.
In the 1990s, Osbourne created a rock festival called Ozzfest, which eventually became a festival tour, featuring a lineup of heavy metal bands. Despite his ongoing role in the music world, though, younger Osbourne fans might be more familiar with him as a star of the reality TV series, The Osbournes. The show, which aired for four seasons starting in 2002, was one of the first reality TV shows to focus on day-to-day details of a family’s life. It also allowed the public to see a softer side of Osbourne as he navigated fatherhood and his issues with substances.
Weeks before he passed away on July 22, Osbourne and Black Sabbath held a 10-hour farewell charity concert in his hometown of Birmingham, England. They raised a record-breaking $190 million during the event through livestream tickets. Most of the proceeds went to Birmingham Children's Hospital, Acorn Children's Hospice, and Cure Parkinson's (Osbourne himself had been diagnosed with Parkinson’s disease in 2019). There’s no doubt that Ozzy knew how to go out on a high note.
[Image description: Ozzy Osbourne’s star on the Hollywood Walk of Fame.] Credit & copyright: Elmar78, Wikimedia Commons. This work has been released into the public domain by its author, Elmar78 at German Wikipedia. This applies worldwide.
US History PP&T Curio
What happens when you have to abandon ship, but doing so isn't a viable option? The USS Indianapolis was sunk this month in 1945 after completing a crucial mission, and the aftermath of the attack was arguably worse than the attack itself. Of the 1,195 men on board, only 316 survived what became one of the most harrowing events in U.S. naval history.
The USS Indianapolis was a Portland-class heavy cruiser so impressive that it once carried President Franklin D. Roosevelt during his visit to South America in 1936. During the tail end of WWII, however, it carried other high-stakes cargo in the form of critical internal components for the nuclear bombs that would be dropped on Japan. After completing a top-secret delivery mission, the ship, under the command of Captain Charles B. McVay, was on its way to Leyte Gulf in the Philippines on July 30, 1945, when it was intercepted by a Japanese submarine. The USS Indianapolis was immediately hit with two torpedoes and began to sink. Around 330 crew members perished in the immediate explosion. The rest ended up in the ocean with life jackets and a few life rafts. The Japanese didn’t target the men in the water, but they didn’t have to. Stranded in shark-infested waters with no supplies for five days, most of the men succumbed to the elements or to shark attacks. By the time help arrived, only 316 remained alive, and the sinking became the greatest single loss of life at sea in U.S. naval history.
Captain McVay was among the few who survived, and for his alleged failure to take proper action, he was court-martialed and found guilty of negligence the following year. The prosecution argued that he had failed to use a zigzagging maneuver to avoid enemy torpedoes, which supposedly would have saved the ship and the lives of the men on board. There was doubt about the veracity of that claim, however, even from many of the survivors. Over the years, McVay’s defenders have pointed out that he had requested a destroyer escort, but that his request was denied. Then there was the fact that U.S. naval intelligence was aware of Japanese submarines in the area the cruiser was sailing through but purposely didn’t warn the ship in advance, presumably to hide the fact that the Japanese codes had already been broken. Perhaps the most significant piece of evidence in McVay’s favor was the testimony of the commanding officer of the Japanese submarine that sank the cruiser. According to Commander Mochitsura Hashimoto, who appeared at the court-martial to testify in person, the cruiser would not have been saved even if McVay had ordered the zigzag maneuver. Unfortunately, the damage to McVay’s reputation had already been done. He was largely blamed by the public and families of the deceased for the loss of life, and died by suicide in 1968.
Over the decades, McVay’s name and reputation have been largely cleared, thanks to the efforts of the survivors. In 2000, the U.S. Congress passed a joint resolution officially exonerating McVay. Today, only a single survivor of the sinking of the USS Indianapolis remains, but the story of the ship, the horrors endured by its men, and the injustice committed against its commanding officer make for one of the most tragic stories to come out of WWII. In a conflict as large and deadly as a World War, that’s saying a lot.
[Image description: The surface of water with some ripples.] Credit & copyright: MartinThoma, Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication.
US History PP&T Curio
Going to the airport just got a lot less annoying. For decades, travelers have had to endure the inconvenience of taking their shoes on and off at security checkpoints at U.S. airports. To the relief of many, the Transportation Security Administration (TSA) has now announced that it will be scrapping its much-criticized policy and will allow travelers to keep their shoes on through security. Still, it might behoove us not to forget the criminal plot that triggered the shoes-off rule in the first place.
While many equate the TSA’s shoes-off rule with the September 11th terrorist attacks, it's more closely related to a completely different crime. The man responsible was Richard Reid, a British national who became radicalized and left the U.K. in 1998 to receive training from al-Qaeda in Afghanistan. In December of 2001, he arrived in Brussels, Belgium, then traveled to Paris. While in Brussels, Reid purchased a pair of black sneakers, which he plotted to use in an attempted bombing. In Paris, he purchased round-trip tickets to Antigua and Barbuda which included a stop in Miami. By then, he had already rigged his sneakers with homemade explosives by cutting a space for them in the soles. Reid even hid fuses in the shoes’ tongues. Despite his efforts to disguise the device, airport security in Paris grew suspicious of Reid for several reasons. First, he used cash to purchase his tickets, which was unusual given their high price. Secondly, he carried no luggage with him despite his supposed intention to travel overseas. Delayed by airport security, Reid missed his flight and booked another for Miami, which he did manage to board successfully on December 22. During the flight, Reid made several attempts to light the fuse of his homemade “shoe bomb”, but was caught by a passenger who complained about the smell of sulfur emanating from his seat. On the second attempt, he got into an altercation with the same passenger, after which other passengers and flight attendants jumped in to subdue him. Unable to carry out his plan, Reid was tied down and injected with sedatives until authorities could detain him.
Although the attempted plot by the so-called “Shoe Bomber” took place in 2001, it wasn’t until 2006 that the TSA instituted the shoes-off policy. The rule required travelers to take off their shoes, place them in a bin, and pass them through an x-ray machine for screening before they were allowed to put them back on. It was a hassle at the best of times and could lead to slow lines and delays at the worst of times. Now, the Department of Homeland Security (DHS), which oversees the TSA, claims that such screenings are no longer necessary due to more advanced scanners and an increase in the number of officers at security checkpoints.
As annoying as the policy may have been, it still had its supporters. After all, Reid’s homemade explosive contained just ten ounces of explosive material, yet according to the FBI, the explosion would have torn a hole in the fuselage and caused the plane to crash had it been successfully detonated. For most of the world, the consequences of Reid’s actions will be largely forgotten with the repeal of the shoes-off policy. The perpetrator himself, on the other hand, is still serving a life sentence at a maximum-security prison after pleading guilty to eight terrorism-related charges in 2002. Reid also wasn’t the last person to attempt an airline bombing. In 2009, another would-be terrorist failed to detonate an explosive hidden in his underwear, prompting the use of full-body scanners by the TSA shortly thereafter. At least they didn’t make everyone take off their underwear while in line.
[Image description: A pair of men’s vintage black dress shoes.] Credit & copyright: The Metropolitan Museum of Art. Public Domain.
Humanities PP&T Curio
They say that a dog is man’s best friend, but there’s one thing that can get in the way of that friendship like nothing else. For thousands of years, rabies has claimed countless lives, often transmitted to humans via dogs, raccoons, foxes, and other mammals. For most of that time, there was no way to directly prevent the transmission of rabies, until French scientist Louis Pasteur managed to successfully inoculate someone against the disease on this day in 1885.
Rabies has always been a disease without a cure. Even the ancient Sumerians knew about the deadly disease and how it could be transmitted through a bite from an infected animal. It was a common enough problem that the Babylonians had specific regulations on how the owner of a rabid dog was to compensate a victim’s family in the event of a bite. The disease itself is caused by a virus that is shed in the saliva, and it causes the infected animal to behave in an agitated or aggressive manner. Symptoms are similar across species: when humans are infected, they show signs of agitation, hyperactivity, fever, nausea, confusion, and the same excessive salivation seen in other animals. In advanced stages, victims begin hallucinating and having difficulty swallowing. The latter symptom also leads to a fear of water. Rabies is almost always fatal without intervention. Fortunately, post-exposure prophylaxis against rabies now exists, thanks to the efforts of one scientist.
By the time 9-year-old Joseph Meister was bitten 14 times by a rabid dog in the region of Alsace, French chemist Louis Pasteur was already working with rabid dogs. Pasteur had developed a rabies vaccine, which he was administering to dogs and rabbits. Though it showed promise, it had never been tested on human subjects. It had also never been used on a subject who had already been infected. When Joseph’s mother brought the child to Paris to seek treatment from Pasteur, he and his colleagues didn’t want to administer the vaccine due to its untested nature. That might have been the end for young Joseph but for Dr. Jacques-Joseph Grancher’s intervention. Grancher offered to administer the vaccine to the boy himself, and over the course of 10 days, Joseph received 12 doses. Remarkably, Joseph never developed the disease, proving the vaccine’s efficacy as both a preventative and a post-exposure treatment. While credit for developing the vaccine goes to Pasteur, Grancher was also recognized for his part in ending the era of rabies as an automatic death sentence. In 1888, Grancher was given the rank of Grand Officer of the Legion of Honor, the highest French honor given at the time to civilians or military personnel.
The rabies vaccine and post-exposure prophylaxis have greatly improved since Pasteur’s time, and they’re no longer as grueling to receive as they once were. Still, rabies remains a dangerous disease. Luckily, cases are few and far between nowadays, with only around ten fatalities a year in North America thanks to decades of wildlife vaccination efforts. Most cases are spread by infected raccoons, foxes, bats, or skunks, as most pet dogs are vaccinated against rabies. In the rare instance that someone is infected and unable to receive post-exposure prophylaxis quickly, the disease is still almost always fatal. Once symptoms start showing, it’s already too late. In a way, rabies still hasn’t been put to Pasteur.
[Image description: A raccoon poking its head out from underneath a large wooden beam.] Credit & copyright: Poivrier, Wikimedia Commons. The copyright holder of this work has released it into the public domain. This applies worldwide.
World History PP&T Curio
Small islands aren’t immune from big problems—just ask Hong Kong. Though it's now a special administrative region of China and one of the world’s most celebrated big cities, for most of its existence Hong Kong was a quiet fishing village. In the 19th century, it fell under British control after two wars. Then, this month in 1997, it was returned to China after a century and a half of (mostly) British rule and lots of hectic changes.
Located mostly on Hong Kong Island on the southern coast of China, with a small portion of its territory on mainland China, Hong Kong has been under the control of Chinese regimes for much of its history. The island itself has been inhabited by humans since the Stone Age, and it was first incorporated into the Chinese Empire in the third century B.C.E., during the Qin Dynasty. Like most coastal communities in the region, Hong Kong’s early economy was based around fishing, pearl farming, and salt production. For nearly 2,000 years, Hong Kong was a fishing community with a slowly but steadily growing population.
Things changed drastically with the arrival of British traders in search of tea. By the 1800s, Hong Kong had developed into a major free port, but the lucrative nature of the trade placed a target on the region. British traders, dissatisfied with China’s demand for silver in exchange for tea, began trafficking opium into the country. In response to the addiction epidemic caused by the drug, the Chinese government began confiscating and destroying opium shipments. The rising tensions between British traders and the local population eventually devolved into violence when the British government sent military forces to support its traders. The First Opium War, as it came to be known, began in 1839. Then, in 1841, Hong Kong Island fell under British occupation. Over a decade later, the British instigated further hostilities, leading to the Second Opium War in 1856. In 1860, with a British victory, the Chinese government was forced to surrender the Kowloon Peninsula, expanding Hong Kong into the mainland.
In 1898, the British negotiated a further expansion of their territories during the Second Convention of Peking. The New Territories, as they came to be called, reached from Hong Kong’s border on the Kowloon Peninsula to the Shenzhen River and came with a 99-year lease set to end on July 1, 1997. Unfortunately, the pending peaceful transfer was preceded by yet another military occupation, this time by the Japanese. From 1941 to the end of WWII, Hong Kong remained under Japanese control. When the war ended, Hong Kong was returned to the British for the remainder of the lease. As the end of the lease approached, the British and Chinese governments began planning a peaceful handover. At first, there were talks of the British holding on to Hong Kong Island and the Kowloon Peninsula, since the lease technically only pertained to the New Territories on the mainland. This idea was scrapped, however, as it was considered impractical to split the region in two, severing economic and social ties that were so intermingled. Instead, the two governments signed the Sino-British Joint Declaration in 1984, establishing the “one country, two systems” arrangement. Per the declaration, Hong Kong would retain a high degree of autonomy for 50 years, with control over its own economic and social policies and freedoms like free speech, free press, and free assembly.
Today, Hong Kong boasts a population of over 7.5 million and is a center of commerce, manufacturing, and culture in Asia. Remnants of British rule can be found all over in the architecture and the names of locations like Victoria Harbour. English also remains an official language alongside Chinese, and many residents are fluent in both. Old habits (and cultural practices) sometimes just stick.
[Image description: Part of the Hong Kong skyline and harbor on a slightly hazy day.] Credit & copyright: Syced, Wikimedia Commons.
Humanities PP&T Curio
What does the fall of Napoleon have to do with dentures? More than you might think. Napoleon Bonaparte was defeated this month in 1815 at the Battle of Waterloo by a coalition of British, Dutch-Belgian, and Prussian forces, in a clash that ended with a whopping 50,000 casualties. The historic battle was a terrible place to be a soldier, but it was a red-letter day for looters in search of teeth. Before the invention of synthetic materials, most dentures and other dental prostheses were made from actual human teeth and other natural materials.
As incredible as it might seem, the history of dentures and dental prostheses dates back all the way to the ancient Egyptians. Archaeological finds supporting their advanced dental techniques include gold-filled teeth and false teeth found buried with the deceased. Much of what is known about Egyptian dentistry was actually preserved by the ancient Greeks, who learned from them. Greeks, too, used gold to fill cavities, as well as gold wire and wooden teeth to create bridges. Even the Etruscans, an ancient civilization in northern Italy that predates the Romans, were capable of creating dental prostheses. These include some of the earliest examples of dental bridges, made of animal teeth held together with gold. The Romans were no slouches in the dentistry department, either. There is written evidence that ancient Roman dentists were able to replace missing teeth with artificial ones made of bone or ivory, using methods similar to the Etruscans’. Archaeological evidence also shows that they were capable of creating a complete set of dentures in this manner.
Unfortunately for those suffering from missing teeth throughout history, there were few significant advancements in the field of dental prosthesis for centuries after the fall of the Roman Empire. False teeth continued to be made from animal teeth, bones, or ivory, with precious metals as the base to hold them together. Dentistry as a whole wasn’t particularly respected as a profession, so its associated duties often fell to barbers and blacksmiths as supplementary work. Things slowly began to improve in the 1700s. In 1737, Pierre Fauchard, the “father of modern dentistry,” created a set of complete dentures held together with springs. Fauchard was also the first to suggest making false teeth out of porcelain, though he never got around to it himself. Of course, there’s a popular myth that George Washington, who lived around the same time as Fauchard, had wooden dentures, but that’s entirely false. Washington wore dentures made from both human and animal teeth, with ivory and lead for the base. Some believe that the wooden-teeth myth originated with Washington’s affinity for Madeira wine, which stained hairline fractures in the false teeth, giving them the appearance of wood grain.
Until the mid-1800s, human teeth continued to be the standard for dentures, often bought and extracted from those desperate for money or looted from graves or battlefields. When Napoleon’s army was defeated after a bloody battle at Waterloo, survivors, locals, and professional scavengers descended on the piles of corpses and pulled as many teeth as they could to be sold to denture makers (though they often skipped the molars since they were hard to pull and would likely need reshaping).
Luckily, the practice of using human teeth eventually fell out of fashion, partly from legislation regulating the commercial use of human bodies, and partly from the advent of porcelain and celluloid teeth. Also in the 1800s, the newly developed rubber compound called vulcanite replaced the metals and ivory that formed the base of most dentures. Today, dentures are made from advanced materials like acrylic resin that closely mimic the look and function of real teeth. Modern dentures make even those from a few decades ago seem primitive by comparison. One thing’s for sure: high-tech teeth are a lot better than looted ones.
[Image description: A set of dentures partially visible against a blue background.] Credit & copyright: User: Thirunavukkarasye-Raveendran, Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication. -
FREEPlay PP&T CurioFree1 CQ
Looks like we’re in for one mild ride! The carousel, also called a merry-go-round or galloper, isn’t exactly a thrill ride. Yet, as family-friendly and inviting as they are, carousels have a surprisingly violent history. As summer begins and carousels begin popping up at carnivals all over the world, it’s the perfect time to learn a bit about this ubiquitous attraction.
The idea of an amusement ride has been around for millennia in some form or another. An early predecessor of the carousel even existed in the Byzantine Empire. In Constantinople, now Istanbul, there existed a ride that spun riders in baskets attached by poles to a rotating center. Later on, in medieval Europe, a similar concept was used to train knights for mounted battle. “Mounted” riders would sit atop a rotating seat, from which they would use a practice weapon to hit targets. In Turkey, riders would instead throw clay balls filled with perfume at their human opponents, but both versions of this “ride” were less about amusement, and more about training. These contraptions were eventually replaced with real horses and jousting tournaments, which tended to be violent and dangerous. When such tournaments fell out of fashion around the 17th century, the real horses were once again replaced with wooden facsimiles, with knights lancing rings and ribbons instead of other knights to show off their martial prowess. This, in turn, developed into a more accessible form of entertainment, allowing even commoners to enjoy the thrill of simulated combat. Evidence of the carousel’s roots in war games and jousting remains in its name. The word itself possibly comes from the French word “carrousel,” which means “tilting match,” or the Spanish word “carosella,” which means “little match.”
By the 18th century, the carousel began to evolve into something that more closely resembled the versions that exist today. The combat-oriented elements were abandoned, with riders solely focused on enjoying themselves. In place of horses, seats hanging from chains on poles spun riders around at increasingly dizzying speeds, sometimes flinging the hapless amusement-seekers outward. This version of the carousel was often called the “flying-horses,” despite its lack of horses, and despite its risks, it was a popular ride at fairgrounds in England and parts of Europe. Meanwhile, in the U.S. and other parts of the world, rotating rides featuring wooden horses as seats came and went in various forms.
Finally, in 1861, the first iteration of the modern carousel arrived when inventor Thomas Bradshaw created the first steam-powered carousel. Throughout the 1800s, steam-powered carousels used their waste steam to power automatic organs, which is why many modern carousels still play organ music to this day. Another innovator in the esteemed field of carousel design was English inventor Frederick Savage, who came up with the idea of having the horses move up and down as they rotated, further simulating the feeling of riding a horse. He also toyed with other, less equestrian themes, using boats and velocipedes instead of horses.
Today, carousels are nearly unrecognizable when compared to their medieval counterparts. They feature elaborate ornamentation and whimsical themes, and are powered by electric motors. While carousels evolved from war games, they’re now largely considered a gentle ride for children, and their horses (or other animals) are made of fiberglass and other modern materials, not wood. Though they may have lost their dangerous edge over the centuries and frequently stray from their equestrian theming, carousels aren’t going anywhere. With so many traveling carnivals, these rides really get around as they spin around.
[Image description: A carousel featuring horses and dragons under the words “Welsh Galloping Horses.”] Credit & copyright: Jongleur100, Wikimedia Commons. The copyright holder of this work has released it into the public domain. This applies worldwide. -
FREEArchitecture PP&T CurioFree1 CQ
These buildings are certainly imposing…perhaps even brutally so! Brutalism is undoubtedly one of the most divisive architectural styles ever created. Most people either love it or hate it. Regardless of aesthetic opinion, though, the style has an interesting history, and its name doesn’t actually mean what one might assume.
Brutalism is an architectural style that focuses on plainness, showcasing bare building materials like concrete, steel, and glass without paint or other ornamentation. Brutalist buildings often feature large blocks of concrete and simple, geometric shapes that give them something of a “building-block” look. It’s a common misconception that the term “brutalism” derives from the word “brutal,” as in cruel, due to the style’s imposing look. Rather, the term comes from the French phrase béton brut, meaning “raw concrete.” In the 1950s and 60s, when brutalism first became popular, raw concrete was usually hidden rather than showcased in architecture, which made the new style stand out.
Brutalism’s popularity began in Europe, not long after the end of World War II. It was then that Swiss-French architectural designer Charles-Édouard Jeanneret, better known as Le Corbusier, designed the 18-story Unité d'Habitation in Marseille, France. The structure is now thought of as one of the first examples of brutalism, with its exposed concrete and geometric design. Le Corbusier didn’t actually label any of his work as brutalism, but he was a painter and great lover of modernist art, and translated many elements of the style into his architectural designs. Far from the grim reputation that brutalism is sometimes associated with today, Le Corbusier saw his architecture as part of a utopian future, in which simple form and minimalism would be parts of everyday, modern living. These ideas were particularly attractive in Europe after the devastation of World War II, and architects in Britain began to emulate the style.
There is some debate around who first coined the term “brutalism.” Many historians believe that it was Swedish architect Hans Asplund, who used the word in 1949 when describing a square, brick house in Uppsala, Sweden. Reyner Banham, a British architectural critic, undoubtedly popularized the name when he penned his 1955 essay, The New Brutalism. Once the term took off, a modernist philosophy similar to Le Corbusier’s began to be associated with brutalist design, and suddenly brutalism was an architectural movement, rather than just a style. Brutalist architects sought to move away from ornate, nostalgic, pre-war designs and into a new, modernized European age in which technology would help people live more equitable lives. Brutalist buildings began popping up in office complexes, on college campuses, and even in neighborhoods across Europe, Canada, Australia, and the U.S.
As ambitious as the brutalist philosophy was, the style was not to last. By the 1970s, brutalism had declined dramatically in popularity. Some complained about the aesthetics of the style, since brutalist buildings can be seen as imposing and, at worst, intimidating. Raw concrete is also prone to weathering and staining, so many brutalist buildings from the 50s were showing plenty of wear and tear by the 70s. Because brutalism was a style used for many public buildings, most of which were in cities, some people came to associate the style with crime in densely-populated areas, especially in the U.S. and Britain. Though plenty of brutalist architecture still exists today, much of it has been demolished, and new brutalist works are rarely made. Still, it’s remembered as one of the most unique architectural styles of the modern world. It took a lot of work for architecture to look so simple!
[Image description: A concrete, brutalist building. It is the Natural Resources Canada CanmetENERGY's building in the Bells Corners Complex in Haanel Drive, Ottawa.] Credit & copyright: CanmetCoop, Wikimedia Commons. The copyright holder of this work has released it into the public domain. This applies worldwide. -
FREEWorld History PP&T CurioFree1 CQ
It seems we’re living in the “era” of the Regency era. With the recent announcement of another season of Bridgerton and the show’s elegant fashion inspiring modern trends, the Regency era (or at least a fictionalized version of it) is on a lot of people’s minds. The real Regency era came at the tail end of the Georgian era and was a brief but impactful period of British history. As romanticized as it is, though, the era had its dark side, too.
The Regency era took place roughly from the late 1700s to the 1830s. However, the regency for which it is named actually lasted from 1811 to 1820, during which the future King George IV ruled as regent in place of his father, George III. George III suffered from poor health and frequent bouts of insanity during his 60 years on the throne. Though the precise nature of his illness remains a mystery, we know that by 1810 he was unable to function in his official capacity as king. Still, George III was never deposed. Instead, Parliament passed the Regency Act in 1811, giving his son all the authority of the king on paper while the elder George lived out his final years largely under the care of his wife, Queen Charlotte. As Prince Regent, George IV was king in all but name until he officially acceded in 1820 after the death of his father. Politically, the Regency era was a relatively peaceful time, though neither George III nor George IV was particularly popular as a head of state.
Beyond politics, the Regency era was defined by changing cultural and artistic sensibilities. While the Georgian era was defined by elaborate fashions and ornamental artwork designed to convey a sense of opulence and excess, the Regency was influenced by the more democratic, egalitarian ideals espoused by the French Revolution. Thus, luxurious fabrics such as silks were replaced by more modest muslins, and the fashionable outfits of the time didn't serve to delineate between the social classes. Wide skirts and small waistlines gave way to more natural silhouettes, and clothes became more practical in general. Aesthetically, Regency styles were greatly inspired by Greek, Roman, and Egyptian art of antiquity. Neoclassicism had already been popular during the Georgian era, but the aesthetic sensibilities of the Regency era pursued a more authentic interpretation of ancient styles than ever before. The architecture of the time was similarly influenced by cultures of the past, favoring elegance, symmetry, and open designs. Europeans began to import Japanese and Chinese goods and art styles, and furniture was crafted to be less ornate in shape but more lavishly decorated with veneers of exotic woods. Indeed, as the Regency era progressed, elaborate designs in general became popular again, leading into the extravagance of the Victorian era.
Today, the Regency era is often overshadowed by the much longer-lasting Victorian era, but it nevertheless has its devotees. The aesthetics of the time have been greatly romanticized, and its influences can be seen in popular culture. Notably, Regency romance is a popular genre on paper and on the screen, as can be seen in shows like Bridgerton. The somewhat unusual political arrangement that the period is named after remains largely forgotten, and few remember the regent himself fondly. Ironically, George IV's shortcomings as a ruler may have helped spur on the cultural changes that the era is known for. As both regent and king, George IV preferred to spend his time patronizing artists and architects, shaping the nation's art and culture while eschewing politics for the most part. He might not have been a great head of state, but you could say he was the king of taste.
[Image description: A painting of a Regency-era woman in a white dress, smiling with her arms crossed as her dog, a beagle, looks up at her.] Credit & copyright: Lady Maria Conyngham (died 1843) by Sir Thomas Lawrence, ca. 1824–25. The Metropolitan Museum of Art, Gift of Jessie Woolworth Donahue, 1955. Public Domain. -
FREEUS History PP&T CurioFree1 CQ
Jumping jackalopes! Of all the hoaxes that anyone ever tried to pull off, the myth of the jackalope might be the most harmless. In fact, it proved surprisingly helpful. More than just a jackrabbit with a pair of antlers, the jackalope is the continuation of a surprisingly old myth and has even played a part in the development of a life-saving medical advancement.
On the surface, the jackalope is a fairly simple mythical beast. Most accounts describe it as looking like a black-tailed jackrabbit with a pair of antlers like a deer. Jackrabbits, despite their name, are actually hares, not rabbits. They have long, wide ears that stretch out from their heads and longer, leaner looking bodies than their rabbit cousins. None of them have antlers, though, and none of them have the same kinds of legends attached to them that jackalopes do.
Jackalopes aren’t exactly grand mythical beasts. They don’t guard treasure, as dragons do. They aren’t immortal, like phoenixes. They don’t lure humans to their deaths, like sirens, or perplex them with riddles, like sphinxes. Mostly, jackalopes just like to bother people for fun. Some claim that jackalopes will harmonize with cowboys singing by a campfire, and will only mate during lightning storms. It might be bad luck to hunt jackalopes, but some stories posit that the beasts can be easily tricked. Someone wishing to trap a jackalope can supposedly lure one by setting out a bowl of whiskey. Once a jackalope is drunk, it will be filled with bravado and believe that it can catch bullets with its teeth. But jackalopes are also able to catch hunters unawares thanks to their extraordinary vocal talents. Aside from their singing abilities, jackalopes can supposedly throw their voices and mimic different sounds, even the ringtone of a hunter's phone.
Jackalopes are far from the only rabbit or hare tricksters in folklore, nor are they the only ones to have horns. In fact, stories of horned rabbits date back centuries. Europeans even officially recognized the supposed Lepus cornutus as a real species of horned hare, though it never really existed. The jackalope, however, is a purely American invention, cooked up by brothers Douglas and Ralph Herrick in 1932. According to their story, first revealed in Ralph's obituary in 2003, the brothers taxidermied a jackrabbit and attached horns to it themselves. They sold the taxidermied piece to a local bar for $10, and eventually started producing them en masse. While the Herrick brothers might have given rise to the popularity of the modern jackalope myth, the preexisting accounts of horned rabbits and hares might have been inspired by something less playful. Rabbits and hares around the world are vulnerable to a virus called Shope papillomavirus, named after Richard Shope, who discovered it in—coincidentally—1932. The virus is similar to the human papillomavirus (HPV), but while HPV is best known for causing cancer, Shope papillomavirus causes keratinized growths that can resemble horns to grow out of the skin. These growths can eventually get large enough to hinder the animal's health, and if they grow around the mouth, they can affect its ability to eat.
One animal's tragedy, it seems, can be another's treasure. The Shope papillomavirus was the first virus found to lead to cancer in a mammal, and this discovery led to advancements in human cancer research. In the 1970s, German virologist Harald zur Hausen proved that HPV was the main culprit for cervical cancer. Later, in the 1980s, Isabelle Giri published the complete genomic sequence of the Shope papillomavirus, which turned out to be similar to HPV. All these findings, of course, eventually led to the development of the HPV vaccine, which immunizes people against most strains of HPV responsible for causing cancer. Those are some leaps and bounds that even a jackalope would struggle to make.
[Image description: An illustration showing a squirrel, two rabbits, and a jackalope inside an oval. The jackalope sits in the center.] Credit & copyright: National Gallery of Art, Joris Hoefnagel, (Flemish, 1542 - 1600). Gift of Mrs. Lessing J. Rosenwald. Public Domain. -
FREEUS History PP&T CurioFree1 CQ
It was mayhem on the Mississippi. The Siege of Vicksburg, which began on this day in 1863, was one of the most significant battles of the American Civil War. Ending just a day after the Battle of Gettysburg, the siege secured Union control over the Mississippi River, a critical lifeline for the South. Moreover, the battle played a major role in turning the tide against the Confederacy by eroding morale.
The battle of Vicksburg was all about control of the Mississippi River. Led by General Ulysses S. Grant, Union forces set their sights on the town of Vicksburg on the river’s east bank, which lay about halfway between Memphis and New Orleans. Taking control of Vicksburg would separate the Southern states on each side of the river. Conquering the Confederate stronghold was easier said than done, however. Following the Confederates' loss of key forts in neighboring Tennessee, Vicksburg was the last fortified position from which the South could maintain control over the Mississippi. Knowing this, Confederate Lieutenant General John C. Pemberton, who was in charge of a garrison of around 33,000 men in Vicksburg, began preparing for an impending attack. A Union assault using ironclad ships on the river failed to yield results, while Union General William Tecumseh Sherman's approach by land was repelled by Confederate bombardments. At one point, Grant even tried to dig a canal to circumvent the city's defenses, to no avail.
Eventually, Grant's persistence prevailed. Union forces were able to find footing at Bruinsburg, and after stepping ashore from the Mississippi, they marched toward the state's capital of Jackson. Grant took Jackson by May 14 before continuing toward Vicksburg, fighting Confederate forces along the way. On May 18, Grant and his troops arrived at a heavily fortified Vicksburg, but finding that the garrison was poorly prepared, he hoped to take the city quickly.
To Grant’s chagrin, a quick and sound victory was not to be. Pemberton was able to mount a stubborn defense, forcing Grant to lay siege to the city after several days of fighting. But Pemberton was at a severe disadvantage; though he was able to thwart an attempt by Union sappers (also known as combat engineers) to breach the fortifications after they used explosives to destroy part of the city’s defenses, his garrison was low on rations and cut off from reinforcements. Despite this, when Grant demanded an unconditional surrender from Pemberton, the latter refused. With neither side willing to back down, the siege continued with day after day of contentious but fruitless fighting. Still, it was clear to Pemberton that his garrison could not last. Grant controlled all roads to Vicksburg, and the garrison was on the verge of starvation. After more than a month and a half of fighting, Grant offered parole to any remaining defenders, allowing them to go home rather than be imprisoned. Thus, the battle ended in a Union victory on July 4. Of the 77,000 Union soldiers and 33,000 Confederate soldiers who fought at Vicksburg, over 1,600 died and thousands more were wounded.
Today, the Siege of Vicksburg is considered one of the death knells of the Confederacy, though it is often overshadowed by the Battle of Gettysburg. While the war continued for another two years, these two battles were a turning point in the trajectory of the conflict which had, until then, favored the Confederacy. After the Union took Vicksburg, Southern forces were unable to maintain their already-waning strength. Morale plummeted, hopes of aid from England were all but gone, and Grant had distinguished himself as a Union commander. Before the Siege of Vicksburg, Grant had been a relatively unknown figure, but his triumph there gave him political momentum that would later place him in the White House. Which would be more frightening, leading a siege or running the country?
[Image description: A black-and-white illustration of Confederate soldiers ready to fire a cannon at the Battle of Vicksburg.] Credit & copyright: A Popular History of the United States, Volume 5, George W. Peters, 1876. Public Domain. -
FREEMusic Appreciation PP&T CurioFree1 CQ
What would life be without a little music? It’s one of the great cornerstones of culture, yet music only exploded as an industry with the advent of mass media in the 20th century. This month in 1959, the National Academy of Recording Arts & Sciences (NARAS), also known as "The Recording Academy," began celebrating musicians, singers, songwriters, and other music industry professionals with the Grammy Awards.
Originally called The Gramophone Awards, the Grammys got their start as black-tie dinners held at the same time in Los Angeles and New York City. The award ceremonies were established to recognize those in the music industry in the same way that the Oscars and the Emmys did for film and television. Compared to those other events, however, The Gramophone Awards were much more formal, and compared to today, they covered relatively few categories: only 28 in total. Modern Grammys cover a whopping 94 categories. Still, many different musical styles were covered by the awards. In fact, the first-ever Record of the Year and Song of the Year awards went to Italian singer-songwriter Domenico Modugno for Nel Blu Dipinto Di Blu (Volare). Meanwhile, Henry Mancini won Album of the Year for The Music from Peter Gunn, and Ella Fitzgerald won awards for Best Vocal Performance, Female and Best Jazz Performance, Individual. Though Frank Sinatra led the pack with the most nominations at six, he only received an award as the art director for the cover of his album, Only the Lonely. With such esteemed musicians and performers recognized during the first Gramophone Awards, the event quickly earned a prestigious status in the entertainment industry.
Over the years, the Grammys have grown in scope, covering more genres and roles within the music industry. Beginning in 1980, the Recording Academy began recognizing Rock as a genre, followed by Rap in 1989. Not all categories are shown during the Grammys’ yearly broadcast due to time constraints, which leads to some awards being fairly overlooked. Some lesser-known Grammys are those concerning musical theater and children's music. At one point, there were 109 categories, but the Academy managed to pare things down to 79 after 2011. This was partly achieved by eliminating gendered categories and getting rid of the differentiation between solo and group acts. Of course, the categories have since slowly increased again to 94 in total. In 1997, NARAS established the Latin Academy of Recording Arts & Sciences (LARAS), which started holding its own awards ceremony in 2000 for records released in Spanish or Portuguese.
Today, the Grammys is as much known for providing a televised spectacle for fans of popular music as it is for its prestige. In contrast to the much more formal gatherings of its early years, the ceremonies and the red carpet leading up to the modern Grammys have become stages for fashion and political statements. Some modern Grammy winners have been recognized multiple times, setting impressive records. These include performers like Beyoncé and Quincy Jones, who have been awarded 35 and 28 Grammys respectively, but there are other, lesser-known record holders too. Hungarian-British classical conductor Georg Solti received 31 Grammys in his lifetime. Then there's Jimmy Sturr, who has won 18 of the 25 Grammys ever awarded for Polka, and Yo-Yo Ma, who has won 19 awards for Classical and World Music. The Grammys might have started off as a small dinner, but it's now a veritable feast for the ears. -
FREEWorld History PP&T CurioFree1 CQ
This week, as the weather continues to warm, we're looking back on some of our favorite springtime curios from years past.
Bilbies, not bunnies! That’s the slogan of those in Australia who support the Easter Bilby, an Aussie alternative to the traditional Easter Bunny. Bilbies are endangered Australian marsupials with some rabbit-like features, such as long ears and strong back legs that make them prolific jumpers. This time of year, Australian shops sell chocolate bilbies and picture books featuring the Easter-themed marsupial. But the Easter Bilby isn’t just a way to Aussie-fy Easter. It helps bring awareness to two related environmental problems down under.
Bilbies are unique creatures, and some of the world’s oldest living mammals. They thrive in arid environments where many other animals have trouble surviving. Unlike rabbits, bilbies are omnivores who survive by eating a combination of plants, seeds, fungi, and insects. It’s no wonder that Australians are proud enough of this native animal to use it as a holiday mascot. As is fitting of such a whimsical character, the Easter Bilby was invented by a child. In 1968, 9-year-old Australian Rose-Marie Dusting wrote a short story called Billy The Aussie Easter Bilby, which she later published as a book. The book was popular enough to raise the general public’s interest in bilbies, and the Easter Bilby began appearing on Easter cards and decorations. The Easter Bilby really took off, though, when chocolate companies got on board and began selling chocolate bilbies right alongside the usual Easter Bunnies. Seeing that the Easter Bilby was quite popular, Australian environmentalists seized the opportunity to educate Australians about the bilby’s endangered status and the environmental problems posed by the nation's feral rabbits.
Bilbies were once found across 70 percent of Australia, but today that figure has shrunk to just 20 percent. Besides simple habitat encroachment, humans harmed bilbies in another big way: by introducing non-native species. Europeans introduced both foxes and domesticated cats to Australia in the 19th century. Today, foxes kill around 300 million native Australian animals every year, while cats kill a whopping 2 billion annually. While it’s obvious how predators like foxes and cats can hunt and kill bilbies, cute, fluffy bunnies pose just as much of a threat. On Christmas Day in 1859, European settler Thomas Austin released 24 rabbits into the Australian wilderness, believing that hunting them would provide good sport for his fellow colonists. He couldn’t have foreseen the devastating consequences of his decision. From his original 24 rabbits, an entire population of non-native, feral rabbits was born, and they’ve been decimating native Australian wildlife ever since. These rabbits gobble up millions of native plants. This not only kills species that directly depend on the plants for food, it also causes soil erosion, since the plants’ roots normally help keep soil compacted. Erosion can change entire landscapes, making them uninhabitable to native species. Unfortunately, rabbits helped drive one of Australia’s two bilby species, the Lesser Bilby, to extinction in the 1950s. Now, fewer than 10,000 Greater Bilbies remain in the wild.
When conservation group Foundation for Rabbit-Free Australia caught wind of the Easter Bilby, they took the opportunity to promote it as an environmentally friendly alternative to the bunny-centric holiday. Their efforts led to more chocolate companies producing chocolate bilbies. Some even began donating their proceeds to help save real bilbies. Companies like Pink Lady and Haigh’s Chocolates have donated tens of thousands of dollars to Australia’s Save the Bilby Fund. Other Easter Bilby products include mugs, keychains, and stuffed toys. Some Australian artists create work featuring the Easter Bilby. Just like the Easter Bunny, the Easter Bilby is usually pictured bringing colorful eggs to children, and frolicking in springtime flowers. If he’s anything like his real-life counterparts, he’d sooner eat troublesome termites than cause any environmental damage. Win-win!
[Image description: A vintage drawing of a bilby with its long ears laid back.] Credit & copyright:
John Gould, Mammals of Australia Vol. I Plate 7, Wikimedia Commons, Public Domain -
FREEUS History PP&T CurioFree1 CQ
".... .- .--. .--. -.-- / -... .. .-. - .... -.. .- -.--" That's Morse code for happy birthday! The inventor of the electric telegraph and Morse code, Samuel Morse, was born on this day in 1791. Tinkering and inventing were just two of Morse’s varied interests, but the story of how he invented the telegram involved both genius and tragedy.
Morse was born in Charlestown, Massachusetts, and attended Yale as a young man. As a student, he had a passing interest in electricity, but his real passion was for painting. He especially enjoyed painting miniature portraits, much to the chagrin of his parents, who wanted him to start working as a bookseller's apprentice. Despite the pressure, he remained steadfast in his pursuit of the arts, and eventually made his way to London to train properly as a painter. His painting The Dying Hercules became a critical success, and by 1815 he returned to the U.S. to open his own studio. In 1825, though, Morse was struck by a sudden and unexpected tragedy. While he was away from home, working on a portrait of the Marquis de Lafayette, a pivotal figure in the American Revolution, Morse received word that his wife had fallen critically ill. He rushed home, but he was too late—his wife had passed away days before his arrival. Morse was understandably distraught, and the tragedy marked the beginning of his renewed interest in electricity. Specifically, he believed that the key to instant communication lay with electromagnetism.
Although Morse’s name would come to be forever associated with the telegraph, he wasn't the first to invent it by any means. In 1833, Germans Carl Friedrich Gauss and Wilhelm Weber built one of the first working electromagnetic telegraphs. Meanwhile, William Cooke and Charles Wheatstone in England were working on a telegraph system of their own, but their version was limited in range. Morse himself only managed to create a prototype by 1834, yet by 1838—and with the help of machinist Alfred Vail—he was able to create a telegraph system that could relay a message up to two miles. The message sent during this demonstration, "A patient waiter is no loser," was sent in the newly developed Morse code, which Morse devised with the help of Vail. Morse applied for a patent for his telegraph in 1840, and in 1844, a line connecting Baltimore, Maryland, to Washington, D.C. was established. Famously, the first message sent on this line was "What hath God wrought!" Although Morse code as created by Morse was adequate for communicating in English, it wasn't particularly accommodating of other languages. So, in 1851, a number of European countries worked together to develop a simpler variant called International Morse Code. This version was eventually adopted in the U.S. as well for its simplicity, and it remains the more widely used today.
Both the telegraph and Morse code remained the quickest way to communicate over long distances for many years, until the advent of the radio and other mass communication devices rendered them obsolete during the 20th century. The death blow to the telegraph—and subsequently, Morse code—as the dominant form of long distance communication came after WWII, when aging telegraph lines became too great an expense to justify in the age of radio. Morse code still had its uses with radio as the medium instead of the telegraph, but its heyday was long over. Today, telegraphs and Morse code have been relegated to niche uses, but it's undeniable that they helped shape the age of instant communication that we currently occupy. Word travels fast, but Morse made it faster.
[Image description: A photo of Samuel Morse, a man with white hair and a beard, wearing a uniform with many medals.] Credit & copyright: The Metropolitan Museum of Art, Samuel F. B. Morse, Attributed to Mathew B. Brady, ca. 1870. Gilman Collection, Gift of The Howard Gilman Foundation, 2005. Public Domain. -
FREEPolitical Science PP&T CurioFree1 CQ
Tariffs, duties, customs—no matter what you call them, they can be a volatile tool capable of protecting weak industries or bleeding an economy dry. As much as tariffs have been in the news lately, they can be difficult to understand. Luckily, one can always turn to history to see how they’ve been used in the past. In the early days of the U.S., tariffs helped domestic industries stay competitive. However, they can easily turn harmful if they’re implemented without due consideration.
A tariff is a tax applied to goods that are imported or exported, though export tariffs are rare nowadays. Tariffs on exports were sometimes used to safeguard limited resources from leaving the country, but when the word "tariff" is used, it almost always means a tax applied to imports. Tariffs are paid by the entity importing the goods, and they can be either “ad valorem” or "specific" tariffs. Ad valorem tariffs are based on a percentage of the value of the goods being taxed, while specific tariffs are fixed amounts per unit, regardless of the total value. It's easy to think that tariffs are categorically detrimental, but there’s sometimes a good reason for them. Making certain types of imported goods more expensive can help domestic producers of those goods stay competitive, since they can then sell their own goods at lower prices than the newly-taxed imports. On the other hand, poorly-conceived tariffs can end up raising the prices of goods across the board, putting economic pressure on consumers without helping domestic industries. Tariffs used to be much more common around the world, but as international trade grew throughout the 20th century, they became less and less so. In the U.S., at least, many factors led to tariffs’ decline.
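To make the two tariff types concrete, here's a minimal sketch with entirely hypothetical numbers (a $50,000 shipment of 1,000 units, a 10 percent ad valorem rate, and a $2.50-per-unit specific rate); none of these figures come from the curio itself.

# Hypothetical illustration of the two tariff types described above.
# All rates and shipment values below are made up for the example.

def ad_valorem_tariff(declared_value: float, rate: float) -> float:
    """Ad valorem tariff: a percentage of the declared value of the goods."""
    return declared_value * rate

def specific_tariff(quantity: int, amount_per_unit: float) -> float:
    """Specific tariff: a fixed amount per unit, regardless of the goods' value."""
    return quantity * amount_per_unit

# A shipment of 1,000 widgets declared at $50,000:
print(ad_valorem_tariff(50_000, 0.10))  # 10% of value -> 5000.0
print(specific_tariff(1_000, 2.50))     # $2.50 per unit -> 2500.0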
In 1789, the Tariff Act was one of the first major pieces of legislation passed by Congress, and it created a massive source of revenue for the fledgling nation. Tariffs helped domestic industries gain their footing by leveling the playing field against their better-established foreign competitors, particularly in Britain. By the beginning of the Civil War, tariffs accounted for around 90 percent of the U.S. government’s revenue. As Americans took up arms against each other, however, there was a sudden, dire need for other sources of government funding. Other taxes were introduced, leading to tariffs becoming less significant. Still, even immediately after the war, tariffs accounted for around 50 percent of the nation's revenue. During the Great Depression, tariffs caused more problems than they solved. The Smoot-Hawley Tariff Act of 1930 was intended to bolster domestic industries, but it also made it less feasible for those industries to export goods, hindering their overall business. By the start of World War II, the U.S. government simply could not rely on tariffs as a significant source of revenue any longer. Social Security, the New Deal, and exponentially growing military expenditures, among other things, created mountains of expenses far too large for tariffs to cover. Thus, tariffs became less popular and less relevant over the decades.
Today, tariffs are typically used as negotiation tools between countries engaged in trade. Generally, tariffs are applied on specific industries or goods. For example, tariffs on steel have been used a number of times in recent history to aid American producers. However, the tariffs making the news as of late are unusual. Instead of targeting specific industries, tariffs are being applied across the board against entire countries, even on goods from established trade partners like Canada and Mexico. Only time will tell how this will impact U.S. consumers and U.S. industries. It’ll be historic no matter what…but that doesn’t always mean it will be smooth sailing.
[Image description: A U.S. flag with a wooden pole.] Credit & copyright: Crefollet, Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication. -
FREEUS History PP&T CurioFree1 CQ
You could say that he was a man of many words. While England is the motherland of English, every Anglophone country has its own unique twist on the language. American English wasn't always held in the highest esteem, but Noah Webster helped formalize the American vernacular and helped it stand on its own with his American Dictionary of the English Language, published this month in 1828.
Webster was born on October 16, 1758, in West Hartford, Connecticut, to a family of modest means. His father was a farmer and weaver, while his mother was a homemaker. Though most working-class people didn't attend college at the time, Webster's parents encouraged his studies as a young man, and he began attending Yale at the age of 16. During this time, he briefly served in the local militia and even met George Washington on one occasion as the Revolutionary War raged on. Unfortunately, Webster's financial hardships kept him from pursuing law, his original passion. Instead, he chose to become a teacher. In this role, he began to see the shortcomings of existing textbooks on the English language, all of which came from Britain. These books not only failed to reflect the language as spoken by Americans, but also included a pledge of allegiance to King George. As a grammarian, educator, and proud American, Webster believed that American students should be taught American English, which he dubbed "Federal English."
Thus, Webster set out to formalize the English spoken by Americans. He published the first of his seminal works, A Grammatical Institute of the English Language, in 1783. Also called the “American Spelling Book” or the “Blue-Backed Speller” for the color of its binding, the book codified the spelling of English words as written by Americans. When writing the book, Webster set and followed three rules—he divided each word into syllables, described how each word was pronounced, and wrote the proper way to spell each word. Webster also simplified the spelling of many words, but not all of his spellings caught on. For instance, he wanted Americans to spell "tongue" as "tung." Still, he continued his efforts to simplify spelling in A Compendious Dictionary of the English Language, published in 1806. The book contained around 37,000 words, and many of the spellings within are still used today. For example, "colour" was simplified to "color," "musick" became "music," and many words that ended in "-re" were changed to end in "-er." The book even added words not included in British textbooks or dictionaries. Webster’s magnum opus, American Dictionary of the English Language, greatly expanded on his first dictionary by including over 65,000 words. The dictionary was a comprehensive reflection of Webster's own views on American English and its usage, and was largely defined by its "Americanisms," which included nonliterary words and technical words from the arts and sciences. It reflected Webster's belief that spoken language should shape the English language in both the definition of words and their pronunciation.
Today, Webster is remembered through the continuously revised editions of the dictionary that bears his name. His views on language continue to influence lexicographers and linguists. In a way, Webster was his own sort of revolutionary rebel. Instead of muskets on the battlefield, he fought for his country's identity with books in classrooms by going against the grain culturally and academically. Who knew grammar and spelling could be part of a war effort?
[Image description: A portrait of a white-haired man with a white shirt and black jacket sitting in a green chair.] Credit & copyright: National Portrait Gallery, Smithsonian Institution; gift of William A. Ellis. Portrait of Noah Webster, James Herring, 12 Jan 1794 - 8 Oct 1867. Public Domain, CC0. -
FREEUS History PP&T CurioFree1 CQ
Sometimes, you’re in the right place at the wrong time. The Pony Express was a mail delivery service that defied the perils of the wilderness to connect the Eastern and Western sides of the U.S. The riders who traversed the dangerous trail earned themselves a lasting reputation, but the famed service wasn’t destined to last.
Prior to the establishment of the Pony Express on April 3, 1860, there was only one reliable way for someone on the East Coast to send letters or parcels to the West: steamships. These ships traveled by sea from the East Coast down to Panama, where their cargo was unloaded and carried overland to the Pacific side of the isthmus. There, the mail was loaded onto yet another ship and taken up to San Francisco, where it could finally be split up and sent off to various addresses. The only other option was for a ship to travel around the southern tip of South America, which could be treacherous. In addition to the inherent risks of going by sea, these routes were costly and time consuming. Mail delivery by ship took months, costing the U.S. government more money than it earned in postage. Going directly on land from East to West was also prohibitively dangerous due to a lack of established trails and challenging terrain. Various officials proposed some type of overland mail delivery system using horses, but for years none came to fruition.
Although the early history and conception of the Pony Express are disputed, most historians credit William H. Russell with the concept. He was one of the owners of Russell, Majors and Waddell, a freight, mail, and passenger transportation company. Russell and his partners later founded the Central Overland California & Pike’s Peak Express Company to serve as the parent company to the Pony Express. Simply put, the Pony Express was an express mail delivery service that used a system of relay stations to switch out riders and horses as needed. This wasn’t a unique concept by itself, as similar systems were already in use, but the Pony Express was set apart by its speed and the distance it covered. Operating out of St. Joseph, Missouri, the company guaranteed delivery of mail to and from San Francisco in 10 days. To accomplish this, riders carried up to 20 pounds of mail on horseback and rode California mustangs (feral horses trained to accept riders) 10 to 15 miles at a time between relay stations. Using this system, riders were able to cover over 1,900 miles in the promised 10 days. Riders traveled in any weather through all types of terrain on poorly established trails, both day and night.
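As a rough back-of-the-envelope check on the figures above (a roughly 1,900-mile route, a 10-day guarantee, and fresh horses every 10 to 15 miles), here's a small sketch; the inputs are the curio's own numbers, and everything computed from them is only an estimate, not a historical record.

# Back-of-the-envelope arithmetic using the figures quoted above.
# The derived numbers are estimates, not documented historical counts.
ROUTE_MILES = 1_900        # approximate length of the route
DELIVERY_DAYS = 10         # the company's guaranteed delivery time
MIN_LEG, MAX_LEG = 10, 15  # miles a rider covered between relay stations

miles_per_day = ROUTE_MILES / DELIVERY_DAYS
fewest_legs = ROUTE_MILES // MAX_LEG  # if every leg were 15 miles
most_legs = ROUTE_MILES // MIN_LEG    # if every leg were 10 miles

print(f"Average pace: about {miles_per_day:.0f} miles per day")             # ~190
print(f"Horse changes along the route: roughly {fewest_legs}-{most_legs}")  # ~126-190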
Bridging the nearly 2,000-mile gap between U.S. coasts was no easy feat, and the Pony Express quickly established itself as a reliable service. However, just 18 months after operations began, the Pony Express became largely obsolete thanks to the establishment of a telegraph line connecting New York City and San Francisco. In October of 1861, the company stopped accepting new mail, and their last shipment was delivered in November. Despite its short-lived success, the Pony Express holds a near-mythical place in American popular history. Its riders were seen as adventurers who braved the elements through untamed wilderness, and they are considered daring symbols of the Old West. It might not be as reliable as the modern postal service, but it was a lot easier to romanticize.
[Image description: A stone and concrete pillar-style monument near the site of Rockwell's Station along the Pony Express route in Utah.] Credit & copyright: Beneathtimp, Wikimedia Commons -
FREEUS History PP&T CurioFree1 CQ
Rags to riches is an understatement. Madam C.J. Walker, the daughter of two former slaves, worked her way up the ladder of a prejudiced society to earn enormous riches as an entrepreneur. Today we're celebrating her birthday with a look back at her remarkable career.
As a young black woman living in St. Louis in the 1890s, Walker didn't start out looking for the "next big idea." She was eking out a living for herself and her daughter as a washerwoman. It wasn't until she found a job as a sales agent with a haircare company that things started taking off. The role was personal for her, as she suffered from scalp rashes and balding. Plus, her brothers worked in the hair business as barbers.
Walker was successful selling other people's hair products, but employment was getting in the way of her dream. Literally: a man who visited her in a dream inspired her to start her own company, selling hair and beauty products geared towards black women. The Madam C.J. Walker Manufacturing Company, of which Walker was the sole shareholder, made its fortune on sales of Madam Walker’s Wonderful Hair Grower. Nineteenth-century hygiene called for only infrequent hair washing, which led to scalp infections, bacteria, lice, and—most commonly—balding. Walker's Hair Grower combatted balding and was backed by Walker's own assurance that she had used it to fix her hair issues. A marketing strategy focused on black women, a neglected but growing portion of consumers, was a key ingredient for success.
As the business grew, Walker revealed bigger ambitions. “I am not merely satisfied in making money for myself," she said. "I am endeavoring to provide employment for hundreds of women of my race." Her company employed some 40,000 “Walker Agents” to teach women about proper hair care. Walker stepped beyond the boundaries of her business as a social activist and philanthropist. She donated thousands to the NAACP and put her voice behind causes like preserving Frederick Douglass's home and fighting for the rights of black World War I veterans.
It's often claimed that Walker was America's first black female self-made millionaire. But when she passed away in 1919, assessors found that her estate totaled around $600,000. Not that the number matters at all, really; Walker's legacy is priceless. We're guessing the businessmen and women she inspired could more than make up the difference.