Curio Cabinet / Daily Curio
-
World History Daily Curio #3152
War makes history, and sometimes unmakes it as well. An enduring and iconic symbol of Greece, the Parthenon has stood for two and a half millennia, though not without bearing its share of scars. The most devastating blow to the ancient wonder arrived this month in 1687, during the Venetian siege of Athens, a city then under Ottoman control.
The Parthenon was originally a temple to Athena, built on the ruins of two other temples dedicated to the same goddess. The first was the Hekatompedon Temple, which was replaced by what is now called the Older Parthenon. After the still-incomplete Older Parthenon was destroyed during the war against the Persians, Athenians completed the new Parthenon over the ruins in 438 B.C.E., and it stood largely intact for centuries. After the Romans adopted Christianity, it became a Christian church, and it remained one until the Ottomans conquered Athens and turned it into a mosque. Both groups modified the Parthenon, with the former adding murals of saints and the latter adding a minaret. The building was already far from its original condition by the time the Venetians laid siege to Athens in 1687, but none of those changes proved as destructive as the Venetian artillery bombardment, which ignited gunpowder the Ottomans had stored inside the temple and reduced much of its marble structure to rubble.
While the Parthenon was still in Ottoman custody in the early 1800s, British ambassador Lord Elgin took it upon himself to preserve what remained of the building’s sculptural features by removing them from the site. He later sold the meticulously detailed frieze and metopes to the British government, setting off a still-ongoing dispute between Britain and Greece regarding the ownership of the “Elgin Marbles.” It wasn’t until Greece gained independence from the Ottoman Empire in 1830 that the Parthenon began to be restored to better resemble its original appearance. Despite its missing pieces, the ancient wonder is still a national symbol of Greece, having endured the ages despite the odds. Athena would be pleased, don’t you think?
[Image description: A painting of the Parthenon with some collapsed columns.] Credit & copyright: The Parthenon, Frederic Edwin Church, 1871. The Metropolitan Museum of Art, Bequest of Maria DeWitt Jesup, from the collection of her husband, Morris K. Jesup, 1914. Public Domain.
-
Engineering Daily Curio #3151
If you’re afraid of needles, there’s no need to flinch—unless you also happen to be afraid of bees. Researchers in South Korea have developed a new type of microneedle that is more comfortable than existing ones, and they did it by taking a look at nature’s flying syringes: bees.
Microneedles are already pretty pain-free compared to standard needles, thanks to the fact that they’re only a few microns thick. Still, they have their limits. These minuscule needles are used when patients require continuously injected medication, but over time, the rigidity of the needles can start to cause pain and discomfort. Since microneedles are primarily used by patients with chronic conditions, the very tool meant to treat them can become a nuisance of its own. To tackle this issue, researchers at Chung-Ang University developed what they call electrospun web microneedles (EW-MNs), inspired by honeybee stingers. Honeybees have barbs on their stingers, and when they sting something, those barbs catch and the stinger gets stuck. That’s why the stinger tears away from the bee’s abdomen as the insect flies away, leaving the bee to perish after delivering its venomous payload. The bee might find some cold comfort in the fact that its stinger remains attached to its victim, pumping what remains of the venom, all thanks to the microscopic barbs holding it in place.
Researchers recreated this mechanism by spinning ultra-fine polymer fibers in an electric field; the resulting web acts as an anchor for the microneedles. Like the barbs on a honeybee’s stinger, the fibers allow the needles to stay in place while delivering medication without as much pain and inflammation as conventional needles would cause. Even when there was inflammation, it subsided quickly after the microneedle patch was removed. The drug tested with the EW-MNs was rivastigmine, which is used to treat Parkinson’s and Alzheimer’s disease, and the researchers found that their bee-inspired microneedles also increased the drug’s absorption. All those poor bees who gave their lives would be rolling in their graves if they knew that their stingers were actually helping people!
[Image description: A honeybee on a purple flower.] Credit & copyright: John Severns (Severnjc), Wikimedia Commons. This work has been released into the public domain by its author, Severnjc at English Wikipedia. This applies worldwide.
-
Mind + Body Daily Curio
If there’s any sandwich that could give the hamburger a run for its money in the fame department, it’s the Sloppy Joe. Messy and comforting, this tangy sandwich has a disputed history. Depending on who you ask, Sloppy Joes originated either in Cuba…or in Iowa.
Rather than sliced meats or patties, Sloppy Joe buns hold a thick sauce made from ground beef mixed with liquid components like ketchup and Worcestershire sauce along with chopped tomatoes, onions, and seasonings like garlic powder. They can be served on hamburger buns, split rolls, or thick toast. Pre-made Sloppy Joe filling is commonly found at grocery stores and the sandwiches are popular to serve at parties, but unlike hamburgers, Sloppy Joes aren’t served at many fast food restaurants, probably because their saucy consistency makes for a messy on-the-go meal.
Like hamburgers, Sloppy Joes couldn’t have grown popular without the rise of refrigeration and industrialization, which made ground beef widely available by the late 19th century. As soon as ground beef became a staple item in American homes, recipes for “loose meat sandwiches” began popping up across the country and grew especially popular in the Midwest. These sandwiches made use of condiments, sauces, and spices that most people already had on hand. It makes sense, then, that one popular story about the origin of Sloppy Joes comes from Sioux City, Iowa, where a cook named Joe supposedly created the sandwich.
The other, more common tale of the Sloppy Joe’s origin begins in Havana, Cuba, where a bar owned by a businessman named José García supposedly earned the nickname “Sloppy Joe” because of its messy atmosphere. The story goes that, after Ernest Hemingway fell in love with the bar’s signature sandwich during a trip to Cuba, he brought it back to the States, where it thrived.
Whether it was born stateside or not, there’s no doubt that Sloppy Joes are strongly associated with American backyard barbecues and family meals today. Just make sure you eat them with a good, solid plate underneath.
[Image description: A sloppy joe sandwich with chips.] Credit & copyright: Tomwsulcer, Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication.
-
Science Daily Curio #3150
If you start feeling queasy in the car, try blasting the stereo. Scientists have found that happy music can alleviate the symptoms of motion sickness, but the specific tunes one chooses to listen to matter a lot. Motion sickness happens when a person’s brain receives conflicting information from the different senses regarding their motion. So, if the eyes see that the environment around them is moving but the inner ears and muscles don’t detect any movement, that conflict can lead to nausea, dizziness, and cold sweats. This means that riding in cars, on boats, or on amusement park rides, and even using VR headsets, can cause motion sickness, which can really cut down on the enjoyment of a car trip or vacation.
To test their theory that music can affect motion sickness, researchers at Southwest University in China used a driving simulator to induce the condition in participants. Participants were also equipped with electroencephalogram (EEG) caps to measure signals associated with motion sickness in the brain. When they started feeling queasy, researchers played different types of music. They found that “joyful” music reduced symptoms of motion sickness by 57.3 percent, while “soft” music reduced them by 56.7 percent. “Passionate” music only alleviated symptoms by 48.3 percent, while “sad” music was as good as nothing, or maybe worse. In fact, the researchers found that sad music might slightly worsen symptoms by triggering negative emotions. Aside from music, there are other, more conventional remedies that also help with motion sickness, like sweet treats, fresh air, and taking a break from whatever is causing the sickness. In cars or other moving vehicles, reading can induce motion sickness, so it might be a good idea to take your eyes off the page or the phone. On your next road trip, maybe an MD should be the DJ.
[Image description: A reflection of trees, clouds, and the sun in a car window.] Credit & copyright: Tomwsulcer, Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication.
-
Biology Daily Curio #3149
Did you know there’s more to being a redhead than hair? The city of Tilburg, Netherlands, is hosting its annual Redhead Days festival, where thousands of fiery manes gather to celebrate what makes them unique. The festival began 20 years ago, after Dutch artist Bart Rouwenhorst took out a newspaper ad asking for 15 redheads to participate in an art project. He was met with ten times that number, which inspired him to make the gathering an annual event that has grown in size ever since. Still, its size will always be somewhat limited considering the rarity of red hair. Even in Scotland and Ireland, where red hair is most common, redheads make up only around ten percent of the population, and most redheads in the world have northwestern European ancestry.
Redheads are rare because the variant of the MC1R gene that causes red hair is recessive. Even so, redheads can still be born to non-redheaded parents. If both parents carry the recessive allele without showing it, each child has a 25 percent chance of being a redhead. If one parent is a carrier and the other is a redhead, that chance rises to 50 percent, and if both parents are redheads, it climbs to 100 percent.
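For anyone who wants to see the arithmetic, the percentages above are just a Punnett square. Below is a minimal Python sketch of that simple one-gene model; the genotype labels and the redhead_probability helper are illustrative assumptions for this example, not something taken from the festival or any study mentioned here.

```python
# Minimal Punnett-square sketch of the one-gene model described above.
# "R" = non-red allele (dominant), "r" = red-hair allele (recessive);
# a child has red hair only with the "rr" combination.
from itertools import product

def redhead_probability(parent1, parent2):
    """Fraction of equally likely allele pairings that yield an 'rr' child."""
    combos = list(product(parent1, parent2))  # the four Punnett-square cells
    return sum(1 for pair in combos if pair == ("r", "r")) / len(combos)

pairings = [
    (("R", "r"), ("R", "r")),  # two carriers       -> 25 percent
    (("R", "r"), ("r", "r")),  # carrier x redhead  -> 50 percent
    (("r", "r"), ("r", "r")),  # two redheads       -> 100 percent
]
for p1, p2 in pairings:
    print("".join(p1), "x", "".join(p2), "->", f"{redhead_probability(p1, p2):.0%}")
```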
Both hair and skin are colored by two types of melanin. One is eumelanin, which darkens skin by providing black and brown pigments. Pheomelanin, on the other hand, is responsible for the pink hues found in skin. Both kinds can be found in hair, and the more pheomelanin there is, the redder the hair appears. Those with plenty of eumelanin and little else have dark brown or black hair, while only a small amount of eumelanin produces blonde hair. A little pheomelanin in otherwise blonde hair creates strawberry blondes, and those who have much more pheomelanin than eumelanin have red hair.
Red hair might look great, but it does come with a few downsides. Those who have red hair are more likely to develop skin cancer, endometriosis, and Parkinson’s disease. They also tend to have a different pain tolerance than the general population, and they may respond differently to anesthetics and pain relievers. You could say they’re more prone to medical red alerts.
[Image description: A painting of a woman with red hair inspecting her hair in a hand mirror.] Credit & copyright: Jo, La Belle Irlandaise, Gustave Courbet. H. O. Havemeyer Collection, Bequest of Mrs. H. O. Havemeyer, 1929. The Metropolitan Museum of Art, Public Domain.
-
Geography Daily Curio #3148
Maps can tell you how to get somewhere, but they’re not always great at showing you how big the places you’re going really are. This is especially true of the most common world maps, which use the Mercator projection and distort the real size of various places. Now, an international campaign based in Africa is hoping to change the way people view the world.
The “Correct The Map” campaign, endorsed by the African Union, is using one of the oldest criticisms of the Mercator projection against it in the hopes of changing people’s perceptions of the continent. Though Africa is home to over 1.4 billion people, those looking at a world map tend to underestimate the continent’s size, population, and global significance due to the undersized portrayal that results from the Mercator projection. Developed by Flemish cartographer Gerardus Mercator in the 16th century, the Mercator projection is a cylindrical projection that flattens the spherical world onto a rectangular grid, with meridians drawn as evenly spaced vertical lines and lines of latitude spaced farther and farther apart toward the poles. The Mercator projection was ideal for nautical navigation in the era before computer-assisted navigation, as it depicts rhumb lines (lines of constant course) as straight lines, making charting easier.
Beyond navigation, however, the Mercator projection has some significant flaws. Because the projection’s scale grows without limit as latitude approaches the poles, landmasses closer to the poles become more and more distorted. As a result, Greenland and other far-northern landmasses appear to be much larger than they actually are. On a Mercator map, Greenland appears to be larger than Africa, when it’s actually only a fraction of Africa’s size. This might be just a curious shortcoming of an otherwise practical projection, but supporters of Correct The Map claim that such visual distortions minimize the cultural and economic influence of Africa. They therefore hope to promote an alternative that more accurately depicts the size of landmasses on a map and encourages a more balanced portrayal. It’s a big, wide world, and it just might call for a big, wide map.
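To put a number on the stretching, here is a minimal Python sketch of the distortion, not part of the campaign or the original curio, using the standard Mercator scale factor; the latitudes are rough illustrative values.

```python
# Minimal sketch of Mercator distortion. The projection maps latitude "lat" to
# y = R * ln(tan(pi/4 + lat/2)); its local scale factor is 1/cos(lat), so apparent
# *areas* are inflated by roughly (1/cos(lat))^2 compared to their true size.
import math

def area_inflation(lat_deg):
    """Approximate factor by which Mercator exaggerates apparent area at a latitude."""
    return 1.0 / math.cos(math.radians(lat_deg)) ** 2

print(f"Near the equator (5 deg):   ~{area_inflation(5):.1f}x true area")
print(f"Central Greenland (72 deg): ~{area_inflation(72):.1f}x true area")
```

Since much of Greenland lies around 70 to 75 degrees north, the projection draws it at roughly ten times its true area while barely inflating equatorial Africa, which is how a landmass about one-fourteenth the size of Africa can end up looking comparable to the whole continent.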
[Image description: A map of the world with green continents and blue oceans, without words or labels.] Credit & copyright: Noleander, Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication.
-
Science Daily Curio #3147
If three bodies are a problem, how bad are four? Astronomers recently discovered a quadruple star system with some unusual inhabitants. It’s one of the first times a system with four stars orbiting each other has been found. While our own solar system has just one star, out in the vast cosmos there are many systems with multiple stars. Most of them are binary systems, where two stars are locked in orbit with each other, engaged in a cosmic dance that will last eons until one of them explodes into a brilliant supernova. More rarely, the two stars combine into a single, more massive star.
Lesser known are triple-star systems, also known as ternary or trinary systems. These consist of three stars locked in orbit, usually with two stars orbiting each other and a third orbiting around them both. Just a cosmic stone’s throw away, the Alpha Centauri system is known to contain three stars. To date, multiple-star systems containing up to seven stars have been discovered, but the recently discovered quadruple star system has something a little extra that makes it even rarer.
Right here in the Milky Way, of all places, astronomers found a quadruple-star system containing two brown dwarfs. A brown dwarf is a rare object, also known as a “failed star.” As its nickname implies, a brown dwarf is a sub-stellar object which is too massive to be considered a planet, yet not massive enough to start and sustain nuclear fusion. Since there is no nuclear fusion taking place, brown dwarfs are cold and emit very little energy, making them difficult to find, let alone study, in contrast to their brighter, full-fledged stellar counterparts. Sure, you could argue that having two sub-stellar objects disqualifies it from being a quadruple-star system, but who’s going to complain?
[Image description: A digital illustration of stars in a dark blue sky with a large, brown star in the center.] Credit & copyright: Author-created image. Public domain.
-
Mind + Body Daily Curio
It’s small, but there’s no doubt that it’s mighty! Jamaican cuisine is known for its bold, spicy flavors, and nothing embodies that more than one of the island’s most common foods: Jamaican beef patties. These hand pies have been around since the 17th century, and though they’re one of Jamaica’s best loved dishes today, the roots of their flavor stretch to many different places.
Jamaican beef patties, also known simply as Jamaican patties depending on their filling, are a kind of hand pie or turnover with thick crust on the outside and a spicy, meaty mixture on the inside. The sturdy-yet-flaky crust is usually made with flour, fat, salt, and baking powder, and gets its signature yellow color from either egg yolks or turmeric. The filling is traditionally made with ground beef, root vegetables like onions, and spices like garlic, ginger, cayenne pepper powder, curry powder, thyme, and Scotch bonnet pepper powder. Some patties use pulled chicken in place of beef, and some are vegetarian, utilizing vegetables like carrots, peas, potatoes, and corn.
It’s no coincidence that Jamaican beef patties bear a resemblance to European meat pies. Similar foods first came to the island around 1509, when the Spanish began to colonize Jamaica, bringing turnovers with them. In 1655, the British took control of the island from Spain, and brought along their own Cornish pasties. These meat pies, with their hard crust, were usually served with gravy. It didn’t take long, however, for Jamaicans and others living in the Caribbean to make the dish their own. Scotch bonnet peppers, commonly used in Jamaican cuisine, were added to the beef filling, while Indian traders and workers added curry powder and enslaved Africans made their patties with cayenne pepper. The patties were made smaller and thinner than Cornish pasties and were served without gravy or sauce, making them easier to carry around and eat while working. Today, the patties are eaten throughout the Caribbean, and regional variations are common. From Europe to Asia to the Caribbean, these seemingly simple patties are actually a flavorful international affair!
-
US History Daily Curio #3146
Get ready to clutch your pearls—there are people shopping for clothes on Sundays. The city of Paramus, New Jersey, recently filed a lawsuit against a local mall for allowing shoppers to buy garments on Sundays, and while supporters of the suit say it’s to reduce noise and traffic, detractors say it’s an example of blue laws gone wrong.
“Blue laws” are laws that restrict secular activities on Sundays, though what’s covered varies considerably. Some are better known, like restrictions on alcohol sales, while more extreme cases restrict entertainment, various types of commerce, sports, and work in general. In the recent controversy, the American Dream mall has been accused of allowing shoppers to purchase “nonessential” goods. These include not just clothes, but furniture and appliances, as opposed to essential goods like groceries or medicine. Like many blue laws in other jurisdictions, Paramus’s has been on the books since the colonial period and has remained largely unchanged. As for how such laws got on the books in the first place, they were developed in England, then promoted by Puritans early in American history.
Due to their religious roots, blue laws have often been challenged in court as a violation of the First Amendment, which forbids the government from favoring one particular religion. However, many blue laws remain in effect, partly from a lack of political will to change them and partly because of a Supreme Court ruling in their favor. In the 1961 case McGowan v. Maryland, the court ruled that Maryland’s blue laws forbidding certain types of commerce did not violate the Establishment Clause of the First Amendment. The justification was that, even if the laws were originally created to encourage church attendance on Sundays, they also served a secular function by making Sunday a universal day of rest. Whether you’re the pious or partying type, it’s hard to argue with a day off the clock.
[Image description: A porcelain sculpture of a man and woman in historical clothing in a clothing shop with goods on the wall behind them.] Credit & copyright: "Venetian Fair" shop with two figures, Ludwigsburg Porcelain Manufactory (German, 1758–1824). The Metropolitan Museum of Art, Gift of R. Thornton Wilson, in memory of Florence Ellsworth Wilson, 1950. Public Domain.
-
Engineering Daily Curio #3145
Smell ya later—but not too much later! Millions of people suffer from loss of smell (anosmia) due to a variety of medical causes, but researchers at Hanyang University and Kwangwoon University in South Korea have now discovered a way to restore the lost sense using radio waves.
The sense of smell is more important to daily life than most people think. Just ask anyone who took the sense for granted before losing it to a sinus infection, brain injury, or COVID-19. The recent pandemic brought the issue into the spotlight, since it caused so many people to either temporarily or permanently lose their sense of smell, along with their sense of taste. With no sense of smell, it’s difficult to enjoy food at the very least, and at worst, the loss can be dangerous. Imagine, for instance, not being able to detect spoiled food at a sniff before any visual signs are obvious, or not being able to smell a gas leak. Currently, there is no surefire treatment for anosmia. If the cause is something like polyps or a deviated septum, surgery might help. In other cases, olfactory training can be used, which involves the use of strong, often unpleasant scents to “retrain” the patient’s nose.
Now, researchers claim they have come up with a completely noninvasive, chemical-free method to restore a sense of smell. The treatment makes use of a small radio antenna placed near the patient’s head that sends targeted radio waves at the nerves inside the brain responsible for smell. It sounds almost too good to be true, but the researchers claim that just a week of treatments produced significant improvements. If it really works as they say, then even those who aren’t suffering from anosmia could benefit, since the technique could potentially sharpen a normal sense of smell even further. It could be the mildest superpower ever!
[Image description: A black-and-white illustration of a person in historical clothing smelling a flower next to two flowering plants.] Credit & copyright: Smell, Abraham Bosse, c.1635–38. The Metropolitan Museum of Art, Harris Brisbane Dick Fund, 1930. Public Domain.
-
Humanities Daily Curio #3144
Do you think you could brave the depths of the Grand Canyon with 19th-century equipment and dwindling provisions? How about with one arm? Starting on May 24, 1869, John Wesley Powell led a group of 12 men on an expedition through the treacherous terrain of the Grand Canyon in the name of science. On August 30, the surviving crew reached the end of their grueling journey.
An explorer and geologist, Powell set out to document the geology of the Grand Canyon and locate sources of water for future settlers in the region. Things went smoothly at first, starting off on the Green River with four rowboats filled to the brim with supplies that could last for around ten months. Then, on June 8, the first of many disasters struck. A boat was lost with around a third of the group’s supplies. By late June, much of their remaining supplies were wet, though they were able to replenish some of what they had lost by trading with nearby settlements.
On July 11, the expedition nearly came to a premature end when Powell himself was thrown out of a boat. Having lost one of his arms during the Battle of Shiloh in 1862, he couldn’t hold on to his boat. While he made it safely to shore, more supplies were lost. By the end of August, the crew had endured hunger, deadly rapids, and stifling heat. Faced with the prospect of a seemingly impassable rapid, three of the crew abandoned the expedition, choosing to hike out of the canyon toward a nearby town. Powell named the spot Separation Rapid, and despite their fears, the remaining crew managed to pass through and reached the mouth of the Virgin River on August 30, marking the end of their journey. The three who hiked out just days before were found dead, killed by members of the Shivwits band. Powell accomplished much of what he had set out to do, and many of the landmarks and locations within the Grand Canyon were named by him. Incredibly, he set out on a second expedition of the Grand Canyon just a few years later to document even more of the area. Let no one say that geologists aren’t dedicated people.
[Image description: A portion of the Colorado River flowing through the Grand Canyon under a blue sky.] Credit & copyright: NPS photo, Asset ID: f8f35e4c-505c-47fc-8f74-79b4b0f9ea16. Public domain: Full Granting Rights.
-
Mind + Body Daily Curio #3143
It’s an eradication throughout the nation. The World Health Organization (WHO) recently announced that Kenya is finally free from human African trypanosomiasis (HAT), also known as sleeping sickness. Among diseases that disproportionately affect impoverished regions, HAT is one of the deadliest. It’s caused by the parasite Trypanosoma brucei rhodesiense, which is usually transmitted to human hosts by tsetse flies. Early symptoms of HAT are easy to mistake for the flu, with fevers, headaches, and joint pain. In the later stages, however, the disease begins to affect the nervous system, leading to confusion, changes in personality, and sensory disturbances. Its most prominent symptom, which comes in the very last stages, is sleep cycle disturbance. At this point, the patient experiences drowsiness during the day and sleeplessness at night. Untreated, the disease is usually fatal, and even when it is caught, treatment can be lengthy.
The disease was difficult to contain for a few reasons. One is that, aside from tsetse flies, the parasite can be transmitted from mother to fetus during gestation and through shared needles. There is even one recorded case of transmission through sexual contact. Another reason is that HAT can take months or even up to a year to show any symptoms, and once it does, the patient’s health rapidly declines. Since the first case of the disease was discovered in Kenya in the early 20th century, the government has been fighting and monitoring its spread. In more recent years, it has been distributing diagnostic tools, training more clinical personnel, and monitoring the presence of T. b. rhodesiense in animal populations. Thanks to these efforts, only a few cases of HAT have been recorded in the past decade, and the WHO has finally declared the disease eliminated in Kenya, making it the 10th country to reach that milestone. Time for this disease to wake up, smell the coffee, and get gone.
[Image description: An illustration of a fly from above.] Credit & copyright: Insect life, its why and wherefore, Stanley, Hubert George; Brooke, Winifred M. A. London: Sir I. Pitman & Sons, 1913. Biodiversity Heritage Library. Public Domain.
-
Mind + Body Daily Curio
Crunchy, yet gooey. Savory, yet tangy. This dish is a whole lot of flavor wrapped in golden batter. Tempura is strongly associated with Japanese cuisine today, but its deepest roots aren't actually in Japan, nor even on the same continent.
Tempura is a simple dish of vegetables or seafood (most commonly shrimp) coated in batter and deep fried. What makes tempura unique is the consistency of the batter, which is extremely light and crisp. It’s made from eggs, soft wheat flour, baking soda or baking powder, and ice water, since keeping the batter cold is key to keeping it crisp. Mixing it only briefly also helps keep it light without whipping it into a fluffy texture, and the lumps the process leaves behind add to tempura’s extra-crunchy texture. Tempura is usually served with tentsuyu, a tangy dipping sauce made from soy sauce, sweet rice wine, and soup stock.
In the 15th century, Portugal was at the forefront of navigation and exploration. This allowed the country to create a far-reaching empire over the following three centuries, colonizing parts of South America, Africa, and Asia. Portuguese traders also had relationships with a myriad of nations. After the Portuguese became the first Europeans to visit Japan in 1543, they established a major trade port in the country and became Europe’s main supplier of Japanese goods. Soon, Portuguese missionaries arrived in an attempt to convert the Japanese to Catholicism. These missionaries adhered to a special diet, since avoiding meat during certain times of the year was part of their faith. During these times, Portuguese Catholics would eat lightly battered vegetables called “peixinhos da horta,” or “little fish of the garden,” so named because they resembled battered fish. It’s likely that deep frying didn’t exist in Japan until these missionaries introduced the technique.
The missionaries never managed to turn Japan into a Catholic country, but they did succeed in ushering in a new, deep-fried golden age in Japanese cuisine. Japanese cooks quickly developed their own deep frying techniques, using it to cook seafood and native vegetables like lotus root. Tempura has been a Japanese staple ever since. Talk about a heavenly dish.
[Image description: A small, white bowl of rice with tempura on top.] Credit & copyright: 毒島みるく, Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication.
Tempura is a simple dish of vegetables or seafood (most commonly shrimp) coated in batter and deep fried. What makes tempura unique is the consistency of the batter, which is extremely light and crisp. It’s made from eggs, soft wheat flour, baking soda or baking powder, and ice water, since keeping the batter cold is key to keeping it crisp. Mixing it for just a few minutes at a time also helps keep it light without whipping it into a fluffy texture, and the process leaves lumps behind, adding to tempura’s extra-crunchy texture. Tempura is usually served with tentsuya, a tangy sauce made from soy sauce, wine, and soup stock.
In the 15th century, Portugal was at the forefront of navigation and exploration. This allowed the country to create a far-reaching empire in the following three centuries, colonizing parts of South America, Africa, and Asia. Portuguese traders also had relationships with a myriad of nations. After the Portuguese became the first Europeans to visit Japan in 1543, they established a major trade port in the country and became Europe’s main supplier of Japanese goods. Soon, Portuguese missionaries arrived in an attempt to convert the Japanese to Catholicism. These missionaries adhered to a special diet, since avoiding meat during certain times of the year was part of their faith. During these times, Portuguese Catholics would eat lightly-battered vegetables called “peixinhos da horta”, or “little fish of the garden”, so named because they resembled battered fish. It’s likely that deep frying didn’t exist in Japan until these missionaries introduced the technique.
The missionaries never managed to turn Japan into a Catholic country, but they did succeed in ushering in a new, deep-fried golden age in Japanese cuisine. Japanese cooks quickly developed their own deep frying techniques, using it to cook seafood and native vegetables like lotus root. Tempura has been a Japanese staple ever since. Talk about a heavenly dish.
[Image description: A small, white bowl of rice with tempura on top.] Credit & copyright: 毒島みるく, Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication. -
FREEUS History Daily Curio #3142Free1 CQ
9/11 was a terrible day in American history, and most of the damage was immediately obvious. For tens of thousands of people who were present in the direct aftermath, however, the terrorist attacks were just the beginning of the nightmare. Around 10 years ago today, 9/11 survivor Marcy Borders passed away from cancer—a fate similar to that of many others who lived through that day.
Borders became famous after 9/11 as the “Dust Lady” when a photo of her covered head to toe in a thick layer of dust was widely published in the media. At the time, few people could have guessed that the dust would claim her life. Soon after her passing, her brother posted on social media that Borders had been suffering from various ailments caused by the toxic dust she was exposed to on 9/11. It’s not an isolated case, either. Some 91,000 first responders and volunteers worked in the rubble of the WTC on 9/11 and in the months that followed. Among them, cancer rates are significantly higher than in the general population, particularly for cancers of the lung, thyroid, prostate, and esophagus, as well as leukemia.
The September 11 attacks directly killed nearly 3,000 people, but at least 6,300 more have died as a result of cancers associated with the toxic dust and debris present around Ground Zero. The main environmental culprits include cement dust and asbestos, but other, more unusual causes include heavy metals, soot, polycyclic aromatic hydrocarbons, and dioxins. Most of the heavy metals came from thousands of shattered fluorescent bulbs, which contain mercury. While it's normal to find some toxic substances at the site of collapsed structures, some of the more unusual ones found at Ground Zero were present due to the jet fuel that burned through the buildings. For many, Ground Zero was Day One of a lifelong health battle.
[Image description: A portion of a memorial plaque honoring firefighters killed on September 11, 2001.] Credit & copyright: David R. Tribble (Loadmaster), Wikimedia Commons. The copyright holder of this work has released it into the public domain. This applies worldwide. -
FREEHumanities Daily Curio #3141Free1 CQ
Ah, an archaeological dig examining the bygone era of… 1978? Archaeologists at the University of Glasgow in Scotland have invited the public and aging skaters to help in the excavation of a buried skatepark, shedding light on a bit of old European skater lore. The 1970s were a time of change, not just in terms of music and (questionable) fashion, but also in the world of sports. In Scotland, skateboarding exploded in popularity as it did in the U.S., and in 1978, the city of Glasgow invested £100,000 to build the country’s first skatepark, the Kelvin Wheelies. The skatepark featured a freestyle area, a slalom run, and a halfpipe, among other ambitious features that would have made any skater at the time drool with delight. The very year it opened, the facility hosted the first Scottish Skateboard Championships. Skaters from all around the U.K. gathered to compete, and for a few years, Glasgow’s skaters were among the best in the country. Unfortunately, the sport declined sharply just a few years after the skatepark opened, and the park began to see fewer visitors. Over time, it fell into disrepair, and the city made the decision to bulldoze the park due to safety concerns. It was then buried underground, with a few features remaining visible on the surface. Even without the concrete remnants jutting through the ground, Glasgow skaters from those days never forgot the park. Now, however, they may get to help resurrect the glories of yesteryear alongside archaeologists, who are seeking their help in identifying the skatepark’s features and layout as they excavate the site. In addition to getting down and dirty themselves, the skaters hope that the site will be marked in such a way that its historic significance can be remembered properly. While skateboarding may have dipped in popularity for a time in Scotland, it’s now more popular than ever around the world and has even made it into the Olympics, so it’s understandable that skating enthusiasts hold the site in such high regard. Also, by all accounts, those bowls were absolutely sick!
[Image description: A green sign on a chainlink fence. White letters read: “No Skateboarding Allowed: Police Take Notice.”] Credit & copyright: Khrystinasnell, Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication. -
FREEUS History Daily Curio #3140Free1 CQ
This wasn’t your average tea party…in fact, it wasn’t even like the other famous revolutionary tea party. With so much political upheaval going on today, it’s worth looking back on different ways that Americans have protested over the centuries, including subtle ways. The Edenton Tea Party of 1774 was quite civil, but made a powerful statement all the same.
The English love their tea, and so did early American colonists. It’s no wonder, then, that when it came to unfair taxation, a tax on tea was a particularly contentious issue. When the British Parliament passed the Tea Act in 1773 and gave the British East India Company a monopoly on the commodity, they probably knew that it would ruffle feathers across the pond. They might not have been prepared for just how ruffled those feathers got, though. That same year, the famous Boston Tea Party took place, during which protesters dumped 90,000 pounds of tea into Boston Harbor. At the same time, women were encouraged to eschew British imports to participate in politics in their own way.
One woman, named Penelope Barker, took this idea a step further. On October 25, 1774, after the First Continental Congress had passed several non-importation resolutions, Barker gathered 50 women together in what would become the first political protest held by women in America. On the surface, it appeared to be like any other large tea party, but there were some key differences. Instead of tea made from tea leaves, Barker served herbal tea made from local plants like mulberry leaves and lavender. Furthermore, the attendees signed the 51 Ladies’ Resolution, which expressed political will as women “who are essentially interested in their welfare, to do everything as far as lies in our power to testify our sincere adherence to the same.” Unlike the men who disguised themselves to hide their identities at the Boston Tea Party, the women specifically rejected the idea of hiding, exposing themselves to potential public backlash aimed at them personally. As expected, they were mocked heavily by British newspapers, but they also inspired other women in the colonies to have tea parties of their own, bringing more women into the political landscape for the first time. Nothing like a good cup of tea to kick off a revolution.
[Image description: An American flag with a wooden flagpole.] Credit & copyright: Crefollet, Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication. -
FREEScience Daily Curio #3139Free1 CQ
Who knew that weeds could be so helpful? Increasingly powerful storms and rising sea levels are quickly eroding Scotland’s coastline, but the solution to slowing the progress might lie in a humble seaweed. Coastal erosion is a growing concern around the world, and the issue is especially dire in Scotland. In some communities, buildings are located a literal stone’s throw from the water, and erosion is encroaching on homes, businesses, and historic sites. However, researchers at Heriot-Watt University have found a simple and plentiful resource that could slow the encroachment to a crawl: kelp. Using computer modeling, researchers tested the effectiveness of kelp and other natural barriers like seagrass, oyster reefs, and mussel beds in dampening the devastating energy carried by ocean waves. What they found was that these barriers could reduce the impact and height of incoming waves to a surprising degree. Kelp was the most effective, capable of reducing wave height by up to 70 percent depending on the exact location. The problem with natural barriers is that they too are in decline in many areas due to climate change. Kelp forests are already struggling to survive rising temperatures in some areas, and they could easily be wiped out during a devastating storm, leaving nearby communities more vulnerable until the kelp can recover. Researchers say that legislation may be the next crucial step in stopping the erosion of Scotland’s coasts. If natural barriers are protected via legislation, they could not only contribute to a diverse marine habitat but act as a natural defense against erosion and flooding. Kelp forests also form much of the ecosystem that fisheries rely on, so protecting them would directly benefit the economy, too. It’s a green solution in more ways than one.
[Image description: A kelp forest underwater.] Credit & copyright: U.S. National Park Service, Asset ID: 0DB56032-0224-DD48-5D902DA5B1D6C3F5. Public domain: Full Granting Rights. -
FREEMind + Body Daily CurioFree1 CQ
They’re a staple across the pond, but the most likely place to find these eggs in the U.S. is at a Renaissance fair! Scotch eggs have been around for centuries, but have fallen out of favor in many of the places they were once popular. Associated with Britain yet seemingly named after Scotland, these sausage-covered snacks might actually have roots in India or Africa.
A Scotch egg is a hard-boiled or soft-boiled egg covered in sausage, then coated in breadcrumbs and deep fried. Home cooks sometimes choose to bake their Scotch eggs for convenience, and they can be served whole or cut into slices.
Scotch eggs are a popular pub food in the U.K., and in the U.S. they’re associated with historic Britain, making them popular at Renaissance faires and other European-themed events, but few other places. In truth, no one actually knows where Scotch eggs came from, though they definitely didn’t originate in Scotland, despite their name. One story claims that Scotch eggs were named after 19th-century restaurateurs William J Scott & Sons, of Whitby in Yorkshire, England. Supposedly, they served eggs called “Scotties”, coated in fish paste rather than sausage. There are plenty of other theories, of course. London department store Fortnum & Mason has long held that they invented Scotch eggs in the 18th century as a snack for wealthy customers. It’s also plausible that Scotch eggs aren’t European at all, but that they originated in Africa or India. African recipes for foods similar to Scotch eggs have been found, and might have been brought to England via trade or exploration during the reign of Queen Elizabeth I, between 1558 and 1603. It’s also possible that Scotch eggs were based on an Indian dish called nargisi kofta, in which an egg is coated in spiced meat, and that the dish made its way to England during the British colonization of India. However they got there, Scotch eggs are right at home in British pubs. Ye olde snacks are sometimes the best.
[Image description: Four slices of Scotch egg on a white plate.] Credit & copyright: Alvis, Wikimedia Commons. The copyright holder of this work has released it into the public domain. This applies worldwide. -
FREEPolitical Science Daily Curio #3138Free1 CQ
Washington D.C. doesn’t always get to be its own city, despite its status as the nation’s capital. With the federal government’s recent controversial takeover of law enforcement duties from the Metropolitan Police Department of the District of Columbia (MPDC), it might be worth looking back at the history of the District of Columbia Home Rule Act, which lies at the center of the debate.
Washington D.C. has been the capital of the U.S. since 1800, yet for most of its history it didn’t have much autonomy as a city. Even though it’s situated in the continental U.S., it’s not technically located in one of the 50 states. This was by design, as the Founding Fathers didn’t want any one state to have too much power over the capital. That power was instead given to the federal government, and that had some unusual repercussions for D.C. residents. For one, since the city wasn’t located in a state, its residents had no electoral votes in presidential elections until the 23rd Amendment was ratified in 1961. Washington’s residents had been trying to gain voting rights for most of the city’s history, and that was just one small victory in its struggle for representation.
The next big development for Washington was the District of Columbia Home Rule Act of 1973, which allowed residents to vote for a mayor and a council of 12 members. Still, all legislation passed by the council has to be approved by Congress. Not only that, the city’s budget is set by Congress and its judges are appointed by the president. Finally, while Washington has representatives in Congress, they aren’t allowed to vote, effectively leaving the city without a voice in federal legislation. Recent events are a stark reminder that the city is ultimately at the mercy of federal authority for even the most basic municipal functions. With the White House invoking section 740 of the Home Rule Act to declare an emergency, the federal government has taken over law enforcement duties, and it has the power to do so for up to 30 days by notifying Congress. It might be the capital, but its rights are somewhat lowercase.
[Image description: An American flag with a wooden flagpole.] Credit & copyright: Crefollet, Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication. -
FREEMind + Body Daily Curio #3137Free1 CQ
Fatigue isn’t always a symptom; sometimes, it’s the disease. In the last few decades, more and more people have been affected by chronic fatigue syndrome. Now, researchers may finally have found out what causes the mysterious illness. Chronic fatigue syndrome (CFS), also known as myalgic encephalomyelitis, causes such profound fatigue that no amount of rest is enough to alleviate it. The disease began attracting attention in the medical community in the late 1980s, when it was widely confused with mononucleosis, which can cause similar symptoms. In addition to being easily fatigued, those who suffer from CFS are likely to experience severe dizziness, muscle and joint pain, cognitive issues, and unrefreshing sleep. In some cases, CFS can also cause tender lymph nodes and sensitivity to various stimuli. The disease is difficult to diagnose, and some patients have reported difficulty in having their condition taken seriously, even by the doctors they turn to for help.
That might change now that CFS has been linked to changes in the gut biome as well as to certain genetic signals in patients. One study analyzed the gut biomes of 153 individuals who had been diagnosed with CFS and compared them to those of 96 healthy individuals. Researchers found that the composition of the gut biome could reliably predict CFS symptoms. The link between the gut and CFS isn’t too surprising, since the disease often manifests after the patient fights off another infection that might have affected their gut biome. Another study, which analyzed data on over 15,000 CFS patients and compared it to data from healthy individuals, found eight genetic signals linked to the immune and nervous systems. While a patient’s gut biome can be used to predict the type of symptoms they will have, it appears that these genetic signals can predict the severity of those symptoms. Though there is still no cure for CFS, deeper research could be the key to convincing sufferers’ bodies to finally wake up and smell the coffee.
[Image description: A black-and-white illustration of a girl sleeping while sitting up in a chair with sewing in her lap.] Credit & copyright: Sleeping Girl with Needlework in her Lap, Gerard Valck after Michiel van Musscher. The Metropolitan Museum of Art, A. Hyatt Mayor Purchase Fund, Marjorie Phelps Starr Bequest, 1988. Public Domain.