Curio Cabinet / Daily Curio
-
Mind + Body Daily Curio
Don’t panic, these eggs aren’t satanic! From summer barbeques to holiday feasts, deviled eggs are widely beloved despite their odd name. These savory morsels have a surprisingly long history, popping up in cultures from ancient Rome to medieval Europe.
Deviled eggs are specially prepared, hard-boiled eggs in which the yolk is scooped out, mixed with other ingredients, and then piped back into the egg white. Recipes vary, but the yolks are usually mixed with mayo and mustard, then topped with spices and herbs like paprika or parsley. Deviled eggs can be prepared simply or elaborately, since they can be topped with practically anything, from bacon to salsa to shrimp.
Eggs have been eaten as appetizers and side dishes for centuries. In ancient Rome, boiled eggs were eaten as finger food and dipped in spicy sauces. Spiciness was a hallmark of many early deviled-egg-like recipes from medieval Europe too. One recipe from 13th-century Spain called for mixing egg yolks with pepper and onion juice, among other ingredients, then piping it back into boiled egg halves before skewering the halves together with a pepper-topped stick.
As for the name “deviled,” it’s a culinary term that applies to more than just eggs. Deviled ham still exists today, as does deviled crab. The term came about in the 1700s as a way of describing heavily spiced foods. Some food historians believe that it had to do with the heat of the spices (the devil is known to like heat, after all). Others believe that “deviled” refers to the sinful or decadent nature of the dish, since spices and herbs were expensive and hard to obtain in the 1700s, especially in the American colonies. Either way, the name stuck, though some still prefer to call them stuffed eggs, dressed eggs, or even angel eggs. Hey, an elaborately prepared egg by any other name still tastes just as good.
[Image description: Six deviled eggs with green garnishes on a wooden serving board.] Credit & copyright: Büşra Yaman, Pexels -
World History Daily Curio #3118
It’s time to get out on the Thames. An annual event called Swan Upping takes place in England around this time each year, and as whimsical as it sounds, it’s really serious business. King Charles III has had many titles bestowed on him in his life, including Prince of Wales and Earl of Chester, Duke of Cornwall, Lord of the Isles, and Prince and Great Steward of Scotland. As the king of the U.K., he has yet another title: Seigneur of the Swans, or Lord of the Swans. Of course, the king doesn’t dive into the River Thames himself. Instead, the King’s Swan Marker, wearing a red jacket and a white swan-feathered hat, leads a team of swan uppers, who row along the river in skiffs in search of swans and cygnets. The tradition dates back to the 12th century, when swans were considered a delicacy, primarily served at royal banquets and feasts. To ensure a sustainable population of swans to feast on, it was the Crown’s duty to keep track of their numbers.
Swans aren’t really considered “fair game” nowadays, and it’s no longer legal to hunt them. However, they still face threats in the form of human intervention and environmental hazards, and the Thames just wouldn’t be the same without them. So, the practice of Swan Upping has transformed into a ceremonial activity mainly focused on conservation. When swan uppers spot a swan or cygnet, they yell, “All up!” They gather the cygnets, weigh them, determine their parentage, and mark them with a ring that carries an identification number unique to that individual. The birds are also given a quick examination for injuries or diseases before they’re released. Despite rumors, the king doesn’t actually own all the swans on the Thames, or in England for that matter. Only the unmarked swans on certain parts of the Thames technically belong to the king, while the rest are claimed by two livery companies and the Ilchester family, who operate a breeding colony of the birds. The swans’ owners don’t eat them, but instead use their ownership for conservation efforts. England’s swans might no longer be served at feasts, but they do get to have a taste of the good life.
[Image description: A swan floating on blue water.] Credit & copyright: Michael, Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication. -
World History Daily Curio #3117
Whether you win or lose this race, you’ll feel the pain in your feet. As part of Pride Week in Madrid, revelers have, for decades, been participating in the “Carrera de Tacones,” or the race of heels. Racers, most of them men, don high-heeled shoes and run through the city’s streets. The race plays on the footwear’s notoriously impractical and uncomfortable nature, but high heels were once considered much more than fashion accessories. In fact, they were worn by soldiers.
High heels were originally developed for horseback riding in Persia, which owed much of its military success to its mounted soldiers. The pronounced heels helped riders stabilize themselves on stirrups, allowing for greater control over their steeds. Although the earliest depiction of high heels dates back to the 10th century, it’s possible that they were used before then. Regardless, high heels were largely seen as military gear, and for centuries, they were associated with masculinity. Since horseback riding was usually an activity only available to those wealthy enough to own horses, high heels were also a status symbol, and they remained that way until around the first half of the 17th century. As horseback riding became more accessible to commoners, high heels lost their distinguishing appeal, at least for a while. Then, aristocrats in Europe began wearing shoes with increasingly higher heels as a display of wealth, since such footwear would be impractical for manual labor.
Around the same time, red dye was gaining popularity as a sign of conspicuous consumption, and so red heels became popular. In the 18th century, King Louis XIV of France was so enamored and protective of the shoes as a status symbol that he only allowed members of his court to wear them. While high heels gradually fell out of favor with men, they became more and more popular with women in the 19th century as they, too, sought to wear impractical shoes that denoted their high status, distancing themselves from laborers. Today, some riding shoes still have more pronounced heels than most shoes, though not nearly to the degree they did in the past. Mainly, though, high heels are a fashion item regardless of social status, and they’ve earned such a reputation for being impractical that it’s considered novel to race in them. By the way, the race of heels takes place on cobblestone. Oh, those poor ankles!
[Image description: A pair of white, historical high-heeled shoes with pointy toes and yellow-and-green floral embroidery.] Credit & copyright: The Metropolitan Museum of Art, 1690-1700. Rogers Fund, 1906. Public Domain. -
Science Daily Curio #3116
We might need to redefine what qualifies as hardwood! Fig trees are known for their delicious fruit, but they may soon be useful as a means of carbon sequestration after scientists discovered that they can turn themselves into stone. It’s not exactly news that trees, like all living things, are made of carbon. Compared to most organisms, though, trees are great at sequestering carbon. They turn carbon dioxide into organic carbon, which they then use to form everything from roots to leaves. Since trees live so long, they can store that carbon for a long time. That’s why, to combat climate change, it’s a good idea to plant as many trees as possible. It’s a win-win, since trees can also provide food and lumber to people and form habitats for other organisms. One tree, however, seems to be a little ahead of the curve. Ficus wakefieldii is a species of fig tree native to Kenya, and scientists have found that it can turn carbon dioxide into calcium carbonate, which happens to be what makes up much of limestone. Apparently, other fig trees can do this to some extent, but F. wakefieldii was the best at it out of the three species studied.
The process is fairly simple. First, the trees convert carbon dioxide into calcium oxalate crystals, and when parts of the tree begin to naturally decay from age, bacteria and fungi convert the crystals into calcium carbonate. Much of the calcium carbonate is released into the surrounding soil, making it less acidic for the tree, but much of it is also stored in the tissue of the tree itself. In fact, scientists found that the roots of some specimens had been completely converted to calcium carbonate. Surprisingly, F. wakefieldii isn’t the only tree capable of doing this. The iroko tree (Milicia excelsa), also native to Africa, can do the same thing, though it’s only used for lumber. Fig trees, on the other hand, can produce food. Either way, carbon minerals can stay sequestered for much longer than organic carbon, so both species could one day be cultivated for that purpose. The real question is, if you wanted to make something from these trees’ wood, would you call a carpenter or a mason?
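For the chemistry-curious, the conversion described above is generally known as the oxalate-carbonate pathway, and it’s often summarized with a simplified net reaction. The sketch below is a rough approximation rather than the full microbial story:
\[
\mathrm{CaC_2O_4} + \tfrac{1}{2}\,\mathrm{O_2} \longrightarrow \mathrm{CaCO_3} + \mathrm{CO_2}
\]
In words: calcium oxalate formed by the tree is oxidized by soil microbes into calcium carbonate, with one of the two carbon atoms escaping as carbon dioxide and the other locked away as mineral.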
[Image description: A brown, slightly-split fig on a bonsai fig tree.] Credit & copyright: Tangopaso, Wikimedia Commons. -
Mind + Body Daily Curio #3115
It was a lot of work, but it had to get done. Once devastated by a water crisis, the city of Flint, Michigan, has now completely replaced all of its lead pipes. In 2014, the city switched its municipal water source to the Flint River, which was cheaper than piping in water from Lake Huron and should have been easy enough to do. The change was part of an ongoing effort to lower the city’s spending after it was placed under state control due to a $25 million deficit. An emergency manager had been assigned by the governor to cut costs wherever possible, and so city officials and residents had no say in the change. Problems quickly arose when those overseeing Flint failed to treat the river water, which was more acidic than the lake water. The water gradually corroded the protective coating that had formed inside the lead pipes during years of hard water use. Eventually, the coating disappeared completely and the acidic water began leaching lead from the pipes. The water was tested periodically by city officials…but not adequately. Water samples were taken after letting the tap run for a little while, allowing any built-up lead in the pipes to be washed out before sampling. By 2016, however, the effects of lead contamination were obvious. Residents were showing symptoms of lead poisoning, including behavioral changes, increased anxiety and depression, and cognitive decline. Overall, some 100,000 residents and 28,000 homes in and around the city were affected. Following a court decision later that year, residents were provided with faucet filters or water delivery services for drinking water, though these were only temporary solutions. The next year, a court decision forced the city to replace its 11,000 lead pipes. Now, almost 10 years later, the project is finally complete. Time to make a toast with tap water.
[Image description: The surface of water with slight ripples.] Credit & copyright: MartinThoma, Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication. -
Mind + Body Daily Curio
What does a fruit salad have to do with one of the world’s most famous hotels? More than you’d think! Waldorf salad is more than just a great choice for cooling down during summer; it’s an integral part of American culinary history. Developed at New York City’s famous Waldorf-Astoria hotel during the establishment’s golden age, this humble salad is a superstar…albeit a misunderstood one.
Modern Waldorf salad is usually made with chopped apples, mayonnaise, sliced grapes, chopped celery, and walnuts. Raisins are also sometimes added. Juice from the chopped apples melds with the mayonnaise during mixing, giving the salad a tangy, sweet flavor. Often, green apples and grapes are used, though some suggest using Pink Lady apples for a less pucker-inducing dish. Though Waldorf salad is fairly simple to make, it used to be even more so. The original recipe called for just three ingredients: apples, celery, and mayonnaise.
Unlike many other iconic foods, Waldorf salad’s history is well-documented. It was first served on March 13, 1896, at New York City’s Waldorf-Astoria by famed maître d'hôtel Oscar Tschirky. At the time, the Waldorf-Astoria was known as a hotel of the elite. Diplomats, movie stars, and other international celebrities frequently stayed there, and as such the hotel’s menus had to meet high standards and change frequently enough to keep guests interested. Tschirky was a master at coming up with simple yet creative dishes. He first served his three-ingredient Waldorf salad at a charity ball for St. Mary's Hospital, where it was an instant hit. It soon gained a permanent place on the hotel’s menu, and spread beyond its walls when Tschirky published The Cook Book, by "Oscar" of the Waldorf later that same year. Soon, Waldorf salad made its way onto other restaurant menus in New York City, and remained a regional dish for a time before spreading to the rest of the country. Naturally, the further from its birthplace the salad traveled, the more it changed. Regional variations that included grapes and walnuts eventually became the standard, though no one is quite sure how. What’s wrong with teaching an old salad new tricks?
[Image description: A pile of green apples with some red coloring in a cardboard box.] Credit & copyright: Daderot, Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication. -
World History Daily Curio #3114
These are some not-so-fresh kicks. Archaeologists in England have unearthed 2,000-year-old pairs of Roman shoes, and they’re some of the best-preserved footwear from the era. The researchers were working at the Magna Roman Fort in Northumberland, located near another ancient Roman fort called Vindolanda, when they made the discovery. Many famous artifacts have been unearthed at Vindolanda, including wooden writing tablets and around 5,000 pairs of ancient Roman shoes. The Magna site, it seems, is literally following in those footsteps, with 32 shoes found so far preserved in the fort’s “ankle-breaker” trenches. Originally designed to trip and injure attackers, the trenches ended up being a perfect anaerobic environment for preserving the shoes.
Roman shoes were made with hand-stitched leather, and many were closed-toed as opposed to the sandals often portrayed in popular media (in fact, sandals were only worn indoors). The ancient Romans were actually expert shoemakers, and their footwear contributed greatly to their military success. Most Roman soldiers wore caligae, leather boots consisting of an outer shell cut into many strips that allowed them to be laced up tightly. Replaceable iron hobnails on the soles helped the boots last longer and provided traction on soft surfaces. These boots were eventually replaced with completely enclosed ones called calcei, but the caligae have left a greater impression on the perception of Roman culture. That’s probably thanks to Caligula, the infamous Roman emperor whose real name was Gaius. When Gaius was a child, he accompanied his father on campaign in a set of kid-sized legionary gear, including the caligae. The soldiers then started calling him “Caligula,” which means “little boots.” Unfortunate, since he had some big shoes to fill as the third emperor of Rome.
[Image description: A detailed, black-and-white illustration of two elaborately-dressed ancient Roman soldiers looking at one another.] Credit & copyright: The Metropolitan Museum of Art, Two Roman Soldiers, Giovanni Francesco Venturini, 17th century. Bequest of Phyllis Massar, 2011. Public Domain. -
Mind + Body Daily Curio #3113
It’s not always good to go out with a bang. Heart attacks were once the number one cause of death in the world, but a recent study shows that the tides are changing. In the last half-century or so, the number of heart attacks has been in sharp decline. Consider the following statistic from Stanford Medicine researchers: a person over the age of 65 admitted to a hospital in 1970 had just a 60 percent chance of leaving alive, and the most likely cause of death would have been an acute myocardial infarction, otherwise known as a heart attack. Since then, the numbers have shifted drastically. Heart disease used to account for 41 percent of all deaths in the U.S., but that number is now down to 24 percent. Deaths from heart attacks, specifically, have fallen by an astonishing 90 percent. There are a few reasons for this change, the first being that medical technology has simply advanced, giving doctors better tools with which to help their patients, including better drugs. Another reason is that more people have become health-conscious, eating better, exercising more, and smoking less. Younger Americans are also drinking less alcohol, which might continue to improve the nation’s overall heart health. More people know how to perform CPR now too, and those who don’t can easily look it up within seconds thanks to smartphones. This makes cardiac arrest itself less deadly than it once was. Nowadays, instead of heart attacks, more people are dying from chronic heart conditions. That might not sound like a good thing, but it’s ultimately a positive sign. As the lead author of the study, Sara King, said in a statement, “People now are surviving these acute events, so they have the opportunity to develop these other heart conditions.” Is it really a trade-off if the cost of not dying younger is dying older?
[Image description: A digital illustration of a cartoon heart with a break down the center. The heart is maroon, the background is red.] Credit & copyright: Author-created image. Public domain. -
Biology Daily Curio #3112
The Earth is teeming with life and, apparently, with “not-life” as well. Scientists have discovered a new type of organism that appears to defy the standard definition of “life.” All living things are organisms, but not all organisms are living. Take viruses, for instance. While viruses are capable of reproducing, they can’t do so on their own. They require a host organism to perform the biological functions necessary to reproduce. Viruses also can’t produce energy on their own or grow, unlike even simple living things, like bacteria. Now, there’s the matter of Sukunaarchaeum mirabile. The organism was discovered by accident by a team of Canadian and Japanese researchers who were looking into the DNA of Citharistes regius, a species of plankton. When they noticed a loop of DNA that didn’t belong to the plankton, they took a closer look and found Sukunaarchaeum. In some ways, this new organism resembles a virus. It can’t grow, produce energy, or reproduce on its own, but it has one distinct feature that sets it apart: it can produce its own ribosomes, messenger RNA, and transfer RNA. That latter part makes it more like a bacterium than a virus.
Then there’s the matter of its genetics. Sukunaarchaeum, it seems, is a genetic lightweight with only 238,000 base pairs of DNA. Compare that to a typical virus, which can range from 735,000 to 2.5 million base pairs, and the low number really stands out. Nearly all of Sukunaarchaeum’s genes are made to work toward the singular goal of replicating the organism. In a way, Sukunaarchaeum appears to be somewhere between a virus and a bacterium in terms of how “alive” it is, indicating that life itself exists on a spectrum. In science, nothing is as simple as it first appears. -
Astronomy Daily Curio #3111
Don’t hold your breath for moon dust. Long thought to be toxic, moon dust may actually be relatively harmless compared to what’s already here on Earth, according to new research. While the dusty surface of the moon looks beautiful and its name sounds like a whimsical ingredient in a fairy tale potion, it was a thorn in the side of lunar explorers during the Apollo missions. NASA astronauts who traversed the moon’s dusty surface reported symptoms like nasal congestion and sneezing, which they began calling “lunar hay fever.” They also reported that moon dust smelled like burnt gunpowder, and while an unpleasant smell isn’t necessarily bad for one’s health, it couldn’t have been comforting. These symptoms were likely caused by the abrasive nature of moon dust particles, which are never smoothed out by wind or water the way they would be on Earth. The particles are also small, so they’re very hard to keep out of spacesuits and away from equipment. Then there’s the matter of the moon’s low gravity, which allows moon dust to float around for longer than it would on Earth, making it more likely to penetrate spacesuits’ seals and be inhaled into the lungs. There, like asbestos, the dust can cause tiny cuts that can lead to respiratory problems and even cancer…at least, that’s what everyone thought until recently. Researchers at the University of Technology Sydney (UTS) just published a paper claiming that moon dust might not be so dangerous after all. They believe that the dust will likely cause short-term symptoms without leading to long-term damage. Using simulated moon dust and human lung cells, they found that moon dust was less dangerous than many air pollutants found on Earth. For instance, silica (typically found on construction sites) is much more dangerous, as it can cause silicosis by lingering in the lungs, leading to scarring and lesions. Astronauts headed to the moon in the future can breathe a sigh of relief—but it may be safer to wait until they get there.
[Image description: A moon surrounded by orange-ish hazy clouds against a black sky.] Credit & copyright: Cbaile19, Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication. -
Mind + Body Daily Curio
Happy Fourth of July! This year, we’re highlighting a food that’s as American as apple pie…actually, much more so. Chicken and waffles is a U.S.-born, soul food staple, but exactly where, when, and how it developed is a source of heated debate.
Chicken and waffles is exactly what its name implies: a dish of waffles, usually served with butter and maple syrup, alongside fried chicken. The chicken is dredged in seasoned flour before cooking, and the exact spices used in the dredge vary from recipe to recipe. Black pepper, paprika, garlic powder, and onion powder are all common choices. The exact pieces of chicken served, whether breast meat, wings, or thighs, also vary. Sometimes, honey is substituted for syrup.
The early history of chicken and waffles is shrouded in mystery. Though there’s no doubt that it’s an American dish, there are different stories about exactly how it developed. Some say that it came about in Jazz Age Harlem, when partiers and theater-goers stayed out so late that they craved a combination of breakfast and dinner foods. This story fits with chicken and waffles’ modern designation as soul food, since Harlem was largely segregated during the Jazz Age, and soul food comes from the culinary traditions of Black Americans. Still, others say that the dish was actually made famous by founding father Thomas Jefferson, who popularized waffles after he purchased waffle irons (which were fairly expensive at the time) from Amsterdam in the 1780s. Another story holds that the Pennsylvania Dutch created chicken and waffles based on German traditions.
Though we’ll never know for certain, it’s likely that all three tales are simply parts of a larger story. Dutch colonists brought waffles to the U.S. as early as the 1600s, where they made their way into the new culinary traditions of different groups of European settlers. This included the “Pennsylvania Dutch,” who were actually from Germany, where it was common to eat meat with bread or biscuits to sop up juices. They served waffles with different types of meat, including chicken with a creamy sauce. Thomas Jefferson did, indeed, help to popularize waffles, but it was the enslaved people who cooked for him and other colonists who changed the dish into what it is today. They standardized the use of seasoned, sometimes even spicy, fried chicken served with waffles, pancakes, or biscuits. After the Civil War, chicken and waffles fell out of favor with white Americans, but was still frequently served in Black-owned restaurants, including well-known establishments in Harlem and in Black communities throughout the South. For centuries, the dish was categorized as Southern soul food. Then, in the 1990s, chicken and waffles had a sudden surge in nationwide popularity, possibly due to the rise of food-centric TV and “foodie” culture. Today, it can be found everywhere from Southern soul food restaurants to swanky brunch cafes in northern states. Its origins were humble, but its delicious reach is undeniable.
[Image description: Chicken wings and a waffle on a white plate with an orange slice.] Credit & copyright: Joost.janssens, Wikimedia Commons. This work has been released into the public domain by its author, Joost.janssens at English Wikipedia. This applies worldwide. -
STEM Daily Curio #3110
When the fungus kicked ash, the ash started fighting back. For over a decade, ash trees in the U.K. have been under threat from a deadly fungus. Now, the trees appear to be developing resistance. No matter where they grow, ash trees just can’t seem to catch a break. Invasive emerald ash borers started devastating ash trees in North America in the 1990s. Then, around 30 years ago, the fungus Hymenoscyphus fraxineus arrived in Europe, making its way through the continent one forest at a time. Finally, it reached the U.K. in 2012. H. fraxineus is native to East Asia and is the cause of chalara, also called ash dieback. It’s particularly devastating to Fraxinus excelsior, better known as European ash, and it has already reshaped much of the U.K.’s landscape. While the fungus directly kills only ash trees, it presents a wider threat to the overall ecology of the affected areas. H. fraxineus also poses an economic threat, since ash lumber is used for everything from hand tools to furniture.
When not being felled by fungus or bugs, ash trees are capable of growing in a wide range of conditions, creating a loose canopy that allows sunlight to reach the forest floor. That, in turn, encourages the growth of other vegetation. A variety of insect species and lichen also depend on ash trees for survival. Luckily, for the past few years, researchers have been seeing a light at the end of the fungus-infested tunnel. Some ash trees have started showing signs of fungal resistance, and a genetic analysis has now revealed that the trees are adapting at a faster rate than previously thought. If even a small percentage of ash trees become fully immune to the fungus, it may be just a matter of time before their population is replenished. Ash trees are great at reproducing, as they’re each capable of producing around 10,000 seeds that are genetically distinct from each other. That also means that ash trees may be able to avoid creating a genetic bottleneck, even though their population has sharply declined due to dieback. Still, scientists estimate around 85 percent of the remaining non-immune ash trees will be gone by the time all is said and done. It’s darkest before the dawn, especially in an ash forest.
[Image description: An upward shot of ash tree limbs affected with dieback disease against a blue sky. Some limbs still have green leaves, others are bare.] Credit & copyright: Sarang, Wikimedia Commons. The copyright holder of this work has released it into the public domain. This applies worldwide. -
Engineering Daily Curio #3109
They’re turning greenhouse gases into rocky masses. A London-based startup has developed a device that can not only reduce emissions from cargo ships, but turn them into something useful. Cargo ships, as efficient as they are in some ways, still produce an enormous amount of emissions. In fact, they account for roughly three percent of all greenhouse gas emissions globally. Reducing their emissions even a little could have a big environmental impact, and there have been efforts to develop wind-based technology to reduce fuel consumption, as well as alternative fuels. The startup Seabound’s approach is to scrub as much of the carbon from cargo ship exhaust as possible. Its device is the shape and size of a standard shipping container and can be retrofitted onto existing ships. Once in place, it’s filled with quicklime pellets, which soak up carbon from the ship’s exhaust. By the time the exhaust makes it out to the atmosphere, 78 percent of the carbon and 90 percent of the sulfur are removed from it. The process also converts the quicklime back into limestone, sequestering the carbon.
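For those who like to see the chemistry spelled out, the capture step described above matches the textbook carbonation of lime; Seabound’s exact process isn’t detailed here, so the reaction below is just a simplified sketch of the general idea:
\[
\mathrm{CaO} + \mathrm{CO_2} \longrightarrow \mathrm{CaCO_3}
\]
Quicklime (calcium oxide) binds carbon dioxide from the exhaust to form limestone (calcium carbonate), which is how the captured carbon ends up locked away in solid, rocky form.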
Similar carbon scrubbing technology is already in use in some factories, so the concept is sound, but there are some downsides. The most common method of quicklime production involves heating limestone to high temperatures, which releases carbon from the limestone and creates emissions from the energy required to heat it. There are greener methods to produce quicklime, but supply is highly limited for the time being. In addition, the process requires an enormous quantity of quicklime, reducing the overall cargo capacity of the ships. Meanwhile, some critics believe that such devices might delay the development and adoption of alternatives that could lead to net zero emissions for the shipping industry. It’s not easy charting a course for a greener future.
[Image description: A gray limestone formation in grass photographed from above.] Credit & copyright: Northernhenge, Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication. -
Running Daily Curio #3108
They’re more than sneakers—they’re a tribute. In commemoration of the 45th anniversary of his “Marathon of Hope,” Adidas will soon be bringing back the very shoes Terry Fox wore during his run across Canada. The blue-and-white-striped shoes were worn by Fox in 1980 when he embarked on a journey that would go on to inspire millions. At the time, though, no one was looking at his shoes. Born on July 28, 1958, in Winnipeg, Manitoba, Fox was diagnosed with osteogenic sarcoma in 1977 at the age of 18. The disease didn’t claim his life then, but Fox lost his right leg just above the knee. By 1979, Fox had mastered the use of his artificial limb and completed a marathon, but he was determined to do more. Fox was driven by his personal experience with cancer, including his time in the cancer ward. He believed that cancer research needed more funding, and he came up with the idea to run across Canada to raise awareness.
Fox started his marathon on April 12, 1980, by dipping his prosthetic leg in the Atlantic Ocean, and in the first days of his journey, he attracted little attention. For months, Fox ran close to a marathon’s distance each day, and his persistence paid off. Over time, more and more people rallied behind Fox, standing along his route to cheer him on. Then, after over 3,300 miles, Fox started suffering from chest pains. The culprit was his cancer, which had spread to his lungs and forced him to stop his marathon prematurely. Fox passed away the following year, on June 28, 1981, and though he never managed to reach the Pacific side of Canada, he accomplished something more. He surpassed his goal of raising the equivalent of $1 from every single Canadian, a total of more than $24 million CAD. Fox also became a national hero for his dedication, and is the youngest Canadian ever to be made a Companion of the Order of Canada, the country’s highest civilian honor. Since his passing, the Terry Fox Foundation has raised a further $850 million CAD, and a statue in his honor stands in Ottawa, Ontario. A true hero of the Great White North.
[Image description: A statue of Terry Fox running, with another wall-like memorial behind it. In the background is a building and trees.] Credit & copyright: Raysonho, Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication. -
Science Daily Curio #3107
Beware the pharaoh’s… cure? A deadly fungus once blamed for “cursing” those who entered the tomb of King Tutankhamun has been engineered into a potential cancer treatment by researchers at the University of Pennsylvania. When a team of archaeologists opened up King Tutankhamun’s fabled tomb back in 1922, they couldn’t have known about the fate they had been dealt. In the years that followed, several people who had entered the tomb died of mysterious illnesses. Then, in the 1970s, a similar string of tragedies befell those who entered the 15th-century tomb of King Casimir IV in Poland. One such incident might have been dismissed as an unfortunate accident, but two meant that there was something else at play. Despite speculation about ancient curses, the likely culprit was found to be a fungus called Aspergillus flavus. It’s capable of producing spores that can survive seemingly indefinitely, and the spores contain toxins that can be deadly when inhaled by humans. As they say, though, it’s the dose that makes the poison. In this case, the proper dose can instead be a cure. Researchers studying the fungus found a class of compounds called RiPPs (ribosomally synthesized and post-translationally modified peptides), which are capable of killing cancer cells. Moreover, the compounds seem to be able to target only cancer cells without affecting healthy ones. That’s a huge improvement over conventional treatments like chemotherapy, which can harm a variety of healthy cells as much as they harm cancer cells. Another interesting fact is that the compounds can be enhanced by combining them with lipid molecules like those found in royal jelly (the nutrient-rich secretion that worker bees feed to developing queens), making it easier for them to pass through cell membranes. Fungus and royal jelly coming together to fight cancer? Sounds like a sweet (and savory) deal.
[Image description: A petri dish containing a culture of the fungus Aspergillus flavus against a black background. The fungus appears as a white-ish circle.] Credit & copyright: CDC Public Health Image Library, Dr. Hardin. This image is in the public domain and thus free of any copyright restrictions. -
Mind + Body Daily Curio
If only eating your veggies was always this sweet. On the surface, carrot cake seems like an odd confection. After all, we don’t typically use carrots (or other vegetables, for that matter) in sweet desserts these days. The dessert first took off due to World War II sugar rationing, when foods had to be used a bit more flexibly, and its full history stretches back even further.
Carrot cake has a flavor profile similar to traditional spice cake, since its batter often includes cinnamon and nutmeg. However, carrot cake has a chunkier texture, since finely grated carrots, walnuts, and sometimes raisins are also included in the batter. Carrot cake is almost always topped with cream cheese frosting, which gives the entire cake a slight tang.
No one knows exactly how and where carrot cake originated, but food historians have a few clues. In the Middle Ages, sugar was difficult for most people throughout Europe to afford. This resulted in dessert recipes that utilized sweet vegetables, like carrots and parsnips. One 1591 recipe from England for “carrot pudding” consisted of a carrot stuffed with meat and baked with a batter of cream, eggs, dates, clove, and breadcrumbs. Such puddings evolved over time and had many regional variations. Some were baked in pans and had crusts, like pies; others were mashed desserts similar to modern, American-style pudding. By the 1800s, carrots made their way into British cake batters. A recipe for carrot cake with some similarities to the modern version was published in 1814 by Antoine Beauvilliers, who once worked as Louis XVI’s personal chef. Similar recipes were commonly found in France, Sweden, England, and Switzerland over the following century, but none were as popular as the carrot cake we enjoy today.
Modern carrot cake was truly born during World War II, when sugar was rationed in both the U.S. and England. The British government promoted carrots as a healthy alternative to sugar, and even included a carrot cake recipe (albeit without the cream cheese frosting) in a wartime cooking leaflet. Carrot cake’s famous cream cheese frosting was first adopted in the U.S. after British carrot cake recipes made their way overseas. Americans were already using cream cheese frosting on tomato soup cake, which did actually use a can of condensed tomato soup as a key ingredient but mostly tasted like a traditional spice cake. Carrot cake’s flavor profile was very similar, so cream cheese frosting became a popular topping for it and remained so even after tomato soup cake fell into obscurity. Eventually, even Europeans came to adopt cream cheese frosting for their cakes. Unlike many World War II recipes, carrot cake has managed to retain its popularity to this day. Once you’ve used soup in your cake, carrots don’t seem like such a strange addition anymore.
[Image description: A large carrot cake and a slice of carrot cake on white plates with carrot-shaped decorations.] Credit & copyright: Muago, Wikimedia Commons. -
Science Daily Curio #3106
Here’s a look at everything, near and far. Astronomers at the newly operational Vera C. Rubin Observatory just released the first batch of images from the state-of-the-art facility, and it’s revealing new things about the solar system while giving a clearer view of the greater universe. Named after American astronomer Dr. Vera C. Rubin, the observatory was designed to push the boundaries of what is possible with ground-based telescopes. That’s fitting, considering that Rubin herself pushed the envelope of astrophysics during her lifetime. Her most significant achievement was finding compelling evidence for the existence of dark matter.
Located on Cerro Pachón in Chile, the Rubin Observatory is equipped with the largest camera ever built, capable of capturing 3,200-megapixel images. Its main purpose—for now—is to survey the sky continuously for the next ten years. Its decade-long mission is off to a strong start with an initial batch of images featuring distant galaxies and Milky Way stars in unprecedented clarity. Its first ten hours of observation alone revealed 2,104 previously unseen asteroids within the solar system (thankfully, none of them are on their way to crash into Earth anytime soon). When all is said and done, the observatory will gather around 500 petabytes’ worth of images, a veritable treasure trove of space imagery. It also has one other purpose. Its ability to observe dim and distant objects could prove useful in the search for the fabled “Planet Nine,” which (if it exists) orbits the sun every 10,000 to 20,000 years.
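To get a feel for those numbers, here’s a rough, unofficial Python calculation of how many full-camera exposures such an archive could hold. The two-bytes-per-pixel raw size is an assumption made for illustration, not a Rubin Observatory specification.

```python
# Back-of-the-envelope scale of the Rubin Observatory's image archive.
PIXELS_PER_IMAGE = 3_200e6   # 3,200-megapixel camera
BYTES_PER_PIXEL = 2          # assumed ~16-bit raw data (illustrative only)
ARCHIVE_BYTES = 500e15       # roughly 500 petabytes over the survey

image_bytes = PIXELS_PER_IMAGE * BYTES_PER_PIXEL
exposures = ARCHIVE_BYTES / image_bytes

print(f"One raw exposure: about {image_bytes / 1e9:.1f} GB")
print(f"Exposures that fit in 500 PB: about {exposures / 1e6:.0f} million")
```

Even with that crude assumption, the archive works out to tens of millions of full-camera exposures.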
[Image description: A starry night sky with a line of dark trees in the foreground.] Credit & copyright: tommy haugsveen, Pexels -
Science Daily Curio #3105
You just can’t beat this heat. This year’s summer is getting off to a brutal start for much of the U.S. as a heat dome stretches over multiple states. Heat domes are defined by suffocating heat and humidity, which work together to make it feel even hotter than it actually is. While heat domes can cause heat waves, the two meteorological phenomena are not the same. The source of a heat dome’s elevated temperatures and humidity is a lingering high-pressure system in the atmosphere that prevents heat on Earth’s surface from rising. The high pressure forms when the jet stream weakens and deviates from its normal course. Until the jet stream corrects itself, the heat dome will persist, and the longer it lasts, the worse it gets. Because the high pressure also prevents cloud formation, the sun’s rays beat down on the ground, making the heat dome hotter over time. Temperatures can easily exceed 100 degrees Fahrenheit, and it can feel tens of degrees hotter. Sometimes a heat dome will dissipate after just a few days, but one can also linger for weeks.
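That “feels like” figure is usually computed as the heat index, which folds relative humidity into the air temperature. Here’s a minimal Python sketch using the U.S. National Weather Service’s Rothfusz regression; it leaves out the adjustments the NWS applies at unusually low or high humidity, so treat it as an approximation for hot, muggy conditions.

```python
def heat_index_f(temp_f: float, rel_humidity: float) -> float:
    """Approximate heat index ("feels like" temperature) in degrees Fahrenheit.

    Uses the NWS Rothfusz regression, intended for heat index values of
    roughly 80 degrees F and above; the low-range formula and extreme-humidity
    adjustments are omitted for brevity.
    """
    t, rh = temp_f, rel_humidity
    return (-42.379
            + 2.04901523 * t
            + 10.14333127 * rh
            - 0.22475541 * t * rh
            - 6.83783e-3 * t * t
            - 5.481717e-2 * rh * rh
            + 1.22874e-3 * t * t * rh
            + 8.5282e-4 * t * rh * rh
            - 1.99e-6 * t * t * rh * rh)

# Example: 100 degrees F at 50 percent relative humidity
print(f"Feels like: {heat_index_f(100, 50):.0f} degrees F")
```

At 100 degrees Fahrenheit and 50 percent relative humidity, the formula returns a heat index of about 118 degrees, right in line with the “tens of degrees hotter” figure above.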
In 1995, a particularly devastating heat dome claimed over 700 lives in the Chicago area in less than a week. Even worse, a heat dome over the southern Plains back in 1980 claimed around 10,000 lives. Part of what makes a heat dome so dangerous isn’t just the heat itself, but the humidity, which keeps sweat from evaporating efficiently and carrying excess heat away from our bodies. When a heat dome forms over a given area, it’s best to avoid venturing outside. The best policy is to stay in a climate-controlled area and drink plenty of water until the heat dome dissipates. Some problems are better avoided than faced head-on.
[Image description: The sun shining above a treetop in a clear blue sky.] Credit & copyright: TheUltimateGrass, Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication. -
Biology Daily Curio #3104
Chromosomes are fundamental to creating life, but you can have too much of a good thing. Just one extra copy of chromosome 21 is responsible for causing Down syndrome, which itself causes many different health problems. Now, scientists at Mie University in Japan have developed a way to remove the extra chromosome using CRISPR technology. The condition of carrying an extra copy of chromosome 21 is called trisomy 21. Someone born with this extra chromosome has 47 total chromosomes, rather than the usual 46. This results in a range of health effects, including congenital heart problems and cognitive issues.
Until recently, genetic disorders like Down syndrome were considered untreatable, but medical advancements have been changing things. Back in 2023, the FDA approved Casgevy and Lyfgenia, both of which are cell-based gene therapies to treat sickle cell disease (SCD) in patients ages 12 and older. The treatments were developed using CRISPR-Cas9, which uses enzymes to precisely target the parts of the DNA strand responsible for the disease. It’s the same technology used by the scientists at Mie University, who targeted the extra chromosome in a process called allele-specific editing, or, as one of the researchers described it, “Trisomic rescue via allele-specific multiple chromosome cleavage using CRISPR-Cas9 in trisomy 21 cells.” The process was performed on lab-grown cells, which quickly recovered and began functioning like any other cells. It’s unlikely that this new development will signal an immediate reversal of Down syndrome, as it will be a while before the treatment can undergo human trials. One particular hurdle is that the treatment can sometimes target healthy chromosomes. Still, it shows that CRISPR-Cas9 can be used to remove entire chromosomes and that cells affected by trisomy 21 can make a full recovery with treatment. That’s a lot of medical advancement in one crisp swoop.
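The core idea behind allele-specific editing is that guide RNAs are designed to match sequence variants found only on the chromosome copy being targeted, so Cas9 cuts that copy while sparing the other two. The toy Python sketch below illustrates just that selection step with made-up sequences; it’s a conceptual illustration of allele-specific guide selection in general, not the Mie University team’s actual protocol.

```python
import re

GUIDE_LEN = 20  # SpCas9 guides are 20 nucleotides long, followed by an NGG PAM

def candidate_guides(seq: str):
    """Yield (position, guide) pairs for every 20-mer followed by an NGG PAM.

    Only the forward strand is scanned to keep the toy short; a real pipeline
    would also scan the reverse complement and check off-targets genome-wide.
    """
    for i in range(len(seq) - GUIDE_LEN - 2):
        pam = seq[i + GUIDE_LEN : i + GUIDE_LEN + 3]
        if re.fullmatch(r"[ACGT]GG", pam):
            yield i, seq[i : i + GUIDE_LEN]

def allele_specific_guides(target: str, others: list[str]):
    """Keep guides that appear in the targeted allele but in none of the others."""
    return [(pos, g) for pos, g in candidate_guides(target)
            if all(g not in other for other in others)]

# Made-up example: the targeted copy differs from the retained copies
# (assumed identical here for simplicity) by a single-letter variant.
extra    = "ATGCTAGCTAGGATCCGATCCGGATCGATCGTAGCTAGGCTAGCTAGG"
retained = "ATGCTAGCTAGGATCCGATTCGGATCGATCGTAGCTAGGCTAGCTAGG"

for pos, guide in allele_specific_guides(extra, [retained]):
    print(f"Allele-specific guide at position {pos}: {guide}")
```

Only the guide candidates that overlap the distinguishing variant survive the filter; the published approach applies the same logic across many sites at once so that the targeted chromosome is cut repeatedly and eliminated.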
[Image description: A diagram of a DNA strand with a key for each labeled part. The key from top to bottom reads: Adenine, Thymine, Cytosine, Guanine, and phosphate backbone.] Credit & copyright: Forluvoft, Wikimedia Commons. The copyright holder of this work has released it into the public domain. This applies worldwide. -
Gardening Daily Curio #3103
They may be small, but they’re no saplings! The Brooklyn Bonsai Museum is celebrating its 100th birthday by inviting the public to learn more about the ancient art of bonsai, which has roots that go beyond just Japan. Bonsai involves growing trees in containers, carefully pruning and maintaining them to let them thrive in a confined space. When done properly, a tree kept in such a manner will resemble a full-sized tree in miniaturized form and not just look like a stunted specimen. Experienced practitioners can also guide the growth of the trunk and branches to form artful, often dramatic shapes.
Bonsai has been gaining popularity in the U.S. for the past century, but its history goes all the way back to 8th-century China, when dwarf trees were grown in containers and cultivated as luxury gifts. Then, in the Kamakura period, which lasted from the late 12th century to the early 14th century, Japan adopted many of China’s cultural and artistic practices and sensibilities, including what the Japanese would come to call bonsai.
For a tree to be a bonsai tree, it has to be grown in a shallow container, which limits its overall growth while still allowing it to mature. While most bonsai trees are small enough to be placed on a desk or table, it’s not really the size that dictates what is or isn’t a bonsai. As long as it’s grown in a shallow container, a tree can be considered bonsai. In fact, there are some downright large specimens that dwarf their human caretakers. A category of bonsai called “Imperial bonsai” typically ranges from five to seven feet, but the largest bonsai in existence is a sixteen-foot red pine at the Akao Herb & Rose Garden in Shizuoka, Japan. Bonsai trees can also live just as long as their container-free counterparts. The oldest currently in existence is a Ficus retusa at the Crespi Bonsai Museum in Italy, which is over 1,000 years old and was originally grown in China, presumably before the practice even spread to Japan. If this tree ever falls—in a forest or not—you can bet that someone’s going to make a lot of noise.
[Image description: A potted bonsai tree sitting on a table with a bamboo fence in the background.] Credit & copyright: Daderot, Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication.