Catnip is best known for producing bouts of euphoria in cats of all sizes, from house cats to their wild brethren (including bobcats, jaguars, and lions). In addition to giving felines a healthy release from stress and anxiety, however, some studies show it offers an additional perk: repelling mosquitoes. Related to mint, basil, and lemon balm, Nepeta cataria (aka catnip) emits a chemical compound called nepetalactone when crushed, which naturally wards off some mosquito species. What’s more, catnip-addled cats often chew and rub the leaves into their coats, an action that (unknowingly) spreads the natural bug repellent around. While catnip is all fun for cats, it’s not so great for mosquitoes; Nepeta leaves may be effective at fending off pests because they cause pain to the buzzing bugs. Researchers initially theorized that catnip’s aroma alone was enough to repel the insects, but some studies show mosquitoes exposed to nepetalactone actually feel pain or irritation, much the way humans experience the sensation of wasabi.
Domesticated house cats have around 20 facial whiskers, along with a set on the back of their front legs called carpal whiskers. Since most cats have poor up-close vision, these special strands detect movement from captured prey held in their paws.
Indoor cats may not need mosquito protection, but catnip still provides a safe, effective way for them to calm down — although scientists aren’t fully sure how it works. It’s possible that catnip affects cat brains in the same way opioids work in humans to relieve pain: One study found that cats who were given naloxone — a lifesaving medication that blocks opioid receptors and is used to treat narcotics overdoses — didn’t have a reaction to catnip. Even so, catnip doesn’t work on all cats. Kittens won’t respond to the plant’s minty leaves until 3 to 6 months old; plus, catnip sensitivity is hereditary, and an estimated 50% of cats don’t experience any reaction at all. But if your cat just so happens to turn up its nose at fresh catnip, don’t worry. Humans can use it for a calming tea similar to chamomile.
Scientists think mosquitoes may be key to developing painless needles.
From the human perspective, we don’t coexist well with mosquitoes and their seemingly voracious summer appetites. But some researchers believe we can learn more about pain-free blood extraction from these ancient pests, which have inhabited Earth 200 million years longer than humans. Only female mosquitoes bite, and they do so by using their proboscis, a long tube that pierces the skin and draws blood. These miniature needles use a combination of features for undetected and painless feeding: a chemical in mosquito saliva that numbs the bite zone, a serrated edge that more easily pierces the skin, and tiny vibrations that reduce how much force a mosquito needs to puncture its prey. Scientists think incorporating these elements into new needles — which haven’t seen major improvements in decades — could be pivotal in developing microneedles that deliver pain-free vaccinations and medications. Another benefit: Gentle injections could reduce trypanophobia (aka the fear of needles), experienced by an estimated 66% of kids and 25% of adults.
Nicole Garner Meeker
Writer
Nicole Garner Meeker is a writer and editor based in St. Louis. Her history, nature, and food stories have also appeared at Mental Floss and Better Report.
top picks from the Inbox Studio network
Interesting Facts is part of Inbox Studio, an email-first media company. *Indicates a third-party property.
In the sibling department, every president has had, at minimum, one half-brother or half-sister. However, a few presidents are sometimes considered to have been raised as only children — most notably Franklin D. Roosevelt, whose only half-sibling (his father’s oldest son, James) was 28 years FDR’s senior. Bill Clinton’s half-brother, Roger, is about a decade younger than he is. Barack Obama also has a 10-year age gap with his younger half-sister Maya, although he learned later in life that he had at least five more half-siblings on his father’s side. Meanwhile, Gerald Ford was the only child his mother and father produced, but he was raised with three younger half-brothers after his mother remarried, and, as a teen, he learned that he also had three younger half-sisters through his father.
Almost one-third of U.S. presidents were born in either Ohio or Virginia.
Of the 45 people to serve as president so far, seven were born in Ohio, while eight were born in Virginia when it was either a colony or a state. Only 21 states have produced a president.
The no-only-children rule isn’t the only presidential birth quirk. Fifteen presidents are firstborns. Just seven occupants of the Oval Office have been the babies of their families, among them Andrew Jackson and Ronald Reagan. That means 23 presidents have fallen somewhere in the middle of the birth order, with the likes of Grover Cleveland and Herbert Hoover being true middle children. (They were born to families with nine and three offspring, respectively.) John Tyler, the 10th president, fathered the most youngsters himself: 15.
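As a quick sanity check, the tallies above do account for every president; here's a minimal sketch (the counts come from the article itself, not an independent dataset):

```python
# Arithmetic check on the presidential tallies cited above.
total_presidents = 45

# Birth-order breakdown from the article:
firstborns, youngest, middle = 15, 7, 23
assert firstborns + youngest + middle == total_presidents

# Birthplace tally: 7 presidents born in Ohio, 8 in Virginia --
# 15 of 45, or one-third of all presidents to date.
ohio_virginia = 7 + 8
print(f"{ohio_virginia / total_presidents:.0%}")  # prints "33%"
```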
Before she was a first lady, only child Laura Bush worked as a public school teacher and librarian.
The only U.S. president to get married at the White House was Grover Cleveland.
Grover Cleveland is often remembered for being the first president elected to nonconsecutive terms. Yet he was also the only U.S. president to serve as a groom while in office. His wedding to Frances Folsom fell on June 2, 1886, less than 15 months after his first inauguration. The bride, 22, was a recent Wells College graduate, and Cleveland was the 49-year-old commander in chief. Once law partners with Frances’ father, Oscar, Cleveland had known her since she was an infant. After Oscar died in an 1875 carriage accident, Cleveland oversaw the Folsom estate and Frances’ schooling. A decade later, Cleveland proposed in a letter; the pair kept their engagement secret until five days before the wedding. The ceremony occurred in the Blue Room and was attended by 28 guests. Rather than have the bride vow to “honor, love, and obey,” Frances and Cleveland replaced the last word with “keep.” During Cleveland’s time as America’s 24th president, Frances also gave birth to the lone child ever born to a sitting president in the White House: Esther Cleveland came into the world on September 9, 1893. The couple’s second child of five, Esther was born in her parents’ bedroom.
Jenna Marotta
Writer
Jenna is a writer whose work has appeared in The New York Times, The Hollywood Reporter, and New York Magazine.
Sculpture from classical antiquity is often presented in museums, textbooks, and more as a world of white marble. Whether unearthed from the ground or perched upon crumbling temples, these supposedly pale masterpieces influenced Renaissance artists such as Michelangelo, who — in the throes of an obsession with classical art — created sculptures meant to highlight the natural beauty of stone. Other Renaissance masterpieces, such as Raphael’s early 1500s fresco “The School of Athens,” placed colorful figures of antiquity against a backdrop of white marble. But these representations aren’t an accurate portrayal of the past: Ancient Athens and Rome were full of eye-popping color, with statues sporting vibrant togas and subtle skin tones — in fact, no sculpture was considered complete without a dazzling coat of paint.
European Renaissance artists invented oil painting.
The very first known oil paintings were created far from Europe. In the seventh century CE, Buddhist monks in Afghanistan used oil paints to create murals on cave walls.
Over time, these impermanent paints — left unprotected from the elements — wore away, leaving behind unblemished stone and a false legacy of monotone marble. This perception of the “whiteness” of antiquity was cemented in the 18th century, tied to racist ideals that equated the paleness of the body with beauty. When German scholar Johann Winckelmann (sometimes called the “father of art history”) glimpsed flecks of color on artifacts found near the ancient Roman cities of Pompeii and Herculaneum, he brushed off the work as Etruscan — a civilization he considered beneath the grandeur of ancient Rome. Besides bits of color still clinging to some statues, other evidence of the Mediterranean’s colorful past survives in frescoes from Pompeii (which even depict a Roman in the act of painting a statue); the Greek playwright Euripides also mentions colored statues in his work Helen. In recent decades, the art world has been busy recreating the colorful past of Western civilization as archaeologists use UV light to illuminate certain pigments and art exhibits travel the world to unshroud the colorful palette of these ancient civilizations.
The oldest evidence of Homo sapiens making paint comes from a prehistoric cave in South Africa.
The Egyptian pyramids were originally polished white.
Even 4,500 years after its construction, the Great Pyramid of Giza never fails to impress. The largest of the Giza pyramids at 455 feet tall, it’s the last survivor of the Seven Wonders of the Ancient World and hosts several million visitors every year. However, the pyramid would likely be a sorry sight to the ancient Egyptians who witnessed its beauty back in the 26th century BCE. Today, its earthy color matches the surrounding desert, but archaeologists now believe the original structure was encased in highly polished white limestone, making it appear white and glistening. Some experts believe the capstones, called pyramidions, were also plated in gold. One leading theory suggests these limestone casings were repurposed millennia later to build mosques, a process that exposed the pyramids we know and love today.
Darren Orf
Writer
Darren Orf lives in Portland, has a cat, and writes about all things science and climate. You can find his previous work at Popular Mechanics, Inverse, Gizmodo, and Paste, among others.
The quintet of nations lacking their own airports — Vatican City, San Marino, Monaco, Liechtenstein, and Andorra — all reside within Europe. Perhaps unsurprisingly, all of these nations are pretty tiny. Vatican City and Monaco are in fact the two smallest countries in the world, covering just 109 and 494 acres, respectively. Vatican City, famously the home of the pope, is located inside Rome, and many travelers use one of Rome’s two airports to reach it. San Marino (23.5 square miles in area) is also located inside Italy; by car, it’s approximately 9 miles from Federico Fellini International Airport, near Rimini.
Lightning strikes the average commercial plane about once a year, or roughly once every 1,000 flight hours. Yet crew members and passengers rarely notice. Each aircraft’s wings and tail hold static wicks that divert electricity away from navigation and communication radios and back into the atmosphere.
The diminutive Monaco, meanwhile, is the globe’s most densely populated sovereign state, with a citizenry of slightly less than 40,000 in less than 1 square mile. Since three sides of Monaco border France, it’s unsurprising that a French airport — Nice Côte d’Azur — is the nearest spot to arrive by commercial flight. Liechtenstein is bordered by Austria and Switzerland, and many visitors use Switzerland’s St. Gallen–Altenrhein to arrive. Finally, there’s Andorra (about 180 square miles in size), a destination in the Pyrenees mountains that borders France and Spain. Andorra’s capital, Andorra la Vella, lies within 89 miles of five airports — even if they’re all in other countries.
Most planes have blue seats because the color has a calming effect and is easy to clean.
You can buy the contents of unclaimed airport luggage.
Airlines around the world misplace about 25 million bags each year. Whenever a mysterious suitcase turns up, the airlines begin a 90-day tracing period to try to reunite the bag with its owner. If the tracing period is unsuccessful, all major U.S. airlines sell the luggage to the Scottsboro, Alabama-based company Unclaimed Baggage. Founded by Doyle Owens in 1970, the company cherry-picks a fraction of items for resale while tossing or donating the rest. For its first several decades, Unclaimed Baggage operated solely as a 50,000-square-foot store stocked with everything from apparel and accessories to musical instruments and sports equipment. (A lucky buyer once took home a loose 41-carat emerald for $17,000, less than half of its appraised value.) In honor of its 50th anniversary, Unclaimed Baggage launched a curated website in 2020, so bargain hunters can now shop the contents of former checked trunks and carry-ons without making the trip to Alabama.
Jenna Marotta
Writer
The woody, warming spice we sprinkle with abandon on top of holiday cookies, baked goods, and seasonal coffees is native to Sri Lanka, Myanmar, and India. But very few people knew where cinnamon came from when merchants first began selling spices throughout Europe, Asia, and Africa as far back as 3,000 years ago — and spice traders capitalized on that lack of knowledge to charge high prices. Harvested from the inner bark of Cinnamomum trees, cinnamon has been used for thousands of years as medicine, for religious practices and funerals, and in cuisine, but with a big price tag: It was once considered more precious than gold.
Several tree species produce cinnamon, but only one is considered the real deal. Bark from Cinnamomum verum (aka Ceylon cinnamon) has a lighter flavor and a heftier price tag than other kinds (aka cassia cinnamon). Yet many taste-testers say they can’t tell the difference.
In an effort to conceal cinnamon’s origins from competitors and explain the extravagant markup to wary customers, spice traders of the past provided elaborate backstories. By some fifth-century accounts, cinnamon traders asserted that collecting the spice was a dangerous task thanks to angry “winged creatures” that lived in the trees; cinnamon harvesters supposedly donned protective outerwear made of thick hides and risked their personal safety to collect a few measly pieces of cinnamon bark. Other vendors claimed cinnamon was transported from far-off lands by birds who used it as nesting material. (In this tale, harvesting cinnamon sticks from nests required a cow sacrifice to provide the birds with a meaty distraction.) Yet another story declared that cinnamon grew in dangerous, snake-infested valleys. Cinnamon’s origins remained an enigma for centuries, but luckily for chefs and bakers today, the secret eventually got out thanks to global exploration brought on by a surging interest in spices. Now, the flavoring is a low-cost mainstay in modern pantries.
Sticks of curled and dried cinnamon bark are called quills.
Scientists have recreated a cinnamon perfume Cleopatra may have worn.
What did our ancestors smell like? Archaeologists and historians have pieced together how numerous cultures ate, dressed, relaxed — in short, lived — but it’s generally been harder to tell how people once smelled. Thanks to one archaeological find, however, we have a clue as to how Egyptians — perhaps even Cleopatra, a royal known for a cinnamon-laced scent so seductive, it’s credited with attracting Julius Caesar — may have perfumed themselves. In 2012, archaeologists unearthed ruins north of Cairo suspected to be an ancient Egyptian perfume factory; that dig inspired a team of historians and perfume experts to recreate fragrances that hadn’t been worn in nearly 2,000 years. Using recipes from ancient Greek texts that may have borrowed from Cleopatra’s own formulas — a book of recipes that no longer exists but was often referenced by other perfumers — researchers blended cinnamon, myrrh, and other herbs with olive oil to create a viscous fragrance akin to what ancient Egyptians once donned. While we’ll never know for sure if Cleopatra wore this specific scent, the experiment gives us an olfactory link with history.
Nicole Garner Meeker
Writer
Smell is one of humanity’s most important senses. It’s intimately tied to taste and memory, and it plays a pivotal role in detecting danger, whether from fires or rotten food. It may even play a role in how we choose our mates.
One little-known aspect of smell is how it fluctuates throughout the day. According to research conducted at Brown University and published in the journal Chemical Senses in 2017, our sense of smell is partly regulated by our circadian rhythm, the internal biological process that governs the human sleep-wake cycle. (If you’ve ever traveled across an ocean, the resulting jet lag is a disruption of this rhythm.)
Smell in humans begins declining after the age of 30.
At birth, humans can sense only certain smells, such as a mother’s scent. However, our sense of smell really takes off around age 8 and usually remains stable until around age 50. After that, our olfactory powers decline, dropping off precipitously after age 70.
The Brown study analyzed 37 teenagers for a week and measured their sense of smell against their levels of the sleep-inducing hormone melatonin. A rise in melatonin meant that the body’s nighttime circadian rhythm was kicking in, essentially saying, “It’s time to sleep.” The results showed that the teens’ sense of smell was at its highest in the evening, around 9 p.m., or what the researchers called the beginning of “biological night.” Conversely, their sense of smell was at its lowest between the hours of 3 a.m. and 9 a.m., when the body has little need for sniffing. Scientists can only guess at why the body kicks its olfactory receptors into high gear at 9 p.m. — it may help humans ensure satiety following the last meal of the day, scan for nearby threats before sleeping, or act as a means for encouraging that aforementioned mate choice.
The animal with the strongest sense of smell is the African elephant (Loxodonta).
Humans are more sensitive than dogs when it comes to certain scents.
The human nose often takes a back seat to other famous sniffers in the animal kingdom. Dogs, pigs, and elephants have nasal biology jam-packed with olfactory receptors, which makes them particularly gifted at detecting scents. But no two odors are exactly alike, and research from Rutgers University argues that the human nose — with our measly 400 different kinds of olfactory receptors — can actually sniff out smells important to humans better than even the most skillful bloodhound. For example, human noses are more sensitive to amyl acetate, a main odorant found in bananas, because ripe fruit was important for our survival thousands of years ago. For dogs, finding such fruit was much less important, and thus biologically deprioritized. Human noses can also sniff out the smell of fresh rain on dirt, a scent known as “petrichor,” better than a shark can smell blood in the sea, likely due to our essential need for fresh water. So don’t write off your sense of smell — instead, take pride in what your nose knows.
Darren Orf
Writer
Of all the rocks on Earth, almost none are eaten directly by humans — and even fewer play a biological role in our survival. There is one striking exception to this, however: salt.
A mineral is a single natural substance with a specific chemical makeup, while a rock is a mixture of one or more minerals. Natural salt in its mineral form is called halite, a crystalline version of sodium chloride (NaCl). Halite is a mineral, but it commonly occurs in large natural deposits known as rock salt — a rock made mostly of halite. When this material is mined and refined for everyday use, it becomes table salt, a purified and often iodized form of the same compound.
Salt and pepper became a popular pairing during World War II.
Though wartime rationing changed how people cooked, salt and pepper had already been a standard pairing since 17th-century France, when black pepper became the preferred spice for seasoning food without overpowering it.
Unlike most minerals, halite is not only edible in small amounts; it’s biologically essential. When dissolved in water, it separates into sodium (Na⁺) and chloride (Cl⁻), charged particles the body depends on as electrolytes. Those ions help regulate fluid balance, enable nerve signaling, and support muscle contraction in the human body.
While many minerals contain elements the body needs — including calcium, iron, and magnesium — those are typically obtained indirectly through food or supplements, not by consuming the minerals themselves. Halite stands apart as the rare case in which a naturally occurring rock is routinely processed, consumed, and required for human physiology.
For thousands of years, this simple crystal has shaped human history. It was used to preserve food long before refrigeration and became so valuable that people used it as currency in ancient markets. It helped power trade routes and influence the rise of civilizations, making salt one of the most important minerals in human history.
The largest underground salt mine in the world is in Goderich, Ontario, located 1,800 feet under Lake Huron.
The placement of salt on the table once determined where you sat.
In medieval Europe, salt was more than a seasoning; it was also a symbol of status. At formal feasts, a large salt container called a salt cellar was placed prominently on the table, often near the host. A guest’s position in relation to the salt was significant: Those seated closer to the host were considered “above the salt,” while those farther away were “below the salt,” a phrase that came to signal lower social standing. Because salt was expensive and essential for preserving food, it became a natural marker of rank, wealth, and honor.
Kristina Wright
Writer
Kristina is a coffee-fueled writer living happily ever after with her family in the suburbs of Richmond, Virginia.
If we could track our breaths the way many people do steps or exercise, the results would be astonishing. While there’s no app for that, scientists estimate that an average person takes 20,000 to 25,000 breaths over the course of 24 hours. That breaks down to between 12 and 18 breaths per minute for an adult. Children typically breathe more quickly, up to 60 breaths per minute (or as many as 86,000 a day), which tapers down to the adult rate by their teenage years. All those inhales and exhales add up, and by age 50, the average human has taken at least 400 million breaths. Each one helps fuel our bodies; oxygen is a crucial component needed for our most basic functions, like moving muscles, digesting food, and even thinking.
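The figures above are easy to sanity-check with simple arithmetic; here's a minimal sketch, using a mid-range adult rate of 15 breaths per minute (an illustrative assumption, not a figure from the article):

```python
# Sanity-checking the breathing figures cited above.
MINUTES_PER_DAY = 60 * 24  # 1,440

# The article's 12-18 breaths per minute brackets its
# 20,000-25,000 breaths-per-day estimate:
low, high = 12 * MINUTES_PER_DAY, 18 * MINUTES_PER_DAY  # 17,280 and 25,920

# An assumed mid-range rate of 15 breaths/minute lands inside it:
mid_daily = 15 * MINUTES_PER_DAY  # 21,600 per day

# By age 50, ignoring the faster childhood rate (which only pushes
# the total higher), that's roughly 400 million breaths:
by_age_50 = mid_daily * 365 * 50
print(f"{by_age_50:,}")  # prints "394,200,000"
```

Factoring in the quicker breathing of childhood (up to 60 breaths per minute) lifts the lifetime total past the 400 million mark cited above.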
Most images of lungs show the organ as being symmetrical, but that’s not true. The left lung has two internal chambers (called lobes, which fill with air when we breathe), while the right side has three. The left lung is also slightly smaller to make room for the heart.
Breathing tends to be an automatic process, but some scientists say that not everyone does it right. Mouth breathing isn’t just bothersome when you’re sick (or to those around you) — it’s also inefficient for your body. Inhaling through the nose helps heat and pressurize air so that the lungs can extract oxygen efficiently, and nasal hairs and cilia stop particles like pollen and pollution from entering the lungs; the mouth can do neither job. Mouth breathing has also been linked to sleep apnea, snoring, and even asthma. Amazingly, it can change the structure of your face over time; children who primarily breathe through their mouths are more likely to develop narrow mouths and misaligned teeth.
The aptly named lungfish has both gills and lungs, used to breathe in and out of water.
Scientists have found parasites that don’t need oxygen.
Breathing is a requirement for most living creatures on Earth, except one: a parasitic, water-dwelling blob called Henneguya salminicola. In 2020, a group of scientists from Israel, France, and the U.S. announced they had discovered that the parasite — which is microscopic and typically infects salmon — doesn’t appear to breathe. In fact, it could be the only known nonbreathing animal on the planet. H. salminicola belongs to the same phylum as jellyfish, which do breathe by absorbing the oxygen in water directly through their skin; however, H. salminicola lacks a mitochondrial genome, the DNA that powers the cellular machinery for turning oxygen into fuel. Earth is home to many simple, single-celled organisms (like yeast and bacteria) that don’t need to breathe, but H. salminicola stands out because it’s the first known multicellular animal that’s not dependent on oxygen — and researchers aren’t sure why. One theory is that the parasite could get the power it needs to survive by stealing protein from its fish hosts.
Nicole Garner Meeker
Writer
Only one location outside the nation’s capital appears on the seven U.S. banknotes currently in circulation. The $5 bill features the Lincoln Memorial, while the $10 features the Treasury Building — fitting, since Alexander Hamilton, whose visage adorns the obverse, served as the Treasury Department’s first secretary. The $20 and $50 bills finish the architectural tour of Washington with the White House and Capitol, respectively. The $1 is notably absent from this list, as the only building-like structure on its reverse side is a pyramid with a floating eye — and no such pyramid exists in the U.S. (or the world).
$100 is the highest legal denomination in U.S. dollars.
The Benjamin isn’t the highest Federal Reserve note ever issued in the U.S. — that’d be the $10,000 bill. First printed in 1918 and featuring the face of Salmon P. Chase, Abraham Lincoln’s treasury secretary, the note was discontinued in 1969, when the government retired all large-denomination bills.
The $100 bill switches things up by featuring Independence Hall in Philadelphia. Although an immensely important building — it’s the site where revolutionaries signed the Declaration of Independence and where the Founding Fathers crafted the U.S. Constitution — it’s also a thematic choice, seeing as Benjamin Franklin (depicted on the obverse of the bill) is undoubtedly Philadelphia’s most famous historical figure. But this isn’t Independence Hall’s only appearance on U.S. currency. A very small section of the interior of the building is also displayed on the 1976 reissue of the $2, which includes a reproduction of John Trumbull’s 1818 painting “Declaration of Independence.”
The first U.S. banknotes were created in response to the Civil War.
No one knows when the Liberty Bell cracked.
Lots of myths surround the Liberty Bell, which hung in the bell tower of Independence Hall for nearly a century (and whose image is also woven into the $100 bill as a security measure). One myth explains how the bell pealed on July 4, 1776, which likely isn’t true; another says it cracked in 1835 to announce the death of Chief Justice John Marshall (also not true).
The story of the Liberty Bell begins in 1751; it was originally cast by a foundry in London, but cracked on its first test ring in Philadelphia. Metalworkers melted it down and cast a new one, which is the Liberty Bell we know today. For 90 years, the Liberty Bell alerted Philadelphians to news or, in Benjamin Franklin’s case, to go to work, as he once wrote in a letter: “The Bell rings, and I must go among the Grave ones, and talk Politicks.” No one recorded when or how the Liberty Bell began to crack, but the most likely reason is the simplest — hard use and time. What historians do know is that metalworkers tried to repair the crack for George Washington’s birthday in 1846, but only made the damage worse. Today, no one alive has heard the Liberty Bell ring with its original clapper, but a digital recreation of the bell’s sound can help transport you back to the early days of the republic.
Darren Orf
Writer
A few years into his reign, Russian Czar Peter I (aka “Peter the Great”) decided to study abroad. Worried that Russia was lagging behind in key technological areas, especially when it came to shipbuilding, Peter traveled incognito from 1697 to 1698 to various European countries, including Prussia, Holland, and England, in an effort to modernize his own nation. Afterward, with his newly learned shipbuilding know-how, he created Russia’s first navy.
In Rome, urine was prized for its ammonia, which was used in cleaning products and toothpaste. The urine trade was so lucrative that Emperor Vespasian placed a tax on it in 70 CE. When confronted about the new tax, he famously stated “pecunia non olet,” or “money doesn’t stink.”
But it wasn’t just maritime skills Peter learned on his “Grand Embassy.” He also picked up a few fashion and grooming ideas — including a particular interest in the freshly shaven chins of most Western European men. Determined to integrate Russia into the increasingly powerful club of European countries, Peter established (around 1705) a tax that fiscally punished anyone sporting a beard. The tax was progressive, with the well-to-do shelling out more for their facial adornments than the peasantry; nobility and merchants could pay as much as 100 rubles a year, while peasants might pay one kopek (1/100 of a ruble). Yet the tax was almost universally reviled — and even helped spark a few riots. The biggest opponent of the tax was the Russian Orthodox Church, which regarded clean-shaven faces as sinful. Despite this stiff opposition, Peter I stuck with the tax and was even known to shave off the beards of his guests at parties, much to the horror on their now-clean-shaven faces.
When Peter I visited Western Europe, he traveled incognito under the name Sergeant Pyotr Mikhaylov.
Sideburns are named after a Union general in the Civil War.
Sideburns have been found on the faces of several famous figures, from Alexander the Great to Charles Darwin, but it wasn’t until the U.S. Civil War (1861-1865) that the term “sideburns” came into being, thanks to a particularly hirsute Union general. Ambrose Burnside wasn’t much of a general: At the Battle of Antietam, his ineffective command meant his soldiers struggled to take a stone bridge (now called Burnside Bridge) and turned what could’ve been a Union victory into a draw. At Fredericksburg, things went from bad to worse, as Burnside led several failed assaults against Robert E. Lee’s forces. But what Burnside might’ve lacked in military acumen, he made up for with his luxurious facial hair, which connected his side-whiskers to his mustache (his chin remained clean-shaven). After the war, many men copied the general’s look, and these facial facsimiles were called “burnsides.” Over the years, the term eventually flipped into its modern spelling.
Darren Orf
Writer