Star that exploded in 1437 tracked to its current position

Some stars erupt like clockwork. Astronomers have tracked down a star that Korean astronomers saw explode nearly 600 years ago and confirmed that it has had more outbursts since. The finding suggests that what were thought to be three different kinds of stellar objects are actually the same type of system seen at different points in its life, offering new clues to the life cycles of stars.

On March 11, 1437, Korean royal astronomers saw a new “guest star” in the tail of the constellation Scorpius. The star glowed for 14 days, then faded. The event was what’s known as a classical nova explosion, which occurs when a dense stellar corpse called a white dwarf steals enough material from an ordinary companion star for its gas to spontaneously ignite. The resulting explosion can be up to a million times as bright as the sun, but unlike supernovas, classical novas don’t destroy the star.
Astronomer Michael Shara of the American Museum of Natural History in New York City and colleagues used digitized photographic plates dating from as early as 1923 to trace a modern star back to the nova. The team tracked a single star as it moved away from the center of a shell of hot gas, the remnants of an old explosion, thus showing that the star was responsible for the nova. The researchers also saw the star, which they named Nova Scorpii AD 1437, give smaller outbursts called dwarf novas in the 1930s and 1940s. The findings were reported in the Aug. 31 Nature.

The discovery fits with a proposal Shara and colleagues made in the 1980s. They suggested that three different stellar observations — bright classical nova explosions, dwarf nova outbursts and an intermediate stage where a white dwarf is not stealing enough material to erupt — are all different views of the same system.

“In biology, we might say that an egg, a larva, a pupa and a butterfly are all the same system seen at different stages of development,” Shara says.

Learning takes brain acrobatics

Peer inside the brain of someone learning. You might be lucky enough to spy a synapse pop into existence. That physical bridge between two nerve cells seals new knowledge into the brain. As new information arrives, some synapses form and strengthen, while others weaken, making way for new connections.

You might see more subtle changes, too, like fluctuations in the levels of signaling molecules, or even slight boosts in nerve cell activity. Over the last few decades, scientists have zoomed in on these microscopic changes that happen as the brain learns. And while that detailed scrutiny has revealed a lot about the synapses that wire our brains, it isn’t enough. Neuroscientists still lack a complete picture of how the brain learns.

They may have been looking too closely. When it comes to the neuroscience of learning, zeroing in on synapse action misses the forest for the trees.

A new, zoomed-out approach attempts to make sense of the large-scale changes that enable learning. By studying the shifting interactions between many different brain regions over time, scientists are beginning to grasp how the brain takes in new information and holds onto it.
These kinds of studies rely on powerful math. Brain scientists are co-opting approaches developed in other network-based sciences, borrowing tools that reveal in precise, numerical terms the shape and function of the neural pathways that shift as human brains learn.

“When you’re learning, it doesn’t just require a change in activity in a single region,” says Danielle Bassett, a network neuroscientist at the University of Pennsylvania. “It really requires many different regions to be involved.” Her holistic approach asks, “what’s actually happening in your brain while you’re learning?” Bassett is charging ahead to both define this new field of “network neuroscience” and push its boundaries.

“This line of work is very promising,” says neuroscientist Olaf Sporns of Indiana University Bloomington. Bassett’s research, he says, has great potential to bridge gaps between brain-imaging studies and scientists’ understanding of how learning happens. “I think she’s very much on the right track.”
Already, Bassett and others have found tantalizing hints that the brains that learn best have networks that are flexible, able to rejigger connections on the fly to allow new knowledge in. Some brain regions always communicate with the same neural partners, rarely switching to others. But brain regions that exhibit the most flexibility quickly swap who they’re talking with, like a parent who sends a birthday party invite to the preschool e-mail list, then moments later, shoots off a work memo to colleagues.
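A rough sense of how such a flexibility score works can be sketched in a few lines of code. The snippet below is a simplified illustration, not the pipeline Bassett’s group actually uses (their analyses assign modules with multilayer community detection on fMRI networks); the toy module labels are invented purely for demonstration.

```python
# A simplified sketch of a "flexibility" score: the fraction of consecutive
# time windows in which a brain region switches the network module it belongs to.
# (Illustrative only; real analyses first assign modules via community detection.)
import numpy as np

def flexibility(module_labels):
    """module_labels: array of shape (n_regions, n_windows), each entry the
    module a region belongs to in that time window. Returns one score per region."""
    switches = module_labels[:, 1:] != module_labels[:, :-1]
    return switches.mean(axis=1)

# Toy labels for 4 regions across 6 time windows (values are arbitrary module IDs).
toy = np.array([
    [1, 1, 1, 1, 1, 1],   # rigid region: never changes partners
    [1, 2, 1, 2, 1, 2],   # highly flexible region: switches every window
    [1, 1, 2, 2, 2, 2],
    [3, 3, 3, 1, 1, 3],
])
print(flexibility(toy))   # [0.  1.  0.2 0.4]
```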

In a few studies, researchers have witnessed this flexibility in action, watching networks reconfigure as people learn something while inside a brain scanner. Network flexibility may help several types of learning, though too much flexibility may be linked to disorders such as schizophrenia, studies suggest.

Not surprisingly, some researchers are rushing to apply this new information, testing ways to boost brain flexibility for those of us who may be too rigid in our neural connections.

“These are pretty new ideas,” says cognitive neuroscientist Raphael Gerraty of Columbia University. The mathematical and computational tools required for this type of research didn’t exist until recently, he says. So people just weren’t thinking about learning from a large-scale network perspective. “In some ways, it was a pretty boring mathematical, computational roadblock,” Gerraty says. But now the road is clear, opening “this conceptual avenue … that people can now explore.”
It takes a neural village
That conceptual avenue is more of a map, made of countless neural roads. Even when a person learns something very simple, large swaths of the brain jump in to help. Learning an easy sequence of movements, like tapping out a brief tune on a keyboard, prompts activity in the part of the brain that directs finger movements. The action also calls in brain areas involved in vision, decision making, memory and planning. And finger taps are a pretty basic type of learning. In many situations, learning calls up even more brain areas, integrating information from multiple sources, Gerraty says.

He and colleagues caught glimpses of some of these interactions by scanning the brains of people who had learned associations between two faces. Only one of the faces was then paired with a reward. In later experiments, the researchers tested whether people could figure out that the halo of good fortune associated with the one face also extended to the face it had been partnered with earlier. This process, called “transfer of learning,” is something that people do all the time in daily life, such as when you’re wary of the salad at a restaurant that recently served tainted cheese.

Study participants who were good at applying knowledge about one thing — in this case, a face — to a separate thing showed particular brain signatures, Gerraty and colleagues reported in 2014 in the Journal of Neuroscience. Connections between the hippocampus, a brain structure important for memory, and the ventromedial prefrontal cortex, involved in self-control and decision making, were weaker in good learners than in people who struggled to learn. The scans, performed several days after the learning task, revealed inherent differences between brains, the researchers say. The experiment also turned up other neural network differences among these regions and larger-scale networks that span the brain.

Children who have difficulty learning math, when scanned, also show unexpected brain connectivity, according to research by neuroscientist Vinod Menon of Stanford University and colleagues. Compared with kids without disabilities, children with developmental dyscalculia who were scanned while doing math problems had more connections, particularly among regions involved in solving math problems. That overconnectivity, described in 2015 in Developmental Science, was a surprise, Menon says, since earlier work had suggested that these math-related networks were too weak. But it may be that too many links create a system that can’t accommodate new information. “The idea is that if you have a hyperconnected system, it’s not going to be as responsive,” he says.
There’s a balance to be struck, Menon says. Neural pathways that are too weak can’t carry necessary information, and pathways that are too connected won’t allow new information to move in. But the problem isn’t as simple as that. “It’s not that everything is changing everywhere,” he says. “There is a specificity to it.” Some connections are more important than others, depending on the task.

Neural networks need to shuttle information around quickly and fluidly. To really get a sense of this movement as opposed to snapshots frozen in time, scientists need to watch the brain as it learns. “The next stage is to figure out how the networks actually shift,” Menon says. “That’s where the studies from Dani Bassett and others will be very useful.”

Flexing in real time
Bassett and colleagues have captured these changing networks as people learn. Volunteers were given simple sequences to tap out on a keyboard while undergoing a functional MRI scan. During six weeks of scanning as people learned the task, neural networks in their brains shifted around. Some connections grew stronger and some grew weaker, Bassett and her team reported in Nature Neuroscience in 2015.

People who quickly learned to tap the correct sequence of keys showed an interesting neural trait: As they learned, they shed certain connections between their frontal cortex, the outermost layer of the brain toward the front of the head, and the cingulate, which sits toward the middle of the brain. This connection has been implicated in directing attention, setting goals and making plans, skills that may be important for the early stages of learning but not for later stages, Bassett and colleagues suspect. Compared with slow learners, fast learners were more likely to have dropped these connections, a process that may have made their brains more efficient.

Flexibility seems to be important for other kinds of learning too. Reinforcement learning, in which right answers get a thumbs up and wrong answers are called out, also taps into brain flexibility, Gerraty, Bassett and others reported online May 30 at bioRxiv.org. This network comprises many points on the cortex, the brain’s outer layer, and a deeper structure known as the striatum. Other work on language comprehension, published by Bassett and colleagues last year in Cerebral Cortex, found some brain regions that were able to quickly form and break connections.

These studies captured brains in the process of learning, revealing “a much more interesting network structure than what we previously thought when we were only looking at static snapshots,” Gerraty says. The learning brain is incredibly dynamic, he says, with modules breaking off from partners and finding new ones.

While the details of those dynamics differ from study to study, there is an underlying commonality: “It seems that part of learning about the world is having parts of your brain become more flexible, and more able to communicate with different areas,” Gerraty says. In other words, the act of learning takes flexibility.

But too much of a good thing may be bad. While performing a recall task in a scanner, people with schizophrenia had higher flexibility among neural networks across the brain than did healthy people, Bassett and colleagues reported last year in the Proceedings of the National Academy of Sciences. “That suggests to me that while flexibility is good for healthy people, there is perhaps such a thing as too much flexibility,” Bassett says.
Just how this flexibility arises, and what controls it, is unknown. Andrea Stocco, a cognitive neuroscientist at the University of Washington in Seattle, suspects that a group of brain structures called the basal ganglia, deep within the brain, has an important role in controlling flexibility. He compares this region, which includes the striatum, to an air traffic controller who shunts information to where it’s most needed. One of the basal ganglia’s jobs seems to be shutting things down. “Most of the time, the basal ganglia is blocking something,” he says. Other researchers have found evidence that crucial “hubs” in the cortex help control flexibility.

Push for more
Researchers don’t yet know how measures of flexibility in brain regions relate to the microscopic changes that accompany learning. For now, the macro and the micro views of learning are separate worlds. Despite that missing middle ground, researchers are charging ahead, looking for signs that neural flexibility might offer a way to boost learning aptitude.

It’s possible that external brain stimulation may enhance flexibility. After receiving brain stimulation carefully aimed at a known memory circuit, people were better able to recall lists of words, scientists reported May 8 in Current Biology. If stimulation can boost memory, some argue, the technique could enhance flexibility and perhaps learning too.
Certain drugs show promise. DXM, found in some cough medicines, blocks proteins that help regulate nerve cell chatter. Compared with a placebo, the compound made some brain regions more flexible and able to rapidly switch partners in healthy people, Bassett and colleagues reported last year in the Proceedings of the National Academy of Sciences. She is also studying whether neurofeedback, in which people use real-time monitoring of their own brain activity to try to make their neural patterns more flexible, can help.

Something even simpler might work for boosting flexibility. On March 31 in Scientific Reports, Bassett and colleagues described their network analyses of an unusual subject. For a project called MyConnectome, neuroscientist Russ Poldrack, then at the University of Texas at Austin, had three brain scans a week for a year while assiduously tracking measures that included mood. Bassett and her team applied their mathematical tools to Poldrack’s data to get measurements of his neural flexibility on any given scan day. The team then looked for associations with mood. The standout result: When Poldrack was happiest, his brain was most flexible, for reasons that aren’t yet clear. (Flexibility was lowest when he was surprised.)

Those results are from a single person, so it’s unknown how well they would generalize to others. What’s more, the study identifies only a link, not that happiness causes more flexibility or vice versa. But the idea is intriguing, if not obvious, Bassett says. “Of course, no teacher is really going to say we’re doing rocket science if we tell them we should make the kids happier and then they’ll learn better.” But finding out exactly how happiness relates to learning is important, she says.

The research is just getting started. But already, insights on learning are coming quickly from the small group of researchers viewing the brain as a matrix of nodes and links that deftly shift, swap and rearrange themselves. Zoomed out, network science brings to the brain “a whole new set of hypotheses and new ways of testing them,” Bassett says.

A new map exhibit documents evolving views of Earth’s interior

Much of what happens on the Earth’s surface is connected to activity far below. “Beneath Our Feet,” a temporary exhibit at the Norman B. Leventhal Map Center in the Boston Public Library, explores the ways people have envisioned, explored and exploited what lies underground.

“We’re trying to visualize those places that humans don’t naturally go to,” says associate curator Stephanie Cyr. “Everybody gets to see what’s in the sky, but not everyone gets to see what’s underneath.”
“Beneath Our Feet” displays 70 maps, drawings and archaeological artifacts in a bright, narrow exhibit space. (In total, the library holds a collection of 200,000 maps and 5,000 atlases.) Many objects have two sets of labels: one for adults and one for kids, who are guided by a cartoon rat mascot called Digger Burrows.

The layout puts the planet’s long history front and center. Visitors enter by walking over a U.S. Geological Survey map of North America that is color-coded to show how topography has changed over geologic time.
Beyond that, the exhibit is split into two main themes, Cyr says: the natural world, and how people have put their fingerprints on it. Historical and modern maps hang side by side, illustrating how ways of thinking about the Earth developed as the tools for exploring it improved.

For instance, a 1665 illustration drawn by Jesuit scholar Athanasius Kircher depicts Earth’s water systems as an underground network churning under the guidance of a large ball of fire at the planet’s center, Cyr says. “He wasn’t that far off.” Under Kircher’s drawing is an early sonar map of the Pacific seafloor, made by geologists Marie Tharp and Bruce Heezen in 1969 (SN: 10/6/12, p. 30). The pair’s earlier seafloor maps had revealed the Mid-Atlantic Ridge; finding that rift helped prove the existence of plate tectonics and showed that Earth’s surface is shaped by vast forces moving far below.

On another wall, a 1794 topographic-relief drawing of Mount Vesuvius — which erupted and destroyed the Roman city of Pompeii in A.D. 79 — is embellished by a cartouche of Greek mythological characters, including one representing death. The drawing hangs above a NASA satellite image of the same region, showing how the cities around Mount Vesuvius have grown since the eruption that buried Pompeii, and how volcano monitoring has improved.

The tone turns serious in the latter half of the exhibit. Maps of coal deposits in 1880s Pennsylvania sit near modern schematics explaining how fracking works (SN: 9/8/12, p. 20). Reproductions of maps of the Dakotas from 1886 may remind visitors of ongoing controversies over the Dakota Access Pipeline, proposed to run near the Standing Rock Sioux Reservation, and maps from the U.S. Environmental Protection Agency mark sites in Flint, Mich., with lead-tainted water.

Maps in the exhibit are presented dispassionately and without overt political commentary. Cyr hopes the zoomed-out perspectives that maps provide will allow people to approach controversial topics with cool heads.

“The library is a safe place to have civil discourse,” she says. “It’s also a place where you have access to factual materials and factual resources.”

A key virus fighter is implicated in pregnancy woes

An immune system mainstay in the fight against viruses may harm rather than help a pregnancy. In Zika-infected mice, this betrayal appears to contribute to fetal abnormalities linked to the virus, researchers report online January 5 in Science Immunology. And it could explain pregnancy complications that arise from infections with other pathogens and from autoimmune disorders.

In pregnant mice infected with Zika virus, fetuses with a docking station, or receptor, for immune system proteins called type I interferons either died or grew more poorly than fetuses lacking the receptor. “The type I interferon system is one of the key mechanisms for stopping viral infections,” says Helen Lazear, a virologist at the University of North Carolina at Chapel Hill, who coauthored an editorial accompanying the study. “That same [immune] process is actually causing fetal damage, and that’s unexpected.”
Cells infected by viruses begin the fight against the intruder by producing type I interferons. These proteins latch onto their receptor on the surfaces of neighboring cells and kick-start the production of hundreds of other antiviral proteins.

Akiko Iwasaki, a Howard Hughes Medical Institute investigator and immunologist at Yale School of Medicine, and her colleagues were interested in studying what happens to fetuses when moms are sexually infected with Zika virus. The researchers mated female mice unable to make the receptor for type I interferons to males with one copy of the gene needed to make the receptor. This meant that moms would carry some pups with the receptor and some without in the same pregnancy.

Pregnant mice were infected vaginally with Zika at one of two times — one corresponding to mid-first trimester in humans, the other to late first trimester. Of the fetuses exposed to infection earlier, those that had the interferon receptor died, while those without the receptor continued to develop. For fetuses exposed to infection a bit later in the pregnancy, those with the receptor were much smaller than their receptor-lacking counterparts.

The fetuses without the receptor still grew poorly due to the Zika infection, which is expected given their inability to fight the infection. What was striking, Iwasaki says, is that the fetuses able to fight the infection were more damaged, and were the only fetuses that died.

It’s unclear how this antiviral immune response causes fetal damage. But the placentas — which, like their fetuses, had the receptor — didn’t appear to provide those fetuses with enough oxygen, Iwasaki says.

The researchers also infected pregnant mice that had the receptor for type I interferons with a viral mimic — a bit of genetic material that goads the body to begin its antiviral immune response — to see if the damage happened only during a Zika infection. These fetuses also died early in the pregnancy, an indication that perhaps the immune system could cause fetal damage during other viral infections, Iwasaki notes.

Iwasaki and colleagues next added type I interferon to samples of human placental tissue in dishes. After 16 to 20 hours, the placental tissues developed structures that resembled syncytial knots. These knots are widespread in the placentas of pregnancies with such complications as preeclampsia and restricted fetal growth.

The next step, Iwasaki says, is figuring out which of the hundreds of antiviral proteins made when type I interferon ignites the immune system can trigger placental and fetal damage. That could provide more understanding of miscarriage generally; other infections that cause congenital diseases, like toxoplasmosis and rubella; and autoimmune disorders that feature excessive type I interferon production, such as lupus, she says.

The great Pacific garbage patch may be 16 times as massive as we thought

We’re going to need a bigger trash can.

A sprawl of plastic waste floating in the ocean between California and Hawaii contains at least 79,000 tons of material spread over 1.6 million square kilometers, researchers report March 22 in Scientific Reports. That’s equivalent to the mass of more than 6,500 school buses. Known as the great Pacific garbage patch, the hoard is four to 16 times as heavy as past estimates suggested.

About 1.8 trillion plastic pieces make up the garbage patch, the scientists estimate. Particles smaller than half a centimeter, called microplastics, account for 94 percent of the pieces but only 8 percent of the overall mass. In contrast, large (5 to 50 centimeters) and extra-large (bigger than 50 centimeters) pieces make up 25 percent and 53 percent of the estimated patch mass, respectively.
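Those figures are easy to sanity-check with a little arithmetic. The snippet below is a back-of-envelope check of my own, not part of the study, and the 12-ton school-bus mass is an assumed round number.

```python
# Back-of-envelope check of the quoted figures (not from the study itself).
patch_mass_tons = 79_000
bus_mass_tons = 12                      # assumed typical school-bus mass
print(patch_mass_tons / bus_mass_tons)  # ~6,583, consistent with "more than 6,500" buses

total_pieces = 1.8e12                   # ~1.8 trillion plastic pieces
micro_pieces = 0.94 * total_pieces      # microplastics: ~1.7 trillion fragments...
micro_mass = 0.08 * patch_mass_tons     # ...but only ~6,300 tons of the total mass
print(f"{micro_pieces:.2e} pieces, {micro_mass:.0f} tons")
```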
Much of the plastic in the patch comes from humans’ ocean activities, such as fishing and shipping, the researchers found. Almost half of the total mass, for example, is from discarded fishing nets. A lot of that litter contains especially durable plastics, such as polyethylene and polypropylene, which are designed to survive in marine environments.
To get the new size and mass estimates, Laurent Lebreton of the Ocean Cleanup, a nonprofit foundation in Delft, the Netherlands, and his colleagues trawled samples from the ocean surface, took aerial images and simulated particle pathways based on plastic sources and ocean circulation.
Aerial images provided more accurate tallies and measurements of the larger plastic pieces, the researchers write. That could account for the increase in mass over past estimates, which relied on trawling data and images taken from boats, in addition to computer simulations. Another possible explanation: The patch grew — perhaps driven by an influx of debris from the 2011 tsunami that hit Japan and washed trash out to sea (SN: 10/28/17, p. 32).

Here’s why some Renaissance artists egged their oil paintings

Art historians often wish that Renaissance painters could shell out secrets of the craft. Now, scientists may have cracked one using chemistry and physics.

Around the turn of the 15th century in Italy, oil-based paints replaced egg-based tempera paints as the dominant medium. During this transition, artists including Leonardo da Vinci and Sandro Botticelli also experimented with paints made from oil and egg (SN: 4/30/14). But it has been unclear how adding egg to oil paints may have affected the artwork.
“Usually, when we think about art, not everybody thinks about the science which is behind it,” says chemical engineer Ophélie Ranquet of the Karlsruhe Institute of Technology in Germany.

In the lab, Ranquet and colleagues whipped up two oil-egg recipes to compare with plain oil paint. One mixture contained fresh egg yolk mixed into oil paint, and had a similar consistency to mayonnaise. For the other blend, the scientists ground pigment into the yolk, dried it and mixed it with oil — a process the old masters might have used, according to the scant historical records that exist today. Each medium was subjected to a battery of tests that analyzed its mass, moisture, oxidation, heat capacity, drying time and more.

In both concoctions, the yolk’s proteins, phospholipids and antioxidants helped slow paint oxidation, which can cause paint to turn yellow over time, the team reports March 28 in Nature Communications.

In the mayolike blend, the yolk created sturdy links between pigment particles, resulting in stiffer paint. Such consistency would have been ideal for techniques like impasto, a raised, thick style that adds texture to art. Egg additions also could have reduced wrinkling by creating a firmer paint consistency. Wrinkling sometimes happens with oil paints when the top layer dries faster than the paint underneath, and the dried film buckles over looser, still-wet paint.

The hybrid mediums have some less than eggs-ellent qualities, though. For instance, the eggy oil paint can take longer to dry. If paints were too yolky, Renaissance artists would have had to wait a long time to add the next layer, Ranquet says.

“The more we understand how artists select and manipulate their materials, the more we can appreciate what they’re doing, the creative process and the final product,” says Ken Sutherland, director of scientific research at the Art Institute of Chicago, who was not involved with the work.

Research on historical art mediums can not only aid art preservation efforts, Sutherland says, but also help people gain a deeper understanding of the artworks themselves.

Hayabusa2 has blasted the surface of asteroid Ryugu to make a crater

Hayabusa2 has blasted the asteroid Ryugu with a projectile, probably adding a crater to the small world’s surface and stirring up dust that scientists hope to snag.

The projectile, a two-kilogram copper cylinder, separated from the Hayabusa2 spacecraft at 9:56 p.m. EDT on April 4, JAXA, Japan’s space agency, reports.

Hayabusa2 flew to the other side of the asteroid to hide from debris that would have been ejected when the projectile hit (SN: 1/19/19, p. 20). Scientists won’t know for sure whether the object successfully made a crater, and, if so, how big it is, until the craft circles back. But by 10:36 p.m. EDT, Hayabusa2’s cameras had captured a blurry shot of a dust plume spurting up from Ryugu, so the team thinks the attempt worked.
“This is the world’s first collision experiment with an asteroid!” JAXA tweeted.

Hayabusa2 plans to briefly touch down inside the crater to pick up a pinch of asteroid dust. The spacecraft has already grabbed one sample of Ryugu’s surface (SN Online: 2/22/19). But dust exposed by the impact will give researchers a look at the asteroid’s subsurface, which has not been exposed to sunlight or other types of space radiation for up to billions of years.

If all goes as planned, Hayabusa2 will return to Earth with both samples in late 2020. A third planned sample pickup has been scrapped because Ryugu’s boulder-strewn surface is so hazardous for the spacecraft.
Comparing the two samples will reveal details of how being exposed to space changes the appearance and composition of rocky asteroids, and will help scientists figure out how Ryugu formed (SN Online: 3/20/19). Scientists hope that the asteroid contains water and organic material that might help explain how life got started in the solar system.

A Greek skull may belong to the oldest human found outside of Africa

A skull found in a cliffside cave on Greece’s southern coast in 1978 represents the oldest Homo sapiens fossil outside Africa, scientists say.

That skull, from an individual who lived at least 210,000 years ago, was encased in rock that also held a Neandertal skull dating to at least 170,000 years ago, contends a team led by paleoanthropologist Katerina Harvati of the University of Tübingen in Germany.

If these findings, reported online July 10 in Nature, hold up, the ancient Greek H. sapiens skull is more than 160,000 years older than the next oldest European H. sapiens fossils (SN Online: 11/2/11). It’s also older than a proposed H. sapiens jaw found at Israel’s Misliya Cave that dates to between around 177,000 and 194,000 years ago (SN: 2/17/18, p. 6).

“Multiple Homo sapiens populations dispersed out of Africa starting much earlier, and reaching much farther into Europe, than previously thought,” Harvati said at a July 8 news conference. African H. sapiens originated roughly 300,000 years ago (SN: 7/8/17, p. 6).
A small group of humans may have reached what’s now Greece more than 200,000 years ago, she suggested. Neandertals who settled in southeastern Europe not long after that may have replaced those first H. sapiens. Then humans arriving in Mediterranean Europe tens of thousands of years later would eventually have replaced resident Neandertals, who died out around 40,000 years ago (SN Online: 6/26/19).

But Harvati’s group can’t exclude the possibility that H. sapiens and Neandertals simultaneously inhabited southeastern Europe more than 200,000 years ago and sometimes interbred. A 2017 analysis of ancient and modern DNA concluded that humans likely mated with European Neandertals at that time.

The two skulls were held in a small section of wall that had washed into Greece’s Apidima Cave from higher cliff sediment and then solidified roughly 150,000 years ago. Since one skull is older than the other, the two must originally have been deposited in different sediment layers before ending up about 30 centimeters apart on the cave wall, the researchers say.
Earlier studies indicated that one Apidima skull, which retains the face and much of the braincase, was a Neandertal that lived at least 160,000 years ago. But fossilization and sediment pressures had distorted the skull’s shape. Based on four 3-D digital reconstructions of the specimen, Harvati’s team concluded that its heavy brow ridges, sloping face and other features resembled Neandertal skulls more than ancient and modern human skulls. An analysis of the decay rate of radioactive forms of uranium in skull bone fragments produced an age estimate of at least 170,000 years.

A second Apidima fossil, also dated using uranium analyses, consists of the back of a slightly distorted braincase. Its rounded shape in a digital reconstruction characterizes H. sapiens, not Neandertals, the researchers say. A bunlike bulge often protrudes from the back of Neandertals’ skulls.
But without any facial remains to confirm the species identity of the partial braincase, “it is still possible that both Apidima skulls are Neandertals,” says paleoanthropologist Israel Hershkovitz of Tel Aviv University. Hershkovitz led the team that discovered the Misliya jaw and assigned it to H. sapiens.

Harvati and her colleagues will try to extract DNA and species-distinguishing proteins (SN: 6/8/19, p. 6) from the Greek skulls to determine their evolutionary identities and to look for signs of interbreeding between humans and Neandertals.

The find does little to resolve competing explanations of how ancient humans made their way out of Africa. Harvati’s suggestion that humans trekked from Africa to Eurasia several times starting more than 200,000 years ago is plausible, says paleoanthropologist Eric Delson of City University of New York’s Lehman College in an accompanying commentary. And the idea that some H. sapiens newcomers gave way to Neandertals probably also applied to humans who reached Misliya Cave and nearby Middle Eastern sites as late as around 90,000 years ago, before Neandertals occupied the area by 60,000 years ago, Delson says.

Hershkovitz disagrees. Ancient humans and Neandertals lived side-by-side in the Middle East for 100,000 years or more and occasionally interbred, he contends. Misliya Cave sediment bearing stone tools dates to as early as 274,000 years ago, Hershkovitz says. Since only H. sapiens remains have been found in the Israeli cave, ancient humans probably made those stone artifacts and could have been forerunners of Greek H. sapiens.

How meningitis-causing bacteria invade the brain

Bacteria can slip into the brain by commandeering cells in the brain’s protective layers, a new study finds. The results hint at how a deadly infection called bacterial meningitis takes hold.

In mice infected with meningitis-causing bacteria, the microbes exploit previously unknown communication between pain-sensing nerve cells and immune cells to slip by the brain’s defenses, researchers report March 1 in Nature. The results also hint at a new way to possibly delay the invasion — using migraine medicines to interrupt those cell-to-cell conversations.
Bacterial meningitis is an infection of the protective layers, or meninges, of the brain that affects 2.5 million people globally per year. It can cause severe headaches and sometimes lasting neurological injury or death.

“Unexpectedly, pain fibers are actually hijacked by the bacteria as they’re trying to invade the brain,” says Isaac Chiu, an immunologist at Harvard Medical School in Boston. Normally, one might expect pain to be a warning system for us to shut down the bacteria in some way, he says. “We found the opposite…. This [pain] signal is being used by the bacteria for an advantage.”

It’s known that pain-sensing neurons and immune cells coexist in the meninges, particularly in the outermost layer called the dura mater (SN: 11/11/20). So to see what role the pain and immune cells play in bacterial meningitis, Chiu’s team infected mice with two of the bacteria known to cause the infection in humans: Streptococcus pneumoniae and S. agalactiae. The researchers then observed where the bacteria ended up in mice genetically tweaked to lack pain-sensing nerve cells and compared those resting spots to those in mice with the nerve cells intact.

Mice without pain-sensing neurons had fewer bacteria in the meninges and brain than those with the nerve cells, the team found. This contradicts the idea that pain in meningitis serves as a warning signal to the body’s immune system, mobilizing it for action.

Further tests showed that the bacteria triggered a chain of immune-suppressing events, starting with the microbes secreting toxins in the dura mater.

The toxins hitched onto the pain neurons, which in turn released a molecule called CGRP. This molecule is already known to bind to a receptor on immune cells, where it helps control the dura mater’s immune responses. Injecting infected mice with more CGRP lowered the number of dural immune cells and helped the infection along, the researchers found.

The team also looked more closely at the receptor that CGRP binds to. In infected mice bred without the receptor, fewer bacteria made it into the brain. But in ones with the receptor, immune cells that would otherwise engulf bacteria and recruit reinforcements were disabled.
The findings suggest that either preventing the release of CGRP or preventing it from binding to immune cells might help delay infection.

In humans, neuroscientists know that CGRP is a driver of headaches — it’s already a target of migraine medications (SN: 6/5/18). So the researchers gave five mice the migraine medication olcegepant, which blocks CGRP’s effects, and infected them with S. pneumoniae. After infection, the medicated mice had fewer bacteria in the meninges and brain, took longer to show symptoms, didn’t lose as much weight and survived longer than mice that were not given the medication.

The finding suggests olcegepant slowed the infection. Even though the drug bought the mice only a few extra hours, that’s crucial in meningitis, which can develop in a matter of hours. Were olcegepant to work the same way in humans, it might give doctors more time to treat meningitis. But the effect is probably not as dramatic in people, cautions Michael Wilson, a neurologist at the University of California, San Francisco who wasn’t involved with the work.

Scientists still need to determine whether pain-sensing nerve cells and immune cells have the same rapport in human dura mater, and whether migraine drugs could help treat bacterial meningitis in people.

Neurologist Avindra Nath has doubts. Clinicians think the immune response and inflammation damage the brain during meningitis, says Nath, who heads the team investigating nervous system infections at the National Institute of Neurological Disorders and Stroke in Bethesda, Md. So treatment involves drugs that suppress the immune response, rather than enhance it as migraine medications might.

Chiu acknowledges this but notes there might be room for both approaches. If dura mater immune cells could head the infection off at the pass, they may keep some bacteria from penetrating the defenses, minimizing brain inflammation.

This study might not ultimately change how clinicians treat patients, Wilson says. But it still reveals something new about one of the first lines of defense for the brain.

The Milky Way may be spawning many more stars than astronomers had thought

The Milky Way is churning out far more stars than previously thought, according to a new estimate of its star formation rate.

Gamma rays from aluminum-26, a radioactive isotope that arises primarily from massive stars, reveal that the Milky Way converts four to eight solar masses of interstellar gas and dust into new stars each year, researchers report in work submitted to arXiv.org on January 24. That range is two to four times the conventional estimate and corresponds to an annual birthrate in our galaxy of about 10 to 20 stars, because most stars are less massive than the sun.
At this rate, every million years — a blink of the eye in astronomical terms — our galaxy spawns 10 million to 20 million new stars. That’s enough to fill roughly 10,000 star clusters like the beautiful Pleiades cluster in the constellation Taurus. In contrast, many galaxies, including most of the ones that orbit the Milky Way, make no new stars at all.
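For readers who want to retrace that arithmetic, here is a rough check. It is my own illustration, not the paper’s calculation; the number of stars per solar mass of gas and the size of a Pleiades-like cluster are assumed round values.

```python
# Rough check of the rates quoted above (illustrative assumptions, not from the paper).
sfr_low, sfr_high = 4, 8            # solar masses of gas turned into stars per year
stars_per_solar_mass = 2.5          # assumed: most stars are lighter than the sun
print(sfr_low * stars_per_solar_mass, sfr_high * stars_per_solar_mass)  # ~10 to 20 stars/yr

years = 1_000_000                   # one million years
stars_per_cluster = 1_000           # assumed size of a Pleiades-like cluster
print(10 * years / stars_per_cluster, 20 * years / stars_per_cluster)   # ~10,000-20,000 clusters' worth
```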

“The star formation rate is very important to understand for galaxy evolution,” says Thomas Siegert, an astrophysicist at the University of Würzburg in Germany. The more stars a galaxy makes, the faster it enriches itself with oxygen, iron and the other elements that stars create. Those elements then alter star-making gas clouds and can change the relative number of large and small stars that the gas clouds form.

Siegert and his colleagues studied the observed intensity and spatial distribution of emission from aluminum-26 in our galaxy. A massive star creates this isotope during both life and death. During its life, the star blows the aluminum into space via a strong wind. If the star explodes when it dies, the resulting supernova forges more. The isotope, with a half-life of 700,000 years, decays and gives off gamma rays.

Like X-rays, and unlike visible light, gamma rays penetrate the dust that cloaks the youngest stars. “We’re looking through the entire galaxy,” Siegert says. “We’re not X-raying it; here we’re gamma-raying it.”

The more stars our galaxy spawns, the more gamma rays emerge. The best match with the observations, the researchers find, is a star formation rate of four to eight solar masses a year. That is much higher than the standard estimate for the Milky Way of about two solar masses a year.
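One way to see why the aluminum-26 signal tracks the current star formation rate is to note that, with a roughly 700,000-year half-life, the isotope settles into a steady state in which atoms decay about as fast as massive stars make new ones. The sketch below is my own illustration of that balance, with the production rate left in arbitrary placeholder units; it is not the team’s model.

```python
# Illustration of the steady-state logic (arbitrary units, not the team's model):
# with constant production P and decay constant lam, dN/dt = P - lam*N, so the
# decay rate lam*N (and hence the gamma-ray output) climbs toward P, the rate at
# which massive stars are currently producing aluminum-26.
import numpy as np

half_life = 7.0e5                       # years
lam = np.log(2) / half_life             # decay constant per year
P = 1.0                                 # production rate, arbitrary units (placeholder)

t = np.linspace(0, 5e6, 6)              # sample times over a few million years
N = (P / lam) * (1 - np.exp(-lam * t))  # aluminum-26 present at time t
print(lam * N)                          # decay (gamma-ray) rate approaching P
```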

The revised rate is very realistic, says Pavel Kroupa, an astronomer at the University of Bonn in Germany who was not involved in the work. “I’m very impressed by the detailed modeling of how they account for the star formation process,” he says. “It’s a very beautiful work. I can see some ways of improving it, but this is really a major step in the absolutely correct direction.”

Siegert cautions that it is difficult to tell how far the gamma rays have traveled before reaching us. In particular, if some of the observed emission arises nearby — within just a few hundred light-years of us — then the galaxy has less aluminum-26 than the researchers have calculated, which means the star formation rate is on the lower side of the new estimate. Still, he says it’s unlikely to be as low as the standard two solar masses per year.
In any event, the Milky Way is the most vigorous star creator in a collection of more than 100 nearby galaxies called the Local Group. The largest Local Group galaxy, Andromeda, converts only a fraction of a solar mass of gas and dust into new stars a year. Among Local Group galaxies, the Milky Way ranks second in size, but its high star formation rate means that we definitely try a lot harder.