In 2007, Carlo Carandang, then an attending physician at a hospital in Halifax, Nova Scotia, saw a most unusual patient: an 8-year-old boy who had recently adopted some strange beliefs, all while losing 18 pounds. The boy thought that nurses were “evil,” and that he could inject other people with his fat cells simply by walking past them.

The boy’s symptoms had begun a few months prior. After his school held a lesson on healthy eating, he started to scrutinize food labels and avoid fat and carbs, according to Carandang, who now works as a data scientist. The boy worried that he was too fat, and he would examine his stomach in the mirror throughout the day. He grew suspicious of what his mother might be putting in his food and began preparing all of his own meals. Before long, he was eating just 200 calories a day.

The boy also developed odd tics, flapping his arms and tapping his mouth to undo what he perceived to be “contamination.” Carandang knew that the boy had always been an anxious child and that he had a history of recurring strep throat. But the food-related symptoms far exceeded what would normally accompany anxiety. The boy was admitted to the hospital, where it took months for Carandang and his team to successfully treat him. Ultimately, the boy had to get his tonsils taken out to stop the strep infections. Around the same time, his eating disorder subsided.

In a report about the patient, Carandang wrote that the boy appeared to have PANDAS, or pediatric autoimmune neuropsychiatric disorders associated with streptococcal infections, a type of obsessive-compulsive disorder that sometimes comes on in children after a bout of strep throat. While PANDAS is a fairly well-established condition, it was unusual that the infection-induced psychological symptoms had brought about an eating disorder.

Other researchers have reported isolated cases of children developing eating disorders after coming down with infections. In the late 1990s, Mae Sokol, a psychiatry professor at Creighton University, described treating several patients whose eating disorders had begun after strep infections. One 12-year-old treated by Sokol lost 30 pounds after he suddenly became afraid to eat fats and liquids. He had experienced an untreated upper-respiratory-tract infection just a month before the symptoms began. A 16-year-old had a series of upper-respiratory-tract infections, then suddenly became concerned about weight gain and “dead animals on plates,” according to Sokol’s report.

These cases hinted at a relationship between the infections and the subsequent disordered eating, but childhood infections are so common, and eating disorders so multifaceted, that scientifically connecting the two conditions has been hard. It seems so counterintuitive: Why would a sore throat lead to a state in which a person feels irrationally preoccupied with thinness? This year, though, a large study found that the boys Carandang and Sokol treated weren’t isolated incidents. Infections might, in fact, spark eating disorders in some people.

For the study, Lauren Breithaupt, a clinical psychologist at Massachusetts General Hospital, and several of her colleagues from Denmark and North Carolina looked at the health histories of 525,643 Danish teen girls born from 1989 to 2006. (The rate of eating disorders among Danish boys was too low for them to be included in the analysis.) The researchers examined the girls’ medical records to see if they had ever been hospitalized for a variety of infections, including rheumatic fever, strep throat, viral meningitis, mycoplasma pneumonia, coccidioidomycosis, or influenza, and also whether they had ever been diagnosed with an eating disorder.

A connection between the two ailments immediately became clear. The overall number of girls diagnosed with eating disorders was relatively small—as it is in the United States. But the teens hospitalized with a severe infection were 22 percent more likely to be diagnosed with anorexia, 35 percent more likely to be diagnosed with bulimia, and 39 percent more likely to have an eating disorder that doesn’t quite meet the criteria for an anorexia or bulimia diagnosis. The diagnosis of an eating disorder tended to happen soon after the infection took place, such that the girls were at their greatest risk of developing one within the first three months after being hospitalized for an infection.

The study seemed to crystallize the connections among infections, obsessive behavior, and eating disorders that Breithaupt and other researchers had been seeing. In her work as a psychologist, Breithaupt says, she has seen patients who, after an infection, “have really rigid thoughts and impressions about either food or weight or its shape, or they might have lots of concerns about fat in foods and fat in their body.” Carandang’s PANDAS patient, too, seemed to first grow obsessed with food, then fixate on avoiding it.

No one knows precisely why infections might spark eating disorders. Breithaupt suggests that either the infection itself or the antibiotic used to treat it might be disrupting the patient’s gut microbiome, the collection of microorganisms in the intestine that plays a role in health and disease. This disruption might change the amount of chemicals called neuropeptides circulating in the gut. Because the gut communicates with the brain, the quantities of neuropeptides circulating in the brain might then change, as well. That could, in essence, make people think differently about food or their body.

Perhaps other mechanisms are at play. One competing theory is that the body’s own immune response to an infection might end up invading the brain. When the body senses a dangerous bug, it produces proteins that destroy the invader. But some of those proteins can also attack our own cells. In possible cases of anorexia or bulimia induced by bacteria, some scientists suspect that these proteins get into parts of the brain that control impulses such as disgust and hunger. There, they might attack the brain tissues or switch on the “I’m not hungry anymore” impulse, or even the “I’m disgusted by my own body” impulse.

There’s no direct evidence for these theories; for now they’re merely speculation. And even if one of them proved correct, researchers would still have to contend with the mystery of why people get infections all the time but relatively few develop eating disorders. Or, for that matter, why not everyone with an eating disorder recently dealt with an infection.

It might be that underlying factors about people predispose them to developing an eating disorder after an infection. “Maybe you have more of a genetic risk for obsessive-compulsive disorder or anorexia, and the infection then unmasks that vulnerability. That’s one possibility,” says Kyle Williams, the director of the pediatric-neuropsychiatry-and-immunology program at Massachusetts General Hospital for Children.

If confirmed, these findings could eventually affect how eating disorders are treated, leading doctors to check if their eating-disorder patients have any lingering infections, Breithaupt says. The results also have the potential to radically change our notion of the many ways eating disorders might originate. While most professionals acknowledge that anorexia and bulimia are deeply psychologically rooted, some eating-disorder patients still face stigma for supposedly being so “vain” as to starve themselves. It’s less likely that people would accuse a person of getting meningitis on purpose. Similarly, people who are attacked for compulsively dieting out of vanity might simply be under the spell of antibodies gone awry.

Jim Morris, a professor at Lancaster University in the United Kingdom, says there are still too many unanswered questions to begin treating patients with eating disorders any differently. Instead, he says, this research should prompt a consideration of just how closely intertwined our brains are with our bodies. Just as some problems that seem physical might have psychological aspects, some problems that seem psychological might have physical instigators.

“We say disease is due to biological factors, social factors, and psychological factors all interacting together,” Morris says. “Well, it works with psychiatric disease as well.”

This is the question a paper by Chambers et al. (2019) aims to answer. The authors analyzed coverage decisions for 204 drugs (409 unique drug–indication pairs) using information from the Tufts Medical Center Specialty Drug Evidence and Coverage (SPEC) database. This database contains coverage information for 17 of the top 20 health plans in the U.S. The authors used it to evaluate whether payers imposed restrictions such as:

  • Patient subgroup restriction: Requiring patients to meet certain clinical criteria (e.g., disease severity)
  • Step therapy: Requiring patients to first try, and fail, an alternative (typically cheaper) treatment before gaining access to the drug of interest
  • Prescriber restriction: Requiring that only certain physician specialties (e.g., oncologists, rheumatologists) prescribe the drug
  • Combination therapy: Requiring that the drug be used concomitantly with another medication
  • Other

Using this approach, the authors find that:

Health plans are less likely to restrict orphan drugs compared with nonorphan drugs. Of orphan drug decisions (n = 2168), plans did not apply coverage restrictions in 70% of cases, applied restrictions in 29%, and did not cover in 1%. In contrast, of nonorphan drug decisions (n = 2832), plans did not apply coverage restrictions in 53% of cases, applied restrictions in 41%, and did not cover in 6%. The frequency of restrictions for orphan drugs varied from 11% to 65% across plans. The attributes of orphan drugs that were more likely to be associated with restrictions than others included drugs for noncancer diseases, drugs with alternatives, self-administered drugs, drugs indicated for diseases with a higher prevalence, and drugs with higher annual costs (all P <.05).


These averages do hide some variability. The most generous plan covered all orphan drugs with only 11% having any restrictions, whereas for 2 of the 17 plans considered in the study, 50% or more of orphan drugs had some restrictions on access.
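
For readers who want to see the arithmetic behind these percentages, here is a minimal Python sketch. It is not the authors' code: the data frame below (with hypothetical `orphan`, `decision`, and `count` columns) is a made-up stand-in for the SPEC data, with counts chosen only to roughly reproduce the shares reported above.

```python
import pandas as pd

# Hypothetical, simplified stand-in for the SPEC coverage-decision data:
# one row per (orphan status, decision type), with counts chosen to roughly
# match the quoted percentages (70/29/1% for orphan, 53/41/6% for nonorphan).
decisions = pd.DataFrame({
    "orphan":   [True, True, True, False, False, False],
    "decision": ["no_restriction", "restricted", "not_covered"] * 2,
    "count":    [1518, 629, 21, 1501, 1161, 170],
})

# Share of decisions in each category, by orphan status (rows sum to 1).
shares = pd.crosstab(
    decisions["orphan"], decisions["decision"],
    values=decisions["count"], aggfunc="sum", normalize="index",
)
print(shares.round(2))
```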

After giving birth to a baby, a young woman told her nurses at Boston Medical Center that she was having pain in her hip. That happens sometimes after births, says Ali Guermazi, one of the doctors involved. Recounting the case from a few years ago, he says he looked at X-rays and saw a small amount of extra fluid in the joint. Otherwise things looked normal. “We injected her hip with steroids, hoping to help with the pain,” Guermazi says. It seemed to help, and the woman went home with her baby.

Guermazi didn’t think more of it until the woman returned to the hospital six months later, unable to walk. “The head of her femur was gone,” says Guermazi, who is now chief of radiology at VA Boston Healthcare System. The bone appeared to have simply vanished. The new mother needed a total hip replacement. “We didn’t know what happened, and still can’t know for certain,” Guermazi says. “But I feared it was related to the injection.”

This is not a typical suspicion. Doctors have long considered a single injection of steroids—the type that come from the adrenal glands and modulate the body’s stress response—to be a pretty harmless way to temporarily relieve pain in a joint. The worst case scenario was that the shot didn’t help the pain. Some people get temporary relief, and some do not. Such injections are done by podiatrists, rheumatologists, orthopedists, spine neurosurgeons, anesthesiologists, and others at major hospitals around the world.

As a specialist in joint pain, Guermazi has done thousands of steroid injections over decades of work. He has trained other doctors as he had been trained: to believe that the injections were safe as long as they weren’t overused. But now, he has come to believe the procedure is more dangerous than he knew. And he and a group of Boston University colleagues are raising a warning flag for doctors and patients alike.

Millions of times every year, people with joint pain allow doctors to run a needle through their skin, then their muscle, then their tendons, and into the fluid-filled space of a painful joint to calm inflammation. Such inflammation can be the result of many types of injury or disease, but most commonly the process is the result of gradual wear and tear known as osteoarthritis, in which the cartilage diminishes, the space between the bones narrows, and eventually bones start to rub on one another. At that stage, a person may need a surgical joint replacement. The progression of the disease itself can’t be reversed with drugs, so medical treatment is aimed at easing pain and maximizing mobility. Steroid injections are one of the chief ways this is attempted.

In the journal Radiology this week, Guermazi and colleagues at Boston University published a study of 459 patients at their hospital who got injections, in the hips or knees, in 2018. Of those patients, 8 percent had complications that worsened the state of their joint. In some cases, the arthritis actually sped up. Others developed small fractures under the cartilage or had complications that compromised the blood supply to bone. In the worst cases, patients had what Guermazi and his colleagues describe as “rapid joint destruction.”

Patterns of harm can be slow to emerge in medicine, and causal relationships are difficult to prove. But these findings build on a gradual accretion of evidence challenging the widespread use of steroid injections. In 2015, the Cochrane Musculoskeletal Group did a meta-analysis to see if the intervention was even helpful. After collating data from 27 knee arthritis trials carried out around the world, the authors concluded that the quality of evidence was low and overall inconclusive. Some of the studies they analyzed found small to moderate improvements in pain and physical function, but the results were not statistically reliable. Whether there is truly any positive effect, the authors concluded, is “unclear.”

Since then, the role of the placebo effect in steroid injections has gotten attention. In 2017, rheumatologists at Tufts did a randomized controlled trial in people with knee pain. A control group got a “sham” injection that contained no steroids. In what became a bombshell paper in the journal JAMA, people with knee arthritis reported their pain was no different if they received injections of steroids or saline. What’s more, the people who got the steroid injections saw more erosion in the cartilage in their knees.

These less-than-promising findings tend to be overshadowed by anecdotes from many people who receive the injections and say they feel like they’ve magically received a new knee. Doctors and patients hoping to keep a person ambulatory, and to stave off a major surgery like a joint replacement, might have a bias toward hoping that the injections are indeed a wise choice. With few other options available, the American College of Rheumatology and the Osteoarthritis Research Society International still recommend steroid injections in certain cases, with caution. The latest guidelines from the American Academy of Orthopedic Surgeons equivocate on the injections, saying the evidence is not strong enough to recommend for or against them.

“The unfortunate thing is that there is no pharmaceutical treatment for osteoarthritis,” says Guermazi. The injections were only ever thought to be a temporary measure, but they were one of the few things in the doctors’ tool kit to help people with an often debilitating condition. “All the guidelines tell you to lose weight, exercise, and improve lifestyle. Those are the treatments,” Guermazi says.

He and colleagues emphasize that two groups in particular should be cautious: young patients and anyone with pain that seems dramatically worse than might be expected (based on the history, imaging, and physical exam). Such disproportionate pain suggests a subtle diagnosis that may be getting overlooked. Adding steroids to the mix could only make things worse, or delay an important finding. This may well have been the case for the young mother Guermazi treated. A tiny stress fracture could have been invisible in the X-ray. It would have required treatment by keeping weight off the leg. Instead, with steroids or placebo creating some sense of relief, the woman felt able to walk on the hip, precipitating the collapse of the bone.

The procedure still likely has a role in helping people with arthritis in some cases, Guermazi believes. But he says more research is “urgently needed” to help figure out what makes some people develop seemingly related complications, and how they might be prevented. Performing fewer injections could have massive financial ramifications for hospitals and doctors, and medicine is notoriously slow to change its ways in the face of new evidence. Fundamentally, though, Guermazi sees this as an ethical issue—as a matter of consent. Patients at least deserve to know about these possible complications. “As a doctor, I want to protect patients,” he says. “We are just saying we need to be careful.”

The U.S. has a gun problem. A paper by Goldstick et al. (2019) uses data from the CDC’s Wide-ranging Online Data for Epidemiologic Research (WONDER) tool and finds that:

Rates of firearm homicides, suicides, and unintentional deaths in the US are 25.2, 8.0, and 6.2 times higher, respectively, than rates in other developed countries… There were 497,627 firearm deaths in 1999–2014 (10.4 per 100,000 person-years), of which 291,623 (58.6 percent) were suicides and 191,531 (38.5 percent) were homicides. There were 114,683 firearm deaths in 2015–17 (11.8 per 100,000 person-years—a 13.8 percent increase from 1999–2014), of which 68,810 (60.0 percent) were suicides and 43,483 (37.9 percent) were homicides.

Thus, there is clearly a problem with firearm-related mortality. Gun-control activists would say the issue is that there are too many guns on the street. Those fighting for gun rights would argue that other factors lead people to use guns to harm others rather than to protect themselves. Regardless, these figures are too high, and we need an honest debate about what can be done to bring down gun-related homicides and suicides.
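
The rates quoted above are easy to sanity-check with back-of-the-envelope arithmetic. The Python sketch below is not the authors' analysis; the average U.S. population figures are rough approximations I am supplying (not numbers from the paper), so the output only approximately reproduces the per-100,000 rates and the percent increase.

```python
# Back-of-the-envelope check of the quoted firearm-mortality rates.
# The average population figures below are rough assumptions on my part,
# not values taken from Goldstick et al. (2019).
periods = {
    # period: (firearm deaths, assumed avg. U.S. population, number of years)
    "1999-2014": (497_627, 299e6, 16),
    "2015-2017": (114_683, 323e6, 3),
}

rates = {}
for period, (deaths, avg_pop, years) in periods.items():
    person_years = avg_pop * years
    rates[period] = deaths / person_years * 100_000
    print(f"{period}: {rates[period]:.1f} firearm deaths per 100,000 person-years")

increase = (rates["2015-2017"] / rates["1999-2014"] - 1) * 100
print(f"Relative increase: {increase:.1f}%")
```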


The city government worker was just getting the hang of his job when a new hire upended everything. She became his mentee, and she asked him if he could put together a manual on how to do her work. He told her okay, but begrudgingly. The manual was a good idea in theory, but he was busy, and he wished she could just learn through observation, as he had.

Over the next months, as he dealt with more immediate deadlines, the worker kept pushing the manual off. His new colleague grew frustrated. “All day, morning and evening, she kept asking me, ‘When will the manual be ready? When will the manual be ready?’” the worker told me through an interpreter.

The manual was a mundane request, but it made him feel confused and powerless. He didn’t know how to communicate to the new colleague that he didn’t have the time and that explaining the job was difficult. Repeated over and over, her request caused his anxiety to ratchet up to extreme levels. He hesitated to delegate work to her, which meant that he took on even more. He started having problems sleeping and eating.

Finally, the worker says, he went to lunch with his boss to discuss the situation. His boss assured him that it wasn’t his fault and asked him to work on the manual as best he could. Still, when he came back to the office, he could see the new colleague giving him the side-eye. Later, she asked him why she hadn’t been invited out, too.

That evening, the worker went home and collapsed in his living room. He felt like he couldn’t go to work anymore. The next day, his wife took him to the hospital, where he was diagnosed with depression. He was allowed to take a hiatus from his job for a few months. A graduate of a prestigious state-run university, he couldn’t believe what was happening to him.

The worker was one of a few patients in similar situations introduced to me by Takahiro Kato, a professor of neuropsychiatry at Kyushu University in Japan. (Kato requested anonymity for the patients to maintain their privacy and protect them from repercussions at work.) Kato believes these patients’ distress is an example of an emerging condition that he refers to as “modern-type depression.” At its heart, the condition is a struggle by some workers to learn how to assert themselves in a social context where they have little practice. And its reach might extend far beyond Japan.


Aside from a few researchers, most mental-health professionals in Japan don’t use the term “modern-type depression.” It isn’t a clinical diagnosis, and despite its “modern” tag, characteristics of the condition likely have always existed alongside other forms of depression. The term first gained prominence in the 1990s, when Japanese media seized on it to portray young workers who took time off from work for mental-health reasons as immature and lazy.

While the term still carries stigma, Kato believes it’s useful to examine as an emerging cultural phenomenon. In the West, depression is often seen as a disease of sadness that is highly personal. But in Japan, it has long been considered a disease of fatigue caused by overwork. The traditional depressed patient has been a “yes man,” someone who always acquiesces to extra tasks at the expense of his social life and health. What makes modern-type depression different, according to Kato, is that patients have the desire to stand up for their personal rights, but instead of communicating clearly they become withdrawn and defiant.

Clinically, this type of behavior first started to appear with some frequency in the work of Shin Tarumi, a colleague in Kato’s department at Kyushu University. In the early 2000s, Tarumi noticed that some of his younger depression patients, particularly those born after 1970, had an entirely different personality profile than traditional depression patients. They didn’t try to maintain harmony at the expense of themselves, and they had less loyalty to social structures. Instead, they avoided responsibility. They tended to fault others for their unhappiness.

Several years after Tarumi died, Kato took over the line of research based on his own clinical observations. There are no definitive statistics on the prevalence of this type of patient. Patients exhibiting these characteristics tend to be middle class. Most are men, because men are more likely to seek professional help in Japan. There’s no connection to a particular type of job, as the issues patients face are mostly interpersonal. What they do share are similar personality traits and social conditions.

Kato connected his findings about these patients to Japan’s public discourse around modern-type depression because he found the term useful for exploring a fairly recent cultural flux. Modern-type depression patients, Kato believes, are in an uncomfortable limbo state, trained to be dependent in their family and social lives and unclear how to adapt to a quickly evolving company culture that asks them to be more assertive. While they want to speak up for themselves, their ways of going about it are ineffective and immature.

One patient Kato introduced me to was a 34-year-old engineer. At first, the engineer was happily employed at a government office, but he says he was transferred against his wishes to another known for its long hours. He repeatedly asked if he could be moved again, but his supervisor told him it was impossible. He lost his motivation. Months after he started asking, he was finally granted the transfer, but it was too late for him to snap out of his withdrawn state. When we spoke, the engineer was in the middle of a long hiatus from work.


Kato has found that a variety of disruptive changes in Japanese culture, from childhood through the workplace, have made it difficult for many workers to adjust to a corporate ethos in the country increasingly based on Western individualism. He lays out these causes in two papers in the journals Psychiatry and Clinical Neurosciences and American Journal of Psychiatry.

Japanese parenting is one major factor. As Japan focused on rebuilding economically after its defeat in World War II, Kato observes, men were busy working and mostly absent, so the culture began promoting the ideal of the nurturing, even coddling, mother. The mother-child bond became symbolic of the Japanese behavioral pattern of amae, a desire by children to be loved and act self-indulgently well into adulthood. While some psychologists have promoted the importance of this nurturing relationship, others say that, taken to extremes, it discourages children from becoming autonomous adults.

Kato believes this problem of dependence was compounded by Japan’s education structure. In the 1970s, the government education system deemphasized competition and focused more on allowing students to develop their own interests. This approach, called yutori kyōiku, was a huge contrast to the strict schooling that had led to Japanese success in the past. Today, yutori is widely criticized for bringing down the overall rigor of Japanese education. Some blame the idea itself, and others believe that it was just implemented incorrectly. Either way, the more relaxed system offered fewer opportunities to contend with demanding authority figures or competition from peers.

As Kato explains, many who were brought up within this environment had a major wake-up call when Japan’s economy hit a period of stagnation in the 1990s. At work, they faced an older, paternalistic model of leadership and had to put up with heavy criticisms from bosses. In the past, unending diligence under such pressures would at least lead to senior positions; job stability was pretty much guaranteed as the country experienced years of steady economic progress. But the rupture of the bubble economy meant that this silver lining had disappeared.

To keep a job, it was no longer sufficient to follow basic orders. Now, workers had to prove themselves as individuals, and many had never developed that skill. It was especially hard on those whose personalities tended to be withdrawn or less socially skilled, who might have been able to fly under the radar in the past. Some simply gave up. “Modern-type depression patients are living out the consequences of a nation transitioning from a culture of collectivism, in which they have to accept their rank within a family, to a capitalistic workplace where they have to forge their own path,” Kato says.


Modern-type depression does not seem to be isolated to Japan. In a 2011 study, Kato surveyed 247 psychiatrists, half of them from Japan and half from eight other countries, including Australia, Bangladesh, and South Korea. He gave the psychiatrists two case vignettes resembling traditional and modern-type depression, and found that both descriptions were familiar to many of the participants.

Based on these doctors’ replies, modern-type depression appears to be most prevalent in urban areas within collectivistic cultures that are experiencing rapid socioeconomic changes. Taiwan, another collectivist society that has rapidly urbanized, had an even higher rate of such cases than Japan; Bangladesh and Thailand also had a high prevalence. As cultures around the world adapt to a globalized workplace, this psychologically demanding adjustment might be in store for many more workers, which could lead to a wave of mental-health troubles that psychologists so far don’t know how to treat. (The same pattern might appear in immigrant populations who move from a country with a collectivist culture to the West, though Kato has not yet looked at such examples.)

In Japan, some researchers remain concerned about continuing to use the term “modern-type depression.” Junko Kitanaka, a medical anthropologist at Keio University, worries that the historical stigma that comes with the label unnecessarily pathologizes young people’s dissatisfaction at work, when it would be more helpful to build a workplace culture in which they can thrive. “If it’s used to better understand workers’ psyche and the genesis of depression, then it’s good,” she says. “But I don’t think it is used that way in general discourse. It is used in a way that places blame unnecessarily on the individual worker’s personality.”

So far, no medical consensus exists on therapeutic interventions for the condition, whatever it’s called. While efforts to normalize depression in Japan have led many people to seek treatment, Kitanaka says that the country still needs to educate people about the many different forms depression can take beyond the current stereotype of self-sacrifice. Kato has proposed that psychosocial interventions such as group therapy and changing companies’ work environments should be the primary treatment strategies, since medication has shown itself to be less effective for modern-type depression.

Kato is currently studying 400 patients long-term to see what protocols work best. In the meantime, one therapy he recommends is Rework, a program that Tsuyoshi Akiyama, a psychiatrist at NTT Medical Center in Tokyo, started for treating conventional workplace depression. More than 220 clinics in Japan use it. The program is run as an imitation workplace, where participants do readings, have discussions, play sports, and work out puzzles with each other. Trained staff members watch and give them ideas about where their interpersonal problems might lie and how to work more effectively.

The city government worker I spoke to who struggled with writing the manual for his colleague is one beneficiary of Rework. After returning to his job, he had a hard time adjusting, because he felt everyone was handling him with kid gloves. He couldn’t find a way to reassure them he was okay, and all of his overthinking about the situation made him lag behind and relapse. Through Rework, he began to see that he needed to start simply doing the work instead of getting caught up in the social dynamics.

Today, he says, if a coworker asked him to make a manual, he wouldn’t blame himself so much if he couldn’t get it done. He would simply state what his limits are. “I was hesitant before to talk to someone who I didn’t want to communicate with,” he says. “Now if I have a difficult colleague, I can handle it.”

During a discussion about solutions to the opioid crisis during last night’s Democratic primary debate, Beto O’Rourke suggested that when pharmaceutical companies go low, we should get high.

The former congressman from El Paso said a veteran he once met wouldn’t have gotten addicted to heroin if the veteran had been prescribed marijuana instead of opioids for his health condition. “Now imagine that veteran, instead of being prescribed an opioid, had been prescribed marijuana, because we made that legal in America [and] ensured the VA could prescribe it,” O’Rourke said.

This was a savvy answer. It clearly won O’Rourke some fans: At the mention of weed, the entrepreneur Andrew Yang, another Democratic presidential candidate, yelled across the stage, “PREACH, Beto.” (And thereby perhaps underscored O’Rourke’s famed youth-pastor energy.) O’Rourke was also in line with the majority of American voters—two-thirds of whom also support legalizing marijuana—as well as the majority of Democratic candidates for president. Joe Biden, the former vice president, whose stance on marijuana is the most conservative of the bunch, has called merely for decriminalizing the substance.

Putting forth marijuana as a solution for chronic pain stands to differentiate O’Rourke, whose campaign has been flagging in recent months. The other candidates mostly focused on putting pharmaceutical executives who have peddled opioids in jail. That’s all well and just, Beto seemed to say, but marijuana could help replace those awful opioids we’re trying to get rid of.

Except, it’s not really clear that it can. As I wrote in June, while it’s true that the introduction of medical-marijuana laws was initially associated with a decline in opioid-overdose deaths, that relationship hasn’t held up in recent years. When the data include states that introduced medical-marijuana laws between 2010 and 2017, medical marijuana was associated with a 23 percent increase in overdose deaths, instead of a reduction in opioid overdoses.

This doesn’t mean that marijuana has no role in pain management; other studies have shown it can indeed reduce pain. And there are certainly reasons to decriminalize marijuana other than to halt the spread of opioids, such as to reduce mass incarceration. But the evidence isn’t quite there yet that marijuana can be a perfect substitute for prescription painkillers. (And fortunately, we don’t have to wait for such evidence to reduce opioid prescribing: Legal, nonaddictive drugs exist that perform just as well as opioids for many types of pain.)

Of course, if everything that was said in a primary debate was backed by randomized controlled studies, these things would be a lot shorter than three hours. Part of the exercise is seeing how ideas play. O’Rourke shrewdly picked up on Americans’ exhaustion with the War on Drugs, their growing acceptance of marijuana, and their sense that prescription opioids are a far greater menace to Americans’ health than plain old pot. He knew he could “preach,” and that his sermon wouldn’t fall on deaf ears.

Abhijit Banerjee, Esther Duflo, and Michael Kremer. The announcement of the 2019 Economics award on the Nobel website:

This year’s Laureates have introduced a new approach to obtaining reliable answers about the best ways to fight global poverty. In brief, it involves dividing this issue into smaller, more manageable, questions – for example, the most effective interventions for improving educational outcomes or child health. They have shown that these smaller, more precise, questions are often best answered via carefully designed experiments among the people who are most affected.
