Every January, the produce drawers in America’s refrigerators fill up with shame. The moment comes at the end of a three-vegetable trend that runs through the holidays. First, in mid-November, the country happily becomes obsessed with brussels sprouts (or “brussel sprouts,” as Americans tend to spell it), likely in anticipation of Thanksgiving and its many delicious, often bacon-laden side dishes. Next, after sprouts have had their day in the sun, spinach ascends and almost always peaks in December. Christmas, after all, also requires side dishes, but you have to mix it up or your cousins will talk.

By January, though, things have changed. The mood is darker. America is ready to repent for the imagined sins of “enjoying food” and “cooking things that taste good.” January belongs to kale.

This annual vegetable cycle shows up in the past decade of Google Trends data, which compiles how frequently Americans trawl the internet for information about certain terms. Since about 2011, when Gwyneth Paltrow taught the world how to make kale chips on the Ellen show, kale has entered the cultural lexicon as a status symbol for a generation of young adults drawn to conspicuous health-consciousness. Whereas spinach has been popular for generations and brussels sprouts have become gradually more trendy, the dominant produce-department narrative of the past decade has been that Americans are just crazy for kale.

But kale’s cultural ubiquity might not be exactly what it seems. After kale briefly overtook spinach as America’s favorite cooked green in mid-2014, Google’s measure of interest in kale has steadily declined. The green’s digital fortunes are currently back at about where they were in 2011, almost as if Paltrow had never kale-chipped. Search data aren’t the end-all-be-all measurement of popularity, but the more leads you follow, the more you begin to question the narrative of kale’s dominance. In fact, America might never have been that into kale in the first place.

My first inkling that kale was in trouble came from the New York magazine restaurant critic Adam Platt’s recent account of his attempt to love takeout-lunch salad, the purveyors of which dot seemingly every street corner in Manhattan. (The four best-known chains—Sweetgreen, Chopt, Just Salad, and Dig Inn—have a combined 81 locations in the borough.) During Platt’s experiment, someone from Sweetgreen told him that kale sales had waned at its stores, even as its menu had expanded to include grain bowls and warm dishes.

[Read: America’s $300 million salad industry]

It seemed that if kale was losing Millennials who still love to buy super-trendy $15 salads—Sweetgreen’s most ardent fans—then something larger might be afoot. A representative from Sweetgreen would not confirm or deny what the company had told Platt about kale’s popularity, and offered no further explanation. But the company’s earlier comment was enough to send me into the internet’s data mines with my red string and pushpins, ready to unravel the grand kale conspiracy.

Kale’s drop in Google Trends would be less ominous if the vegetable’s preparation were straightforward. Relatively few people search the internet for romaine lettuce, for example (unless they think it might kill them), even though romaine consumption has accelerated in the past two decades in the United States. But when it comes to cooking greens, the perennial holiday spikes suggest that people need to come back again and again to the giant recipe box of the internet, even after learning to prepare something once. Kale, in particular, has a natural taste and texture—bitter, tough, laborious to chew—that is off-putting to many. Including it in a dish takes work and know-how, even if you’re just making a salad. (Raw kale has to be … massaged? Finely chopped? Beaten into submission?)

Kale is currently at less than half the search popularity of its 2014 high. According to the most recent data from the Produce Market Guide, 8 million fewer pounds of kale were sold in America in 2017 than in 2016, a 6 percent drop in national sales volume. But people aren’t turning away from vegetables overall; they’re just looking in other bins in the produce section. Spinach’s sales volume went up nearly 4 percent in the same period. Brussels sprouts saw growth of 19 percent. A representative for the specialty grocery chain the Fresh Market confirmed that its recent sales reflect brussels sprouts’ burgeoning popularity, but noted that kale was still “holding its own” with shoppers. The company declined to provide specific sales figures.

If this were an episode of Law & Order, kale’s defense attorney would point out that the evidence against his client is circumstantial at best. This is true: The vegetable’s exact future remains unknown, and using search and sales data to triangulate people’s true feelings is certainly an imprecise science. Many people do, of course, genuinely like kale. But the most revealing thing about America’s relationship with kale isn’t whether people are buying the vegetable, but when.

During the holidays, people make dishes they love and want to share, so it makes sense that fewer Americans would be searching for brussels-sprouts cooking times as soon as a big food-prep holiday is over. January, however, is not really part of the holidays. Rather than piles of delicious food, the month traditionally brings promises of long-term self-abnegation; it’s when America’s sense of duty toward “healthy” things shines. But like almost all New Year’s resolutions, America’s commitment to eat more kale rarely makes it to February. If eating kale were something that people mostly enjoyed, you’d think they’d keep right on stuffing it down their gullets in blissful perpetuity.

[Read: It’s the most inadequate time of the year]

It might be that Americans try to love kale every January for the same reason the green so quickly became a household name in the first place. In the era of “clean eating” and internet wellness fads, kale comes approved by internet wellness gurus. It has been branded a “superfood,” and it’s talked about in juice shops with such hushed reverence that you’d think it held the key to eternal life. It’s low calorie and nutrient dense, with particularly robust supplies of vitamins A, C, and K, plus some fiber and protein. But avoiding kale won’t hurt you. Pretty much all dark, leafy greens have strong nutrient profiles, so there’s little reason to privilege one over all the others.

Marketers seem to have quickly caught on that many Americans might want to consume kale without being forced to taste, chew, and swallow a significant amount of it. A 2017 analysis from Nielsen identified eight areas in which sales of products containing the vegetable had grown significantly in the previous year, and many of them—snacks, pasta sauces, and deli dips, for example—are prepared foods in which kale’s characteristics can be masked. Sales of vitamins and supplements with kale in them also more than doubled, even though desiccated kale powder is basically devoid of the nutrients that make the full plant good for you. The most explosive growth for kale products happened in baby foods, the sales of which nearly quadrupled. The Americans eating kale most consistently, in other words, might be those who literally have things spoon-fed to them, with no say in the matter.

Puzzling together the available data creates a picture of a populace with an uneasy relationship with a vegetable whose health reputation is so powerful that people seem to think of it like taking a vitamin. Or maybe the more accurate analogy for eating kale would be flossing—something Americans know is supposed to be good for them, but that’s still annoying and unpleasant. Food trends usually last 10 to 20 years before waning, but if the things people search for and buy are any indication, many Americans seem eager to make it to kale’s cultural finish line. If Beyoncé dancing pantsless in a sweatshirt emblazoned with the word kale can’t persuade the country to get over its aversion to the vegetable, it might be time everyone admitted their true feelings and just went back to spinach.

Most people who spend time in health care are aware of the opioid crisis. But how bad is it really? The National Institute for Health Care Management (NIHCM) Foundation provides a number of interactive graphs to quantitatively describe how the opioid crisis evolved between 2000 and 2017. For instance, the graph below shows how the opioid epidemic started with prescription painkillers, then moved on to heroin; now the biggest threat is synthetic opioids such as fentanyl.

Also, the opioid crisis doesn’t only affect the young. Almost 3 in 10 opioid-related deaths occur in individuals aged 50 and above.

You can find additional graphics and statistics here.

Where did these statistics come from? The NIHCM website notes:

Data on opioid overdose deaths were derived from the multiple cause-of-death files of the Wide-ranging Online Data for Epidemiologic Research (WONDER) data system maintained by the Centers for Disease Control and Prevention (CDC).

The FDA faces a near-existential crisis: digital. New digital technologies have the capability to improve the way care is delivered to patients. The question is, how do we make sure they are safe? The FDA is certainly good at making sure drugs and devices are safe; in the case of digital technologies, however, it is less clear how they should be regulated. Do updates to software to improve run time need to be reviewed? What about changes to user interfaces? Or fundamental changes to underlying algorithms?

If the FDA does not provide oversight, unsafe products will come to market. However, providing too much oversight may decrease innovation, as the cost to bring products to market may rise dramatically.

Yesterday, the FDA announced additional refinement to its guidance on digital health technology development. One point that was made clear was that lifestyle apps will not be FDA-regulated.

We’re making clear that certain digital health technologies – such as mobile apps that are intended only for maintaining or encouraging a healthy lifestyle – generally fall outside the scope of the FDA’s regulation. Such technologies tend to pose a low risk to patients, but can provide great value to consumers and the healthcare system.

At the same time, higher risk technologies may need to be regulated.  But how would the FDA measure risk?  FDA announced that it was using the International Medical Device Regulators Forum (IMDRF) risk-based framework for categorizing products.  IMDRF uses the term Software as a Medical Device (SaMD) and finds a few key challenges unique to these products.

  • Medical device software might behave differently when deployed to different hardware platforms. 
  • Often an update made available by the manufacturer is left to the user of the medical device software to install. 
  • Due to its non-physical nature (a key differentiator), medical device software may be duplicated in numerous copies and widely distributed, often outside the control of the manufacturer.
  • Deployment cycles are often rapid.

The IMDRF categorization of SaMD depends on two dimensions: (i) how seriously ill the patient is and (ii) how the digital technology is being used.  The sensible categorization is in the table below.

[Table: IMDRF risk categorization of SaMD]
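As a rough illustration of the two dimensions described above, the IMDRF matrix can be sketched as a simple lookup. The function and key names here are my own invention; the category assignments (I through IV, with IV the highest risk) follow the IMDRF's published SaMD risk framework, which crosses the state of the healthcare situation against the significance of the information the software provides.

```python
# Sketch of the IMDRF SaMD risk-categorization matrix.
# Keys: (state of healthcare situation, significance of the information).
# Values: risk category, where "IV" is highest risk and "I" is lowest.
IMDRF_CATEGORIES = {
    ("critical",    "treat_or_diagnose"): "IV",
    ("critical",    "drive_management"):  "III",
    ("critical",    "inform_management"): "II",
    ("serious",     "treat_or_diagnose"): "III",
    ("serious",     "drive_management"):  "II",
    ("serious",     "inform_management"): "I",
    ("non_serious", "treat_or_diagnose"): "II",
    ("non_serious", "drive_management"):  "I",
    ("non_serious", "inform_management"): "I",
}

def categorize_samd(state: str, significance: str) -> str:
    """Return the IMDRF risk category for a SaMD product."""
    return IMDRF_CATEGORIES[(state, significance)]
```

For example, software that diagnoses a critical condition lands in category IV, while an app that merely informs clinical management of a non-serious condition lands in category I.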

For more specific information on how FDA will manage risk for digital, please see the list of selected FDA guidance documents below.

A small group of people goes into the woods with a chain saw. That’s how it begins. It ends with the death of a tree—felled, illegally, by poachers.

The United States Forest Service estimates that as many as one in 10 trees cut in the national forests is poached. In a new animated video, the Atlantic contributing writer Lyndsie Bourgon reveals the surprising reason tree-poaching is on the rise.

For more, read Bourgon’s Atlantic article, “The Opioid Crisis Is Killing Trees Too.”

Life Up Close is a project of The Atlantic, supported by the HHMI Department of Science Education.

A few years ago, haunted by vague memories of being a weak middle-schooler, Brett McKay decided he wanted to be able to do more pull-ups. McKay, who runs the website and podcast The Art of Manliness,  had in the past tried doing a traditional, twice-weekly regimen, gradually building up his reps. But this time, he turned to a training technique from Pavel Tsatsouline, a former Soviet trainer who is credited with getting Americans into kettlebells, the rounded weights with handles for swinging or lifting.

After reading a book by Tsatsouline, McKay decided he needed a radical approach to his fitness routine. He needed to grease the groove.

Greasing the groove, as Tsatsouline explains it, means not working your muscles to the point of failure. A common idea in weightlifting is that you should lift until you can’t do another rep, purposely damaging muscle tissues so they grow back bigger. But muscle failure, Tsatsouline writes in his 1999 book, Power to the People! Russian Strength Training Secrets for Every American, “is more than unnecessary—it is counterproductive!”

Instead, Tsatsouline advocates lifting weights for no more than five repetitions, resting for a bit between sets and reps, and not doing too many sets. For a runner, this would be like going for a four-mile jog, but taking a break to drink water and stretch every mile. Tsatsouline’s book suggests spending 20 minutes at the gym, tops, five days a week. In this way, he claims, you grease the neurological “groove,” or pathway, between your brain and the exercises your body performs. It’s not exactly the brutal routine you’d expect from someone billed as a Soviet weight lifter. But Tsatsouline contends this is the most effective way to build strength.

Over time, greasing the groove has trickled down through the fitness realm, with each lifter and CrossFit champ who practices it slightly changing its meaning. In The Complete Guide to Bodyweight Training, the sports therapist Kesh Patel defines it as lifting weights in “smaller, but frequent chunks, rather than one large one.” On Instagram, people tag everything from yoga poses to 100-pound deadlifts with #greasethegroove. (The term is, helpfully, both sciencey and sexy sounding.)

“I can’t say for certain why it has gained popularity,” said Christopher J. Lundstrom, a professor of exercise science at the University of Minnesota, in an email, “but I suspect it has to do with the simplicity of the idea, and the fact that it does not require a particularly hard effort (i.e., it doesn’t hurt) and often requires little to no equipment.”

In fact, greasing the groove has become something of a catchphrase for people who don’t have the time or ability to do a full workout, but still want to squeeze in a little exercise. “Some days your daily routine is better than others but the key is consistency and #greasingthegroove,” one yogi’s Instagram caption says. The practice appears to have taken on a Michael Pollan–esque definition: Lift weight, not too much, most of the days. For busy people who just want to squeeze in fitness however they can, that might be just the right mantra.

[Read: The futility of the workout-sit cycle]

One way to grease the groove is to just do the exercise whenever you think of it. Ben Greenfield, in Beyond Training, describes how he would do three to five pull-ups every time he walked under a pull-up bar installed in his office doorway. By the end of the day, he’d have performed 30 to 50 pull-ups with minimal effort.

McKay opted for something similar: He set up a pull-up bar in his door frame, and every time he walked under it, he would do one. “You’re allowing yourself to practice more without going to fatigue,” he says. “If you’re constantly thrashing your body, doing max sets every time you do a pull-up, you’re gonna have a bad time.” Anyone who has tried to climb the stairs to their apartment on achy quads after an ambitious leg day knows the risks of overexertion. Within a month, McKay says, he went from being able to do about five pull-ups to about 15.

Kevin Weaver, a professor of physical therapy at New York University, told me that training by greasing the groove can help your body increase the number of muscle fibers it uses to perform a certain action. Brad Schoenfeld, an associate professor of exercise science at CUNY’s Lehman College, also sees a potential benefit. Because of how the brain learns, he says, doing four sets of an exercise over five days rather than 20 sets in one day, for instance, might be a way to improve technique or form, which could result in getting stronger even if you don’t add additional weight. This would be especially helpful for more complex exercises, like certain kettlebell moves.

Schoenfeld cautions that a deliberately patient approach to lifting is not the same as “just doing a pull-up now and then,” though. As with most of life’s good, easy things, there’s not much evidence that haphazardly greasing your groove will make you much stronger. While lifting lighter weights for more repetitions can increase strength and muscle-building, strength improvements are still slightly better if you lift heavier weights, says Mike Roberts, an associate professor at Auburn University’s School of Kinesiology. He recommends switching up your workout regimen so that occasionally you perform workouts with heavy loads and separate workouts with light loads. And contra Tsatsouline, he says performing the exercises to the point of exertion is what’s most important.

Greasing the groove, in other words, might not actually be a secret Spetsnaz shortcut to getting ripped. But the loose way many people are interpreting the practice—try to get stronger in small bursts, whenever the opportunity presents—could offer something more valuable. Ria Heaton, a stay-at-home mom, started greasing the groove in the last year to increase the number of pull-ups she could do. Within about a month, she went from one to five—not as many as the most hardcore gym rats, maybe, but still a high number for a woman. Heaton’s explanation for why greasing the groove works is simpler than muscle fibers or perfecting technique. “The more you practice something, even a little bit at a time, the better you become at it,” she told me via email.

This more relaxed “greasing the groove light”—call it “spritzing the groove with Pam”—might still be a strategy for people who want to get stronger, but don’t have the time to get swole. In the approach’s slow simplicity, it could be a more sustainable way to exercise. Though it’s almost certainly not what Tsatsouline intended, doing whatever physical activity you can whenever it’s convenient is still a decent way to burn a few calories and feel less sedentary. An exercise strategy intended for Navy SEALs is actually perfect for everyday cubicle dwellers.

I, for instance, have been told I should lift weights. Every time I plummet out of crow pose in a yoga class, my teacher says I need to work on my upper body strength. (Well, that and “be less afraid,” which there’s no workout for.) Such admonitions would have motivated me in high school, when I would cut out weight exercises from Seventeen magazine and peer down at them while I grunted away in my town’s tiny community-college gym. But these days, life has eclipsed my desire for abs. I’m happy if I can drag my increasingly jiggly butt to the elliptical before 9 p.m. on a weeknight. Realistically, the only way I would have time for upper-body work is by doing the occasional push-up between folding the laundry, sending that email, making that phone call, and chopping up that stuff for the slow cooker.

The bodybuilders out there might criticize this softer way of greasing the groove as lazy or ineffective. But in a way, it fits with a broader cultural trend of embracing imperfection and simply trying one’s best. Americans’ stressed-out lives have given rise to a new philosophy in which we are, essentially, encouraged to admit defeat on certain things (spotless kitchens, impeccable pecs, and so forth). Our schedules won’t ease up on us, the thinking goes, so maybe we should ease up on ourselves.

[Read: I found the key to the kingdom of sleep]

If you wake up in the middle of the night and are stressed because you can’t fall back asleep, you’re supposed to tell yourself that’s fine; you’ll fall asleep eventually. Similarly, if you can’t lift a ton of weights, maybe that’s fine, too. You’ll lift them gradually.

When John F. Kennedy was 17, he was part of a prank club. At Connecticut’s elite Choate school in 1935, word spread that the group was planning to pile horse manure in the gymnasium. Before this “prank” could happen, the school’s headmaster confronted the troublesome boys. The scheme was the culmination of a list of offenses at the school, and young Kennedy was expelled.

Though the sentence was eventually reduced to probation, the headmaster suggested that Kennedy see a “gland specialist” to help him “overcome this strange childishness.” The doctor Kennedy ended up seeing was Prescott Lecky, a young, mutton-chopped psychologist. Lecky had made a name for himself at Columbia University as a skeptic of psychoanalytic theory, running up against Carl Jung and the Viennese establishment’s approach at the time. Instead of tracing Kennedy’s rebellious instincts to repressed motives or early-life stress, Lecky interrogated the boy’s sense of self.

Lecky paid particular attention to Kennedy’s talk of sibling rivalry. “My brother is the efficient one in the family, and I am the boy that doesn’t get things done,” Kennedy says in one of Lecky’s records. This constituted what Lecky considered a “self view”—a deeply held belief about oneself. He wrote that Kennedy had a reputation in the family for “sloppiness and inefficiency, and he feels entirely at home in the role. Any criticism he receives only serves to confirm the feeling that he has defined himself correctly.”

Kennedy’s case fit into a new idea Lecky was developing, called self-consistency theory. It posited that people are always striving to create a world in which their ideas of themselves make sense. We are motivated, sometimes above any sense of morality or personal gain, simply to hold our views of ourselves constant. This allows us to maintain a coherent sense of order, even if it means doing things the rest of the world would see as counterproductive.

The idea was never fully formed, and Lecky died at just 48, his work unpublished. But today, the basic concept is seeing a renewed interest from scholars who think Lecky was truly onto something. When the psychologist’s students compiled his writing posthumously, in 1945, the postwar world was grappling with how humans were capable of such catastrophic cruelty. Surely entire armies had not been motivated by their relationships with their mothers. The early science of the mind was beginning to delve into the timeless questions of philosophy and religion: Why do we do destructive things—to others, and to ourselves? Why do we so often act against our own interests? Why would a young boy risk his acceptance to Harvard to pile manure into a school gym?

These questions meant studying the roots of identity, and how a person could be at peace with being hateful and even dangerous. Now, decades later, an emerging explanation points to something more insidious than the possibility that someone simply identifies with a malicious group or blindly follows a toxic person. Instead, out of a basic need for consistency, we might take on other identities as our own.


“I have always been intrigued with the surprising things people will do in the service of preserving their identity,” says William Swann, a social- and personality-psychology professor at the University of Texas at Austin. He took up Lecky’s ideas and, in the 1990s, built them out into what he called self-verification theory. It asserts that we tend to prefer to be seen by others as we see ourselves, even in areas where we see ourselves negatively. As opposed to cognitive dissonance—the psychological unease that drives people to alter their interpretation of the world to create a sense of consistency—self-verification says that we try to bring reality into harmony with our long-standing beliefs about ourselves.

Swann’s theory offers an explanation for all sorts of seemingly counterproductive things that people do, from procrastination to poisoning relationships. Swann has noted that people with negative self-views tend to withdraw or flee from romantic partners who treat them too well. Some would call this “self-sabotage”—the basis for why some people ignore those who seem to genuinely appreciate them.

As Swann sees it, outwardly appearing self-injurious behaviors like these might actually be part of a fundamental “desire to be known and understood by others.” Self-views enable us to make predictions about our world and guide our behavior. They maintain a sense of continuity and order. Stable self-views also, ideally, help facilitate relationships and group dynamics. When people know their role in any particular dynamic, they predictably play the part, even when doing so is self-destructive.

Self-views seem to have their basis in how others treat us, and they solidify as we accept our position and behave to further warrant similar treatment. An overall sense of identity comes together like a patchwork quilt of group and self, defined by where we fit into the world. Each of us is someone’s child, someone’s neighbor; a member of some community or religious sect; we are the work we do, the dogs we have, the places we’ve lived, the bands we listen to and teams we cheer for and authors we keep on our shelves.

Sometimes we bond especially strongly with some of our associations, such as family, a military group, or a religion. We say we can’t imagine existing without something. Even in cases of extreme identification, however, people typically maintain a sense of their own identity. There is a distinct conceptual border between self and other. You are a part of the team, and you are you.

Occasionally, though, this border becomes permeable. Over the past decade, a new conceptualization has gained attention. It began with the seeds of an idea after the attacks on 9/11, Swann says, in that the terrorists’ actions seemed to him to be driven by unusually powerful group identities. A willingness to die—and to kill thousands of others in the process—goes beyond simple allegiance. He reasoned that these people had essentially taken on the group identity as their own.

Swann gradually developed the concept and deemed it “identity fusion.” Along with a collaborator named Angel Gómez, he defined it in 2009 as when someone’s “personal and social identities become functionally equivalent.” The border between self and other, as Swann sees it, “become[s] porous.” The phenomenon is sometimes described as a visceral feeling of oneness with a group or person, and sometimes as an expansion of the self.

“When people are fused, your personal identity is now subsumed under something larger,” says Jack Dovidio, a psychology professor at Yale. One way researchers test for fusion is to ask people to draw a circle that represents themselves, and a circle that represents another person (or group). Usually people draw overlapping circles, Dovidio explains. In fusion, people draw themselves entirely inside the other circle.

“This isn’t the normal way most people think about identity,” says Jonas Kunst, a psychology researcher at the University of Oslo. In disagreements over politics, for example, many people believe they can change someone else’s mind with a thoughtful-enough argument. Typically that’s the case; people are willing to challenge their group identities, if reluctantly. In fusion, though, a perceived challenge to the group’s ideology is a challenge to the self. Arguments about climate change, for example, might not actually be about climate change, and instead about people protecting their basic sense of order and consistency.

By a similar token, pundits often chalk “radical” behavior up to pathology, or simply to a vague “mental illness” or religious or political extremism. But fusion offers a framework that involves an ordered thought process. It is thought of as distinct from blind obedience (often assumed to be the case in cults and military violence), in which a person might follow orders and torture a prisoner, either unquestioningly or out of fear for personal safety. In fusion, people become “engaged followers.” These people will torture because they have adopted the value system that views the torture as justifiable. Engaged followers do so of their own volition, with enthusiasm.

[Read: What your politics do to your morals]

Fusion is not a bunch of individuals contorting their way of thinking, but a bunch of individuals suspending their way of thinking. “It makes us more likely to do extreme things that aren’t consistent with our normal identity,” Kunst says. “It allows you to do things you couldn’t conceive of doing.”


Identity fusion might be a new name for a timeless phenomenon, but Swann and others find it helpful as part of an explanation for current social divides. Swann believes that the political landscape accounts for growing interest in the concept, and that better understanding how and why fusion happens could have serious global consequences. It could also just make it easier to understand other people, and to be aware of one’s own susceptibility.

This month, in the journal Nature Human Behaviour, Kunst and Dovidio examined fusion specifically involving Donald Trump. In a series of seven studies using various surveys, including Swann and Gómez’s “identity fusion scale,” the Yale and Oslo team found that Americans who fused with Trump—as opposed to simply agreeing with or supporting him—were more willing to engage in various extreme behaviors, such as personally fighting to protect the U.S. border from an “immigrant caravan,” persecuting Muslims, or violently challenging election results.

The fusion might explain some apparent contradictions in ideology, Dovidio says. Even people who typically identify as advocates of small or no government might endorse acts of extreme authoritarianism if they have fused with Trump. In fusion, those inconsistencies simply don’t exist, according to Dovidio: Value systems are only contradictory if they’re both activated, and “once you step into the fusion mind-set, there is no contradiction.”

Fusion seems most likely to happen when there is a charismatic leader, particularly of an authoritarian bent. “Humans are social, and the individual person has a power over us that abstract thought doesn’t,” Dovidio says. “The leader is a concrete manifestation of ideas, but allegiance to individuals will trump allegiance to ideas.” In that sense, the idea of fusion might help some people explain how family members or colleagues whom they view as fundamentally good people might seem to suspend their typical sense of morality and do things like downplay Trump’s bragging about groping women; enriching himself at taxpayer expense; defending white supremacists in Charlottesville, Virginia; or failing to release his tax returns despite multiple promises to do so.

[Read: The deepening crisis in evangelical Christianity]

The idea of identity fusion is not, the researchers assure me, some effort to use science to overlook or excuse bigotry or racial hatred, which are distinct elements in the formation of identity. Though fusion tends to happen with authoritarian leaders, the fusion is not itself antisocial or bad. It can be seen in political movements of all sorts; Kunst cites followers of Mohandas Karamchand Gandhi. Fusion might have arisen as a psychological adaptation to facilitate cooperation among kin in the face of extreme adversity, explains Harvey Whitehouse, chair of social anthropology at Oxford. Even so, Whitehouse warns, “social institutions could hijack the fusion mechanism in novel ways.”

A sense of deprivation—real or perceived threats to socioeconomic status—also seems to leave people inclined to fuse. “When we primed people to think of relative deprivation, this increased their likelihood of fusion with the leader,” Kunst says, noting that economic recessions have often preceded authoritarian movements. The findings from Kunst and Dovidio’s study suggest that Trump’s continued emphasis on the relative deprivation of his base—and his promise of the power and resources presumably under his control as an apparently wealthy Manhattan real-estate developer and reality-TV star—probably helped his election by increasing his followers’ fusion with him.

Even if this personal enrichment didn’t come to fruition for his voters, the researchers found that fusion with Trump only increased after his election. The presidency itself made him more powerful, and hence a more attractive target to fuse with.

Fundamentally, fusion is an opportunity to realign the sense of self. It creates new systems by which people can value themselves. A life that consists of living up to negative ideas about yourself does not end well. Nor does a life marked by failing to live up to a positive self-vision. But adopting the values of someone who is doing well is an escape. If Donald Trump is doing well, you are doing well. Alleged collusion with a foreign power might be bad for democracy, but good for an individual leader, and therefore good for you. “Fusion satisfies a lot of need for people,” Dovidio says. “When you fuse with a powerful leader, you feel more in control. If that person is valued, you feel valued.”

The process of de-fusing, then, might involve offering alternative systems of creating consistency and order. If people who are inclined to fusion have the option to fuse with entities that do not wish to exploit them, and that are generally good or neutral for the world, they might be less likely to fuse with, say, a demagogue. “But, of course,” Dovidio says, “that’s hard.”

One notable trend in recent years is the purchase of hospitals by large regional or national hospital chains. An April 2019 slide deck from the NIHCM Foundation shows this trend graphically.

A paper by Hsieh and Rossi-Hansberg (2019) provides some aggregate data as well as a case study from the Boston area.

Four decades ago, about 85% of hospitals were single establishment non-profits. Today, more than 60% of hospitals are owned by for-profit chains or are part of a large network of hospitals owned by an academic institution…As an example of the former, consider the Steward Health Care Group. This company was created by the Cerberus private equity fund in 2010 when it purchased 6 Catholic hospitals in Boston….Cerberus’ goal was to create the “Southwest Airlines of healthcare” by figuring out and codifying best practices and implementing these practices over a large scale… By 2019, Steward had expanded from its original hospitals in Boston to 36 hospitals located in 9 states and Malta.

Hsieh and Rossi-Hansberg (2019)

Is big beautiful? What is the advantage of such large chains?

One potential economy of scale is the ability to better leverage new technology. For instance, Steward Health operates a remote intensive care unit (ICU). The remote ICU provides live video feeds, real-time readings from instruments, scans, and lab results, and Steward software is used to identify alarming trends. By pooling data across hospitals, this approach has the potential to be more effective.

The Hsieh and Rossi-Hansberg (2019) paper notes that this is part of a bigger industrial trend towards increased consolidation in the services sector of the economy more generally.

…new technologies have enabled firms that adopt them to scale production over a large number of establishments dispersed across space. Firms that adopt this technology grow by increasing the number of local markets that they serve, but on average are smaller in the markets that they do serve. Unlike Henry Ford’s revolution in manufacturing more than a hundred years ago when manufacturing firms grew by concentrating production in a given location, the new industrial revolution in non-traded sectors takes the form of horizontal expansion across more locations.

Source: