At hospitals like mine, we usually need only a day or so to pin down the cause of a patient’s problems and start the best available treatment. A patient who poses a “diagnostic dilemma” for more than a few days is, well, exciting. Hardly anyone eludes diagnosis for a week. It took 10 days to get to a diagnosis for my patient, during which time he became a minor celebrity as everyone struggled to be the first to figure out what he had.
He turned out to have a rare condition called Sweet’s syndrome, known formally as febrile neutrophilic dermatosis and familiar to only a handful of specialists. I had heard about the condition but had never seen it before. It is a disease in which white blood cells invade the skin, causing exquisite pain and tenderness. Its cause is unknown, but Sweet’s syndrome tends to occur in patients with cancers or infections. It usually improves with high doses of steroids, as it did in the case of my patient.
However, before that diagnosis was made, the end of the month came and went, which meant that my patient saw changes in almost all of his many physicians. Not every schedule turned over at month’s end, since some of the physicians rotated in two-week blocks. But during his 10-day admission, he had changes in his intern and resident physicians, in the hospitalist who was his attending physician, and in the consultation teams in rheumatology, infectious disease, and dermatology. He was never neglected; he was, after all, what we call a fascinoma, a fascinating case. In fact, he received so much attention that both of the rheumatologists who had treated him gave him appointments for office visits after the hospitalization.
It was at this point that my patient’s wife e-mailed me, asking whether it was really necessary for him to see both rheumatologists. They lived an hour away, after all, and it was still a struggle for the patient to get into and out of bed, let alone a car. They would be happy to do it if it was important, but was it?
I was embarrassed by the confusion. I apologized and tried to put the best possible spin on it, saying that my colleagues had been overzealous in their efforts to make sure that his problems received appropriate follow-up after discharge from the hospital. It was at this point that the patient’s wife wrote that she and her husband were “frightened”—she used that word—by the number of clinicians involved in her husband’s care. She continued, “We are depending on you to tie all this together.”
I am a hard worker, and I have not received too many reminders in my career to “do your job.” But I recognized this one when it arrived. I described the case and the e-mail to my friend and colleague, the physician-writer Atul Gawande, and he said, “Patients like this one are a stress test for our healthcare system. They show us where and how we break down and fall apart.”
In cardiology, when we do stress tests, patients ask whether they passed. Actually, no one passes or fails a stress test. The speed and the incline of the treadmill are increased bit by bit until the patient has to stop. Everyone has to stop eventually; the question is how soon and why. Cardiologists use stress tests to determine how long patients can keep going, how much they can push themselves, what makes them stop, and how their hearts are doing when they reach their limit.
Patients like mine unmask the breaking points of our healthcare system: the points at which even smart, good, hardworking clinicians who are doing their best fail to inspire trust or give patients the peace of mind that comes from knowing that we are all working together on their behalf. It is ironic that the more sophisticated the care, the greater the risk for chaos—and the sense that patients are being forgotten.
What We’ve Lost
It wasn’t always this way for either patients or clinicians. A century ago, the role model for all physicians was a doctor who prided himself on sitting at the bedside of patients and listening intently to what they had to say. That physician was Sir William Osler (1849–1919), a magnificent teacher who is often celebrated as the father of modern medicine and is frequently portrayed in photos and paintings surrounded by students as he sits in a chair at the bedside of patients.
Without exaggeration, Osler revolutionized medical education as the first physician in chief at Johns Hopkins Hospital by pulling medical trainees out of the lecture halls and onto hospital wards. Until then, students would earn medical degrees by attending classes taught by teachers with uncertain credentials. The poor quality of most of these schools was exposed in the landmark Flexner Report of 1910, in which Abraham Flexner described American medical education in harsh terms but praised a few institutions, most notably Johns Hopkins.
Medical training at Johns Hopkins was outstanding because Osler knew that even with the best teachers, classroom teaching did not prepare anyone to be a real physician. He believed that the only way to learn medicine was to see patients, listen to them, and thoroughly examine them. One of his most famous sayings was “If you listen carefully to the patient, they will tell you the diagnosis.”
Figure 1.1. William Osler at a patient’s bedside. Reproduced by permission of the Osler Library of the History of Medicine, McGill University.
Osler would spend hours with students on teaching rounds at the bedsides of patients, demonstrating what could be discerned through thorough patient histories and physical examinations. Later, he would take students to the hospital morgue and perform autopsies so that they could correlate postmortem findings with what they had witnessed in living patients. He believed immersion with patients was so important that he established the first full-time live-in residency at Hopkins: a training period just after medical school when young physicians would reside in the hospital so that they could be around their patients constantly.
Osler knew that sitting by the bedside to give emotional comfort to patients was often the best thing physicians of his era could offer. Through countless hours of unflinchingly honest observations of patients, Osler could see that many treatments simply did not work. He was fond of quoting Oliver Wendell Holmes’s quip that if all the medications that were used to treat patients were dumped into the ocean, it would be good for patients but very bad for fish. Instead, Osler advocated accurate diagnosis, followed by reliance on “mother rest and father time.”2
Osler cited Hippocrates as saying that it is a most excellent thing “for the physician to cultivate prognosis, and nothing so much inspires confidence as the power of foreseeing and foretelling in the presence of the sick the present, the past, and the future; he will indeed manage the cure best who has foreseen what is to happen.”3 In light of the lack of effective treatments, he believed that much of the real role of physicians consisted of predicting the patient’s future and relieving the patient’s suffering, because he knew that in his era physicians could only rarely change destiny.
The Era of Optimism
By 1919, when Osler died, a victim of the worldwide Spanish influenza epidemic, that fatalism was about to give way to optimism. In the decade that followed, the first glimpses of truly effective treatments for diseases that had been routinely fatal began to emerge. In 1921, for example, insulin was extracted from animals by a Canadian researcher named Frederick Banting and his student Charles Best. A year later, they gave it to a boy dying of diabetes at Toronto General Hospital. The first injection nearly killed the boy because of an allergic reaction, and the scientists worked to purify the insulin. The second time they gave it, the results were miraculous. They went around a ward of children who were in comas because they were dying of diabetes and injected each child with the new insulin. By the time they were injecting the last child, the first was awakening.
In 1923, a surgeon at Harvard Medical School and Peter Bent Brigham Hospital named Elliott Cutler performed the first successful major heart operation, opening a narrowed valve in the heart of a 12-year-old girl. In 1928, Alexander Fleming observed that bacteria on a culture plate had been killed by a contaminating mold, leading to the discovery of penicillin. As it turned out, most of the research advances of the 1920s took decades to make a real difference in medical care. For example, Elliott Cutler’s next several patients all died, and major heart surgery was not undertaken again until after World War II.
However, during that period, the culture of medicine began to change, a process captured in Arrowsmith, the Pulitzer Prize–winning novel published in 1925 by the American writer Sinclair Lewis that came close to being required reading for young aspiring physicians. It tells the story of a bright young man who becomes a physician and researcher and makes discoveries that help curb an epidemic. His studies take Martin Arrowsmith from a small town to New York City. Along the way, Arrowsmith loses his wife to the epidemic he is studying, setting off a cycle of despair and preoccupation with superficial values such as wealth and fame. Eventually, however, Arrowsmith abandons New York and the fast life and resumes his research career in rural Vermont.
Arrowsmith reflected the hope and excitement that was beginning to surround medical research in the 1920s. Technical advances increasingly allowed scientists to measure processes at work inside the bodies of healthy people (physiology) and understand how those processes changed when diseases developed (pathophysiology). Insights into the mechanisms of diseases set the stage for treatments that might actually stop or reverse them.
World War II led to enormous investments in medical research. For example, the U.S. government funded studies of shock—the condition in which blood flow is inadequate to support normal organ function—in hopes of helping wounded soldiers survive. That research led to the development of cardiac catheterization, in which thin tubes are inserted into blood vessels and threaded into and around the heart. That innovation ultimately led to coronary angiography, coronary artery bypass graft surgery, and coronary angioplasty.
The desperate efforts to save wounded soldiers also led to a general increase in the audacity of physicians, especially surgeons. Cardiac surgery had gone into hibernation after Elliott Cutler’s series of fatalities in the 1920s, but when soldiers arrived at hospitals in England with shrapnel buried in their beating hearts, surgeons such as Dwight Harken would not give up. Harken became famous for opening their hearts, pulling out the metal, and sewing the hearts up, and a surprising number of the soldiers he treated survived. Those experiences showed surgeons that it was possible to operate on the heart and set the stage for Harken and others to start performing heart valve surgery after the war.
There were also technical advances that had nothing to do with medicine, at least at first. Ultrasound was used to detect submarines in the sea and was eventually applied to detecting abnormalities in the body. Oscilloscopes became cardiac monitors. Research on the magnitude of g-forces that could be tolerated by fighter pilots before they lost consciousness provided new insights into cardiovascular physiology.
The cultural sequelae to the war also played a role in medical advances, especially in the United States, where people felt lucky to be alive and confident that they could take on any foe. A combination of gratitude and confidence led to an enormous increase in funding for research, and the fatalism that had characterized Osler’s era gave way to optimism, even cockiness.
The transition that was under way after World War II is apparent in the first edition of what would become the bible of internal medicine, Harrison’s Principles of Internal Medicine. The first edition was published in 1950, when the title did not yet include the name of its legendary editor, the cardiologist Tinsley R. Harrison. In the introduction, Harrison wrote, “The modern view of clinical teaching holds that the classic approach, with primary emphasis on specific diseases, is inadequate.” He was announcing that there was a new era dawning in which physicians had to understand the science behind diseases because they might actually be able to do something other than hold patients’ hands and predict their prognoses, as Osler had done.
However, in 1950, there were not many effective treatments yet. That first edition of Harrison’s reflected a philosophical acceptance of death. Harrison, who wrote the cardiology chapters himself, reflected, “Arteriosclerosis, removing people from active life when the period of maximum fertility has passed, is of benefit to the young if it relieves them of the care of parents, or brings them an inheritance as they enter adult life. … Any attempts to eradicate such a disease from the urban population will be frustrated by natural selection and the survival of more grandchildren in families with few grandparents. Those best fitted to survive in a world growing more urban are those who cease to require support as soon as their roles as parents have been completed. Atherosclerosis and hypertension are now the chief factors in determining that we do not overstay our allotted span of life too long.”4
Despite that fatalistic assessment, the book’s emphasis on the scientific basis of disease helped foster a generation of physician-researchers who over the ensuing decades did more than predict the course of disease. They changed it. In Harrison’s specialty, cardiology, the next 20 years would see the introduction of game-changing medications such as beta blockers and statins, both of which would decrease mortality by about one-third in large subsets of patients. Cardiopulmonary resuscitation (CPR) and cardiac defibrillation (the use of electric shocks to attempt to revert a life-threatening heart rhythm to normal) were both described in 1960, setting the stage for the development of the coronary care unit a few years later. Once it was possible to keep some patients alive after a cardiac arrest, there was a good reason to concentrate those patients in a place with the technology to save them and personnel trained in that technology.
Before the development of CPR and the coronary care unit, beepers were essentially unknown in medicine. Since not much could be done to save a dying patient, there was no particular need to rush to the patient’s bedside. But in the new era, getting physicians to the bedside of patients within a few minutes might make the difference between life and death. The beeper quickly became a badge of honor for physicians. The little device conveyed the message “I might be needed somewhere at any moment.”
Another change was the development of team care. Even with beepers, physicians couldn’t be everywhere at once. Therefore, nurses in the coronary care units were trained and empowered to use defibrillators and start CPR without awaiting physicians’ orders. Until then, the doctor had always been in charge, and nothing happened until he (it was virtually always a man) issued an order. But now there was simply too much to do, and nurses were given the autonomy to decide whether to use certain lifesaving measures. The coronary care unit thus drove major advances for nurses and, by extension, for women, and it established the idea of teamwork in medicine.
More advances followed in rapid succession. The 1960s, 1970s, and 1980s saw the development of coronary angiography, coronary artery bypass graft surgery, coronary angioplasty, thrombolysis (“clot-busting” drugs that dissolve the blood clots that are the cause of most heart attacks and strokes), and a succession of other advances. These advances have extended the lives of and given hope to untold numbers of patients.
Cardiology wasn’t the only field to experience seismic changes in the final decades of the twentieth century. In oncology, the development of chemotherapy enabled the treatment of systemic disease. Two previously fatal diseases of children and young adults—acute lymphoblastic leukemia and Hodgkin’s lymphoma—became potentially curable, as did testicular cancer. The development of colonoscopy and mammography made it possible to screen for cancers of the colon and breast. Hospitals sprouted oncology wings and then cancer centers.
Technology was also transforming surgery. Miniaturized cameras and robotics made it possible to remove a gallbladder or cartilage fragments from a knee through a “Band-Aid” incision. Hip and knee replacements became common. Cataracts no longer meant impending blindness, only the need for a lens replacement.
The Downside of Progress
There are now a tremendous number of clinicians involved in delivering all this expertise, and their focus is often narrow. A study published in 2000 showed that the average Medicare patient saw seven different doctors per year and that those physicians were spread across four different medical practices, making it unlikely that they would interact directly with one another in the course of a day. Most of those physicians were specialists, and the question on their minds as they saw patients was often “Does this patient have something that will benefit from what I do?” as opposed to “What is wrong with this patient, and what does he or she need?”
The difference between these questions was brought home for me when I was struggling to take care of one of my patients, an aging MIT professor who had developed increasing difficulty walking and did not fit into any obvious disease category that I could identify. I sent him to a good neurologist, a good orthopedist, and a good rheumatologist who specialized in muscle diseases.
After the two months it took to get those consultations, the responses I received were “It is not his nerves,” “It is not his joints,” and “It is not his muscles.” None of them told me what they thought his problem actually was or what we should do. Over time, I encountered each of those physicians in the hallway and mentioned my patient. They all had good ideas and were happy to contribute them, but until those face-to-face meetings, they had considered only the question “Is this what I do?”
Even when physicians recognize patients’ real concerns, there is so much to do and so much to think about that their interactions feel hopelessly rushed. Studies show that the number of patient visits to physicians’ offices has been rising steadily, with physicians squeezing ever more visits into their day. Surprisingly, the average time doctors spend with each patient has actually gone up slightly, but there is so much more to discuss today than there was a generation ago that the time available for an office visit seems inadequate.