Diagnosing Doctors’ New Dilemmas: STRANGERS AT THE BEDSIDE: A History of How Law and Bioethics Transformed Medical Decision Making, By David J. Rothman (Basic Books: $24.95; 356 pp.)
In 1968, Sen. Walter Mondale introduced a resolution that would establish a Commission on Health Science and Society, a government body to assess the new ethical and legal issues being raised by advances in medicine.
Organ transplants, for example, were imposing medical decisions that Hippocrates never had faced, and Mondale suggested that it was time for doctors to discuss these questions openly. The commission also would involve lawyers and philosophers, lay people who could bring their different perspectives to bear on these complicated matters.
The idea, which now seems somewhat less than revolutionary, was viewed as unnecessary, and even threatening, by most of the physicians who testified. Dr. Christiaan Barnard, the South African heart-transplant pioneer, told the Senate subcommittee: “I feel that if you do this, it will be an insult to your doctors.” Dr. Owen Wangensteen, a prominent surgeon, cautioned, “The fellow who holds the apple can peel it best. . . . If you are thinking of theologians, lawyers, philosophers and others to give some direction here . . . I cannot see how they could help. I would leave these decisions to the responsible people doing the work.”
“Strangers at the Bedside” is the story of how the “responsible people doing the work” were joined, largely against their will, by the lawyers, the bioethicists, the theologians--and the government as well. David J. Rothman, a historian and a professor of social medicine at Columbia, argues that the entrance of all these non-physicians into medical decision-making actually began when the government, and the general public, began to worry about the ethics of medical research. The scrutinizing of human experiments, the discussion of ethical imperatives, the eventual regulation, Rothman claims, led to a similar discourse concerning the agonizing dilemmas of medical treatment--and non-treatment.
It is by now almost a commonplace to observe that medical technology has outstripped both our legal and our ethical frameworks. Regularly, cases come to national prominence--parents trying to discontinue medical support for a beloved but devastated child, patients who are stricken in body but apparently quite sound in mind and want assistance in ending their own lives--but by definition, these are the instances that become public because they are not handled quietly and privately.
Anyone who works in the medical field sees these life-and-death decisions made on a daily basis: to resuscitate or not to resuscitate; to continue aggressive medical care or to “pull back” and let nature take its course. But if these decisions often are made privately, they are no longer made by doctors acting in the security of unquestioned authority; hospitals now have DNR (Do Not Resuscitate) policies, ethics committees to review ambiguous decisions, institutional review boards to consider experimental treatments.
But this was not always true. In fact, it was not true as recently as the 1970s. “Strangers at the Bedside” is chiefly concerned with the decade of change from 1966 to 1976, beginning with a whistle-blowing exposé of research practices and ending with the Karen Ann Quinlan case.
Some of Rothman’s most fascinating material, however, concerns an earlier period: He traces medical research, the tradition of experimentation on human subjects, back through World War II, arguing that the exigencies of war transformed research in this country.
Overall, Rothman argues, wartime conditions engendered urgency, and a utilitarian ethic that demanded sacrifices from all citizens for the common good--whether or not those citizens were capable of giving what we would now call informed consent. “The lessons that the medical researchers learned in their first extensive use of human subjects was that ends certainly did justify means; that in wartime the effort to conquer disease entitled them to choose the martyrs to scientific progress.”
After the war came what Rothman calls, with some irony, the Gilded Age of Research, an era in which human experimentation took place on a larger and larger scale--but with almost no regulation. The National Institutes of Health conducted research on a grand scale, sponsored new and experimental treatments for a wide range of diseases, but left it to each individual scientist to decide what information would be divulged to the patient--or subject. Would the risks be explained, the possible side effects? Maybe, and maybe not.
It was a doctor named Henry Beecher who blew the whistle. In 1966, he published an article in the New England Journal of Medicine, listing 22 instances of published research built on dubious ethics: live cancer cells injected into patients who were told only that they would be receiving “some cells”; drugs withheld or new drugs tested. The subjects, Rothman points out, included soldiers, charity patients, mentally retarded people and other groups whose ability to consent freely was doubtful.
From this and other exposés came new rules, governmental supervision, institutional supervision, peer review--and also a growing sense that “a fundamental conflict of interest characterized the relationship between the researcher and the subject.” With this awareness, Rothman argues, non-doctors began to look more closely at medical decision-making, at the bedside as well as in the laboratory.
This transition, from the regulation of medical research to the agonizing daily decisions of the hospital, is at the heart of the book. To explain it, Rothman invokes the increasing alienation of doctor from patient, the loss of community hospitals and the growing sense that doctors are isolated, that hospitals are strange and alien environments.
Taken together, all these factors suggest a very complex evolution, a loss of faith in medicine and a loss of familiarity with its trappings, a public fascination with the miracles of modern therapeutics, and a sense that these miracles required new checks and balances.
In 1969, a baby with Down’s Syndrome was allowed to starve to death at Johns Hopkins, instead of having surgery to correct an intestinal blockage. This decision, made in accordance with the wishes of the parents, sparked tremendous controversy; it was one of the first to raise the overwhelming ethical dilemmas of the newborn nursery.
As doctors searched for ways to arbitrate these decisions, lawyers, ethicists and theologians took their places on hospital boards: “The physicians were ready to strike a bargain: In order to be able to terminate treatment, they were ready to give nonphysicians a say in decision-making.”
Rothman discusses the ramifications of the Karen Ann Quinlan case, another of the agonizing moments that happened on a public stage. After this case, the lawyers and the judges were more involved than ever in bedside decisions; doctors and hospitals wanted to be sure they were covered legally in whatever decisions they made.
And so, by the mid-1970s, doctors were no longer alone at the bedside, resplendent in white coats and absolute authority. Medicine had leaped ahead into areas where no decisions were easy, and doctors had been joined by those whom Rothman refers to as “strangers,” helping to shape those decisions.
This book offers a well-written and thoughtfully considered analysis of how these changes came about, rooted in fascinating detail about the history of medical research and medical practice in this country.