Talking Points

When a Nurse Wants to Be Called a Doctor
With pain in her right ear, Sue Cassidy went to a clinic. The doctor, wearing a white lab coat with a stethoscope in one pocket, introduced herself. “Hi. I’m Dr. Patti McCarver, and I’m your nurse,” she said. And with that, Dr. McCarver stuck a scope in Ms. Cassidy’s ear, noticed a buildup of fluid and prescribed an allergy medicine. 
It was something that will become increasingly routine for patients: someone who is not a physician using the title of doctor. Dr. McCarver calls herself a doctor because she returned to school to earn a doctorate last year, one of thousands of nurses who have done the same in recent years. Doctorates are popping up all over the health professions, and the result is a quiet battle over not only the title “doctor,” but also the money, power and prestige that often come with it. 
As more nurses, pharmacists and physical therapists claim this honorific, physicians are fighting back. For nurses, getting doctorates can help them land a top administrative job at a hospital, improve their standing at a university and win them more respect from colleagues and patients. But so far, the new degrees have not brought higher fees from insurers for seeing patients or greater authority from states to prescribe medicines. 
Nursing leaders say that their push to have more nurses earn doctorates has nothing to do with their fight of several decades in state legislatures to give nurses more autonomy, money and prescriptive power. But many physicians are suspicious and say that once tens of thousands of nurses have doctorates, they will invariably seek more prescribing authority and more money. Otherwise, they ask, what is the point? 
Dr. Roland Goertz, the board chairman of the American Academy of Family Physicians, says that physicians are worried that losing control over “doctor,” a word that has defined their profession for centuries, will be followed by the loss of control over the profession itself. He said that patients could be confused about the roles of various health professionals who all call themselves doctors. “There is real concern that the use of the word ‘doctor’ will not be clear to patients,” he said. 
So physicians and their allies are pushing legislative efforts to restrict who gets to use the title of doctor. A bill proposed in the New York State Senate would bar nurses from advertising themselves as doctors, no matter their degree. A law proposed in Congress would bar people from misrepresenting their education or license to practice. And laws already in effect in Arizona, Delaware and other states forbid nurses, pharmacists and others to use the title “doctor” unless they immediately identify their profession. 
The deeper battle is over who gets to treat patients first. Pharmacists, physical therapists and nurses largely play secondary roles to physicians, since patients tend to go to them only after a prescription, a referral or instructions from a physician. By requiring doctorates of new entrants, leaders of the pharmacy and physical therapy professions hope their members will be able to treat patients directly and thereby get a larger share of money spent on patient care. 
As demand for health care services has grown, physicians have stopped serving as the sole gatekeepers for their patients’ entry into the system. So physicians must increasingly share their patients — not only with one another but also with other professions. Teamwork is the new mantra of medicine, and nurse practitioners and physician assistants (sometimes known as midlevels or physician extenders) have become increasingly important care providers, particularly in rural areas. But while all physician organizations support the idea of teamwork, not all physicians are willing to surrender the traditional understanding that they should be the ones to lead the team. Their training is so extensive, physicians argue, that they alone should diagnose illnesses. Nurses respond that they are perfectly capable of recognizing a vast majority of patient problems, and they have the studies to prove it. The battle over the title “doctor” is in many ways a proxy for this larger struggle. 
For patients, the struggle has brought an increasing array of professionals trained to deal with their day-to-day health woes, but also at times confusion over who is responsible for their care and what sort of training they have. 
Six to eight years of collegiate and graduate education generally earn pharmacists, physical therapists and nurses the right to call themselves “doctors,” compared with nearly twice that many years of training for most physicians. For decades, a bachelor’s degree was all that was required to become a pharmacist. That changed in 2004 when a doctorate replaced the bachelor’s degree as the minimum needed to practice. Physical therapists once needed only bachelor’s degrees, too, but the profession will require doctorates of all students by 2015 — the same year that nursing leaders intend to require doctorates of all those becoming nurse practitioners. 
Dr. Kathleen Potempa, dean of the University of Michigan School of Nursing and the president of the American Association of Colleges of Nursing, said that the profession’s new doctoral degree, called the doctor of nursing practice, was simply about remaining current. “Knowledge is exploding, and the doctor of nursing practice degree evolved out of a grass-roots recognition that we need to continuously improve our curriculum,” she said. Last year, 153 nursing schools gave doctor of nursing practice degrees to 7,037 nurses, compared with four schools that gave the degrees to 170 nurses in 2004, when the association of nursing schools voted to embrace the new degree. In 2008, there were 375,794 nurses with master’s degrees and 28,369 with doctorates, according to a recent government survey. Dr. Potempa said that nurses with master’s degrees were every bit as capable of treating patients as those with doctorates. 
Nursing is filled with multiple specialties requiring varying levels of education, from a high school equivalency degree for nursing assistants to a master’s degree for nurse practitioners. Those wishing to become nurse anesthetists will soon be required to earn doctorates, but otherwise there are presently no practical or clinical differences between nurses who earn master’s degrees and those who get doctorates. 
Nurse practitioners must generally graduate from college and take an additional 12 to 16 months of classes, which include months of treating patients for both mild and serious illnesses in clinics and hospitals under the watchful eyes of instructors. Those earning doctorates must generally take a further four semesters, or 12 to 16 months, of classes. While instruction at each school varies, Dr. McCarver took classes in statistics, epidemiology and health care economics to earn her doctor of nursing practice degree. These additional classes, at Vanderbilt University, did not delve into how to treat specific illnesses, but taught Dr. McCarver the scientific and economic underpinnings of the care she was already providing and how they fit into the nation’s health care system. Studies have shown that nurses with master’s-level training offer care in many primary care settings that is as good as and sometimes better than care given by physicians, who generally have far more extensive training. And patients often express higher satisfaction with care delivered by nurses, studies show. Physicians say they are better at recognizing rare problems, something studies have trouble measuring. 
The benefits to patients of nurses receiving doctorates are unclear, since there is no evidence that nurses with doctoral degrees provide better care than those with master’s degrees do. Given the proven effectiveness of nurses with master’s degrees, even some nursing leaders have asked why nurses should be required to get doctorates. “If it ain’t broke, why fix it?” asked Dr. Afaf I. Meleis, dean of the University of Pennsylvania School of Nursing. 
Some health care economists say the push for clinical doctorates across health professions could be misguided. They argue that anything requiring students to spend more time and money getting trained will invariably result in longer waits and increased costs for patients, because fewer students will meet the increased requirements and those who do will eventually demand higher compensation. “Everyone’s talking about improving patients’ access to care, bending the cost curve and creating team-based care,” said Erin Fraher, an assistant professor of surgery and family medicine at the University of North Carolina School of Medicine. “Where’s the evidence that moving to doctorates in pharmacy, physical therapy and nursing achieves any of these?” 
Depending on their area of specialty, nurse practitioners earn a median salary of $86,000 to $90,000 annually, according to the Medical Group Management Association — a bit less than half of what primary care physicians earn. Nurses with doctorates generally earn the same salaries as those with master’s degrees since insurers pay the same rates to both. Physician groups fear that the real reason behind the creation of the doctor of nursing practice degree is to persuade more state legislatures to grant nurses the right to treat patients without supervision from doctors. Twenty-three states allow nurses to practice without a physician’s supervision or collaboration, and most are in the mountain West and northern New England, areas that have trouble attracting enough physicians. Nursing groups have lobbied for years to increase that number. “This degree is just another step toward independent practice,” said Louis J. Goodman, chief executive of the Texas Medical Association. Not true, Dr. Potempa said — the new degree simply ensures that nurses stay competent. “It’s not like a group of us woke up one day to create a degree as a way to compete with another profession,” she said. “Nurses are very proud of the fact that they’re nurses, and if nurses had wanted to be doctors, they would have gone to medical school.”
From an article by Gardiner Harris in The New York Times, 1 October 2011



February 26, 2011
Treat the Patient, Not the CT Scan
By ABRAHAM VERGHESE
Palo Alto, Calif.



THE other day as I walked through a wing of my hospital, it occurred to me that Watson, I.B.M.’s supercomputer, would be more at home here than he was on “Jeopardy!” Perhaps it’s good, I thought, that his next challenge, with the aid of the Columbia University Medical Center and the University of Maryland School of Medicine, will be to learn to diagnose illnesses and treat patients.
On our rounds of the wards, Watson would see lots of other computers with humans glued to them like piglets at a sow’s teats. We might visit a patient with a complex illness — one whose second liver transplant has failed, who has a fungal meningitis and now also has kidney failure and bleeding and is on a score of medications. 
Watson might help me digest the sheer volume of data that is in the electronic medical record and might see trends in the data that speak of an impending disaster. And since Watson is constantly trolling the Web, he would perhaps bring to my attention a case report published the previous night in a Swedish journal describing a new interaction between two of the drugs my patient is taking. 
Better still, if Watson could harness data from all the patients in our hospital and in every other hospital in America, we might be alerted to mini-epidemics taking shape. For example, Watson might recognize that the kidney failure in our patient is linked to kidney failure in a patient in Buffalo and another in San Antonio; all three patients, he might inform me, were taking a “natural” weight loss supplement that contained a Chinese herb, aristolochia, that has been associated with more than 100 cases of kidney failure.
In short, Watson would be a potent and clever companion as we made our rounds. 
But the complaints I hear from patients, family and friends are never about the dearth of technology but about its excesses. My own experience as a patient in an emergency room in another city helped me see this. My nurse would come in periodically to visit the computer work station in my cubicle, her back to me while she clicked and scrolled away. Over her shoulder she said, “On a scale of one to five how is your ...?” 
The electronic record of my three-hour stay would have looked perfect, showing close monitoring, even though to me as a patient it lacked a human dimension. I don’t fault the nurse, because in my hospital, despite my best intentions, I too am spending too much time in front of the computer: the story of my patient’s many past admissions, the details of surgeries undergone, every consultant’s opinion, every drug given over every encounter, thousands of blood tests and so many CT scans, M.R.I.’s and ultrasound images reside in there. 
This computer record creates what I call an “iPatient” — and this iPatient threatens to become the real focus of our attention, while the real patient in the bed often feels neglected, a mere placeholder for the virtual record. 
Imaging the body has become so easy (and profitable, too, if you own the machine). When I was an intern some 30 years ago, about three million CT scans were performed annually in the United States; now the number is more like 80 million. Imaging tests are now responsible for half of the overall radiation Americans are exposed to, compared with about 15 percent in 1980. 
With that radiation exposure comes increasing risk for cancer, but what worries me even more is that this ease of ordering a scan has caused doctors’ most basic skills in examining the body to atrophy. This loss is palpable when American medical trainees go to hospitals and clinics abroad with few resources: it can be quite humbling to see doctors in Africa and South America detect fluid around patients’ lungs not with X-rays but by percussing the chest with their fingers and listening with their stethoscopes. 
Of course, we still teach medical students how to properly examine the body. In dedicated physical diagnosis courses in their first and second years, students learn on trained actors, who give them appropriate stories and responses, how to do a complete exam of the body’s systems (circulatory, respiratory, musculoskeletal and the rest). Faculty members stand by to assess that the required maneuvers are performed correctly. 
But all that training can be undone the moment the students hit their clinical years. Then, they discover that the currency on the ward seems to be “throughput” — getting tests ordered and getting results, having procedures like colonoscopies done expeditiously, calling in specialists, arranging discharge. And the engine for all of that, indeed the place where the dialogue between doctors and nurses takes place, is the computer. 
The consequence of losing both faith and skill in examining the body is that we miss simple things, and we order more tests and subject people to the dangers of radiation unnecessarily. Just a few weeks ago, I heard of a patient who arrived in an E.R. in extremis with seizures and breathing difficulties. After being stabilized and put on a breathing machine, she was taken for a CT scan of the chest, to rule out blood clots to the lung; but when the radiologist looked at the results, the patient turned out to have tumors in both breasts, along with the secondary spread of cancer all over the body. 
In retrospect, though, her cancer should have been discovered long before the radiologist found it; before the emergency, the patient had been seen several times and at different places, for symptoms that were probably related to the cancer. I got to see the CT scan: the tumor masses in each breast were likely visible to the naked eye — and certainly to the hand. Yet they had never been noted. 
Too frequently, I hear of (and in a study we are conducting, I am collecting) stories like that from all across the country. They represent a type of error that stems from not making use of basic bedside skills, not asking the patient to fully disrobe. It is a more subtle kind of error than operating on the wrong limb; indeed, this sort of mistake is not always recognized, and yet the consequences can be grave. 
IN my experience, being skilled at examining the body has a salutary effect beyond finding important clues that lead to an early diagnosis. It is a ritual that remains important to the patient. Recently my ward team admitted an elderly woman who had been transferred from her nursing home in the night because of a change in her mental status. A CT of the head and all other tests were determined to be normal; the problem had been dehydration, and she was better, ready to go back. But as our team was about to enter the room, my intern warned me that the patient’s lawyer daughter was unhappy with the plan to return her mother to the nursing home, and was waiting impatiently to see me and contest the transfer. 
After introducing myself to the patient and to her daughter, I did a thorough but quick neurologic exam. I put the patient through her paces: mental status, cranial nerves, motor and sensory function, used my reflex hammer and pointed out interesting things along the way to my interns and students. I then said to the daughter that her mother seemed back to normal. To our surprise, the daughter seemed comforted, and now had no objection to her mother’s return to the nursing home. 
Later, our team discussed what had just happened. We all felt that the daughter’s witnessing the examination of her mother, that ritual, was the key to earning the trust of both.
I find that patients from almost any culture have deep expectations of a ritual when a doctor sees them, and they are quick to perceive when he or she gives those procedures short shrift by, say, placing the stethoscope on top of the gown instead of the skin, doing a cursory prod of the belly and wrapping up in 30 seconds. Rituals are about transformation, the crossing of a threshold, and in the case of the bedside exam, the transformation is the cementing of the doctor-patient relationship, a way of saying: “I will see you through this illness. I will be with you through thick and thin.” It is paramount that doctors not forget the importance of this ritual. 
Abraham Verghese, a professor at the Stanford University School of Medicine, is the author of the novel “Cutting for Stone.”

Rediscovering the First Miracle Drug

Every few months some miracle drug or other is rolled out with bells and confetti, but only once or twice in a generation does the real thing come along. These are the blockbuster medications that can virtually raise the dead, and while the debuts of some, like the AIDS drugs, are still fresh in memory, the birth of the first one is almost forgotten. It was injectable insulin, long sought by researchers all over the world and finally isolated in 1921 by a team of squabbling Canadians. With insulin, dying children laughed and played again, as parents wept and doctors spoke of biblical resurrections. As in Ezekiel’s vision of the dry bones, it actually put flesh on living skeletons.
But the miracle went only so far: insulin was not a cure. In 1921, New York City’s death rate from diabetes was estimated to be the highest in the country, and today the health department lists diabetes among the city’s top five killers. Now though, it is adults who die, not children. What insulin did was turn a brief, deadly illness into a long, chronic struggle.
In the first decades of the 20th century, half a dozen different research groups were hot on the trail of insulin, a hormone manufactured in the pancreas but difficult to separate out from the digestive enzymes also made there. Before insulin was available, doctors understood enough of the disease to cobble together a stopgap treatment: diabetics were put on salad- and egg-based diets devoid of sugar and starch, with only the minimum number of calories needed to survive. Already thin, these patients became skeletal, but the excess glucose disappeared from their blood and urine, and they survived far longer than untreated contemporaries.
Dr. Elliott Joslin, whose Boston clinic was and remains a renowned diabetes center, recalled that before insulin one of his dieting patients was “just about the weight of her bones and a human soul.”
The other great authority on diet therapy was New York’s Dr. Frederick Allen, now long forgotten, who founded a residential hospital for diabetics, first on East 51st Street in Manhattan, and then in rural New Jersey.
It was to Dr. Allen that the eminent American jurist and Supreme Court justice Charles Evans Hughes turned when his daughter Elizabeth was diagnosed with diabetes in 1919, at age 11. Elizabeth Hughes was a cheerful, pretty little girl, five feet tall, with straight brown hair and a consuming interest in birds. On Dr. Allen’s diet her weight fell to 65 pounds, then 52 pounds, and then, after an episode of diarrhea that almost killed her in the spring of 1922, 45 pounds. By then she had survived three years, far longer than expected. And then her mother heard the news: insulin had finally been isolated in Canada.
The unlikely hero was Frederick Banting, an awkward Ontario farmboy who graduated from medical school without distinction, was wounded in World War I, then more or less forced himself into a laboratory at the University of Toronto with an idea of how to get at the elusive substance. Over the miserably hot summer of 1921 Dr. Banting and his assistant Charles Best experimented on diabetic dogs, with only limited success until finally dog No. 92, a yellow collie, jumped off the table after an injection and began to wag her tail. Meanwhile, Dr. Banting’s mentor and lab director, Dr. John J. R. Macleod, was summering in Scotland.
Dr. Banting never forgave Dr. Macleod for arriving back in the autumn, rested and refreshed, and taking over. His bitter hostility lasted for years, long after the Nobel Prize ceremony in 1923, which Dr. Banting refused to attend: although he shared the physiology prize with Dr. Macleod, he would not share a podium.
Meanwhile, mothers all over the globe were writing him heart-wrenching letters: “My dear Dr. Banting: I am very anxious to know more of your discovery,” wrote one, going on to describe her daughter’s case: “She is pitifully depleted and reduced.”
That was from Elizabeth Hughes’s mother, Antoinette. Charles Evans Hughes had by that time temporarily left the Supreme Court, and was serving as secretary of state in President Warren G. Harding’s administration. Dr. Banting, unimpressed, replied no, sorry, no insulin available — for, in fact, the team was having difficulty making enough for more than a handful of patients.
And then a few weeks later, Dr. Banting changed his mind.
Presumably higher powers had intervened, or perhaps Justice Hughes himself — a rigid, unsmiling man whom Theodore Roosevelt had nicknamed “the bearded iceberg” — had pulled strings. Either way, Elizabeth traveled posthaste to Toronto and the lifesaving injections.
It was the end of her journey, but only the beginning for many children without her connections, who had to wait while the Canadians fought bitterly with each other over how to fairly distribute their tiny amounts of the lifesaving substance.
Dr. Banting wound up giving one of his colleagues a black eye before it was all over, and Eli J. Lilly and Company, the Indianapolis pharmaceutical firm, won the right to mass-produce insulin. It was the first partnership negotiated among academia, individual physicians and the pharmaceutical industry. The expense and logistics of large-scale insulin manufacture were initially daunting. But soon trainloads of frozen cattle and pig pancreas from the giant Chicago slaughterhouses began to arrive at Lilly’s plant. By 1932 the drug’s price had fallen by 90 percent.
Meanwhile, the notion of allowing patients to test their own urine for glucose and calculate their own insulin doses was outlandish to most doctors. Diabetes was the first illness which forced them to cede some medical authority to the patient, said Jean Ashton, one of the exhibit’s curators. With insulin, diabetics suddenly acquired both the right and the responsibility to maintain their own health. Some of the children who were early recipients of insulin became diabetes advocates, speaking out for patients’ rights well into their old age.
But not Elizabeth Hughes: she ran in the other direction, far from the headlines that briefly made her the most famous diabetic child in the United States. Although she received an estimated 42,000 insulin shots before she died in 1981 at the age of 74, she systematically destroyed most of the material documenting her illness, expunged all references to diabetes from her father’s papers, and occasionally even denied she had been ill as a child. The few dozen of her letters that survive from her six-month stay in Toronto, as she exuberantly regained health and strength, emphasize how desperately she wanted to stop being a patient forever.
It was a great day when she injected herself with insulin for the first time: “I can do it perfectly beautifully,” she wrote to her mother. “Now I feel so absolutely independent.”
Edited by Hanifullah Khan from an article by Abigail Zuger in The New York Times published 4 October 2010.


Ranking Medical Evidence

Good clinical evidence serves as the basis for the practice of safe and appropriate medical care. Traditionally, this evidence is obtained from clinical trials, meta-analyses and epidemiological studies, to name a few sources. What constitutes the best evidence is still a point of conjecture. The following is a critique of the ranking of medical evidence.
The randomized clinical trial is still considered the gold standard for answering clinical questions about therapeutic interventions. This is because such a trial will eventually provide an answer to the hypothesis being tested, irrespective of whether that answer concurs with the stated intent. Of course, this holds only if the trial is adequately powered, designed and randomized, and therein lie the problems with such trials. First, the majority of them are not large enough to provide adequate statistical power. Second, there is often a delay before any harm becomes obvious. And third, the results of the trial may be something totally new and unexpected.
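To make the point about statistical power concrete, the following is a minimal sketch using the standard normal approximation for comparing two proportions; the 4 percent and 3 percent event rates and the sample sizes are invented for illustration and do not come from any trial discussed here.

```python
# Rough illustration (hypothetical numbers): why modest-sized trials often lack
# the statistical power to detect small differences in event rates. Uses the
# standard normal approximation for comparing two proportions, with a fixed
# two-sided significance level of 0.05 (z = 1.96).
from math import sqrt, erf

def normal_cdf(x: float) -> float:
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def two_arm_power(p_control: float, p_treatment: float, n_per_arm: int) -> float:
    """Approximate power to detect p_treatment vs. p_control with n_per_arm patients per arm."""
    z_alpha = 1.96  # two-sided alpha = 0.05
    p_bar = (p_control + p_treatment) / 2.0
    sd_null = sqrt(2.0 * p_bar * (1.0 - p_bar))
    sd_alt = sqrt(p_control * (1.0 - p_control) + p_treatment * (1.0 - p_treatment))
    z_beta = (abs(p_control - p_treatment) * sqrt(n_per_arm) - z_alpha * sd_null) / sd_alt
    return normal_cdf(z_beta)

# Hypothetical example: an adverse-event rate of 4% on one therapy versus 3% on another.
for n in (500, 2000, 8000):
    print(f"{n:>5} patients per arm -> power ~ {two_arm_power(0.04, 0.03, n):.2f}")
```

Under these assumed rates, even 2,000 patients per arm would detect the difference less than half the time; only a far larger trial would be adequately powered.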
Despite this, a well-designed randomized clinical trial more frequently provides a true answer than other approaches such as meta-analyses and epidemiological studies. Although these might provide very important and compelling information, they more frequently suggest hypotheses to be tested than supply the true answer. Depending on the analytical approach used, it may be possible to get very different results. This is very much the case with meta-analyses, where the analytical approach being applied can result in the exclusion of important studies, some of which may be quite compelling.
With retrospective observational studies, people can be studied for extended periods of time; a large number of patients are usually included; and researchers know what they are looking for. Traditionally, the FDA has not relied much on retrospective observational studies and meta-analyses; it prefers individual trials. Not all clinical trials are the same; there are good trials and there are bad trials. Just because a trial is large does not mean it is a very useful source of information.
The scientific community is in agreement that good clinical trials give good answers. To determine efficacy, large, well-done clinical trials and meta-analyses usually provide adequate evidence. To determine safety, all available information should be utilized, not just single randomized trials, which may be helpful but do not provide all the answers.
Ultimately, for providing clinical information about the efficacy, or even the safety, of one therapy versus another, the results of randomized clinical trials must still be considered the best available evidence.

Steps Forward, and Backward, in Treating Diabetes
By Dan Hurley
New York Times, 19 July 2010

Catch the headline in The Times? “Warning Urged on Diabetes Pill,” it stated. “F.D.A. Proposes a Strongly Worded Label on Hazards of Heart Disease to User.” But it didn’t run this week, regarding the drug Avandia. The headline appeared almost exactly 35 years ago, on July 4, 1975, about a different drug for Type 2 diabetes that went through a strikingly similar controversy: tolbutamide. To this day, it and similar drugs for diabetes, the sulfonylureas, are still sold with a warning about an “increased risk of cardiovascular mortality.”
The more things change in diabetes treatments, it seems, the more they stay the same. About four months after that old headline ran, during my freshman semester at college, I went to the hospital one afternoon for nausea, figuring I had a bad case of flu, and learned I had Type 1 (juvenile) diabetes. Not to worry, the doctor told me. In fact, he said, I was lucky. The old glass syringes that diabetics used to need were a thing of the past. “Now we have disposable plastic syringes,” he said. (Oh, joy.) Better yet, he said, a cure was coming any day. Pancreas transplants had been done in mice!
I was still waiting for that cure in 1983, when another Times article began by quoting a physician speaking to a group of Type 1 diabetics: “In your lifetime, you’re going to be cured.” These promised cures and assorted breakthroughs turn out to have a long history. On May 6, 1923, The Times published an article by Dr. Joseph Collins under the headline: “Diabetes, Dreaded Disease, Yields to New Gland Cure; Previous Claims for Insulin Confirmed at Meeting of American Physicians.” That was actually the third time the newspaper used the word “cure” in the headline of an article about insulin’s discovery. And it was easy to understand the excitement. Until the 20th century, diabetes was considered a rare disease: in 1866, for example, the reported death rate in New York City was 1.4 per 100,000 residents. By 1923, the rate had jumped to 22.9 per 100,000, and the idea of a cure was welcome indeed.
But as many knew even then, insulin wasn’t a cure. Sure, it instantly saved the lives of people like me, with Type 1 diabetes; but it was a lifelong treatment, carrying the ever-present risk of causing blood-sugar levels to fall dangerously low. Moreover, it soon became apparent that insulin didn’t prevent long-term complications, and it didn’t work nearly so well in older, heavier people — those with the far more common version of the disease, Type 2.
So imperfect was this so-called cure that the death rate attributed to diabetes actually went up. From 22.9 deaths per 100,000 New York City residents in 1923, the rate reached 29 per 100,000 in 1932 and soared to 44.4 in 1947 — nearly double the rate before insulin’s discovery. (The nationwide death rate is now 24.2 for Types 1 and 2 combined, and diabetes is the sixth leading cause of death; in New York City, the rate is 18 per 100,000, and it is the fifth leading cause of death.) By the 21st century, the promise of a cure for Type 1 seemed to have finally been fulfilled with the development of the Edmonton protocol, a method for transplanting insulin-producing beta cells into the pancreas. It looked for a few years like the real deal — until most of the transplanted cells stopped producing insulin in most recipients, and the patients had to resume taking injections.
For Type 2 diabetes, the drug industry has now produced some two dozen types of medications, even as the disease has become about 50 percent more widespread in the United States than it was in 2001, with some 23.6 million diabetics, or nearly 8 percent of the population, according to the Centers for Disease Control and Prevention. Type 1 diabetes is rising sharply, too. A large and growing body of scientific literature suggests that it is now being diagnosed at about double the rate of the 1980s, about five times the rate of the 1950s, and perhaps 10 times the rate of a century ago. Researchers at the C.D.C. tell me that it continues to increase by about 3 percent a year.
It hasn’t all been bad news for diabetes treatments, of course. With so many more people affected by the disease, the decline in death rates since the 1940s is reassuring. Home blood-sugar tests weren’t even available when I learned I was diabetic, and they’ve since helped millions manage their disease better. Insulin pumps and continuous glucose monitors for Type 1 have also greatly improved control of blood-sugar levels. And an old standby for Type 2, metformin, appears to be one of the few drugs for the disease that actually prevents the loss of insulin-producing beta cells in the pancreas.
But given the disappointing history of many other treatments, some researchers have set out to find ways to prevent both forms of diabetes in the first place. “Whatever the trigger is, we want to find it,” Dr. Judith Fradkin, director of the diabetes division at the National Institute of Diabetes and Digestive and Kidney Diseases, told me. Regarding Type 1, she said: “The rates are rising. Something has to be behind it. We need to find it. If we find it, that has tremendous implications for prevention.” Studies are now under way to test promising strategies, whether by removing cow’s-milk formula from infants’ diet or by giving a vaccine that calms the immune system’s attack on the pancreas. For Type 2, ambitious public-health campaigns are likewise seeking to prevent the disease’s spread, by lifting techniques from the anti-cigarette playbook: taxing unhealthy foods and drinks, limiting their availability and alerting consumers to their risks with calorie counts on chain restaurants’ menus (a strategy pioneered in New York City and recently passed as part of the national health-care legislation).
Nearly 35 years after my diagnosis, I’m doing fine, without any complications — and still without any cure. But these days, my hopes have shifted from cure to prevention, so that my 14-year-old daughter and the millions of others at risk never get diabetes in the first place.

Dan Hurley is the author of the new book “Diabetes Rising: How a Rare Disease Became a Pandemic, and What to Do About It.”


Heart Risks Increased with Avandia

Two studies published in influential medical journals and using very different methods found that Avandia, a controversial diabetes medicine made by GlaxoSmithKline, substantially increased patients’ heart risks. The studies were made public Monday in hopes of influencing an expert panel that will convene on July 13 and 14 to offer advice to the Food and Drug Administration about whether Avandia should be removed from the market.
An editorial in the Journal of the American Medical Association, accompanying one of the studies, concluded that there was little reason that patients should ever be given Avandia, since a similar medicine, Actos, works just as well but appears to involve fewer risks. In response to the new studies, GlaxoSmithKline released a statement saying that other, better studies published in recent years had shown that Avandia is safe. “Taken together, these trials show that Avandia does not increase the overall risk of heart attack, stroke or death,” the company said.
Dr. Joshua M. Sharfstein, F.D.A.’s principal deputy commissioner, said in an interview that “these are two important papers that will be part of the discussion that F.D.A. has as we consider the important question of Avandia’s safety.”
Doubts about Avandia’s safety have been growing since May 2007 when a study co-authored by Dr. Steven E. Nissen, chairman of cardiology at the Cleveland Clinic, found that it increased the risks of heart attacks by 43 percent. An investigation revealed that the company had known about the possible increased risks for nearly two years, and the F.D.A. for at least a year, but neither had informed the public. Since then, a fierce debate has raged inside the agency about what to do, with some officials arguing that the drug should be withdrawn and others saying that it remains an appropriate option for doctors and patients.
A committee of independent experts found in 2007 that Avandia might increase the risk of heart attack but recommended that it remain on the market, and an F.D.A. oversight board voted 8 to 7 to accept that advice. Since then, the F.D.A., under the Obama administration, has expressed an increased concern over medical risks. Avandia was once one of the biggest-selling drugs in the world. Driven in part by a multimillion-dollar advertising campaign, sales were $3.2 billion in 2006. Last year, sales were $1.19 billion, and more than 2 million prescriptions for the drug were filled.
The study in the Journal of the American Medical Association was co-written by Dr. David Graham, an F.D.A. drug safety expert who has advocated for Avandia’s withdrawal. Using records for 227,571 patients in the federal Medicare program who were given either Avandia or Actos, Dr. Graham found that patients given Avandia had higher risks of stroke, heart failure and death compared with those given Actos, made by Takeda. Dr. Graham’s study suggests that from 1999 to 2009, more than 47,000 people taking Avandia suffered a heart attack, stroke, heart failure or death that they would have been spared had they been taking Actos instead.
In an interview, Dr. Graham said that the only reason Avandia remained on the market was that those at the F.D.A. who had approved the drug in the first place “are going to defend their original decision. We need to split up the people who approve a drug from those who oversee its safety” once it is on the market, he said. The agency is in the midst of an internal study of its safety decision-making process, Dr. Sharfstein said in response.
The second study, published in the Archives of Internal Medicine and co-written by Dr. Nissen, is an updated version of his 2007 study. Both studies were meta-analyses, in which information from multiple trials is combined into a single data set. The study published Monday used information from more trials and more patients but came to roughly the same conclusion as the 2007 study — that Avandia increased the risks of heart attack by 39 percent and the risks of heart-related death by 46 percent.
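For readers unfamiliar with the method, here is a minimal sketch of the pooling step a fixed-effect meta-analysis performs, using inverse-variance weighting of log odds ratios; the three 2x2 trial tables below are invented for illustration and are not the Avandia data.

```python
# Minimal sketch of a fixed-effect, inverse-variance meta-analysis of odds ratios.
# The three trials below are invented for illustration; they are NOT the Avandia
# trial data discussed in the article.
from math import log, exp, sqrt

# Each trial: (events_drug, total_drug, events_control, total_control)
trials = [
    (12, 800, 7, 790),
    (30, 2500, 22, 2480),
    (9, 600, 5, 610),
]

weights_sum = 0.0
weighted_log_or_sum = 0.0
for events_d, n_d, events_c, n_c in trials:
    a, b = events_d, n_d - events_d          # drug arm: events / non-events
    c, d = events_c, n_c - events_c          # control arm: events / non-events
    log_or = log((a * d) / (b * c))          # log odds ratio for this trial
    variance = 1.0 / a + 1.0 / b + 1.0 / c + 1.0 / d
    weight = 1.0 / variance                  # inverse-variance weight
    weights_sum += weight
    weighted_log_or_sum += weight * log_or

pooled_log_or = weighted_log_or_sum / weights_sum
se = sqrt(1.0 / weights_sum)
low, high = exp(pooled_log_or - 1.96 * se), exp(pooled_log_or + 1.96 * se)
print(f"pooled OR = {exp(pooled_log_or):.2f} (95% CI {low:.2f} to {high:.2f})")
```

The design choice worth noting is that each trial contributes in proportion to the precision of its own estimate, which is how combining many small trials can expose a risk that no single trial was large enough to show on its own.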

From an article by Gardiner Harris published in The New York Times on 28 June 2010

Growing Obesity Increases Perils of Childbearing

As Americans have grown fatter over the last generation, inviting more heart disease, diabetes and premature deaths, all that extra weight has also become a burden in the maternity ward, where babies take their first breath of life.
About one in five women are obese when they become pregnant, meaning they have a body mass index of at least 30, as would a 5-foot-5 woman weighing 180 pounds, according to researchers with the federal Centers for Disease Control and Prevention. And medical evidence suggests that obesity might be contributing to record-high rates of Caesarean sections and leading to more birth defects and deaths for mothers and babies.
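For readers who want to check the article’s example, the standard B.M.I. formula is weight in kilograms divided by height in meters squared, or equivalently 703 times weight in pounds divided by height in inches squared; the short sketch below simply restates the 5-foot-5, 180-pound case.

```python
# Standard body mass index formula, applied to the article's example.
# BMI = weight (kg) / height (m)^2, or equivalently 703 * pounds / inches^2.
def bmi(weight_lb: float, height_in: float) -> float:
    return 703.0 * weight_lb / (height_in ** 2)

# A 5-foot-5 woman (65 inches) weighing 180 pounds:
print(round(bmi(180, 65), 1))  # prints 30.0, the threshold for obesity
```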
Hospitals, especially in poor neighborhoods, have been forced to adjust. They are buying longer surgical instruments, more sophisticated fetal testing machines and bigger beds. They are holding sensitivity training for staff members and counseling women about losing weight, or even having bariatric surgery, before they become pregnant.
Studies have shown that babies born to obese women are nearly three times as likely to die within the first month of life as babies born to women of normal weight, and that obese women are almost twice as likely to have a stillbirth. 
About two out of three maternal deaths in New York State from 2003 to 2005 were associated with maternal obesity, according to the state-sponsored Safe Motherhood Initiative, which is analyzing more recent data. 
Obese women are also more likely to have high blood pressure, diabetes, anesthesia complications, hemorrhage, blood clots and strokes during pregnancy and childbirth, data shows.
The problem has become so acute that five New York City hospitals — Beth Israel Medical Center and Mount Sinai Medical Center in Manhattan, Maimonides in Brooklyn and Montefiore Medical Center and Bronx-Lebanon Hospital Center in the Bronx — have formed a consortium to figure out how to handle it. They are supported by their malpractice insurer and the United Hospital Fund, a research group.
One possibility is to create specialized centers for obese women. The centers would counsel them on nutrition and weight loss, and would be staffed to provide emergency Caesarean sections and intensive care for newborns. 
Very obese women, or those with a B.M.I. of 35 or higher, are three to four times as likely to deliver their first baby by Caesarean section as first-time mothers of normal weight, according to a study by the Consortium on Safe Labor of the National Institutes of Health.
While doctors are often on the defensive about whether Caesarean sections, which carry all the risks of surgery, are justified, Dr. Howard L. Minkoff, the chairman of obstetrics at Maimonides, said doctors must weigh those concerns against the potential complications from vaginal delivery in obese women. Typically, these include failing to progress in labor; diabetes in the mother, which can lead to birth complications; and difficulty monitoring fetal distress. 
But even routine care, like finding a vein to take blood, can be harder through layers of fatty tissue.
And equipment can be a problem. Dr. Janice Henderson, an obstetrician for high-risk pregnancies at Johns Hopkins in Baltimore, described a recent meeting where doctors worried that the delivery room table might collapse under the weight of an obese patient. 
At Maimonides, the perinatal unit threw away its old examining tables and replaced them with wider, sturdier ones. It bought ultrasound machines that make lifelike three-dimensional images early in pregnancy, when the fetus is still low in the uterus and less obscured by fat, but also less developed and thus harder to diagnose clearly. 
Many experienced obstetricians complain that as Americans have grown larger, the perception of what constitutes obesity has shifted, leading to some complacency among doctors. At UMass Memorial Medical Center in Worcester, Mass., Dr. Tiffany A. Moore Simas, the associate director of the residency program in obstetrics, demands that residents calculate B.M.I. as a routine part of prenatal treatment. 

Edited from an article by Anemona Hartocollis published in The New York Times on 5 June 2010

The Pill Revolution

The birth control pill has been called the most important scientific advance of the 20th century, and no wonder. Fifty years after its approval by the Food and Drug Administration, it is still one of the leading methods of contraception, in the United States and around the world. Much has been written about how it revolutionized sexual and social relationships, allowing women to defer pregnancy, enter the work force and make life choices their mothers could not — or, if you prefer, spawning promiscuity and undermining the foundations of marriage. 
But the pill also led to profound changes in the F.D.A. itself. Many of the steps that underlie modern drug approvals — extensive clinical trials, routine referrals to panels of outside experts, continuing assessments of a medicine’s safety, and direct communications between the F.D.A. and patients — were pioneered to deal with evolving concerns about the pill’s safety. In regulatory terms, the pill brought about a kind of reformation: just as Martin Luther insisted that individual Christians could communicate directly with God without the mediation of priests, the pill eventually led the F.D.A. to communicate directly with patients without going through doctors.
That change, fiercely resisted by some physician groups, is now firmly entrenched; the F.D.A. now routinely requires that many medicines carry significant and sometimes complex warnings that patients are expected to read and understand.
But the pill was the first. The pill’s role in the maturing of the F.D.A. has often been overlooked because shortly after the agency’s approval of the contraceptive, news of the horrific effects of thalidomide swept the world. That drug had been introduced in Europe as a sedative but was withdrawn in 1961 after it was linked with profound birth defects. Although thalidomide was never approved in the United States, the horror surrounding its effects led Congress to toughen the drug approval process by requiring manufacturers to prove their medicines were both safe and effective. It was a standard the F.D.A. had already been putting into effect, quietly if fitfully, in part because of the growing view that the safety of a medicine was inextricably linked with its efficacy.
Enovid, a pill combining the hormones estrogen and progestin, was already being prescribed for menstrual problems. But in approving it as a contraceptive, the agency’s reviewers required its maker, Searle, to prove that it was effective in preventing pregnancy. (If it worked, the pill would spare women the risks of pregnancy and childbirth, which dwarfed any known risks from the drug.) So the company undertook one of the most extensive clinical trial programs to date, said Suzanne Junod, an F.D.A. historian. The pill was formally tested in 897 women, mostly in Puerto Rico and Haiti. The trials were relatively brief and did not answer fundamental questions about risks of cancer, heart disease and other chronic diseases. Uncertain about the long-term effects of hormonal contraceptives, the F.D.A. mandated that doctors limit prescriptions to two years.
The pill’s overwhelming popularity, however, soon rendered this limitation unenforceable. New versions were introduced, so women could simply switch brands — or find another doctor to prescribe the old one. And many doctors ignored the limit anyway. Then in November 1961, a British physician reported in The Lancet that a young woman had developed a blood clot and died while taking the pill. Within months, two similar fatalities were reported in the United States, and by August 1962, the F.D.A. had received 26 reports of users’ suffering blood clots. By the end of 1964, more than four million women had used Searle’s pill, and a blizzard of competitors had begun to blanket the market. With something so popular, the agency had no way of knowing if the problems experienced by users were related to the pill or would have happened anyway — the kind of mystery that has plagued drug regulators ever since.
So agency officials did two things for the first time that would eventually become routine. They asked a panel of outside experts to review the evidence on a continuing basis, and they and British regulators pressed for a large epidemiological investigation that would become a model for the future. Even before the pill, the federal government had a long history of using advisory committees to assess specific subjects and issue reports. But in 1965, the F.D.A. established its first permanent advisory panel, the Obstetrics and Gynecology Advisory Committee, largely to track the safety of the pill. The agency now has 32 permanent advisory committees, one of them with 18 different panels. These committees provide crucial advice not only about whether to approve certain medicines and devices but also how to address safety concerns that arise after approval.
The challenge of communicating these risks to patients while still supporting the product’s continued use bedeviled top agency officials. Protests by women’s groups and hearings on Capitol Hill made clear that despite the agency’s attempts, many women said they took the pill without being fully informed of its risks.
Frustrated that some doctors were not communicating adequately with their patients, the F.D.A. created a handout in 1975 that doctors could use in counseling patients. Many doctors, incensed at what they saw as the agency’s intrusion into the doctor-patient relationship, either ignored the material or refused to give it out.
In 1978, faced with mounting complaints that women did not have the information they needed, the F.D.A. mandated that patients be given the handouts when they picked up their prescriptions at the drugstore.
More recently, the Ortho Evra birth control patch has become a telling example of the continuing challenges that the F.D.A. faces in regulating a global, multibillion-dollar industry on which the agency depends for crucial information about drug safety. Johnson & Johnson developed the patch in hopes of exposing women to even lower doses of estrogen than they got with the pill. But the company’s own studies showed that the patch actually delivered far higher doses. The finding was buried in a mathematical formula in a 435-page report filed with the F.D.A. The company said it acted responsibly, but after four years, the F.D.A. issued a warning about high estrogen doses, and sales plunged.
One last bit of lore about the pill: no one is even sure when to celebrate its birthday. Ten years ago, the agency honored the occasion on June 23, the date that the F.D.A. gave formal approval for Searle to market the product. This year, the agency is celebrating on May 9, which coincides with the period 50 years ago when it announced its intention to approve the pill once a few technical details were ironed out. That this happens to be Mother’s Day this year may have played a role in the decision.
But whatever the date, it represents the F.D.A.’s first steps into adulthood. The pill was a landmark in the field of drug regulation. This is the drug that started it all.

Edited from an article by Gardiner Harris titled “It started more than one revolution” published on 3 May 2010 in The New York Times

Cleaning Up Medical Advice
New York Times,  
May 1, 2010

Professional medical societies play an enormously influential role in determining how medicine is practiced, but their activities and financing are a mystery. Outsiders can’t tell how independent the societies are from the companies that supply much of their financing.
So it was welcome news that the umbrella organization for specialty groups has adopted a new code of conduct that seeks to limit industry’s ability to influence professional judgments. But it was disappointing that it does not make a clean break from industry money.
The new code was adopted by the Council of Medical Specialty Societies, representing more than 30 specialty groups, on April 17. More than a dozen members, including the American College of Cardiology and the American Academy of Pediatrics, have adopted it, but a majority have yet to respond.
The code’s main weakness is the lack of any effort to wean the societies from their dependence on money from the makers of drugs, biological medicines and medical devices. There have been complaints in recent years that some societies conduct educational programs that feel more like marketing sessions for products or issue practice guidelines that push their members to use treatments favored by their industrial benefactors.
Last year, a group of experts proposed that such societies should quickly restrict industry support to no more than 25 percent of their operating budgets and work toward a virtually complete ban on industry money. The new code does not make even a nod to a ban.
Instead, it tries to prevent the industry support from biasing a society’s professional activities and judgments. The code seems strong in decreeing that the top leaders of medical societies and the top editors of their journals have no direct financial relationships with companies during their time in office.
But it goes only part way in protecting the integrity of medical practice guidelines that help doctors decide what treatments or tests to use. Industry could no longer help pay for developing the guidelines or their initial dissemination, but it could pay for further distribution. The chairpersons and most members of panels developing such guidelines would have to be free of conflicts of interest. Why not require that of all panel members?
The code also allows companies to help finance “continuing medical education” programs that most doctors must take to retain their licenses — provided the societies, not the companies, pick the topics, speakers and content. The code should have completely eliminated industry financing and found other resources or required doctors to pay the full cost of their continuing education.