Friday, December 31, 2010

Inflammatory Mitochondria

Happy New Year, everyone! Last post of the year before many of us scatter in the new one.


Motivation: Every lecture on sepsis starts with a Venn diagram showing an overlap between SIRS (systemic inflammatory response syndrome) and infection, with the intersection defined as sepsis.  I get the sepsis part, but I have always wondered about the non-infectious causes of SIRS like pancreatitis or trauma.  Since the injured tissue is our own and subject to constant immunologic surveillance, what exactly makes it so inflammatory when damaged?  Recently, a group in Boston examined this issue and proposed a remarkable hypothesis to explain the phenomenon.


Paper: Circulating Mitochondrial DAMPs Cause Inflammatory Responses to Injury. Zhang, Q. et al.  Nature (2010) 464: 104-107.  http://www.nature.com/nature/journal/v464/n7285/full/nature08780.html

Hypothesis: The authors observed that when innate immunity responds to infection, it recognizes specific signals expressed on invading micro-organisms called pathogen-associated molecular patterns (PAMPs).  Some of these signals, like N-formylated peptides, are also expressed by mitochondria, which are thought to have a bacterial origin.  The authors postulated that when human tissue is injured, the release of these mitochondrial products is itself inflammatory.

Method (clinical): Plasma was collected from 15 patients presenting with acute accidental trauma.  The age range was 17-71.  Patients had neither significant medical comorbidities nor major open or intestinal injuries.  For controls, plasma was collected from healthy volunteers aged 26-61.  Most did not have chronic illnesses, except for two controls with type II diabetes.

Results: I will just highlight some of the key results from the paper.

When trauma patients were compared to healthy controls, the concentration of mitochondrial DNA (mtDNA) in trauma plasma was 2.7 µg/mL (SEM 0.94), compared to barely detectable levels in control plasma.  Bacterial products (tested by measuring 16S ribosomal RNA) were absent from all samples.  To demonstrate the inflammatory and activating nature of mitochondrial products, human neutrophils exposed to them showed signs of activation by producing cytokines like IL-8 and expressing proteases such as matrix metalloproteinase 8 (MMP-8).  Other downstream activation cascade markers were also increased.  In culture, human neutrophils became activated and migrated towards regions with mitochondrial products.  The authors also identified specific neutrophil receptors that are likely involved in sensing mitochondrial products.

To test biological significance, rats were given intravenous mitochondrial degradation products equivalent to a 5% liver injury.  Remarkably, as in sepsis-associated ARDS, the rats showed marked oxidative lung injury, increased pulmonary permeability, accumulation of IL-6, and PMN infiltration into the airways.  Rat livers also demonstrated increased PMN accumulation.

Discussion: The paper has clinical relevance because it showed (1) that after trauma, the concentration of mitochondrial products in plasma is increased and (2) that mitochondrial products are inflammatory and can lead to injury, at least in animal models.  While tissue injury may release a number of inflammatory products, this paper pins down a novel pathway that at the very least contributes to the process and provides a concrete mechanistic basis for why tissue injury is inflammatory.  A direct consequence of describing these interactions could be the development of inhibitors that pharmacologically dampen PMN activation and reduce the severe damage that tissue injuries like pancreatitis produce.





Thursday, December 23, 2010

The Two Hour Rule

Motivation: On the residency interview trail, I was put on a sample rotating ward team at a hospital, and one of the patients examined by the team had a significant pressure ulcer.  After examining the patient, the team talked about the importance of frequent repositioning to prevent pressure ulcers.  The standard of care is repositioning every two hours.  After listening to this case, I wondered how the figure of two hours entered clinical practice.  If I sit in the same position for even an hour, I feel pretty uncomfortable.  Turns out that the two hour guideline is partly expert opinion.  There have been some trials with inconsistent results.  Are there studies to measure effectiveness?

Paper: Frequent manual repositioning and incidence of pressure ulcers among bed-bound elderly hip fracture patients. Rich, S. et al. Wound Repair and Regeneration (2010): 1-9.

Methods: The study examined bed-bound elderly patients 65 or older who underwent surgery for hip fracture.  During the first five days of hospitalization, study nurses assessed the frequency of repositioning through chart review.  Study nurses also determined the presence of stage 2+ pressure ulcers (at least partial thickness dermal loss or presence of blister) at baseline (within five days of hospitalization) with follow-up on alternating days for 21 days.  The study was conducted in seven hospitals in Maryland and two in Pennsylvania.

Results:
A total of 269 patients entered the study.  Overall, only 53% of patients were repositioned at least every two hours.  There was wide inter-hospital variability, with repositioning rates ranging from 23% to 77%.  Patients most likely to be repositioned were those with a pressure ulcer at the time of admission.

Effect of repositioning: Pressure ulcers developed in 12% of patients repositioned frequently compared to 10% of patients repositioned less frequently.  The difference in incidence rates was not statistically significant (incidence rate ratio (IRR): 1.22, 95% CI 0.65-2.30).  Even after adjusting for covariates such as BMI, different support surfaces, peripheral vascular disease, nutritional status, disease severity, and comorbidities, the IRR remained non-significant: 1.12 (0.52-2.42).
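
As an aside on the arithmetic: an incidence rate ratio is just the rate of new ulcers per unit of follow-up time in one group divided by the rate in the other, with the confidence interval conventionally computed on the log scale.  A minimal sketch in Python, using made-up counts rather than the study's data:

    import math

    def irr_with_ci(events_a, time_a, events_b, time_b, z=1.96):
        """Incidence rate ratio with a Wald 95% CI computed on the log scale."""
        irr = (events_a / time_a) / (events_b / time_b)
        se = math.sqrt(1 / events_a + 1 / events_b)  # approx. SE of log(IRR)
        lo = math.exp(math.log(irr) - z * se)
        hi = math.exp(math.log(irr) + z * se)
        return irr, lo, hi

    # Hypothetical inputs (events, person-days of follow-up), illustration only:
    print(irr_with_ci(16, 1400, 13, 1380))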

Conclusion:
In this study, frequent repositioning was not associated with a reduced incidence of pressure ulcers.  There are a couple of important caveats.  The first is that the repositioning data were collected cross-sectionally during the first five days, while pressure-ulcer incidence was followed for 21 days.  This design does not capture what repositioning the patients received in the subsequent 16 days (although presumably hospitals that reposition in the first few days continue the practice).  Also, the frequency of repositioning was determined from chart recordings by nurses.  The recordings may not accurately reflect, and may over-represent, the actual frequency of repositioning.  Another significant finding is that standard-of-care repositioning was followed for only 53% of patients despite nationwide emphasis on this issue.

Overall, the data suggest that repositioning every two hours may not be effective.  Adjusting for pressure support techniques did not make a difference.  Other factors, such as the exact position used and newer pressure support techniques, may turn out to be important.  While this study does not conclusively show that repositioning every two hours is ineffective, it certainly highlights the need for a large-scale randomized trial to examine the issue robustly.

Monday, December 13, 2010

Gray Hair

Motivation: For the past two days, I was caught in the blizzard in Minneapolis and spent a lot of idle time reading newspapers at the airport.  One of the faces that caught my attention was that of WikiLeaks founder Julian Assange - his hair is just so white.  Even looking around the airport, I saw that hair graying proceeds at remarkably different rates.  We all associate hair graying with ageing, but is it also linked with diseases of ageing like vascular disease?  Turns out that this issue, too, has been studied.

Paper: "Gray hair, baldness, and wrinkles in relation to myocardial infarction: The Copenhagen City Heart Study", Schnohr, P. et. al. American Heart Journal, 1995 (130): 1003-1010.  http://www.ahjonline.com/article/0002-8703(95)90201-5/abstract?source=aemf

Method: The Copenhagen City Heart Study is a prospective study of 20,000 adult men and women.  In the study, 7,163 women and 5,837 men were physically examined between 1976 and 1978.  Five years later, the subjects were examined again (91% follow-up rate).  Death and MI were ascertained from questionnaires and from hospital records.  During the physical exam, gray hair, baldness, wrinkles, ear lobe crease, and arcus senilis were noted.  Average follow-up was 12 years.

Results:
The paper provides excellent statistics on the prevalence of gray hair, baldness, wrinkles, etc., divided by gender and age group (really interesting), but I will skip to the meat of the matter regarding correlation.  Overall, in the observed time period, women suffered 226 MIs and men 524.

Gray hair: After adjusting for known risk factors including age, smoking, BMI, cholesterol, etc., a significant correlation existed for men but not for women.  For men, the relative risk was 1.4 (CI: 0.9-2.0) for moderately gray hair and 1.9 (1.2-2.8) for completely gray hair.  For women, the relative risk was 1.1 (0.7-1.6) for moderately gray hair and 1.4 (0.9-2.0) for completely gray hair.  Note that about 25% of women dyed their hair and had to be excluded from this subgroup analysis.  Among women, dyeing hair did not increase the incidence of MI: RR 1.3 (0.6-2.7).

Baldness: Frontoparietal baldness was associated with incident MI in men but not in women.  In men, the relative risk was 1.6 (1.1-2.3) compared to men with no bald triangle.  No significant association was found for crown-top baldness.

Wrinkles: Across all age groups, facial wrinkles were not significantly associated with MI in either sex.  Among subjects younger than 55, however, severe facial wrinkles were associated with MI in men but not in women (RR for men: 1.6, 1.1-2.3).

Conclusion: At least in men, after adjusting for conventional risk factors for MI, completely gray hair was associated with an increased risk of MI.  The same association did not hold for women, but the case there is murkier.  First of all, women, as expected, experienced fewer MIs than men.  Also, a large fraction of women (~25%) used hair dyes and had to be excluded, which biased the analysis.  So graying hair may or may not be a risk factor for women.  Other interesting results were that frontoparietal baldness and severe facial wrinkling in subjects younger than 55 were both associated with MI in men but not in women.  These results suggest that craniofacial ageing may result from different processes in men and women.

Overall, this was one of the largest prospective studies examining the relation between clinical signs of ageing and MI.  But, as with any observational study, the presence of confounders cannot be ruled out.

Thursday, December 2, 2010

The Great Heart Failure Mystery

This post is best viewed from the blog website: http://siriasis.blogspot.com/


Motivation: One of the "bread and butter" diagnoses in inpatient medicine is systolic heart failure exacerbation - we all see it, we all treat it, but do we understand it?  I first wondered about this question last year during my basic medicine clerkship.  Questions about the etiology of decompensated systolic heart failure are usually answered with some notion of overstretching the heart.  For me, the difficulty was that it is easy to see how overstretching a single muscle fiber can lead to weakness, but do we really overstretch an entire organ - especially an already dilated organ (like the enlarged heart in dilated cardiomyopathy)?

The traditional teaching and general consensus for decompensated systolic heart failure is that with volume overload, preload exceeds the capacity of Frank-Starling compensation, after which the relation inverts: increased filling results in decreased cardiac output.  The presumed basis is a loss of productive overlap of muscle fibers with increased filling.  But is there solid empirical data for this explanation?

The answer is shockingly no.  I have even asked cardiologists to confirm this assessment.  Of course, lack of data does not imply that the traditional teaching is false.  So, are there physiological studies about the response of the dilated heart to filling?

Paper: Evidence of the Frank-Starling Mechanism in the Failing Human Heart.  Holubarsch, C., et al. Circulation (1996); 94: 683-689. http://circ.ahajournals.org/cgi/content/full/94/4/683

Method: Five human hearts with end-stage dilated cardiomyopathy were excised from patients receiving heart transplants.  The average ejection fraction of these hearts was 17%.  The hearts were kept alive artificially (really cool!) using oxygenated blood, and pressure-volume relationships were measured directly.  The hearts were paced using epicardial leads.  For controls, two healthy hearts were used that were originally destined for transplantation but could not be transplanted for technical reasons.

Results:
For this experiment, I think that pictures tell more than any description.  So, I will paste some of the key graphs and point out the pertinent results:


In the graph on the left, the x-axis shows the volume of the left ventricle.  In panel A, as the volume increases, the pressure inside the ventricle increases.

In panel B, we see that as the volume increases, the diastolic pressure increases and the generated systolic pressure increases as well (the Frank-Starling mechanism).  For perspective, the hearts here are really being filled up - the average diastolic volume is about 120 mL.

In panel C, observe that as the LV is filled, the net generated pressure (which is simply systolic pressure minus diastolic pressure) decreases despite an intact Frank-Starling mechanism: the diastolic pressure increases at a faster rate than the systolic pressure at high volumes.
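
To make the arithmetic in panel C concrete, here is a toy illustration (my numbers, not the paper's data) of how the developed pressure can fall even while both systolic and diastolic pressures rise:

    # Hypothetical LV pressures (mmHg) at increasing volumes (mL).  Diastolic
    # pressure rises faster than systolic at high volumes, so the developed
    # (net generated) pressure, systolic minus diastolic, eventually falls.
    volumes = [80, 100, 120, 140]
    systolic = [95, 110, 120, 126]
    diastolic = [8, 15, 30, 52]

    for v, s, d in zip(volumes, systolic, diastolic):
        print(f"V={v} mL: developed pressure = {s - d} mmHg")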

In the figure on the right, to investigate the Frank-Starling effect in muscle strips rather than the whole organ, individual strips were isolated from healthy hearts (top curve) and from failing hearts (bottom curve), and the total tension generated was measured at different muscle lengths.  As seen, the healthy heart generated far more force in response to increased stretch.  But while the response of the failing heart was more modest, there is no evidence of "falling off" the Starling curve.


Conclusion: This paper does not conclusively explain the physiologic basis of decompensation in dilated cardiomyopathy, but it sheds light on a couple of areas.  (1) Even in failing hearts with weak contractility, the Frank-Starling effect is likely preserved even when the heart is stretched by volume overload.  (2) Besides preload, cardiac contractility is controlled by two other factors: heart rate (increased contractility at increased heart rates) and neurohumoral responses (such as to catecholamines).  It may be that increased volume upsets the latter two mechanisms rather than disrupting the Frank-Starling mechanism.  Anyway, there is a story yet to be told ....

Tuesday, November 16, 2010

Cell Phone and Brain Cancer

Motivation: Yesterday, I was talking on my new smartphone, and my ear grew warm.  The warmth spurred thoughts about a number of recent Google News posts warning about the dangers of cell phone use.  Do cell phones cause brain cancer?

Paper: Meta-analysis of long-term mobile phone use and the association with brain tumours.  Hardell, L. et al. Int. J. of Onc. (2008) 32: 1097-1103.

Method: Review of prospective and case-control studies that examined the association between intracranial neoplasms and cell phone usage.

Results:
Glioma: In the meta-analysis of six case-control studies, the odds ratio for developing a glioma on the side ipsilateral to cell phone usage was 2.0 (CI: 1.2-3.4); on the contralateral side, 1.1 (CI: 0.6-2.0).  The largest included study was a multi-national case-control study of people who had used cell phones for more than ten years; it included 77 glioma cases and 117 controls.  In that study, the odds ratio for glioma on the side ipsilateral to cell phone use was 1.4 (CI: 1.01-1.9), and on the contralateral side, 1.0 (CI: 0.7-1.4).

Acoustic Neuroma: Nine case-control studies have examined this issue.  The meta-analysis yielded an odds ratio of 0.9 (CI: 0.7-1.1); of the nine studies, eight had found no association.  When examining only the three studies of people who had used cell phones for more than 10 years, the odds ratio for acoustic neuroma on the ipsilateral side was 2.4 (CI: 1.1-5.3), and on the contralateral side, 1.2 (CI: 0.7-2.2).  Two of the three studies in this analysis reached statistical significance for ipsilateral acoustic neuroma.

Meningioma: Seven case-control studies examined this issue.  The meta-analysis gave an odds ratio of 0.8 (CI: 0.7-0.99).  In the four case-control studies of more-than-10-year cell-phone users, the odds ratio for meningioma on the ipsilateral side was 1.7 (CI: 0.99-3.1), and on the contralateral side, 1.0 (CI: 0.3-3.1).  None of the four case-control studies reached statistical significance.
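
For readers curious how numbers like these get pooled: a fixed-effect meta-analysis averages the log odds ratios, weighting each study by the inverse of its variance, which can be recovered from the reported confidence interval.  A minimal sketch with hypothetical inputs (not the review's actual study list):

    import math

    def pooled_or(studies, z=1.96):
        """Fixed-effect inverse-variance pooling of odds ratios.

        Each study is (OR, CI_lower, CI_upper); the SE of log(OR) is
        recovered from the width of the reported CI."""
        num = den = 0.0
        for or_, lo, hi in studies:
            se = (math.log(hi) - math.log(lo)) / (2 * z)
            w = 1.0 / se ** 2
            num += w * math.log(or_)
            den += w
        log_or, se_pooled = num / den, math.sqrt(1.0 / den)
        return (math.exp(log_or),
                math.exp(log_or - z * se_pooled),
                math.exp(log_or + z * se_pooled))

    # Hypothetical studies (OR, 95% CI low, 95% CI high):
    print(pooled_or([(2.0, 1.2, 3.4), (1.4, 1.01, 1.9), (1.1, 0.6, 2.0)]))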

Conclusion: The association between cell-phone use and neoplasms is obviously of great public concern given the exploding number of global users.  The meta-analysis primarily found that cell-phone users of more than ten years had a greater likelihood of developing gliomas and acoustic neuromas on the side ipsilateral to cell phone use.  The findings pass a common-sense test in that the association strengthens with length of use and tracks the side of use.  On the other hand, there are a couple of caveats.  The first is that this meta-analysis consisted primarily of case-control studies, which are weak for proving causality.  People who use cell phones, for instance, may also be more likely to use other gadgets that emit radio-frequency energy or to live in cities, where they may be exposed to other toxins.  The second caveat is that gliomas are infrequent events; an odds ratio of 2.0 may sound scary but may not be very clinically significant (twice the odds of a rare event is still rare).  On the flip side, if the risk of neoplasm is dose-dependent, rates after many more years of use could become substantial.  For now, given our uncertain knowledge, I am buying a Bluetooth headset and getting a holster to keep the cell phone away from body contact!

Wednesday, October 27, 2010

Case study

When I was on Polk, I had a patient with a CD4 count <200 who presented with a headache of one week's duration. She didn't appear toxic and described the headache as more of an annoyance than anything else. Her basic labs were normal, and she was afebrile. Having a low threshold in such a patient, we did an LP, which was remarkable only for minimally elevated protein and no organisms on Gram stain. Lacking much to go on, I was about to attribute her symptoms to a tension headache. However, the next morning, I was talking to her and happened to look down at her hands (the image below is from the actual patient, courtesy of the Department of Photo Path at JHH). Look familiar? I sent an RPR & VDRL, which were positive in both serum and CSF. Neurosyphilis with a palmar rash! She had a PICC line placed for a 14-day course of IV penicillin G. At the risk of sounding corny, this case gave me a renewed appreciation for the physical exam.

Home remedies

Fantastic posts, Shamik!! So, I saw a case of "neutrophilic urticaria" today in derm clinic. Obviously, most urticaria is IgE-mediated hypersensitivity, but this is a rarer and treatment-resistant form of urticaria. It's typically treated with dapsone, which actually inhibits neutrophil chemotaxis. This patient, however, was G6PD deficient, had severe hemolysis with dapsone, and furthermore developed methemoglobinemia. It got me thinking, though: what else inhibits neutrophil function and could potentially be used as an alternative treatment? I did a PubMed search and found the following article published in CHEST (2000) by Rennard et al, which I thought was interesting/entertaining:

"Chicken soup inhibits neutrophil chemotaxis in vitro."
ABSTRACT: Chicken soup has long been regarded as a remedy for symptomatic upper respiratory tract infections. As it is likely that the clinical similarity of the diverse infectious processes that can result in "colds" is due to a shared inflammatory response, an effect of chicken soup in mitigating inflammation could account for its attested benefits. To evaluate this, a traditional chicken soup was tested for its ability to inhibit neutrophil migration using the standard Boyden blindwell chemotaxis chamber assay with zymosan-activated serum and fMet-Leu-Phe as chemoattractants. Chicken soup significantly inhibited neutrophil migration and did so in a concentration-dependent manner. The activity was present in a nonparticulate component of the chicken soup. All of the vegetables present in the soup and the chicken individually had inhibitory activity, although only the chicken lacked cytotoxic activity. Interestingly, the complete soup also lacked cytotoxic activity. Commercial soups varied greatly in their inhibitory activity. The present study, therefore, suggests that chicken soup may contain a number of substances with beneficial medicinal activity. A mild anti-inflammatory effect could be one mechanism by which the soup could result in the mitigation of symptomatic upper respiratory tract infections.

This seemed to be a pretty rigorous study of chicken soup published in Chest, of all journals! I continued to wonder: "Well, what about orange juice?" I found a Cochrane review, "Vitamin C for preventing and treating the common cold" by Douglas et al (2004). The trials in which vitamin C was introduced as therapy at the onset of colds did not show any benefit at doses up to 4 grams daily, but one large trial reported equivocal benefit from an 8 gram therapeutic dose at the onset of symptoms. Now, you may ask yourself, "How many glasses of OJ would it take to get 8 g of vitamin C?" And, yes, I did the calculation. It would take 64.5 eight-ounce glasses of orange juice to get an "equivocal benefit."
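
The back-of-envelope arithmetic, assuming roughly 124 mg of vitamin C per 8 oz glass of orange juice (a typical nutrition-table figure; the post doesn't state the value it used):

    target_mg = 8000      # the 8 g therapeutic dose from the Cochrane review
    mg_per_glass = 124    # assumed vitamin C content of an 8 oz glass of OJ
    print(target_mg / mg_per_glass)  # ~64.5 glasses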

In brief, no, OJ doesn't help, and who knows if chicken soup has any clinical benefit... but it inhibits neutrophil chemotaxis, which is pretty cool. So, now you have the answer the next time someone asks you about OJ and chicken soup!

Tuesday, October 26, 2010

CPR-Why not?

Motivation: When I was in eighth grade, I was traveling through the Detroit Airport and saw a man ten feet from me collapse.  He just folded senseless to the ground without any preliminaries and stayed that way.  My first response was to blink my eyes to assure myself that this was real.  The unreal feeling was succeeded by an enveloping numbness.  I did nothing and gazed dumbly at the man on the ground.  Of course, if everyone had acted as I did, the man would not have received any help.  His wife cried out, and someone else must have done something (I was not noticing much).  Soon, EMS came and cleared us out.  Interestingly, nobody did CPR.

Recently, as I learned in my ED clerkship orientation, the American Heart Association reformulated the CPR guidelines to eliminate rescue breathing from the CPR algorithm for the lay person.  Although many factors went into this decision, including emerging evidence that in the first few minutes chest compressions matter more than breaths, one of the key cited reasons is that removing rescue breathing may encourage more people to do CPR, which substantially increases the chances of survival.  Only 20-30% of out-of-hospital cardiac arrests ever receive CPR.  I wondered, however, whether people forgo CPR because they are afraid of rescue breathing or because they are scared out of their minds - like me.  Here is an article examining this very question:

Article: CPR Training and CPR Performance: Do CPR-trained Bystanders Perform CPR?  Swor, R., et al. Acad. Emerg. Med. 2006 (13): 596-601.  http://www.ncbi.nlm.nih.gov/pubmed/16614455?dopt=Abstract

Method: A prospective multicenter study in southeastern Michigan in which individuals calling 911 for cardiopulmonary arrest between 1997 and 2003 were followed up and interviewed.  Arrests occurring in nursing home facilities were excluded.

Results: In the study period, 868 subjects suffered arrests for which 911 calls were made.  Of these 868 calls, 684 callers were followed up and interviewed (78.8%).  Of the missing callers, 163 (18.8%) could not be identified or contacted, while 21 (2.4%) refused to give permission.  Callers were most frequently a family member of the victim (69.6%).

Patient population: The patient population was predominantly male (68%) and suffered an arrest in a residential setting (80.8%).  About 17% survived to hospital admission, but only 6.6% survived to hospital discharge!

Responder characteristics: Among bystanders, 54% had received CPR training during their lifetime.  In all, CPR was started before EMS arrival in 33.6% of cases.  Factors positively associated with starting CPR were younger age (less than 50), public location, witnessed arrest, and higher educational level.  Among those who were CPR trained, only 35.1% initiated CPR.  Factors positively associated in this subgroup included public location, witnessed arrest, higher education level, and recent CPR training; younger age was not associated in this subgroup.

Reasons for not doing CPR: Among the CPR trained, the most common reasons recalled for not initiating CPR were panic (38.7%), concern about performing CPR correctly (10.8%), physical inability to do CPR (3.6%), and fear of potentially harming the patient (1.8%).  A significant fraction (4.3%) did not perform CPR because they thought the patient was dead.  Only four callers (1.4%) identified mouth-to-mouth resuscitation as a barrier, and none identified concern about infectious diseases.

Conclusion: This study showed once again that while out-of-hospital arrest has a poor prognosis, many potential lives are lost because CPR is not initiated soon enough.  What I found fascinating was that even among the CPR trained, only 35% initiated CPR.  Overwhelmingly, the most common reason for not doing CPR was panic, not aversion to mouth-to-mouth resuscitation or concern about infectious diseases as commonly thought.  One point to keep in mind is that any study of bystanders is susceptible to the psychological bias that those not performing CPR may not want to disclose their true reasons out of shame.  I also found the association between public location and bystander initiation of CPR interesting.  One reason may be that the presence of more people decreases panic and incites action.

Given the high panic and anxiety surrounding such cases, I think that simplifying the CPR regimen is probably a very good thing.  If I know that when someone goes down before me I just have to kneel down and pump at 100 compressions a minute, I may be more likely to do it.  Hopefully, there will be widespread dissemination of this knowledge through posters and other public education tools.

Wednesday, October 20, 2010

Dabigatran - Goodbye to coumadin?

Motivation: After even a few months on the wards, I think all of us learn that coumadin/warfarin inspires mixed emotions.  With too little drug, you don't get the proven benefits.  And with too much, there is a chance of bleeding - even to death.  This summer, while on the neuro service, I saw a man on coumadin die from intraventricular hemorrhage.  The problem with coumadin is that its metabolism is affected by so many environmental variables that steady anticoagulation is hard to maintain.  Recently, the FDA approved dabigatran, a direct thrombin inhibitor, for anticoagulation in atrial fibrillation.  It is expected that in the future, dabigatran will substitute for coumadin in most cases of anticoagulation.  But what are the data to justify replacing good old coumadin?

Paper(s): Two primary studies for dabigatran: (1) Trial RE-COVER: Dabigatran versus Warfarin in the Treatment of Acute Venous Thromboembolism. Schulman, S. et al. NEJM, 2009 (361): 2342-2352.  (2) Trial RE-LY: Dabigatran versus Warfarin in Patients with Atrial Fibrillation. Connolly, S. J. et al. NEJM, 2009 (361): 1139-1151.

Links: RE-COVER: http://www.nejm.org/doi/full/10.1056/NEJMoa0906598#t=article; RE-LY: http://www.nejm.org/doi/full/10.1056/NEJMoa0905561#t=article

Methods: For the RE-COVER study, 2,539 patients were recruited from 228 clinical centers in 29 countries and randomized, in a blinded fashion, to a fixed dose of dabigatran (150 mg bid) or to warfarin adjusted to an INR of 2.0-3.0.  Patients were essentially adults with symptomatic deep vein thrombosis who did not have pulmonary embolism with hemodynamic instability.  The primary end-point was symptomatic venous thromboembolism (VTE) or death from VTE.  For the RE-LY study, 18,113 patients were recruited from 951 clinical centers in 44 countries and randomized to a fixed dose of dabigatran (110 mg bid or 150 mg bid) or to warfarin adjusted to an INR of 2.0-3.0.  Patients essentially had to have atrial fibrillation plus one more condition: previous stroke/TIA, heart failure, or older age (65-74) with diabetes, HTN, or CAD.  The primary end-point was stroke or systemic embolism.

Results:
RE-COVER: For the primary end-point of symptomatic or fatal VTE, 2.4% of the dabigatran group had such events versus 2.1% of the warfarin group (difference not significant).  There was no difference in overall mortality.  In the warfarin group, the INR was therapeutic only 66% of the time.  In terms of safety, rates were equivalent for any bleeding event and for major bleeds (1.6% with dabigatran vs. 1.9% with coumadin).  Patients on dabigatran were marginally more likely to experience any adverse event (9.0% vs. 6.8%, p=0.05).  Follow-up was for six months and essentially complete (>98%).

RE-LY: For the primary end-point of stroke or systemic embolism, the 150 mg dose of dabigatran was superior to coumadin (1.11% vs. 1.69% per year, p<0.001).  The 110 mg dose was equivalent to coumadin (rate of stroke or embolism: 1.53% vs. 1.69%, p=0.34).  Rates of hemorrhagic stroke were lower in the 150 mg group than with warfarin (0.10% vs. 0.38%, p<0.001), and a similar trend was found in the 110 mg group.  Overall mortality was not significantly different, though there was a slight trend toward lower mortality in the 150 mg group (3.64% vs. 4.13%, p=0.051).
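
A quick back-of-envelope on what that efficacy difference means in absolute terms (my arithmetic from the reported rates, not the paper's analysis):

    # Absolute risk reduction and number needed to treat, using the reported
    # yearly event rates for stroke or systemic embolism.
    warfarin = 0.0169   # events per patient-year
    dabi_150 = 0.0111
    arr = warfarin - dabi_150
    print(f"ARR = {arr:.2%} per year; NNT = {1 / arr:.0f} patient-years")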

Follow-up was essentially complete in all patients for two years.  In the warfarin group, the INR was therapeutic only 64% of the time.  In terms of safety, the rate of MI was higher in the 150 mg group than with warfarin (0.74% vs. 0.53%, p=0.048); the same trend held for the 110 mg group.  Overall, life-threatening bleeds were lower in both dabigatran groups than with warfarin, but rates of major GI bleeds were higher in the 150 mg dabigatran group (1.51% vs. 1.02%, p<0.001).  In the 110 mg group, rates of GI bleeding were comparable.

Conclusion: Overall, both studies were well done, with good follow-up and clinically relevant end-points.  The RE-COVER trial satisfactorily showed, I think, that dabigatran is equivalent to warfarin for preventing complications of thromboembolism.  Another interesting result from this trial was that even though warfarin was non-therapeutic a third of the time, the missed target did not seem to make any difference in thromboembolic or bleeding events compared with dabigatran, which presumably provides fairly constant anticoagulation.  RE-LY also provided many interesting results.  On one hand, the 150 mg dose of dabigatran proved superior to coumadin (and to the 110 mg dose) for preventing embolic events in atrial fibrillation.  On the other hand, the side-effect profile of dabigatran is a mixed bag.  While it has lower rates of potentially deadly intracranial hemorrhage and life-threatening bleeds, dabigatran had higher rates of MI and GI bleeds - not altogether benign either.  There is no good explanation for why dabigatran has higher rates of MI.  Finally, dabigatran is estimated to cost about five times more than coumadin.  What this does to our health care spending, with 30 million prescriptions for warfarin each year, is an open issue.  In conclusion, I think dabigatran is a definite first choice for patients with atrial fibrillation or DVT and poor follow-up in INR clinics, and a good alternative for other patients with atrial fibrillation who can afford the drug and understand the different side-effect profile.

Tuesday, October 12, 2010

Giving an infant candy - does it work?

Motivation: In pediatrics clerkship, I have come across babies who just howl after blood draws, and one of the techniques we use is coating their pacifier with a sweet liquid.  I guess the theory is that sugar makes babies happy and must therefore counteract the pain.  But, does the sugar work?  Or, should we provide babies with analgesics instead?

Turns out that this topic has been extensively studied in newborns (44 RCTs!), stemming from concerns about the effect of early pain experience on subsequent neurodevelopment; based on subjective behavioral changes, sugar does pacify babies.  But recent evidence suggests that neonates may be too immature to link pain experience and behavior reliably.  So, does sugar really help?

Paper: Oral Sucrose as an Analgesic Drug for Procedural Pain in Newborn Infants: A Randomised Controlled Trial.  Slater, R., Cornelissen, L., et al. Lancet, 2010 (376): 1225-1232.  http://www.thelancet.com/journals/lancet/article/PIIS0140-6736(10)61303-7/abstract?rss=yes#

Method: At University College Hospital in London, 59 full-term healthy neonates were randomized to either 0.5 mL of 24% sucrose solution or 0.5 mL of sterile water before receiving a heel-stick.  Before the heel-stick, the babies were subjected to a control, non-painful stimulus: touching the blunt end of the lancet to their heels.  The cortical response to both the control and noxious stimuli was recorded via EEG.  The primary outcome was the difference in pain-specific EEG activity in response to treatment with sugar.  Secondary outcomes were observation of facial expression and physiological changes.  Both parents and clinicians were blinded to the identity of the administered solution.

Results: Because of technical difficulties of performing EEG on squirming babies and other measurement barriers, the sucrose group ultimately had 20 neonates and the control group had 24 neonates.  The two groups were not substantially different in baseline characteristics after the dropout.

There was no difference in cortical pain-specific EEG changes between the sucrose and control groups.  Physiologically, there were no differences in heart rate or oxygen saturation.  However, based on facial expression, babies who received sucrose were less likely to show a facial response to pain: 35% of the sucrose group showed no facial change to the heel-stick, while all babies in the control group had facial changes like crying.

Conclusion: After this study, my conclusion is that we really don't know what babies are feeling.  The key fact we want to understand is the subjective experience of pain in babies.  Since neonates do not talk, there is no definite gold standard.  Previously, facial expression was used as a measure of the subjective experience; this paper challenges that viewpoint.  There are two alternative explanations for the paper's results.  One is that the EEG simply measures nociceptive input to the brain, while the facial expression shows how the baby processes that input; from this perspective, relying on the baby's facial expression makes more sense.  Alternatively, sucrose may reflexively change the baby's facial expression while the pain is felt at equal intensity in the cortex; from this viewpoint, relying on the EEG cortical response makes more sense.  Right now, I think the only way to separate these explanations is to do longer-term studies to see which pain response is more closely linked with aberrant neurodevelopment.

Tuesday, September 21, 2010

Update on Cocaine and Beta-Blockers

Motivation: Cocaine use in Baltimore is unfortunately common.  During my sub-I in medicine, I met more than a couple of patients presenting with chest pain after cocaine use.  Had I not known that the patient had been using cocaine, I would have suggested a beta-blocker for its anti-arrhythmic and anti-hypertensive effects.  But the traditional teaching is that beta-blockers are contraindicated after cocaine use.  Just searching Google for "cocaine beta blocker" pulls up many sites warning against beta-blocker use - including trusty Wikipedia.

The traditional reasoning is that since cocaine is a norepinephrine reuptake inhibitor, blocking the beta-receptors would leave unopposed alpha-adrenergic stimulation and worsen hypertension; beta-receptors (especially beta-2) mediate some vasodilation.  But is this all theory, or are there trials?  Despite decades of official warnings against beta-blocker use, the following paper is the first to assess beta-blocker use in cocaine-associated chest pain.

Paper: Beta-blockers for Chest Pain Associated with Recent Cocaine Use. Rangel, C. et al. Arch. Intern. Med. 2010 (170): 874-879.  http://archinte.ama-assn.org/cgi/content/full/170/10/874

Method: In a retrospective study, the authors looked at patients admitted to San Francisco General Hospital with chest pain and a urine toxicology screen positive for cocaine.  The authors primarily examined the association between beta-blocker use and death.  Secondary outcomes were blood pressure levels, troponin levels, occurrence of v-fib/v-tach, intubation, and need for vasopressors.  Patients with clearly documented pulmonary etiologies such as pneumonia or pulmonary embolus were excluded.  In all, 331 patients met the criteria of chest pain with positive urine toxicology.

Results:
Characteristics: Of the 331 patients with chest pain and cocaine use, 46% got a beta-blocker in the ED - mostly IV metoprolol.  Patients who got beta-blockers tended to be a little older (51 vs. 49 years) and more likely to have higher blood pressure (SBP 159 vs. 141), a history of HTN (70% vs. 58%) or coronary bypass grafting (6% vs. 1%), and concurrent use of an ACE inhibitor (42% vs. 29%) or a statin (17% vs. 8%).

Death: 45 patients died during follow-up after hospitalization.  Of those who received a beta-blocker, 12% died, compared to 15% of those who did not (p = 0.38).  After adjusting for confounding variables, being discharged on a beta-blocker was associated with a 70% reduction in the risk of cardiovascular death (HR 0.29, 95% CI 0.09-0.98).

Secondary outcomes: After adjusting for other medications received, patients on beta-blockers had a mean 8 mmHg greater decrease in systolic blood pressure than patients who did not get beta-blockers.  Receiving a beta-blocker did not result in meaningful differences in ECG findings, peak troponin levels, or the incidence of malignant ventricular arrhythmias.

Conclusion: Beta-blockers did not seem to harm patients with a history of cocaine use.  In particular, beta-blocker administration in the ED resulted in lower, not the hypothesized higher, blood pressure!  Also, being discharged on a beta-blocker significantly decreased the risk of cardiovascular death.  What I found remarkable was that, in general, the patients given beta-blockers may have been the less healthy group in terms of age, blood pressure, and bypass history.

Being a retrospective study, of course, imposes significant limitations.  The group getting beta-blockers and the group not getting them differed, and these differences may have influenced the results in unforeseen ways not easily corrected by statistical adjustment.  Also, some of the confidence intervals were rather wide: the hazard ratio showing a 70% reduction in cardiovascular death had an upper confidence limit of 0.98 - barely significant.

Friday, September 17, 2010

Is There Evidence for Lung Cancer Screening?



Mr. JF is a 52-year-old man with hypertension and a 30 pack-year smoking history. In addition to smoking cessation, is there any way to decrease his mortality from lung cancer through screening?

Lung cancer is:
  • The #1 cancer killer in men and women
  • Poor prognosis, with an 85-90% case fatality rate
  • Mostly diagnosed at an advanced stage of disease

Can mass screening lower fatality through earlier detection of localized disease?


National Guidelines Clearinghouse:
  • “Screening for lung cancer: ACCP evidence-based clinical practice guidelines” (2003).
  • We do not recommend that low-dose helical CT be used [...] except in the context of a well-designed clinical trial. Grade of recommendation, 2C
  • We recommend against the use of serial chest radiographs [...]. Grade of recommendation, 1A
  • We recommend against the use of single or serial sputum cytologic [...]. Grade of recommendation, 1A
Cochrane Reviews
“Screening for Lung Cancer” (2010)

  • Analyzed 7 major trials
  • Conducted in the 1970s-1980s worldwide
  • Population: mixed, but mostly male smokers >45 yo
  • Intervention: frequent CXR, sputum cytology
  • Comparison: less frequent CXR +/- sputum
  • Outcomes:
    1. Lung-cancer-specific survival
    2. Lung-cancer-specific mortality
    3. Overall survival
Trial name, type and date
  • Czech Study, RCT, 1976-1982
  • Erfurt (German) Study, controlled non-randomized, 1972-1977
  • JHH Study, RCT, 1973-1978
  • Kaiser Study, RCT, 1964-1980
  • Mayo Study, RCT, 1971-1976
  • Sloan Kettering Study, RCT, 1974-1978
  • North London Study, cluster randomized trial, 1960-1964
Population
  • Czech: Males 40-64, current smokers with a greater than 20 pack-year history, expected to live and functionally participate for 5 yrs.
  • Erfurt: Males 40-65 living in Erfurt. 41K in intervention and 102K in control.
  • JHH: Males >45, smokers (>1 pack/day) near Baltimore, recruited through mail ads.
  • Kaiser: M&F 35-54, of whom only ~17% smoked, members of the Kaiser Permanente Health Plan.
  • Mayo: Males >45 recruited from the Mayo outpatient practice.
  • MSKCC: Male smokers >45.
  • N. London: Males >40, working in industrial firms in N. London.
Interventions

Name | Control Arm | Intervention Arm | Screening Duration
N. London | CXR before and after study | CXR before and after study, plus CXR q 6 months | 3 yrs
MSKCC | Annual CXR | Annual CXR + sputum q 4 months | 5 yrs
Mayo | Annual CXR/sputum | CXR/sputum q 4 months | 6 yrs
Kaiser | Routine care (annual physical + CXR) | Additional encouragement to undergo routine care | ?
JHH | Annual CXR | Annual CXR + sputum q 4 months | 5 yrs
Erfurt | CXR q 18 months | CXR q 6 months | 5 yrs
Czech | One CXR/sputum at study termination | CXR/sputum q 6 months | 3 yrs
Czech (extension) | After the initial 3 yrs, both arms received another 3 years of CXR | | 3 yrs

Results

[The forest plots from the Cochrane review appeared here; the key findings are summarized under Discussion below.]
Critiques of Methodology

Name | Assignment Random | Allocation Concealed | Blinding of Death Assessment | Incomplete Data Addressed | No Other Bias
N. London | Y | ? | ? | Y | Baseline differences b/w pt groups
MSKCC | Y | Y | Y | Y | Y
Mayo | Y | ? | Y | Y | Y
Kaiser | N | ? | Y | N | Baseline differences b/w pt groups
JHH | Y | ? | Y | ? | Y
Erfurt | N | N | ? | Y | Y
Czech | Y | ? | ? | N | Pt baseline data not fully provided



Discussion
More frequent CXR vs. less frequent CXR
  • 5-yr lung cancer survival: small benefit
  • 5-yr lung cancer mortality: same, possibly harm
  • 5-yr all-cause mortality: same

Annual CXR + sputum q 4 months vs. annual CXR alone
  • 5-yr lung cancer survival: small benefit
  • 5-yr lung cancer mortality: small benefit
  • 5-yr all-cause mortality: same

Definitions:
  • Lung cancer survival: alive, or died from a non-lung-cancer cause
  • Lung cancer mortality: died from lung cancer
  • All-cause mortality: died from any cause

No study addressed whether screening is better than no screening

Survival results were the most heterogeneous.  Survival can be confounded by lead-time, length-time, and overdiagnosis biases.

The pooled finding that more frequent CXR increased both disease-specific mortality and disease-specific survival further suggests that survival is an unreliable outcome.

Increased CXR was shown to actually increase cancer mortality in several studies.

CXR itself is unlikely to increase mortality, given its low radiation dose, but it may lead to unnecessary surgery, and early diagnosis can lead to depression.

Several studies had methodological flaws such as baseline differences b/w groups and poor randomization/masking

Contamination (control-group pts receiving the intervention) and poor compliance (intervention-group pts not receiving the intervention) both dilute the measured effect of screening

CXR does not detect the small tumors whose removal may benefit pts the most.

A recent large uncontrolled trial of spiral CT showed that 92% of lung cancers diagnosed were stage I, with those undergoing resection having a 10-yr survival of 85%.

CT lung screening is associated with a 3x increase in lung cancer diagnoses and a 10x increase in surgery.

Currently Ongoing Studies

Name | Type | Population | Control Arm | Intervention Arm | Start Date
NELSON (Dutch) | Multicenter RCT, parallel group, no blinding | Ages 47-75, current smokers or quit <10 yrs; goal = 15K | Smoking cessation advice | Chest CT at year 4, sputum, blood tests, PFTs, smoking cessation | 2003
NLST (US) | Multicenter RCT, parallel group | Current or former smokers 55-74; goal = 50K | Annual CXR for 3 yrs | Annual chest CT for 3 yrs | 2002
PLCO (US) | Multicenter RCT, parallel group | Males and females 55-74 | ? | "Annual Chest Radiography" * | 1992


Summary

Current ACCP guidelines do not recommend routine screening with sputum, CXR or CT for lung cancer

A recent Cochrane meta-analysis shows that most trials did not compare screening vs. no screening but only the type/frequency of screening.

More frequent CXR screening and the addition of sputum cytology did not improve all-cause mortality but may improve lung-cancer-specific survival at 5 yrs.

Several large RCTs are underway that compare CT screening with no screening.


Thursday, September 16, 2010

Arcus Senilis - What does it mean?

Motivation: A few years ago, before I entered medical school, I was volunteering at an inner-city health clinic for the homeless and thought that everyone over fifty was developing cataracts - so many people had a white ring around their iris.  Since then, I have been disillusioned and have learnt about arcus senilis.  Last month, however, arcus senilis came to my attention again.  A resident and I were evaluating a patient in the ED with suspicious chest pain, and the resident talked about how the arcus, in the context of the patient's history, suggested underlying vascular disease.  So I wondered: how predictive is arcus of atherosclerosis?

As an introduction, corneal arcus is a lipid-rich deposit at the junction of the cornea and sclera.  This lipid deposition is thought to share some similarity with the lipid deposition of atherosclerosis.

Paper: Relation of Corneal Arcus to Cardiovascular Disease (from the Framingham Heart Study Data Set).  Fernandez, A. B., et al. The American Journal of Cardiology 2008 (103): 64-66.
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2636700/?tool=pubmed

Method: The Framingham study is a prospective study initiated in 1948 to examine factors influencing the incidence of cardiovascular disease (CVD).  Based on the initial evaluation of arcus, the paper determined the predictive value of arcus for CVD after four and eight years.  The total cohort examined consisted of 23,376 patients for the four-year prediction analysis and 13,469 for the eight-year analysis.

Results: 


Unadjusted: Just the presence of arcus was predictive of first cardiovascular event with a hazard ratio of 2.28 (2.02-2.57) at 4 years and 2.52 (2.15-2.95) at 8 years.

Age and gender adjusted: Since increased age and male gender correlate with CVD, the authors adjusted for age and gender.  After adjustment, arcus was predictive of events with hazard ratio of 1.07 (0.95-1.22) at 4 years and 1.18 (0.99-1.39) at 8 years.

Multivariate adjustment: Of course, more factors influence CVD than just age and gender.  The authors also modeled the data after adjusting for age, gender, cholesterol, blood pressure, diabetes, smoking, and BMI.  With multivariable adjustment, arcus was predictive of events with a hazard ratio of 1.04 (0.92-1.18) at four years and 1.14 (0.96-1.35) at eight years.
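
For the curious, the kind of multivariable adjustment described here is typically done with a Cox proportional-hazards model.  A minimal sketch on synthetic data, assuming the third-party lifelines library (the column names and data below are invented, not Framingham's):

    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter

    rng = np.random.default_rng(0)
    n = 500
    df = pd.DataFrame({
        "age": rng.uniform(40, 80, n),
        "male": rng.integers(0, 2, n),
        "arcus": rng.integers(0, 2, n),
    })
    # Synthetic follow-up in which age drives risk but arcus does not.
    df["years"] = rng.exponential(40 / (1 + 0.05 * (df["age"] - 40)))
    df["cvd_event"] = (df["years"] < 8).astype(int)
    df.loc[df["cvd_event"] == 0, "years"] = 8.0  # administrative censoring

    cph = CoxPHFitter()
    cph.fit(df, duration_col="years", event_col="cvd_event")
    print(cph.summary)  # adjusted hazard ratios appear as exp(coef)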

Conclusion: After adjusting for age and gender, arcus lost independent predictive value for CVD.  The most likely explanation for the unadjusted association between arcus and CVD is that older people are more likely to have both arcus and vascular disease.  This was the largest study to examine the association of arcus with CVD.

Other studies in the past have reported arcus as an independent predictor of CVD.  A major difference in the Framingham study is that detailed ophthalmological exams were not done; arcus was assessed by simple visual inspection.  So, while a detailed examination of arcus may have some underlying predictive value, merely spotting arcus on inspection predicts CVD no better than age and gender alone!

Friday, September 3, 2010

Temperature Measurement - Right way?

Motivation: In the clinic, many patients come in with "high fever" - one patient insisted that at home he consistently measured his temperature at 105.0, although we found his fever to be no higher than 100.0.  What happened?  I now wonder whether we were using the same measurement techniques.  Talking to attendings, I have found that almost all favor or dislike one or more methods of temperature measurement.  The following paper compares commonly used techniques for measuring core body temperature against the gold standard: pulmonary artery catheter temperature measurement.

Paper: Accuracy and Precision of Noninvasive Temperature Measurement in Adult Intensive Care Patients. Lawson, L. et al. Am. J. Crit. Care, 2007 (16): 485-496.  http://ajcc.aacnjournals.org/cgi/content/abstract/16/5/485

Methodology: The authors collected temperatures by pulmonary artery catheter (PAC), axillary, temporal artery, tympanic membrane, and oral techniques.  The four external measurements and the PAC temperature were collected within a minute of each other.  Sequential measurements with all techniques were taken three times at twenty-minute intervals to analyze intra- and inter-method variability and concordance.

Patient Selection: Sixty adults in the ICU (40 male and 20 female) with cardiopulmonary disease and a pulmonary artery catheter.  Patients were excluded if they had oral pathology, head trauma, or a tympanic membrane that was not visible.

Results:
PAC vs oral - On average, oral measurement underestimated PAC temperature by about 0.09°C (0.16°F).  The precision (reproducibility) was 0.43°C (0.77°F).  19% of measurements were more than 0.5°C (0.9°F) different from PAC.  Oxygen delivery via nasal cannula did not make a clinical difference in the measured temperature, but intubated patients consistently had higher oral measurements.

PAC vs tympanic membrane - On average, tympanic membrane overestimated PAC temperature by 0.36°C (0.65°F). Precision was 0.56°C (1.0°F).  49% of measurements were more than 0.5°C (0.9°F) different from PAC.

PAC vs temporal artery - On average, temporal artery overestimated PAC temperature by 0.02°C (0.04°F).  Precision was 0.47°C (0.85°F).  Administration of vasopressor did not significantly alter concordance.  20% of measurements were more than 0.5°C (0.9°F) different from PAC.


PAC vs axillary - On average, axillary underestimated PAC temperature by 0.23°C (0.41°F).  Precision was 0.44°C (0.79°F).  27% of measurements were more than 0.5°C (0.9°F) different from PAC.
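
The accuracy and precision figures above are standard method-comparison (Bland-Altman-style) statistics: accuracy is the mean of the paired differences from the PAC reference, and precision is their standard deviation.  A minimal sketch with made-up temperatures:

    import numpy as np

    pac  = np.array([37.2, 38.1, 36.9, 37.6, 38.4])  # hypothetical PAC temps (deg C)
    oral = np.array([37.0, 38.0, 36.9, 37.4, 38.2])  # paired oral readings

    diff = oral - pac
    bias = diff.mean()                      # accuracy: systematic offset
    precision = diff.std(ddof=1)            # precision: spread of the differences
    frac_off = np.mean(np.abs(diff) > 0.5)  # fraction differing by >0.5 deg C
    print(f"bias={bias:+.2f} C, precision={precision:.2f} C, >0.5 C off: {frac_off:.0%}")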

Conclusion: On average, oral and temporal artery measurements are likely good estimates of core body temperature.  Axillary temperature is probably next on the list, followed last by the tympanic membrane measure, for which 49% of measurements differed from core body temperature by more than 0.5°C.  Another take-home point is that for any technique, changes of about 0.5°C - the precision level of almost all the techniques - can be explained simply by measurement variability.  A final point is that even for the best non-invasive techniques like oral measurement, the temperature will be off by 0.5°C or more about 20% of the time.

Limitations: The major limitation of this paper is that only three patients were actually febrile; concordance rates may differ in febrile patients.  Also, all of the patients were in the ICU, and results may vary in an outpatient setting.  Finally, the measurements were taken by experienced ICU nurses.  The accuracy and precision of measurements by medical students or by patients may be a whole different story.

Monday, August 23, 2010

Ibuprofen or Tylenol for Fever - Surprise

Sorry for the long delay in writing a blog post.  From now on, expect a weekly edition of Siriasis.

Motivation: On the inpatient service, when someone has a fever, the first line of symptomatic treatment is acetaminophen.  This summer, however, when I was febrile for a few days with a viral illness, I found that I had better control of the fever with ibuprofen than with acetaminophen.  I wondered: is acetaminophen really better than ibuprofen?  What are the data?

Paper: Efficacy and Safety of Ibuprofen and Acetaminophen in Children and Adults: A Meta-Analysis and Qualitative Review.  Pierce, C. A., The Annals of Pharmacotherapy (2010) 44: 489-506.

Type of Study: A meta-analysis of randomized controlled trials that directly compared ibuprofen to acetaminophen and provided comparative safety data.  The authors analyzed pediatric and adult populations separately.  In analyzing adverse events, the authors excluded expected side effects - GI disturbance for ibuprofen and mild liver enzyme abnormalities for acetaminophen - from the "serious" side effects.

Results:
Analgesic efficacy in adults: Of the 36 studies included, 26 concluded that ibuprofen was superior to acetaminophen.  No study showed acetaminophen superiority.  The overall effect size was medium (standardized mean difference of 0.69, CI: 0.57 to 0.81).
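
For reference, the standardized mean difference reported here is essentially Cohen's d: the difference in group means divided by a pooled standard deviation.  A minimal sketch with invented pain-score numbers:

    import math

    def cohens_d(m1, sd1, n1, m2, sd2, n2):
        """Standardized mean difference with a pooled SD."""
        pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
        return (m1 - m2) / pooled

    # Hypothetical pain relief scores (mean, SD, n) for two treatment arms:
    print(cohens_d(4.3, 1.9, 120, 3.1, 1.8, 118))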

Antipyretic efficacy in adults: Of the five studies, three concluded that ibuprofen was superior, while two found no difference.  Of note, not all of these studied infectious fevers - some studied fever caused by iatrogenic sources like interferon injections.

Antipyretic efficacy in children: A meta-analysis of seven trials concluded that fever control at four hours is significantly better with ibuprofen than with acetaminophen, though the effect size is relatively small (standardized mean difference of 0.26 (0.10 to 0.41)).

Adverse events: When taken as directed in adults and children, the odds of suffering at least one adverse event were not significantly different between ibuprofen and acetaminophen.  When taken as directed, very few serious adverse events occurred in either arm.

Limitations of Data: There are a number of limitations to the meta-analysis.  To me, the chief one is that no single model of fever is considered; rather, the efficacy of treating fevers of diverse causes is lumped together.  While potentially powerful in one sense - you can apply the data to fever from any source - we don't know if subgroups benefit differently.  Also, for adverse events, excluding GI side effects from the ibuprofen group may have changed the results.

Conclusion: For fever in adults, good data do not exist, but extrapolating from the pediatric population, ibuprofen is likely a better antipyretic than acetaminophen.  As an analgesic, ibuprofen is better in adults.  Also, excluding the drugs' expected side effects, acetaminophen and ibuprofen do not carry additional toxicity.  My conclusion is that in the outpatient setting, if the patient has no serious risk factors for GI bleeding, better fever control and analgesia are likely achieved by prescribing time-limited doses of ibuprofen.

Link to paper: http://www.theannals.com/cgi/content/full/44/3/489

Sunday, July 25, 2010

Medicine on Vacation

Hello friends, Just got back from a refreshing five-week vacation in India.  During my stay there, trying to get away from medicine, I found myself involved in a fascinating case.  I was staying in a town outside the city of Kolkata, which is a gigantic metropolis with a population of at least 5 million people.  The house we lived in had two stories, and one evening around 6 pm, I was talking to our neighbors downstairs when the man mentioned that his wife was having chest pain.  He knew incidentally that I was in medical school, and after hearing about the chest pain, I went - very nervously - to take a look at his wife.  It turned out that his wife was not just having some chest pain but very significant chest pain.  She was diaphoretic and thrashing in bed in agonizing pain.  She is a 45-year-old woman with absolutely no past medical history.  She is usually very active, and starting from the night before, she had abruptly started experiencing constant pain in the middle of her chest.  The pain did not radiate and did not worsen with exertion.  She had some difficulty taking a deep breath, and sitting up in a bent position worsened her pain.  My first instinct was to say that she needed to go to a hospital.  But in India, hospital stays are usually paid out-of-pocket, and the decision to go to a hospital is hardly casual in terms of financial cost.  The husband asked if this was a heart attack.  However, given her age group, lack of any medical history, and the constant, non-exertional nature of the pain, my instinct was that this was not a heart attack.  So, she stayed in bed tossing and turning during the night, and I advised them to see a "real doctor" the next morning.

The next morning, the pain changed.  In fact, she no longer had chest pain but rather diffuse, non-focal abdominal pain.  There was no rebound tenderness, rigidity, hematemesis, or crampy quality to the pain.  They went to see a local doctor and came back with a diagnosis of "gas" along with antacid prescriptions.  The day went on, and despite the antacids, the pain did not decrease.  She also developed anorexia and nausea with one episode of vomiting.  By this time, they had also called their nephew, who had just graduated from medical school.  The nephew and I consulted together and agreed that she probably did not have an acute abdomen; beyond that, we were not sure what she had.  Since diagnostic tests in India don't require a doctor's referral, we each pitched for the test we thought she needed - I voted for an abdominal ultrasound, and the nephew wanted an abdominal X-ray (for probable obstruction).  They got the ultrasound, and it turned out that she had acute cholecystitis.  The next day, when I left, she was getting ready to have an operation at a hospital.

The story is sort of crazy from a U.S. perspective because the whole process would have been conducted in a hospital here.  But, in an effort to be economical, we made the diagnosis at home.  I learnt two lessons from this experience.  One is that epidemiology helps: the patient was a woman in her forties with two children, so despite her initial presentation of chest pain, the gallbladder should remain high on the list.  The second lesson is that being a doctor is so much fun!!!

Wednesday, June 9, 2010

Shellfish Allergies and Radiocontrast

Last month, I was paged by a nurse because a patient scheduled to receive radiocontrast had a seafood allergy.  Could the patient still receive radiocontrast?  My gut instinct was to say "yes."  But regarding the common perception of a link between shellfish allergy and radiocontrast allergy, are there any data to support the claim?

Paper: Schabelman, E. and Witting, M.  The Relationship of Radiocontrast, Iodine, and Seafood Allergies: A Medical Myth Exposed.  Journal of Emergency Medicine (2009).

Objective: In a survey of Midwestern medical centers, two-thirds of radiologists and 89% of cardiologists ask about shellfish allergies prior to giving contrast.  Moreover, 35% of radiologists and 50% of cardiologists either withhold contrast from patients with shellfish allergies or pre-medicate them before giving contrast.  This paper is a systematic review of data estimating the risk of contrast allergy in patients with shellfish allergy.

Results: There is actually only one study that examines the rate of allergic reaction to radiocontrast in patients with shellfish allergies - and it was done in 1975.  The study revealed that the rate of allergic reaction to radiocontrast is no higher in those with shellfish allergies than in patients who have any allergies in general (including food allergies and asthma).  In general, atopy confers an increased risk of reaction, but people with shellfish allergies are at no higher risk than other atopic patients.  Even in patients with atopy, the risk of a severe allergic reaction to modern radiocontrast is pretty low - estimated at about 0.05%.  Most allergic reactions to radiocontrast are pretty mild.

Conclusion:  Shellfish allergy does not especially increase the rate of allergy to radiocontrast.  A general history of allergy should suggest increased susceptibility to allergic-type reactions, but in most cases the reaction is pretty mild.  Also, from a molecular standpoint, there is no clear relationship between shellfish allergy and radiocontrast.  Most patients who are allergic to shellfish are actually allergic to tropomyosin, a shellfish protein involved in muscle contraction and unrelated to iodine!  Finally, even in those with "allergies" to radiocontrast, the reaction is not actually mediated by sensitized IgE antibodies.  The reaction to IV contrast is anaphylactoid, as opposed to anaphylactic (a true IgE-mediated reaction).  In anaphylactoid reactions, mast cells degranulate as a result of direct stimulation rather than IgE immune triggering.  A clinical consequence of this immunological fact is that someone with a previous mild reaction will likely continue to have mild reactions, since the phenomenon does not depend on titers of sensitized IgE antibodies or immune memory.

Monday, May 31, 2010

Mysterious Purpura & Neutropenia in a Cocaine User

Introduction:
The following clinical presentation is increasingly being seen in urban hospitals throughout the country. The main findings are:
  • Purpuric skin lesions
  • Severe neutropenia
  • High fever
  • Swollen glands
  • Painful sores on the mouth or anus
  • Lingering infections, including sore throat, mouth sores, skin infections, abscesses, thrush, or pneumonia
Its diagnosis is challenging given that this condition shares many exam and laboratory findings with vasculitides. In particular, these patients often have lupus anticoagulant and c- or p-ANCA positivity.

Case:
Ms. F is a 38-year-old woman with hepatitis C, a remote miscarriage, and active polysubstance abuse who was admitted for MRSA endocarditis. A toxicology screen performed on admission was positive for cocaine, opiates, and benzodiazepines. She developed a deep venous thrombosis in her right lower extremity that was initially treated with heparin and an appropriate bridge to warfarin. On day 27 (day 12 of warfarin), she developed multiple discrete, stellate, purpuric macules, papules, and plaques with a bright erythematous border on her pinna, earlobes, cheeks, right breast, and bilateral proximal upper and lower extremities. Concomitant unexplained episodic tachycardia and neutropenia were noted. Skin biopsy specimens revealed leukocytoclastic vasculitis with mural fibrin deposition, neutrophilic infiltrate, nuclear dust, and extravasated erythrocytes involving superficial small vessels, with pauci-inflammatory luminal thrombosis in a few vessels. Synchronous tests revealed a positive platelet factor 4 antibody (but a negative serotonin release assay), noncorrecting mixing studies, a positive lupus anticoagulant, and a prolonged Russell viper venom time. Antineutrophil cytoplasmic antibodies (ANCAs) directed against proteinase-3 (PR-3) were 39.2 EU/mL (normal, <4.0 EU/mL). A subsequent urine toxicology screen on hospital day 33 was positive for cocaine, confirming in-hospital cocaine use.

Diagnosis:
This is a case of levamisole toxicity. Levamisole is an anthelmintic drug that has increasingly been used as a cutting agent for cocaine. In July 2009, the Substance Abuse and Mental Health Services Administration (SAMHSA) found the drug in over 70% of cocaine samples analyzed. Another recent analysis in Seattle, WA found that individuals who tested positive for cocaine also tested positive for levamisole nearly 80% of the time. The Drug Enforcement Administration says it has seen a steady increase in the amount of the medication found in cocaine since 2002. Complete clinical resolution of skin lesions occurs 2 to 3 weeks after stopping levamisole, and serologies normalize within 2 to 14 months. Detection of levamisole is challenging because specific testing is necessary but not routinely available; levamisole's half-life is so short (5.6 hours) that only 2% to 5% of the parent drug is detected in urine; and the sensitivity of available testing is low. However, the clinical constellation of retiform purpura, neutropenia, lupus anticoagulant and ANCA positivity, and temporal association with cocaine use is nearly pathognomonic for levamisole toxicity. So, next time you're in the ED and you see a patient with purpura and neutropenia in the context of a positive utox for cocaine, add this to your differential!
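
For intuition on how quickly the parent drug disappears, a quick back-of-the-envelope check helps.  Assuming simple first-order elimination (my simplification, not a calculation from the report), the fraction of parent drug remaining after time t is:

\[
f(t) \;=\; \left(\tfrac{1}{2}\right)^{t/t_{1/2}},
\qquad
f(24\ \mathrm{h}) \;=\; \left(\tfrac{1}{2}\right)^{24/5.6} \;\approx\; 0.05
\]

So by one day after use only about 5% of the drug remains, and by two days under 0.3% - the window for catching the parent compound in urine is narrow.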

Reference: Trimarchi, M. et al. Cocaine-induced midline destructive lesions: clinical, radiographic, histopathologic, and serologic features and their differentiation from Wegener granulomatosis. Medicine (Baltimore) (2001) 80: 391-404.