
Heart drug shows potential in HIV treatment

A well-known heart drug reveals new ways to control HIV infection by limiting viral replication in infected cells.

Digoxin is a drug extracted from the foxglove plant (Digitalis lanata) that has been widely prescribed for various heart conditions, most notably to strengthen the heartbeat in congestive heart failure. However, the therapeutic effects of digoxin appear to go beyond the treatment of cardiovascular disease. Recent findings by Alan Cochrane and colleagues at the University of Toronto show that digoxin dramatically inhibits the synthesis of HIV-1 proteins in infected cells.

The team screened a panel of drugs for antiviral activity and found that digoxin significantly inhibits HIV replication in white blood cells. Using fluorescence microscopy, the researchers showed a decrease in the accumulation of HIV viral ribonucleic acid (RNA) within infected cells treated with the drug. Digoxin directly interferes with RNA processing, reducing viral RNA levels and impeding the production of new viral proteins. The study shows that digoxin specifically targets the RNA processing controlled by the viral Rev gene, which is pivotal in the production of the structural proteins that form new viral particles.

A new drug target in HIV treatment

Currently, HIV infection is treated with antiretroviral therapies (ARTs), which have proven successful in slowing disease progression. However, the ability of HIV to adapt to ARTs has given rise to drug-resistant virus strains, which are now found in ≥16% of newly infected people. The rising adaptation of HIV to current treatments calls for new treatment strategies. Digoxin targets viral RNA processing, one of the early stages of HIV replication. Since this stage of the viral life cycle is not targeted by current ARTs, the digoxin family of drugs represents a novel class of HIV inhibitors. Nonetheless, these promising results await confirmation in experimental models beyond cell culture assays.

Since digoxin interrupts the HIV life cycle at a stage not targeted by currently available therapies, and since this FDA-approved drug is already in clinical use, there is great potential for developing members of the digoxin family into new ARTs for HIV infection.

Article Reference: PLoS Pathogens (doi: 10.1371/journal.ppat.1003241)

Giving scientific relevance a number

As much as religious texts are considered direct evidence of the existence of God, scientific publications are the testament of a scientific career. However, in the eyes of the scientific community not all papers are created equal, and publishing an article can elicit very different reactions depending on the journal in which it appears. These range from a sympathetic nod with feigned knowledge of the journal’s name to genuine excitement and newfound respect for the author. But in the vast sea of scientific publications, how do people separate the wheat from the chaff? Simply by using one of the various journal metrics that tell us “how relevant” a published article is.

Impact Factor

The king of kings of journal metrics is the Impact Factor (IF). The industry standard, it was developed in the 1950s by Eugene Garfield in an effort to quantify the relative importance of published articles. It is based on two elements:

A: The number of citations in the current year to any items published in the journal in the previous 2 years.

B: The number of “citable items” published in the same 2 years.

IF = A / B
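
To make the arithmetic concrete, here is a minimal sketch in Python; the journal and its citation counts are hypothetical and chosen to match the worked example below.

```python
# Minimal sketch of the Impact Factor calculation (hypothetical numbers).

def impact_factor(citations_to_prev_two_years: int, citable_items_prev_two_years: int) -> float:
    """IF = A / B: citations received this year to items published in the
    previous two years, divided by the number of citable items published
    in those same two years."""
    return citations_to_prev_two_years / citable_items_prev_two_years

# A hypothetical journal whose 300 citable items from 2008-2009
# received 1,500 citations in 2010 would have a 2010 IF of 5.0.
print(impact_factor(1500, 300))  # 5.0
```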

For example, if a journal has an impact factor of 5 in 2010, then its papers published in 2008 and 2009 received 5 citations each on average in 2010. However, the industry standard is not free from shortcomings; three of the main issues with this metric are:

  • High variability across disciplines. Just as music, art and fashion show tremendous differences in popularity between their various styles, so does science. One of the most prestigious journals in mathematics, Annals of Mathematics, has an IF of 2.928 (2011). In stark contrast, the highly regarded biology journal Cell has an IF of 32.403 (2011).
  • Definition of citable items. Journal editorial boards tend to be surgically careful in their definition of citable items. In most cases, articles in sections titled “letters to the editor” or “brief reports” are not counted as citable items and therefore do not add to the denominator of the impact factor. However, citations of such items still contribute to the numerator, thereby inflating the impact factor (see the sketch after this list).
  • Differential effect of individual articles on IF. In February 2001 the human genome sequence was published in both Science and Nature. These articles have accumulated approximately 10,000 citations to date, making their contribution to the IF of their respective journals monumental compared to regular articles.
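
To illustrate the citable-item problem with numbers (all of them hypothetical), here is a short sketch of how excluding letters and brief reports from the denominator, while still counting citations to them in the numerator, pushes the IF upward.

```python
# Hypothetical counts for a single journal over a 2-year window.
research_articles = 200        # counted as "citable items" (denominator)
letters_and_briefs = 50        # excluded from the denominator
citations_to_articles = 600    # every citation still counts in the numerator
citations_to_letters = 100

# IF if every published item counted in the denominator.
if_all_items = (citations_to_articles + citations_to_letters) / (research_articles + letters_and_briefs)
# IF as reported, with letters and brief reports excluded from the denominator.
if_reported = (citations_to_articles + citations_to_letters) / research_articles

print(f"IF counting all items:    {if_all_items:.2f}")  # 2.80
print(f"IF with letters excluded: {if_reported:.2f}")   # 3.50
```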

For some time now, these weaknesses have been a source of concern in the scientific community. After all, why should the most advanced mathematical research be considered less relevant than cutting-edge biological studies? And from a more fundamental perspective, why should a publication record be vulnerable to changes in journal reputation?

Fortunately, in recent years different metrics systems have been proposed. Most notable are approaches like the Source-Normalized Impact per Paper (SNIP), which counts only citations from peer-reviewed articles to other peer-reviewed articles, preventing abuse of the definition of a citable item. SNIP also takes into account the citation potential of each scientific field, making comparisons between journals in different subjects possible. Another novel approach is article-level metrics from the Public Library of Science (PLoS). Along with its open-access publishing model, PLoS provides article-level metrics with detailed information on article views, downloads and citations. Nevertheless, these alternative metrics depend on public usage and feedback for their widespread acceptance. Only then can there be a change in how we perceive scientific relevance, avoiding a standard in which all journals are created equal, but some journals are more equal than others.
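
As a closing aside, here is a rough sketch of the source-normalization idea behind SNIP. It is a simplification of the concept described above, not the exact formula the metric uses, and the citation figures are invented for illustration.

```python
# Simplified sketch of source normalization (hypothetical numbers; not the
# exact SNIP formula).

def normalized_impact(citations_per_paper: float, field_citation_potential: float) -> float:
    """Divide a journal's raw citations per paper by the average citation
    potential of its field, so journals in low-citation fields (e.g.
    mathematics) are not penalized relative to high-citation fields."""
    return citations_per_paper / field_citation_potential

# A hypothetical maths journal and a hypothetical biology journal with very
# different raw citation rates look comparable once field behaviour is factored in.
print(normalized_impact(3.0, 1.5))    # 2.0
print(normalized_impact(30.0, 15.0))  # 2.0
```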