PNAS lifts embargo early after finding three-month-old article by author describing findings

A particularly effective illustration of how absurd and craven the embargo system and Ingelfinger’s Rule are.

Embargo Watch

After what the Proceedings of the National Academy of Sciences (PNAS) is calling an “embargo break,” the journal has lifted the embargo early on a paper because the findings were described by the author in a popular science magazine in April.

Here’s the email that went out yesterday a bit before 7 p.m. Eastern, days before the scheduled 3 p.m. Eastern embargo Monday:

View original post 328 more words

“Just significant” results have been around for decades in psychology — but have gotten worse: study

Retraction Watch, which does such a splendid job covering retractions and other signs of trouble in the scientific literature, has done a fine round-up of recent studies suggesting that the psychology literature seems to set a low bar for ‘significance’ in many of its publications.

Retraction Watch

Last year, two psychology researchers set out to figure out whether the statistical results psychologists were reporting in the literature were distributed the way you’d expect. We’ll let the authors, E.J. Masicampo, of Wake Forest, and Daniel Lalande, of the Université du Québec à Chicoutimi, explain why they did that:

The psychology literature is meant to comprise scientific observations that further people’s understanding of the human mind and human behaviour. However, due to strong incentives to publish, the main focus of psychological scientists may often shift from practising rigorous and informative science to meeting standards for publication. One such standard is obtaining statistically significant results. In line with null hypothesis significance testing (NHST), for an effect to be considered statistically significant, its corresponding p value must be less than .05.

When Masicampo and Lalande looked at a year’s worth of three highly cited psychology journals — the

View original post 579 more words
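
To make concrete what a “just significant” result looks like in the aggregate, here is a quick sketch: a toy Python example of my own, with made-up numbers rather than Masicampo and Lalande’s data, that tallies how many reported p values fall just under .05 compared with the equally wide band beneath that.

```python
# Toy tally of "just significant" p values (hypothetical data, not the study's).

def just_significant_bump(p_values, threshold=0.05, width=0.005):
    """Count p values in the band just under the threshold (.045-.050)
    and in the equally wide band below it (.040-.045)."""
    just_under = sum(1 for p in p_values if threshold - width <= p < threshold)
    band_below = sum(1 for p in p_values if threshold - 2 * width <= p < threshold - width)
    return just_under, band_below

# Hypothetical p values, as if transcribed from a journal's results sections
reported = [0.049, 0.031, 0.048, 0.003, 0.047, 0.012, 0.044, 0.049, 0.021, 0.046]
just_under, band_below = just_significant_bump(reported)
print(f"p in [.045, .050): {just_under}   p in [.040, .045): {band_below}")
```

A literature in which the first count keeps dwarfing the second, beyond what the expected distribution of p values would produce, is roughly the pattern the study flags.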

Science is in bad shape « Why Evolution Is True

Much ado lately about a pair of articles in The Economist that gather the growing evidence that science is getting an awful lot wrong these days, and may be less reliable than we think (and certainly far less reliable than we wish).

Perhaps the best summary of this I’ve read is from Jerry Coyne. Here’s the nut:

One piece is called “How science goes wrong”; the other is “Trouble at the lab.” Both are free online, and both, as is the custom with The Economist, are written anonymously.


As I read these pieces, I did so captiously, really wanting to find some flaws with their conclusions. I don’t like to think that there are so many problems with my profession. But the authors have done their homework and present a pretty convincing case that science, especially given the fierce competition to get jobs and succeed in them, is not doing a bang-up job. That doesn’t mean it is completely flawed, for if that were true we’d make no advances at all, and we do know that many discoveries in recent years (dinosaurs evolving into birds, the Higgs boson, dark matter, DNA sequences, and so on) seem solid.


I see five ways that a reported scientific result may be wrong:


  • The work could be shoddy and the results therefore untrustworthy.
  • There could be duplicity, either deliberate fraud or a “tweaking” of results in one’s favor, which might even be unconscious.
  • The statistical analysis could be wrong in several ways. For example, under standard criteria you will reject a correct “null” hypothesis and accept an alternative but incorrect hypothesis 5% of the time, which means that something like 1 in 20 “positive” results—rejection of the null hypothesis—could be wrong. Alternatively, you could accept a false null hypothesis if you don’t have sufficient statistical power to discriminate between it and an alternative true hypothesis.  Further, as the Economist notes, many scientists simply aren’t using the right statistics, particularly when analyzing large datasets.
  • There could be a peculiarity in one’s material, so that your conclusions apply just to a particular animal, group of animals, species, or ecosystem.  I often think this might be the case in evolutionary biology and ecology, in which studies are conducted in particular places at particular times, and are often not replicated in different locations or years. Is a study of bird behavior in, say, California, going to give the same results as a similar study of the same species in Utah? Nature is complicated, with many factors differing among locations and times (food abundance, parasites, predators, weather, etc.), and these could lead to results that can’t be generalized across an entire species. I myself have failed to replicate at least three published results by other people in my field. (Happily, I’m not aware that anyone has failed to replicate any of my published results.)
  • There could be “craft skills”—technical proficiency gained by experience that isn’t or can’t be reported in a paper’s “materials and methods,” that make a given result irreproducible by other investigators.

If you read the Economist pieces, all of these are mentioned save #4 (peculiarity of one’s material). And the findings are disturbing.
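
Coyne’s third point, the 5% false-positive rate and the cost of low statistical power, is easy to see in a simulation. Here is a minimal sketch (mine, in Python; neither Coyne nor the Economist supplies code) that uses a normal approximation with known variance to keep the toy model simple.

```python
import math
import random

def p_value(n, effect):
    """One simulated two-group experiment; returns a two-sided p value
    from a normal approximation (unit variance assumed known)."""
    a = [random.gauss(0.0, 1.0) for _ in range(n)]
    b = [random.gauss(effect, 1.0) for _ in range(n)]
    diff = sum(b) / n - sum(a) / n
    se = math.sqrt(2.0 / n)
    z = diff / se
    return math.erfc(abs(z) / math.sqrt(2))

random.seed(0)
trials = 10_000

# Null hypothesis true: every "significant" result is a false positive.
false_positives = sum(p_value(n=20, effect=0.0) < 0.05 for _ in range(trials))

# Real but modest effect, small samples: how often is it detected?
hits = sum(p_value(n=20, effect=0.5) < 0.05 for _ in range(trials))

print(f"false positive rate: {false_positives / trials:.3f}")  # hovers near .05
print(f"power to detect the effect: {hits / trials:.2f}")      # well short of 1.0
```

The first number lands near Coyne’s 1 in 20 by construction; the second comes out well below 1, which is what insufficient statistical power costs you.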


Coyne follows with a particularly lucid commentary on these articles and their implications. Highly recommended. 

Find it at Science is in bad shape « Why Evolution Is True.


The Brain Train Runs Away With Temple Grandin

Much of autism’s mystery and fascination lies in a paradox: On one hand, autism seems to create a profound disconnect between inner and outer lives; on the other, it generates what the neurologist Oliver Sacks calls an essential and “most intricate interaction” between the disorder and one’s other traits.

In the autistic person, it seems, hums a vital and distinctive essence — but one whose nature is obscured by thick layers of behavior and perception. Or, as Temple Grandin puts it, “two panes of glass.”

For a quarter century, Dr. Grandin — the brainy, straight-speaking, cowboy-shirt-wearing animal scientist and slaughterhouse designer who at 62 is perhaps the world’s most famous autistic person — has been helping people break through the barriers separating autistic from nonautistic experience.

These, I note in my review of Grandin’s new book, “The Autistic Brain,” are her strengths:

When they burst upon the scene in her 1995 book “Thinking in Pictures,” they amazed people, as they continue to do in many of her YouTube and TED talks (not to mention the 2010 biopic “Temple Grandin,” in which she was played by Claire Danes). Alas, in “The Autistic Brain,” her fourth book, she largely abandons these strengths, setting out instead to examine autism via its roots in the brain. It does not lead to rich ground.

Read the entire review: ‘The Autistic Brain’ Review — Temple Grandin Traces Roots of a Disorder – NYTimes.com.

Merging the streams

I spent part of this day using WordPress to pull into this particular blog, my old Smooth Pebbles, many of my different online threads of the past 5 years: earlier WordPress blogs, plus a Tumblr into which I’d earlier pulled a Posterous account, an Instagram account, and a Twitter account. Even without my main blog, Neuron Culture, which has bounced around from home to home (or perhaps because it doesn’t include it), the result makes a surprisingly intricate tour through my various obsessions, and through the paths left not just by me but by some of my compatriots.

For @microecos. How about *this* guy! (at North Branch Nature Center)

Water as needed. (at Le jardin)

A place we made. I helped, anyway. (at Le jardin)

Tulip snuck indoors. (at da house)