Antipsychotic drugs could be shrinking brains. A large study seems to offer a fair degree of confirmation. I think, among other things, we should look more carefully at off-label use of these drugs. Smaller brains may not be bad, but ...
Meanwhile, about 50 percent of people prescribed antidepressants are off-label users. It's stuff like this that leads to "Big Pharma" cries.
You not only have a "second brain" in your gut, but your intestinal microbes may influence both it and the actual brain through effects on neurotransmitters. Woo-ers running wild with this aside, how could this affect antibiotic prescriptions? And what is antibiotic resistance going to do to this?
This is a slice of my philosophical, lay scientific, musical, religious skepticism, and poetic musings. (All poems are my own.) The science and philosophy side meet in my study of cognitive philosophy; Dan Dennett was the first serious influence on me, but I've moved beyond him. The poems are somewhat related, as many are on philosophical or psychological themes. That includes existentialism and questions of selfhood, death, and more. Nature and other poems will also show up here on occasion.
Showing posts with label antidepressants. Show all posts
Wednesday, February 09, 2011
Tuesday, January 12, 2010
SSRIs no better than placebo? Not quite
The truth is, no new study claimed that. Rather, last week's story was based on a meta-analysis. Regular readers know my feelings about meta-analysis. Worse yet, this one included only 23 original studies, which in turn focused on just two antidepressants.
Hardly scientific.
Thursday, December 10, 2009
Anti-depressants beat CBT on personality help
I'm not a fan or touter of Big Pharma, nor do I denigrate talk therapy.
But, it seems that SSRI antidepressants are better than cognitive therapy in lowering neuroticism and raising extraversion in depressed people. CBT helps make changes there, too, but the changes are neither as profound nor as lasting as with medication.
Wednesday, February 27, 2008
The PLoS antidepressants study, the ‘looseness’ of medical research statistics and ‘faith’ in meta-analysis
P-value thresholds that are far too loose about false positives in medicine (and the social sciences), compared to the natural sciences, are one reason not to read too much into any individual study that claims antidepressants are ineffective, such as the Public Library of Science meta-analysis of individual studies.
P-values of the same looseness as in medicine and the social sciences have been used to claim intercessory prayer actually works on sick people (halfway down the linked page), for example, or here (two-thirds down the linked page):
Targ's paper is not the only questionable study on the efficacy of prayer that has been published by medical journals. The editors and referees of these journals have done a great disservice to both science and society by allowing such highly flawed papers to be published. I have previously commented about the low statistical significance threshold of these journals (p-value of 0.05) and how it is inappropriate for extraordinary claims (Skeptical Briefs, March 2001). This policy has given a false scientific credibility to the assertion that prayer or other spiritual techniques work miracles, and several best selling books have appeared that exploit that theme. Telling people what they want to hear, these authors have made millions.
Also, per a blogger, I came across a good statement on how many people misunderstand p-values in general:
First, the p value is often misinterpreted to mean the “probability for the result being due to chance”. In reality, the p-value makes no statement that a reported observation is real. “It only makes a statement about the expected frequency that the effect would result from chance when the effect is not real”.
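The quoted point is easy to demonstrate with a quick simulation, a minimal sketch of my own (not from any study discussed here): when there is no real effect at all, a test statistic still crosses the p < 0.05 line about 5 percent of the time. That 5 percent is the expected frequency of "significant" results from pure chance, not a probability that any particular finding is real.

```python
import random

random.seed(0)

TRIALS = 20_000
alpha_hits = 0
for _ in range(TRIALS):
    # Under the null hypothesis (drug does nothing), the test
    # statistic z is just standard-normal noise.
    z = random.gauss(0.0, 1.0)
    if abs(z) > 1.96:  # two-sided p < 0.05
        alpha_hits += 1

rate = alpha_hits / TRIALS
print(f"false-positive rate at p < 0.05: {rate:.3f}")  # comes out near 0.05
```

Run enough null studies and roughly one in twenty will clear the 0.05 bar by luck alone, which is exactly why a single "significant" medical study proves so little.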
In short, as I’ve tried to explain to people over at Kevin Drum’s blog, p values in medicine are simply too loose.
But, as the study’s authors claim, doesn’t meta-analysis take care of all those p-value problems? No.
Meta-analysis, no matter how much it’s defended, can’t totally cover that up.
I’m not saying that the results of a meta-analysis are no stronger than the weakest study under its umbrella. I am saying that, with p values as loose as they are in health and medicine (and the social sciences), no massive number of individual studies gathered into one meta-analysis will make its results anything more than a little stronger than the best individual study.
In other words, in medicine, and in social sciences, meta-analysis adds a very modest bump, nothing more. The problem is, most people believe it does much more than that when it doesn’t.
Or, to put it another way, meta-analysis is no better than the material it’s analyzing.
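Here is a toy illustration of that "garbage in, garbage out" point, with made-up numbers of my own choosing: if every study in the pool shares the same systematic bias (say, from publication bias or sloppy blinding), pooling hundreds of them just makes the meta-analysis more and more confident in the bias, not in the truth.

```python
import random
import statistics

random.seed(1)

TRUE_EFFECT = 0.0   # hypothetical: the drug actually does nothing
SHARED_BIAS = 0.2   # hypothetical: every study overstates the effect the same way

def biased_study(n=50):
    """One small study: the mean of n noisy measurements, all shifted by the shared bias."""
    return statistics.mean(
        random.gauss(TRUE_EFFECT + SHARED_BIAS, 1.0) for _ in range(n)
    )

# Pool 200 such studies. The combined estimate converges on the bias,
# not on the true effect: more studies, same wrong answer.
pooled = statistics.mean(biased_study() for _ in range(200))
print(f"pooled estimate: {pooled:.2f} (true effect is {TRUE_EFFECT})")
```

The pooled estimate lands near 0.2 even though the true effect is zero; averaging over more studies shrinks the noise but leaves the shared bias untouched.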
So, what’s needed is for medical studies to continue with a p of 0.05, because we don’t want to risk screening out potentially life-saving research, but to re-crunch those studies at the same time. I’m not saying we need to do that with a p of 0.0001, or 1/100 of 1 percent, as the natural sciences, especially physics, normally do. But to re-crunch with a p of 0.01, that is, 1 percent instead of 5 percent? Absolutely.
Research that made the 5 percent cutoff but not the 1 percent cutoff would be categorized as “worthy of further study but without any immediate conclusions from it being acceptable.”
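The two-tier screen I'm proposing could be sketched like this, with hypothetical study names and p-values purely for illustration:

```python
# Hypothetical p-values from a batch of studies (made-up numbers)
p_values = {"study_A": 0.003, "study_B": 0.02, "study_C": 0.049, "study_D": 0.30}

def triage(p, strict=0.01, loose=0.05):
    """Sort a study's p-value into the two-tier scheme described above."""
    if p < strict:
        return "acceptable now"
    if p < loose:
        return "worthy of further study, no immediate conclusions"
    return "not significant"

for name, p in p_values.items():
    print(f"{name}: p={p} -> {triage(p)}")
```

Under this scheme only study_A clears the 1 percent bar outright; studies B and C stay in the "further study" bin rather than being trumpeted as findings.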
A sidebar benefit would be that a lot of alt-medicine research would get a less than full imprimatur.