Most of the papers published in scientific journals are false.
http://www.bbc.co.uk/programmes/w3csvsy8
I listened to it last night, a thought provoking assessment from The Inquiry.
Delete the broken information.
Fine people for broken information.
Fixed.
Tau.Neutrino said:
Delete the broken information.
Fine people for broken information.
Fixed.
Perhaps science is becoming more social-media focussed, more ‘look at me’ than ‘look at what we have done/discovered’, so it’s sexed up or outright falsified to get attention.
Cymek said:
Tau.Neutrino said:
Delete the broken information.
Fine people for broken information.
Fixed.
Perhaps science is becoming more social-media focussed, more ‘look at me’ than ‘look at what we have done/discovered’, so it’s sexed up or outright falsified to get attention.
A bit like that.
It’s the time-wasters that are most heinous.
Tau.Neutrino said:
Cymek said:
Tau.Neutrino said:
Delete the broken information.
Fine people for broken information.
Fixed.
Perhaps science is becoming more social-media focussed, more ‘look at me’ than ‘look at what we have done/discovered’, so it’s sexed up or outright falsified to get attention.
A bit like that.
It’s the time-wasters that are most heinous.
It’s a worry, as science that’s caught in a lie gives ammunition to anti-science believers to justify their belief that events like global warming are made up.
Tau.Neutrino said:
Cymek said:
Tau.Neutrino said:
Delete the broken information.
Fine people for broken information.
Fixed.
Perhaps science is becoming more social-media focussed, more ‘look at me’ than ‘look at what we have done/discovered’, so it’s sexed up or outright falsified to get attention.
A bit like that.
It’s the time-wasters that are most heinous.
And it’s the pressure to come up with something as part of a uni course.
The uncreative ones will just make up stuff, plagiarize, etc.
Some have crazy ideas that they convince themselves will work, but it just won’t.
Cymek said:
Tau.Neutrino said:
Cymek said:
Perhaps science is becoming more social-media focussed, more ‘look at me’ than ‘look at what we have done/discovered’, so it’s sexed up or outright falsified to get attention.
A bit like that.
It’s the time-wasters that are most heinous.
It’s a worry, as science that’s caught in a lie gives ammunition to anti-science believers to justify their belief that events like global warming are made up.
Which is one reason why the science community needs to be vigilant against misinformation, in whatever form it’s in.
Tau.Neutrino said:
Tau.Neutrino said:
Cymek said:
Perhaps science is becoming more social-media focussed, more ‘look at me’ than ‘look at what we have done/discovered’, so it’s sexed up or outright falsified to get attention.
A bit like that.
It’s the time-wasters that are most heinous.
And it’s the pressure to come up with something as part of a uni course.
The uncreative ones will just make up stuff, plagiarize, etc.
Some have crazy ideas that they convince themselves will work, but it just won’t.
It can be quite difficult to be totally factual when, after long tedious experimentation, the result is almost but not quite what it should be.
Tamb said:
Tau.Neutrino said:
Tau.Neutrino said:
A bit like that.
It’s the time-wasters that are most heinous.
And it’s the pressure to come up with something as part of a uni course.
The uncreative ones will just make up stuff, plagiarize, etc.
Some have crazy ideas that they convince themselves will work, but it just won’t.
It can be quite difficult to be totally factual when, after long tedious experimentation, the result is almost but not quite what it should be.
Yes, that’s a time and validation issue, which can be difficult.
Specifically on medical research, but you might find this interesting.
http://journals.plos.org/plosmedicine/article/file?id=10.1371/journal.pmed.0020124&type=printable
Why Most Published Research Findings Are False
There is increasing concern that most current published research findings are false. The probability that a research claim is true may depend on study power and bias, the number of other studies on the same question, and, importantly, the ratio of true to no relationships among the relationships probed in each scientific field. In this framework, a research finding is less likely to be true when the studies conducted in a field are smaller; when effect sizes are smaller; when there is a greater number and lesser preselection of tested relationships; where there is greater flexibility in designs, definitions, outcomes, and analytical modes; when there is greater financial and other interest and prejudice; and when more teams are involved in a scientific field in chase of statistical significance.
Simulations show that for most study designs and settings, it is more likely for a research claim to be false than true. Moreover, for many current scientific fields, claimed research findings may often be simply accurate measures of the prevailing bias. In this essay, I discuss the implications of these problems for the conduct and interpretation of research….
more at link
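The nub of Ioannidis’s argument is a formula for the positive predictive value (PPV) of a claimed finding. Here is a minimal sketch of it in Python (my paraphrase of the paper’s simplest case, ignoring his bias term; R is the pre-study odds that a probed relationship is true):

# PPV of a "statistically significant" finding, after Ioannidis (2005),
# simplest case with no bias term. R = pre-study odds that the probed
# relationship is true; alpha = Type I error rate; power = 1 - beta.
def ppv(R, alpha=0.05, power=0.5):
    return power * R / (power * R + alpha)

print(ppv(0.1))             # ~0.50: exploratory field, modest power -- a coin flip
print(ppv(1.0, power=0.8))  # ~0.94: plausible hypothesis, well-powered study

When R is small (fields probing many unlikely relationships) and power is low, most ‘significant’ findings are false, which is the paper’s headline claim.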
Tau.Neutrino said:
Tamb said:
Tau.Neutrino said:
And it’s the pressure to come up with something as part of a uni course.
The uncreative ones will just make up stuff, plagiarize, etc.
Some have crazy ideas that they convince themselves will work, but it just won’t.
It can be quite difficult to be totally factual when, after long tedious experimentation, the result is almost but not quite what it should be.
Yes, that’s a time and validation issue, which can be difficult.
I found the best thing to do was consult with my colleagues, ostensibly to see if I had made an error, but also to make sure I couldn’t fudge the figures, because now there were others in the loop.
Isn’t that where peer review comes into it?
esselte said:
Specifically on medical research, but you might find this interesting.
http://journals.plos.org/plosmedicine/article/file?id=10.1371/journal.pmed.0020124&type=printable
Why Most Published Research Findings Are False
There is increasing concern that most current published research findings are false. The probability that a research claim is true may depend on study power and bias, the number of other studies on the same question, and, importantly, the ratio of true to no relationships among the relationships probed in each scientific field. In this framework, a research finding is less likely to be true when the studies conducted in a field are smaller; when effect sizes are smaller; when there is a greater number and lesser preselection of tested relationships; where there is greater flexibility in designs, definitions, outcomes, and analytical modes; when there is greater financial and other interest and prejudice; and when more teams are involved in a scientific field in chase of statistical significance.
Simulations show that for most study designs and settings, it is more likely for a research claim to be false than true. Moreover, for many current scientific fields, claimed research findings may often be simply accurate measures of the prevailing bias. In this essay, I discuss the implications of these problems for the conduct and interpretation of research….
more at link
Proper peer review is an issue.
Scam operators are another.
Maybe AI could be used in some cases of peer review.
Maybe try student-to-student peer review?
Maybe set up a government organization that looks for all that bad stuff and takes action against the baddies.
buffy said:
Isn’t that where peer review comes into it?
Yes, we need more of it, it seems.
Tau.Neutrino said:
buffy said:
Isn’t that where peer review comes into it?
Yes, we need more of it, it seems.
How about community-based peer review?
A lot of papers are behind paywalls.
Finding misinformation is difficult when a paywall stops peer review.
Well I’d rather someone reviewed my paper who actually did know the subject, really.
Tau.Neutrino said:
Tau.Neutrino said:
buffy said:
Isn’t that where peer review comes into it?
Yes, we need more of it, it seems.
How about community-based peer review?
You mean like the peer review FB users are known for?
buffy said:
Well I’d rather someone reviewed my paper who actually did know the subject, really.
Yes.
buffy said:
Well I’d rather someone reviewed my paper who actually did know the subject, really.
+1
IMO that is the true meaning of peer review.
Peak Warming Man said:
Most of the papers published in scientific journals are false.
http://www.bbc.co.uk/programmes/w3csvsy8
I listened to it last night, a thought provoking assessment from The Inquiry.
False? Does that also mean papers that say scientific papers are false are themselves false?
buffy said:
Well I’d rather someone reviewed my paper who actually did know the subject, really.
Yes, use retired people in the field concerned who are willing to do that kind of stuff.
Tau.Neutrino said:
buffy said:
Well I’d rather someone reviewed my paper who actually did know the subject, really.
Yes, use retired people in the field concerned who are willing to do that kind of stuff.
But keep it open for community comments;
there are a lot of creative thinks out there who can see other angles on things, etc.
Tau.Neutrino said:
buffy said:
Well I’d rather someone reviewed my paper who actually did know the subject, really.
Yes, use retired people in the field concerned who are willing to do that kind of stuff.
That could be problematic: whilst they might be good at checking methodology, they might not be up on the latest research in that field.
Tau.Neutrino said:
buffy said:
Well I’d rather someone reviewed my paper who actually did know the subject, really.
Yes, use retired people in the field concerned who are willing to do that kind of stuff.
No, why use retired people? Just use people who know the field. This is the way the journals in my field work. It’s a whole panel of reviewers, so they can give the papers to whoever knows that particular aspect best.
Tau.Neutrino said:
Tau.Neutrino said:
buffy said:
Well I’d rather someone reviewed my paper who actually did know the subject, really.
Yes, use retired people in the field concerned who are willing to do that kind of stuff.
But keep it open for community comments;
there are a lot of creative thinks out there who can see other angles on things, etc.
thinks = thinkers
This is getting silly.
I’m out.
ChrispenEvan said:
Tau.Neutrino said:
buffy said:
Well I’d rather someone reviewed my paper who actually did know the subject, really.
Yes, use retired people in the field concerned who are willing to do that kind of stuff.
That could be problematic: whilst they might be good at checking methodology, they might not be up on the latest research in that field.
Maybe put the papers up for community comments first,
then proper peer review.
Tau.Neutrino said:
Tau.Neutrino said:
buffy said:
Well I’d rather someone reviewed my paper who actually did know the subject, really.
Yes, use retired people in the field concerned who are willing to do that kind of stuff.
But keep it open for community comments;
there are a lot of creative thinks out there who can see other angles on things, etc.
That’s another thing entirely.
Peer review is to test the validity of the paper. Other applications, etc., are new research and the subject of a new paper.
Woodie said:
Peak Warming Man said:
Most of the papers published in scientific journals are false.
http://www.bbc.co.uk/programmes/w3csvsy8
I listened to it last night, a thought provoking assessment from The Inquiry.
False? Does that also mean papers that say scientific papers are false are themselves false?
That question also occurred to me.
I suspect that if they applied their own definitions of falsity, the answer would be yes.
Tau.Neutrino said:
Tau.Neutrino said:
buffy said:
Well I’d rather someone reviewed my paper who actually did know the subject, really.
Yes, use retired people in the field concerned who are willing to do that kind of stuff.
But keep it open for community comments;
there are a lot of creative thinks out there who can see other angles on things, etc.
Scientific papers are reviewed by the community. That is, the community with knowledge of the subject.
Tau.Neutrino said:
Maybe put the papers up for community comments first
How would that work? How is the community able to judge whether a paper is accurate? Most of the public have no idea about science and the process.
buffy said:
This is getting silly.
I’m out.
No, it’s not silly.
Peer review is important.
Maybe universities need to put more effort into it.
I was suggesting a more open approach before actual peer review (to generate discussion and ideas).
Yes, it looks like some universities/companies are not peer reviewing properly.
http://andrewgelman.com/2017/06/29/lets-stop-talking-published-research-findings-true-false/
Let’s stop talking about published research findings being true or false
Posted by Andrew on 29 June 2017, 9:47 am
I bear some of the blame for this.
When I heard about John Ioannidis’s paper, “Why Most Published Research Findings Are False,” I thought it was cool. Ioannidis was on the same side as me, and Uri Simonsohn, and Greg Francis, and Paul Meehl, in the replication debate: he felt that there was a lot of bad work out there, supported by meaningless p-values, and his paper was a demonstration of how this could come to pass, how it was that the seemingly-strong evidence of “p less than .05” wasn’t so strong at all.
I didn’t (and don’t) quite buy Ioannidis’s mathematical framing of the problem, in which published findings map to hypotheses that are “true” or “false.” I don’t buy it for two reasons: First, statistical claims are only loosely linked to scientific hypotheses. What, for example, is the hypothesis of Satoshi Kanazawa? Is it that sex ratios of babies are not identical among all groups? Or that we should believe in “evolutionary psychology”? Or that strong powerful men are more likely to have boys, in all circumstances? Some circumstances? Etc. Similarly with that ovulation-and-clothing paper: is the hypothesis that women are more likely to wear red clothing during their most fertile days? Or during days 6-14 (which are not the most fertile days of the cycle)? Or only on warm days? Etc. The second problem is that the null hypotheses being tested and rejected are typically point nulls—the model of zero difference, which is just about always false. So the alternative hypothesis is just about always true. But the alternative to the null is not what is being specified in the paper. And, as Bargh etc. have demonstrated, the hypothesis can keep shifting. So we go round and round.
Here’s my point. Whether you think the experiments and observational studies of Kanazawa, Bargh, etc., are worth doing, or whether you think they’re a waste of time: either way, I don’t think they’re making claims that can be said to be either “true” or “false.” And I feel the same way about medical studies of the “hormone therapy causes cancer” variety. It could be possible to coerce these claims into specific predictions about measurable quantities, but that’s not what these papers are doing.
I agree that there are true and false statements. For example, “the Stroop effect is real and it’s spectacular” is true. But when you move away from these super-clear examples, it’s tougher. Does power pose have real effects? Sure, everything you do will have some effect. But that’s not quite what Ioannidis was talking about, I guess.
Anyway, I’m still glad that Ioannidis wrote that paper, and I agree with his main point, even if I feel it was awkwardly expressed by being crammed into the true-positive, false-positive framework.
But it’s been 12 years now, and it’s time to move on. Back in 2013, I was not so pleased with Jager and Leek’s paper, “Empirical estimates suggest most published medical research is true.” Studying the statistical properties of published scientific claims, that’s great. Doing it in the true-or-false framework, not so much.
I can understand Jager and Leek’s frustration: Ioannidis used this framework to write a much celebrated paper; Jager and Leek do something similar—but with real data!—and get all this skepticism. But I do think we have to move on.
And I feel the same way about this new paper, “Too True to be Bad: When Sets of Studies With Significant and Nonsignificant Findings Are Probably True,” by Daniel Lakens and Alexander Etz, sent to me by Kevin Lewis. I suppose such analyses are helpful for people to build their understanding, but I think the whole true/false thing with social science hypotheses is just pointless. These people are working within an old-fashioned paradigm, and I wish they’d take the lead from my 2014 paper with Carlin on Type M and S errors. I suspect that I would agree with the recommendations of this paper (as, indeed, I agree with Ioannidis), but at this point I’ve just lost the patience for decoding this sort of argument and reframing it in terms of continuous and varying effects. That said, I expect this paper by Lakens and Etz, like the earlier papers by Ioannidis and Jager/Leek, could be useful, as I recognize that many people are still comfortable working within the outmoded framework of true and false hypotheses.
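To make Gelman’s Type M and Type S point concrete, here is a minimal simulation sketch (my own illustration; the effect size and noise level are invented, not taken from the Gelman and Carlin paper). A Type S error is a statistically significant estimate with the wrong sign; a Type M error is one that exaggerates the magnitude.

import random

# Illustrative only: a small true effect measured by a noisy, underpowered study.
random.seed(1)
true_effect, se = 0.2, 0.5
significant = []
for _ in range(100_000):
    estimate = random.gauss(true_effect, se)
    if abs(estimate) > 1.96 * se:   # the conventional p < .05 filter
        significant.append(estimate)

type_s = sum(e < 0 for e in significant) / len(significant)
type_m = sum(abs(e) for e in significant) / len(significant) / true_effect
print(f"Type S rate: {type_s:.2f}, Type M exaggeration: {type_m:.1f}x")
# Roughly 13% of the "significant" estimates have the wrong sign, and on
# average they overstate the true effect about sixfold.

Conditioning on significance is exactly what publication does, which is why the surviving estimates look so much more impressive than the underlying effect.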
http://marginalrevolution.com/marginalrevolution/2005/09/why_most_publis.html
Ioannidis presents a Bayesian analysis of the problem which most people will find utterly confusing.
Suppose there are 1000 possible hypotheses to be tested. There are an infinite number of false hypotheses about the world and only a finite number of true hypotheses so we should expect that most hypotheses are false. Let us assume that of every 1000 hypotheses 200 are true and 800 false.
It is inevitable in a statistical study that some false hypotheses are accepted as true. In fact, standard statistical practice guarantees that at least 5% of false hypotheses are accepted as true. Thus, out of the 800 false hypotheses 40 will be accepted as “true,” i.e. statistically significant.
It is also inevitable in a statistical study that we will fail to accept some true hypotheses (Yes, I do know that a proper statistician would say “fail to reject the null when the null is in fact false,” but that is ugly). It’s hard to say what the probability is of not finding evidence for a true hypothesis because it depends on a variety of factors such as the sample size but let’s say that of every 200 true hypotheses we will correctly identify 120 or 60%. Putting this together we find that of every 160 (120+40) hypotheses for which there is statistically significant evidence only 120 will in fact be true or a rate of 75% true.
Ioannidis says most published research findings are false. This is plausible in his field of medicine where it is easy to imagine that there are more than 800 false hypotheses out of 1000. In medicine, there is hardly any theory to exclude a hypothesis from being tested. Want to avoid colon cancer? Let’s see if an apple a day keeps the doctor away. No? What about a serving of bananas? Let’s try vitamin C and don’t forget red wine. Studies in medicine also have notoriously small sample sizes. Lots of studies that make the NYTimes involve less than 50 people – that reduces the probability that you will accept a true hypothesis and raises the probability that the typical study is false.
So economics does ok on the main factors in the diagram but there are other effects which also reduce the probability the typical result is true and economics has no advantages on these – see the extension.
Sadly, things get really bad when lots of researchers are chasing the same set of hypotheses. Indeed, the larger the number of researchers the more likely the average result is to be false! The easiest way to see this is to note that when we have lots of researchers every true hypothesis will be found to be true but eventually so will every false hypothesis. Thus, as the number of researchers increases, the probability that a given result is true goes to the probability in the population, in my example 200/1000 or 20 percent.
A meta-analysis will go some way to fixing the last problem, so the point is not that knowledge declines with the number of researchers but rather that with lots of researchers every crackpot theory will have at least one scientific study that it can cite in its support.
The meta-analysis approach, however, will work well only if the results that are published reflect the results that are discovered. But editors and referees (and authors too) like results which reject the null – i.e. they want to see a theory that is supported, not a paper that says we tried this and this and found nothing (which seems like an admission of failure).
Brad DeLong and Kevin Lang wrote a classic paper suggesting that one of the few times that journals will accept a paper that fails to reject the null is when the evidence against the null is strong (and thus failing to reject the null is considered surprising and important). DeLong and Lang show that this can result in a paradox. Taken on its own, a paper which fails to reject the null provides evidence in favor of the null, i.e. against the alternative hypothesis and so should increase the probability that a rational person thinks the null is true. But when a rational person takes into account the selection effect, the fact that the only time papers which fail to reject the null are published is when the evidence against the null is strong, the publication of a paper failing to reject the null can cause him to increase his belief in the alternative theory!
What can be done about these problems? (Some cribbed straight from Ioannidis and some my own suggestions.)
1) In evaluating any study try to take into account the amount of background noise. That is, remember that the more hypotheses which are tested and the less selection which goes into choosing hypotheses the more likely it is that you are looking at noise.
2) Bigger samples are better. (But note that even big samples won’t help to solve the problems of observational studies which is a whole other problem).
3) Small effects are to be distrusted.
4) Multiple sources and types of evidence are desirable.
5) Evaluate literatures not individual papers.
6) Trust empirical papers which test other people’s theories more than empirical papers which test the author’s theory.
7) As an editor or referee, don’t reject papers that fail to reject the null.
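Tabarrok’s arithmetic above is easy to reproduce. A minimal sketch in Python under his stated assumptions (1000 hypotheses of which 200 are true, alpha = 0.05, power = 0.60), including his many-teams point:

# Tabarrok's worked example: 1000 hypotheses, 200 true.
true_h, false_h = 200, 800
alpha, power = 0.05, 0.60

tp = power * true_h     # 120 true hypotheses correctly flagged as significant
fp = alpha * false_h    # 40 false hypotheses wrongly flagged as significant
print(tp / (tp + fp))   # 0.75 -- of the significant findings, 75% are true

# With n independent teams testing every hypothesis, the chance of at least
# one significant result approaches 1 whether the hypothesis is true or false,
# so the share of supported claims that are true falls toward the base rate.
for n in (1, 5, 20, 100):
    p_true = 1 - (1 - power) ** n    # true hypothesis gets >= 1 significant hit
    p_false = 1 - (1 - alpha) ** n   # false hypothesis gets >= 1 significant hit
    share = true_h * p_true / (true_h * p_true + false_h * p_false)
    print(n, round(share, 2))        # 0.75, 0.52, 0.28, 0.2

As n grows, the last number converges to 200/1000 = 0.20, which is his point about every crackpot theory eventually finding a supporting study.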
The programme presents four witnesses who discuss various aspects of the debate.
One of the aspects is that very little research is now done for research’s sake, i.e. to increase our pool of knowledge.
This type of research increases our pool of knowledge, but it may lead nowhere, or it could lead to something great and unexpected.
Most research is now done with the end game being profit or a sensational headline that will lead to more funding.
dv said:
http://andrewgelman.com/2017/06/29/lets-stop-talking-published-research-findings-true-false/
Let’s stop talking about published research findings being true or false
Posted by Andrew on 29 June 2017, 9:47 am
I bear some of the blame for this.
When I heard about John Ioannidis’s paper, “Why Most Published Research Findings Are False,” I thought it was cool. Ioannidis was on the same side as me, and Uri Simonsohn, and Greg Francis, and Paul Meehl, in the replication debate: he felt that there was a lot of bad work out there, supported by meaningless p-values, and his paper was a demonstration of how this could come to pass, how it was that the seemingly-strong evidence of “p less than .05” wasn’t so strong at all.
I didn’t (and don’t) quite buy Ioannidis’s mathematical framing of the problem, in which published findings map to hypotheses that are “true” or “false.” I don’t buy it for two reasons: First, statistical claims are only loosely linked to scientific hypotheses. What, for example, is the hypothesis of Satoshi Kanazawa? Is it that sex ratios of babies are not identical among all groups? Or that we should believe in “evolutionary psychology”? Or that strong powerful men are more likely to have boys, in all circumstances? Some circumstances? Etc. Similarly with that ovulation-and-clothing paper: is the hypothesis that women are more likely to wear red clothing during their most fertile days? Or during days 6-14 (which are not the most fertile days of the cycle)? Or only on warm days? Etc. The second problem is that the null hypotheses being tested and rejected are typically point nulls—the model of zero difference, which is just about always false. So the alternative hypothesis is just about always true. But the alternative to the null is not what is being specified in the paper. And, as Bargh etc. have demonstrated, the hypothesis can keep shifting. So we go round and round.
Here’s my point. Whether you think the experiments and observational studies of Kanazawa, Bargh, etc., are worth doing, or whether you think they’re a waste of time: either way, I don’t think they’re making claims that can be said to be either “true” or “false.” And I feel the same way about medical studies of the “hormone therapy causes cancer” variety. It could be possible to coerce these claims into specific predictions about measurable quantities, but that’s not what these papers are doing.
I agree that there are true and false statements. For example, “the Stroop effect is real and it’s spectacular” is true. But when you move away from these super-clear examples, it’s tougher. Does power pose have real effects? Sure, everything you do will have some effect. But that’s not quite what Ioannidis was talking about, I guess.
Anyway, I’m still glad that Ioannidis wrote that paper, and I agree with his main point, even if I feel it was awkwardly expressed by being crammed into the true-positive, false-positive framework.
But it’s been 12 years now, and it’s time to move on. Back in 2013, I was not so pleased with Jager and Leek’s paper, “Empirical estimates suggest most published medical research is true.” Studying the statistical properties of published scientific claims, that’s great. Doing it in the true-or-false framework, not so much.
I can understand Jager and Leek’s frustration: Ioannidis used this framework to write a much celebrated paper; Jager and Leek do something similar—but with real data!—and get all this skepticism. But I do think we have to move on.
And I feel the same way about this new paper, “Too True to be Bad: When Sets of Studies With Significant and Nonsignificant Findings Are Probably True,” by Daniel Lakens and Alexander Etz, sent to me by Kevin Lewis. I suppose such analyses are helpful for people to build their understanding, but I think the whole true/false thing with social science hypotheses is just pointless. These people are working within an old-fashioned paradigm, and I wish they’d take the lead from my 2014 paper with Carlin on Type M and S errors. I suspect that I would agree with the recommendations of this paper (as, indeed, I agree with Ioannidis), but at this point I’ve just lost the patience for decoding this sort of argument and reframing it in terms of continuous and varying effects. That said, I expect this paper by Lakens and Etz, like the earlier papers by Ioannidis and Jager/Leek, could be useful, as I recognize that many people are still comfortable working within the outmoded framework of true and false hypotheses.
So I’m not the only anti-either-orist in the world.
Yay!
I mean shit … Isaac Newton’s Philosophiæ Naturalis Principia Mathematica was “false”, if you check it out closely enough, yet it was one of the most important pieces of published research in history.
dv said:
I mean shit … Isaac Newton’s Philosophiæ Naturalis Principia Mathematica was “false”, if you check it out closely enough, yet it was one of the most important pieces of published research in history.
And as for Galileo …
esselte said:
Specifically on medical research, but you might find this interesting.
http://journals.plos.org/plosmedicine/article/file?id=10.1371/journal.pmed.0020124&type=printable
Why Most Published Research Findings Are False
There is increasing concern that most current published research findings are false. The probability that a research claim is true may depend on study power and bias, the number of other studies on the same question, and, importantly, the ratio of true to no relationships among the relationships probed in each scientific field. In this framework, a research finding is less likely to be true when the studies conducted in a field are smaller; when effect sizes are smaller; when there is a greater number and lesser preselection of tested relationships; where there is greater flexibility in designs, definitions, outcomes, and analytical modes; when there is greater financial and other interest and prejudice; and when more teams are involved in a scientific field in chase of statistical significance.
Simulations show that for most study designs and settings, it is more likely for a research claim to be false than true. Moreover, for many current scientific fields, claimed research findings may often be simply accurate measures of the prevailing bias. In this essay, I discuss the implications of these problems for the conduct and interpretation of research….
more at link
Ah, now I get it, that makes more sense.
Physics doesn’t have that problem. It has other problems.
But first, other sciences. I’ve noticed a strong tendency among science writers to only publish the bleedin’ obvious, which is written in longer words, confusing acronyms and stilted grammar in order to hide that it is so obvious.
The problem with physics is that it’s intrinsically broken, which has led to most physics papers discussing theoretical possibilities that haven’t a snowflake’s chance in hell of being correct.
mollwollfumble said:
But first, other sciences. I’ve noticed a strong tendency among science writers to only publish the bleedin’ obvious.
Yeah?
To me it seems they spend too much time pushing the very dodgy.
There is a misunderstanding amongst some participants in this thread about what peer review is. It’s not public review. It’s review by peers. People who know the subject. A jury of your peers is a jury of the public. Peer review in science is review by experts.
buffy said:
There is a misunderstanding amongst some participants in this thread about what peer review is.
That’s weird.
dv said:
mollwollfumble said:
But first, other sciences. I’ve noticed a strong tendency among science writers to only publish the bleedin’ obvious.
Yeah?
To me it seems they spend too much time pushing the very dodgy.
So we see things differently.
Let’s see, pick a random science article, the first off the rank from a scholar search on “geochronology”. Obvious?
“It was unanimously agreed to recommend the adoption of a standard set of decay constants and isotopic abundances in isotope geology. The values have been selected, based on current information and usage, to provide for uniform international use in published communications. The recommendation represents a convention for the sole purpose of achieving interlaboratory standardization. The Subcommission does not intend to endorse specific methods of investigation or to specifically select the works of individual authors, institutions, or publications. All selected values are open to and should be the subjects of continuing critical scrutinizing and laboratory investigation. The recommendations will be reviewed by the Subcommission from time to time so as to bring the adopted conventional values in line with significant new research data.”
You might call that “dodgy”. I call it “bleedin’ obvious”.
For me, the dodginess of the science as reported in popular science columns is about five times as bad as that in the original articles. In political speeches and advertising blurb, another five times more dodgy, at least.
How could one improve peer review?
mollwollfumble said:
dv said:
mollwollfumble said:
But first, other sciences. I’ve noticed a strong tendency among science writers to only publish the bleedin’ obvious.
Yeah?
To me it seems they spend too much time pushing the very dodgy.
So we see things differently.
Let’s see, pick a random science article, the first off the rank from a scholar search on “geochronology”. Obvious?
“It was unanimously agreed to recommend the adoption of a standard set of decay constants and isotopic abundances in isotope geology. The values have been selected, based on current information and usage, to provide for uniform international use in published communications. The recommendation represents a convention for the sole purpose of achieving interlaboratory standardization. The Subcommission does not intend to endorse specific methods of investigation or to specifically select the works of individual authors, institutions, or publications. All selected values are open to and should be the subjects of continuing critical scrutinizing and laboratory investigation. The recommendations will be reviewed by the Subcommission from time to time so as to bring the adopted conventional values in line with significant new research data.”
You might call that “dodgy”. I call it “bleedin’ obvious”.
For me, the dodginess of the science as reported in popular science columns is about five times as bad as that in the original articles. In political speeches and advertising blurb, another five times more dodgy, at least.
We could have accepted the permaculture theory, where there was, at least according to Bill Mollison, a web of links to known disciplines that led to the perception that the whole thing could actually be perceived as being wholistic.
Tau.Neutrino said:
How could one improve peer review?
By changing peers?
Tau.Neutrino said:
How could one improve peer review?
Limit the ages of the peers to be no more than that of the lead author? As people age, enthusiasm declines and so does truthfulness.
OK. Another example at random. First result on a scholar search on “sneeze”.
“Urethral closure mechanisms under sneeze-induced stress condition in rats: a new animal model for evaluation of stress urinary incontinence. The urethral response was much less than the bladder response. The sneeze leak point pressure was also measured to investigate the role of active urethral closure mechanisms in maintaining total urethral resistance against sneeze-induced urinary incontinence. In sham-operated rats, no urinary leakage was observed during sneeze, which produced an increase of intravesical pressure up to …. However, in nerve-transected rats urinary leakage was observed when the intravesical pressure during sneezing exceeded …. These results indicate that during sneezing, pressure increases elicited by reflex contractions of external urethral sphincter and pelvic floor muscles occur in the middle portion of the urethra. These reflexes in addition to passive transmission of increased abdominal pressure significantly contribute to urinary continence mechanisms under a sneeze-induced stress condition.”
I still call it “bleedin’ obvious with big words” rather than “dodgy”.
roughbarked said:
We could have accepted the permaculture theory, where there was, at least according to Bill Mollison, a web of links to known disciplines that led to the perception that the whole thing could actually be perceived as being wholistic.
There are few things more dodgy than permaculture, because it’s been heavily politicised.
PS. I’ve written science papers on the holistic approach. You’re misusing the word badly.
mollwollfumble said:
roughbarked said:
We could have accepted the permaculture theory, where there was, at least according to Bill Mollison, a web of links to known disciplines that led to the perception that the whole thing could actually be perceived as being wholistic.
There are few things more dodgy than permaculture, because it’s been heavily politicised.
PS. I’ve written science papers on the holistic approach. You’re misusing the word badly.
Of course, time has moved on, but he did pioneer the branching.