In your belief, what percentage of paternity tests conducted in the US shows the man tested is not the father?
- Anonymous, 1 decade ago (Favorite Answer)
I've heard the percentage is about 30%. The same is said for the incorrect father being listed on the birth certificate. Some guys just don't think when they sign on the dotted line.
- Alice, Lv 4, 1 decade ago
I wouldn't think it would be an especially small percentage. If I were pregnant right now I'd know who the father was, and the average girl probably would too, so no test would be needed. When a test is needed, there's a greater chance it could be one of several men, unless the guy just wants proof that it's him.
- H5, Lv 7, 1 decade ago
In my belief? Why not look at facts instead of my belief? I'm sure there are some data and estimates out there.
I think I saw something like 10%, BUT that was out of people who must have had a reason for getting the test in the first place, therefore presumably being unsure due to knowledge of multiple sexual partners at time of conception.
- ʄaçade, Lv 7, 1 decade ago
Do not know. But a better question might be: "What percentage of American men think they are the father of a particular child but are not?"
@Pat: >"31.7% of all American high school seniors have used marijuana within the last 12 months (44.4 % within the last 30 days)."
Pat, I think you have your figures fouled up. The last 12 months INCLUDES the last 30 days, so the percentage for the shorter, included period cannot be MORE than the percentage for the longer one.
- 1 decade ago
My guess would be somewhere around 40% at most. Generally speaking, the mother already has a pretty good idea as to who the father really is, so only highly suspect potential fathers would be tested.
- antagonist, Lv 4, 1 decade ago
In about 10% of cases, the woman guessed the wrong partner. That doesn't mean the man tested didn't have sex with her - it means she also had sex with someone else in the relevant period of time. Sometimes two or more men are tested at the same time.
- Anonymous, 1 decade ago
Among those tested, 35% to 40% of results show that the claimed father is not the true father.
- Anonymous, 1 decade ago
Well, paternity fraud is pretty rampant in the US. New Hampshire found that at least 30% of men paying child support were not the biological fathers of the children being supported.
- Anonymous, 1 decade ago
I've traced the misinformation in the blurb Guns cited and discovered that no such evidence as claimed even exists. There are no credible studies - not even published ones. We can't find flawed methodology in non-existent research.
Your source, "Worldnet Daily", is not even a newspaper. It's an online tabloid. I see you can't find any real data on the subject either. No wonder: it doesn't yet exist.
@ Pat: LOGICAL FALLACY
BMJ 1995;311:485 (19 August)
Absence of evidence is not evidence of absence
Douglas G Altman, head (a); J Martin Bland, reader in medical statistics (b)
(a) Medical Statistics Laboratory, Imperial Cancer Research Fund, London WC2A 3PX; (b) Department of Public Health Sciences, St George's Hospital Medical School, London SW17 0RE
Correspondence to: Mr Altman.
The non-equivalence of statistical significance and clinical importance has long been recognised, but this error of interpretation remains common. Although a significant result in a large study may sometimes not be clinically important, a far greater problem arises from misinterpretation of non-significant findings. By convention a P value greater than 5% (P>0.05) is called "not significant." Randomised controlled clinical trials that do not show a significant difference between the treatments being compared are often called "negative." This term wrongly implies that the study has shown that there is no difference, whereas usually all that has been shown is an absence of evidence of a difference. These are quite different statements.
The sample size of controlled trials is generally inadequate, with a consequent lack of power to detect real, and clinically worthwhile, differences in treatment. Freiman et al1 found that only 30% of a sample of 71 trials published in the New England Journal of Medicine in 1978-9 with P>0.1 were large enough to have a 90% chance of detecting even a 50% difference in the effectiveness of the treatments being compared, and they found no improvement in a similar sample of trials published in 1988. To interpret all these "negative" trials as providing evidence of the ineffectiveness of new treatments is clearly wrong and foolhardy. The term "negative" should not be used in this context.2
A recent example is given by a trial comparing octreotide and sclerotherapy in patients with variceal bleeding.3 The study was carried out on a sample of only 100 despite a reported calculation that suggested that 1800 patients were needed. This trial had only a 5% chance of getting a statistically significant result if the stated clinically worthwhile treatment difference truly existed. One consequence of such low statistical power was a wide confidence interval for the treatment difference. The authors concluded that the two treatments were equally effective despite a 95% confidence interval that included differences between the cure rates of the two treatments of up to 20 percentage points.
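The power shortfall described in that trial can be sketched numerically. Below is a minimal Python sketch (standard library only) of the usual normal-approximation power calculation for comparing two proportions. The 60% versus 70% cure rates and the per-arm sizes are assumed purely for illustration; they are not taken from the octreotide trial itself.

```python
from statistics import NormalDist

def power_two_proportions(p1, p2, n_per_arm, alpha=0.05):
    """Approximate power of a two-sided z-test comparing two proportions,
    with n_per_arm patients in each treatment group (normal approximation)."""
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    se = ((p1 * (1 - p1) + p2 * (1 - p2)) / n_per_arm) ** 0.5
    z_effect = abs(p1 - p2) / se
    return nd.cdf(z_effect - z_crit)

# Assumed illustrative cure rates: 60% vs 70%, a 10-point difference.
# 50 per arm mirrors a ~100-patient trial; 900 per arm, ~1800 patients.
small = power_two_proportions(0.60, 0.70, 50)
large = power_two_proportions(0.60, 0.70, 900)
print(f"power with ~100 patients:  {small:.2f}")
print(f"power with ~1800 patients: {large:.2f}")
```

Under these assumptions the small trial has well under a one-in-three chance of detecting the stated difference, while the large one is nearly certain to - which is the article's point about undersized "negative" trials.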
Similar evidence of the dangers of misinterpretation of non-significant results is found in numerous meta-analyses (overviews) of published trials, when few or none of the individual trials were statistically large enough. A dramatic example is provided by the overview of clinical trials evaluating fibrinolytic treatment (mostly streptokinase) for preventing reinfarction after acute myocardial infarction. The overview of randomised controlled trials found a modest but clinically worthwhile (and highly significant) reduction in mortality of 22%,4 but only five of the 24 trials had shown a statistically significant effect with P<0.05. The lack of statistical significance of most of the individual trials led to a long delay before the true value of streptokinase was appreciated.
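The streptokinase example rests on standard inverse-variance (fixed-effect) pooling: combining several individually non-significant trials can yield a clearly significant overall estimate. A minimal sketch follows, with entirely hypothetical effect estimates and standard errors (invented for illustration, not the actual streptokinase data):

```python
from statistics import NormalDist

def two_sided_p(estimate, se):
    """Two-sided p-value from a z-test (normal approximation)."""
    z = abs(estimate) / se
    return 2 * (1 - NormalDist().cdf(z))

def pool_fixed_effect(trials):
    """Inverse-variance fixed-effect pooling of (effect, standard_error) pairs."""
    weights = [1 / se ** 2 for _, se in trials]
    pooled = sum(w * est for (est, _), w in zip(trials, weights)) / sum(weights)
    pooled_se = (1 / sum(weights)) ** 0.5
    return pooled, pooled_se

# Hypothetical small trials (log risk ratio, standard error): every effect
# points the same way, yet no single trial reaches P < 0.05.
trials = [(-0.25, 0.16), (-0.15, 0.14), (-0.22, 0.17),
          (-0.18, 0.15), (-0.21, 0.16), (-0.24, 0.15)]
individual_ps = [two_sided_p(est, se) for est, se in trials]
pooled, pooled_se = pool_fixed_effect(trials)
pooled_p = two_sided_p(pooled, pooled_se)
print("individual p-values:", [round(p, 3) for p in individual_ps])
print(f"pooled estimate {pooled:.3f}, p = {pooled_p:.4f}")
```

Here each trial alone would be (mis)labelled "negative", while the pooled analysis is decisively significant, mirroring the 5-of-24 pattern the article describes.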
While it is usually reasonable not to accept a new treatment unless there is positive evidence in its favour, when issues of public health are concerned we must question whether the absence of evidence is a valid enough justification for inaction. A recent publicised example is the suggested link between some sudden infant deaths and antimony in cot mattresses. Statements about the absence of evidence are common--for example, in relation to the possible link between violent behaviour and exposure to violence on television and video, the possible harmful effects of pesticide residues in drinking water, the possible link between electromagnetic fields and leukaemia, and the possible transmission of bovine spongiform encephalopathy from cows. Can we be comfortable that the absence of clear evidence in such cases means that there is no risk or only a negligible one?
When we are told that "there is no evidence that A causes B" we should first ask whether absence of evidence means simply that there is no information at all. If there are data we should look for quantification of the association rather than just a P value. Where risks are small P values may well mislead: confidence intervals are likely to be wide, indicating considerable uncertainty. While we can never prove the absence of a relation, when necessary we should seek evidence against the link between A and B--for example, from case-control studies. The importance of carrying out such studies will relate to the seriousness of the postulated effect and how widespread is the exposure in the population.
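The warning that "confidence intervals are likely to be wide" when trials are small can be made concrete. Below is a minimal Python sketch of the Wald interval for a difference between two proportions; the counts are made up so that both arms show the same observed cure rate. Even with identical observed rates, the small trial's "non-significant" interval still spans roughly ±19 percentage points, so it rules out very little.

```python
from statistics import NormalDist

def diff_ci(successes1, n1, successes2, n2, alpha=0.05):
    """Wald confidence interval for the difference between two proportions
    (normal approximation; adequate away from rates near 0 or 1)."""
    p1, p2 = successes1 / n1, successes2 / n2
    z = NormalDist().inv_cdf(1 - alpha / 2)
    se = (p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2) ** 0.5
    d = p1 - p2
    return d - z * se, d + z * se

# Identical observed 60% cure rates in both arms; only the sample size differs.
print(diff_ci(30, 50, 30, 50))      # 50 patients per arm: wide interval
print(diff_ci(540, 900, 540, 900))  # 900 patients per arm: narrow interval
```

The interval, not the P value, is what tells you how large a treatment difference the data still leave open.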
*Pat, your only credible source cites statistics on drug use, which isn't the subject under discussion here. You need to find credible, authoritative sources from recognised scientific bodies on the actual subject. I have never come across any rigorous, controlled studies on it. If I am wrong and you can furnish said evidence, then please feel free to correct me and post links.
You need a bit of education on the subject of what is called "the scientific method".
It's critically important that the population sampled be accurately representative of the group being studied as a whole. Otherwise the methodology is flawed and the results will be wrong.
Are people who go on the Maury Povich show or the Jerry Springer Show representative of the general American population? I think not.