Similar Documents
Found 20 similar documents (search time: 31 ms)
1.
Comments on an article by Dixon et al. (see record 2007-06671-001) regarding the effect sizes they presented in their meta-analysis of psychological interventions for arthritis pain management. The author of this comment claims that some of the individual effect sizes that they presented are erroneous and have therefore undermined their cumulative effect size estimates. After examining findings from other studies, he concludes that the Dixon et al. meta-analysis reports cumulative effect sizes (Hedges' g) that overestimate the effects of psychological treatments on arthritis pain. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

2.
In response to concerns described by H. N. Garb et al. (see record 2001-05665-003), the authors present the weighted and unweighted means and medians of the effect sizes obtained by J. B. Hiller et al. (see record 1999-11130-005). These indices of central tendency are presented separately for MMPI and Rorschach effect sizes, both for all the studies in the meta-analysis and for a 10% trimmed sample designed to obtain more robust estimates of central tendency. The variability of these 4 indices is noticeably greater for the MMPI than for the Rorschach. Meta-analysts must compute, compare, and evaluate a variety of indices of central tendency, and they must examine the effects of moderator variables. The authors also comment briefly on the use of phi versus kappa, on combining correlated effect sizes, and on possible hindsight biases. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

3.
Presents a framework for studying the influence of reporting quality on meta-analytic results in which 3 sources of reporting deficiency are identified: quality (adequacy) of reporting in the primary studies, quality of macrolevel reporting, and quality of the review process. To assess the influence of reporting quality empirically, 25 reports were sampled from the psychotherapy meta-analysis reported by M. L. Smith et al. (1980) and recoded by the present authors. Two sources of information pertinent to reporting quality were established: interrater reliabilities and confidence judgments. Reanalyses incorporating reliability corrections and confidence judgments suggested that deficient reporting injects considerable noise into meta-analytic data and can lead to spurious conclusions. (43 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

4.
Despite the recent popularity of meta-analysis as a tool for summarizing empirical results across a number of studies, surprisingly little research has been conducted on the accuracy of these procedures under a variety of population conditions. Of concern in this study was the 90% credibility value (K. Pearlman et al., see record 1980-31533-001) advocated as a rule of thumb regarding the transportability of employment test validities. We investigated the ability of this meta-analytic rule to detect the presence of discretely defined moderator variables, that is, the ability of the rule to detect instances where transportability is inappropriate. An infinite sample size analysis and a mathematical proof demonstrated that the transportability rule may produce erroneous inferences at rates higher than expected. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
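The 90% credibility value rule at issue here comes from validity-generalization meta-analysis: it takes a low percentile of the estimated distribution of true validities and treats a positive value as evidence that validities transport across settings. A minimal sketch of that computation follows, assuming a normal distribution of true validities and purely illustrative numbers; it is not this article's procedure, which instead tests the rule's behavior.

```python
# Minimal sketch of the 90% credibility value rule (validity generalization):
# the 10th percentile of an assumed normal distribution of true validities.
# The mean and SD below are illustrative, not values from the article.
from statistics import NormalDist

def credibility_value_90(mean_rho: float, sd_rho: float) -> float:
    # 10th percentile = mean + z(0.10) * sd, i.e., roughly mean - 1.28 * sd.
    return mean_rho + NormalDist().inv_cdf(0.10) * sd_rho

cv = credibility_value_90(0.30, 0.20)
print(round(cv, 3))   # ~0.044
print(cv > 0)         # the rule of thumb would then infer transportability
```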

5.
B. E. Wampold et al.'s (1997) meta-analysis provides a useful and methodologically sophisticated summary of the results of comparative psychotherapy outcome research. Despite its strengths, some limitations of the meta-analysis that may have biased the results against finding differences between treatments are pointed out in this article. In addition, the types of treatments and patient populations to which the results can be generalized are clarified through an analysis of the studies contained within the meta-analysis. The importance of exceptions to the Dodo bird verdict is emphasized. Disagreements with Wampold et al. on the implications of their meta-analysis for research and practice, in particular the role of clinical trials in psychotherapy research and the need for identifying treatments that are "empirically supported," are discussed. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

6.
In meta-analysis, it is possible to average results across multiple studies because the effect sizes estimated from each study are in the same metric (e.g., the standardized mean difference). However, when effect sizes are computed from a factorial analysis of variance, these estimates are influenced by the other factors in the design. A correction developed by G. V. Glass, B. McGaw, and M. L. Smith (1981) solves this problem; however, it requires information (e.g., sums of squares) that is often not available in published research. A reformulated version of the correction is presented, which requires only F values and degrees of freedom. The impact of the correction on effect size estimates is examined. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
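The abstract does not reproduce the reformulated correction itself, but the underlying idea can be sketched: a d computed against a factorial ANOVA's error term is too large relative to the one-way metric, because variance due to the other factors has been removed from the denominator. Under the normalization MS_error = 1, each off-factor's sum of squares is recoverable as F × df, which is why only F values and degrees of freedom are needed. The sketch below is a plausible reconstruction under those assumptions, not necessarily the authors' exact formula.

```python
# Plausible reconstruction (not necessarily the authors' exact formula):
# re-inflate the factorial error term so d is expressed on the one-way metric.
import math

def adjusted_d(d_factorial, other_f_values, other_dfs, df_error):
    """Correct a d computed against a factorial MS_error.

    other_f_values, other_dfs: F statistics and numerator dfs of the other
    effects (off-factors and interactions) in the design.
    """
    # With MS_error normalized to 1, each effect's sum of squares is F * df.
    ss_other = sum(f * df for f, df in zip(other_f_values, other_dfs))
    # Variance a one-way design would have used as its error term.
    var_one_way = (df_error + ss_other) / (df_error + sum(other_dfs))
    # Shrink d by the ratio of the one-way SD to the factorial SD.
    return d_factorial / math.sqrt(var_one_way)

# Example: 2x2 design, off-factor F = 6.0 (df = 1), interaction F = 2.0
# (df = 1), 36 error df. All numbers are illustrative.
print(round(adjusted_d(0.50, [6.0, 2.0], [1, 1], 36), 3))  # ~0.465
```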

7.
The issue of violent video game influences on youth violence and aggression remains intensely debated in the scholarly literature and among the general public. Several recent meta-analyses, examining outcome measures most closely related to serious aggressive acts, found little evidence for a relationship between violent video games and aggression or violence. In a new meta-analysis, C. A. Anderson et al. (2010) questioned these findings. However, their analysis has several methodological issues that limit the interpretability of their results. In their analysis, C. A. Anderson et al. included many studies that do not relate well to serious aggression, an apparently biased sample of unpublished studies, and a “best practices” analysis that appears unreliable and does not consider the impact of unstandardized aggression measures on the inflation of effect size estimates. They also focused on bivariate correlations rather than better controlled estimates of effects. Despite a number of methodological flaws that all appear likely to inflate effect size estimates, the final estimate of r = .15 is still indicative of only weak effects. Contrasts between the claims of C. A. Anderson et al. (2010) and real-world data on youth violence are discussed. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

8.
Disputes the methodology and conclusions of M. L. Smith and G. V. Glass (see record 1978-10341-001) in their meta-analysis of psychotherapy outcome studies. Smith and Glass's use of a compilation of studies, mostly of poor design, is an abandonment of scholarship. There remains no acceptable evidence for the efficacy of psychotherapy. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

9.
Missing effect-size estimates pose a particularly difficult problem in meta-analysis. Rather than discarding studies with missing effect-size estimates or setting missing effect-size estimates equal to 0, the meta-analyst can supplement effect-size procedures with vote-counting procedures if the studies report the direction of results or the statistical significance of results. By combining effect-size and vote-counting procedures, the meta-analyst can obtain a less biased estimate of the population effect size and a narrower confidence interval for the population effect size. This article describes 3 vote-counting procedures for estimating the population correlation coefficient in studies with missing sample correlations. Easy-to-use tables, based on equal sample sizes, are presented for the 3 procedures. More complicated vote-counting procedures also are given for unequal sample sizes. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
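The core of the vote-counting idea can be illustrated with a closed-form special case: if each of k equal-n studies reports only whether its sample correlation was positive, the proportion of positive results pins down the population correlation through the Fisher-z approximation. The sketch below illustrates that approach under those simplifying assumptions; the article's own tables and procedures are more general (e.g., unequal sample sizes, significance counts).

```python
# Illustrative vote-counting estimator (simplified special case, not the
# article's exact tables): recover rho from the proportion of studies with a
# positive sample correlation, assuming equal n and Fisher's z approximation.
import math
from statistics import NormalDist

def rho_from_vote_counts(n_positive: int, n_studies: int, n_per_study: int) -> float:
    p_hat = n_positive / n_studies          # proportion of positive results
    # Under Fisher's z, P(r > 0) = Phi(atanh(rho) * sqrt(n - 3)); invert it.
    z = NormalDist().inv_cdf(p_hat) / math.sqrt(n_per_study - 3)
    return math.tanh(z)

# 14 of 20 studies (n = 50 each) reported a positive correlation.
print(round(rho_from_vote_counts(14, 20, 50), 3))  # ~0.076
```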

10.
Meta-analytic estimates of the absolute efficacy of psychotherapy indicate an effect size of approximately 0.80. However, various biases in primary and meta-analytic studies may have influenced this estimate. This study examines 4 nonsystematic biases that increase error variance (i.e., nonrandomized designs, methodological deficiencies, failure to use the study as the unit of analysis, and violations of homogeneity), 4 underestimation biases that primarily concern psychometric issues (i.e., unreliability of outcome measures, failure to report nonsignificant effect sizes, nonoptimal composite outcome measures, and nonstandardized outcome measures), and 8 overestimation biases (i.e., excluding nonsignificant effects from calculations of effect size estimates, failure to adjust for small sample bias, failure to separate studies using single-group pre-post designs vs. control group designs, using unweighted average effect sizes, analyzing biased partial samples that reflect treatment dropout and research attrition, researcher allegiance bias, publication bias, and wait-list control group bias). Wherever possible, evidence regarding the magnitude of these biases is presented, and methods for addressing these biases separately and collectively are discussed. Implications of the meta-analytic evidence on psychotherapy for the effect sizes of other psychological interventions are also considered. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
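One of the biases listed above, failure to adjust for small-sample bias, has a standard remedy worth recalling: the raw standardized mean difference d systematically overestimates the population effect in small samples, and Hedges' correction shrinks it. The familiar conversion from d to g is:

```latex
% Hedges' small-sample correction for the standardized mean difference:
% multiply d by J(nu), where nu is the pooled degrees of freedom.
g = J(\nu)\, d, \qquad
J(\nu) \approx 1 - \frac{3}{4\nu - 1}, \qquad
\nu = n_1 + n_2 - 2
```

For example, with n_1 = n_2 = 10 (so ν = 18), J ≈ 1 − 3/71 ≈ 0.958, and a raw d of 0.80 corrects to g ≈ 0.77.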

11.
Reviews criticisms of M. L. Smith and G. V. Glass's (see record 1978-10341-001) meta-analysis of psychotherapy outcome studies, including the comments of H. J. Eysenck (1978), P. S. Gallo (1978), and S. Presby (1978). Smith and Glass's data tend to negate the claimed benefits of psychotherapy, as well as the value of educational and experiential achievement in the field. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

12.
Replies to H. J. Eysenck's (1978) criticisms of M. L. Smith and G. V. Glass's (see record 1978-10341-001) meta-analysis of psychotherapy outcome studies. Eysenck's rejection of any evidence supporting the effectiveness of psychotherapy is disputed. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

13.
The computation of effect sizes is a key feature of meta-analysis. In treatment outcome meta-analyses, the standardized mean difference statistic on posttest scores (d) is usually the effect size statistic used. However, when primary studies do not report the statistics needed to compute d, many methods for estimating d from other data have been developed. Little is known about the accuracy of these estimates, yet meta-analysts frequently use them on the assumption that they are estimating the same population parameter as d. This study investigates that assumption empirically. On a sample of 140 psychosocial treatment or prevention studies from a variety of areas, the present study shows that these estimates yield results that are often not equivalent to d in either mean or variance. The frequent mixing of d and other estimates of d in past meta-analyses, therefore, may have led to biased effect size estimates and inaccurate significance tests. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
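For reference, the posttest standardized mean difference the abstract refers to has the standard form: the treatment-control difference in posttest means divided by the pooled standard deviation:

```latex
% Standardized mean difference on posttest scores, with pooled SD.
d = \frac{\bar{X}_T - \bar{X}_C}
         {\sqrt{\dfrac{(n_T - 1)\,s_T^2 + (n_C - 1)\,s_C^2}{n_T + n_C - 2}}}
```

In practice, substitute estimates of d are commonly derived from t or F statistics, p values, or proportions when means and standard deviations go unreported; the study's finding is that such estimates often do not match d in mean or variance.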

14.
K. S. Dobson (see record 1989-30221-001) conducted a meta-analysis of 28 studies of cognitive-behavioral therapy for depression that used the Beck Depression Inventory as the outcome measure. He concluded that the outcome of this type of therapy was superior to that of other forms of psychotherapy and to that of pharmacotherapy. The present study reanalyzed the same studies, and a further set of 37 similar ones published from 1987 to 1994, taking into account variations in sample size and researcher allegiance. This study confirmed Dobson's conclusions for his original sample but showed that about half the difference between CT and other treatments was predictable from researcher allegiance. However, comparable analyses of the later set of studies showed no effect of researcher allegiance. Two causes for these phenomena are (a) a historical effect, whereby both effect sizes and allegiance were large in earlier years and declined over time, and (b) a treatment effect, whereby effect size and allegiance were correlated, but more for some treatments than others. This correlation has also weakened over time. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

15.
The authors examine the reassessments of the National Reading Panel (NRP) report (National Institute of Child Health and Human Development, 2000) by G. Camilli, S. Vargas, and M. Yurecko (2003); G. Camilli, P. M. Wolfe, and M. L. Smith (2006); and D. D. Hammill and H. L. Swanson (2006) that disagreed with the NRP on the magnitude of the effect of systematic phonics instruction. Using the coding of the NRP studies by Camilli et al. (2003, 2006), multilevel regression analyses show that their findings do not contradict the NRP findings of effect sizes in the small to moderate range favoring systematic phonics. Extending Camilli et al. (2003, 2006), the largest effects are associated with reading instruction enhanced with components that increase comprehensiveness and intensity. In contrast to Hammill and Swanson, binomial effect size displays show that effect sizes of the magnitude found for systematic phonics by the NRP are meaningful and could result in significant improvement for many students depending on the base rate of struggling readers and the size of the effect. Camilli et al. (2003, 2006) and Hammill and Swanson do not contradict the NRP report, concurring in supporting comprehensive approaches to reading instruction. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

16.
The proportion of studies that use one-tailed statistical significance tests (π) in a population of studies targeted by a meta-analysis can affect the bias of the sample effect sizes (sample ESs, or ds) that are accessible to the meta-analyst. H. C. Kraemer, C. Gardner, J. O. Brooks, and J. A. Yesavage (1998) found that, assuming π = 1.0, for small studies (small Ns) the overestimation bias was large for small population ESs (δ = 0.2) and reached a maximum for the smallest population ES (viz., δ = 0). The present article shows (with a minor modification of H. C. Kraemer et al.'s model) that when π = 0, the small-N bias of accessible sample ESs is relatively small for δ ≤ 0.2, and a minimum (in fact, nonexistent) for δ = 0. Implications are discussed for interpretations of meta-analyses of (a) therapy efficacy and therapy effectiveness studies, (b) comparative outcome studies, and (c) studies targeting small but important population ESs. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

17.
Critiques H. J. Eysenck's (1978) criticism of M. L. Smith and G. V. Glass's (see record 1978-10333-001) meta-analysis of psychotherapy outcome studies. Eysenck's insistence on a position regarding spontaneous remission is examined. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

18.
Critiques J. T. Landman and R. M. Dawes's (see record 1982-30838-001) reanalysis of the M. L. Smith and G. V. Glass (see record 1978-10341-001) psychotherapy outcome study. It is suggested that the Landman and Dawes study has 2 serious sampling problems. The first is the mismatch between the population sampled from and the population to which the sample, after reanalysis, is compared. Also, there is a flaw in the sampling procedure itself. It is further suggested that the conceptual shortcoming of the reanalysis turns on Landman and Dawes's definition of well-controlled studies. (10 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

19.
Comparative studies of psychotherapy often find few or no differences in the outcomes that alternative treatments produce. Although these findings may reflect the comparability of alternative treatments, studies are often not sufficiently powerful to detect the sorts of effect sizes likely to be found when two or more treatments are contrasted. The present survey evaluated the power of psychotherapy outcome studies to detect differences for contrasts of two or more treatments and of treatment versus no treatment. Eighty-five outcome studies were drawn from 9 journals over a 3-yr period (1984–1986). Data in each article were examined first to provide estimates of effect sizes and then to evaluate statistical power at posttreatment and follow-up. Findings indicate that the power of studies to detect differences between treatment and no treatment is quite adequate given the large effect sizes usually evident for this comparison. However, the power is relatively weak to detect the small-to-medium effect sizes likely to be evident when alternative treatments are contrasted. Thus, the equivalent outcomes that treatments produce may be due to the relatively weak power of the tests. Implications for interpreting outcome studies and for designing comparative studies are highlighted. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
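The power shortfall described here is easy to reproduce with a back-of-the-envelope calculation. A minimal sketch, using the normal approximation to the two-sample t test (the survey's own computations may differ):

```python
# Approximate power of a two-sample t test (normal approximation) to detect
# a standardized difference d with n per group, two-tailed alpha = .05.
# Illustrative only; not the survey's exact computations.
from statistics import NormalDist

def power_two_sample(d: float, n_per_group: int, alpha: float = 0.05) -> float:
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha / 2)
    ncp = d * (n_per_group / 2) ** 0.5   # noncentrality under the alternative
    return 1 - nd.cdf(z_crit - ncp)

# Large treatment-vs-no-treatment effect vs. the small-to-medium effect
# typical of treatment-vs-treatment contrasts, at n = 30 per group.
print(round(power_two_sample(0.8, 30), 2))  # ~0.87
print(round(power_two_sample(0.3, 30), 2))  # ~0.21
```

At a typical sample size, power is ample for the treatment versus no-treatment contrast but far below the conventional .80 benchmark for contrasts between active treatments, which is exactly the survey's point.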

20.
Replies to M. J. Lambert's (1979) criticism of H. J. Eysenck's (1978) critique of a study of psychotherapy outcome by M. L. Smith et al. (see record 1978-10341-001), defending Eysenck's estimate of spontaneous remission. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
