Effect Size Calculator

Calculate effect sizes: Cohen's d, Hedges' g, η², ω², and more.

Frequently Asked Questions

What is effect size and why is it important?

Effect size measures the magnitude, or practical significance, of a relationship or difference, independent of sample size. While p-values indicate whether an observed effect is unlikely under the null hypothesis (statistical significance), effect sizes quantify how large the effect is (practical significance). With large samples, even trivial differences can be statistically significant (p < .05), so effect sizes help you judge whether findings matter in real-world contexts. They also enable meta-analysis by standardizing results across studies that use different measurement scales. Report effect sizes with all statistical tests to give readers complete information about the magnitude and importance of your findings.
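
As a concrete illustration, the sketch below computes Cohen's d for two independent groups in Python. The data and function name are hypothetical, and this follows the standard pooled-standard-deviation formula rather than this tool's internal code:

```python
import numpy as np

def cohens_d(group1, group2):
    """Cohen's d for two independent groups, using the pooled standard deviation."""
    x1 = np.asarray(group1, dtype=float)
    x2 = np.asarray(group2, dtype=float)
    n1, n2 = len(x1), len(x2)
    # Pool the variances, weighting each group by its degrees of freedom.
    pooled_sd = np.sqrt(((n1 - 1) * x1.var(ddof=1) + (n2 - 1) * x2.var(ddof=1))
                        / (n1 + n2 - 2))
    return (x1.mean() - x2.mean()) / pooled_sd

# Hypothetical example: test scores for a treatment and a control group.
treatment = [84, 89, 91, 78, 88, 95, 82]
control = [75, 80, 77, 83, 74, 79, 81]
print(f"Cohen's d = {cohens_d(treatment, control):.2f}")
```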

How do I interpret Cohen's d effect size values?

Cohen's conventions for d (the difference between means in pooled standard deviation units) are: d = 0.20 is small (a subtle difference), d = 0.50 is medium (a noticeable difference), and d = 0.80 is large (a substantial difference). However, these are rough guidelines; interpretation depends on context. In educational interventions, d = 0.20 may be meaningful if it affects thousands of students, while in clinical psychology d = 0.80 may be modest for an intensive therapy. When interpreting effect sizes, consider practical significance in your field, implementation costs, and comparisons to existing interventions.
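
For illustration, here is a minimal Python sketch of these benchmarks together with the small-sample correction that converts Cohen's d into Hedges' g. The function names are hypothetical, and the correction uses the common 1 - 3/(4*df - 1) approximation to the exact bias-correction factor:

```python
def hedges_g(d, n1, n2):
    """Hedges' g: small-sample bias correction applied to Cohen's d."""
    df = n1 + n2 - 2
    correction = 1 - 3 / (4 * df - 1)  # approximation to the exact correction factor
    return correction * d

def label_d(d):
    """Map |d| onto Cohen's conventional benchmarks (rough guidelines, not hard rules)."""
    d = abs(d)
    if d < 0.20:
        return "negligible"
    if d < 0.50:
        return "small"
    if d < 0.80:
        return "medium"
    return "large"

d, n1, n2 = 0.45, 12, 12
print(f"d = {d} ({label_d(d)}), Hedges' g = {hedges_g(d, n1, n2):.2f}")
```

With small groups like these, g is noticeably smaller than d, which is why many journals prefer Hedges' g for samples under roughly 20 per group.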

Should I report effect sizes even if my results are not statistically significant?

Yes! Effect sizes should be reported for all analyses, significant or not. A non-significant result with a moderate effect size suggests inadequate statistical power (the sample was too small) rather than the absence of an effect, whereas a non-significant result with a small effect size indicates a truly minimal effect. Reporting effect sizes for non-significant findings helps prevent publication bias and lets meta-analysts include your data; it provides more complete information than p-values alone. Always report effect sizes with confidence intervals to show the precision of your estimates.
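
As a minimal sketch of that last point, the snippet below computes an approximate 95% confidence interval for Cohen's d using a common large-sample formula for its standard error. The function name and inputs are hypothetical; exact intervals are based on the noncentral t distribution:

```python
import math

def d_confidence_interval(d, n1, n2, z=1.96):
    """Approximate 95% CI for Cohen's d via the large-sample normal approximation."""
    se = math.sqrt((n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2)))
    return d - z * se, d + z * se

low, high = d_confidence_interval(0.45, n1=30, n2=30)
print(f"d = 0.45, 95% CI [{low:.2f}, {high:.2f}]")  # a wide interval flags an imprecise estimate
```

Note that the interval here spans zero: the moderate point estimate is not significant at this sample size, which is exactly why reporting both the effect size and its confidence interval matters.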