Diseño y Análisis de Datos de Diseños Experimentales de Caso Único [Design and Data Analysis of Single-Case Experimental Designs]

Authors

Rumen Manolov

DOI:

https://doi.org/10.5944/ap.22.1.42833

Keywords:

single-case experimental designs; methodological recommendations; data analysis; software

Abstract

Single-case experimental designs involve the intensive study of one or a few units (e.g., individuals) across different conditions manipulated by the researchers. Some designs entail within-subject replication (the ABAB design, the changing criterion design, and the alternating treatments design), whereas the multiple-baseline design usually includes between-subjects replication. In both cases, there are several opportunities to demonstrate the effect of the intervention (by introducing or withdrawing it) at different points in time. Replication of the results across studies is also essential for establishing the generality of the conclusions. Regarding data analysis, multiple proposals are currently available, with no consensus on which options are most appropriate. To support the justification that any analytical choice requires, a series of organizing criteria is offered, indicating the situations in which each of the proposals discussed is most useful. In addition, to bring applied researchers closer to these analytical options, the free web pages that implement them are described. Finally, because it is not possible to discuss every methodological detail or review every analytical alternative, the interested reader is directed to the primary sources through multiple references.
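As an illustration of the nonoverlap family of effect size indices listed among the analytical options in the references (Parker & Vannest, 2009), the following minimal Python sketch, which is not part of the original article, computes the Nonoverlap of All Pairs (NAP) index for hypothetical two-phase (AB) data, assuming that higher scores reflect improvement.

# Minimal illustrative sketch: Nonoverlap of All Pairs (NAP; Parker & Vannest, 2009).
# NAP is the proportion of all baseline-intervention pairs in which the
# intervention point shows improvement over the baseline point; ties count 0.5.
from itertools import product

def nap(baseline, intervention):
    pairs = list(product(baseline, intervention))
    improving = sum(1 for a, b in pairs if b > a)
    ties = sum(1 for a, b in pairs if b == a)
    return (improving + 0.5 * ties) / len(pairs)

# Hypothetical AB-phase data, for demonstration only (not from any study).
baseline = [2, 3, 3, 4, 3]
intervention = [5, 6, 6, 7, 8]
print(f"NAP = {nap(baseline, intervention):.2f}")  # prints NAP = 1.00

Under this convention, a NAP of .50 reflects complete overlap between phases (chance level), whereas 1.00 indicates that every intervention point exceeds every baseline point.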


References

Barlow, D. H. y Hayes S. C. (1979). Alternating treatments design: One strategy for comparing the effects of two treatments in a single subject. Journal of Applied Behavior Analysis, 12(2), 199–210. https://doi.org/10.1901/jaba.1979.12-199

Bono, R. y Arnau, J. (2014). Diseños experimentales de caso único en ciencias sociales y de la salud [Single-case experimental designs in social and health sciences]. Síntesis.

Busk, P. L. y Serlin, R. C. (1992). Meta-analysis for single-case research. En T. R. Kratochwill y J. R. Levin (Eds.), Single-case Research Designs and Analysis: New Directions for Psychology and Education (pp. 187−212). Lawrence Erlbaum.

Byun, T. M., Hitchcock, E. R. y Ferron, J. (2017). Masked visual analysis: Minimizing Type I error in visually guided single-case design for communication disorders. Journal of Speech, Language, and Hearing Research, 60(6), 1455−1466. https://doi.org/10.1044/2017_JSLHR-S-16-0344

Center, B. A., Skiba, R. J. y Casey, A. (1985). A Methodology for the Quantitative Synthesis of Intra-Subject Design Research. The Journal of Special Education, 19(4), 387−400. https://doi.org/10.1177/002246698501900404

Christ, T. J. (2007). Experimental Control and Threats to Internal Validity of Concurrent and Nonconcurrent Multiple Baseline Designs. Psychology in the Schools, 44(5), 451–459. https://doi.org/10.1002/pits.20237

Dart, E. H. y Radley, K. C. (2018). Toward a standard assembly of linear graphs. School Psychology Quarterly, 33(3), 350−355. https://doi.org/10.1037/spq0000269

Declercq, L., Cools, W., Beretvas, S. N., Moeyaert, M., Ferron, J. M. y Van den Noortgate, W. (2020). MultiSCED: A Tool for (Meta-)Analyzing Single-Case Experimental Data with Multilevel Modeling. Behavior Research Methods, 52(1), 177–192. https://doi.org/10.3758/s13428-019-01216-2

Declercq, L., Jamshidi, L., Fernández Castilla, B., Moeyaert, M., Beretvas, S. N., Ferron, J. M. y Van den Noortgate, W. (2022). Multilevel Meta-Analysis of Individual Participant Data of Single-Case Experimental Designs: One-stage versus Two-Stage Methods. Multivariate Behavioral Research, 57(2–3), 298–317. https://doi.org/10.1080/00273171.2020.1822148

Eilers, H. J. y Hayes, S. C. (2015). Exposure and Response Prevention Therapy with Cognitive Defusion Exercises to Reduce Repetitive and Restrictive Behaviors Displayed by Children with Autism Spectrum Disorder. Research in Autism Spectrum Disorders, 19, 18–31. https://doi.org/10.1016/j.rasd.2014.12.014

Estrada, E., Ferrer, E. y Pardo, A. (2019). Statistics for Evaluating Pre-Post Change: Relation between Change in the Distribution Center and Change in the Individual Scores. Frontiers in Psychology, 9, Artículo 2696. https://doi.org/10.3389/fpsyg.2018.02696

Facon, B., Sahiri, S. y Riviere, V. (2008). A Controlled Single-Case Treatment of Severe Long-Term Selective Mutism in a Child with Mental Retardation. Behavior Therapy, 39(4), 313–321. https://doi.org/10.1016/j.beth.2007.09.004

Feeney, T. y Ylvisaker, M. (2006). Context-Sensitive Cognitive-Behavioural Supports for Young Children with TBI: A Replication Study. Brain Injury, 20(6), 629–645. https://doi.org/10.1080/02699050600744194

Ferron, J. M., Bell, B. A., Hess, M. R., Rendina-Gobioff, G. y Hibbard, S. T. (2009). Making Treatment Effect Inferences from Multiple-Baseline Data: The Utility of Multilevel Modeling Approaches. Behavior Research Methods, 41(2), 372–384. https://doi.org/10.3758/BRM.41.2.372

Ferron, J. M., Farmer, J. L. y Owens, C. M. (2010). Estimating Individual Treatment Effects from Multiple-Baseline Data: A Monte Carlo study for multilevel-modeling approaches. Behavior Research Methods, 42(4), 930–943. https://doi.org/10.3758/BRM.42.4.930

Ferron, J. M., Goldstein, H., Olszewski, A. y Rohrer, L. (2020). Indexing Effects in Single-Case Experimental Designs by Estimating the Percent of Goal Obtained. Evidence-Based Communication Assessment and Intervention, 14(1–2), 6–27. https://doi.org/10.1080/17489539.2020.1732024

Ferron, J. M., Moeyaert, M., Van den Noortgate, W. y Beretvas, S. N. (2014). Estimating Causal Effects from Multiple-Baseline Studies: Implications for Design and Analysis. Psychological Methods, 19(4), 493–510. https://doi.org/10.1037/a0037038

Fisher, W. W., Kelley, M. E. y Lomas, J. E. (2003). Visual Aids and Structured Criteria for Improving Visual Inspection and Interpretation of Single-Case Designs. Journal of Applied Behavior Analysis, 36(3), 387–406. https://doi.org/10.1901/jaba.2003.36-387

Hartmann, D. P. y Hall, R. V. (1976). The Changing Criterion Design. Journal of Applied Behavior Analysis, 9(4), 527–532. https://doi.org/10.1901/jaba.1976.9-527

Heyvaert, M. y Onghena, P. (2014). Analysis of Single-Case Data: Randomisation Tests for Measures of Effect Size. Neuropsychological Rehabilitation, 24(3–4), 507–527. https://doi.org/10.1080/09602011.2013.818564

Horner, R. H., Carr, E. G., Halle, J., McGee, G., Odom, S. y Wolery, M. (2005). The Use of Single-Subject Research to Identify Evidence-Based Practice in Special Education. Exceptional Children, 71(2), 165−179. https://doi.org/10.1177/001440290507100203

Jacobs, K. W. (2019). Replicability and Randomization Test Logic in Behavior Analysis. Journal of the Experimental Analysis of Behavior, 111(2), 329–341. https://doi.org/10.1002/jeab.501

Klein, L. A., Houlihan, D., Vincent, J. L. y Panahon, C. J. (2017). Best Practices in Utilizing the Changing Criterion Design. Behavior Analysis in Practice, 10(1), 52–61. https://doi.org/10.1007/s40617-014-0036-x

Kratochwill, T. R., Horner, R. H., Levin, J. R., Machalicek, W., Ferron, J. y Johnson, A. (2021). Single-case Design Standards: An Update and Proposed Upgrades. Journal of School Psychology, 89, 91–105. https://doi.org/10.1016/j.jsp.2021.10.006

Lane, J. D. y Gast, D. L. (2014). Visual Analysis in Single Case Experimental Design Studies: Brief Review and Guidelines. Neuropsychological Rehabilitation, 24(3–4), 445–463. https://doi.org/10.1080/09602011.2013.815636

Ledford, J. R., Barton, E. E., Severini, K. E. y Zimmerman, K. N. (2019). A Primer on Single-Case Research Designs: Contemporary Use and Analysis. American Journal on Intellectual and Developmental Disabilities, 124(1), 35–56. https://doi.org/10.1352/1944-7558-124.1.35

Levin, J. R., Ferron, J. M. y Gafurov, B. S. (2018). Comparison of Randomization-Test Procedures for Single-Case Multiple-Baseline Designs. Developmental Neurorehabilitation, 21(5), 290–311. https://doi.org/10.1080/17518423.2016.1197708

Ma, H. H. (2006). An Alternative Method for Quantitative Synthesis of Single-Subject Research: Percentage of Data Points Exceeding the Median. Behavior Modification, 30(5), 598–617. https://doi.org/10.1177/0145445504272974

Maggin, D. M., Cook, B. G. y Cook, L. (2018). Using Single‐Case Research Designs to Examine the Effects of Interventions in Special Education. Learning Disabilities Research & Practice, 33(4), 182–191. https://doi.org/10.1111/ldrp.12184

Maggin, D. M., Cook, B. G. y Cook, L. (2019). Making Sense of Single‐Case Design Effect Sizes. Learning Disabilities Research & Practice, 34(3), 124–132. https://doi.org/10.1111/ldrp.12204

Manolov, R., Moeyaert, M. y Fingerhut, J. (2022). A Priori Justification for Effect Measures in Single-Case Experimental Designs. Perspectives on Behavior Science, 45(1), 156–189. https://doi.org/10.1007/s40614-021-00282-2

Manolov, R. y Onghena, P. (2018). Analyzing Data from Single-Case Alternating Treatments Designs. Psychological Methods, 23(3), 480–504. https://doi.org/10.1037/met0000133

Manolov, R. y Onghena, P. (2022). Defining and Assessing Immediacy in Single Case Experimental Designs. Journal of the Experimental Analysis of Behavior, 118(3), 462−492. https://doi.org/10.1002/JEAB.799

Manolov, R. y Tanious, R. (2022). Assessing Consistency in Single-Case Data Features using Modified Brinley Plots. Behavior Modification, 46(3), 581–627. https://doi.org/10.1177/0145445520982969

Manolov, R., Tanious, R. y Fernández-Castilla, B. (2022). A Proposal for the Assessment of Replication of Effects in Single-Case Experimental Designs. Journal of Applied Behavior Analysis, 55(3), 997–1024. https://doi.org/10.1002/jaba.923

McDougale, C. B., Richling, S. M., Longino, E. B. y O’Rourke, S. A. (2020). Mastery Criteria and Maintenance: A Descriptive Analysis of Applied Research Procedures. Behavior Analysis in Practice, 13(2), 402–410. https://doi.org/10.1007/s40617-019-00365-2

McDougall, D. (2005). The Range-Bound Changing Criterion Design. Behavioral Interventions, 20(2), 129–137. https://doi.org/10.1002/bin.189

Moeyaert, M., Ugille, M., Ferron, J., Beretvas, S. N. y Van den Noortgate, W. (2014). The Influence of the Design Matrix on Treatment Effect Estimates in the Quantitative Analyses of Single-Case Experimental Designs Research. Behavior Modification, 38(5), 665–704. https://doi.org/10.1177/0145445514535243

Natesan, P. y Hedges, L. V. (2017). Bayesian Unknown Change-Point Models to Investigate Immediacy in Single Case Designs. Psychological Methods, 22(4), 743–759. https://doi.org/10.1037/met0000134

Onghena, P. (1992). Randomization Tests for Extensions and Variations of ABAB Single-Case Experimental Designs: A Rejoinder. Behavioral Assessment, 14(2), 153–171.

Onghena, P. y Edgington, E. S. (1994). Randomization Tests for Restricted Alternating Treatments Designs. Behaviour Research and Therapy, 32(7), 783–786. https://doi.org/10.1016/0005-7967(94)90036-1

Onghena, P., Tanious, R., De, T. K. y Michiels, B. (2019). Randomization Tests for Changing Criterion Designs. Behaviour Research and Therapy, 117, 18–27. https://doi.org/10.1016/j.brat.2019.01.005

Parker, R. I. y Vannest, K. J. (2009). An Improved Effect Size for Single-Case Research: Nonoverlap of all Pairs. Behavior Therapy, 40(4), 357−367. https://doi.org/10.1016/j.beth.2008.10.006

Parker, R. I., Vannest, K. J. y Davis, J. L. (2011). Effect Size in Single-Case Research: A Review of Nine Nonoverlap Techniques. Behavior Modification, 35(4), 303–322. https://doi.org/10.1177/0145445511399147

Parker, R. I., Vannest, K. J., Davis, J. L. y Sauber, S. B. (2011). Combining Nonoverlap and Trend for Single-Case Research: Tau-U. Behavior Therapy, 42(2), 284−299. https://doi.org/10.1016/j.beth.2010.08.006

Perdices, M., Tate, R. L. y Rosenkoetter, U. (2023). An Algorithm to Evaluate Methodological Rigor and Risk of Bias in Single-Case Studies. Behavior Modification, 47(6), 1482–1509. https://doi.org/10.1177/0145445519863035

Pustejovsky, J. E. (2018). Using Response Ratios for Meta-Analyzing Single-Case Designs with Behavioral Outcomes. Journal of School Psychology, 68, 99−112. https://doi.org/10.1016/j.jsp.2018.02.003

Pustejovsky, J. E., Hedges, L. V. y Shadish, W. R. (2014). Design-Comparable Effect Sizes in Multiple Baseline Designs: A General Modeling Framework. Journal of Educational and Behavioral Statistics, 39(5), 368–393. https://doi.org/10.3102/1076998614547577

Scruggs, T. E., Mastropieri, M. A. y Casto, G. (1987). The Quantitative Synthesis of Single-Subject Research: Methodology and Validation. Remedial and Special Education, 8(2), 24–33. https://doi.org/10.1177/074193258700800206

Shadish, W. R., Hedges, L. V. y Pustejovsky, J. E. (2014). Analysis and Meta-Analysis of Single-Case Designs with a Standardized Mean Difference Statistic: A Primer and Applications. Journal of School Psychology, 52(2), 123–147. https://doi.org/10.1016/j.jsp.2013.11.005

Slocum, T. A., Pinkelman, S. E., Joslyn, P. R. y Nichols, B. (2022). Threats to Internal Validity in Multiple-baseline Design Variations. Perspectives on Behavior Science, 45(3), 619−638. https://doi.org/10.1007/s40614-022-00326-1

Snodgrass, M., Cook, B. G. y Cook, L. (2023). Considering Social Validity in Special Education Research. Learning Disabilities Research & Practice, 38(4), 311–319. https://doi.org/10.1111/ldrp.12326

Swaminathan, H., Rogers, H. J., Horner, R., Sugai, G. y Smolkowski, K. (2014). Regression Models for the Analysis of Single Case Designs. Neuropsychological Rehabilitation, 24(3–4), 554−571. https://doi.org/10.1080/09602011.2014.887586

Tanious, R. y Onghena, P. (2021). A Systematic Review of Applied Single-Case Research Published between 2016 and 2018: Study Designs, Randomization, Data Aspects, and Data Analysis. Behavior Research Methods, 53(4), 1371–1384. https://doi.org/10.3758/s13428-020-01502-4

Tate, R. L. y Perdices, M. (2019). Single-case Experimental Designs for Clinical Research and Neurorehabilitation Settings: Planning, Conduct, Analysis, and Reporting. Routledge.

Tate, R. L., Perdices, M., Rosenkoetter, U., McDonald, S., Togher, L., Shadish, W., Horner, R., Kratochwill, T., Barlow, D. H., Kazdin, A. E., Sampson, M., Shamseer, L. y Vohra, S. (2016). The Single-Case Reporting Guideline in Behavioural Interventions (SCRIBE) 2016: Explanation and elaboration. Archives of Scientific Psychology, 4(1), 10–31. https://doi.org/10.1037/arc0000027

Vannest, K. J. y Sallese, M. R. (2021). Benchmarking Effect Sizes in Single-Case Experimental Designs. Evidence-Based Communication Assessment and Intervention, 15(3), 142–165. https://doi.org/10.1080/17489539.2021.1886412

What Works Clearinghouse. (2022). Procedures and Standards Handbook, Version 5.0. U.S. Department of Education, Institute of Education Sciences. https://ies.ed.gov/ncee/wwc/Docs/referenceresources/Final_WWC-HandbookVer5.0-0-508.pdf

Wine, B., Freeman, T. R. y King, A. (2015). Withdrawal versus Reversal: A Necessary Distinction? Behavioral Interventions, 30(1), 87–93. https://doi.org/10.1002/bin.1399

Wolfe, K., Barton, E. E. y Meadan, H. (2019). Systematic Protocols for the Visual Analysis of Single-Case Research Data. Behavior Analysis in Practice, 12(2), 491–502. https://doi.org/10.1007/s40617-019-00336-7


Published

2025-06-30

How to Cite

Manolov, R. (2025). Diseño y Análisis de Datos de Diseños Experimentales de Caso Único. Acción Psicológica, 22(1), 7–22. https://doi.org/10.5944/ap.22.1.42833

Issue

Section

Special Issue: New Methodological Advances in Psychology
