
The Side Effects of Education: Research and Practice

September 12, 2018

In the early 1900s, Edward Thorndike emerged as the educational leader of the day (Lagemann, 1989). As a behaviorist, he asserted that students should gain knowledge in measurable ways (Thorndike, 1906). John Dewey, one of Thorndike's contemporaries, advanced a fundamentally different perspective and philosophy: students should wrestle with complex problems, engage in deep inquiry, and view their education as an opportunity to develop as citizens prepared to contribute critically to a democratic society (Dewey, 1916). However, as educational historian Ellen Condliffe Lagemann wrote in 1989, “one cannot understand the history of education in the United States during the twentieth century unless one realizes that Edward L. Thorndike won and John Dewey lost” (p. 184).

Over a century later, Thorndike’s legacy continues to shape educational policy, as Dr. Yong Zhao shows in his new book, What Works May Hurt: Side Effects in Education. In the first chapter, Zhao explains that the accountability movement associated with the No Child Left Behind (NCLB) legislation created several unintended consequences that further institutionalized norms traceable to Thorndike: a narrowed view of education’s purpose as raising reading and math scores, a national obsession with testing and standards, and a stifling of innovation driven by the requirement for “scientific evidence.”

In the second and third chapters of the book, Zhao describes the unintended side effects of two notable, scientifically-based strategies: Reading First and Direct Instruction.  With both examples, he presents the empirical evidence to support the implementation and promotion of these programs. However, he also describes the unintended consequences and potential harm that they may pose to student creativity, critical thinking, and long-term growth.  

In the NCLB legislation, the term “scientifically-based” appears over 100 times, and randomized controlled trials (RCTs) became known as the “gold standard” of research because of their use in the medical field (Zhao, 2018). RCTs aim to reduce selection bias by randomly assigning participants to treatment and control groups, producing an experiment that is easier to replicate. However, in education, reality rarely mirrors a controlled context. So while these “scientifically-based” studies may yield higher-quality evidence about a specific program or treatment, they do not necessarily predict outcomes in practice (Zhao, 2018).
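As a purely illustrative aside (not drawn from Zhao’s book), the basic logic of an RCT can be sketched in a few lines of Python. The simulation below randomly assigns simulated students to treatment or control and estimates the average effect as a difference in group means; the student count, baseline scores, and effect size are invented for the example.

```python
import random

random.seed(42)  # reproducible illustration

def simulate_rct(n_students=1000, true_effect=5.0):
    """Toy RCT: random assignment, then a difference-in-means estimate."""
    treated, control = [], []
    for _ in range(n_students):
        baseline = random.gauss(70, 10)      # a student's baseline score
        if random.random() < 0.5:            # random assignment to groups
            treated.append(baseline + true_effect)
        else:
            control.append(baseline)
    # Because assignment is random, the two groups are comparable on average,
    # so the difference in group means estimates the average treatment effect.
    return sum(treated) / len(treated) - sum(control) / len(control)

print(f"Estimated average treatment effect: {simulate_rct():.2f}")
```

Note that the estimate is an average over the whole simulated sample; nothing in it describes how the effect plays out for particular students or in particular contexts, which is exactly the gap between controlled evidence and practice that Zhao highlights.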

Much as NCLB narrowed the definition of the purpose of education (Zhao, 2018), a side effect of establishing RCTs as the gold standard may be a narrowed definition of “evidence.” According to guidance developed by the U.S. Department of Education, only studies that present statistically significant, positive findings should be considered strong or moderate evidence of an effect. Though quantitative data can certainly guide decision-making, many research questions might be better answered with qualitative observations, interviews, focus groups, or document analysis (Creswell & Plano Clark, 2011).

Further, quantitative outcome studies focus on the effects of an intervention but not necessarily on the process of implementation. When describing the pitfalls of the Reading First program, Zhao (2018) explains that many of the failed implementations did not adhere to the fidelity of the original design. Reading First may have been successful with some students, in some contexts, when implemented in specific ways. However, the “scientifically-based” evidence does not account for that variation because it reports only aggregate statistical outcomes.

Recently, I had a chance to ask Dr. Zhao about the side effects of this emphasis on quantitative research in a webinar sponsored by the Mendon-Upton Regional School District.


First, Dr. Zhao acknowledged that this persistent focus on the quantitative has had negative repercussions, as it leads researchers, policymakers, and educators to continue reinforcing existing practices. Second, he remarked that as society has changed, so have the demands placed on students. Therefore, the goal of research should not be to recreate the past but to find ways of studying programs that could shape the future. Finally, given all of these changes, Zhao explained that if the goal is to prepare students to create a new future, then researchers and educators cannot continue to measure them against the ideas of the past.

Since the beginning of institutionalized education, and the proclamations of psychologists such as Thorndike, there has been a predominant belief in a universally applicable set of knowledge, aptitudes, skills, dispositions, and attitudes. Dr. Zhao explains that this belief rests on the assumption that every individual has the same natural potential, social condition, and opportunity to achieve the same level, at the same time, and at the same pace. In chapter six of his book, Zhao explains that individual student variability is a fundamental driver of side effects. When considering new programs, educators, policymakers, and researchers need to acknowledge that “the effect of any treatment is the result of interaction between the characteristics of the treatment and characteristics of the individual” (p. 90).
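To make the quoted point concrete, here is a toy illustration of my own (not from the book) in which a program’s effect depends on a characteristic of the individual student: those who match the program’s assumptions gain, while others lose ground. The threshold, effect sizes, and skill distribution are all invented for the sketch.

```python
import random

random.seed(7)  # reproducible illustration

def individual_effect(prior_skill, boost=8.0, penalty=-4.0, threshold=60.0):
    """The treatment's effect depends on the individual: students above
    a hypothetical skill threshold benefit, while the rest are harmed."""
    return boost if prior_skill >= threshold else penalty

def simulate_population(n_students=1000):
    effects = []
    for _ in range(n_students):
        prior_skill = random.gauss(65, 15)   # varies student to student
        effects.append(individual_effect(prior_skill))
    average = sum(effects) / len(effects)
    share_harmed = sum(1 for e in effects if e < 0) / len(effects)
    return average, share_harmed

avg, harmed = simulate_population()
print(f"Average effect: {avg:.2f}")
print(f"Share of students harmed: {harmed:.0%}")
```

Even though the average effect comes out positive, a sizeable share of the simulated students is harmed: the basic structure of a side effect that aggregate outcome measures conceal.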

The history of education in the U.S. seems to be driven by the quest for a quantifiable, silver-bullet solution, even though no two districts, schools, or classrooms are the same. The problem has been further exacerbated by the use of a single methodology to examine the effects of proposed innovations. However, as Zhao argues throughout his book, we need to examine not only the effects, effectiveness, and side effects of these new endeavors but also how they might serve individual students in individual contexts.

As I mentioned in the first post in this series, I have been pondering the dichotomy between this emphasis on quantitative, controlled experiments as scientifically-based evidence and mixed-methods studies that account for reality in context. Whether considering the Improvement Science work led by the Carnegie Foundation, the idea of Research-Practice Partnerships (Henrick, Cobb, Penuel, Jackson, & Clark, 2017), or Design-Based Research, researchers, educators, and policymakers need to consider how these newer methods might overcome the side effects of Thorndike’s legacy that seem to be constraining the potential for future innovation.

References

Creswell, J.W., & Plano Clark, V.L. (2011). Designing and conducting mixed methods research. Thousand Oaks, CA: Sage.

Dewey, J. (1916). Democracy and education: An introduction to the philosophy of education. [html version]. Retrieved from http://www.gutenberg.org/files/852/852-h/852-h.htm

Henrick, E. C., Cobb, P., Penuel, W. R., Jackson, K., & Clark, T. (2017). Assessing research-practice partnerships. Retrieved from the William T. Grant Foundation: http://wtgrantfoundation.org/library/uploads/2017/10/Assessing-Research-Practice-Partnerships.pdf

Lagemann, E. C. (1989). The plural worlds of educational research. History of Education Quarterly, 29(2), 185-214. doi:10.2307/368309

Thorndike, E. (1906). The principles of teaching based on psychology [PDF version]. Retrieved from https://archive.org/details/principlesofteac00thor

Zhao, Y. (2018). What works may hurt: Side effects in education. New York, NY: Teachers College Press.
