Fordham Institute’s pretend research

The Thomas B. Fordham Institute has released a report, Evaluating the Content and Quality of Next Generation Assessments,[i] ostensibly an evaluative comparison of four testing programs, the Common Core-derived SBAC and PARCC, ACT’s Aspire, and the Commonwealth of Massachusetts’ MCAS.[ii] Of course, anyone familiar with Fordham’s past work knew beforehand which tests would win.

This latest Fordham Institute Common Core apologia is not so much research as a caricature of it.

  1. Instead of referencing a wide range of relevant research, Fordham references only friends from inside their echo chamber and others paid by the Common Core’s wealthy benefactors. But, they imply that they have covered a relevant and adequately wide range of sources.
  2. Instead of evaluating tests according to the industry standard Standards for Educational and Psychological Testing, or any of dozens of other freely-available and well-vetted test evaluation standards, guidelines, or protocols used around the world by testing experts, they employ “a brand new methodology” specifically developed for Common Core, for the owners of the Common Core, and paid for by Common Core’s funders.
  3. Instead of suggesting as fact only that which has been rigorously evaluated and accepted as fact by skeptics, the authors continue the practice of Common Core salespeople of attributing benefits to their tests for which no evidence exists.
  4. Instead of addressing any of the many sincere, profound critiques of their work, as confident and responsible researchers would do, the Fordham authors tell their critics to go away—“If you don’t care for the standards…you should probably ignore this study” (p. 4).
  5. Instead of writing in neutral language as real researchers do, the authors adopt the practice of coloring their language as so many Common Core salespeople do, attaching nice-sounding adjectives and adverbs to what serves their interest, and bad-sounding words to what does not.

1.  Common Core’s primary private financier, the Bill & Melinda Gates Foundation, pays the Fordham Institute handsomely to promote the Core and its associated testing programs.[iii] A cursory search through the Gates Foundation web site reveals $3,562,116 granted to Fordham since 2009 expressly for Common Core promotion or “general operating support.”[iv] Gates awarded an additional $653,534 between 2006 and 2009 for forming advocacy networks, which have since been used to push Common Core. All of the remaining Gates-to-Fordham grants listed supported work promoting charter schools in Ohio ($2,596,812), reputedly the nation’s worst.[v]

The other research entities involved in the latest Fordham study either directly or indirectly derive sustenance at the Gates Foundation dinner table:

  • the Human Resources Research Organization (HumRRO),[vi]
  • the Council of Chief State School Officers (CCSSO), co-holder of the Common Core copyright and author of the test evaluation “Criteria,”[vii]
  • the Stanford Center for Opportunity Policy in Education (SCOPE), headed by Linda Darling-Hammond, the chief organizer of one of the federally-subsidized Common Core-aligned testing programs, the Smarter-Balanced Assessment Consortium (SBAC),[viii] and
  • Student Achievement Partners, the organization that claims to have inspired the Common Core standards.[ix]

The Common Core’s grandees have only ever hired their own well-subsidized grantees to evaluate their products. The Buros Center for Testing at the University of Nebraska has conducted test reviews for decades, publishing many of them in its annual Mental Measurements Yearbook for the entire world to see and critique. Indeed, Buros exists to conduct test reviews, and retains hundreds of the world’s brightest and most independent psychometricians on its reviewer roster. Why did Common Core’s funders not hire genuine professionals from Buros to evaluate PARCC and SBAC? The non-psychometricians at the Fordham Institute would seem a vastly inferior substitute, if, that is, the purpose had genuinely been an objective evaluation.

2.  A second reason Fordham’s intentions are suspect rests with their choice of evaluation criteria. The “bible” of North American testing experts is the Standards for Educational and Psychological Testing, jointly produced by the American Psychological Association, National Council on Measurement in Education, and the American Educational Research Association. Fordham did not use it.[x]

Had Fordham compared the tests using the Standards for Educational and Psychological Testing (or any of a number of other widely-respected test evaluation standards, guidelines, or protocols[xi]), SBAC and PARCC would have flunked. They have yet to accumulate some of the most basic empirical evidence of reliability, validity, or fairness, and past experience with similar types of assessments suggests they will fail on all three counts.[xii]

Instead, Fordham chose to reference an alternate set of evaluation criteria concocted by the organization that co-owns the Common Core standards and co-sponsored their development (Council of Chief State School Officers, or CCSSO), drawing on the work of Linda Darling-Hammond’s SCOPE, the Center for Research on Educational Standards and Student Testing (CRESST), and a handful of others.[xiii],[xiv] Thus, Fordham compares SBAC and PARCC to other tests according to specifications that were designed for SBAC and PARCC.[xv]

The authors write, “The quality and credibility of an evaluation of this type rests largely on the expertise and judgment of the individuals serving on the review panels” (p. 12). A scan of the names of everyone in decision-making roles, however, reveals that Fordham relied on those they have hired before and whose decisions they could safely predict. In any case, given the evaluation criteria employed, the outcome was foreordained regardless of whom they hired to review, not unlike a rigged election in a dictatorship where voters’ choices are restricted to already-chosen candidates.

Still, PARCC and SBAC might have flunked even if Fordham had compared tests using all 24+ of CCSSO’s “Criteria.” But Fordham chose to compare on only 14 of the criteria.[xvi] And those just happened to be criteria mostly favoring PARCC and SBAC.

Without exception, the Fordham study avoided all the evaluation criteria in the categories:

  • “Meet overall assessment goals and ensure technical quality”,
  • “Yield valuable reports on student progress and performance”,
  • “Adhere to best practices in test administration”, and
  • “State specific criteria”.[xvii]

What types of test characteristics can be found in these neglected categories? Test security, providing timely data to inform instruction, validity, reliability, score comparability across years, transparency of test design, requiring involvement of each state’s K-12 educators and institutions of higher education, and more. Other characteristics often claimed for PARCC and SBAC, without evidence, cannot even be found in the CCSSO criteria (e.g., internationally benchmarked, backward mapping from higher education standards, fairness).

The report does not evaluate the “quality” of tests, as its title suggests; at best it is an alignment study. And, naturally, one would expect the Common Core consortium tests to be more aligned to the Common Core than other tests. The only evaluative criteria used from the CCSSO’s Criteria are in the two categories “Align to Standards—English Language Arts” and “Align to Standards—Mathematics” and, even then, only for grades 5 and 8.

Nonetheless, the authors claim, “The methodology used in this study is highly comprehensive” (p. 74).

The authors of the Pioneer Institute’s report How PARCC’s false rigor stunts the academic growth of all students[xviii] recommended strongly against the official adoption of PARCC after an analysis of its test items in reading and writing. They also did not recommend continuing with the current MCAS, which is also based on Common Core’s mediocre standards, chiefly because the quality of the grade 10 MCAS tests in math and ELA has deteriorated in the past seven or so years for reasons that are not yet clear. Rather, they recommended that Massachusetts return to its effective pre-Common Core standards and tests and assign the development and monitoring of the state’s mandated tests to a more responsible agency.

Perhaps the primary conceit of Common Core proponents is that the familiar multiple-choice/short answer/essay standardized tests ignore some, and arguably the better, parts of learning (the deeper, higher, more rigorous, whatever)[xix]. Ironically, it is they—opponents of traditional testing content and formats—who propose that standardized tests measure everything. By contrast, most traditional standardized test advocates do not suggest that standardized tests can or should measure any and all aspects of learning.

Consider this standard from the Linda Darling-Hammond, et al. source document for the CCSSO criteria:

“Research: Conduct sustained research projects to answer a question (including a self-generated question) or solve a problem, narrow or broaden the inquiry when appropriate, and demonstrate understanding of the subject under investigation. Gather relevant information from multiple authoritative print and digital sources, use advanced searches effectively, and assess the strengths and limitations of each source in terms of the specific task, purpose, and audience.”[xx]

Who would oppose this as a learning objective? But, does it make sense as a standardized test component? How does one objectively and fairly measure “sustained research” in the one- or two-minute span of a standardized test question? In PARCC tests, this is simulated by offering students snippets of documentary source material and grading them as having analyzed the problem well if they cite two of those already-made-available sources.

But, that is not how research works. It is hardly the type of deliberation that comes to most people’s mind when they think about “sustained research”. Advocates for traditional standardized testing would argue that standardized tests should be used for what standardized tests do well; “sustained research” should be measured more authentically.

The authors of the aforementioned Pioneer Institute report recommend, as their 7th policy recommendation for Massachusetts:

“Establish a junior/senior-year interdisciplinary research paper requirement as part of the state’s graduation requirements—to be assessed at the local level following state guidelines—to prepare all students for authentic college writing.”[xxi]

PARCC, SBAC, and the Fordham Institute propose that they can validly, reliably, and fairly measure the outcome of what is normally a weeks- or months-long project in a minute or two. It is attempting to measure that which cannot be well measured on standardized tests that makes PARCC and SBAC tests “deeper” than others. In practice, the alleged deeper parts are the most convoluted and superficial.

Appendix A of the source document for the CCSSO criteria provides three international examples of “high-quality assessments” in Singapore, Australia, and England.[xxiii] None are standardized test components. Rather, all are projects developed over extended periods of time—weeks or months—as part of regular course requirements.

Common Core proponents scoured the globe to locate “international benchmark” examples of the type of convoluted (i.e., “higher”, “deeper”) test questions included in PARCC and SBAC tests. They found none.

3.  The authors continue the Common Core sales tendency of attributing benefits to their tests for which no evidence exists. For example, the Fordham report claims that SBAC and PARCC will:

“make traditional ‘test prep’ ineffective” (p. 8)

“allow students of all abilities, including both at-risk and high-achieving youngsters, to demonstrate what they know and can do” (p. 8)

produce “test scores that more accurately predict students’ readiness for entry-level coursework or training” (p. 11)

“reliably measure the essential skills and knowledge needed … to achieve college and career readiness by the end of high school” (p. 11)

“…accurately measure student progress toward college and career readiness; and provide valid data to inform teaching and learning.” (p. 3)

eliminate the problem of “students … forced to waste time and money on remedial coursework.” (p. 73)

help “educators [who] need and deserve good tests that honor their hard work and give useful feedback, which enables them to improve their craft and boost their students’ success.” (p. 73)

The Fordham Institute has not a shred of evidence to support any of these grandiose claims. They have more in common with carnival fortune-telling than with empirical research. Granted, most of the statements refer to future outcomes, which cannot be known with certainty. But, that just affirms how irresponsible it is to make such claims absent any evidence.

Furthermore, in most cases, past experience would suggest just the opposite of what Fordham asserts. Test prep is more, not less, likely to be effective with SBAC and PARCC tests because the test item formats are complex (or, convoluted), introducing more “construct irrelevant variance”—that is, students will get lower scores for not managing to figure out formats or computer operations issues, even if they know the subject matter of the test. Disadvantaged and at-risk students tend to be the most disadvantaged by complex formatting and new technology.

As for Common Core, SBAC, and PARCC eliminating the “problem of” college remedial courses, that will be accomplished simply by cancelling remedial courses, whether or not they are needed, and by lowering college entry-course standards to the level of current remedial courses.

4.  When not dismissing or denigrating SBAC and PARCC critiques, the Fordham report evades them, even suggesting that critics should not read it: “If you don’t care for the standards…you should probably ignore this study” (p. 4).

Yet, cynically, in the very first paragraph the authors invoke the name of Sandy Stotsky, one of their most prominent adversaries, and a scholar of curriculum and instruction so widely respected that she could easily have gotten wealthy had she chosen to succumb, as so many others have, to the financial temptations of the Common Core’s profligacy. Stotsky authored the Fordham Institute’s “very first study” in 1997, apparently. Presumably, the authors of this report drop her name to suggest that they are broad-minded. (It might also suggest that they are now willing to publish anything for a price.)

Tellingly, one will find Stotsky’s name nowhere after the first paragraph. None of her (or anyone else’s) many devastating critiques of the Common Core tests is either mentioned or referenced. Genuine research does not hide or dismiss its critiques; it addresses them.

Ironically, the authors write, “A discussion of [test] qualities, and the types of trade-offs involved in obtaining them, are precisely the kinds of conversations that merit honest debate.” Indeed.

5.  Instead of writing in neutral language as real researchers do, the authors adopt the habit of coloring their language as Common Core salespeople do. They attach nice-sounding adjectives and adverbs to what they like, and bad-sounding words to what they don’t.

For PARCC and SBAC one reads:

“strong content, quality, and rigor”

“stronger tests, which encourage better, broader, richer instruction”

“tests that focus on the essential skills and give clear signals”

“major improvements over the previous generation of state tests”

“complex skills they are assessing.”

“high-quality assessment”

“high-quality assessments”

“high-quality tests”

“high-quality test items”

“high quality and provide meaningful information”

“carefully-crafted tests”

“these tests are tougher”

“more rigorous tests that challenge students more than they have been challenged in the past”

For other tests one reads:

“low-quality assessments poorly aligned with the standards”

“will undermine the content messages of the standards”

“a best-in-class state assessment, the 2014 MCAS, does not measure many of the important competencies that are part of today’s college and career readiness standards”

“have generally focused on low-level skills”

“have given students and parents false signals about the readiness of their children for postsecondary education and the workforce”

Appraising its own work, Fordham writes:

“groundbreaking evaluation”

“meticulously assembled panels”

“highly qualified yet impartial reviewers”

Considering those who have adopted SBAC or PARCC, Fordham writes:

“thankfully, states have taken courageous steps”

“states’ adoption of college and career readiness standards has been a bold step in the right direction.”

“adopting and sticking with high-quality assessments requires courage.”

 

A few other points bear mentioning. The Fordham Institute was granted access to operational SBAC and PARCC test items. Over the course of a few months in 2015, the Pioneer Institute, a strong critic of Common Core, PARCC, and SBAC, appealed for similar access to PARCC items. The convoluted run-around responses from PARCC officials excelled at bureaucratic stonewalling. Despite numerous requests, Pioneer never received access.

The Fordham report claims that PARCC and SBAC are governed by “member states”, whereas ACT Aspire is owned by a private organization. Actually, the Common Core Standards are owned by two private, unelected organizations, the Council of Chief State School Officers and the National Governors’ Association, and only each state’s chief school officer sits on PARCC and SBAC panels. Individual states actually have far more say-so if they adopt ACT Aspire (or their own test) than if they adopt PARCC or SBAC. A state adopts ACT Aspire under the terms of a negotiated, time-limited contract. By contrast, a state or, rather, its chief state school officer, has but one vote among many around the tables at PARCC and SBAC. With ACT Aspire, a state controls the terms of the relationship. With SBAC and PARCC, it does not.[xxiv]

Just so you know, on page 71, Fordham recommends that states eliminate any tests that are not aligned to the Common Core Standards, in the interest of efficiency, supposedly.

In closing, it is only fair to mention the good news in the Fordham report. It promises on page 8, “We at Fordham don’t plan to stay in the test-evaluation business”.

 

[i] Nancy Doorey & Morgan Polikoff. (2016, February). Evaluating the content and quality of next generation assessments. With a Foreword by Amber M. Northern & Michael J. Petrilli. Washington, DC: Thomas B. Fordham Institute. http://edexcellence.net/publications/evaluating-the-content-and-quality-of-next-generation-assessments

[ii] PARCC is the Partnership for Assessment of Readiness for College and Careers; SBAC is the Smarter-Balanced Assessment Consortium; MCAS is the Massachusetts Comprehensive Assessment System; ACT Aspire is not an acronym (though, originally ACT stood for American College Test).

[iii] The reason for inventing a Fordham Institute when a Fordham Foundation already existed may have had something to do with taxes, but it also allows Chester Finn, Jr. and Michael Petrilli to each pay themselves two six-figure salaries instead of just one.

[iv] http://www.gatesfoundation.org/search#q/k=Fordham

[v] See, for example, http://www.ohio.com/news/local/charter-schools-misspend-millions-of-ohio-tax-dollars-as-efforts-to-police-them-are-privatized-1.596318 ; http://www.cleveland.com/metro/index.ssf/2015/03/ohios_charter_schools_ridicule.html ; http://www.dispatch.com/content/stories/local/2014/12/18/kasich-to-revamp-ohio-laws-on-charter-schools.html ; https://www.washingtonpost.com/news/answer-sheet/wp/2015/06/12/troubled-ohio-charter-schools-have-become-a-joke-literally/

[vi] HumRRO has produced many favorable reports for Common Core-related entities, including alignment studies in Kentucky, New York State, California, and Connecticut.

[vii] CCSSO has received 23 grants from the Bill & Melinda Gates Foundation from “2009 and earlier” to 2016 collectively exceeding $100 million. http://www.gatesfoundation.org/How-We-Work/Quick-Links/Grants-Database#q/k=CCSSO

[viii] http://www.gatesfoundation.org/How-We-Work/Quick-Links/Grants-Database#q/k=%22Stanford%20Center%20for%20Opportunity%20Policy%20in%20Education%22

[ix] Student Achievement Partners has received four grants from the Bill & Melinda Gates Foundation from 2012 to 2015 exceeding $13 million. http://www.gatesfoundation.org/How-We-Work/Quick-Links/Grants-Database#q/k=%22Student%20Achievement%20Partners%22

[x] The authors write that the standards they use are “based on” the real Standards. But, that is like saying that Cheez Whiz is based on cheese. Some real cheese might be mixed in there, but it’s not the product’s most distinguishing ingredient.

[xi] (e.g., the International Test Commission’s (ITC) Guidelines for Test Use; the ITC Guidelines on Quality Control in Scoring, Test Analysis, and Reporting of Test Scores; the ITC Guidelines on the Security of Tests, Examinations, and Other Assessments; the ITC’s International Guidelines on Computer-Based and Internet-Delivered Testing; the European Federation of Psychologists’ Association (EFPA) Test Review Model; the Standards of the Joint Committee on Testing Practices)

[xii] Despite all the adjectives and adverbs implying newness to PARCC and SBAC as “Next Generation Assessment”, it has all been tried before and failed miserably. Indeed, many of the same persons involved in past fiascos are pushing the current one. The allegedly “higher-order”, more “authentic”, performance-based tests administered in Maryland (MSPAP), California (CLAS), and Kentucky (KIRIS) in the 1990s failed because of unreliable scores; volatile test score trends; secrecy of items and forms; an absence of individual scores in some cases; individuals being judged on group work in some cases; large expenditures of time; inconsistent (and some improper) test preparation procedures from school to school; inconsistent grading on open-ended response test items; long delays between administration and release of scores; little feedback for students; and no substantial evidence after several years that education had improved. As one should expect, instruction had changed as test proponents desired, but without empirical gains or perceived improvement in student achievement. Parents, politicians, and measurement professionals alike overwhelmingly rejected these dysfunctional tests.

See, for example, For California: Michael W. Kirst & Christopher Mazzeo, (1997, December). The Rise, Fall, and Rise of State Assessment in California: 1993-96, Phi Delta Kappan, 78(4) Committee on Education and the Workforce, U.S. House of Representatives, One Hundred Fifth Congress, Second Session, (1998, January 21). National Testing: Hearing, Granada Hills, CA. Serial No. 105-74; Representative Steven Baldwin, (1997, October). Comparing assessments and tests. Education Reporter, 141. See also Klein, David. (2003). “A Brief History Of American K-12 Mathematics Education In the 20th Century”, In James M. Royer, (Ed.), Mathematical Cognition, (pp. 175–226). Charlotte, NC: Information Age Publishing. For Kentucky: ACT. (1993). “A study of core course-taking patterns. ACT-tested graduates of 1991-1993 and an investigation of the relationship between Kentucky’s performance-based assessment results and ACT-tested Kentucky graduates of 1992”. Iowa City, IA: Author; Richard Innes. (2003). Education research from a parent’s point of view. Louisville, KY: Author. http://www.eddatafrominnes.com/index.html ; KERA Update. (1999, January). Misinformed, misled, flawed: The legacy of KIRIS, Kentucky’s first experiment. For Maryland: P. H. Hamp, & C. B. Summers. (2002, Fall). “Education.” In P. H. Hamp & C. B. Summers (Eds.), A guide to the issues 2002–2003. Maryland Public Policy Institute, Rockville, MD. http://www.mdpolicy.org/docLib/20051030Education.pdf ; Montgomery County Public Schools. (2002, Feb. 11). “Joint Teachers/Principals Letter Questions MSPAP”, Public Announcement, Rockville, MD. http://www.montgomeryschoolsmd.org/press/index.aspx?pagetype=showrelease&id=644 ; HumRRO. (1998). Linking teacher practice with statewide assessment of education. Alexandria, VA: Author. http://www.humrro.org/corpsite/page/linking-teacher-practice-statewide-assessment-education

[xiii] http://www.ccsso.org/Documents/2014/CCSSO%20Criteria%20for%20High%20Quality%20Assessments%2003242014.pdf

[xiv] A rationale is offered for why they had to develop a brand new set of test evaluation criteria (p. 13). Fordham claims that new criteria were needed so that some criteria could be weighted more than others. But, weights could easily be applied to any criteria, including the tried-and-true, preexisting ones.

[xv] For an extended critique of the CCSSO Criteria employed in the Fordham report, see “Appendix A. Critique of Criteria for Evaluating Common Core-Aligned Assessments” in Mark McQuillan, Richard P. Phelps, & Sandra Stotsky. (2015, October). How PARCC’s false rigor stunts the academic growth of all students. Boston: Pioneer Institute, pp. 62-68. http://pioneerinstitute.org/news/testing-the-tests-why-mcas-is-better-than-parcc/

[xvi] Doorey & Polikoff, p. 14.

[xvii] MCAS bests PARCC and SBAC according to several criteria specific to the Commonwealth, such as the requirements under the current Massachusetts Education Reform Act (MERA) that the test serve as a grade 10 high school exit exam, test students in several subject fields (and not just ELA and math), and provide specific and timely instructional feedback.

[xviii] McQuillan, M., Phelps, R.P., & Stotsky, S. (2015, October). How PARCC’s false rigor stunts the academic growth of all students. Boston: Pioneer Institute. http://pioneerinstitute.org/news/testing-the-tests-why-mcas-is-better-than-parcc/

[xix] It is perhaps the most enlightening paradox that, among Common Core proponents’ profuse expulsion of superlative adjectives and adverbs advertising their “innovative”, “next generation” research results, the words “deeper” and “higher” mean the same thing.

[xx] The document asserts, “The Common Core State Standards identify a number of areas of knowledge and skills that are clearly so critical for college and career readiness that they should be targeted for inclusion in new assessment systems.” Linda Darling-Hammond, Joan Herman, James Pellegrino, Jamal Abedi, J. Lawrence Aber, Eva Baker, Randy Bennett, Edmund Gordon, Edward Haertel, Kenji Hakuta, Andrew Ho, Robert Lee Linn, P. David Pearson, James Popham, Lauren Resnick, Alan H. Schoenfeld, Richard Shavelson, Lorrie A. Shepard, Lee Shulman, and Claude M. Steele. (2013). Criteria for high-quality assessment. Stanford, CA: Stanford Center for Opportunity Policy in Education; Center for Research on Student Standards and Testing, University of California at Los Angeles; and Learning Sciences Research Institute, University of Illinois at Chicago, p. 7. https://edpolicy.stanford.edu/publications/pubs/847

[xxi] McQuillan, Phelps, & Stotsky, p. 46.

[xxiii] Linda Darling-Hammond, et al., pp. 16-18. https://edpolicy.stanford.edu/publications/pubs/847

[xxiv] For an in-depth discussion of these governance issues, see Peter Wood’s excellent Introduction to Drilling Through the Core, http://www.amazon.com/gp/product/0985208694

Fordham Institute’s pretend research was originally published on Nonpartisan Education Blog


Fordham report predictable, conflicted

On November 17, the Massachusetts Board of Elementary and Secondary Education (BESE) will decide the fate of the Massachusetts Comprehensive Assessment System (MCAS) and the Partnership for Assessment of Readiness for College and Careers (PARCC) in the Bay State. MCAS is homegrown; PARCC is not. Barring unexpected compromises or subterfuges, only one program will survive.

Over the past year, PARCC promoters have released a stream of reports comparing the two testing programs. The latest arrives from the Thomas B. Fordham Institute in the form of a partial “evaluation of the content and quality of the 2014 MCAS and PARCC” relative to the “Criteria for High Quality Assessments”[i] produced by one of the organizations that developed Common Core’s standards, with the rest of the report to be delivered in January, it says.[ii]

PARCC continues to insult our intelligence. The language of the “special report” sent to Mitchell Chester, Commissioner of Elementary and Secondary Education, reads like that of a legitimate study.[iii] The research it purports to have done even incorporated some processes typically employed in studies with genuine intentions of objectivity.

No such intentions could validly be ascribed to the Fordham report.

First, Common Core’s primary private financier, the Bill & Melinda Gates Foundation, pays the Fordham Institute handsomely to promote the standards and its associated testing programs. A cursory search through the Gates Foundation web site reveals $3,562,116 granted to Fordham since 2009 expressly for Common Core promotion or “general operating support.”[iv] Gates awarded an additional $653,534 between 2006 and 2009 for forming advocacy networks, which have since been used to push Common Core. All of the remaining Gates-to-Fordham grants listed supported work promoting charter schools in Ohio ($2,596,812), reputedly the nation’s worst.[v]

The other research entities involved in the latest Fordham study either directly or indirectly derive sustenance at the Gates Foundation dinner table:

– the Human Resources Research Organization (HumRRO), which will deliver another pro-PARCC report sometime soon,[vi]
– the Council of Chief State School Officers (CCSSO), co-holder of the Common Core copyright and author of the “Criteria,”[vii]
– the Stanford Center for Opportunity Policy in Education (SCOPE), headed by Linda Darling-Hammond, the chief organizer of the other federally-subsidized Common Core-aligned testing program, the Smarter-Balanced Assessment Consortium (SBAC),[viii] and
– Student Achievement Partners, the organization that claims to have inspired the Common Core standards.[ix]

Fordham acknowledges the pervasive conflicts of interest it says it faced in locating people to evaluate MCAS versus PARCC: “…it is impossible to find individuals with zero conflicts who are also experts.”[x] But, the statement is false; hundreds, perhaps even thousands, of individuals experienced in “alignment or assessment development studies” were available.[xi] That they were not called reveals Fordham’s preferences.

A second reason Fordham’s intentions are suspect rests with their choice of evaluation criteria. The “bible” of test developers is the Standards for Educational and Psychological Testing, jointly produced by the American Psychological Association, National Council on Measurement in Education, and the American Educational Research Association. Fordham did not use it.

Instead, Fordham chose to reference an alternate set of evaluation criteria concocted by the organization that co-sponsored the development of Common Core’s standards (Council for Chief State School Officers, or CCSSO), drawing on the work of Linda Darling-Hammond’s SCOPE, the Center for Research on Educational Standards and Student Testing (CRESST), and a handful of others. Thus, Fordham compares PARCC to MCAS according to specifications that were designed for PARCC.[xii]

Had Fordham compared MCAS and PARCC using the Standards for Educational and Psychological Testing, MCAS would have passed and PARCC would have flunked. PARCC has not yet accumulated the most basic empirical evidence of reliability, validity, or fairness, and past experience with similar types of assessments suggests it will fail on all three counts.[xiii]

Third, PARCC should have been flunked had Fordham compared MCAS and PARCC using all 24+ of CCSSO’s “Criteria.” But Fordham chose to compare on only 15 of the criteria.[xiv] And those just happened to be the criteria favoring PARCC.

Fordham agreed to compare the two tests with respect to their alignment to Common Core-based criteria. With just one exception, the Fordham study avoided all the criteria in the groups “Meet overall assessment goals and ensure technical quality”, “Yield valuable reports on student progress and performance”, “Adhere to best practices in test administration”, and “State specific criteria”.[xv]

Not surprisingly, Fordham’s “memo” favors the Bay State’s adoption of PARCC. However, the authors of How PARCC’s false rigor stunts the academic growth of all students[xvi], released one week before Fordham’s “memo,” recommend strongly against the official adoption of PARCC after an analysis of its test items in reading and writing. They also do not recommend continuing with the current MCAS, which is also based on Common Core’s mediocre standards, chiefly because the quality of the grade 10 MCAS tests in math and ELA has deteriorated in the past seven or so years for reasons that are not yet clear. Rather, they recommend that Massachusetts return to its effective pre-Common Core standards and tests and assign the development and monitoring of the state’s mandated tests to a more responsible agency.

Perhaps the primary conceit of Common Core proponents is that ordinary multiple-choice-predominant standardized tests ignore some, and arguably the better, parts of learning (the deeper, higher, more rigorous, whatever)[xvii]. Ironically, it is they—opponents of traditional testing regimes—who propose that standardized tests measure everything. By contrast, most traditional standardized test advocates do not suggest that standardized tests can or should measure any and all aspects of learning.

Consider this standard from the Linda Darling-Hammond, et al. source document for the CCSSO criteria:

“Research: Conduct sustained research projects to answer a question (including a self-generated question) or solve a problem, narrow or broaden the inquiry when appropriate, and demonstrate understanding of the subject under investigation. Gather relevant information from multiple authoritative print and digital sources, use advanced searches effectively, and assess the strengths and limitations of each source in terms of the specific task, purpose, and audience.”[xviii]

Who would oppose this as a learning objective? But, does it make sense as a standardized test component? How does one objectively and fairly measure “sustained research” in the one- or two-minute span of a standardized test question? In PARCC tests, this is done by offering students snippets of documentary source material and grading them as having analyzed the problem well if they cite two of those already-made-available sources.

But, that is not how research works. It is hardly the type of deliberation that comes to most people’s mind when they think about “sustained research”. Advocates for traditional standardized testing would argue that standardized tests should be used for what standardized tests do well; “sustained research” should be measured more authentically.

The authors of the aforementioned Pioneer Institute report recommend, as their 7th policy recommendation for Massachusetts:

“Establish a junior/senior-year interdisciplinary research paper requirement as part of the state’s graduation requirements—to be assessed at the local level following state guidelines—to prepare all students for authentic college writing.”[xix]

PARCC and the Fordham Institute propose that they can validly, reliably, and fairly measure the outcome of what is normally a weeks- or months-long project in a minute or two.[xx] It is attempting to measure that which cannot be well measured on standardized tests that makes PARCC tests “deeper” than others. In practice, the alleged deeper parts of PARCC are the most convoluted and superficial.

Appendix A of the source document for the CCSSO criteria provides three international examples of “high-quality assessments” in Singapore, Australia, and England.[xxi] None are standardized test components. Rather, all are projects developed over extended periods of time—weeks or months—as part of regular course requirements.

Common Core proponents scoured the globe to locate “international benchmark” examples of the type of convoluted (i.e., “higher”, “deeper”) test questions included in PARCC and SBAC tests. They found none.

Dr. Richard P. Phelps is editor or author of four books: Correcting Fallacies about Educational and Psychological Testing (APA, 2008/2009); Standardized Testing Primer (Peter Lang, 2007); Defending Standardized Testing (Psychology Press, 2005); and Kill the Messenger (Transaction, 2003, 2005), and founder of the Nonpartisan Education Review (http://nonpartisaneducation.org).

[i] http://www.ccsso.org/Documents/2014/CCSSO%20Criteria%20for%20High%20Quality%20Assessments%2003242014.pdf

[ii] Michael J. Petrilli & Amber M. Northern. (2015, October 30). Memo to Dr. Mitchell Chester, Commissioner of Elementary and Secondary Education, Massachusetts Department of Elementary and Secondary Education. Washington, DC: Thomas B. Fordham Institute. http://edexcellence.net/articles/evaluation-of-the-content-and-quality-of-the-2014-mcas-and-parcc-relative-to-the-ccsso

[iii] Nancy Doorey & Morgan Polikoff. (2015, October). Special report: Evaluation of the Massachusetts Comprehensive Assessment System (MCAS) and the Partnership for the Assessment of Readiness for College and Careers (PARCC). Washington, DC: Thomas B. Fordham Institute. http://edexcellence.net/articles/evaluation-of-the-content-and-quality-of-the-2014-mcas-and-parcc-relative-to-the-ccsso

[iv] http://www.gatesfoundation.org/search#q/k=Fordham

[v] See, for example, http://www.ohio.com/news/local/charter-schools-misspend-millions-of-ohio-tax-dollars-as-efforts-to-police-them-are-privatized-1.596318 ; http://www.cleveland.com/metro/index.ssf/2015/03/ohios_charter_schools_ridicule.html ; http://www.dispatch.com/content/stories/local/2014/12/18/kasich-to-revamp-ohio-laws-on-charter-schools.html ; https://www.washingtonpost.com/news/answer-sheet/wp/2015/06/12/troubled-ohio-charter-schools-have-become-a-joke-literally/

[vi] HumRRO has produced many favorable reports for Common Core-related entities, including alignment studies in Kentucky, New York State, California, and Connecticut.

[vii] CCSSO has received 22 grants from the Bill & Melinda Gates Foundation from “2009 and earlier” to 2015 exceeding $90 million. http://www.gatesfoundation.org/How-We-Work/Quick-Links/Grants-Database#q/k=CCSSO

[viii] http://www.gatesfoundation.org/How-We-Work/Quick-Links/Grants-Database#q/k=%22Stanford%20Center%20for%20Opportunity%20Policy%20in%20Education%22

[ix] Student Achievement Partners has received four grants from the Bill & Melinda Gates Foundation from 2012 to 2015 exceeding $13 million. http://www.gatesfoundation.org/How-We-Work/Quick-Links/Grants-Database#q/k=%22Student%20Achievement%20Partners%22

[x] Doorey & Polikoff, p. 4.

[xi] To cite just one example, the world-renowned Center for Educational Measurement at the University of Massachusetts-Amherst has accumulated abundant experience conducting alignment studies.

[xii] For an extended critique of the CCSSO criteria employed in the Fordham report, see “Appendix A. Critique of Criteria for Evaluating Common Core-Aligned Assessments” in Mark McQuillan, Richard P. Phelps, & Sandra Stotsky. (2015, October). How PARCC’s false rigor stunts the academic growth of all students. Boston: Pioneer Institute, pp. 62-68. http://pioneerinstitute.org/news/testing-the-tests-why-mcas-is-better-than-parcc/

[xiii] Despite all the adjectives and adverbs implying newness to PARCC and SBAC as “Next Generation Assessment”, it has all been tried before and failed miserably. Indeed, many of the same persons involved in past fiascos are pushing the current one. The allegedly “higher-order”, more “authentic”, performance-based tests administered in Maryland (MSPAP), California (CLAS), and Kentucky (KIRIS) in the 1990s failed because of unreliable scores; volatile test score trends; secrecy of items and forms; an absence of individual scores in some cases; individuals being judged on group work in some cases; large expenditures of time; inconsistent (and some improper) test preparation procedures from school to school; inconsistent grading on open-ended response test items; long delays between administration and release of scores; little feedback for students; and no substantial evidence after several years that education had improved. As one should expect, instruction had changed as test proponents desired, but without empirical gains or perceived improvement in student achievement. Parents, politicians, and measurement professionals alike overwhelmingly rejected these dysfunctional tests.

See, for example, For California: Michael W. Kirst & Christopher Mazzeo, (1997, December). The Rise, Fall, and Rise of State Assessment in California: 1993-96, Phi Delta Kappan, 78(4) Committee on Education and the Workforce, U.S. House of Representatives, One Hundred Fifth Congress, Second Session, (1998, January 21). National Testing: Hearing, Granada Hills, CA. Serial No. 105-74; Representative Steven Baldwin, (1997, October). Comparing assessments and tests. Education Reporter, 141. See also Klein, David. (2003). “A Brief History Of American K-12 Mathematics Education In the 20th Century”, In James M. Royer, (Ed.), Mathematical Cognition, (pp. 175–226). Charlotte, NC: Information Age Publishing. For Kentucky: ACT. (1993). “A study of core course-taking patterns. ACT-tested graduates of 1991-1993 and an investigation of the relationship between Kentucky’s performance-based assessment results and ACT-tested Kentucky graduates of 1992”. Iowa City, IA: Author; Richard Innes. (2003). Education research from a parent’s point of view. Louisville, KY: Author. http://www.eddatafrominnes.com/index.html ; KERA Update. (1999, January). Misinformed, misled, flawed: The legacy of KIRIS, Kentucky’s first experiment. For Maryland: P. H. Hamp, & C. B. Summers. (2002, Fall). “Education.” In P. H. Hamp & C. B. Summers (Eds.), A guide to the issues 2002–2003. Maryland Public Policy Institute, Rockville, MD. http://www.mdpolicy.org/docLib/20051030Education.pdf ; Montgomery County Public Schools. (2002, Feb. 11). “Joint Teachers/Principals Letter Questions MSPAP”, Public Announcement, Rockville, MD. http://www.montgomeryschoolsmd.org/press/index.aspx?pagetype=showrelease&id=644 ; HumRRO. (1998). Linking teacher practice with statewide assessment of education. Alexandria, VA: Author. http://www.humrro.org/corpsite/page/linking-teacher-practice-statewide-assessment-education

[xiv] Doorey & Polikoff, p. 23.

[xv] MCAS bests PARCC according to several criteria specific to the Commonwealth, such as the requirements under the current Massachusetts Education Reform Act (MERA) that the test serve as a grade 10 high school exit exam, test students in several subject fields (and not just ELA and math), and provide specific and timely instructional feedback.

[xvi] McQuillan, M., Phelps, R.P., & Stotsky, S. (2015, October). How PARCC’s false rigor stunts the academic growth of all students. Boston: Pioneer Institute. http://pioneerinstitute.org/news/testing-the-tests-why-mcas-is-better-than-parcc/

[xvii] It is perhaps the most enlightening paradox that, among Common Core proponents’ profuse expulsion of superlative adjectives and adverbs advertising their “innovative”, “next generation” research results, the words “deeper” and “higher” mean the same thing.

[xviii] The document asserts, “The Common Core State Standards identify a number of areas of knowledge and skills that are clearly so critical for college and career readiness that they should be targeted for inclusion in new assessment systems.” Linda Darling-Hammond, Joan Herman, James Pellegrino, Jamal Abedi, J. Lawrence Aber, Eva Baker, Randy Bennett, Edmund Gordon, Edward Haertel, Kenji Hakuta, Andrew Ho, Robert Lee Linn, P. David Pearson, James Popham, Lauren Resnick, Alan H. Schoenfeld, Richard Shavelson, Lorrie A. Shepard, Lee Shulman, and Claude M. Steele. (2013). Criteria for high-quality assessment. Stanford, CA: Stanford Center for Opportunity Policy in Education; Center for Research on Student Standards and Testing, University of California at Los Angeles; and Learning Sciences Research Institute, University of Illinois at Chicago, p. 7. https://edpolicy.stanford.edu/publications/pubs/847

[xix] McQuillan, Phelps, & Stotsky, p. 46.

[xxi] Linda Darling-Hammond, et al., pp. 16-18. https://edpolicy.stanford.edu/publications/pubs/847

Fordham report predictable, conflicted was originally published on Nonpartisan Education Blog


Brief sketch of the problem…

In the United States, we pay attention to and celebrate the work of HS athletes.
We carefully ignore the exemplary academic work of diligent HS scholars; the results follow as you might expect: we get what we want.

Will Fitzhugh

———————————
HIGH SCHOOL ATHLETES COLLEGE SIGNING NEWS!!—GEORGIA!!
Atlanta Journal-Constitution
11:02 am Wednesday, February 5th, 2014
AJC’s Signing Day Central

By Michael Carvell

Welcome to the AJC’s Signing Day Central. This is the place to be to catch up with all the recruiting information with UGA, Georgia Tech and recruits from the state of Georgia. We will update the news as it happens, and interact on the message board below.

UGA’S TOP TARGETS FOR WEDNESDAY…AND RESULTS

Lorenzo Carter, DE, 6-5, 240, Norcross: UGA reeled in the big fish, landing the state’s No.1 overall prospect for the first time since 2011 (Josh Harvey-Clemons).
Isaiah McKenzie, WR, 5-8, 175, Ft. Lauderdale (Fla.) American Heritage: This was one of two big surprises for UGA to kick off signing day. McKenzie got a last-minute offer from UGA and picked the Bulldogs because of his best buddy and high school teammate, 5-star Sony Michel (signed with UGA).
Hunter Atkinson, TE, 6-6, 250, West Hall: The Cincinnati commit got a last-minute call from Mark Richt and flipped to UGA. I’m not going to say we saw it coming, but … Atkinson had grayshirt offers from Alabama, Auburn and UCF.
Tavon Ross, S, 6-1, 200, Bleckley County: The Missouri commit took an official visit to UGA but decided to stick with Missouri. He’s signed.
Andrew Williams, DE, 6-4, 247, ECLA: He signed with Auburn over Clemson and Auburn. He joked with Auburn’s Gus Malzahn when he called with the news, saying “I’m sorry to inform you….. That I will be attending your school,” according to 247sports.com’s Kipp Adams.
Tyre McCants, WR-DB, 5-11, 200, Niceville, Fla.: Turned down late interest from UGA to sign with USF.

UGA COMMITS TO WORRY ABOUT? NOPE

Lamont Gaillard, DT, 6-3, 310, Fayetteville (N.C.) Pine Forest: This was probably the biggest scare on signing day. Gaillard’s coach said he signed with UGA over Miami at 9 a.m. but UGA didn’t announce it until 10:35 a.m.
Gilbert Johnson, WR, 6-2, 190, Homestead (Fla.) Senior: Speedster scared UGA after he told Rivals.com on Sunday night that he would sign with Bulldogs, South Florida or Louisville .. and then went MIA. UGA can relax after he was one of team’s first signees.

Kendall Gant, safety, 6-2, 180, Lakeland (Fla.): He flipped from UGA to Marshall on Tuesday due to “academic reasons,” according to his coach, who also claimed his offer “got pulled” by the Bulldogs.

For the rest of UGA’s Big Board for 2014, including a rundown of commitments, go HERE

GEORGIA TECH’S TOP TARGETS FOR WEDNESDAY

Myles Autry, ATH, 5-9, 170, Norcross: Georgia Tech fans are always screaming about wanting to have a high-profile recruit commit on signing day on national TV. Autry picked Georgia Tech over FSU on ESPNU cameras. His older brother plays wide receiver for the Yellow Jackets.
Mike Sawyers, DT, 6-2, 300, Nashville, Tenn.: He signed with Tennessee after taking an official visit to Volunteers on the final weekend before signing day.

For the rest of Georgia Tech’s Big Board for 2014, including a rundown of commitments, go HERE

======================================

FOR COMPARISON, HERE IS SOME EXEMPLARY HS ACADEMIC WORK, BY DILIGENT HIGH SCHOOL STUDENTS, WHICH THE MEDIA (completely) IGNORED. We take it for granted that the media (including their coverage of education) should ignore the exemplary academic work of HS students, but we also ignore the consequences of doing that.

[height and weight of authors omitted…]

High School History Students “Teach with Examples.” The Concord Review reports:

Nathaniel Bernstein of San Francisco, California: Bernstein, a senior at San Francisco University High School, published an 11,176-word history research paper on the unintended consequences of Direct Legislation in California. (Harvard)

Gabriel Grand of Bronx, New York: Grand, a senior at Horace Mann School, published a 9,250-word history research paper on the difficulties The New York Times had with the anti-semitism of the day and also in covering the Holocaust. (Harvard)

Reid Grinspoon of Waltham, Massachusetts: Grinspoon, a senior at Gann Academy, published a 7,380-word history research paper on the defeat of legislation to allow eugenic sterilization in Massachusetts. (Harvard)

Emma Scoble of Oakland, California: Scoble, a senior at the College Preparatory Academy, published a 9,657-word history research paper on the Broderick-Terry Duel, which defeated pro-slavery forces in California in 1859. (NYU)

Brief sketch of the problem… was originally published on Nonpartisan Education Blog


On Writing

“First, we stopped demanding that students read anything very challenging in school, and then we stopped holding our teachers or students accountable for the quality of student writing.”
On Writing
National Center on Education and the Economy
By Marc Tucker on January 17, 2014 10:21 AM
 
 
I read a news story the other day that made my heart sink.  It was written by a professor in a business school at a public university.  He told a tale in which his colleagues agreed that the writing skills of their students were miserable, but none would take responsibility for dealing with it.  They were not, they said, writing teachers, and could not be expected to spend time doing what those miserable souls in the understaffed writing labs were expected to do.  This was just as true of the professors in the English department as it was of all their other colleagues.  The author of the article was pretty astute about the causes of that refusal.  Teaching someone to write well takes a lot of time and individual attention, he pointed out.  Professors in university departments are not compensated for that time.  Teaching students to write will take time away from what they need to do to advance in their profession.  And it is not likely to earn them the esteem of their colleagues.  So it was no surprise that his colleagues suggested that the students would be going into a business environment in which presentations were usually done with power points, so maybe the students did not have to learn how to write anyway.  Yes, they said that!
A year ago, my own organization reported on a study we had done of what is required of freshmen in their first-year credit-bearing courses in a typical community college. We reported that the texts they are assigned are generally written at an 11th or 12th grade level and the students cannot read them, so their instructors are now used to summarizing the gist of the texts in power points they prepare for their students. In these circumstances, it is hardly surprising that they assign little or no writing to their students. They have evidently anticipated the suggestion of the business school faculty I was just quoting that they solve the problem by assuming that their students would not have to write.
But surely, you might be saying, it cannot really be that bad. Oh, but it can.  The attitudes of the college faculty I just reported on are not new.  The departmental faculty might have been prepared in the past to help their students with the technical aspects of writing in their particular field, but they never expected to have to teach basic competence in writing.  They assumed that would be done in our schools.  So what happened?
Two things happened.  First, we stopped demanding that students read anything very challenging in school, and then we stopped holding our teachers or students accountable for the quality of student writing.
I did not learn how to write from a writing manual. I mostly learned to write by reading good writing, a lot of it, some of it fiction, much of it non-fiction. And I had instructors in high school and college who were themselves good writers and took the time to coach me. My friend William Fitzhugh tells us that very few students are ever asked to read a single non-fiction book from end to end in their entire school career, much less many such books. More to the point, they are rarely asked to write very much and the expectations for what they do write are, on the whole, absurdly low.
And why is that?  Because we do not hold our teachers accountable for the quality of student writing.  Under prevailing federal law, we hold our teachers accountable for student performance in English, mathematics and, to a minor degree, science.  But the tests we use to hold them accountable for student performance in English typically do not require them to write anything, and, when they do, it is rarely more than a paragraph.  And why is that?  There is only one way to find out if a student can write a well-crafted 15-page essay and that is to ask them to write one.  And, if they are required to write one, someone has to read it.  To make sure that the scores given on the essay are reliable, it may be necessary to have more than one person read it.  That is time-consuming and expensive.  So we talk about English tests, but they do not really test speaking, listening or writing skills. They test reading skills.  The teachers know this, so they don’t waste their time teaching writing, probably the single most important skill we can teach.
It is unclear whether they could if they wanted to.  They could certainly ask students to write more, but most teachers of English do not have the time to do more than skim student written work and give it a global grade and maybe a comment or two.  But that is not going to help a developing writer very much.  Extended coaching is needed, at the hands of a good writer and editor.  And, by the way, we have no idea whether our teachers are themselves good writers, never mind good editors.  Many come from the lower ranks of high school graduates, and those are the same young people whose low writing skills I described at the beginning of this essay.
I have a cognitive dissonance problem.  There is a lot of talk about implementing the Common Core State Standards.  The Common Core calls for much deeper understanding of the core subjects in the curriculum, the ability to reason well, and to make a logical, compelling argument based on good evidence, which in turn requires the student to be able to marshal that evidence in an effective way.  Sounds like good writing to me.
But we talk about implementation of the Common Core as if it can be accomplished by giving teachers a workshop lasting several days and handing them a manual.  I don’t think so.  I would argue that there is no single skill more important to our students than the ability to write well.  Is there anyone who believes that students whose college instructors have discovered that they cannot write will somehow now emerge from high school as accomplished writers because their teacher got a manual and attended a three-day workshop on the Common Core State Standards?  That would qualify as a miracle.
If my analysis is anywhere near right, making sure our students have the single most important skill they will ever need requires us to 1) make sure that our teachers read extensively, write well and have the skills needed to coach others to be good writers; 2) organize our schools so that teachers have the time to teach writing, give students extended writing assignments, read carefully what the students have written and provide extensive and helpful feedback on it (all of which would required major adjustments in teacher load and school master schedules); and 3) change the incentives facing teachers, so that those incentives are based to a significant degree on the ability of students to write high quality extended essays.  If we don’t do that, we are just whistling Dixie.
 

————————–

“Teach by Example”
Will Fitzhugh [founder]
The Concord Review [1987]
Ralph Waldo Emerson Prizes [1995]
National Writing Board [1998]
TCR Institute [2002]
730 Boston Post Road, Suite 24
Sudbury, Massachusetts 01776-3371 USA
978-443-0022; 800-331-5007
Varsity Academics®

 

On Writing was originally published on Nonpartisan Education Blog

WHEELBARROW

“Wheelbarrow”
13 December 2013

There is an old story about a worker, at one of the South African diamond mines, who would leave work once a week or so pushing a wheelbarrow full of sand. The guard would stop him and search the sand thoroughly, looking for any smuggled diamonds. When he found none, he would wave the worker through. This happened month after month, and finally the guard said, “Look, I know you are smuggling something, and I know it isn’t diamonds. If you tell me what it is, I won’t say anything, but I really want to know.” The worker smiled, and said, “wheelbarrows.”

I think of this story when teachers find excuses for not letting their students see the exemplary history essays written by their high school peers for The Concord Review. Often they feel they cannot give their students copies unless they can “teach” the contents. Or they already teach the topic of one of the essays they see in the issue. Or they don’t know anything about one of the topics. Or they know more about the topic than the HS author does. Or they don’t have time to teach one of the topics they see, or they don’t think students have time to read one or more of the essays, or they worry about plagiarism, or something else. There are many reasons to keep this unique journal away from secondary students.

They are, to my mind, “searching the sand.” The most important reason to show their high school students the journal is to let them see the wheelbarrow itself, that is, to show them that there exists in the world a professional journal that takes the history research papers of high school students seriously enough to have published them on a quarterly basis for the last 21 years. Whether the students read all the essays, or one of them, or none of them, they will see that for some of their peers academic work is treated with respect. And that is a message worth letting through the guard post, whatever anyone may think about, or want to do something with, the diamonds inside.

Will Fitzhugh
The Concord Review
http://www.tcr.org; fitzhugh@tcr.org
And of course some teachers are eager to show their students the work of their peers….

The Concord Review—Varsity Academics®

WHEELBARROW was originally published on Nonpartisan Education Blog

Driven to Distraction

DRIVEN TO DISTRACTION
 
Will Fitzhugh
The Concord Review

7 February 2013

 
“We have now sunk to a depth at which the restatement of the obvious is the first duty of intelligent men.”—George Orwell 
 
 
While we spend billions on standards for skill-building and the assessment of skills, we don’t seem to notice that our students, in general, are not doing any academic work. That concern assumes there is a connection between the academic work students do and their academic achievement, but for most of those who study and comment on education, that link seems not to be apparent.
 
The Kaiser Foundation reported in January 2010 that:
 
“Over the past five years, there has been a huge increase in media use among young people. Five years ago, we reported that young people spent an average of nearly 6½ hours (6:21) a day with media—and managed to pack more than 8½ hours (8:33) worth of media content into that time by multitasking. At that point it seemed that young people’s lives were filled to the bursting point with media. Today, however, those levels of use have been shattered. Over the past five years, young people have increased the amount of time they spend consuming media by an hour and seventeen minutes daily, from 6:21 to 7:38—almost the amount of time most adults spend at work each day, except that young people use media seven days a week instead of five. [53 hours a week]”
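The bracketed weekly total appears to be an editorial calculation rather than a line from the Kaiser report itself; it follows, roughly, if one assumes the 7:38 daily average holds across all seven days of the week:

$$7\text{:}38 = 7 + \tfrac{38}{60} \approx 7.63 \text{ hours per day}, \qquad 7.63 \times 7 \approx 53.4 \approx 53 \text{ hours per week}$$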
 
If our students spend that much time, in addition to sports, being with friends, and other activities, like sleep, when do they do their academic work?
 
Indiana University’s High School Survey of Student Engagement found most recently that:
 
“Among (U.S.) Public High School students: 
82.7% spend 5 or fewer hours a week on homework.
42.5% spend an hour or less each week on their homework.”
 
 
This may help to explain how they manage to free up 53 hours a week to play with electronic entertainment media, but is there any effect of such low academic expectations on our students’ engagement with the educational enterprise we provide for them?
 
 
Meanwhile, our high school students are reading books written at the fifth-grade level. The 2013 Renaissance Learning report on student reading levels, “The Book-Reading Habits of Students in American Schools 2011–2012,” found that “The average ATOS book level of the top 40 books read by ninth–twelfth graders (high school students) was 5.6 overall (fifth-grade level), 5.7 for boys, and 5.4 for girls.”
 
Brandon Busteed, Executive Director of Gallup Education, reported on January 7th of this year that:


“Gallup research strongly suggests that the longer students stay in school, the less engaged they become. The Gallup Student Poll surveyed nearly 500,000 students in grades five through 12 from more than 1,700 public schools in 37 states in 2012. We found that nearly eight in 10 elementary students who participated in the poll are engaged with school. By middle school that falls to about six in 10 students. And by high school, only four in 10 students qualify as engaged. Our educational system sends students and our country’s future over the school cliff every year.”

 
The statement of the obvious that applies here would seem to be that we have driven our high school students to distraction by asking them to do little or no homework and by spending billions of dollars to lead them to prefer electronic entertainment media to the academic work on which their futures depend.
 
On June 3, 1990, Albert Shanker, president of the American Federation of Teachers, wrote in his regular New York Times column that:
 
“As we’ve known for a long time, factory workers who never saw the completed product and worked on only a small part of it soon became bored and demoralized. But when they were allowed to see the whole process—or better yet become involved with it—productivity and morale improved. Students are no different. When we chop up the work they do into little bits—history facts and vocabulary and grammar rules to be learned—it’s no wonder they are bored and disengaged. The achievement of The Concord Review’s authors offers a different model of learning. Maybe it’s time to take it seriously.”
 
Despite my own bias for having students read history books and write history research papers, I think it may fairly be argued that if we give students nothing to do academically, we contribute to the academic disengagement we now find.
 
If we don’t take their academic work seriously, neither will they. What they take seriously they have a chance of doing well, and when they don’t take something seriously, they have little chance of achievement there. Verbum Sap.
————————-
“Teach by Example”
Will Fitzhugh [founder]
The Concord Review [1987]
Ralph Waldo Emerson Prizes [1995]
National Writing Board [1998]
TCR Institute [2002]
730 Boston Post Road, Suite 24
Sudbury, Massachusetts 01776-3371 USA
978-443-0022; 800-331-5007
Varsity Academics®

Driven to Distraction was originally published on Nonpartisan Education Blog
